Every month, the AWS bill arrives. Every month, it's higher than expected. Every month, someone says "we should look into that" — and then nobody does.
Sound familiar?
You're not alone. We've audited dozens of AWS accounts, and the pattern is remarkably consistent: most businesses are overspending on AWS by 25-40%, paying for capacity and services they don't actually need.
Not because AWS is expensive. Because it's easy to leave money on the table when you don't know where to look.
Here are the 10 most common cost leaks we find — and what to do about each one.
1. Not using Reserved Instances or Savings Plans
This is the biggest one. If you're running EC2 instances 24/7 and paying on-demand rates, you're overpaying by 30-60%.
What's happening: On-demand pricing is AWS's most expensive option. It's meant for variable workloads where you can't predict usage. For steady-state infrastructure that runs all the time, it's the wrong choice.
The fix: Reserved Instances (a 1- or 3-year commitment to specific instance types) or Savings Plans (a 1- or 3-year commitment to an hourly dollar amount of compute usage) offer substantial discounts.
- 1-year commitment: 30-40% savings
- 3-year commitment: 50-60% savings
The catch: You need predictable baseline usage. Don't reserve capacity for workloads that fluctuate wildly or might not exist next year.
Quick win: Look at your EC2 usage over the past 3-6 months. Identify instances that run 24/7 with consistent utilization. Those are prime candidates for Reserved Instances.
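That screening step can be sketched in a few lines. This is a minimal illustration, not real account data: the instance records, uptime figures, and thresholds are all made up — in practice you'd pull them from CloudWatch.

```python
# Sketch: flag Reserved Instance candidates from utilization history.
# Instance names, uptime figures, and thresholds are illustrative.

def ri_candidates(instances, min_uptime_pct=95, max_cpu_stddev=10):
    """Return IDs of instances that ran nearly 24/7 with steady CPU."""
    return [
        inst["id"]
        for inst in instances
        if inst["uptime_pct"] >= min_uptime_pct
        and inst["cpu_stddev"] <= max_cpu_stddev
    ]

fleet = [
    {"id": "web-1", "uptime_pct": 100, "cpu_stddev": 4},    # steady, always on
    {"id": "batch-1", "uptime_pct": 40, "cpu_stddev": 35},  # spiky, intermittent
    {"id": "db-1", "uptime_pct": 99, "cpu_stddev": 6},      # steady, always on
]

print(ri_candidates(fleet))  # → ['web-1', 'db-1']
```

The instances that pass both filters (nearly always on, low CPU variance) are the ones worth pricing out as Reserved Instances; the spiky batch workload stays on demand.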
2. Oversized EC2 instances
Bigger isn't always better. We routinely find EC2 instances running at 10-20% average CPU utilization — meaning you're paying for roughly five times the capacity you actually use.
What's happening: When developers provision infrastructure, they often guess high "just in case." Nobody goes back to right-size after the actual load is known.
The fix: Use AWS Compute Optimizer or review CloudWatch metrics to identify underutilized instances. Downsize to the next smaller instance type.
The math: An m5.xlarge costs about $140/month. An m5.large costs about $70/month. If you're using 20% of an xlarge, you could run on a large and save $70/month — per instance.
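Here's that math as a small sketch with a safety check built in. The prices are approximate us-east-1 on-demand rates, and the 70% utilization ceiling is an illustrative rule of thumb, not an AWS recommendation:

```python
# Worked example of the right-sizing math above. Monthly prices are
# approximate on-demand rates and vary by region.

M5_MONTHLY = {"m5.xlarge": 140.0, "m5.large": 70.0}

def downsize_savings(current, target, avg_cpu_pct):
    """Monthly savings from moving to a smaller type, if utilization allows.

    Halving the instance size roughly doubles relative CPU load, so only
    downsize when the projected utilization stays under ~70% (illustrative
    threshold -- tune for your own workloads).
    """
    projected_cpu = avg_cpu_pct * (M5_MONTHLY[current] / M5_MONTHLY[target])
    if projected_cpu > 70:
        return 0.0  # projected load too high; don't downsize
    return M5_MONTHLY[current] - M5_MONTHLY[target]

print(downsize_savings("m5.xlarge", "m5.large", avg_cpu_pct=20))  # → 70.0
```

At 20% utilization the downsize projects to 40% on the smaller instance and saves $70/month; at 40% it would project to 80% and the function declines the move.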
Quick win: Enable AWS Compute Optimizer (it's free) and review its recommendations weekly.
3. Unattached EBS volumes
When you terminate an EC2 instance, its root volume is usually deleted, but additional EBS volumes aren't — DeleteOnTermination defaults to false for non-root volumes. They just sit there, accruing charges.
What's happening: Old volumes from terminated instances, abandoned projects, or testing environments pile up. Each one costs money every month, doing nothing.
The fix: Find and delete unattached EBS volumes.
AWS Console → EC2 → Volumes → Filter by state = "available"
Those "available" volumes aren't attached to anything. Review them and delete what you don't need.
The math: A 500GB gp3 volume costs about $40/month. If you've got ten of them sitting around from old projects, that's $400/month — $4,800/year — for storage nobody's using.
Set up a monthly reminder to check for unattached volumes. It takes five minutes and routinely saves hundreds of dollars.
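The audit itself is a simple filter-and-sum. The sketch below runs on sample records shaped like `aws ec2 describe-volumes` output — the IDs and sizes are made up, and gp3 is priced at roughly $0.08/GB-month:

```python
# Sketch: tally the monthly cost of unattached gp3 volumes.
# Volume records mimic the shape of `aws ec2 describe-volumes` output;
# IDs and sizes are illustrative.

GP3_PER_GB_MONTH = 0.08  # approximate gp3 price

volumes = [
    {"VolumeId": "vol-0a1", "State": "available", "Size": 500},  # orphaned
    {"VolumeId": "vol-0b2", "State": "in-use",    "Size": 100},
    {"VolumeId": "vol-0c3", "State": "available", "Size": 250},  # orphaned
]

# "available" means not attached to anything
orphans = [v for v in volumes if v["State"] == "available"]
monthly_waste = sum(v["Size"] for v in orphans) * GP3_PER_GB_MONTH

print([v["VolumeId"] for v in orphans])  # → ['vol-0a1', 'vol-0c3']
print(f"${monthly_waste:.2f}/month")     # → $60.00/month
```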
4. Old EBS snapshots accumulating
EBS snapshots are great for backups. But they accumulate forever unless you actively manage them.
What's happening: Automatic backup policies create snapshots daily or weekly. Without lifecycle policies, you end up with years of snapshots you'll never need.
The fix: Implement Data Lifecycle Manager (DLM) policies to automatically delete old snapshots. Decide how far back you actually need to recover (30 days? 90 days?) and delete everything older.
The math: Snapshot storage costs about $0.05/GB-month. Doesn't sound like much, but 1TB of old snapshots across multiple volumes adds up to $50/month — and it grows every day without lifecycle management.
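To see why retention matters, compare unbounded accumulation against a 30-day DLM policy. The sketch assumes daily snapshots that each add ~5 GB of changed blocks — a made-up churn rate; real incremental snapshot sizes vary widely:

```python
# Sketch: snapshot storage cost with and without a retention policy.
# The 5 GB/day churn rate is illustrative, not a real measurement.

SNAPSHOT_PER_GB_MONTH = 0.05
DAILY_INCREMENT_GB = 5

def monthly_cost_after(days, retention_days=None):
    """Monthly snapshot bill after `days` of daily snapshots."""
    kept = days if retention_days is None else min(days, retention_days)
    return kept * DAILY_INCREMENT_GB * SNAPSHOT_PER_GB_MONTH

print(monthly_cost_after(365))                     # → 91.25 (no policy, 1 year in)
print(monthly_cost_after(365, retention_days=30))  # → 7.5   (30-day DLM policy)
```

Without a policy the bill keeps climbing every day; with one, it plateaus after the retention window fills.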
5. S3 storage class mismatches
Not all data is created equal. Storing rarely-accessed archives in S3 Standard can cost anywhere from roughly 2x more than necessary (versus Infrequent Access) to over 20x (versus Glacier Deep Archive).
What's happening: Data gets uploaded to S3 Standard (the default) and stays there forever, even when it hasn't been accessed in years.
The fix: Use S3 Intelligent-Tiering for unpredictable access patterns. Use S3 Lifecycle policies to move data to cheaper storage classes (Infrequent Access, Glacier) as it ages.
The storage classes:
- S3 Standard: ~$0.023/GB-month. For frequently accessed data.
- S3 Infrequent Access: ~$0.0125/GB-month. For data accessed less than monthly.
- S3 Glacier Instant: ~$0.004/GB-month. For archives needing immediate access.
- S3 Glacier Deep Archive: ~$0.00099/GB-month. For long-term archives.
The math: 10TB of old logs in S3 Standard costs $230/month. In Glacier Deep Archive, it's $10/month. That's $2,640/year in savings on data nobody's looking at.
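The same comparison, generalized to any data size, using the approximate per-GB rates listed above:

```python
# The storage-class math above, generalized: monthly cost of the same
# data in each class. Prices are the approximate rates listed above.

S3_PER_GB_MONTH = {
    "Standard": 0.023,
    "Infrequent Access": 0.0125,
    "Glacier Instant": 0.004,
    "Glacier Deep Archive": 0.00099,
}

def monthly_cost(gb, storage_class):
    return gb * S3_PER_GB_MONTH[storage_class]

for cls in S3_PER_GB_MONTH:
    print(f"{cls}: ${monthly_cost(10_000, cls):,.2f}/month")
# Standard comes out around $230/month; Deep Archive under $10/month.
```

Note this covers storage only — retrieval fees and minimum storage durations on the colder classes are what make lifecycle planning (rather than blind tiering) worthwhile.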
6. Unused Elastic IPs
Elastic IPs used to be free while attached to a running instance; since AWS's February 2024 public IPv4 pricing change, every public IPv4 address is billed at about $0.005/hour whether attached or not. Either way, an Elastic IP that isn't serving traffic is pure waste.
What's happening: Instances get stopped for maintenance or decommissioned, but the Elastic IPs remain. They're easy to forget.
The fix: Find and release Elastic IPs that aren't attached to running instances.
AWS Console → EC2 → Elastic IPs → Look for IPs not associated with instances
The math: An unused Elastic IP costs about $3.60/month. Not huge, but they add up — and finding them takes seconds.
7. Over-provisioned RDS instances
The same over-provisioning problem that affects EC2 hits databases even harder. Database instances tend to be expensive, and right-sizing them is often neglected.
What's happening: Databases get provisioned for peak load that rarely occurs. Multi-AZ is enabled for development databases that don't need high availability. Storage is allocated far beyond what's needed.
The fix: Review RDS instance utilization in CloudWatch. Look for CPU and memory that rarely exceed 30-40%. Consider:
- Downsizing instance types
- Using Aurora Serverless for variable workloads
- Disabling Multi-AZ for non-production databases
- Using gp3 storage instead of provisioned IOPS when IOPS aren't needed
The math: A db.r5.xlarge in Multi-AZ with provisioned IOPS might cost $800/month. If a db.t3.medium in single-AZ with gp3 storage handles your actual load, that's $50/month — a 94% savings.
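A rough cost model makes the comparison concrete. The hourly rates below are approximate on-demand figures, the storage line items are illustrative, and Multi-AZ is modeled as simply doubling the instance cost — close enough for back-of-envelope planning:

```python
# Sketch of the RDS comparison above. Hourly rates are approximate
# on-demand prices; storage costs are illustrative placeholders.

HOURLY = {"db.r5.xlarge": 0.50, "db.t3.medium": 0.068}
HOURS_PER_MONTH = 730

def rds_monthly(instance_type, multi_az=False, storage_monthly=0.0):
    """Rough monthly cost: hourly rate (doubled for Multi-AZ) plus storage."""
    rate = HOURLY[instance_type] * (2 if multi_az else 1)
    return rate * HOURS_PER_MONTH + storage_monthly

big = rds_monthly("db.r5.xlarge", multi_az=True, storage_monthly=70)
small = rds_monthly("db.t3.medium", storage_monthly=10)
print(f"${big:.0f}/month vs ${small:.0f}/month")  # → $800/month vs $60/month
```

Half of the big configuration's bill is the Multi-AZ standby — which is why turning it off for non-production databases is such an easy win.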
8. Missing lifecycle policies
This applies across AWS services: EBS snapshots, S3 objects, CloudWatch logs, ECR images. Without lifecycle policies, data accumulates forever.
What's happening: Every automated process generates data. Logs grow. Images pile up. Snapshots multiply. Nobody sets up automatic cleanup.
The fix: Audit each service for lifecycle policy support and implement appropriate retention:
- CloudWatch Logs: Set retention periods (e.g., 7 days for debug logs, 90 days or more for audit logs)
- ECR: Set image lifecycle policies to remove old, untagged images
- S3: Transition and expire objects based on age
- EBS Snapshots: Use Data Lifecycle Manager
The math: CloudWatch Logs cost $0.50 per GB ingested plus $0.03/GB-month for storage. A chatty application generating 10GB/day of logs costs $150/month in ingestion alone — and the stored logs accrue charges forever if you never set retention.
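The key difference between the two charges is that ingestion is billed once per GB while storage compounds on everything you keep. A small model of the 10GB/day example shows how the bill drifts upward without retention:

```python
# Sketch: CloudWatch Logs bill over time. Ingestion is billed once per
# GB ingested; storage accrues monthly on everything retained.

INGEST_PER_GB = 0.50
STORE_PER_GB_MONTH = 0.03
GB_PER_DAY = 10

def monthly_bill(months_elapsed, retention_days=None):
    """Approximate bill in a given month (30-day months for simplicity)."""
    ingestion = GB_PER_DAY * 30 * INGEST_PER_GB  # ~$150 every month
    stored_gb = GB_PER_DAY * (retention_days if retention_days
                              else months_elapsed * 30)
    return ingestion + stored_gb * STORE_PER_GB_MONTH

print(monthly_bill(12))                     # no retention: a year of stored logs
print(monthly_bill(12, retention_days=30))  # 30-day retention: storage is capped
```

A year in, the no-retention bill is about $100/month higher than the capped one — and the gap keeps widening every month.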
The best time to set lifecycle policies is when you create resources. The second-best time is now.
9. No budget alerts configured
You can't optimize what you don't measure. Without budget alerts, cost problems grow for weeks or months before anyone notices.
What's happening: There's no alarm system. Costs spike because of a misconfigured resource, a runaway process, or a forgotten experiment — and nobody knows until the bill arrives.
The fix: Set up AWS Budgets with alerts at meaningful thresholds:
- Alert at 80% of expected monthly spend (early warning)
- Alert at 100% of expected monthly spend (you've hit budget)
- Alert for unusual daily spend (anomaly detection)
Configure alerts to go to actual humans who will act on them.
The math: This one's about prevention. Catching a $500/day cost anomaly on day 2 versus day 30 is the difference between $1,000 and $15,000.
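The arithmetic is trivial, which is exactly the point — detection day is the only variable you control:

```python
# The prevention math above: cost of a $500/day anomaly by detection day.

def anomaly_cost(daily_burn, detected_on_day):
    """Total wasted spend if the anomaly runs until the day it's caught."""
    return daily_burn * detected_on_day

print(anomaly_cost(500, 2))   # caught by a daily budget alert → 1000
print(anomaly_cost(500, 30))  # caught by the monthly bill → 15000
```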
10. Not using Spot Instances where appropriate
Spot Instances offer 60-90% discounts compared to on-demand pricing. They can be interrupted with 2 minutes' notice, but for many workloads, that's fine.
What's happening: Everything runs on on-demand or reserved capacity, even workloads that could tolerate interruption.
Good candidates for Spot:
- Batch processing jobs
- Test/dev environments
- Stateless web servers behind load balancers
- CI/CD build runners
- Data processing pipelines
Not good candidates:
- Databases
- Single-instance production workloads
- Anything with state that can't be quickly rebuilt
The fix: Identify interruptible workloads and migrate them to Spot. Use Spot Fleet or EC2 Auto Scaling with mixed instance policies for resilience.
The math: A c5.2xlarge on-demand costs about $245/month. The same instance as Spot might cost $50-80/month. That's $165+/month savings per instance.
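Extending that per-instance math to a fleet shows how quickly partial Spot adoption compounds. The Spot price below is a made-up midpoint of the range above — real Spot prices float with market demand:

```python
# The Spot math above, extended to a small fleet. The Spot price is an
# illustrative midpoint; real Spot prices vary by region and demand.

ON_DEMAND_MONTHLY = 245.0  # ~c5.2xlarge on-demand
SPOT_MONTHLY = 65.0        # midpoint of the $50-80 range above

def fleet_savings(n_instances, spot_fraction):
    """Monthly savings from moving a fraction of the fleet to Spot."""
    moved = n_instances * spot_fraction
    return moved * (ON_DEMAND_MONTHLY - SPOT_MONTHLY)

print(fleet_savings(10, 0.5))  # → 900.0 (5 of 10 instances moved to Spot)
```

Moving even half of a ten-instance stateless web tier to Spot saves around $900/month in this model — before any Reserved Instance discounts on the remainder.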
Getting started
You don't need to fix all ten at once. Here's a prioritized approach:
This week: Enable AWS Cost Explorer and Compute Optimizer if you haven't. Set up basic budget alerts. Takes an hour, costs nothing.
This month: Hunt for unused resources — unattached EBS volumes, unused Elastic IPs, idle EC2 instances. Quick wins with immediate savings.
This quarter: Right-size your infrastructure. Review Reserved Instance and Savings Plan opportunities. Implement lifecycle policies.
Ongoing: Make cost review part of your regular operations. Check Cost Explorer weekly. Review recommendations monthly.
The payoff
Most businesses can reduce AWS spend by 25-40% without sacrificing performance or capability. On a $10,000/month AWS bill, that's $30,000-48,000/year back in your pocket.
Not bad for optimization work that often takes just a few days of focused effort.
Need help?
If your AWS bill has gotten out of control — or you're not sure where to start — we can help. Our AWS cost optimization reviews identify exactly where you're overspending and provide a prioritized action plan.
The first step is knowing where the money's going. Once you can see it, fixing it is straightforward.
Entvas Editorial Team
Helping businesses make informed decisions



