How to Reduce Your AWS Cost: 15 Checks That Actually Save Money
Most “reduce AWS cost” guides tell you to right-size instances and turn stuff off. No kidding. Here are 15 specific checks with exact AWS CLI commands, real dollar amounts, and the fixes you can run today.
TL;DR: I have audited AWS accounts spending anywhere from $5K to $200K per month. The same waste patterns show up in almost every account. Unattached EBS volumes nobody remembers creating. NAT Gateways burning $32/month doing nothing. RDS instances with zero connections running for months. CloudWatch logs on infinite retention. This guide covers the 15 most common AWS cost leaks I find, the CLI commands to detect them, and the exact commands to fix them.
Why Your AWS Bill Keeps Growing
AWS makes it easy to create resources and hard to remember they exist.
Spin up a test database on Friday, forget about it by Monday. Launch a NAT Gateway for a migration, never clean it up. Leave CloudWatch log retention on “never expire” because that is the default.
Cost Explorer shows you aggregate spend by service. It does NOT show you which specific resources are wasting money. That is why the bill keeps growing and nobody can explain why.
The checks below go resource by resource. Every one includes the AWS CLI command to find the waste and the command to fix it.
1. Delete Unattached EBS Volumes
This is the single most common waste pattern I find. Volumes left behind after terminating EC2 instances. Sitting in “available” state. Doing nothing. Costing you money every month.
What it costs: $0.08 to $0.125 per GB per month depending on volume type. A forgotten 500GB gp3 volume costs $40/month. That is $480/year for storage nobody is using.
Find them:
aws ec2 describe-volumes \
--filters Name=status,Values=available \
--query 'Volumes[*].{ID:VolumeId,Size:Size,Type:VolumeType,Created:CreateTime}' \
--output table
Fix:
# Snapshot first if you want a safety net ($0.05/GB/month, still cheaper than the volume)
aws ec2 create-snapshot --volume-id vol-xxxxx --description "backup before cleanup"
# Then delete
aws ec2 delete-volume --volume-id vol-xxxxx
Savings: 100% of volume cost.
2. Migrate EBS GP2 Volumes to GP3
If you still have gp2 volumes, you are overpaying by exactly 20%. GP3 is the newer generation. Same baseline performance. Lower price. The migration takes seconds and causes zero downtime.
The math:
- GP2: $0.10/GB/month
- GP3: $0.08/GB/month
- Savings: $0.02/GB/month (20%)
A 1TB gp2 volume saves $20/month just by switching types. Across 10 volumes, that is $200/month.
Find gp2 volumes:
aws ec2 describe-volumes \
--filters Name=volume-type,Values=gp2 \
--query 'Volumes[*].{ID:VolumeId,Size:Size,State:State}' \
--output table
Fix (zero downtime):
aws ec2 modify-volume --volume-id vol-xxxxx --volume-type gp3
Savings: 20% per volume. Confidence: very high. This is one of the safest cost optimizations you can make on AWS.
3. Stop Idle EC2 Instances
Instances running with max CPU under 5% and average CPU under 2% over two weeks are not doing useful work. They are costing you money to exist.
Find low-utilization instances:
# Note: -v-14d is BSD/macOS date syntax; on GNU/Linux use: date -u -d '14 days ago'
aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--dimensions Name=InstanceId,Value=i-xxxxx \
--start-time $(date -u -v-14d +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 86400 \
--statistics Average Maximum \
--output table
If max CPU has not crossed 5% in 14 days, that instance is idle. Terminate it or at minimum stop it.
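The fix is the standard stop/terminate pair (instance IDs are placeholders):

```shell
# Stop the instance (restartable later; attached EBS volumes still bill while stopped)
aws ec2 stop-instances --instance-ids i-xxxxx

# Or terminate it for good (volumes marked delete-on-termination go with it)
aws ec2 terminate-instances --instance-ids i-xxxxx
```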
Savings: 100% of instance cost. An idle m5.large is $70/month doing absolutely nothing.
4. Migrate EC2 to Graviton (ARM)
AWS Graviton instances deliver 20-40% better price-performance than their x86 equivalents. If you are running m5, c5, r5, or t3 instances and your workload runs on Linux, you can likely switch.
Migration map:
- m5 → m7g (20-40% savings)
- c5 → c7g
- r5 → r7g
- t3 → t4g
Find x86 instances:
aws ec2 describe-instances \
--filters Name=instance-state-name,Values=running \
--query 'Reservations[*].Instances[*].{ID:InstanceId,Type:InstanceType,Arch:Architecture}' \
--output table
The Arch column shows the architecture directly: anything reporting x86_64 is a migration candidate, while Graviton instances report arm64.
Savings: 20-40% per instance. This requires testing your application on ARM, but for most Node.js, Python, Java, and Go workloads it is a straightforward swap.
5. Schedule Non-Production Instances
Your dev, staging, and QA environments do not need to run at 3am on a Sunday. If you are running non-production EC2 instances 24/7, you are paying for 108 hours per week that nobody uses.
The math: Business hours are roughly 60 hours per week (12h x 5 days). That leaves 108 off-hours per week. Stopping instances during off-hours saves 55-65% of compute cost.
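The arithmetic behind that range, as a quick sanity check:

```shell
# 168 hours in a week, roughly 60 of them business hours
total_hours=168
business_hours=60
off_hours=$((total_hours - business_hours))     # 108
savings_pct=$((off_hours * 100 / total_hours))  # 64, the upper bound before startup/shutdown slack
echo "$off_hours off-hours/week, up to $savings_pct% compute savings"
```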
Savings: 55-65% per non-production instance. A dev environment with 5 x m5.large instances at $70/month each saves roughly $200/month by running only during business hours.
Use AWS Instance Scheduler or a simple EventBridge + Lambda combination.
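A minimal EventBridge sketch, assuming you already have a Lambda that calls ec2:StopInstances (the rule name, function name, and ARN are placeholders):

```shell
# Fire at 7pm UTC on weekdays (EventBridge cron expressions are in UTC)
aws events put-rule \
  --name stop-dev-instances-nightly \
  --schedule-expression "cron(0 19 ? * MON-FRI *)"

# Point the rule at the Lambda that stops tagged dev instances
aws events put-targets \
  --rule stop-dev-instances-nightly \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:stop-dev-instances"
```

You need a matching morning rule to start instances back up, and the Lambda's role needs ec2:StopInstances and ec2:StartInstances permissions.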
6. Clean Up Stale EBS Snapshots
Snapshots from deleted volumes and terminated instances pile up. At $0.05/GB/month, a few hundred gigs of orphaned snapshots adds up fast.
Find orphaned snapshots:
aws ec2 describe-snapshots \
--owner-ids self \
--query 'Snapshots[?StartTime<=`2025-01-01`].{ID:SnapshotId,Size:VolumeSize,Created:StartTime,VolumeId:VolumeId}' \
--output table
Check whether the source volume still exists. If the volume is gone and the snapshot is older than 90 days, it is almost certainly safe to delete.
Fix:
aws ec2 delete-snapshot --snapshot-id snap-xxxxx
For snapshots older than 180 days that you want to keep, move them to Archive tier for 75% savings:
aws ec2 modify-snapshot-tier --snapshot-id snap-xxxxx --storage-tier archive
Savings: 100% if deleted. 75% if archived.
7. Kill Idle NAT Gateways
NAT Gateways charge $0.045/hour just to exist. That is $32.40/month per gateway before a single byte flows through them. I have found NAT Gateways in accounts processing less than 1GB per day. Paying $32/month for the privilege of routing almost no traffic.
Find your NAT Gateways and their traffic:
aws ec2 describe-nat-gateways \
--query 'NatGateways[?State==`available`].{ID:NatGatewayId,VPC:VpcId,Subnet:SubnetId,Created:CreateTime}' \
--output table
Then check CloudWatch for BytesOutToDestination and BytesOutToSource over the past 14 days. If it is under 1GB/day average, that gateway is barely used.
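The traffic check uses the AWS/NATGateway namespace (BSD/macOS date syntax shown; on GNU/Linux use date -u -d '14 days ago'):

```shell
# Daily bytes processed over the past 14 days; repeat with BytesOutToSource
aws cloudwatch get-metric-statistics \
  --namespace AWS/NATGateway \
  --metric-name BytesOutToDestination \
  --dimensions Name=NatGatewayId,Value=nat-xxxxx \
  --start-time $(date -u -v-14d +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --period 86400 \
  --statistics Sum \
  --output table
```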
Fix (first repoint or remove any route tables still sending traffic through the gateway, or that traffic will break):
aws ec2 delete-nat-gateway --nat-gateway-id nat-xxxxx
Savings: $32.40/month per idle gateway + $0.045/GB data processing.
8. Add VPC Endpoints for S3 and DynamoDB
If your EC2 instances or Lambda functions access S3 or DynamoDB through a NAT Gateway, you are paying $0.045/GB for traffic that could be free. Gateway VPC Endpoints for S3 and DynamoDB cost nothing.
Create a Gateway Endpoint:
aws ec2 create-vpc-endpoint \
--vpc-id vpc-xxxxx \
--service-name com.amazonaws.us-east-1.s3 \
--route-table-ids rtb-xxxxx
Same command for DynamoDB, just change the service name.
Savings: $0.045/GB of S3 and DynamoDB traffic. If you process 1TB/month through NAT to S3, that is $45/month you can eliminate completely.
9. Fix CloudWatch Log Retention
The default CloudWatch Logs retention is “never expire.” That means every log line your application has ever written is stored forever at $0.03/GB/month. I have seen accounts paying thousands per month in log storage for data nobody will ever read.
Find log groups with no expiry:
aws logs describe-log-groups \
--query 'logGroups[?!retentionInDays].{Name:logGroupName,StoredBytes:storedBytes}' \
--output table
Fix (set 30-day retention):
aws logs put-retention-policy \
--log-group-name /aws/lambda/my-function \
--retention-in-days 30
For most applications, 7 to 30 days of log retention is plenty. If you need long-term logs, export to S3 where storage costs $0.023/GB/month instead of $0.03/GB/month, and you can lifecycle to Glacier at $0.004/GB/month.
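The S3 export can be done with create-export-task. The --from and --to values are epoch milliseconds, the bucket name is a placeholder, and the destination bucket needs a policy allowing logs.amazonaws.com to write to it:

```shell
# Export one log group's data for calendar year 2024 to S3
aws logs create-export-task \
  --log-group-name /aws/lambda/my-function \
  --from 1704067200000 \
  --to 1735689600000 \
  --destination my-log-archive-bucket
```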
Savings: 50-90% of log storage costs.
10. Find Idle RDS Instances
RDS instances with zero database connections are the most expensive form of nothing. An idle db.r6g.xlarge costs roughly $350/month. Just sitting there. Connected to nothing.
Check connections over the past 14 days:
# Note: -v-14d is BSD/macOS date syntax; on GNU/Linux use: date -u -d '14 days ago'
aws cloudwatch get-metric-statistics \
--namespace AWS/RDS \
--metric-name DatabaseConnections \
--dimensions Name=DBInstanceIdentifier,Value=my-db-instance \
--start-time $(date -u -v-14d +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 86400 \
--statistics Maximum \
--output table
If the max connections over 14 days is zero, that instance is idle.
Fix: Stop it (aws rds stop-db-instance) or take a final snapshot and delete it.
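Both options spelled out, with placeholder identifiers:

```shell
# Option 1: stop it (RDS restarts it automatically after 7 days)
aws rds stop-db-instance --db-instance-identifier my-db-instance

# Option 2: keep a final snapshot, then delete the instance
aws rds delete-db-instance \
  --db-instance-identifier my-db-instance \
  --final-db-snapshot-identifier my-db-instance-final
```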
Savings: 100% of instance cost if deleted. RDS auto-restarts stopped instances after 7 days, so stopping is only a temporary fix.
11. Turn Off Multi-AZ for Non-Production Databases
Multi-AZ doubles your RDS cost. It makes sense for production. It makes zero sense for dev, staging, or test databases.
Find non-production databases with Multi-AZ:
aws rds describe-db-instances \
--query 'DBInstances[?MultiAZ==`true`].{ID:DBInstanceIdentifier,Class:DBInstanceClass,MultiAZ:MultiAZ}' \
--output table
Look for identifiers containing “dev”, “staging”, “test”, or “qa”.
Fix:
aws rds modify-db-instance \
--db-instance-identifier my-dev-db \
--no-multi-az \
--apply-immediately
Savings: 45-50% of instance cost. A db.r6g.large in Multi-AZ costs roughly $350/month. Single-AZ: $175/month. For a dev database, that is $175/month saved with zero production risk.
12. Migrate RDS GP2 Storage to GP3
Same story as EBS. RDS GP2 storage costs $0.115/GB/month. GP3 costs $0.08/GB/month. That is a 30% savings on storage alone.
Find RDS instances on GP2:
aws rds describe-db-instances \
--query 'DBInstances[?StorageType==`gp2`].{ID:DBInstanceIdentifier,Storage:AllocatedStorage,Type:StorageType}' \
--output table
Fix:
aws rds modify-db-instance \
--db-instance-identifier my-db \
--storage-type gp3 \
--apply-immediately
Savings: roughly 30% on storage costs at the prices above.
13. Right-Size Oversized RDS Instances
An RDS instance averaging under 20% CPU with fewer than 10 connections is almost certainly oversized. You are paying for capacity you do not use.
Example: db.r6g.xlarge at $0.48/hour ($350/month). If CPU averages 15% and connections average 5, drop to db.r6g.large at $0.24/hour ($175/month). Same engine, half the cost, still plenty of headroom.
Fix:
aws rds modify-db-instance \
--db-instance-identifier my-db \
--db-instance-class db.r6g.large \
--apply-immediately
Savings: 30-60% per oversized instance.
14. Add S3 Lifecycle Policies
Large S3 buckets without lifecycle policies keep everything in Standard storage forever. S3 Standard costs $0.023/GB/month. Glacier costs $0.004/GB/month. For data older than 90 days that you rarely access, that is an 80% savings.
Find buckets without lifecycle rules:
aws s3api list-buckets --query 'Buckets[*].Name' --output text | tr '\t' '\n' | while read bucket; do
rules=$(aws s3api get-bucket-lifecycle-configuration --bucket "$bucket" 2>&1)
if echo "$rules" | grep -q "NoSuchLifecycleConfiguration"; then
# Note: -v-2d is BSD/macOS date syntax; on GNU/Linux use: date -u -d '2 days ago'
size=$(aws cloudwatch get-metric-statistics --namespace AWS/S3 --metric-name BucketSizeBytes --dimensions Name=BucketName,Value="$bucket" Name=StorageType,Value=StandardStorage --start-time $(date -u -v-2d +%Y-%m-%dT%H:%M:%S) --end-time $(date -u +%Y-%m-%dT%H:%M:%S) --period 86400 --statistics Average --query 'Datapoints[0].Average' --output text 2>/dev/null)
echo "$bucket: no lifecycle, size: $size bytes"
fi
done
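Fix: attach a lifecycle policy. A minimal sketch that moves everything to Glacier after 90 days (the bucket name and rule ID are placeholders; add a Filter prefix if only part of the bucket should transition):

```shell
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-after-90-days",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]
    }]
  }'
```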
Savings: 40-70% of storage costs on large buckets. A 1TB bucket on Standard with no lifecycle costs $23/month. With a policy that moves data to Glacier after 90 days, most of that drops to $4/month.
15. Switch Lambda Functions to ARM64
Lambda on ARM (Graviton2) is 20% cheaper per millisecond than x86. For most runtimes (Node.js, Python, Java, .NET) the migration is a one-line configuration change, as long as any native dependencies in your deployment package are compiled for arm64.
Find x86 Lambda functions:
aws lambda list-functions \
--query 'Functions[?Architectures[0]==`x86_64`].{Name:FunctionName,Runtime:Runtime,Memory:MemorySize}' \
--output table
Fix:
aws lambda update-function-configuration \
--function-name my-function \
--architectures arm64
Savings: 20% on Lambda compute costs. If you spend $500/month on Lambda, that is $100/month saved.
The Checks That Save the Most Money
Not all 15 carry equal weight. Based on what I find across real AWS accounts, here is where the biggest dollars typically hide:
| Check | Typical Monthly Savings | Effort |
|---|---|---|
| Idle RDS instances | $100 - $2,000+ | Low |
| Non-production scheduling | $200 - $1,500+ | Medium |
| Multi-AZ on non-prod DBs | $100 - $500+ | Low |
| RDS right-sizing | $100 - $500+ | Medium |
| Unattached EBS volumes | $50 - $500+ | Low |
| GP2 to GP3 (EBS + RDS) | $50 - $400+ | Low |
| NAT Gateway cleanup | $30 - $300+ | Low |
| CloudWatch log retention | $50 - $2,000+ | Low |
| S3 lifecycle policies | $50 - $1,000+ | Medium |
| Graviton migration (EC2 + Lambda) | $100 - $2,000+ | Medium |
Most accounts spending $20K+ per month have at least $2K-$5K in waste hiding across these categories.
How to Keep Costs From Drifting Back
Finding waste once is easy. Keeping it found is the hard part. New resources get created every sprint. Old patterns repeat. The bill creeps back up.
A few things that actually work:
- AWS Budgets with alerts. Set a monthly budget and get notified at 80% and 100%. Not a fix, but at least you will know when costs spike.
- Tagging enforcement. Every resource needs an Environment tag (production, staging, dev) and a Team tag. Without tags, you cannot attribute cost and you cannot schedule shutdowns. Tag keys are case sensitive in AWS. I have seen accounts with Team, team, and TEAM as three different tag keys. Three versions of the same data, none of them complete.
- Weekly cost reviews. 15 minutes. Look at Cost Explorer by service, filter by the past 7 days, compare to prior 7 days. Catch drift early.
- Automated scanning. Run these checks regularly, not once. Resources drift. New waste accumulates. What you cleaned up last month gets recreated next month.
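One way to audit tag coverage is the Resource Groups Tagging API, which lists resources and their tags across services. A sketch (pagination omitted; the Environment key is the tag convention suggested above):

```shell
# ARNs of resources with no Environment tag at all
aws resourcegroupstaggingapi get-resources \
  --query 'ResourceTagMappingList[?!(Tags[?Key==`Environment`])].ResourceARN' \
  --output table
```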
Frequently Asked Questions
How much can I realistically save on my AWS bill?
It depends on how long the account has been running without an audit. Accounts spending $20K-$50K/month typically have $2K-$5K/month in waste from idle resources, oversized instances, and missing lifecycle policies. Accounts that have never been audited often have 15-30% waste. The low-hanging fruit (unattached volumes, idle databases, GP2 to GP3 migrations) can usually be fixed in a single afternoon.
What is the fastest way to reduce AWS costs?
Start with the checks that require zero application changes: delete unattached EBS volumes, remove idle NAT Gateways, set CloudWatch log retention, and turn off Multi-AZ on non-production databases. These four checks alone typically save $200-$1,000/month and can be done in under an hour with the CLI commands in this guide.
Is AWS Cost Explorer enough to find waste?
Cost Explorer shows you which services cost the most. It does NOT show you which specific resources are wasting money. You can see that EC2 costs $8K/month, but you cannot see that three of those instances have been idle for six months. Resource-level detection requires querying CloudWatch metrics and comparing them against utilization thresholds, which is what the commands in this guide do.
Does AWS Trusted Advisor find all cost waste?
Trusted Advisor catches some basics like idle EC2 instances and unassociated Elastic IPs. But it misses a lot. It does not check for GP2 to GP3 migration opportunities, CloudWatch log retention waste, NAT Gateway idle cost, Multi-AZ on non-production databases, Lambda ARM migration savings, or S3 lifecycle policy gaps. The recommendations it does give are vague (“consider rightsizing”) without telling you the exact CLI command to fix it.
How do I reduce AWS costs without affecting performance?
Focus on waste elimination first, not performance tradeoffs. Deleting unattached EBS volumes, removing idle NAT Gateways, setting log retention, and cleaning up stale snapshots have zero performance impact because those resources are not being used. GP2 to GP3 migration maintains the same baseline performance at lower cost. Only right-sizing (EC2 and RDS) requires checking that utilization stays within safe bounds after the change.
How often should I audit my AWS account for cost waste?
At minimum, monthly. But resources drift weekly. New volumes get created, new instances launch, log groups accumulate. A one-time cleanup saves money once. Continuous scanning catches new waste as it appears. That is why automated tools that run daily and alert you to new findings are more effective than manual quarterly reviews.
What AWS services waste the most money?
In my experience: RDS (idle instances, Multi-AZ on non-prod, oversized instances), EC2 (idle instances, previous-gen types, no scheduling), EBS (unattached volumes, GP2, stale snapshots), CloudWatch Logs (infinite retention), and NAT Gateways (idle gateways, missing VPC endpoints). RDS is usually the biggest offender because database instances are expensive and teams are afraid to touch them.
Stop Guessing, Start Scanning
I built CostPatrol because I got tired of running these checks manually across dozens of accounts. It encodes 100+ detection rules like the ones above, runs daily across all your accounts and regions, and delivers prioritized findings with exact CLI fix commands to Slack every morning.
Read-only. 2-minute setup via CloudFormation. No write access to your infrastructure.
If you are spending $10K+ per month on AWS and you have not audited your account in the past 90 days, you have waste. Run a free scan and see what it finds.