How to Reduce AWS EBS Costs: Volumes, Snapshots, and Storage Types
EBS is one of those AWS services that quietly drains your budget. Volumes persist after instances are terminated. Snapshots accumulate without anyone noticing. And most teams are still running gp2 volumes when gp3 is 20% cheaper with better baseline performance.
The numbers are straightforward. A single 500 GB gp2 volume costs $50/month. Switch it to gp3 and it drops to $40/month. Now multiply that across every volume in your account. Add orphaned volumes no one remembers creating, snapshots from instances deleted six months ago, and over-provisioned io2 volumes running workloads that would be fine on gp3. That’s where EBS bills get out of control.
This guide covers the highest-impact EBS optimizations, each with CLI commands you can run today. These are the same checks that CostPatrol’s EBS detection rules perform automatically.
1. Migrate gp2 Volumes to gp3 (EBS-O001)
This is the single easiest win in EBS cost optimization. gp3 volumes cost $0.08/GB-month compared to $0.10/GB-month for gp2. That’s a flat 20% storage savings on every volume you migrate.
But the pricing is only half the story. gp3 also gives you better baseline performance for free:
- 3,000 IOPS included (gp2 only gives you 3,000 IOPS at 1,000 GB or more)
- 125 MB/s throughput included (gp2 tops out at 250 MB/s only for volumes 334 GB and larger)
- IOPS and throughput are decoupled from volume size (gp2 scales IOPS at 3 per GB)
For volumes under 1,000 GB, gp3 is strictly better on both price and performance. A 100 GB gp2 volume only gets 300 IOPS. The same volume on gp3 gets 3,000 IOPS. That’s 10x the performance at 20% less cost.
Find all gp2 volumes in your account:
aws ec2 describe-volumes \
--filters Name=volume-type,Values=gp2 \
--query 'Volumes[].{ID:VolumeId,Size:Size,State:State,AZ:AvailabilityZone}' \
--output table
Migrate a volume from gp2 to gp3:
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type gp3
The migration happens online. No downtime, no detaching the volume, no stopping the instance. AWS performs the modification in the background. You can monitor progress with:
aws ec2 describe-volumes-modifications \
--volume-ids vol-0123456789abcdef0 \
--query 'VolumesModifications[].{State:ModificationState,Progress:Progress}'
Important caveats for larger volumes:
For gp2 volumes larger than 170 GB, you may need to provision additional IOPS or throughput on gp3 to match the gp2 performance. A 1,000 GB gp2 volume delivers 3,000 IOPS and 250 MB/s. The gp3 baseline matches on IOPS (3,000 free) but you’d need to provision the extra 125 MB/s of throughput at $0.04/MB/s-month ($5.00/month). Even with that, you still save $15/month on a 1 TB volume.
For gp2 volumes larger than 1,000 GB, baseline IOPS exceed the gp3 free tier (gp2 scales at 3 IOPS per GB, capping at 16,000 IOPS at around 5,334 GB), so you'd need to provision the extra IOPS on gp3 at $0.005/IOPS-month. Run the numbers for each volume, but in nearly every case gp3 still wins.
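As a rough sketch of that matching arithmetic, the helper below estimates the monthly gp3 cost needed to match a gp2 volume's performance. It assumes us-east-1 list prices ($0.08/GB-month, $0.005 per extra IOPS, $0.04 per extra MB/s) and approximates gp2 throughput as 250 MB/s above 170 GB; `gp3_match_cost` is a hypothetical name, not an AWS command:

```shell
# Rough sketch: monthly gp3 cost to match a gp2 volume of a given size (GB).
# Assumed us-east-1 list prices: gp3 $0.08/GB-month, $0.005 per extra
# provisioned IOPS, $0.04 per extra provisioned MB/s.
gp3_match_cost() {
  awk -v s="$1" 'BEGIN {
    iops = s * 3                          # gp2 scales IOPS at 3 per GB
    if (iops > 16000) iops = 16000
    if (iops < 3000)  iops = 3000
    tput = (s > 170) ? 250 : 125          # larger gp2 volumes reach 250 MB/s
    extra_iops = (iops > 3000) ? iops - 3000 : 0
    extra_tput = (tput > 125) ? tput - 125 : 0
    printf "$%.2f/month\n", s * 0.08 + extra_iops * 0.005 + extra_tput * 0.04
  }'
}
gp3_match_cost 1000   # the 1 TB example above
```

For the 1 TB case this reproduces the $85/month figure ($80 storage plus $5 of provisioned throughput), versus $100/month on gp2.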
CostPatrol’s EBS-O001 rule scans all gp2 volumes in your account and calculates exact savings for each one, accounting for the IOPS and throughput provisioning costs needed to match or exceed current gp2 performance.
2. Find and Delete Orphaned Volumes (EBS-O002, EBS-O003)
When you terminate an EC2 instance, its EBS volumes can persist. Unless “Delete on Termination” was enabled at launch, the volumes stick around in an “available” state. They cost exactly the same whether attached to an instance or sitting idle.
This is one of the most common sources of AWS waste. Development environments spin up and down. Auto Scaling groups create and destroy instances. Engineers launch test instances and forget to clean up. Over time, orphaned volumes accumulate and nobody notices because individual volumes seem cheap.
They’re not cheap at scale. Fifty orphaned 100 GB gp3 volumes cost $400/month, or $4,800/year. That’s real money for storage that’s doing absolutely nothing.
Find all unattached volumes:
aws ec2 describe-volumes \
--filters Name=status,Values=available \
--query 'Volumes[].{ID:VolumeId,Size:Size,Type:VolumeType,Created:CreateTime}' \
--output table
Calculate the monthly cost of all unattached volumes:
aws ec2 describe-volumes \
--filters Name=status,Values=available \
--query 'Volumes[].[Size,VolumeType]' \
--output text | awk '{
if ($2 == "gp2") rate=0.10;
else if ($2 == "gp3") rate=0.08;
else if ($2 == "io1" || $2 == "io2") rate=0.125;
else if ($2 == "st1") rate=0.045;
else if ($2 == "sc1") rate=0.015;
else rate=0.10;
total += $1 * rate;
} END {printf "Monthly cost of unattached volumes: $%.2f\n", total}'
Before deleting, take a snapshot as a safety net:
aws ec2 create-snapshot \
--volume-id vol-0123456789abcdef0 \
--description "Backup before cleanup - $(date +%Y-%m-%d)"
Then delete the volume:
aws ec2 delete-volume --volume-id vol-0123456789abcdef0
Prevent future orphaned volumes by setting Delete on Termination on new instances:
# Combine with the rest of your run-instances parameters (AMI, instance type, etc.)
aws ec2 run-instances \
--block-device-mappings '[{
"DeviceName": "/dev/xvda",
"Ebs": {"DeleteOnTermination": true}
}]'
For existing instances, you can modify the attribute:
aws ec2 modify-instance-attribute \
--instance-id i-0123456789abcdef0 \
--block-device-mappings '[{
"DeviceName": "/dev/xvda",
"Ebs": {"DeleteOnTermination": true}
}]'
CostPatrol’s EBS-O002 rule identifies orphaned volumes (available state with no recent attachment) and EBS-O003 flags unattached volumes that have been idle for more than 30 days, estimating monthly waste for each.
3. Clean Up Old and Unused EBS Snapshots
EBS snapshots cost $0.05/GB-month. That’s half the cost of a gp2 volume, but snapshots are sneaky. They accumulate quietly because nobody monitors them. A team that takes daily snapshots of ten 500 GB volumes creates 300 snapshots per month. Even though snapshots are incremental (only storing changed blocks), that storage grows steadily.
The bigger problem is orphaned snapshots. These are snapshots whose source volumes have been deleted. The snapshot still exists, still costs money, but there’s no volume to restore it to. In most accounts, 30-50% of snapshots fall into this category.
Find snapshots whose source volume no longer exists:
# Get all snapshot volume IDs
snapshot_vols=$(aws ec2 describe-snapshots --owner-ids self \
--query 'Snapshots[].VolumeId' --output text | tr '\t' '\n' | sort -u)
# Get all existing volume IDs
existing_vols=$(aws ec2 describe-volumes \
--query 'Volumes[].VolumeId' --output text | tr '\t' '\n' | sort -u)
# Find snapshots referencing deleted volumes
for vol in $snapshot_vols; do
  if ! echo "$existing_vols" | grep -Fqx "$vol"; then
    echo "Orphaned snapshots for deleted volume: $vol"
    aws ec2 describe-snapshots --owner-ids self \
      --filters Name=volume-id,Values="$vol" \
      --query 'Snapshots[].SnapshotId' --output text
  fi
done
A simpler approach is to list all snapshots sorted by age:
aws ec2 describe-snapshots --owner-ids self \
--query 'sort_by(Snapshots, &StartTime)[].{ID:SnapshotId,Size:VolumeSize,Created:StartTime,Description:Description}' \
--output table
Calculate total snapshot storage cost:
aws ec2 describe-snapshots --owner-ids self \
--query 'Snapshots[].VolumeSize' --output text | tr '\t' '\n' | awk '{
total += $1
} END {
printf "Total snapshot storage: %d GB\n", total;
printf "Estimated monthly cost: $%.2f\n", total * 0.05
}'
Note: this overestimates because snapshots are incremental. Actual storage is typically 30-60% of the source volume size. But it gives you a useful upper bound.
Delete a snapshot:
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0
Bulk delete snapshots older than 90 days:
cutoff=$(date -u -d '90 days ago' +%Y-%m-%dT%H:%M:%S 2>/dev/null || \
date -u -v-90d +%Y-%m-%dT%H:%M:%S)
aws ec2 describe-snapshots --owner-ids self \
--query "Snapshots[?StartTime<'${cutoff}'].SnapshotId" \
--output text | tr '\t' '\n' | while read snap; do
echo "Deleting snapshot: $snap"
aws ec2 delete-snapshot --snapshot-id "$snap"
done
Before bulk-deleting, verify that none of these snapshots are referenced by AMIs or used as backup restore points.
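One way to sketch that AMI check is a small helper that flags candidate snapshots referenced by your AMIs. `flag_ami_snapshots` is a hypothetical name; the `describe-images` query in the comment shows where the AMI-backed snapshot IDs would come from:

```shell
# Sketch: given the snapshot IDs your AMIs reference ($1, one per line),
# read candidate snapshot IDs on stdin and flag the ones you must keep.
flag_ami_snapshots() {
  ami_snaps=$1
  while read -r snap; do
    if printf '%s\n' "$ami_snaps" | grep -Fqx "$snap"; then
      echo "KEEP (AMI-backed): $snap"
    fi
  done
}
# Live usage (requires AWS credentials):
#   ami_snaps=$(aws ec2 describe-images --owners self \
#     --query 'Images[].BlockDeviceMappings[].Ebs.SnapshotId' \
#     --output text | tr '\t' '\n')
#   aws ec2 describe-snapshots --owner-ids self \
#     --query 'Snapshots[].SnapshotId' --output text | tr '\t' '\n' \
#     | flag_ami_snapshots "$ami_snaps"
```

Any snapshot it flags should be excluded from the bulk delete, or the AMI deregistered first.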
4. Right-Size Over-Provisioned Volumes
Developers tend to over-provision EBS volumes “just in case.” A 500 GB volume gets created for a workload that uses 50 GB. An io2 volume gets provisioned with 10,000 IOPS for a database that peaks at 1,500.
You’re paying for the provisioned size, not what you use. A 500 GB gp3 volume costs $40/month even if only 50 GB contains data. Shrinking it to 100 GB saves $32/month per volume.
Check actual disk usage on a Linux instance:
df -h --type=ext4 --type=xfs
Monitor IOPS utilization with CloudWatch:
aws cloudwatch get-metric-statistics \
--namespace AWS/EBS \
--metric-name VolumeReadOps \
--dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
--start-time $(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 3600 \
--statistics Maximum \
--query 'sort_by(Datapoints, &Timestamp)[-5:].{Time:Timestamp,MaxReadOps:Maximum}'
Do the same for VolumeWriteOps to capture total I/O activity. If peak usage over the two-week window sits well below your provisioned IOPS, you're over-provisioned.
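Keep in mind that VolumeReadOps and VolumeWriteOps are operation counts per period, not IOPS. A small helper (hypothetical name `total_iops`) can sum the read and write counts and divide by the period length to get an average rate:

```shell
# Sketch: sum per-period read and write op counts from stdin and divide by
# the CloudWatch period (seconds) to get average IOPS over that period.
total_iops() {
  awk -v period="$1" '{ ops += $1 } END { printf "%.0f IOPS\n", ops / period }'
}
# Example: 90,000 reads plus 54,000 writes in one 3,600-second period
printf '90000\n54000\n' | total_iops 3600
```

Feed it the Maximum (or Sum) values from the two get-metric-statistics calls, one number per line.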
Check throughput utilization:
aws cloudwatch get-metric-statistics \
--namespace AWS/EBS \
--metric-name VolumeReadBytes \
--dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
--start-time $(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 3600 \
--statistics Maximum \
--query 'sort_by(Datapoints, &Timestamp)[-5:].{Time:Timestamp,MaxBytes:Maximum}'
EBS volumes can’t be shrunk in place. To downsize, you need to:
- Create a snapshot of the current volume
- Create a new, smaller volume from the snapshot
- Attach the new volume and verify data integrity
- Detach and delete the old volume
# Step 1: Snapshot the volume
aws ec2 create-snapshot \
--volume-id vol-OLD123 \
--description "Pre-resize backup"
# Step 2: Create smaller volume from snapshot
aws ec2 create-volume \
--snapshot-id snap-FROMSTEP1 \
--volume-type gp3 \
--size 100 \
--availability-zone us-east-1a
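The remaining steps follow the same pattern; the volume, instance, and device names below are placeholders, and the device name you pick must not collide with an existing attachment:

```shell
# Step 3: Attach the new volume, mount it, and verify data integrity
aws ec2 attach-volume \
--volume-id vol-NEW456 \
--instance-id i-0123456789abcdef0 \
--device /dev/sdf
# Step 4: Once verified, detach and delete the old volume
aws ec2 detach-volume --volume-id vol-OLD123
aws ec2 delete-volume --volume-id vol-OLD123
```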
This process involves downtime, so plan accordingly. For root volumes, you’ll need to stop the instance.
5. Use the Right Volume Type for the Workload
AWS offers six EBS volume types, but most teams default to gp2 or gp3 for everything. That’s fine for general workloads, but it means you might be overpaying for some and underperforming on others.
Here’s when to use each type:
| Volume Type | Price/GB-month | Included IOPS | Max IOPS / Throughput | Best For |
|---|---|---|---|---|
| gp3 | $0.08 | 3,000 | 16,000 | Most workloads, databases under 16K IOPS |
| gp2 | $0.10 | 3/GB | 16,000 | Nothing. Migrate to gp3 |
| io2 | $0.125 | None (all provisioned) | 64,000 | High-performance databases, latency-sensitive apps |
| io2 Block Express | $0.125 | None | 256,000 | Extreme IOPS requirements (SAP HANA, large Oracle) |
| st1 | $0.045 | N/A | 500 MB/s throughput | Big data, log processing, sequential reads |
| sc1 | $0.015 | N/A | 250 MB/s throughput | Cold storage, infrequent access, archives |
Common misconfigurations:
io2 volumes running low-IOPS workloads. If your io2 volume uses fewer than 3,000 IOPS consistently, gp3 gives you the same performance at a lower price. A 100 GB io2 volume with 3,000 provisioned IOPS costs $12.50/month for storage (100 GB x $0.125) plus $195/month for IOPS (3,000 x $0.065), or $207.50 total. A 100 GB gp3 volume with the same 3,000 IOPS costs $8/month because those IOPS are included.
gp3 volumes used for sequential workloads. Log processing, data warehousing, and Kafka brokers that read/write sequentially benefit from st1 volumes. A 1 TB st1 volume costs $45/month compared to $80/month for gp3, and delivers 500 MB/s throughput.
Find io2 volumes with low IOPS provisioning:
aws ec2 describe-volumes \
--filters Name=volume-type,Values=io2 \
--query 'Volumes[?Iops<=`3000`].{ID:VolumeId,Size:Size,IOPS:Iops,AZ:AvailabilityZone}' \
--output table
These are prime candidates for gp3 migration. You get the same IOPS for free.
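The cost gap is easy to sanity-check with a small comparator. This sketch assumes the list prices cited above (io2 $0.125/GB plus $0.065/IOPS; gp3 $0.08/GB with 3,000 IOPS free and $0.005 per extra IOPS); `io2_vs_gp3` is a hypothetical name:

```shell
# Sketch: monthly cost of an io2 volume vs gp3 at the same size (GB) and IOPS.
# Assumed list prices: io2 $0.125/GB + $0.065/IOPS; gp3 $0.08/GB with
# 3,000 IOPS included and $0.005 per additional provisioned IOPS.
io2_vs_gp3() {
  awk -v s="$1" -v i="$2" 'BEGIN {
    io2 = s * 0.125 + i * 0.065
    gp3 = s * 0.08 + ((i > 3000) ? (i - 3000) * 0.005 : 0)
    printf "io2: $%.2f/mo  gp3: $%.2f/mo\n", io2, gp3
  }'
}
io2_vs_gp3 100 3000   # the 100 GB / 3,000 IOPS example above
```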
6. Set Up Snapshot Lifecycle Policies
Manual snapshot management doesn’t scale. If you’re relying on engineers to create and clean up snapshots, you’ll end up with either gaps in coverage or thousands of forgotten snapshots.
AWS Data Lifecycle Manager (DLM) automates this. It’s free to use and handles creation, retention, and deletion on a schedule.
Create a lifecycle policy that takes daily snapshots and retains 7 days:
aws dlm create-lifecycle-policy \
--description "Daily snapshots, 7-day retention" \
--state ENABLED \
--execution-role-arn arn:aws:iam::012345678901:role/AWSDataLifecycleManagerDefaultRole \
--policy-details '{
"PolicyType": "EBS_SNAPSHOT_MANAGEMENT",
"ResourceTypes": ["VOLUME"],
"TargetTags": [{"Key": "Backup", "Value": "true"}],
"Schedules": [{
"Name": "DailySnapshots",
"CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
"RetainRule": {"Count": 7},
"CopyTags": true
}]
}'
Tag the volumes you want to include:
aws ec2 create-tags \
--resources vol-0123456789abcdef0 \
--tags Key=Backup,Value=true
For long-term retention, use the snapshot archive tier. Archive storage costs $0.0125/GB-month, 75% less than standard snapshot storage. Move snapshots older than 90 days to archive:
aws ec2 modify-snapshot-tier \
--snapshot-id snap-0123456789abcdef0 \
--storage-tier archive
Keep in mind that restoring from archive takes 24-72 hours, so this is only appropriate for compliance backups and disaster recovery snapshots you hope to never need.
List your existing lifecycle policies:
aws dlm get-lifecycle-policies \
--query 'Policies[].{ID:PolicyId,Description:Description,State:State}' \
--output table
A well-configured DLM policy prevents snapshot sprawl from the start. Pair it with a quarterly audit of existing snapshots to catch anything created outside the policy.
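For that audit, DLM tags the snapshots it creates (with `aws:dlm:lifecycle-policy-id`, among others), so one approach is to list snapshots missing that tag. Treat the query as an assumption to verify against your own account's tagging:

```shell
# List self-owned snapshots not created by any DLM policy
# (assumes DLM's aws:dlm:lifecycle-policy-id tag is present on managed ones)
aws ec2 describe-snapshots --owner-ids self \
--query "Snapshots[?!(Tags[?Key=='aws:dlm:lifecycle-policy-id'])].{ID:SnapshotId,Created:StartTime}" \
--output table
```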
7. Consider EBS-Optimized Instances
EBS-optimized instances provide dedicated bandwidth between the EC2 instance and EBS volumes. Without it, EBS traffic competes with network traffic on the same pipe, leading to inconsistent I/O performance. When performance is inconsistent, teams tend to over-provision IOPS to compensate, which drives up costs.
The good news: most current-generation instance types (M5, C5, R5, M6i, C6i, R6i, and all newer families) are EBS-optimized by default at no extra charge, and so are M4, C4, and R4. If you're running anything older (M3, C3, R3, and earlier), you're either paying extra for EBS optimization or running without it.
Check if your instances are EBS-optimized:
aws ec2 describe-instances \
--query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,EBSOptimized:EbsOptimized}' \
--output table
Find instances running older types that may not be EBS-optimized by default:
aws ec2 describe-instances \
--query 'Reservations[].Instances[] | [?!EbsOptimized].{ID:InstanceId,Type:InstanceType,State:State.Name}' \
--output table
If you’re on older instance types, migrating to current-generation instances gives you EBS optimization for free, plus better price-performance on the compute side. An m5.xlarge costs less than an m4.xlarge in most regions and delivers better EBS throughput.
Optimization Checklist
Run through this for every AWS account:
- List all gp2 volumes and migrate to gp3
- Find unattached volumes in “available” state
- Delete orphaned volumes (snapshot first if unsure)
- Enable “Delete on Termination” on all non-persistent volumes
- Identify snapshots referencing deleted volumes
- Delete snapshots older than retention requirements
- Set up DLM lifecycle policies for automated snapshot management
- Check io2 volumes for low IOPS utilization (candidates for gp3)
- Review volume sizes vs actual disk usage
- Move old compliance snapshots to archive tier
- Verify instances are EBS-optimized (or upgrade instance types)
What Savings to Expect
| Optimization | Typical Savings | Effort |
|---|---|---|
| gp2 to gp3 migration | 20% of gp2 spend | Low |
| Delete orphaned volumes | $5-50/month per volume | Low |
| Clean up old snapshots | 30-60% of snapshot spend | Medium |
| Right-size volumes | 20-50% per over-provisioned volume | Medium-High |
| io2 to gp3 (where applicable) | 50-80% per volume | Low |
| st1 for sequential workloads | 40-55% vs gp3 | Low |
| Snapshot archive tier | 75% on archived snapshots | Low |
For a team spending $2,000/month on EBS, these optimizations typically cut costs by $600-$1,200/month. The gp2 to gp3 migration alone delivers immediate, risk-free savings. Cleaning up orphaned volumes and snapshots comes next. Right-sizing takes more effort but pays off on the biggest volumes.
CostPatrol scans for all of these issues automatically. Its EBS rules (EBS-O001, EBS-O002, EBS-O003) analyze every volume and snapshot in your account, calculate exact savings per resource, and prioritize recommendations by dollar impact. Run a scan to see which EBS optimizations apply to your account.