How to Find Unused AWS Resources Costing You Money
The average organization wastes 30-40% of its cloud spend on idle, oversized, or unused resources. That’s not a rounding error. On a $50,000/month AWS bill, you’re looking at $15,000-$20,000 going to resources nobody is using.
The problem isn’t that teams don’t care about costs. It’s that AWS makes it remarkably easy to create resources and remarkably hard to notice when they’re sitting idle. An engineer spins up a test instance, a project gets shelved, the instance keeps running. A volume gets detached during maintenance, nobody reattaches it. An Elastic IP gets allocated for a feature that never ships.
This guide covers 12 categories of AWS resource waste, each with a CLI command to find it and a clear path to clean it up. These are the same checks that CostPatrol runs automatically across your accounts.
1. Idle EC2 Instances (EC2-O002)
An idle EC2 instance is one that’s running, billed at full price, but doing almost nothing. The most common indicator is sustained low CPU utilization, typically below 5% over a 14-day period.
A single m5.xlarge instance running idle in us-east-1 costs $140/month. Multiply that across a fleet of 20-30 forgotten instances and you’re burning $3,000-$4,000/month on compute that delivers no value.
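The arithmetic behind that figure is simple: an always-on instance bills its hourly rate for roughly 730 hours a month. A quick sketch (the $0.192/hour rate is the us-east-1 on-demand price for m5.xlarge; substitute your own region's rate):

```shell
# Monthly cost of an always-on instance: hourly rate x ~730 hours/month.
# 0.192 is the us-east-1 on-demand rate for m5.xlarge (illustrative).
monthly_cost() {
  awk -v rate="$1" 'BEGIN { printf "%.2f\n", rate * 730 }'
}
monthly_cost 0.192   # m5.xlarge -> 140.16
```

Run the same function against your fleet's instance types to size the total waste before you start digging.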
Find instances with average CPU below 5% over the last 14 days:
# GNU date syntax shown; on macOS/BSD use `date -u -v-14d +%Y-%m-%dT%H:%M:%S` instead
for id in $(aws ec2 describe-instances \
--filters Name=instance-state-name,Values=running \
--query 'Reservations[].Instances[].InstanceId' --output text); do
cpu=$(aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--dimensions Name=InstanceId,Value=$id \
--start-time $(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 1209600 \
--statistics Average \
--query 'Datapoints[0].Average' --output text)
echo "$id: ${cpu}% avg CPU"
done
Before terminating, verify:
- The instance isn’t part of an Auto Scaling group that expects it
- No cron jobs or batch processes run on longer cycles (weekly, monthly)
- No other instances depend on it for service discovery or internal DNS
- Check network traffic metrics, not just CPU. Some workloads (proxies, caches) are CPU-light but actively serving traffic
If the instance is genuinely idle, stop it first. Monitor for a week to confirm nothing breaks. Then terminate and delete associated resources.
CostPatrol’s EC2-O002 rule flags instances with average CPU utilization below 5% over a 14-day window, filtering out instances in Auto Scaling groups and newly launched instances that haven’t had time to stabilize.
2. Stopped EC2 Instances Still Incurring EBS Charges
Stopping an instance eliminates compute charges, but EBS volumes attached to stopped instances keep billing. A stopped instance with a 500 GB gp3 volume still costs $40/month in storage alone ($0.08/GB-month for gp3 in us-east-1).
Teams often stop instances “temporarily” as a cheaper alternative to termination. Months later, those stopped instances are still there, quietly accumulating EBS charges.
Find stopped instances and their attached storage costs:
aws ec2 describe-instances \
--filters Name=instance-state-name,Values=stopped \
--query 'Reservations[].Instances[].{
InstanceId: InstanceId,
Name: Tags[?Key==`Name`]|[0].Value,
StoppedSince: StateTransitionReason,
Volumes: BlockDeviceMappings[].Ebs.VolumeId
}' --output table
For each stopped instance, decide:
- If you need it again, create an AMI and terminate the instance. You’ll pay snapshot storage ($0.05/GB-month) instead of full EBS pricing, saving 37% or more on gp3 volumes.
- If you don’t need it, terminate the instance and delete its volumes.
- If you’re unsure, tag it with a review date. Set a CloudWatch alarm or calendar reminder. If nobody claims it in 30 days, terminate it.
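The 37% figure in the first option falls straight out of the storage rates, and because snapshots store only written blocks, real-world savings are often higher:

```shell
# Percent saved by archiving to a snapshot ($0.05/GB-month)
# instead of keeping a live gp3 volume ($0.08/GB-month).
awk 'BEGIN { printf "%.1f\n", (0.08 - 0.05) / 0.08 * 100 }'   # -> 37.5
```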
3. Unattached EBS Volumes (EBS-O003)
When you terminate an EC2 instance, its root volume is deleted by default. But additional volumes are not. They stay behind in an “available” state, billing every month, attached to nothing.
This is one of the most common sources of AWS waste. A single forgotten 1 TB gp3 volume costs $80/month. Across an organization with dozens of engineers creating and destroying instances, unattached volumes accumulate fast.
Find all unattached EBS volumes:
aws ec2 describe-volumes \
--filters Name=status,Values=available \
--query 'Volumes[].{
VolumeId: VolumeId,
SizeGB: Size,
Type: VolumeType,
Created: CreateTime,
AZ: AvailabilityZone
}' --output table
Before deleting, check:
- Whether the volume contains data someone might need. Create a snapshot first if there’s any doubt.
- Whether any automation or IaC (Terraform, CloudFormation) expects the volume to exist
- The CreateTime field: volumes created recently might be waiting for attachment by an in-progress deployment.
Delete a confirmed unused volume:
aws ec2 delete-volume --volume-id vol-0123456789abcdef0
CostPatrol’s EBS-O003 rule detects volumes in the “available” state and calculates the monthly cost based on volume type and size, so you can prioritize cleanup by dollar impact.
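If you want to do that prioritization yourself, you can price each volume locally. A sketch that ranks describe-volumes-style text rows (VolumeId, size in GB, type per line) by monthly cost; the sample IDs are made up and the rates are illustrative us-east-1 prices:

```shell
# Rank unattached volumes by monthly cost: size_gb x $/GB-month rate.
sort_by_cost() {
  awk '{
    rate = ($3 == "gp3") ? 0.08 : 0.10   # assumed gp3 vs gp2 rates
    printf "%s %.2f\n", $1, $2 * rate
  }' | sort -k2 -rn
}
printf '%s\n' 'vol-aaa 1000 gp3' 'vol-bbb 100 gp2' 'vol-ccc 500 gp3' \
  | sort_by_cost
```

The most expensive volumes surface first, so a partial cleanup still captures most of the savings.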
4. Old EBS Snapshots
EBS snapshots are cheap individually ($0.05/GB-month in us-east-1), but they accumulate. Automated backup solutions, CI/CD pipelines that snapshot before deployments, and manual “just in case” snapshots all contribute to a growing snapshot inventory.
A common pattern: an organization has 500 snapshots totaling 10 TB. That’s $500/month for data that nobody has accessed in a year.
Find snapshots older than 90 days:
aws ec2 describe-snapshots --owner-ids self \
--query "Snapshots[?StartTime<='$(date -u -d '90 days ago' +%Y-%m-%dT%H:%M:%S)'].{
SnapshotId: SnapshotId,
SizeGB: VolumeSize,
Created: StartTime,
Description: Description
}" --output table
Find orphaned snapshots (source volume no longer exists):
vol_ids=$(aws ec2 describe-volumes \
--query 'Volumes[].VolumeId' --output text)
aws ec2 describe-snapshots --owner-ids self \
--query 'Snapshots[].{SnapshotId: SnapshotId, VolumeId: VolumeId}' \
--output text | while read snap vol; do
echo "$vol_ids" | grep -qwF "$vol" || echo "ORPHANED: $snap (vol: $vol)"
done
Before deleting snapshots:
- Confirm they aren’t referenced by any AMI (see Section 10)
- Check if any compliance or audit requirements mandate retention
- Verify no Terraform state or CloudFormation stack depends on them
5. Unused Elastic IPs (EIP-O001)
Since February 2024, AWS charges $0.005/hour for ALL public IPv4 addresses, whether attached or not. That’s $3.65/month per IP. It sounds small until you count them. Organizations with multiple VPCs, environments, and regions can easily accumulate 50-100 unused Elastic IPs. That’s $182-$365/month for IP addresses doing nothing.
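The per-IP and fleet numbers follow directly from the hourly rate:

```shell
# Idle public IPv4: $0.005/hour x ~730 hours/month, times fleet size.
eip_monthly() {
  awk -v n="$1" 'BEGIN { printf "%.2f\n", n * 0.005 * 730 }'
}
eip_monthly 1    # -> 3.65
eip_monthly 50   # -> 182.50
```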
Find unassociated Elastic IPs:
aws ec2 describe-addresses \
--query "Addresses[?AssociationId==null].{
PublicIp: PublicIp,
AllocationId: AllocationId,
Domain: Domain
}" --output table
Release unused Elastic IPs:
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0
One important note: if your application relies on a specific IP address (for DNS records, allowlists, or firewall rules), releasing it means you won’t get the same IP back. Verify that no DNS records or external systems reference the IP before releasing.
CostPatrol’s EIP-O001 rule identifies unassociated Elastic IPs across all regions and calculates total monthly waste per account.
6. Idle Load Balancers
An Application Load Balancer (ALB) costs approximately $0.0225/hour ($16.20/month) in base charges alone, even with zero traffic. Network Load Balancers (NLBs) are similar at $0.0225/hour. Add LCU/NLCU charges on top for any minimal traffic they do handle.
Idle load balancers typically appear after a service is decommissioned but the infrastructure stays behind, or when a staging environment is created and forgotten.
Find ALBs with no registered targets:
for arn in $(aws elbv2 describe-load-balancers \
--query 'LoadBalancers[].LoadBalancerArn' --output text); do
targets=$(aws elbv2 describe-target-groups \
--load-balancer-arn "$arn" \
--query 'TargetGroups[].TargetGroupArn' --output text)
has_targets=false
for tg in $targets; do
health=$(aws elbv2 describe-target-health \
--target-group-arn "$tg" \
--query 'TargetHealthDescriptions' --output text)
[ -n "$health" ] && has_targets=true
done
[ "$has_targets" = true ] || echo "NO TARGETS: $arn"
done
Also check for load balancers with zero request count (RequestCount lives in the AWS/ApplicationELB namespace, so this check covers ALBs; NLBs report under AWS/NetworkELB):
for arn in $(aws elbv2 describe-load-balancers \
--query 'LoadBalancers[].LoadBalancerArn' --output text); do
name=$(echo "$arn" | sed 's/.*:loadbalancer\///')
count=$(aws cloudwatch get-metric-statistics \
--namespace AWS/ApplicationELB \
--metric-name RequestCount \
--dimensions Name=LoadBalancer,Value="$name" \
--start-time $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 604800 \
--statistics Sum \
--query 'Datapoints[0].Sum || `0`' --output text)
echo "$name: $count requests (7d)"
done
Before deleting a load balancer, confirm no DNS records (Route 53 aliases or external CNAMEs) point to it. Deleting a load balancer that’s still referenced by DNS will cause an outage.
7. Unused RDS Instances
RDS instances are among the most expensive resources in a typical AWS account. A db.r6g.xlarge instance runs about $0.48/hour ($350/month) before storage costs. Idle RDS instances usually show up as development or staging databases that nobody shut down.
Find RDS instances with near-zero connections over 14 days:
for db in $(aws rds describe-db-instances \
--query 'DBInstances[].DBInstanceIdentifier' --output text); do
conns=$(aws cloudwatch get-metric-statistics \
--namespace AWS/RDS \
--metric-name DatabaseConnections \
--dimensions Name=DBInstanceIdentifier,Value=$db \
--start-time $(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 1209600 \
--statistics Maximum \
--query 'Datapoints[0].Maximum || `0`' --output text)
echo "$db: max $conns connections (14d)"
done
If a database has zero connections over two weeks, it’s a strong candidate for cleanup. But before deleting:
- Take a final snapshot (RDS lets you create a final snapshot on deletion)
- Check if any application configs or environment variables reference the endpoint
- Verify that the database isn’t a read replica or part of a Multi-AZ failover pair
For databases you want to keep but rarely use, consider stopping the instance. RDS allows stopping instances for up to 7 days at a time (it automatically restarts after 7 days to apply maintenance). This eliminates compute charges while preserving data.
8. Orphaned Lambda Functions
Lambda functions don’t cost money when they’re not invoked. But orphaned functions create a different kind of waste: operational clutter, security exposure from outdated runtimes, and overly broad IAM roles that expand your blast radius.
That said, some orphaned functions DO incur costs through their associated resources. Each function typically has a CloudWatch log group that accumulates storage charges, and provisioned concurrency (if configured) bills continuously.
Find Lambda functions with zero invocations in the last 30 days:
for fn in $(aws lambda list-functions \
--query 'Functions[].FunctionName' --output text); do
inv=$(aws cloudwatch get-metric-statistics \
--namespace AWS/Lambda \
--metric-name Invocations \
--dimensions Name=FunctionName,Value=$fn \
--start-time $(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 2592000 \
--statistics Sum \
--query 'Datapoints[0].Sum || `0`' --output text)
[ "$inv" = "0" ] && echo "UNUSED: $fn"
done
Before deleting, verify the function isn’t triggered on a cycle longer than 30 days (quarterly jobs, annual reports) and that no Step Functions workflows or API Gateway integrations reference it.
9. Unattached Elastic Network Interfaces
Elastic Network Interfaces (ENIs) don’t have a direct hourly cost, but they do hold onto resources that cost money. An ENI with an associated Elastic IP keeps billing for that IP at $0.005/hour. ENIs also count toward VPC limits and can hold security group associations that complicate audits.
ENIs are commonly left behind by Lambda functions (when VPC-attached), ECS tasks, RDS instances, and other services that create and sometimes fail to clean up network interfaces.
Find unattached ENIs:
aws ec2 describe-network-interfaces \
--filters Name=status,Values=available \
--query 'NetworkInterfaces[].{
InterfaceId: NetworkInterfaceId,
Description: Description,
SubnetId: SubnetId,
AZ: AvailabilityZone
}' --output table
Important caveat: Some ENIs in “available” status are managed by AWS services and will be reattached automatically. Check the Description field. If it starts with “AWS Lambda VPC ENI” or “ELB,” the owning service may still need it. Focus on ENIs with empty or custom descriptions that have been unattached for an extended period.
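One way to apply that caveat mechanically is to filter out the service-managed description prefixes before reviewing. A sketch over tab-separated rows in the shape of `describe-network-interfaces --output text` (the IDs and descriptions below are made up):

```shell
# Print only ENIs whose description doesn't start with a known
# service-managed prefix (Lambda VPC ENIs, ELB-owned interfaces).
flag_custom_enis() {
  awk -F'\t' '$2 !~ /^(AWS Lambda VPC ENI|ELB)/ { print $1 }'
}
printf 'eni-aaa\tAWS Lambda VPC ENI-123\neni-bbb\t\neni-ccc\told bastion nic\n' \
  | flag_custom_enis
```

Anything this prints (empty or custom descriptions) is worth a human look; anything it filters out should be left to the owning service.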
10. Old AMIs with Associated Snapshots
Custom AMIs are backed by EBS snapshots. When you deregister an AMI, AWS does not automatically delete its underlying snapshots. This means you can clean up old AMIs thinking you’ve reclaimed the space, only to keep paying for the snapshot storage.
A typical AMI with a 50 GB root volume and a 200 GB data volume has 250 GB in snapshots, costing $12.50/month at $0.05/GB-month. If your CI/CD pipeline creates a new AMI per deployment and you deploy daily, a year of uncleaned builds leaves 365 AMIs whose snapshots bill over $4,500 every month.
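The arithmetic, assuming each AMI carries 250 GB of snapshots and none are cleaned up:

```shell
# Snapshot storage behind AMIs: 250 GB each at $0.05/GB-month.
ami_snapshot_cost() {
  awk -v amis="$1" 'BEGIN { printf "%.2f\n", amis * 250 * 0.05 }'
}
ami_snapshot_cost 1     # one AMI -> 12.50
ami_snapshot_cost 365   # a year of daily builds -> 4562.50
```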
Find AMIs older than 90 days:
aws ec2 describe-images --owners self \
--query "Images[?CreationDate<='$(date -u -d '90 days ago' +%Y-%m-%dT%H:%M:%S)'].{
ImageId: ImageId,
Name: Name,
Created: CreationDate,
Snapshots: BlockDeviceMappings[].Ebs.SnapshotId
}" --output table
Deregister an AMI and delete its snapshots:
# First, deregister the AMI
aws ec2 deregister-image --image-id ami-0123456789abcdef0
# Then delete each associated snapshot
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0
Before deregistering, confirm no launch templates, Auto Scaling groups, or instance configurations reference the AMI.
11. Idle NAT Gateways
NAT Gateways are expensive. Each one costs $0.045/hour ($32.40/month) plus $0.045 per GB of data processed. In a multi-AZ setup with one NAT Gateway per AZ, that’s $97.20/month in base charges before any data processing fees.
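The base-charge math, using the 720-hour month the figures above assume:

```shell
# NAT Gateway base charge: $0.045/hour per gateway, one per AZ.
nat_monthly() {
  awk -v az="$1" 'BEGIN { printf "%.2f\n", az * 0.045 * 720 }'
}
nat_monthly 1   # -> 32.40
nat_monthly 3   # multi-AZ -> 97.20
```

Data processing at $0.045/GB comes on top, which is why a busy NAT Gateway can dwarf its base charge.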
NAT Gateways become idle when the private subnets they serve no longer have instances that need outbound internet access. This happens when workloads are migrated, environments are torn down partially, or VPC architectures change.
Find NAT Gateways and check their recent data transfer:
for nat in $(aws ec2 describe-nat-gateways \
--filter Name=state,Values=available \
--query 'NatGateways[].NatGatewayId' --output text); do
bytes=$(aws cloudwatch get-metric-statistics \
--namespace AWS/NATGateway \
--metric-name BytesOutToDestination \
--dimensions Name=NatGatewayId,Value=$nat \
--start-time $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 604800 \
--statistics Sum \
--query 'Datapoints[0].Sum || `0`' --output text)
echo "$nat: $bytes bytes out (7d)"
done
A NAT Gateway with zero bytes transferred over 7 days is safe to investigate. Check the route tables for the private subnets it serves: if no routes point to the NAT Gateway, or if those subnets have no running instances, delete it.
Delete an idle NAT Gateway:
aws ec2 delete-nat-gateway --nat-gateway-id nat-0123456789abcdef0
Remember to also release the associated Elastic IP after deleting the NAT Gateway, or it will continue billing at $0.005/hour.
12. CloudWatch Log Groups with No Retention Policy
By default, CloudWatch Logs retains log data forever. Every log group without an explicit retention policy accumulates data indefinitely, and you pay $0.03/GB-month for stored logs plus $0.50/GB for ingestion.
This is a slow bleed. A single application logging at 1 GB/day generates 365 GB/year. At $0.03/GB-month, that’s $10.95/month after year one, $21.90/month after year two, and it keeps growing. Across 50 log groups, you could be storing terabytes of logs that nobody will ever query.
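The slow bleed compounds linearly: at 1 GB/day, the stored total grows by 365 GB a year and the whole accumulated archive bills $0.03/GB every month.

```shell
# Stored-log charge after N years at 1 GB/day with no retention policy.
log_monthly_cost() {
  awk -v years="$1" 'BEGIN { printf "%.2f\n", years * 365 * 0.03 }'
}
log_monthly_cost 1   # after year one -> 10.95
log_monthly_cost 2   # after year two -> 21.90
```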
Find log groups with no retention policy set:
aws logs describe-log-groups \
--query 'logGroups[?!retentionInDays].{
LogGroup: logGroupName,
StoredBytes: storedBytes,
Created: creationTime
}' --output table
Set a 30-day retention policy on a log group:
aws logs put-retention-policy \
--log-group-name /aws/lambda/my-function \
--retention-in-days 30
Common retention periods:
- 1-7 days: Development and staging environments
- 30 days: Production application logs
- 90 days: Audit and compliance logs
- 365 days: Security logs subject to regulatory requirements
Setting retention doesn’t delete the log group or stop ingestion. It simply tells CloudWatch to purge data older than the specified period. This is a safe, non-destructive operation that can save significant money over time.
CostPatrol flags log groups with no retention policy and calculates the projected annual storage cost based on current ingestion rates.
Cost Impact Summary
Here’s what each resource type typically wastes per unit, per month, based on us-east-1 pricing:
| Resource Type | Typical Monthly Waste (per unit) | Common Count per Org | Estimated Monthly Total |
|---|---|---|---|
| Idle EC2 (m5.xlarge) | $140.16 | 5-20 | $700 - $2,803 |
| Stopped EC2 (500 GB gp3) | $40.00 | 10-30 | $400 - $1,200 |
| Unattached EBS (500 GB gp3) | $40.00 | 20-50 | $800 - $2,000 |
| Old EBS Snapshots (100 GB avg) | $5.00 | 50-200 | $250 - $1,000 |
| Unused Elastic IP | $3.65 | 10-50 | $36 - $182 |
| Idle ALB | $16.20 | 2-5 | $32 - $81 |
| Idle RDS (db.r6g.xlarge) | $350.40 | 1-5 | $350 - $1,752 |
| Orphaned Lambda (log storage) | $5 - $20 | 10-30 | $50 - $600 |
| Unattached ENI (with EIP) | $3.65 | 5-20 | $18 - $73 |
| Old AMIs (250 GB snapshots) | $12.50 | 20-100 | $250 - $1,250 |
| Idle NAT Gateway | $32.40 | 1-3 | $32 - $97 |
| CW Logs (no retention, 50 GB) | $1.50 | 20-50 | $30 - $75 |
| Total potential waste | | | $2,948 - $11,113 |
For a mid-size organization, cleaning up unused resources typically reduces the monthly AWS bill by 5-15%. On a $100,000/month account, that’s $5,000-$15,000 in savings from resources that were delivering zero value.
Cleanup Checklist
Run through this list monthly, or better yet, automate it:
- Identify EC2 instances with average CPU below 5% for 14+ days
- Review all stopped EC2 instances. Terminate or AMI-and-terminate those older than 30 days
- Delete unattached EBS volumes (snapshot first if unsure)
- Purge EBS snapshots older than 90 days that aren’t tied to active AMIs
- Release unassociated Elastic IPs
- Remove load balancers with no healthy targets and zero traffic
- Stop or terminate RDS instances with zero connections for 14+ days
- Clean up Lambda functions with zero invocations for 30+ days
- Delete unattached ENIs not managed by AWS services
- Deregister old AMIs and delete their underlying snapshots
- Delete NAT Gateways with zero data transfer for 7+ days (and release their EIPs)
- Set retention policies on all CloudWatch log groups
Automate the Process
Running CLI commands manually works for a one-time cleanup. It doesn’t scale for ongoing cost management across multiple accounts and regions.
CostPatrol scans your AWS accounts continuously and detects all 12 resource waste categories covered in this guide. Each finding includes the specific resource, its monthly cost impact, and a recommended action. Instead of running a dozen CLI scripts and cross-referencing the output, you get a prioritized list of savings opportunities ranked by dollar impact.
The rules referenced throughout this guide (EC2-O002, EBS-O003, EIP-O001, and others) are the same detection rules CostPatrol runs in production. They’re designed to minimize false positives by incorporating time windows, utilization thresholds, and cross-referencing with related resources.
Start with the biggest items. In most accounts, idle EC2 instances, forgotten RDS databases, and unattached EBS volumes account for 70% of resource waste. Clean those up first, then work through the long tail.