
S3 Storage Class Optimization: Choose the Right Tier and Stop Overpaying

Most S3 buckets are sitting in Standard storage by default. Not because someone decided Standard was the right fit, but because nobody changed it after the initial upload. The result: you are paying $0.023/GB/month for data that hasn’t been touched in six months, when the same bytes could cost $0.004/GB or less in a colder tier.

S3 offers seven storage classes, each with different pricing for storage, requests, and retrieval. Picking the right class for each dataset is one of the highest-ROI cost optimizations in AWS. No architectural changes. No code rewrites. Just moving data to where it belongs.

This guide walks through each storage class, shows you how to audit your buckets, and gives you the lifecycle policies and CLI commands to make the shift.

1. S3 Storage Class Overview

AWS S3 has seven storage classes. The right choice depends on two factors: how often you access the data and how quickly you need it back when you do.

All pricing below is for us-east-1 (N. Virginia) as of March 2026.

Storage Class                 | Storage $/GB/mo   | PUT per 1K | GET per 1K | Retrieval $/GB | Min Duration | Min Object Size
S3 Standard                   | $0.023            | $0.005     | $0.0004    | None           | None         | None
S3 Intelligent-Tiering        | $0.023 (frequent) | $0.005     | $0.0004    | None           | None         | None
S3 Standard-IA                | $0.0125           | $0.01      | $0.001     | $0.01          | 30 days      | 128 KB
S3 One Zone-IA                | $0.01             | $0.01      | $0.001     | $0.01          | 30 days      | 128 KB
S3 Glacier Instant Retrieval  | $0.004            | $0.02      | $0.01      | $0.03          | 90 days      | 128 KB
S3 Glacier Flexible Retrieval | $0.0036           | $0.03      | $0.0004    | $0.01 (std)    | 90 days      | 40 KB
S3 Glacier Deep Archive       | $0.00099          | $0.05      | $0.0004    | $0.02 (std)    | 180 days     | 40 KB

The pattern is clear. Storage gets cheaper as you go down the table. Requests and retrievals get more expensive. Minimum duration charges mean you pay for the full period even if you delete the object early.

A quick example: 10 TB in Standard costs $230/month. Move that same 10 TB to Glacier Instant Retrieval and you are paying $40/month. That is $2,280/year in savings from a single policy change.
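Spelled out as a quick shell sketch, with the us-east-1 rates hard-coded from the table above (decimal TB, so 10,000 GB):

```shell
# Monthly cost of 10 TB in Standard vs Glacier Instant Retrieval,
# and the resulting annual savings.
GB=10000
standard=$(awk -v gb="$GB" 'BEGIN { printf "%.0f", gb * 0.023 }')
glacier=$(awk -v gb="$GB" 'BEGIN { printf "%.0f", gb * 0.004 }')
annual=$(awk -v s="$standard" -v g="$glacier" 'BEGIN { printf "%.0f", (s - g) * 12 }')
echo "Standard: \$$standard/mo, Glacier IR: \$$glacier/mo, savings: \$$annual/yr"
```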

2. Analyze Your Current Storage Distribution

Before making changes, you need to know what you have. This CLI command breaks down each bucket by storage class and total size.

List all buckets and their regions:

aws s3api list-buckets --query 'Buckets[].Name' --output text | \
  tr '\t' '\n' | while read -r bucket; do
  region=$(aws s3api get-bucket-location \
    --bucket "$bucket" \
    --query 'LocationConstraint' --output text)
  # JMESPath has no jq-style // default operator; us-east-1 buckets
  # return a null LocationConstraint, which prints as "None"
  [ "$region" = "None" ] && region="us-east-1"
  echo "$bucket ($region)"
done

Get storage breakdown by class for a specific bucket:

aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name BucketSizeBytes \
  --dimensions Name=BucketName,Value=my-bucket \
               Name=StorageType,Value=StandardStorage \
  --start-time $(date -u -d '1 day ago' +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --period 86400 \
  --statistics Average \
  --query 'Datapoints[0].Average'

Replace StandardStorage with StandardIAStorage, GlacierStorage, IntelligentTieringFAStorage, or DeepArchiveStorage to check other classes. Note that date -u -d '1 day ago' is GNU date syntax; on macOS, use date -u -v-1d instead.

Count objects per bucket:

aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name NumberOfObjects \
  --dimensions Name=BucketName,Value=my-bucket \
               Name=StorageType,Value=AllStorageTypes \
  --start-time $(date -u -d '1 day ago' +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --period 86400 \
  --statistics Average \
  --query 'Datapoints[0].Average'

CostPatrol’s S3-O002 rule runs this analysis across all buckets in your account and flags any bucket where more than 80% of storage is in Standard class with low access frequency.

3. S3 Intelligent-Tiering: When It’s Worth the Monitoring Fee

Intelligent-Tiering is the “set it and forget it” option. S3 monitors access patterns per object and automatically moves data between tiers:

  • Frequent Access tier: Same price as Standard ($0.023/GB)
  • Infrequent Access tier: Kicks in after 30 days of no access (40% savings)
  • Archive Instant Access tier: Kicks in after 90 days of no access (68% savings)
  • Archive Access tier (optional): After 90+ days, configurable
  • Deep Archive Access tier (optional): After 180+ days, configurable

The monitoring fee is $0.0025 per 1,000 objects per month. No retrieval fees. No minimum storage duration charge.

When Intelligent-Tiering makes sense:

  • You don’t know the access pattern and can’t predict it
  • Access patterns change over time (some months hot, some months cold)
  • You want hands-off optimization without lifecycle policy maintenance

When it does NOT make sense:

  • You have millions of tiny objects (under 128 KB). Objects below 128 KB are not monitored and always stay in the Frequent Access tier, so you pay Standard pricing with no benefit.
  • Your access pattern is well-known and consistent. A simple lifecycle rule to Standard-IA will save more because there is no monitoring fee.
  • High object count with low total storage. If you have 10 million objects but only 50 GB of storage, the monitoring fee alone is $25/month. The entire 50 GB costs only $1.15/month in Standard, so even archiving all of it could never save more than about $1/month. You are losing money.

The breakeven math: The monitoring fee for 1,000 objects is $0.0025/month. If each object averages 1 MB, that is roughly 1 GB of storage, and moving that 1 GB from the Frequent Access tier to the Infrequent Access tier saves about $0.0105/month. For average-sized objects the savings outweigh the fee roughly four to one once objects start tiering down after 30 days without access.
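The same breakeven as a shell sketch, using the tier rates quoted above:

```shell
# Intelligent-Tiering breakeven for 1,000 objects averaging 1 MB (~1 GB total).
fee=0.0025                                               # monitoring, per 1,000 objects/mo
saving=$(awk 'BEGIN { printf "%.4f", 0.023 - 0.0125 }')  # Frequent -> IA, per GB/mo
ratio=$(awk -v s="$saving" -v f="$fee" 'BEGIN { printf "%.1f", s / f }')
echo "savings/fee ratio: $ratio"                         # > 1 means the fee pays for itself
```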

Enable Intelligent-Tiering on new uploads:

aws s3 cp my-file.txt s3://my-bucket/my-file.txt \
  --storage-class INTELLIGENT_TIERING

Set Intelligent-Tiering as the default for a bucket using a lifecycle rule:

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "MoveToIntelligentTiering",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [{
        "Days": 0,
        "StorageClass": "INTELLIGENT_TIERING"
      }]
    }]
  }'

4. Lifecycle Policies for Automatic Transitions

Lifecycle policies are the primary mechanism for S3 cost optimization. They transition objects between storage classes on a schedule and can expire (delete) objects automatically.

S3 enforces a waterfall model. You can only transition downward through the storage class hierarchy. You cannot move an object from Glacier back to Standard-IA with a lifecycle rule.

A practical lifecycle configuration for log data:

{
  "Rules": [
    {
      "ID": "LogRetention",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" },
        { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "Expiration": { "Days": 2555 }
    }
  ]
}

This moves logs to Standard-IA after 30 days, Glacier after 90, Deep Archive after a year, and deletes them after 7 years.
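As a rough sanity check, here is a shell sketch of the 7-year storage cost of a single 1 TB (1,000 GB) batch of logs under this policy versus leaving it in Standard the whole time. Each stage is approximated as 30-day months, and transition request charges are excluded:

```shell
# Approximate storage-only cost of 1 TB of logs over 7 years (84 months).
tiered=$(awk 'BEGIN {
  gb = 1000
  c  = gb * 0.023   * 1    # month 1: Standard
  c += gb * 0.0125  * 2    # months 2-3: Standard-IA
  c += gb * 0.0036  * 9    # months 4-12: Glacier Flexible
  c += gb * 0.00099 * 72   # years 2-7: Deep Archive
  printf "%.2f", c
}')
flat=$(awk 'BEGIN { printf "%.2f", 1000 * 0.023 * 84 }')
echo "tiered lifecycle: \$$tiered, flat Standard: \$$flat"
```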

Apply a lifecycle configuration:

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json

View existing lifecycle rules on a bucket:

aws s3api get-bucket-lifecycle-configuration --bucket my-bucket

Key constraints to know:

  • Objects must be in Standard for at least 30 days before transitioning to Standard-IA or One Zone-IA
  • Objects smaller than 128 KB are not transitioned (the transition request cost would exceed the storage savings)
  • Minimum storage duration charges apply. If you delete an object from Standard-IA after 15 days, you still pay for 30 days
  • Each transition incurs a request charge (visible in the pricing table above)

CostPatrol’s S3-O001 rule checks every bucket for lifecycle policies and flags buckets with no policies or policies missing key transitions for their access patterns.

5. Request Pricing: The Hidden Cost Multiplier

Storage class pricing gets all the attention, but request costs can dominate your bill for high-traffic buckets.

Consider this scenario: a bucket serves 100 million GET requests per month.

Storage Class      | Storage (1 TB) | GET Requests (100M) | Total Monthly
S3 Standard        | $23.00         | $40.00              | $63.00
S3 Standard-IA     | $12.50         | $100.00             | $112.50
S3 Glacier Instant | $4.00          | $1,000.00           | $1,004.00
Standard-IA costs 2.5x more per GET than Standard. Glacier Instant costs 25x more. For a high-traffic bucket, moving to a cheaper storage class can actually increase your total bill.

The rule of thumb: If an object is accessed more than once per month, keep it in Standard. Standard-IA only saves money when access is genuinely infrequent (roughly once every 30+ days).
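The arithmetic behind that rule of thumb, for a single 1 GB object (the much smaller per-request GET difference is ignored here):

```shell
# Standard costs $0.023/GB/mo with free retrieval; Standard-IA costs
# $0.0125/GB/mo plus $0.01/GB per retrieval. IA only wins while monthly
# full reads stay below the breakeven rate.
breakeven=$(awk 'BEGIN { printf "%.2f", (0.023 - 0.0125) / 0.01 }')
echo "Standard-IA wins below $breakeven full reads per month"
```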

Check request patterns for a bucket:

aws cloudwatch get-metric-statistics \
  --namespace AWS/S3 \
  --metric-name GetRequests \
  --dimensions Name=BucketName,Value=my-bucket \
               Name=FilterId,Value=EntireBucket \
  --start-time $(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --period 2592000 \
  --statistics Sum

Note: S3 request metrics are not enabled by default. Turn them on first with put-bucket-metrics-configuration; they are billed as CloudWatch custom metrics.

6. Clean Up Incomplete Multipart Uploads

This is free money. When a multipart upload starts but never completes, the uploaded parts remain in your bucket. You are paying Standard storage rates for data that serves no purpose. These orphaned parts do not show up in aws s3 ls output, making them invisible unless you specifically look.

List incomplete multipart uploads for a bucket:

aws s3api list-multipart-uploads --bucket my-bucket

Abort a specific incomplete upload:

aws s3api abort-multipart-upload \
  --bucket my-bucket \
  --key "path/to/object" \
  --upload-id "upload-id-from-list"

The permanent fix is a lifecycle rule that auto-cleans incomplete uploads:

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "AbortIncompleteMultipartUploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }]
  }'

Seven days is a safe default. Most legitimate multipart uploads complete within hours. This lifecycle action has zero cost to configure and zero early-delete charges.

Every bucket in your account should have this rule. No exceptions.

7. S3 Analytics for Data-Driven Storage Class Decisions

Guessing access patterns leads to bad decisions. S3 Storage Class Analysis tracks access frequency over time and tells you whether objects in Standard would benefit from a move to Standard-IA.

Enable storage class analysis on a bucket:

aws s3api put-bucket-analytics-configuration \
  --bucket my-bucket \
  --id "full-bucket-analysis" \
  --analytics-configuration '{
    "Id": "full-bucket-analysis",
    "StorageClassAnalysis": {
      "DataExport": {
        "OutputSchemaVersion": "V_1",
        "Destination": {
          "S3BucketDestination": {
            "Format": "CSV",
            "BucketAccountId": "123456789012",
            "Bucket": "arn:aws:s3:::my-analytics-bucket",
            "Prefix": "analytics/"
          }
        }
      }
    }
  }'

Key points about S3 Analytics:

  • Wait at least 30 days for meaningful results. AWS needs time to observe access patterns.
  • Analysis only recommends transitions from Standard to Standard-IA. It does not cover Glacier or other classes.
  • You can filter by prefix or tags to analyze specific datasets within a bucket.
  • Export reports land daily in CSV format. Parse them to identify cold data.

Check analysis configuration:

aws s3api list-bucket-analytics-configurations --bucket my-bucket

Start with your largest buckets. A 5 TB bucket where 80% of data is cold will save more than optimizing fifty 10 GB buckets.

8. Versioning Cost Implications

S3 versioning is critical for data protection, but it has real cost implications that are easy to overlook. Every overwrite creates a new version. Every delete creates a delete marker. All versions count toward your storage bill.

A common pattern: an application writes the same key thousands of times (think log aggregation or ETL state files). With versioning enabled, every write persists as a separate version. A 1 KB file written 10,000 times means 10,000 versions, each billed individually.
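To put a number on it, a sketch with a hypothetical 100 MB state file overwritten once a day (the file size and write cadence are illustrative, not from the scenario above):

```shell
# With versioning on and no noncurrent-version lifecycle rule, a daily
# overwrite of a 100 MB file accumulates ~365 billed versions per year.
gb_stored=$(awk 'BEGIN { printf "%.1f", 365 * 100 / 1024 }')    # GB of versions
monthly=$(awk -v g="$gb_stored" 'BEGIN { printf "%.2f", g * 0.023 }')  # Standard rates
echo "~${gb_stored} GB of versions, ~\$$monthly/month and growing"
```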

Check how many versions exist in a bucket:

# length() caps at --max-items; a result of 10000 means "at least 10,000"
aws s3api list-object-versions --bucket my-bucket \
  --query 'length(Versions)' --max-items 10000

Count delete markers:

aws s3api list-object-versions --bucket my-bucket \
  --query 'length(DeleteMarkers)' --max-items 10000

Add a lifecycle rule to expire old versions:

{
  "Rules": [
    {
      "ID": "ExpireOldVersions",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 30, "StorageClass": "STANDARD_IA" },
        { "NoncurrentDays": 90, "StorageClass": "GLACIER" }
      ],
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 365
      },
      "ExpiredObjectDeleteMarker": true
    }
  ]
}

This transitions old versions to cheaper storage and deletes them after a year. The ExpiredObjectDeleteMarker setting automatically cleans up delete markers when they are the only remaining version of an object.

The takeaway: If you have versioning enabled, you need a lifecycle policy for noncurrent versions. Without one, storage costs grow without bound.

9. Cross-Region Replication Cost Awareness

Cross-region replication (CRR) is essential for disaster recovery and compliance, but it doubles your storage costs and adds data transfer charges on top.

Costs involved in CRR:

  • Storage in the destination region (same rates as the destination region’s storage class)
  • Data transfer out from the source region ($0.02/GB for cross-region within the US)
  • PUT request charges in the destination region
  • S3 Replication Time Control (if enabled): additional charges for SLA-backed replication
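A rough shell sketch of what replicating 1 TB (1,000 GB) cross-region within the US costs, using the $0.02/GB transfer rate above and landing in Standard-IA at the destination. Per-object PUT charges depend on object count and are omitted:

```shell
# First-month cost components of CRR for 1 TB to a Standard-IA destination.
transfer=$(awk 'BEGIN { printf "%.2f", 1000 * 0.02 }')   # one-time data transfer out
dest=$(awk 'BEGIN { printf "%.2f", 1000 * 0.0125 }')     # recurring destination storage
echo "transfer: \$$transfer one-time, destination storage: \$$dest/month"
```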

Check replication configuration:

aws s3api get-bucket-replication --bucket my-bucket

Cost optimization strategies for CRR:

  • Replicate to a cheaper storage class in the destination. You can specify STANDARD_IA or GLACIER as the destination class in the replication rule.
  • Use prefix filters to replicate only critical data, not the entire bucket.
  • Consider Same-Region Replication (SRR) if your use case is cross-account access rather than geographic redundancy. SRR avoids the cross-region data transfer cost.
  • Review whether both copies need the same retention policy. The destination can have a more aggressive lifecycle policy.

Example: replicate to Standard-IA in the destination:

aws s3api put-bucket-replication --bucket my-source-bucket \
  --replication-configuration '{
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [{
      "Status": "Enabled",
      "Destination": {
        "Bucket": "arn:aws:s3:::my-destination-bucket",
        "StorageClass": "STANDARD_IA"
      },
      "Filter": { "Prefix": "" }
    }]
  }'

10. Bucket-Level Cost Analysis with S3 Storage Lens

S3 Storage Lens provides organization-wide visibility into storage usage and activity across all your accounts and buckets. It surfaces metrics like total storage by class, request counts, and bytes downloaded per bucket.

Create a Storage Lens configuration via CLI:

aws s3control put-storage-lens-configuration \
  --account-id 123456789012 \
  --config-id "cost-optimization-dashboard" \
  --storage-lens-configuration '{
    "Id": "cost-optimization-dashboard",
    "AccountLevel": {
      "BucketLevel": {
        "ActivityMetrics": { "IsEnabled": true }
      }
    },
    "IsEnabled": true
  }'

What to look for in Storage Lens:

  • Buckets with high storage but low request counts (candidates for IA or Glacier)
  • Buckets with a high ratio of noncurrent versions to current versions (versioning bloat)
  • Buckets with no lifecycle policies
  • Accounts or buckets where incomplete multipart uploads are accumulating

The free tier of Storage Lens provides 28 metrics across all your buckets. The advanced tier ($0.20 per million objects monitored) adds 35 more metrics including cost optimization recommendations, activity insights, and 15-month data retention.
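The advanced-tier fee is easy to estimate up front. A quick check for a hypothetical account with 50 million objects:

```shell
# Storage Lens advanced tier: $0.20 per million objects monitored per month.
objects_millions=50
cost=$(awk -v m="$objects_millions" 'BEGIN { printf "%.2f", m * 0.20 }')
echo "advanced tier: \$$cost/month"
```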

Export Storage Lens metrics for programmatic analysis:

aws s3control put-storage-lens-configuration \
  --account-id 123456789012 \
  --config-id "cost-export" \
  --storage-lens-configuration '{
    "Id": "cost-export",
    "AccountLevel": {
      "BucketLevel": {}
    },
    "DataExport": {
      "S3BucketDestination": {
        "Format": "CSV",
        "OutputSchemaVersion": "V_1",
        "AccountId": "123456789012",
        "Arn": "arn:aws:s3:::my-lens-export-bucket",
        "Prefix": "storage-lens/"
      }
    },
    "IsEnabled": true
  }'

S3 Storage Optimization Checklist

Run through this for every account:

  • Audit all buckets for storage class distribution (Section 2)
  • Enable S3 Storage Lens for organization-wide visibility (Section 10)
  • Add incomplete multipart upload cleanup to every bucket (Section 6)
  • Enable S3 Analytics on your largest Standard-class buckets (Section 7)
  • Wait 30 days, then review analytics recommendations
  • Create lifecycle policies for known access patterns (Section 4)
  • Evaluate Intelligent-Tiering for unpredictable workloads (Section 3)
  • Add noncurrent version expiration to all versioned buckets (Section 8)
  • Review cross-region replication destination storage classes (Section 9)
  • Check request costs before moving high-traffic buckets to IA (Section 5)
  • Set up monthly Storage Lens review cadence

Where CostPatrol Fits

CostPatrol scans your AWS accounts and runs these checks automatically. Rule S3-O001 flags buckets missing lifecycle policies, including the incomplete multipart upload cleanup that every bucket should have. Rule S3-O002 identifies buckets where the storage class distribution does not match the actual access patterns, with estimated monthly savings for each recommendation.

Instead of running these CLI commands manually across dozens of accounts and hundreds of buckets, CostPatrol does it on a schedule and surfaces the findings that actually matter. The checks in this guide are the same logic behind the detection rules.

See what CostPatrol finds in your AWS account

Free scan shows your total savings. Upgrade to Pro for full findings, fix commands, and daily Slack alerts.