
How to Reduce AWS CloudWatch Logs Costs: Ingestion, Storage, and Alternatives

CloudWatch Logs has a way of sneaking into your top 5 AWS costs. You deploy a few Lambda functions, spin up an ECS cluster, enable VPC Flow Logs, and suddenly you’re ingesting hundreds of gigabytes per month at $0.50/GB. That’s $50 for every 100 GB before storage even enters the picture.

The worst part: most of that data is never queried. It sits there, accumulating storage charges, because no one set a retention policy. This guide covers the highest-impact optimizations for CloudWatch Logs, each with CLI commands you can run today.

CloudWatch Logs Pricing Breakdown

Before optimizing, you need to understand where the money goes. CloudWatch Logs has three cost dimensions that matter.

Cost Component          Standard Class      Infrequent Access Class
Ingestion               $0.50/GB            $0.25/GB
Storage (archival)      $0.03/GB/month      $0.03/GB/month
Logs Insights queries   $0.005/GB scanned   $0.005/GB scanned

The ingestion cost hits you once, when the log data arrives. Storage charges are recurring, every month, for as long as the data exists. And Logs Insights charges apply every time you query, based on the total volume scanned, not the results returned.

For Lambda functions specifically, AWS introduced tiered ingestion pricing in May 2025. The first 10 TB per month is still $0.50/GB, dropping to $0.25/GB for 10-30 TB, $0.10/GB for 30-50 TB, and $0.05/GB above 50 TB. This tiering only applies to Lambda-generated logs, not to logs from EC2, ECS, or other sources.

The free tier covers 5 GB of combined ingestion, storage, and queries per month. Most production accounts blow through that in the first day.
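To see how the three dimensions combine, here is a rough sketch of the cost model in Node.js. It assumes the flat Standard-class rates above and ignores the Lambda ingestion tiering and free tier:

```javascript
// Rough monthly cost model for CloudWatch Logs (Standard class).
// Ingestion is charged once on arrival; storage recurs monthly on the
// total stored; Insights is charged per GB scanned by queries.
function cloudwatchLogsMonthlyCost({ ingestedGb, storedGb, scannedGb }) {
  const INGEST_PER_GB = 0.5;
  const STORAGE_PER_GB_MONTH = 0.03;
  const SCAN_PER_GB = 0.005;
  const ingestion = ingestedGb * INGEST_PER_GB;
  const storage = storedGb * STORAGE_PER_GB_MONTH;
  const queries = scannedGb * SCAN_PER_GB;
  return { ingestion, storage, queries, total: ingestion + storage + queries };
}

// 100 GB ingested = $50, 300 GB stored = $9, 500 GB scanned = $2.50
console.log(cloudwatchLogsMonthlyCost({ ingestedGb: 100, storedGb: 300, scannedGb: 500 }));
```

Note how ingestion dominates: a single month's ingestion costs more than storing that data for a year.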

1. Find Your Most Expensive Log Groups

You can’t optimize what you can’t see. Start by identifying which log groups consume the most storage.

List log groups sorted by stored bytes (descending):

aws logs describe-log-groups \
  --query 'logGroups | sort_by(@, &storedBytes) | reverse(@) | [0:20].{Name: logGroupName, StoredBytes: storedBytes, RetentionDays: retentionInDays || `Never Expire`}' \
  --output table

This gives you the top 20 log groups by storage. Pay attention to any group where RetentionDays shows “Never Expire.” That’s the default, and it means logs accumulate forever.

Check ingestion volume over the last 7 days for a specific log group:

# date -d is GNU syntax; on macOS/BSD, use: date -u -v-7d +%Y-%m-%dT%H:%M:%S
aws cloudwatch get-metric-statistics \
  --namespace AWS/Logs \
  --metric-name IncomingBytes \
  --dimensions Name=LogGroupName,Value=/aws/lambda/my-function \
  --start-time $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --period 604800 \
  --statistics Sum \
  --query 'Datapoints[0].Sum'

For a broader view, open Cost Explorer and group by API Operation. The PutLogEvents operation corresponds to ingestion costs. HourlyStorageMetering is your archival charges.

2. Set Retention Policies (CWL-O001)

This is the single highest-impact change you can make. Every log group without a retention policy stores data indefinitely. A log group ingesting 10 GB/month with no retention will accumulate 120 GB/year in storage alone, costing $3.60/month by year-end, and growing every month after that.
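The math behind that example, sketched in Node.js (10 GB/month ingested, $0.03/GB-month storage, no retention policy):

```javascript
// Storage cost of an unbounded log group: each month's ingestion adds to
// the stored total, and the entire total is billed again every month.
const MONTHLY_INGEST_GB = 10;
const STORAGE_PER_GB_MONTH = 0.03;

let storedGb = 0;
let cumulativeCost = 0;
for (let month = 1; month <= 12; month++) {
  storedGb += MONTHLY_INGEST_GB;
  cumulativeCost += storedGb * STORAGE_PER_GB_MONTH;
}
console.log(storedGb);                                      // 120 GB stored by year-end
console.log((storedGb * STORAGE_PER_GB_MONTH).toFixed(2));  // $3.60/month and climbing
console.log(cumulativeCost.toFixed(2));                     // $23.40 paid in year one
```

The monthly charge grows linearly forever, which is why "Never Expire" groups quietly become the bulk of the storage bill.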

Find all log groups with no retention policy:

aws logs describe-log-groups \
  --query 'logGroups[?!retentionInDays].{Name: logGroupName, StoredBytes: storedBytes}' \
  --output table

Set a 30-day retention policy on a log group:

aws logs put-retention-policy \
  --log-group-name /aws/lambda/my-function \
  --retention-in-days 30

Set retention across all log groups that currently have none:

for lg in $(aws logs describe-log-groups \
  --query 'logGroups[?!retentionInDays].logGroupName' --output text); do
  aws logs put-retention-policy \
    --log-group-name "$lg" \
    --retention-in-days 30
  echo "Set 30-day retention: $lg"
done

Here’s a practical framework for choosing retention periods:

Environment                     Recommended Retention   Rationale
Development                     3 days                  Only needed during active debugging
Staging                         7 days                  Enough for release validation
Production (application)        30 days                 Covers most incident investigations
Production (audit/compliance)   90-365 days             Regulatory requirements
Security logs                   365 days                Compliance and forensics

If you need logs beyond these windows for compliance, export them to S3 first (covered in section 5). S3 storage costs $0.023/GB/month for Standard, $0.004/GB/month for Glacier Instant Retrieval, or $0.00099/GB/month for Glacier Deep Archive. All of those are dramatically cheaper than the $0.03/GB/month CloudWatch charges.

CostPatrol’s CWL-O001 rule automatically identifies log groups with excessive retention (exceeding 90 days, or with no retention policy set at all). It estimates the savings from reducing retention to appropriate levels based on current stored bytes.

3. Reduce Log Verbosity at the Source

Ingestion at $0.50/GB is the biggest cost driver. The most effective way to reduce it is to stop logging data you don’t need.

Common sources of log noise:

  • Lambda functions logging full request/response payloads in production
  • DEBUG or TRACE log levels left on after a troubleshooting session
  • Health check endpoints logging every invocation (ALB, API Gateway)
  • SDK retry logs and HTTP client debug output
  • Serialized objects dumped at INFO level

Check what’s actually in your logs before you optimize:

aws logs filter-log-events \
  --log-group-name /aws/lambda/my-function \
  --start-time $(date -u -d '1 hour ago' +%s)000 \
  --limit 50 \
  --query 'events[].message' --output text

Use Log Levels Properly in Lambda

In Node.js Lambda functions, use the JSON structured logging and log-level filtering that AWS has provided since November 2023:

// With LogFormat=JSON and ApplicationLogLevel=WARN configured:
console.info('Processing order', { orderId }); // Dropped by the runtime
console.warn('Retry attempt', { orderId, attempt: 3 }); // Logged
console.error('Payment failed', { orderId, error }); // Logged

Set the log level via CLI:

aws lambda update-function-configuration \
  --function-name my-function \
  --logging-config LogFormat=JSON,ApplicationLogLevel=WARN

For Python Lambda functions, the same --logging-config option applies. With the JSON log format, the runtime filters records emitted through Python’s standard logging module at or below the configured level:

aws lambda update-function-configuration \
  --function-name my-python-function \
  --logging-config LogFormat=JSON,ApplicationLogLevel=WARN

Filter Health Check Noise in ECS

If you’re running ECS tasks behind an ALB, health check logs can account for 30-50% of total log volume. A health check every 10 seconds across 5 targets generates over 43,000 log entries per day, all saying the same thing.
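The arithmetic behind that estimate, using the interval and target count from the example:

```javascript
// One log entry per health check per target, around the clock.
const intervalSeconds = 10;
const targets = 5;
const entriesPerDay = (86400 / intervalSeconds) * targets;
console.log(entriesPerDay); // 43200 identical entries per day
```

Multiply that across every service behind a load balancer and the share of total volume adds up quickly, especially for otherwise quiet services.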

Filter these at the application level. In Express.js:

app.use((req, res, next) => {
  if (req.path === '/health') {
    return res.status(200).send('ok');
  }
  next();
});

Place this BEFORE your logging middleware so health checks never hit the logger.

4. Use the Infrequent Access Log Class

CloudWatch offers two log classes. Standard costs $0.50/GB for ingestion. Infrequent Access costs $0.25/GB, a 50% reduction.

The tradeoff: Infrequent Access logs don’t support live tail, metric filters, or subscription filters. You can still query them with Logs Insights and view them in the console.

Create a new log group with Infrequent Access class:

aws logs create-log-group \
  --log-group-name /aws/lambda/my-batch-job \
  --log-group-class INFREQUENT_ACCESS

You can’t change the class of an existing log group. You’d need to create a new one and redirect your application’s logging. Good candidates for Infrequent Access:

  • Batch processing jobs
  • Scheduled cron functions
  • Development and staging environments
  • Low-priority background workers

For Lambda functions, you can specify the log group directly:

aws lambda update-function-configuration \
  --function-name my-batch-function \
  --logging-config LogGroup=/aws/lambda/my-batch-job

5. Route Logs to S3 via Subscription Filters

For logs you need to retain long-term but rarely query, S3 is dramatically cheaper than CloudWatch storage. Here’s the cost comparison for storing 1 TB of logs for 12 months:

Storage Option                 Monthly Cost   Annual Cost
CloudWatch Logs                $30.00/TB      $360.00/TB
S3 Standard                    $23.00/TB      $276.00/TB
S3 Standard-IA                 $12.50/TB      $150.00/TB
S3 Glacier Instant Retrieval   $4.00/TB       $48.00/TB
S3 Glacier Deep Archive        $0.99/TB       $11.88/TB

That’s a 97% cost reduction if you move from CloudWatch to Glacier Deep Archive. Even S3 Standard saves you 23%.

The standard pipeline routes CloudWatch Logs through a subscription filter to Amazon Data Firehose, which delivers to S3. Data sent from CloudWatch Logs to Firehose is automatically compressed with gzip level 6, so you get additional savings on transfer and storage.

Create a subscription filter to send logs to a Firehose delivery stream:

aws logs put-subscription-filter \
  --log-group-name /aws/lambda/my-function \
  --filter-name "to-s3" \
  --filter-pattern "" \
  --destination-arn arn:aws:firehose:us-east-1:123456789012:deliverystream/logs-to-s3 \
  --role-arn arn:aws:iam::123456789012:role/CWLtoFirehoseRole

An empty filter pattern ("") forwards all log events. You can also filter to forward only specific patterns, like error logs:

aws logs put-subscription-filter \
  --log-group-name /aws/lambda/my-function \
  --filter-name "errors-to-s3" \
  --filter-pattern "?ERROR ?Error ?error" \
  --destination-arn arn:aws:firehose:us-east-1:123456789012:deliverystream/logs-to-s3 \
  --role-arn arn:aws:iam::123456789012:role/CWLtoFirehoseRole

Then combine this with a short CloudWatch retention (3-7 days) and S3 lifecycle rules to transition logs through storage tiers automatically. This gives you the best of both worlds: recent logs queryable in CloudWatch, long-term logs in cheap storage.

Note that the subscription filter does not reduce your CloudWatch ingestion cost. You still pay $0.50/GB (or $0.25/GB for Infrequent Access) when the logs arrive. The savings come from storage: you can set a short retention in CloudWatch because S3 holds the long-term copy.

6. CloudWatch Logs Insights Cost Considerations

Logs Insights charges $0.005 per GB of data scanned. That sounds small, but it scales with the total volume of logs in the time range you query, not the volume of results.

If your log group contains 500 GB of data from the last 30 days and you run a query over that window, you pay $2.50. It doesn’t matter if your query returns 10 lines or 10,000 lines. You’re charged for what gets scanned.
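To put numbers on how the time range drives the bill, a quick sketch assuming the 500 GB is spread evenly across the 30 days:

```javascript
// Logs Insights bills per GB scanned, so query cost is proportional to
// the time range covered, not the number of rows returned.
const SCAN_PER_GB = 0.005;
const totalGb = 500;
const retentionDays = 30;
const gbPerHour = totalGb / (retentionDays * 24);

console.log((totalGb * SCAN_PER_GB).toFixed(2));        // full 30-day range: 2.50
console.log((gbPerHour * 24 * SCAN_PER_GB).toFixed(2)); // last 24 h only
console.log((gbPerHour * SCAN_PER_GB).toFixed(4));      // last hour only
```

Narrowing from the full range to the last hour cuts the scan cost by roughly 720x in this model.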

Strategies to reduce Logs Insights costs:

  1. Narrow the time range. Query the last 1 hour instead of the last 24 hours whenever possible. This directly reduces the data scanned.

  2. Use filter patterns before Insights. If you’re looking for specific error messages, filter-log-events with a pattern is free (no Insights charges).

aws logs filter-log-events \
  --log-group-name /aws/lambda/my-function \
  --filter-pattern "OutOfMemoryError" \
  --start-time $(date -u -d '1 hour ago' +%s)000 \
  --query 'events[].message' --output text

  3. Avoid querying multiple large log groups simultaneously. Each log group adds its volume to the scan total.

  4. Set shorter retention. Less stored data means less data scanned per query. A 7-day retention versus a 365-day retention means roughly 50x less data scanned for full-range queries.

  5. Consider Contributor Insights for repeated analytics. If you’re running the same Insights query repeatedly (e.g., “top 10 error types”), a CloudWatch Contributor Insights rule at $0.50 per rule per month (plus $0.02 per million matched log events) is far cheaper than running the same Insights query daily.

7. Embedded Metric Format vs Custom Metrics via Logs

A common pattern is to extract metrics from logs using metric filters. For example, counting error occurrences or measuring latency by parsing log lines. This works, but there’s a better approach.

CloudWatch Embedded Metric Format (EMF) lets you embed metric data directly in your log output. CloudWatch automatically extracts the metrics without needing a metric filter. The key cost advantage: EMF is asynchronous. It doesn’t make PutMetricData API calls, which means no API call latency added to your function execution and no risk of throttling.

PutMetricData costs:

  • $0.01 per 1,000 API requests
  • $0.30 per custom metric per month

EMF costs:

  • Only the log ingestion cost ($0.50/GB for Standard)
  • $0.30 per custom metric per month (same as PutMetricData)

The real savings come from reduced function execution time. Each PutMetricData call adds 20-50ms of latency. In a Lambda function that runs thousands of times per hour, that execution time adds up fast. One organization reported a 65% reduction in CloudWatch metrics costs after switching from PutMetricData to EMF.

Example EMF output in a Lambda function (Node.js):

const metrics = {
  _aws: {
    Timestamp: Date.now(),
    CloudWatchMetrics: [
      {
        Namespace: 'MyApp',
        Dimensions: [['ServiceName']],
        Metrics: [{ Name: 'OrderProcessingTime', Unit: 'Milliseconds' }],
      },
    ],
  },
  ServiceName: 'order-service',
  OrderProcessingTime: 142,
};
console.log(JSON.stringify(metrics));

CloudWatch extracts OrderProcessingTime as a custom metric automatically. No additional API calls, no metric filters to maintain.
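If you emit metrics from several places, a tiny helper keeps the envelope consistent. This is a hand-rolled sketch assuming single-dimension metrics; AWS also publishes the aws-embedded-metrics client library if you would rather not maintain this yourself:

```javascript
// Minimal EMF emitter sketch: wraps one metric value in the EMF envelope
// and writes it to stdout, where the Lambda runtime ships it to CloudWatch.
function emitMetric(namespace, serviceName, name, value, unit = 'Count') {
  const record = {
    _aws: {
      Timestamp: Date.now(),
      CloudWatchMetrics: [
        {
          Namespace: namespace,
          Dimensions: [['ServiceName']],
          Metrics: [{ Name: name, Unit: unit }],
        },
      ],
    },
    ServiceName: serviceName,
    [name]: value, // the metric value lives at the top level, keyed by name
  };
  console.log(JSON.stringify(record));
  return record;
}

emitMetric('MyApp', 'order-service', 'OrdersProcessed', 1);
```

Because it is just a console.log, there is no SDK dependency, no network call, and nothing to throttle.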

8. Use Infrequent Access for High-Volume Log Groups (CWL-O002)

Some log groups are high volume by nature. VPC Flow Logs, CloudTrail data events, and API Gateway access logs can each generate tens or hundreds of gigabytes per month.

Identify your highest-ingestion log groups over the last 30 days:

for lg in $(aws logs describe-log-groups \
  --query 'logGroups[].logGroupName' --output text); do
  bytes=$(aws cloudwatch get-metric-statistics \
    --namespace AWS/Logs \
    --metric-name IncomingBytes \
    --dimensions Name=LogGroupName,Value="$lg" \
    --start-time $(date -u -d '30 days ago' +%Y-%m-%dT%H:%M:%S) \
    --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
    --period 2592000 \
    --statistics Sum \
    --query 'Datapoints[0].Sum || `0`' --output text)
  gb=$(echo "scale=2; $bytes / 1073741824" | bc)
  [ "$(echo "$gb > 1" | bc)" -eq 1 ] && echo "$gb GB - $lg"
done | sort -rn

For each high-volume group, ask:

  • Do we need real-time metric filters on this data? If not, Infrequent Access saves 50% on ingestion.
  • Do we need this data in CloudWatch at all? VPC Flow Logs can go directly to S3.
  • Can we reduce the volume? API Gateway access logs can be filtered to only log errors.

Send VPC Flow Logs directly to S3 instead of CloudWatch:

aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-12345678 \
  --traffic-type ALL \
  --log-destination-type s3 \
  --log-destination arn:aws:s3:::my-flowlogs-bucket

This skips CloudWatch ingestion entirely, saving the full $0.50/GB. You can still query the logs using Athena against S3, often at lower cost than Logs Insights for large datasets.

CostPatrol’s CWL-O002 rule flags high-volume log groups and identifies cases where logs are being dual-written to both CloudWatch and an external APM tool, a pattern that doubles your ingestion costs for no added value.

9. When to Use Alternatives

CloudWatch Logs is the default for AWS workloads, but it’s not always the cheapest option, especially at scale.

Cost comparison at 500 GB/month ingestion:

Solution                           Monthly Ingestion     Monthly Storage (1 TB retained)   Total
CloudWatch Standard                $250                  $30                               $280
CloudWatch Infrequent Access       $125                  $30                               $155
Grafana Loki (self-hosted on S3)   ~$0 (compute only)    ~$23 (S3)                         ~$50-80
Grafana Cloud Pro                  $250                  Included                          ~$270
Datadog Log Management             ~$375 (at $0.75/GB)   Included                          ~$375

Consider Grafana Loki when:

  • You’re ingesting over 100 GB/day
  • You already run Grafana for dashboards
  • Your team is comfortable operating Kubernetes workloads
  • You need multi-cloud log aggregation

Loki stores log data in S3 (or GCS, or Azure Blob) and only indexes labels, not full text. This makes it 75-90% cheaper at scale. No per-query charges. The tradeoff is operational complexity: you’re running and maintaining the infrastructure yourself.

Consider staying with CloudWatch when:

  • Your log volume is under 50 GB/month
  • You’re all-in on AWS with no multi-cloud requirements
  • You need zero operational overhead
  • You rely heavily on metric filters and alarms from log data
  • Compliance requires a fully managed, AWS-native solution

The hybrid approach works well for many teams. Keep CloudWatch for AWS service logs (Lambda, API Gateway, ECS) where integration is automatic. Route high-volume application logs to Loki or S3 directly. This captures the cost savings where they matter most without giving up the convenience of CloudWatch for AWS-native services.

Optimization Checklist

Run through this list for every AWS account. Each item has a direct impact on your CloudWatch Logs bill.

  • Audit all log groups for missing retention policies (CWL-O001)
  • Set retention to 30 days or less for non-compliance log groups
  • Identify the top 10 log groups by stored bytes and ingestion volume
  • Switch development/staging log groups to Infrequent Access class
  • Set Lambda log levels to WARN or ERROR in production
  • Filter health check logs at the application level
  • Set up S3 export via Firehose for logs requiring long-term retention
  • Add S3 lifecycle rules to transition exported logs to Glacier
  • Send VPC Flow Logs directly to S3 instead of CloudWatch
  • Replace PutMetricData calls with Embedded Metric Format
  • Narrow Logs Insights query time ranges to minimize scan costs
  • Identify and eliminate dual-write patterns (CW + external APM) (CWL-O002)
  • Evaluate Grafana Loki for workloads exceeding 100 GB/day

Most teams can cut their CloudWatch Logs bill by 40-60% in a single afternoon by setting retention policies and switching non-critical log groups to Infrequent Access. The more advanced optimizations (S3 routing, EMF, and alternative platforms) take more effort but deliver even larger savings at scale.

CostPatrol scans for the most common CloudWatch Logs cost issues automatically. Rules CWL-O001 and CWL-O002 flag excessive retention and high-volume log groups across all your AWS accounts, with estimated savings for each finding.

See what CostPatrol finds in your AWS account

Free scan shows your total savings. Upgrade to Pro for full findings, fix commands, and daily Slack alerts.