DynamoDB On-Demand vs Provisioned: Which Pricing Mode Saves More?
Picking the wrong DynamoDB capacity mode can quietly double your database bill. And since the November 2024 price cut slashed on-demand costs by 50%, the old rules of thumb no longer apply.
This guide breaks down exactly when each mode saves money, with real pricing numbers, break-even calculations, and CLI commands to analyze your own tables. These are the same patterns that CostPatrol’s DDB detection rules flag automatically.
Quick Pricing Comparison
All prices are for US East (N. Virginia). Other regions vary by 10-30%.
On-Demand Mode
You pay per request. No capacity planning required.
| Operation | Price |
|---|---|
| Write Request Unit (WRU) | $0.625 per million |
| Read Request Unit (RRU) | $0.125 per million |
One WRU handles a single write up to 1 KB. One RRU handles a single strongly consistent read up to 4 KB; an eventually consistent read consumes half an RRU, so one RRU covers two of them.
Provisioned Mode
You pay per hour for the capacity you allocate, whether you use it or not.
| Operation | Price per Hour | Monthly (730 hours) |
|---|---|---|
| 1 Write Capacity Unit (WCU) | $0.00065 | $0.4745 |
| 1 Read Capacity Unit (RCU) | $0.00013 | $0.0949 |
One WCU handles one write per second. One RCU handles one strongly consistent read per second (or two eventually consistent reads).
Check your current table’s billing mode:
aws dynamodb describe-table \
--table-name my-table \
--query 'Table.BillingModeSummary.BillingMode' \
--output text
When On-Demand Wins
On-demand is the right choice in these scenarios:
1. Unpredictable or spiky traffic. If your peak-to-average ratio exceeds 4x, on-demand is almost certainly cheaper. You would need to over-provision so much headroom with provisioned mode that you’d pay for idle capacity most of the time.
2. New tables where you don’t know the traffic pattern yet. Start with on-demand. Collect two weeks of CloudWatch metrics. Then decide whether to switch.
3. Low-volume tables. If a table handles fewer than 1,000 requests per day, the cost difference between modes is negligible, often under $1/month. On-demand eliminates the risk of throttling with zero configuration.
4. Development and staging environments. These tables see sporadic traffic during working hours and nothing at night. On-demand means you pay exactly $0 during idle periods.
5. Event-driven workloads. Batch jobs, cron-triggered ingestion, or webhook handlers that spike for minutes then go silent. Auto-scaling can’t react fast enough for sub-minute bursts.
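To put numbers on point 5, consider a hypothetical nightly batch job writing one million items in a ten-minute burst (illustrative values, not from any real workload):

```python
# Nightly burst: 1M writes in 600 seconds, idle the rest of the day
writes_per_month = 1_000_000 * 30
on_demand = writes_per_month * 0.625 / 1e6      # pay only for the requests

# Absorbing 1M writes in 600s needs ~1,667 WCU held 24/7 in provisioned mode
wcu_needed = 1_000_000 / 600
provisioned = wcu_needed * 0.4745

print(f"on-demand ${on_demand:.2f}/mo  provisioned ${provisioned:,.0f}/mo")
```

Auto-scaling does not rescue the provisioned case here: the ten-minute burst is over before scaling reacts.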
List all on-demand tables in your account:
for table in $(aws dynamodb list-tables --query 'TableNames[]' --output text); do
mode=$(aws dynamodb describe-table \
--table-name "$table" \
--query 'Table.BillingModeSummary.BillingMode' \
--output text 2>/dev/null)
[ "$mode" = "PAY_PER_REQUEST" ] && echo "ON-DEMAND: $table"
done
When Provisioned Wins
Provisioned capacity is cheaper when your traffic is steady and predictable.
1. Sustained throughput above the break-even point. If a table consistently uses more than roughly 29% of its provisioned capacity, provisioned mode costs less than on-demand (see the break-even math below).
2. Steady traffic with predictable patterns. E-commerce catalog tables, user profile stores, session tables for applications with consistent daily active users. These workloads have a tight peak-to-average ratio, usually under 2x.
3. High-volume tables. At scale, the savings compound. A table doing 10,000 writes per second continuously costs roughly $4,745/month on provisioned vs $16,200/month on on-demand. That is a 3.4x difference.
4. Tables eligible for reserved capacity. If you commit to 1-year or 3-year terms, provisioned pricing drops by another 54-77%. This stacks with the already lower per-unit cost.
Check provisioned utilization for a table (the date flags below assume GNU date; on macOS, use date -u -v-7d):
aws cloudwatch get-metric-statistics \
--namespace AWS/DynamoDB \
--metric-name ConsumedWriteCapacityUnits \
--dimensions Name=TableName,Value=my-table \
--start-time $(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 3600 \
--statistics Average \
--query 'Datapoints | sort_by(@, &Timestamp) | [-24:].[Timestamp,Average]' \
--output table
CostPatrol’s DDB-002 rule automatically compares your consumed vs provisioned capacity and flags tables where switching modes would save money.
Break-Even Calculation
Here’s the math that determines which mode is cheaper for your workload.
Writes
- On-demand: $0.625 per 1 million writes = $0.000000625 per write
- Provisioned: 1 WCU = 1 write/second = 2,592,000 writes/month (at 100% utilization), costs $0.4745/month
- Cost per write at 100% utilization: $0.4745 / 2,592,000 = $0.000000183
On-demand writes cost 3.4x more than provisioned writes at full utilization.
Reads
- On-demand: $0.125 per 1 million RRUs. A strongly consistent read costs 1 RRU ($0.000000125); an eventually consistent read costs 0.5 RRU ($0.0000000625)
- Provisioned: 1 RCU = 1 strongly consistent read/second = 2,592,000 reads/month (at 100% utilization), costs $0.0949/month
- Cost per strongly consistent read at 100% utilization: $0.0949 / 2,592,000 = $0.0000000366
On-demand reads also cost 3.4x more than provisioned reads at full utilization. The ratio is the same for eventually consistent reads: both modes halve the charge, so the factor of two cancels.
The Break-Even Utilization
The break-even point is where on-demand cost equals provisioned cost. Since provisioned charges you for allocated capacity regardless of usage, you need to calculate what utilization percentage makes the two modes equal.
For both reads and writes, break-even lands at roughly 29% utilization (1/3.4).
Translation: If your provisioned table averages above about 29% utilization across the month, provisioned mode is cheaper. Below that, on-demand wins.
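The ratios can be rederived from the unit prices in the comparison tables, which is a useful sanity check whenever AWS adjusts pricing:

```python
# US East prices from the comparison tables above
OD_WRITE = 0.625 / 1e6        # $ per write request unit
OD_READ = 0.125 / 1e6         # $ per read request unit (strongly consistent)
WCU_MONTH = 0.00065 * 730     # $ per WCU for a 730-hour month
RCU_MONTH = 0.00013 * 730     # $ per RCU for a 730-hour month
SECONDS = 2_592_000           # 30-day month

prov_write = WCU_MONTH / SECONDS    # 1 WCU = 1 write/s at full utilization
prov_read = RCU_MONTH / SECONDS     # 1 RCU = 1 strongly consistent read/s

ratio_write = OD_WRITE / prov_write
ratio_read = OD_READ / prov_read
print(f"writes: {ratio_write:.1f}x, break-even {100 / ratio_write:.0f}%")
print(f"reads:  {ratio_read:.1f}x, break-even {100 / ratio_read:.0f}%")
```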
Practical Example
A table provisioned at 100 WCU and 500 RCU:
| Scenario | Provisioned Cost/Month | On-Demand Equivalent |
|---|---|---|
| 100% utilized (100 WCU, 500 RCU sustained) | $47.45 + $47.45 = $94.90 | $162.00 + $162.00 = $324.00 |
| 50% utilized (avg 50 WCU, 250 RCU) | $94.90 (still pay full) | $81.00 + $81.00 = $162.00 |
| 30% utilized (avg 30 WCU, 150 RCU) | $94.90 (still pay full) | $48.60 + $48.60 = $97.20 |
| 10% utilized (avg 10 WCU, 50 RCU) | $94.90 (still pay full) | $16.20 + $16.20 = $32.40 |
At roughly 29% utilization the costs break even. Below that, you are overpaying with provisioned mode.
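This comparison generalizes to any capacity mix. A sketch that prices both modes at a given average utilization, treating consumed capacity units as mapping 1:1 to request units:

```python
SECONDS = 2_592_000  # 30-day month

def provisioned_monthly(wcu: int, rcu: int) -> float:
    # Pay for allocated capacity regardless of usage
    return wcu * 0.4745 + rcu * 0.0949

def on_demand_monthly(avg_wcu: float, avg_rcu: float) -> float:
    # Pay per request; consumed capacity units map 1:1 to request units
    return (avg_wcu * SECONDS * 0.625 + avg_rcu * SECONDS * 0.125) / 1e6

prov = provisioned_monthly(100, 500)
for util in (1.0, 0.5, 0.3, 0.1):
    od = on_demand_monthly(100 * util, 500 * util)
    winner = "provisioned" if prov < od else "on-demand"
    print(f"{util:>4.0%}: provisioned ${prov:.2f} vs on-demand ${od:.2f} -> {winner}")
```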
Auto-Scaling Configuration and Pitfalls
Auto-scaling is the middle ground. You get provisioned pricing with automatic capacity adjustments. But it has real limitations that catch teams off guard.
Setting Up Auto-Scaling
# Register the table as a scalable target
aws application-autoscaling register-scalable-target \
--service-namespace dynamodb \
--resource-id "table/my-table" \
--scalable-dimension "dynamodb:table:WriteCapacityUnits" \
--min-capacity 5 \
--max-capacity 1000
# Create the scaling policy
aws application-autoscaling put-scaling-policy \
--service-namespace dynamodb \
--resource-id "table/my-table" \
--scalable-dimension "dynamodb:table:WriteCapacityUnits" \
--policy-name "my-table-write-scaling" \
--policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration '{
"TargetValue": 70.0,
"PredefinedMetricSpecification": {
"PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
},
"ScaleOutCooldown": 60,
"ScaleInCooldown": 60
}'
Repeat for reads by replacing WriteCapacityUnits with ReadCapacityUnits and the metric type with DynamoDBReadCapacityUtilization.
Target Utilization
Set your target between 60% and 70%. Lower values waste capacity. Higher values risk throttling during scaling lag.
Check current auto-scaling policies:
aws application-autoscaling describe-scaling-policies \
--service-namespace dynamodb \
--resource-id "table/my-table" \
--query 'ScalingPolicies[].{Dimension:ScalableDimension,Target:TargetTrackingScalingPolicyConfiguration.TargetValue}' \
--output table
The Pitfalls
1. Scaling lag. Auto-scaling reacts to CloudWatch alarms that trigger after consumed capacity breaches the target for two consecutive minutes. Then the scaling action itself takes additional time. Total delay: 2-5 minutes. If your traffic spikes happen faster than that, you will get throttled.
2. Scale-down limits. DynamoDB allows only 4 capacity decreases per day (plus one additional decrease per hour if none occurred in the previous hour). If your traffic has multiple daily dips, auto-scaling may not be able to reduce capacity fast enough to match.
3. GSIs scale independently. Forgetting to configure auto-scaling on Global Secondary Indexes is one of the most common DynamoDB mistakes. A GSI without auto-scaling will throttle writes to the base table when it falls behind.
4. Burst capacity masks the problem. DynamoDB saves unused capacity for up to 5 minutes of burst. This can hide under-provisioning during development. Then in production, sustained traffic depletes the burst buffer and throttling starts.
5. Minimum capacity floor. If you set the minimum too low, the first request after a quiet period can get throttled while scaling kicks in. Set your minimum to your baseline traffic level, not zero.
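Pitfall 4 can be made concrete with a token-bucket sketch. This is a simplified model of burst capacity (DynamoDB does not publish the exact internal algorithm), banking up to 300 seconds of unused throughput:

```python
def first_throttle(provisioned: int, demand: int, seconds: int) -> int:
    """Return the first second at which throttling starts, or -1 if it never does."""
    bank = provisioned * 300  # start with a full 5-minute burst bank
    for t in range(seconds):
        bank += provisioned - demand          # surplus accrues, deficit drains
        bank = min(bank, provisioned * 300)   # capped at 300 seconds' worth
        if bank < 0:
            return t
    return -1

# 100 WCU provisioned, sustained 150 writes/s: the 50-unit deficit drains
# the 30,000-unit bank in 600 seconds, then throttling begins
print(first_throttle(100, 150, 1000))   # 600
print(first_throttle(100, 100, 1000))   # -1 (never throttles)
```

This is why a load test shorter than ten minutes can pass cleanly while the same traffic level throttles in production.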
Check if auto-scaling is enabled on all your tables:
for table in $(aws dynamodb list-tables --query 'TableNames[]' --output text); do
mode=$(aws dynamodb describe-table --table-name "$table" \
--query 'Table.BillingModeSummary.BillingMode' --output text 2>/dev/null)
if [ "$mode" = "PROVISIONED" ]; then
policies=$(aws application-autoscaling describe-scaling-policies \
--service-namespace dynamodb \
--resource-id "table/$table" \
--query 'ScalingPolicies | length(@)' --output text 2>/dev/null)
[ "$policies" = "0" ] && echo "NO AUTO-SCALING: $table"
fi
done
CostPatrol’s DDB-001 rule detects provisioned tables with zero consumed capacity over 30 days. Tables with auto-scaling but near-zero traffic are prime candidates for switching to on-demand.
Reserved Capacity for Provisioned Tables
If you have committed to provisioned mode for a stable, high-volume table, reserved capacity is the next level of savings.
| Term | Savings vs On-Demand Provisioned |
|---|---|
| 1-year partial upfront | ~54% discount |
| 3-year partial upfront | ~77% discount |
When Reserved Capacity Makes Sense
Reserved capacity requires a minimum purchase of 100 capacity units. It applies only to the DynamoDB Standard table class in a single region.
Good candidates:
- Tables with steady traffic that won’t decrease over the commitment term
- Tables already running provisioned mode with consistently high utilization
- Core application tables (user profiles, orders, sessions) that will exist for years
Bad candidates:
- Tables that might be migrated, consolidated, or deleted
- Tables with declining traffic trends
- Tables in regions you might stop using
Cost Example with Reserved Capacity
A table with 1,000 WCU sustained:
| Pricing Model | Monthly Cost | Annual Cost |
|---|---|---|
| On-demand (at 100% sustained) | $1,620 | $19,440 |
| Provisioned (standard) | $474.50 | $5,694 |
| Provisioned + 1-year reserved | ~$218 | ~$2,619 |
| Provisioned + 3-year reserved | ~$109 | ~$1,310 |
The difference between on-demand and 3-year reserved is nearly 15x. For large-scale, stable workloads, this is significant.
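The monthly figures in the table follow directly from the discount rates. A sketch using the approximate discounts quoted above (verify current reserved pricing before committing):

```python
WCU_MONTH = 0.4745          # $ per WCU-month, US East
SECONDS = 2_592_000         # 30-day month
wcu = 1_000

on_demand = wcu * SECONDS * 0.625 / 1e6     # 100% sustained
standard = wcu * WCU_MONTH
reserved_1yr = standard * (1 - 0.54)        # ~54% discount, approximate
reserved_3yr = standard * (1 - 0.77)        # ~77% discount, approximate

print(f"on-demand    ${on_demand:,.0f}/mo")
print(f"provisioned  ${standard:,.2f}/mo")
print(f"1yr reserved ~${reserved_1yr:,.0f}/mo")
print(f"3yr reserved ~${reserved_3yr:,.0f}/mo")
```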
View your existing reserved capacity:
aws dynamodb describe-reserved-capacity \
--query 'ReservedCapacity[].{Name:ReservedCapacityOfferingId,Type:CapacityUnits,Remaining:RemainingTotalReservedCapacityUnits}' \
--output table
Analyzing Your Current Table Usage Patterns
Before switching modes, collect data. Two weeks of CloudWatch metrics gives you enough signal to make a decision.
Step 1: Get Consumed vs Provisioned Capacity
TABLE="my-table"
START=$(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%S)  # GNU date; on macOS: date -u -v-14d +%Y-%m-%dT%H:%M:%S
END=$(date -u +%Y-%m-%dT%H:%M:%S)
echo "=== Write Capacity ==="
aws cloudwatch get-metric-statistics \
--namespace AWS/DynamoDB \
--metric-name ConsumedWriteCapacityUnits \
--dimensions Name=TableName,Value=$TABLE \
--start-time $START --end-time $END \
--period 86400 --statistics Average Maximum \
--query 'Datapoints | sort_by(@, &Timestamp) | [].[Timestamp,Average,Maximum]' \
--output table
echo "=== Read Capacity ==="
aws cloudwatch get-metric-statistics \
--namespace AWS/DynamoDB \
--metric-name ConsumedReadCapacityUnits \
--dimensions Name=TableName,Value=$TABLE \
--start-time $START --end-time $END \
--period 86400 --statistics Average Maximum \
--query 'Datapoints | sort_by(@, &Timestamp) | [].[Timestamp,Average,Maximum]' \
--output table
Step 2: Check for Throttling
Throttled requests mean your current capacity is too low. If you see throttling on a provisioned table, either increase capacity, enable auto-scaling, or switch to on-demand.
aws cloudwatch get-metric-statistics \
--namespace AWS/DynamoDB \
--metric-name ThrottledRequests \
--dimensions Name=TableName,Value=$TABLE \
--start-time $START --end-time $END \
--period 86400 --statistics Sum \
--query 'Datapoints | sort_by(@, &Timestamp) | [].[Timestamp,Sum]' \
--output table
Step 3: Calculate Your Peak-to-Average Ratio
From the Step 1 output, divide the highest Maximum value by the overall Average. This ratio determines your best mode:
| Peak-to-Average Ratio | Recommended Mode |
|---|---|
| Below 2x | Provisioned with auto-scaling |
| 2x to 4x | Either mode. Run the break-even math |
| Above 4x | On-demand |
| Highly variable day to day | On-demand |
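The decision table collapses into a small helper. The thresholds are this guide's rules of thumb, not an official AWS recommendation:

```python
def recommend_mode(peak: float, average: float) -> str:
    # Rules of thumb from the peak-to-average table above
    if average <= 0:
        return "on-demand"   # idle table: pay nothing when unused
    ratio = peak / average
    if ratio < 2:
        return "provisioned with auto-scaling"
    if ratio <= 4:
        return "either mode: run the break-even math"
    return "on-demand"

print(recommend_mode(peak=150, average=100))   # steady: provisioned with auto-scaling
print(recommend_mode(peak=500, average=50))    # spiky: on-demand
```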
Step 4: Estimate Monthly Cost in Both Modes
Take your average consumed capacity and calculate what each mode would cost:
# Quick estimate (replace values with your actual averages)
AVG_WCU=50
AVG_RCU=200
# On-demand monthly cost
OD_WRITE=$(echo "$AVG_WCU * 2592000 * 0.000000625" | bc)
OD_READ=$(echo "$AVG_RCU * 2592000 * 0.000000125" | bc)
echo "On-demand: Write=$OD_WRITE Read=$OD_READ Total=$(echo "$OD_WRITE + $OD_READ" | bc)"
# Provisioned monthly cost (using peak for provisioned amount)
PEAK_WCU=150
PEAK_RCU=400
PROV_WRITE=$(echo "$PEAK_WCU * 0.4745" | bc)
PROV_READ=$(echo "$PEAK_RCU * 0.0949" | bc)
echo "Provisioned: Write=$PROV_WRITE Read=$PROV_READ Total=$(echo "$PROV_WRITE + $PROV_READ" | bc)"
CostPatrol’s DDB-002 rule runs this analysis automatically across all tables and flags the ones where switching would save at least 20%.
Common Mistakes
1. Over-Provisioning Without Auto-Scaling
The most expensive mistake. Teams provision for peak traffic and leave it there 24/7. If your table handles a lunchtime rush but is quiet overnight, you are paying for peak capacity during 16+ hours of low traffic.
Fix: Enable auto-scaling or switch to on-demand.
2. Under-Provisioning and Eating Throttles
The opposite problem. Some teams set capacity too low to save money, then rely on retries in their application code. Throttled requests add latency, cause retry storms, and can cascade into failures. The “savings” from under-provisioning get eaten by the operational cost of debugging.
Fix: Monitor ThrottledRequests metric. Any non-zero value needs attention.
3. Using On-Demand for Everything
After the November 2024 price cut, some teams moved all tables to on-demand. For high-volume, steady-traffic tables, this is still significantly more expensive. The price cut made on-demand more competitive, not universally cheaper.
Fix: Audit tables doing more than 10,000 requests per day with steady patterns. Run the break-even math.
4. Forgetting GSI Capacity
Global Secondary Indexes have their own capacity settings. A common pattern: the base table has auto-scaling configured, but the GSIs are stuck at their initial provisioned values. When write traffic increases, the GSI can’t keep up and throttles the base table.
Fix: Always configure auto-scaling on GSIs when you configure it on the base table.
# Check GSI capacity for a table
aws dynamodb describe-table \
--table-name my-table \
--query 'Table.GlobalSecondaryIndexes[].{IndexName:IndexName,WCU:ProvisionedThroughput.WriteCapacityUnits,RCU:ProvisionedThroughput.ReadCapacityUnits}' \
--output table
5. Not Reviewing After Launch
Traffic patterns change. A table that was spiky during launch settles into a predictable pattern after a few months. A table that was steady sees a traffic shift after a new feature ships. Set a calendar reminder to review capacity mode quarterly.
6. Ignoring Table Class
DynamoDB offers two table classes: Standard and Infrequent Access (IA). IA tables cost 60% less for storage but charge more for reads and writes. For tables where storage is the dominant cost (large items, low request rates), switching to IA class can save more than any capacity mode change.
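Whether IA wins comes down to the storage-to-throughput cost ratio. A rough sketch, assuming Standard storage at $0.25/GB-month, IA storage at $0.10/GB-month, and IA throughput about 25% more expensive (approximate US East figures; check current pricing before switching):

```python
def table_class_cost(storage_gb: float, throughput_usd: float, ia: bool = False) -> float:
    # Assumed prices: Standard $0.25/GB-month vs IA $0.10/GB-month storage;
    # IA charges ~25% more for reads and writes
    storage_rate = 0.10 if ia else 0.25
    throughput = throughput_usd * (1.25 if ia else 1.0)
    return storage_gb * storage_rate + throughput

# Storage-heavy table: 500 GB stored, $10/month of request traffic
std = table_class_cost(500, 10.0)
ia = table_class_cost(500, 10.0, ia=True)
print(f"Standard ${std:.2f}/mo  IA ${ia:.2f}/mo")   # IA wins when storage dominates
```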
CostPatrol’s DDB-005 rule identifies tables where the Infrequent Access class would reduce costs.
Migration Strategy Between Modes
Switching from Provisioned to On-Demand
aws dynamodb update-table \
--table-name my-table \
--billing-mode PAY_PER_REQUEST
The switch takes effect immediately. No downtime. Your table can handle requests during the transition.
Important: After switching to on-demand, you must wait 24 hours before switching back to provisioned.
Switching from On-Demand to Provisioned
aws dynamodb update-table \
--table-name my-table \
--billing-mode PROVISIONED \
--provisioned-throughput ReadCapacityUnits=100,WriteCapacityUnits=50
You need to specify the initial capacity values. Use your CloudWatch data from the analysis step to set these. Start with your average consumed capacity plus 30% headroom, then enable auto-scaling immediately after.
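The sizing rule in that paragraph (average consumed capacity plus 30% headroom, rounded up to whole units) can be sketched as:

```python
import math

def initial_capacity(avg_consumed: float, headroom: float = 0.30) -> int:
    # Average consumed capacity plus headroom, rounded up to a whole unit;
    # floor of 1 because provisioned capacity cannot be zero
    return max(1, math.ceil(avg_consumed * (1 + headroom)))

# 14-day averages of 37.2 consumed WCU and 151.8 consumed RCU (example values)
print(initial_capacity(37.2), initial_capacity(151.8))
```

Feed these numbers into the update-table command, then attach auto-scaling policies so capacity tracks real traffic from day one.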
Migration Checklist
Before switching any table:
- Collect at least 14 days of CloudWatch metrics (consumed capacity, throttled requests)
- Calculate your peak-to-average ratio
- Run the break-even cost comparison
- Check if the table has GSIs that also need mode changes
- Verify no reserved capacity is tied to this table
- Switch during a low-traffic period (even though it’s non-disruptive, fewer surprises)
- Monitor for 48 hours after the switch for any throttling
Optimization Checklist
Use this checklist to audit your DynamoDB tables:
- List all tables and their current billing mode
- Identify tables with zero consumed capacity in the last 30 days (candidates for deletion)
- For each provisioned table, check if auto-scaling is enabled
- For auto-scaled tables, verify GSIs also have auto-scaling configured
- Calculate utilization rate for each provisioned table (consumed / provisioned)
- Flag provisioned tables under 30% utilization as on-demand candidates
- Flag on-demand tables with steady, high traffic as provisioned candidates
- Check for throttled requests on all tables
- Evaluate reserved capacity for tables with 1,000+ WCU or RCU sustained
- Review table class (Standard vs Infrequent Access) for storage-heavy tables
- Set a quarterly review reminder
Audit all tables in one pass:
for table in $(aws dynamodb list-tables --query 'TableNames[]' --output text); do
info=$(aws dynamodb describe-table --table-name "$table" \
--query 'Table.{Mode:BillingModeSummary.BillingMode,WCU:ProvisionedThroughput.WriteCapacityUnits,RCU:ProvisionedThroughput.ReadCapacityUnits,Size:TableSizeBytes,Items:ItemCount}' \
--output json 2>/dev/null)
echo "--- $table ---"
echo "$info" | python3 -c "
import sys,json
d=json.load(sys.stdin)
mode=d.get('Mode','PROVISIONED')
print(f\" Mode: {mode}\")
print(f\" WCU: {d.get('WCU',0)} | RCU: {d.get('RCU',0)}\")
print(f\" Size: {d.get('Size',0)/1024/1024:.1f} MB | Items: {d.get('Items',0):,}\")
"
done
CostPatrol runs all of these checks automatically. The DDB-001 rule catches unused tables, DDB-002 flags capacity mode mismatches, DDB-003 identifies tables with zero traffic that should be deleted, and DDB-005 recommends table class changes. Connect your AWS account and get a full DynamoDB cost report in under a minute.