
AWS S3 Storage Classes: Complete Guide to Cost Optimization

Master AWS S3 storage classes to optimize costs. Compare Standard, Intelligent-Tiering, Glacier, and Deep Archive with pricing, use cases, and lifecycle policies.

By Inventive HQ Team

Storing data in S3 using only the default Standard class is like paying premium rates for a storage unit you rarely visit. AWS offers seven storage classes, each optimized for different access patterns and retention needs. Choosing the right class can reduce storage costs by 50-95% without sacrificing durability.

S3 Storage Classes Overview

AWS S3 provides seven storage classes, all offering 99.999999999% (11 nines) durability but with different availability, retrieval times, and pricing.

| Storage Class | Use Case | Availability | Min Duration | Retrieval Time | Storage Cost* |
|---|---|---|---|---|---|
| S3 Standard | Frequently accessed data | 99.99% | None | Instant | $0.023/GB |
| S3 Intelligent-Tiering | Unknown/changing access | 99.9% | None | Instant | $0.004-0.023/GB |
| S3 Standard-IA | Infrequent access (monthly) | 99.9% | 30 days | Instant | $0.0125/GB |
| S3 One Zone-IA | Recreatable infrequent data | 99.5% | 30 days | Instant | $0.01/GB |
| S3 Glacier Instant | Archive with instant access | 99.9% | 90 days | Milliseconds | $0.004/GB |
| S3 Glacier Flexible | Archive (quarterly access) | 99.99% | 90 days | 1-12 hours | $0.0036/GB |
| S3 Glacier Deep Archive | Long-term archive (yearly) | 99.99% | 180 days | 12-48 hours | $0.00099/GB |

*Prices for us-east-1; vary by region.

Understanding Each Storage Class

S3 Standard

The default class for frequently accessed data. No minimum storage duration or retrieval fees make it ideal for active workloads.

Best for:

  • Application assets and media
  • Content distribution origins
  • Big data analytics
  • Dynamic websites

# Upload with Standard (default)
aws s3 cp file.txt s3://my-bucket/

# Explicitly specify Standard
aws s3 cp file.txt s3://my-bucket/ --storage-class STANDARD

S3 Intelligent-Tiering

Automatically moves objects between access tiers based on usage patterns. Objects not accessed for 30 consecutive days move to the Infrequent Access tier, then to the Archive Instant Access tier after 90 days; the deeper Archive Access and Deep Archive Access tiers are optional and must be enabled per bucket (see the configuration sketch after the tier breakdown below).

Best for:

  • Unpredictable access patterns
  • Data lakes with mixed workloads
  • Long-lived objects with changing access
  • When you don't want to manage lifecycle policies

# Upload with Intelligent-Tiering
aws s3 cp file.txt s3://my-bucket/ --storage-class INTELLIGENT_TIERING

Tier breakdown:

  • Frequent Access: Same price as Standard
  • Infrequent Access (30+ days): ~40% savings
  • Archive Instant Access (90+ days): ~68% savings
  • Archive Access (90+ days, optional): ~71% savings
  • Deep Archive Access (180+ days, optional): ~95% savings
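
The two optional archive tiers are off by default and must be enabled per bucket. A minimal boto3 sketch of that opt-in; the bucket name, configuration ID, prefix, and day thresholds are illustrative:

import boto3

s3 = boto3.client('s3')

# Opt in to the deeper archive tiers for objects under a prefix.
# ARCHIVE_ACCESS requires Days >= 90, DEEP_ARCHIVE_ACCESS requires Days >= 180.
s3.put_bucket_intelligent_tiering_configuration(
    Bucket='my-bucket',
    Id='archive-tiers',
    IntelligentTieringConfiguration={
        'Id': 'archive-tiers',
        'Status': 'Enabled',
        'Filter': {'Prefix': 'archive/'},
        'Tierings': [
            {'Days': 90, 'AccessTier': 'ARCHIVE_ACCESS'},
            {'Days': 180, 'AccessTier': 'DEEP_ARCHIVE_ACCESS'},
        ],
    },
)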

S3 Standard-IA (Infrequent Access)

Lower storage cost than Standard, but with per-GB retrieval fees. The 30-day minimum means objects deleted earlier are still charged for 30 days.

Best for:

  • Backups accessed monthly
  • Disaster recovery files
  • Older application data
  • Long-tail content

# Upload directly to Standard-IA
aws s3 cp backup.tar.gz s3://my-bucket/ --storage-class STANDARD_IA

# Sync directory with Standard-IA
aws s3 sync ./backups/ s3://my-bucket/backups/ --storage-class STANDARD_IA

S3 One Zone-IA

Same as Standard-IA but stored in a single Availability Zone. 20% cheaper but data is lost if that AZ fails.

Best for:

  • Secondary backups (primary exists elsewhere)
  • Recreatable data (thumbnails, transcoded media)
  • Cross-region replicas
  • Development/test data

aws s3 cp thumbnail.jpg s3://my-bucket/ --storage-class ONEZONE_IA

S3 Glacier Instant Retrieval

Archive storage with millisecond retrieval. Ideal for data accessed once per quarter that still needs immediate access when requested.

Best for:

  • Medical images
  • News media archives
  • User-generated content archives
  • Quarterly compliance reports

aws s3 cp archive.zip s3://my-bucket/ --storage-class GLACIER_IR

S3 Glacier Flexible Retrieval

Traditional archive storage with retrieval times from minutes to hours. Three retrieval options balance speed and cost.

Best for:

  • Backup archives
  • Digital preservation
  • Magnetic tape replacement
  • Compliance archives (annual access)

aws s3 cp compliance-data.tar s3://my-bucket/ --storage-class GLACIER

Retrieval options:

| Type | Time | Cost |
|---|---|---|
| Expedited | 1-5 minutes | $0.03/GB + $10/1,000 requests |
| Standard | 3-5 hours | $0.01/GB + $0.05/1,000 requests |
| Bulk | 5-12 hours | $0.0025/GB + $0.025/1,000 requests |

S3 Glacier Deep Archive

Lowest cost storage for data retained 7-10+ years. Retrieval takes 12-48 hours.

Best for:

  • Regulatory compliance (7+ year retention)
  • Healthcare records (HIPAA)
  • Financial records (SOX, SEC)
  • Legal hold data
  • Replacement for tape archives

aws s3 cp legal-archive.tar s3://my-bucket/ --storage-class DEEP_ARCHIVE

Cost Comparison

For 1 TB stored for one year with monthly access patterns:

| Storage Class | Storage Cost | Retrieval Cost | Total Annual |
|---|---|---|---|
| Standard | $276 | $0 | $276 |
| Standard-IA | $150 | $12 | $162 |
| One Zone-IA | $120 | $12 | $132 |
| Glacier Instant | $48 | $36 | $84 |
| Glacier Flexible | $43 | $12* | $55 |
| Deep Archive | $12 | $24* | $36 |

*Assumes standard retrieval; expedited costs more.

Key insight: Moving from Standard to Standard-IA saves 41%. Moving to Glacier Deep Archive saves 87%—but only if you can tolerate 12+ hour retrieval times.
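
If you want to sanity-check numbers like these against your own workload, the arithmetic is simple enough to script. A rough sketch using the us-east-1 list prices from the tables above (the retrieval prices and the 100 GB/month access figure are illustrative assumptions; confirm current prices on the AWS pricing page):

# Rough annual cost model: storage + retrieval, ignoring per-request fees
PRICES = {
    # class: ($/GB-month storage, $/GB retrieval at the standard tier)
    'STANDARD':     (0.023,   0.0),
    'STANDARD_IA':  (0.0125,  0.01),
    'ONEZONE_IA':   (0.01,    0.01),
    'GLACIER_IR':   (0.004,   0.03),
    'GLACIER':      (0.0036,  0.01),
    'DEEP_ARCHIVE': (0.00099, 0.02),  # assumed standard-retrieval price
}

def annual_cost(storage_gb, retrieved_gb_per_month, storage_class):
    storage_price, retrieval_price = PRICES[storage_class]
    return (storage_gb * storage_price + retrieved_gb_per_month * retrieval_price) * 12

# 1 TB stored, 100 GB retrieved per month (assumed access pattern)
for cls in PRICES:
    print(f'{cls:13} ${annual_cost(1024, 100, cls):9,.2f}/year')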

Setting Storage Class on Upload

Using AWS CLI

# Single file upload
aws s3 cp document.pdf s3://my-bucket/ --storage-class GLACIER

# Recursive upload with storage class
aws s3 cp ./logs/ s3://my-bucket/logs/ --recursive --storage-class STANDARD_IA

# Sync with storage class
aws s3 sync ./archive/ s3://my-bucket/archive/ --storage-class DEEP_ARCHIVE

Using AWS SDK (Python)

import json
import boto3

s3 = boto3.client('s3')

# Upload a local file with an explicit storage class
s3.upload_file(
    'local-file.txt',
    'my-bucket',
    'remote-file.txt',
    ExtraArgs={'StorageClass': 'STANDARD_IA'}
)

# Put an in-memory object with a storage class
data = {'example': 'payload'}
s3.put_object(
    Bucket='my-bucket',
    Key='data.json',
    Body=json.dumps(data),
    StorageClass='INTELLIGENT_TIERING'
)
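
One quirk worth knowing when verifying uploads: HEAD and GET responses omit the StorageClass field entirely for Standard objects, so treat a missing value as STANDARD. A short sketch:

import boto3

s3 = boto3.client('s3')

# S3 omits StorageClass for STANDARD objects, so default the lookup
resp = s3.head_object(Bucket='my-bucket', Key='data.json')
print(resp.get('StorageClass', 'STANDARD'))  # e.g. INTELLIGENT_TIERING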

Lifecycle Policies

Lifecycle policies automatically transition objects between storage classes based on age. This is the most effective way to optimize costs at scale.

Basic Lifecycle Policy

{
  "Rules": [
    {
      "ID": "ArchiveOldData",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "logs/"
      },
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        },
        {
          "Days": 90,
          "StorageClass": "GLACIER"
        },
        {
          "Days": 365,
          "StorageClass": "DEEP_ARCHIVE"
        }
      ],
      "Expiration": {
        "Days": 2555
      }
    }
  ]
}
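
If your buckets are provisioned from code rather than the console or CLI, the same rules can be applied with boto3. A sketch mirroring the JSON above, assuming the bucket name:

import boto3

s3 = boto3.client('s3')

# Same log-archiving rules as the JSON policy above
s3.put_bucket_lifecycle_configuration(
    Bucket='my-bucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'ArchiveOldData',
            'Status': 'Enabled',
            'Filter': {'Prefix': 'logs/'},
            'Transitions': [
                {'Days': 30, 'StorageClass': 'STANDARD_IA'},
                {'Days': 90, 'StorageClass': 'GLACIER'},
                {'Days': 365, 'StorageClass': 'DEEP_ARCHIVE'},
            ],
            'Expiration': {'Days': 2555},
        }]
    },
)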

Applying Lifecycle Policy via CLI

# Save policy to file
cat > lifecycle.json << 'EOF'
{
  "Rules": [
    {
      "ID": "MoveToIA",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"}
      ]
    }
  ]
}
EOF

# Apply to bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json

# Verify
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket

Transition Constraints

Objects can only transition "downward" through storage classes:

Standard → Standard-IA → Intelligent-Tiering → One Zone-IA
         → Glacier Instant → Glacier Flexible → Deep Archive

Minimum days before transition:

  • To Standard-IA or One Zone-IA: at least 30 days after object creation
  • To the Glacier classes: no minimum age for the transition itself, but the 90-day (180-day for Deep Archive) minimum storage duration charge applies once the object lands there

Transitioning Existing Objects

Lifecycle policies are retroactive: when you attach one, S3 evaluates every matching object and transitions those that already meet the age criteria. Evaluation runs roughly once a day, so transitions are not instantaneous.

Using S3 Batch Operations

For immediate one-time migrations, use S3 Batch Operations:

# Build a CSV manifest (bucket,key per line) of objects to transition,
# then upload it to the manifest bucket referenced in the job below
aws s3api list-objects-v2 --bucket my-bucket --prefix old-data/ \
  --query 'Contents[].Key' --output text | tr '\t' '\n' | \
  sed 's/^/my-bucket,/' > manifest.csv

# Create batch job (requires IAM role with S3 permissions)
aws s3control create-job \
  --account-id 123456789012 \
  --operation '{"S3PutObjectCopy":{"StorageClass":"GLACIER"}}' \
  --manifest '{"Spec":{"Format":"S3BatchOperations_CSV_20180820","Fields":["Bucket","Key"]},"Location":{"ObjectArn":"arn:aws:s3:::manifest-bucket/manifest.csv","ETag":"..."}}' \
  --report '{"Bucket":"arn:aws:s3:::report-bucket","Format":"Report_CSV_20180820","Enabled":true}'

Copy-to-Self Method

For small numbers of objects, copy to the same location with a new storage class:

# Change single object's storage class
aws s3 cp s3://my-bucket/file.txt s3://my-bucket/file.txt \
  --storage-class GLACIER \
  --metadata-directive COPY

# Batch change with xargs (assumes keys contain no whitespace)
aws s3 ls s3://my-bucket/old-logs/ --recursive | \
  awk '{print $4}' | \
  xargs -I {} aws s3 cp s3://my-bucket/{} s3://my-bucket/{} \
    --storage-class STANDARD_IA \
    --metadata-directive COPY
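
For object counts too large for a shell loop but not worth a Batch Operations job, the same copy-in-place pattern works from boto3. A sketch, assuming the bucket and prefix names; note that copy_object handles objects up to 5 GB, and larger ones need multipart copy:

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

# Copy each object onto itself with a new storage class;
# the paginator handles prefixes with more than 1,000 keys
for page in paginator.paginate(Bucket='my-bucket', Prefix='old-logs/'):
    for obj in page.get('Contents', []):
        s3.copy_object(
            Bucket='my-bucket',
            Key=obj['Key'],
            CopySource={'Bucket': 'my-bucket', 'Key': obj['Key']},
            StorageClass='STANDARD_IA',
            MetadataDirective='COPY',
        )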

Storage Class Decision Matrix

| Scenario | Recommended Class | Why |
|---|---|---|
| Website assets, CDN origin | Standard | Frequent access, no retrieval fees |
| Application logs (<30 days) | Standard | Still being analyzed |
| Application logs (30-90 days) | Standard-IA | Occasional access for debugging |
| Application logs (90+ days) | Glacier Flexible | Compliance retention |
| User uploads (active) | Standard | Users expect instant access |
| User uploads (inactive 90+ days) | Glacier Instant | Rare access but needs to be fast |
| Database backups (weekly) | Standard-IA | Regular restore testing |
| Database backups (monthly+) | Glacier Flexible | Disaster recovery |
| Compliance archives | Deep Archive | 7+ year retention, rare access |
| Unknown access pattern | Intelligent-Tiering | Automatic optimization |
| Secondary/recreatable data | One Zone-IA | Cost savings, acceptable risk |

Best Practices

1. Start with Intelligent-Tiering for New Workloads

If you're unsure about access patterns, Intelligent-Tiering removes guesswork. The monitoring fee is minimal compared to potential savings.

2. Implement Lifecycle Policies from Day One

Don't let data accumulate in Standard. Define lifecycle policies when creating buckets:

# Create bucket with lifecycle policy in one flow
aws s3 mb s3://new-bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket new-bucket \
  --lifecycle-configuration file://standard-lifecycle.json

3. Use Prefix-Based Policies

Organize data by access pattern and apply targeted policies:

my-bucket/
├── hot/          → Standard (no transition)
├── warm/         → Standard-IA after 30 days
├── cold/         → Glacier after 7 days
└── archive/      → Deep Archive after 1 day

4. Monitor with S3 Storage Lens

Enable Storage Lens to track storage class distribution and identify optimization opportunities:

aws s3control put-storage-lens-configuration \
  --account-id 123456789012 \
  --config-id my-dashboard \
  --storage-lens-configuration '{...}'
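
The configuration body is elided above; for reference, here is a minimal boto3 sketch that enables an account-level dashboard with bucket-level detail. The dashboard ID and account ID are placeholders, and the minimal schema shown is an assumption to verify against the Storage Lens docs:

import boto3

s3control = boto3.client('s3control')

# Minimal dashboard: account-level metrics with bucket-level detail
s3control.put_storage_lens_configuration(
    ConfigId='my-dashboard',
    AccountId='123456789012',
    StorageLensConfiguration={
        'Id': 'my-dashboard',
        'AccountLevel': {'BucketLevel': {}},
        'IsEnabled': True,
    },
)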

5. Account for Minimum Storage Duration

Don't transition objects that might be deleted soon. If logs are deleted after 14 days, Standard-IA (30-day minimum) costs more than Standard.
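
The penalty is easy to quantify. For 1 GB at the us-east-1 prices above, an object deleted on day 14 costs more in Standard-IA than in Standard because the 30-day minimum is billed in full:

# 1 GB held for 14 days, then deleted
standard = 0.023 * (14 / 30)       # billed only for days stored: ~$0.0107
standard_ia = 0.0125 * (30 / 30)   # 30-day minimum billed in full: $0.0125
print(f'Standard:    ${standard:.4f}')
print(f'Standard-IA: ${standard_ia:.4f}')  # IA is the pricier option here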

6. Test Retrieval Workflows

Before archiving to Glacier, verify your team knows how to restore:

# Initiate restore from Glacier
aws s3api restore-object \
  --bucket my-bucket \
  --key archived-file.zip \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'

# Check restore status
aws s3api head-object --bucket my-bucket --key archived-file.zip
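
The same workflow from boto3, assuming the bucket and key above: restore_object starts the job, and the Restore field returned by head_object flips from ongoing-request="true" to "false" once the temporary copy is ready:

import boto3

s3 = boto3.client('s3')

# Start a standard-tier restore; the copy stays available for 7 days
s3.restore_object(
    Bucket='my-bucket',
    Key='archived-file.zip',
    RestoreRequest={'Days': 7, 'GlacierJobParameters': {'Tier': 'Standard'}},
)

# Poll for completion via the Restore field
resp = s3.head_object(Bucket='my-bucket', Key='archived-file.zip')
print(resp.get('Restore'))  # e.g. 'ongoing-request="false", expiry-date=...'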

Common Mistakes to Avoid

  1. Archiving frequently accessed data - Retrieval fees can exceed storage savings
  2. Ignoring minimum storage duration - Deleting early doesn't save money
  3. Using One Zone-IA for critical data - Single AZ failure loses data
  4. Not testing Glacier restores - 12-hour retrieval surprises during incidents
  5. Over-transitioning small objects - Per-request fees dominate for tiny files
  6. Forgetting versioning costs - Old versions accumulate in expensive classes

Summary

S3 storage classes offer a spectrum from instant-access Standard to ultra-low-cost Deep Archive. The key is matching storage class to actual access patterns:

  • Hot data (daily access): Standard or Intelligent-Tiering
  • Warm data (monthly access): Standard-IA
  • Cold data (quarterly access): Glacier Instant or Flexible
  • Frozen data (yearly or compliance): Deep Archive

Use lifecycle policies to automate transitions and prevent cost accumulation. For complex CLI operations, use our AWS S3 Command Generator to build commands with the correct storage class flags.

Frequently Asked Questions

What storage classes does AWS S3 offer?

AWS S3 offers seven storage classes: S3 Standard (frequently accessed), S3 Intelligent-Tiering (automatic optimization), S3 Standard-IA (infrequent access), S3 One Zone-IA (single-AZ infrequent access), S3 Glacier Instant Retrieval (archive with millisecond access), S3 Glacier Flexible Retrieval (archive with retrieval in minutes to hours), and S3 Glacier Deep Archive (long-term archive with 12+ hour retrieval).
