Data loss happens—through accidental deletion, application bugs, ransomware, or regional outages. S3 versioning and replication provide the foundation for data protection and disaster recovery. This guide covers how to configure both features and build resilient storage architectures.
S3 Versioning Explained
Versioning maintains a complete history of every object in a bucket. Each modification creates a new version rather than overwriting the existing data.
How Versioning Works
┌─────────────────────────────────────────────────────────────────────────┐
│ S3 VERSIONING TIMELINE │
└─────────────────────────────────────────────────────────────────────────┘
Day 1 Day 5 Day 10 Day 15
│ │ │ │
▼ ▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ v1.txt │ │ v2.txt │ │ v3.txt │ │ (DELETE)│
│ ID: abc │ │ ID: def │ │ ID: ghi │ │ Marker │
│ 100 KB │ │ 120 KB │ │ 115 KB │ │ │
└─────────┘ └─────────┘ └─────────┘ └─────────┘
│ │ │ │
└──────────────────┴──────────────────┴──────────────────┘
│
All versions preserved
Total storage: 335 KB + marker
When you GET an object without specifying a version, S3 returns the current (latest non-delete) version. Previous versions remain accessible by version ID.
Enabling Versioning
# Enable versioning
aws s3api put-bucket-versioning \
--bucket my-bucket \
--versioning-configuration Status=Enabled
# Check versioning status
aws s3api get-bucket-versioning --bucket my-bucket
Version States
| State | Behavior |
|---|---|
| Unversioned | Default. Objects are overwritten/deleted normally |
| Enabled | All modifications create new versions; deletes add delete markers |
| Suspended | New objects get null version ID; existing versions preserved |
Important: Once enabled, you cannot return to unversioned—only suspended.
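If you need to stop minting new versions, you suspend versioning; existing versions stay in place and new writes receive the null version ID. A minimal sketch:

# Suspend versioning (does not delete existing versions)
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Suspended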
Working with Versions
# List all versions of an object
aws s3api list-object-versions \
--bucket my-bucket \
--prefix config.json
# Get specific version
aws s3api get-object \
--bucket my-bucket \
--key config.json \
--version-id "abc123" \
config-old.json
# Restore previous version (copy over current)
aws s3api copy-object \
--bucket my-bucket \
--copy-source "my-bucket/config.json?versionId=abc123" \
--key config.json
# Delete specific version permanently
aws s3api delete-object \
--bucket my-bucket \
--key config.json \
--version-id "abc123"
Delete Behavior with Versioning
When you delete an object in a versioned bucket:
# Simple delete adds a delete marker (doesn't remove data)
aws s3 rm s3://my-bucket/file.txt
# Result: File appears deleted, but versions remain
# Delete the delete marker to "undelete"
aws s3api delete-object \
--bucket my-bucket \
--key file.txt \
--version-id "delete-marker-id"
# Permanently delete requires specifying version ID
aws s3api delete-object \
--bucket my-bucket \
--key file.txt \
--version-id "specific-version-id"
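The version IDs above are placeholders; you can look up the real delete-marker ID with list-object-versions, which reports delete markers separately from object versions. A sketch using a JMESPath --query filter:

# Find the delete marker's version ID for a key
aws s3api list-object-versions \
    --bucket my-bucket \
    --prefix file.txt \
    --query 'DeleteMarkers[?IsLatest==`true`].VersionId'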
Version Lifecycle Policies
Versions accumulate storage costs. Use lifecycle policies to manage old versions:
Transition Noncurrent Versions
Move old versions to cheaper storage:
{
"Rules": [
{
"ID": "ManageVersions",
"Status": "Enabled",
"Filter": {},
"NoncurrentVersionTransitions": [
{
"NoncurrentDays": 30,
"StorageClass": "STANDARD_IA"
},
{
"NoncurrentDays": 90,
"StorageClass": "GLACIER"
}
],
"NoncurrentVersionExpiration": {
"NoncurrentDays": 365
}
}
]
}
Keep Only Recent Versions
{
"Rules": [
{
"ID": "KeepRecentVersions",
"Status": "Enabled",
"Filter": {},
"NoncurrentVersionExpiration": {
"NewerNoncurrentVersions": 5,
"NoncurrentDays": 1
}
}
]
}
This retains the 5 most recent noncurrent versions; anything older is expired once it has been noncurrent for at least 1 day.
Delete Expired Markers
Clean up delete markers left when all versions are removed:
{
"Rules": [
{
"ID": "CleanupDeleteMarkers",
"Status": "Enabled",
"Filter": {},
"Expiration": {
"ExpiredObjectDeleteMarker": true
}
}
]
}
Apply Lifecycle Policy
aws s3api put-bucket-lifecycle-configuration \
--bucket my-bucket \
--lifecycle-configuration file://version-lifecycle.json
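To confirm the rules took effect, read the configuration back:

# Verify the active lifecycle rules
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket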
Cross-Region Replication (CRR)
CRR automatically copies objects from a source bucket to a destination bucket in a different AWS region.
Prerequisites
- Versioning must be enabled on both source and destination buckets
- IAM role with permissions to read from source and write to destination
- Destination bucket must already exist
Create Replication Role
Trust policy (allows S3 to assume the role):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "s3.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Permissions policy (read from the source bucket, write replicas to the destination):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetReplicationConfiguration",
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::source-bucket"
},
{
"Effect": "Allow",
"Action": [
"s3:GetObjectVersionForReplication",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTagging"
],
"Resource": "arn:aws:s3:::source-bucket/*"
},
{
"Effect": "Allow",
"Action": [
"s3:ReplicateObject",
"s3:ReplicateDelete",
"s3:ReplicateTags"
],
"Resource": "arn:aws:s3:::destination-bucket/*"
}
]
}
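With those two documents saved locally, the role can be created from the CLI. A sketch assuming the files are named trust-policy.json and replication-permissions.json:

# Create the role with the trust policy
aws iam create-role \
    --role-name s3-replication-role \
    --assume-role-policy-document file://trust-policy.json

# Attach the permissions policy inline
aws iam put-role-policy \
    --role-name s3-replication-role \
    --policy-name s3-replication-permissions \
    --policy-document file://replication-permissions.json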
Configure Replication Rule
{
"Role": "arn:aws:iam::123456789012:role/s3-replication-role",
"Rules": [
{
"ID": "CRR-to-DR-Region",
"Status": "Enabled",
"Priority": 1,
"Filter": {},
"Destination": {
"Bucket": "arn:aws:s3:::dr-bucket-us-west-2",
"StorageClass": "STANDARD_IA"
},
"DeleteMarkerReplication": {
"Status": "Disabled"
}
}
]
}
# Apply replication configuration
aws s3api put-bucket-replication \
--bucket source-bucket \
--replication-configuration file://replication-config.json
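You can verify the configuration was accepted by reading it back:

# Verify the replication configuration
aws s3api get-bucket-replication --bucket source-bucket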
Replication with Filters
Replicate only specific prefixes or tags:
{
"Role": "arn:aws:iam::123456789012:role/s3-replication-role",
"Rules": [
{
"ID": "ReplicateCriticalData",
"Status": "Enabled",
"Priority": 1,
"Filter": {
"And": {
"Prefix": "critical/",
"Tags": [
{"Key": "replicate", "Value": "true"}
]
}
},
"DeleteMarkerReplication": {"Status": "Disabled"},
"Destination": {
"Bucket": "arn:aws:s3:::dr-bucket"
}
}
]
}
Replicate to Different Storage Class
Save costs by storing replicas in a cheaper tier:
{
"Destination": {
"Bucket": "arn:aws:s3:::dr-bucket",
"StorageClass": "GLACIER"
}
}
Same-Region Replication (SRR)
SRR copies objects within the same region. Common use cases:
- Log aggregation — Collect logs from multiple buckets into one
- Cross-account copies — Share data with another AWS account
- Compliance — Maintain copies with different access controls
- Test/production separation — Replicate production data to test
SRR Configuration
Configuration is identical to CRR, just with buckets in the same region:
{
"Role": "arn:aws:iam::123456789012:role/s3-replication-role",
"Rules": [
{
"ID": "SRR-LogAggregation",
"Status": "Enabled",
"Priority": 1,
"Filter": {
"Prefix": "logs/"
},
"Destination": {
"Bucket": "arn:aws:s3:::central-logs-bucket",
"Account": "123456789012"
}
}
]
}
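When the destination bucket lives in another account (the cross-account use case above), that account must also attach a bucket policy that lets the source account's replication role write replicas. A minimal sketch, reusing the role and bucket names from the examples:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowReplicationWrites",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/s3-replication-role"
      },
      "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateDelete",
        "s3:ReplicateTags"
      ],
      "Resource": "arn:aws:s3:::central-logs-bucket/*"
    }
  ]
}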
Batch Replication for Existing Objects
Replication rules apply only to objects created after the rule is configured. Use S3 Batch Replication to copy existing objects:
# Create batch replication job
aws s3control create-job \
--account-id 123456789012 \
--operation '{
"S3ReplicateObject": {}
}' \
--manifest '{
"Spec": {
"Format": "S3BatchOperations_CSV_20180820",
"Fields": ["Bucket", "Key"]
},
"Location": {
"ObjectArn": "arn:aws:s3:::manifest-bucket/manifest.csv",
"ETag": "abc123"
}
}' \
--report '{
"Bucket": "arn:aws:s3:::report-bucket",
"Format": "Report_CSV_20180820",
"Enabled": true,
"Prefix": "batch-reports/"
}' \
--priority 10 \
--role-arn arn:aws:iam::123456789012:role/batch-replication-role
Alternatively, use the S3 console's "Replicate existing objects" wizard.
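Either way, the work runs as an S3 Batch Operations job, so progress can be tracked from the CLI; the job ID below is a placeholder for the JobId returned by create-job:

# Check progress of the batch replication job
aws s3control describe-job \
    --account-id 123456789012 \
    --job-id "example-job-id"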
Replication Time Control (RTC)
For SLA-backed replication, enable RTC:
{
"Destination": {
"Bucket": "arn:aws:s3:::dr-bucket",
"ReplicationTime": {
"Status": "Enabled",
"Time": {
"Minutes": 15
}
},
"Metrics": {
"Status": "Enabled",
"EventThreshold": {
"Minutes": 15
}
}
}
}
RTC guarantees:
- 99.99% of objects replicate within 15 minutes
- CloudWatch metrics for replication monitoring
- SLA-backed delivery times
Pricing: Additional fee for RTC (approximately $0.015 per GB replicated).
Disaster Recovery Patterns
Pattern 1: Active-Passive DR
┌─────────────────────────────────────────────────────────────────────────┐
│ ACTIVE-PASSIVE DR PATTERN │
└─────────────────────────────────────────────────────────────────────────┘
┌─────────────────┐ ┌─────────────────┐
│ PRIMARY │ CRR │ DR REGION │
│ us-east-1 │ ─────────────────► │ us-west-2 │
├─────────────────┤ ├─────────────────┤
│ • Active reads │ │ • Standby │
│ • Active writes │ │ • Read replica │
│ • Standard tier │ │ • Glacier tier │
└─────────────────┘ └─────────────────┘
▲ │
│ │
│ Failover (manual) │
└───────────────────────────────────────┘
Configuration:
- CRR from primary to DR region
- DR bucket uses cheaper storage class
- Delete markers NOT replicated (preserves deleted data in DR)
- Failover: Update application to use DR bucket
Pattern 2: Multi-Region Active-Active
┌─────────────────────────────────────────────────────────────────────────┐
│ MULTI-REGION ACTIVE-ACTIVE │
└─────────────────────────────────────────────────────────────────────────┘
┌─────────────────┐ ┌─────────────────┐
│ us-east-1 │ ◄───────────────► │ eu-west-1 │
├─────────────────┤ Bidirectional ├─────────────────┤
│ • Read/Write │ Replication │ • Read/Write │
│ • Users: US │ │ • Users: EU │
└─────────────────┘ └─────────────────┘
Requirements:
- S3 Multi-Region Access Points (MRAP)
- Bidirectional replication rules (see the sketch after this list)
- Application handles eventual consistency
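Bidirectional replication is simply two replication configurations, one on each bucket, pointing at the other. A sketch with illustrative bucket and file names:

# us-east-1 bucket replicates to eu-west-1
aws s3api put-bucket-replication \
    --bucket app-data-us-east-1 \
    --replication-configuration file://replicate-to-eu.json

# eu-west-1 bucket replicates back to us-east-1
aws s3api put-bucket-replication \
    --bucket app-data-eu-west-1 \
    --replication-configuration file://replicate-to-us.json

Because S3 does not re-replicate objects that are themselves replicas, the two rules do not create a replication loop.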
Pattern 3: Ransomware-Resistant Backup
┌─────────────────────────────────────────────────────────────────────────┐
│ RANSOMWARE-RESISTANT BACKUP │
└─────────────────────────────────────────────────────────────────────────┘
┌─────────────────┐ CRR ┌─────────────────┐
│ PRODUCTION │ ────────────────► │ BACKUP │
│ us-east-1 │ │ us-west-2 │
├─────────────────┤ ├─────────────────┤
│ • Standard S3 │ │ • Object Lock │
│ • Versioning │ │ • Compliance │
│ • Normal access │ │ • Read-only │
└─────────────────┘ └─────────────────┘
│
No delete possible
for retention period
Configuration:
- Backup bucket with Object Lock (Compliance mode; see the sketch after this list)
- CRR without delete marker replication
- Different AWS account for backup (optional, extra protection)
- Minimal IAM permissions to backup bucket
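A sketch of the Object Lock piece, assuming the backup bucket was created with Object Lock enabled (it can only be turned on at bucket creation) and a 90-day retention window; the bucket name is illustrative:

# Default Compliance-mode retention for new objects in the backup bucket
aws s3api put-object-lock-configuration \
    --bucket dr-backup-bucket \
    --object-lock-configuration '{"ObjectLockEnabled": "Enabled", "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}}}'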
Monitoring Replication
CloudWatch Metrics
Enable replication metrics per rule via the Metrics element in the replication configuration (shown in the RTC example above):
# Key metrics to monitor
# - ReplicationLatency (seconds to replicate)
# - BytesPendingReplication (backlog size)
# - OperationsPendingReplication (operations in queue)
# - OperationsFailedReplication (failures)
# Create CloudWatch alarm for replication lag
aws cloudwatch put-metric-alarm \
--alarm-name "S3ReplicationLag" \
--metric-name ReplicationLatency \
--namespace AWS/S3 \
--statistic Average \
--period 300 \
--threshold 900 \
--comparison-operator GreaterThanThreshold \
--dimensions Name=SourceBucket,Value=my-bucket \
Name=DestinationBucket,Value=dr-bucket \
Name=RuleId,Value=CRR-Rule
Replication Status
Check individual object replication status:
aws s3api head-object \
--bucket my-bucket \
--key important-file.txt
# Look for: "ReplicationStatus": "COMPLETED" | "PENDING" | "FAILED" ("REPLICA" on the destination copy)
Cost Considerations
Versioning Costs
- Storage: Every version is stored in full and billed at its storage class rate
- Requests: Each version counts as a separate object, so version listings and per-version lifecycle transitions are billed accordingly
- Mitigation: Use lifecycle policies to expire or transition noncurrent versions
Replication Costs
| Component | Cost |
|---|---|
| PUT requests | ~$0.005 per 1,000 (destination region) |
| Data transfer (CRR) | ~$0.02/GB between US regions (varies by region pair) |
| Data transfer (SRR) | Free within same region |
| RTC add-on | ~$0.015/GB |
Example: Replicating 1 TB with CRR costs approximately $20 in transfer fees.
Best Practices
Versioning
- Enable versioning on all production buckets
- Implement lifecycle rules for version management
- Keep 5-10 recent versions; archive older ones
- Use MFA Delete for critical buckets (see the sketch after this list)
- Test restore procedures regularly
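MFA Delete can only be enabled with the bucket owner's root credentials, and only through the CLI or API. A sketch; the MFA device ARN and code are placeholders:

# Enable MFA Delete alongside versioning (requires root credentials)
aws s3api put-bucket-versioning \
    --bucket my-bucket \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"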
Replication
- Always enable versioning before replication
- Use different storage classes for DR replicas
- Disable delete marker replication for DR
- Monitor replication metrics in CloudWatch
- Document failover procedures
- Use Batch Replication for existing objects
- Consider RTC for critical data
Disaster Recovery
- Test DR failover quarterly
- Maintain runbook for bucket failover
- Use Object Lock for immutable backups
- Consider cross-account replication for isolation
- Set RTO/RPO targets and validate with testing
Conclusion
S3 versioning and replication form the foundation of data protection in AWS:
- Versioning protects against accidental deletion and corruption
- Lifecycle policies manage version costs automatically
- CRR provides geographic disaster recovery
- SRR enables log aggregation and cross-account sharing
- Object Lock adds ransomware protection
Start with versioning on all critical buckets, then add replication based on your RTO/RPO requirements. Always test your recovery procedures—backup systems that haven't been tested are not backup systems.
For complete S3 coverage, see our S3 Complete Guide. For securing your buckets, see S3 Security Best Practices. For building CLI commands, use our AWS S3 Command Generator.