Two commands dominate S3 file transfers: aws s3 sync and aws s3 cp. While both move files to and from S3, they work fundamentally differently, and choosing the wrong one can waste time, money, or worse, delete data you meant to keep.
This guide breaks down exactly when to use each command, with real-world examples that help you make the right choice every time.
The Core Difference
aws s3 cp — Copies specified files, always.
aws s3 sync — Synchronizes directories, transferring only what's changed.
Think of it this way:
- cp is like a copy machine—it duplicates whatever you give it
- sync is like a smart backup—it only transfers what's new or different
Quick Decision Guide
| Scenario | Use | Why |
|---|---|---|
| Single file transfer | cp | Simpler, no comparison overhead |
| First-time directory upload | Either | Both transfer everything |
| Subsequent directory updates | sync | Only transfers changes |
| Website deployment | sync --delete | Updates files, removes old ones |
| Backup (keep versions) | cp --recursive | Doesn't delete old backups |
| Mirror/replica | sync --delete | Exact copy including deletions |
| Restore from backup | sync or cp | Depends on restore strategy |
Command Comparison
Basic Syntax
# Copy single file
aws s3 cp file.txt s3://bucket/file.txt
# Copy directory (requires --recursive)
aws s3 cp ./dir s3://bucket/dir --recursive
# Sync directory (recursive by default)
aws s3 sync ./dir s3://bucket/dir
Key Differences in Behavior
# Initial upload: both transfer 100 files
aws s3 cp ./website s3://bucket/ --recursive # Uploads 100 files
aws s3 sync ./website s3://bucket/ # Uploads 100 files
# After changing 3 files locally:
aws s3 cp ./website s3://bucket/ --recursive # Uploads 100 files (all)
aws s3 sync ./website s3://bucket/ # Uploads 3 files (changed only)
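Either way, a recursive listing confirms what actually landed in the bucket (the bucket name is a placeholder):

```bash
# List every object under the prefix, with object count and total size
aws s3 ls s3://bucket/ --recursive --summarize
```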
When to Use aws s3 cp
Single File Transfers
# Upload
aws s3 cp report.pdf s3://documents/reports/
# Download
aws s3 cp s3://documents/reports/report.pdf ./
# Copy between buckets
aws s3 cp s3://source/file.txt s3://destination/file.txt
Rename While Copying
# cp allows different destination name
aws s3 cp local.txt s3://bucket/renamed.txt
# sync copies directory structure as-is
aws s3 sync ./data s3://bucket/data # Preserves all names
Stream Processing
# Pipe data directly (cp supports stdin)
pg_dump mydb | gzip | aws s3 cp - s3://backups/db-$(date +%Y%m%d).sql.gz
# Sync doesn't support streaming
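Streaming works in the other direction too: cp accepts `-` as the source to write an object to stdout, which pairs naturally with the backup pipeline above (the key and database name are illustrative):

```bash
# Stream a compressed dump from S3 straight into a restore
aws s3 cp s3://backups/db-20240101.sql.gz - | gunzip | psql mydb
```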
Backup Without Deletion Risk
When you want backups to accumulate rather than mirror the source:
# Timestamped backup—old backups preserved
aws s3 cp ./logs s3://archive/logs-$(date +%Y%m%d)/ --recursive
# Never risk deleting previous backups
Copy with Specific Metadata
aws s3 cp large-file.zip s3://bucket/ \
--storage-class STANDARD_IA \
--metadata '{"project":"alpha","version":"1.2"}'
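To confirm the storage class and metadata were applied, the lower-level s3api interface can inspect the object (same bucket and key as the example above):

```bash
# Inspect the object's storage class and user-defined metadata
aws s3api head-object --bucket bucket --key large-file.zip
# Expect "StorageClass": "STANDARD_IA" and the custom "Metadata" map
```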
When to Use aws s3 sync
Website Deployments
The classic sync use case:
# Build and deploy
npm run build
aws s3 sync ./dist s3://my-website/ --delete
# Only changed files upload, old files removed
Why sync is better:
- Faster deployments (only changes transfer)
- Lower costs (fewer PUT requests)
- Cleaner bucket (old files removed)
Regular Backups
# Daily backup of important directory
aws s3 sync /var/data s3://backups/data/
# Only new/changed files transfer each day
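To actually run this daily, a crontab entry is the simplest scheduler; the schedule, paths, and log file below are assumptions to adapt:

```bash
# crontab -e: run the backup at 02:00 every day and log the output
# (use the full path to aws if cron's PATH doesn't include it)
0 2 * * * aws s3 sync /var/data s3://backups/data/ >> /var/log/s3-backup.log 2>&1
```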
Development Synchronization
Keep local and S3 in sync during development:
# Push changes up
aws s3 sync ./project s3://dev-bucket/project
# Pull changes down
aws s3 sync s3://dev-bucket/project ./project
Disaster Recovery Replication
# Mirror production bucket to DR region
aws s3 sync s3://prod-bucket s3://dr-bucket \
--source-region us-east-1 \
--region us-west-2
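A coarse sanity check on the replica is to compare object counts and total sizes on both sides (this is not a byte-level diff):

```bash
# Compare "Total Objects" and "Total Size" between the two buckets
aws s3 ls s3://prod-bucket --recursive --summarize | tail -2
aws s3 ls s3://dr-bucket --recursive --summarize | tail -2
```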
Large Directory with Frequent Updates
# Log directory with constant new files
aws s3 sync /var/log/app s3://logs/app/ --exclude "*.tmp"
# Only new logs upload—existing ones skipped
Critical Flag Differences
The --delete Flag (Sync Only)
# WITHOUT --delete: Files only added/updated, never removed
aws s3 sync ./src s3://bucket/
# If you delete local file, it remains in S3
# WITH --delete: True mirror—deletions propagate
aws s3 sync ./src s3://bucket/ --delete
# Local deletions also delete from S3
Warning: --delete is dangerous. Always preview first:
# ALWAYS do this first
aws s3 sync ./src s3://bucket/ --delete --dryrun
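The dry run prints one line per planned operation, so pending deletions are easy to spot before they happen (output below is illustrative):

```bash
aws s3 sync ./src s3://bucket/ --delete --dryrun
# (dryrun) upload: src/app.js to s3://bucket/app.js
# (dryrun) delete: s3://bucket/old-page.html
```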
The --recursive Flag (cp Only)
# cp requires --recursive for directories
aws s3 cp ./dir s3://bucket/dir # Fails
aws s3 cp ./dir s3://bucket/dir --recursive # Works
# sync is inherently recursive
aws s3 sync ./dir s3://bucket/dir # Works
Exclude and Include Patterns
Both commands accept the same --exclude and --include filters; the main subtlety is ordering, shown below:
# Exclude patterns (both commands)
aws s3 sync ./project s3://bucket/ --exclude "*.log" --exclude "node_modules/*"
aws s3 cp ./project s3://bucket/ --recursive --exclude "*.log"
# Include after exclude (both)
aws s3 sync ./data s3://bucket/ --exclude "*" --include "*.csv"
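Order matters with these filters: later flags take precedence over earlier ones, so reversing the pair above changes the result entirely:

```bash
# Later filters win: this uploads only the CSV files
aws s3 sync ./data s3://bucket/ --exclude "*" --include "*.csv"
# Reversed: the trailing --exclude "*" overrides the include, so nothing uploads
aws s3 sync ./data s3://bucket/ --include "*.csv" --exclude "*"
```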
Performance Comparison
First-Time Transfer: 10,000 Files
| Metric | sync | cp --recursive |
|---|---|---|
| API Calls | ~10,000 PUT + a few LIST | ~10,000 PUT |
| Time | Similar | Similar |
| Network | Same | Same |
Subsequent Transfer: 10 Files Changed
| Metric | sync | cp --recursive |
|---|---|---|
| API Calls | ~10 LIST + 10 PUT | ~10,000 PUT |
| Time | Much faster | Same as first time |
| Cost | <$0.001 | ~$0.05 |
Sync wins dramatically for updates.
Comparison Overhead
Sync must list and compare before transferring:
# For tiny transfers, cp may be faster
time aws s3 cp single-file.txt s3://bucket/ # ~1 second
time aws s3 sync ./one-file-dir s3://bucket/ # ~2 seconds (list overhead)
# For large directories, sync wins on updates
time aws s3 cp ./10k-files s3://bucket/ --recursive # 5 minutes
time aws s3 sync ./10k-files s3://bucket/ # 5 minutes (first time)
time aws s3 sync ./10k-files s3://bucket/ # 30 seconds (subsequent)
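Raw throughput is governed by the CLI's shared transfer settings rather than the choice of command; on large first-time transfers, raising concurrency often helps (the values below are examples; the defaults are 10 concurrent requests and 8MB chunks):

```bash
# Allow more parallel requests and larger multipart chunks
aws configure set default.s3.max_concurrent_requests 20
aws configure set default.s3.multipart_chunksize 16MB
```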
Common Patterns
Website Deployment Pipeline
#!/bin/bash
# build-and-deploy.sh
# Build
npm run build
# Sync HTML with short cache
aws s3 sync ./dist s3://website/ \
--exclude "*" \
--include "*.html" \
--cache-control "max-age=3600" \
--delete
# Sync assets with long cache
aws s3 sync ./dist s3://website/ \
--exclude "*.html" \
--cache-control "max-age=31536000" \
--delete
# Invalidate CDN
aws cloudfront create-invalidation \
--distribution-id E12345 \
--paths "/*"
Backup with Retention
#!/bin/bash
# backup-with-retention.sh
# Daily incremental backup
aws s3 sync /data s3://backups/current/
# Weekly snapshot (copy, not sync, to preserve)
if [ $(date +%u) -eq 7 ]; then
WEEK=$(date +%Y-W%V)
aws s3 cp s3://backups/current s3://backups/weekly/$WEEK/ --recursive
fi
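If the weekly snapshots shouldn't accumulate forever, an S3 lifecycle rule can expire them automatically; the bucket name, prefix, and 90-day window below are assumptions to adapt:

```bash
# Expire objects under weekly/ after 90 days (sketch)
aws s3api put-bucket-lifecycle-configuration --bucket backups \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-weekly-snapshots",
      "Status": "Enabled",
      "Filter": {"Prefix": "weekly/"},
      "Expiration": {"Days": 90}
    }]
  }'
```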
Bidirectional Sync
#!/bin/bash
# two-way-sync.sh
# Pull remote changes first
aws s3 sync s3://shared/project ./project
# Make local changes...
# ...
# Push local changes
aws s3 sync ./project s3://shared/project
Warning: Bidirectional sync can cause conflicts. Consider proper version control for code.
Selective Restore
# Restore only specific files with cp
aws s3 cp s3://backup/config.json ./config.json
# Restore entire directory with sync
aws s3 sync s3://backup/data ./data
# By default, sync skips same-sized files whose local copy is newer;
# --exact-timestamps instead re-downloads same-sized files unless timestamps match exactly
aws s3 sync s3://backup/data ./data --exact-timestamps
Cost Analysis
Transfer Costs
Both commands incur the same data transfer costs—they differ in request costs:
| Operation | Cost per 1,000 |
|---|---|
| PUT/POST | $0.005 |
| GET | $0.0004 |
| LIST | $0.005 |
Scenario: Deploy 10,000-File Website Daily
Using cp --recursive (every file, every time):
Daily: 10,000 PUTs = $0.05
Monthly: $1.50
Using sync (average 100 changed files):
Daily: ~10 LISTs + 100 PUTs = $0.00005 + $0.0005 ≈ $0.0006
Monthly: ~$0.02
# But sync is much faster!
Using sync with --delete (same but cleaner):
Same request cost (DELETE requests are free), but the bucket stays clean
Request costs are small in absolute terms either way. The real savings with sync come from time and bandwidth for large directories with few changes.
Edge Cases and Gotchas
Timestamp Issues
# Sync uses modification time—can cause issues with builds
npm run build # All files get new timestamps
aws s3 sync ./dist s3://bucket/ # Re-uploads everything!
# Solution: use --size-only for build artifacts
aws s3 sync ./dist s3://bucket/ --size-only
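The trade-off: --size-only skips any file whose content changed but whose byte count didn't, so it silently misses same-size edits (contrived demonstration below):

```bash
printf 'v1' > flag.txt && aws s3 sync . s3://bucket/ --size-only   # uploads flag.txt
printf 'v2' > flag.txt && aws s3 sync . s3://bucket/ --size-only   # skipped: same size!
```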
Hidden Files
# Both commands transfer hidden files by default
aws s3 sync ./project s3://bucket/ # Includes .env, .git, etc.
# Exclude sensitive files
aws s3 sync ./project s3://bucket/ \
--exclude ".env" \
--exclude ".git/*" \
--exclude "*.secret"
Empty Directories
# S3 doesn't have true directories—both commands skip empty folders
mkdir empty-dir
aws s3 sync ./empty-dir s3://bucket/empty-dir/ # Nothing uploaded
# To create "folder" marker:
aws s3api put-object --bucket bucket --key empty-dir/
Sync with Versioned Buckets
# Sync doesn't automatically handle versions
aws s3 sync ./data s3://versioned-bucket/data
# Previous versions remain (good for recovery)
# But --delete only adds delete markers; old versions remain recoverable
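If --delete (or any deletion) has hidden an object behind a delete marker, the s3api commands can find and restore an older version; the key and VERSION_ID below are placeholders:

```bash
# Show all versions and delete markers for one key
aws s3api list-object-versions --bucket versioned-bucket --prefix data/report.csv
# Restore a specific version by copying it back over the delete marker
aws s3api copy-object --bucket versioned-bucket --key data/report.csv \
  --copy-source "versioned-bucket/data/report.csv?versionId=VERSION_ID"
```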
Summary: Decision Matrix
| Question | If Yes → | If No → |
|---|---|---|
| Single file? | cp | Continue |
| First-time bulk transfer? | Either (cp simpler) | Continue |
| Need to mirror exactly? | sync --delete | Continue |
| Regular updates to same destination? | sync | Continue |
| Streaming data from stdin? | cp - | N/A |
| Want old destination files preserved? | cp (or sync without --delete) | sync --delete |
| Website/app deployment? | sync --delete | N/A |
Conclusion
Both commands have their place:
- Use cp for single files, one-time transfers, streaming, and when you want full control over what gets transferred
- Use sync for directories that update regularly, deployments, backups, and when you want incremental efficiency
The key insight: sync is about efficiency over time, while cp is about explicit control. For most directory operations where you'll run the command repeatedly, sync is the better choice.
Generate commands with proper flags using our AWS S3 Command Generator to avoid syntax errors and ensure best practices.