
AWS S3 CLI Commands: Complete Reference Guide

Master AWS S3 CLI commands with this comprehensive reference guide. Learn cp, sync, mv, rm, ls and advanced options for efficient S3 operations.

By Inventive HQ Team

The AWS S3 CLI is the most efficient way to manage files in Amazon S3 at scale. Whether you're uploading a website, syncing backups, or managing terabytes of data, mastering these commands will save you hours of work.

This guide covers every essential S3 CLI command with practical examples you can use immediately.

Prerequisites

Installing AWS CLI

macOS:

brew install awscli

Linux:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Windows: Download and run the MSI installer from AWS.
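
The installer can also be run from a command prompt; the URL below is the standard AWS download location, and the /qn flag makes the install silent:

# Install AWS CLI v2 on Windows via msiexec (add /qn for a silent install)
msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi /qn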

Configuration

aws configure
# Enter:
# - AWS Access Key ID
# - AWS Secret Access Key
# - Default region (e.g., us-east-1)
# - Default output format (json)

Verify installation:

aws --version
aws s3 ls  # List buckets
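
If you work with more than one AWS account, named profiles keep credential sets separate; pass --profile to any command (the profile name below is illustrative):

# Configure and use a separate profile
aws configure --profile prod
aws s3 ls --profile prod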

Essential Commands Overview

Command            Purpose
aws s3 ls          List buckets or objects
aws s3 cp          Copy files to/from S3
aws s3 sync        Synchronize directories
aws s3 mv          Move/rename objects
aws s3 rm          Delete objects
aws s3 mb          Make bucket
aws s3 rb          Remove bucket
aws s3 presign     Generate presigned URLs

Listing Objects: aws s3 ls

List All Buckets

aws s3 ls

Output:

2024-01-15 10:30:00 my-website-bucket
2024-02-20 14:22:00 backup-bucket
2024-03-01 09:15:00 logs-bucket

List Bucket Contents

# List root of bucket
aws s3 ls s3://my-bucket/

# List specific prefix (folder)
aws s3 ls s3://my-bucket/images/

# Recursive listing (all objects)
aws s3 ls s3://my-bucket/ --recursive

# Human-readable sizes
aws s3 ls s3://my-bucket/ --recursive --human-readable

# With total summary
aws s3 ls s3://my-bucket/ --recursive --summarize

Filter by Date

# List objects, then filter with grep
aws s3 ls s3://my-bucket/ --recursive | grep "2024-01"
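
For more precise date filtering, the lower-level s3api command accepts a JMESPath query on LastModified; a sketch, with the bucket name and date as placeholders:

# List keys modified on or after a given date
aws s3api list-objects-v2 --bucket my-bucket \
  --query "Contents[?LastModified>='2024-01-01'].Key" --output text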

Copying Files: aws s3 cp

Upload to S3

# Single file
aws s3 cp file.txt s3://my-bucket/

# To specific path
aws s3 cp file.txt s3://my-bucket/folder/newname.txt

# Entire directory (recursive)
aws s3 cp ./local-dir s3://my-bucket/prefix/ --recursive

Download from S3

# Single file
aws s3 cp s3://my-bucket/file.txt ./

# Download with new name
aws s3 cp s3://my-bucket/file.txt ./local-file.txt

# Entire prefix (recursive)
aws s3 cp s3://my-bucket/prefix/ ./local-dir/ --recursive

Copy Between Buckets

# Single object
aws s3 cp s3://source-bucket/file.txt s3://dest-bucket/

# Entire bucket
aws s3 cp s3://source-bucket/ s3://dest-bucket/ --recursive

Copy Options

# Set storage class
aws s3 cp file.txt s3://bucket/ --storage-class STANDARD_IA

# Set ACL (permissions)
aws s3 cp file.txt s3://bucket/ --acl public-read

# Set content type
aws s3 cp image.jpg s3://bucket/ --content-type "image/jpeg"

# Server-side encryption
aws s3 cp file.txt s3://bucket/ --sse AES256
aws s3 cp file.txt s3://bucket/ --sse aws:kms --sse-kms-key-id alias/my-key

# Metadata
aws s3 cp file.txt s3://bucket/ --metadata '{"key1":"value1","key2":"value2"}'

# Cache control (for web hosting)
aws s3 cp index.html s3://bucket/ --cache-control "max-age=86400"

Synchronizing: aws s3 sync

Sync keeps a destination in sync with a source—only copying changed files.

Basic Sync

# Local to S3
aws s3 sync ./local-folder s3://my-bucket/prefix/

# S3 to local
aws s3 sync s3://my-bucket/prefix/ ./local-folder

# S3 to S3
aws s3 sync s3://source-bucket/ s3://dest-bucket/

Sync Options

# Delete files in destination not in source
aws s3 sync ./local s3://bucket/ --delete

# Dry run (preview changes)
aws s3 sync ./local s3://bucket/ --dryrun

# Exclude patterns
aws s3 sync ./local s3://bucket/ --exclude "*.log" --exclude ".git/*"

# Include only specific patterns
aws s3 sync ./local s3://bucket/ --exclude "*" --include "*.jpg" --include "*.png"

# Compare by size only (ignore timestamps)
aws s3 sync ./local s3://bucket/ --size-only

# Sync only specific file types, comparing by size
aws s3 sync ./local s3://bucket/ --exclude "*" --include "*.mp4" --size-only

Sync with Storage Classes

# Sync to Glacier
aws s3 sync ./backups s3://backup-bucket/ --storage-class GLACIER

# Sync to Intelligent-Tiering
aws s3 sync ./data s3://bucket/ --storage-class INTELLIGENT_TIERING
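
Keep in mind that objects in the GLACIER storage class can't be downloaded until they're restored. A minimal restore request via s3api (names and the 7-day window are illustrative):

# Request a temporary restore of an archived object
aws s3api restore-object --bucket backup-bucket --key backups/archive.tar.gz \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'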

Moving Files: aws s3 mv

Move operations copy then delete the source.

# Move single file
aws s3 mv file.txt s3://my-bucket/

# Move from S3 to local
aws s3 mv s3://my-bucket/file.txt ./

# Rename in S3
aws s3 mv s3://bucket/old-name.txt s3://bucket/new-name.txt

# Move between buckets
aws s3 mv s3://source-bucket/file.txt s3://dest-bucket/

# Move recursively
aws s3 mv s3://bucket/old-prefix/ s3://bucket/new-prefix/ --recursive

Deleting Objects: aws s3 rm

Delete Single Object

aws s3 rm s3://my-bucket/file.txt

Delete with Prefix

# Delete all objects with prefix (recursive required)
aws s3 rm s3://my-bucket/logs/ --recursive

# ALWAYS use dryrun first
aws s3 rm s3://my-bucket/logs/ --recursive --dryrun

Delete with Patterns

# Delete all .log files
aws s3 rm s3://my-bucket/ --recursive --exclude "*" --include "*.log"

# Delete everything except images
aws s3 rm s3://my-bucket/ --recursive --exclude "*.jpg" --exclude "*.png"

Delete Versioned Objects

For versioned buckets:

# Delete specific version
aws s3api delete-object --bucket my-bucket --key file.txt --version-id "abc123"

# Delete all versions (requires s3api and scripting)
aws s3api list-object-versions --bucket my-bucket --prefix "file.txt" \
  --query 'Versions[].{Key:Key,VersionId:VersionId}' --output json | \
  jq -c '.[]' | while read -r obj; do
    key=$(echo "$obj" | jq -r '.Key')
    vid=$(echo "$obj" | jq -r '.VersionId')
    aws s3api delete-object --bucket my-bucket --key "$key" --version-id "$vid"
  done
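
Note that the loop above removes object versions only; delete markers are returned separately under DeleteMarkers and can be cleaned up with the same pattern:

# Remove leftover delete markers (same assumptions as above)
aws s3api list-object-versions --bucket my-bucket --prefix "file.txt" \
  --query 'DeleteMarkers[].{Key:Key,VersionId:VersionId}' --output json | \
  jq -c '.[]?' | while read -r obj; do
    key=$(echo "$obj" | jq -r '.Key')
    vid=$(echo "$obj" | jq -r '.VersionId')
    aws s3api delete-object --bucket my-bucket --key "$key" --version-id "$vid"
  done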

Bucket Management

Create Bucket

# Default region
aws s3 mb s3://my-new-bucket

# Specific region
aws s3 mb s3://my-new-bucket --region us-west-2

Delete Bucket

# Empty bucket first
aws s3 rm s3://my-bucket/ --recursive

# Remove bucket
aws s3 rb s3://my-bucket

# Or force (deletes contents and bucket)
aws s3 rb s3://my-bucket --force

Presigned URLs

Generate temporary URLs for private objects:

# Default 1-hour expiration
aws s3 presign s3://my-bucket/private-file.pdf

# Custom expiration (seconds)
aws s3 presign s3://my-bucket/file.pdf --expires-in 3600  # 1 hour
aws s3 presign s3://my-bucket/file.pdf --expires-in 604800  # 7 days

Output:

https://my-bucket.s3.amazonaws.com/private-file.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Expires=3600&X-Amz-Signature=...

Performance Optimization

Multipart Upload Configuration

# Set multipart threshold (default 8MB)
aws configure set s3.multipart_threshold 64MB

# Set chunk size
aws configure set s3.multipart_chunksize 16MB

# Increase concurrent requests
aws configure set s3.max_concurrent_requests 20
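
You can check what the CLI will actually use with aws configure get, following the dotted profile.s3.setting form shown in the AWS docs (returns nothing if a value has never been set):

# Inspect current S3 transfer settings
aws configure get default.s3.multipart_threshold
aws configure get default.s3.max_concurrent_requests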

Transfer Acceleration

For buckets with transfer acceleration enabled:

aws s3 cp large-file.zip s3://my-bucket/ \
  --endpoint-url https://s3-accelerate.amazonaws.com
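
Acceleration must be enabled on the bucket before the accelerate endpoint will work; you can also tell the CLI to use it by default:

# Enable transfer acceleration on the bucket (one-time)
aws s3api put-bucket-accelerate-configuration \
  --bucket my-bucket --accelerate-configuration Status=Enabled

# Or make the CLI use the accelerate endpoint automatically
aws configure set default.s3.use_accelerate_endpoint true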

Parallel Uploads with GNU Parallel

# Upload many files in parallel
find ./images -type f | parallel -j 10 aws s3 cp {} s3://bucket/images/

Common Patterns

Website Deployment

# Sync website files
aws s3 sync ./dist s3://my-website-bucket/ \
  --delete \
  --cache-control "max-age=31536000" \
  --exclude "*.html"

# HTML with shorter cache (exclude everything, then re-include HTML; later filters win)
aws s3 sync ./dist s3://my-website-bucket/ \
  --cache-control "max-age=3600" \
  --exclude "*" \
  --include "*.html" \
  --content-type "text/html"

# Invalidate CloudFront
aws cloudfront create-invalidation --distribution-id E123 --paths "/*"

Database Backup

# Backup with timestamp
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
pg_dump mydb | gzip | aws s3 cp - s3://backups/db/mydb-$TIMESTAMP.sql.gz

# Restore
aws s3 cp s3://backups/db/mydb-20240115.sql.gz - | gunzip | psql mydb

Log Rotation

# Archive logs to Glacier storage (sync can't filter by age; use date-based prefixes or lifecycle rules for that)
aws s3 sync s3://logs/current/ s3://logs/archive/ \
  --storage-class GLACIER \
  --exclude "*" \
  --include "*.log"

# Clean up archived logs from current
aws s3 rm s3://logs/current/ --recursive --exclude "*" --include "*.log"

Mirror Between Accounts

# Cross-account sync (requires bucket policy on destination)
aws s3 sync s3://source-bucket/ s3://dest-bucket/ \
  --source-region us-east-1 \
  --region us-west-2 \
  --acl bucket-owner-full-control
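
The destination bucket's policy must allow the source account to write. A minimal sketch, run with the destination account's credentials (the account ID and bucket name are placeholders; tighten the principal and actions for production):

# Grant the source account write access on the destination bucket
aws s3api put-bucket-policy --bucket dest-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCrossAccountSync",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["s3:PutObject", "s3:PutObjectAcl", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::dest-bucket", "arn:aws:s3:::dest-bucket/*"]
  }]
}'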

Error Handling

Common Errors

Error            Cause                  Solution
AccessDenied     Missing permissions    Check IAM policy
NoSuchBucket     Bucket doesn't exist   Verify bucket name
NoSuchKey        Object doesn't exist   Check key/path
SlowDown         Too many requests      Add retry logic
EntityTooLarge   File > 5GB             Use multipart upload

Retry Configuration

# Configure max retries
aws configure set retry_mode adaptive
aws configure set max_attempts 5
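
The same behavior can be set per shell with environment variables, which is convenient in CI pipelines:

# Equivalent environment variables
export AWS_RETRY_MODE=adaptive
export AWS_MAX_ATTEMPTS=5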

Quick Reference Cheat Sheet

# List
aws s3 ls                                  # Buckets
aws s3 ls s3://bucket/                    # Objects
aws s3 ls s3://bucket/ --recursive        # All objects

# Copy
aws s3 cp file s3://bucket/               # Upload
aws s3 cp s3://bucket/file ./             # Download
aws s3 cp s3://src/ s3://dest/ --recursive # Bucket to bucket

# Sync
aws s3 sync ./local s3://bucket/          # Upload changes
aws s3 sync s3://bucket/ ./local          # Download changes
aws s3 sync src dest --delete             # Mirror (remove extra)

# Delete
aws s3 rm s3://bucket/file                # Single file
aws s3 rm s3://bucket/ --recursive        # All files

# Always test first
aws s3 sync/rm ... --dryrun               # Preview changes

Conclusion

The AWS S3 CLI is powerful but can be dangerous: a wrong rm --recursive or sync --delete can wipe critical data. Always:

  1. Use --dryrun before destructive operations
  2. Enable versioning on important buckets
  3. Test in non-production first
  4. Script with error handling for automation (a minimal sketch follows below)
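
A minimal sketch of point 4, assuming a scheduled backup sync (paths and bucket name are illustrative):

#!/usr/bin/env bash
set -euo pipefail   # stop on errors, unset variables, and pipeline failures

SRC="/var/backups"
DEST="s3://backup-bucket/nightly/"   # illustrative bucket

# Fail loudly so cron/CI surfaces the error instead of silently continuing
if ! aws s3 sync "$SRC" "$DEST" --delete; then
  echo "S3 sync to $DEST failed" >&2
  exit 1
fi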

For complex S3 commands, use our AWS S3 Command Generator to build commands visually and avoid syntax errors.

Frequently Asked Questions

What is the AWS S3 CLI?

The AWS S3 CLI is a command-line tool for interacting with Amazon S3 (Simple Storage Service). It's part of the AWS CLI and allows you to upload, download, copy, move, delete, and list objects in S3 buckets. It supports advanced features like multipart uploads, recursive operations, and filtering.
