
What are common cron job patterns for backups and maintenance tasks?

Learn the most useful cron patterns for common tasks like backups, database maintenance, log rotation, and cleanup jobs with real-world examples.

By Inventive HQ Team

Common Cron Patterns for System Administration

Cron is most commonly used to automate routine maintenance and backup tasks. Understanding a handful of proven patterns helps you build reliable automated systems.
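As a quick refresher, each crontab line has five time fields (minute, hour, day of month, month, day of week) followed by the command to run:

```
# +---------- minute (0-59)
# | +-------- hour (0-23)
# | | +------ day of month (1-31)
# | | | +---- month (1-12)
# | | | | +-- day of week (0-7; Sunday is 0 or 7)
# | | | | |
  0 2 * * *  /backup/scripts/daily-backup.sh
```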

Daily Backups

Daily Backup at 2 AM

0 2 * * * /backup/scripts/daily-backup.sh

This runs every day at 2 AM, a common time when systems have low traffic.

Backup Script Example:

#!/bin/bash
BACKUP_DIR="/backups/daily"
DATE=$(date +%Y%m%d)
DB_FILE="$BACKUP_DIR/database-$DATE.sql"

# Create directory if needed
mkdir -p "$BACKUP_DIR"

# Backup database (note: a password on the command line is visible to other
# users via `ps`; prefer a client option file in production)
mysqldump -u backup_user -p'password' --all-databases > "$DB_FILE"

# Compress backup
gzip "$DB_FILE"

# Delete backups older than 30 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

echo "Backup completed: $DB_FILE.gz" | mail -s "Backup Report" [email protected]
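The script above embeds the MySQL password on the command line. A safer, widely used alternative is a client option file readable only by the backup user (path and credentials below are illustrative):

```
# ~/.my.cnf (set permissions with: chmod 600 ~/.my.cnf)
[client]
user=backup_user
password=your-password-here
```

With the option file in place, the script can call `mysqldump --all-databases > "$DB_FILE"` with no `-u`/`-p` arguments, and the password never appears in the process list.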

Multiple Backups per Day

For critical systems, backup more frequently:

0 2,8,14,20 * * * /backup/scripts/daily-backup.sh

This runs 4 times daily (2 AM, 8 AM, 2 PM, 8 PM).

Incremental Backups

Run full backups weekly, incremental daily:

# Full backup every Sunday at 2 AM
0 2 * * 0 /backup/scripts/full-backup.sh

# Incremental backup daily at 2 AM, except Sunday
0 2 * * 1-6 /backup/scripts/incremental-backup.sh
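One common way to implement the full/incremental split is GNU tar's `--listed-incremental` mode: a snapshot file records what has already been backed up, so later runs only archive changes. The sketch below demonstrates the mechanism against a throwaway directory so it can be run safely; in production, `full-backup.sh` would delete the snapshot before a level-0 run and `incremental-backup.sh` would reuse it.

```shell
#!/bin/bash
# Demo of GNU tar incremental backups using a throwaway directory.
set -e
WORK=$(mktemp -d)
SOURCE_DIR="$WORK/data"
BACKUP_DIR="$WORK/backups"
SNAPSHOT="$BACKUP_DIR/snapshot.snar"
mkdir -p "$SOURCE_DIR" "$BACKUP_DIR"

echo "one" > "$SOURCE_DIR/a.txt"

# Full (level-0) backup: tar records every file's state in the snapshot file
tar --listed-incremental="$SNAPSHOT" -czf "$BACKUP_DIR/full.tar.gz" -C "$WORK" data

# Add a file, then take an incremental (level-1) backup against the snapshot
echo "two" > "$SOURCE_DIR/b.txt"
tar --listed-incremental="$SNAPSHOT" -czf "$BACKUP_DIR/incr.tar.gz" -C "$WORK" data

# The incremental archive holds the directory entry plus only the new file
tar -tzf "$BACKUP_DIR/incr.tar.gz"
```

To restore, extract the full archive first, then each incremental in order.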

Database Maintenance

Optimize Database Weekly

0 3 * * 0 mysqlcheck -u admin -p'password' --optimize --all-databases

Runs every Sunday at 3 AM to optimize all databases.

Database Integrity Check

0 4 * * 0 mysqlcheck -u admin -p'password' --all-databases --check | mail -s "Database Check Report" [email protected]

Checks database integrity every Sunday at 4 AM and emails results.

Vacuum PostgreSQL Database

0 2 * * 0 vacuumdb -U postgres -d mydb

Reclaims storage in PostgreSQL every Sunday at 2 AM.

Log Rotation and Cleanup

Daily Log Cleanup

0 1 * * * /log-cleanup/scripts/cleanup.sh

Log Cleanup Script:

#!/bin/bash
LOG_DIR="/var/log/app"

# Delete logs older than 90 days
find "$LOG_DIR" -name "*.log" -mtime +90 -delete

# Compress logs older than 7 days
find "$LOG_DIR" -name "*.log" -mtime +7 -exec gzip {} \;

# Keep compressed logs for 180 days
find "$LOG_DIR" -name "*.log.gz" -mtime +180 -delete

Hourly Log Size Check

0 * * * * /log-cleanup/scripts/check-size.sh

Prevents logs from consuming too much disk space by running hourly.
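A sketch of what `check-size.sh` might look like (the size budget, paths, and alerting are illustrative; the demo at the end runs against a throwaway directory so the script can be tried anywhere):

```shell
#!/bin/bash
# Hypothetical check-size.sh: warn when a log directory exceeds a size budget.
check_log_size() {
  local dir="$1" max_kb="$2"
  local used_kb
  # du -sk prints the directory's total size in kilobytes
  used_kb=$(du -sk "$dir" | awk '{print $1}')
  if [ "$used_kb" -gt "$max_kb" ]; then
    echo "WARNING: $dir uses ${used_kb}KB (limit ${max_kb}KB)"
    return 1
  fi
  echo "OK: $dir uses ${used_kb}KB"
}

# Demo: 64KB of logs in a throwaway directory vs a 32KB budget
DEMO=$(mktemp -d)
dd if=/dev/zero of="$DEMO/app.log" bs=1024 count=64 2> /dev/null
if ! check_log_size "$DEMO" 32; then
  echo "(here the real script would mail an alert to the admin)"
fi
```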

Rotate Logs Daily

0 0 * * * logrotate /etc/logrotate.conf

Runs daily at midnight to rotate logs based on logrotate configuration.
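If you let logrotate handle compression and retention, the per-script cleanup jobs above become unnecessary. A minimal stanza (paths and service name are illustrative) looks like:

```
/var/log/app/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    postrotate
        systemctl reload app > /dev/null 2>&1 || true
    endscript
}
```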

Cache Clearing

Clear Cache Hourly

0 * * * * /cache/scripts/clear-cache.sh

Cache Clearing Script:

#!/bin/bash
CACHE_DIR="/var/cache/app"

# Clear temporary cache files
rm -f "$CACHE_DIR"/*.tmp
rm -rf "$CACHE_DIR"/sessions/*

# Restart the cache service to drop in-memory entries
# (omit this if you only want to clear files on disk)
systemctl restart memcached

echo "Cache cleared at $(date)" >> /var/log/cache-clear.log

Clear Old Cache Files Weekly

0 2 * * 0 find /var/cache/app -type f -mtime +7 -delete

Deletes cache files not modified in the last 7 days, every Sunday at 2 AM.

File Cleanup and Archival

Archive Old Files

0 3 * * * /archive/scripts/archive-old-files.sh

Archive Script:

#!/bin/bash
SOURCE_DIR="/data/uploads"
ARCHIVE_DIR="/data/archive"
DAYS=180

# Find files older than $DAYS days
# (use find -print0 with read -d '' if filenames may contain newlines)
find "$SOURCE_DIR" -type f -mtime +"$DAYS" | while IFS= read -r file; do
  # Archive to dated subdirectory
  YEAR=$(stat -c %y "$file" | cut -d- -f1)
  mkdir -p "$ARCHIVE_DIR/$YEAR"
  mv "$file" "$ARCHIVE_DIR/$YEAR/"
done

# Compress archives monthly
find "$ARCHIVE_DIR" -type f -mtime +30 -exec gzip {} \;

Delete Temporary Files

0 4 * * * find /tmp -type f -mtime +7 -delete
30 4 * * * find /var/tmp -type f -mtime +14 -delete

Cleans up temporary files older than a week (or two weeks for /var/tmp).

Certificate Management

Renew SSL Certificates

0 3 * * * /opt/letsencrypt/certbot renew --quiet --deploy-hook "systemctl reload nginx"

Checks daily at 3 AM for certificates nearing expiry. The --deploy-hook command runs only when a certificate is actually renewed, so nginx is reloaded only when needed.

For Let's Encrypt with certbot, running twice daily provides redundancy:

0 3 * * * /opt/letsencrypt/certbot renew --quiet
0 15 * * * /opt/letsencrypt/certbot renew --quiet

Configuration File Syncing

Sync Configuration Daily

0 1 * * * /scripts/sync-config.sh

Sync Script:

#!/bin/bash
# Backup current config
cp -r /etc/app /backups/config-$(date +%Y%m%d).bak

# Sync from central repository
git -C /etc/app pull origin main

# Restart service if config changed (-r recurses into the directories)
if ! diff -rq /backups/config-$(date +%Y%m%d).bak /etc/app > /dev/null; then
  systemctl restart app
  echo "Config updated and service restarted" | mail -s "Config Update" [email protected]
fi

Security Scanning

Run Security Scans Weekly

0 2 * * 0 /security/scripts/vulnerability-scan.sh

Security Scan Script:

#!/bin/bash
# Run security scanner
trivy image --exit-code 0 myapp:latest > /reports/security-scan-$(date +%Y%m%d).txt

# Email report if vulnerabilities found
if grep -q "vulnerability" /reports/security-scan-$(date +%Y%m%d).txt; then
  mail -s "Security Vulnerabilities Found" [email protected] < /reports/security-scan-$(date +%Y%m%d).txt
fi

Update Security Patches

0 1 * * * apt-get update && apt-get upgrade -y

Installs available package updates daily at 1 AM. Note that this upgrades all packages, not just security patches; for security-only updates on Debian/Ubuntu, the unattended-upgrades package is purpose-built.

Monitoring and Health Checks

Check System Health Hourly

0 * * * * /monitoring/scripts/health-check.sh

Health Check Script:

#!/bin/bash
THRESHOLD=80

# Check disk usage
DISK_USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')

if [ "$DISK_USAGE" -gt "$THRESHOLD" ]; then
  echo "Disk usage at ${DISK_USAGE}%" | mail -s "Disk Space Alert" [email protected]
fi

# Check memory (used memory as a percentage of total)
MEM_USAGE=$(free | awk 'NR==2 {print int($3/$2 * 100)}')

if [ "$MEM_USAGE" -gt 80 ]; then
  echo "Memory usage at ${MEM_USAGE}%" | mail -s "Memory Alert" [email protected]
fi

Database Connection Test

0 * * * * /monitoring/scripts/db-connection-test.sh

Tests database connectivity every hour and alerts if connection fails.
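A sketch of such a script. The probe command and hostname are assumptions: `mysqladmin ping` is a typical cheap MySQL probe, and `pg_isready` is the PostgreSQL equivalent. The function takes the probe as arguments so the demo line can run anywhere:

```shell
#!/bin/bash
# Hypothetical db-connection-test.sh: run a cheap connectivity probe and
# report the result.
probe_db() {
  if "$@" > /dev/null 2>&1; then
    echo "db reachable"
    return 0
  fi
  echo "db DOWN"
  # In the real script, alert here, e.g.:
  # echo "DB unreachable at $(date)" | mail -s "DB Alert" [email protected]
  return 1
}

# Production would use something like: probe_db mysqladmin -h db.example.com ping
probe_db true   # placeholder probe so this sketch runs anywhere
```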

Data Synchronization

Sync Data to Remote Server

0 22 * * * rsync -avz /data/important/ user@backup-server:/backups/data/

Syncs important data to a backup server every day at 10 PM.

Report Generation

Generate Daily Reports

0 6 * * * /reports/scripts/generate-daily-report.sh

Generates reports and emails them before business hours.

Report Generation Script:

#!/bin/bash
REPORT_DIR="/reports/$(date +%Y/%m)"
mkdir -p "$REPORT_DIR"

# Generate various reports
/reports/scripts/user-activity-report.sh > "$REPORT_DIR/users-$(date +%Y%m%d).txt"
/reports/scripts/system-status-report.sh > "$REPORT_DIR/status-$(date +%Y%m%d).txt"
/reports/scripts/error-report.sh > "$REPORT_DIR/errors-$(date +%Y%m%d).txt"

# Email report (-a attaches a file with s-nail/bsd-mailx; GNU mailutils
# treats -a as a header, so use mutt there instead)
tar czf "$REPORT_DIR.tar.gz" "$REPORT_DIR"
mail -s "Daily Reports" -a "$REPORT_DIR.tar.gz" [email protected]

Statistics and Reporting

Collect Statistics

0 0 * * * /scripts/collect-stats.sh

Stats Collection Script:

#!/bin/bash
STATS_DB="/var/stats/stats.db"

# Ensure the metrics table exists
sqlite3 "$STATS_DB" "CREATE TABLE IF NOT EXISTS metrics (ts INTEGER, uptime INTEGER, load REAL, cpu REAL, mem REAL);"

# Collect various metrics
TIMESTAMP=$(date +%s)
UPTIME=$(awk '{print int($1)}' /proc/uptime)
LOAD=$(awk '{print $1}' /proc/loadavg)
CPU_USAGE=$(top -bn1 | grep Cpu | awk '{print 100-$8}')  # field position varies by top version
MEM_USAGE=$(free | awk 'NR==2 {print $3/$2*100}')

# Store in database
sqlite3 "$STATS_DB" "INSERT INTO metrics VALUES ($TIMESTAMP, $UPTIME, $LOAD, $CPU_USAGE, $MEM_USAGE);"

Best Practices for Maintenance Cron Jobs

  1. Always log output:
0 2 * * * /backup/scripts/daily-backup.sh >> /var/log/backup.log 2>&1
  2. Use lock files to prevent concurrent execution:
LOCK_FILE="/tmp/backup.lock"
if [ -f "$LOCK_FILE" ]; then
  echo "Backup already running"
  exit 1
fi
touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT   # remove the lock even if the script fails
# do work
  3. Monitor cron job success/failure:
0 6 * * * /monitoring/scripts/check-cron-success.sh
  4. Schedule during low-traffic periods: 2-4 AM is typical

  5. Stagger multiple jobs to avoid overwhelming the system

  6. Document the purpose of each cron job with comments

  7. Test scripts manually before scheduling

  8. Monitor disk space for backups and logs

  9. Implement retention policies for old backups and logs

  10. Set appropriate permissions on cron scripts and output files
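The touch-based lock file shown above has a small race window between the check and the touch. `flock(1)` (from util-linux) avoids both problems by tying the lock to an open file descriptor, so the kernel releases it automatically when the process exits:

```shell
#!/bin/bash
# flock-based locking: the lock lives on an open file descriptor, so it is
# released automatically when the script exits, even on a crash.
LOCK_FILE="/tmp/backup.lock"

exec 200> "$LOCK_FILE"          # open fd 200 on the lock file
if ! flock -n 200; then         # -n: fail immediately instead of waiting
  echo "Backup already running"
  exit 1
fi

echo "lock acquired"
# do work; no cleanup needed -- the lock disappears with the process
```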

Common cron patterns make system administration much simpler. By using these proven patterns, you establish reliable automated systems that keep your infrastructure running smoothly.
