Web Development

How do I prevent multiple instances of the same cron job from running?

Learn techniques to prevent concurrent execution of the same cron job, using lock files, process checks, and other mechanisms to ensure only one instance runs.

By Inventive HQ Team

The Problem of Concurrent Cron Jobs

When a scheduled cron job takes longer than expected, the scheduler might start another instance of the same job before the first one completes. This can cause problems like database conflicts, file corruption, duplicate data, and resource exhaustion.

For example, if a backup job is scheduled to run every hour but takes two hours to complete, a second instance starts while the first is still running, and the two processes compete for resources.

Lock File Method

The most common and reliable approach is using lock files. A lock file serves as a flag indicating that a job is already running.

Basic Lock File Implementation

#!/bin/bash

LOCK_FILE="/tmp/backup.lock"

# Check if lock file exists (note: a small race window exists between this check and the touch below)
if [ -f "$LOCK_FILE" ]; then
  echo "Backup job is already running (lock file exists)"
  exit 1
fi

# Create lock file
touch "$LOCK_FILE"

# Trap to remove lock file on exit (success or failure)
trap 'rm -f "$LOCK_FILE"' EXIT

# Your actual job code
echo "Starting backup at $(date)"
mysqldump --all-databases > /backups/backup.sql
gzip /backups/backup.sql
echo "Backup completed at $(date)"

# Lock file is removed by trap on exit
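One caveat: the check-and-touch sequence above is not atomic, so two instances starting at nearly the same moment can both pass the check. A minimal race-free sketch uses mkdir, which creates the lock in a single atomic step (the path is illustrative):

```shell
#!/bin/bash

LOCK_DIR="/tmp/backup.lock.d"

# mkdir fails if the directory already exists, so the existence check
# and the lock creation happen in one atomic operation
if ! mkdir "$LOCK_DIR" 2>/dev/null; then
  echo "Backup job is already running (lock directory exists)"
  exit 1
fi

# Remove the lock directory on exit (success or failure)
trap 'rmdir "$LOCK_DIR"' EXIT

echo "Starting backup at $(date)"
# ... backup commands ...
```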

Improved Lock File with PID

Store the process ID (PID) in the lock file to verify the job is still running:

#!/bin/bash

LOCK_FILE="/tmp/backup.lock"

# Check if lock file exists
if [ -f "$LOCK_FILE" ]; then
  LOCK_PID=$(cat "$LOCK_FILE")

  # Check if the process with that PID still exists
  if kill -0 "$LOCK_PID" 2>/dev/null; then
    echo "Backup job is already running (PID: $LOCK_PID)"
    exit 1
  else
    # Process doesn't exist, remove stale lock file
    rm -f "$LOCK_FILE"
  fi
fi

# Create lock file with our PID
echo $$ > "$LOCK_FILE"

# Trap to remove lock file on exit
trap 'rm -f "$LOCK_FILE"' EXIT

# Your actual job code
echo "Starting backup at $(date)"
mysqldump --all-databases > /backups/backup.sql
echo "Backup completed at $(date)"

Lock File with Timeout

Remove lock files that are too old to prevent permanent blocking:

#!/bin/bash

LOCK_FILE="/tmp/backup.lock"
LOCK_TIMEOUT=7200  # 2 hours in seconds

# Check if lock file exists
if [ -f "$LOCK_FILE" ]; then
  LOCK_TIME=$(stat -f%m "$LOCK_FILE" 2>/dev/null || stat -c%Y "$LOCK_FILE")
  CURRENT_TIME=$(date +%s)
  LOCK_AGE=$((CURRENT_TIME - LOCK_TIME))

  if [ $LOCK_AGE -lt $LOCK_TIMEOUT ]; then
    echo "Backup job is already running (lock age: $LOCK_AGE seconds)"
    exit 1
  else
    echo "Removing stale lock file (age: $LOCK_AGE seconds)"
    rm -f "$LOCK_FILE"
  fi
fi

# Create lock file
echo $$ > "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

# Your actual job code
echo "Starting backup at $(date)"
mysqldump --all-databases > /backups/backup.sql
echo "Backup completed at $(date)"

Process Check Method

Instead of lock files, check if a process with the same name is already running:

#!/bin/bash

JOB_NAME="backup"

# Count processes whose command line contains the job name; this script's
# own command line matches too, hence the threshold of 1 below
RUNNING=$(pgrep -f "$JOB_NAME" | wc -l)

if [ $RUNNING -gt 1 ]; then
  echo "Another instance of $JOB_NAME is already running"
  exit 1
fi

# Your actual job code
echo "Starting backup at $(date)"
mysqldump --all-databases > /backups/backup.sql
echo "Backup completed at $(date)"

This method is simpler, but pgrep -f matches anywhere in the command line, so an unrelated process whose arguments happen to contain the job name can be miscounted as a running instance.
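A somewhat safer sketch matches the script's full path and excludes the current process's own PID from the count (the path is illustrative):

```shell
#!/bin/bash

SCRIPT_PATH="/backup/scripts/backup.sh"  # illustrative path to this script

# Match the exact script path and drop this process's own PID from the list
OTHERS=$(pgrep -f "$SCRIPT_PATH" | grep -vw "$$" | wc -l)

if [ "$OTHERS" -gt 0 ]; then
  echo "Another instance of $SCRIPT_PATH is already running"
  exit 1
fi

echo "Starting backup at $(date)"
# ... backup commands ...
```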

Mutex with flock

The flock command creates exclusive locks that are automatically released:

#!/bin/bash

# Use flock to ensure only one instance runs
# flock -n (non-blocking) will fail immediately if lock can't be obtained
(
  flock -n 200 || { echo "Another instance is running"; exit 1; }

  # Your actual job code goes here
  echo "Starting backup at $(date)"
  mysqldump --all-databases > /backups/backup.sql
  echo "Backup completed at $(date)"

) 200>/tmp/backup.lock

flock is more robust because:

  • The lock is released automatically when the script exits, even if it crashes or is killed
  • No stale lock files are left behind, so no cleanup logic is needed
  • The kernel enforces the lock, independent of which shell runs the script
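You can also invoke flock directly from the crontab entry, with no changes to the script itself (paths and schedule are illustrative):

```
*/30 * * * * /usr/bin/flock -n /tmp/backup.lock /backup/scripts/backup.sh
```

If the previous run still holds the lock, -n makes flock exit immediately instead of queuing the job.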

Database-Based Locking

For systems where file-based locks aren't appropriate, use database locks:

#!/bin/bash

DB_HOST="localhost"
DB_USER="backup"
DB_PASS="password"
DB_NAME="system"
JOB_NAME="backup"

# Try to acquire the lock by inserting a row; a UNIQUE key on job_name
# makes the INSERT fail when another instance already holds the lock
# (ON DUPLICATE KEY UPDATE would always succeed and never block)
LOCK_QUERY="INSERT INTO job_locks (job_name, started_at) VALUES ('$JOB_NAME', NOW());"

mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "$LOCK_QUERY"

if [ $? -ne 0 ]; then
  echo "Failed to acquire database lock"
  exit 1
fi

# Function to release lock on exit
release_lock() {
  DELETE_QUERY="DELETE FROM job_locks WHERE job_name='$JOB_NAME';"
  mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "$DELETE_QUERY"
}

trap release_lock EXIT

# Your actual job code
echo "Starting backup at $(date)"
mysqldump --all-databases > /backups/backup.sql
echo "Backup completed at $(date)"

Language-Specific Solutions

Different programming languages offer specialized approaches:

Python

import os
import sys
import time

LOCK_FILE = "/tmp/backup.lock"

def acquire_lock():
    if os.path.exists(LOCK_FILE):
        try:
            with open(LOCK_FILE, 'r') as f:
                pid = int(f.read())
            # Signal 0 checks whether the process exists without killing it
            os.kill(pid, 0)
            print("Another instance is running")
            return False
        except (ProcessLookupError, ValueError):
            # Process doesn't exist or the PID is invalid; remove stale lock
            os.remove(LOCK_FILE)
        except PermissionError:
            # Process exists but belongs to another user
            print("Another instance is running")
            return False

    # Create lock file with our PID
    with open(LOCK_FILE, 'w') as f:
        f.write(str(os.getpid()))
    return True

def release_lock():
    if os.path.exists(LOCK_FILE):
        os.remove(LOCK_FILE)

# Exit before entering try/finally so a failed acquire doesn't delete
# the lock file belonging to the other instance
if not acquire_lock():
    sys.exit(1)

try:
    # Your actual job code
    print(f"Starting backup at {time.ctime()}")
    # Run backup commands
    print(f"Backup completed at {time.ctime()}")
finally:
    release_lock()

Node.js

const fs = require('fs');

const LOCK_FILE = '/tmp/backup.lock';

async function acquireLock() {
  try {
    // Check if lock exists
    if (fs.existsSync(LOCK_FILE)) {
      // process.kill requires a numeric PID; a string would throw and be
      // mistaken for a dead process
      const pid = parseInt(fs.readFileSync(LOCK_FILE, 'utf-8').trim(), 10);

      // Check if process is running
      try {
        process.kill(pid, 0);
        console.log('Another instance is running');
        return false;
      } catch (e) {
        // Process doesn't exist, remove lock
        fs.unlinkSync(LOCK_FILE);
      }
    }

    // Create lock file
    fs.writeFileSync(LOCK_FILE, process.pid.toString());
    return true;
  } catch (err) {
    console.error('Lock error:', err);
    return false;
  }
}

function releaseLock() {
  try {
    if (fs.existsSync(LOCK_FILE)) {
      fs.unlinkSync(LOCK_FILE);
    }
  } catch (err) {
    console.error('Error releasing lock:', err);
  }
}

(async () => {
  if (!await acquireLock()) {
    process.exit(1);
  }

  try {
    console.log(`Starting backup at ${new Date()}`);
    // Run backup commands
    console.log(`Backup completed at ${new Date()}`);
  } finally {
    releaseLock();
  }
})();

Cron Configuration Best Practices

Set Appropriate Time Intervals

Make sure cron intervals allow jobs to complete:

# If backup takes 1 hour, schedule every 2 hours minimum
0 */2 * * * /backup/scripts/backup.sh

# Don't schedule more frequently than execution time
# Every 5 minutes when job takes 30 minutes = problem
*/5 * * * * /backup/scripts/backup.sh  # BAD

Add Logging and Monitoring

#!/bin/bash

LOCK_FILE="/tmp/backup.lock"
LOG_FILE="/var/log/backup.log"

if [ -f "$LOCK_FILE" ]; then
  echo "[$(date)] Backup already running - skipping" >> "$LOG_FILE"
  exit 1
fi

touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

echo "[$(date)] Backup started" >> "$LOG_FILE"
# Your backup code
echo "[$(date)] Backup completed" >> "$LOG_FILE"

Alert on Lock Conflicts

Monitor and alert when concurrent execution is prevented:

#!/bin/bash

LOCK_FILE="/tmp/backup.lock"
ALERT_EMAIL="[email protected]"

if [ -f "$LOCK_FILE" ]; then
  # Send alert
  echo "Backup conflict detected at $(date)" | \
    mail -s "Cron Job Overlap Alert" "$ALERT_EMAIL"
  exit 1
fi

touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

# Your backup code

Testing Concurrency Prevention

Test your implementation:

# Simulate long-running job
#!/bin/bash
LOCK_FILE="/tmp/test.lock"

if [ -f "$LOCK_FILE" ]; then
  echo "Already running"
  exit 1
fi

touch "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

echo "Job started at $(date)"
sleep 60  # Simulate 60-second job
echo "Job completed at $(date)"

Then run it multiple times quickly:

# Start first instance
./job.sh &

# Try to start second instance immediately
./job.sh  # Should be blocked

# Wait for first to complete
wait

Which Method to Use

Method         Pros                  Cons                               Best For
Lock file      Simple, reliable      File system dependent              Most cases
Process check  No cleanup needed     Less reliable with similar names   Simple jobs
flock          Automatic cleanup     Requires flock utility             Robust implementations
Database lock  Distributed systems   More complex                       Multi-server setups

Recommendation: Use the lock file with PID and timeout for most applications. It's simple, reliable, and handles edge cases well.
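A minimal sketch of that recommendation, combining the PID check and the timeout from the examples above (paths and timeout are illustrative):

```shell
#!/bin/bash

LOCK_FILE="/tmp/backup.lock"
LOCK_TIMEOUT=7200  # treat locks older than 2 hours as stale

if [ -f "$LOCK_FILE" ]; then
  LOCK_PID=$(cat "$LOCK_FILE")
  LOCK_TIME=$(stat -c%Y "$LOCK_FILE" 2>/dev/null || stat -f%m "$LOCK_FILE")
  LOCK_AGE=$(( $(date +%s) - LOCK_TIME ))

  # Block only if the recorded process is alive AND the lock is fresh
  if kill -0 "$LOCK_PID" 2>/dev/null && [ "$LOCK_AGE" -lt "$LOCK_TIMEOUT" ]; then
    echo "Backup job is already running (PID: $LOCK_PID, age: ${LOCK_AGE}s)"
    exit 1
  fi

  echo "Removing stale lock file"
  rm -f "$LOCK_FILE"
fi

echo $$ > "$LOCK_FILE"
trap 'rm -f "$LOCK_FILE"' EXIT

echo "Starting backup at $(date)"
# ... backup commands ...
```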

Preventing concurrent execution of the same cron job is essential for data integrity and system stability. Implementing proper locking mechanisms ensures that your scheduled jobs complete their work without conflicts, regardless of how long they take to run.
