
Laravel Job Class Deep Dive – Retries, Timeouts, Failed Jobs, Chaining & Batching

Shahroz Javed
Mar 13, 2026

Essential Job Properties

Most developers only use the handle() method in their jobs. But the job class supports a rich set of properties that control exactly how the job behaves in production — retries, timeouts, delays, connections, and more.

<?php

namespace App\Jobs;

use App\Models\Video;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessVideoUpload implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Max number of attempts before the job is marked as failed
    public int $tries = 3;

    // Max unhandled exceptions allowed before failing (useful with retryUntil)
    public int $maxExceptions = 2;

    // Timeout in seconds: kill the job if it runs longer
    public int $timeout = 120;

    // Seconds to wait between retries (backoff)
    public int $backoff = 60;

    // Delete the job from the queue if the related model no longer exists
    public bool $deleteWhenMissingModels = true;

    public function __construct(protected Video $video)
    {
        // $queue, $connection, and $delay are already declared by the
        // Queueable trait, so set them here rather than redeclaring the
        // properties (a conflicting redeclaration is a PHP fatal error)
        $this->onQueue('media');        // which queue to use
        $this->onConnection('redis');   // which connection (database, redis, sqs...)
        $this->delay(30);               // delay in seconds before the first attempt
    }

    public function handle(): void
    {
        // process the video...
    }
}
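These defaults can also be overridden per dispatch. A minimal sketch, assuming the ProcessVideoUpload job and a $video model from above (the 'media-priority' queue name is illustrative):

```php
use App\Jobs\ProcessVideoUpload;

// Options set at dispatch time take precedence over the job's defaults
ProcessVideoUpload::dispatch($video)
    ->onQueue('media-priority')        // override the queue
    ->onConnection('redis')            // override the connection
    ->delay(now()->addSeconds(30));    // delay the first attempt
```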

deleteWhenMissingModels

This is a critical property that many developers don't know about. When your job holds an Eloquent model and that model gets deleted before the job runs, Laravel will throw a ModelNotFoundException. Set $deleteWhenMissingModels = true to silently discard the job instead of failing.

public bool $deleteWhenMissingModels = true;

Retries & Backoff Strategy

When a job fails (throws an exception), Laravel can automatically retry it. Getting the retry strategy right is important in production — a bad strategy can cause a "retry storm" that overwhelms your server.

Simple Retry

public int $tries = 5;       // try up to 5 times
public int $backoff = 30;    // wait 30 seconds between each retry

Exponential Backoff (The Right Way)

For external API calls (payment gateways, email services), use exponential backoff. Wait longer between each retry so you don't hammer a service that's already struggling. Define $backoff as an array:

// Retry 1 → wait 10s, Retry 2 → wait 30s, Retry 3 → wait 60s
public array $backoff = [10, 30, 60];
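If the backoff values need to be computed at runtime, Laravel also accepts a backoff() method instead of the property. Returning an array behaves the same way, and once the array is exhausted the last value is reused for any remaining retries:

```php
public function backoff(): array
{
    // Retry 1 → 10s, retry 2 → 30s, every later retry → 60s
    return [10, 30, 60];
}
```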

retryUntil — Time-Based Retries

Instead of a fixed number of tries, you can retry until a deadline is reached. This is perfect for jobs that need to succeed within a specific time window:

use DateTime;

public function retryUntil(): DateTime
{
    // Keep retrying for up to 2 hours
    return now()->addHours(2);
}
⚠️ When both are defined, retryUntil() takes precedence and the $tries property is ignored. Combine it with $maxExceptions to cap the total number of unhandled exceptions; otherwise an unrecoverable error (like a 404 response from an API) would keep retrying for the entire time window.

maxExceptions

$maxExceptions limits the total number of unhandled exceptions allowed before the job is permanently failed — even if you're still within the retry window:

public int $maxExceptions = 3;

public function retryUntil(): DateTime
{
    return now()->addHours(6);
}
// Will retry for up to 6 hours, BUT fails permanently after 3 exceptions

Handling Failed Jobs

When a job exhausts all retries, it's moved to the failed_jobs table. First, create the table:

php artisan queue:failed-table
php artisan migrate

The failed() Method

Define a failed() method on your job to run custom cleanup or notification logic when the job permanently fails:

use Throwable;

public function failed(Throwable $exception): void
{
    // Notify the admin, rollback partial work, log the error...
    \Log::error("Job failed for user {$this->user->id}: " . $exception->getMessage());

    // Notify the team via Slack or email
    \Notification::route('mail', config('app.admin_email'))
        ->notify(new JobFailedNotification($this->user, $exception));
}

Managing Failed Jobs via Artisan

# List all failed jobs
php artisan queue:failed

# Retry a specific failed job by its ID
php artisan queue:retry 5

# Retry all failed jobs
php artisan queue:retry all

# Retry all failed jobs from a specific queue
php artisan queue:retry --queue=emails

# Delete a specific failed job
php artisan queue:forget 5

# Delete all failed jobs
php artisan queue:flush

Ignoring Specific Exceptions

Sometimes you know certain exceptions are unrecoverable (like a "user not found" error), so retrying is pointless. Laravel has no dedicated job property for this; instead, catch the exception yourself and fail the job immediately, skipping any remaining retries:

use App\Exceptions\UserDeletedException;

public function handle(): void
{
    try {
        // ... do the work
    } catch (UserDeletedException $e) {
        // Unrecoverable: mark the job failed now, no further retries
        $this->fail($e);
    }
}

Or handle it directly in handle():

public function handle(): void
{
    if (! User::find($this->userId)) {
        // Silently delete the job; no retry needed
        // (delete() returns void, so don't `return $this->delete();`)
        $this->delete();

        return;
    }
    // ...
}

InteractsWithQueue Methods

The InteractsWithQueue trait gives your job runtime control over itself. These are some of the most powerful but least documented methods:

public function handle(): void
{
    // Check how many times this job has been attempted
    if ($this->attempts() > 2) {
        // do something different on the 3rd try
    }

    // Manually release the job back to the queue (with optional delay in seconds)
    // This is NOT the same as failing — it goes back and tries again later
    $this->release(30);

    // Manually delete this job from the queue (success, no failure record)
    $this->delete();

    // Mark as failed right now without waiting for more retries
    $this->fail(new \Exception('Something went wrong'));
}
⚠️ release() counts as an attempt. If you release a job too many times it will eventually hit $tries and be marked as failed.
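A common use for these methods together is waiting on an external resource: release the job with a growing delay while the resource isn't ready, and give up after a few attempts. A rough sketch; isReady() and the limit of 5 are illustrative, not Laravel APIs:

```php
public function handle(): void
{
    if (! $this->video->isReady()) {
        if ($this->attempts() >= 5) {
            // Give up: mark as permanently failed
            $this->fail(new \RuntimeException('Video never became ready'));

            return;
        }

        // Back off harder each time: 30s, 60s, 90s...
        $this->release(30 * $this->attempts());

        return;
    }

    // process the video...
}
```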

Job Chaining

Job chaining lets you run jobs in sequence — the next job only runs if the previous one succeeded. If any job in the chain fails, the rest are cancelled.

use Illuminate\Support\Facades\Bus;

Bus::chain([
    new ProcessVideoUpload($video),
    new GenerateThumbnail($video),
    new NotifySubscribers($video),
])->dispatch();

Chain with Delay

Bus::chain([
    new ProcessVideoUpload($video),
    (new GenerateThumbnail($video))->delay(now()->addMinutes(5)),
    new NotifySubscribers($video),
])->dispatch();

Chain Failure Callback

Run a callback if any job in the chain fails:

use Throwable;

Bus::chain([
    new ProcessVideoUpload($video),
    new GenerateThumbnail($video),
])->catch(function (Throwable $e) use ($video) {
    // Notify uploader that processing failed
    $video->update(['status' => 'failed']);
    \Log::error('Video chain failed: ' . $e->getMessage());
})->dispatch();

Appending to a Chain from Within a Job

A job can append more jobs to the existing chain at runtime:

public function handle(): void
{
    // Process the video...

    // Add another job to the end of this chain
    $this->chain([
        new CleanupTempFiles($this->video),
    ]);
}

Job Batching

Batching is different from chaining. Jobs in a batch are independent and can run in parallel across your workers instead of one by one. You get callbacks for when the whole batch completes successfully, when a job fails, and when the batch finishes regardless of outcome. This is perfect for processing large datasets.

Setup

php artisan queue:batches-table
php artisan migrate

Create Batchable Jobs

Add the Batchable trait to jobs that will run in a batch:

use Illuminate\Bus\Batchable;

class ImportCsvRow implements ShouldQueue
{
    use Batchable, Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function handle(): void
    {
        // Check if the batch was cancelled before doing work
        if ($this->batch()->cancelled()) {
            return;
        }

        // Process this row...
    }
}

Dispatch a Batch

use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

$batch = Bus::batch([
    new ImportCsvRow($rows[0]),
    new ImportCsvRow($rows[1]),
    new ImportCsvRow($rows[2]),
    // ... hundreds more
])->then(function (Batch $batch) {
    // All jobs completed successfully
    \Log::info('Import complete: ' . $batch->totalJobs . ' rows processed.');

})->catch(function (Batch $batch, Throwable $e) {
    // First job failure — other jobs still run
    \Log::error('A row failed: ' . $e->getMessage());

})->finally(function (Batch $batch) {
    // Always runs when batch is finished (success or failure)
    ImportJob::find($batch->id)?->update(['completed_at' => now()]);

})->name('CSV Import')
  ->allowFailures()  // don't cancel batch on first failure
  ->onQueue('imports')
  ->dispatch();

Monitoring a Batch

// Store batch ID somewhere (e.g. in your jobs table)
$batchId = $batch->id;

// Later, check the batch status
$batch = Bus::findBatch($batchId);

echo $batch->totalJobs;       // total jobs in batch
echo $batch->processedJobs(); // jobs processed so far
echo $batch->failedJobs;      // failed count
echo $batch->progress();      // percentage complete (0-100)
$batch->cancelled();          // true if cancelled
$batch->finished();           // true if done
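One practical use of these accessors is polling batch progress from a frontend. A hedged sketch of a route returning the status as JSON; the URL and response shape are assumptions, not part of Laravel:

```php
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\Facades\Route;

Route::get('/imports/{batchId}/status', function (string $batchId) {
    $batch = Bus::findBatch($batchId);

    // findBatch() returns null for unknown IDs
    abort_if($batch === null, 404);

    return response()->json([
        'progress'  => $batch->progress(),       // 0-100
        'processed' => $batch->processedJobs(),
        'failed'    => $batch->failedJobs,
        'finished'  => $batch->finished(),
    ]);
});
```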

Cancel a Batch

$batch->cancel();
// Remaining unstarted jobs will not run
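A batch can also cancel itself from inside one of its jobs, for example when a row reveals a problem that invalidates the rest of the import. A sketch assuming the ImportCsvRow job stores its row as $this->row; hasFatalError() is illustrative:

```php
public function handle(): void
{
    if ($this->batch()->cancelled()) {
        return;
    }

    if ($this->row->hasFatalError()) {
        // Stop the whole import; remaining jobs will see cancelled() === true
        $this->batch()->cancel();

        return;
    }

    // Process this row...
}
```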

Conclusion

The job class is far more powerful than most tutorials show. Here's a recap of what you now know:

  • Use job properties like $tries, $timeout, $backoff, and $deleteWhenMissingModels to control job behavior without touching the worker command

  • Use array backoff ([10, 30, 60]) for exponential backoff on API calls

  • Use retryUntil() + $maxExceptions for time-window retries with a safety cap

  • Implement failed() to clean up or alert on permanent failure

  • Use Bus::chain() for sequential workflows — next step only runs if the previous succeeds

  • Use Bus::batch() for parallel processing — track progress and get completion callbacks
