Laravel Queue Chains vs Batches — When to Pick Which

Bus::chain runs sequentially with fail-fast; Bus::batch runs in parallel with completion hooks. Decision framework, transaction patterns, and gotchas inside.

Steven Richardson
· 10 min read

Bus::chain and Bus::batch live next to each other in the Laravel queue docs and look interchangeable at a glance. They are not. One enforces order and stops the rest of the work the moment something fails. The other runs in parallel, keeps going past failures, and gives you completion hooks you can build progress UI on. Picking the wrong one costs you either correctness or throughput.

This is the cheat-sheet I keep handing to teammates. The one-line rule: chain when each job depends on the previous one and you want to fail fast; batch when jobs are independent and you want parallelism plus a "we're done" hook.

If you're already running queues seriously in production, the rest of this stack — worker tuning, monitoring, supervisor — is covered in scaling Laravel queues in production.

What Bus::chain and Bus::batch actually do

Bus::chain accepts an ordered list of jobs and dispatches them one at a time. Job N starts only after job N-1 has executed successfully:

use App\Jobs\OptimizePodcast;
use App\Jobs\ProcessPodcast;
use App\Jobs\ReleasePodcast;
use Illuminate\Support\Facades\Bus;

Bus::chain([
    new ProcessPodcast($podcast),
    new OptimizePodcast($podcast),
    new ReleasePodcast($podcast),
])->dispatch();

Bus::batch dispatches a group of jobs that all become eligible to run as soon as the batch is created. Multiple workers pick them up in parallel and the batch tracks progress against a job_batches table:

use App\Jobs\ImportCsvChunk;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;

$batch = Bus::batch([
    new ImportCsvChunk(1, 100),
    new ImportCsvChunk(101, 200),
    new ImportCsvChunk(201, 300),
])->name('Import Customers')->dispatch();

return $batch->id;

Two practical setup notes. Batches need the job_batches table — generate the migration with php artisan make:queue-batches-table and run it. Each job that participates in a batch must use the Illuminate\Bus\Batchable trait so it can call $this->batch() to talk to the parent.

use Illuminate\Bus\Batchable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Queue\Queueable;

class ImportCsvChunk implements ShouldQueue
{
    use Batchable, Queueable;

    public function handle(): void
    {
        if ($this->batch()->cancelled()) {
            return;
        }

        // Import this slice of the file...
    }
}

If you're already using Queue::route() to centralise job-to-queue mapping, keep in mind that all jobs inside a single batch must run on the same connection and queue. The batch-level onConnection/onQueue win over per-job overrides.
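If you do need to route a whole batch, set the connection and queue at the batch level; the connection and queue names here are illustrative:

```php
use Illuminate\Support\Facades\Bus;

// Batch-level routing: every job in the batch runs on this connection
// and queue, overriding anything the individual jobs declare.
Bus::batch($jobs)
    ->onConnection('redis')  // illustrative connection name
    ->onQueue('imports')     // illustrative queue name
    ->name('Routed Import')
    ->dispatch();
```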

Failure semantics — fail-fast vs partial-fail

This is the most important difference. Pick the wrong model and you'll either ship a partially-applied state to production or burn through downstream API quotas after the upstream step has already failed.

A chain is fail-fast. The first job that throws (and exhausts its retries) terminates the chain. Subsequent jobs never dispatch:

Bus::chain([
    new ChargeCustomer($order),        // throws → stops here
    new MarkOrderPaid($order),         // never runs
    new SendOrderConfirmation($order), // never runs
])->catch(function (Throwable $e) {
    Log::error('Order pipeline failed', ['exception' => $e]);
})->dispatch();

A batch keeps going. By default, the first job failure marks the batch as cancelled and skips remaining jobs that check $this->batch()->cancelled() — but jobs already in flight will finish, and any job that doesn't check the cancellation flag will still run. If you want true fire-and-forget where individual failures don't poison the whole batch, opt in with allowFailures():

$batch = Bus::batch([
    new SendDigestEmail($user1),
    new SendDigestEmail($user2),
    new SendDigestEmail($user3),
])->allowFailures()
  ->then(function (Batch $batch) {
      // All eligible jobs completed (some may have failed)...
  })
  ->dispatch();

The failed_jobs table still records the individual failures, so you can retry them later with php artisan queue:retry-batch {uuid}.

The transaction trap with chains catches teams every time. If three chained jobs each write to the database and the second one fails halfway through, you've left rows in an inconsistent state. The chain's catch callback fires, but it doesn't roll anything back — by then the writes from the first job are committed. Wrap each job that touches multiple tables in a transaction inside its own handle():

public function handle(): void
{
    DB::transaction(function () {
        $this->order->update(['status' => 'paid']);
        $this->order->customer->ledger()->create([
            'amount' => $this->order->total,
        ]);
    });
}

For state that spans jobs, you have a harder choice — either compensating jobs (saga pattern) or a workflow engine. Chains aren't a transaction.
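A minimal sketch of the compensating-job approach, assuming hypothetical ProvisionSubscription and RefundCustomer jobs: the chain's catch queues the compensation rather than pretending a rollback is possible:

```php
use Illuminate\Support\Facades\Bus;

$orderId = $order->id; // capture a scalar, not the model or $this

Bus::chain([
    new ChargeCustomer($order),
    new ProvisionSubscription($order), // hypothetical follow-on step
])->catch(function (Throwable $e) use ($orderId) {
    // The charge is already committed; compensate, don't roll back.
    RefundCustomer::dispatch($orderId); // hypothetical compensating job
})->dispatch();
```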

Hooks and observability

Batches expose a full lifecycle: before, progress, then, catch, finally. Each receives the Batch instance, so you can update a UI, post to Slack, or kick off a follow-on workflow:

Bus::batch($jobs)
    ->before(function (Batch $batch) {
        // Batch row created, no jobs queued yet
    })
    ->progress(function (Batch $batch) {
        // One job finished — fire on every success
        broadcast(new BatchProgress($batch->id, $batch->progress()));
    })
    ->then(function (Batch $batch) {
        // All jobs completed without (uncaught) failures
    })
    ->catch(function (Batch $batch, Throwable $e) {
        // First failure within the batch
    })
    ->finally(function (Batch $batch) {
        // Always — success or failure
    })
    ->name('Nightly invoice export')
    ->dispatch();

Chains have one hook: catch. That's it. No success callback, no progress callback, no finalizer:

Bus::chain([...])->catch(function (Throwable $e) {
    // A job within the chain has failed
})->dispatch();

If you need a "this whole pipeline finished successfully" hook in a chain, the convention is to make the last job be the notification — SendOrderConfirmation runs only if everything before it succeeded, so its execution is the success signal.
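Sticking with the podcast jobs from earlier, that convention looks like this; reaching the final job is itself the "everything succeeded" signal:

```php
use Illuminate\Support\Facades\Bus;

Bus::chain([
    new ProcessPodcast($podcast),
    new OptimizePodcast($podcast),
    new ReleasePodcast($podcast),
    // This job only ever runs if every step above succeeded.
    new SendPodcastReleaseNotification($podcast),
])->catch(function (Throwable $e) {
    Log::error('Release pipeline failed', ['exception' => $e]);
})->dispatch();
```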

Two observability notes:

  • Always call ->name('Something Useful') on a batch. Horizon and Telescope both surface that name in their dashboards, and "Untitled Batch" multiplied by ten dispatches a day is unreadable. If you're running Horizon already, monitoring production Laravel queues with Horizon covers the metrics and alerting setup.
  • The closures in chain/batch hooks are serialized and executed by a worker later. Don't reference $this and don't capture huge objects — they hit the queue payload.

Loader jobs for huge batches

Bus::batch([... 50,000 jobs ...]) looks reasonable until you actually try it. You're either timing out the web request that creates them or pinning a single worker for several minutes serializing the array. The pattern Laravel documents — and the one I use — is the loader job.

Dispatch a small parent batch of 5–10 loader jobs. Each loader hydrates the batch with thousands of real jobs using $this->batch()->add():

$batch = Bus::batch([
    new LoadImportSlice(1, 10000),
    new LoadImportSlice(10001, 20000),
    new LoadImportSlice(20001, 30000),
])->name('Import Contacts')
  ->then(function (Batch $batch) {
      Log::info('Import finished', ['total' => $batch->totalJobs]);
  })
  ->dispatch();

class LoadImportSlice implements ShouldQueue
{
    use Batchable, Queueable;

    public function __construct(
        public int $from,
        public int $to,
    ) {}

    public function handle(): void
    {
        if ($this->batch()->cancelled()) {
            return;
        }

        Contact::query()
            ->whereBetween('id', [$this->from, $this->to])
            ->lazyById(500)
            ->each(function (Contact $contact) {
                $this->batch()->add(new ImportContact($contact->id));
            });
    }
}

Three things make this work. lazyById keeps memory flat regardless of slice size. $this->batch()->add() only works from inside a job that's already in the batch. And totalJobs updates as new jobs are added, so the progress() percentage stays meaningful — you just can't trust it during the loading phase.

For huge dataset processing more generally, processing CSVs with Laravel lazy collections covers the memory side of the same problem.

Combining: chain-inside-batch vs batch-inside-chain

Both compositions are supported. The pattern you reach for depends on which axis the dependencies run on.

Chain inside batch — for several independent pipelines that should run in parallel. Each inner array is a chain; the outer batch tracks completion across all of them:

Bus::batch([
    [
        new ReleasePodcast($podcastA),
        new SendPodcastReleaseNotification($podcastA),
    ],
    [
        new ReleasePodcast($podcastB),
        new SendPodcastReleaseNotification($podcastB),
    ],
])->then(function (Batch $batch) {
    // Both pipelines completed
})->dispatch();

I use this when I'm processing N independent entities through the same multi-step pipeline. Each entity's chain is sequential within itself; entities run in parallel.

Batch inside chain — for an ordered pipeline where one of the steps fans out to parallel work. The batch is treated as a single chain step that completes when all of its jobs do:

Bus::chain([
    new FlushPodcastCache,
    Bus::batch([
        new ReleasePodcast(1),
        new ReleasePodcast(2),
        new ReleasePodcast(3),
    ]),
    Bus::batch([
        new SendPodcastReleaseNotification(1),
        new SendPodcastReleaseNotification(2),
        new SendPodcastReleaseNotification(3),
    ]),
])->dispatch();

The chain enforces "flush cache → release everything → notify everyone" in that order, but each phase parallelises across the items.

A word of caution on this composition: the chain only advances once the batch step finishes, so a single failed job in that batch (without allowFailures()) cancels the batch and halts the outer chain. Batches inside chains are great for scatter-gather; they don't make the pipeline any more fault-tolerant.

When neither is the right tool

A few cases where I reach for something else:

  • Sub-second sequential work after the response — use defer() instead of a chain. Using Laravel 12 defer for post-response HTTP batches covers this — no queue worker required.
  • Long-running stateful workflows with retries and human approval steps — use Laravel Workflow or a saga library. Chains have no durable state model beyond the queued payload.
  • High-frequency single jobs — just dispatch the job. Wrapping a single job in Bus::chain adds a row to the bus and buys you nothing.
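For the first case, a sketch of the defer() alternative; the logging call is a stand-in for any sub-second follow-up work:

```php
use function Illuminate\Support\defer;

Route::post('/orders', function (Request $request) {
    $order = Order::create($request->validated());

    // Runs after the response is sent to the client; no queue worker involved.
    defer(fn () => Log::info('order.created', ['id' => $order->id]));

    return $order;
});
```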

Gotchas and Edge Cases

Chain hooks can't see $this. The closure passed to ->catch() is serialized and executed later by a worker. Don't reference $this from your dispatching context — it'll either error on serialize or silently hold a stale snapshot.
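The safe pattern is to copy what you need into a local scalar before building the closure:

```php
$orderId = $order->id; // capture the ID, not $this or the whole model

Bus::chain($jobs)->catch(function (Throwable $e) use ($orderId) {
    // Only the scalar travels in the serialized closure payload.
    Log::error('Chain failed', [
        'order_id'  => $orderId,
        'exception' => $e,
    ]);
})->dispatch();
```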

Batches wrap each job in a database transaction. This is in the docs but easy to miss: "since batched jobs are wrapped within database transactions, database statements that trigger implicit commits should not be executed within the jobs." If your batched job does DDL (creating temp tables, altering schema) you'll see the implicit commit close the transaction unexpectedly.

$batch->cancel() does not stop in-flight jobs. It marks the batch as cancelled and prevents future jobs from starting. Jobs already dispatched to a worker will run to completion unless they explicitly check $this->batch()->cancelled() — which is why the SkipIfBatchCancelled middleware exists. Add it to long-running batched jobs:

use Illuminate\Queue\Middleware\SkipIfBatchCancelled;

public function middleware(): array
{
    return [new SkipIfBatchCancelled];
}

Prune the job_batches table or it grows forever. Schedule queue:prune-batches daily. The default keeps 24 hours of finished batches. Pass --unfinished=72 and --cancelled=72 to clean up dead-end rows that the default doesn't touch:

Schedule::command('queue:prune-batches --hours=24 --unfinished=72 --cancelled=72')->daily();

Retry behaviour is per-job, not per-construct. Chains and batches don't have their own tries value. Each job inside them follows the same retry rules as a normally-dispatched job — $tries, $timeout, the --tries flag on queue:work. The chain or batch fails when an inner job exhausts its individual retries. Worth pairing with stop queue workers from leaking memory with --max-jobs and --max-time if you're running long batches against a long-lived worker.
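Concretely, the retry knobs live on each job class; the values here are illustrative:

```php
class ImportCsvChunk implements ShouldQueue
{
    use Batchable, Queueable;

    public int $tries = 3;     // attempts before this job counts as failed
    public int $timeout = 120; // seconds allowed per attempt

    // Escalating delay (in seconds) between retry attempts.
    public function backoff(): array
    {
        return [10, 60, 120];
    }
}
```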

Mid-chain appendToChain and prependToChain only work from inside a chained job. They're callable as $this->appendToChain(new NextJob) and they modify the running chain. Useful when a job's output decides what comes next, but easy to misuse — keep the dynamism shallow or you lose the mental model of what runs when.
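A sketch of the shallow-dynamism case, with hypothetical FlagForReview and ArchiveReport jobs and a hypothetical buildReport() helper — the running job inspects its own output and schedules what comes next:

```php
public function handle(): void
{
    $report = $this->buildReport(); // hypothetical helper

    if ($report->needsReview) {
        // prependToChain runs this next, before the rest of the chain.
        $this->prependToChain(new FlagForReview($report->id));
    }

    // appendToChain runs this after everything already in the chain.
    $this->appendToChain(new ArchiveReport($report->id));
}
```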

Wrapping Up

Chains are the right tool when order and fail-fast matter; batches are the right tool when independence and completion-hooks matter. The combinations cover almost everything else — chain-inside-batch for parallel pipelines, batch-inside-chain for ordered fan-out.

Two natural follow-ons: if you haven't already centralised your queue config, Laravel 13's Queue::route() cleans up the connection/queue boilerplate that's easy to scatter across batched jobs. And once you're running batches in production, monitoring production Laravel queues with Horizon is the dashboard you actually need.

FAQ

What's the difference between queue chains and batches in Laravel?

A chain runs jobs sequentially — each job waits for the previous one to succeed, and the chain stops on the first failure. A batch runs jobs in parallel as soon as workers are available, keeps going past failures (or stops on first failure depending on allowFailures()), and exposes lifecycle hooks like then and finally that fire when the whole group is done. Pick chain when ordering matters; pick batch when you need parallelism and a completion callback.

When should I choose Bus::chain over Bus::batch?

Choose Bus::chain when each job depends on the result of the previous one, when partial completion would leave your data in a broken state, and when you want execution to stop the moment something fails. Choose Bus::batch when the jobs are independent of each other, when you want to maximise throughput by running them in parallel, or when you need the then/catch/finally lifecycle hooks to drive a UI or trigger downstream work.

Can I combine chains and batches in Laravel?

Yes — both directions. Pass arrays inside Bus::batch to run multiple chains in parallel under one batch (chain-inside-batch). Or pass Bus::batch(...) as an item inside Bus::chain to make an ordered pipeline where one step fans out to parallel work (batch-inside-chain). The chain only advances after the inner batch finishes, so a failed batch step (without allowFailures()) will halt the outer chain.

How do I handle a failure in a queue chain?

Attach a ->catch(function (Throwable $e) { ... }) callback before ->dispatch(). It fires when any job in the chain fails after exhausting its retries, and remaining chained jobs will not run. The closure is serialized and executed by a worker, so don't reference $this or capture large objects. For state-consistency, wrap database writes inside each job's handle() in a DB::transaction — the chain's catch does not roll back any work the previous jobs already committed.

Do chains and batches share retry behaviour?

Retries are per-job, not per-construct. Each job inside a chain or a batch follows its own retry rules — the $tries and $timeout properties on the job class, or the --tries flag on the worker. The chain or batch only sees a failure once an individual job has exhausted its retries. There is no chain-wide or batch-wide retry policy, although queue:retry-batch {uuid} will replay all the failed jobs from one batch in a single command.

Is there a limit on how many jobs I can put in a batch?

Laravel doesn't impose a hard limit, but the practical ceilings are your serialisation cost and your job_batches table. Dispatching 50,000 jobs in one Bus::batch([...]) call is slow and risks timing out the request. The recommended pattern is to dispatch a handful of "loader" jobs that each call $this->batch()->add() to hydrate the batch from inside a worker — this keeps memory flat and the dispatching path fast. Schedule queue:prune-batches daily so the metadata table doesn't grow without bound.

Steven Richardson

CTO at Digitonic. Writing about Laravel, architecture, and the craft of leading software teams from the west coast of Scotland.