Your Laravel queue worker has been running for three days straight. Memory usage has crept from 80 MB to 600 MB. Jobs are processing slower than usual — or silently failing. This is the Laravel queue worker memory leak problem, and the fix is two flags you're probably not using.
## Why Laravel queue workers accumulate memory
PHP wasn't designed for long-running processes. Every time a job runs, it can leave behind static state, cached Eloquent models, event listeners registered mid-flight, or objects the garbage collector never releases.
Laravel itself caches things across requests — bindings resolved from the container, model attribute casters, relationship definitions. In a web request this is fine: the process dies when the response is sent. In a queue worker, nothing dies. The process lives on, and those caches grow.
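A contrived sketch of the pattern (the class name and cache are hypothetical, purely to illustrate how static state survives across jobs):

```php
<?php

// Hypothetical job class, for illustration only.
// In a web request, static state dies with the process.
// In a queue worker, it survives every job and keeps growing.
class GenerateReport
{
    /** @var array<string, string> lives as long as the worker process */
    private static array $cache = [];

    public function handle(): void
    {
        // ~1 MB retained per job, never released
        self::$cache[uniqid()] = str_repeat('x', 1024 * 1024);

        // ... actual job work ...
    }
}
```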
The result is a memory profile that looks like a slow escalator: up, up, up, plateau, crash.
The solution isn't to find every possible memory leak (you won't). It's to give workers a graceful exit strategy so they restart before things go wrong.
## The --max-jobs flag
--max-jobs tells the worker to stop after processing a set number of jobs:
```shell
php artisan queue:work redis --max-jobs=500
```
After 500 jobs, the worker exits cleanly. Supervisor restarts it immediately. Fresh memory footprint. Any accumulated static state is gone.
This is ideal for high-throughput queues where you're processing hundreds of jobs per hour. The worker cycles regularly regardless of time.
The default is 0 — unlimited. Without this flag, your worker will run until something forces it to stop.
## The --max-time flag
--max-time stops the worker after a set number of seconds, regardless of how many jobs it has processed:
```shell
php artisan queue:work redis --max-time=3600
```
After 3600 seconds (one hour), the worker exits gracefully and Supervisor restarts it.
This covers the low-traffic scenario where --max-jobs alone isn't enough. If you're only processing 10 jobs per hour, you might never hit 500 jobs — but the process still accumulates memory over days of running. --max-time catches that.
Critically, the worker won't cut off a job mid-execution. When --max-time is hit, the worker finishes its current job, then exits cleanly. No data corruption, no partial state.
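Conceptually, the worker loop looks something like this. A simplified sketch, not Laravel's actual implementation — `$queue` and `$job` stand in for the real queue connector and job objects:

```php
// Simplified model of the worker loop, showing where limits are enforced.
$maxJobs = 500;
$maxTime = 3600;
$startedAt = time();
$jobsProcessed = 0;

while (true) {
    $job = $queue->pop(); // hypothetical: fetch next job, or null after --sleep

    if ($job !== null) {
        $job->handle();   // always runs to completion; never interrupted
        $jobsProcessed++;
    }

    // Both limits are checked only here, between jobs.
    if ($maxJobs > 0 && $jobsProcessed >= $maxJobs) {
        exit(0); // clean exit; Supervisor restarts the process
    }
    if ($maxTime > 0 && (time() - $startedAt) >= $maxTime) {
        exit(0);
    }
}
```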
## Using both together
The real pattern is both flags combined:
```shell
php artisan queue:work redis \
    --max-jobs=500 \
    --max-time=3600 \
    --sleep=3 \
    --tries=3
```
--max-jobs=500 handles high-traffic queues that need frequent cycling. --max-time=3600 acts as a safety net for low-traffic queues that may not hit the job limit for hours. Together they cover both scenarios — you'll never have a worker run unbounded again.
I use --sleep=3 as the polling interval for queues without activity, and --tries=3 so transient failures retry before landing in the failed jobs table.
## Supervisor configuration
Supervisor needs autorestart=true to actually bring the worker back after it exits. Without it, --max-jobs and --max-time simply stop your workers, and nothing restarts them.
```ini
; /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/html/artisan queue:work redis --max-jobs=500 --max-time=3600 --sleep=3 --tries=3
autostart=true
autorestart=true
stopasgroup=true
killasgroup=true
numprocs=4
redirect_stderr=true
stdout_logfile=/var/www/html/storage/logs/worker.log
stopwaitsecs=3600
```
A few things worth knowing here:
stopwaitsecs tells Supervisor how long to wait for a graceful shutdown (after sending SIGTERM) before force-killing the process. It must be at least as long as your longest-running job, or Supervisor will kill a worker mid-job during a stop or redeploy. The generous 3600 here comfortably covers jobs capped at a 60-second timeout.
stopasgroup=true and killasgroup=true ensure child processes spawned by the worker are also terminated cleanly. Without these, orphaned PHP processes can linger.
numprocs=4 runs four parallel workers. Scale this to your server's CPU count and queue volume.
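After saving the config, reload Supervisor so it picks up the new program. This is the standard supervisorctl workflow; the program name matches the config above:

```shell
sudo supervisorctl reread                      # parse new/changed config files
sudo supervisorctl update                      # apply them (starts the program group)
sudo supervisorctl status "laravel-worker:*"   # confirm all processes are RUNNING
```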
## Laravel Horizon configuration
If you're using Horizon, maxJobs and maxTime in each supervisor pool's configuration map directly to the same behaviour:
```php
// config/horizon.php
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',
            'queue' => ['default', 'emails'],
            'balance' => 'auto',
            'minProcesses' => 1,
            'maxProcesses' => 8,
            'maxJobs' => 500,  // restart after N jobs
            'maxTime' => 3600, // restart after N seconds
            'tries' => 3,
            'timeout' => 60,
        ],
    ],
],
```
Horizon handles process management itself — you don't need Supervisor for individual workers when Horizon is running. The maxJobs and maxTime values control exactly the same exit behaviour as the CLI flags.
I set balance => 'auto' so Horizon can scale processes up when a queue builds up, and minProcesses => 1 so there's always at least one worker available even during quiet periods.
## Gotchas and edge cases
stopwaitsecs must be at least as large as your longest job. If a job runs for 5 minutes and stopwaitsecs is 60, Supervisor will force-kill the worker mid-job when it tries to restart. Set stopwaitsecs to your job timeout, not your restart interval.
--max-time is checked between jobs, not mid-job. If a job takes 30 minutes and your --max-time=3600, the worker might run for up to 5400 seconds (3600 + the 1800-second job) before restarting. This is expected — it's graceful, not a hard kill.
The --memory flag is a different mechanism. --memory=256 causes the worker to exit if it exceeds 256 MB of memory usage. I use this as a backstop alongside --max-jobs and --max-time, not instead of them. Time and job limits are predictable; memory limits catch the unexpected.
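Combining all three, the worker command might look like this. The 256 MB threshold is an example value; keep it below PHP's memory_limit and tune it to your jobs:

```shell
php artisan queue:work redis \
    --max-jobs=500 \
    --max-time=3600 \
    --memory=256 \
    --sleep=3 \
    --tries=3
```

Whichever limit trips first wins. In every case the worker finishes its current job, exits cleanly, and Supervisor restarts it.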
Zero means unlimited. Both --max-jobs=0 and --max-time=0 disable their respective limits. If you accidentally set either to 0 in your Horizon config, you've turned the feature off.
## Wrapping up
Add --max-jobs=500 --max-time=3600 to every queue:work command in your Supervisor config today. If you're on Horizon, set maxJobs and maxTime in each supervisor pool in config/horizon.php. Workers that restart themselves don't crash your queues — and you stop firefighting at 2am.
## FAQ
### How do I choose the right value for --max-jobs?
Start with 500 for high-throughput queues (100+ jobs/hour). Use 1000–2000 if your jobs are lightweight (processing typically finishes in < 1 second). The goal is to recycle frequently enough to prevent memory creep — whether that's every 500 jobs or every 2000 is workload-dependent. Monitor peak memory in Horizon and adjust down if you see it climbing.
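If you're not on Horizon, one quick way to watch worker memory from the shell (assumes a Linux box with GNU procps; RSS is reported in kilobytes):

```shell
# PID, resident memory (KB), uptime, and command for each worker,
# refreshed every 10 seconds
watch -n 10 "ps -C php -o pid,rss,etime,args --sort=-rss | grep queue:work"
```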
### What's the difference between --max-time and stopwaitsecs in Supervisor?
--max-time tells the worker to exit cleanly after N seconds. stopwaitsecs tells Supervisor how long to wait for a graceful shutdown before force-killing the process. Set stopwaitsecs to at least your longest job's runtime (the worker's --timeout value is a good baseline) so Supervisor doesn't cut off a job that hasn't finished yet.
### Will --max-time cut off a running job mid-execution?
No. The worker checks the time limit between jobs, not during. If a job takes 10 minutes and your --max-time=3600, the worker can run for up to 4200 seconds before restarting — the job finishes cleanly, then the worker exits.
### Can I use --memory instead of --max-jobs and --max-time?
The --memory flag (e.g., --memory=256) causes the worker to exit if it exceeds that memory threshold. It's useful as a backstop alongside --max-jobs and --max-time, but don't rely on it alone — predictable recycling based on time and job count is more reliable than waiting for memory to hit a threshold.