Adding Redis to a small Laravel app for the sole purpose of running WebSockets has always felt like overkill. Extra deploy target, extra failure mode, extra monthly bill. The Laravel 13 Reverb database driver finally gives single-server deployments a way out — Redis becomes optional infrastructure rather than a hard dependency. Here's the production setup, and the trade-offs you need to know before flipping the switch.
If you've never wired up Reverb before, my walkthrough on real-time notifications with Laravel Reverb and Echo is the foundation this article assumes. From here we focus on the new database scaling driver and what it changes about your deployment shape.
Upgrade Reverb to the release that ships the database driver
Pull the latest Reverb release into your Laravel 13 project. The database scaling driver lives in the same package — you don't add a new dependency, you just update the existing one. Make sure you're on Laravel 13 first; if you're not, work through the Laravel 12 to 13 upgrade guide before starting on Reverb.
composer require laravel/reverb --update-with-dependencies
php artisan about | grep -i reverb
Confirm the version reported by php artisan about is the release that introduced the database driver. If you previously installed Reverb on an older Laravel, the existing config/reverb.php will not contain the new scaling.driver key — we'll publish the fresh config in the next step.
Publish and run the new Reverb migrations
The database driver introduces three Reverb-owned tables: one for connection state, one for channel subscriptions, and one for ping timestamps used to expire stale connections. Publish the new config and migrations, then run them against your primary database.
php artisan vendor:publish --tag=reverb-config --force
php artisan vendor:publish --tag=reverb-migrations
php artisan migrate
Open the published migration to see what was created. Each row in reverb_connections represents a live WebSocket; reverb_channels tracks subscriptions per channel; reverb_pings stores the last-seen timestamp the prune job uses to clear out dead sockets. None of this exists in your domain — treat them as system tables you don't touch directly.
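To make the shape of those tables concrete, here's an illustrative sketch of what the schema roughly looks like, based only on the descriptions above — the column names are my assumptions, not the package's actual migration, so check the published file for the real definitions:

```sql
-- Illustrative sketch only: column names are assumptions, not Reverb's real schema.
CREATE TABLE reverb_connections (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    connection_id VARCHAR(255) NOT NULL UNIQUE   -- one row per live WebSocket
);

CREATE TABLE reverb_channels (
    connection_id VARCHAR(255) NOT NULL,
    channel VARCHAR(255) NOT NULL                -- one row per channel subscription
);

CREATE TABLE reverb_pings (
    connection_id VARCHAR(255) NOT NULL,
    last_seen_at TIMESTAMP NOT NULL              -- read by the prune job
);
```

Whatever the real columns turn out to be, the relationships are what matter: connections own subscriptions, and pings decide when both get swept away.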
Switch REVERB_SCALING_DRIVER to database
The new config exposes a driver key under scaling. Setting REVERB_SCALING_DRIVER=database tells Reverb to coordinate cross-process messages through your primary database connection rather than Redis pub/sub. The REVERB_SCALING_ENABLED flag still gates the feature — leave it on.
// config/reverb.php
'scaling' => [
    'enabled' => env('REVERB_SCALING_ENABLED', false),
    'driver' => env('REVERB_SCALING_DRIVER', 'redis'),
    'channel' => env('REVERB_SCALING_CHANNEL', 'reverb'),
],
# .env
REVERB_SCALING_ENABLED=true
REVERB_SCALING_DRIVER=database
DB_CONNECTION=mysql
You can keep the Redis driver entries in config/reverb.php — they're ignored when the database driver is selected. That makes flipping back later a single env change plus a php artisan config:clear on the box.
Configure connection pruning and pulse intervals
The biggest production gotcha is letting reverb_connections grow unbounded. Reverb runs a periodic prune that deletes connection rows whose last ping is older than a threshold. Tune two values: how often the pulse fires from the server, and how aggressively the prune removes stale rows.
// config/reverb.php
'servers' => [
    'reverb' => [
        // ...existing keys...
        'pulse_interval' => env('REVERB_PULSE_INTERVAL', 30),
        'prune_interval' => env('REVERB_PRUNE_INTERVAL', 60),
    ],
],
Defaults of 30s pulse and 60s prune work for most apps. If you're handling thousands of concurrent connections, drop both — a 10s pulse with a 30s prune keeps the connections table under tight control at the cost of more write traffic. Watch the row count in reverb_connections over a busy hour: if it's growing past your real concurrent users, the prune is too lazy.
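A quick way to watch that in production is a monitoring query you can wire into whatever dashboard you already run. This is a sketch — the last_seen_at column name and the join shape are assumptions, so verify them against the published migration before alerting on it:

```sql
-- How many connection rows have outlived the prune threshold?
-- (Assumes a last-seen timestamp in reverb_pings; check the real migration.)
SELECT COUNT(*) AS stale_rows
FROM reverb_connections c
JOIN reverb_pings p ON p.connection_id = c.connection_id
WHERE p.last_seen_at < NOW() - INTERVAL 120 SECOND;
```

If stale_rows climbs during a steady-traffic hour, tighten the prune interval before the table starts bloating your primary database.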
Run reverb:start under Supervisor
Reverb is a long-running ReactPHP process. You need a process manager that restarts it on crash and allows a high file-descriptor limit — Supervisor is the default on Forge-style boxes, but systemd works equally well. The Supervisor block looks identical whether you're on Redis or the database driver; only the env vars change.
[program:reverb]
process_name=%(program_name)s
command=php /var/www/your-app/artisan reverb:start --host=0.0.0.0 --port=8080
autostart=true
autorestart=true
user=forge
redirect_stderr=true
stdout_logfile=/var/www/your-app/storage/logs/reverb.log
stopwaitsecs=10
; minfds is a [supervisord]-section setting, not a per-program one —
; set minfds=10000 under [supervisord] to raise the file-descriptor limit
After updating the config, run sudo supervisorctl reread && sudo supervisorctl update. Front the WebSocket port with Nginx for TLS termination — the same proxy_pass block you'd use for Redis. If you're already running an Nginx hardening setup, see my Nginx + Nightwatch 404 hardening guide for the surrounding directives I run on every public box.
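For reference, here's a minimal Nginx location block for terminating TLS in front of Reverb. The /app path matches the Pusher protocol Reverb speaks; the upstream port is an example — adjust it to match your reverb:start flags:

```nginx
location /app {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;                     # required for the WebSocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout 86400;                   # keep idle sockets open
}
```

The long proxy_read_timeout matters: without it, Nginx drops quiet WebSockets after its default 60 seconds and your clients churn through reconnects.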
Broadcast a test event end to end
The smoke test that catches 90% of misconfigurations: dispatch a broadcast event from php artisan tinker and confirm it lands on a subscribed Echo client. If the database driver is wired correctly, every Reverb worker on the box will see the message via the reverb_channels table — the same way Redis pub/sub fans out across worker processes.
// app/Events/PingFired.php
namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;

class PingFired implements ShouldBroadcast
{
    use Dispatchable, InteractsWithSockets;

    public function __construct(public string $message) {}

    public function broadcastOn(): Channel
    {
        return new Channel('diagnostics');
    }
}
php artisan tinker --execute 'event(new App\Events\PingFired("hello-from-db-driver"));'
Open browser dev tools, watch the WebSocket frame, and confirm the payload arrives. If nothing fires, check three places: the reverb_channels row exists for the diagnostics channel, your Echo client is pointing at the right host/port, and REVERB_SCALING_ENABLED=true is set in the queue worker's environment as well as the web environment.
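For completeness, this is the Echo bootstrap the client side assumes — the standard laravel-echo + Reverb setup reading Vite env vars. Your key, host, and port values will differ; the diagnostics subscription at the bottom pairs with the PingFired event from the smoke test:

```javascript
// resources/js/echo.js — standard Echo + Reverb bootstrap (values are examples)
import Echo from 'laravel-echo';
import Pusher from 'pusher-js';

window.Pusher = Pusher;

window.Echo = new Echo({
    broadcaster: 'reverb',
    key: import.meta.env.VITE_REVERB_APP_KEY,
    wsHost: import.meta.env.VITE_REVERB_HOST,
    wsPort: import.meta.env.VITE_REVERB_PORT ?? 8080,
    forceTLS: (import.meta.env.VITE_REVERB_SCHEME ?? 'https') === 'https',
    enabledTransports: ['ws', 'wss'],
});

// 'PingFired' resolves to App\Events\PingFired via Echo's default namespace
window.Echo.channel('diagnostics')
    .listen('PingFired', (e) => console.log(e.message));
```

Note that none of this changes between the Redis and database drivers — the client only ever talks to Reverb itself.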
Decide when to upgrade back to Redis
The database driver gives up exactly one capability: cross-server pub/sub. If you stay on a single Reverb process on a single box, the database round-trip is fine — most chat apps, presence indicators, and live dashboards never outgrow this. The moment you put a load balancer in front of two Reverb boxes, you need Redis. Database polling cannot match Redis pub/sub for low-latency fan-out across hosts.
Treat the database driver as a launch-day choice. Ship with it, watch your concurrency. When you need to scale horizontally, flip REVERB_SCALING_DRIVER back to redis, point at a managed Redis instance, and restart. The application code does not change. If your real bottleneck turns out to be queue throughput rather than WebSockets, scaling Laravel queues in production is the next stop.
Gotchas and Edge Cases
A handful of footguns to know about before going live:
The reverb_connections table is the canary. The pulse fires on a timer, but a hard server crash leaves orphaned rows behind. The prune cleans them up, but until it runs, your channel member counts are inflated. Don't panic if you see the count spike during a deploy — Supervisor restarts Reverb, fresh connections come back in, and the next prune tidies the table. Set alerts on row growth (Laravel Pulse works well for this), not just absolute count.
Long-running database transactions block the prune. If a slow queue job holds a transaction open on the same database, the prune query waits behind it. Run the prune against a dedicated connection if your app does heavy transactional work — most teams pair the database driver with Horizon for queue monitoring, and a separate connection keeps the two from interfering.
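Defining that dedicated connection is just a copy of your default entry in config/database.php under a new name. A sketch, assuming MySQL — the connection name 'reverb' is arbitrary:

```php
// config/database.php — a duplicate of the default connection so Reverb's
// queries never queue behind application transactions on the shared pool.
'reverb' => [
    'driver' => 'mysql',
    'host' => env('DB_HOST', '127.0.0.1'),
    'port' => env('DB_PORT', '3306'),
    'database' => env('DB_DATABASE', 'forge'),
    'username' => env('DB_USERNAME', 'forge'),
    'password' => env('DB_PASSWORD', ''),
],
```

Because it's a separate connection, it gets its own socket to MySQL and its own transaction scope, which is the whole point.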
Polling cadence has a real cost. Each Reverb process polls the channels table on the pulse interval. With the default 30s pulse and two processes, that's one coordination query every 15 seconds on average — negligible. A 5s pulse on a busy box with eight processes puts you at 1.6 queries per second just for coordination, before any actual broadcast traffic. Database CPU spikes that line up with your pulse interval are usually this polling — relax the interval before reaching for Redis.
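The arithmetic is worth making concrete. A back-of-envelope helper, assuming one coordination query per process per pulse — the simplest possible model, not a measured figure:

```python
def coordination_qps(processes: int, pulse_interval_s: float) -> float:
    """Steady-state polling load: one coordination query per process per pulse."""
    return processes / pulse_interval_s

# Two processes on the default 30s pulse: ~0.07 queries/second — negligible.
print(coordination_qps(2, 30))
# Eight processes on an aggressive 5s pulse: 1.6 queries/second, before broadcasts.
print(coordination_qps(8, 5))
```

Plug in your own process count and interval before tuning — the load scales linearly in both directions.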
Connection state survives a Reverb restart. That's a feature of the database driver — your subscriptions don't all churn through reconnect storms when you redeploy. With Redis you might flush the database holding Reverb's keys to clear stale state on a fresh start; do not truncate the Reverb tables by hand with the database driver. Let the prune handle expiry naturally.
Forge and Vapor managed environments still default to Redis. If you're on managed hosting, you may not save much by switching — Forge's bundled Redis is included with most plans. The driver matters most for self-hosted single-VPS setups and cost-sensitive staging environments.
Wrapping Up
Drop in the database driver, run the migration, flip the env var, restart Reverb under Supervisor — that's the entire migration off Redis for a single-server WebSocket deployment. The trade-off is honest: no horizontal pub/sub, slightly slower fan-out under high concurrency, and a connections table you have to keep an eye on. For everything below the multi-server threshold, the simplification is worth it.
If you're standing up Reverb for the first time, work through the real-time notifications guide first, then come back here for the production wiring. Once your traffic outgrows one box, the same article series covers scaling Laravel queues in production — the same horizontal-scaling decision shows up there too.
FAQ
Can I run Laravel Reverb without Redis?
Yes — with the Laravel 13 Reverb database driver, Redis is optional for single-server deployments. Set REVERB_SCALING_DRIVER=database and Reverb stores connection state, channel subscriptions, and pings in your primary database instead. Redis is still required if you want to scale Reverb horizontally across multiple servers.
How does the Reverb database driver scale?
The database driver scales vertically on a single host. Multiple Reverb worker processes on the same machine coordinate through the reverb_channels table, so a broadcast on one process reaches subscribers on every other process. It does not coordinate across hosts because polling a database for cross-server fan-out adds too much latency — that's where Redis pub/sub remains the right tool.
Is the Reverb database driver suitable for production?
It's production-ready for single-server deployments. Chat apps, live notifications, dashboards, and presence indicators all run comfortably on the database driver provided you tune the prune interval and watch the reverb_connections table. The moment you need a load balancer in front of two Reverb hosts, switch back to Redis — that's the only hard limit.
How do I migrate from Reverb's Redis driver to the database driver?
Three steps: publish and run the new Reverb migrations with php artisan vendor:publish --tag=reverb-migrations followed by php artisan migrate, set REVERB_SCALING_DRIVER=database in your .env, and restart Reverb under Supervisor. No application code changes — your events, channels, and Echo client config stay identical. If something goes wrong, flip the env var back to redis and restart.
What tables does the Reverb database driver create?
The migration creates three tables: reverb_connections for live WebSocket state, reverb_channels for active channel subscriptions, and reverb_pings for the last-seen timestamps used by the prune job. Treat them as system tables — your application code never reads or writes them directly. The prune interval determines how aggressively stale rows are removed.
When should I prefer Redis over the database driver?
Reach for Redis when you need horizontal scaling across multiple Reverb hosts, when your concurrent connection count climbs into the tens of thousands, or when database write contention from the pulse becomes a measurable bottleneck. For a single VPS hosting a chat app, internal dashboard, or notification system, the database driver removes a moving part with no real performance cost.