Fine-grained rate limiting on Laravel API routes
Most Laravel apps I've inherited have throttle:60,1 stuck on the api middleware group and nothing else. Sixty requests a minute for everyone — free users, paying customers, background jobs, all treated identically. That's not a rate limiting strategy, it's a placeholder. The RateLimiter facade gives you the full picture: per-user limits based on plan, per-IP caps for unauthenticated routes, stacked limiters, and custom 429 responses with proper headers.
Laravel API rate limiting with named limiters
The throttle:60,1 shorthand is still valid but it offers zero flexibility. Named rate limiters fix that. You define them in AppServiceProvider::boot() and reference them by name in your routes.
// app/Providers/AppServiceProvider.php
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;

public function boot(): void
{
    RateLimiter::for('api', function (Request $request) {
        $user = $request->user();

        if (! $user) {
            // Unauthenticated: cap by IP address
            return Limit::perMinute(30)->by($request->ip());
        }

        // Authenticated: per-user limits based on plan
        return match ($user->plan) {
            'enterprise' => Limit::perMinute(500)->by($user->id),
            'pro'        => Limit::perMinute(120)->by($user->id),
            default      => Limit::perMinute(30)->by($user->id),
        };
    });
}
The ->by() key determines what each counter is scoped to. Using $user->id gives every user an independent counter; using $request->ip() shares a single counter across all requests from the same address.
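The key is just a string, so you can scope counters to whatever boundary makes sense. A hedged sketch — the team_id column and these limiter names are hypothetical, not part of the example above:

```php
// Scope by team rather than user (assumes a hypothetical team_id column),
// so all members of a team share one pool of requests
RateLimiter::for('team-api', function (Request $request) {
    return Limit::perMinute(300)->by('team:'.$request->user()->team_id);
});

// Scope by user AND path, so hammering one endpoint
// doesn't exhaust the user's quota for the rest of the API
RateLimiter::for('per-endpoint', function (Request $request) {
    return Limit::perMinute(20)->by($request->user()->id.'|'.$request->path());
});
```

Anything that uniquely identifies the bucket works; just keep the key stable across requests or each request gets a fresh counter.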
Applying named limiters to routes
Once defined, you reference the limiter by name in the throttle middleware:
// routes/api.php
Route::middleware(['auth:sanctum', 'throttle:api'])
    ->group(function () {
        Route::get('/products', [ProductController::class, 'index']);
        Route::post('/orders', [OrderController::class, 'store']);
    });
The default api middleware group in Laravel already applies throttle:api, so if you're overriding that named limiter in AppServiceProvider, your existing routes pick up the new behaviour without any route changes.
Stacking multiple limiters on a route
You can apply more than one constraint to a route in two ways: return an array of Limit instances from a single limiter, or stack several throttle middleware. Both are useful when you want a global IP cap alongside a per-user cap:

RateLimiter::for('search', function (Request $request) {
    // Return an array of Limit instances for multiple simultaneous constraints
    return [
        // Global IP cap — prevents scraping regardless of auth state
        Limit::perMinute(10)->by($request->ip()),
        // Per-user cap on top
        Limit::perMinute(5)->by($request->user()?->id ?: $request->ip()),
    ];
});

// routes/api.php: stack named limiters in the middleware list
Route::middleware(['auth:sanctum', 'throttle:api', 'throttle:search'])
    ->get('/search', [SearchController::class, 'index']);
The request fails as soon as any single limiter is exceeded. The response headers will reflect the limiter that triggered the 429.
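One way to see this behaviour is a feature test against the search route above. A hedged sketch, assuming the 'search' limiter (5/min per user), a User factory, and a working /api/search endpoint:

```php
// tests/Feature/SearchRateLimitTest.php — illustrative sketch only
public function test_search_returns_429_with_retry_headers(): void
{
    $user = User::factory()->create();

    // Exhaust the 5-per-minute per-user limit
    for ($i = 0; $i < 5; $i++) {
        $this->actingAs($user)->getJson('/api/search')->assertOk();
    }

    // The sixth request trips the limiter
    $this->actingAs($user)->getJson('/api/search')
        ->assertStatus(429)
        ->assertHeader('Retry-After');
}
```

Asserting on Retry-After rather than a fixed body keeps the test valid even if you later customise the 429 response.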
Custom 429 response
By default Laravel returns a simple Too Many Requests response. You can override it per limiter using the ->response() callback:
RateLimiter::for('api', function (Request $request) {
    return Limit::perMinute(60)
        ->by($request->user()?->id ?: $request->ip())
        ->response(function (Request $request, array $headers) {
            return response()->json([
                'message'     => 'Rate limit exceeded. Please slow down.',
                'retry_after' => $headers['Retry-After'] ?? null,
            ], 429, $headers);
        });
});
The $headers array already contains X-RateLimit-Limit, X-RateLimit-Remaining, and Retry-After. Pass them through to the response so clients can back off intelligently without guessing.
Redis for multi-server deployments
The default cache driver backs rate limiting. On a single server that's fine — file or database cache works. Behind a load balancer you need a shared store, otherwise each server has its own independent counters and the limit is effectively multiplied by your node count.
# .env — switch to Redis for shared rate limit counters
CACHE_STORE=redis
That's the only change needed: Laravel's throttle middleware reads from whatever CACHE_STORE points at (on Laravel 10 and earlier the variable is CACHE_DRIVER). If you want the limiter on Redis while the rest of the cache stays where it is, you can also pin it to a specific store via the limiter key in config/cache.php. Once Redis is the backend, swapping the middleware for Illuminate\Routing\Middleware\ThrottleRequestsWithRedis is an optional optimisation that manages the counters with native Redis commands; it isn't required for correctness.
Gotchas and edge cases
Laravel's limiter uses a fixed window, not a sliding one: the counter starts with the first request and resets once the decay period expires. A user can therefore make 60 requests at the very end of one window and another 60 immediately after the reset, 120 in a couple of seconds. If you need a true sliding window, you'll want a custom approach using Redis sorted sets.
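Such a sliding window can be sketched with a sorted set: store a timestamp per request, trim everything older than the window, and count what remains. A hedged sketch using the default Redis connection — a standalone helper, not a drop-in replacement for the throttle middleware:

```php
use Illuminate\Support\Facades\Redis;

// Sliding-window check: returns true if the request is allowed
function allowRequest(string $key, int $max, int $windowSeconds): bool
{
    $now = microtime(true);

    // Drop entries that have aged out of the window
    Redis::zremrangebyscore($key, 0, $now - $windowSeconds);

    if (Redis::zcard($key) >= $max) {
        return false; // caller should respond 429
    }

    // Record this request; the member value just needs to be unique
    Redis::zadd($key, $now, $now.':'.bin2hex(random_bytes(4)));
    Redis::expire($key, $windowSeconds);

    return true;
}
```

Note the check-then-add here is not atomic; under heavy concurrency you'd wrap the commands in a Lua script or MULTI/EXEC block so two racing requests can't both slip under the cap.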
The ->response() callback must include the headers. If you return a custom response without passing through $headers, the Retry-After header disappears and clients have no way to know when to retry.
Named limiters with the same name override each other. If you call RateLimiter::for('api', ...) twice, the second call wins. This trips people up when packages also register a limiter named api.
A single named limiter can return an array of Limit instances; you don't need two separate named limiters to apply two constraints. One RateLimiter::for() callback can return [Limit::perMinute(10), Limit::perHour(100)] and both apply.
Wrapping Up
Named rate limiters are a one-time investment in AppServiceProvider that pays off every time your API grows a new tier or endpoint. Define the logic once, reference it by name in routes, and use Redis if you're running more than one server. The throttle:60,1 shorthand is still there for simple cases — it just shouldn't be the only tool in use.
Steven is a software engineer with a passion for building scalable web applications. He enjoys sharing his knowledge through articles and tutorials.