Laravel MCP Server — Expose Your App to AI Clients in 15 Minutes
Your REST API works fine for mobile and frontend clients. AI assistants like ChatGPT, Claude, and Cursor are different — they discover capabilities through tool descriptions, not OpenAPI specs. The laravel/mcp package gives your app a proper Model Context Protocol endpoint in minutes, using the same middleware, validation, and routing patterns you already know.
Installing the Laravel MCP Server Package
```shell
composer require laravel/mcp
php artisan vendor:publish --tag=ai-routes
```
The second command creates routes/ai.php — the file where you register your MCP servers. Think of it as routes/api.php, but for AI clients. The package requires PHP 8.2+ and a current Laravel release (11 or later).
Create Your First MCP Tool
Generate a server and a tool class:
```shell
php artisan make:mcp-server InvoiceServer
php artisan make:mcp-tool SearchInvoicesTool
```
Tool classes live in App\Mcp\Tools. Here's a real search tool — not a weather example:
```php
<?php

namespace App\Mcp\Tools;

use App\Models\Invoice;
use Laravel\Mcp\Attributes\Description;
use Laravel\Mcp\Attributes\Name;
use Laravel\Mcp\Attributes\Title;
use Laravel\Mcp\Server\Request;
use Laravel\Mcp\Server\Response;
use Laravel\Mcp\Server\Tool;
use Laravel\Mcp\Server\Type\JsonSchema;

#[Name('search-invoices')]
#[Title('Search Invoices')]
#[Description('Search invoices by customer name or status. Returns invoice IDs, amounts, statuses, and due dates. Limit 20 results.')]
class SearchInvoicesTool extends Tool
{
    public function handle(Request $request): Response
    {
        $request->validate([
            'query' => 'required|string|min:2',
            'status' => 'nullable|in:paid,pending,overdue',
        ]);

        $invoices = Invoice::query()
            ->when($request->get('query'), fn ($q, $search) =>
                $q->where('customer_name', 'like', "%{$search}%")
            )
            ->when($request->get('status'), fn ($q, $status) =>
                $q->where('status', $status)
            )
            ->limit(20)
            ->get(['id', 'customer_name', 'amount', 'status', 'due_date']);

        return Response::text($invoices->toJson());
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'query' => $schema->string()
                ->description('Customer name or keyword to search for')
                ->required(),
            'status' => $schema->string()
                ->enum(['paid', 'pending', 'overdue'])
                ->description('Filter by invoice status. Omit to return all statuses.'),
        ];
    }
}
```
Register it on the server:
```php
<?php

namespace App\Mcp\Servers;

use App\Mcp\Tools\SearchInvoicesTool;
use Laravel\Mcp\Attributes\Name;
use Laravel\Mcp\Attributes\Version;
use Laravel\Mcp\Server\Server;

#[Name('Invoice Server')]
#[Version('1.0.0')]
class InvoiceServer extends Server
{
    protected array $tools = [
        SearchInvoicesTool::class,
    ];
}
```
Registering the Laravel MCP Server Route with Authentication
Open routes/ai.php and expose the server via HTTP:
```php
use App\Mcp\Servers\InvoiceServer;
use Laravel\Mcp\Facades\Mcp;

Mcp::web('/mcp/invoices', InvoiceServer::class)
    ->middleware(['auth:sanctum', 'throttle:60,1']);
```
The auth:sanctum middleware works exactly as it does on any API route — no new concepts. AI clients pass an Authorization: Bearer {token} header. For remote AI client integrations that require OAuth 2.1 (the MCP spec recommends it for production), swap in auth:api backed by Laravel Passport.
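You rarely craft these requests by hand, but it helps to know what an MCP client actually sends. A sketch of a raw tool call over HTTP — the token and port are placeholders, and the JSON-RPC envelope follows the MCP spec's tools/call method:

```shell
# Illustrative only: what an MCP client's tool call looks like on the wire.
curl -s http://localhost:8000/mcp/invoices \
  -H 'Authorization: Bearer YOUR_SANCTUM_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json, text/event-stream' \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"search-invoices","arguments":{"query":"Acme","status":"overdue"}}}'
```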
For fine-grained rate limiting per token or per user, the same RateLimiter facade patterns you use on API routes apply here too — useful when you're metering AI client usage separately from human API calls.
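As a sketch, you could define a named limiter in a service provider's boot() and reference it from the route — the 'mcp' limiter name and the per-minute budget are my choices, not the package's:

```php
// App\Providers\AppServiceProvider::boot() — a named limiter for MCP traffic.
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\RateLimiter;

RateLimiter::for('mcp', function (Request $request) {
    // Meter per Sanctum token where available, per IP otherwise.
    return Limit::perMinute(30)->by(
        $request->user()?->currentAccessToken()?->id ?? $request->ip()
    );
});
```

Then swap throttle:60,1 for throttle:mcp in the middleware array on the route registration.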
Test with the MCP Inspector
Verify your server before connecting a real AI client:
```shell
npx @modelcontextprotocol/inspector
```
Point it at http://localhost:8000/mcp/invoices with a valid Sanctum token in the headers. You should see search-invoices listed with its schema. Call it directly from the inspector UI to confirm the response shape.
For local development without HTTP, you can register a local server that runs as an Artisan command instead:
```php
// routes/ai.php — runs via: php artisan mcp:serve invoices
Mcp::local('invoices', InvoiceServer::class);
```
Local servers are how tools like Laravel Boost give AI assistants terminal-level access to your codebase.
Gotchas and Edge Cases
Descriptions are load-bearing. The #[Description(...)] attribute on each tool and each schema field is what the model reads to decide whether and how to call your tool. Vague descriptions ("gets invoice data") produce wrong calls. Describe the exact data returned, the limit, and any important constraints.
Mutating tools need confirmation patterns. A read-only SearchInvoicesTool is safe to retry. A CancelInvoiceTool is not. Either require an explicit confirmed: true parameter in the schema, or restrict destructive tools to agents with a specific scope on their token.
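A sketch of the confirmation-parameter approach — CancelInvoiceTool and its fields are hypothetical, and I'm assuming an error-response helper on the package's Response class:

```php
public function handle(Request $request): Response
{
    $request->validate([
        'invoice_id' => 'required|integer|exists:invoices,id',
        'confirmed' => 'required|boolean',
    ]);

    // Refuse until the model explicitly passes confirmed: true,
    // which forces it to surface the action to the user first.
    if (! $request->get('confirmed')) {
        return Response::error('Cancellation not confirmed. Ask the user, then retry with confirmed: true.');
    }

    Invoice::findOrFail($request->get('invoice_id'))->update(['status' => 'cancelled']);

    return Response::text('Invoice cancelled.');
}
```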
No session, no state. Each tool invocation is a stateless HTTP request. Don't store anything on the tool class instance between calls — it won't persist. Write to the database if you need to carry state across multiple tool invocations.
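If a tool pages through results across calls, for example, the cursor has to live outside the class. A sketch using Laravel's cache — the key format and the cursor parameter are illustrative, not part of laravel/mcp:

```php
// Inside a tool's handle() method: persist a pagination cursor between calls.
$key = 'mcp:invoice-cursor:'.$request->user()->id;
$cursor = $request->get('cursor') ?? cache()->get($key, 0);

$invoices = Invoice::where('id', '>', $cursor)->limit(20)->get();

// Remember where this client left off; expire stale cursors.
cache()->put($key, $invoices->max('id') ?? $cursor, now()->addMinutes(10));
```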
Declare output schemas. Adding an outputSchema() method to your tool gives AI clients a machine-readable contract for the response structure. Without it, models sometimes hallucinate field names. It's optional but worth adding for any tool that returns complex data.
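A sketch for the search tool above — I'm assuming the same JsonSchema builder is available here; check the package docs for the exact method names:

```php
public function outputSchema(JsonSchema $schema): array
{
    return [
        'invoices' => $schema->array()
            ->description('Matching invoices with id, customer_name, amount, status, and due_date. At most 20.'),
    ];
}
```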
If you're building tools that need to call an LLM as part of their response — summarising the returned invoices, for example — use Laravel Prism to handle those LLM calls rather than reaching for a raw HTTP client.
MCP Server vs Traditional REST API
You don't replace your REST API with an MCP server. They sit alongside each other serving different clients:
| | REST API | Laravel MCP Server |
|---|---|---|
| Primary client | Mobile app, frontend | AI assistant, agent |
| Discovery | OpenAPI / Swagger | Tool #[Description] |
| Auth | Token / OAuth | Token / OAuth (same) |
| Response format | HTTP JSON | Tool response text/JSON |
| Retry logic | Client handles | MCP protocol handles |
The PHP attribute syntax on tool classes — #[Name], #[Description], #[Version] — is native PHP (attributes landed in PHP 8.0), not framework magic. Laravel resolves these at boot to build the tool manifest it serves to AI clients.
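You can see this for yourself with plain reflection — roughly what the framework does at boot when assembling the manifest:

```php
// Plain PHP 8 reflection: enumerate the attributes on a tool class.
$reflection = new ReflectionClass(App\Mcp\Tools\SearchInvoicesTool::class);

foreach ($reflection->getAttributes() as $attribute) {
    // e.g. Laravel\Mcp\Attributes\Name with its constructor arguments.
    echo $attribute->getName(), ': ', implode(', ', $attribute->getArguments()), PHP_EOL;
}
```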
Wrapping Up
Three commands (composer require, make:mcp-server, make:mcp-tool) and your Laravel app speaks MCP. Auth and rate limiting use the same patterns as your existing API routes. The real work is writing precise #[Description] strings — that's what separates a useful AI integration from one that confuses every model that touches it.
Once your server is live and tools are registered, building LLM tool-calling agents with Laravel Prism covers the other half: orchestrating multi-step agents that call your MCP tools in sequence. For the full picture of where MCP fits in Laravel's AI stack, the complete guide to the Laravel AI SDK is the logical next read.
Steven is a software engineer with a passion for building scalable web applications. He enjoys sharing his knowledge through articles and tutorials.