Laravel Prism Tool Calling: Build AI Agents That Actually Do Things
Text generation is table stakes. The moment you want your AI to actually do something — look up an order, check a user's subscription status, send a notification — you need tool calling. Laravel Prism tool calling lets you register PHP callables as tools the LLM can invoke, then handles the back-and-forth automatically. If you're new to Prism itself, start with getting started with Laravel Prism before continuing here.
How Laravel Prism Tool Calling Works
The flow is straightforward, but worth spelling out because it shapes how you structure your code.
- You define tools — PHP callables with a name, description, and parameter schema
- You register them on a Prism request alongside your prompt
- Prism sends the prompt + tool definitions to the LLM
- If the LLM decides it needs data, it returns a tool call request instead of a text response
- Prism executes your PHP function with the LLM's arguments
- The result is fed back to the LLM, which either calls another tool or produces the final response
- withMaxSteps() caps the number of LLM calls so you can't get stuck in an infinite loop
Crucially, Prism manages this loop for you. You call asText() once and get back the final response — no while-loop, no manual ToolResultMessage handling.
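The steps above can be sketched as pseudocode. This is a simplified illustration of the loop Prism runs on your behalf, not its actual internals — every name here is invented for clarity:

```php
// Illustrative pseudocode only — Prism's real implementation differs.
$messages = [$systemPrompt, $userPrompt];
$steps = 0;

while ($steps < $maxSteps) {
    $reply = $llm->send($messages, $toolDefinitions); // one LLM round trip
    $steps++;

    if (! $reply->hasToolCalls()) {
        return $reply->text; // final natural-language answer
    }

    foreach ($reply->toolCalls as $call) {
        // Execute the matching PHP callable with the LLM's arguments
        $result = $tools[$call->name](...$call->arguments);
        $messages[] = toolResult($call, $result); // feed the result back
    }
}
```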
Defining a Tool
Install the package first if you haven't:
```shell
composer require prism-php/prism
```
Tools are built with a fluent builder. Here's one that fetches order status from Eloquent:
```php
use Prism\Prism\Facades\Tool;
use App\Models\Order;

$getOrderStatus = Tool::as('get_order_status')
    ->for('Retrieves the current status of an order by its ID. Use this when the user asks about a specific order.')
    ->withNumberParameter('order_id', 'The numeric ID of the order to look up')
    ->using(function (int $order_id): string {
        $order = Order::find($order_id);

        if (! $order) {
            return "Order #{$order_id} not found.";
        }

        // Return a string — the LLM reads this as context
        return "Order #{$order_id} status: {$order->status}. Last updated: {$order->updated_at->toDateTimeString()}.";
    });
```
A few things to note:
- The description matters a lot. The LLM uses it to decide when to call the tool. Be specific.
- Tools must return a string. If you're returning structured data, JSON-encode it.
- Returning "not found" is better than throwing — the LLM can relay that to the user gracefully.
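When a tool needs to return structured data, json_encode keeps it unambiguous for the model while still satisfying the string return type. A minimal sketch — the tool name and JSON field names here are illustrative, not part of Prism's API:

```php
use Prism\Prism\Facades\Tool;
use App\Models\Order;

// Hypothetical tool returning structured data as a JSON string
$getOrderDetails = Tool::as('get_order_details')
    ->for('Returns full details for an order as JSON.')
    ->withNumberParameter('order_id', 'The numeric order ID')
    ->using(function (int $order_id): string {
        $order = Order::find($order_id);

        if (! $order) {
            // Still a plain string the LLM can relay, not an exception
            return json_encode(['error' => "Order #{$order_id} not found."]);
        }

        // JSON-encoded structure gives the LLM clearly labeled fields
        return json_encode([
            'order_id'   => $order->id,
            'status'     => $order->status,
            'updated_at' => $order->updated_at->toDateTimeString(),
        ]);
    });
```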
Here's a second tool that dispatches a notification job:
```php
use Prism\Prism\Facades\Tool;
use App\Jobs\SendUserNotification;

$notifyUser = Tool::as('notify_user')
    ->for('Sends a notification to a user. Use this when the user asks to be notified or when a status update should be communicated.')
    ->withNumberParameter('user_id', 'The ID of the user to notify')
    ->withStringParameter('message', 'The notification message to send')
    ->using(function (int $user_id, string $message): string {
        // Dispatch to a queue — tools don't have to be synchronous
        SendUserNotification::dispatch($user_id, $message);

        return "Notification queued for user #{$user_id}.";
    });
```
Dispatching to a queue from inside a tool is completely fine. If you're not familiar with how Laravel queues work at scale, scaling Laravel queues in production covers the patterns you need before you start firing jobs from inside an agent loop.
Registering Prism Tool Calls and Controlling the Loop
Pass your tools to the Prism builder with withTools(). The critical setting is withMaxSteps() — Prism defaults to a single step, which means tools will never be called. You must set it to at least 2.
```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-latest')
    ->withSystemPrompt('You are a helpful order support assistant. Use the available tools to look up order information and send notifications when requested.')
    ->withPrompt('My order #4821 hasn\'t arrived. Can you check the status and let me know?')
    ->withTools([$getOrderStatus, $notifyUser])
    ->withMaxSteps(5) // Up to 5 LLM calls before stopping
    ->asText();

echo $response->text;
```
withMaxSteps(5) means up to 5 round trips with the LLM. A reasonable value for most support bots is 3–5. Bump it higher for complex multi-tool workflows, but watch your token costs.
Practical Example: Full Order Status Bot
Putting it all together — a complete controller method:
```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Facades\Tool;
use Prism\Prism\Enums\Provider;
use App\Models\Order;
use App\Jobs\SendUserNotification;
use Illuminate\Http\JsonResponse;
use Illuminate\Http\Request;

class SupportController extends Controller
{
    public function handle(Request $request): JsonResponse
    {
        $getOrderStatus = Tool::as('get_order_status')
            ->for('Retrieves the current status of a customer order by its ID.')
            ->withNumberParameter('order_id', 'The numeric order ID')
            ->using(function (int $order_id): string {
                $order = Order::find($order_id);

                return $order
                    ? "Order #{$order_id}: {$order->status}, last updated {$order->updated_at->toDateTimeString()}."
                    : "Order #{$order_id} not found.";
            });

        $notifyUser = Tool::as('notify_user')
            ->for('Sends a notification message to a specific user.')
            ->withNumberParameter('user_id', 'The user ID to notify')
            ->withStringParameter('message', 'The message to send')
            ->using(function (int $user_id, string $message): string {
                SendUserNotification::dispatch($user_id, $message);

                return "Notification queued successfully for user #{$user_id}.";
            });

        $response = Prism::text()
            ->using(Provider::Anthropic, 'claude-3-5-sonnet-latest')
            ->withSystemPrompt('You are a support assistant. Look up orders and send notifications when asked.')
            ->withPrompt($request->input('message'))
            ->withTools([$getOrderStatus, $notifyUser])
            ->withMaxSteps(5)
            ->asText();

        return response()->json(['reply' => $response->text]);
    }
}
```
When the user asks "what's the status of order 4821?", the LLM will call get_order_status with {"order_id": 4821}, receive the result, and then write a natural language response — all in the single asText() call.
Inspecting What the Agent Did
If you need to audit tool usage — for logging, billing, or debugging — inspect $response->steps:
```php
foreach ($response->steps as $step) {
    foreach ($step->toolCalls as $toolCall) {
        logger()->info('Tool called', [
            'tool' => $toolCall->name,
            'args' => $toolCall->arguments(),
        ]);
    }

    foreach ($step->toolResults as $result) {
        logger()->info('Tool result', [
            'tool' => $result->toolName,
            'result' => $result->result,
        ]);
    }
}
```
For high-throughput use cases, Prism also supports concurrent tool execution — mark a tool with ->concurrent() and it will run in parallel with other concurrent tools using Laravel's Concurrency facade. This is worth exploring if your tools are hitting external APIs. The async patterns covered in PHP 8.4 fibers and async patterns give useful context on how PHP handles concurrency under the hood.
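Based on that description, opting a tool into concurrent execution might look like the sketch below. Treat the exact placement of ->concurrent() in the chain as an assumption to verify against the Prism docs:

```php
use Prism\Prism\Facades\Tool;

// Sketch only — assumes ->concurrent() can be chained like other builder methods
$getOrderStatus = Tool::as('get_order_status')
    ->for('Retrieves the current status of an order by its ID.')
    ->withNumberParameter('order_id', 'The numeric order ID')
    ->concurrent() // opt in to parallel execution alongside other concurrent tools
    ->using(fn (int $order_id): string => "Order #{$order_id}: shipped.");
```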
Gotchas and Edge Cases
Forgetting withMaxSteps is the most common mistake. Without it, the LLM will return a tool call that never gets executed. Set it to at least 2.
Tool descriptions drive everything. If the LLM isn't calling your tool when you expect it to, the description is usually the culprit. Be explicit: "Use this when the user mentions an order number" beats "Gets order info."
Provider differences matter. Tool calling is well-supported on OpenAI and Anthropic. Gemini has had intermittent issues where it returns a Stop finish reason instead of invoking tools. If you're using Gemini, test thoroughly and add fallback handling.
Return strings, not exceptions. If a tool throws, Prism's default error handling converts it to a string and sends it back to the LLM, which can adapt. If you call ->withoutToolErrorHandling(), exceptions will propagate — useful in testing, dangerous in production.
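If you want explicit control rather than relying on Prism's default error conversion, catching inside the tool works too. A sketch, assuming the standard Laravel logger() helper for the internal log line:

```php
use Prism\Prism\Facades\Tool;
use App\Models\Order;

$getOrderStatus = Tool::as('get_order_status')
    ->for('Retrieves the current status of an order by its ID.')
    ->withNumberParameter('order_id', 'The numeric order ID')
    ->using(function (int $order_id): string {
        try {
            $order = Order::findOrFail($order_id);

            return "Order #{$order_id} status: {$order->status}.";
        } catch (\Throwable $e) {
            // Log the real error for yourself; give the LLM a calm string
            logger()->warning('get_order_status failed', ['error' => $e->getMessage()]);

            return "Order #{$order_id} could not be retrieved right now.";
        }
    });
```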
Token costs compound. Each step is a full LLM call. An agent that makes 4 tool calls is billed for 5 requests (4 tool calls + 1 final response). Factor this into your withMaxSteps decisions and your cost projections.
Wrapping Up
Laravel Prism tool calling is the bridge between a chatbot that talks and an agent that acts. Define your tools clearly, set a sensible withMaxSteps, and let Prism handle the loop. The main things to get right are the tool descriptions and returning clean strings that the LLM can reason about.
Once you're comfortable with basic tool calling, explore the broader Laravel AI SDK — the complete guide to the Laravel AI SDK covers structured outputs, embeddings, and multi-turn conversations that pair naturally with agentic workflows.
Steven is a software engineer with a passion for building scalable web applications. He enjoys sharing his knowledge through articles and tutorials.