Getting Started With Laravel Prism: Add AI to Any Laravel App in Minutes
Every time I've needed to add AI to a Laravel app, I've ended up with a pile of vendor-specific SDK calls scattered through controllers and services. Switch from OpenAI to Anthropic? Rewrite half your codebase. Prism fixes that. It's a Laravel-native AI package with a fluent, provider-agnostic interface — and it takes about five minutes to get running.
Installing Laravel Prism AI
Prism supports Laravel 11 and 12, and requires PHP 8.2 or higher. Install via Composer:
composer require prism-php/prism
Then publish the config:
php artisan vendor:publish --tag=prism-config
This drops a config/prism.php file into your project. You'll configure your providers there.
Configuring Your Providers
Add your API keys to .env:
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
Prism picks these up automatically via the published config. The relevant section of config/prism.php looks like this out of the box:
'providers' => [
    'anthropic' => [
        'api_key' => env('ANTHROPIC_API_KEY', ''),
        'version' => env('ANTHROPIC_API_VERSION', '2023-06-01'),
    ],
    'openai' => [
        'url' => env('OPENAI_URL', 'https://api.openai.com/v1'),
        'api_key' => env('OPENAI_API_KEY', ''),
        'organization' => env('OPENAI_ORGANIZATION', null),
    ],
    'ollama' => [
        'url' => env('OLLAMA_URL', 'http://localhost:11434/v1'),
    ],
    // ...
],
You only need to set keys for the providers you actually use. Everything else can stay as-is.
Your First AI Call with Laravel Prism
Here's the simplest possible text generation call — summarising a blog post body using OpenAI:
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o-mini')
    ->withSystemPrompt('You are a helpful assistant that summarises content concisely.')
    ->withPrompt("Summarise this post in two sentences:\n\n{$post->body}")
    ->asText();
echo $response->text;
The ->asText() call fires the request and returns a TextResponse object. $response->text contains the generated string.
If you want to log token usage:
$usage = $response->firstStep()->usage;
logger('AI usage', [
    'prompt_tokens' => $usage->promptTokens,
    'completion_tokens' => $usage->completionTokens,
]);
There's no totalTokens shortcut — you sum them yourself if needed. Worth knowing before you go hunting for a property that isn't there.
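If you do want a total in your logs, the sum is a one-liner — a sketch reusing the usage object from the snippet above:

```php
// No built-in totalTokens property — sum the two counts yourself.
$usage = $response->firstStep()->usage;

logger('AI usage', [
    'prompt_tokens' => $usage->promptTokens,
    'completion_tokens' => $usage->completionTokens,
    'total_tokens' => $usage->promptTokens + $usage->completionTokens,
]);
```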
Swapping Providers Without Changing Business Code
This is Prism's main selling point. To run the exact same call through Anthropic instead of OpenAI, change two arguments:
$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022') // <- swap here
    ->withSystemPrompt('You are a helpful assistant that summarises content concisely.')
    ->withPrompt("Summarise this post in two sentences:\n\n{$post->body}")
    ->asText();
echo $response->text;
Everything else stays identical. Your controller, your service class, your tests — none of it changes. In practice I reach for this when I want to compare output quality between providers, or when I need to fall back to a local Ollama model in development.
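One way to wire up that environment-based fallback — a sketch assuming you define your own `services.ai.provider` and `services.ai.model` config keys (hypothetical names; `using()` also accepts plain strings, so config values work directly):

```php
// Hypothetical keys — add to config/services.php yourself, e.g.:
// 'ai' => [
//     'provider' => env('AI_PROVIDER', 'openai'),   // 'ollama' in .env locally
//     'model'    => env('AI_MODEL', 'gpt-4o-mini'), // 'llama3.2' locally
// ],
$response = Prism::text()
    ->using(config('services.ai.provider'), config('services.ai.model'))
    ->withPrompt($prompt)
    ->asText();
```

Flip the two env vars in your local .env and every call in the app follows along.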
You can also pass the provider as a plain string if you'd rather not import the enum:
->using('openai', 'gpt-4o-mini')
->using('anthropic', 'claude-3-5-sonnet-20241022')
->using('ollama', 'llama3.2')
Both forms work. The enum is safer for IDE support.
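If you'd rather make the provider choice in exactly one place, a thin wrapper class keeps callers away from Prism entirely — a sketch with hypothetical names:

```php
use Prism\Prism\Facades\Prism;

// Hypothetical wrapper: callers ask for a summary; the provider and
// model choice live here and nowhere else.
class Summariser
{
    public function __construct(
        private string $provider = 'openai',
        private string $model = 'gpt-4o-mini',
    ) {}

    public function summarise(string $body): string
    {
        return Prism::text()
            ->using($this->provider, $this->model)
            ->withSystemPrompt('You are a helpful assistant that summarises content concisely.')
            ->withPrompt("Summarise this post in two sentences:\n\n{$body}")
            ->asText()
            ->text;
    }
}
```

Bind it in a service provider with different constructor arguments per environment and your controllers never know which model answered.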
Gotchas and Edge Cases
The API method is ->asText(), not ->generate(). A few older tutorials reference ->generate() — that was the previous API. As of v0.100.x it's ->asText() for text responses and ->asStream() for streaming.
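For completeness, a streaming sketch — recent releases yield chunk objects with a text property, but double-check the chunk shape against your installed version:

```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

// ->asStream() yields chunks as the model generates them.
$stream = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o-mini')
    ->withPrompt('Tell me a short story about a Laravel developer.')
    ->asStream();

foreach ($stream as $chunk) {
    echo $chunk->text; // flush each piece to the client as it arrives
}
```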
Ollama needs a timeout increase. Local models can be slow. The default request timeout is 30 seconds. Override it per call:
Prism::text()
    ->using('ollama', 'llama3.2')
    ->withClientOptions(['timeout' => 120])
    ->withPrompt($prompt)
    ->asText();
Anthropic is strict about message order. If you're building a conversation with withMessages(), messages must alternate: UserMessage → AssistantMessage → UserMessage. Send them out of order and the API rejects the request with an error.
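A conversation that satisfies the alternation rule might look like this — assuming the message value objects live under Prism\Prism\ValueObjects\Messages, as in recent releases, so verify the namespace against your installed version:

```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\ValueObjects\Messages\UserMessage;
use Prism\Prism\ValueObjects\Messages\AssistantMessage;

// Strict alternation: user -> assistant -> user.
$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withMessages([
        new UserMessage('Summarise this post in two sentences: ...'),
        new AssistantMessage('Here is a two-sentence summary: ...'),
        new UserMessage('Now compress it to one sentence.'),
    ])
    ->asText();
```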
Pin your version in production. Prism is actively developed and ships breaking changes. The current stable release is v0.100.x. Lock it in composer.json:
composer require "prism-php/prism:^0.100.0"
Wrapping Up
Installing Prism, publishing the config, and making your first AI call takes under five minutes. The provider-agnostic API is the real payoff — you can prototype with a cheap OpenAI model and swap to Anthropic or a self-hosted Ollama model without touching your application code. From here, the natural next steps are streaming responses, tool calling for agentic workflows, and structured output for when you need the model to return typed data rather than freeform text.
Steven is a software engineer with a passion for building scalable web applications. He enjoys sharing his knowledge through articles and tutorials.