Released on March 12th, v0.3.0 is one of those updates that doesn't get a flashy blog post but absolutely deserves your attention — especially if you've already shipped something with the SDK.
Eleven contributors. Thirteen merged PRs. A mix of bug fixes and features, the kind of release that shows a package is getting used in real production environments by people who hit real edge cases.
Here's what changed.
Start here, because this one matters.
RemembersConversations::forUser() had a bug where conversation history could bleed between users. In the wrong scenario, user A could end up with fragments of user B's conversation. That's not a quirky edge case — that's a serious problem if you're building anything with per-user AI interactions.
Fixed in #260. If you're using conversation memory scoped by user, update now and don't think twice about it.
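For context, per-user conversation memory looks roughly like this. This is a sketch: the agent class, trait namespace, and prompt method are illustrative assumptions, but `forUser()` is the method this fix covers.

```php
<?php

use Laravel\Ai\Concerns\RemembersConversations; // namespace is an assumption

class SupportAgent
{
    use RemembersConversations;

    // ...agent definition...
}

// After #260, each user's stored history stays isolated:
$reply = (new SupportAgent)
    ->forUser($request->user())  // scopes conversation memory to this user
    ->prompt('Where did we leave off yesterday?');
```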
When an agent used a tool mid-conversation, the tool call and its result weren't being carried forward in subsequent turns. The model would lose context on what had already happened, which breaks multi-step agent workflows in subtle and annoying ways.
Fixed in #203. If you've had agents that seemed to "forget" what they just did, this was probably why.
The SDK supports provider failover — you list multiple providers and it falls back automatically when one fails. But until this release, that failover only triggered on rate limits and outages. If your API credits ran out, it just failed.
#249 extends failover to cover insufficient credits and quota errors too. Practically, this means your app stays up even if you've burned through your OpenAI budget for the month, assuming you have another provider configured.
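As a sketch, a failover chain might be configured along these lines. The key names here are assumptions for illustration, not the SDK's documented config.

```php
<?php

// config/ai.php (illustrative shape only; actual keys may differ)
return [
    'default' => 'openai',

    // Providers tried in order when the default fails. As of #249,
    // exhausted credits and quota errors trigger the fallback too,
    // not just rate limits and outages.
    'fallback' => ['anthropic', 'gemini'],
];
```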
Embedding requests — especially batches over larger chunks of text — can be slow. The SDK's default HTTP timeout was often too short for them, and there was no way to configure it separately from regular requests.
#262 adds a configurable timeout specifically for embedding requests. If you're running batch embedding jobs and hitting timeout errors, this is the fix.
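A config fragment to illustrate the idea; the exact key introduced by #262 may differ from this guess.

```php
<?php

// config/ai.php (illustrative; key name is an assumption)
return [
    'embeddings' => [
        // Give slow batch-embedding requests more headroom than
        // regular chat requests get.
        'timeout' => 120, // seconds
    ],
];
```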
OpenRouter was already usable for text generation. This release adds embedding support for it as well (#237).
Useful if you're using OpenRouter to route across multiple providers under one API key and want to keep your embedding calls in the same place.
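Something like the following call shape, where the facade, method, and parameter names are all assumptions made for illustration:

```php
<?php

use Laravel\Ai\Facades\AI; // facade name is an assumption

// With #237, embeddings can go through OpenRouter just like
// text generation, keeping everything behind one API key.
$vector = AI::provider('openrouter')
    ->embeddings('Summarize per-user retention trends');
```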
If you're running local models via Ollama, you can now specify embedding dimensions when generating vectors (#190).
Different models produce differently-sized vectors, and when you're storing them in a vector column you need that to be predictable. This gives you the control to set it explicitly rather than relying on defaults.
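A sketch of what that looks like in practice. The facade and method names are assumptions; the `dimensions` option is what #190 adds.

```php
<?php

use Laravel\Ai\Facades\AI; // facade name is an assumption

// Pin the vector size so it always matches your vector column,
// e.g. a 768-dimension pgvector column.
$vector = AI::provider('ollama')
    ->embeddings('payment failed after retry', dimensions: 768);
```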
Structured output — where you define a PHP class as the expected response shape and the model returns validated, typed data — had a bug in how the schema was being generated.
#252 by Ralph J. Smit fixes this. If you've been seeing unexpected validation failures when using structured output, give this a try.
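For reference, structured output follows this general shape. The response class is yours; the agent name and the `as()` binding here are assumptions about the SDK's API, not its documented surface.

```php
<?php

// The PHP class that defines the expected response shape:
class InvoiceSummary
{
    public string $vendor;
    public float $total;
    /** @var string[] */
    public array $lineItems;
}

// Illustrative call; "BillingAgent" and "as()" are hypothetical names.
$summary = (new BillingAgent)
    ->prompt('Summarize this invoice: ...')
    ->as(InvoiceSummary::class); // validated, typed result
```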
Previously, if you wanted to pass provider-specific options you had to do it at the call site. #166 lets you define those options directly on the agent class.
Cleaner agents, less repetition at every call site.
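Roughly, the difference is moving options like these onto the class. The property name below is an assumption for illustration.

```php
<?php

class ResearchAgent
{
    // Before #166, these had to be passed at every call site.
    // Now they can live on the agent itself and apply to every call.
    protected array $providerOptions = [
        'temperature' => 0.2,
        'max_tokens' => 1024,
    ];
}
```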
Some audio MIME types weren't being recognized when passing attachments to Prism-backed requests. Fixed in #247. Niche, but if audio attachments are part of your workflow, it matters.
@jackbayliss added a proper CI test workflow, fixed a Pint conflict with anonymous class helper method naming, and adjusted the AddsToolsToPrismRequestsTest suite. Nothing glamorous, but this is exactly how you keep a young package from accumulating quiet technical debt.
```shell
composer update laravel/ai
```
No breaking changes in this release. The conversation leakage fix alone makes it worth doing immediately on any app that uses per-user conversation memory.
The full changelog is on GitHub.