Almost a year ago, we first shared Mastra here (https://news.ycombinator.com/item?id=43103073). It's kind of fun looking back since we were only a few months into building at the time. The HN community responded with a lot of enthusiasm and some helpful feedback.
Today we shipped the stable release of Mastra 1.0, so we wanted to come back and talk about what's changed.
If you’re new to Mastra, it's an open-source TypeScript agent framework that also lets you create multi-agent workflows, run evals, inspect in a local studio, and emit observability.
Since our last post, Mastra has grown to over 300k weekly npm downloads and 19.4k GitHub stars. It’s now Apache 2.0 licensed and runs in prod at companies like Replit, PayPal, and Sanity.
Agent development is changing quickly, so we’ve added a lot since February:
- Native model routing: You can access 600+ models from 40+ providers by specifying a model string (e.g., `openai/gpt-5.2-codex`) with TS autocomplete and fallbacks.
- Guardrails: Low-latency input and output processors for prompt injection detection, PII redaction, and content moderation. The tricky thing here was the low-latency part.
- Scorers: An async eval primitive for grading agent outputs. Users were asking how they should do evals. We wanted to make it easy to attach to Mastra agents, runnable in Mastra studio, and save results in Mastra storage.
- Plus a few other features like AI tracing (per-call costing for Langfuse, Braintrust, etc), memory processors, a `.network()` method that turns any agent into a routing agent, and server adapters to integrate Mastra within an existing Express/Hono server.
(That last one took a bit of time: we went down the ESM/CJS bundling rabbit hole, ran into lots of monorepo issues, and ultimately opted for a more explicit approach.)
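For a sense of what the fallback part of model routing involves, here's a rough plain-TypeScript sketch of the pattern. To be clear, `parseModelString`, `callWithFallbacks`, and the registry shape are made up for this example — this is not our actual API, just the general idea:

```typescript
// Sketch: resolve a "provider/model" string and fall back down a list
// of candidates when a provider call fails. Illustrative only.
type ModelCall = (prompt: string) => Promise<string>;

function parseModelString(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash === -1) throw new Error(`Expected "provider/model", got "${id}"`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

async function callWithFallbacks(
  models: string[],
  registry: Record<string, ModelCall>,
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const id of models) {
    const { provider } = parseModelString(id);
    const call = registry[provider];
    if (!call) continue; // unknown provider: try the next candidate
    try {
      return await call(prompt);
    } catch (err) {
      lastError = err; // provider down or rate-limited: fall through
    }
  }
  throw lastError ?? new Error("No usable model in fallback chain");
}
```

The real version also has to handle per-provider auth, streaming, and typed autocomplete over the model strings, which is where most of the actual work was.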
Anyway, we'd love for you to try Mastra out and let us know what you think. You can get started with `npm create mastra@latest`.
We'll be around and happy to answer any questions!
*shudders in Vietnam War flashbacks* congrats on the launch guys!!!
for those who want an independent third-party endorsement, here's the Brex CTO talking about Mastra in their AI engineering stack: http://latent.space/p/brex
https://github.com/mastra-ai/mastra/blob/main/book/principle...
One thing to consider: working with workflows and branching logic felt clunky with non-LLM agents. I have a strong preference for using rules-based logic and heuristics first. That way, if I do need to bring in the big-gun LLMs, I already have the context engineering solved. To me, an agent means anything with agency. After a couple weeks of frustration, I started using my own custom branching workflows.
One reason to use rules: they're free and 10,000x faster, with an LLM agent as a fallback when the validation rules don't pass. Instead of running an LLM agent to solve a problem every single time, I can have the LLM write the rules once. Expressing all that in the framework is where it got messy for me.
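To make the pattern concrete, here's roughly what I mean as a minimal sketch — the rule set and `classifyWithLLM` are just placeholders for whatever validators and model call you actually have:

```typescript
// Rules-first validation: run cheap deterministic checks first, and only
// fall back to an LLM when no rule can decide.
type Verdict = "valid" | "invalid" | "unknown";
type Rule = (input: string) => Verdict;

const rules: Rule[] = [
  // Reject empty input outright.
  (s) => (s.trim().length === 0 ? "invalid" : "unknown"),
  // Accept anything that looks like an email address.
  (s) => (/^[\w.+-]+@[\w-]+\.[\w.]+$/.test(s) ? "valid" : "unknown"),
];

async function validate(
  input: string,
  classifyWithLLM: (input: string) => Promise<Verdict>,
): Promise<Verdict> {
  for (const rule of rules) {
    const verdict = rule(input);
    if (verdict !== "unknown") return verdict; // free and ~instant
  }
  return classifyWithLLM(input); // expensive fallback, only when rules punt
}
```

Most inputs never touch the model, which is where the "free and 10,000x faster" part comes from.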
Otherwise, Mastra is best in class for working with TypeScript.
I try to transfer as much work as I can out of LLMs and into deterministic steps. This includes most of the “orchestration” layer which is usually deterministic by nature.
Sprinkle a little bit of AI in the right places and you’ll get something that appears genuinely intelligent. Rely too much on AI and it’s dumb as fuck.
Make their tasks very small and simple (ideally, one step), give them only the context and tools that they need and nothing else, and provide them with feedback when they inevitably mess up (ideally, deterministically), and hope for the best.
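As a sketch of that deterministic-feedback loop — `step` and `parse` here are hypothetical stand-ins for a model call and a validator, not any framework's API:

```typescript
// Wrap a tiny LLM step in a retry loop: validate the output
// deterministically, and feed the failure back into the next attempt.
async function runWithFeedback<T>(
  step: (prompt: string) => Promise<string>,
  parse: (raw: string) => T, // deterministic validator; throws on bad output
  prompt: string,
  maxAttempts = 3,
): Promise<T> {
  let lastError = "";
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const raw = await step(
      lastError ? `${prompt}\nPrevious attempt failed: ${lastError}` : prompt,
    );
    try {
      return parse(raw); // deterministic check passed
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err);
    }
  }
  throw new Error(`Gave up after ${maxAttempts} attempts: ${lastError}`);
}
```

The "hope for the best" part is the `maxAttempts` bound: past that, you surface the failure instead of burning tokens.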
Do you have code snippets you can share about how you wanted to write the rules? Want to understand desired grammar / syntax better.
- How do you compare Mastra with Tanstack AI? And/or do you plan to build on top of Tanstack AI like the Vercel AI SDK?
- Since there's a Mastra cloud, do you have an idea as to what features will be exclusive to the hosted version?
Re: Mastra cloud -- this is basically hosted services, e.g. observability, hosted studio, and hosted serverless deployments, as distinct from the framework.
With server adapters you can now deploy your studio in your infra. We're going to pull multi-project / multi-user Mastra cloud features into a Mastra admin feature so you can run these locally or deploy them on your infra as well (with EE licensing for stuff like RBAC). Stay tuned here.
It's built on top of Vercel AI elements/SDK and it seems to me that was a good decision.
My mental heuristic is:
Vercel AI SDK = library, low level
Mastra = framework
Then Vercel AI Elements gives you an optional pre-built UI.
However, I read the blog post for the upcoming AI SDK 6.0 release last week, and it seems like it's shifting more towards being a framework as well. What are your thoughts on this? Are these two tools going to align further in the future?
I see each of us as having different architectures. The AI SDK is more low-level, while Mastra is more integrated, with storage powering our studio, evals, memory, workflow suspend/resume, etc.
I was hoping to actually engage with you but I guess you just came here to do marketing.
> AI SDK is more low-level
AI SDK was more low level. My question was, since the latest V6 release is moving towards higher level components, what do you think about that? How will you continue to differentiate your product if Vercel makes moves to eat your lunch?
That's almost certainly their intention here, following their highly successful Next.js playbook: start by creating low-level dev tools, gradually expand the scope, and make sure all the docs and setup guides steer you towards deploying on their infrastructure.
I wonder: Are there any large general purpose agent harnesses developed using Mastra? From what I can tell OpenCode chose not to use it.
A lot of people on here repeat that rolling your own is more powerful than using Langchain or other frameworks and I wonder how Mastra relates to this sentiment.
These days we see things going the other way, where teams that started rolling their own shift over to Mastra so they can focus on the agent vs having to maintain an internal framework.
The Latent Space article swyx linked earlier includes a quote from the Brex CTO talking about how they did that.
We're TypeScript-first and TypeScript-only, so a lot of the teams who use us are full-stack TypeScript devs who want an agent framework that feels TS-native, easy to use, and feature-complete.
I'm a happy Mastra user and I'm biased toward their success. But I think linking it to an unrelated project is only going to matter to non-technical CXOs who choose technology based on names, not merits. And that's not the audience Mastra needs to appeal to to be successful. Good dev tools and techs trickle from the bottom up in engineering organizations.
Most of us spent a lot of the last decade building Gatsby so it's sort of a personal identity/pride thing for us more than a marketing thing. But maybe we need to keep our identity small! Either way, thanks for saying something, worth thinking about.
say more pls?
But people kept asking us for a multi-agent primitive out of the box, so we shipped `agent.network()`, which is basically a dynamic hierarchy decided at runtime: pass in an array of workflows and agents to the routing agent and let it decide what to do, how long to execute, etc!
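Conceptually it's something like this plain-TypeScript sketch with a stubbed router — in the real thing the routing decision comes from an LLM, and none of these names match the actual `.network()` signature:

```typescript
// A router picks which sub-agent handles each task at runtime.
type SubAgent = { name: string; run: (task: string) => Promise<string> };

async function network(
  task: string,
  agents: SubAgent[],
  // Hypothetical router stub; in practice a routing agent makes this call.
  choose: (task: string, names: string[]) => string,
): Promise<string> {
  const names = agents.map((a) => a.name);
  const picked = agents.find((a) => a.name === choose(task, names));
  if (!picked) throw new Error("Router picked an unknown agent");
  return picked.run(task);
}
```

The interesting design questions are all in the loop around this: how many routing hops to allow, when to stop, and what context each sub-agent sees.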
Another useful question to ask: since you’re likely using 1 of 3 frontier models anyway, do you believe Claude Agent SDK will increasingly become the workflow and runtime of agentic work? Or if not Claude itself, will that set the pattern for how the work is executed? If you do, why use a wrapper?
For any agent you've shipped to production, though, you probably want a harness that's open source so you can more fully control and customize the experience.
You can take a look at the cloud platform at cloud.mastra.ai; it's currently in beta.
It's the same play we did at Gatsby to get to several million in ARR in a couple of years
But tons of other use cases too, eg dev teams at Workday and PayPal have built an agentic SRE to triage their alerts, etc etc