So, why would you use a SaaS contract for an agent in the first place? It should be like a subcontractor. I pay you to send 10k emails a day to all my clients. If you use an agent and it messes up, that's on you. If you use an agent and it saves you time, you get the reward.
I think people want to assign responsibility to the "agent" to wash their hands in various ways. I can't see it working though
It's just that you can't advertise that, or you ruin the service.
And it already does work. See the sweet, sweet deal Anthropic got recently (and if you think $1.5B isn't a good deal, look at the range of compensation they could have been subject to had they gone to court and lost).
Remember the story about Replit's LLM deleting a production database? All the stories were "AI goes rogue", "AI deletes database", etc.
If Amazon RDS just wiped a production DB out of nowhere, with no reason, the story wouldn't be "Rogue hosted database service deletes DB", it would be "AWS randomly deletes production DB" (and AWS would take a serious reputational hit because of it).
Me, as the person who sold it? OpenAI, whose models I use under the hood? Anthropic, who performs some of the work too? Is my customer responsible themselves?
These are questions that classic contracts don't usually cover because things tend to be more deterministic with static code.
Why? You have a delivery and you entered into some guarantees as part of the contract. Whether you use an agent or roll dice, you are responsible for upholding the guarantees you entered into as part of the contract. If you want to offload that guarantee, then you need to state it in the contract. Basically, what the MIT License does: "No guarantees, not even fitness for purpose." Whether someone is willing to pay for something where you accept no liability for anything is an open question.
Me, as the person who sold it? The vendor of a core library I use? AWS, who hosts it? Is my customer responsible themselves?
These are questions that classic contracts typically cover and the legal system is used to dealing with, because technical solutions have always had bugs and do unexpected things from time to time.
If your technical solution is inherently unreliable due to the nature of the problem it's solving (because it's an antivirus or firewall which tries its best to detect and stop malicious behavior but can't stop everything, because it's a DDoS protection service which can stop DDoS attacks up to a certain magnitude, because it's providing satellite Internet connectivity and your satellite network doesn't have perfect coverage, or because it uses a language model which by its nature can behave in unintended ways), then there will be language in the contract which clearly defines what you guarantee and what you do not guarantee.
If you chose OpenAI to be the one running your model, that's your choice, not mine. If your contract with them has a clause that they pay you if they mess up, great for you. Otherwise, that's the risk you took choosing them.
There’s nothing quite like CGL (commercial general liability insurance) in software.
Did your product fail to render those services? Or do damage to the customer by operating outside of the boundaries of your agreement?
There is no difference between "Company A did not fulfill the services they agreed to fulfill" and "Company A's product did not fulfill the services they agreed to fulfill", and nothing changes when that product happens to be in the category of AI agents.
Otherwise, "agents" as a class in contracts are well covered by existing law:
If I make roller skates and I use a bearing that results in the wheels falling off at speed and someone gets hurt, they don't sue the ball bearing manufacturer. They sue me.
LLMs are not actually intelligent, and absolutely should not be used for autonomous decision making. But they are capable of it... as in, if you set up a system where an LLM is asked about its "opinion" on what should be done, it will give a response, and you can make the system execute the LLM's "decision". Not a good idea, but it's possible, which means someone's gonna do it.
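To be concrete about how little wiring this takes (a made-up sketch; ask_llm stands in for whatever model API you like, and the account actions are stubs):

    # The LLM's "opinion" becomes an executed "decision". Nothing here is a
    # real API; the point is how trivially the pattern falls into place.
    def ask_llm(prompt: str) -> str:
        # Imagine a real model call; this stub always picks the scary option.
        return "DELETE"

    def delete_account(account_id: str) -> None:
        print(f"irreversibly deleting {account_id}")

    def suspend_account(account_id: str) -> None:
        print(f"suspending {account_id}")

    def send_reminder(account_id: str) -> None:
        print(f"reminding {account_id}")

    def handle_overdue_account(account_id: str) -> None:
        decision = ask_llm(
            f"Account {account_id} is 90 days overdue. "
            "Reply with exactly one of: REMIND, SUSPEND, DELETE."
        ).strip()
        # Nothing sits between the model's answer and execution.
        if decision == "DELETE":
            delete_account(account_id)
        elif decision == "SUSPEND":
            suspend_account(account_id)
        else:
            send_reminder(account_id)

    handle_overdue_account("acct_42")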
11/10 content marketing but it will be a shame if this gets any attention outside this comment section.
Unless I'm misunderstanding and GitLaw and CommonPaper are related or collaborating, I feel like this callout deserves to be mentioned earlier on and the changes / distinctions ought to be called out more explicitly. Otherwise, why not just use CommonPaper's version?
The contract establishes that your agent functions as a sophisticated tool, not an autonomous employee. When a customer's agent books 500 meetings with the wrong prospect list, the answer to "who approved that?" cannot be "the AI decided."
It has to be "the customer deployed the agent with these parameters and maintained oversight responsibility."
The MSA includes explicit language in Section 1.2 that protects you from liability for autonomous decisions while clarifying customer responsibility.
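A minimal sketch of what that evidence trail could look like (field names and storage are my own assumptions, not anything from the MSA):

    # Record the customer-set parameters at deployment time, so "who
    # approved that?" has an answer other than "the AI decided".
    import json
    import time

    def deploy_agent(customer_id: str, params: dict, approved_by: str) -> dict:
        record = {
            "customer_id": customer_id,
            "params": params,            # e.g. prospect list, daily meeting cap
            "approved_by": approved_by,  # a named human, per the oversight language
            "deployed_at": time.time(),
        }
        # Append-only log: the paper trail the contract language leans on.
        with open("agent_deployments.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")
        return record

    deploy_agent(
        "cust_123",
        {"prospect_list": "q3_enterprise.csv", "max_meetings_per_day": 25},
        approved_by="jane.doe@customer.example",
    )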
The alternative is that the service has financial responsibility for its mistakes. This is the norm in the gambling industry. Back when GTech was publicly held, their financial statements listed how much they paid out for their errors. It was about 3%-5% of revenue.
Since this kind of product is sold via large scale B2B deals, buyers can negotiate. Perhaps service responsibility for errors backed up by reinsurance above some limit.
Good for consultants, maybe, horrible for businesses that want to mark things as "done" and move them to limited maintenance/care and feeding teams. You're going to be dedicating senior folks to the project indefinitely.
Imagine I create a new agreement with a customer once a week. I’m no lawyer, so I might not notice the impact of small wording changes on the meaning or interpretation of each sequential contract.
Can I try to prompt-engineer this out? Yeah, sure. Do I, as a non-lawyer, know I have fixed it? Not to a high level of confidence.
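The cheapest guardrail I can think of is purely mechanical (a sketch; file names are hypothetical, and it only tells you that wording changed, not whether it matters; that stays a human's call):

    # Flag any wording drift between sequential generated agreements and
    # block sending until a human (ideally a lawyer) signs off on the delta.
    import difflib
    from pathlib import Path

    def contract_diff(prev_path: str, curr_path: str) -> list[str]:
        prev = Path(prev_path).read_text().splitlines()
        curr = Path(curr_path).read_text().splitlines()
        return list(difflib.unified_diff(
            prev, curr, fromfile=prev_path, tofile=curr_path, lineterm=""))

    changes = contract_diff("agreement_week_37.txt", "agreement_week_38.txt")
    if changes:
        print("\n".join(changes))
        raise SystemExit("Wording changed since last week; route to legal review.")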
What is the stated use case here?
Do you have any examples where it would be okay?
Also it might be that with systems that learn and change behavior over time, some sort of new contract structure is needed. Not sure if a traditional one is the answer, though.
This is not the way we want to be going.