With the recent buzz around MCP, it made me think about what I've read about other unifying protocol attempts in the past. Why did these 2000s era interoperability protocols fail, and what does MCP do different? Was it a matter of security issues in a newly networked world? A matter of bad design? A matter of being too calcified? I would love to hear from those who were around that time.
They didn't. SOAP is still widely used. COM and CORBA and similar IPC were mostly replaced by HTTP-based protocols like REST or GraphQL (which would have seemed like wasteful overkill a few decades ago; now nobody bats an eye).
> what does MCP do different?
Nothing, it reinvents the wheel. To be charitable, let's call it starting from a clean slate :)
> Was it a matter of security issues in a newly networked world?
Lol, no. As we all know, the "S" in "MCP" stands for "security". Old geezers like SOAP can be secure when properly implemented.
> A matter of bad design?
They are definitely much more complex than some of the newer stuff, mostly because they grew to support more complex use cases that newer protocols can avoid or simplify. And yeah, as noted in other comments, there's a heavy OOP influence that newer stuff has rolled back considerably.
> A matter of being too calcified?
More a matter of not being in vogue and not supported out of the box in languages such as JS or Python.
You have to consider how much REST gives you for "free": encryption, compression, authentication, partial content, retransmission, congestion control, etc.
Of course HTTPS only applies to transport, not storage...
COM is alive and well in the LAN space, too. I see it fairly frequently in industrial automation under the guise of OPC.
So COM did not fail as a standard, it just failed to conquer the whole world. It's doing fine though in its natural habitat.
My view is that mostly it was a confluence of poor dev experience and over-engineering that killed them.
Some of those protocols were well designed. Some were secure; all were pretty awful to implement.
It's worth calling out REST as a long-term success, mainly because it was simple and flexible.
Whether MCP will have that staying power I dunno; personally I think it still has some flaws, and the implementation quality is all over the shop. Some of the things that make it easy (stdio) also create its biggest flaws.
To be fair, REST as described in the Fielding paper is rare to come across - the success is JSON via HTTP
> REST [...] is a software architectural style that was created to describe the design [...] of the architecture for the World Wide Web. [1]
People forget that the Web is the original REST. You have
- resources (https://news.ycombinator.com/item?id=45468477)
- HTTP verbs (GET, POST)
- and hypermedia controls (<a href="item?id=45468365">parent</a>)
This is all you need to meet Fielding's definition, and it has become so ubiquitous that we don't even notice it anymore.
  {
    "links": {
      "users": "/root/users",
      "posts": "/root/posts",
      "mail": "/root/mail"
    }
  }
So then the client could make a request for "/root/users", which would return a list of users along with URLs for accessing individual users. The client could then request "/root/users/alice" and the server would return the information on Alice along with URLs for actions like sending her a message or deleting her.
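For example (a made-up response shape, just to show that the next actions arrive as links rather than from documentation), "/root/users/alice" might return:

  {
    "name": "alice",
    "links": {
      "self": "/root/users/alice",
      "send-message": "/root/users/alice/messages",
      "delete": "/root/users/alice"
    }
  }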
But all of that interaction is driven from information returned from the server. All the client needed to begin was just the root URL.
Most non-REST JSON APIs today communicate their endpoints via out-of-band API documentation like Swagger.
Quoting Fielding:
> A REST API must not define fixed resource names or hierarchies (an obvious coupling of client and server). Servers must have the freedom to control their own namespace. Instead, allow servers to instruct clients on how to construct appropriate URIs
> [...]
> A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations.
https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...
With something like
  Content-Type: application/json

  {
    "name": "",
    ...
  }
you don't really know much. With something like example/vnd.mycoolapp.user+json, you get typing information: you know that the representation of /users/me the server has given you is a JSON variant of a mycoolapp User. (How the types are known is a priori knowledge of the client, but there are standard link relations you can use to reference API documentation, both human-readable and machine-readable.) A good example of this is your web browser: it knows how to display text/html documents of various character sets, probably knows text/plain, and might know application/json.
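So a response for /users/me might look like this instead (headers and body invented for illustration, continuing the mycoolapp example):

  HTTP/1.1 200 OK
  Content-Type: example/vnd.mycoolapp.user+json

  {
    "name": "alice"
  }

Same bytes in the body, but now the client knows which kind of thing it's holding.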
The best part? HTTP has a standard mechanism for defining these link relations even if the representation itself isn't amenable to embedding links[2]: the Link header. You can use it to navigate through a collection (previous, item, collection, etc.), get documentation about the API (service-doc, service-desc), find things that are related (related), get STUN and TURN information (ice-server), etc. I doubt very much of this is used in practice, but there is a very rich set of standardized relations that could be used here.
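For example, a collection response could advertise navigation and documentation purely through standardized relations (the paths here are made up):

  Link: </root/users?page=1>; rel="previous"
  Link: </root/users>; rel="collection"
  Link: </docs/openapi.json>; rel="service-desc"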
(A lot of people admittedly assume "using PUT in a couple of places" is doing REST, or that not calling all of your method endpoints doAction.pl or whatever is "RESTful", and I think that definition has become so widespread that talking about something that actually approaches the architectural style tends to require reduplication.)
[1]: https://www.iana.org/assignments/media-types/media-types.xht...
[2]: https://www.iana.org/assignments/link-relations/link-relatio...
Edit: one other thought, I'm starting to see how the semantic web fits into this vision. Too bad it was too brittle in practice (though the SOLID folks are doing some exciting stuff).
Relatively speaking, I think there aren't that many people who even know it's a possibility, let alone use it.
(You even see discussions about whether the status codes should be used at all, when they were really purpose-built to be generalizable.)
This partially is self-reinforcing: the server libraries don't do anything with media types generally outside of maybe a "FileResponse"-ish class or function and the client libraries don't tend to implement it because it's not supported.
Part of it too may be that interoperability isn't necessarily an underlying goal.
> one other thought, I'm starting to see how the semantic web fits into this vision. Too bad it was too brittle in practice (though the SOLID folks are doing some exciting stuff).
Indeed! There's a lot of similar ideas with this and you definitely have things you can fit in there. :)
[1] https://modelcontextprotocol.io/specification/2025-06-18/bas...
I say this because COM and DCOM are very much alive in the Windows ecosystem, underlying WinRT, which underlies the object-oriented APIs for modern Windows apps.
But I work in the industrial automation space and we deal with OPC-DA all the time, which is layered on top of DCOM which is layered on COM on Windows. DCOM is a pain to administer, and the "hardening" patch a couple of years ago only made it worse. These things linger.
SOAP was nice and simple until the Architecture Astronauts got their hands on it and layered-on lots of higher level services.
MCP isn't really like either of these - it's an application-level protocol, not a general-purpose one.
For the most part, everyone used some kind of SDK that translated WSDL (Web Services Description Language) specifications to their chosen language.
So you could define almost any function - like PostBlog(Blog blog), and then publish it as a WSDL interface to be consumed by a client. We could have a Java server, with a C# client, and it more or less just worked.
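For flavor, the request on the wire for a call like that looked roughly like this (namespaces and element names invented for illustration):

  <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Body>
      <PostBlog xmlns="http://example.com/blogservice">
        <blog>
          <Title>Hello, world</Title>
          <Content>My first post</Content>
        </blog>
      </PostBlog>
    </soap:Body>
  </soap:Envelope>

The WSDL described that shape, and the SDK on each side generated the stubs, which is why the Java/C# mix mostly just worked.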
We used it with things like signatures, so the data in the message wasn't tampered with.
Why did it stop getting popular? It probably really started to fall out of favor when Java/C# stopped being some of the more popular programming languages for web development, and PHP and Ruby got a lot more momentum.
The idea was that REST/JSON interfaces would be easier to understand, as we would have a hypermedia interface. There was sort of an attempt to make a RESTy interface work with XML, called WebDAV, that Microsoft Office supported for a while, but it was pretty hard to work with.
I've got some old SOAP code from 2001 here at the bottom of this article:
https://www.infoworld.com/article/2160672/build-portals-with...
Could do a whole API AMA on this.
A lot of the top vulnerabilities are from SOAP: https://owasp.org/www-project-top-ten/
This is one that affects SOAP: https://owasp.org/www-community/vulnerabilities/XML_External...
XML Injection is tied to that, and with the decline of SOAP it's no longer a top vulnerability.
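For anyone who hasn't seen XXE: the classic payload declares an external entity in the DTD and lets the parser expand it, something like:

  <?xml version="1.0"?>
  <!DOCTYPE order [
    <!ENTITY xxe SYSTEM "file:///etc/passwd">
  ]>
  <order><id>&xxe;</id></order>

If the server echoes the parsed value back, you've just read a file off its disk; the usual mitigation is disabling DTD/external entity resolution in the parser.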
There's another more comprehensive list here: https://brightsec.com/blog/top-7-soap-api-vulnerabilities/#t...
Wow, this is a great example of the importance of making escaping rules clear and simple.
Relatedly, nobody really does REST as Roy F initially defined it, which is now referred to as HATEOAS. Also too much work.
If you haven't read the Worse is Better paper, definitely worth it. Top 5 all time.
XML had two problems. Most obviously it is verbose, but people didn’t care because XML was really smart. Amazingly smart. The second problem is that XML technologies were too smart. Most developers aren’t that smart and had absolutely no imagination necessary to implement any of this amazing smartness.
JSON kind of, but not really, killed XML. It's like how people believe Netflix killed Blockbuster. Blockbuster died because of financial failures due to too rapid late stage expansion and format conversion. Netflix would have killed Blockbuster later had Blockbuster not killed itself first. JSON and XML are kind of like that. JSON allowed for nested data structures, but JSON never tried to be smart. To the contrary, JSON tried to be as dumb as possible: not as dumb as CSV, but pretty close.
What amazes me in all of this is that people are still using HTTP for so much data interchange like it’s still the late 90s. Yeah, I understand it’s ubiquitous and sessionless but after that it’s all downhill and extremely fragile for any kind of wholesale large data replication or it costs too much at the thread level for massively parallel operations.
With these technologies, a server can return a reference to, say, a Person. The client can then do "person.GetName()" or similar. The method calls are implemented as "stubs" that act as proxies that simply send the RPC along with the references to the objects they operate on. The server-side RPC implementation keeps a mapping between references and actual in-memory objects, so that calls to references call the right thing in the server process.
The benefit is that you can work with APIs in ways that feel natural. You can do "persons.GetPerson("123").GetEmployer().GetEmployees()" or whatever — everything feels like you're working with in-memory objects.
This has drawbacks. One is that the cost of method calls is obscured by this "location transparency", as it's never obvious what is remote or local. Another problem is that the server is required to keep an object around until a client releases it or dies. If the client dies without first releasing, the objects live until a keepalive timer triggers. But because a malformed client can keep objects around, the system is vulnerable to high memory use (and abuse). In the end you'd often end up holding a whole graph of objects, and nothing would be released until all references were released. Leaks can be difficult to find.
My knowledge of COM/CORBA may be incomplete, but I never understood why the server couldn't implement these in terms of "locators". For example, if a server has a "GetPerson(string id) -> Person" type method, rather than sending an object reference that points to an in-memory Person object, it could return a lightweight, opaque string like "person:123". Any time the client's internal proxy passed this back to the server, the server could look it up; the glue needed to resolve these identifiers back into real objects would be a little more work on the part of the developer, but it would sidestep the whole need to keep objects around. And they could be cached quite easily.
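Roughly what I have in mind, as a sketch (hypothetical code, not any real COM/CORBA API):

  // Locator style: the wire carries opaque handles, not live object references.
  type Handle = string; // e.g. "person:123"

  interface PersonRecord { id: string; name: string; }

  // Stand-in for whatever backing store the server already has.
  const people = new Map<string, PersonRecord>([
    ["123", { id: "123", name: "Alice" }],
  ]);

  function getPerson(id: string): Handle {
    if (!people.has(id)) throw new Error("no such person");
    return `person:${id}`; // nothing is pinned in server memory
  }

  function getName(handle: Handle): string {
    const record = people.get(handle.replace(/^person:/, "")); // re-resolved per call
    if (!record) throw new Error("stale handle");
    return record.name;
  }

  console.log(getName(getPerson("123"))); // "Alice"

The handles can be passed around, cached, or dropped on the floor without the server ever having to track who holds them.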
Cap'n Web [1] is the first RPC system in a long time (as far as I know) that implements object references. However, it does this in a pretty different way with different pros and cons.
[1] https://blog.cloudflare.com/capnweb-javascript-rpc-library/
Cap'n Proto does it too, and has been around since 2013. We use it extensively in the implementation of Cloudflare Workers. Many people who have joined the team initially thought "what is this weird thing? Why don't we just use gRPC instead?", and then after a few months of using it decided it's actually a superpower. I'm planning to write more about this on the Cloudflare blog in the next couple months, probably.
(Cap'n Proto is itself based on CapTP, the protocol used in the E programming language.)
I never actually used COM or CORBA, but my impression is there are a few reasons they didn't work where Cap'n Proto does:
1. Excessive complexity. CORBA is a monstrously large standard, covering not just the protocol but also system architecture ("Object Request Brokers").
2. Lack of asynchronous programming. CORBA calls would synchronously block the calling thread until the call completed. But when making calls over a network (rather than locally), it's much more important that you be able to do other things while you wait. CORBA added (extremely complex) asynchronous I/O support late in its life but few people ever used it.
3. Lack of promise pipelining. This sort of follows from #2 (at least, I don't know how you'd express promise pipelining if you don't have promises to start with). Without promise pipelining, it's incredibly hard to design composable interfaces, because they cannot be composed without adding a round trip for every call. So instead you end up pushed towards big batch requests, but those don't play well with object-oriented API design. (There's a quick sketch of what pipelining buys you near the end of this comment.)
4. Poor lifecycle management. An object reference in CORBA was (I am told) "just data", which could be copied anywhere and then used. The server had no real way of being notified when the object reference was no longer needed, unless clients proactively told it so (but this was up to the app). Cap'n Proto ties object lifetime to connections, so when a connection is lost, all the object references held across it are automatically disposed. Cap'n Proto's client libraries are also designed to carefully track the lifecycle of a reference within the client app, so that as soon as it goes out-of-scope (GC'd, destructor runs, etc.), a message can be sent to the server letting it know. This works pretty well.
5. Bad security model. All objects existed in a global namespace and any client could connect to any object. Access control lists had to be maintained to decide which clients were allowed access to which objects. This is a bolted-on security mechanism that sounds simple but in practice is extremely tedious and error-prone, and often people would avoid it by implementing coarse-grained security models. Cap'n Proto implements an object-capability model, aka capability-based security. There is no global namespace of objects. To access one, you have to first receive an object reference from someone who already has one. Passing someone an object reference implies giving them permission to use it. This may at first sound more complicated, but in practice it turns out to map very cleanly to common object-oriented API design patterns.
As a result of all this, in Cap'n Proto (and Cap'n Web), you can pretty much use the exact same API design patterns you'd use in a modern programming language, with lots of composable objects and methods, and it's all safe and efficient.
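To make the pipelining point (#3) concrete, here's a pseudo-TypeScript sketch of the difference; this is not the actual Cap'n Proto or Cap'n Web API, just the shape of the idea:

  // Without pipelining: three sequential network round trips.
  const person = await persons.getPerson("123");
  const employer = await person.getEmployer();
  const staff = await employer.getEmployees();

  // With pipelining: calls on not-yet-resolved references are queued and sent
  // together, so the whole chain costs roughly one round trip.
  const staff2 = await persons.getPerson("123").getEmployer().getEmployees();

The second form is exactly the "feels like in-memory objects" style from upthread, without paying a round trip per dot.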
(I'm the author of Cap'n Proto and Cap'n Web.)
Neither DCOM nor CORBA worked well over the internet. DCOM was designed in an era when Bill Gates was publicly shitting on the internet and saying the future was closed BBS networks like CompuServe, AOL and MSN. Microsoft didn't have a reasonable internet strategy back then; even TCP/IP support was kinda ropey.

So DCOM assumed an entirely flat network in which every machine had a unique network address, was up all the time, there were no firewalls anywhere and encryption wasn't needed. As a consequence, passing a callback to an object - a very OOP and idiomatic thing to do - meant the server would try to connect back to the client. Every RPC system back then made this mistake, also in the UNIX world with Java's RPC system (RMI), Sun RPC etc. Thus every client in these architectures was also a server.

This idea was just about tenable up until the DSL rollout, when teenagers started noticing that every Windows XP box on the internet was also a server with a bunch of exposed RPC interfaces, all of which was connected to piles of crappy old C++ riddled with memory safety bugs. After a few megaworms Microsoft pushed updates that added a firewall and closed the entire RPC port by default, instead of providing any kind of finer grained support for firewalling at the object exporter level, and that was the end of any chance of using this 90s generation of RPC.
HTTP, on the other hand, had a very clear notion of what was a server and what was a client. It also had SSL/TLS, developed by Netscape for the web, which was one of the first widely deployed cryptographic protocols. We take it for granted now, but stuff like DCOM had no equivalent and no effort to develop any. After all, objects are exported over wired ethernet ports at your office, right? Why would you need any encryption? The need for it on the web was driven by eCommerce, but nobody used DCOM or CORBA for exporting shops over the network.
Systems like DCOM did have object capabilities and lifecycle handling, but that wasn't terribly useful because the default timeout on pings was absurdly high, like 20 minutes or something, and so in practice the ability to hold references to stateful objects could cause memory leaks accidentally and of course the moment anyone wanted to DoS attack you it was all over.
That's started going the other direction, with people more willing to do things like generate code for GraphQL, now that code size is less of an issue.
Besides that, a lot of these protocols come with other baggage due to their legacy. Try reading the COM documentation relating to threading: https://learn.microsoft.com/en-us/windows/win32/com/in-proce...
https://medium.com/@octskyward/why-did-the-%C3%BCber-protoco...
When web development became accessible to the masses and the number of fast-moving resource-strapped startups boomed, apps and websites needed to integrate data from 3rd parties they had no prior relationship/interaction with, and a lighter and looser mechanism won -- REST (ish), without client/server transactional contracts and without XML, using formats and constructs people already knew (JSON, HTTP verbs).
A parallel to SOAP would be hypermedia and OpenAPI, which let you dynamically discover an API with a remote call and generate a matching set of request and response data structures to interact with it.
SOAP was actually pretty cool, if a bit heavyweight. It's still very much alive in the corporate .NET world.
Simple is beautiful.