"But you can use 3rd party repositories!" Yeah, and I also can just download the library from its author's site. I mean, if I trust them enough to run their library, why do I need opinionated middle-men?
> "But you can use 3rd party repositories!"
That's not something I said.
You're saying it's _rare_ for developers to want to advance a dependency past the ancient version contained in <whatever the oldest release they want to support>?
Speaking for the robotics and ML space, that is simply the opposite of a true statement where I work.
Also doesn't your philosophy require me to figure out the packaging story for every separate distro, too? Do you just maintain multiple entirely separate dependency graphs, one for each distro? And then say to hell with Windows and Mac? I've never practiced this "just use the system package manager" mindset so I don't understand how this actually works in practice for cross-platform development.
These are very, very common problems; not edge cases.
Put another way: y'all know we got all these other package management/containerization/isolation systems in large part because people tried the C-library-install-by-hand/system-package-all-the-things approaches and found them severely lacking, right? CPAN was considered a godsend for a reason. NPM, for all its hilarious failings, even more so.
Honestly? Over the course of my career, I've only rarely encountered these sorts of problems. When I have, they've come from poorly engineered libraries anyway.
That risk/QA load can be worth it, but it isn't always. For an OS, it helps to be able to upgrade SSL (for instance).
In my use cases, all this is a strong net negative. npm-based projects randomly break when new "compatible" versions of libraries get installed for new devs. C/C++ projects don't build because of include/lib path issues or lack of installation of some specific version or who knows what.
If I need you to install the SDL 2.3.whatever libraries exactly, or use react 16.8.whatever to be sure the app runs, what's the point of using a complex system that will almost certainly ensure you have the wrong version? Just check it in, either by an explicit version or by committing the library's code and building it yourself.
1. The accepted solution to what you're describing, in terms of development, is passing appropriate flags to `./configure`, specifying the paths to the alternative versions of the libraries you want to use. This is as simple as it gets.
As for where to get these libraries from in the event that the distro doesn't provide the right version, `./configure` is basically a script. Nothing stopping you from printing a couple of ftp mirrors in the output to be used as targets for wget.
2. As for the problem of distribution of binaries and related up-to-date libraries, the appropriate solution is a distro package manager. A C package manager wouldn't come into this equation at all, unless you wanted to compile from scratch to account for your specific circumstances, in which case, goto 1.
I primarily write C nowadays to regain sanity from doing my day job, and the fact that there is zero bit rot and no setup/fixing/fiddling needed to get things running is in stark contrast to the horrors I have to deal with professionally.
I have spent the better part of 10 years navigating around C++'s deplorable dependency management story with a slurry of Docker and apt, which had better not be part of anyone's story about how C is just fine. I've now been moving our team to Conan, which is also a complete shitshow for the reasons outlined in the article: there is still an imaginary line where Conan lets go and defers to "system" dependencies, with a completely half-assed and non-functional system for communicating and resolving those dependencies, one which doesn't work at all once you need to cross-compile.
Distributions have solved a very specific problem quite nicely: they are building what is effectively one application (the distro) with many optional pieces, it has one set of dependencies, and the users update the whole thing when they update. If the distro wants to patch a dependency, it does so. ELF programs that set PT_INTERP to /lib/ld-linux-[arch].so.1 opt in to the distro's set of dependencies. This all works remarkably well, and a lot of tooling has been built around it.
But a lot of users don't work in this model. We build C/C++ programs that have their own set of dependencies. We want to try patching some of them. We want to try omitting some. We want to write programs that are hermetic in the sense that we are guaranteed to notice if we accidentally depend on something that's actually an optional distro package. The results ... are really quite bad, unless the software you are building is built within a distro's build system.
And the existing tooling is terrible. Want to write a program that opts out of the distro's library path? Too bad -- PT_INTERP really really wants an absolute path, and the one and only interpreter reliably found at an absolute path will not play along. glibc doesn't know how to opt out of the distro's library search path. There is no ELF flag to do it, nor is there an environment variable. It doesn't even really support a mode where PT_INTERP is not used but you can still do dlopen! So you can't do the C equivalent of Python venvs without a giant mess.
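To make the venv analogy concrete, here's a minimal sketch of the closest thing that exists today (the library path and the `whatever_init` symbol are made up for illustration): you can dlopen() a library by absolute path and the loader takes it as-is, but anything that library pulls in via DT_NEEDED still goes through the distro's search path, so the "isolation" stops one level deep.

```c
/* Minimal sketch, not a real venv: dlopen() by absolute path skips the
 * search path for this one object, but its own DT_NEEDED dependencies
 * are still resolved the distro's way. Path and symbol are hypothetical.
 * Build: cc main.c (add -ldl on glibc older than 2.34). */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *h = dlopen("/opt/myapp/lib/libwhatever.so.2", RTLD_NOW | RTLD_LOCAL);
    if (!h) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    int (*init)(void) = (int (*)(void))dlsym(h, "whatever_init");
    if (init)
        init();

    dlclose(h);
    return 0;
}
```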
pkgconf does absolutely nothing to help. Sure, I can write a makefile that uses pkgconf to find the distro's libwhatever, and if I'm willing to build from source on each machine* (or I'm writing the distro itself) and if libwhatever is an acceptable version* and if the distro doesn't have a problematic patch to it, then it works. This is completely useless for people like me who want to build something remotely portable. So instead people use enormous kludges like Dockerfile to package the entire distro with the application in a distinctly non-hermetic way.
Compare to solutions that actually do work:
- Nix is somewhat all-encompassing, but it can simultaneously run multiple applications with incompatible sets of dependencies.
- Windows has a distinct set of libraries that are on the system side of the system vs ISV boundary. They spent decades doing an admirable job of maintaining that boundary. (Okay, they seem to have forgotten how to maintain anything in 2026, but that's a different story.) You can build a Windows program on one machine and run it somewhere else, and it works.
- Apple bullies everyone into only targeting a small number of distros. It works, kind of. But ask people who like software like Aperture whether it still runs...
- Linux (the syscall interface, not GNU/Linux) outdoes Microsoft in maintaining compatibility. This is part of why Docker works. Note that Docker and all its relatives basically throw out the distro model of interdependent packages all from the same source. OCI tries to replace it with a sort-of-tree of OCI layers that are, in theory, independent, but approximately no one actually uses it as such and instead uses Docker's build system and layer support as an incredibly poorly functioning and unreliable cache.
- The BSDs are basically the distro model except with one single distro each that includes the kernel.
I would love functioning C virtual environments. Bring it on, please.
The idea of a protocol for “what artifacts in what languages does $thing depend on and how will it find them?” as discussed in the article would be incredibly powerful…IFF it were adopted widely enough to become a real standard.
I'm not very familiar with MySQL, but for C (which is what we're talking about here) I typed mysql here and it gave me a bunch of suggestions: https://packages.debian.org/search?suite=default&section=all... Debian doesn't ship binary blobs, so I guess that's not a problem.
"I have to build something on 10 different distros" is not actually a problem that many people have.
Also, let the distros package your software. If you're not doing that, or if you're working against the distros, then you're storing up trouble.
I think you're going to need to know that either way if you want to run a dynamically linked binary using a library provided by the OS. A package manager (for example Cargo) isn't going to help here because you haven't vendored the library.
To match the npm or pip model you'd go with nix or guix or cmake and you'd vendor everything and the user would be expected to build from scratch locally.
Alternatively you could avoid having to think about distro package managers by distributing with something like flatpak. That way you only need to figure out the name of the libssl package the one time.
Really, issues shouldn't arise unless you try to use a library that doesn't have a sane build system. You go to vendor it and it's a headache to integrate. I guess there's probably more of those in the C world than elsewhere, but you could maybe just try not using them?
GLIBC_2.38 not found
If you're supplying your own binaries and not compiling/linking them against the distro-supplied glibc, that's on you.
But that's not the point I'm making. I'm attacking the idea that they're "working just fine" when the above is a bug that nearly everyone hits in the wild as a user and a developer shipping software on Linux. It's not the only one caused by the model, but it's certainly one of the most common.
That was very much the point of using a Linux distro (the clue is in the name!) Trying to work in a Windows/macOS way where the "platform" does fuck-all and the developer has to do it all themselves is the opposite of how distros work.
Plus, we already have great C package management. It's called CMake.
That would be fine if it only affected that first layer of a basic library and a basic app, but this kind of habit stacks up across multiple layers and ends up in software used by many people.
Not sure that I would go so far as to suggest these kinds of languages with runaway dependency cultures shouldn't exist, but I will go so far as to say any languages that don't already have that culture need to be preserved with respect like uncontacted tribes in the Amazon. You aren't just managing a language, you are also managing process and mind. Some seemingly inefficient and seemingly less powerful processes and ways of thinking have value that isn't always immediately obvious to people.
I am not sure if it is just me, but I seem to constantly run into broken vcpkg packages with bad security patches that keep them from compiling, cmake scripts that can't find the binaries, missing headers and other fun issues.
Avoid at all cost.
And so when it comes to dynamic dependencies (including shared libraries) that are not resolved until runtime, you hit language-level constraints. With C libraries the problem is not merely that distribution packagers chose to support single versions of dependencies because it is easy; it's that the loader (provided by your C toolchain) isn't designed to support anything else.
And if you've ever dug into the guts of glibc's loader, it's 40 years of unreadable cruft. If you want to take a shot at the C-shaped hole, take a look at that: look at decoupling it from the toolchain and adding support for multiple-version resolution and other basic features of module resolution in 2026.
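For what it's worth, glibc does have one narrow primitive pointing in that direction: link-map namespaces via dlmopen(). A rough sketch (the library paths are hypothetical) of loading two incompatible versions of the same library side by side, which is roughly what first-class multiple-version resolution would need to make routine:

```c
/* Rough sketch using glibc's dlmopen() link-map namespaces; library
 * paths are hypothetical. Each LM_ID_NEWLM namespace gets its own copy
 * of the library and its dependencies, so two versions can coexist in
 * one process. In practice namespaces are capped at a small number and
 * fall over on libraries that assume process-global state, which is
 * part of why nobody builds general dependency resolution on them. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *v1 = dlmopen(LM_ID_NEWLM, "/opt/vendored/libfoo.so.1", RTLD_NOW);
    void *v2 = dlmopen(LM_ID_NEWLM, "/opt/vendored/libfoo.so.2", RTLD_NOW);
    if (!v1 || !v2) {
        fprintf(stderr, "dlmopen: %s\n", dlerror());
        return 1;
    }

    printf("two versions of libfoo loaded in separate namespaces\n");

    dlclose(v1);
    dlclose(v2);
    return 0;
}
```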
You meant: it's 40 years of debugged and hardened run-everywhere never-fails code, I suppose.
I've written two C package managers in my life. The most recent one is mildly better than the first from a decade ago, but still not quite right. If I ever build one I think is good enough I'll share it, only to most likely learn about 50 edge cases I didn't think of :)