Looks like each container gets its own lightweight Linux VM.
Can take it for a spin by downloading the container tool from here: https://github.com/apple/container/releases (needs macOS 26)
The former is for apps to ship with container sidecars (and cooler news IMO); the latter is 'I am a developer and I want to `docker run ...`'.
(Oh, and container has a submission here: https://news.ycombinator.com/item?id=44229239)
That sounds pretty heavyweight. A project with 12 containers will run 12 kernels instead of 1?
Curious to see metrics on this approach.
The performance overhead of the VM is minimal; the main tradeoff is container startup time.
I imagine this is certainly happening already inside Apple datacenters.
Linux kernel overhead itself, while non-trivial, is still very manageable in those settings. AWS Nitro's stripped-down VM kernel is about 40 MB; I suppose Apple's solution will be similar.
For hosted services, you have to choose: is it worth running a single kernel with a lot of containers for the cost savings from shared resources, or do you isolate them in separate VMs? There are certainly container products that lean towards the latter, at least by default.
For development it matters a lot less, as long as the sum resources of containers you are planning to run don't overload the system.
On non-Linux, you obviously need an additional kernel running (the Linux kernel). In this case, there are N additional kernels running.
That seems to be true in practice, but I don't think it's obviously true. As WSL1 shows, it's possible to make an emulation layer for Linux syscalls on top of quite a different operating system.
It was a strategy that failed in practice and needed to be replaced with a VM-based approach.
The Linux kernel has a huge surface area with some subtle behavior in it. There was no economical way to replicate all of that and keep it up to date in a proprietary kernel, especially as VM tech is well established and reusable.
> an emulation layer for Linux syscalls on top of quite a different operating system.
My point was that, in principle, it could be possible to implement Linux containers on another OS without using VMs.
However, as you said (and so did I), in practice no one has. Probably because it's just not worth the effort compared to just using a VM. Especially since all your containers can share a single VM, so you end up only running 2 kernels (rather than e.g. 11 for 10 containers). That's exactly how Docker on WSL2 works.
Though I don't think it ever supported docker. And wasn't really expected to, since the entire namespaces+cgroup stuff is way deeper than just some surface level syscall shims.
Only "obvious" for running Linux processes using Linux container facilities (cgroups)
Windows has its own native facilities allowing Windows processes to be containerised. It just so happens that in addition to that, there's WSL2 at hand to run Linux processes (containerised or not).
There is nothing preventing Apple from implementing Darwin-native facilities so that Darwin processes could be containerised. It would actually be very nice to be able to distribute/spin up arbitrary macOS environments with some minimal CLI + CLT base† and run build/test stuff without having to spawn full-blown macOS VMs.
† "base" in the BSD sense.
And we can thank predecessor systems like BSD jails and Solaris zones, as well as Virtuozzo/OpenVZ and LXC as earlier container systems on Linux.
Docker's main improvements over LXC, as I understand it, were adding a layered, immutable image format (vs. repurposing existing VM image formats) and a "free" public image repository.
But the userspace implementation isn't exactly rocket science, which is why we periodically see HN posts of tiny systems that can run docker images.
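The core of that userspace implementation really is small enough to sketch. An image is essentially an ordered list of filesystem tarballs; unpacking them in order reproduces the container's root filesystem. A minimal Python sketch (real runtimes also handle the manifest, whiteout files that mark deletions, and the chroot/namespace setup, all omitted here):

```python
import os
import tarfile

def unpack_layers(layer_tars, rootfs):
    """Rebuild a container root filesystem from ordered image layers.

    Each layer is a tar archive; later layers overwrite earlier ones,
    which is what gives images their copy-on-write, cache-friendly
    structure. (Real runtimes also process .wh.* whiteout entries
    that mark files a layer deletes.)
    """
    os.makedirs(rootfs, exist_ok=True)
    for layer in layer_tars:
        with tarfile.open(layer) as tar:
            tar.extractall(rootfs)
    # At this point a runtime would chroot(rootfs) and exec the image's
    # entrypoint inside fresh namespaces/cgroups (which requires root).
```

Later layers simply overwriting earlier ones is the whole trick; that ordering is what makes layers cacheable and shareable between images.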
[0] https://github.com/apple/container/blob/main/docs/technical-...
Not a container "as such" then.
How hard is it to emulate linux system calls?
It’s doable but a lot more effort. Microsoft did it with WSL1 and abandoned it with WSL2.
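To make the scale of that effort concrete, here is a toy Python sketch of what such a translation layer does: look up a Linux syscall number and map it onto host primitives. The handler names are invented for illustration (the x86-64 syscall numbers are real); the catch is that Linux has several hundred syscalls, each with subtle semantics that all have to match bug-for-bug.

```python
import os

# x86-64 Linux syscall numbers (from the real syscall table)
SYS_WRITE, SYS_GETPID = 1, 39
ENOSYS = 38

def emu_write(fd, data):      # delegate to a host-kernel primitive
    return os.write(fd, data)

def emu_getpid():
    return os.getpid()

SYSCALL_TABLE = {SYS_WRITE: emu_write, SYS_GETPID: emu_getpid}

def dispatch(nr, *args):
    # A real layer traps the syscall instruction in kernel mode;
    # here we just look up a handler and run it.
    handler = SYSCALL_TABLE.get(nr)
    if handler is None:
        return -ENOSYS  # the long tail of unimplemented calls
    return handler(*args)
```

WSL1 was essentially this, done properly in the NT kernel, and it still never caught up with the long tail.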
Solaris/illumos has been able to do actual "containers" since 2004[0] and FreeBSD has had jails even before that[1].
[0] https://www.usenix.org/legacy/event/lisa04/tech/full_papers/... [1] https://papers.freebsd.org/2000/phk-jails.files/sane2000-jai...
So it's more cultural than technical. I believe you can run OCI Windows containers on Windows with no VM, although I haven't tried this myself.
By the innate nature of a container, it must be the same OS as the host it runs on, since containers have no kernel of their own. Otherwise you need to go the VM route.
Not that it helps them run on any Windows version other than the one they were built on, it seems.
The following piece of documentation disagrees:
https://learn.microsoft.com/en-us/virtualization/windowscont...
> Containers build on top of the host operating system's kernel (...), and contain only apps and some lightweight operating system APIs and services that run in user mode
> You can increase the security by using Hyper-V isolation mode to isolate each container in a lightweight VM
Additionally, you can decide whether or not the images contain the kernel.
There is nothing in OS containers that specifies a golden rule for how kernel sharing takes place.
Remember containers predate Linux.
Note that containers, by definition, rely on the host OS kernel. So a Windows container can only run Windows binaries that interact with Windows syscalls. You can't run Linux binaries in a Windows container any more than you can run them on Windows directly. You can run Word in a Windows container, but not GCC.
[0] https://learn.microsoft.com/en-us/virtualization/windowscont...
Some examples would be Sitecore XP/XM, SharePoint, Dynamics deployments.
You can also just use cgroups with systemd.
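For instance, systemd exposes cgroup controls directly as unit properties, no container runtime required; a drop-in like this (path and limits illustrative) caps a service's resources:

```ini
# /etc/systemd/system/myapp.service.d/limits.conf  (path illustrative)
[Service]
MemoryMax=512M
CPUQuota=50%
TasksMax=100
```

The same properties work ad hoc, e.g. `systemd-run --scope -p MemoryMax=512M some-command`.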
Now, you could implement something fairly similar in each OS, but you wouldn't be able to use the vast majority of contained software, because it's ultimately Linux software.
You can have Windows containers running on Windows, for instance.
Containers themselves are a packaging format, and do rather little to solve the problem of e.g. running Linux-compiled executables on macOS.
FreeBSD has the linuxulator and illumos comes with lx-zones, both of which allow running some native Linux binaries inside a "container". No idea why Apple didn't go for a similar option.
It puts them on par with Windows, which has container support with a free option. Plus I imagine it's a good way to pressure-test Swift as a language, to make sure it really can be the systems programming language they are betting it can and will be.
OrbStack has a great UX and experience, so I imagine this will eat into Docker Desktop on Mac more than OrbStack.
Emulating Linux only makes sense on devices with constrained resources.
Just replace the XNU kernel with Linux already.
Do people learn docker not via the CLI?
And like I'm not all anti-GUI, it's just that docker is one of those things I've never even imagined using a GUI for
It’s just that Docker Desktop makes it easy and also provides other integrations like file system sharing etc.
For Kubernetes, something like K9s [1] or Headlamp [2] works fine. I remember seeing something similar for Docker but I can't remember the name.
I think docker desktop and apple's containerization are both targeted firmly at developers only.
It's like programming, sure it's possible to write code in microsoft office or xcode or vscode, but all programmers I've met opt for ed or vi.
This can sometimes be true, but on many occasions it's the opposite. For instance, I've just spent 3 hours watching an IT support technician seemingly clicking randomly everywhere to debug why the corporate sec/antivirus on my laptop says my configuration is not compliant. The provided GUI and accompanying interface for checking events are strikingly uninformative, slow and inefficient; having a simple CLI tool with a -status or -report flag that gave you the reason it complains would be much easier for everyone involved.
I have a Mac for work and containers are a pain. I've tried Podman, UTM, colima, Docker Desktop etc and it all boils down to the same thing - run a linux VM and have the command line utils cooperate with the VM to run the containers.
It comes down to which solution has the least friction and irritations and Docker might still win there.
My current setup is UTM running a debian VM which I share my source directory with and ssh into to run docker. This is simpler for my brain to understand because the linux VM isn't a hidden component I forget to manage.
But it's not obvious how to mount the shared directory and I'm constantly running into networking problems - currently I cannot connect as myself and must sudo ssh for it to work. A reboot (of the Mac) used to fix it, but no longer does. I've given up trying to fix it and just sudo.
The nice part is that they (a) set up the Linux VM that runs the Docker daemon for you and (b) handle the socket forwarding magic that lets you communicate with it "directly" by running the Docker client on the host OS. This is likewise true for Podman Desktop, Rancher Desktop, etc.
The GUI is icing on the cake, imo.
Some progress has been made to create a non-Docker implementation that integrates with all those random tools that expect to be able to yeet bytes into or out of the Docker socket, but I still hit blockers the last time I tried.
Docker for Desktop sits on top of container/virtualization software (Hypervisor.framework and QEMU on Mac, WSL on Windows, containerd on Linux). So there's a good chance that future versions of Docker for Desktop will use this library, but they don't really compete with each other.
If it doesn't, then it's still a toss-up whether a user chooses docker/podman/this, etc.
If it ends up shipping by default and is largely compatible with the same command line flags and socket API... Then docker has a problem.
For what it's worth, I prefer podman but even on Linux where the differentiators should be close to zero, I still find certain things that only docker does.
My org's management wasn't taking the issue seriously, but once the subscription cost reached one FTE's salary, they started listening to people who had already switched to Podman, Rancher or (Co)Lima.
I'll not deny that it's a bit niche, but not so much so that it's completely unknown.
"Apple developer circles" to me means the few mostly indies who build non-electron Mac apps and non-ReactNative ios apps, but those developers mostly are writing client code and don't even touch servers.
All this said, my above "gut feelings" don't explain why Apple would have bothered spending their time making this when Orbstack, Docker, etc. already meet the needs of the developers on Mac who actually need and use containers.
[0]: besides the "Command line tools" that allow compilation to work, of course.
Before Orbstack, running Docker on Macs was a total pain - the official desktop app is so awful, I doubt anyone at Docker is actually using it. Nevertheless, it was still too useful to let it pass. It was time either Docker or Apple stepped up, but they are both 10 years late to this party. Orbstack fixed the problem.
It would be interesting to see the reaction from Danny Lee, he's hanging out on HN sometimes. I hope this framework ends up being a building block, rather than outright competition.
I'm sure Apple will try to push their own version of Docker but I'm not sure if they'll be able to win over any Docker Desktop businesses unless their tool also works on other operating systems.
Sadly all of them are Electron based.
WSL2 provides everything you need to install the docker daemon and CLI, and the VS Code extension gives you a pretty decent GUI, there's no need for anything else really.
> Contributions to `container` are welcomed and encouraged. Please see our main contributing guide for more information.
This is quite unusual for Apple, isn't it? WebKit was basically a hostile fork of KHTML, Darwin has basically been something they throw parts of over the wall every now and then, etc.
I hope this and other projects Apple has recently put up on GitHub see fruitful collaboration from user-developers.
I'm a F/OSS guy at heart who has reluctantly become a daily Mac user due to corporate constraints that preclude Linux. Over the past couple of years, Apple Silicon has convinced me to use an Apple computer as my main laptop at home (nowadays more comparable, Linux-friendly alternatives seem closer now than when I got my personal MacBook, and I'm still excited for them). This kind of thing seems like a positive change that lets me feel less conflicted.
Anyway, success here could perhaps be part of a virtuous cycle of increasing community collaboration in the way Apple engages with open-source. I imagine a lot of developers, like me, would both personally benefit from this and respect Apple for it.
Chromium is a hostile fork of WebKit. WebKit was a rather polite fork of KHTML; it's just that Apple had a team of full-time programmers, so KHTML couldn't keep up with the upstream requests and gave up, since WebKit did a better job anyway.
I personally would LOVE if a corporation did this to any of my open source projects.
Even KDE eventually dropped KHTML in favor of engines based on KHTML's own successor, WebKit (like QtWebKit, and later Qt WebEngine, which is based on Chromium).
A web engine isn’t just software: it needs to keep evolving.
Recognising the value of someone’s work is better than ignoring it and trying to build everything from scratch on your own; Microsoft's Internet Explorer did not last.
You are rewriting history here. The main KHTML developers were hired by Apple and Konqueror got on with the new engine. There was no fuss and no drama.
The reason why it’s fair play is that the license allows it. Google is no white knight out to avenge the poor KHTML users from 2003.
> Google is no white knight out to avenge the poor KHTML users from 2003.
Nope. They're here to molest your runtime. Portions of it are not expected to survive the assault.
Normally, this is where I'd say "us Linux and Mac users should join arms and fight the corporations!" but that bridge has been burning for almost 20 years now. These days I'm quite content with Safari's fate regardless of how cruel it's treated; after all, the license allows it. No fuss, and no drama. Healthy as a horse, honest.
There’s more blood and drama every time there’s a GTK update.
> These days I'm quite content with Safari's fate regardless of how cruel it's treated; after all, the license allows it. No fuss, and no drama.
Well, bitching is not very productive. We can regret a Blink monoculture, but it would have been exactly the same if Chrome kept using WebKit (if anything, that would have been worse), or if they switched to Gecko. The drama with Chrome has nothing to do with who forked whom.
I didn't write my initial comment here to relitigate this, but you are absolutely the one rewriting history. I remember reading about it because I was a KDE user at the time. But sources are easy to find; there are blog posts and press articles cited in Wikipedia. Here's a sample from one:
> Do you have any idea how hard it is to be merging between two totally different trees when one of them doesn't have any history? That's the situation KDE is in. We created the khtml-cvs list for Apple, they got CVS accounts for KDE CVS. What did we get? We get periodical code bombs in the form of them releasing WebCore. Many of us wanted to even sign NDA's with Apple to at least get access to the history of their internal vcs and be able to be merging the changes incrementally, the way they can right now. Nothing came out of it. They do the very, very minimum required by LGPL.
> And you know what? That's their right. They made a conscious decision about not working with KDE developers. All I'm asking for is that all the clueless people stop talking about the cooperation between Safari/Konqueror developers and how great it is. There's absolutely nothing great about it. In fact "it" doesn't exist. Maybe for Apple - at the very least for their marketing people. Clear?
https://web.archive.org/web/20100529065425/http://www.kdedev...
From another, the very developer they later hired described the same frustrations in more polite language:
> As is somewhat well known, Apple's initial involvement in the open-source project known at KHTML was tense. KHTML developers like Lars were frustrated with Apple's bare-bones commitment to contributing their changes back to the project. "It was hard, and in some cases impossible to pick apart Apple's changes and apply them back to KHTML," he told us. Lars went on to say, "This kind of development is really not what I wanted to do. Developers want to spend their time implementing new features and solving problems, not cherry picking through giant heaps of code for hours at a time."
https://arstechnica.com/gadgets/2007/06/ars-at-wwdc-intervie...
This uncooperative situation persisted for the first 3 or 4 years of the lifetime of Apple's fork, at least.
> The reason why it’s fair play is that the license allows it. Google is no white knight out to avenge the poor KHTML users from 2003.
You're right about this, though.
Anyway, there's no need to deny or erase this in order to defend Apple. Just pointing to other open-source projects they released or worked with in the intervening years, as many other commenters have done in reply to my initial comment, is sufficient!
Those are unrelated things.
WebKit has been a fully proper open source project - with open bug tracker, patch review, commit history, etc - since 2005.
Swift has been a similarly open project since 2015.
Timeline-wise, a new high profile open source effort in 2025 checks out.
I do have a personal MacBook Pro that I maxed out (https://gigatexal.blog/pages/new-laptop/new-laptop.html) but I do miss tinkering with my i3 setup and trying out new distros etc. I might get a used ThinkPad just for this.
But yeah my Mac personal or work laptop just works and as I get older that’s what I care about more.
Going to try out this container binary from them. Looks interesting.
If my biases are already outdated, I'm happy to learn that. Either way, my hopes are the same. :)
They took over LLVM by hiring Chris Lattner. It was still a significant investment, and they kept pouring resources into it for a long while before it got really widespread adoption. And yes, that project is still going.
The name stands for Common Unix Printing System, and Apple CUPS ceased to meaningfully be that after its author left the company. But Apple still uses CUPS in their operating systems!
It’s all very corporate, but also widely distributed and widely owned.
I suspect this move was designed to stop losing people like you to WSL.
I can happily use my Mac as my primary machine without much hassle, just like I would often do with WSL.
I am also thinking the same; the Docker Desktop experience was not that great, at least on Intel Macs.
They don't have to do literally any of this.
Besides, I think OP wasn't talking about licenses; Apple has a lot of software under FOSS licenses. But usually, with their open-source projects, they reject most incoming contributions and don't really foster a community for them.
Or distributing builds of something that statically links to it. (Which some would argue creates a derivative work.)
This project ships its own kernel, but it also seems to be able to use the Firecracker one. I wonder what the advantages are. Even smaller? Making use of some Apple Silicon properties?
Has anyone tried it already and is it fast? Compared to podman on Linux or Docker Desktop for Mac?
Virtualization.framework and co was buggy af when introduced and even after a few major macOS versions there are still lots of annoyances, for example the one documented in "Limitations on macOS 15" of this project, or half-assed memory ballooning support.
Hypervisor.framework, on the other hand, is mostly okay, but then you need to write a lot more code. Hypervisor.framework is equivalent to KVM and Virtualization.framework is equivalent to QEMU.
Laughs in Xcode
MacOS just understands ext4 directly, and should be able to read/write it with no performance penalty.
You can make some kind of argument from this that Linux has won; certainly the Linux syscall API is now perhaps the most ubiquitous application API.
Needing two of the most famous non-Linux operating systems for the layman to sanely develop programs for Linux systems is not particularly a victory if you look at it from that perspective. Just highlights the piss-poor state of Linux desktop even after all these years. For the average person, it's still terrible on every front and something I still have a hard time recommending when things so often go belly up.
Before you jump on me, every year, I install the latest Fedora/Ubuntu (supposedly the noob-friendly recommendations) on a relatively modern PC/Laptop and not once have I stopped and thought "huh, this is actually pretty usable and stable".
Everybody had been making fun of Blender forever, but they consistently made things better step by step, and after a few UX enhancements the wind suddenly started to shift. It completely flipped, and now everybody is using it.
I wouldn’t be surprised if desktop Linux's days are still ahead. It’s not only Valve and gaming. Many things seem to be starting to work in tandem: Wayland, PipeWire, Flatpak, atomic distros… hey, even GNOME is starting to look pretty.
- there's not one desktop Linux that everyone uses (or even uses by default), and it's not resolving any time soon
- I use Ubuntu+Gnome by default, and I wouldn't say it looks great at all, other than the nice Ubuntu desktop background, and the large pretty sidebar icons
- open source needs UX people to make their stuff look professional. I'm looking at you, LibreOffice
The standard Ubuntu+Gnome desktop crashes far too often.
Now I have no idea whose fault that is (graphics driver, window system, or desktop code, or all three), but it's been a persistent problem for Linux 'desktops' over many, many years.
I suspect a lot of the problem is in the graphics drivers: they just don't get the love and attention that happens for Windows, and definitely not the Mac (where they intentionally keep the number of things they need to support low).
I’ve not been waiting 20 years for Linux. But looking at it right now seems pretty positive to me.
For Windows and MacOS you can throw a few quick bucks over the wall and tick a whole bunch of ISO checkboxes. For Linux, you need more bespoke software customized to your specific needs, and that requires more work. Sure, the mindless checkboxes add nothing to whatever compliance you're actually trying to achieve, but in the end the auditor is coming over with a list of checkboxes that determine whether you pass or not.
I haven't had a Linux system collapse on me for years now thanks to Flatpak and all the other tools that remove the need for scarcely maintained external repositories in my package manager. I find Windows to be an incredible drag to install compared to any other operating system, though. Setup takes forever, updates take even longer, there's a pretty much mandatory cloud login now, and the desktop looks like a KDE distro tweaked to hell (in a bad way).
Gnome's "who needs a start button when there's one on the keyboard" approach may take some getting used to, but Valve's SteamOS shows that if you prevent users from mucking about with the system internals because gary0x136 on Arch Forums said you need to remove all editors but vi, you end up with a pretty stable system.
Yes, a lot of MDM features are just there to check ISO-whatever boxes. Some are legitimately great, though. And yes, even though I’m personally totally comfortable running a Linux laptop, come SOC2 audit time it’s way harder to prove that a bunch of Linux boxes meet required controls when you can’t just screenshot the Jamf admin page and call it good.
One day I asked our CFO something, and watched him log into his laptop with like 4 keypresses. And that’s how we got more complex password requirements deployed everywhere.
Having spent a few years as a CISO, I now understand much more about why we have all those pain-in-the-neck controls. There’s a saying about OSHA regulations that each rule is written in blood. I don’t know what the SOC2 version of that is, but there should be one.
The desktop marketshare stats back me up on the earlier point and last I checked, no distro got anywhere close?
Sure, Android is the exception (if we agree to count it), but until we get serious dev going there and until Android morphs into a full-fledged desktop OS, my point stands.
And yes, that's bought by the 'average person'.
On the contrary, our devs generally clamor for expanded Linux support from company IT.
There's just no other OS that's anywhere near as useful for real software engineering that isn't on a web stack.
MacOS is a quirky almost-Linux where you have to fiddle with Homebrew to get useful tools. On Windows you end up installing three copies of half of Linux userspace via WSL, Cygwin and chocolatey to get things done. All real tools are generally the open source ones that run better on native Linux, with Windows equivalents often proprietary and dead/abandoned.
Let me give you a basic embedded SW example: Proxying a serial connection over a TCP or UDP socket. This is super trivial on Linux with standard tools you get in every distro. You can get similar tools for Windows (virtual COM port drivers, etc.), but they're harder to trust (pre-compiled binaries with no source), often half-abandoned (last release 2011 or something) and unreliable. And the Linux tools are fiddly to build on MacOS because it's just not the standard. This pattern replicates across many different problems. It's simply less headache to run the OS where things just work and are one package manager invocation away.
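For reference, the standard-tools version on Linux is typically a socat one-liner along the lines of `socat TCP-LISTEN:5555,reuseaddr /dev/ttyUSB0,raw,b115200` (port and device illustrative). And the underlying mechanism really is small; here is a rough, portable Python sketch of such a proxy (the device path and port are assumptions, and real use would want pyserial for baud-rate setup plus proper error handling):

```python
import os
import socket
import threading

def pump(read_fd, write_fd):
    """Copy bytes one way until EOF; two of these make the full proxy."""
    while True:
        data = os.read(read_fd, 4096)
        if not data:
            break
        os.write(write_fd, data)

def serve(serial_path="/dev/ttyUSB0", port=5555):  # illustrative defaults
    tty = os.open(serial_path, os.O_RDWR)
    srv = socket.create_server(("", port))
    conn, _ = srv.accept()
    # serial -> socket in a background thread, socket -> serial here
    threading.Thread(target=pump, args=(tty, conn.fileno()),
                     daemon=True).start()
    pump(conn.fileno(), tty)
```

The `pump` helper is just a one-way byte copy over file descriptors; pointing it at a serial device versus a socket makes no difference to the loop.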
There's simply significant swaths of software development where Linux and Linux-friendly open-source tools/projects have hands-down won and are the ubiquitous, well-maintained option, while on the other systems you have to jump through hoops and take extra steps to set up a pseudo-Linux to get things done.
Honestly, there's also the fact that macOS and Windows users are just as used to their systems as Linux users are to theirs, and are equally blind to all the bugs, hoops and steps they have to take. If you're a regular, happy Linux user and attempt to switch (and I have done this just recently, actually, porting a library and GUI app to control/test/debug servo motors to Windows), the amount of headache to endure on the other operating systems just to get set up with a productive environment is staggering, not to mention the amount of crap you have to click away. Granted, macOS is a fair bit less annoying than Windows in the latter regard, though.
I'll happily claim that Linux today is the professional option for professional developers, anyhow. And you web folks would likely be surprised how much of the code of the browser engines your ecosystem relies on was written and continues to be written on Linux desktops (I was there :-), and ditto for a lot of the backend stuff you're building your apps on, and a fair amount of the high-end VFX/graphics and audio SW used to make the movies you're watching, and so on and so forth.
Are there more web devs churning out CRUD apps and their mobile wrappers on MacOS in the absolute? For sure, by orders of magnitude. But the real stuff happens on Linux, and my advice to young devs who want to get good and do stuff that matters (as someone who hires them) is to get familiar with that environment.
Funnily enough that's how I feel every time I use Windows or Mac. Yet I'm not bold enough to call them "piss poor". I'm pretty sure I - mostly - feel like that because they are different from what I'm used to.
Transitioning from Windows to Mac was much more of an adjustment than Linux desktop was. It's just that Linux has too many rough edges. While it's possible I've simply been unlucky, every time I've tried Linux it's been one small niggling issue after another that I have to work around, and it feels like death by a thousand paper cuts. (BTW, I first tried the Linux desktop back in the late 90s, and most recently used it as my main work laptop for 9 months this past year.)
Note, I'm more than happy to use Linux as a server. I run Linux servers at home and have for decades. But the desktop environments I've tried have all been irksome.
Note that I'm not mentioning particular distros or desktop environments because I've tried various over the years.
After all there are plenty of people - including me - who do not share that experience at all.
I had other issues that were not hardware related though. The desktop environment was missing some basic features for things like mouse settings that I had to install community extensions for, which were buggy.
I also had issues with printers, HDMI output, keyboard settings, and more. The list goes on and on. Each was something I spent time on that I haven't had to spend time on with MacOS (it's been a decade and a half since I've used Windows, but I remember it having fewer issues).
BTW, I also dread OS updates on Linux, and that includes server-side. Definitely another pain point that feels more severe than on MacOS.
Anyway, I'm glad that Linux works for some people's usecases, but it feels like it's been in this limbo of quasi-usable for quite a while from my perspective.
What exactly is wrong with it? I prefer KDE to either Windows or MacOS. Obviously a Linux desktop is not going to be identical to whatever you use so there is a learning curve, but the same is true, and to a much greater extent, for moving from Windows to MacOS.
> layman to sanely develop programs for Linux systems
> or the average person
The "layman" or "average person" does not develop software.
The average person has plenty of problems dealing with Windows. They are just used to putting up with being unable to get things to work. Ran into that (a multi-function printer/scanner not working fully) with someone just yesterday.
If you find it hard to adjust to a Linux desktop you should not be developing software (at any rate not developing software that matters to anyone).
I have switched a lot of people to Linux (my late dad, my ex-wife, my daughter's primary school principal) who preferred it to Windows and my kids grew up using it. No problems.
KDE is my choice as well (Xfce #2) if I have to be stuck with a Linux distro for a long period, but I'd rather not put myself in that position because it's still going to be a nightmare. My most recent install this year of Kubuntu and KDE Fedora had strange bugs: applications froze and quitting them was more painful than on macOS/Windows; software updates through their app store thingy ended up in some weird state that wouldn't reset no matter how many times I rebooted; hard crashes; and so on, on a relatively modern PC (5900X, RTX 3080, 32 GB RAM). I had to figure out the commands to force-reset/clean up the package management in order to continue installing/updating packages. This is the kind of thing I never face with Apple Silicon Macs or even Windows 10/11.
This is a dealbreaker for the vast majority of people but let's come to your more interesting take:
> If you find it hard to adjust to a Linux desktop you should not be developing software
And that sums up the vast majority of Linux users who still think every other year is the year of the "Linux desktop". It's that deeply ignorant attitude, and the refusal to acknowledge all these years of clusterfuck after clusterfuck of GUIs, desktop environments, underlying tech changes (Xorg, Wayland) and a myriad of confusing package distribution choices (debs, rpms, snaps, flatpaks, AppImages and so on), that ensures no sane person is ever going to embrace a Linux distro as their daily driver.
You need a reality reset if you think getting used to Linux is a qualifier to making great software.
A matter of your experience. It's not something that happens to me or anyone I know personally. Even using a less newbie-friendly distro (I use Manjaro) it's very rare.
I have not tried Fedora for many years, but the last time I did it was not a particularly easy distro to use. It is also a test distro for RHEL and CentOS, so it should be expected to be a bit unstable.
> It's that deeply ignorant attitude instead of acknowledging all these years of clusterfuck after clusterfuck of GUIs, desktop envs, underlying tech changes (Xorg, Wayland) and myriads of confusing package distribution choices (debs, rpms, snaps, flatpaks, appimages and so on)
Most of which is hidden from the user behind appstores. The only thing non-geek users need to know is which DE they prefer (or they can let someone else pick it for them, or use the distro default).
Even a user who wants to tinker only needs to know one of the distribution formats, one desktop environment. You are free to learn about more, but there is absolutely no need to. You also need to learn these if you use WSL or some other container.
> You need a reality reset if you think getting used to Linux is a qualifier to making great software.
What I said is that being unable to cope with the tiny learning curve of adjusting to a different desktop look and feel is a disqualifier for being a developer.
Every non-technical user who switches from Windows to macOS does it, so it's very odd that it would be a barrier for a developer.
On the other hand do people care that much about DEs? Most people just want to start their web browser or whatever.
For most it’s not a case of whether you can do it, it’s whether it’s worth doing it. For me Linux lacks the killer feature that makes any of that adjustment worth my (frankly, valuable) time. That’s doubly so for any of us that develop user facing software: our users aren’t going to be on Linux so we need to have a more mainstream OS to hand for testing anyway.
The objection is really "I do not want to use anything different", which is fine. After many years of using Linux I feel the same about using Windows or macOS.
> For me Linux lacks the killer feature that makes any of that adjustment worth my (frankly, valuable) time
It lacks all the irritants in Windows 11 every Windows user seems to complain of?
> That’s doubly so for any of us that develop user facing software: our users aren’t going to be on Linux so we need to have a more mainstream OS to hand for testing anyway.
So for desktop software that is not cross-platform, yes. If you are developing Windows software you need Windows.
If you are developing server software it will probably be deployed to Linux, if you are developing web apps the platform is the browser and the OS is irrelevant, and if you are developing cross platform desktop apps then you need to test on all of them so you need all.
That said, counterpoint to my own, Android is Linux and has billions of installations, and SteamOS is Linux. I think the next logical step for SteamOS is desktop PCs, since (anecdotally) gaming PCs only really play games and use a browser or web-tech-based software like Discord. If that does happen, it'll be a huge boost to Linux on the consumer desktop.
I think we need to have a specific audience in mind when saying whether or not it's stable. My Arch desktop (user: me) is actually really stable, despite the reputation. I have something that goes sideways maybe once a year or so, and it's a fairly easy fix for me when that does happen. But despite that, I would never give my non-techy parents an Arch desktop. Different users can have different ideas of stable.
This is when I gave up and switched to Apple. I am now moving back to Linux but Arch still seems like it's too hacky and too loosely structured organizationally to be considered trustworthy. So, Ubuntu or Debian it is, but I haven't fully decided yet.
Still, I would be happy to be convinced otherwise. I’m particularly surprised Steam uses it for their OS.
I've crapped my system on install, or when trying to reconfigure core features.
Updates? 0 issues. Like genuinely, none.
I've used Ubuntu and Mint before, and Arch "just works" more than either of them in my experience.
When a DE is embedded in FOSS, no one has the appetite to fund user experience the way a corporate OS vendor can.
We do have examples where this can work, like with the steam deck/steamOS but it’s almost counter to market incentives because of how slow dev can become.
I see the same problem with chat and protocol adoption. IRC as a protocol is too slow for companies who want to move fast and provide excellent UX, so they ditch cross collaboration in order to move fast.
That’s not to say it can’t or doesn’t work for some people in the middle, but for this group it’s much more likely that there’s some kind of fly in the soup that’s preventing them from switching.
It’s where I’m at. I keep secondary/tertiary Linux boxes around and stay roughly apprised of the state of the Linux desktop but I don’t think I could ever use it as my “daily driver” unless I wrote my own desktop environment because nothing out there checks all of the right boxes.
> That’s not to say it can’t or doesn’t work for some people in the middle, but for this group it’s much more likely that there’s some kind of fly in the soup that’s preventing them from switching.
Generally agree with these points with some caveats when it comes to "extremes".
I think for middle to power users, as long as their apps and workflows have a happy path on Linux, their needs are served. That happy path necessarily has to exist either by default or provisioned by employers/OEMs, and excludes anything that requires more than a button push like the terminal.
This is just based on my own experience, I know several people ranging from paralegals working on RHEL without even knowing they're running Linux, to people in VFX who are technically skilled in their niche, but certainly aren't sys admins or tiling window manager users.
Then there are the ~dozen casual gamers with Steam Decks who are served well by KDE on their handhelds, a couple moved over to Linux to play games seemingly without issue.
Plus, one could argue they've actually just established dominance through market lock-in by ensuring the culture never had a chance and making operating system moves hard for the normal person.
But more importantly if we instead consider the context that this is largely a collection of small utilities made by volunteers vs huge companies with paid engineering teams, one should be amazed at how comparable they are at all.
If Gnome implemented that as well as macOS does I’d happily switch permanently.
I've worked in jobs that only used Linux as the day to day desktop operating system. I currently work on macOS.
What features do you think are missing?
However on embedded and desktop, the market belongs to others, like Zephyr, NuttX, Arduino, VxWorks, INTEGRITY, ... and naturally Apple, Google and Microsoft offerings.
Also Linux is an implementation detail on serverless/lambda deployments, only relevant to infrastructure teams.
And it’s in incredible numbers - hundreds of millions of units - of game consoles.
The BSD family isn’t taking a bow in public, that’s all.
And outside NetFlix, there aren't many big shots talking about it nowadays.
It’s a BSD.
I think what slows down market share of Linux on desktop is Linux on desktop itself.
I use Linux, and I understand that it's a very hard job to take it to the level of Windows or macOS, but it is what it is.
More software gets developed for that base Linux platform API, which makes releasing Linux-native software easier/practically free, which in turn makes desktop Linux an even more viable daily driver platform because you can run the same apps you use on macOS or Windows.
Eventually I got practical and fed up with ways of Linux Desktop.
Like, suspend-wake is honestly 100% reliable compared to whatever my Windows 11 laptop does with its random freezes, and updates are still a decade behind what something like NixOS has (I can just start an update and, since the system is immutable, it won't disturb me in any shape or form).
I was in the same boat and used macOS for a decade since it was practical for my needs.
These days I find it easier to do my work on Linux, ironically cross-platform development & audio. At least in my experience, desktop Linux is stable, works with my commercial apps, and things like collaboration over Zoom/Meet/etc with screen sharing actually work out of the box, so it ticks all of my boxes. This certainly wasn't the case several years ago, where Linux incompatibility and instability could be an issue when it comes to collaboration and just getting work done.
I have spent several months trying to make it work, across a couple of distros and partition layouts, only managing to boot them if placed on external storage.
Until I can get into a Media Markt kind of store and buy a PC, of whatever shape, with something like Ubuntu pre-installed, where every single hardware feature works without a "yes, but", I am not caring.
IMO, just like with macOS, one should buy hardware based on whether their OS supports it. There are plenty of mini PCs with Linux pre-installed or with support if you just Google the model + Linux. There's entire sites like this where you can look up computers and components by model and check whether there is support: https://linux-hardware.org/?view=computers
You can even sort mini PCs on Amazon based on whether they come with Linux: https://www.amazon.com/Mini-Computers-Linux-Desktop/s?keywor...
The kernel already has workarounds for poorly implemented firmware, ACPI, etc. There's only so much that can be done to support bespoke platforms when manufacturers don't put in the work to be compatible, so buy from the ones that do.
> Until I can get into a Media Markt kind of store and buy a PC, of whatever shape, with something like Ubuntu pre-installed, where every single hardware feature works without a "yes, but", I am not caring.
You can go to Dell right now and buy laptops pre-installed with Ubuntu instead of Windows: https://www.dell.com/en-us/shop/dell-laptops/scr/laptops/app...
Notice how quickly this has turned into the usual Linux forums kind of discussion that we have been having for the last 30 years regarding hardware support?
I don't think I'll make my 2030 date at this point but there might be some version of Windows like this at some point.
I also recognize that Windows' need to remain backwards compatible might prevent this, unless there's a Rosetta-style emulation layer to handle all the Win32 APIs etc..
The average end user will be using some sort of Tivoized device, which will be running a closed-source fork of an open-source kernel, with state-of-the-art trusted computing modules making sure nobody can run any binaries that weren't digitally signed and distributed through an "app store" owned by the device vendor and from which they get something like a 25% cut of all sales.
In other words, everything will be a PlayStation, and Microsoft will be selling their SaaS services to enterprise users through those. That is my prediction.
Hell, Samsung is delivering Linux to the masses in the form of Wayland + PulseAudio under the brand name "Tizen". Unlike desktop land, Tizen has been all-in on Wayland since 2013 and it's been doing fine.
Likewise with ChromeOS.
They are Pyrrhic victories.
As for Tizen, interesting that Samsung hasn't yet completely lost interest in it.
Except neither will support even a fraction of the originals' capabilities, at much worse performance and millions of incompatibilities at every corner.
The OS is a mix of Java, Kotlin, JavaScript, NDK APIs and the standard ISO C and ISO C++ libraries.
This would be better phrased as "If Google could replace the Linux kernel with something else, no one would notice."
Google has spent a decade trying to replace the Linux kernel with something else (Fuchsia), and doesn't seem to have gotten anywhere
Also don't forget Fuchsia has been mostly a way to keep valuable engineers at Google as a retention project.
They haven't been trying to replace anything as such, and the Linux kernel on Android even has userspace drivers with a stable ABI for Java and C++, plus Rust in the kernel, all features upstream will never get.
Or in Rust's case, Google didn't bother with the drama: they decided to include it, and that was it.
I do pro audio on Linux, my commercial DAWs, VSTs, etc are all Linux-native these days. I don't have to think about anything sound-wise because Pipewire handles it all automatically. IMO, Linux has arrived when it comes to this niche recently, five years ago I'd have to fuck around with JACK, install/compile a realtime kernel and wouldn't have as many DAWs & VSTs available.
Similarly, I have a friend in video production and VFX whose studio uses Linux everywhere. Blender, DaVinci Resolve, etc make that easy.
There is a lack of options when it comes to pro illustration and raster graphics. The Adobe suite reigns supreme there.
I am more amateur/hobbyist than pro, but this is the primary reason I’m on macOS and I wouldn’t mind reasons to try Linux again (Ubuntu Studio ~8 years ago was my last foray).
This minus live sound, and I stick exclusively to MIDI controllers.
> and the distro and applications you use?
I'm on EndeavourOS, which is just Arch with a GUI installer + some default niceties.
I came from using Reaper on macOS, which is native on Linux, but was really impressed with Bitwig Studio[1] so I use that for most of everything.
I really like u-he & TAL's commercial offerings, Vital, and I got mileage out of pages like this[2] that list plugins that are Linux compatible. I'm insane so I also sometimes use paid Windows plugins over Yabridge, which works surprisingly well, but my needs have been suited well by what's available for Linux.
There's also some great open source plugins like Surge XT, Dexed & Vaporizer2, and unique plugins like ChowMatrix.
> I wouldn’t mind reasons to try Linux again (Ubuntu Studio ~8 years ago was my last foray).
IMO the state of things is pretty nice now, assuming your hardware and software needs can be met. If you give it a try, I think a rolling release would be best, as you really want the latest Pipewire/Wireplumber support you can get.
Brag about this to an average Windows or Mac user and they will go "huh?" and "what is Linux?"
Depending on what you mean with "the game", I'd say even more so.
MS/Apple used to villify or ridicule Linux, now they need to distribute it to make their own product whole, because it turns out having an Open Source general purpose OS is so convenient and useful it's been utilized in lots of interesting ways - containers, for example - that the proprietary OS implementations simply weren't available for. I'd say it's a remarkable development.
I personally don't know a dev worth his salt who'd prefer windows
Apple’s docs say nested virtualization is only available on M3-class Macs and newer (VZGenericPlatformConfiguration.isNestedVirtualizationSupported) developer.apple.com, but I don’t see an obvious flag in the container tooling to enable it. Would love to hear if anyone’s managed to get KVM (or even qemu-kvm) running inside one of these VMs.
Could games be run inside a virtual Linux environment, rather than Apple’s Metal or similar tool?
This would also help game developers - now they only need to build for Windows, Linux, and consoles.
The reverse, i.e. running Linux binaries on Windows or macOS, is not easily possible without virtualization, since Linux uses direct syscalls instead of always going through a dynamically linked static library that can take care of compatibility in the way that Wine does. (At the very least, it requires kernel support, like WSL1; Wine is all userspace.)
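The "direct syscalls" point is worth making concrete: on Linux the syscall numbers and register convention are themselves the stable ABI, so a binary can enter the kernel without going through any particular userspace library, and that raw surface is exactly what an emulation layer like WSL1 has to reimplement. A minimal Linux-only sketch (the syscall numbers here are architecture-specific, hence the lookup; this uses libc's `syscall()` helper purely as a convenient trampoline):

```python
import ctypes
import platform

# Linux only: invoke write(2) via its raw syscall number, bypassing libc's
# write() wrapper. The number/register convention *is* the kernel ABI.
SYS_WRITE = {"x86_64": 1, "aarch64": 64}[platform.machine()]

libc = ctypes.CDLL(None, use_errno=True)
msg = b"direct syscall\n"
written = libc.syscall(SYS_WRITE, 1, msg, len(msg))  # fd 1 = stdout
assert written == len(msg)
```

Contrast with Windows or macOS, where the syscall numbers are unstable between releases and the supported interface is a dynamic library (ntdll/libSystem), which is what lets Wine interpose at the library layer.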
> But after that, Rosetta will be pared back and will only be available to a limited subset of apps—specifically, older games that rely on Intel-specific libraries but are no longer being actively maintained by their developers. Devs who want their apps to continue running on macOS after that will need to transition to either Apple Silicon-native apps or universal apps that run on either architecture.
https://arstechnica.com/gadgets/2025/06/apple-details-the-en...
It's all a question of using the right/performant hardware interfaces, e.g. IOMMU-based direct hardware access rather than going through software emulation for performance-critical devices.
https://developer.apple.com/documentation/virtualization/run...
Given that they announced a timeline for sunsetting Rosetta 2, it may be low priority.
They have Xcode cloud.
The $4B contract with Amazon ends, and it’s highly profitable.
Build a container, deploy on Apple, perhaps with access to their CPUs.
People still want the nice UI/UX, and this is just a Swift package.
License aside, though, I would still bet that relying on the Apple-specific version of something like this will cause headaches for teams unless you're operating in an environment that's all-in on Apple. Like, your CI tooling in the cloud runs on a Mac, that degree of vendor loyalty. I've never seen any shop like that.
Plus when this tooling does have interoperability bugs, I do not trust Apple to prioritize or even notice the needs of people like me, and they're the ones in charge of the releases.
As opposed to that, there's OrbStack, a venture-backed closed source application thriving off of user licenses, developed by a small team. As empathetic as I am with them, I know where I bet my money on in this race.
Orbstack started out as one kid with a passion for reducing the suffering of the masses, and from day 1 he was relentless about making the experience as smooth as possible, even for the weirdos like me (e.g. I have a very elaborate ssh config). He was very careful and thoughtful about choosing a monetisation model that wouldn't hinder people exactly like him - passionate hackers on a shoestring budget.
Yeah, it's now venture-backed. I'm not concerned, as long as Danny is in charge.
Presumably it's not as good right now but where it ends up depends entirely on Apple's motivation. When they are determined they can build very good things.
Also the EULA limits you to just two VMs per computer and only for very specific purposes. Clearly because they want you to buy their damn computers
I would really want to have a macOS (not just Darwin) container, but it seems that it is not possible with macOS. I don't remember the specifics, but there was a discussion here at HN a couple of months ago and someone with intimate Darwin knowledge explained why.
Heck even Microsoft managed to run Windows containers on Windows, even with the technical debt and bloat they had. Apple could, they just don't want to because it goes straight against their financial interests
Docker for Mac builds it in 4 minutes.
container tool... 17 minutes, maybe even more. And I did set the CPU and memory for the builder to higher numbers than the defaults (similar to what Docker for Mac is set to). And in reality it is not the build stage but "=> exporting to oci image format" that takes forever.
Running containers - have not seen any issues yet.
If you're a dev team that creates Mac/iOS/iPad/etc apps, you might want Mac hardware in your CI/CD stack. Cloud providers do offer virtual Macs for this purpose.
If you're a really big company (eg. a top-10 app, eg. Google) you might have many teams that push lots of apps or app updates. You might have a CI/CD workflow that needs to scale to a cluster of Macs.
Also, I'm pretty sure apple at least partially uses Apple hardware in the serving flow (eg. "Private Cloud Compute") and would have an interest in making this work.
Oh, and it'd be nice to be able to better sand-box untrusted software running on my personal dev machine.
Private Cloud Compute is different hardware: https://security.apple.com/blog/private-cloud-compute/
I would call this "Apple hardware" even if it's not the same thing you can buy at an Apple Store.
In my experience, the only use case for cloud macs is CI/CD (and boy does it suck to use macOS in the cloud).
I wonder what the memory overhead is, especially if running multiple containers, as that would spin up multiple VMs.
[0]: https://developer.apple.com/videos/play/wwdc2025/346 10:10 and forwards
> Containers achieve sub-second start times using an optimized Linux kernel configuration[0] and a minimal root filesystem with a lightweight init system.
[0]: https://github.com/apple/containerization/blob/main/kernel/c...
Container: Apple's Linux-Container Runtime - https://news.ycombinator.com/item?id=44229239 - June 2025 (11 comments)
Apple announces Foundation Models and Containerization frameworks, etc - https://news.ycombinator.com/item?id=44226978 - June 2025 (345 comments)
(Normally we'd merge them but it seems there are significant if subtle differences)
Many developers I know don't use macOS mainly because they depend on containers and virtualisation is slow, but if Apple can pull off efficient virtualisation and good system integration (port mapping, volumes), then it will eat away at a large share of Linux systems.
But is it also finally time to fix dtrace on MacOS[0]?
[0]: https://developer.apple.com/forums/thread/735939?answerId=76...
It’s some nice tooling wrapped around lightweight VMs, so basically WSL2
This guide seems to have no specific license agreement.
https://www.freecodecamp.org/news/install-xcode-command-line...
Just because you click through them all without reading doesn’t mean they are all equivalent. Xcode has an EULA. Swift and Make do not, being free software.
They are not the same.
Not even the first non-hyperbolic part of what you wrote is correct. "Container" most often refers to OS-level virtualization on Linux hosts using a combination of cgroups, namespaces, SDN, and some mount magic (among other things). macOS is BSD-based and therefore doesn't support the first two things in that list. Apple can either write a compatibility shim that emulates this functionality or virtualize the Linux kernel to support it. They chose the latter. There is no Docker involved.
This is a completely sane and smart thing for them to do. Given the choice I'd still much rather run Linux but this brings macOS a step closer to parity with such.
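For anyone unfamiliar with the primitives mentioned above: on Linux a container is just an ordinary process placed into fresh namespaces (plus cgroup resource limits), and the kernel exposes each process's namespace memberships under /proc. A quick Linux-only peek, pure inspection with no privileges needed:

```python
import os

# Every Linux process belongs to a set of namespaces (pid, mnt, net, ...).
# A container runtime creates fresh ones for the containerized process;
# the inode numbers in these links identify which namespace you are in.
ns_dir = "/proc/self/ns"
namespaces = {name: os.readlink(os.path.join(ns_dir, name))
              for name in sorted(os.listdir(ns_dir))}
for name, ident in namespaces.items():
    print(f"{name:20s} {ident}")

# cgroup membership (resource limits) is likewise visible per process:
with open("/proc/self/cgroup") as f:
    print(f.read().strip())
```

None of these kernel objects exist in XNU/Darwin, which is why Apple's tool (like Docker Desktop and WSL2) reaches for a Linux VM instead of a shim.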
I'm not sure this is the same, though. This feels more like docker for desktop running on a lightweight vm like Colima. Am I wrong?
It isn't systemd:
> Containers achieve sub-second start times using an optimized Linux kernel config, a minimal root filesystem, and a lightweight init system, vminitd
OCI containers are supposed to be "one container, one process": at the very least the container's server is PID 1 (other processes may be spawned at times, but typically the container's main process runs as PID 1).
Containerization is literally the antithesis of systemd.
So I don't understand your comment.
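An easy way to see the point from inside any Linux environment is to check what is actually running as PID 1: on a normal host it's a full init system, while inside a typical OCI container it's the container's own entrypoint process.

```python
# Linux only: PID 1's command name reveals whether you're under a full
# init system (e.g. "systemd" or "init") or a single-process container,
# where the entrypoint itself occupies PID 1.
with open("/proc/1/comm") as f:
    pid1 = f.read().strip()
print("PID 1 is:", pid1)
```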
For instance, Orbstack implements the docker daemon socket protocol, so despite not being docker, it still allows using docker compose where containers are created inside of Orbstack.
> You need an Apple silicon Mac to build and run Containerization.
> To build the Containerization package, your system needs either:
> macOS 15 or newer and Xcode 26 Beta
> macOS 26 Beta 1 or newer
Those of you on Intel Macs, this is your last chance to switch to Apple Silicon (Sequoia was the second-to-last)[0], as macOS Tahoe is the last version to support Intel Macs.
I like the hardware, hate the absurd greedy storage and RAM prices.
Source? Is this self-imposed, or what does “allowed” mean?
Even if true, technical people can work around this by either spoofing a non-external drive or using `ln`, no?
Of course, predictably, iCloud Drive gives you no configuration of where you store the local copy, so it’s stored in some weird path specifically on your boot volume.
IIRC Google Drive for Desktop won't sync the target of a symbolic link. It will sync the target of a hard link, but hard links can only target the same filesystem that the link is on, so you can't target an external drive on macOS AFAIK.
I can't speak for the other software you mentioned.
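To make the distinction above concrete: a symlink stores a path (and may point anywhere, including another volume), while a hard link is another directory entry for the same inode and therefore can never cross filesystems. A small demonstration using throwaway temp files:

```python
import os
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "original.txt")
with open(src, "w") as f:
    f.write("data")

soft = os.path.join(d, "soft.txt")
hard = os.path.join(d, "hard.txt")
os.symlink(src, soft)  # stores the *path*; target may be on another filesystem
os.link(src, hard)     # another name for the same inode; same filesystem only

assert os.readlink(soft) == src                     # symlink is just a path
assert os.stat(hard).st_ino == os.stat(src).st_ino  # hard link shares the inode
# os.link() across devices would raise OSError (EXDEV, "cross-device link"),
# which is why a hard link can't target an external drive on macOS either.
```

So a sync client that follows hard links only helps if the data lives on the same filesystem as the link, which defeats the external-drive workaround.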
Probably because Apple spent half a billion dollars for the patent portfolio of a company building enterprise SSD controllers a decade ago. People seem to like data storage integrity.
> Anobit appears to be applying a lot of signal processing techniques in addition to ECC to address the issue of NAND reliability and data retention. In its patents there are mentions of periodically refreshing cells whose voltages may have drifted, exploiting some of the behaviors of adjacent cells and generally trying to deal with the things that happen to NAND once it's been worn considerably.
Through all of these efforts, Anobit is promising significant improvements in NAND longevity and reliability.
https://www.anandtech.com/show/5258/apple-acquires-anobit-br...
If I had to place a bet on why the patents were purchased, it would be to protect them against someone else purchasing them and alleging that literally any SSD controller Apple put into their silicon was infringing.
Watch the talk "Zebras all the way down" by Bryan Cantrill; TL;DR most storage devices lie to you and are constantly corrupting your data, and you only find out if you run paranoid filesystems like ZFS and run your own burn-in and integrity tests.
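The core idea behind those "paranoid filesystems" is small enough to sketch: keep a checksum alongside every block at write time and verify it on every read, so a device that silently flips bits gets caught instead of trusted. A toy version (nothing here resembles ZFS's actual on-disk format):

```python
import hashlib

# Toy end-to-end integrity check, the principle behind ZFS's per-block
# checksums: the filesystem never trusts what the device returns.
def write_block(data: bytes) -> tuple[bytes, bytes]:
    """Store a checksum with the data at write time."""
    return hashlib.sha256(data).digest(), data

def read_block(checksum: bytes, data: bytes) -> bytes:
    """Verify on every read; raise instead of returning silent corruption."""
    if hashlib.sha256(data).digest() != checksum:
        raise IOError("checksum mismatch: device returned corrupted data")
    return data

csum, blob = write_block(b"important bytes")
assert read_block(csum, blob) == b"important bytes"  # clean read passes

try:
    read_block(csum, b"importent bytes")  # simulate a silent bit flip
except IOError as e:
    print("caught:", e)
```

Without the stored checksum, the corrupted read would simply be returned as if it were valid, which is exactly the failure mode the talk describes.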
https://askubuntu.com/questions/55868/installing-broadcom-wi...
Not sure about the newer ones.
Gathering this information and putting together a distro to rescue old Macbooks from the e-waste bin would be a worthwhile project. As far as I can tell they're great hardware.
I imagine things get harder once you get into the USB-C era.