Besides the obvious build failures on heavily sandboxed build servers with no internet access, this forces anyone with even a little concern for security to do a full audit of any build recipes before using them, because merely studying, and making available, the dependencies listed in READMEs and build manifests like requirements.txt, package.json, etc. is no longer enough.
I find this a very worrying development, especially given the rise in critical computer infrastructure failures and supply chain attacks we've seen lately.
Self-contained distribution should be the norm.
There's so much churn in the devops space that nobody has time to figure out "the correct way" anymore.
Fedora and openSUSE typically have a policy that distributed packages (which includes container images) must build using only packages from the repository, or binaries explicitly added during the build. So once you can `dnf/zypper install` something (or pull it from the vendor's container registry), you know the artifacts are trusted.
If you need to be on the bleeding edge, you deal with random internet crap, shrug.
Of course a random OSS developer won't create offline-ready, trusted build artifacts; they don't have the infrastructure for it. And this is why companies like Red Hat or SUSE exist - a multi-billion-dollar corporation is happy to pay for someone to do the plumbing and turn a random artifact from the internet into a trusted, reproducible, signed artifact that tracks CVEs and updates regularly.
In the 80s we envisioned modular, reusable software components you drop in like Lego bricks (we called it CASE then), and here we have it, success! Spoiler, it comes with tradeoffs...
The Apple Silicon build of macOS is probably not going to be emulatable any time soon, though there is some early work on booting ARM Darwin.
Also, Intel VT-x is missing on AMD, so virtualization is busted on AMD hosts - although some crazy hacks with old versions of VirtualBox can make Docker kind of work through emulation.
AMD has its own VT-x alternative (AMD-V) that should work just fine. There are other challenges to getting macOS to boot on AMD CPUs, though, usually fixed by loading kexts and other trickery.
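For what it's worth, a quick way to check which of the two a host actually exposes on Linux (a hedged sketch; `kvm-ok` comes from Ubuntu's cpu-checker package):

    # 'vmx' in the CPU flags means Intel VT-x; 'svm' means AMD-V.
    grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

    # On Ubuntu/Debian, the cpu-checker package offers a friendlier check:
    kvm-ok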
I don't really see the point of using Docker for running a full OS. Just distribute an OVA or whatever virtualisation format you prefer. Even a qcow2 with a bash script to start the VM would probably work.
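Something like this minimal launcher, for instance (a sketch assuming a Linux host with KVM and a disk image named vm.qcow2; a real macOS guest needs considerably more, e.g. OVMF firmware and an OpenCore bootloader, but the distribution model is the same):

    #!/usr/bin/env bash
    # Minimal qcow2 VM launcher -- a sketch, not a full macOS setup.
    set -euo pipefail
    qemu-system-x86_64 \
      -enable-kvm -cpu host \
      -m 4G -smp 4 \
      -drive file=vm.qcow2,format=qcow2,if=virtio \
      -nic user,model=virtio-net-pci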
[0] - https://developer.apple.com/documentation/paravirtualizedgra...
Wouldn’t that work with AMD-V?
But isn't the use case here running macOS in Docker on an AMD-based computer, for example? So macOS would only see what QEMU presents to it.
Or are you talking about virtualization within that container? Then I probably misunderstood.
Also, there are a couple of kext projects that allow you to use AMD graphics, even iGPUs, on Hackintoshes. I have not tested this myself, but there are rumblings you may even be able to get this to work with a Steam Deck.
https://github.com/ChefKissInc/NootedRed
https://github.com/ChefKissInc/NootRX
A lot has changed in the Hackintosh space around AMD of late. I don’t think the automatic pessimism is as warranted as it once was.
I'd guess VMXON/VMXOFF, the VMCS structures, etc. will still be the same on both? A lot of the security stuff is totally different, though (AMD PSP vs. Intel ME, etc.).
(Still agree, of course - just thinking about where these differences are located, since the CPUs can run very similar or even the same code.)
Docker-OSX: Run macOS VM in a Docker - https://news.ycombinator.com/item?id=34374710 - Jan 2023 (110 comments)
macOS in QEMU in Docker - https://news.ycombinator.com/item?id=23419101 - June 2020 (186 comments)
Worked really great otherwise, though. Very useful in a pinch.
But I think this tool is more useful for things like build scripts (that rely on proprietary macOS frameworks) than for actually using it like a personal computer.
This could be pretty awesome in terms of freedom, even if the build takes 5x longer.
This is how Godot targets iOS: https://github.com/godotengine/build-containers/blob/main/Do...
Here's a Docker image with the tools preinstalled, though you'll need some tweaks to target iOS: https://github.com/shepherdjerred/macos-cross-compiler
While at RStudio (now called Posit), I worked on cross-compiling C/C++/Fortran/Rust on a Linux host targeting x86_64/aarch64 macOS. If you download an R package with native code from Posit Package Manager (https://p3m.dev/client/), it was cross-compiled using this approach :)
Also wanna point out the existence of OSX-PROXMOX, which does something similar for Proxmox home servers: https://github.com/luchina-gabriel/OSX-PROXMOX
I’ve personally been using the latter on my HP Z420 Xeon; it’s very stable, especially with GPU passthrough.
https://github.com/steilerDev/icloud-photos-sync
https://github.com/icloud-photos-downloader/icloud_photos_do...
My guess: Being able to run it on a non-Mac/Windows machine.
You can extract the images yourself from official install media (for instance, the installers you can create from within macOS) and use it for whatever personal project you want; you'd be breaking the EULA, but that doesn't mean much. You're not allowed to throw your copy on the internet, though.
Other projects I've seen download the installer images directly from Apple, something they could probably detect and block if they wanted to. That would probably be completely legal, as nobody is unlawfully distributing the files. This is different; the Docker images contain a copy of macOS.
Apple could probably take this project down any time they want to - but if they cared, they probably would've done it already.
If a choir teacher distributes the lyrics to a Britney Spears song to their students for practice, there is nothing illegal about this.
If a choir teacher starts a website britneylyrics.com and puts ads on it, that would qualify as infringement.
The EULA might prohibit redistribution, but you don't need to accept an EULA to copy-paste files, as far as I know.
> The EULA might prohibit redistribution
I don’t think it matters. Copyright law automatically forbids copying. Well, assuming Apple complied with any requirements to have a valid copyright, which seems a safe bet.
Can I run Docker inside this container to get macOS to run inside macOS? ;)
(However, "USB over ethernet proxy" is also a true passthrough, just one with higher latency than VirtIO.)
But tell me, please: what problems do you have with PCIe passthrough?
Also speaking from experience in large VM test farms with a significant amount of forwarded hardware. I've never experienced problems with hundreds of machines doing exactly this, for years.
1. VMs operate on a copy of certain PCIe descriptors obtained during enumeration/when forwarding was set up, meaning that some firmware updates that depend on these changing cannot work correctly. The exact details have left my memory.
2. Weird states that only happen when forwarding. Hardware that seems so stable when used directly that bugs would seem inconceivable enters broken states when forwarded and fails to initialize within the VM.
Hardware and drivers are both full of bugs, and things become "fun" when either gets surprised. You can deal with it when you're forwarding your own hardware and using your own drivers, so discovered issues can be debugged and sorted out, but it's much less fun when you're forwarding stuff from other vendors out of necessity.
Dealt with this one just this morning.
3. Reset bugs. Hardware reset and sequencing is a tricky area (speaking from old FPGA experience), and some devices cannot recover without a full power cycle.
In some cases, I can recover the device by stopping the forward, removing the device (echo 1 > /sys/bus/pci/devices/.../remove), rescanning and letting the host kernel temporarily load drivers and initialize the device, and then forwarding it again. Did that today. (A rough sketch of the sequence follows after this list.)
4. Host crashes. Yay.
Forwarding a single device on a user machine that still gets regular reboots tends to work fine, but things get hairy when you scale this up. I've had to do a lot of automation of things like handing devices back to the hypervisor for recovery and firmware management.
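The recovery sequence mentioned under point 3 looks roughly like this (a sketch; 0000:03:00.0 is a placeholder address, and the detach step depends on your hypervisor):

    #!/usr/bin/env bash
    # Recover a wedged passthrough device without rebooting the host (run as root).
    DEV=0000:03:00.0   # placeholder PCI address -- adjust for your device

    # 1. Detach the device from the VM first (hypervisor-specific,
    #    e.g. virsh detach-device for libvirt).
    # 2. Drop the device from the host's PCI tree:
    echo 1 > /sys/bus/pci/devices/$DEV/remove
    # 3. Rescan so the host rediscovers it and its driver reinitializes it:
    echo 1 > /sys/bus/pci/rescan
    # 4. Once healthy, rebind to vfio-pci and forward it to the VM again.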
In my case I need 3rd-party USB devices (that always just work™) to communicate and interact with hardware. I've been automating/running literally hundreds of these configurations without a single issue related to USB or PCI passthrough. Sometimes there are even switchable USB hubs in the mix (for power-cycling specific USB devices); those work fine as well.
My experience is in testing both USB downstream devices and PCIe devices developed in-house. Some of the forwarded devices might be 3rd-party devices like hubs, relays for power cycling and USB isolators to simulate hot-plug, but the DUTs are stuff we manufacture.
In the USB test scenarios (we have about 100 such machines, on average connected to a dozen DUTs, some more), the symptom of failure is generally that the entire controller can discover downstream devices but permanently fails to communicate with any of them, or that the controller itself fails to initialize entirely.
The PCIe test scenarios are not something I actively work with anymore, but they involve a server room full of machines with 4-7 DUTs each and much more custom handling - such as hot-unplugging the device from the VM, resetting and firmware-updating the device, and hot-plugging it back as part of the test running in that VM - since testing PCIe devices themselves exercises many more issues than you see with standardized hardware.
I have done this for about a decade, so I've been through a few iterations and tech stacks. One can find things that work, but it's not in any way, shape, or form guaranteed to work.
This is really nice WRT the ease of installation: no manual setup steps and all.
This likely expressly violates the [macOS EULA], which says: «you are granted a limited, non-exclusive license to install, use and run one (1) copy of the Apple Software on a single Apple-branded computer at any one time» - because the point is to run it not on a Mac. So, pull it and keep it around; expect a C&D letter to arrive any moment.
[macOS EULA]: https://www.apple.com/legal/sla/docs/macOSMonterey.pdf (Other versions contain the same language.)
The question I've always had is: how enforceable is that, really? Obviously the whole point of Apple making macOS freely available is to run it on Apple hardware. They don't give it out for free to run on other hardware, but can they really do anything about that other than require you to enter a serial number to download an image? If they really cared, they would do something like hashing the serial number and current date and time against a secret key (maybe inside a read-only portion of the TPM), and only Apple would be able to verify that the hardware is legit. You would need to somehow expose the TPM to the hypervisor to be able to generate hashes for macOS to verify its license. Clearly this is not a huge problem for Apple, because they would already be doing this if it were an issue.
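Purely to illustrate the scheme being speculated about here - this is nothing Apple actually does, and every value below is made up:

    # Hypothetical attestation token: HMAC of serial number + coarse timestamp.
    # In the scheme above, the key would live inside the TPM and the HMAC would
    # be computed there, never in a shell variable like $SECRET here.
    SERIAL="C02XXXXXXXXX"               # made-up machine serial
    STAMP="$(date -u +%Y%m%dT%H%M)"     # coarse timestamp to limit replay
    printf '%s' "${SERIAL}${STAMP}" | openssl dgst -sha256 -hmac "$SECRET"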
With other hosts, it’s kind of an Adobe approach - you either weren’t gonna buy a Mac anyways, or you might be tempted to buy a Mac after using macOS in a VM. Realistically, it’s not worth Apple coming after you unless you’re an enterprise making your money by breaking the EULA.
I’m omitting a few details for brevity (MS licensing is nuts when you get into the weeds).
https://i.imgur.com/fop769Z.jpeg
(Too bad they didn't pay such close attention to the power profiles their partners were attaching the Intel brand name to...)
Serious stuff!
But this is packaged as a Docker image, and Docker is Linux-specific. Linux is not officially supported by Apple on their hardware, and is certainly not prevalent on it. I doubt that the intended target audience of this project is limited to Asahi Linux.
For people who want an open-source CLI solution rather than a commercial product (which, for larger businesses, requires payment), there's also colima, which does roughly the same thing.
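Typical colima usage, for reference (a sketch assuming Homebrew; the resource flags are optional):

    # Start a lightweight Linux VM that exposes a Docker socket:
    brew install colima docker
    colima start --cpu 4 --memory 8
    # The stock Docker CLI then talks to colima's VM:
    docker run --rm hello-world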
So, lots of people very successfully use Docker on macOS, including on Apple hardware.
This particular software would need nested virtualization to be highly performant, but at least on M3 or newer Macs running macOS 15 or newer, this is now supported by Apple's Virtualization framework:
https://developer.apple.com/documentation/virtualization/vzg...
So, if that's not easy to do in a useful and performant way now, it will absolutely be possible in the foreseeable future. I'm sure that the longtime macOS virtualization product Parallels Desktop will add support for nested virtualization quite soon if they haven't already, in addition to whatever Docker Desktop and colima do.
(Tangent: Asahi Linux apparently supports nested virtualization on M2 chips even though macOS doesn't.)
The same result can be achieved by running macOS right in the VM. This can be extra efficient since both the host OS and the guest OS are macOS, and the VM could use this fact.
It may make sense to run macOS in an emulator like QEMU under macOS if the host version is ARM and the guest version is x64 (or vice versa). But I don't see where Linux and Docker would be useful in this case.
One such case, however, is when the user is already managing Linux Docker containers for other parts of their development or testing workflow and wants to manage macOS containers with the same tooling. That's legitimate enough, especially when it ends up using nested virtualization of the same architecture rather than true emulation, which keeps the performance penalty modest.
> that is already running the Apple Software
Running Linux on Apple hardware would not follow that part of the EULA.
It could be in a VM.
""" "Corporate Headquarters has commanded," continued the magician, "that everyone use this workstation as a platform for new programs. Do you agree to this?"
"Certainly," replied the master, "I will have it transported to the data center immediately!" And the magician returned to his tower, well pleased.
Several days later, a novice wandered into the office of the master programmer and said, "I cannot find the listing for my new program. Do you know where it might be?"
"Yes," replied the master, "the listings are stacked on the platform in the data center." """
Serial ports were slow and grep wasn't really a thing, so having a printout (or "listing") of your program was a more efficient way (or the only way!) to debug your program after the fact. https://www.youtube.com/watch?v=tJGrie7k97c
Back in the 90's, I had some programming classes in high school where there were 30 chairs but 15 computers (around the edge)... bring your own 360 KB floppy disk! So you had a real incentive - and a strict teacher who insisted - that you write out your program ahead of time, show it to her for first-pass feedback, and _then_ you'd get to go type it into the computer and see if it worked. Submissions were via printouts (of the program, aka "listing", along with the output), which she then took home and graded.
The whole document IMHO is worth a read, but it is definitely a product of its time (70s/80s/90s). https://en.wikipedia.org/wiki/The_Tao_of_Programming
Stick tongue firmly in cheek, empty your cup, and enjoy the ride!
Edit: ...and the relationship to the cantankerous original comment that "couldn't figure out why they'd want to run OSX" - this is the zen-koan sarcastic response of "use it as a platform for development" (i.e., stack your papers on top of it).
I guess that part of the license is meant to automatically disqualify an Apple-branded computer running a Linux distro as the host OS from running macOS in a VM: "on each Apple-branded computer you own or control that is already running the Apple Software".
Some smart ass might argue that "already running the Apple Software" doesn't mean at the exact same time, but more like "I am still running it sometimes as a dual boot" - but I'm not sure this would pass the court test.
And since I believe Docker on macOS runs in a Linux VM, this would be running QEMU on top of a Linux VM on top of macOS.
I can't see any legit use for this. Anyone who needs automated, disposable environments for CI/CD would simply use UTM on Mac minis: https://docs.getutm.app/scripting/scripting/
This repo is 4 years old... I don't think it's coming.
In that case... if I run Asahi Linux on my Apple-silicon MacBook Pro as the main operating system and then run macOS in a container, I should be fine.
Ubuntu installs and runs easily. Other versions of Linux - it depends.
Even Debian has lost its favorability by having sooo much legacy bloat, bugs, and outdated kernels that won't run Nvidia GPUs (2023) or other recent peripherals.
I'd be much more curious how Fedora or openSUSE hold up.
But I think it's an experience thing rather than a "years" thing. If you've only used Ubuntu for 10 years, you won't know what modern Linux is like.
You sound like a Kubuntu expert, not a Linux expert.
This is just pointless gatekeeping, doubled down on at this point. People can be experts and use Kubuntu. People can be veterans and use Ubuntu. People can be absolute beginners and use Arch or openSUSE or literally any other distro. Choice of distro is in no way, shape, or form indicative of experience, other than that some are easier for absolute beginners to get started with than others. But that doesn't make them any less good.
It's a personal choice, with each option having its own pros and cons. Not some indicator of experience or knowledge.
It's not even gatekeeping in 2024. Linux pros are avoiding the Debian family.
Why would you use a buggy, slow, outdated distro when we have fast, modern ones with fewer bugs?
This is an ignorance thing; much of the Linux community repeats what it did in the past, afraid to change.
You’re not a “Linux pro” (not that that’s a thing in the first place ffs), so your opinion doesn’t matter, as much as you might think it does.
Glad to see pointless rabid fanaticism is still a thing in 2024, even in the Linux community.
Honestly, if you care this much about what distro someone uses, you need to get a life. This is by far the most pointless hill to die on.
I tried Ubuntu on my MBP because I thought its popularity would mean the best chance of things working out of the box. I’m long past having time to spend on getting basic things working.
Apple now even publicly distributes macOS from its site with no authentication required, something that certainly wasn't true in the early days of Hackintosh.
Given that some Hackintoshers may be doing it for the purposes of "security research" (bug bounty chasing), which indirectly benefits Apple, I don't think they will change that unspoken stance anytime soon.
On the other hand, its attempts at destroying right-to-repair and third-party OEM parts show what it actually worries about.
Says who? My Mac Mini runs Linux as the host OS; this project allows me to run macOS as a guest OS on Apple hardware on demand.
A bit tangential, but is this more performant/"better" than running macOS on, say, Hyper-V? I understand my Zen 4 laptop won't allow GPU acceleration anyway; I'm only looking to run a few apps (and maybe Safari) on it.
Also, wouldn’t it be the end user potentially in violation of the EULA, not the git repo provider?
Edit: agreed about OS images, that does not look legit.
https://github.com/ytdl-org/youtube-dl
Did it get taken down again? The takedown I remember was a few years ago, and GitHub announced some policy changes to make it harder for that to happen when they very loudly reinstated it:
https://github.blog/news-insights/policy-news-and-insights/s...
I guess I'm curious why you're so focused on this violating anything? Apple clearly doesn't care as folks like myself have used it for years. Apple's target market is hardware buyers, not people who do things like this. If this actually impacted sales, sure - but Apple doesn't sell OSX anymore.
As an aside the sickcodes work is great for people wanting to leverage Apple's "Find My" network with non-Apple devices by leveraging OpenHaystack [0].
In 2021 BlackBerry, surprisingly, wrote this article about emulating the XNU kernel and getting it running on non-Apple hardware, but it's just a terminal:
https://blogs.blackberry.com/en/2021/05/strong-arming-with-m...
Someone would have to write something that can emulate/abstract the Apple iGPU to get anywhere near a usable GUI. I'm no expert, but I don't think this is going to happen anytime soon - so when Intel releases of macOS stop happening, Apple hardware might be the only way to virtualize macOS for a while.
I'm not familiar with what Apple's GPU architecture on its ARM SoCs looks like, but wouldn't a framebuffer be sufficient? Or does ARM macOS have absolutely no software rendering fallback and relies on the GPU to handle all of it?
I know that regular amd64 macOS runs fine without GPU acceleration in a VM (like what is shown here), and arm64 Windows likewise with an emulated EFI framebuffer in QEMU on an amd64 host (it's bloody slow, being 100% emulated, but it works well enough to play around with.)
You could run the amd64 version of macOS 11 in QEMU on the M1, but that's ARM-to-x86 emulation, which will be slow, and I suppose isn't what you're looking for.
It uses Apple's Virtualization framework and works well, besides issues with virtiofs. But those can be worked around with virtual block devices, a.k.a. images.
Edit: "some" limitations is putting it lightly. From https://eclecticlight.co/2022/11/17/lightweight-virtualisati... which is apparently still current:
> Apple’s current implementation of lightweight virtualisation still has no support for Apple ID, iCloud, or any service dependent on them, including Handoff and AirDrop. Perhaps the most severe limitation resulting from this is that you can’t run the great majority of App Store apps, although Apple’s free apps including Pages, Numbers and Keynote can still be copied over from the host and run in a guest macOS.
Same deal with VirtualBuddy; apparently the root of the problem is that some sort of hardware validation fails in VMs: https://github.com/insidegui/VirtualBuddy/discussions/27
Edit: it actually does!
QEMU also has its own built-in remote access capabilities (SPICE- and VNC-based), but the former needs guest support.
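For example (hedged; exact option spellings vary a little between QEMU versions):

    # VNC server on display :0 (TCP 5900) -- works with any guest:
    qemu-system-x86_64 -m 2G -drive file=vm.qcow2,format=qcow2 -vnc :0

    # SPICE on TCP 5930; the guest needs agents/drivers for the nicer features:
    qemu-system-x86_64 -m 2G -drive file=vm.qcow2,format=qcow2 \
      -spice port=5930,disable-ticketing=on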
I really hate having to also support the Apple ecosystem. Development and CI/CD integration are really poor without buying the hardware.
However, the prices are definitely outside my regular budget (I needed it for an iOS app project because of the walled-garden ecosystem), and I only got the 8 GB MacBook, which in hindsight very much feels like a mistake, even with the exorbitant pricing of the 16 GB model.
For the price of the 8 GB model I could have gotten a nice laptop with 32 GB of RAM built in. That said, I don’t hate the OS, it’s all quite pleasant and performs well.
https://darwin-containers.github.io/
The parent project is macOS VMs with a Docker interface, I think.
Darwin Containers is runc reimplemented in terms of macOS chroot, so you get some isolation on native Macs in a Docker style.
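Conceptually, that style of "container" boils down to a chroot (a sketch only; on a real Mac you'd also need dyld and the shared libraries inside the new root before anything would run):

    # Docker-style isolation by directory tree alone -- no namespaces,
    # no cgroups, no layered filesystem. Paths are illustrative.
    sudo mkdir -p /tmp/darwin-root/bin
    sudo cp /bin/sh /bin/ls /tmp/darwin-root/bin/
    sudo chroot /tmp/darwin-root /bin/sh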
"Self-host in the repo glibc to emphasize the temporariness of this patch" - sickcodes, committed Feb 12, 2021
Seriously though, this is great.
No forum, eh? Everyone should come to the live channels and ask the same questions again :)
Hint: Reddit is sort of a collection of forums. Discord, WhatsApp group chats, Slack, and other similar things are not; they're just discardable text chat.
Tell me you don't know what a forum is without telling me you don't know what a forum is :)