The new Foundation Models framework for generative language models looks very Swift-y and nice for Apple developers. And it's local and on-device. In the Platforms State of the Union they showed some really interesting sample apps using it to generate different itineraries in a travel app.
The other big thing is vibe coding coming natively to Xcode through ChatGPT (and other) model integration. A couple of things make this look like a nice quality-of-life improvement for Apple developers: it tracks iterative changes with the model so you can roll back easily, and it gives the model context on your codebase. It seems to be a big improvement over the previous, very limited GPT integration with Xcode, and the first time Apple developers have had a native version of the more popular vibe-coding tools.
Their 'drag a napkin sketch into Xcode and get a functional prototype' is pretty wild for someone who grew up writing [myObject retain] in Objective-C.
Are these completely ground-breaking features? I think it's more what Apple has historically done which is to not be first into a space, but to really nail the UX. At least, that's the promise – we'll have to see how these tools perform!
Does that explain why you don't have to worry about token usage? The models run locally?
I have the same question. Their "Deep dive into the Foundation Models framework" video [1] is nice for seeing code that uses the new `FoundationModels` library, but for a "deep dive" I would like to learn more about tokenization. Hopefully these details are eventually disclosed, unless someone here already knows?
[1] https://developer.apple.com/videos/play/wwdc2025/301/?time=1...
To parent: yes, this is for local models, so insofar as worrying about tokens implies financial cost, you don't need to.
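For anyone curious, the session video shows usage roughly along these lines (a sketch from the talk, not authoritative — the exact type and property names may differ in the shipping API):

```swift
import FoundationModels

// Ask the on-device model for text. No API key and no per-token
// billing, because the model runs locally on the device.
let session = LanguageModelSession()
let response = try await session.respond(
    to: "Suggest a three-day itinerary for Kyoto."
)
print(response.content)
```

There's no token meter in sight, which fits the "it's all local" reading — though the context window is still finite, so tokenization details would be good to know regardless.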
I went into this industry because I grew up fascinated by computers. When I learned how to code, it was about learning how to control these incredible machines. The joy of figuring something out by experimenting is quickly being replaced by just slamming it into some "generative" tool.
I have no idea where things go from here but hopefully there will still be a world where the craft of hand writing code is still valued. I for one will resist the "vibe coding" train for as long as I possibly can.
Where it gets interesting is being pushed into directions that you wouldn't have considered anyway rather than expediting the work you would have already done.
I can't speak for engineers, but that's how we've been positioning it in our org. It's worth noting that we're finding GenAI less practical in design-land for pushing code or prototyping, but insanely helpful for research and discovery work.
We've been experimenting with more esoteric prompts to really challenge the models and ourselves.
Here's a tangible example: Imagine you have an enormous dataset of user-research, both qual and quant, and you have a few ideas of how to synthesize the overall narrative, but are still hitting a wall.
You can use a prompt like this to really get the team thinking:
"What empty spaces or absences are crucial here? Amplify these voids until they become the primary focus, not the surrounding substance. Describe how centering nothingness might transform your understanding of everything else. What does the emptiness tell you?"
or
"Buildings reveal their true nature when sliced open. That perfect line that exposes all layers at once - from foundation to roof, from public to private, from structure to skin.
What stories hide between your floors? Cut through your challenge vertically, ruthlessly. Watch how each layer speaks to the others. Notice the hidden chambers, the unexpected connections, the places where different systems touch.
What would a clean slice through your problem expose?"
LLM's have completely changed our approach to research and, I would argue, reinvigorated an alternate craftsmanship to the ways in which we study our products and learn from our users.
Of course the onus is on us to pick apart the responses for any interesting directions that are contextually relevant to the problem we're attempting to solve, but we are still in control of the work.
Happy to write more about this if folks are interested.
Or like this week I was sick and didn't have the energy to work in my normal way and it was fun to just tell ChatGPT to build a prototype I had in mind.
We live in a world of IKEA furniture - yet people still desire handmade furniture, and people still enjoy and take deep satisfaction in making them.
All this to say I don't blame you for being dismayed. These are fairly earth shattering developments we're living through and if it doesn't cause people to occasionally feel uneasy or even nostalgia for simpler times, then they're not paying attention.
Vibe coding can be whatever you want to make of it. If you want to be prescriptive about your instructions and use it as a glorified autocomplete, then do it. You can also go at it from a high-level point of view. Either way, you still need to code review the AI code as if it was a PR.
Coding with an AI can be whatever one makes of it; however, I don't see how vibe coding relates to autocomplete. With autocomplete, you type a bit of code that a program (AI or not) completes. In vibe coding you barely interact with the editor, perhaps only for copy/paste or some corrections. I'm not even sure about the manual "corrections" part if we take Simon Willison's definition [0] — which you're not forced to, obviously — but if there are contradictory views I'll be glad to read them.
0 > If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant
https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gn...
(You may also consider rewriting your first paragraph up to HN standards, because while the content is pertinent, the form sounds like a youngster trying to demo iKungFu on his iPad to Jackie Chan.)
> Vibe coding (or vibecoding) is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few natural language sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code.[1][2][3]
I also like cooking, but I like eating more than the actual cooking. It's a means to an end, and I don't need to always enjoy the cooking process.
So as long as I can, and as long as I still enjoy it, you'll find me writing code. Lucky to get paid to do this.
If your app is worthwhile, and gets popular in a few years, by that time iPhone 16 will be an old phone and a reasonable minimum target.
Skate to where the puck is going...
Phones still get replaced often, and the people who don’t replace them are the type of people who won’t spend a lot of money on your app.
Or do they have the ability to reach out to the internet for up-to-the-moment information?
The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers. And third parties can't really implement this well without Apple's cooperation. There have been some efforts to do this, but the most notable one is now defunct, judging by its busted/empty website[1] and deleted GitHub organization[2]. It required disabling SIP to work, back when it at least sort-of worked. There's one newer effort that seems to be alive, but it's also afflicted with significant limitations for want of macOS features[3].
That would be super useful and fill a real gap, meeting needs that third-party software can't. Instead, as wmf has noted elsewhere in these comments, it seems they've simply "Sherlock'd" OrbStack.
--
1: https://macoscontainers.org/
Linux container processes run on the host kernel with extra sandboxing. The container image is an easily sharable and runnable bundle.
macOS .app bundles are kind of like container images.
You can sign them to ensure they are not modified, and put them into the “registry” (App Store).
The Swift ABI ensures it will likely run against future macOS versions, like the Linux system APIs.
There is a sandbox system to restrict file and network access. Any started processes inherit the sandbox, like containers.
One thing missing is fine grained network rules though - I think the sandbox can just define “allow outbound/inbound”.
Obviously “.app”s are not exactly like container images, but they do cover many of the same features.
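To make the sandbox comparison concrete: a sandboxed Mac app declares its capabilities through entitlements, which is conceptually close to a container's declarative config. A minimal illustrative entitlements plist (real apps typically need more keys):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt the app into the App Sandbox -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Coarse-grained network permission: outbound connections only -->
    <key>com.apple.security.network.client</key>
    <true/>
    <!-- Read-only access to files the user explicitly opens -->
    <key>com.apple.security.files.user-selected.read-only</key>
    <true/>
</dict>
</plist>
```

Note there's no way to express "only allow connections to example.com:443" here — exactly the fine-grained network rules that containers' network policies can express and the macOS sandbox can't.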
You don't get that in macOS. It's more of a jail than a sandbox. For example, as an app you can't, as far as I know, shell out and install homebrew and then invoke homebrew and install, say, postgres, and run it, all without affecting the user's environment. I think that's what people mean when they say macOS lacks native containers.
Read more about it here - https://github.com/darwin-containers
The developer is very responsive.
One of Apple's biggest value props compared to other platforms is environment integrity. This is why their containerization / automation story is worse than e.g. Android's.
https://github.com/apple/containerization/blob/d1a8fae1aff6f...
If the sandboxing features a native containerization system relied on were also exposed via public APIs, those could also potentially be leveraged by developer tools that want to have/use better sandboxing on macOS. Docker and BuildKit have native support for Windows containers, for instance. If they could also support macOS the same way, that would be cool for facilitating isolated macOS builds without full-fat VMs. Tools like Dagger could then support more reproducible build pipelines on macOS hosts.
It could also potentially provide better experiences for tools like devcontainers on macOS as well, since sharing portions of your filesystem to a VM is usually trickier and slower than just sharing those files with a container that runs under your same kernel.
For many of these use cases, Nix serves very well, giving "just enough" isolation for development tasks, but not too much. (I use devenv for this at work and at home.) But Nix implementations themselves could also benefit from this! Nix internally uses a sandbox to help ensure reproducible builds, but the implementation on macOS is quirky and incomplete compared to the one on Linux. (For reasons I've since forgotten, I keep it turned off on macOS.)
One clever and cool thing Tart actually does that sort of relates to this discussion is that it uses the OCI format for distributing OS images!
(It's also worth noting that Tart is proprietary. Some users might prefer something that's either open-source, built-in, or both.)
Do you think people would be developing and/or distributing end user apps via macOS containers?
the firewall tools are too clunky (and imho unreliable).
Containerization provides APIs to:
[...]
- Create an optimized Linux kernel for fast boot times.
- Spawn lightweight virtual machines.
- Manage the runtime environment of virtual machines.
[1] https://github.com/apple/container
[2] https://github.com/apple/containerization

Is there a VM technology that can make Linux aware that it's running in a VM, and able to hand back the memory it uses to the host OS?
Or maybe could Apple patch the kernel to do exactly this?
Running Docker in a VM always has been quite painful on Mac due to the excess amount of memory it uses, and Macs not really having a lot of RAM.
Isn't this an issue of the hypervisor? The guest OS is just told it has X amount of memory available, whether that memory exists or not (hence why you can overallocate memory for VMs); whether the hypervisor allocates the entire amount or just what the guest OS is actually using should depend on the hypervisor itself.
How can the hypervisor know which memory the guest OS is actually using? It might have used some memory in the past and now no longer needs it, but from the POV of the hypervisor it might as well be used.
This is a communication problem between hypervisor and guest OS, because the hypervisor manages the physical memory but only the guest OS known how much memory should actually be used.
Apparently Docker for Mac and Windows uses these, but in practice Docker containers tend to grow quite large in terms of memory, so I'm not quite sure how well it works; it certainly overallocates compared to running Docker natively on a Linux host.
add:

  [experimental]
  autoMemoryReclaim=gradual

to your .wslconfig
See: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
I chased the package’s source and indeed it’s pointing to this repo.
You can install and use it now on the latest macOS (not 26). I just ran “container run nginx” and it worked alright it seems. Haven’t looked deeper yet.
That said, I'd think Apple would actually be much better positioned to try the WSL1 approach. I'd assume Apple's OS is a lot closer to Linux than Windows is.
[0] https://devblogs.microsoft.com/commandline/announcing-wsl-2/...
Maintaining a working duplicate of the kernel-userspace interface is a monumental and thankless task, and especially hard to justify when the work has already been done many times over to implement the hardware-kernel interface, and there's literally Hyper-V already built into the OS.
I think Apple’s main hesitation would be that the Linux userland is all GPL.
There’s a huge opportunity for Apple to make kernel development for xnu way better.
Tooling right now is a disaster — very difficult to build a kernel and test it (eg in UTM, etc.).
If they made this better and took more of an OSS openness posture like Microsoft, a lot of incredible things could be built for macOS.
I’ll bet a lot of folks would even port massive parts of the kernel to rust for them for free.
  // 1. Create and configure a virtual machine
  hv_vm_create(HV_VM_DEFAULT);

  // 2. Allocate guest memory and map it into the guest physical
  //    address space
  void *memory = mmap(...);
  hv_vm_map(memory, guest_physical_address, size,
            HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC);

  // 3. Create virtual CPUs
  hv_vcpu_create(&vcpu, HV_VCPU_DEFAULT);

  // 4. Set registers (x86 names shown)
  hv_vcpu_write_register(vcpu, HV_X86_RIP, 0x1000); // instruction pointer
  hv_vcpu_write_register(vcpu, HV_X86_RSP, 0x8000); // stack pointer

  // 5. Run guest code until the next VM exit
  hv_vcpu_run(vcpu);

  // 6. Handle VM exits
  hv_vcpu_exit_reason_t reason;
  hv_vcpu_read_register(vcpu, HV_X86_EXIT_REASON, &reason);
Apple’s stack gives you low-level access to ARM virtualization, and from there Apple has high-level convenience frameworks on top. OrbStack implements all of the high-level code themselves.
Native Linux (and Docker) support would be something like WSL1, where Windows kernel implemented Linux syscalls.
It's possible that Apple has implemented a similar hypervisor here.
XNU is modular, with its BSD servers on top of Mach. I don’t see this as being a strong advantage of NT.
I think it is the Unix side that decided to bury its head in the sand. We got Linux. It is free (of charge and of licensing). It supported files, basic drivers, and sockets. It got commercial support for servers. It was all Silicon Valley needed for startups. Anything else is a cost. So nobody cared. Most of the open-source microkernel research slowly died after Linux. There is still some with the L4 family.
Now we are overengineering our stacks to get closer to the microkernel capabilities that Linux lacks, using containers. I don't want to say it is ripe for disruption, because it is hard and, again, nobody cares (except some network and security equipment, but that's a tiny fraction).
You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
NT brought a HAL, proper multi-user ACLs, subsystems in user mode (that alone is amazing, even though they sadly never really gained momentum), preemptive multitasking. And then there's NTFS, with journaling, alternate streams, and shadow copies, and heaps more. A lot of it was very much ahead of UNIX at the time.
> nobody else cares about core OS design anymore.
Agree with you on that one.
I meant that NT was a product that matched the state of the art OS design of its time (90s). It was the Unix world that decided to be behind in 80s forever.
NT was ahead not because it broke ground by bringing the design aspects of the 2020s to wider audiences, but because the Unix world constantly decides to be hardcore conservative and backwards in OS design. They just accept that a PDP-11 simulator is all you need.
It is similar to how NASA got stuck with 70s/80s design of Shuttle. There was research for newer launch systems but nobody made good engineering applications of them.
9front is to Unix what NT is to VMS.
That's their phrasing, which suggests to me that it's just a virtualization system. Linux container images generally contain the kernel.
No, containers differ from VMs precisely in requiring dependency on the host kernel.
That's how Docker works on WSL2: it runs on top of a virtualized Linux kernel. WSL2 is pretty tightly integrated with Windows itself, but it's still a Linux VM. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.
Can't edit my posts on mobile, but realized that's, what's the word, not useful... But yeah, sharing the kernel between containers while otherwise isolating them allegedly allows them to have VM-esque security without the overhead of separate VMs for each image. There's a lot more to it, but you get the idea.
alias docker='container'
Should work, at least for basic and common operations.

I know the container ecosystem largely targets Linux; just curious what people's thoughts are on that.
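A quick sketch of what that buys you, assuming `container` keeps accepting Docker-style subcommands (as basic examples like `container run nginx` suggest — the flag-level compatibility is an open question):

```shell
# In an interactive shell, the alias is enough:
alias docker='container'

# Aliases don't expand in scripts by default, so scripts
# would need a function (or a symlink on PATH) instead:
docker() { container "$@"; }

# Either way, `docker run nginx` is then forwarded
# verbatim as `container run nginx`.
```

The function form also survives being exported to subshells with `export -f` in bash, which the alias does not.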
Good read from the horse's mouth:
https://developer.apple.com/library/archive/documentation/Da...
https://en.m.wikipedia.org/wiki/HP-UX
What you searched for is an evolution of it.
I like to read bibliographies for that reason—to read books that inspired the author I’m reading at the time. Same goes for code and research papers!
Jails are first-class citizens that are baked deep into the system.
A tool like Docker relies on multiple Linux features/tools to assemble/create isolation.
Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
Someone correct me please.
Both very true statements and worth remembering when considering:
> Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
You are quite correct, as Darwin is based on XNU[0], which itself has roots in the Mach[1] microkernel. Since XNU[0] is an entirely different OS architecture from that of FreeBSD[3], jails[4] do not exist within it.
The XNU source can be found here[2].
0 - https://en.wikipedia.org/wiki/XNU
1 - https://en.wikipedia.org/wiki/Mach_(kernel)
2 - https://github.com/apple-oss-distributions/xnu
3 - https://cgit.freebsd.org/src/
4 - https://man.freebsd.org/cgi/man.cgi?query=jail&apropos=0&sek...
Another great resource regarding XNU and OS-X (although a bit dated now) is the book:
Mac OS X Internals
A Systems Approach[0]
0 - https://openlibrary.org/books/OL27440934M/Mac_OS_X_Internals

Docker isn't providing any of the underlying functionality. BSD jails and Linux cgroups etc. aren't fundamentally different things.
> Jails create a safe environment independent from the rest of the system. Processes created in this environment cannot access files or resources outside of it.[1]
While you can accomplish similar tasks, they are not equivalent.
Assume Linux containers are jails and you will have security problems. And on the flip side, k8s pods share UTS, IPC, and network namespaces, yet have independent PID and FS namespaces.
Depending on your use case they may be roughly equivalent, but they are fundamentally different approaches.
[1] https://freebsdfoundation.org/freebsd-project/resources/intr...
With WSL2 you get the best of both worlds. A system with perfect driver and application support and a Linux-native environment. Hybrid GPUs, webcams, lap sensors etc. all work without any configuration effort. You get good battery life. You can run Autodesk or Photoshop but at the same time you can run Linux apps with almost no performance loss.
Microsoft frequently tweaks syscall numbers, and they make it clear that developers must access functions through e.g. NTDLL. Mac OS at least has public source files used to generate syscall.h, but they do break things, and there was a recent incident where Go programs all broke after a major OS update. Now Go uses libSystem (and dynamic linking)[2].
On the Windows side, the syscall ABI has been stable since Server 2022, allowing mismatched container releases to run.
Apple looks like it's skipped the failed WSL1 and gone straight for the more successful WSL2 approach.
That’s an interesting difference from other Mac container systems. Also (more obvious) use Rosetta 2.
What seems to be different here, is that a VM per each container is the default, if not only, configuration. And that instead of mapping ports to containers (which was always a mistake in my opinion), it creates an externally routed interface per machine, similar to how it would work if you'd use macvlan as your network driver in Docker.
Both of those defaults should remove some sharp edges from the current Linux-containers on macOS workflows.
They sold Docker Desktop for Mac, but that might start being less relevant and licenses start to drop.
On Linux there’s just the cli, which they can’t afford to close since people will just move away.
Docker Hub likely can’t compete with the registries built into every other cloud provider.
The equivalent of Electron for containers :)
Once you have an engine podman might be the best choice to manage containers, or docker.
I'm the primary author of an amalgamation of GitHub's scripts-to-rule-them-all with docker compose, so my colleagues can just type `script/setup` and `script/server` (and more!) and the underlying scripts handle the rest.
Apple including this natively is nice, but I won't be able to use it because my scripts have to work on Linux and probably WSL.
Oh, wait.
That is what I have been using since 2010, until WSL came to be, it has been ages since I ever dual booted.
Orbstack owners are going to be fuming at this news!
https://learn.microsoft.com/en-us/windows/wsl/compare-versio...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
It seems like a big step in the right direction to me. It's hard to tell if it's 100% compatible with Docker or not, but the commands shown are identical (other than swapping docker for container).
Even if it's not 100% compatible, this is huge news.
This sounds like Apple announced two things, AI models and container-related stuff. I'd change it to something like:
> Apple Announces Foundation Models, Containerization frameworks, more tools
Secure Boot on other platforms is all-or-nothing, but Apple recognizes that Mac users should have the freedom to choose exactly how much to peel back the security, and should never be forced to give up more than they need to. So for that reason, it's possible to have a trusted macOS installation next to a less-trusted installation of something else, such as Asahi Linux.
Contrast this with others like Microsoft who believe all platforms should be either fully trusted or fully unsupported. Google takes this approach with Android as well. You're either fully locked in, or fully on your own.
I'm not sure what you mean by that. You can trivially root a Pixel factory image. And if you're talking about how they will punish you for that by removing certain features: Apple does that too (but to a lesser extent).
https://github.com/cormiertyshawn895/RecordingIndicatorUtili...
On many Android devices, unlocking the boot loader at any point will also permanently erase the DRM keys, so you will never again be able to watch high resolution Netflix (or any other app that uses Widevine), even if you relocked the bootloader and your OS passed verified boot checks.
On a Mac, you don't need to "unlock the bootloader" to do anything. Trust is managed per operating system. As long as you initially can properly authenticate through physical presence, you totally can install additional operating systems with lower levels of trust and their existence won't prevent you from booting back into the trusted install and using protected experiences such as Apple Pay. Sure, if you want to modify that trusted install, and you downgrade its security level to implement this, then those trusted experiences will stop working (such as Apple Pay, iPhone Mirroring, and 4K Netflix in Safari, for instance), but you won't be rejected by entire swathes of the third-party app ecosystem and you also won't lose the ability to install a huge fraction of Mac apps (although iOS and iPadOS apps will stop working). You also won't necessarily be prevented from turning the security back up once you're done messing around, and gaining every one of those experiences back.
So sure, you can totally boil it down to "Apple still punishes you, only a bit less", but not only do they not even punish your entire machine the way Microsoft and Google do, but they even only punish the individual operating system that has the reduced security, don't punish it as much as Microsoft and Google do, and don't permanently lock things out just because the security has ever been reduced in the past.
Do keep in mind though, the comparison to Android is a bit unfair anyway because Apple's equivalent to the Android ecosystem is (roughly; excluding TV and whatever for brevity) iPhone and iPad, and those devices have never and almost certainly will never offer anything close to a bootloader unlock. I just had used it as an example of the all or nothing approach. Obviously Apple's iDevice ecosystem doesn't allow user tampering at all, not even with trusted experiences excluded.
Fun fact though: The Password category in System Settings will disappear over iPhone Mirroring to prevent the password from being changed remotely. Pretty cool.
Its reasonable to install a different OS on Android, even if some features don't work. I've done this, my friends and family have done this, I've seen it IRL.
I've never seen anyone do this on iPhone in my entire life.
But I flipped and I'm a Google hater. Expensive phones and no aux port. At least I can get cheap androids still.
My comment's about macOS. Even though it's a completely different market segment than Android, I'm only using Android as an example.
I used to tweak/mod Android and most recently preferred customizing the OEM install over forks. I stopped doing that when TWRP ran something as OpenRecoveryScript and immediately wiped the phone without giving me any opportunity to cancel. My most recent Android phone I never bothered to root. I may never mod Android again.
Alternatively, read about iBoot. Haha, just kidding! There is no documentation for iBoot, unlike U-Boot, Clover, OpenCore, SimpleBoot, Freeloader, and systemd-boot. You're just expected to... know. Yunno?
For power management, you can however give some credit to ACPI, which is not directly related to UEFI (it predates it), but is likewise an open standard, and is generally found on the same devices as UEFI (i.e. PCs and ARM servers). ACPI also provides the initial gateway to PCIe, another open standard; so if you have a discrete video card then you can theoretically access it without chipset-specific drivers (but of course you still need a driver for the card itself).
But for onboard video, and I believe a good chunk of power management as well, the credit goes to drivers written for Linux by the hardware vendors.
I wouldn't want a numpad. A track point would be ape.
I struggle with keyboard recommendations b/c I'm not fully satisfied lol.
Several small things combined make it really different from the experience I have with a desktop OS. But it is nice as a side device.
It's irritatingly bad at consuming media and browsing the web. No ad blocking, so every webpage is an ad-infested wasteland. There are so many ads in YouTube and streaming music. I had no idea.
It's also kind of a pain to connect to my media library. Need to figure out a better solution for that.
So, as a relatively new iPad user it's pleasantly useful for select work tasks. Not so great at doomscrolling or streaming media. Who knew?
I just got a MacBook and haven't touched my iPad Pro since. I would think I could make a change faster on a MacBook than an iPad if they were both in my bag. Although I do miss the cellular data that the iPad has.
The majority of the world are using their phones as a computing device.
And as someone with a MacBook and an iPad, the latter is significantly more ergonomic.
Every single touch screen laptop I’ve seen has huge reflection issues, practically being mirrors. My assumption is that in order for the screen to not get nasty with fingerprints in no time, touchscreen laptops need oleophobic coating, but to add that they have to use no antiglare coating.
Personally I wouldn’t touch my screen often enough to justify having to contend with glare.
No! It's not - and it's dangerous to propagate this myth. There are so many arbitrary restrictions on iPad OS that don't exist on MacOS. Massive restrictions on background apps - things like raycast (MacOS version), Text Expander, cleanshot, popclip, etc just aren't possible in iPad OS. These are tools that anyone would find useful. No root/superuser access. I still can't install whatever apps I want from whatever sources I want. Hell, you can't even write and run iPadOS apps in a code editor on the iPad itself. Apple's own editor/development tool - Xcode - only runs on MacOS.
The changes to window management are great - but iPad and iPadOS are still extremely locked down.
They could have gone the direction of just running MacOS on it, but clearly they don't want to. I have a feeling that the only reason MacOS is the way it is, is because of history. If they were building a laptop from scratch, they would want it more in their walled garden.
I'm curious to see what a "power user" desktop with windowing and files, and all that stuff that iPad is starting to get, ultimately looks like down this alternative evolutionary branch.
I think Microsoft was a little too eager to fuse their tablet and desktop interface. It has produced some interesting innovations in the process but it's been nowhere near as polished as ipadOS/macOS.
On the other hand, I have come to love having a reading/writing/sketching device that is completely separate from my work device. I can't get roped into work and emails and notifications when I just want to read in bed. My iPad Mini is a truly distraction-free device.
I also think it would be hard to have a user experience that works great both for mobile work and sitting-at-a-desk work. I returned my Microsoft Surface because of a save dialog in a sketching app. I did not want to do file management because drawing does not feel like a computing task. On the other hand, I do want to deal with files when I'm using 3 different apps to work on a website's files.
If you are a developer or a creative however, then a Mac is still very useful.
For the same price, you still get a better mac.
Auth should be Apple Business Manager; image serving should be passive directories / cloud buckets.
Haven’t tried it though, still using JamF.
In education or corporate settings, where account management is centralized, you want each person who uses an iPad to access their own files, email, etc.
Parents and spouses would appreciate if they could take the multiple user experience for tvOS and make it an option for iPadOS.
I dgaf what the UI looks like. It’s fine.
1. iPadOS has a lot of software either built for the "three share sheets to the wind" era of iPadOS, or lazily upscaled from an iPhone app, and
2. iPadOS does not allow users to tamper with the OS or third-party software, so you can't fix any of this broken mess.
Video editing and 3D would be possible on iPadOS, but for #1. Programming is genuinely impossible because of #2. All the APIs that let Swift Playgrounds do on-device development are private APIs and entitlements that third-parties are unlikely to ever get a provisioning profile for. Same for emulation and virtualization. Apple begrudgingly allows it, but we're never going to get JIT or hypervisor support[0] that would make those things not immediately chew through your battery.
[0] To be clear, M1 iPads supported hypervisor; if you were jailbroken on iPadOS 14.5 and copied some files over from macOS you could even get full-fat UTM to work. It's just a software lockout.
"Foundation Models" is an Apple product name for a framework that taps into a bunch of Apple's on-device AI models.
https://machinelearning.apple.com/research/introducing-apple...
https://machinelearning.apple.com/research/apple-intelligenc...
The architecture is such that the model can be specialized by plugging in more task-specific fine-tuning models as adapters, for instance one made for handling email tasks.
At least in this version, it looks like they have only enabled use of one fine-tuning model (content tagging)
The Foundation Models framework documentation:
https://developer.apple.com/documentation/foundationmodels/
> The Foundation Models framework provides access to
> Apple’s on-device large language model that powers
> Apple Intelligence to help you perform intelligent
> tasks specific to your use case.
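For anyone who hasn't watched the session yet, the basic shape of the API is a session object you prompt directly. A minimal sketch, based on the code shown in the WWDC sessions; the names (`SystemLanguageModel`, `LanguageModelSession`, `respond(to:)`) are from the announced beta and may shift before release:

```swift
import FoundationModels

// Sketch: ask the on-device model for text, guarding for availability.
// The model can be unavailable (unsupported device, Apple Intelligence
// disabled, or model assets not yet downloaded).
func suggestItinerary() async throws -> String {
    guard case .available = SystemLanguageModel.default.availability else {
        return "On-device model unavailable"
    }
    // Sessions carry optional system-style instructions and keep
    // multi-turn context across calls to respond(to:).
    let session = LanguageModelSession(
        instructions: "You are a concise travel assistant."
    )
    let response = try await session.respond(
        to: "Suggest a one-day itinerary for Lisbon."
    )
    return response.content
}
```

No API key, no per-token billing: the whole round trip stays on device, which is why the token-cost question mostly reduces to battery and latency.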
Sure, the models are also named the same. That's beside the point of the post you're correcting me on, which is, again, announcing the framework.
Why should I bother, then, as a third-party developer? Sure, it's nice not paying API costs for the ~25% of users on supported devices, but those models are still very small, roughly equivalent to Qwen2.5 4B, and their server models are supposedly equivalent to Llama Scout. Models in that class are already very cheap online, so why maintain a more complicated codebase? Maybe in two years, once more iOS users have replaced their phones, but I'm unlikely to use this for iOS development in the next year.
This would be more interesting if all iOS 26 devices at least had access to their server models.
For example, involving a network and data transfer will always toss in some latency that may not be acceptable or desired for a particular use case, even if you're on a desktop plugged in with an Ethernet cable.
I also think that in the long term the hardware that individuals own is far more powerful than the typical compute that can be allocated to a user who is using a free or cheap cloud service. Comparing a rented VPS to a cheap $400 server in my closet is like night and day, the VPS is just not a lot of horsepower and the typical smartphone has a whole lot of computing power to work with. In a very near future when the chips are even more AI-optimized, data centers might not be the most efficient way to go about this.
Example: imagine implementing an application like Final Cut Pro as a browser application where all the compute takes place on a remote cloud server. It’s just not plausible: too much data to handle, too much compute needed for processing, too much of a need for low-latency responsiveness of the app.
It looks like each container will run in its own VM, that will boot into a custom, lightweight init called vminitd that is written in Swift. No information on what Linux kernel they're using, or whether these VMs are going to be ARM only or also Intel, but I haven't really dug in yet [1].
Actually, they explain it in detail here: https://github.com/apple/containerization/issues/70#issuecom...
It's unclear whether this will keep being supported in macOS 28+, though: https://github.com/apple/container/issues/76, https://www.reddit.com/r/macgaming/comments/1l7maqp/comment/...
Apple Intelligence models primarily run on-device, potentially reducing app bundle sizes and the need for trivial API calls.
Apple's new containerization framework is based on virtual machines (VMs) and not a true 'native' kernel-level integration like WSL1.
Spotlight on macOS is widely perceived as slow, unreliable, and in significant need of improvement for basic search functionalities.
iPadOS and macOS are converging in terms of user experience and features (e.g., windowing), but a complete merger is unlikely due to Apple's business model, particularly App Store control and sales strategies.
The new 'Liquid Glass' UI design evokes older aesthetics like Windows Aero and earlier Aqua/skeuomorphism, indicating a shift away from flat design.
Full summary (https://extraakt.com/extraakts/apple-intelligence-macos-ui-o...)
This doesn’t sound impressive, it sounds insane.
Here is a summarization provided by Claude after I back-and-forthed it a bit:
--
Apple Developer Frameworks
This list represents the vast ecosystem of frameworks available to developers for building applications across Apple's platforms.
I. Foundational Frameworks
These provide the fundamental services and data management capabilities for all applications.
- Core Frameworks: Essential for data types, collections, and low-level services. Examples: Foundation, Core Data, Core Foundation
- Security: Manages user authentication, authorization, and cryptographic services. Examples: CryptoKit, LocalAuthentication, Security
- App Services: Supports core application functionalities and integrations. Examples: Contacts, EventKit, StoreKit, WeatherKit, ClockKit
II. User Interface & Experience
Frameworks for building the visual elements and user interactions of an application.
- UI Frameworks: The primary toolkits for constructing user interfaces. Examples: SwiftUI, UIKit (for iOS/tvOS), AppKit (for macOS)
- Services: Provides access to system-level services with a UI component. Examples: MapKit, CloudKit, Core Location, PassKit
III. Graphics & Media
For creating rich visual content, games, and handling audio/video.
- Graphics & Games: High-performance 2D and 3D graphics rendering and game development. Examples: Metal, SpriteKit, SceneKit, RealityKit
- Media: Manages the playback and processing of audio and video. Examples: AVFoundation, Core Audio, VisionKit
IV. Machine Learning
Enables the integration of intelligent features into applications.
- Core ML & Vision: The foundation for machine learning models and computer vision tasks. Examples: Core ML, Vision, Natural Language, Speech
V. Platform-Specific Frameworks
The number of available frameworks varies significantly across Apple's operating systems, reflecting the unique capabilities of each platform.
- macOS: ~250+ frameworks
- iOS/iPadOS: ~200+ frameworks
- watchOS: ~50-60 frameworks
- tvOS: ~35-40 frameworks
- visionOS: A growing set of frameworks for spatial computing.
Containerization is a Swift package for running Linux containers on macOS - https://news.ycombinator.com/item?id=44229348 - June 2025 (158 comments)
Container: Apple's Linux-Container Runtime - https://news.ycombinator.com/item?id=44229239 - June 2025 (11 comments)
See also:
- https://edu.chainguard.dev/chainguard/chainguard-images/abou...
edit: For those curious, https://youtu.be/51iONeETSng?t=3368.
- New theme inspired by Liquid Glass
- 24-bit colour
- Powerline fonts
Everything else they would rather see devs stay on their platforms, see the official tier 1 scenarios on swift.org.
Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
Is it a way to give developers a reason to limit distribution to only the official App Store, or will this be offered regardless of what store the app is downloaded from?
They've offered 25hrs/mo of Xcode Cloud build time for the last couple years.
Bad news.
I wish I thought that the Game Porting Toolkit 3 would make a difference, but I think Apple's going to have to incentivize game studios to use it. And they should; the Apple Silicon is good enough to run a lot of games.
... when are they going to have the courage to release MacOS Bakersfield? C'mon. Do it. You're gonna tell me California's all zingers? Nah. We know better.
Ultimately UI widgets are rooted in reality (switches, knobs, doohickeys) and liquid glass is Salvador-Dali-Esque.
Imagine driving a car and the gear shifter was made of liquid glass… people would hit more grannies than a self-driving Tesla.
Don’t use macOS but had just kinda assumed it would by virtue of shared unixy background with Linux
very good to see Xcode LLM improvements!
> I use VSCode Go daily + XCode Swift 6 iOS 18 daily
im confused
Their hardware across the board is fairly powerful (definitely not top-end), they have a good API stack, especially with Metal, and they have systems at all levels, including TV. If they were to just make a standard controller, or just say "the PS5 DualSense is our choice," they could carve out a nice little slice for themselves.
I'm assuming this is an updated version of those.
I am excited to see what the benchmarks look like though, once it's live.
Edit: surprised apple is dumping resources into gaming, maybe they are playing the long game here?
I finally gave up and bought a Mini6 a year or two ago, which gets.... also minimal use. And I'm sure not buying ANOTHER tablet we're not going to use.
If they were multi-user I actually think we'd both get more value out of it, and upgrade our one device more often.
I get it, but an iPad starts at $349; often available for less.
At this point, an iPad is no different than a phone—most people wouldn't share a single tablet.
Laptops and desktops that run macOS, Linux, Windows which are multiuser operating systems have largely become single-user devices.
It's less about the cost and more about having to have another stupid device to charge, update, and keep track of, when a tablet is not a device that gets used enough by any one person to be worth all that. It would be much more convenient to have a single device on a coffee or end table which all family members could use when they need to do more than you can do on a phone.
> Laptops and desktops that run macOS, Linux, Windows which are multiuser operating systems have largely become single-user devices.
Maybe. Probably 90% of work laptops are single-user, I'm sure. But for home computers, multi-user can be very useful. And it's better than ever to use laptops as dumb terminals, since all most people's stuff is in the cloud. It's not nearly as much trouble to get your secondary user account on a spare laptop in the living room to be useful as it was in the Windows XP days. Just having a browser that's signed into your stuff, plus Messages or Whatsapp, and maybe Slack/Discord/etc. is enough.
> most people wouldn't share a single tablet.
Since iPads have never supported doing so in a sane way, that unfounded assertion is just as likely due to the fact that it's a terrible experience today, since if you share one today, someone else will be accidentally marking your messages as read, you'll be polluting their browser or YouTube history, etc.
It's also the kind of dismissive claim true Apple believers tend to trot out when someone points out a shortcoming: "Nobody wants to use a touchscreen laptop!" "Nobody wants USB-C on an iPhone when Lightning is slightly smaller!" "Nobody needs an HDMI port or SD slot on a MacBook Pro!" "Nobody needs a second port on the 12-inch MacBook!" Most of the above things have come true except the touch laptop, and somehow it hasn't hurt anyone, but the "nobody wants..." crew immediately goes quiet when Apple finally [re-]embraces something.
Having profiles for the kids however would be nice though. But most apps have that built in themselves.
I find it madness that Apple doesn't have this already.
…10 Central and Mountain.
Looks like software UI design – just like fashion, film, architecture and many other fields I'm sure – has now officially entered the "nothing new under the sun" / "let's recycle ideas from xx years ago" stage.
https://en.wikipedia.org/wiki/Aqua_%28user_interface%29
To be clear, this is just an observation, not a judgment of that change or the quality of the design by itself. I was getting similar vibes from the recent announcement of design changes in Android.
This was posted in another HN thread about Liquid Glass: https://imgur.com/a/6ZTCStC . I'm sure Apple will tweak the opacity before it goes live, but this looks horribly insane to me.
But I'm not so sure if I want transparent.
I remember the catastrophe of Windows Vista, and how you needed a capable GPU to handle the glass effect. Otherwise, one of your (Maybe two) CPU cores would have to process all that overhead.
They are heading in a good direction, it just needs to be toned down. But like any new graphics technology the first year is the "WOW WE CAN DO X!!!!" then the more tame stuff comes along.
Why do you think they are headed in a good direction? There is literally nothing I like about the liquid glass effect from a usability perspective. The transparency/translucency is wholly negative in my opinion.
The best analogy to me is physical buttons in cars vs. touch screens. The "headed in a good direction" there is to actually stop putting more shit into the touchscreen and have physical buttons for anything you'd touch while the car is in motion.
Maybe this is a consequence of the Frutiger Aero trend: users miss the time when user interfaces were designed to be cool instead of merely useful.
Usability feels like it has only gone downhill since Windows 7. (On the other hand, Windows has plenty of accessibility features that help a lot in restoring usability.)
Sebastiaan de With of Halide fame did a writeup about this recently, and I think he makes some great points.
Read on and:
They are completely dynamic: inhabiting characteristics that are akin to actual materials and objects. We’ve come back, in a sense, to skeuomorphic interfaces — but this time not with a lacquer resembling a material. Instead, the interface is clear, graphic and behaves like things we know from the real world, or might exist in the world. This is what the new skeuomorphism is. It, too, is physicality.
Well worth reading for the retrospective of Apple's website taking a twenty year journey from flatland and back.
Proof of a well-designed UI is stability, not change.
Reads to me strongly of an effort to give traditional media something shiny to put above the headline and keep the marketing engine running.
Apple will spend 10x the effort to tell you why a useless feature is necessary before they look at user feedback.
https://www.yahoo.com/lifestyle/why-gen-z-infatuated-frutige...
My only guess is this style looks better while using the product but not while looking at screenshots or demos built off Illustrator or whatever they’re using.
We were in a flat era for the last several years, this kicks off the next 3D era.
It was too slow and was later optimized away to run off of pre-rendered assets with some light typical style engine procedural code.
Feels like someone just dusted off the old vision now that the compute is there.
Showing off the pulsating buttons, he said something like "we have these processors that can do billions of calculations a second, we might as well use them to make it look great".
And yet a decade later, they were undoing all of that to just be flat and boring. I'm glad they are using the now trillions of calculations a second to bring some character back into these things.
A decade later they were handling the windfall that came with smartphone ascendancy. An emergence of an entirely new design language for touch screen UI. Skeumorphism was slowing that all down.
Making it all flat meant making it consistent, which meant making it stable, which meant scalability. iOS7 made it so that even random developers' apps could play along and they needed a lot of developers playing along.
P4: Foundation models will get newbies involved, but aren't ready to displace other model providers.
P4: New containers are ergonomic when sub-second init is required, but otherwise no virtualization news.
P2: Concurrency now visible in instruments and debuggable, high-performance tracing avoid sampling errors; are we finally done with our 4+ years of black-box guesswork? (Not to mention concurrency backtracking to main-thread-by-default as a solution.)
P5: UI Look-and-feel changes across all platforms conceal the fact that there are very few new API's.
Low content overall: Scan the platforms, and you see only L&F, app intents, widgets. Is that really all? (thus far?) - It's quite concerning.
Also low quality: online links point nowhere, half-baked technologies fill presentation slots: Swift+Java interop is nowhere near usable, other topics just point to API documentation, and "code-along" sessions restate other sessions.
Beware the new upgrade forcing function: adding to the memory requirements of AI, the new concurrency tracing seems to require M4+ level device support.
How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like installed apps when searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name.
Spotlight is unbelievably bad and has been unbelievably bad for quite a few years. It seems to return things slowly, in erratic order (the same search does not consistently give the same results) and unreliably (items that are definitely there regularly fail to appear in search results).
[0]: https://www.apple.com/newsroom/2025/06/macos-tahoe-26-makes-...
Even I can build, and have built, search functionality like this. Deterministically. No LLMs or "AI" needed. In fact, for satisfying the above criteria, this kind of implementation is still far more reliable.
AI makes it strictly worse. I do not want intelligence. I want to type, for example, "saf" and have Safari appear immediately, in the same place, every time, without popping into a different place as I'm trying to click it because a slower search process decided to displace the result. No "temperature", no randomness, no fancy crap.
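The deterministic behavior you're describing really is a few dozen lines. A minimal sketch (my own illustration, not how Spotlight actually works): exact name-prefix matches rank first, then word-prefix matches, each group alphabetical, so "saf" puts Safari in the same slot every single time:

```swift
import Foundation

// Deterministic launcher-style search: prefix matches on the whole
// name come first, then matches on a prefix of any word in the name.
// Each group is sorted alphabetically, so identical queries always
// produce identical orderings. No scoring model, no randomness.
func rankApps(_ query: String, in names: [String]) -> [String] {
    let q = query.lowercased()
    var namePrefix: [String] = []
    var wordPrefix: [String] = []
    for name in names.sorted() {
        let lower = name.lowercased()
        if lower.hasPrefix(q) {
            namePrefix.append(name)
        } else if lower.split(separator: " ").contains(where: { $0.hasPrefix(q) }) {
            wordPrefix.append(name)
        }
    }
    return namePrefix + wordPrefix
}

print(rankApps("saf", in: ["Safari", "System Settings", "Numbers",
                           "Safari Technology Preview"]))
// ["Safari", "Safari Technology Preview"], in that order, every run.
```

Substring matching on file names is the same idea with `contains` instead of `hasPrefix`. The point isn't that this is sophisticated; it's that stable ordering is a property you get for free when you don't inject a probabilistic ranker between the keystroke and the result.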
Settings → Apple Intelligence and Siri → toggle Apple Intelligence off.
It's not enabled by default. But in case you accidentally turned it on, turning it off gets you a bunch of disk space back as the AI stuff is removed from the OS.
Some people are just looking for a reason to be offended.
Every year, macOS and iPadOS look superficially more and more similar, but they remain distinct in their interfaces, features, etc. But the past 15 years have been "we'll be *forced* to only use Apple-vetted software, just like the App Store!"
And yeah, the Gatekeeper mechanism got less straight-forward to get around in macOS 15, but … I don't know, someone will shoot me down for this, but it's been a long 15 years to be an Apple user with all that noise going on around you from people who really don't have the first clue what they're talking about — and on HN, no less.
They can come back to me when what they say actually happens. Until then, fifteen dang years.
It's not forced. It's completely optional. It has to be downloaded.
And if you activate it, then change your mind, you get the disk space back when you turn it off.
Just don't push the Yes button when it offers.
See (System) Settings
I’ve read stories about how people were amazed at calling each other and would get together or meet at the local home with a phone installed, a gathering spot, make an event about it. Now it’s boring background tech.
We kind of went through a phase of this with the introduction of webcams. Omegle, Chatroulette: it was a wild Wild West. Now it's normalized, standard for work with the likes of Zoom, with FaceTiming just being normal.
Now the Cyberpunk pen and paper RPG seems prophetic if turn your head sideways a bit https://chatgpt.com/share/684762cc-9024-800e-9460-d5da3236cd...
AI maximalists are like those 100 years ago that put radium everywhere, even in toothpaste, because new things are cool and we’re so smart you need to trust us they won’t cause any harm.
I’ll keep brushing my teeth with baking soda, thank you very much.
There are lots of folks like this, and it's getting exhausting that they make being anti-AI their sole defining character trait: https://www.reddit.com/r/ArtistHate
https://www.youtube.com/watch?v=sV7C6Ezl35A
The ML hype-cycle has happened before... but this time everyone is adding more complexity to obfuscate the BS. There is also a funny callback to YC in the Lisp story, and why your karma still gets incinerated if one points out its obvious limitations in a thread.
Have a wonderful day, =3