I'm increasingly coming to the view that there is a big split among "software developers" and AI is exacerbating it. There's an (increasingly small) group of software developers who don't like "magic" and want to understand where their code is running and what it's doing. These developers gravitate toward open source solutions like Kubernetes, and often just want to rent a VPS or at most a managed K8s solution. The other group (increasingly large) just wants to `git push` and be done with it, and they're willing to spend a lot of (usually their employer's) money to have that experience. They don't want to have to understand DNS, linux, or anything else beyond whatever framework they are using.
A company like fly.io absolutely appeals to the latter. GPU instances at this point are very much appealing to the former. I think you have to treat these two markets very differently from a marketing and product perspective. Even though they both write code, they are otherwise radically different. You can sell the latter group a lot of abstractions and automations without them needing to know any details, but the former group will care very much about the details.
Kubernetes is not the first thing that comes to mind when I think of "understanding where their code is running and what it's doing"...
Just an “idle” Kubernetes system is a behemoth to comprehend…
Kubernetes is etcd, apiserver, and controllers. That's exactly as many components as your average MVC app. The control-loop thing is interesting, and there are a few "kinds" of resources to get used to, but why is it always presented as this insurmountable complexity?
I ran into a VXLAN checksum offload kernel bug once, but otherwise this thing is just solid. Sure it's a lot of YAML but I don't understand the rep.
…and containerd and csi plugins and kubelet and cni plugins and kubectl and kube-proxy and ingresses and load balancers…
Sure at some point there are too many layers to count but I wouldn't say any of this is "Kubernetes". What people tend to be hung up on is the difficulty of Kubernetes compared to `docker run` or `docker compose up`. That is what I am surprised about.
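For what it's worth, the gap from `docker run` to a bare Pod isn't huge. A rough, purely illustrative sketch (image and names are made up, and a bare Pod obviously skips everything you'd add for real deployments):

```yaml
# Roughly the Kubernetes counterpart of `docker run --name web -p 80 nginx`.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.27
      ports:
        - containerPort: 80
```

`kubectl apply -f` that file and you're more or less where `docker run` would have left you.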
I never had any issue with kubelet, or kube-proxy, or CSI plugins, or CNI plugins. That is after years of running a multi-tenant cluster in a research institution. I think about those about as much as I think about ext4, runc, or GRUB.
And CNI problems are extremely normal. Pretty much anyone who didn't just use weavenet and call it a day has had to spend quite a bit of time figuring it out. If you already know networking by heart it's obviously going to be easier, but few devs do.
You definitely can run Kubernetes without running Ceph or any storage system, and you already rely on a distributed storage system if you use the cloud whether you use Kubernetes or not. So I wouldn't count this as added complexity from Kubernetes.
If you discount issues like that, you can safely say that it's impossible to have any issues with CSI, because it's always going to be with one of its implementations.
That feels a little disingenuous, but maybe that's just me.
For example, would you say AWS EBS is part of Kubernetes?
You're ultimately going to have to use storage of some form unless you're just running a stateless service or keep the services with state out of k8s. That's why I'd include it, and the fact that you can use multiple storage backends, each with their own challenges and pitfalls, makes k8s indeed quite complex.
You could argue that multinode PaaS is always going to be complex, and frankly- I'd agree with that. But that was kinda the original point. At least as far as I interpreted it: k8s is not simple and you most likely didn't need it either. But if you do need a distributed PaaS, then it's probably a good idea to use it. Doesn't change the fact that it's a complex system.
But would I say that your entire Linux installation and the cloud it runs on is part of Kubernetes? No.
Surprisingly there were hosted services on the internet prior to kubernetes existing. Hell, I even have reason to believe that the internet may possibly predate Docker
Let's be clear on what we're comparing or we can't argue at all. Kubernetes is hard if you have never seen a computer before, I will happily concede that.
I see how you were asking the GP that question now
Maybe with fail over for high availability.
Even that's fine for most deployments that aren't social media sites, aren't developed by multiple teams of devs and don't have any operations people on payroll.
Ceph is its own cluster of kettles filled with fishes
> Kubernetes is not the first thing that comes to mind when I think of "understanding where their code is running and what it's doing"...
People act like their web framework and SQL connection pooler and stuff are so simple, while Kubernetes is complex and totally inscrutable for mortals, and I don't get it. It has a couple of moving parts, but it is probably simpler overall than SystemD.
That being said, what people tend to build on top of that foundation is a somewhat different story.
Unfortunately people (cough managers) think k8s is some magic that makes distributed systems problems go away, and automagically enables unlimited scalability
In reality it just makes the mechanics a little easier and centralized
Getting distributed systems right is usually difficult
"The larger system" is more controllers in charge of other object types, doing the same kind of work for its object types
There is an API implemented for CRUD-ing each object type. The API specification (model) represents something important to developers, like a group of containers (Pod), a load balancer with VIP (Service), a network volume (PersistentVolume), and so on.
Hand wave hand wave, Lego-style infrastructure.
None of the above is exactly correct (e.g. the DB is actually a k/v store), but it should be conceptually correct.
If multiple controllers, how do they coordinate ?
No, there are many controllers. Each is in charge of the object types it is in charge of.
>What happens if [it] goes down?
CRUD operations on the object types it manages have no effect until the controller returns to service.
>If multiple controllers, how do they coordinate ?
The database is the source of truth. If one controller needs to "coordinate" with another, it will CRUD entries of the object types those other controllers are responsible for. e.g. Deployments beget ReplicaSets beget Pods.
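A minimal sketch of that cascade (names and image are purely illustrative): apply a Deployment like the one below, and the deployment controller writes a matching ReplicaSet, whose controller in turn writes the Pods. No controller talks to another directly; they only read and write objects through the API.

```yaml
# Illustrative: the deployment controller reconciles this into a ReplicaSet,
# and the replicaset controller reconciles that into 3 Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.27
```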
So, take an app like WordPress that you want to make “highly available.” Let’s imagine it’s a very popular blog or a newspaper website that needs to serve millions of pages a day. What would you do without Kubernetes?
Without Kubernetes, you would get yourself a cluster of, let’s say, four servers—one database server, two worker servers running PHP and Apache to handle the WordPress code, and finally, a front-end load balancer/static content host running Nginx (or similar) to take incoming traffic and route it to one of the two worker PHP servers. You would set up all of your servers, network them, install all dependencies, load your database with data, and you’d be ready to rock.
If all of a sudden an article goes viral and you get 10x your usual traffic, you may need to quickly bring online a few more worker PHP nodes. If this happens regularly, you might keep two extra nodes in reserve and spin them up when traffic hits certain limits or your worker nodes’ load exceeds a given threshold. You may even write some custom code to do that automatically. I’ve done all that in the pre-Kubernetes days. It’s not bad, honestly, but Kubernetes just solves a lot of these problems for you in an automated way. Think of it as a framework for your hosting infrastructure.
On Kubernetes, you would take the same WordPress app and split it into the same four functional blocks. Each would become a container. It can be a Docker container or a Containerd container—as long as it’s compatible with the Open Container Initiative, it doesn’t really matter. A container is just a set of files defining a lightweight Linux virtual machine. It’s lightweight because it shares its kernel with the underlying host it eventually runs on, so only the code you are actually running really loads into memory on the host server.
You don’t really care about the kernel your PHP runs on, do you? That’s the idea behind containers—each process runs in its own Linux virtual machine, but it’s relatively efficient because only the code you are actually running is loaded, while the rest is shared with the host. I called these things virtual machines, but in practice they are just jailed and isolated processes running on the host kernel. No actual hardware emulation takes place, which makes it very light on resources.
Just like you don’t care about the kernel your PHP runs on, you don’t really care about much else related to the Linux installation that surrounds your PHP interpreter and your code, as long as it’s secure and it works. To that end, the developer community has created a large set of container templates or images that you can use. For instance, there is a container specifically for running Apache and PHP—it only has those two things loaded and nothing else. So all you have to do is grab that container template, add your code and a few setting changes if needed, and you’re off to the races.
You can make those config changes and tell Kubernetes where to copy and place your code files using YAML files. And that’s really it. If you read the YAML files carefully, line by line, you’ll realize that they are nothing more than a highly specialized way of communicating the same type of instructions you would write to a deployment engineer in an email when telling them how to deploy your code.
It’s basically a set of instructions to take a specific container image, load code into it, apply given settings, spool it up, monitor the load on the cluster, and if the load is too high, add more nodes to the cluster using the same steps. If the load is too low, spool down some nodes to save money.
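As a rough taste of what those YAML instructions look like for the WordPress example (purely illustrative: image tag, names, and thresholds are made up, and the database, persistent storage, and ingress would be separate objects):

```yaml
# Illustrative only: the PHP/Apache "worker" tier of the WordPress example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 2                          # the two "worker" servers
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest      # community image with Apache + PHP already set up
          ports:
            - containerPort: 80
          env:
            - name: WORDPRESS_DB_HOST
              value: mysql             # assumes a separate MySQL Service named "mysql"
          resources:
            requests:
              cpu: 250m                # needed for CPU-based autoscaling below
---
# Stable virtual IP in front of the workers (the "route traffic to one of them" part).
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 80
---
# Scale the worker tier with load instead of hand-rolled scripts.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```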
So, in theory, Kubernetes was supposed to replace an expensive deployment engineer. In practice, it simply shifted the work to an expensive Kubernetes engineer instead. The benefit is automation and the ability to leverage community-standard Linux templates that are (supposedly) secure from the start. The downside is that you are now running several layers of abstraction—all because Unix/Linux in the past had a very unhealthy disdain for statically linked code. Kubernetes is the price we pay for those bad decisions of the 1980s. But isn’t that just how the world works in general? We’re all suffering the consequences of the utter tragedy of the 1980s—but that’s a story for another day.
I'm just sitting here wondering why we need 100 billion transistors to move a piece of tape left and right ;)
But then there's always a lot of complexity and abstraction. Certainly, most software people don't need to know everything about what a CPU is doing at the lowest levels.
I mean, in my homelab I do have Kubernetes and no LB in front, but it's a homelab for fun and learn K8s internals. But in a professional environment...
step one: draw a circle
step two: import the rest of the owl
Go back to good ol' corosync/pacemaker clusters with XML and custom scripts to migrate IPs and set up firewall rules (and if you have someone writing them for you, why don't you have people managing your k8s clusters?).
Or buy something from a cloud provider that "just works" and eventually go down in flames with their Indian call centers doing their best but with limited access to engineering to understand why service X is misbehaving for you and trashing your customer's data. It's trade-offs all the way.
Do you understand you're referring to optional components and add-ons?
> and kubectl
You mean the command line interface that you optionally use if you choose to do so?
> and kube-proxy and ingresses and load balancers…
Do you understand you're referring to whole classes of applications you run on top of Kubernetes?
I get it that you're trying to make a mountain out of a mole hill. Just understand that you can't argue that something is complex by giving as your best examples a bunch of things that aren't really tied to it.
It's like trying to claim Windows is hard, and then your best example is showing a screenshot of AutoCAD.
CSI is optional, you can just not use persistent storage (use the S3 API or whatever) or declare persistentvolumes that are bound to a single or group of machines (shared NFS mount or whatever).
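For example, a statically provisioned NFS-backed volume needs no CSI driver at all; a rough sketch (server, path, and sizes are made up):

```yaml
# Illustrative: a statically provisioned NFS volume, claimed by a workload.
# No CSI driver involved; the kubelet just mounts the NFS export.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-data
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.0.2.10      # hypothetical NFS server
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  storageClassName: ""      # disable dynamic provisioning, bind to the PV above
  volumeName: shared-data
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
```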
I don't know how GP thinks you could run without the other bits though. You do need kubelet and a container runtime.
For some applications these people are absolutely right, but they've persuaded themselves that that means it's the best way to handle all use cases, which makes them see Kubernetes as way more complex than is necessary, rather than as a roll-your-own ECS for those who would otherwise truly need a cloud provider.
I assume everyone wants to be in control of their environment. But with so many ways to compose your infra that means a lot of different things for different people.
K8s is meant to be operated by some class of engineers, and used by another. Just like you have DBAs, sysadmins, etc, maybe your devops should have more system experience besides terraform.
Sir, I upvoted you for your wonderful sense of humour.
Some bash and Ansible and EC2? That is usually what Kubernetes haters suggest one does to simplify.
The main pain point I personally see is that everyone goes 'just use Kubernetes' and this is an answer, however it is not the answer. It steamrolling all conversations leads to a lot of the frustration around it in my view.
I love that the Kubernetes lovers tend to forget that Kubernetes is just one tool, and they believe that the only possible alternative to this coolness is sweaty sysadmins writing bash scripts in a dark room.
I thought Mesos was kinda dead nowadays, good to hear it's still kicking. Last time I used it, the networking was a bit annoying: it couldn't provide virtual network interfaces, only ports.
It seems like if you are going to operate these things, picking a solution with a huge community and in active development feels like the smart thing to do.
Nomad is very nice to use from a developer perspective, and it's nice to hear infrastructure people preferring it. From the outside, the reason people pick Kubernetes seems to be the level of control infra and security teams want over things like networking and disk.
I would argue against Kubernetes in particular situations, and even recommend Ansible in some cases, where it is a better fit in the given circumstances. Do you consider me as a Kubernetes hater?
Point is, Kubernetes is a great tool. In particular situations. Ansible is a great tool. In particular situations. Even bash is a great tool. In particular situations. But Kubernetes even could be the worst tool if you choose unwisely. And Kubernetes is not the ultimate infrastructure tool. There are alternatives, and there will be new ones.
Etcd is truly a horrible data store, even the creator thinks so.
For anyone unfamiliar with this the "official limits" are here, and as of 1.32 it's 5000 nodes, max 300k containers, etc.
https://kubernetes.io/docs/setup/best-practices/cluster-larg...
Maintaining a lot of clusters is super different than maintaining one cluster.
Also please don't actually try to get near those limits, your etcd cluster will be very sad unless you're _very_ careful (think few deployments, few services, few namespaces, no using etcd events, etc).
The department saw more need for storage than Kubernetes compute so that's what we're growing. Nowadays you can get storage machines with 1 PB in them.
The larger Supermicro or Quanta storage servers can easily handle 36 HDDs each, or even more.
So with just 16 of those with 36x24TB disks, that meets the ~14PB capacity mark, leaving 44 remaining nodes for other compute tasks, load balancing, NVMe clusters, etc.
Cluster networking can sometimes get pretty mind-bending, but honestly that's true of just containers on their own.
I think just that ability to schedule pods on its own requires about that level of complexity; you're not going to get a much simpler system if you try to implement things yourself. Most of the complexity in k8s comes from components layered on top of that core, but then again, once you start adding features, any custom solution will also grow more complex.
If there's one legitimate complaint when it comes to k8s complexity, it's the ad-hoc way annotations get used to control behaviour in a way that isn't discoverable or type-checked like API objects are, and you just have to be aware that they could exist and affect how things behave. A huge benefit of k8s for me is its built-in discoverability, and annotations hurt that quite a bit.
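A hedged example of what I mean, assuming the ingress-nginx controller and cert-manager happen to be installed (the annotation keys below belong to those projects, not to the Ingress API itself, and nothing validates them):

```yaml
# Illustrative: behaviour controlled by free-form annotations rather than
# typed, discoverable API fields. A typo here fails silently.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"   # ingress-nginx specific
    cert-manager.io/cluster-issuer: letsencrypt-prod     # cert-manager specific
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```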
I would ask a different question. How many people actually need to understand implementation details of Kubernetes?
Look at any company. They pay engineers to maintain a web app/backend/mobile app. They want features to be rolled out, and they want their services to be up. At which point does anyone say "we need an expert who actually understands Kubernetes"?
I have to wonder how many people actually understand when to use K8s or docker. Docker is not a magic bullet, and can actually be a foot gun when it's not the right solution.
In the end it's a scheduler for Docker containers on a bunch of virtual or bare metal machines. Once you get that into your head, life becomes much easier.
The only thing I'd really love to see from an ops perspective is a way to force-revive crashed containers for debugging. Yes, one shouldn't have to debug cattle, just haul the carcass off and get a new one... but I still prefer to know why the cattle died.
* Hosts hundreds or thousands of interacting containers across multiple teams in a sane manner
* Lets you manage and understand how it is done, to the full extent
Of course there are tons of organizations that can (and should) easily resign from one of these, but if you need both, there isn't a better choice right now.
What looks like absurd scale to one team is a regular Tuesday for another, because "scale" is completely meaningless without context. We don't balk at a single machine running dozens of processes for a single web browser, we shouldn't balk at something running dozens of containers to do something that creates value somehow. And scale that up by number of devs/customers and you can see how thousands/hundreds of thousands can happen easily.
Also the cloud vendors make it easy to have these problems because it's super profitable.
* H: "kubernetes [at planetary scale] is too complex"
* A: "you can run it on a toaster and it's simpler to reason about than systemd + pile of bash scripts"
* H: "what's the point of single node kubernetes? I'll just SSH in and paste my bash script and call it a day"
* A: "but how do you scale/maintain that?"
* H: "who needs that scale?"
If they understood their system, odds are they’d realize that horizontal scaling with few, larger services is plenty scalable.
At those large orgs, the individual developer doesn’t matter at all and the EMs will opt for faster release cycles and rely on internal platform teams to manage k8s and things like it.
Of course there are simpler container runtimes, but they have issues with scale, cost, features or transparency of operation. Of course they can be good fit if you're willing to give up one or more of these.
Yes, complex tools tend to be powerful.
But when I say “devs who care about knowing how their code works” I’m also referring to their tools.
K8s isn’t incomprehensible, but it is very complex, especially if you haven’t worked in devops before.
“Devs who care…”, I would assume, would opt for simpler tools.
I know I would.
What's a bit different is we're creating our own products, not renting people out to others, so having a uniform hosting platform is an actual benefit.
I mean, if that's your starting point, then complexity is absolutely a given. When folks complain about the complexity of Kubernetes, they are usually complaining about the complexity relative to a project that runs a frontend, a backend, and a postgres instance...
We did not have a cluster just for a single application (with some exceptions because those applications were incredibly massive in pod numbers) and/or had patterns that required custom handling and pre-emptive autoscaling (which we wrote code for!).
Why are so many companies running a cluster for each application? That's madness.
I migrated one such firm off Kubernetes last year, because for their use case it just wasn't worth it - keeping the cluster upgraded and patched, and their CI/CD pipelines working was taking as much IT effort as the rest of their development process
People started using K8s for training, where you already had a network isolated cluster. Extending the K8s+container pattern to multi-tenant environments is scary at best.
I didn't understand the following part though.
> Instead, we burned months trying (and ultimately failing) to get Nvidia’s host drivers working to map virtualized GPUs into Intel Cloud Hypervisor.
Why was this part so hard? Doing PCI passthrough with the Cloud Hypervisor (CH) is relatively common. Was it the transition from Firecracker to CH that was tricky?
I'm not even a good developer. But I know enough to chime in on calls and provide useful and generally 'Wizarding' knowledge. Like a detective with a good hunch.
But yeah just autocomplete everything lol
In my job I develop a React Native app. I also need to have a decent understanding of iOS and Android native code. If I run into a bug related to how iOS runs 32 bit vs 64 bit software? Not my problem, we'll open a ticket with Apple and block the ticket in our system.
Wouldn't it be annoying to be blocked on Apple rather than shipping on your schedule?
Bonus points for writing a basic implementation from first principles capturing the essence of the problem kubernetes really was meant to solve.
The 100-page Kubernetes book, Andriy Burkov style.
https://github.com/kelseyhightower/kubernetes-the-hard-way
It probably won't answer the "why" (although any LLM can answer that nowadays), but it will definitely answer the "how".
Thanks for taking the time to share the walk through.
What would be the interest of it? Think about it:
- kubernetes is an interface and not a specific implementation,
- the bulk of the industry standardized on managed services, which means you actually have no idea what are the actual internals driving your services,
- so you read up on the exact function call that handles a specific aspect of pod auto scaling. That was a nice read. How does that make you a better engineer than those who didn't?
I just want to know how you'd implement something that would load your services and dependencies from a config file, bind them altogether, distribute the load through several local VMs and make it still work if I kill the service or increase the load.
In less than 1000 lines.
Then you seem to be confused, because you're saying Kubernetes but what you're actually talking about is implementing a toy container orchestrator.
I really wonder why this opinion is so commonly accepted by everyone. I get that not everything needs most Kubernetes features, but it's useful. The Linux kernel is a dreadfully complex beast full of winding subsystems and full of screaming demons all over. eBPF, namespaces, io_uring, cgroups, SELinux, so much more, all interacting with each other in sometimes surprising ways.
I suspect there is a decent likelihood that a lot of sysadmins have a more complete understanding of what's going on in Kubernetes than in Linux.
I think there's a degree of confusion over your understanding of what Kubernetes is.
Kubernetes is a platform to run containerized applications. Originally it started as a way to simplify the work of putting together clusters of COTS hardware, but since then its popularity drove it to become the platform instead of an abstraction over other platforms.
What this means is that Kubernetes is now a standard way to deploy cloud applications, regardless of complexity or scale. Kubernetes is used to deploy apps to Raspberry Pis, one-box systems running under your desk, your own workstation, one or more VMs running on random cloud providers, and AWS. That's it.
My point is that the mere notion of "a system that's actually big or complex enough to warrant using Kubernetes" is completely absurd, and communicates a high degree of complete cluelessness over the whole topic.
Do you know what's a system big enough for Kubernetes? It's a single instance of a single container. That's it. Kubernetes is a container orchestration system. You tell it to run a container, and it runs it. That's it.
See how silly it all becomes once you realize these things?
Second of all, I don't really understand why you think I'd be blown away by the notion that you can use Kubernetes to run a single container. You can also open a can with a nuclear warhead, does not mean it makes any sense.
In production systems, Kubernetes and its ecosystem are very useful for providing the kinds of things that are table stakes, like zero-downtime deployments, metric collection and monitoring, resource provisioning, load balancing, distributed CRON, etc. which absolutely doesn't come for free either in terms of complexity or resource utilization.
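Even the "basic" zero-downtime piece turns into a handful of extra knobs you need to know about. A rough, purely illustrative sketch (image, ports, and paths are made up):

```yaml
# Illustrative: a rolling update only stays zero-downtime if the readiness
# probe actually reflects "able to serve traffic".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0    # never remove a ready pod before its replacement is ready
      maxSurge: 1
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.2.3   # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 5
```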
But if all you need to do is run one container on a Raspberry Pi and don't care about any of that stuff, then even something stripped down like k3s is simply not necessary. You can use it if you want to, but it's overkill, and you'll be spending memory and CPU cycles on shit you are basically not using. Literally anything can schedule a single pod on a single node. A systemd Podman unit will certainly work, for example, and it will involve significantly less YAML as a bonus.
I don't think the point I'm making is particularly nuanced here. It's basically YAGNI but for infrastructure.
https://www.ibm.com/docs/en/cics-ts/6.x?topic=sysplex-parall...
even if it wasn't as scalable as Kube. On the other hand, a cluster of 32 CMOS mainframes could handle any commercial computing job that people were doing in the 1990s.
That's assuming you have a solid foundation in the nuts and bolts of how computers work to begin with.
If you just jumped into software development without that background, well, you're going to end up in the latter pool of developers as described by the parent comment.
Containers are inherently difficult to sum up in a sentence. Perhaps the most reasonable comparison is to liken them to a "lightweight" VM, but the reasons people use them are so drastically different from VMs at this point. The most common use case for containers is having a decent toolchain for simple, somewhat reproducible software environments. Containers are mostly a hack to get around the mess we've made in software.
A VM, in contrast, fakes the existence of an entire computer, hardware and all. That fake hardware comes with a fake disk on which you put a new root filesystem, but it also comes with a whole lot of other virtualization. In a VM, CPU instructions (eg CPUID) can get trapped and executed by the VM to fake the existence of a different processor, and things like network drivers are completely synthetic. None of that happens with containers. A VM, in turn, needs to run its own OS to manage all this fake hardware, while a container gets to piggyback on the management functions of the host and can then include a very minimal amount of stuff in its synthetic root.
Not than I think. I'm well aware of how "tasks" work in Linux specifically, and am pretty comfortable working directly with clone.
Your explanation is great, but I intentionally went out of my way to not explain it and instead give a simple analogy. The entire point was that it's difficult to summarize.
It came from how Docker works, when you start a new container it runs a single process in the container, as defined in the Dockerfile.
It's a simplification of what containers are capable of and how they do what they do, but that simplification is how it got popular.
Super easy if we talk about Linux. It's a process tree being spawned inside its own set of kernel namespaces, security measures and a cgroup to provide isolation from the rest of the system.
Once you recursively expand all the concepts, you will have multiple dense paragraphs, which don't "summarize" anything, but instead provide full explanations.
If you're running one team with all services trusting each other, you don't have the problems solved by these things. Whenever you introduce a CNCF component outside core Kubernetes, invest time in understanding it and why it does what it does. Nothing is "deploy and forget"; it will need to be regularly checked and upgraded, and when issues come up you need some architecture-level understanding of the component to troubleshoot, because so many moving parts are there.
So if I can get away writing my own cronjob in 1000 lines rather than installing something from GitHub with a helm chart, I will go with the former option.
(Helm is crap though, but you often won't have much choice).
But setting it up is not a trivial task and often a recipe for disaster.
I've seen a fair share of startups who drank too much Kool-Aid and wanted to parrot FAANG stacks, only to discover they were burning tons of money just trying to deploy their first hello world application.
But yeah, the argument could have as well just said running code on a VPS directly, because that also gives you a good deal of control.
> The other group (increasingly large) just wants to `git push` and be done with it, and they're willing to spend a lot of (usually their employer's) money to have that experience. They don't want to have to understand DNS, linux, or anything else beyond whatever framework they are using.
I'm a "full full-stack" developer because I understand what happens when you type an address into the address bar and hit Enter - the DNS request that returns a CNAME record to object storage, how it returns an SPA, the subsequent XHR requests laden with and cookies and other goodies, the three reverse proxies they have to flow through to get to before they get to one of several containers running on a fleet of VMs, the environment variable being injected by the k8s control plane from a Secret that tells the app where the Postgres instance is, the security groups that allow tcp/5432 from the node server to that instance, et cetera ad infinitum. I'm not hooking debuggers up to V8 to examine optimizations or tweaking container runtimes but I can speak intelligently to and debug every major part of a modern web app stack because I feel strongly that it's my job to be able to do so (and because I've worked places where if I didn't develop that knowledge then nobody would have).
I can attest that this type of thinking is becoming increasingly rare as our industry continues to specialize. These considerations are now often handled by "DevOps Engineers" who crank out infra and seldom write code outside of Python and bash glue scripts (which is the antithesis to what DevOps is supposed to be, but I digress). I find this unfortunate because this results in teams throwing stuff over the wall to each other which only compounds the hand-wringing when things go wrong. Perhaps this is some weird psychopathology of mine but I sleep much better at night knowing that if I'm on the hook for something I can fix it once it's out in the wild, not just when I'm writing features and debugging it locally.
This (and a few similar upthread comments) sum the problem up really concisely and nicely: pervasive, cross-stack understanding of how things actually work and why A in layer 3 has a ripple effect on B in layer 9 has become increasingly rare, and those who do know it are the true unicorns in the modern world.
A big part of the problem is the lack of succession/continuity at the university level. I have been working closely with very bright, fresh graduates/interns (data science, AI/ML, software engineering – a wide selection of very different specialisations) over the last few years, and I have even hired a few of them for being that good.
Talking to them has given me interesting insights into what and how universities teach today. My own conclusion is that the reputable universities teach very well, but what they teach is highly compartmentalised, and typically there is little to no intersection across areas of study (unless the prospective student gets lucky and enrolls in elective studies that go across the areas of knowledge). For example, students who study game programming (yes, it is a thing) do not get taught the CPU architectures or low-level programming in assembly; they have no idea what a pointer is. Freshly graduated software engineers have no idea what a netmask is and how it helps in reading a routing table; they do not know what a route is, either.
So modern ways of teaching are one problem. The second (and I think a big one) is the problem that the computing hardware has become heavily commoditised and appliance-like, in general. Yes, there are a select few who still assemble their own racks of PC servers at home or tinker with Raspberry Pi and other trinkets, but it is no longer an en masse experience. Gone are the days when signing up with an ISP also required building your own network at home. This had an important side effect of acquiring the cross-stack knowledge, which can only be gained today by willingly taking up a dedicated uni course.
With all of that disappearing into oblivion, the worrying question that I have is: who is going to support all this «low level» stuff 20 years from now, without a clear plan for the cross-stack knowledge to be passed on from the current (and the last?) generation of unicorns?
So those who are drumming up the flexibility of k8s and the like miss one important aspect: with the lack of cross-stack knowledge succession, k8s is a risk for any mid- to large-sized organisation due to being heavily reliant on the unicorns and rockstar DevOps engineers who are few and far between. It is much easier to palm the infrastructure off to a cloud platform where supporting it will become someone else's headache whenever there is a problem. But the cloud infrastructure usually just works.
> So modern ways of teaching are one problem.
IME school is for academic discovery and learning theory. 90% of what I actually do on the job comes from self-directed learning. From what I gather this is the case for lots of other fields too. That being said I've now had multiple people tell me that they graduated with CS degrees without having to write anything except Python so now I'm starting to question what's actually being taught in modern CS curricula. How can one claim to have a B.Sc. in our field without understanding how a microprocessor works? If it's in deference to more practical coursework like software design and such then maybe it's a good thing...
And this is whom I ended up hiring – young engineers with curious minds, who are willing to self-learn and are continuously engaged in the self-learning process. I also continuously suggest interesting, prospective, and relevant new things to take a look into, and they seem to be very happy to go away, pick the subject of study apart, and, if they find it useful, incorporate it into their daily work. We have also made a deal with each other that they can ask me absolutely any question, and I will explain and/or give them further directions of where to go next. So far, such an approach has worked very well – they get to learn arcane (it is arcane today, anyway) stuff from me, they get full autonomy, they learn how to make their own informed decisions, and I get a chance to share and disseminate the vast body of knowledge I have accumulated over the years.
> How can one claim to have a B.Sc. in our field without understanding how […]
Because of how universities are run today. A modern uni is a commercial enterprise, with its own CEO, COO, C<whatever other letter>O. They rely on revenue streams (a previously unheard-of concept for a university), they rely on financial forecasts, and, most important of all, they have to turn profits. So, a modern university is basically a slot machine – outcomes to yield depend entirely on how much cash one is willing to feed it. And, because of that, there is no incentive to teach across the areas of study as it does not yield higher profits or is a net negative.
Here in Spain, at the most basic uni, you are almost able to write a Minix clone from scratch for some easy CPU (RISC-V maybe) with all the knowledge you get.
I am no Engineer (trade/voc arts, just a sysadmin) and I can write a small CHIP8 emulator at least....
Particularly at startups, it’s almost always more cost effective to hit that “scale up” button from our hosting provider than do any sort of actual system engineering.
Eventually, someone goes “hey we could save $$$$ by doing XYZ” so we send someone on a systems engineering journey for a week or two and cut our bill in half.
None of it really matters, though. We’re racing against competition and runway. A few days less runway isn’t going to break a startup. Not shipping as fast as reasonable will.
The closer your “Scale up” button is referencing actual hardware, the less of a problem it is.
Chances are high that you won't get it right from the beginning, you can create these abstractions once you really understand the problem space with real world data.
When you get to that point I have another pro tip: Don't refactor, just rewrite it and put all your learnings into the v2.
Refactoring such a codebase while keeping everything running can be a monumental effort. I found it very hard to keep people who work on such a project motivated. Analyzing the use cases, coming up with a new design incorporating your learnings and then seeing clear progress towards the goal of a cleaner codebase is much more motivating. Engineers get the chance to do something new instead of "moving code around".
I'm not saying rewrite everything. When you get to this point it usually makes sense to start thinking about these abstractions which I advised to avoid in the beginning. You can begin to separate parts of the system by responsibility and then you rewrite just one part and give it a new API which other parts of your system will consume. Usually by that time you'll also want to restructure your database.
We’re all good at different things, and it’s usually better to lean into your strengths than it is to paper over your weaknesses.
We can wish everyone were good at everything, or we can try to actually get things done.
> We can wish everyone were good at everything, or we can try to actually get things done.
False dichotomy. There's no reason we can't have both. I want to be clear: there's no perfect code or a perfect understanding or any of that. But the complaint here about not knowing /enough/ fundamentals is valid. There is some threshold which we should recognize as a minimum. The disagreement is about where this threshold is, and no one is calling for perfection. But certainly there are plenty who want the threshold to not exist, be that "AI will replace coders" or "coding bootcamps get you big tech jobs". Zero to hero in a few months is bull.
Minimum knowledge is one thing; minimum time to apply it is another.
I could go from servers sitting on the ground to racked, imaged, and ready to serve traffic in a few hours, because I've spent the time learning how to do it, and have built scripts and playbooks to do so. Even if I hadn't done the latter, many others have also done so and published them, so as long as you knew what you were looking for, you could do the same.
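To give a flavour of the playbooks I mean, here's a rough, purely illustrative sketch (the module names are real Ansible ones; the hosts, files, and service are made up):

```yaml
# Illustrative Ansible playbook: take a freshly imaged host to "serving traffic".
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Deploy site config
      ansible.builtin.copy:
        src: files/site.conf            # hypothetical config shipped with the playbook
        dest: /etc/nginx/conf.d/site.conf
      notify: Reload nginx

    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```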
There's a bunch of sayings from tradesmen that I think are relevant here, usually said by people who take pride in their work and won't do shoddy craftsmanship:
- measure twice, cut once
- there's never time to do it right, but there's always time to do it twice
- if you don't have time to do it right when will you have time to do it again?
I think the advantage these guys have is that when they do a shit job it's more noticeable, not only to the builders but to anyone else. Unfortunately we work with high abstractions, and high skill is the main reason we get big bucks. Unfortunately I think this makes it harder for managers to differentiate high quality from shit, so they'd rather get shit fast than quality a tad slower, because all they can differentiate is time. But they don't see how this is so costly, since everything has to be done at least thrice.

I'd kinda want to argue with that - it is true, but we don't live in a vacuum. Most programmers (me included, don't worry) aren't that skilled, and after work not everyone will want to study more. This is something that could be resolved by changing cultural focus, but like other things involving people, it's easier to change the system/procedures than habits.
To your point I agree. I would argue that employers should be giving time for employees to better themselves. It's the nature of any job like this where innovation takes place. It's common among engineers, physicists, chemists, biologists, lawyers, pilots, and others to have time to learn. Doctors seem to be in the same boat as us and it has obviously negative consequences. The job requires continuous learning. And you're right, that learning is work. So guess who's supposed to pay for work?
I do agree with you. I have fears, though, about how much that can be a thing in reality - because I cannot disagree that this is the right approach.
If you look around I think you'll notice it's mostly shit...
There's a flaw in markets, though, which allows shit to flourish: before purchasing, you can't tell the difference between products. So generally people make the choice based on price. Of course, you get what you pay for. And in many markets people are screaming for something different that isn't being met, but things are so entrenched that it's hard to even create that new market unless you're a huge player.
Here's a good example. Say you know your customers like fruit that is sweet. So all the farmers breed sweeter and sweeter strawberries. The customers are happy and sales go up. But at some point they don't want it any sweeter. But every farmer continues anyway and the customers have no choice but to buy too-sweet strawberries. So strawberry sales decline. The farmers, not having much signal from customers other than price and orders, what do they do? Well... they double down of course! It's what worked before.
The problem is that the people making decisions are so far removed from all this that they can't read the room. They don't know what the customer wants. Tbh, with tech, often the customer doesn't know what they want until they see it. (Which is why so much innovation comes from open source: people are fixing things to make their own lives better, and then a company goes "that's a good idea, let's scale this".)
I'm unsure what those terms mean. What are qualities that perfect code or perfect understanding would have?
Depending on your framing I may agree or disagree.
Just to lob a softball, I'm sure there are/were people that have a perfect understanding of an older CPU architecture; or an entire system architecture's worth of perfect understanding that gave us spacecraft with hardware and firmware that still works and can be updated (out of the planetary solar system?), or Linux.
These are softballs for framing because they're just what I could type off the cuff.
To answer your softball: no, I doubt there was anyone who understood everything except pretty early on. But very few people understand the whole OS, let alone do any specialized task like data analysis, HPC, programming languages, encryption, or anything else. But here's the thing: the extra knowledge never hurts. It almost always helps, but certain knowledge is more generally helpful than others. Especially if we're talking memory, but things like caching, {S,M}I{S,M}D, some bash, and some assembly go A LONG way.
But people never do. Instead they just scale up, get more funding, rinse and repeat. It isn't until the bill gets silly that anyone bothers to consider it, and they usually then discover that no one knows how to optimize things other than code (maybe – I've worked with many devs who have no idea how to profile their code, which is horrifying).
Yes because usually the other option is focus on those things you advocate for up front and then they go out of business before they get a chance to have the problems you're arguing against.
Outside of eng, nobody cares if your company has the prettiest, leanest infrastructure in the world. They care about product.
Different environments require different tradeoffs. The vast majority of startups will die before their systems engineering becomes a problem.
Unless of course you're in a leadership role, in which case it's going to be priority #1,000 in 99.9% of cases.
When I was at <FAANG> we didn’t control our infrastructure, there were teams that did it for us. Those guys knew a lot more about the internals of Linux than your average HNer. Getting access to the SSD of the host wasn’t a sys-call away, it was a ticket to an SRE and a library import. It wasn’t about limited knowledge, it was an intentional engineering tradeoff made at a multi-billion dollar infra level.
When I worked at <startup>, we spent 1hr writing 50loc and throwing it at AWS lambda just to see if it would work. No thought to long term cost or scalability, because the company might not be there tomorrow, and this is the fastest way to prototype an API in the cloud. When it works, obviously management wants you to hit the “scale” button in that moment and if it costs 50% more, well that’s probably only a few hundred dollars a month. It wasn’t about limited knowledge, but instead an intentional engineering tradeoff when you’re focused on speed and costs are small
And there is a whole bunch of companies that exist in between.
If an engineer costs $100/hour, scaling an extra $100/month (or even an extra $1k/month) is generally a no brainer. That money is almost always better served towards shipping product.
Seriously, I'm struggling to figure out how "we have servers that run containers / applications" would need to be redone just because the application changed.
I would always recommend "serverless" monolith first with the option to develop with mocks locally/offline. That's imo the best risk/effort ratio.
In my personal life, I’m curiosity-oriented, so I put my blog, side projects and mom’s chocolate shop on fully self hosted VPSs.
At my job managing a team of 25 and servicing thousands of customers for millions in revenue, I’m very results-oriented. Anyone who tries to put a single line of code outside of a managed AWS service is going to be in a lot of trouble with me. In a results-oriented environment, I’m outsourcing a lot of devops work to AWS, and choosing to pay a premium because I need to use the people I hire to work on customer problems.
Trying to conflate the two orientations with mindsets / personality / experience levels is inaccurate. It’s all about context.
Over time we will move further away. If the cost of an easily managed solution is low enough, why do the details matter?
Are we? We're constantly changing abstractions, but we don't keep adding them all that often. Operating systems and high-level programming languages emerged in the 1960s. Since then, the only fundamentally new layer of abstraction were virtual machines (JVM, browser JS, hardware virtualization, etc). There's still plenty of hardware-specific APIs, you still debug assembly when something crashes, you still optimize databases for specific storage technologies and multimedia transcoders for specific CPU architectures...
The majority of software today is written without knowing even which architecture the processor is going to be, how much of the processor we are going to have, whether anything will ever fit in memory... hell, we can write code that doesn't know not just the virtual machine it's going to run in, but even the family of virtual machine. I have written code that had no idea if it was running in a JVM, LLVM or a browser!
So when I compare my code from the 80s to what I wrote this morning, the distance from the hardware doesn't seem even remotely similar. I bet someone is writing hardware specific bits somewhere, and that maybe someone's debugging assembly might actually resemble what the hardware runs, maybe. But the vast majority of code is completely detached from anything.
Frankly though, when I bring stuff like this up, it feels more like I'm being mocked than the other way around - like we're the minority. And sadly, I'm not sure if anything can ultimately be done about it. People just don't know what they don't know. Some things you can't tell people despite trying; they just won't get it.
And it wasn't redone in assembly, it was C++ with SIMD intrinsics, which might as well just be assembly.
https://www.youtube.com/watch?v=Ge3aKEmZcqY&list=PLEMXAbCVnm...
most programmers are not able to solve a problem like that in 20 lines of assembly or whatever, and no amount of education or awareness is going to change that. acting as if they can is just going to come across as arrogant.
> Half the things they are fixing, if not more, are created by the abstractions in the first place
Unlike the above post though, in my experience, it's less often devs (at least the best ones) who want to keep moving away from the silicon, but more often management. Everywhere I have worked, management wants to avoid control over the lower-level workings of things and outsource or abstract it away. They then proceed to wonder why we struggle with the issues that we have, despite people who deal with these things trying to explain it to them. They seem to automatically assume that higher-level abstractions are inherently better, and will lead to productivity gains, simply because you don't have to deal with the underlying workings of things. But the example I gave is a reason why that isn't always necessarily the case. Fact is, sometimes problems are better and more easily solved in a lower-level abstraction.
But as I had said, in my experience, management often wants to go the opposite way and often disallows us control over these things. So, as an engineer who wants to solve problems as much as management or customers want their problems solved, what I hope to achieve by "bringing it up", in cases which seem appropriate, is a change which empowers us to actually solve such problems.
Don't get me wrong though, I'm not saying lower-level is always the way to go. It always depends on the circumstances.
Hold on there a sec: WHAT?!
Engineers tend to solve their problems differently and the circumstances for those differences are not always clear. I'm in this field because I want to learn as many different approaches as possible. Did you never experience a moment when you could share a simpler solution to a problem with someone and could observe first hand when they became one of today's lucky 10'000 [0]? That's anything but arrogant in my book.
Sadly, what I can increasingly observe is the complete opposite. Nobody wants to talk about their solutions, everyone wants to gatekeep and become indispensable, and criticism isn't seen as part of productive environments because "we just need to ship that damn feature!". Team members should be aware when decisions have been made out of laziness, in good faith, out of experience, under pressure, etc.
You might, maybe, but an increasing proportion of developers:
- Don't have access to the assembly to debug it
- Don't even know what storage tech their database is sitting on
- Don't know or even control what CPU architecture their code is running on.
My job is debugging and performance profiling other people's code, but the vast majority of that is looking at query plans. If I'm really stumped, I'll look at the C++, but I've not yet once looked at assembly for it.
the only people that say this are people who don't work on compilers. ask anyone that actually does and they'll tell you most compilers are pretty mediocre (tend to miss a lot of optimization opportunities), some compilers are horrendous, and a few are good in a small domain (matmul).
this is again just more brash confidence without experience. you're wrong. this is a post about GPUs and so i'll tell you that as a GPU compiler engineer i spend my entire day (work day) staring/thinking about asm in order to affect register pressure and ilp and load/store efficiency etc.
> rather than something that a fancy optimization of the loop
a fancy loop optimization (pipelining) can fix some problems (load/store efficiency) but create other problems (register pressure). the fundamental fact is the NFL theorem applies here fully: you cannot optimize for all programs uniformly.
While yes, I/O is often a computational bound, I'd be shy to really say that in a consumer space when we aren't installing flash buffers, performing in situ processing, or even pre-fetching. Hell, in many programs I barely even see any caching! TBH, most stuff can greatly benefit from asynchronous and/or parallel operations. Yeah, I/O is an issue, but I really would not call anything I/O bound until you've actually gotten into parallelism and optimizing code. And even not until you apply this to your I/O operations! There is just so much optimization that a compiler can never do, and so much optimization that a compiler won't do unless you're giving it tons of hints (all that "inline", "const", and stuff you see in C. Not to mention the hell that is template metaprogramming). Things you could never get out of a non-typed language like python, no matter how much of the backend is written in C.
That said, GPU programming is fucking hard. Godspeed you madman, and thank you for your service.
While modern compilers are great, you’d be surprised about the seemingly obvious optimizations compilers can’t do because of language semantics or the code transformations would be infeasible to detect.
I type versions of functions into godbolt all the time and it’s very interesting to see what code is/isn’t equivalent after O3 passes
I understand that if you write machine code and run it in your operating system, your operating system actually handles its execution (at least, I _think_ I understand that), but in what way does it have little to do with what the CPU is doing?
For instance, couldn't you still run that same code on bare metal?
Again, sorry if I'm misunderstanding something fundamental here, I'm still learning lol
Not sure virtual machines are fundamentally different. In the end, if you have 3 virtual or 3 physical machines, the most important difference is how fast you can change their configuration. They will still have all the other concepts (network, storage, etc.). The automation that comes with VMs is better than it was for physical (probably), but then automation for everything got better (not only for machines).
At my job, a decade ago our developers understood how things worked, what was running on each server, where to look if there were problems, etc. Now the developers just put magic incantations given to them by the "DevOps team" into their config files. Most of them don't understand where the code is running, or even what much of it is doing. They're unable or unwilling to investigate problems on their own, even if they were the cause of the issue. Even getting them to find the error message in the logs can be like pulling teeth. They rely on this support team to do the investigation for them, but continually swiveling back-and-forth is never going to be as efficient as when the developer could do it all themselves. Not to mention it requires maintaining said support team, all those additional salaries, etc.
(I'm part of said support team, but I really wish we didn't exist. We started to take over Ops responsibilities from a different team, but we ended up taking on Dev ones too and we never should've done that.)
This blog has a brilliant insight that I still remember more than a decade later: we live in a fantasy setting, not a sci-fi one. Our modern computers are so unfathomably complex that they are demons, ancient magic that can be tamed and barely manipulated, but not engineered. Modern computing isn't Star Trek TNG, where Captain Picard and Geordi LaForge each have every layer of their starship in their heads with full understanding, and they can manipulate each layer independently. We live in a world where the simple cell phone in our pocket contains so much complexity that it is beyond any 10 human minds combined to fully understand how the hardware, the device drivers, the OS, the app layer, and the internet all interact with each other.
Try tens of thousands of people. A mobile phone is immensely more complicated than people realize.
Thank you for writing it so eloquently. I will steal it.
There will always be work for people like us. It's not so bad. We're not totally immune to layoffs but for us they come several rounds in.
This statement encapsulates nearly everything that I think is wrong with software development today. Captured by MBA types trying to make a workforce that is as cheap and replaceable as possible. Details are simply friction in a machine that is obsessed with efficiency to the point of self-immolation. And yet that is the direction we are moving in.
Details matter, process matters, experience and veterancy matter. Now more than ever.
My comment elsewhere goes into a bit more detail, but basically silicon stopped being able to make single-threaded code faster around 2012 - we've just been getting "more parallel cores" since. And now at wafer scale we see 900,000 cores on a "chip". When 100% parallel code runs a million times faster than your competitors' - when following one software engineering path leads to code that can run 1M X - we will find ways to use that excess capacity, and the engineers who can do it get to win.
I’m not sure how LLMs face this problem.
As soon as the abstractions leak or you run into an underlying issue you suddenly need to understand everything about the underlying system or you're SOOL.
I'd rather have a simpler system where I already understand all the underlying abstractions.
The overhead of this is minimal when you keep things simple and avoid shiny things.
I think that if the development side knew a little bit of the rest of the stack they'd write better applications overall.
A fantastic talk.
I’ll use my stupid hobby home server stuff as an example. I tossed the old VMware box years ago. You know what I use now? Little HP t6x0 thin clients. They are crappy little x86 SoCs with m2 slots, up to 32GB memory and they can be purchased used for $40. They aren’t fast, but perform better than the cheaper AWS and GCP instances.
Is that a trivial use case? Absolutely. Now move from $40 to about $2000. Buy a Mac Mini. It's a powerful ARM SoC with ridiculously fast storage and performance. Probably more compute than a small/mid-size company's computer room a few years ago, and more performant than a $1M SAN a decade ago.
6G will bring 10gig cellular.
Hyperscalers datacenters are the mainframe of 2025.
When I can get the equivalent of a Mac Mini at a super cheap price point… you're going to have opportunities to attack those stratospheric cloud margins.
Just took a quick look- appears t730 is of DDR3 era and may only have a single slot.
t740 definitely has two slots https://www8.hp.com/h20195/v2/GetPDF.aspx/c06393061.pdf
You get a super capable, low power device in the price footprint of a Raspberry Pi.
Have you ever had a plumber, HVAC tech, electrician, etc. come out to your house for something, and had them explain it to you? Have you had the unfortunate experience of that happening more than once (with separate people)? If so, you should know why this matters: because if you don’t understand the fundamentals, you can’t possibly understand the entire system.
It’s the same reason why the U.S. Navy Nuclear program still teaches Electronics Technicians incredibly low-level things like bus arbitration on a 386 (before that, it was the 68000). Not because they expect most to need to use that information (though if necessary, they carry everything down to logic analyzers), but because if you don’t understand the fundamentals, you cannot understand the abstractions. Actually, the CPU is an abstraction, I misspoke: they start by learning electron flow, then moving into PN junctions, then transistors, then digital logic, and then and only then do they finally learn how all of those can be put together to accomplish work.
Incidentally, former Navy Nukes were on the initial Google SRE team. If you read the book [0], especially Chapter 12, you’ll get an inkling about why this depth of knowledge matters.
Do most people need to understand how their NIC turns data into electrical signals? No, of course not. But occasionally, some weird bug emerges where that knowledge very much matters. At some point, most people will encounter a bug that they are incapable of reasoning about, because they do not possess the requisite knowledge to do so. When that happens, it should be a humbling experience, and ideally, you endeavor to learn more about the thing you are stuck on.
The more the big cloud providers can abstract cpu cycles, memory, networking, storage etc, the more they don’t have to compete with others doing the same.
If that were true, you might be right.
What happens in reality is that things are promised to work and (at best) fulfill that promise so long as no developers or deployers or underlying systems or users deviate from a narrow golden path, but fail in befuddling ways when any of those constraints introduce a deviation.
And so what we see, year over year, is continued enshittening, with everything continuously pushing the boundaries of unreliability and inefficiency, and fewer and fewer people qualified to actually dig into the details to understand how these systems work, how to diagnose their issues, how to repair them, or how to explain their costs.
> If the cost of an easily managed solution is low enough, why do the details matter?
Because the patience that users have for degraded quality, and the luxury that budgets have for inefficiency, will eventually be exhausted and we'll have collectively led ourselves into a dark forest nobody has the tools or knowledge to navigate out of anymore.
Leveraging abstractions and assembling things from components are good things that enable rapid exploration and growth, but they come with latent costs that eventually need to be revisited. If enough attention isn't paid to understanding, maintaining, refining, and innovating on the lowest levels, the contraptions built through high-level abstraction and assembly will eventually either collapse upon themselves or be flanked by competitors who struck a better balance and built on more refined and informed foundations.
As a software engineer who wants a long and satisfying career, you should be seeking to understand your systems to as much depth as you can, making informed, contextual choices about what abstractions you leverage, exactly what they abstract over, and what vulnerabilities and limitations are absorbed into your projects by using them. Just making naive use of the things you found a tutorial for, or that are trending, or that make things look easy today, is a poison to your career.
Because vertical scaling is now large enough that I can run all of twitter/amazon on one single large server. And if I'm wrong now, in a decade I won't be.
Compute power grows exponentially, but business requirements do not.
One end is PaaS like Heroku, where you just git push. The other end is bare metal hosting.
Every option you mentioned (VPS, Manages K8S, Self Hosted K8S, etc) they all fall somewhere between these two ends of the spectrum.
If a developer falls into any of these "groups" or has a preference/position on any of these solutions, they are just called juniors.
Where you end up in this spectrum is a matter of cost benefit. Nothing else. And that calculation always changes.
Those options only make sense where the cost of someone else managing it for you for a small premium gets higher than the opportunity/labor cost of you doing it yourself.
So, as a business, you _should_ not have a preference to stick to. You should probably start with PaaS, and as you grow, if PaaS costs get too high, slowly graduate into more self-managed things.
A company like fly.io is a PaaS. Their audience has always been, and will always be application developers who prefer to do nothing low-level. How did they forget this?
This is where I see things too. When you start out, all your value comes from working on your core problem.
eg: You'd be crazy to start a CRM software business by building your own physical datacenter. It makes sense to use a PaaS that abstracts as much away as possible for you so you can focus on the actual thing that generates value.
As you grow, the high abstraction PaaS gets increasingly expensive, and at some point bubbles up to where it's the most valuable thing to work on. This typically means moving down a layer or two. Then you go back to improving your actual software.
You go through this a bunch of times, and over time grow teams dedicated to this work. Given enough time and continuous growth, it should eventually make sense to run your own data centers, or even build your own silicon, but of course very few companies get to that level. Instead most settle somewhere in the vast spectrum of the middle, with a mix of different services/components all done at different levels of abstraction.
The DC will handle physical servicing for you if something breaks; you just pay for parts and labor.
All of this requires knowledge, of course, but it’s hardly an impossible task. Go look at what the more serious folk in r/homelab (or r/datacenter) are up to; it’ll surprise you.
> I'll have to monitor more things (like system upgrades and intrusion attempts)
You very much should be monitoring / managing those things on AWS as well. For system upgrades, `unattended-upgrades` can keep security patches (or anything else if you'd like, but I wouldn't recommend that unless you have a canary instance) up to date for you. For kernel upgrades, historically it's reboots, though there have been a smattering of live update tools like kSplice, kGraft, and the latest addition from GEICO of all places, tuxtape [0].
> I'd also have to amortize parts and labor as part of the cost, which is going to push the price up.
Given the prices you laid out for AWS, it's not multi-AZ, but even single-AZ can of course fail over with downtime. So I'll say you get 2U, with two individual servers, DBs either doing logical replication w/ failover, or something like DRBD [1] to present the two servers' storage as a single block device (you'd still need a failover mechanism for the DBs). So $400 for two 1U servers, and maybe $150/month at most for colo space. Even against the (IMO unrealistically low) $200/month quote for AWS, you break even at around 8 months, and from then on you're saving $50/month. Re: parts and labor, luckily, parts for old servers are incredibly cheap. PC3-12800R 16GiB sticks are $10-12. CPUs are also stupidly cheap. Assuming Ivy Bridge era (yes, this is old, yes, it's still plenty fast for nearly any web app), even the fastest available (E5-2697v2) is $50 for a matched pair.
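Putting that break-even math in one place (same numbers as above):

    # One-time hardware spend vs. the monthly delta between colo and the AWS quote.
    hardware        = 400   # two used 1U servers
    colo_per_month  = 150   # colo space
    aws_per_month   = 200   # the (arguably optimistic) AWS quote

    monthly_savings   = aws_per_month - colo_per_month   # $50/month
    break_even_months = hardware / monthly_savings
    print(break_even_months)   # 8.0 months, then you're ahead by $50 every month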
I don't say all of this just guessing; I run 3x Dell R620s along with 2x Supermicros in my homelab. My uptime for services is better than most places I've worked at (of course, I'm the only one doing work, I get that). They run 24/7/365, and in the ~5 years or so I've had these, the only trouble the Dells have given me is one bad PSU (each server has redundant PSUs, so no big deal), and a couple of bad sticks of RAM. One Supermicro has been slightly less reliable but to be fair, a. it has a hodgepodge of parts b. I modded its BIOS to allow NVMe booting, so it's not entirely SM's fault.
EDIT: re: backups in your other comment, run ZFS as your filesystem (for a variety of reasons), periodically snapshot, and then send those off-site to any number of block storage providers. Keep the last few days, with increasing granularity as you approach today, on the servers as well. If you need to roll back, it's incredibly fast to do so.
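A minimal sketch of what that snapshot-and-ship loop can look like, assuming a dataset named tank/data, a receiving dataset named tank/data-backup on the remote, and passwordless SSH to the off-site box (all names are placeholders, and retention/pruning is left out):

    import subprocess
    from datetime import datetime, timezone

    DATASET = "tank/data"            # hypothetical dataset name
    REMOTE = "backup@offsite-host"   # hypothetical off-site receiver

    def snapshot() -> str:
        name = f"{DATASET}@{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"
        subprocess.run(["zfs", "snapshot", name], check=True)
        return name

    def send_incremental(prev: str, new: str) -> None:
        # zfs send -i <prev> <new> | ssh <remote> zfs receive -F tank/data-backup
        send = subprocess.Popen(["zfs", "send", "-i", prev, new], stdout=subprocess.PIPE)
        subprocess.run(["ssh", REMOTE, "zfs", "receive", "-F", "tank/data-backup"],
                       stdin=send.stdout, check=True)
        send.stdout.close()
        if send.wait() != 0:
            raise RuntimeError("zfs send failed")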
But you don't need comparable capacity, at least not at first. And when you do, you click some buttons or run terraform plan/apply. Absolutely it's going to cost more measured only by tech specs. But you're not paying primarily for tech specs, you're paying for somebody else to do the work. That's where the cost comparability really needs to be assessed.
Security in AWS is a thorny topic, I'll agree, but the risks are a little different. You need to secure your accounts and users, and lock out unneeded services while monitoring for unexpected service utilization. Honestly, I think for what you're paying, AWS should be doing more for you here (and they are improving albeit slowly). Hence maybe the real point of comparison ought to be against PaaS because then all of that is out of scope too, and I think such offerings are already putting pressure on AWS to offer more value.
Agreed.
> But you're not paying primarily for tech specs, you're paying for somebody else to do the work. ... Honestly, I think for what you're paying, AWS should be doing more for you here
Also agreed, and this is why I don't think the value proposition exists.
We can agree to disagree on which approach is better; I doubt there's an objective truth to be had.
This is why numbers do not stack up in the calculations – the premise that the DB has to be provisioned is not the correct one to start off with.
The right way of cooking RDS in AWS is to go serverless from the start and configure the number of ACUs, e.g. 1 to N. That way it will be even cheaper than the originally quoted $200.
Generally speaking, there is absolutely no need for anything to be provisioned at a fixed compute capacity in AWS unless there is a very specific use case or an edge case that warrants a provisioned instance of something.
Nitpick, but there is no Serverless for RDS, only Aurora. The two are wildly different in their architecture and performance characteristics. Then there's RDS Multi-AZ Cluster, which is about as confusingly named as they could manage, but I digress.
Let's take your stated Minimum ACU of 1 as an example. That gives you 2 GiB of RAM, with "CPU and networking similar to what is available in provisioned Aurora instances." Since I can't find anything more specific, I'll compare it to a `t4g.small`, which has 2 vCPU (since it's ARM, it's actual cores, not threads), and 0.128 / 5.0 Gbps [0] baseline/burst network bandwidth, which is 8 / 625 MBps. That burst is best-effort, and also only lasts for 5 – 60 minutes [1] "depending on instance size." Since this is tiny, I'm going to assume the low end of that scale. Also, since this is Aurora, we have to account for both [2] client <--> DB and DB-compute (each node, if more than one) <--> DB-storage bandwidth. Aurora Serverless v2 is $0.12/hour, or $87.60/month, plus storage, bandwidth, and I/O costs.
So we have a Postgres-compatible DB with 2 CPUs, 2 GiB of RAM, and 64 Mbps of baseline network bandwidth that's shared between application queries and the cluster volume. Since Aurora doesn't use the OS page cache, its `shared_buffers` will be set to ~75% of RAM, or 1.5 GiB. Memory will also be consumed by the various processes, like the WAL writer, background writer, auto-vacuum daemon, and of course, each connection spawns a process. For the latter reason, unless you're operating at toy scale (single-digit connections at any given time), you need some kind of connection pooler with Postgres. Keeping in the spirit of letting AWS do everything, they have RDS Proxy, which despite the name, also works with Aurora. That's $0.015/ACU-hour, with a minimum 8 ACUs for Aurora Serverless, or $87.60/month.
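A quick sanity check on those two line items, using the per-hour prices quoted above and ~730 hours in a month (AWS pricing moves, so treat the figures as illustrative):

    HOURS_PER_MONTH = 730

    aurora_serverless_v2 = 0.12  * 1 * HOURS_PER_MONTH   # $/ACU-hour at the 1-ACU floor
    rds_proxy            = 0.015 * 8 * HOURS_PER_MONTH   # $/ACU-hour, 8-ACU minimum
    print(aurora_serverless_v2, rds_proxy)               # 87.6 and 87.6
    # ~$175/month before storage, I/O, and bandwidth are counted at all.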
Now, you could of course just let Aurora scale up in response to network utilization, and skip RDS Proxy. You'll eventually bottleneck / it won't make any financial sense, but you could. I have no idea how to model that pricing, since it depends on so many factors.
I went on about network bandwidth so much because it catches people by surprise, especially with Aurora, and doubly so with Postgres for many services. The reason is its WAL amplification from full page writes [3]. If you have a UUIDv4 (or anything else non-k-sortable) PK, the B+tree is getting thrashed constantly, leading to slower performance on reads and writes. Aurora doesn't suffer from the full page writes problem (that's still worth reading about and understanding), but it does still have the same problems with index thrashing, and it also has the same issues as Postgres with Heap-Only Tuple updates [4]. Unless you've carefully designed your schema around this, it's going to impact you, and you'll have more network traffic than you expected. Add to that dev's love of chucking everything into JSON[B] columns, and the tuples are going to be quite large.
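A crude way to see the k-sortable point without spinning up Postgres: track where each new key lands in sorted order. Sequential keys always land at the tail (the right-most B+tree page), while UUIDv4 keys scatter inserts across the whole index, dirtying pages everywhere. This is only a proxy - real behavior also depends on fill factor and page-split strategy:

    import bisect
    import uuid

    def tail_insert_fraction(keys):
        # Fraction of inserts that land at the end of the sorted order,
        # a rough proxy for how B+tree-friendly the key sequence is.
        existing, tail = [], 0
        for k in keys:
            pos = bisect.bisect(existing, k)
            if pos == len(existing):
                tail += 1
            bisect.insort(existing, k)
        return tail / len(keys)

    n = 10_000
    sequential   = [f"{i:012d}" for i in range(n)]           # k-sortable, e.g. bigserial
    random_uuids = [str(uuid.uuid4()) for _ in range(n)]     # non-k-sortable

    print(tail_insert_fraction(sequential))    # 1.0  - every insert hits the right-most page
    print(tail_insert_fraction(random_uuids))  # ~0.0 - inserts land all over the index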
Anyway, I threw together an estimate [5] with just Aurora (1 ACU, no RDS Proxy, modest I/O), 2x ALBs with an absurdly low consumption, and 2x ECS tasks. It came out to $232.52/month.
[0]: https://docs.aws.amazon.com/ec2/latest/instancetypes/gp.html...
[1]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-inst...
[2]: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide...
[3]: https://www.rockdata.net/tutorial/tune-full-page-writes/
[4]: https://www.postgresql.org/docs/current/storage-hot.html
[5]: https://calculator.aws/#/estimate?id=8972061e6386602efdc2844...
I procured my Aurora intel from a lengthy phone conversation with an exceptionally knowledgeable (and excessively talkative) AWS engineer – who had worked on Aurora – several years ago. The engineer provided detailed explanations of Aurora’s architecture, do's, and dont's as part of our engagement with AWS. The engineer was very proud of AWS’ accomplishments (and I concur that their «something serverless» products are remarkable engineering feats as well as significant cost-saving solutions for me and my clients). The engineer was willing to share many non-sensitive technical details. Generally speaking, a sound understanding of distributed architectures and networks should be sufficient to grasp Aurora Serverless. The actual secret sauce lies in the fine-tuning and optimisations.
[0] https://muratbuffalo.blogspot.com/2024/07/understanding-perf...
The tl;dr is they built a distributed storage system that is split across 3 AZs, each with 2 storage nodes. Storage is allocated in 10 GiB chunks, called protection groups (perhaps borrowing from Ceph’s placement group terminology), with each of these being replicated 6x across the nodes in AZs as mentioned. 4/6 are required for quorum. Since readers are all reading from the same volume, replica lag is typically minimal. Finally, there are fewer / no (not positive honestly; I have more experience with MySQL-compatible Aurora) checkpoints and full page writes.
If you’ve used a networked file system with synchronous writes, you’ll know that it’s slow. This is of course exacerbated with a distributed system requiring 4/6 nodes to ack. To work around this, Aurora has “temporary local storage” on each node, which is a fixed size proportional to the instance size. This is used for sorts that spill to disk, and building secondary indices. This has the nasty side effect that if your table is too large for the local storage, you can’t build new indices, period. AWS will tell you “upsize the instance,” but IMO it’s extremely disingenuous to tout the ability for 128 TiB volumes without mentioning that if a single table gets too big, your schema becomes essentially fixed in place.
Similarly, MySQL normally has something called a change buffer that it uses for updating secondary indices during writes. Can’t have that with Aurora’s architecture, so Aurora MySQL has to write through to the cluster volume, which is slow.
AWS claims that Aurora is anywhere from 3-5x faster than the vanilla versions of the respective DBs, but I have never found this to be true. I’ve also had the goalposts shifted when arguing this point, with them saying “it’s faster under heavy write contention,” but again, I have not found this to be true in practice. You can’t get around data locality. EBS is already networked storage; requiring 4/6 quorum across 3 physically distant AZs makes it even worse.
The 64 TiB limit of RDS is completely arbitrary AFAIK, and is purely to differentiate Aurora. Also, if you have a DB where you need that, and you don’t have a DB expert on staff, you’re gonna have a bad time.
Aurora is actually not a database but is a scalable storage layer that operates over the network and is decoupled from the query engine (compute). The architecture has been used to implement vastly different query engines on top of it (PgSQL, MySQL, DocumentDB – a MongoDB alternative, and Neptune – a property graph database / triple store).
The closest abstraction I can think of to describe Aurora is a VAX/VMS cluster – where the consumer sees a single entity, regardless of size, whilst the scaling (out or back in) remains entirely opaque.
Aurora does not support RDS Proxy for PostgreSQL or its equivalents for other query engine types because it addresses cluster access through cluster endpoints. There are two types of endpoints: one for read-only queries («reader endpoints» in Aurora parlance) and one for read-mutate queries («writer endpoint»). Aurora supports up to 15 reader endpoints, but there can be only one writer endpoint.
Reader endpoints improve the performance of non-mutating queries by distributing the load across read replicas. The default Aurora cluster endpoint always points to the writer instance. Consumers can either default to the writer endpoint for all queries or segregate non-mutating queries to reader endpoints for faster execution.
This behaviour is consistent across all supported query engines, such as PostgreSQL, Neptune, and DocumentDB.
I do not think it is correct to state that Aurora does not use the OS page cache – it does, as there is still a server with an operating system somewhere, despite the «serverless» moniker. In fact, due to its layered distributed architecture, there is now more than one OS page cache, as described in [0].
Since Aurora is only accessible over the network, it introduces unique peculiarities where the standard provisions of storage being local do not apply.
Now, onto the subject of costs. A couple of years ago, an internal client who ran provisioned RDS clusters in three environments (dev, uat, and prod) reached out to me with a request to create infrastructure clones of all three clusters. After analysing their data access patterns, peak times, and other relevant performance metrics, I figured that they did not need provisioned RDS and would benefit from Aurora Serverless instead – which is exactly what they got (unbeknownst to them, which I consider another net positive for Aurora). The dev and uat environments were configured with lower upper ACU's, whilst production had a higher upper ACU configuration, as expected.
Switching to Aurora Serverless resulted in a 30% reduction in the monthly bill for the dev and uat environments right off the bat and nearly a 50% reduction in production costs compared to a provisioned RDS cluster of the same capacity (if we use the upper ACU value as the ceiling). No code changes were required, and the transition was seamless.
Ironically, I have discovered that the AWS cost calculator consistently overestimates the projected costs, and the actual monthly costs are consistently lower. The cost calculator provides a rough estimate, which is highly useful for presenting the solution cost estimate to FinOps or executives. Unintentionally, it also offers an opportunity to revisit the same individuals later and inform them that the actual costs are lower. It is quite amusing.
[0] https://muratbuffalo.blogspot.com/2024/07/understanding-perf...
They call it [0] a database engine, and go on to say "Aurora includes a high-performance storage subsystem.":
> "Amazon Aurora (Aurora) is a fully managed relational database engine that's compatible with MySQL and PostgreSQL."
To your point re: part of RDS, though, they do say that it's "part of RDS."
> The architecture has been used to implement vastly different query engines on top of it (PgSQL, MySQL, DocumentDB – a MongoDB alternative, and Neptune – a property graph database / triple store).
Do you have a source for this? That's new information to me.
> Aurora does not support RDS Proxy for PostgreSQL
Yes it does [1].
> I do not think it is correct to state that Aurora does not use the OS page cache – it does
It does not [2]:
> "Conversely, in Amazon Aurora PostgreSQL, the default value [for shared_buffers] is derived from the formula SUM(DBInstanceClassMemory/12038, -50003). This difference stems from the fact that Amazon Aurora PostgreSQL does not depend on the operating system for data caching." [emphasis mine]
Even without that explicit statement, you could infer it from the fact that the default value for `effective_cache_size` in Aurora Postgres is the same as that of `shared_buffers`, the formula given above.
> Switching to Aurora Serverless resulted in a 30% reduction in the monthly bill for the dev and uat environments right off the bat
Agreed, for lower-traffic clusters you can probably realize savings by doing this. However, it's also likely that for Dev/Stage/UAT environments, you could achieve the same or greater via an EventBridge rule that starts/stops the cluster such that it isn't running overnight (assuming the company doesn't have a globally distributed workforce).
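A minimal sketch of the Lambda side of such an EventBridge schedule (the cluster identifier is a placeholder, and whether stop/start is supported depends on the cluster type, so verify for your setup):

    import boto3

    rds = boto3.client("rds")
    CLUSTER = "dev-aurora-cluster"   # hypothetical identifier

    def handler(event, context):
        # Invoked by two schedule rules, e.g. {"action": "stop"} at 20:00
        # and {"action": "start"} at 07:00 on weekdays.
        action = event.get("action")
        if action == "stop":
            rds.stop_db_cluster(DBClusterIdentifier=CLUSTER)
        elif action == "start":
            rds.start_db_cluster(DBClusterIdentifier=CLUSTER)
        return {"cluster": CLUSTER, "action": action}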
What bothers me most about Aurora's pricing model is charging for I/O. And yes, I know they have an alternative pricing model that doesn't do so (but the baseline is of course higher); it's the principle of the thing. The amortized cost of wear to disks should be baked into the price. It would be difficult for a skilled DBA with plenty of Linux experience to accurately estimate how many I/Os a given query might take. In a vacuum for a cold cache, it's not that bad: estimate or look up statistics for row sizes, determine if any predicates can use an index (and if so, the correlation of the column[s]), estimate index selectivity, if any, confirm expected disk block size vs. Postgres page size, and make an educated guess (rough sketch below). If you add any concurrent queries that may be altering the tuples you're viewing, it's now much harder. If you then add a distributed storage layer, which I assume attempts to boxcar data blocks for transmission much like EBS does, it's nearly impossible. Now try doing that if you're a "cloud native" type who hasn't the faintest idea what blktrace [3] is.
[0]: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide...
[1]: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide...
[2]: https://aws.amazon.com/blogs/database/determining-the-optima...
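To make that cold-cache guess concrete, here's a rough back-of-the-envelope sketch (every input is a made-up assumption for a hypothetical query; the hard part in real life is getting those inputs right, which is exactly the point above):

    import math

    # Hypothetical inputs you'd pull from table stats / pg_stats.
    table_rows    = 50_000_000
    avg_row_bytes = 200
    page_bytes    = 8192       # Postgres page size
    selectivity   = 0.0001     # fraction of rows the predicate matches
    index_depth   = 4          # root -> leaf traversal for the index

    rows_matched  = table_rows * selectivity          # 5,000 rows
    rows_per_page = page_bytes // avg_row_bytes       # ~40 rows per heap page

    # Well-correlated column: matched rows are packed into adjacent pages.
    best_case  = index_depth + math.ceil(rows_matched / rows_per_page)   # ~129 page reads
    # Poorly correlated column: roughly one heap page fetched per matched row.
    worst_case = index_depth + math.ceil(rows_matched)                   # ~5,004 page reads

    print(best_case, worst_case)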
My personal AWS account is stuffed with globally distributed multi-region, multi-az, fault tolerant, hugely scalable things that rarely get used. By “rarely” I mean requests per hour or minute, not second.
The sum total CPU utilization would be negligible. And if I ran instances across the 30+ AZs I’d be broke.
The service based approach (aka event driven) has some real magic at the low end of usage where experimentation and innovation happens.
You're not wrong that there's a PaaS/public-cloud dividing line, and that we're at an odd place between those two things. But I mean, no, it is not the case that our audience is strictly developers who do nothing low-level. I spent months of my life getting _UDP_ working for Fly apps!
> Their audience has always been, and will always be application developers who prefer to do nothing low-level. How did they forget this?
to this:
Their audience has always been, and will always be application developers who prefer to do nothing except to build their main product.
> Our primary DX is a CLI. One of our defining features is hardware isolation. To use us, you have to manage Dockerfiles. Have you had the experience of teaching hundreds of Heroku refugees how to maintain a Dockerfile? We have had that experience. Have you ever successfully explained the distinction between "automated" Postgres and "managed" Postgres? We have not.
I'm pretty much sure an application developer in this day and age has to know all of them, yes. Just like git.
https://github.com/gliderlabs/herokuish
Something like this + Procfile support should allow you to gobble up Heroku customers [like us] quickly since they've been stagnating for long, no?
I agree it’s worthwhile to understand things more deeply but developers slowly moving up layers of abstractions seems like it’s been a long term trend.
I don't think we've actually been abstracting new layers over the past 5-10 years anyway. Most of what I see is moving sideways, not up the stack. Covering more breadth, not height or depth, of abstractions.
> Most of what I see is moving sideways, not up the stack. Covering more breadth not height or depth, of abstractions.
I don't follow your logic. This comment is so vague. Do you have a specific example?

As long as some new thing is being invented in our industry, a new abstraction will be needed because the old one just can't quite flex enough while being backwards compatible.
Some of those levels are useful. Some of them are redundant. We should embed a Lua interpreter in our webserver and delete two levels of abstraction.
(I'm not aware of any actual Lua interpreter written in PHP, but it's representative of the kinds of stacks that do exist out there)
I don't think this split exists, at least in the way you framed it.
What does exist is workload, and problems that engineers are tasked with fixing. If you are tasked with fixing a problem or implementing a feature, you are not tasked with learning all the minute details or specifics of a technology. You are tasked with getting shit done, which might even turn out to not involve said technology. You are paid to be a problem-solver, not an academic expert on a specific module.
What you tried to describe as "magic" is actually the balance between broad knowledge vs specialization, or being a generalist vs a specialist. The bulk of the problems that your average engineer faces requires generalists, not specialists. Moreover, the tasks that actually require a specialist are rare, and when those surface the question is always whether it's worth it to invest in a specialist. There are diminishing returns on that investment, and throwing a generalist at the problem will already get some results. Give a generalist access to an LLM and they'll cut down on the research time to deliver something close to what a specialist would deliver. So why bother?
With this in mind, I would go as far as to say that the scenario backhandedly described as wanting to "understand where their code is running and what it's doing" (as if no engineer needs to have insight into how things work?), as opposed to the dismissively framed "just wants to `git push` and be done with it" scenario, can actually be classified as a form of incompetence. You, as an engineer, only have so many hours per day. Your day-to-day activities involve pushing new features and fixing new problems. To be effective, your main skill set is learning the system in a JIT way, diving in, fixing it, and moving on. You care about system traits, not low-level implementation details that may change tomorrow on a technology you may not even use tomorrow. If, instead, you feel the need to spend time on topics that are irrelevant to the immediate needs of your role, you are failing to deliver value. I mean, if you frame yourself as a Kubernetes expert who even knows commit hashes by heart, does that matter if someone asks you, say, why a popup box is showing off-center?
I want to understand LLMs. I want to understand my compiler, my gc, my type system, my distributed systems.
On the other hand, I don't really care about K8s or anything else, as long as I have something that works. Just let me `git push` and focus on making great things elsewhere.
this feels right to me. application development and platform development are both software development tasks, and lots of software devs do both. i like working on platform-level stuff, and i like building applications. but i like there to be a good distinction between the two, and when i'm working on application-level stuff, i don't want to have to think about the platform.
services like fly.io do a good job of hiding all the platform level work and just giving you a place to deploy your application to, so when they start exposing tools like GPUs that are more about building platforms than building applications, it's messy.
Increasingly, Fly even lets you dip into more complex configurations too.
I’ve got no issue with using Tofu and Ansible to manage my own infrastructure but it takes time to get it right and it’s typically not worth the investment early on in the lifecycle.
I just made this point in a post on my substack. Especially in regulated industries, you NEED to be able to explain your AI to the regulator. You can't have a situation where a human says "Well, gee, I don't know. The AI told me to do it."
But the real reason I like fly.io is because it is a new thing that allows for new capabilities. It allows you to build your own Cloudflare by running full virtual machines colocated next to appliances in a global multicast network.
May just be my naïveté, but I thought that something like ECS or EKS is much cheaper than an in-house k8s engineer.
It’s always baffling to me why people think that ECS or god forbid EKS is somehow easier than a few Linux boxes.
For example: how do you roll out a new release of your product? In sane setups, it's often $(helm upgrade --install ...), which is itself often run either in-cluster by watching a git managed descriptor, or in CI on merge to a release branch/tag
How does your developer get logs? Maybe it's via Splunk/ELK/DataDog/whatever but I have never in my life seen a case where that's a replacement for viewing the logs
How do you jump into the execution environment for your workload, to do more advanced debugging? I'm sure you're going to say ssh, which leads to the next questions of "how do you audit what was done, to prevent config drift" followed by "how do you authenticate the right developer at the right time with access to the right machine without putting root's public key file in a spreadsheet somewhere"
It's pretty easy to accomplish that with docker compose if you have containers, but you can also use systemd and some bash scripts to accomplish the same thing. Admittedly this would only affect a single node, but it's also possible to manage multiple nodes without using K8s / Nomad.
> How does your developer get logs?
fluentd
> How do you jump into the execution environment for your workload, to do more advanced debugging?
ssh
> how do you audit what was done, to prevent config drift
Assuming you're pulling down releases from a git repo, git diff can be used to detect changes, and you can then opt to either generate a patch file and send it somewhere, or just reset to HEAD. For server settings, any config management tool, e.g. puppet.
> how do you authenticate the right developer at the right time with access to the right machine without putting root's public key file in a spreadsheet somewhere
freeipa
I'm not saying any of this is better than K8s. I'm saying that, IMO, the above can be simpler to reason about for small setups, and has a lot less resource overhead. Now, if you're already comfortable administering and troubleshooting K8s (which is quite a bit different than using it), and you have no background in any of the above, then sure, K8s is probably easier. But if you don't know this stuff, there's a good chance you don't have a solid background in Linux administration, which means when your app behaves in strange ways (i.e. not an application bug per se, but how it's interacting with Linux) or K8s breaks, you're going to struggle to figure out why.
Uh, any time I run a distributed system and logs could appear on n nodes I need a log aggregator or I am tailing in n terminals. I almost only use Splunk. I tail logs in dev. Prod needs an aggregator. This has been my experience at 4 of my last 6 companies. The shit companies who had all the issues? Logs on cloudwatch or only on the node
Kubernetes is something you can hire for. A couple of linux boxes running all your server code in the most efficient way possible might save you operational costs, but it resigns you to being the one who has to maintain it. I've learned this the hard way - moving things to ECS as we scale up has allowed me give away responsibility for things. I understand that it's more complex, but i don't have to teach people now.
I massively distrust Ops-adjacent people's technical abilities if they don't know Linux. Multiple datapoints at multiple companies of varying scale has shown this to be true.
That said, you're correct, and I absolutely hate it. People want to do managed services for everything, and they stare at you like you're insane if you suggest running something yourself.
That problem started so long ago and has gotten so bad that I would be hard pressed to believe there is anyone on the planet who could take a modern consumer PC and explain exactly what is going on in the machine without relying on any abstractions to understand the actual physical process.
Given that, it’s only a matter of personal preference on where you draw the line for magic. As other commenters have pointed out, your line allowing for Kubernetes is already surprising to a lot of people
This is admittedly low effort but the vast majority of devs are paid wages to "write CRUD, git push and magic" their way to the end of the month. The company does not afford them the time and privilege of sitting down and analyzing the code with a fine comb. An abstraction that works is good enough.
The seasoned seniors get paid much more and afforded leeway to care about what is happening in the stack, since they are largely responsible for keeping things running. I'm just pointing out it might merely be a function of economics.
Just an example I recently came across: Working for a smaller company that uses Kubernetes and manages everything themselves with a small team. The result: They get hacked regularly and everything they run is constantly out of date because they don't have the capacity to actually manage it themselves. And it's not even cheaper in the long run because Developer Time is usually more expensive than just paying AWS to keep their EKS up to date.
To be fair, in my home lab I also run everything bare metal and keep it updated but I run everything behind a VPN connection and run a security scanner every weekend that automatically kills any service it finds > Medium Level CVE and I fix it when I get the time to do it.
As a small Team I can only fix so much and keep so much up to date before I get overwhelmed or the next customer Project gets forced upon me by Management with Priority 0, who cares about security updates.
I'd strongly suggest to use as much managed service as you can and focus your effort as a team on what makes your Software Unique. Do you really need to hire 2-3 DevOps guys just to keep everything running when GCP Cloud Run "just werks"?
Everything we do these days runs on so many levels of abstraction anyway, it's no shame to share cost of managing the lower levels of abstraction with others (using managed Service) and focus on your product instead. Unless you are large enough to pay for whole teams that deal with nothing but infrastructure to enable other teams to do Application Level Programming you are, in my limited experience, just going to shoot yourself in the foot.
And again, just to emphasize it: I like to do everything myself because for privacy reasons I use as little services that aren't under my control as possible but I would not recommend this to a customer because it's neither economical nor does it work well in my, albeit limited, experience.
Many, likely most, developers today don't care about controlling their system/network/hardware. There's nothing wrong with that necessarily, but it is a pretty fundamental difference.
One concern I've had with building LLM features is whether my customers would be okay with me giving their data over to the LLM vendor. Say I'm building a tool for data analysis, is it really okay to a customer for me to give their table schemas or access to the data itself to OpenAI, for example?
I rarely hear that concern raised though. Similarly when I was doing consulting recently, I wouldn't use copilot on client projects as I didn't want copilot servers accessing code that I don't actually own the rights to. Maybe its over protective though, I have never heard anyone raise that concern so maybe its just me.
As a software developer I want strong abstractions without bloat.
LLMs are so successful in part because they are a really strong abstraction. You feed in text and you get back text. Depending on the model and other parameters your results may be better or worse, but changing from eg. Claude to ChatGPT is as simple as swapping out one request with another.
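To illustrate how thin that text-in/text-out boundary is, here's a rough sketch of swapping providers behind one function. The endpoints, headers, and response shapes below reflect my understanding of the two vendors' public HTTP APIs, and the model names are placeholders - double-check against current docs before trusting any of it:

    import os
    import requests

    def complete(prompt: str, provider: str = "openai") -> str:
        """Text in, text out; everything else is an implementation detail."""
        if provider == "openai":
            r = requests.post(
                "https://api.openai.com/v1/chat/completions",
                headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
                json={"model": "gpt-4o-mini",
                      "messages": [{"role": "user", "content": prompt}]},
                timeout=60,
            )
            r.raise_for_status()
            return r.json()["choices"][0]["message"]["content"]
        elif provider == "anthropic":
            r = requests.post(
                "https://api.anthropic.com/v1/messages",
                headers={"x-api-key": os.environ["ANTHROPIC_API_KEY"],
                         "anthropic-version": "2023-06-01"},
                json={"model": "claude-3-5-haiku-latest", "max_tokens": 1024,
                      "messages": [{"role": "user", "content": prompt}]},
                timeout=60,
            )
            r.raise_for_status()
            return r.json()["content"][0]["text"]
        raise ValueError(f"unknown provider: {provider}")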
If what I want is to run AI tasks, then GPUs are a poor abstraction. It's very complicated (as Fly have discovered) to share them securely. The amount of GPU you need could vary dramatically. You need to worry about drivers. You need to worry about all kinds of things. There is very little bloat to the ChatGPT-style abstraction, because the network overhead is a negligible part of the overall cost.
If I say I don't want magic, what I really mean is that I don't trust the strength of the abstraction that is being offered. For example, when a distributed SQL database claims to be PostgreSQL compatible, it might just mean it's wire compatible, so none of my existing queries will actually work. It might have all the same functions but be missing support for stored procedures. The transaction isolation might be a lie. It's not that these databases are bad, it's that "PostgreSQL as a whole" cannot serve as a strong abstraction boundary - the API surface is simply too large and complex, and too many implementation details are exposed.
It's the same reason people like containers: running your application on an existing system is a very poor abstraction. The API surface of a modern linux distro is huge, and includes everything from what libraries come pre-installed to the file-system layout. On the other hand the kernel API is (in comparison) small and stable, and so you can swap out either side without too much fear.
K8S can be a very good abstraction if you deploy a lot of services to multiple VMs and need a lot of control over how they are scaled up and down. If you're deploying a single container to a VM, it's massively bloated.
TLDR: Abstractions can be good and bad, both inherently, and depending on your use-case. Make the right choice based on your needs. Fly are probably correct that their GPU offering is a bad abstraction for many of their customer's needs.
I prefer to either manage software directly with no wrappers on top, or use a fully automated solution.
K8S is something I'd rather avoid. Do you enjoy writing configuration for your automation layer?
What's changing is that managed solutions are becoming increasingly easier to set up and increasingly cheaper on smaller scales.
While I do personally enjoy understanding the entire stack, I can't justify self-hosting and managing an LLM until we run so many prompts a day that it becomes cheaper for us to run our own GPUs compared to just running APIs like OpenAI/Anthropic/Deepseek/...
I feel this is similar to what you are pointing out. Why _shouldn't_ people be the "magic" users? When was the last time one of your average devs looked into how ESM loading works? Or the Python interpreter, or V8? Or how it communicates with the OS and lower-level hardware interfaces?
This is the same thing. Only you are goalpost shifting.
I think we're approaching the point where software development becomes a low-skilled job, because the automatic tools are good enough to serve business needs, while manual tools are too difficult to understand by anyone but a few chosen ones anyway.
lol, even understanding git is hard for them. Increasingly, software engineers don't want to learn their craft.
once upon a time i could have said that it's better this way and that everybody will be thankful when i'm the only person who can fix something, but at this point that isn't really true when anybody can just get an LLM to walk them through it if they need to understand what's going on under the hood. really i'm just a nerd and i need to understand if i want to sleep at night lol.
They have incredible defaults that can make it as simple as just running ‘git push’ but there isn’t really any magic happening, it’s all documented and configurable.
tell me whether there are many bricklayers who want to understand the chemical composition of their bricks.
Paints, wood finishes, adhesives, oils, abrasives, you name it. You generally know at least a bit about what’s in it. I can’t say everyone I’ve worked with wanted to know, but it’s often intrinsic to what you’re doing and why. You don’t just pull a random product off a shelf and use it. You choose it, quite often, because of its chemical composition. I suspect it’s not always thought of this way, though.
This is the same with a lot of artistic mediums as well. Ceramicists often know a lot more than you’d expect about what’s in their clay and glazes. It’s really cool.
I’m not trying to be contrarian here. I know some people don’t care at all, and some people use products because it’s what they were told to do and they just go with it. But that wasn’t my experience most of the time. Maybe I got lucky, haha.
Ditto for the rest of technical voc degrees.
If you think you can do IT without at least a trade-degree-level understanding of how the low-level components interact (and I'm not talking about CS-level stuff like concurrency with CSP, O-notation, linear and discrete algebra, but basic stuff such as networking protocols, basic SQL database normalization, system administration, configuration, how the OS boots, how processes work - idle, active, waiting...), you will be fired faster than anyone around.
Who owns and depreciates the logs, backups, GPUs, and the database(s)?
K8s docs > Scheduling GPUs: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus... :
> Once you have installed the plugin, your cluster exposes a custom schedulable resource such as amd.com/gpu or nvidia.com/gpu.
> You can consume these GPUs from your containers by requesting the custom GPU resource, the same way you request cpu or memory
awesome-local-ai: Platforms / full solutions https://github.com/janhq/awesome-local-ai?platforms--full-so...
But what about TPUs (Tensor Processing Units) and QPUs (Quantum Processing Units)?
Quantum backends: https://github.com/tequilahub/tequila#quantum-backends
Kubernetes Device Plugin examples: https://kubernetes.io/docs/concepts/extend-kubernetes/comput...
Kubernetes Generic Device Plugin: https://github.com/squat/generic-device-plugin#kubernetes-ge...
K8s GPU Operator: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator...
Re: sunlight server and moonlight for 120 FPS 4K HDR access to GPU output over the Internet: https://github.com/kasmtech/KasmVNC/issues/305#issuecomment-... :
> Still hoping for SR-IOV in retail GPUs.
> Not sure about vCPU functionality in GPUs
Process isolation on vCPUs with or without SR-IOV is probably not as advanced as secure enclave approaches.
Intel SGX is a secure enclave capability, which is cancelled on everything but Xeon. FWIU there is no SGX for timeshared GPUs.
What executable loader reverifies the loaded executable in RAM after init time?
What LLM loader reverifies the in-RAM model? Can Merkle hashes reduce the cost of NN state verification?
Can it be proven that a [chat AI] model hosted by someone else is what is claimed; that it's truly a response from "model abc v2025.02"?
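On the Merkle hash question: here's a minimal, generic sketch of chunk-wise hashing of a serialized model (not any particular loader's scheme). Once the root is pinned, re-checking a single chunk later only costs re-hashing that chunk plus a log-sized path, instead of re-hashing the whole multi-GB blob:

    import hashlib

    CHUNK = 1 << 20  # 1 MiB chunks of the serialized weights

    def leaf_hashes(blob: bytes) -> list[bytes]:
        return [hashlib.sha256(blob[i:i + CHUNK]).digest()
                for i in range(0, len(blob), CHUNK)]

    def merkle_root(leaves: list[bytes]) -> bytes:
        level = list(leaves) or [hashlib.sha256(b"").digest()]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node on odd levels
                level.append(level[-1])
            level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                     for i in range(0, len(level), 2)]
        return level[0]

    # At load time: compute and pin the root. Later, spot-check a random chunk
    # by re-hashing just that chunk and its sibling path instead of everything.
    weights = b"\x00" * (8 * CHUNK)            # stand-in for a model file
    root = merkle_root(leaf_hashes(weights))
    print(root.hex())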
PaaS or IaaS
We used to joke about this a lot when Java devs would have memory issues and not know how to adjust the heap size in init scripts. So many “CS majors” who are completely oblivious to anything happening outside of the JVM, and plenty happening within it.
I want to understand every possible detail about my framework and language and libraries. Like I think I understand more than many do, and I want to understand more, and find it fulfilling to learn more. I don't, it's true, care to understand the implementation details of, say, the OS. I want to know the affordances it offers me and the APIs that matter to me, I don't care about how it's implemented. I don't care to understand more about DNS than I need. I definitely don't care to spend my time futzing with kubernetes -- I see it as a tool, and if I can use a different tool (say heroku or fly.io) that lets me not have to learn as much -- so I have more time to learn every possible detail of my language and framework, so I can do what I really came to do, develop solutions as efficiently and maintainably as possible.
You are apparently interested in lower levels of abstraction than I am. Which is fine! Perhaps you do ops/systems/sre and don't deal with the higher levels of abstraction as much as I do -- that is definitely lucrative these days, there are plenty of positions like that. Perhaps you deal with more levels of abstraction but don't go as deep as me -- or, and I totally know it's possible, you just have more brain space to go as deep or deeper on more levels of abstraction as me. But even you probably don't get into the implementation details of electrical engineering and CPU design? Or if you do, and also go deep on frameworks and languages, I think you belong to a very very small category!
But I also know developers who, to me, don't want to go too deep on any of the levels of abstraction. I admit I look down on them, as I think you do too; they seem like copy-paste coders who will never be as good at developing efficient, maintainable solutions.
I started this post saying I think that's a different axis than what layers of abstraction one specializes in or how far down one wants to know the details. But as I get here, while I still think that's likely, I'm willing to consider that these developers I have not been respecting -- are just going really deep in even higher levels of abstraction than me? Some of them maybe, but honestly I don't think most of them, but I could be wrong!
This is baffling. What’s value proposition here? At some point customer will be directly asking an AI agent to create an app for them and it will take care of coding/deployment for them..
Some people became software developers because they wanted to make easy money back when the industry was still advertising bootcamps (in order to drive down the cost of developers).
Some people simply drifted into this profession by inertia.
And everything in-between.
From my experience there are a lot of developers who don't take pride in their work, and just do it because it pays the bills. I wouldn't want to be them but I get it. The thing is that by delegating all their knowledge to the tools they use, they are making themselves easy to replace, when the time comes. And if they have to fix something on their own, they can't. Because they don't understand why and how it works, and how and why it became what it is instead of something else.
So they call me and ask me how that thing works...
I can usually tell at the end of a call which group they belong to. I've been wrong a few times too.
As long as they don't waste my time I'm fine with everyone, some people just have other priorities in life.
One thing I'd say is in my experience there are many competent and capable people in every group, but non-competent ones are extremely rare in the first group.
Extremely fast to start on-demand, reliable and although a little bit pricy but not unreasonably so considering the alternatives.
And the DX is amazing! it's just like any other fly machine, no new set of commands to learn. Deploy, logs, metrics, everything just works out of the box.
Regarding the price: we've tried a well known cheaper alternative and every once in a while on restart inference performance was reduced by 90%. We never figured out why, but we never had any such problems on fly.
If I'm using a cheaper "Marketplace" to run our AI workloads, I'm also not really clear on who has access to our customer's data. No such issues with fly GPUs.
All that to say, fly GPUs are a game changer for us. I could wish only for lower prices and more regions, otherwise the product is already perfect.
This is in stark contrast to all other options I tried (AWS, GCP, LambdaLabs). The fly.io config really felt like something worth being in every project of mine and I had a few occasions where I was able to tell people to sign up at fly.io and just run it right there (Btw. signing up for GPUs always included writing an email to them, which I think was a bit momentum-killing for some people).
In my experience, the only real minor flaw was the already mentioned embedding of the whole CUDA stack into your container, which creates containers that approach 8GB easily. This then lets you hit some fly.io limits as well as creating slow build times.
2012 - Moore's law basically ends - NAND gates don't get smaller, just more cleverly wrapped. Single threaded execution more or less stops at 2 GHz and has remained there.
2012-2022 - no one notices single threaded is stalled because everything moves to VMs in the cloud - the excess parallel compute from each generation is just shared out in data centres
2022 - data centres realise there is no point buying the next generation of super chips with even more cores, because you make massive capital investments but cannot shovel 10x or 100x processes in, because Amdahl's law means standard computing is not 100% parallel (a quick sketch of this below)
2022 - but look, LLMs are 100% parallel hence we can invest capital once again
2024 - this is the bit that makes my noodle - wafer scale silicon. 900,000 cores with GBs SRAM - these monsters run Llama models 10x faster than A100s
We broke Moore's law and hardware just kept giving more parallel cores because that's all they can do.
And now software needs to find how to use that power - because dammit, someone can run their code 1 million times faster than a competitor - god knows what that means but it’s got to mean something - but AI surely cannot be the only way to use 1M cores?
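The Amdahl's law point in the 2022 bullet is easy to put numbers on - even a tiny serial fraction caps what 900,000 cores can buy you (a minimal sketch, not tied to any particular chip):

    def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / cores)

    for p in (0.95, 0.99, 0.999, 1.0):
        print(p, round(amdahl_speedup(p, 900_000)))
    # 0.95 -> ~20x, 0.99 -> ~100x, 0.999 -> ~999x, 1.0 -> 900,000x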
It looks like maybe the slope changed slightly starting around 2006, but it’s funny because this comment ends complaining that Moore’s Law is too good after claiming it’s dead. Yes, software needs to deal with the transistor count. Yes, parallel architectures fit Moore’s law. The need to go to more parallel and more parallel because of Moore’s Law was predicted, even before 2006. It was a talking point in my undergrad classes in the 90s.
https://upload.wikimedia.org/wikipedia/commons/0/00/Moore%27...
http://cva.stanford.edu/classes/cs99s/papers/moore-crammingm...
But to further needle, the law is about transistor counts doubling, not about clock speed or single-threaded performance.
Single-threaded execution - I assume you mean IPC, or maybe more accurately PPC (performance per clock) - has improved steadily if you account for ARM designs and not just x86. That is why the M1 was so surprising to everyone: most (all) thought Geekbench scores on a phone don't translate to desktop, and somehow the M1 went from nonsense to breakthrough.
Clock speed also went from 2 GHz to 5 GHz, and we are pushing 4 GHz on mobile phones already.
And Moore's law, in terms of transistor density, ended when Intel couldn't deliver 10nm on time, so 2016/2017 give or take. But that doesn't mean transistor density is not improving.
Any idea why? Is it because of some patent they hold?
Qualcomm's current Snapdragon Elite Oryon 2 on Mobile, and ARM Cortex X925 or previously known as X5 are already close to Apple A17 level performance. So this is no longer something unique to Apple.
I just wish both design are more widely available. And for x86, Intel and AMD still haven't quite caught up. At least not in the next 2 years.
I don't think it's anything to do with patents although I'm sure they have plenty.
https://www.man.com/technology/single-core-stagnation-and-th...
But we may finally be hitting a plateau unless Apple can demonstrate improvement in M5 and M6. They have pretty much squeezed out everything with the 8-Wide Design. Not sure if they could go any wider without some significant trade off.
From a 50MHz 486 in 1990 to a 1.4GHz P3 in 2000 is a factor of 28 improvement in speed solely due to clock speed! Add on all the other multiplicative improvements from IPC...
Since then, in more than 20 years, the clock frequency has increased only 2 times, while in the previous decade (1983-1993) it had increased only about 5 times, where a doubling of the clock frequency (33 to 66 MHz) had occurred between 1989 and 1993 (for cheap CPUs with MOS logic, because expensive CPUs using ECL had reached 80 MHz already during the seventies).
Also, the Pentium III reached 1.4 GHz only in early 2002, not in 2000, while the 80486 reached 50 GHz only in 1991, not in 1990.
Your typo got me wondering — what would the performance of an actual 50GHz 486 look like compared to modern single-core performance?
The lack of speculative execution combined with atrocious memory latencies and next to no cache should be enough to annihilate most if not all of the advantage from the faster clock — CPU is just going to be idling waiting for data. Then there’s the amount of work you can get done per cycle, and SIMD, and…
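To put rough numbers on that: at 50 GHz, a typical DRAM access would cost on the order of thousands of stall cycles for a core with almost no cache and no speculation. A quick sketch (the ~100 ns latency figure is a generic assumption, not a spec):

```python
# Rough arithmetic for the hypothetical 50 GHz 486 above: how many cycles
# would one uncached DRAM access cost? Latency is an assumed ballpark figure.
clock_hz = 50e9
dram_latency_s = 100e-9

stall_cycles = clock_hz * dram_latency_s
print(f"~{stall_cycles:.0f} cycles stalled per uncached memory access")  # ~5000 cycles
```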
In 2012, 22nm architectures were new (https://en.wikipedia.org/wiki/List_of_Intel_CPU_microarchite...). Now we have 3nm architectures (https://en.wikipedia.org/wiki/3_nm_process). In what sense have nand gates "not gotten smaller"?
My computer was initially built in 2014 and the CPU runs up to 3 GHz (and I don't think it was particularly new on the market). CPUs made today overclock to 5+ GHz. In what sense did "single threaded execution more or less stop at 2 GHz and remain there"?
We might not be seeing exponential increases in e.g. transistor count with the same doubling period as before, but there has demonstrably been considerable improvement to traditional CPUs in the last 12+ years.
As for clock speeds, yes and no - basically thermal limits stop most CPUs from running full time at full speed. The problem was obvious back in the day: I would build PCs and carefully apply thermal paste to the casing of a chip, relying on the transistors' heat to conduct through the package so the heatsink could carry the waste heat away. Yes, they are working on thermal solutions built directly into the layers of silicon.
Because 22nm was not actually 22nm, and 3nm is not actually 3nm.
You could have a laptop with 1000 cores on it - simple 32/64 bit CPUs that just ran full pelt.
The lack of parallelism in software drove decisions to spend silicon not on running everything faster, but on getting one instruction stream through one core faster.
AI has arrived and found a world of silicon that, by coincidence, it can use every transistor of at full pelt - while the CPUs in our laptops use only a fraction of their transistors for pedal-to-the-metal processing, and the rest is … legacy??
You get more cores because transistor density didn't stop increasing, software devs/compiler engineers just can't think of anything better to do with the extra real estate!
> Single threaded execution more or less stops at 2 GHz and has remained there.
There are other semiconductor materials that do not have the heat limits of silicon-based FETs and have become shockingly cheap and small (for example, a 200W power supply the size of a wallet that doesn't catch on fire). We're using these materials for power electronics and RF/optics today but they're nowhere close to FinFETs from a few years ago or what they're doing today. That's because all the fabrication technology and practices have yet to be churned out for these new materials (and it's not just UV lasers), but they're getting better, and there will one day be a mcu made from wide bandgap materials that cracks 10GHz in a consumer device.
Total aside, hardware junkies love talking cores and clock speeds, but the real bottlenecks for HPC are memory and i/o bandwidth/latency. That's why the future is optical, but the technology for even designing and experimenting with the hardware is in its infancy.
I’d bet most code we use every day spends most time just waiting for things like disk and network. Not to mention it’s probably inherently sequential.
I cannot work out whether the world just doesn't have enough parallel problems, or whether we simply lack a programming language to describe them.
It's the best of both worlds, because from the CPU's perspective it gets to have separate lanes for instructions and data, but from the programmer's perspective it's one memory.
However, since the problem is intractable, you don't actually have to solve it. What you can do instead is perform random shooting in 32 different directions, linearize and then solve 32 quadratic problems and find the minimum over those 32. That is phase one. However, a cold start from phase one sucks, so there is a second phase, where you take an initial guess and refine it. Again you do the same strategy, but this time you can use the initial guess to carefully choose search directions and solve the next batch of 32 QPs and take the minimum over them.
Now here is the thing. Even this in itself won't save you. At the extreme top end you are going to have 20k decision variables, for which you're going to solve an extremely sparse linear system of equations.
SQP is iterated QP, and QP is an iterative interior-point or active-set algorithm, so we are two iterative algorithms deep. A linear system of equations can be solved iteratively, so let's make it a third. It turns out the biggest bottleneck in this nesting of sequential algorithms isn't necessarily the sequential nature. It's multiplying a giant sparse 20000x20000 matrix with a 20000-wide vector and doing this over and over and over again. That is what is fucking impossible to do in the 2 millisecond time budget you've been given, not the sequential parts.
So what does Boston Dynamics do for their impressive application of MPC? They don't even try. They just linearize the MPC and let the non sequential QP solver run as many iterations as it can until time is up, meaning that they don't even solve to optimality!
Now you might wonder why someone would want nonlinear MPC in the first place if it is so impractical. The reason is that MPC provides a general, compute-scaling solution to many problems that would otherwise require a lot of human ingenuity. It is the bitter lesson. Back when QP solvers were too slow, people used the pseudo-inverse on unconstrained QP problems. It's time for faster parallel hardware to make that workaround obsolete and let SQP take over.
Yes, parallel hardware is the key to a sequential problem.
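To make the bottleneck concrete, here is a minimal sketch (not the commenter's actual solver; the dimensions and sparsity are illustrative assumptions) of timing the repeated sparse matrix-vector product against a millisecond-scale budget:

```python
# Minimal sketch: time the sparse matvec that dominates an iterative QP/linear
# solve. Matrix size and density are illustrative, not from a real MPC problem.
import time
import numpy as np
import scipy.sparse as sp

n = 20_000
A = sp.random(n, n, density=0.001, format="csr", random_state=0)  # sparse system matrix
x = np.random.rand(n)

iters = 1_000                     # an iterative solver repeats this matvec many times
start = time.perf_counter()
for _ in range(iters):
    y = A @ x
elapsed = time.perf_counter() - start

ms_per_matvec = (elapsed / iters) * 1_000
print(f"{ms_per_matvec:.3f} ms per matvec")   # compare against a ~2 ms control budget
```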
A 2GHz core from a 2012 is extremely slow compared to a 2GHz core of a modern CPU. The difference could be an order of magnitude.
There is more to scaling CPUs than the clock speed. Modern CPUs process many more instructions per clock on average.
Suddenly your cost for building a new data centre is something like twice the cost of the previous one (cooling gets more expensive, etc.) and yet you only sell the same amount of space. It's not an attractive business in the first place.
This was the push for lambda architecture etc - depending on usage you could have hundreds of people buying the same core. I would have put a lot of cash into making something that spins up docker instances so fast it’s like lambda - and guess what fly.io does?
I think fly.io’s obsession with engineering led them down a path of positive capital usage while AWS focused on rolling out new products on a tougher capital process.
Anyway - AI is the only thing that dense multi core data centres can use that packs in many many users compared to docker per core.
Unless we all learn how to think and code in parallel, we are leaving a huge amount of hardware gains on the table. And those gains are going to 100x in the next ten years and my bet is my janky-ass software will still not be able to use it - there will be a bifurcation of specialist software engineers who work in domains and with tooling that is embarrassingly parallel, and the rest of us will be on fly.io :-)
(#) OK, so maybe 3 or 4 Docker instances per core, with the hypervisor doling out time slots, but much more than that and performance is a dog - so the number of "virtual CPUs" you can sell is limited and only creeps up despite hardware leaping ahead … which is the point I am making.
Ancient Java servlets back in the early 2000s were more suitable for performance on current gen hardware than modern NodeJS, Python etc...
To go extreme: wafer-scale silicon - 900,000 8-bit cores, 120GB of SRAM. Hand-wave on the 8-bit for now and just think how to handle Facebook or e-commerce. A static site is simple if it fits inside 120GB, but Facebook is low-write, high-read (I assume) - yet making the fetching from / writing to memory parallel means rewriting a lot of things .. and for e-commerce, suddenly ACID does not parallelise easily.
I mean, this all seems doable with fairly fundamental changes to architecture and memory concepts, and … the RDBMS part is challenging.
But I think we are way past the idea that a web framework author can just fix it - this is deeper - a new OS a new compiler and so on - but I could be wrong.
Let's be honest, the other stuff is just Chrome: Tell me 96gb is enough?
Prompt eval is slow, inference for large models at high context is slow, training is limited and slow.
It's better than not having anything, but we got rid of our M1 Max 192GBs after about a year.
I have a Mac with a lot of RAM for running models. I haven’t done it in a month because I can tell that it’s not only slow, but the output also doesn’t come close to what I can get from the latest from Claude or ChatGPT.
It’s actually amazing that I can run LLMs locally and get the quality of output that they give me, but it’s just a different level of experience than the state of the art.
I’m becoming convinced that the people who sing the praises of running locally are just operating differently. For them, slow and lower quality output aren’t a problem because they’re having fun doing it themselves. When I want to get work done, the hosted frontier models are barely fast enough and have hit or miss quality for me, so stepping down to the locally hosted options is even more frustrating.
I can post benchmarks for these Mac machines and clusters of Studios and M4 Mac Minis (see my other HN posts last month, the largest Mac cluster I can benchmark for you has 4 TB of ultrafast unified memory and around 9216 M4 cores).
I meant to explain why no one ever posts a benchmark: it's expensive as hell to do a professional benchmark against accepted standards. It's several days of work, very expensive rental of several pieces of $10K hardware, etc. You don't often hand that over for free. With my benchmark results some companies can save millions if they take my advice.
>any interesting use cases to massive amounts of memory outside of training?
Dozens, hundreds. Almost anything you use databases, CPUs, GPUs or TPUs for. 90% of computing is done on the wrong hardware, not just datacenter hardware.
The interesting use case we discussed here on HN last week was running the full DeepSeek-R1 LLM on 778 GB fast-DRAM computers locally. I benchmarked getting hundreds of tokens per second on a cluster of M4 Mac Minis or a cluster of M2 Mac Studio Ultras, where others reported 0.015 or 6 tokens per second on single machines.
I just heard of a Brazilian man who built a 256 Mac Mini cluster at double the cost that I would. He leaves $600K of value on the table because he won't reverse engineer the instruction set, rewrite his software, or even call Apple to negotiate a lower price.
HN votes me down for commenting that I, a supercomputer builder for 43 years, can build better, cheaper, faster, lower-power supercomputers from Mac Minis and FPGAs than from any Nvidia, AMD or Intel state-of-the-art hardware; it even beats the fastest supercomputer of the moment or the Cerebras wafer engine V3 (on energy, coding cost and performance per watt per dollar).
I design and build wafer-scale 2-million-core reconfigurable supercomputers for $30K a piece that cost $150-$300 million to mass produce. That's why I know how to benchmark M2 Ultra and M4 Macs: they are the second-best chip a.t.m. that we need to compete against.
As a consulting job I do benchmarks or build your on-prem hardware or datacenter. This job consists mainly of teaching the customer's programming staff how to program massively parallel software, or convincing the CEO not to rent cloud hardware but to buy on-prem hardware. OP at Fly.io should have hired me; then he wouldn't have needed to write his blog post.
I replied to your comment in hope of someone hiring me when they read this.
What is your process to turn Mac minis into a cluster? Is there any special hardware involved? And you can get 100x tok/s vs others on comparable hardware, what do you do differently - hardware, software, something else?
1) Apply science. Benchmark everything until you understand if it's memory-bound, I/O-bound or compute-bound [1].
2) Rewrite software from scratch in a parallel form with message passing.
3) Reverse engineer native instruction sets of CPU, GPU and ANE or TPU. Same for NVIDIA (don't use CUDA).
No special hardware is needed, but adding FPGAs to optimize the network between machines might help.
So you analyse the software and hardware, then restructure it with reprogramming, rewiring and adaptive compilers. Then you benchmark again and find what hardware runs the algorithm fastest for less money using less energy, and weigh that against the extra cost of reprogramming.
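A minimal sketch of the step-1 classification ("memory-, I/O- or compute-bound"), assuming illustrative peak numbers rather than any particular machine's specs:

```python
# Roofline-style classification sketch. The peak figures below are assumed
# placeholders, not measured specs for any real machine.
peak_flops = 10e12        # assumed peak compute: 10 TFLOP/s
peak_mem_bw = 400e9       # assumed peak memory bandwidth: 400 GB/s

def bound(flops_done: float, bytes_moved: float) -> str:
    """Compare a kernel's arithmetic intensity to the machine balance."""
    intensity = flops_done / bytes_moved          # FLOPs per byte of traffic
    machine_balance = peak_flops / peak_mem_bw    # FLOPs the machine can do per byte moved
    return "compute-bound" if intensity > machine_balance else "memory-bound"

# Example: a big matrix-vector product does ~2*n^2 FLOPs over ~4*n^2 bytes (fp32)
n = 20_000
print(bound(2 * n * n, 4 * n * n))   # low intensity -> memory-bound; faster cores won't help
```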
As you can see from this comment thread, most people, especially programmers, lack the knowledge that we computer scientists, parallel programmers and chip or hardware designers have.
>What is your process
Science. To measure is to know, my prof always said.
To answer your questions in detail, email me.
You first need to be specific. The problem is not how to turn Mac Minis into a cluster, with or without custom hardware (I do both), on code X or Y. Nor how to optimize software or rewrite it from scratch (which is often cheaper).
First find the problem. In this case the problem is finding the lowest OPEX and CAPEX to do the stated compute load, versus changing the compute load. It turns out that in a simulation, or a cruder spreadsheet calculation, it becomes clear that the energy cost dominates the hardware choice: it trumps the cost of programming, the cost of off-the-shelf hardware and the difference if you add custom hardware. M4s are lower power, lower OPEX and lower CAPEX, especially if you rewrite your (Nvidia GPU) software. The problem is the ignorance of the managers and their employee programmers.
You can repurpose the 2 x 10 Gbps USB-C, the 10 Gbps Ethernet and the three 32 Gbps PCIe/Thunderbolt ports, but you have to use better drivers. You need to weigh whether doubling the 960 Gbps, 16 GB unified memory for 2 x $400 is faster than 2 Tbps memory at 1.23 times the cost, whether 3 x 4 x 32 Gbps PCIe 4.0 versus 3 x 120 Gbps unidirectional is better for this particular algorithm, and what changes if using both the 10 CPU cores, the 10 x 400 GPU cores and the 16 Neural Engine cores (at 38 trillion 16-bit OPS) works better than just the CUDA cores. Usually the answer is: rewrite the algorithm and use an adaptive compiler, and then a cluster of smaller 'sweet spot' off-the-shelf hardware will outperform the fanciest high-end hardware, if the network is balanced. This varies at runtime, so you'll only know if you know how to code. As Alan Kay said and Steve Jobs quoted: if you're serious about software, you should do your own hardware. If you can't, you can approach that hardware with commodity components if it turns out to be cheaper. I estimate that for $42K of labour I can save you a few hundred $K.
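As a rough illustration of the trade-off being weighed there, comparing the comment's aggregate inter-node link bandwidth against its unified-memory figure (numbers taken as stated, not verified):

```python
# Napkin comparison: cluster links vs on-package memory, using the figures
# quoted in the comment above (treated as stated, not verified specs).
links_gbps = {
    "2x USB-C":       2 * 10,
    "10GbE":          1 * 10,
    "3x Thunderbolt": 3 * 32,   # the comment also cites 3 x 120 Gbps unidirectional
}
aggregate_link_gbps = sum(links_gbps.values())
unified_memory_gbps = 960        # the comment's ~960 Gbps unified-memory figure

print(f"aggregate inter-node links: {aggregate_link_gbps} Gbps")
print(f"local unified memory:       {unified_memory_gbps} Gbps "
      f"({unified_memory_gbps / aggregate_link_gbps:.0f}x the network)")
# The conclusion drawn above: the cluster only wins if the algorithm is
# rewritten so traffic stays off the (much slower) network.
```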
Yes. Several pages of comments about M4 clusters, wafer scale integrations and a few about DeepSeek.
https://news.ycombinator.com/threads?id=morphle (a few pages- press more).
When people casually ask for benchmarks in comments, they’re not looking for in-depth comparisons across all of the alternatives.
They just want to see “Running Model X with quantization Y I get Z tokens per second”.
> That's why I know how to benchmark M2 Ultra and M4 Macs, as they are the second best chip a.t.m. that we need to compete against.
Macs are great for being able to fit models into RAM within a budget and run them locally, but I don’t understand how you’re concluding that a Mac is the “second best option” to your $30K machine unless you’re deliberately excluding all of the systems that hobbyist commonly build under $30K which greatly outperform Mac hardware.
Influencers on YouTube will give them that [1], but it's meaningless. If a benchmark is not part of an in-depth comparison, then it doesn't mean anything and can't inform you on what hardware will run this software best.
These shallow benchmarks influencers post on YouTube and Twitter are not just meaningless, they also take days to browse through. And they are influencers: they are meant to influence you, and are therefore not honest or reliable.
[1] https://www.youtube.com/watch?v=GBR6pHZ68Ho
>but I don’t understand how you’re concluding that a Mac is the “second best option” to your $30K machine
I conclude that if you can't afford to develop custom chips, then in certain cases a cluster of M4 Mac Minis will be the fastest, cheapest option. Cerebras wafers and NVIDIA GPUs have always been too expensive compared to custom chips or Mac Mini clusters, independent of the specific software workload.
I also meant to say that a cluster of $599 Mac Minis will outperform a $6500 M2 Ultra Mac Studio with 192GB, and be half the price for higher performance and DRAM, but only if you utilize the M4 Mac Mini's aggregated 100 Gbps networking.
Like, I'm sure Nvidia is aware of Apple's "unified memory" as an alternative to their cards and yet...they aren't offering >24GB consumer cards yet, so clearly they don't feel threatened.
Don't get me wrong, I've always disliked Apple as a company, but the M series chips are brilliant, I'm writing this on one right now. But people seem to think that Apple will be able to get the same perf increases yoy when they're really stretching process limits by dumping everything onto the same die like that - where do they go from here?
That said Nvidia is using HBM so it does make me wonder why they aren't also doing memory on package with HBM, I think SK Hynix et al were looking at making this possible.
I'm glad we're headed in the direction of 3d silicon though, always seemed like we may as well scale in z, I imagine they can stack silicon/cooling/silicon/cooling etc. I'm sure they can use lithography to create cooling dies to sandwich between everything else. Then just pass connections/coolant through those.
With that said, this seems quite obvious - the type of customer that chooses Fly, seems like the last person to be spinning up dedicated GPU servers for extended periods of time. Seems much more likely they'll use something serverless which requires a ton of DX work to get right (personally I think Modal is killing it here). To compete, they would have needed to bet the company on it. It's way too competitive otherwise.
They're charging hyperscaler rates, and anyone willing to pay that much won't go with Fly.
For serverless usage they're only mildly overpriced compared to say Runpod, but I don't think of serverless as anything more than an onramp to renting dedicated machine, so it's not surprising to hear it's not taking off.
GPU workloads tend to have terrible cold-start performance by their nature, and without a lot of application-specific optimization it rarely makes financial sense not to take a cheaper continuous option if you have an even mildly consistent workload (and if you don't, then you're not generating much money for them).
My Fly machine loads from turned off to first inference complete in about 35 seconds.
If it’s already running, it’s 15 seconds to complete. I think that’s pretty decent.
And with the premium for per-second GPUs hovering around 2x that for hourly/monthly rentals, it gets even harder for products with scale to justify.
You'd want to have a lot of time where you're scaled to 0, but that in turn maps to a lot of cold starts.
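Rough napkin math on that trade-off, with illustrative prices rather than anyone's actual rates: a ~2x per-second premium only beats an always-on rental below about 50% utilization.

```python
# Break-even sketch for per-second vs always-on GPU pricing.
# All prices are illustrative assumptions, not quotes from any provider.
hourly_rate = 2.50                 # assumed $/hr for a dedicated GPU rental
per_second_rate = 2 * hourly_rate  # the ~2x premium mentioned above, billed only while running

hours_in_month = 730
dedicated_cost = hourly_rate * hours_in_month

for utilization in (0.10, 0.25, 0.50, 0.75):
    serverless_cost = per_second_rate * hours_in_month * utilization
    cheaper = "serverless" if serverless_cost < dedicated_cost else "dedicated"
    print(f"{utilization:>4.0%} busy: serverless ${serverless_cost:>7.0f} "
          f"vs dedicated ${dedicated_cost:>7.0f} -> {cheaper}")
```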
I don't know exactly what type of cloud offering would satisfy my needs, but what's funny is that attaching an AMD consumer GPU to a Raspberry Pi is probably the most economical approach for a lot of problems.
Maybe something like a system where I could hotplug a full GPU into a system for a reservation of a few minutes at a time and then unplug it and let it go back into a pool?
FWIW it's that there's a large number of ML-based workflows that I'd like to plug into progscrape.com, but it's been very difficult to find a model that works without breaking the hobby-project bank.
I wouldn't put anything confidential through it.
It turns out that I can run most of the appropriate models on my ancient laptop if I don't mind waiting for the complicated ones to finish. If I do mind, I can just send that part to OpenAI or similar. If your workflow can scale horizontally like my OCR pipeline crap, every box in your shop with RAM >= 16GB might be useful.
Apologies if this is all stuff you're familiar with.
If you could checkpoint a GPU quickly enough it would be possible to run multiple isolated workloads on the same GPUs without any issues.
Fly.io seems to attract similar developers as Cloudflare’s Workers platform. Mostly developers who want a PaaS like solution with good dev UX.
If that’s the case, this conclusion seems obvious in hindsight (hindsight is a bitch). Developers who are used to having infra managed for them so they can build applications don’t want to start building on raw infra. They want the dev velocity promise of a PaaS environment.
Cloudflare made a similar bet with GPUs I think but instead stayed consistent with the PaaS approach by building Workers AI, which gives you a lot of open LLMs and other models out of box that you can use on demand. It seems like Fly.io would be in a good position to do something similar with those GPUs.
To me, Fly's offering reads like a system integrator's solution. They assemble components produced mainly by third parties into an offered solution. The business model of a system integrator thrives on doing the least innovation/custom work possible to provide the offering. You position yourself to take maximal advantage of the investments and innovations driven by your third-party suppliers. You want to be squarely on their happy path.
Instead, this article reads like Fly, with good intentions, was trying to divert their tech suppliers' offer stream into niche edge cases outside of mainstream support.
This can be a valid strategy for products very late in their maturity lifecycle, where core innovation is stagnant, but for the current state of AI, with extremely rapid innovation waves coursing through the market, that strategy is doomed to fail.
Closely followed by, “I was right.” :)
A bit like "stop doing this..." and we think: omg, am I doing the same deadly mistake?
Apparently this is technically possible, if you can find the right person at Nvidia to talk about vGPU licensing and magic incantations. Hopefully someone reading this HN front page story can make the introduction.
To userspace Nvidia license server (a) in each host, (b) for entire Fly cloud, or (c) over WAN to Nvidia cloud?
Really what we'd have wanted to do would have been to give Fly Machines MIG slices. But to the best of my understanding, MIG is paravirtualized; it doesn't give you SR-IOV-style PCI addresses for the slices, but rather a reference Nvidia's userland libraries pass to the kernel driver, which is a dance you can't do across VM boundaries unless your hypervisor does it deliberately.
1. Instead of blocking VM start for license validation, convert that step into non-blocking async submission of usage telemetry, allowing every VM to start instantly. For PoC purposes, Nvidia's existing stack could be binary patched to proxy the license request to a script that isn't blocking VM start, pending step 2 negotiation.
2. Reconcile aggregate vGPU usage telemetry from Nvidia Fly-wide license server (Step 1) with aggregate vGPU usage reports from Fly's orchestration/control plane, which already has that data for VM usage accounting. In theory, Fly orchestration has more awareness of vGPU guest workload context than Nvidia's VM-start gatekeeping license agent, so there might be mutual interest in trading instant VM start for async workload analytics.
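A hedged sketch of the step-1 pattern, asynchronous usage reporting instead of a blocking license round-trip. Every name below is hypothetical; this is not Nvidia's or Fly's actual API, just the shape of the proposal:

```python
# Sketch only: replace "block VM start on license validation" with
# "enqueue usage telemetry and start immediately". All functions are
# hypothetical placeholders, not any vendor's real interface.
import queue
import threading

telemetry_queue: "queue.Queue[dict]" = queue.Queue()

def send_usage_report(record: dict) -> None:
    """Hypothetical: POST the record to an aggregation endpoint, reconciled later (step 2)."""
    pass

def telemetry_worker() -> None:
    while True:
        record = telemetry_queue.get()
        try:
            send_usage_report(record)
        except Exception:
            pass  # log and move on; a failed report must never block or kill a VM

threading.Thread(target=telemetry_worker, daemon=True).start()

def boot_microvm(vm_id: str) -> None:
    """Hypothetical placeholder for the actual (millisecond-class) VM launch."""
    pass

def start_vm(vm_id: str, vgpu_profile: str) -> None:
    # Instead of blocking on a license round-trip, record usage and start now.
    telemetry_queue.put({"vm": vm_id, "vgpu": vgpu_profile})
    boot_microvm(vm_id)
```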
Do you mean vCS [1], which is already integrated and licensed by KVM/RedHat/Nutanix, Xen/Citrix and VMware?
It's distinct from SR-IOV, PCI passthrough, vGPU-for-VDI, and MIG.
[1] https://blogs.nvidia.com/blog/virtualcomputeserver-expands-v...
> Alternatively, we could have used a conventional hypervisor. Nvidia suggested VMware (heh). But they could have gotten things working had we used QEMU. We like QEMU fine, and could have talked ourselves into a security story for it, but the whole point of Fly Machines is that they take milliseconds to start.
Someone could implement virtio-cuda (there are PoCs on github [1] [2]), but it would be a huge maintenance burden. It should really be done by Nvidia, in lockstep with CUDA extensions.
Nvidia vCS makes use of licensed GPGPU emulation code in the VM device model, which is QEMU in the case of KVM and Xen. Cloud Hypervisor doesn't use QEMU, it has its own (Rust?) device model, https://github.com/cloud-hypervisor/cloud-hypervisor/blob/ma...
So the question is, how to reuse Nvidia's proprietary GPGPU emulation code from QEMU, with Cloud Hypervisor? C and Rust are not friends. Can a Firecracker or Cloud Hypervisor VM use QEMU only for GPGPU emulation, alongside the existing device model, without impacting millisecond launch speed? Could an emulated vGPGPU be hotplugged after VM launch?
There has been some design/PoC work for QEMU disaggregation [3][4] of emulation functions into separate processes. It might be possible to apply similar techniques so that Cloud Hypervisor's device model (in Rust) process could run alongside a QEMU GPGPU emulator (in C) process, with some coordination by KVM.
If this approach is feasible, the architecture and code changes should be broadly useful to upstream for long-term support and maintenance, rather than custom to Fly. The custom code would be the GPGPU emulator, which is already maintained by Nvidia and running within QEMU on RedHat, Nutanix, etc.
It would also advance the state of the art in security isolation and access control of emulated devices used by VMs.
[1] https://github.com/coldfunction/qCUDA
[2] https://github.com/juniorprincewang/virtio-cuda-module
[3] https://www.qemu.org/docs/master/devel/multi-process.html
Any company (let alone Fly) doing this won't go against Nvidia Enterprise T&C?
> how to reuse Nvidia's proprietary GPGPU emulation code from QEMU
If it has been contributed to QEMU, it isn't GPL/LGPL?
> Could an emulated vGPGPU be hotplugged after VM launch
gVisor instead bounces ioctls back and forth between "guest" and host. Sounds like a nice, lightweight (even if limited & sandbox-busting) approach, too. Unsure if it mitigates the need for the "licensing dance" tptacek mentioned above, but I reckon the security posture of such a setup is unacceptable for Fly.
https://gvisor.dev/docs/user_guide/gpu/
> would also advance the state of the art in security isolation and access control of emulated devices used by VMs
I hope I'm not talking to DeepSeek / DeepResearch (:
Good question for a lawyer. Even more reason (beyond maintenance cost) that it would be best done by Nvidia. qCUDA paper has a couple dozen references on API remoting research, https://www.cs.nthu.edu.tw/~ychung/conference/2019-CloudCom....
> If it has been contributed to QEMU, it isn't GPL/LGPL?
Not contributed, but integrated with QEMU by commercial licensees. Since the GPGPU emulation code isn't public, presumably it's a binary blob.
> I hope I'm not talking to DeepSeek / DeepResearch (:
Will take that as a compliment :) Not yet tried DS/DR.
Red Hat for one doesn't ship any functionality that isn't available upstream, much less proprietary, and they have large customers using virtual GPU.
Was that recent, with MIG support for GPGPU partitioning? Is there a public mailing list thread or patch series for that work?
Nvidia has a 90-page deployment doc on vCS ("virtual compute server") for RedHat KVM, https://images.nvidia.com/content/Solutions/data-center/depl...
That said, the slowness of QEMU's startup is always exaggerated. Whatever they did with CH they could have done with QEMU.
Who even knows what the customer is ever going to want? Pivot. Pivot. Pivot.
PS: And pouring one out for the engineering hours that went into shipping GPUs. Sometimes it's a fine product, but just doesn't fit.
I don't want GPUs, but that's not quite the reason:
- The SOTA for most use cases for most classes of models with smallish inputs is fast enough and more cost efficient on a CPU.
- With medium inputs, the GPU often wins out, but costs are high enough that a 10x markup isn't worth it, especially since the costs are still often low compared to networking and whatnot. Factor in engineer hours and these higher-priced machines, and the total cost of a CPU solution is often still lower (always more debuggable).
- For large inputs/models, the GPU definitely wins, but now the costs are at a scale that a 10x markup is untenable. It's cheaper to build your own cluster or pay engineers to hack around the deficits of a larger, hosted LLM.
- For xlarge models™ (fuzzily defined to be anything substantially bigger than the current SOTA), GPUs are fundamentally the wrong abstraction. We _can_ keep pushing in the current directions (transformers requiring O(params * seq^2) work, pseudo-transformers requiring O(params * seq) work but with a hidden, always-activated state space buried in that `params` term which has to increase nearly linearly in size to attain the same accuracy with longer sequences, ...), but the cost of doing so is exorbitant. If you look at what's provably required to do those sorts of computations, the "chuck it in a big slice of vRAM and do everything in parallel" strategy gets more expensive compared to theoretical optimality as model size increases.
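A back-of-the-envelope sketch of that scaling claim, with an assumed (illustrative) model shape: per-token attention work grows with context length while the dense work does not, which is why long sequences get expensive.

```python
# Rough FLOP counts per token, illustrative model shape (not a real model).
d_model, n_layers = 4096, 32

def flops_per_token(seq_len: int) -> tuple[float, float]:
    attn = n_layers * 4 * seq_len * d_model     # rough: QK^T scores + weighted sum over context
    mlp = n_layers * 16 * d_model * d_model     # rough: two dense matmuls (d -> 4d -> d)
    return attn, mlp

for seq in (1_000, 10_000, 100_000):
    attn, mlp = flops_per_token(seq)
    print(f"seq={seq:>7}: attention is {attn / mlp:.2f}x the MLP work per token")
```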
I've rented a lot of GPUs. I'll probably continue to do so in the future. It's a small fraction of my overall spending though. There aren't many products I can envision which could be built on rented GPUs more efficiently than rented CPUs or in-house GPUs.
People aren't going to fly.io to rent GPUs. That's the actual reality here.
They thought they could sidecar it to their existing product offering for a decent revenue boost but they didn't win over the prospect's mind.
Fly has compelling product offerings and boring shovels don't belong in their catalog
In a different product, I was given some Google Cloud credits, which unlocked me to put the product in front of customer. This one also needed GPU but not as expensive as the previous. It works reliably and it's fast.
Personally, I had two use cases for GPU providers in the past 3 months.
I think there's definitely demand for reliability and better pricing. Not sure Fly will be able to touch that market though as it's not known for both (stability & developer friendly pricing).
P.S. If anyone is working on a serverless provider and wants me to test their product, reach out to me :)
Ironically, GCP and AWS GPUs are so overpriced that getting even half the number of credits from Runpod is like a 4x increase in "GPU runway", especially with $0.44/hr A40s.
For the LLM itself, I just used a custom startup script that downloaded the model once ollama was up. It's the same thing I'd do on a local cluster though. I'm not sure how fly could make it better unless they offered direct integration with ollama or some other inference server?
There are people doing GPU-enabled inference stuff on Fly.io. That particular slice of the market seems fine?
Unless you have constant load that justifies 24/7 deployments, most devs will just use an API. Or find solutions that don't require paying > $1/hour.
The reason we don't want GPUs is that renting is not priced well enough, and the technology isn't quite there yet either for us to make consistently good use of it.
Removing the offer just exacerbates the current situation. It feels like both curves are about to meet.
In either case you'll have the experience to bring back the offer if you feel it's needed.
What part of the cost gets out of hand? Having to have a Machine for every process? Do you remember what napkin math pricing you were working with?
For example, I could get a digitalocean vm with 2gb ram, 1vcpu, 50gb storage, 2tb bandwidth for $12/mo.
For the same specs at fly.io, it'd be ~$22/mo not including any bandwidth. It could be less if it scales to zero/auto stops.
I recently tried experimenting with two different projects at fly. One was an attic server to cache packages for NixOS. Only used by me and my own vms. Even with auto scaling to zero, I think it was still around $15-20/mo.
The other was a Fly GPU machine with Ollama on it. The cold start time + downloading a model each time was kind of painful, so I opted for just adding a 100GB volume. I don't actually remember what I was paying for that, but probably another $20/mo? I used it heavily for a few days to play around, and then not so much later. I do remember doing the math and thinking it wouldn't be sustainable if I wanted to use it for stuff like a Home Assistant voice assistant or going through PDFs/etc with Paperless.
On their own, neither of these are super expensive. But if I want to run multiple home services, the cost is just going to skyrocket with every new app I run. If I can rent a decent dedicated server for $100-$200/mo, then I at least don't have to worry about the cost increasing on me if a machine never scales to zero due to a healthcheck I forgot about or something like that.
Sorry if it's a bit rambly, happy to answer questions!
I would be curious how the Attic server would have gone with a Tigris bucket and local caching. Not sure how hard that is to pull off, but Tigris should be substantially cheaper than our NVMes and if you don't really NEED the io performance you're not getting anything for that money. Which is a long winded way of saying "we aren't great at block storage for anything but OLTP workloads and caches".
One thing we've struggled to communicate is how _cheap_ autosuspend/autostop make things. If that Machine is alive for 8 hours per day you're probably below $8/mo for that config. And it's so fast that it's viable for it to start/stop 45 times per day.
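The arithmetic behind that is easy to check, using the ~$22/mo always-on figure from the earlier comment (both numbers are rough):

```python
# Quick check of the "below $8/mo" claim; baseline price is the earlier
# comment's ~$22/mo always-on figure, treated as a rough number.
always_on_per_month = 22.0
hours_alive_per_day = 8

cost = always_on_per_month * hours_alive_per_day / 24
print(f"~${cost:.2f}/mo")   # ~= $7.33, consistent with "probably below $8/mo"
```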
It's kind of hard to make the thing stay alive with health checks, unless you're meaning external ones?
We are suboptimal for things that make more sense as a bunch of containers on one host.
I'll have to look at autosuspend again too. I remember having autostop configured, but not autosuspend. I could see that helping with start times a lot for some stuff. It's not supported on GPU machines though, right? I thought I read that but don't see it in the docs at a quick glance.
> It's kind of hard to make the thing stay alive with health checks, unless you're meaning external ones?
Sorry, I did mean external healthchecks. Something like zabbix/uptimekuma. For something public facing, I'd want a health check just to make sure it's alive. With any type of serverless/functions, I'd probably want to reduce the healthcheck frequency to avoid the machine constantly running if it is normally low-traffic.
> We are suboptimal for things that make more sense as a bunch of containers on one host.
I think my ideal offering would be something where I could install a fly.io management/control plane on my own hardware for a small monthly fee and use that until it runs out of resources. I imagine it's a pretty niche case for enterprise unless you can get a bunch of customers with on-prem hardware, but homelabbers would probably be happy.
fly.io was the first provider I tried any gpu offerings at, I probably should give it another shot now that I've used a few others.
- "Datacenter" means it's comparable to Runpod's secure cloud pricing.
- A spot instance of an H200 under someone's living room media console wouldn't go for A100 rates.
$3.50 will also get you an H100 at a laundry list of providers people build real businesses on.
Certainly all better track records than fly.io, especially on a post where they explain it's not working out for them as an offering and then promise they'll keep it shambling along.
Salad Cloud is also very interesting if your models can fit on a consumer GPU, but it's a different model than typical GPU providers.
I used to use cheap vms/vps from lowendtalk deals, but usually they're on over-subscribed hosts and can't do anything heavy.
Actual host recommendations: I like Racknerd and Hivelocity currently. OVH too, but I've read a lot of horror stories so I guess ymmv.
For simple inference, it's too expensive for a project that makes no money. Which is most projects.
My current company has some finance products. There was machine learning used for things like fraud and risk before the recent AI excitement.
Our executives are extremely enthused with AI, and seemingly utterly uncaring that we were already using it. From what I can tell, they genuinely just want to see ChatGPT everywhere.
The fraud team announced they have a "new" AI-based solution. I assume they just added a call to OpenAI somewhere.
It sucks from a business perspective of course, but it also sucks from the perspective of someone who takes pride in their work! I like to call it “artisan’s regret”.
This is a very strange statement to make. They are acting like inference today happens with freshly spun up VMs and model access over remote networks (and their local switching could save the day). It’s actually hitting clusters of hot machines with the model of choice already loaded into VRAM.
In real deployments, latency can be small (if implemented well), and speed comes down to the right GPU config for the model (which Fly doesn't offer).
People have built better shared-resource inference systems for LoRAs (OpenAI, Fireworks, Lorax) - but it's not VMs. It's model-aware: the right hardware for the base model, plus optimized caching/swapping of the LoRAs.
I'm not sure the Fly/VM way will ever be the path for ML. Their VM cold start time doesn't matter if the app startup requires loading 20GB+ of weights.
Companies like Fireworks are working on fast LoRA inference cold starts. Companies like Modal are working on fast serverless VM cold starts with a range of GPU configs (2xH100, A100, etc). These seem more like the two cloud primitives for AI.
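Rough arithmetic on why the VM start time stops mattering, with assumed bandwidth figures rather than measured ones:

```python
# Napkin math: weight-loading time dwarfs microVM start time.
# Bandwidth figures are assumptions for illustration, not benchmarks.
weights_gb = 20
vm_cold_start_s = 0.3   # millisecond-class microVM start, rounded up generously

for source, gb_per_s in [("object storage over network", 1.0),
                         ("local NVMe", 5.0),
                         ("host page cache / pinned RAM", 20.0)]:
    load_s = weights_gb / gb_per_s
    print(f"{source:<32} ~{load_s:5.1f}s to load weights vs {vm_cold_start_s}s VM start")
```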
Is there not a market for the kind of data science stuff where GPUs help but you are not using an LLM? Like statistical models on large amounts of data and so on.
Maybe fly.io customer base isn't that sort of user. But I was pushing a previous company to get AWS GPUs because it would save us money vs CPU for the workload.
I might have had a bad sample set so far. But the "doing statistics" bit seems to be the interesting thing for them; the tooling doesn't really factor into solutions/plans that often, and learning something new because "engineer says it's shinier" doesn't really seem to motivate them much :/
Also GPUs may be used when productionizing work done by DS but maybe I am in a tiny niche here of (Data Science) intersection (Scale up) minus (Deep learning LLM etc.)
One interesting thing about all this is that 1 GPU / 1 VM doesn't work today with AMD GPUs like the MI300X. You can't do PCIe passthrough, but AMD is working on adding it to ROCm. We plan to be one of the first to offer this.
If Dave does something malicious, we know where Dave lives, and we can threaten his livelihood. If your competitor does it you have to prove it, and they are protected from snooping at least as much as you are so how are you going to do that? I insist that the mutually assured destruction of coworkers substantially changes the equation.
In a Kubernetes world you should be able to saturate a machine with pods from the same organization even across teams by default, and if you're worried that the NY office is fucking with the SF office in order to win a competition, well then there should be some non-default flags that change that but maybe cost you a bit more due to underprovisioning.
You got a machine where one pod needs all of the GPUs and 8 cores, great. We'll load up some 8 core low memory pods onto there until the machine is full.
I also think the call that people want LLMs is slightly off. More correct to say people want a black box that gives answers. LLMs have the advantage that nobody really knows anything about tuning them. So, it is largely a raw power race.
Taking it back to ML, folks would love a high level interface that "just worked." Dealing with the GPUs is not that, though.
No, I want GPU. BERT models are still useful.
The point is that your service is so expensive that one or two months of rent is enough to build a PC from scratch and place it somewhere in your workplace to run 24/7. For applications that need GPU power, downtime or latency usually does not really matter. And you can always add an extra server to be sure.
What do Nvidia’s lawyers think of this? There are some things that best not mentioned in a blog post, and this is one of them.
The whole cloud computing world was built on hypervisors and CPU virtualization, I wonder if we'll see a similar set of innovations for GPUs at commodity level pricing. Maybe a completely different hardware platform will emerge to replace the GPU for these inference workloads. I remember reading about Google's TPU hardware and was thinking that would be the thing - but I've never seen anyone other than Google talk about it.
The CPU part of high level user applications will probably be written in very high level languages/runtimes with, sometimes, some other parts being bare metal accelerated (GPU or CPU).
Devs wanting hardcore performance should write their stuff directly in GPU assembly (I think you can do that only with AMD) or at best with a SPIR-V assembler.
Not to mention that doing complex stuff around the closed-source Linux Nvidia driver is just asking for trouble. Namely, either you deploy hardware/software that Nvidia did validate, or prepare to suffer... which means 'middle-men' deploying Nvidia-validated solutions have near-zero added value.
But have they considered pivoting some of said compute to some 'private, secure LLM in a box' solution?
I've lately been toying with the idea of training from extensive docs and code, some open, some not, for both code generation and insights.
I went down the RAG rabbit hole, and frankly, the amount of competing ideas of 'this is how you should do it', from personal blogs to PaaS companies, overwhelmed me. Vector dbs, ollama, models, langchain, and various one off tools linking to git repos.
I feel there has to be substantial market for whoever can completely simplify that flow for dummies like me, and not charge a fortune for the privilege.
Many companies overinvest in fully-owned hardware rather than renting from clouds. Owning hardware means you underwrite the cost of unrented inventory, and it prevents you from scaling. H100 pricing is now lower than any self-hosted option, even without factoring in TCO & headcount.
(Disclaimer: I work at a GPU cloud Voltage Park -- with 24k H100s as low as $2.25/hr [0] -- but Fly.io is not the only one I've noticed purchase hardware when renting might have saved some $$$)
_But_ the demand for open source models is just beginning. If they really have a big inventory of under-utilized GPUs and users want particular solutions on demand.... give it to them???
Like TTS, STT, video creation, real-time illustration enhancement, DeepSeek and many others. You guys are great at devops; make useful offerings on demand, similar to what HuggingFace offers, no???
For a lot of use-cases you need at least two A100s with a very fast interconnect, potentially many more. This isn't even about scaling with requests but about running one single LLM instance.
Sure, you will find all sorts of ways people managed to run this or that on smaller platforms; the problem is that it quite often doesn't scale to what is needed in production, for a lot of subtle and less subtle reasons.
So it’s not just that openai and anthropic apis are good enough, they are also cheap enough, and still overpriced compared to the industry
Your GPU investment won't do as well as you thought, but you are also wasting time on security. If the end user and the market don't care, then you can consider not caring as well. Worst case, you can pay for any settlement with …. more GPU credits.
That's why NVIDIA has NIMs [0]. A super easy way to use various LLMs.
> like with our portfolio of IPv4 addresses, I’m even more comfortable making bets backed by tradable assets with durable value.
Is that referencing the GPUs, the hardware? If yes, why should they have durable value? Historically, hardware like that depreciated fast and reached a value of 0; energy efficiency alone kills e.g. old server hardware. Is something different here?
* never say never
But server GPUs tend to depreciate slower.
Which we also see with e.g. the A100 80GiB: it is approaching 5 years of age, but is still sold and used widely and still costs ~$20k USD (and I remember a noticeably higher price before DeepSeek...).
The thing is, sure, the A100 80GiB is a much older arch than its successors, but the main bottleneck is the memory, of which it has 80GiB.
What was the price at launch?
Google and AWS helpfully offered their managed LLM AI services, but they don't really have anything terribly more useful than just machines with GPUs. Which are expensive.
I'm going to check fly.io...
I considered using a Fly GPU instance for a project and went with Hetzner instead. Fly.io’s GPU offering was just way too expensive to use for inference.
The smaller models are getting more and more capable, for high-frequency use-cases it'll probably be worth using local quantized models vs paying for API inference.
Actually, you're still wrong about JavaScript edge functions. CF Workers slap.
Ha ha, it didn't terrify Modal. It ships with all those security problems, and pretends it doesn't have them. Sorry Eric.
i would pay to have apis for:
sam2, florence, blip, flux 1.1, etc.
whatever use case I would have reached for Fly GPUs for, I can't justify _not_ using Replicate. maybe Fly can do better and offer premium queues for that with their juicy infra?
you're right! as a software dev I see dockerization and foisting these models as a burden, not a necessity.
The idea that a cloud compute provider can't make GPU compute into a profitable business is pretty laughable.
for our p5 quota I had to talk to our TAM team on AWS, while most of our quota requests are instant usually.
The whole thing is sorta antithetical.
The real problem is the lack of security-isolated slicing one or more GPUs for virtual machines. I want my consumer-grade GPU to be split up into the host machine and also into virtual machines, without worrying about resident neighbor cross-talk! Gosh that sounds like why I moved out of my apartment complex, actually.
The idea of having to assign a whole GPU via PCI passthrough is just asinine. I don't need to do that for my CPU, RAM, network, or storage. Why should I need to do it for my GPU?
Wait... what?
I've been a Fly customer for years and it's the first time I hear about this.
Opportunity for Intel and AMD.