Removing the "experimental" tag is certainly a milestone to celebrate.
I'm looking forward to distros shipping a default kernel with Rust support enabled. That, to me, will be the real point of no return, where Rust is so prevalent that there will be no going back to a C only Linux.
Interesting, I use both (NixOS on servers, Arch for desktop) and never seen that. Seems you're referring to this: https://rust-for-linux.com/drm-panic-qr-code-generator (which looks like this: https://github.com/kdj0c/panic_report/issues/1)
Too bad NixOS and Arch are so stable I can't remember the last time any full system straight up panicked.
For now there aren't many popular drivers that use Rust, but there are currently 3 in-development GPU drivers that use it, and I suspect that when those get merged that'll be the real point of no return:
- Asahi Linux's driver for Apple GPUs - https://rust-for-linux.com/apple-agx-gpu-driver
- The Nova GPU driver for NVIDIA GPUs - https://rust-for-linux.com/nova-gpu-driver
- The Tyr GPU driver for Arm Mali GPUs - https://rust-for-linux.com/tyr-gpu-driver
I suspect the first one of those to be actually used in production will be the Tyr driver, especially since Google's part of it and they'll probably want to deploy it on Android, but for the desktop (and server!) Linux use-case, the Nova driver is likely to be the major one.
What does this mean? The kernel has no specific support for Rust userspace -- the syscall API does not suddenly change because some parts of the kernel are compiled with rustc.
As I understand it, the kernel has a flag, off by default, to enable building Rust code as part of the kernel. They’re saying they’d like to see a distribution choose to default this flag to on for the kernels they build.
Rust isn’t an invading tribe. It’s just a tool.
People doing open-source work often feel very tribal about their code and block ideas that are good but threaten their position in the community. Essentially same thing as office politics except it's not about money, it's about personal pride.
Fundamentally, if a tool is good and provides benefits then why not use it? You'll be met with concerns about maintainability if you just airdrop something written in Ada/SPARK, but that's fair - just as it was fair that the introduction of Rust in the Linux kernel was indeed met with concerns about maintainability. It seems that those were resolved to the satisfaction of the group that decides things and the cost/benefit balance was considered net positive.
https://github.com/microsoft/typescript-go/discussions/411#d...
Taken from the above Github discussion regarding Go and Typescript.
Do I treat your post from an obvious throwaway account created 13 minutes ago as somehow representative of the C community or the Linux kernel community or for that matter as representative of any community at all?
Come on, please. There's a ton of things that I consider worthy of criticism in the Rust community, but make a better case.
For all we know it could be the same person behind both the GitHub post and the Hacker News throwaway you're speaking to.
Do you have an example of harassment by other R4L maintainers? Because you've spammed the same single example 3-4 times in this post alone.
- There was an experiment to have rust in Linux. It got provisional approval from Linus.
- Some people in the Linux kernel blocked essentially any changes being made to make the rust experiment able to succeed.
- Some people who were trying to get rust into Linux got frustrated. Some quit as a result.
- Eventually Linus stepped in and told everyone to play nice.
This whole drama involved like 5 people. It’s not “the majority of the rust community”. And it kinda had nothing to do with rust the language. It was really a management / technical leadership challenge. And it seems like it’s been addressed by leadership stepping in and giving clearer guidelines around how rust and C should interoperate, both at a technical and interpersonal level.
So what’s your point? Is Rust bad because some people had an argument about it on the Linux kernel mailing list? ‘Cos uh, that’s not exactly a new thing.
It may be true in this case, but certainly you have seen corporate bloat being added to "open" source projects before.
Maybe, maybe not. From my own interactions with the Linux kernel community, it's very different from typical "modern software politics", at least different enough that I'd make the claim they stand apart from other communities.
If there is any community I'd assume makes most of their choices disregarding politics or pressures and purely on technical measures, it would be the Linux kernel community. With that said, no person nor community is perfect, but unless there is some hard evidence pointing to that the Linux people were forced to accept Rust into the kernel for whatever reason, then at least I'd refrain from believing in such theories.
That’s an “appeal to authority” fallacy.
The people who decided Rust has value in the kernel are Linux kernel developers, i.e., traditional C developers who see value in using Rust in Linux. Rust in Linux has caused plenty of headaches. If Rust didn't add commensurate value to make the trouble worth it, it wouldn't be getting adopted.
"Although we entertained occasional thoughts about implementing one of the major languages of the time like Fortran, PL/I, or Algol 68, such a project seemed hopelessly large for our resources: much simpler and smaller tools were called for. All these languages influenced our work, but it was more fun to do things on our own."
-- https://www.nokia.com/bell-labs/about/dennis-m-ritchie/chist...
People arguing for UNIX/C, also tend to forget the decade of systems languages that predates them, starting with JOVIAL in 1958.
They also forget that after coming up with UNIX/C, its creators went on to create Plan 9, which was supposed to use Alef, abandoned its design, later acknowledged the lack of GC as a key issue, created Inferno and Limbo, and finally contributed to Go's original design.
"Alef appeared in the first and second editions of Plan 9, but was abandoned during development of the third edition.[1][2] Rob Pike later explained Alef's demise by pointing to its lack of automatic memory management, despite Pike's and other people's urging Winterbottom to add garbage collection to the language;[3] also, in a February 2000 slideshow, Pike noted: "…although Alef was a fruitful language, it proved too difficult to maintain a variant language across multiple architectures, so we took what we learned from it and built the thread library for C."[4]"
-- https://en.wikipedia.org/wiki/Alef_(programming_language)
UNIX and C's creators moved on; apparently those that worship them haven't kept their history lesson up to date.
I'm sure your choice of words helps the discussion.
Basically, technology evolves, and you or anyone else can't stop that just because you don't like it for whatever nonsense (non technical) reason.
> nobody is forced to suddenly have to learn a new language, and that people who want to work purely on the C side can very much continue to do so.
So any argument that Rust would ever fully replace C in such a way that C would go away and be banned in the kernel is NOT what Linus said.
Rust is less tight. That’s fine for an application, but the kernel is used by everyone who uses Linux.
Rustians disparage C because it doesn’t have Rust behavior.
Why must the kernel have Rust behavior?
No, the 50+ years of ridiculously unavoidable memory corruption errors have done more to disparage C than anyone working in another language.
Even Linus Torvalds called out Hector Martin.
https://lkml.org/lkml/2025/2/6/1292
> On Thu, 6 Feb 2025 at 01:19, Hector Martin <marcan@marcan.st> wrote:
> > If shaming on social media does not work, then tell me what does, because I'm out of ideas.
> How about you accept the fact that maybe the problem is you.
> You think you know better. But the current process works.
> It has problems, but problems are a fact of life. There is no perfect.
> However, I will say that the social media brigading just makes me not want to have anything at all to do with your approach.
> Because if we have issues in the kernel development model, then social media sure as hell isn't the solution. The same way it sure as hell wasn't the solution to politics.
> Technical patches and discussions matter. Social media brigading - no thank you.
> Linus
My view, now that Hector has resigned from the LKML both as himself and as Lina, is there is no problem any more. If Hector wants to project a persona, or decide his identity is the persona, or maybe multiple personas, that is fine on social media. People do that. So long as he's not sockpuppeting Linux kernel maintenance, it's fine.
[source] https://x.com/Lina_Hoshino/status/1862840998897861007
The Linux kernel team has a habit of forceful pushback that breaks souls and hearts when prodded too much.
Looks like Hector has deleted his Mastodon account, so I can't look back what he said exactly.
Oh, I still have the relevant tab open. It's about code quality: https://news.ycombinator.com/item?id=43043312
I remember following the tension for a bit. Yes, there are other subjects about how things are done, but after reading it, I remember framing "code quality" as the base issue.
In high-stakes software development environments, egos generally run high, and when people clash and don't back down, sparks fly. If that warning is ignored, then something has to give.
If I'm mistaken, I'd enjoy a good explanation and gladly stand corrected.
This is what happened here. This is probably the second or third time I witness this over 20+ years. Most famous one was over CPU schedulers, namely BFS, again IIRC.
---
Yes. I remember that message.
Also let's not forget what marcan said [0] [1].
In short, a developer didn't want their C codebase littered with Rust code, which I can understand; then the Rust team said that they could maintain that part, not complicating his life further (kudos to them); and the developer lashed out at them to GTFO of "his" lawn (which, again, I understand but don't condone. I'd have acted differently).
This again boils down to code quality matters. Rust is a small child compared to the whole codebase, and wariness from old timers is normal. We can discuss behaviors until eternity, but humans are humans. You can't just standardize everything.
Coming to marcan, how he behaved is a big no in my book, too. Because it's not healthy. Yes, the LKML is not healthy, but this is one of the things which makes you wrong even when you're right.
I'm also following a similar discussion list, which has a similar level of friction in some matters, and the correct thing is to take some time off and touch grass when feeling tired and close to burnout. Not to run like a lit torch among flammable people.
One needs to try to be the better example esp. when the environment is not in an ideal shape. It's the hardest thing to do, but it's the most correct path at the same time.
[0]: https://web.archive.org/web/20250205004552mp_/https://lwn.ne...
If I had been enthusiastic about Rust, and wanted to see if it could maybe make sense for Rust to be part of the Linux kernel[0], I would probably have turned my attention to gccrs.
What is then extra strange is that there has been some public hostility against gccrs (a WIP Rust compiler for GCC) from the rustc camp (the sole main Rust compiler, primarily based on LLVM).
It feels somewhat like a corporate takeover, not something where good and benign technology is most important.
And money is at stake as well, the Rust Foundation has a large focus on fundraising, like how their progenitors at Mozilla/Firefox have or had a large focus on fundraising. And then there are major Rust proponents who openly claim, also here on Hacker News, that software and politics are inherently entwined.
[0]: And also not have it as a strict goal to get Rust into the kernel, for there might be the possibility that Rust was discovered not to be a good fit; and then one could work on that lack of fit after discovery and maybe later make Rust a good fit.
Rust should have done exactly one thing and done it as well as possible: be a C replacement, sticking as close as possible to C syntax. Now we have something that is a halfway house between C, C++, JavaScript (Node.js, actually), Java and possibly even Ruby, with a syntax that makes Perl look good and a bunch of instability thrown in for good measure.
It's as if the Jehovah's Witnesses decided to get into tech and convince the world of the error of its ways.
>Rust should have done exactly one thing and do that as good as possible: be a C replacement and do that while sticking as close as possible to the C syntax.
Irony.
The goal of Rust is to build reliable systems software like the kind I've worked on for the last many years, not to be a better C, the original goal of which was to be portable assembler for the PDP-11.
> Now we have something that is a halfway house between C, C++, JavaScript (Node.js, actually), Java and possibly even Ruby with a syntax that makes perl look good and with a bunch of instability thrown in for good measure.
I think Rust's syntax and semantics are both the best by some margin across any industrial language, and they are certainly much better than any of the ones you listed. Also, you missed Haskell.
On the flip side, there have been many downright C-only sycophants in the Linux kernel who have done everything possible to throttle and sideline the Rust for Linux movement.
There have been multiple very public statements made by other maintainers that they actively reject Rust in the kernel rather than coming in with open hearts and minds.
Why are active maintainers rejecting code that falls under their remit of responsibility?
I have a personal principle of not using LLVM-based languages (equal parts I don't like how LLVM people behave against GCC and I support free software first and foremost), so I personally watch gccrs closely, and my personal ban on Rust will be lifted the day gccrs becomes an alternative compiler.
This brings me to the second biggest reservation about Rust. The language is moving too fast, without any apparent plan or maturing process. Things are labeled unstable, there's no spec, and apparently nobody is working on these very seriously, which you also noted.
I don't understand why people are hostile against gccrs? Can you point me to some discussions?
> It feels somewhat like a corporate takeover, not something where good and benign technology is most important.
As I noted above, the whole Rust ecosystem feels like it's on a crusade, esp. against C++. I write C++, I play with pointers a lot, and I understand the gotchas, and also the team dynamics and how it's becoming harder to write good software with larger teams regardless of programming language. But the way Rust propels itself forward leaves a very bad taste in the mouth, and I personally don't like to be forced into something. So, while gccrs will lift my personal ban, I'm not sure I'll take to the language enthusiastically. On the other hand, another language Rust people apparently hate, Go, ticks all the right boxes as a programming language. Yes, they made some mistakes and walked some back at the last moment, but the whole effort looks tidier and better than Rust's.
In short, being able to borrow-check things is not a license to push people around like this, and they are building themselves a good countering force with all this enthusiasm they're pumping around.
Oh, I'll only thank them for making other programming languages improve much faster. Like how LLVM has stirred GCC devs into action and made GCC a much better compiler in no time.
Open source and free software mean almost exactly the same set of software; the only difference between the two terms, according to RMS and other free software advocates, is the emphasis. Sort of like the difference between the Gulf of America and the Gulf of Mexico: they mean the same body of water, but reflect a different viewpoint about it.
This confusion arises because RMS prefers the term "free software" over "open source", and also prefers copyleft over permissive licenses, so people sort of get the idea that they are the same distinction.
RMS certainly does not consider the difference between open source and free software to be merely one of 'emphasis.' According to him they are completely different animals. Here are his words on their difference[0]:
> 'When we call software “free,” we mean it respects the users’ essential freedoms: the freedom to run it, to study and change it, and to redistribute copies with or without changes (see http://www.gnu.org/philosophy/free-sw.html). This is a matter of freedom, not price, so think of “free speech,” not “free beer.” ... Nearly all open source software is free software; the two terms describe almost the same category of software. But they stand for views based on fundamentally different values. Open source is a development methodology; free software is a social movement. For the free software movement, free software is an ethical imperative, because only free software respects the users’ freedom. By contrast, the philosophy of open source considers issues in terms of how to make software “better”—in a practical sense only. It says that non-free software is a suboptimal solution. For the free software movement, however, non-free software [including non-free open source software] is a social problem, and moving to free software is the solution.'
[emphasis and square brackets mine]
It's not that RMS 'prefers the term "free software" over "open source"' but that he prefers that software be free instead of non-free. The source being open is just an incomplete precondition for being free.
[0] https://courses.cs.duke.edu/common/compsci092/papers/open/st...
> Nearly all open source software is free software; the two terms describe almost the same category of software.
I see no disagreement, how is GP "completely wrong"?
LLVM is free software, and it is a confusion to equate copyleft and free software, though I still maintain that free and open source are very distinct concepts which refer to different categories of licenses. That contrast is better stated by RMS in the article on the subject which I linked above.
Original reply:
Primarily it's this first line here:
>LLVM is free software. You appear to be making the common mistake of confusing the permissive vs. copyleft distinction with the open source vs. free software distinction.
LLVM is NOT free software because it is released under the Apache license, which is an open source license but not a free software license. This is opposed to the linux kernel and GCC which are free software because their source is available under the GPL license. Further it is not really a confusion to equate permissive licensing with open source as distinguished from copyleft and free software. In this context, free is equivalent in meaning to copyleft, as distinguished from the more permissive open source licenses.
GNU disagrees with you: https://www.gnu.org/licenses/license-list.html
> Apache License, Version 2.0 - This is a free software license, compatible with version 3 of the GNU GPL.
Furthermore:
> Further it is not really a confusion to equate permissive licensing with open source as distinguished from copyleft and free software.
You are in disagreement with the FSF on this issue. "permissive" licenses also follow the Four Essential Freedoms, none of which require viral licensing.
The four essential freedoms[0] are good reading for the first principles of software freedom concept.
[0]https://en.wikipedia.org/wiki/The_Free_Software_Definition
Open Source can be used as Free Software because Open Source can be used as proprietary software or anything else, as long as you include the license or mention the author somewhere or whatever. But these are both standards for actual licenses, and the actual licenses are different. Copyleft software can not be included in your proprietary software.
Copyleft software is restrictive. You are required to allow people to access it, and required to redistribute it in source form if you are redistributing it in compiled form (or over the network for modern copyleft licenses). You are also required to include all of the changes that you have made to the things that you have compiled within that source distribution, and to also distribute other code that is required to make your software package work under the same licensing.
The confusion is only in people spreading misinformation trying to confuse the two things. You clearly seem to know that RMS can prefer copyleft over permissive licenses, but still need to pretend that there's no difference between the two. If you know that someone can prefer white chocolate to dark chocolate, there's obviously something wrong with you if you in the same breath say that there is no difference between white chocolate and dark chocolate. Why deceive people? What's the point?
If they're all exactly the same, everybody should be using the GPL in Linux then. Shouldn't even be a thought. Why waste time forcing corporate-friendly licenses if it doesn't matter and some people are against them? Shouldn't parsimony rule then?
Could you point at a single quote from a member of the compiler team, current or former, that can be construed as hostility?
If you really want to know, we can email about it, but I don't think it matters, because whatever it was is clearly under the bridge by now.
Generally speaking I think that Rust channels should be open to all implementations
from reading those threads at the time, the developer in question was deliberately misunderstanding things. "their" codebase wasn't to be "littered" with Rust code - it wasn't touched at all.
<< Alex Gaynor recently announced he is formally stepping down as one of the maintainers of the Rust for Linux kernel code with the removal patch now queued for merging in Linux 6.19. Alex Gaynor was one of the original developers to experiment with Rust code for Linux kernel modules. He's drifted away from Rust Linux kernel development for a while due to lack of time and is now formally stepping down as a listed co-maintainer of the Rust code. After Wedson Almeida Filho stepped down last year as a Rust co-maintainer, this now leaves Rust For Linux project leader Miguel Ojeda as the sole official maintainer of the code while there are several Rust code reviewers. >>
2. Maintainers come and go all the time, this is how open source works.
3. The only reason Phoronix reports on this is because anytime they mention Rust it attracts rage bait clicks, for example, your comment.
The overall rules haven't changed.
Strictly speaking they've always been obligated to not break the Rust code, but the R4L developers have agreed to fix it on some subsystems' behalf IIRC, so Rust can be broken in the individual subsystem trees. But I think it's been the case all along that you can't send it to Linus if it breaks the Rust build, and you probably shouldn't send it to linux-next either.
https://arstechnica.com/gadgets/2024/09/rust-in-linux-lead-r...
> Ted Ts'o said that the Rust developers have been trying to avoid scaring kernel maintainers, and have been saying that "all you need is to learn a little Rust". But a little Rust is not enough to understand filesystem abstractions, which have to deal with that subsystem's complex locking rules. There is a need for documentation and tutorials on how to write filesystem code in idiomatic Rust. He said that he has a lot to learn; he is willing to do that, but needs help on what to learn
If he didn't want Rust in the kernel, he would have said so, and there would be no Rust in the kernel. It is also why there is no C++ in the kernel: Linus doesn't like C++. It is that simple.
And I respect that. Linux is hugely successful under Linus's direction, so I trust his decisions, whatever opinion I have about them.
This is the first time I've had a comment hit -3 which, I mean, I get it!!
While Rust in the kernel was experimental, this rule was relaxed somewhat to avoid introducing a barrier for programmers who didn't know Rust, so their work could proceed unimpeded while the experiment ran. In other words, the Rust code was allowed to be temporarily broken while the Rust maintainers fixed up uses of APIs that were changed in C code.
As long as you can detect if/when you break it, you can then either quickly pick up enough to get by (if it's trivial), or you ask around.
Rust contributions to the Linux kernel were made by individuals, and are very obviously subject to the exact same expectations as other kernel contributions. Maintainers have responsibilities, not “communities”.
Yes it does.
If they completely replace an API then sure, probably.
But for most changes, like adding a param to a function or a struct, they basically have to learn nothing.
Rust isn't unlike C either. You can write a lot of it in a pretty C like fashion.
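To make that concrete, here is a small sketch (the function and its names are invented for illustration) of Rust written in a deliberately C-like style: explicit indices, mutable locals, a `while` loop, no iterators or closures in sight.

```rust
// A hypothetical helper in a deliberately C-like style.
// Everything here is plain imperative code, yet it is still
// bounds-checked, and `buf` can never be a dangling pointer.
fn checksum(buf: &[u8]) -> u32 {
    let mut sum: u32 = 0;
    let mut i = 0;
    while i < buf.len() {
        // wrapping_add makes the C-style overflow behavior explicit
        sum = sum.wrapping_add(buf[i] as u32);
        i += 1;
    }
    sum
}

fn main() {
    let data = [1u8, 2, 3, 4];
    println!("checksum = {}", checksum(&data)); // prints "checksum = 10"
}
```

An idiomatic Rust programmer would write `buf.iter().fold(...)` instead, but the point is that nothing forces the C-trained contributor to.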
I think that with all of Rust's borrowing rules the statement is very iffy.
Most of the things people stub their toe on in Rust coming from C are already UB in C.
For a given rust function where you might expect a C programmer to need to interact due to a change in the C code, most of the lifetime rules will have already been hammered out before the needed updates to the rust code. It's possible, but unlikely, that the C programmer will need to significantly change what is being allocated and how.
The Rust for Linux folks have offered that they would fix up such changes, at least during the experimental period. I guess what this arrangement looks like long term will be discussed ~now.
I'm not entirely convinced that Java has much mindshare among system programmers. During my 15-year career in the field I haven't once heard "Boy, I wish we wrote this <network driver | user-space offload networking application | NIC firmware> in Java."
The 'factory factory' era of Java spoiled the language thoroughly for me.
That's not really true. Data races are possible in Java.
Rust controls that quite a bit.
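A minimal sketch of what "controls that" means in practice (the function is hypothetical): Rust's `Send`/`Sync` rules refuse to compile a plain mutable value shared across threads, so a shared counter has to go through a synchronization primitive such as a `Mutex`.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Sharing `&mut usize` across threads is a compile error in Rust;
// the type system forces us to pick something like Arc<Mutex<_>>.
fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                // each increment holds the lock, so no lost updates
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // always 4000; the equivalent unsynchronized Java code could lose updates
    println!("{}", parallel_count(4, 1000));
}
```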
The weak point will always be the startup time of the JVM which is why you won't see it be used for short lived processes. But for a long lived process like a browser I see no obstacles.
linux = { "alpha", "arc", "arm", "aarch64", "csky", "hexagon",
"loongarch", "m68k", "microblaze", "mips", "mips64", "nios2",
"openrisc", "parisc", "powerpc", "powerpc64", "riscv",
"s390", "s390x", "sh", "sparc", "sparc64", "x86", "x86_64",
"xtensa" }
rust = { "arm", "aarch64", "amdgcn", "avr", "bpfeb", "bpfel",
"csky", "hexagon", "x86", "x86_64", "loongarch", "m68k",
"mips", "mips64", "msp430", "nvptx", "powerpc", "powerpc64",
"riscv", "s390x", "sparc", "sparc64", "wasm32", "wasm64",
"xtensa" }
print(f"Both: {linux.intersection(rust)}")
print(f"Linux, but not Rust: {linux.difference(rust)}")
print(f"Rust, but not Linux: {rust.difference(linux)}")
Which yields:
Both: {'aarch64', 'xtensa', 'sparc', 'm68k', 'mips64', 'sparc64', 'csky', 'riscv', 'powerpc64', 's390x', 'x86', 'powerpc', 'loongarch', 'mips', 'hexagon', 'arm', 'x86_64'}
Linux, but not Rust: {'nios2', 'microblaze', 'arc', 'openrisc', 'parisc', 's390', 'alpha', 'sh'}
Rust, but not Linux: {'avr', 'bpfel', 'amdgcn', 'wasm32', 'msp430', 'bpfeb', 'nvptx', 'wasm64'}
Personally, I've never used a computer from the "Linux, but not Rust" list, although I have gotten close to a DEC Alpha that was on display somewhere, and I know somebody who had a Sega Dreamcast (`sh`) at some point.
MicroBlaze is also a soft core, and its newer MicroBlaze V variant is RISC-V based, so presumably its users could move to actual RISC-V if anyone cared.
All others haven't received new hardware within the last 10 years, everybody using these will already be running an LTS kernel on there.
It looks like there really are no reasons not to require rust for new versions of the kernel from now on then!
I do agree that it was AMD which really made 64-bit mainstream. There’s an interesting what-if game about how the 90s might’ve gone if Intel’s marketing had been less successful or if Rick Belluzo hadn’t bought into it since he killed PA-RISC and HPUX before moving to SGI where he killed both MIPS and Irix and gave the high-end graphics business to nVidia.
> Rust, which has been cited as a cause for concern around ensuring continuing support for old architectures, supports 14 of the kernel's 20-ish architectures, the exceptions being Alpha, Nios II, OpenRISC, PARISC, and SuperH.
That's a lot better than I expected to be honest, I was thinking maybe Rust supported 6-7 architectures in total, but seems Rust already has pretty wide support. If you start considering all tiers, the scope of support seems enormous: https://doc.rust-lang.org/nightly/rustc/platform-support.htm...
Rust also has an experimental GCC-based codegen backend (based on libgccjit (which isn't used as a JIT)).
So platforms that have neither LLVM nor a recent GCC are screwed.
additionally, I believe the GCC backend is incomplete. the `core` library is able to compile, but rust's `std` cannot be.
Not having a recent GCC and not having GCC are different things. There may be architectures that have older GCC versions, but are no longer supported for more current C specs like C11, C23, etc.
From https://news.ycombinator.com/newsguidelines.html: "Please use the original title, unless it is misleading or linkbait". Therefore: when misleading please edit.
Unsuccessful experiments have no end unless motivation to keep trying runs out — but Rust seems to have no end to motivation behind it. Besides, if it were to ever reach the point of there being no remaining motivation, there would be no remaining motivation for those who have given up to announce that they have given up, so we'd never see the headline to begin with.
Imagine a world where the Rust experiment hadn't been going well. There is absolutely nothing stopping someone from keeping at it to prove why Rust should be there. The experiment can always live as long as someone wants to keep working on it. Experiments fundamentally can only end when successful, or when everyone gives up (leaving nobody to make the announcement).
That said, consider me interested to read the announcements HN has about those abandoned experiments. Where should I look to find them?
Don't get me wrong, rust in the kernel is good, because the memory safety and expressiveness it brings makes it worth it DESPITE the huge effort required to introduce it. But I disagree that new things are inherently good.
New things aren't automatically good. But you don't get new things that are good without trying new things.
By avoiding new things, you're guaranteed to not experience less stable, immature, and confusing things. But also any new things that are good.
In my mind, the reasoning for rust in this situation seems flawed.
But that's not the only reason Rust is useful - see the end of the write-up from the module author https://asahilinux.org/2022/11/tales-of-the-m1-gpu/ ("Rust is magical!" section)
You still get the safety guarantees of Rust in unsafe code like bounds checking and lifetimes.
In fact one of the main points of Rust is the ability to build safe abstractions on top of unsafe code, rather than every line in the entire program always being unsafe and possibly invoking UB.
unsafe {
// this is normal Rust code, you can write as much of it as you like!
// (and you can bend some crucial safety rules)
// but try to limit and document what you put inside an unsafe block.
// then wrap it in a safe interface so no one has to look at it again
}
(but as others pointed out, unsafe should only be used at the 'edges'; the vast majority of code should never need to use unsafe)
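As a runnable sketch of that pattern (the function is made up for illustration): the unsafe block shrinks to a single line, justified by the check right above it, and callers only ever see a safe API.

```rust
// A safe wrapper around unsafe internals: the one unsafe line is
// justified by the emptiness check, and no caller can misuse it.
fn first_and_last(v: &[u32]) -> Option<(u32, u32)> {
    if v.is_empty() {
        return None;
    }
    // SAFETY: we just checked that v is non-empty, so index 0 and
    // v.len() - 1 are both in bounds.
    let (a, b) = unsafe { (*v.get_unchecked(0), *v.get_unchecked(v.len() - 1)) };
    Some((a, b))
}

fn main() {
    assert_eq!(first_and_last(&[3, 1, 4, 1, 5]), Some((3, 5)));
    assert_eq!(first_and_last(&[]), None);
}
```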
Mostly Rust has been used for drivers so far. Here's the first Rust driver I found:
https://github.com/torvalds/linux/blob/2137cb863b80187103151...
It has one trivial use of `unsafe` - to support Send & Sync for a type - and that is apparently temporary. Everything else uses safe APIs.
Functions like that can and should be marked unsafe in rust. The unsafe keyword in rust is used both to say “I want this block to have access to unsafe rust’s power” and to mark a function as being only callable from an unsafe context. This sounds like a perfect use for the latter.
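A minimal sketch of that second usage (the function names here are illustrative, not from any kernel code):

```rust
/// Reads the element at `idx` without a bounds check.
///
/// # Safety
/// The caller must guarantee `idx < slice.len()`.
unsafe fn read_unchecked(slice: &[u8], idx: usize) -> u8 {
    // SAFETY: the caller upholds `idx < slice.len()`.
    unsafe { *slice.get_unchecked(idx) }
}

/// A safe wrapper that discharges the obligation itself, so its own
/// callers need no `unsafe` at all.
fn read_or_zero(slice: &[u8], idx: usize) -> u8 {
    if idx < slice.len() {
        // SAFETY: we just checked `idx < slice.len()`.
        unsafe { read_unchecked(slice, idx) }
    } else {
        0
    }
}

fn main() {
    let data = [7u8, 8, 9];
    assert_eq!(read_or_zero(&data, 1), 8);
    assert_eq!(read_or_zero(&data, 99), 0);
}
```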
That's not how it works. You don't mark them unsafe unless it's actually required for some reason. And even then, you can limit the scope to a line or two in the majority of cases. You mark blocks unsafe if you have to access raw memory and there's no way around it.
The general rule of thumb is that safe code must not be able to invoke UB.
- Hardware access is unsafe
- Kernel interface is unsafe
How much remains in the layer in-between that's actually safe?
I ported a C skip list implementation to rust a few years ago. Skip lists are like linked lists, but where instead of a single "next" pointer, each node contains an array of them. As you can imagine, the C code is packed full of fiddly pointer manipulation.
The rust port certainly makes use of a fair bit of unsafe code. But the unsafe blocks still make up a surprisingly small minority of the code.
Porting this package was one of my first experiences working with rust, and it was a big "aha!" moment for me. Debugging the C implementation was a nightmare, because a lot of bugs caused obscure memory corruption problems. They're always a headache to track down. When I first ported the C code to rust, one of my tests segfaulted. At first I was confused - rust doesn't segfault! Then I realised it could only segfault from a bug in an unsafe block. There were only two unsafe functions it could be, and one obvious candidate. I had a read of the code, and spotted the error nearly immediately. The same bug would probably have taken me hours to fix in C because it could have been anywhere. But in rust I found and fixed the problem in a few minutes.
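The pattern from that port can be sketched in miniature (a toy example, not the skip list itself): raw-pointer manipulation confined to two small `unsafe` spots behind a safe interface.

```rust
// A toy sketch: one heap-allocated node managed through a raw pointer,
// the way C-style list code does it, with the unsafety fenced in.
struct Node {
    value: i32,
}

struct Handle {
    ptr: *mut Node,
}

impl Handle {
    fn new(value: i32) -> Self {
        // Box::into_raw hands us ownership as a raw pointer.
        Handle { ptr: Box::into_raw(Box::new(Node { value })) }
    }

    fn value(&self) -> i32 {
        // SAFETY: `ptr` came from Box::into_raw in `new` and is only
        // freed in `drop`, so it is valid for the lifetime of `self`.
        unsafe { (*self.ptr).value }
    }
}

impl Drop for Handle {
    fn drop(&mut self) {
        // SAFETY: reclaim the allocation exactly once.
        unsafe { drop(Box::from_raw(self.ptr)); }
    }
}

fn main() {
    let h = Handle::new(42);
    assert_eq!(h.value(), 42);
}
```

If a segfault ever shows up, there are exactly two `unsafe` spots to inspect, which is the "aha" described above.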
Another thing you can do from rust without "unsafe": output some buggy source code that invokes UB in a language like C to a file, then shell out to the compiler to compile that file and run it.
Which part here is misinformation? Do you know something we, or the author, do not? I'm quite curious what that might be.
Does anyone know what's the current state / what is the current timeline for something like this to be feasible?
Other platforms don't have a leader who hates C++ yet accepts a language that is also quite complex, one that even has two macro systems of Lisp-like wizardry (see Serde).
OSes have been written with C++ in the kernel since the late 1990s, and AI is being powered by hardware (CUDA) that was designed specifically to accommodate the C++ memory model.
Also, the Rust compiler depends on a compiler framework written in C++; without it there is no Rust compiler, and apparently they are in no hurry to bootstrap it.
As does the GCC C compiler.
I don’t think C++ is going anywhere any time soon. Not with it powering all the large game engines, llvm, 30 million lines of google chrome and so on. It’s an absolute workhorse.
Furthermore, in terms of extensions to the language to support more obtuse architecture, Rust has made a couple of decisions that make it hard for some of those architectures to be supported well. For example, Rust has decided that the array index type, the object size type, and the pointer size type are all the same type, which is not the case for a couple of architectures; it's also the case that things like segmented pointers don't really work in Rust (of course, they barely work in C, but barely is more than nothing).
There's a ton of standardization work that really should be done before these are safe for library APIs. Mostly fine to just write an application that uses one of these crates though.
Bitfield packing rules get pretty wild. Sure the user facing API in the language is convenient, but the ABI it produces is terrible (particularly in evolution).
Rust's initial guarantee was worded such that usize is sufficient to round-trip a pointer (making it effectively uptr), and there remains concern among several of the maintainers about breaking that guarantee, despite people on the only target that would be affected basically saying they'd rather see it broken. The more fundamental problem is that many crates are perfectly happy opting out of compiling for weirder platforms. I've designed some stuff that relies on 64-bit system properties, and I'd rather like the ability to say "no compile for you on platforms where usize is not u64" and get impl From<usize> for u64 and impl From<u64> for usize. If you've got something like that, it also provides a neat way to say "I don't want to opt out of [or into] compiling for usize≠uptr" while keeping backwards compatibility.
If you want to see some long, gory debates on the topic, https://internals.rust-lang.org/t/pre-rfc-usize-is-not-size-... is a good starting point.
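The opt-out idea could be sketched today with a cfg guard (a hypothetical pattern, not an existing crates.io mechanism):

```rust
// Sketch: refuse to build on targets where usize is not 64 bits,
// then rely on lossless conversions everywhere else in the crate.
#[cfg(not(target_pointer_width = "64"))]
compile_error!("this crate assumes a 64-bit usize");

fn index_to_u64(i: usize) -> u64 {
    // Lossless on every target this crate allows itself to build for.
    i as u64
}

fn main() {
    assert_eq!(index_to_u64(123), 123u64);
}
```

What this can't do today is grant the `From<usize> for u64` impls in the standard library, which is why the proposal asks for language/ecosystem support rather than a local workaround.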
The proper approach to resolving this in an elegant way is to make the guarantee target-dependent. Require all depended-upon crates to acknowledge that usize might differ from uptr in order to unlock building for "exotic" architectures, much like how no-std works today. That way "nearly every platform" can still rely on the guarantee with no rise in complexity.
Yet Rust is not Go and this solution is probably not the right one for Rust. As laid out in a link on a sibling comment, one possibility is to do away with pointer <=> integer conversions entirely, and use methods on pointers to access and mutate their addresses (which may be the only thing they represent on some platforms, but is just a part of their representation on others). The broader issue is really about evolving the language and ecosystem away from the mentality that "pointers are just integers with fancy sigil names".
Pointers should at no point be converted into numbers and back as that trips up many assumptions (special runtimes, static/dynamic analysis tools, compiler optimizations).
Additionally, I would make it a priority that writing FFIs should be as easy as possible, and requires as little human deliberation as possible. Even if Rust is safe, its safety can only be assumed as long as the underlying external code upholds the invariants.
Which is a huge risk factor for Rust, especially in today's context of the Linux kernel. If I have an object created/handled by external native code, how do I make sure that it respects Rust's lifetime/aliasing rules?
What's the exact list of rules my C code must conform to?
Are there any static analysis/fuzzing tools that can verify that my code is indeed compliant?
So as not to only hate on Rust, I want to remark that baking in the idea that pointers can be used for arithmetic isn't a good one either.
Can you expand on this point? Are you worried about the external code freeing the memory out from under you? That is beyond any guarantee: the compiler cannot guarantee what happens at runtime no matter what the author of a language wants. The CPU will do what it's told; it couldn't care about Rust's guarantees even if you built your code entirely with Rust.
When you are interacting with the real world and real things you need to work with different assumptions, if you don't trust that the data will remain unmodified then copy it.
No matter how many abstractions you put on top of it, there is still lightning in a rock messing with 1s and 0s.
Assuming I'm reading these blog posts [0, 1] correctly, it seems that the size_of::<usize>() == size_of::<*mut u8>() assumption is changeable across editions.
Or at the very least, if that change (or a similarly workable one) isn't possible, both blog posts do a pretty good job of pointedly not saying so.
[0]: https://faultlore.com/blah/fix-rust-pointers/#redefining-usi...
[1]: https://tratt.net/laurie/blog/2022/making_rust_a_better_fit_...
Personally, I like 3.1.2 from your link [0] best, which involves getting rid of pointer <=> integer casts entirely, and just adding methods to pointers, like addr and with_addr. This needs no new types and no new syntax, though it does make pointer arithmetic a little more cumbersome. However, it also makes it much clearer that pointers have provenance.
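Those methods do exist on raw pointers now (`addr`, `with_addr`, and `map_addr` were stabilized in Rust 1.84). A minimal sketch of pointer tagging without a single integer cast:

```rust
fn main() {
    let x: u64 = 7;
    let p: *const u64 = &x;

    // Read the address without an `as usize` cast and without
    // "exposing" provenance:
    let a = p.addr();
    assert_eq!(a % 8, 0); // u64 is 8-aligned, so the low bits are free

    // Stash a tag in the low bit, then strip it again. map_addr keeps
    // the pointer's provenance, which `usize -> pointer` casts lose.
    let tagged = p.map_addr(|a| a | 1);
    let untagged = tagged.map_addr(|a| a & !1);

    // SAFETY: `untagged` has the same address and provenance as `p`.
    assert_eq!(unsafe { *untagged }, 7);
}
```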
I think the answer to "can this be solved with editions" is more "kinda" rather than "no"; you can make hard breaks with a new edition, but since the old editions must still be supported and interoperable, the best you can do with those is issue warnings. Those warnings can then be upgraded to errors on a per-project basis with compiler flags and/or Cargo.toml options.
Provenance-related work seems to be progressing at a decent pace, with some provenance-related APIs stabilized in Rust 1.84 [0, 1].
[0]: https://blog.rust-lang.org/2025/01/09/Rust-1.84.0/
[1]: https://doc.rust-lang.org/std/ptr/index.html#provenance
One of those goals is that code which was written for an older edition will continue to work. You should never be forced to upgrade editions, especially if you have a large codebase which would require significant modifications.
The other goal is that editions are interoperable, i.e. that code written for one edition can rely on code written for a different edition. Editions are set on a per-crate basis, this seems to be the case for both rustc [1] and of course cargo.
As I see it, what you're saying would mean that code written for this new edition initially couldn't use most of the crates on crates.io as dependencies. This would then create pressure on those crates' authors to update their edition. And all of this would be kind of pointless fragmentation and angst, since most of those crates wouldn't be doing funny operations on pointers anyway. It might also have the knock-on effect of making new editions much more conservative, since nobody would want to go through that headache again, thus undermining another goal of editions.
[1]: https://rustc-dev-guide.rust-lang.org/guides/editions.html
The reverse is probably more true, though. Rust has native SIMD support for example, while in standard C there is no way to express that.
'C is not a low-level language' is a great blog post about the topic.
std::simd is nightly only.
> while in standard C there is no way to express that.
In ISO Standard C(++) there's no SIMD.
But in practice C vector extensions are available in Clang and GCC which are very similar to Rust std::simd (can use normal arithmetic operations).
Unless you're talking about CPU specific intrinsics, which are available to in both languages (core::arch intrinsics vs. xmmintrin.h) in all big compilers.
There was one accelerator architecture we were working on that discussed making the entire datapath 32-bit (taking less space) and having a 32-bit index type with a 64-bit pointer size, but this was eventually rejected as too hard to get working.
In the end, I'm not sure that's better. Or maybe we should have extra-large pointers again (way back when, 32-bit was so large that we stuffed other stuff in there), like CHERI proposes (though I think it still has a secret sidecar of data about the pointers).
Would love to see Apple get closer to CHERI. They could make a big change as they are vertically integrated, though I think their Apple Silicon moment for the Mac would have been the time.
I wonder what big pointers does to performance.
Doubly-linked lists are also a pain to implement, and they're heavily used in the kernel.
You mean in safe rust? You can definitely do self-referential structs with unsafe and Pin to make a safe API. Heck every future generated by the compiler relies on this.
https://git.kernel.org/pub/scm/linux/kernel/git/a.hindborg/l...
And yes, I have had a look at the examples. Maybe one or two years ago there was a significant patch submitted to the kernel, and the number of unsafe sections made me realize at that moment that Rust, in terms of kernel development, might not be what it is advertised to be.
> https://git.kernel.org/pub/scm/linux/kernel/git/a.hindborg/l..
Right? Thank you for the example. Let's start by saying the obvious: this is not an upstream driver but a fork, and it is also considered by its author to be a PoC at best. You can see this acknowledged on its very web page, https://rust-for-linux.com/nvme-driver, which says "The driver is not currently suitable for general use." So I am not sure what point you were trying to make by offering something that is not even production-quality code.
Now let's move to the analysis of the code. The whole code, without crates, counts only 1500 LoC (?). Quite small but ok. Let's see the unsafe sections:
rnvme.rs - 8x unsafe sections, 1x SyncUnsafeCell used for NvmeRequest::cmd (why?)
nvme_mq/nvme_prp.rs - 1x unsafe section
nvme_queue.rs - 6x unsafe, not sections but complete trait impls
nvme_mq.rs - 5x unsafe sections, 2x SyncUnsafeCell used, one for IoQueueOperations::cmd second for AdminQueueOperations::cmd
In total, this is 23x unsafe sections/traits over 1500 LoC, for a driver that is not even production quality. I don't have time, but I wonder how large this number would become if all the crates this driver uses were pulled into the analysis too.
Sorry, I am not buying that argument.
The unsafe parts have to be written and verified manually very carefully, but once that's done, the compiler can ensure that all further uses of these abstractions are correct and won't cause UB.
Everything in Rust becomes "unsafe" at some lower level (every string has unsafe in its implementation, the compiler itself uses unsafe code), but as long as the lower-level unsafe is correct, the higher-level code gets safety guarantees.
This allows kernel maintainers to (carefully) create safe public APIs, which will be much safer to use by others.
C doesn't have such explicit split, and its abstraction powers are weaker, so it doesn't let maintainer create APIs that can't cause UB even if misused.
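A sketch of the pattern (the MMIO setup here is a stand-in, not real kernel bindings): the module's only `unsafe` spots are private, and outside code can reach them only through an API that cannot cause UB.

```rust
mod regs {
    /// A toy register bank. A real driver would construct this over a
    /// device's MMIO region; here the caller supplies any backing memory.
    pub struct Mmio {
        base: *mut u32,
        len: usize,
    }

    impl Mmio {
        /// # Safety
        /// `base` must point to `len` valid, writable u32s that outlive
        /// the returned `Mmio`.
        pub unsafe fn new(base: *mut u32, len: usize) -> Self {
            Mmio { base, len }
        }

        /// Safe API: rejects out-of-range indices instead of trusting them.
        pub fn write(&mut self, idx: usize, val: u32) -> bool {
            if idx >= self.len {
                return false;
            }
            // SAFETY: idx was bounds-checked; validity comes from `new`'s contract.
            unsafe { self.base.add(idx).write_volatile(val) };
            true
        }

        pub fn read(&self, idx: usize) -> Option<u32> {
            if idx >= self.len {
                return None;
            }
            // SAFETY: as above.
            Some(unsafe { self.base.add(idx).read_volatile() })
        }
    }
}

fn main() {
    let mut backing = [0u32; 4]; // stand-in for a device's register bank
    // SAFETY: `backing` has 4 elements and outlives `bank`.
    let mut bank = unsafe { regs::Mmio::new(backing.as_mut_ptr(), 4) };
    assert!(bank.write(1, 0xbeef));
    assert_eq!(bank.read(1), Some(0xbeef));
    assert_eq!(bank.read(7), None); // no UB, just a refusal
}
```

The two `unsafe` blocks are the full audit surface; every caller of `write`/`read` gets the compiler's guarantees for free.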
let's start by prefacing that 'production quality' C is 100% unsafe in Rust terms.
> Sorry, I am not buying that argument.
here's where we fundamentally disagree: you listed a couple dozen unsafe places in 1.5kLOC of code; let's be generous and say that's 10% - and you're trying to sell it as a bad thing, whereas I'm seeing the same numbers and think it's a great improvement over status quo ante.
I don't know what one should even make from that statement.
> here's where we fundamentally disagree: you listed a couple dozen unsafe places in 1.5kLOC of code; let's be generous and say that's 10%
It's more than 10%. You didn't even bother to look at the code but still presented what is in reality a toy driver example as something credible (?) to support your argument that I'm spreading FUD. Kinda silly.
Even if it was only that much (10%), the fact it is in the most crucial part of the code makes the argument around Rust safety moot. I am sure you heard of 90/10 rule.
The time will tell but I am not holding my breath. I think this is a bad thing for Linux kernel development.
It's just a fact. By definition of the Rust language, unsafe Rust is approximately as safe as C (technically Rust is still safer than C in its unsafe blocks, but we can ignore that).
> you didn't even bother to look at the code but still presented
of course I did, what I've seen were one-liner trait impls (the 'whole traits' from your own post) and sub-line expressions of unsafe access to bindings.
This is quite dubious in a practical sense, since Rust unsafe blocks must manually uphold the safety invariants that idiomatic Safe Rust relies on at all times, which includes, e.g. references pointing to valid and properly aligned data, as well as requirements on mutable references comparable to what the `restrict` qualifier (which is rarely used) involves in C. In practice, this is hard to do consistently, and may trigger unexpected UB.
Some of these safety invariants can be relaxed in simple ways (e.g. &Cell<T> being aliasable where &mut T isn't) but this isn't always idiomatic or free of boilerplate in Safe Rust.
-------
The primary security concern regarding Rust generally centers on the approximately 4% of code written within unsafe{} blocks. This subset of Rust has fueled significant speculation, misconceptions, and even theories that unsafe Rust might be more buggy than C. Empirical evidence shows this to be quite wrong.
Our data indicates that even a more conservative assumption, that a line of unsafe Rust is as likely to have a bug as a line of C or C++, significantly overestimates the risk of unsafe Rust. We don’t know for sure why this is the case, but there are likely several contributing factors:
unsafe{} doesn't actually disable all or even most of Rust’s safety checks (a common misconception).
The practice of encapsulation enables local reasoning about safety invariants.
The additional scrutiny that unsafe{} blocks receive.
-----From https://security.googleblog.com/2025/11/rust-in-android-move...
> The additional scrutiny that unsafe{} blocks receive.
None of this supports an argument that "unsafe Rust is safer than C". It's just saying that with enough scrutiny on those unsafe blocks, the potential bugs will be found and addressed as part of development. That's a rather different claim.
The report says that their historical data gives them an estimate of 1000 Memory Safety issues per Million Lines of Code for C/C++.
The same team currently has 5 Million lines of Rust code, of which 4% are unsafe (200 000). Assuming that unsafe Rust is on par with C/C++, this gives us an expected value of about 200 memory safety issues in the unsafe code. They have one. Either they have 199 hidden and undetected memory safety issues, or the conclusion is that even unsafe Rust is orders of magnitude better than C/C++ when it comes to memory safety.
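The arithmetic above, as a back-of-the-envelope check (numbers taken from the Google report cited in this thread):

```rust
fn main() {
    let rust_loc: u64 = 5_000_000; // total Rust LoC in the report
    let unsafe_loc = rust_loc * 4 / 100; // ~4% of it is unsafe
    assert_eq!(unsafe_loc, 200_000);

    // Historical C/C++ density: ~1000 memory-safety vulns per MLOC.
    // If unsafe Rust were merely "as bad as C", we'd expect:
    let expected = unsafe_loc * 1000 / 1_000_000;
    assert_eq!(expected, 200);
    // Observed: 1. So on this data, even unsafe Rust tracks roughly
    // two orders of magnitude better than the C/C++ baseline.
}
```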
I trust them to track these numbers diligently. This is a seasoned team building foundational low level software. We can safely assume that the Android team is better than the average C/C++ programmer (and likely also than the average Rust programmer), so the numbers should generalize fairly well.
Part of the benefit of Rust is indeed that it allows local reasoning about crucial parts of the code. This does allow for higher scrutiny, which will find more bugs, but that's a result of the language design. unsafe {} was designed with that in mind - this is not a random emergent property.
Do you honestly believe that there is 1 vulnerability per 5 MLoC?
Yes, I believe that at least the order of magnitude is correct because 4 800 000 of those lines are guaranteed to not have any by virtue of the compiler enforcing memory safety.
So it's 1 per 200 000, which is 1-2 orders of magnitude worse, but still pretty darn good. Given that not all unsafe code actually has potential for memory safety issues and that the compiler still will enforce a pretty wide set of rules, I consider this to be achievable.
This is clearly a competent team that's writing important and challenging low-level software. They published the numbers voluntarily and are staking their reputation on these reports. From personal observation of the Rust projects we work on, the results track with the trend.
There's no reason for me to disbelieve the numbers put forward in the report.
You accidentally put the finger on the key point, emphasis mine.
When you have a memory-unsafe language, the complexity of the whole codebase impact your ability to uphold memory-related invariants.
But unsafe blocks are, by definition, limited in scope, and assuming you design your codebase properly, they shouldn't interact with unsafe blocks in a different module. So the complexity related to one unsafe block is contained to its own module and doesn't spread outside. That makes everything much more tractable, since you never have to reason about the whole codebase, only about a limited scope each time.
It's not obviously clear to me which features are the relevant ones, but my general observation is that lifetimes, unsafe blocks, the borrow checker allow people to reason about code in smaller chunks. For example knowing that there's only one place where a variable may be mutated supports understanding that at the same time, no other code location may change it.
unsafe{} doesn't actually disable all or even most of Rust’s safety checks (a common misconception). The practice of encapsulation enables local reasoning about safety invariants.
which is not fully correct. Undefined behavior in unsafe blocks can and will leak into the safe Rust code, so the talk of "local reasoning", "encapsulation", and "safety invariants" doesn't hold there. This whole blog always read to me too much like marketing material disguised with some data so that it is not so obvious. IMHO.
Strictly speaking, that encapsulation enables local reasoning about safety invariants does not necessarily imply that encapsulation guarantees local reasoning about safety invariants. It's always possible to write something unadvisable, and no language is capable of preventing that.
That being said, I think you might be missing the point to some extent. The idea behind the sentence is not to say that the consequences of a mistake will not be felt elsewhere. The idea is that when reasoning about whether you're upholding invariants and/or investigating something that went wrong, the amount of code you need to look at is bounded such that you can ignore everything outside those bounds; i.e., you can look at some set of code in complete isolation. In the most conservative/general case that boundary would be the module boundary, but it's not uncommon to be able to shrink those boundaries to the function body, or potentially even further.
This general concept here isn't really new. Rust just applied it in a relatively new context.
In the most general case, you don't. But again, I think that that rather misses the point the statement was trying to get at.
Perhaps a more useful framing for you would be that in the most general case the encapsulation and local reasoning here is between modules that use unsafe and everything else. In some (many? most?) cases you can further bound how much code you need to look at if/when something goes wrong since not all code in unsafe modules/functions/blocks depend on each other, but in any case the point is that you only need to inspect a subset of code when reasoning about safety invariants and/or debugging a crash.
> From their statement it appears as if there's such a simple correlation between "here's your segfault" and "here's your unsafe block that caused it",
I don't get that sense from the statement at all.
It's safer because it spends the human attention resource more wisely.
I've seen this argument thrown around often here on HN ("$IMPROVEMENT is still not perfect! So let's keep the status quo.") and it baffles me.
C is not perfect and it still replaced ASM in 99% of its use cases.
Distribution of bugs across the whole codebase is not following the normal distribution but multimodal. Now, imagine where the highest concentration of bugs will be. And how many bugs there will be elsewhere. Easy to guess.
You're doing it again!
Doesn't matter where the majority of bugs will be. If you avoid the minority it's still an improvement.
Also, Rust safety is not related at all to bugs. You seem to have a misunderstanding of what Rust is or what safe Rust provides.
(Also, I'd challenge the rest of your assumptions, but that's another story.)
If I spend 90% of my time debugging freakishly difficult issues, and Rust solves the other 10% for me, then I don't see it as a good bargain. I need to learn a completely new language, surround myself with a team that is also not hesitant to learn it, and all that under the assumption that it won't make some other aspects of development worse. And it surely will.
Have you written any Rust?
I do firmly expect that we're less than a decade out from seeing some reference algorithm be implemented in Rust rather than C, probably a cryptographic algorithm or a media codec. Although you might argue that the egg library for e-graphs already qualifies.
Java, really? I don’t think Java has been essential for a long time.
Is Perl still critical?
https://linuxfromscratch.org/lfs/view/development/chapter07/...
Edit:
On my (Arch) system removing perl requires removing: auto{conf,make}, git, llvm, glibmm, among others.
I suspect what we are going to see isn't so much Rust making it through that door, but LLVM. Rust and friends will come along for the ride.
As much as I like Java, Java is also not critical for OS utilities. Bash shouldn't be, per se, but a lot of scripts are actually Bash scripts, not POSIX shell scripts and there usually isn't much appetite for rewriting them.
I don't agree with "any modern system" but some modern systems certainly don't come with perl.
I have no doubt C will be around for a long time, but I think Rust also has a lot of staying power and won’t soon be replaced.
...to the extent I could see it pushing Rust out of the kernel in the long run. Rust feels like a sledgehammer to me where the kernel is concerned.
Its problem right now is that it's not stable enough. Language changes still happen, so it's the wrong time to try.
You do get some creature comforts like slices (fat pointers) and defer (goto replacement). But you also get forced to write a lot of explicit conversions (I personally think this is a good thing).
The C interop is good but the compiler is doing a lot of work under the hood for you to make it happen. And if you export Zig code to C... well you're restricted by the ABI so you end up writing C-in-Zig which you may as well be writing C.
It might be an easier fit than Rust in terms of ergonomics for C developers, no doubt there.
But I think long-term things like the borrow checker could still prove useful for kernel code. Currently you have to specify invariants like that in a separate language from C, if at all, and it's difficult to verify. Bringing that into a language whose compiler can check it for you is very powerful. I wouldn't discount it.
Zig, for all its ergonomic benefits, doesn’t make memory management safe like Rust does.
I kind of doubt the Linux maintainers would want to introduce a third language to the codebase.
And it seems unlikely they’d go through all the effort of porting safer Rust code into less safe Zig code just for ergonomics.
That was where my argument was supposed to go. Especially a third language whose benefits over C are close enough to Rust's benefits over C.
I can picture an alternate universe where we'd have C and Zig in the kernel, then it would be really hard to argue for Rust inclusion.
(However, to be fair, the Linux kernel has more than C and Rust, depending on how you count, there are quite a few more languages used in various roles.)
Rust is different because it both:
- significantly improves the security of the kernel by removing the nastiest class of security vulnerabilities.
- and reduces the cognitive burden on contributors by allowing the invariants that must be upheld to be encoded in the type system.
That doesn't mean Zig is a bad language for a particular project, just that it's not worth adding to an already massive project like the Linux kernel. (Especially a project that already has two languages, C and now Rust.)
So aside from being like 1-5% unsafe code vs 100% unsafe for C, it’s also more difficult to misuse existing abstractions than it was in the kernel (not to mention that in addition to memory safety you also get all sorts of thread safety protections).
In essence it’s about an order of magnitude fewer defects of the kind that are particularly exploitable (based on research in other projects like Android)
So while there could definitely be an exploitable memory bug in the unsafe part of the kernel, expect those to be at least two orders of magnitude less frequent than with C (as an anecdotal evidence, the Android team found memory defects to be between 3 and 4 orders of magnitude less in practice over the past few years).
In practice you see several orders of magnitude fewer segfaults (as in Google's Android CVE data). You can compare the Deno and Bun issue trackers for segfaults to see it in action.
As mentioned a billion times, seatbelts don't prevent death, but they do reduce the likelihood of dying in a traffic accident. Unsafe isn't a magic bullet, but it's a decent caliber round.
It reminds me of this fun question:
What’s the difference between a million dollars and a billion dollars? A billion dollars.
A million dollars is a lot of money to most people, but it’s effectively nothing compared to a billion dollars.
[1]: this the order of magnitude presented in the recent Android blog post: https://security.googleblog.com/2025/11/rust-in-android-move...
> Our historical data for C and C++ shows a density of closer to 1,000 memory safety vulnerabilities per MLOC. Our Rust code is currently tracking at a density orders of magnitude lower: a more than 1000x reduction.
It's "embedded programming" where you often start to run into weird platforms (or sub-platforms) that only have a c compiler, or the rust compiler that does exist is somewhat borderline. We are sometimes talking about devices which don't even have a gcc port (or the port is based on a very old version of gcc). Which is a shame, because IMO, rust actually excels as an embedded programming language.
Linux is a bit marginal, as it crosses the boundary and is often used as a kernel for embedded devices (especially ones that need to do networking). The 68k people have been hit quite hard by this, linux on 68k is still a semi-common usecase, and while there is a prototype rust back end, it's still not production ready.
I’m sure if Rust becomes more popular in the game dev community, game consoles support will be a solved problem since these consoles are generally just running stock PC architectures with a normal OS and there’s probably almost nothing wrong with the stock toolchain in terms of generating working binaries: PlayStation and Switch are FreeBSD (x86 vs ARM respectively) and Xbox is x86 Windows, all of which are supported platforms.
Otherwise it is yak shaving instead of working on the actual game code.
Some people like to do that, that is how new languages like Rust get adoption, however they are usually not the majority, hence why mainstream adoption without backing from killer projects or companies support is so hard, and very few make it.
Also, you will seldom see anyone throw away the comfort of Unreal, Unity, or even Godot tooling to use Rust instead, unless they are more focused on proving that the game can be made in Rust than on the actual game design experience.
Good luck shipping arbitrary binaries to this target. The most productive way for an indie to ship to Nintendo in 2025 is to create the game in Unity and build via the special Nintendo version of the toolchain.
How long do we think it would take to fully penetrate all of these pipeline stages with rust? Particularly Nintendo, who famously adopts the latest technology trends on day 1. Do we think it's even worthwhile to create, locate, awaken and then fight this dragon? C# with incremental GC seems to be more than sufficient for a vast majority of titles today.
Devil May Cry for the Playstation 5 was shipped with it.
https://pubs.opengroup.org/onlinepubs/9799919799/utilities/c...
Still, whoever supports POSIX is expected to have a C compiler available if applying for the Open Group stamp.
Yes? I'm especially keen on the project to make just 'Linux' without any Gnu components. Jokes aside, yes, other operating systems exist.
> [...] and yes some parts of it are outdated in modern platforms, especially Apple, Google, Microsoft OSes and Cloud infrastructure.
Well, also on modern hardware in general. POSIX's ideas about storage abstractions don't gel well with modern hardware. Well, not just modern hardware: atime has been cast aside since time immemorial. But lots of the other ideas are also dropping by the wayside.
In practice, we get something like 'POSIX-y' or 'close-enough-to-POSIX', which doesn't get a stamp from OpenGroup. I think Linux largely falls in that boat.
But regardless of that distinction, a C compiler is very much part of the informal 'close-enough-to-POSIX' class.
It's a bit of a shame that Unix has held back operating system research for so long. Fortunately, the proliferation of virtual machines and hypervisors has re-invigorated the field. Just pretend that your hypervisor is your nano-kernel and your VMs are your processes. No need to run a full-on Linux kernel in your VM; you can have just enough code running for your webserver or database etc., no general-purpose kernel needed. See e.g. https://mirage.io/
That isn't always the case. Slow compilations are usually because of procedural macros and/or heavy use of generics. And even then compile times are often comparable to languages like typescript and scala.
Most of the examples of "this is what makes Rust compilation slow" are code generation related; for example, a Rust proc_macro making compilation slow would be equivalent to C building a code generator (for schemas? IDLs?), running it, then compiling the output along with its user.
Of course that doesn't negate the cost, but building can always be done on another machine as a way to circumvent the problem.
It's only when you get very large projects that compilation time becomes a problem.
If TypeScript compilation speed wasn't a problem, then why would Microsoft put resources into rewriting the TypeScript compiler in Go to make it faster[1]?
if only due to the Lindy Effect
Even then, there are still some systems that will support C but won't support Rust any time soon: systems with old compilers or compiler forks, and systems with unusual data widths that violate Rust's assumptions (Rust assumes 8-bit bytes, IIRC).
`clang-cl` does this with `cl.exe` on Windows.
Let alone the fact that, conceptually, people with locked-down environments are precisely those who would really want the extra safety offered by Rust.
I know that real life is messy but if we don't keep pressing, nothing improves.
If you're developing something individually, then sure, you have a lot of control. When you're developing as part of an organization or a company, you typically don't. And if there's non-COTS hardware involved, you are even more likely not to have control.
If someone writes a popular Rust library, it's only going to be useful to Rust projects.
With Rust I don’t even know if it’s possible to make borrow checking work across a language boundary. And Rust doesn't have a stable ABI, so even if you make a DLL, it’ll only work if compiled with the exact same compiler version. *sigh*
Of course, C API/ABIs aren't able to make full use of Rust features, but again, that also applies to basically every other language.
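For what it's worth, the C-ABI fallback mentioned above usually looks something like the sketch below: `extern "C"` pins the calling convention so any language (or a differently-versioned Rust compiler) can call in, and borrow checking simply stops at the boundary. The `sum_bytes` function is hypothetical, just to illustrate the pattern:

```rust
use std::slice;

// Exposed over the C ABI: `#[no_mangle]` keeps the symbol name predictable,
// `extern "C"` pins the calling convention. The borrow checker cannot see
// across this boundary; the safety contract is documented, not enforced.
#[no_mangle]
pub extern "C" fn sum_bytes(ptr: *const u8, len: usize) -> u64 {
    if ptr.is_null() {
        return 0;
    }
    // SAFETY: the caller must guarantee `ptr` points to `len` readable bytes.
    let bytes = unsafe { slice::from_raw_parts(ptr, len) };
    bytes.iter().map(|&b| u64::from(b)).sum()
}

fn main() {
    let data = [1u8, 2, 3, 4];
    assert_eq!(sum_bytes(data.as_ptr(), data.len()), 10);
}
```

Note how the invariants Rust normally checks for you (valid pointer, correct length) become a comment the caller has to honor, which is exactly the degradation of guarantees being discussed.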
This is still the worst possible argument for C. If C persists in places no one uses, then who cares?
C will continue to be used because it always has been and always will be available everywhere, not only places no one uses :/
Yes, you can use it everywhere. Is that what you consider a success?
Languages like Rust are probably more successful in terms of adoption speed versus age. It's just that there's a good number of platforms that aren't even supported, where you have no choice other than C. Rust can't target most platforms, and it compiles on even fewer. Unless Linux wants to drop support for a good number of platforms, Rust adoption can only go so far.
Wait, wait: you can use C everywhere, but if absolute lines of code is the metric, people seem to move away from C as quickly as possible (read: don't use C) to higher-level languages that clearly aren't as portable (Ruby, Python, Java, C#, C++, etc.)?
> Rust can't target most platforms, and it compiles on even fewer. Unless Linux wants to drop support for a good number of platforms, Rust adoption can only go so far.
Re: embedded, this is just terribly short-sighted. Right now, yes, C obviously has the broadest possible support. But non-experimental Linux support starts today. GCC is still working on Rust support and rustc_codegen_gcc is making inroads. And one can still build new LLVM targets. I'd be interested to hear which non-legacy platforms actually aren't supported right now.
But the real issue is that C has no adoption curve. It only has ground to lose. Meanwhile Rust is, relatively, a delight to use, and offers important benefits. As I said elsewhere: "Unless I am wrong, there is still lots of COBOL we wish wasn't COBOL? And that reality doesn't sound like a celebration of COBOL?" The future looks bright for Rust, if you believe as I do, we are beginning to demand much more from our embedded parts.
Now, will Rust ever dominate embedded? Will it be used absolutely everywhere? Perhaps not. Does that matter? Not AFAIAC. Winning by my lights is -- using this nice thing rather than this not so nice thing. It's doing interesting things where one couldn't previously. It's a few more delighted developers.
I'd much rather people were delighted/excited about a tech/a language/a business/a vibrant community, than, whatever it is, simply persisted. Because "simply persisted", like your comment above, is only a measure of where we are right now.
Another way to think of this is -- Perl is also everywhere. Shipped with every Linux distro and MacOS, probably to be found somewhere deep inside Windows too. Now, is Perl, right now, a healthy community or does it simply persist? Is its persistence a source of happiness or dread?
I never said Rust was the universal language, that is pleasing to all people. I was very careful to frame Rust's success in existential, not absolute or universal, terms:
>> Winning by my lights is -- using this nice thing rather than this not so nice thing. It's doing interesting things where one couldn't previously. It's a few more delighted developers.
I am not saying these technologies (C and Perl) should go away. I am saying -- I am very pleased there is no longer a monoculture. For me, winning is having alternatives to C in embedded and kernel development, and Perl for scripting, especially as these communities, right now, seem less vibrant, and less adaptable to change.
> ... yes?
Then perhaps you and I define success differently? As I've said in other comments above, C persisting or really standing still is not what I would think of as a winning and vibrant community. And the moving embedded and kernel development from what was previously a monoculture to something more diverse could be a big win for developers. My hope is that competition from Rust makes using C better/easier/more productive, but I have my doubts as to whether it will move C to make changes.
"Always has been" is pushing it. Half of C's history is written with outdated, proprietary compilers that died alongside their architecture. It's easy to take modern tech like LLVM for granted.
That said, only a handful of those architectures are actually so weird that they would be hard to write a LLVM backend for. I understand why the project hasn’t established a stable backend plugin API, but it would help support these ancillary architectures that nobody wants to have to actively maintain as part of the LLVM project. Right now, you usually need to use a fork of the whole LLVM project when using experimental backends.
This is exactly what I'm saying. Do you think HW drives SW or the other way around? When Rust is in the Linux kernel, my guess is it will be very hard to find new HW worth using, which doesn't have some Rust support.
For these cases you could always use Zig instead of C
Just setting the proper defaults in a Makefile would get many people halfway there, without switching to a language that has yet to reach 1.0 and has no use-after-free story.
And that’s why Zig doesn’t offer much. Devs will just ignore it.
I know, a linked list is not exactly super complex, and Rust makes that a bit tough. But the realisation one must have is this: building a linked list will break some key assumptions about memory safety, so trying to force that into safe Rust is just not gonna work.
The problem, I guess, is that several of us have forgotten about memory safety, and it's a bit painful to be reminded of it by a compiler :-)
And generally I'd imagine it's quite a weird fit for data structures where being in a linked list isn't a core aspect (no clue what specifically the kernel uses, but I could imagine situations where objects aren't in any linked list for 99% of the time, but must be able to be chained in even if there are 0 bytes of free RAM; "Error: cannot free memory because memory is full" is probably not a thing you'd ever want to see).
That's a great point! A language almost like C plus a smart enough compiler could do the unboxing, but this technique doesn't work for multiple structures.
> And generally I'd imagine it's quite a weird fit for data structures where being in a linked list isn't a core aspect (no clue what specifically the kernel uses, but I could imagine situations where objects aren't in any linked list for 99% of the time, but must be able to be chained in even if there are 0 bytes of free RAM; "Error: cannot free memory because memory is full" is probably not a thing you'd ever want to see).
I think you are right in practice, though in principle you could pre-allocate the necessary memory when you create the items? When you have the intrusive links, you pay for their allocation already anyway, too. In terms of total storage space, you wouldn't pay for more.
(And you don't have to formally alloc them via a call to kmalloc or so. In the sense that you don't need to find a space for them: you just need to make sure that the system keeps a big enough buffer of contiguous-enough space somewhere. Similar to how many filesystems allow you to reserve space for root, but that doesn't mean any particular block is reserved for root up-front.)
But as I thought, that's about in-principle memory usage. A language like C makes the alternative of intrusive data structures much simpler.
That is why the kernel mostly (always?) uses intrusive linked lists. They have no such problem.
Are there known compilers that can do that?
The above is about the optimiser figuring out whether to box or unbox by itself.
If you are willing to give the compiler a hand: Rust can do it just fine and it's the default when you define data structures. If you need boxing, you need to explicitly ask for it, eg via https://doc.rust-lang.org/std/boxed/struct.Box.html
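To make the "unboxed by default" point concrete, here's a quick size check (illustrative struct names; the pointer-size comparison assumes a typical 64-bit target):

```rust
use std::mem::size_of;

// Fields are laid out inline by default; there is no hidden pointer chasing.
struct Inline {
    data: [u8; 64],
}

// Heap indirection only appears when explicitly requested via `Box`.
struct Boxed {
    data: Box<[u8; 64]>,
}

fn main() {
    assert_eq!(size_of::<Inline>(), 64); // the array is stored in place
    assert_eq!(size_of::<Boxed>(), size_of::<usize>()); // just a pointer
    println!("inline: {} bytes, boxed: {} bytes",
             size_of::<Inline>(), size_of::<Boxed>());
}
```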
Same story for games, where an object can be in the list of all objects and in the list of objects that might be attacked. Once it's killed you can remove it from all lists without searching for it in every single one.
Like a linked list that forces you to add a next pointer to the record you want to store in it.
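That "next pointer inside the record" idea can be sketched in Rust, though it immediately drags in raw pointers and `unsafe`, which is exactly the friction being discussed. All names here are illustrative:

```rust
use std::ptr;

// Intrusive singly linked list: the link lives inside the record itself,
// so insertion never allocates and removal never searches a container.
struct Task {
    id: u32,
    next: *mut Task, // the intrusive hook
}

struct TaskList {
    head: *mut Task,
}

impl TaskList {
    const fn new() -> Self {
        TaskList { head: ptr::null_mut() }
    }

    // SAFETY: `task` must outlive the list and must not already be linked.
    unsafe fn push(&mut self, task: *mut Task) {
        unsafe { (*task).next = self.head };
        self.head = task;
    }

    fn ids(&self) -> Vec<u32> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while !cur.is_null() {
            // SAFETY: every node was pushed with a valid, live pointer.
            unsafe {
                out.push((*cur).id);
                cur = (*cur).next;
            }
        }
        out
    }
}

fn main() {
    let mut list = TaskList::new();
    // Box::leak gives the nodes a 'static lifetime for this demo.
    let a: *mut Task = Box::leak(Box::new(Task { id: 1, next: ptr::null_mut() }));
    let b: *mut Task = Box::leak(Box::new(Task { id: 2, next: ptr::null_mut() }));
    unsafe {
        list.push(a);
        list.push(b);
    }
    assert_eq!(list.ids(), vec![2, 1]); // LIFO: 1 was pushed first
}
```

In C the equivalent (e.g. the kernel's `struct list_head` embedding) is idiomatic and unceremonious; in Rust the aliasing these raw pointers create is precisely what the borrow checker is designed to forbid, hence the `unsafe` blocks.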
https://doc.rust-lang.org/src/alloc/collections/linked_list....
If you’re having trouble designing a safe interface for your collection then that should be a signal that maybe what you are doing will result in UB when looked at the wrong way.
That is how all standard library collections in Rust work. They’ve just gone to the length of formally verifying parts of the code to ensure performance and safety.
Rust is great, but there are some things that are safe (and you could prove them safe in the abstract), but that you can't easily express in Rust's type system.
More specifically, there are some things, and usage patterns of those things, that are safe when taken together. But the library can't force the safe usage pattern on the client with the tools that Rust provides.
Take a look at the unsafe functions for the standard library Vec type to see examples of this:
https://doc.rust-lang.org/std/vec/struct.Vec.html#method.fro...
Yes, that's what you do in practice. But it's no different--in principle--from the approach C programmers have to use.
Yeah, and that's what's not going to work for high-performance data structures, because you need to embed hooks into the objects, not just put objects into a bigger collection object. Once you think in terms of a collection that contains things, you have already lost that specific battle.
Another thing that doesn't work very well in Rust (from my understanding, I tried it very briefly) is using multiple memory allocators which is also needed in high performance code. Zig takes care to make it easy and explicit.
It also trusts your neighbor, your kid, your LLM, you, your dog, another linked list...
It's a bit like asking with the new mustang on the market, with airbags and traction control, why would you ever want to drive a classic mustang?
I prefer not having to debug... I think most people would agree with that.
Kernel dev probably isn't as easy, but for application development Rust really shifts the majority of problems from debugging to compile time.
1. Memory safety - these can be some of the worst bugs to debug in C because they often break sane invariants that you use for debugging. Often they break the debugger entirely! A classic example is forgetting to return a value from a non-void function. That can trash your stack and end up causing all sorts of impossible behaviours in totally different parts of the code. Not fun to debug!
2. Stronger type system - you get an "if it compiles it works" kind of experience (as in Haskell). Obviously that isn't always the case, but I can sometimes write several hundred lines of Rust and once it's compiling it works first time. I've had to suppress my natural "oh I must have forgotten to save everything or maybe incremental builds are broken or something" instinct when this happens.
Net result is that I spend at least 10x less time in a debugger with Rust than I do with C.
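A lot of that "works first time" experience comes from exhaustive sum types: the compiler refuses to build until every case is handled. A toy example (the `Event` enum is made up for illustration):

```rust
// Adding a new variant to `Event` later turns every non-exhaustive
// `match` in the codebase into a compile error instead of a silent bug.
enum Event {
    Connected { session_id: u32 },
    Data(Vec<u8>),
    Disconnected,
}

fn describe(e: &Event) -> String {
    match e {
        Event::Connected { session_id } => format!("connected (session {session_id})"),
        Event::Data(bytes) => format!("{} bytes", bytes.len()),
        Event::Disconnected => "disconnected".to_string(),
        // No catch-all arm: if a variant is added, this fails to compile.
    }
}

fn main() {
    assert_eq!(describe(&Event::Data(vec![1, 2, 3])), "3 bytes");
}
```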
Better to crash than to leak HTTPS keys to the internet.
Obviously not. When will that happen? 15 years? Maybe it's generational: how long before developers 'born' into memory-safe languages as serious choices will be substantially in charge of software development?
I can think of possible reasons: Early in life, in school and early career, much of what you work on is inevitably new to you, and also authorities (professor, boss) compel you to learn whatever they choose. You become accustomed to and skilled at adapting new things. Later, when you have power to make the choice, you are less likely to make yourself change (and more likely to make the junior people change, when there's a trade-off). Power corrupts, even on that small scale.
There's also a good argument for being stubborn and jaded: You have 30 years perfecting the skills, tools, efficiencies, etc. of C++. For the new project, even if C++ isn't as good a fit as Rust, are you going to be more efficient using Rust? How about in a year? Two years? ... It might not be worth learning Rust at all; ROI might be higher continuing to invest in additional elite C++ skills. Certainly that has more appeal to someone who knows C++ intimately - continue to refine this beautiful machine, or bang your head against the wall?
For someone without that investment, Rust might have higher ROI; that's fine, let them learn it. We still need C++ developers. Morbid but true, to a degree: 'Progress happens one funeral at a time.'
I believe the opposite. There's some kind of weird mentality in beginner/wannabe programmers (and HR, but that's unrelated) that when you pick language X then you're an X programmer for life.
Experienced people know that if you need a new language or library, you pick up a new language or library. Once you've learned a few, most of them aren't going to be very different and programming is programming. Of course it will look like work and maybe "experienced" people will be more work averse and less enthusiastic than "inexperienced" (meaning younger) people.
That depends heavily: if you land on a greenfield project, yes. Or with free rein over a complete rewrite of existing projects. But these things are more the exception than the regular case.
Even on green field project, ecosystem and available talents per framework will be a consideration most of the time.
There are also other things, like being a parent and wanting to take care of your kids, that can come into consideration later in life. So it's more that added responsibilities constrain perspectives and choices than that power corrupts in a purely egoistic fashion.
Not most, but the pool of software devs has been doubling every five years, and Rust matches C# on "Learning to Code" voters at Stack Overflow's last survey, which is crazy considering how many people learn C# just to use Unity. I think you underestimate how many developers are Rust blank slates.
Anecdotally, I've recently come across comments from people who've taught themselves Rust but not C or C++.
> Steve Klabnik?
Thankfully no. I've actually argued with him a couple times. Usually in partial agreement, but his fans will downvote anyone who mildly disagrees with him.
Also, I'm not even big on Rust: every single time I've tried to use it I instinctively reached for features that turned out to be unstable, and I don't want to deal with their churn, so I consider the language still immature.
I'd shove you a better data point but people aren't taking enough surveys for our sake, that's the one we've got. Unless you want to go with Jetbrains', which, spoilers, skews towards technologies supported by Jetbrains; I'm not aware of other major ones.
The only alt I’ve made on hacker news is steveklabnik1, or whatever I called it, because I had locked myself out of this account. pg let me back in and so I stopped using it.
Is it obvious? I haven't heard of new projects in non-memory-safe languages lately, and I would think they would struggle to attract contributors.
It is easy to forget that the state-of-the-art implementations of a lot of systems software is not open source. They don’t struggle to attract contributors because of language choices, being on the bleeding edge of computer science is selling point enough.
I'm working on Rust projects, so I may have an incomplete picture, but from what I see, when devs have a choice they prefer working with Rust over C++ (if not due to the language, at least due to the build tooling).
I don't want to write multithreaded C++ at all unless I explicitly want a new hole in my foot. Rust I barely have any experience with, but it might be less frustrating than that.
Multithreading had not been planned or architected for, it took 30 min, included the compiler informing me I couldn't share a hashmap with those threads unsynchronised, and informing me on how to fix it.
Zig would have been a nice proposition in the 20th century, alongside languages like Modula-2 and Object Pascal.
> ... has a debug allocator that maintains memory safety in the face of use-after-free and double-free
which is probably true (in that it's not possible to violate memory safety on the debug allocator, although it's still a strong claim). But beyond that there isn't really any current marketing for Zig claiming safety, beyond a heading in an overview of "Performance and Safety: Choose Two".
There are industry standards based in C, or C++, and I doubt they will be adopting Rust any time soon.
POSIX (which does require a C compiler for certification), OpenGL, OpenCL, SYCL, Aparavi, Vulkan, DirectX, LibGNM(X), NVN, CUDA, LLVM, GCC, Unreal, Godot, Unity,...
Then there are plenty of OSes, like the Apple ecosystem, among many other RTOSes and commercial endeavours for embedded.
Also, the official Rust compiler isn't fully bootstrapped yet, so it depends on C++ tooling for its very existence.
Which is why efforts to make C and C++ safer are also very welcome; plenty of code out there is never going to be RIR (rewritten in Rust), and there are domains where Rust will be a runner-up for several decades.
I wouldn’t say it’s inevitable that everything will be rewritten in Rust; at the very least it will take decades. C has been with us for more than half a century and is the foundation of pretty much everything, so it will take a long time to migrate all that.
More likely is that they will live next to each other for a very, very long time.
I have never run into such a crate in the wild, I don't think.
By default, all it means is that unwinding the stack doesn't happen, so Drop implementations don't get called. Which _can_ result in bugs, but they're a fairly familiar kind: it's the same kind of fault if a process is aborted (crashes?), and it doesn't get a chance to finalise external resources.
EDIT: to clarify, the feature isn't really something crates have to opt in to support
I'm not trying to hype up Rust or disparage C. I learned C first and then Rust, even before Rust 1.0 was released. And I have an idea why Rust finds acceptance, which is also what some of these companies have officially mentioned.
C is a nice little language that's easy to learn and understand. But the price you pay for it is in large applications where you have to handle resources like heap allocations. C doesn't offer any help there when you make such mistakes, though some linters might catch them. The reason for this, I think, is that C was developed in an era when they didn't have so much computing power to do such complicated analysis in the compiler.
People have been writing C for ages, but let me tell you - writing correct C is a whole different skill that's hard and takes ages to learn. If you think I'm saying this because I'm a bad programmer, then you would be wrong. I'm not a programmer at all (by qualification), but rather a hardware engineer who is more comfortable with assembly, registers, Bus, DRAM, DMA, etc. I still used to get widespread memory errors, because all it takes is a lapse in attention while coding. That strain is what Rust alleviates.
FWIW I work in firmware with the heap turned off. I’ve worked on projects in both C and Rust, and agree Rust still adds useful checks (at the cost of compile times and binary sizes). It seems worth the trade off for most projects.
This is obviously not true when the CPU can either yank away the execution to a different part of the program, or some foreign entity can overwrite your memory.
Additionally, in low-level embedded systems, the existence of malloc is not a given, yet Rust seems to assume you can dynamically allocate memory with a stateless allocator.
Rust has no_std to handle not having an allocator.
Tons of things end up being marked “unsafe” in systems/embedded Rust. The idea is you sandbox the unsafeness. Libraries like zero copy are a good example of “containing” unsafe memory accesses in a way that still gets you as much memory safety as possible given the realities of embedded.
Tl;dr you don’t get as much safety as higher level code but you still get more than C. Or maybe put a different way you are forced to think about the points that are inherently unsafe and call them out explicitly (which is great when you think about how to test the thing)
This means:
- All lifetimes are either 'stack' or 'forever'
- Thread safety in the main loop is ensured by disabling interrupts in the critical section - Rust doesn't understand this paradigm (it can be taught to, I'm sure)
- Thread safety in ISRs should also be taught to Rust
- Peripherals read and write to memory via DMA, unbeknownst to the CPU
etc.
So all in all, I don't see Rust being that much more beneficial (except for stuff like array bounds checking).
I'm sure you can describe all these behaviors to Rust so that it can check your program, but threading issues are limited, and there's no allocation, and no standard lib (or much else from cargo).
Rust may be a marginally better language than C due to some convenience features, but I feel like there's too much extra work for very little benefit.
https://github.com/rust-embedded/cortex-m/blob/master/cortex...
You can look at asynch and things like https://github.com/rtic-rs/rtic and https://github.com/embassy-rs/embassy for common embedded patterns.
You handle DMA the same way as on any other system - e.x. you mark the memory as system/uncached (and pin it if you have virtual memory on), you use memory barriers when you are told data has arrived to get the memory controller up to speed with what happened behind its back, and so on. You’re still writing the same code that controls the hardware in the same way, nothing about that is especially different in C vs Rust.
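On the interrupt-masking point above: the usual trick is to encode "interrupts are disabled" as a token type that shared data requires, which is roughly the pattern behind cortex-m's `interrupt::free`. Here is a host-runnable sketch of the idea; all names are illustrative, not a real crate API, and the actual interrupt masking is only modeled:

```rust
use std::cell::Cell;

// A token that can only exist inside the critical section.
pub struct CsToken(());

// On real hardware this would mask interrupts (e.g. cpsid/cpsie on ARM)
// around the closure; here it only models the scoping.
pub fn with_irqs_disabled<R>(f: impl FnOnce(&CsToken) -> R) -> R {
    let token = CsToken(());
    f(&token)
}

// Shared state that can only be touched while holding the token.
pub struct IrqCell<T> {
    inner: Cell<T>,
}

// SAFETY (by construction): every access requires a CsToken, and tokens
// only exist while interrupts are masked, so no concurrent access occurs
// on a single-core system.
unsafe impl<T> Sync for IrqCell<T> {}

impl<T: Copy> IrqCell<T> {
    pub const fn new(v: T) -> Self {
        IrqCell { inner: Cell::new(v) }
    }
    pub fn get(&self, _cs: &CsToken) -> T {
        self.inner.get()
    }
    pub fn set(&self, _cs: &CsToken, v: T) {
        self.inner.set(v)
    }
}

static TICKS: IrqCell<u32> = IrqCell::new(0);

fn main() {
    // Calling TICKS.get() outside the closure simply doesn't compile:
    // there is no CsToken to pass.
    with_irqs_disabled(|cs| TICKS.set(cs, TICKS.get(cs) + 1));
    with_irqs_disabled(|cs| assert_eq!(TICKS.get(cs), 1));
}
```

So the "disable interrupts around the critical section" paradigm can indeed be taught to Rust; whether that buys enough over disciplined C is the judgment call being debated here.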
I think there is very much a learning curve, and that friction is real - hiring people and teaching them Rust is harder than just hiring C devs. People on HN tend to take this almost like a religious question - “how could you dare to write memory unsafe code in 2025? You’re evil!” But pragmatically this is a real concern.
Apple has invested in Swift, another high level language with safety guarantees, which happens to have been created under Chris Lattner, otherwise known for creating LLVM. Swift's huge advantage over Rust, for application and system programming is that it supports an ABI [1] which Rust, famously, does not (other than falling back to a C ABI, which degrades its promises).
[1] for more on that topic, I recommend this excellent article: https://faultlore.com/blah/swift-abi/ Side note, the author of that article wrote Rust's std::collections API.
[0]: Rust, while no longer officially experimental in the Linux kernel, does not yet have major OSs written purely in it.
> Embedded Swift support is available in the Swift development snapshots.
And considering Apple made Embedded Swift, even Apple does not believe that regular Swift is suitable. Meaning that you're undeniably completely wrong.
[0]:
https://github.com/swiftlang/swift-evolution/blob/main/visio...
You continue being undeniably, completely wrong.
For Apple it suffices that it is fit for purpose for Apple itself, it is experimental for the rest of the world.
I love to be rightly wrong.
https://www.swift.org/get-started/embedded/
Rust's approach is overkill, I think. A lot of reference counting and stuff is just fine in a kernel.
The big sticking point for me is that for desktop- and server-style computing, hardware capabilities have increased so much that a good GC would be acceptable at the kernel level for most users. The other side of that coin is that OSes would then need to be built on different kernels for large embedded/tablet/low-power/smartphone use cases. I think tech development has benefited from Linux being used at so many levels.
A push to develop a new breed of OS, with a microkernel and using some sort of ‘safe’ language, should be on the table for developers. But outside of proprietary military/finance/industrial areas (and a lot of the work in these fields is just using Linux) there doesn’t seem to be any movement toward a less monolithic OS situation.
There's already applications out there for the "old thing" that need to be maintained, and they're way too old for anyone to bother with re-creating it with the "new thing". And the "old thing" has some advantages the "new thing" doesn't have. So some very specific applications will keep using the "old thing". Other applications will use the "new thing" as it is convenient.
To answer your second question, nothing is inevitable, except death, taxes, and the obsolescence of machines. Rust is the new kid on the block now, but in 20 years, everybody will be rewriting all the Rust software in something else (if we even have source code in the future; anyone you know read machine code or punch cards?). C'est la vie.
- a pure binding crate, which exposes the C library's API, and
- a wrapper library that performs some basic improvements
Stuff in the second category typically includes adding Drop impls to resources that need to be released, translating "accepts pointer + len" into "accepts slices" (or vice versa on return), and "check return value of C call and turn it into a Result, possibly with a stringified error".
All of those are also good examples of local reasoning about unsafety. If a C API returns a buffer + size, it's unsafe to turn it into a reference/slice. But if you check that the function succeeded, you can unsafely make the slice/reference and return it from a safe function. If it crashes, you've either not upheld the C call's preconditions (your fault; check how to call the C function), or the C code has a bug (not your fault; the bug is elsewhere).
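That "check the return, then build the slice" pattern looks roughly like this. The `c_get_buffer` function below is a stand-in written in Rust to keep the example self-contained; in a real binding crate it would be an `extern "C"` declaration:

```rust
use std::slice;

static GREETING: &[u8] = b"hello";

// Stand-in for a C function that, on success, fills in a pointer + length.
// Returns 0 on success, nonzero on failure (a common C convention).
unsafe fn c_get_buffer(out_ptr: *mut *const u8, out_len: *mut usize) -> i32 {
    unsafe {
        *out_ptr = GREETING.as_ptr();
        *out_len = GREETING.len();
    }
    0
}

// Safe wrapper: the unsafety is reasoned about once, here, and callers get
// a plain Result instead of a raw pointer they might misuse.
fn get_buffer() -> Result<&'static [u8], i32> {
    let mut ptr: *const u8 = std::ptr::null();
    let mut len: usize = 0;
    let rc = unsafe { c_get_buffer(&mut ptr, &mut len) };
    if rc != 0 {
        return Err(rc); // the C error code becomes a typed Err
    }
    // SAFETY: on success, the C contract says `ptr` is valid for `len`
    // bytes and lives for the program's duration.
    Ok(unsafe { slice::from_raw_parts(ptr, len) })
}

fn main() {
    assert_eq!(get_buffer().unwrap(), &b"hello"[..]);
}
```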
Rust has a nice compiler-provided static analyzer that uses annotations to do lifetime analysis and borrow checking. I believe borrow checking is a local-optimum trap when it comes to static analysis, and I often find it more annoying to use than I would like, but I won't argue: it's miles better than nothing.
C has better standalone static analysers available, and they can use annotations too. Still, all of that is optional in C, while it's part of Rust's core language. You know that Rust code has been successfully analysed; most C code won't give you that. But if it's your code base, you can reach this point in C too.
C also has a fully proven and certified compiler (CompCert). That might be a requirement if you work in some niche safety-critical applications; Rust has no equivalent there.
The discussion is more interesting when you look at other languages. Ada/SPARK is, for me, ahead of Rust both in terms of features and user experience regarding the core language.
Rust currently still has what I consider significant flaws: a very slow compiler, no standard, support for a limited number of architectures (it's growing), less stability than I think it should have given its age, and most Rust code tends to use more small libraries than I would personally like.
Rust is very trendy however and I don't mean that in a bad way. That gives it a lot of leverage. I doubt we will ever see Ada in the kernel but here we are with Rust.
There you can likely explore the boundaries where Rust does and does not work.
People have all kinds of opinions. Mine is this:
If you need unsafe peppered around, the only thing Rust offers is being very unergonomic. It's hard to write and hard to debug for no reason. Writing memory-safe C code is easy. The problems Rust solves aren't bad; they're just solved in a way that's way more complicated than writing the same (safe) C code.
A language is not unsafe. You can write perfectly shit code in Rust, and you can write perfectly safe code in C.
People need to stop calling a language safe and then reimplementing other people's hard work in a bad way, creating whole new vulnerabilities.
It is completely beside the point that you can also write "shit code" in Rust. Just because you are fed up with the "reimplement the world in Rust" culture does not mean the tool itself is bad.
2. I agree it's harder, I feel like most of the community knows and recognises this.
> Maybe Rust proponents should spend more time on making unsafe Rust easier and more ergonomic, instead of effectively *undermining safety and security*. Unless the strategy is to trick as many other people as possible into using Rust, and then hope those people fix the issues for you, a common strategy normally used for some open source projects, not languages.
I don't think there is any evidence that Rust is undermining safety and security. In fact all evidence shows it's massively improving these metrics wherever Rust replaces C and C++ code. If you have evidence to the contrary let's see it.
Yes, it's lots of fun. rust is Boring.
If I want to implement something and have fun doing it, I'll always do it in C.
For my work code, it all comes down to SDKs and such. For example, I'm going to write firmware for a Nordic ARM chip. The Nordic SDK uses C, so I'm not going to jump through an infinite number of hoops and incomplete Rust ports; I'll just use the official SDK and C. If it were the opposite, I would be using Rust, but I don't think that will happen in the next 10 years.
Just like C++ never killed C, despite being a perfect replacement for it, I don't believe Rust will kill C, or C++, because it's an even less convenient replacement. It'll dilute the market, for sure.
I think C++ didn't replace C because it is a bad language. It did not offer any improvements on the core advantages of C.
Rust however does. It's not perfect, but it has a substantially larger chance of "replacing" C, if that ever happens.
There will be small niches leftover:
* Embedded - This will always be C. No memory allocation means no Rust benefits. Rust is also too complex for smaller systems to write compilers.
* OS / Kernel - Nearly all of the relevant code is unsafe. There aren't many real benefits. It will happen anyways due to grant funding requirements. This will take decades, maybe a century. A better alternative would be a verified kernel with formal methods and a Linux compatibility layer, but that is pie in the sky.
* Game Engines - Rust screwed up its standard library by not putting custom allocation at the center of it. Until we get a Rust version of the EASTL, adoption will be slow at best.
* High Frequency Traders - They would care about the standard library except they are moving on from C++ to VHDL for their time-sensitive stuff. I would bet they move to a garbage-collected language for everything else, either Java or Go.
* Browsers - Despite being born in a browser, Rust is unlikely to make any inroads. Mozilla lost their ability to make effective change and already killed their Rust project once. Google has probably the largest C++ codebase in the world. Migrating to Rust would be so expensive that the board would squash it.
* High-Throughput Services - This is where I see the bulk of Rust adoption. I would be surprised if major rewrites aren't already underway.
This isn't really true; otherwise, there would be no reason for no_std to exist. Data race safety is independent of whether you allocate or not, lifetimes can be handy even for fixed-size arenas, you still get bounds checks, you still get other niceties like sum types/an expressive type system, etc.
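To make the no_std point concrete, here's a minimal allocation-free sketch (names like `PinState` and `read_pin` are made up for illustration): a sum type, bounds-checked access, and no heap anywhere.

```rust
// Sketch: allocation-free Rust, the kind usable under #![no_std].
// `PinState` is a sum type; read_pin() is bounds-checked; no heap involved.

#[derive(Debug, PartialEq, Clone, Copy)]
enum PinState {
    Low,
    High,
}

// A fixed-size array on the stack -- no allocator anywhere.
fn read_pin(states: &[PinState; 4], idx: usize) -> Option<PinState> {
    // .get() returns None instead of reading out of bounds.
    states.get(idx).copied()
}

fn main() {
    let states = [PinState::Low, PinState::High, PinState::Low, PinState::Low];
    assert_eq!(read_pin(&states, 1), Some(PinState::High));
    assert_eq!(read_pin(&states, 9), None); // out of range: no UB, just None
    println!("ok");
}
```

None of this needed an allocator, which is exactly why "no allocation means no Rust benefits" doesn't hold up.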
> OS / Kernel - Nearly all of the relevant code is unsafe.
I think that characterization is rather exaggerated. IIRC the proportion of unsafe code in Redox OS is somewhere around 10%, and Steve Klabnik said that Oxide's Hubris has a similarly small proportion of unsafe code (~3% as of a year or two ago) [0].
> Browsers - Despite being born in a browser, Rust is unlikely to make any inroads.
Technically speaking, Rust already has. There has been Rust in Firefox for quite a while now, and Chromium has started allowing Rust for third-party components.
[0]: https://news.ycombinator.com/item?id=42312699
[1]: https://old.reddit.com/r/rust/comments/bhtuah/production_dep...
Google is transitioning large parts of Android to Rust and there is now first-party code in Chromium and V8 in Rust. I’m sure they’ll continue to write new C++ code for a good while, but they’ve made substantial investments to enable using Rust in these projects going forward.
Also, if you’re imagining the board of a multi-trillion dollar market cap company is making direct decisions about what languages get used, you may want to check what else in this list you are imagining.
> Based on our research, we landed on two outcomes for Chromium.
> We will support interop in only a single direction, from C++ to Rust, for now. Chromium is written in C++, and the majority of stack frames are in C++ code, right from main() until exit(), which is why we chose this direction. By limiting interop to a single direction, we control the shape of the dependency tree. Rust can not depend on C++ so it cannot know about C++ types and functions, except through dependency injection. In this way, Rust can not land in arbitrary C++ code, only in functions passed through the API from C++.
> We will only support third-party libraries for now. Third-party libraries are written as standalone components, they don’t hold implicit knowledge about the implementation of Chromium. This means they have APIs that are simpler and focused on their single task. Or, put another way, they typically have a narrow interface, without complex pointer graphs and shared ownership. We will be reviewing libraries that we bring in for C++ use to ensure they fit this expectation.
-- https://security.googleblog.com/2023/01/supporting-use-of-ru...
Also, even though Rust would be a safer alternative to C and C++ on the Android NDK, that isn't part of the roadmap, nor does the Android team provide any support to those who go down that route. They only see Rust for Android internals, not app developers.
If anything, they seem more likely to eventually support Kotlin Native for such cases than Rust.
I believe the Chromium policy has changed, though I may be misinterpreting: https://chromium.googlesource.com/chromium/src/+/refs/heads/...
The V8 developers seem more excited to use it going forward, and I’m interested to see how it turns out. A big open question about the Rust safety model for me is how useful it is for reducing the kind of bugs you see in a SOTA JIT runtime.
> They only see Rust for Android internals, not app developers
I’m sure, if only because ~nobody actually wants to write Android apps in Rust. Which I think is a rational choice for the vast majority of apps - Android NDK was already an ugly duckling, and a Rust version would be taken even less seriously by the platform.
Officially the NDK isn't for writing apps anyway. People try to do that because some don't want to touch Java or Kotlin, but that has never been the official point of view of the Android team since it was introduced in version 2; rather:
> The Native Development Kit (NDK) is a set of tools that allows you to use C and C++ code with Android, and provides platform libraries you can use to manage native activities and access physical device components, such as sensors and touch input. The NDK may not be appropriate for most novice Android programmers who need to use only Java code and framework APIs to develop their apps. However, the NDK can be useful for cases in which you need to do one or more of the following:
> Squeeze extra performance out of a device to achieve low latency or run computationally intensive applications, such as games or physics simulations.
> Reuse your own or other developers' C or C++ libraries.
So it would be expected that, at least for the first scenario, Rust would be a welcome addition; however, as mentioned, they seem to be more keen on supporting Kotlin Native for it.
There are memory safety issues that literally only apply to memory on the stack, like returning dangling pointers to local variables. Not touching the heap doesn't magically avoid all of the potential issues in C.
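For a concrete illustration of that stack-only failure mode: the classic C bug of returning a pointer to a local transliterates into Rust code that simply doesn't compile. A small sketch (nothing kernel-specific here):

```rust
// The C-style bug of returning a pointer to a stack local, transliterated
// into Rust -- rustc rejects it at compile time:
//
// fn dangling() -> &i32 {
//     let x = 42;
//     &x            // error: cannot return reference to local variable
// }

// The compiling fix: move (or copy) the value out instead of referencing it.
fn by_value() -> i32 {
    let x = 42;
    x
}

fn main() {
    assert_eq!(by_value(), 42);
    println!("ok");
}
```

So even with zero heap usage, the borrow checker is doing real work on stack lifetimes.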
Embedded Dev here and I can report that Rust has been more and more of a topic for me. I'm actually using it over C or C++ in a bare metal application. And I don't get where the no allocation -> no benefit thing comes from, Rust gives you much more to work with on bare metal than C or C++ do.
> Rust is also too complex for smaller systems to write compilers.
I am not a compiler engineer, but I want to tease apart this statement. As I understand it, the main Rust compiler uses the LLVM framework, which uses an intermediate language that is somewhat like platform-independent assembly code. As long as you can write a lexer/parser to generate the intermediate language, there is a separate backend to generate machine code from it. In my (non-compiler-engineer) mind, that separates the concern of the front-end language (Rust) from the target platform (embedded). Do you agree? Or do I misunderstand?
Memory safety applies to all memory, not just heap-allocated memory.
This is a strange claim because it's so obviously false. Was this comment supposed to be satire and I just missed it?
Anyway, Rust has benefits beyond memory safety.
> Rust is also too complex for smaller systems to write compilers.
Rust uses LLVM as the compiler backend.
There are already a lot of embedded targets for Rust and a growing number of libraries. Some vendors have started adopting it with first-class support. Again, it's weird to make this claim.
> Nearly all of the relevant code is unsafe. There aren't many real benefits.
Unsafe sections do not make the entire usage of Rust unsafe. That's a common misconception from people who don't know much about Rust, but it's not like the unsafe keyword instantly obliterates any Rust advantages, or even all of its safety guarantees.
It's also strange to see this claim under an article about the kernel developers choosing to move forward with Rust.
> High Frequency Traders - They would care about the standard library except they are moving on from C++ to VHDL for their time-sensitive stuff. I would bet they move to a garbage-collected language for everything else, either Java or Go.
C++ and VHDL aren't interchangeable. They serve different purposes for different layers of the system. They aren't moving everything to FPGAs.
Betting on a garbage collected language is strange. Tail latencies matter a lot.
This entire comment is so weird and misinformed that I had to re-read it to make sure it wasn't satire or something.
> Anyway, Rust has benefits beyond memory safety.
I want to elaborate on this a little bit. Rust uses some basic patterns to ensure memory safety: 1. RAII, 2. move semantics, and 3. borrow checking.
This combination, however, is useful for compile-time-verified management of any 'resource', not just heap memory. Think of a 'resource' as something unique and useful that you acquire when you need it and release/free it when you're done.
For regular applications, it can be heap memory allocations, file handles, sockets, resource locks, remote session objects, TCP connections, etc. For OS and embedded systems, that could be a device buffer, bus ownership, config objects, etc.
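A hedged sketch of that idea: `Bus` and `BusGuard` below are invented names, not any real kernel API, but they show how RAII plus Drop turns "release the resource" into something the compiler runs for you at scope exit.

```rust
// Sketch of RAII for a non-memory resource: a guard that releases on Drop.
// `Bus` and `BusGuard` are made-up names for illustration, not a real API.
use std::cell::Cell;

struct Bus {
    held: Cell<bool>,
}

struct BusGuard<'a> {
    bus: &'a Bus,
}

impl Bus {
    fn new() -> Self {
        Bus { held: Cell::new(false) }
    }

    // Acquiring returns a guard tied to the bus's lifetime; release
    // cannot be forgotten, because it is not a call you have to remember.
    fn acquire(&self) -> BusGuard<'_> {
        self.held.set(true);
        BusGuard { bus: self }
    }
}

impl Drop for BusGuard<'_> {
    // Runs automatically at scope exit -- no "forgot to free" code path.
    fn drop(&mut self) {
        self.bus.held.set(false);
    }
}

fn main() {
    let bus = Bus::new();
    {
        let _guard = bus.acquire();
        assert!(bus.held.get()); // resource held inside the scope
    } // _guard dropped here; the release runs
    assert!(!bus.held.get());
    println!("ok");
}
```

The same shape applies to file handles, locks, sessions, and device buffers: acquisition returns a guard, and the type system decides when release happens.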
> > Nearly all of the relevant code is unsafe. There aren't many real benefits.
Yes, this part is a bit weird. The fundamental idea of 'unsafe' is to limit the scope of unsafe operations to as few lines as possible. (The same concept can be expressed in different ways, so don't be offended if it doesn't exactly match what you've heard earlier.) The parts that go inside these unsafe blocks are surprisingly small in practice. An entire kernel isn't all unsafe by any measure.
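As a toy illustration of how small that unsafe surface can be (a made-up example, not kernel code): one audited unsafe expression behind a safe function, so callers never touch unsafe at all.

```rust
// One audited unsafe expression inside a safe API. Everything calling
// first_byte() stays in safe Rust; only this function needs review.

fn first_byte(data: &[u8]) -> Option<u8> {
    if data.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `data` has at least one element,
    // so an unchecked read of index 0 cannot go out of bounds.
    Some(unsafe { *data.get_unchecked(0) })
}

fn main() {
    assert_eq!(first_byte(b"kernel"), Some(b'k'));
    assert_eq!(first_byte(b""), None);
    println!("ok");
}
```

That's the pattern at any scale: the unsafe block shrinks to the lines that genuinely need it, and the safe interface around it carries the invariants.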
Modern embedded isn't your grandpa's embedded anymore. Modern embedded chips have multiple KiB of RAM, some even above 1 MiB, and have been like that for almost a decade (look at the ESP32, for instance). I once worked on ESP32-based embedded projects that used full C++, with allocators, exceptions, and more, using SPI RAM, and it worked great. There's a fantastic Rust port of ESP-IDF that Espressif themselves maintain nowadays, too.
only in interpreter mode.
It's like saying Python is complex because you have metaclasses, but you'll never need to reach for them.
Speaking as a subscriber of about two decades who perhaps wouldn't have a career without the enormous amount of high-quality education provided by LWN content, or at least a far lesser one: Let's forgive.
It was unintentional, per the author:
> Ouch. That is what I get for pushing something out during a meeting, I guess. That was not my point; the experiment is done, and it was a success. I meant no more than that.
Has anyone found them to be inaccurate, or fluffy to the point it degraded the content?
I haven’t - but then again, I’m probably predominantly reading the best posts being shared on aggregators.
That’s why I was asking about the “Ouch.” - because it was the only part of the comment that didn’t make sense to me in isolation, so I opened the context to see what he was replying to.
(I do agree it's clickbait-y though)
> Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
The topic of the Rust experiment was just discussed at the annual Maintainers Summit. The consensus among the assembled developers is that Rust in the kernel is no longer experimental — it is now a core part of the kernel and is here to stay. So the "experimental" tag will be coming off. Congratulations are in order for all of the Rust-for-Linux team.
> Mike: rachel and i are no longer dating
>
> rachel: mike that's a horrible way of telling people we're married
from the meme section on that page.
> Mike's colleagues: Aww.. We'll miss you.
> Mike's manager: Is this your one week's notice? Did you take up an architect job elsewhere immediately after I promoted you to architect ?!
IMO the attitude is warranted. There is no good that comes from having higher-level code than necessary at the kernel level. The dispute is whether the kernel needs to be more modern, but it should be about what is the best tool for the job. Forget the bells-and-whistles and answer this: does the use of Rust generate a result that is more performant and more efficient than the best result using C?
This isn’t about what people want to use because it’s a nice language for writing applications. The kernel is about making things work with minimum overhead.
By analogy, the Linux kernel historically has been a small shop mentored by a fine woodworker. Microsoft historically has been a corporation with a get-it-done attitude. Now we’re saying Linux should be a “let’s see what the group thinks- no they don’t like your old ways, and you don’t have the energy anymore to manage this” shop, which is sad, but that is the story everywhere now. This isn’t some 20th century revolution where hippies eating apples and doing drugs are creating video games and graphical operating systems, it’s just abandoning old ways because they don’t like them and think the new ways are good enough and are easier to manage and invite more people in than the old ways. That’s Microsoft creep.
These are the performance results for an NVMe driver written in Rust: https://rust-for-linux.com/nvme-driver
It's absolutely on par with C code.
Good meme!
A lot of software was written in C and C++ because they were the only option for decades. If you couldn't afford garbage collection and needed direct control of the hardware, there wasn't much of a choice. Had there been "safer" alternatives, it's possible those would have been used instead.
It's only been in the last few years that we've seen languages emerge that could actually replace C/C++, with projects like Rust, Zig and Odin. I'm not saying they will, or that they should, just that we actually have alternatives now.
Also, I'll refrain from listing all the other alternatives.
I've never used libcurl and I don't know why it is useful, so let's focus on curl. Of course if you want a C library, you've got to write it in C; that's kind of a weird argument.
My point is, there were plenty of better options to replace C. Yet people chose to use C for their projects and continue to do so. Rust is not even a good option for most projects, as it's too low level. It's a good option for the Linux kernel, but for user-space software? I'm not sure.
Web browsers have been written in restricted subsets of C/C++ with significant additional tooling for decades at this point, and are already beginning to move to Rust.
https://github.com/mozilla-firefox/firefox : JavaScript 28.9%, C++ 27.9%, HTML 21.8%, C 10.4%, Python 2.9%, Kotlin 2.7%, Other 5.4%
How significant?
I would love an updated version of https://docs.google.com/spreadsheets/d/1flUGg6Ut4bjtyWdyH_9e... (which stops in 2020).
For Chrome, I don't know if anyone has compiled the stats, but navigating from https://chromium.googlesource.com/chromium/src/+/refs/heads/... I see at least a bunch of vendored crates, so there's some use, which makes sense since in 2023 they announced that they would support it.
So, written in C/C++? It seems to me you're trying to make a point that reality doesn't agree with but you stubbornly keep pushing it.
Not in the sense that people who are advocating writing new code in C/C++ generally mean. If someone is advocating following the same development process as Chrome does, then that's a defensible position. But if someone is advocating developing in C/C++ without any feature restrictions or additional tooling and arguing "it's fine because Chrome uses C/C++", no, it isn't.
Whatever your thoughts are about React, the JavaScript community has been the tip of the spear for experimenting with new ways to program user interfaces. They’ve been at it for decades - ever since jQuery. The rest of the programming world is playing catchup. For example, SwiftUI.
VSCode is also a wonderful little IDE. Somehow much faster and more stable than XCode, despite being built on electron.
2. End users absolutely do not care in which programming language an application (or OS, they can't tell the difference and don't care) is written in. They only care if it does the job they need quickly, safely and easily.
"Before going into the details of the new Rust release, we'd like to state that we stand in solidarity with the people of Ukraine and express our support for all people affected by this conflict."
(Not a Linux hacker, so apologies if I get this wrong)
The filesystem APIs were quite arcane, and in particular whether or not you were allowed to call a C function at a certain point wasn't documented, and relied on experience to know already.
In trying to write idiomatic Rust bindings, the Rust for Linux group asked the filesystems maintainer if they could document these requirements, so that the Rust bindings could enforce as much of them as they could.
The result was... drama.
Sure, a different model that was easier to express would have been picked if we had been using Rust from the start, but we were not, and the existing model in C is what we need to wrap. Also, a more Rust-friendly model would have incurred higher memory costs.
> Also, a more Rust-friendly model would have incurred higher memory costs.
Can you give some details? This hasn’t been my experience.
I find that, in general, Rust’s ownership model pushes me toward designs where all my structs are in a strict tree, which is very memory efficient since everything is packed together. In comparison, most C++ APIs I’ve used make a nest of objects with pointers going everywhere, and that style is less memory efficient and less performant because of cache misses.
C has access to a couple of tricks safe Rust is missing. But on the flip side, C’s lack of generics means lots of programs roll their own hash tables, array lists, and various other tools, and they’re often either dynamically typed (and horribly inefficient as a result) or they do macro tricks with type parameters - and that’s ugly as sin. A lack of generics and monomorphization means C programs usually have slightly smaller binaries, but they often don’t run as fast. That’s often a bad trade on modern computers.
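To illustrate the generics point, a small sketch (the `max_of` helper is invented for this example): one generic function, monomorphized into concrete code per type, instead of void-pointer casts or macro tricks.

```rust
// One generic function works for any ordered, copyable type. The compiler
// monomorphizes it: a concrete version is generated for each type used,
// so there is no dynamic typing and no macro expansion involved.

fn max_of<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
    let mut it = items.iter().copied();
    let first = it.next()?; // empty slice -> None
    Some(it.fold(first, |a, b| if b > a { b } else { a }))
}

fn main() {
    assert_eq!(max_of(&[3i64, 9, 1]), Some(9));     // an i64 version is emitted
    assert_eq!(max_of(&[1.5f64, 0.25]), Some(1.5)); // and a separate f64 version
    assert_eq!(max_of::<i32>(&[]), None);
    println!("ok");
}
```

This is the binary-size vs speed trade mentioned above: each monomorphized copy adds code size, but every call site runs type-specialized code.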
We heavily use arenas. We also have runtime-typed objects used to represent dynamically typed data like that obtained from JSON/Yaml or over IPC. If we were to be more friendly to modeling in Rust, we'd likely require that all memory reachable from an object node be in the same arena, disallowing common patterns like having list/map's arrays in one arena and having keys/strings in another or in static mem (this allows reusing other buffers without forcing copying all the data, so backing arrays can be smaller).
> Also, a more Rust-friendly model would have incurred higher memory costs.
I'm not sure how modelling everything in a borrow-checker-friendly way would change anything here. Everything you're talking about doing in C could be done more or less exactly the same way in Rust.
Arenas are slightly inconvenient in Rust because the standard collection types bake in the assumption that they're working with the global system allocator. But there are plenty of high-quality arena crates in Rust which ship their own replacements for Vec / HashMap / etc.
It also sounds like you'd need to write or adapt your own JSON parser. But it sounds like you might be writing part of your own JSON / Yaml parser in C anyway.
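For what it's worth, the arena pattern itself needs nothing unsafe. Here's a minimal index-based sketch in plain Rust (real projects would likely reach for a crate such as bumpalo or typed-arena; `Arena` and `Handle` are illustrative names, not a real API):

```rust
// Minimal index-based arena in safe Rust. Handles are plain indices, so
// nodes can reference each other without fighting the borrow checker
// over pointer graphs, and the whole arena frees in one drop.

struct Arena<T> {
    items: Vec<T>,
}

#[derive(Clone, Copy)]
struct Handle(usize);

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { items: Vec::new() }
    }

    // Allocation is a push; the returned Handle stays valid for the
    // lifetime of the arena.
    fn alloc(&mut self, value: T) -> Handle {
        self.items.push(value);
        Handle(self.items.len() - 1)
    }

    fn get(&self, h: Handle) -> &T {
        &self.items[h.0]
    }
}

fn main() {
    let mut arena = Arena::new();
    let a = arena.alloc("key");
    let b = arena.alloc("value");
    assert_eq!(*arena.get(a), "key");
    assert_eq!(*arena.get(b), "value");
    // Dropping `arena` frees everything at once -- the arena pattern.
    println!("ok");
}
```

A dedicated arena crate adds bump allocation and arena-backed collections on top of this shape, but the ownership story is the same.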
Not optimal for ease of compilation and building old versions of the kernel. (You need a specific version of the nightly compiler to build a specific version of the kernel.)
It's trivial to install a specific version of the toolchain though.
https://www.phoronix.com/news/Linux-Patches-Multiple-Rust-Ve...
I'm genuinely not trolling, and Rust is okay, but only Rust acolytes exhibit this behaviour.
https://github.com/microsoft/windows-drivers-rs
maybe this will be good for the rest of the kernels, IllumOS/HaikuOS/ReactOS.
maybe
I don’t like programming in Go, but nothing stops me running go programs on my computer. My computer doesn’t seem to care.
I'm not actually sure if they've gotten as far as having unit tests.
(OpenBSD adds the additional step of adding a zillion security mitigations that were revealed to them in a dream and don't actually do anything.)
"Rust as a selling point" was a big thing in 2018-2022ish. You see a lot less of the "written in Rust" in HN headlines these days. Some people were very excited about Rust early on. What feels more common today are people who unnecessarily hate Rust because they saw too much of this hype and (justifiably) got annoyed by it.
If there is a new, optional language to be added to Linux kernel development, Rust makes sense. It's a low-level, performant, and safe language. Introducing it for driver development has almost no impact on 99% of users, except maybe it'll save them a memory-related bug-fix patch having to be installed at some point. Is Rust the "selling point" here, or is the potential to avoid an entire class of bugs the selling point?
> The fact that you think linux machines are enjoyed by only a specific group of people makes me happier with my choice
If by "specific group of people" you mean "people who will refuse to use an OS based on the implementation language(s)", then I guess so.
I don't mean to be rude (although it reads like it, apologies), but I just think that you're coming at this from a perspective of malice instead of what the goal was, which was to reduce bugs in kernel drivers, and not to pimp Rust as a programming language by getting it into a large software project.
That's just not true. Neither Linus Torvalds, nor the Linux Foundation, nor any major distro, nor anyone else who could conceivably be considered responsible for "marketing" Linux is saying you should use it because a small part of it is written in Rust.
I just went to ubuntu.com and the word "rust" does not appear anywhere on the front page. So what are you talking about?
I can argue otherwise. Developer advocacy is a form of marketing (specially for a product traditionally targeted towards tech savvy people)
(Spoiler alert: Rust graduated to a full, first-class part of the kernel; it isn't being removed.)
(Of course, that's not what's happening at all.)
And thanks for attempting to answer my question without snark or down voting. Usually HN is much better for discussion than this.
I think that, unlike user-level software, the performance of a kernel is of utmost importance.
Rust would still help to eliminate those bugs.
I'm eagerly awaiting the day the Linux kernel is rewritten in Typescript so that more programmers can contribute :)