He is forceful in making his points, but respectful in the way he addresses Christoph's concerns.
This gives me great hope that the Linux maintainer community and contributors using Rust will be able to continue working together, find more common ground, and have more success.
According to the policy, the Rust folks should fix the Rust bindings when C changes break them. The C maintainers don't need to care about Rust at all.
In practice, though, I would expect this to need lots of coordination. A PR with C-only changes that breaks the whole build (because the Rust bindings are broken) is unlikely to be merged to mainline.
Linus can reiterate his policy, but the issue can't be resolved without some Rust developers keeping up their persistent work and building up their reputation.
I have never understood how that could work long-term. How do you release a kernel where some parts are broken? Either you wait for the Rust people to fix their side, or you drop the C changes. Or your users suddenly find their driver doesn't work anymore after a kernel update.
As a preliminary measure while there isn't a substantial amount of Rust code yet, sure. But the fears of some maintainers that the policy will change to "you either learn Rust and fix things or your code can be held up until someone else helps you out" are well-founded, IMO.
Before features are merged to Linus' release branch, pretty much all changes are published and merged to linux-next first. It is exactly here that build issues and conflicts are first detected and worked out, giving maintainers early visibility into changes that are happening outside their subsystem. Problems with the rust bindings will probably show up here, and the Rust developers will have ample time to fix/realign their code before the merge window even starts. And it's not uncommon for larger features (e.g. when they require coordination across subsystems) to remain in linux-next for more than one cycle.
What if Linus decided to go on a two month long vacation in the middle of the merge window?
> I don't claim that it can never work (or that it cannot work in the common case), but as a hard rule it seems untenable.
There are quite a few Rust developers already involved; if they cannot coordinate so that at least some are available during a release-critical two-month period, then none of them should be part of any professional project.
Is it customary for maintainers to fix _all_ usage of their code themselves? That doesn't seem scalable.
(I do have experience with this causing regressions: someone updates a set of drivers to a new API, and because of the differences and lack of a good way to test, breaks some detail of the driver)
quaff(something, 5, Q_DOOP) ... into ... quaff(something, 5, 0, Q_DEFAULT | (Q_DOOP << 4))
Then it's not beyond the wits of a C programmer to realise that the Rust binding
quaff(var1, n, maybe_doop) ... can be ... quaff(var1, n, 0, Q_DEFAULT | (maybe_doop << 4))
Probably the Rust maintainer will be horrified and emit a patch to do something more idiomatic for binding your new API but there's an excellent chance that meanwhile your minimal patch builds and works since now it has the right number and type of arguments.
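For concreteness, a minimal sketch of what that mechanical patch might look like on the Rust side. Everything here (quaff, Buf, Q_DEFAULT, the signatures) is the commenter's invented example, not a real kernel API:

    use core::ffi::c_int;

    // Opaque handle standing in for whatever quaff() operates on.
    #[repr(C)]
    struct Buf {
        _opaque: [u8; 0],
    }

    const Q_DEFAULT: c_int = 0;

    extern "C" {
        // The C signature after the maintainer's change: one extra
        // argument and a combined flags word.
        fn quaff(buf: *mut Buf, n: c_int, extra: c_int, flags: c_int);
    }

    // The minimal, non-idiomatic fix at the binding's call site,
    // mirroring the same transformation applied to the C callers:
    unsafe fn call_quaff(var1: *mut Buf, n: c_int, maybe_doop: c_int) {
        quaff(var1, n, 0, Q_DEFAULT | (maybe_doop << 4));
    }

It builds and works, exactly as the comment says, until someone wraps it in something more idiomatic.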
Isn’t the point of Coccinelle that you don’t have to spend time walking through (C) driver code you’ve never heard of?
At least for Debian, all you need to do if you hit such a case is go and choose the old kernel in the Grub screen. You don't even need to install an older package and deal with version conflicts or other pains of downgrading.
Then sure, some people who download a kernel release might enable a Rust driver, and that makes the build fail. But until Rust is considered a first-class, fully-supported language in the kernel, that's fine.
In practice, though, I would expect that the Rust maintainers would fix those sorts of things up before an actual release is cut, after the 2-week merge window, during the period when only fixes are accepted. Maybe not every single time, but most of the time; if no one is available to fix a particular bit of breakage, then it's broken for that release. And that's fine too, even if it might be annoying to some users.
Which is currently the only way possible, and it will stay that way for a long time because, remember, clang supports fewer targets than gcc, and gcc cannot compile Rust.
Once gcc can /reliably/ compile Rust, then and only then could Rust be "upgraded" to a first-class citizen in Linux. The "C maintainers don't want to learn Rust" issue will still be here, of course, but by then there will already be many years of having a mixed code base.
> But until Rust is considered a first-class, fully-supported language in the kernel, that's fine
A first-class language whose kernel parts may always break does seem unreasonable. I still think policy will have to change by that point.
C can break Rust, and Debian/Ubuntu/Red Hat/SUSE/etc. can wait for it to be fixed before pushing a new kernel to end users.
"I respect you technically, and I like working with you[...] there needs to be people who just stand up to me and tell me I'm full of shit[...] But now I'm calling you out on YOURS."
...and (b) this is me recognizing that me taking charge of a conversation is a different thing than me taking control of your decisions:
"And no, I don't actually think it needs to be all that black-and-white."
(Of course Linus has changed over time for the better, he's recognized that, and I've learned a lot with him and have made amends with old colleagues.)
It's really cool to see someone temper their language and tone, but still keep their tell-it-like-it-is attitude. I probably wouldn't feel good if I were Christoph Hellwig reading that reply, but I also wouldn't feel like someone had personally attacked me and screamed at me far out of proportion to what I'd done.
Contrast this with people who are good at producing the appearance of an upstanding character when it suits them, but who are quite vindictive and poisonous behind closed doors when it doesn't.
But looking at all the recent responses, it seems Rusted Linux is inevitable. He is pro-Rust.
> C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.
- Systems not supported by Rust can use older kernels. They can also -- at least for a while -- probably still use current kernel versions without enabling any Rust code. (And presumably no one is going to be writing any platform-specific code or drivers for a platform that doesn't have a Rust toolchain.)
- It will be a long time before building the kernel will actually require Rust. In that time, GCC's Rust frontend may become a viable alternative for building Linux+Rust. And any arch supported by GCC should be more-or-less easily targetable by that frontend.
- The final bit is just "tough shit". Keep up, or get left behind. That could be considered a shame, but that's life. Linux has dropped arch support in the past, and I'm sure it will do so in the future. But again, they can still use old kernels.
As for new architectures in the future, if they're popular enough to become a first-class citizen of the Linux kernel, they'll likely be popular enough for someone to write an LLVM backend and the glue in rustc to enable them. And if not, well... "tough shit".
Unless you somehow want to run Apple M1 GPU drivers on a device that has no rust toolchain ... erm...
or you want to run a new experimental filesystem on a device that has no rust toolchain support?
The answer to the "new and emerging platforms" question is pretty much the same as before: sponsor someone to write the toolchain support. We've seen new platforms before, so why shouldn't this follow the same pathway? Usually the C compiler is donated by the company or community that is investing in the new platform (for example, the RISC-V compiler support in both gcc and llvm is reaching maturity, and the work is sponsored by the developer community, various non-profit [1][2] and for-profit members of the ecosystem, as well as the academic community).
Realistically speaking, it's very hard to come up with examples of the hypothetical.
[1] https://github.com/lowRISC/riscv-llvm
[2] https://lists.llvm.org/pipermail/llvm-dev/2016-August/103748...
I honestly suspect new architectures will be supported in LLVM before GCC nowadays; most companies are far more comfortable working with a non-GPL toolchain, and IMHO LLVM's internals are better-documented (though I've never added a new target).
Linux's attitude has always been either you keep up or you get dropped - see the lack of any stable driver API and the ruthless pruning of unmaintained drivers.
> What about new architectures that may come up in the future?
Who's to say they won't have a Rust compiler? Who's to say they will have a C one?
Gonna need a citation on that one. Drivers are removed when they don't have users anymore, and a user piping up is enough to keep the driver in the tree:
For example:
> As suggested by both Greg and Jakub, let's remove the ones that look
> are most likely to have no users left and also get in the way of the
> wext cleanup. If anyone is still using any of these, we can revert the
> driver removal individually.
https://lore.kernel.org/lkml/20231030071922.233080-1-glaubit...
Or the x32 platform removal proposal, which didn't happen after some users showed up:
> > > I'm seriously considering sending a patch to remove x32 support from
> > > upstream Linux. Here are some problems with it:
> >
> > Apparently the main real use case is for extreme benchmarking. It's
> > the only use-case where the complexity of maintaining a whole
> > development environment and distro is worth it, it seems. Apparently a
> > number of Spec submissions have been done with the x32 model.
> >
> > I'm not opposed to trying to sunset the support, but let's see who complains..
>
> I'm just a single user. I do rely on it though, FWIW.
> […snipped further discussion]
https://lore.kernel.org/lkml/CAPmeqMrVqJm4sqVgSLqJnmaVC5iakj...
In the end, the question is whether you want to hold back progress for 99.9% of the users because there are still 200 people running Linux on an Amiga with m68k. I am pretty sure that the number of Linux on Apple Silicon users outnumbers m68k and some other legacy systems by at least an order of magnitude (if not more). (There are currently close to 50000 counted installs. [1])
I'm sorry, but you can't hinder kernel development just because some random guy or corporation can't use your shit on an obscure system. How could that logic apply to everything?
If your shit is legacy, then use a legacy kernel.
Well, it also takes effort to be held back by outdated tools. Also, the LLVM backend doesn't have to be top-notch, just runnable. If people want to run legacy hardware, they should be okay with running a legacy kernel or taking the performance hit of a weaker LLVM backend.
Realistically, as of version 16 [1], LLVM supports IA-32, x86-64, ARM, Qualcomm Hexagon, LoongArch, M68K, MIPS, PowerPC, SPARC, z/Architecture, XCore, and others.
In the past it had support for Cell and Alpha, but I'm sure the old code could be revived if needed. So how many users are affected here? Let's not forget that Linux dropped Itanium support, and I'm sure someone is still running that somewhere.
Looking through this list [2], what I see missing is Elbrus, PA-RISC, OpenRisc, and SuperH. So pretty niche stuff.
[1] https://en.wikipedia.org/wiki/LLVM#Backends
[2] https://en.wikipedia.org/wiki/List_of_Linux-supported_comput...
Really good sign. Makes me hopeful about the future of this increasingly large kernel
> The fact is, the pull request you objected to DID NOT TOUCH THE DMA LAYER AT ALL.
> It was literally just another user of it, in a completely separate subdirectory, that didn't change the code you maintain in _any_ way, shape, or form.
> I find it distressing that you are complaining about new users of your code, and then you keep bringing up these kinds of complete garbage arguments.
Finally. If he had stepped in sooner, maybe we wouldn't have lost talented contributors to the kernel.
On the flip side: when I (and I suspect many others) read "Linux" where "Linus" should be written, I rarely even notice and never really care, because I've been there.
All this is a long winded way of saying: don't sweat it :) .
I feel that the departure of the lead R4L developer was a compromise deliberately made so that Hellwig wouldn't feel like a complete loser. This sounds bad, of course.
edit: whitespace
Possibly also that a significant portion of the suggested gain may be achievable via other means.
I.e., bounds checking and some simple (RAII-like) allocation/freeing simplifications may be possible without Rust, and those are (per the various papers arguing for Rust / memory safety elsewhere) the larger proportion of the safety bugs that Rust catches.
Possibly just making clang the required compiler and adopting these extensions may give an easier bang for the buck: https://clang.llvm.org/docs/BoundsSafety.html
Over and above that, there seem to be various complaints about the readability and aesthetics of Rust code, and a desire not to be subjected to such.
Things like that have been said many times, even before Rust came around. You can do static analysis, you can put in asserts, you can use this restricted C dialect, you can...
But this never gets wider usage. Even if the tools are there, people are going to ignore them. https://en.wikipedia.org/wiki/Cyclone_(programming_language) started 23 years ago...
It took us decades to get to non executable stack and W^X and there are still occasional issues with that.
I had an argument about Rust with a FreeBSD developer who had the same "I never make a mistake" attitude. I made a PR to his project that fixed bugs that wouldn't have been possible in Rust to begin with. Not out of pettiness, but because his library was crashing my application. In fact, he tried to blame my Rust wrapper for it when I raised an issue.
It's a computer. It does what it was instructed to do, all 50 million or so of them. To think you as a puny human have complete and utter mastery over it is pure folly every single time.
As time goes on, I become more convinced that the way to make progress in computing and software is not just better languages. Those are very much appreciated, since language has a strong impact on how you even think about problems. But it's more about tooling, and how we can add abstractions that leverage the computer we already have, to alleviate the eye-gouging complexity of trying to manage it all by predicting how it will behave with our pitiful neuron sacs.
Read the above email. Greg KH is pretty certain it is worth the gain.
> Possibly also that a significant portion of the suggested gain may be achievable via other means.
I think this is a valid POV, if someone shows up and does the work. And I don't mean 3 years ago. I mean -- now is as good a time as any to fix C code, right? If you have some big fixes, it's not like the market won't reward you for them.
It's very, very tempting to think there is some other putatively simpler solution on the horizon, but we haven't seen one.
> Over and above that, there seem to be various complaints about the readability and aesthetics of Rust code, and a desire not to be subjected to such.
No accounting for taste, but I don't think C is beautiful! Rust feels very understandable and explicit to my eye, whereas C feels very implicit and sometimes inscrutable.
I don't think GP or anyone is under the impression that Greg KH thinks otherwise. He's not the "some folks" referred to here.
Glad for your keen insights.
Sure, but opinions are always going to differ on stuff like this. Decision-making for the Linux kernel does not require unanimous consent, and that's a good thing. Certainly this Rust push hasn't been handled perfectly, by any means, but I think they at least have a decent plan in place to make sure maintainers who don't want to touch Rust don't have to, and those who do can have a say in how the Rust side of their subsystems look.
I agree with the people who don't believe you can get Rust-like guarantees using C or C++. C is just never going to give you that, ever, by design. C++ maybe will, someday, years or decades from now, but you'll always have the problem of defining your "safe subset" and ensuring that everyone sticks to it. Rust is of course not a silver bullet, but it has some properties that mean you just can't write certain kind of bugs in safe Rust and get the compiler to accept it. That's incredibly useful, and you can't get that from C or C++ today, and possibly not ever.
Yes, there are tools that exist for C to do formal verification, but for whatever reason, no one wants to use them. A tool that people don't want to use might as well not exist.
But ultimately my or your opinion on what C and C++ can or can't deliver is irrelevant. If people like Torvalds and Kroah-Hartman think Rust is a better bet than C/C++-based options, then that's what matters.
I will need to read their conversations more to see if this is the underlying fear, but formalization makes refactoring hard and code brittle (i.e., having to start from scratch on a formal proof after substantially changing a subsystem). One of the key benefits of C and the kernel has been their malleability to new hardware and requirements.
My guess is, it cannot. The way -fbounds-safety works, as far as I understand, is that it aborts the program in case of an out-of-bounds read or write. This is similar to a Rust panic.
Aborting or panicking the kernel is absolutely not a better alternative to simply allowing the read/write to happen, even if it results in a memory vulnerability.
Turning people's computers off whenever a driver stumbles on a bug is not acceptable. Most people cannot debug a kernel panic, and won't even have a way to see it.
Rust can side-step this with its `.get()` (which returns an Option, which can be converted to an error value), and with iterators, which often bypass the need for indexing in the first place.
Unfortunately, Rust can still panic in case of a normal indexing operation that does OOB access; my guess is that the index operation will quickly be fixed to be completely disallowed in the kernel as soon as the first such bug hits production servers and desktop PCs.
Alternatively, it might be changed to always do buf[i % buf.size()], so that it gives the wrong answer, but stays within bounds (making it similar to other logic errors, as opposed to a memory corruption error).
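A minimal sketch of the `.get()` pattern described above (the Error enum here is a stand-in for the kernel's errno-style error values, not a real kernel type):

    #[derive(Debug)]
    enum Error {
        OutOfBounds,
    }

    fn read_sample(buf: &[u8], i: usize) -> Result<u8, Error> {
        // buf[i] would panic on an out-of-bounds index; .get() returns an
        // Option instead, which converts cleanly into an error value:
        buf.get(i).copied().ok_or(Error::OutOfBounds)
    }

    fn checksum(buf: &[u8]) -> u32 {
        // Iterators often bypass indexing (and the OOB question) entirely:
        buf.iter().map(|&b| b as u32).sum()
    }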
https://github.com/apple-oss-distributions/xnu/blob/main/doc...
Upstream fbounds in xnu has options for controlling if it panics or is just a telemetry event. They are in a kernel situation and have the exact same considerations on trying to keep the kernel alive.
Supposedly ~5% (1-29%), but I'm testing my own projects to verify (my guess is higher, at 10-20%, but it will depend on the code). Supposedly it's to land in gcc at some point, but I don't know the timetable.
it might do to wait until some other memory safe alternative appears.
And I don't understand how you can go from opining that Rust shouldn't be the only other option, to opining that they should have waited before supporting Rust. That doesn't make sense unless you just have a particular animus towards Rust.
But our opinions on this are irrelevant, as it turns out, unless you're actually Linus Torvalds hiding behind that throwaway account.
I don't think anyone is saying that Rust is the only way to achieve that. It is a way to achieve it, and it's a way that enough people are interested in working on in the context of the Linux kernel.
Ada just doesn't have enough developer momentum and community around it to be suitable here. And even if it did, you still have to pick one of the available choices. Much of that decision certainly is based on technical merits, but there's still enough weight put toward personal preference and more "squishy" measures. And that's fine! We're humans, and we don't make decisions solely based on logic.
> it might do to wait until some other memory safe alternative appears.
Perhaps, but maybe people recognize that it's already late to start making something as critical as the Linux kernel more safe from memory safety bugs, and waiting longer will only exacerbate the problem. Sometimes you need to work with what you have today, not what you hope materializes in the future.
It's slow because the potential benefits are slim and the costs of doing that research are high. The simple reality is that there just isn't enough funding going into that research to make it happen faster.
> there's no compelling reason that rust has to be the only way to achieve memory safety
The compelling reason is that it's the only way that has worked, that has reached a critical mass of talent and tooling availability that makes it suitable for use in Linux. There is no good Rust alternative waiting in the wings, not even in the kind of early-hype state where Rust was 15 years ago (Zig's safety properties are too weak), and we shouldn't let an imaginary better future stop us from making improvements in the present.
> it might do to wait until some other memory safe alternative appears.
That would mean waiting at least 10 years, and how many avoidable CVEs would you be subjecting every Linux user to in the meantime?
Because it's hard enough that people don't try. And then they settle for Rust. This is what I mean by "Rust sucks the air out of the room".
However, it's clearly not impossible; for example, this author's incomplete example:
https://github.com/ityonemo/clr
> That would mean waiting at least 10 years,
What if it's not ten years? What if it could be six months? Is it worth paying all the other downstream costs of Rust?
You're risking getting trapped in a local minimum.
> You're risking getting trapped in a local minimum.
Or you are risking years of searching for perfect when you already have good enough.
I think it's the opposite. Rust made memory safety without garbage collection happen (without an unusably long list of caveats like Ada or D) and showed that it was possible, there's far more interest in it now post-Rust (e.g. Linear Haskell, Zig's very existence, the C++ efforts with safety profiles etc.) than pre-Rust. In a world without Rust I don't think we'd be seeing more and better memory-safe non-GC languages, we'd just see that area not being worked on at all.
> However, it's clearly not impossible; for example, this author's incomplete example:
Incomplete examples are exactly what I'd expect to see if it was impossible. That kind of bolt-on checker is exactly the sort of thing people have tried for decades to make work for C, that has consistently failed. And even if that project was "complete", the hard part isn't the language spec, it's getting a critical mass of programmers and tooling.
> What if it's not ten years? What if it could be six months?
If the better post-Rust project hasn't appeared in the past 15 years, why should we believe it will suddenly appear in the next six months? And given that it's taken Rust ~15 years to go from being a promising project to being adopted in the kernel, even if there was a project now that was as promising as the Rust of 15 years ago, why should we think the kernel would be willing to adopt it so much more quickly?
And even if that did happen, how big is the potential benefit? I think most fans of Rust or Zig or any other language in this space would agree that the difference between C and any of them is much bigger than the difference between these languages.
> You're risking getting trapped in a local minimum.
It's a risk, sure. I think it's much smaller than the risk of staying with C forever because you were waiting for some vaporware better language to come along.
Before rejecting a reason you at least have to know what it is!
I think some people would argue RAII, but you could trivially just make all deacquisition steps an explicit keyword that must be present in a valid program, and have something (possibly the compiler, possibly not) check that they're there.
There are other ways to achieve memory safety. Java's strategy is definitely a valid one; it's just not as suitable for systems programming. The strength of Rust's approach ultimately stems from its basis in affine types -- it is a general purpose and relatively rigorous (though not perfect, see https://blog.yoshuawuyts.com/linearity-and-control/) approach to managing resources.
One implication of this is that a point you raised in a message above this one, that "rust's default heap allocation can't be trivially used", actually doesn't connect. All variables in Rust -- stack allocated, allocated on the heap, allocated using a custom allocator like the one in Postgres extensions -- benefit from affine typing.
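A minimal sketch of what affine typing buys, independent of where the value lives (DmaBuffer and submit are hypothetical names, not real kernel APIs):

    struct DmaBuffer {}

    // Taking the buffer by value consumes it: the callee now owns it,
    // and it is freed (dropped) when the callee returns.
    fn submit(_buf: DmaBuffer) {}

    fn demo() {
        let buf = DmaBuffer {};
        submit(buf);    // ownership moves into submit()
        // submit(buf); // error[E0382]: use of moved value: `buf`
    }

The same "use at most once" rule applies whether `buf` came from the stack, the heap, or a custom allocator.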
Also, you could have affine types without RAII, without macros, etc., etc.
There's a very wide space of options that are theoretically equivalent to what Rust does that are worth exploring for devex reasons.
Type systems are a form of static analysis tool, that is true; and in principle, they could be substituted by other such tools. Python has MyPy, for example, which provides a static analysis layer. Coverity has long been used on C and C++ projects. However, such tools can not "get out of the way of routine development" -- if they are going to check correctness of the program, they have to check the program; and routine development has to respond to those checks. Otherwise, how do you know, from commit to commit, that the code is sound?
The alternative is, as other posters have noted, that people don't run the static analysis tool; or run it rarely; both are antipatterns that create more problems relative to an incremental, granular approach to correctness.
Regarding macros and many other ergonomic features of Rust, those are orthogonal to affine types, that is true; but to the best of my knowledge, Rust is the only language with tightly integrated affine types that is also moderately widely used, moderately productive, has a reasonable build system, package infrastructure and documentation story.
So when you say "there's a very wide space of options that are theoretically equivalent to what Rust does that are worth exploring for devex reasons", what are those? And how theoretical are they?
It's probably true, for example, that dependently typed languages could be even better from a static safety standpoint; but it's not clear that we can tell a credible story of improving memory safety in the kernel (or mail servers, database servers, or other large projects) with those languages this year or next year or even five years from now. It is also hard to say what the "devex" story will be, because there is comparatively little to say about the ecosystem for such nascent technologies.
> how do you know, from commit to commit, that the code is sound?
These days it's easy to turn on full checks for every commit in origin; a pull request can in principle be rejected if any commit fails a test, and rewriting git history by squashing (annoying but not impossible) can get you past that if an intermediate commit failed.
It seems like, at least part of the time, you're discussing distinct use cases -- for example, the quick scripts you mention (https://news.ycombinator.com/item?id=43132877) -- some of which don't require the same level of attention as systems programming.
At other times, it seems like you're arguing it would be easier to develop a verified system if you only had to run the equivalent of Rust's borrow checker once in awhile -- on push or on release -- but given that all the code will eventually have to pass that bar, what are you gaining by delaying the check?
Or an HPC workload for a physics simulation that gets run once on 400,000 cores; if it doesn't crash on your test run, it probably won't at scale.
If you're writing an OS, you will turn it on. In fact, even the Rust ecosystem suggests this as a strategy, for example with Miri.
I wouldn't argue that Rust is a good replacement for Makefiles, shell build scripts, Python scripts...
An amazing thing about Rust, though, is that you actually can write many "quick programs" -- application level programs -- and it's a reasonably good experience.
Of course not, for kernel development. And in those cases, you WILL statically analyze.
And remember that your gripes with Rust aren't everyone's gripes. Some of the things you hate about Rust can be things that other people love about Rust.
To me, I want all that stuff in the compiler. I don't want to have to run extra linters and validators and other crap to ensure I've done the right thing. I've found myself so much more productive in languages where the compiler succeeding means that everything that can (reasonably) be done to ensure correctness according to that language's guarantees has been checked and has passed.
Put another way, if lifetime checking was an external tool, and rustc would happily output binaries that violate lifetime rules, then you could not actually say that Rust is a memory-safe language. "Memory-safe if I do all this other stuff after the compiler tells me it's ok" is not memory-safe.
But sure, maybe you aren't persuaded by what I've said above. So what? Neither of us are Linux kernel maintainers, and what we think about this doesn't matter.
What you really should care about is: is your code memory safe, not is your language memory safe.
And this is what is so annoying about Rust evangelists. To Rust evangelists it's not about the code being memory safe (for example, you can bet your ass seL4 is memory safe, even though the code is in C).
A few that I found: logical operators do not short-circuit (so both sides of an "or" will execute even if the left side is true); it has two types of subprograms (procedures and functions; the former return no value while the latter return a value); and you can't fall through in the Ada equivalent of a switch statement (case..when).
There are a few other oddities in there; no multiple inheritance (but it offers interfaces, so this type of design could just use composition).
I only perused the SPARK pdf (sorry, the first was 75 pages; I wasn't reading another 150), but it seemed to have several restrictions on working with bare memory.
On the plus side, Ada has explicit invariants that must be true on function entry & exit (they can be violated within), and pre- and post-conditions for subprograms, which can catch problems during the editing phase, and it offers sum types and product types.
Another downside is it's wordy. I won't go so far as to say verbose, but compared to a language like Rust, or even the C-like languages, there's not much shorthand.
It has a lot of the features we consider modern, but it doesn't look modern.
There are two syntaxes: `and` which doesn't short circuit, and `and then` which does. Ditto for `or` and `or else`.
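Incidentally, Rust draws the same distinction, if you want to see the two semantics side by side: `&`/`|` on bools always evaluate both operands, while `&&`/`||` short-circuit.

    fn side_effect() -> bool {
        println!("evaluated");
        true
    }

    fn main() {
        let t = true;
        let _a = t | side_effect();  // like Ada's "or": both sides run
        let _b = t || side_effect(); // like "or else": right side skipped
    }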
C is actually more of an odd one here, and the fallthrough semantics is basically a side effect of it being a glorified computed goto (with "case" being literally labels, hence making things like Duff's device a possibility). Coincidentally, this is why it's called "switch", too - the name goes back all the way to the corresponding Algol-60 construct.
I've been writing C/C++ code for the last 16 years, and I think a lot of mental gymnastics is required to call C "more readable" than Rust. C syntax is only "logical" and "readable" because people have been writing it for the last 60 years; most of it is literally random hacks made due to constraints ({ instead of [ because they thought arrays would be more common than blocks, types in front of variables because C is just B with types, wonky pointer syntax, ...). It's like claiming that English spelling is "rational" and "obvious" only because it's the only language you know, IMHO.
Rust sure has more features, but it is also way more regular and less quirky. And it has real macros instead of insane text replacement; every C project over 10k lines I've worked on has ALWAYS had some insane macro magic. The Linux kernel itself is full of function-like macros that do all sorts of magic due to C not having any way to run code at compile time at all.
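As a small illustration (not kernel code): macro_rules! operates on parsed expressions rather than raw text, so the classic C double-evaluation pitfall of #define MAX(a,b) ((a)>(b)?(a):(b)) can't happen:

    macro_rules! max {
        ($a:expr, $b:expr) => {{
            // Each argument is evaluated exactly once, unlike the C macro,
            // where MAX(i++, j) can increment i twice.
            let (a, b) = ($a, $b);
            if a > b { a } else { b }
        }};
    }

    fn main() {
        let mut i = 0;
        let m = max!({ i += 1; i }, 3); // the block runs exactly once
        assert_eq!((m, i), (3, 1));
    }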
You're correct that there is a honest-to-god split of opinion by smart people who can't find a consensus opinion. So it's time for Linus to step up and mandate and say "discussion done, we are doing x". No serious organization of humans can survive without a way to break a deadlock, and it seems long past the time this discussion should have wrapped up with Linus making a decree (or whatever alternative voting mechanism they want to use).
How does it work? Are there only a few threads that they read? Which ones?
You'll be talking to a lot of people and making sure that everyone is on the same page, and that's what's going on here, hopefully. If you just shut up and write code all day, you probably aren't gonna get there and there will be conflict, especially if other people are touching your systems and aren't expecting your changes.
There is no "silent information" being distributed by random conversations around the office. If something is not explicitly written down, it did not happen and doesn't exist.
First, you use a tool designed around following mailing lists: a text-based mail reader. These represent threads in a compact form, and allow you to collapse threads and have them resurface only if new content shows up. They also allow pattern-based tagging and highlighting of content "relevant to you": senders of interest, direct mentions of your name/email address, ... and minor UX niceties like hiding the duplicated subject in responses (Re: yadda <- we know that, it's at the top of the thread already).
Such tool ergonomics allow you to focus on what's relevant to you.
Hint: Outlook doesn't cut it.
And then, with the right tool, you practice: you learn how to skim the thread view like you maybe learned to skim the newspaper for relevant content.
And with the right tool and practice in place, you can readily skim mailing lists during the day when you feel like it, and easily catch up after vacation.
That + having a couple decades to refine your email client setup goes a long way.
I don't think it would be necessary for most kernel developers to read that entire email thread. I feel like I could get through the entire thing in a half hour by ruthlessly skimming and skipping replies that don't tell me anything I care about, and only reading in full and in detail the handful or two of emails that really interest me.
And as a sibling says, a huge part of software development, especially when you're working with a large community of distributed developers, is communication. I expect most maintainers spend the majority of their time on communication, and less on writing code. And a lot of the contributors who write a lot of kernel code probably don't care too much about a lot of the organizational/policy-type discussion that goes on.
Overall, this means that they will sometimes err on the side of being deaf or dismissive.
But also, don't expect this kind of flame war to be a regular thing. Most discussions are a lot smaller and involve few people.
It's 3 days of posts, according to the dates in the outline structure at the bottom.
Good news! At the present moment, Rust is only being used for drivers. Who knows if that will change eventually, but for now the usage is contained.
The only results for cargo in the entire Linux source tree are documentation suggesting you install bindgen via cargo install... plus a bunch of comments referencing "cargo-cult programming".
And you can see under samples/rust that only a kbuild-style Makefile is provided: https://github.com/torvalds/linux/tree/master/samples/rust
That being said, it depends on how well the two languages integrate with each other - I think.
Some of the best programming experiences I've had so far were with Qt C++ plus QML for the UI. The separation of concerns was so good; QML was really well suited for what it was designed for - representing the UI state graph and scripting interactions, etc. - and it had a specific role to fill.
Rust in the kernel - does it have any specific places where it would fit well?
https://github.com/search?q=repo%3Atorvalds%2Flinux%20cargo&...
Does Linux kernel development have hot reload on the C side as a comparison?
Wouldn't a microkernel architecture shine here? Drivers could, presumably, reside in their own projects and therefore be written in any language: Rust, Zig, Nim, D, whatever.
Just by forcing new features to be written in Rust, the kernel will drastically decrease potential memory-safety-related bugs and vulnerabilities.
https://security.googleblog.com/2024/09/eliminating-memory-s...
Perhaps I misunderstand your argument, but it sounds like: "Why have interfaces at all?"
The Rust bindings aren't guaranteed to be stable, just as the internal APIs aren't guaranteed to be stable.
EDIT: The process overhead seems straightforwardly worth it: Rust can largely preserve semantics, offers the potential to increase confidence in code, and can encourage a new generation of contribution with a faster ramp-up to writing quality code. Notably, nowhere here is a guarantee of better code quality, but presumably the existing quality-guaranteeing processes can translate fine to a roughly equivalently capable language that offers more compile-time mechanisms for quality guarantees.
In addition, depending on the skill of the "binding writer", the second set of interfaces may simply be easier to use (and this is generally true, since the Rust bindings are actually designed instead of having evolved organically). This is yet another mental barrier. There may not even be a point to evolving one interface, or the other. Which just further contributes to splitting the project into two worlds.
I don't think I'd agree with that. Current kernel policy is that the C interfaces can evolve and change in whatever way they need to, and if that breaks Rust code, that's fine. Certainly some subsystem maintainers will want to be involved in helping fix that Rust code, or help provide direction on how the Rust side should evolve, but that's not required, and C maintainers can pick and choose when they do that, if at all.
Obviously if Rust is to become a first-class, fully-supported part of the kernel, that policy will eventually change. And yes, that will slow down changes to C interfaces. But I think suggesting that interfaces will ossify is an overreaction. The rate of change can slow to a still-acceptable level without stopping completely.
And frankly I think that when this time comes, maintainers who want to ignore Rust completely will be few and far between, and might be faced with a choice to either get on board or step down. That's difficult and uncomfortable, to be sure, but I think it's reasonable, if it comes to pass.
Presumably, this is an investment in replacing code written in C. There's no way around abstraction or overhead in such a venture.
> there's no way it doesn't significant put additional pressure when making breaking changes
This is the cost of investment.
> The natural reaction is that there will be less such breaking changes and interfaces will ossify.
A) "fewer", not "less". Breaking changes are countable.
B) A slower velocity of changes does not imply ossification. Furthermore, I'm not sure this is true—the benefits of formal verification of constraints surrounding memory-safety seems as if it would naturally lead to long-term higher velocity. Finally, I can't speak to the benefits of a freely-breakable kernel interface (I've never had to maintain a kernel for clients myself, thank god) but again, this seems like a worthwhile short-term investment for long-term gain.
> In addition, depending on the skill of the "binding writer" (and generally, since the rust bindings are actually designed instead of evolving organically), the second set of interfaces may simply be actually easier to use. There may not even be a point to evolving one interface, or the other. Which just further contributes to splitting the project.
Sure, this is possible. I present two questions, then: 1) what is lost with lesser popularity of the C interface with allegedly less stability, and 2) is the stability, popularity, and confidence in the new interface worth it? I think it might be, but I have no clue how to reason about the politics of the Linux ABI.
I have never written stable kernel code, so I don't have confident guidance myself. But I can say that if you put a kernel developer in front of me of genius ability, I would still trust and be more willing to engage with rust code. I cannot conceive of a C programmer skilled enough they would not benefit from the additional tooling and magnification of ability. There seems to be some attitude that if C is abandoned, something vital is lost. I submit that what is lost may not be of technical, but rather cultural (or, eek, egoist), value. Surely we can compensate for this if it is true.
EDIT, follow-up: if an unstable, less-used interface is desirable, surely this could be solved in the long term with two rust bindings.
EDIT2: in response to an aunt comment, I am surely abusing the term "ABI". I'm using it as a loose term for compatibility of interfaces at a linker-object level.
Nobody is proposing replacing code right now. Maybe that will happen eventually, but it's off limits for now.
R4L is about new drivers. Not even kernel subsystems, just drivers, and only new ones. IIRC there is a rule against having duplicate drivers for the same hardware. I suppose it's possible to rewrite a driver in-place, but I doubt anyone plans to do that.
For now, it's because for logistical and coordination reasons, Rust code is allowed to be broken by changes to C code. If subsystems (especially important ones) get rewritten in Rust, that policy cannot hold.
> yes i get there are linux vets we need to be tender with. This shouldn't obstruct what gets committed.
Not sure why you believe that. We're not all robots. People need to work together, and pissing people off is not a way to facilitate that.
> if this is what linux conflict resolution looks like, how the hell did the community get anything done for the last thirty years?
Given that they've gotten a ton done in 30 years, I would suggest that either a) your understanding of their conflict-resolution process is wrong, or b) your assertion that this conflict-resolution process doesn't work is wrong.
I would suggest you re-check your assumptions.
> You quarter-assed this reply so I'm sure your next one's gonna be a banger.
Please don't do this here. There's no reason to act like this, and it's not constructive, productive, interesting, or useful.
This just makes you look pedantic and passive-aggressive.
The C maintainer might also take patches to the C code from the Rust maintainer if they are suitable.
This puts a lot of work on the Rust maintainers to keep the Rust build working and requires that they have sufficient testing and CI to keep on top of failures. Time will tell if that burden is sustainable.
Most likely this burden will also change over time. Early in the experiment it makes sense to put most of the burden on the experimenters and avoid it from "infecting" the whole project.
But if the experiment is successful then it makes sense to spread the workload in the way that minimizes overall effort.
If it were me, I would have started building relationships with the R4L team now, to "act as if" Rust is here to stay and part of the critical path: involving them when refactors happen, but without the pressure of having to wait for them before landing C changes. That way you can actually exercise the workflow, get real experience of what the pain might be, and work on improving that workflow before it becomes an issue. Arguably, that is part of the scope of the experiment!
The fear that everyone from R4L might get up and leave from one day to the next, leaving maintainers with Rust code they don't understand, is the same problem as current subsystem maintainers getting up and leaving from one day to the next, leaving no one to maintain that code. The way to protect against that is to grow the teams, have a steady pipeline of new blood (by fostering an environment that welcomes new blood and encourages them to stick around), and have copious amounts of documentation.
But for new code / drivers, writing them in Rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? -- greg k-h
> As someone who has seen almost EVERY kernel bugfix and security issue for the past 15+ years (well hopefully all of them end up in the stable trees, we do miss some at times when maintainers/developers forget to mark them as bugfixes), and who sees EVERY kernel CVE issued, I think I can speak on this topic.
> The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)
> I'm all for moving our C codebase toward making these types of problems impossible to hit, the work that Kees and Gustavo and others are doing here is wonderful and totally needed, we have 30 million lines of C code that isn't going anywhere any year soon. That's a worthy effort and is not going to stop and should not stop no matter what.
> But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this?
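To make the quoted point concrete, a minimal sketch (hypothetical names, not real kernel APIs) of why "error path cleanups" and "forgetting to check error values" largely disappear: cleanup lives in Drop, which runs on every exit path, and a Result can't be consumed without acknowledging the error case:

    struct Error;
    struct Mapping;

    impl Drop for Mapping {
        // Unmap here; this runs on *every* exit path, early error or not.
        fn drop(&mut self) { /* unmap */ }
    }

    fn map_region(_addr: usize) -> Result<Mapping, Error> { Ok(Mapping) }
    fn do_setup(_m: &Mapping) -> Result<(), Error> { Ok(()) }

    fn probe(addr: usize) -> Result<(), Error> {
        let m = map_region(addr)?; // `?` propagates the error; no silently
                                   // ignored return code
        do_setup(&m)?;             // an early return here still unmaps `m`
        Ok(())                     // ...and so does the success path
    }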
But it does protect against memory leaks, use-after-free, and illegal memory access. C does not.
> The other question is at what cost it comes.
I think I trust the kernel developers to decide for themselves if that cost is worth it. They seem to have determined it is, or at least worth it enough to keep the experiment running for now.
Greg K-H even brings this up directly in the linked email, pointing out that he has seen a lot of bugs and security issues in the kernel (all of them that have been found, when it comes to security issues), and knows how many of them are just not possible to write in (safe?) Rust, and believes that any pain due to adopting Rust is far outweighed by these benefits.
To be clear, the linked CVE is an example of illegal memory access as a result of integer overflow. Of course, the buggy code involves an unsafe block so ... everything working as advertised. It's certainly a much higher bar for safety and correctness than C ever set.
This impacted C++'s standard library as well, but since the standard says it's undefined behavior, they said "not a bug" and didn't file CVEs.
Nobody believes that Rust programs will have zero bugs or zero security vulnerabilities. It's that it can significantly reduce them.
- most often, the UB is in binding code between Rust and language X
- if it's not binding code, the severity is often below 5, which most often means it's not a bug that will affect you
- exceptions are code with heavy async usage and user-input handling (which Rust never advertises to fix, and which is common in all languages, even ones with a GC)
I really don't like Rust, hence instead of wanting to contribute to projects which will inadvertently bring in more and more Rust code, I start my own projects, where I can be the only voice of reason and have my joys of making things segfault :>... It's quite simple. If, like me, you are stubborn and inflexible, you are a lone wolf. Accept it and move on to be happy :) rather than trying to piss against the wind of change.
What's the reach here of linters/address san/valgrind?
Or a linter written specifically for the linux kernel? Require (error-path) tests? It feels excessive to plug another language if these are the main arguments? Are there any other arguments for using Rust?
And even without any extra tools to guard against common mistakes, how much effort is solving those bug fixes anyway? Is it an order of magnitude larger than the cognitive load of learning a (not so easy!) language and context-switching continuously between them?
Linters might be helpful, but I don't remember there being good free ones
The problem here is simple: C is "too simple" for its own good and it puts undue cognitive burden on developers
And those who reply with "skill issue" are the first to lose a finger on it
I should have Googled:
https://www.kernel.org/doc/html/latest/dev-tools/
So many tools here. Hard to believe these cannot come close to what Rust provides (if you put in the effort).
Only when writing code (and not even that: only when doing final or intermediate checks on written code). When reading the code you don't have to use the tools. Code is read a lot more than it is written. So if tools are used, the burden is put only on the writer of the code. If Rust is used, the burden of learning Rust is put on both the writers and the readers of the code.
For example, I cannot even recall the last time I had a double-free bug, though I used to do it often enough.
The emphasis for me is on a language that makes it easy to express algorithms.
Honestly, it's not the double-frees I worry about, since even in a language like C where you have no aids to avoid it, the natural structure of programs tends to give good guidance on who is supposed to free an object (and if it's unclear, risking a memory leak is the safer alternative).
It's the use-after-free I worry about, because this can come about when you have a data structure that hands out a pointer to an element that becomes invalid by some concurrent but unrelated modification to that data structure. That's where having the compiler bonk me on the head for my stupidity is really useful.
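That scenario in miniature: handing out a reference into a container and then mutating it. In C the equivalent compiles and may dereference freed memory after a reallocation; here the compiler rejects it (this sketch intentionally does not compile):

    fn demo() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // hand out a pointer to an element
        v.push(4);         // error[E0502]: cannot borrow `v` as mutable
                           // because it is also borrowed as immutable
        println!("{first}");
    }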
At work I’ve been helping push “use SQL with the best practices we learned from C++ and Java development” and it’s been working well.
It’s identical to your point. We no longer need to care about pointers. We need to care about defining the algorithms and parallel processing (multi-threaded and/or multi-node).
Fun fact: even porting optimized C++ to SQL has resulted in performance improvements.
If the people who work on the kernel now don't like that direction then that's a big problem.
The Linux leadership don't seem very focused on the people issues.
Where is the evidence that there is buy in from the actual people doing kernel development now?
Or is it just Linus and Greg as commanders saying "thou shalt".
Christoph is a special case because his subsystem (DMA) is essentially required for the vast majority of useful device drivers that one might want to write. Whereas other subsystems are allowed to go at their own pace, being completely blocked on DMA access by the veto of one salty maintainer would effectively doom the whole R4L project. So whereas normally Linus would be more willing to avoid stepping on any maintainer's toes, he kind of has to here.
1. He has a philosophical objection to a multi-lingual kernel, because it adds complexity, and it's not unreasonable to expect that to spread.
2. It's fair enough to say it doesn't impact him now. But realistically, if Rust is a success and goes beyond an experiment, then at some point (e.g. in a decade) it will become untenable for subsystem maintainers to break the Rust bindings with changes and let someone else fix them before releases. I fully expect that there will be very important drivers written in Rust in the future, and it will be too disruptive to have the Rust build break on a regular basis just because Hellwig doesn't want to deal with it every time the DMA APIs are changed.
So unsurprisingly Hellwig is reacting now, at the point when he can exert the most control to avoid being forced to either accept working on doing some Rust himself or be forced to step aside and let someone else do it.
However, this isn't realistically good enough. Linus already called the play when he merged the initial Rust stuff: the experiment gets to go on. The time to disagree and commit was back then.
https://lore.kernel.org/rust-for-linux/1f52fa44062e9395d54ed...
Are the people doing the work not good enough? See the maintainers list -- Miguel Ojeda, Alex Gaynor, Boqun Feng, Gary Guo, Björn Roy Baron, Benno Lossin, Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich, etc., etc...
Who else exactly do you want to buy in?
> If the people who work on the kernel now don't like that direction then that's a big problem.
I think if you really want to lead/fight a counter-revolution, it will come down to effort. If you don't like Rust for Linux (for what could be a completely legitimate reason), then you need to show how it is wrongheaded.
Like -- reverse engineer an M1 GPU or some other driver, and show how it can be done better with existing tooling.
What I don't think you get to do is wait and do nothing and complain.
https://lwn.net/Articles/1007921/
> To crudely summarize: the majority of responses thought that the inclusion of Rust in the Linux kernel was a good thing; the vast majority thought that it was inevitable at this point, whether or not they approved.
This is the correct frame for RFL proponents. You're welcome.
As for this issue, it's just the nature of any project: people will come and go regardless, so why not let those C developers leave and keep the Rust folks instead? At some point you have to steer the ship, and there will always be a group of people unhappy about the course.
In a way, these Rust bindings are somewhat stabilizing the Linux API as well, by putting more expectations and implications from documentation into compiler-validated code. However, this does imply certain changes are sure to break any Rust driver code one might encounter, and it may take the Rust devs a while to redesign the interfaces to maintain compatibility. It's hardly a full replacement for a stable API.
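A hypothetical sketch of what "documentation into compiler-validated code" can look like (Device and MappedBuffer are invented names, not real binding types): a rule a C header would state in a comment, "the device must outlive any mapped buffer", becomes a lifetime the compiler checks:

    struct Device {}

    struct MappedBuffer<'dev> {
        dev: &'dev Device, // the borrow ties the mapping to the device
    }

    impl Device {
        fn map_buffer(&self) -> MappedBuffer<'_> {
            MappedBuffer { dev: self }
        }
    }

    // Dropping the Device while a MappedBuffer still exists is now a
    // compile error, not a line of documentation someone can miss.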
At the moment, there aren't enough Rust developers to take over kernel maintenance. Those Rust developers would also need to accept giant code trees from companies updating their drivers, so you need experts in both.
With the increasing amount of criticism languages like C are receiving online now that we have plainly better tooling, I think the number of new C developers will diminish over the coming years, but it may still take decades for the balance to shift.
https://github.com/microsoft/windows-drivers-rs/blob/main/ex...
The point of R4L is that people want to write drivers for Linux in Rust. The corporate sponsors that are involved are also interested in writing drivers for Linux in Rust. Sure, Google could rebase Android on top of RedoxOS or Fuchsia, and Red Hat could spend a decade writing a Linux Subsystem for RedoxOS, but neither wants to do those things. They want to write drivers, for Linux, in Rust.
Telling them to write a new kernel is a bit like telling them they should go write a new package manager. It's a completely different thing from what they actually care about.
This is such an absurd, content-free argument, which is not surprising given how you closed it.
That's a misrepresentation of what's actually going on in the R4L project. Volunteers are enabling support within the kernel for Rust drivers, in a way that explicitly does not require existing maintainers to change how they maintain their parts of the kernel. Maintaining Rust support and the APIs consumed by Rust is the job of R4L and doesn't require any work from the existing maintainers, who are allowed to make changes to their C that break Rust, with the Rust then being adjusted accordingly.
Influential people who have worked on the ins and outs of the Linux kernel for years and decades believe that adopting Rust (or at least keeping the Rust experiment going) is worth the pain it will cause.
That's really all that matters. I see people commenting here about how they think RAII isn't suitable for kernel code, or how keeping C and Rust interfaces in sync will slow down important refactoring and changes, or how they think it's unacceptable that some random tiny-usage architecture that Rust/LLVM doesn't support will be left behind, or... whatever.
So what! I'm not a Linux kernel developer or maintainer, and I suspect most (if not all) of the people griping here aren't either. What does it matter to you if Linux adopts Rust? Your life will not be impacted in any way. All that matters is what the maintainers think. They think this is a worthwhile way to spend their time. The people putting in the work get to decide.
What is he referring to?
[1]: https://github.com/carbon-language/carbon-lang/blob/trunk/do...
Personally, the biggest issue that gives me fear for C++'s future is that the committee seems to have more or less stopped listening to implementer feedback and concerns.
> "many people reading this might be familiar with the addition of the very powerful #embed preprocessor directive that was added to C. This is literally years of work brought about by one person, and that is JeanHeyd Meneide. JeanHeyd is a good friend and also the current editor of the C standard. And #embed started off as the std::embed proposal. Man, if only everyone in the world knew what the C++ committee did to fucking shut that shit down..."
> ... "Herb [Sutter] ... spun up a Study Group, SG15, at the recommendation of GDR to handling “tooling” in the C++ ecosystem. This of course, paved the way for modules to get absolutely fucking steamrolled into the standard while allowing SG15 to act as a buffer preventing any change to modules lest they be devoid of Bjarne [Stroustrup] and Gaby [Gabriel Dos Reis]’s vision. Every single paper that came out of SG15 during this time was completely ignored."
> "Gaby [Gabriel Dos Reis] is effectively Bjarne’s protégé. ... when it came to modules Gaby had to “prove himself” by getting modules into the language. Usually, the standard requires some kind of proof of implementation. This is because of the absolute disaster that was export template, a feature that no compiler that could generate code ever implemented. Thus, proof of modules workability needed to be given. Here’s where I bring in my personal conspiracy theory. The only instance of modules being used prior to their inclusion in the standard was a single email to the C++ mailing lists (please recall the amount of work the committee demanded from JeanHeyd for std::embed) where Gaby claimed that the Microsoft Edge team was using the C++ Modules TS via a small script that ran NMake and was “solving their problem perfectly”." ... the face she made when I asked [a Microsoft Employee] about Gaby’s statement signaled to me that the team was not happy. Shortly after modules were confirmed for C++20, the Microsoft Edge team announced they were throwing their entire codebase into the goddamn garbage and just forking Chromium... Gaby Dos Reis fucking lied, but at least Bjarne got what he wanted. ... This isn’t the first time Gaby has lied regarding modules, obviously...."
> ... "This [different] paper is just frankly insulting to anyone who has done the work to make safer C++ syntax, going on to call (or at least allude to) Sean Baxter’s proposal an “ad hoc collection of features”. Yet another case of Gaby’s vagueries where he can feign ignorance. As if profiles themselves are not ad hoc attributes, that have the exact same problem that Bjarne and others argue against, specifically that of the virality of features. The C++ committee has had 8 years (8 long fucking years) to worry about memory safety in C++, and they’ve ignored it. Sean Baxter’s implementation for both lifetime and concurrency safety tracking has been done entirely in his Circle compiler [which] is a clean room, from the ground up, implementation of a C++ compiler. If you can name anyone who has written a standards conforming C++ compiler frontend and parser and then added metaprogramming and Rust’s lifetime annotation features to it, I will not believe you until you show them to me. Baxter’s proposal, P3390 for Safe C++ has a very large run down on the various features available to us..."
> "Bjarne has been going off the wall for a while now regarding memory safety. Personally I think NASA moving to Rust hurt him the most. He loves to show that image of the Mars rover in his talks. One of the earliest outbursts he’s had regarding memory safety is a very common thing I’ve seen which is getting very mad that the definition a group is using is not the definition he would use and therefore the whole thing is a goddamn waste of time."
> "You can also look at how Bjarne and others talk about Rust despite clearly having never used it. And in specifically in Bjarne’s case he hasn’t even used anything outside of Visual Studio! It’s all he uses. He doesn’t even know what a good package manager would look like, because he doesn’t fucking care. He doesn’t care about how asinine of an experience that wrangling dependencies feels like, because he doesn’t have to. He has never written any actual production code. It is all research code at best, it is all C++, he does not know any other language."
> "Orson Scott Card didn't write Ender's Game [link] -> Ender's Game is an apologia for Hitler"
> "this isn’t a one off situation. It isn’t simply just Bjarne who does this. John Lakos of Bloomberg has also done this historically, getting caught recording conversations during the closing plenary meeting for the Kona 2019 meeting because he didn’t get his way with contracts. Ville is another, historically insulting members and contributors alike (at one point suggesting that the response to a rejected paper should be “fuck you, and your proposal”), and I’m sure there are others, but I’m not about to run down a list of names and start diagnosing people like I’m a prominent tumblr or deviantart user in 2017."
> "the new proposed (but not yet approved) Boost website. This is located at boost.io and I’m not going to turn that into a clickable link, and that’s because this proposed website brings with it a new logo. This logo features a Nazi dog whistle. The Nazi SS lightning bolts. Here’s a side by side of the image with and without the bolts being drawn over (Please recall that Jon Kalb, who went out of his way to initially defend Arthur O’Dwyer, serves on the C++ Alliance Board)."
> "Arthur O’Dwyer has learnt to keeps his hands to himself, he does not pay attention to or notice boundaries and really only focuses on his personal agenda. To quote a DM sent to me by a C++ community member about Arthur’s behavior “We are all NPCs to him”. He certainly doesn’t give a shit. He’s been creating sockpuppets, and using proxies to get his changes into the LLVM and Clang project. Very normal behavior by the way."
> "This is the state C++ is in, though as I’ve said plenty of times in this post, don’t get it twisted. Bjarne ain’t no Lord of Cinder. We’re stuck in a cycle of people joining the committee to try to improve the language, burning out and leaving, or staying and becoming part of the cycle of people who burn out the ones who leave."
But the author of that post clearly has some fairly serious mental problems.
Also it's very disjointed, long, and incoherent. Classic schizo post.
If you look at the USPTO black-and-white image, it immediately invokes the Schutzstaffel if you have ever seen their insignia, and only later do you kinda maybe see it is a "B".
For a paid professional logo design, not being aware of, like, one of the most widely known evil logos after the swastika, I mean, ok.
Plus the whole "we want to own your logo trademark please" business with regards to an open-source project; what even is actually going on there?
If you were a neo-nazi, do you think that's how you'd spend $12,000? Like is that the best bang for your buck, to maybe catch all those impressionable young men browsing the C++ boost library website, and subliminally bring them to your cause with your dashing B logo with a hidden mangled half of the SS insignia? Your local neo-nazi group would find a new treasurer immediately if you pulled a stunt like that!
Anyway, if this shocks you, wait until you find out the Windows logo has a hidden swastika.
I don't consider myself to be a Nazi; I'm nowhere near the historical definition of a Nazi, or even its modern reinterpretation. But I am 100% sure that given maybe 2 or 3 more messages, you'll call me one.
So we can end it here. I've outed myself, through various "dog whistles", that I am in fact a "nazi". And therefore there's no need to reply to me.
I accept being put on a list. Real name is in my profile.
You're reading an awful lot into what I wrote.
> But I am 100% sure that given maybe 2 or 3 more messages, you'll call me one.
> I've outed myself, through various "dog whistles", that I am in fact a "nazi". And therefore there's no need to reply to me.
I don't even know what to answer to this. I have no idea where this is coming from.
> I accept being put on a list. Real name is in my profile.
Who's putting you on what list?
I honestly have no idea what this reply has to do with anything, unless you are arguing that dog whistles don't exist? If so, there are a few "modern Nazis" that disagree with you.
- marble statues are cool
- pepe the frog is right coded, but has been used in all sorts of contexts
- milk emoji... I don't even get this one. Wasn't it a software developer meme on X?
- It's 100% OK to be white, and I will die on this hill. Anyone who tells me how I was born is not OK can die in a fire.
We're now 2 replies deep. Would you just like to denounce me as a "nazi" and get this over with? We both know that's where this is going.
I mean, hey, I paid for and registered all rights to this emblem; would you so kindly put it on everything you are making? I promise not to charge you for it.
What even is that? Were they dying for lack of a registered logo?
And no, the first thing that comes to mind when seeing the Windows logo is not "wow, Nazi symbol".
As I tried saying previously, this logo thing looks supremely fishy.
To imply that the only motive behind it all was putting SS insignia up is taking argument to an absurd extreme. Do you argue that one's ideology cannot influence such things as logo design, unless it is the sole purpose behind it?
My reaction 2 minutes later: "Oh..."
Funny, that's not Theodore Ts'o's position. The Rust guys tried to ask about interface semantics and he yelled at them:
https://lore.kernel.org/rust-for-linux/20250219170623.GB1789...
I was very confused by the lack of an actual response from Linus; he only said that social media brigading is bad, but he didn't give any clarity on the way forward on that DMA issue.
I have worked in a similar situation and it was the worst experience of my work life. Being stonewalled is incredibly painful and having weak ambiguous leadership enhances that pain.
If I were a R4L developer, I would stop contributing until Linus codifies the rules around Rust that all maintainers would have to adhere to because it's incredibly frustrating to put a lot of effort into something and to be shut down with no technical justification.
Only to Hellwig, if I understood correctly.
My best hope is for replacement. I think we've finally hit the ceiling of where monolithic kernels can take us. The Linux kernel will continue to make extremely slow progress while it deals with internal politics fighting against an architecture that can only get bigger and less secure over time.
But what could be the replacement? There's a handful of fairly mature microkernels out there, each with extremely immature userspaces. There doesn't seem to be any concerted efforts behind any of them. I have a lot of hope for SeL4, but progress there seems to be slow mostly because the security model has poor ergonomics. I'd love to see some sort of breakout here.
The amount of kernel code actually executing on any given machine at any given point in time is more likely to be around 9-12 million lines than anywhere near 40 million.
And a replacement kernel won't eliminate the need for hardware drivers for a very wide range of hardware. Again, that's where the line count ramps up.
I suppose the main objection to that is accepting some degree of lock-in with the existing userspace (systemd, FHS...) over exploring new ideas for userspace at the same time.
(disclaimer: I work on Fuchsia, on Starnix specifically)
EDIT: for extra HN karma and related to the topic of the posted email thread, Starnix (Fuchsia's Linux compat layer) is written in Rust. It does run on top of a kernel written in C++ but Zircon is much smaller than Linux.
What's the driving use case for Starnix? Well, obviously "run Linux apps on Fuchsia" like the RFC for it says... but "very specific apps as part of a specific use case which might be timeboxed" or "any app for the foreseeable future"?
How complete in app support do you currently consider it compared to something like WSL1?
What are your thoughts about why WSL2 went the opposite direction?
Thanks!
I agree! Lots of fun stuff to do.
> What's the driving use case for Starnix?
The Starnix code is open source like the rest of Fuchsia and anyone is obviously free to read it and form their own opinions about where it's useful or where it's headed, but as a mere corporate employee I can't comment on direction/strategy :(.
> How complete in app support do you currently consider it compared to something like WSL1?
I'm only familiar with WSL1 as an occasional user so I can't really say for sure.
We run (and pass) a lot of tests compiled for Linux from the Linux Test Project, gVisor's compatibility test suite, and some other sources. There are still a lot of those tests that we don't yet pass :).
> What are your thoughts about why WSL2 went the opposite direction?
I don't know much about the history there. I've heard Nth-hand rumors that MS had a product strategy shift from Windows Phone Android compat (a relatively focused use case where edge cases might be acceptable) to trying to court developers (a broad use case where varying from their deployment environment might cause problems). I have no idea whether those rumors are accurate.
I've also heard that it was hard to make Linux programs perform well on top of NTFS, and that virtualized ext4 actually worked better for Linux workloads where fs performance mattered at all. Something something dirent cache for stat()? Some of this is discussed on the WSL1 vs WSL2 web page[0].
[0] https://learn.microsoft.com/en-us/windows/wsl/compare-versio...
I suspect many rust devs tend to be on the younger side, while the old C guard sees Linux development in terms of decades. Change takes time.
Monolithic kernels are fine. The higher complexity and worse performance of a microkernel design are mostly not worth the theoretical architectural advantages.
If you wanted to get out of the current local optimum you would have to think outside of the unix design.
The main threat for Linux is the Linux Foundation, which is controlled by big tech monopolists like Microsoft and spends only a small fraction on actual kernel development. It is embrace, extend, extinguish all over again, but people think Microsoft are the good guys now.
Nope. The features are all in stable releases (since last spring, in fact). However, some of the features are still marked as unstable/experimental and have to be opted into (so could in theory still have breaking changes). They're entirely features that are specific to kernel development and are only needed in the Rust bindings layer to provide safe abstractions in a kernel environment.
seL4 has its place, but that place is not as a Linux replacement.
Modern general purpose computers (both their hardware, and their userspace ecosystems) have too much unverifiable complexity for a formally verified microkernel to be really worthwhile.
And the seL4 core architecture is fundamentally "one single big lock" and won't scale at all to modern machines. The intended design is that each core runs its own kernel with no coordination (multikernel a la Barrelfish) -- none of which is implemented.
So as far as any computer with >4 cores is concerned, seL4 is not relevant at this time, and if you wish for that to happen your choice is really either funding the seL4 people or getting someone else to make a different microkernel (with hopefully a lot less CAmkES "all the world is C" mess).
Kani et al are very interesting, but can't handle large codebases yet. I'm trying to write Rust in a very compartmentalized, sans-IO way, to have small libraries that are fuzzable and more amenable to Kani verification.
> I'm trying to write Rust in a very compartmentalized, sans-IO way, to have small libraries that are fuzzable and more amenable to Kani verification.
Good design even if Kani isn't used in the end.
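For anyone curious what that looks like, a minimal Kani proof harness over a small pure function is roughly this (an illustrative sketch; clamp_add and the property are invented for the example):

    // Illustrative sketch: a tiny sans-IO function plus a Kani harness.
    // Saturating add: clamps at u8::MAX instead of overflowing.
    fn clamp_add(a: u8, b: u8) -> u8 {
        a.checked_add(b).unwrap_or(u8::MAX)
    }

    #[cfg(kani)]
    #[kani::proof]
    fn clamp_add_never_shrinks() {
        // kani::any() produces symbolic values, so the assertion below is
        // checked for all 65536 input pairs, not a sampled subset as with
        // a fuzzer.
        let a: u8 = kani::any();
        let b: u8 = kani::any();
        let r = clamp_add(a, b);
        assert!(r >= a && r >= b);
    }

Running "cargo kani" checks the harness, and the same small function can be handed to a fuzzer unchanged, which is exactly what keeping libraries compartmentalized and IO-free buys you.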
[0] https://www.usenix.org/conference/osdi20/presentation/boos
One could also run virtual machines for end-user workloads under a Theseus design. (The other meaning, not the bytecode interpreter.) That sounds like a nice path to real-world applicability, to me. History has shown reimplementing Linux syscalls is not realistic (gVisor, WSL1).
Any ordinary well-designed microkernel gives you a huge benefit: process isolation of core services and drivers. That means that even in the case of an insecure and unverified driver, you still have reasonable expectations of security. There was an analysis of Linux CVEs a while back, and the vast majority of critical Linux CVEs to that date would either have been eliminated or mitigated below critical level just by using a basic microkernel architecture (not even a verified microkernel). Only 4% would have remained critical.
https://microkerneldude.org/2018/08/23/microkernels-really-d...
The benefit of a verified microkernel like seL4 is merely an incremental one over a basic microkernel like L4, capable of capturing that last 4% and further mitigating others. You get more reliable guarantees regarding process isolation, but architecturally it's not much different from L4. There's a little bit of clunkiness in writing userspace drivers for seL4 that you wouldn't have for L4. That's what the LionsOS project is aiming to fix.
https://docs.sel4.systems/projects/sel4/frequently-asked-que...
This all comes down to essential trust anyway. The leaps and bounds we've achieved through hardware engineering carry the burden that they aren't credible for security. You can use the IOMMU, but perhaps I won't. Integrated co-development of hardware and software is ideal, but generally there is an adversarial relationship, and we must reflect that in the software. Trust and security are not yes/no questions. We have to keep pushing boundaries. seL4 is a good start; let's make more from it.
And the performance disadvantages of a microkernel are all overblown, if not outright false [1]. Sure, you have to make twice as many syscalls as with a monolithic kernel, but you can do it with much better caching behavior, due to the significantly smaller size. The seL4 kernel is small enough to fit entirely in many modern processors' L2 cache. It's entirely possible (some chip designers have hinted as much) that with enough adoption they could prioritize having dedicated caches for the OS kernel... something that could never be possible with any monolithic kernel.
[1] https://trustworthy.systems/publications/theses_public/23/Pa...
Because we can and the security advantages are worth it.
Redox[0] has the advantage that no one will want to rewrite it in Rust.
I'd love to know where he got this impression. The new C++ features go a long way to helping make the language easier, and safer, to use.
Most C++ developers may not understand what I mean. You need to be proficient in Rust in order to understand it. When I was still using C++ as my primary language, I had the same feelings about Rust as the other C++ developers. Once you start to get comfortable with Rust, you will see it is superior to C++ and you won't want to use C++ anymore.
1. The dangerous footguns haven't gone away.
2. There are certain safety problems that simply can't be solved in C++ unless you accept that the ABI will be broken and the language won't be backwards compatible.
Circle (https://www.circle-lang.org/site/index.html) and Carbon (https://docs.carbon-lang.dev/) were both started to address this fundamental issue that C++ can't be fully fixed and made safe like Rust without at least some breaking changes.
This article goes into more depth: https://herecomesthemoon.net/2024/11/two-factions-of-cpp/
In the case of the Linux kernel, a lot of the newer features that C++ has delivered aren't _that_ useful for improving safety because kernel space has special requirements which means a lot of them can't be used. I think Greg is specifically alluding to the "Safety Profiles" feature that the C++ committee looks like it will be going with to address the big safety issues that C++ hasn't yet addressed - that's not going to land any time soon and still won't be as comprehensive as Rust.
The challenging part is making a higher-level "safe" Rust API around the C API. Safe in the sense that it fully uses Rust's type system, lifetimes, destructors, etc. to uphold the safety guarantees that Rust gives and make it hard to misuse the API.
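As a rough sketch of the shape of that work (every name here is invented; none of this is real kernel code): the extern declarations are the mechanical part, and the design effort goes into a wrapper whose types enforce the C API's documented rules:

    use std::ffi::c_void;

    extern "C" {
        fn widget_open(id: u32) -> *mut c_void; // NULL on failure
        fn widget_read(h: *mut c_void, out: *mut u8, len: usize) -> i32;
        fn widget_close(h: *mut c_void);
    }

    /// Safe handle. The raw pointer field makes it !Send/!Sync by default,
    /// mirroring a (hypothetical) "single-threaded use only" rule in the
    /// C documentation.
    pub struct Widget {
        handle: *mut c_void,
    }

    impl Widget {
        pub fn open(id: u32) -> Option<Widget> {
            let handle = unsafe { widget_open(id) };
            if handle.is_null() { None } else { Some(Widget { handle }) }
        }

        pub fn read(&mut self, out: &mut [u8]) -> Result<usize, i32> {
            // &mut self gives exclusive access; the slice proves the
            // buffer is valid for exactly out.len() bytes.
            let n = unsafe { widget_read(self.handle, out.as_mut_ptr(), out.len()) };
            if n < 0 { Err(n) } else { Ok(n as usize) }
        }
    }

    impl Drop for Widget {
        // Closing can't be forgotten and can't happen twice.
        fn drop(&mut self) {
            unsafe { widget_close(self.handle) }
        }
    }

Getting those type-level claims right for a real, decades-old C API, and keeping them right as it evolves, is the hard part this comment is pointing at.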
But the objections about Rust in the kernel weren't really about the difficulty of writing the Rust code, but more broadly about having Rust there at all.
Inadvertently, Rust makes working with C++ acceptable.
Android already uses a hardware abstraction layer for Linux written in C++ to write drivers.
It's a matter of politics to get something like this into the kernel.
As someone who has seen almost EVERY kernel bugfix and security issue for the past 15+ years (well hopefully all of them end up in the stable trees, we do miss some at times when maintainers/developers forget to mark them as bugfixes), and who sees EVERY kernel CVE issued, I think I can speak on this topic.
The majority of bugs (quantity, not quality/severity) we have are due to the stupid little corner cases in C that are totally gone in Rust. Things like simple overwrites of memory (not that rust can catch all of these by far), error path cleanups, forgetting to check error values, and use-after-free mistakes. That's why I'm wanting to see Rust get into the kernel, these types of issues just go away, allowing developers and maintainers more time to focus on the REAL bugs that happen (i.e. logic issues, race conditions, etc.)
I'm all for moving our C codebase toward making these types of problems impossible to hit, the work that Kees and Gustavo and others are doing here is wonderful and totally needed, we have 30 million lines of C code that isn't going anywhere any year soon. That's a worthy effort and is not going to stop and should not stop no matter what.
But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.
Rust also gives us the ability to define our in-kernel apis in ways that make them almost impossible to get wrong when using them. We have way too many difficult/tricky apis that require way too much maintainer review just to "ensure that you got this right" that is a combination of both how our apis have evolved over the years (how many different ways can you use a 'struct cdev' in a safe way?) and how C doesn't allow us to express apis in a way that makes them easier/safer to use. Forcing us maintainers of these apis to rethink them is a GOOD thing, as it is causing us to clean them up for EVERYONE, C users included already, making Linux better overall.
And yes, the Rust bindings look like magic to me in places, someone with very little Rust experience, but I'm willing to learn and work with the developers who have stepped up to help out here. To not want to learn and change based on new evidence (see my point about reading every kernel bug we have.)
Rust isn't a "silver bullet" that will solve all of our problems, but it sure will help in a huge number of places, so for new stuff going forward, why wouldn't we want that?
Linux is a tool that everyone else uses to solve their problems, and here we have developers that are saying "hey, our problem is that we want to write code for our hardware that just can't have all of these types of bugs automatically".
Why would we ignore that?
Yes, I understand our overworked maintainer problem (being one of these people myself), but here we have people actually doing the work!
Yes, mixed language codebases are rough, and hard to maintain, but we are kernel developers dammit, we've been maintaining and strengthening Linux for longer than anyone ever thought was going to be possible. We've turned our development model into a well-oiled engineering marvel creating something that no one else has ever been able to accomplish. Adding another language really shouldn't be a problem, we've handled much worse things in the past and we shouldn't give up now on wanting to ensure that our project succeeds for the next 20+ years. We've got to keep pushing forward when confronted with new good ideas, and embrace the people offering to join us in actually doing the work to help make sure that we all succeed together.
thanks,
greg k-h
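A toy illustration of the "APIs you can't get wrong" point from the email above (plain userspace Rust, not kernel code): in C, pairing lock() with unlock() on every path is a convention reviewers must police, while in Rust the API itself can make the unlock impossible to forget:

    use std::sync::Mutex;

    struct Counter {
        inner: Mutex<u64>,
    }

    impl Counter {
        fn increment(&self) -> u64 {
            // lock() returns a guard; the data is only reachable through
            // it, and the unlock runs automatically when the guard drops,
            // on every path out of this scope, including early returns.
            let mut n = self.inner.lock().unwrap();
            *n += 1;
            *n
        }
    }

    fn main() {
        let c = Counter { inner: Mutex::new(0) };
        println!("{}", c.increment());
    }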
C committee, are you listening? Hello? Hello? Bueller?
(Unfortunately, if they are listening it is to make more changes on how compilers should take "creative licenses" in making developers shoot themselves in the foot)
C++ (ideally, C++17 or 20 to have all the boilerplate-reducing tools) allows for all of that to be made, even in a freestanding environment.
It's just that it's not enforced (flexibility is a good thing for evergreen/personal projects, less so for corporate codebases), and that the C++ committee seems to have weird priorities from what I've read (#embed drama, modules are a failure, concepts are being forced through despite concerns etc.) and treats freestanding/embedded as a second-class citizen.
Mastering a new programming language to a degree that makes one a competent maintainer is nothing to sneeze at, and some maintainers might be unwilling to do so based on personal interests/motivation, which I'd consider a legitimate position.
I think it's important to acknowledge that not everyone may feel comfortable talking about their lack of competence or their disinterest.
Doesn't he need to do that anyway for every user of his code?
I guess the point is that he is able to review the code of every driver made in C using his API, but he can't review the Rust interface himself.
It's totally fine on a personal level if you don't want to adapt, but you have to accept that it's going to limit your professional options. I'm personally pretty surly about learning modern web crap like k18s, but in my areas of expertise, I have a multi-decade career because I'm flexible with languages and tools. I expect that if AI can ever do what I do, my career will be over and my options will be limited.
That being said, Rust comes with technical advances, and also with enough of a community that the non-technical requirements are already met. There should be enough evidence for rational but stubborn people to accept it as a way forward.
Another one that used to be well-known is s9y for Serendipity [1].
[0] https://en.wikipedia.org/wiki/Numeronym#Numerical_contractio...
[1] https://s9y.org/
I'm not saying it's never accurate*, it's just that, if you evaluate them through the site guidelines, the cost/benefit is negative.
https://news.ycombinator.com/newsguidelines.html
* (not a comment on this or any person)
> Right now the rules is Linus can force you whatever he wants (it's his project obviously) and I think he needs to spell that out including the expectations for contributors very clearly.
>
> For myself I can and do deal with Rust itself fine, I'd love bringing the kernel into a more memory safe world, but dealing with an uncontrolled multi-language codebase is a pretty sure way to get me to spend my spare time on something else. I've heard a few other folks mumble something similar, but not everyone is quite as outspoken.
He gets villainized, and I don't think all his interactions were great, but this seems pretty reasonable and more or less in line with what other people were asking for (clearer direction from Linus).
That said, I don't know, maybe Linus's position was already clear...
Calling something "cancerous" is to say it is an incurable disease that, unless stamped out with some precision, will continue to cause rot and decay. Be it correct or not, saying "The cancer that is killing HN" is pointing a finger at a problem and scapegoating all the other problems onto it.
It's colourful language for sure, but gimme a break.
Not that you have evidence of the author's state of mind?
I don't think the confusion you describe is happening.
> > >
> > > That is great, but that does not give you memory safety and everyone
> > > would still need to learn C++.
> >
> > The point is that C++ is a superset of C, and we would use a subset of C++
> > that is more "C+"-style. That is, most changes would occur in header files,
> > especially early on. Since the kernel uses a lot of inlines and macros,
> > the improvements would still affect most of the existing kernel code,
> > something you simply can't do with Rust.
I have yet to see a compelling argument for allowing a completely new language with a completely different compiler and toolchain into the kernel while continuing to bar C++ entirely. Even a restricted subset could bring safety- and maintainability-enhancing features today, such as RAII, smart pointers, overloadable functions, namespaces, and templates, and do so using the existing GCC toolchain, which supports even recent vintages of C++ (e.g., C++20) on Linux's targeted platforms.
Greg's response:
> But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.
side-steps this. Even if Rust is "better," it's much easier to address at least some of C's shortcomings with C++, and it can be done without significantly rewriting existing code, sacrificing platform support, or incorporating a new toolchain.
For example, as pointed out (and as Greg ignored), the kernel is replete with macros--a poor substitute for genuine generic programming that offers no type safety and the ever-present possibility for unintended side effects due to repeated evaluation of the arguments, e.g.:
#define MAX(x, y) (((x) > (y)) ? (x) : (y))
One need only be bitten by this kind of bug once (say, MAX(x++, y) silently incrementing x twice) to have it color your perception of C, permanently.
This simply forgets all the problems C++ has as a kernel language. It's really an "adopt a subset of C++" argument, but even that has its flaws. For instance, no one wants exceptions in the Linux kernel and for good reason, and exceptions are, for better or worse, what C++ provides for error handling.
Plenty of C++ codebases don't use exceptions at all, especially in the video game industry. Build with GCC's -fno-exceptions option.
> and exceptions are, for better or worse, what C++ provides for error handling.
You can use error codes instead; many libraries, especially from Google, do just that. And there are more modern approaches, like std::optional and std::expected:
Even if we are to accept this, we'd be back to an "adopt a subset of C++" argument.
You're right in one sense -- these are more modern approaches to errors, which were adopted in 2017 and 2023 respectively (with years for compilers to implement...). But FWIW we should note that these aren't really idiomatic C++, whereas algebraic data types are a baked-in, 1.0 feature of Rust.
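For contrast, here's what the baked-in version looks like (illustrative plain Rust, invented names): the sum type plus the ? operator is the default error-handling idiom rather than a library bolt-on:

    #[derive(Debug)]
    enum ConfigError {
        Missing(&'static str),
        Malformed(&'static str),
    }

    fn lookup(key: &'static str) -> Result<&'static str, ConfigError> {
        match key {
            "mode" => Ok("fast"),
            "mtu" => Ok("not-a-number"),
            _ => Err(ConfigError::Missing(key)),
        }
    }

    fn mtu() -> Result<u32, ConfigError> {
        let raw = lookup("mtu")?; // the error case propagates with one character
        raw.parse().map_err(|_| ConfigError::Malformed("mtu"))
    }

    fn main() {
        println!("{:?}", mtu()); // prints Err(Malformed("mtu"))
    }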
So -- you really don't want to adopt C++. You want to adopt a dialect of C++ (perhaps the very abstract notion of "modern C++"). But your argument is much more like "C++ has lambdas too!" than you may care to admit. Because of course it does. C++ is the kitchen sink. And that's the problem. You may want the smaller language inside of C++ that's dying to get out, but C++'s engineering values are actually "we are the kitchen sink!". TBF Rust's values are sometimes distinct too, but I'm not sure you've really examined just how different C++'s values are from kernel C, and why the kitchen sink might be a problem for the Linux kernel.
You say:
> RAII, smart pointers, overloadable functions, namespaces, and templates, and do so using the existing GCC toolchain
"Modern C++" simply doesn't solve the problem. Google has been very clear Rust + C++ codebases have worked well. But the places where it sees new vulnerabilities are mostly in new memory unsafe (read C++) code.
See: https://security.googleblog.com/2024/09/eliminating-memory-s...
I'm not sure there is much in your formulation.
It would seem to me to be a matter of program design, and programmer discretion, rather than a "subset of the language". Re: C++, we are saying "Don't use at least these dozen features, because they don't work well at many cooks scale, and/or they combine in ways which are non-orthogonal. We don't want you to use them because they complect[0] the code." Re: no panic Rust, we are saying "Don't call panic!(), because obviously you want a different program behavior in this context." These are different things.
And you cannot just roll your own library in a standard compliant way, because it contains secret compiler juice for, e.g. initializer_list or coroutines.
And once you use your own language dialect (with -fno-exceptions), who is to stop you from "customizing" other stuff, too?
So? The Linux kernel has freely relied on GCC-specific features for decades, effectively being written in "GCC C," with it only becoming buildable with Clang/LLVM in the last two years.
>(just look how much STL stuff
No one said you have to use the STL. Game devs often avoid it or use a substitute (like EASTL) more suitable for real-time environments.
That is unironically admirable. Either they have their man on the GCC team, or they have been fantastically lucky. In the same decades there have been numerous GCC extensions and quirks that have been removed [edit: from the GCC C++ compiler] once a new standard proclaimed them non-conformant.
So, which C++ dialect would provide tangible benefits to the freestanding, self-modifying code that is the Linux kernel, without bringing enough problems to outweigh it all completely?
RAII and templates are nice, but they come at the cost of making code multiple orders of magnitude harder to reason about. You cannot "simply" add C++ to sparse/coccinelle. And unlike Rust, the C++ compiler does not really care about memory bugs.
I mean, the C++ committee introduced "start_lifetime_as", effectively declaring all existing low-level C++ programs invalid, and made lambdas that by design can capture references to local variables and then be passed around. Why would you set yourself up to have the rug pulled out on the next C++ revision if you are not forced to?
C++ is a disability that can be accommodated, not something you do to yourself on purpose.
Did it? Wasn't that already the case before P2590R2?
And yes, a lot of the C++ lifetime model is insanity (https://en.cppreference.com/w/cpp/language/lifetime). Fortunately, contrary to the committee, compiler vendors are usually reasonable folks allowing needed low-level idioms (like casting integer constants to volatile ptr) and provide compiler flags whenever necessary.
If the entire natural inclination of the language is to use exceptions, and you don't, beginning with C++17 and C++23, I'm less sure it is the just-right fit some think it is.
> Getting programmers to adhere to it could be handled 99% of the time with a linter, the other 1% can be code by reviewers.
What is the tradeoff being offered? Additional memory safety guarantees, but less good than Rust, for a voluminous style guide to make certain you use the new language correctly?
I've personally written libraries targeting C++20 that don't use exceptions. Again, error codes, and now std::optional and std::expected, are reasonable alternatives.
> What is the tradeoff being offered? Additional memory safety guarantees, but less good than Rust, for
It's not letting the perfect be the enemy of the good. It's not having to rewrite existing code significantly, or adopt a new toolchain, or sacrifice support for any platform Linux currently supports with a GCC backend.
I never thought I would say that C++ would be an improvement, but I really have to agree with that.
Simply adopting the generic programming bits with type safety without even objects, exceptions, smart pointers, etc. would be a huge step forward and a lot less disruptive than a full step towards Rust.
I'm not sure I have an informed enough opinion on the original C++ debate, but I don't think stepping to a C++ subset while also exploring Rust is a net gain on the situation; it has the same kinds of caveats that people upset at R4L complain about muddying the waters, while also being almost entirely new and untested if introduced now[1].
[1] - I'm pretty sure some of the closed drivers that do the equivalent of shipping a .o and a shim layer compiled have C++ in them somewhere sometimes, but that's a rounding error in terms of complexity testing compared to the entire tree.
On a memory safety scale I'd put C++ about 80% of the way from C to Rust.
No it doesn't.
Quite the contrary, great care is taken so that the language stay stable. "Stability without stagnation" is one of Rust core principles.
Anyway, why just stop at Rust? If we really care about safety, let's drop the act and make everyone do formal methods. Frama-C is at least C, has a richer contract language, has heavy static analysis tools before having to go to proofs, is much more proven, and the list goes on. Or, why not add Spark to the codebase if we are okay with mixing languages in the codebase? It's very safe.
Spark doesn't have an active community willing to support its integration into the kernel and has actually been taking inspiration from Rust for access types. If you want to rustle up a community, go ahead I guess?
If we are talking about more than memory, such as what Greg is talking about in encoding operational properties, then no, Rust is far behind both Frama-C and Spark, and tons of others. They can prove functional correctness. Or do you think Miri, Kani, Creusot, and the rest of the FM tools for Rust are superfluous?
My mocking was that the kernel devs have had options for years and have ignored them out of dislike (Ada and Spark) or lack of effort (Frama-C); that other options provide better solutions to some of their interests; and that this is more a project exercise in getting new kernel blood than technical merits.
I don't think this is accurate, Rust is still totally optional. Also, the Rust folks are supposed to fix Rust code whenever it breaks due to changes on the C side. If they fail to do this, the newly-broken Rust code is supposed to be excluded from the build - up to and including not building any Rust at all.