Automatically translating C to unsafe Rust is pointless: the resulting code is harder to read, and it brings you no closer to understanding how to make the code maintainable and safe. That requires tons of manual work by someone with a deep understanding of the codebase.

The Rust community in general doesn't seem to have an answer for how to do this incrementally, either. In business terms, we have no idea how to cut the work into slices with demonstrable value, so there's no way to keep such a project on track and cut losses if it becomes too much work. It also strongly suggests you're 'stuck' with Rust when you're done: if a better, less unidiomatic C++ killer comes along later, it sounds like you either rewrite the whole thing again or give up.

I'm definitely open to wisdom on this if anyone disagrees, because it would be valuable to me and probably to most readers of this comment section.

> Automatically translating C to unsafe Rust is pointless: the resulting code is harder to read, and it brings you no closer to understanding how to make the code maintainable and safe. That requires tons of manual work by someone with a deep understanding of the codebase.

I have experience with a (nontrivial) translation of a "very unsafe" C codebase to Rust, and it's not true that there is no value in this type of work.

The first step, automatic translation from C to Rust via tooling, immediately revealed bugs in the original codebase. That alone made the exercise worth some of the time spent.

Ports from C to Rust aren't a binary choice between "all safe" and no port at all. Some projects, for example ClamAV, are adopting a mixed approach: (part or most of the) new code is written in Rust, along with some translation of existing functionality to Rust.

In general, I think that automatic porting of C to Rust is, in the real world, an academic exercise. C codebases designed without safety in mind simply need to be redesigned, so the problem is not really "how to port C to Rust" - it's first of all "how to redesign an unsafe C codebase into a safe one". Additionally, I believe that in such cases preserving the implementation details is impossible - unsafety is a design, after all.

I personally advocate for very precisely scoped ports where they can be beneficial (safety and stability); where that's not possible, I agree, it's better to abandon early.

IMO, safety and "idiomatic-ness" of Rust code are two separate concerns, with the former being easier to automate.

In most C code I've read, the lifetimes of pointers are not that complicated. They can't be that complicated, because complex lifetimes are too error prone without automated checking. That means those lifetimes can be easily expressed.

In that sense, a fairly direct C to Rust translation that doesn't try to generate idiomatic Rust, but does accurately encode the lifetimes into the type system (i.e. replacing pointers with references and Box), is already a huge safety win, since you gain automatic checking of the rules you were already implicitly following.

Here's an example of the kind of unidiomatic-but-safe Rust code I mean: https://play.rust-lang.org/?version=stable&mode=debug&editio...
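
For anyone who doesn't want to click through, here is a separate minimal sketch of the same idea (my own invented example, not the playground code): the C code's implicit "owns its next node" rule becomes Box, "borrows but never frees" becomes a reference, and the compiler now checks conventions the C code only followed by habit.

    // Hypothetical direct translation of:
    //     struct node { int value; struct node *next; };
    //     int sum(const struct node *n);
    struct Node {
        value: i32,
        next: Option<Box<Node>>, // owning pointer -> Box
    }

    fn sum(mut n: Option<&Node>) -> i32 {
        // borrowed pointer -> &Node; the compiler now checks the lifetime
        let mut total = 0;
        while let Some(node) = n {
            total += node.value;
            n = node.next.as_deref();
        }
        total
    }

    fn main() {
        let list = Node {
            value: 1,
            next: Some(Box::new(Node { value: 2, next: None })),
        };
        assert_eq!(sum(Some(&list)), 3);
    }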

If that can be automated (which seems increasingly plausible) then the need to do such a translation incrementally also goes away.

Making it idiomatic would be a case of recognising higher level patterns that couldn't be abstracted away in C, but can be turned into abstractions in Rust, and creating those abstractions. That is a more creative process that would require something like an LLM to drive, but that can be done incrementally, and provides a different kind of value from the basic safety checks.

> In that sense, a fairly direct C to Rust translation that doesn't try to generate idiomatic Rust, but does accurately encode the lifetimes into the type system (i.e. replacing pointers with references and Box), is already a huge safety win, since you gain automatic checking of the rules you were already implicitly following.

Unfortunately, there's a lot of non-trivial C code that really does not come close to following the rules of existing Safe Rust, even at their least idiomatic. Giving up on idiomaticness can be very helpful at times, but it's far from a silver bullet. For example, much C code that uses "shared mutable" data makes no effort to either follow the constraints of Rust Cell<T> (which, loosely speaking, require get or set operations to be tightly self-contained, where the whole object is accessed in one go) or check for the soundness of ongoing borrows at runtime ala RefCell<T> - the invariants involved are simply implied in the flow of the C code. Such code must be expressed using unsafe in Rust. Even something as simple (to C coders) as a doubly-linked list involves a kind of fancy "static Rc" where two pointers jointly "own" a single list node. Borrowing patterns can be decoupled and/or "branded" in a way that needs "qcell" or the like in Rust, which we still don't really know how to express idiomatically, etc.

This is not to say that you can't translate such patterns to some variety of Rust, but it will be non-trivial and involve some kind of unsafe code.
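
To make the doubly-linked-list point concrete, here is a hypothetical sketch of the usual safe workaround: Rc for the forward links, Weak for the back links, and RefCell for the runtime borrow checks the C code never had. It avoids unsafe, but it is clearly a different design from the C original, which is exactly the problem.

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // Two handles "jointly owning" a node, expressed as shared ownership.
    struct Node {
        value: i32,
        next: Option<Rc<RefCell<Node>>>,
        prev: Option<Weak<RefCell<Node>>>,
    }

    fn main() {
        let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
        let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));

        first.borrow_mut().next = Some(Rc::clone(&second));
        second.borrow_mut().prev = Some(Rc::downgrade(&first));

        // Walk forward and backward through the links.
        assert_eq!(first.borrow().next.as_ref().unwrap().borrow().value, 2);
        let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
        assert_eq!(back.borrow().value, 1);
    }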

> The Rust community in general doesn't seem to have an answer for how to do this incrementally, either.

You can very much translate C to Rust on a function-by-function basis, the only issue is at the boundary where you're either left with unsafe interfaces or a "safe" but slow interop. But this is inherent since soundness is a global property, even a tiny bit of wrong unsafe code can spoil it all unless you do things like placing your untrusted code in a separate sandbox. So you can do the work incrementally, but much of the advantage accrues at the end.
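
Here is a rough sketch of what that boundary typically looks like (hypothetical names, standing in for a C function like int count_nonzero(const int *data, size_t len)): the body becomes ordinary safe Rust, while the remaining unsafety concentrates in a thin wrapper that still speaks C's pointer-plus-length convention.

    // The actual logic is ordinary safe Rust.
    fn count_nonzero(data: &[i32]) -> usize {
        data.iter().filter(|&&x| x != 0).count()
    }

    // The exported wrapper is where the unsafety concentrates: the C caller
    // promises (data, len) is a valid, live buffer, and Rust cannot check it.
    // (A real export would also carry a no_mangle attribute for linking.)
    pub unsafe extern "C" fn count_nonzero_ffi(data: *const i32, len: usize) -> usize {
        let slice = unsafe { std::slice::from_raw_parts(data, len) };
        count_nonzero(slice)
    }

    fn main() {
        // Standing in for the C caller, just to make the example runnable.
        let buf = [0i32, 3, 0, 7];
        assert_eq!(unsafe { count_nonzero_ffi(buf.as_ptr(), buf.len()) }, 2);
    }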

> You can very much translate C to Rust on a function-by-function basis, the only issue is at the boundary

Absolutely not. There are many restrictions in Rust that will prevent that; lifetimes and global state come to mind first. Think about returning a pointer to something owned by the caller - fixing that can require massive cascading changes all over the codebase.

These are restrictions of idiomatic Safe Rust. You can use either unsafe Rust or, in many cases, less idiomatic but still Safe Rust to sidestep them. (For instance, "aliasable mutable" but otherwise valid references which can often be expressed as &Cell<T>, etc.)
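
As a small hypothetical example of that last point, a counter that several parts of a C program bump through aliasing pointers can be expressed with &Cell<T> and no unsafe at all:

    use std::cell::Cell;

    // Two aliasing handles to the same mutable counter, which &mut would forbid.
    // The price is that every access is a whole-value get or set.
    fn bump(counter: &Cell<u32>) {
        counter.set(counter.get() + 1);
    }

    fn main() {
        let hits = Cell::new(0u32);
        let a = &hits; // alias 1
        let b = &hits; // alias 2
        bump(a);
        bump(b);
        assert_eq!(hits.get(), 2);
    }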

You might still need a "massive cascading change" later on to make the code properly idiomatic once you have Rust on both sides of the boundary, but that's just a one-time thing and quite manageable.

> You can use either unsafe Rust or, in many cases, less idiomatic but still Safe Rust to sidestep them. (For instance, "aliasable mutable" but otherwise valid references which can often be expressed as &Cell<T>, etc.)

There's no doubt that one can convert C into unsafe Rust - C2Rust can automatically convert an entire C codebase into unsafe Rust.

The problem is that after such a step (which is certainly valuable), converting the code to safe Rust is typically a lot of work, which is the point of the academic research in question. Half-baked code that relies on safety workarounds doesn't provide any value to a project.

Unsafe Rust still has to follow invariants; you're just promising the compiler that it does.
Yes, clearly it's a matter of using different facilities that may only be accessible to Unsafe Rust, and changing the interface accordingly. But to state that Rust as a whole has such restrictions is not correct.
Surely if you do this, you just end up expressing your C design in different syntax?

Doing the right thing means writing different functions with different signatures. Incrementalism here is very hard, and the smallest feasible bottom up replacement for existing functionality may be uncomfortably large. Top down is easier but it tends to lock in the incumbent design.

> Surely if you do this, you just end up expressing your C design in different syntax?

Using different syntax is not pointless: the syntax allows you to express limited invariants that are expected to be comprehensively upheld by the surrounding C code. These invariants will initially be extremely broad (e.g. "this function must always get a $VALID pointer as input", for whatever values of $VALID), since they cannot be automatically checked; but they can gradually become stricter as more and more of the codebase is rewritten to be memory safe. Does this sometimes involve "cascading changes"? Yes, but much smaller ones than a from-scratch 100% rewrite into Safe Rust.
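
A hypothetical sketch of that tightening: while the callers are still C, the invariant can only be written down next to an unsafe function; once the callers are Rust, the same invariant moves into the signature and the compiler enforces it.

    // Step one: the invariant is only documented.
    //
    // Safety: `buf` must be non-null, valid for reads of `len` bytes, and
    // must stay alive for the duration of the call.
    pub unsafe fn checksum_raw(buf: *const u8, len: usize) -> u32 {
        let bytes = unsafe { std::slice::from_raw_parts(buf, len) };
        checksum(bytes)
    }

    // Step two, once the callers are Rust: the same invariant, now encoded in
    // the signature and enforced by the compiler.
    pub fn checksum(bytes: &[u8]) -> u32 {
        bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
    }

    fn main() {
        let data = b"hello";
        assert_eq!(unsafe { checksum_raw(data.as_ptr(), data.len()) }, checksum(data));
    }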

My 2c: what we need isn't a translator, but painless FFI. The FFI tools available, like cc and bindgen, produce working results most of the time, but they need [manual] wrapping.

It's kind of a similar situation (although a bit more complicated) when exposing Rust libs in Python; PyO3/maturin do the job, but you have to wrap manually.

So... I would like tools that call C code from Rust, but with slices etc. instead of pointers.

> I would like tools that call C code from Rust, but with slices etc. instead of pointers.

A slice is just a bundle of pointer + size. C raw interfaces vary on how they express the "size" part, so the point of wrapping is translating that information into whatever bespoke way is expected by the code you're working with.

Good insight! I guess I don't really understand why we can't use native types then. I don't want to keep having to write these:

    use core::sync::atomic::{compiler_fence, Ordering};

    pub fn fir_q31(
        s: &mut sys::arm_fir_instance_q31,
        input: &[i32],
        output: &mut [i32],
        block_size: usize,
    ) {
        // void arm_fir_q31(const arm_fir_instance_q31 *S, const float32_t *pSrc,
        //                  float32_t *pDst, uint32_t blockSize)
        // Parameters:
        //   [in]  S          points to an instance of the floating-point FIR filter structure
        //   [in]  pSrc       points to the block of input data
        //   [out] pDst       points to the block of output data
        //   [in]  blockSize  number of samples to process
        // Returns: none
        compiler_fence(Ordering::SeqCst);
        unsafe {
            sys::arm_fir_q31(s, input.as_ptr(), output.as_mut_ptr(), block_size as u32);
        }
    }

It's not pointless. For a start it frees you from the C toolchain so things like cross-compilation and WASM become much easier.

Secondly, it's a sensible first step in the tedious manual work of idiomatic porting. I'm guessing you didn't read the article but it's about automating some of this step too.

The article doesn't address the hard problem of figuring out array sizes. There's some work going on as part of the DARPA TRACTOR program to work on that. This area, of course, is the usual cause of buffer overflows.

The goal is to convert C pointers to Rust arrays, pointer arithmetic to Rust slices, and array allocations to Vec initialization. The hard problem is figuring out the sizes of arrays, which is going to require global analysis down the call chain.
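
For illustration (my own sketch, not from the article), this is the target shape for the easy part; all of the difficulty lies in discovering the length automatically.

    // C before:
    //     int sum(const int *p, size_t n) {
    //         int s = 0;
    //         for (size_t i = 0; i < n; i++) s += p[i];
    //         return s;
    //     }
    //     int *buf = malloc(4 * sizeof(int));
    // Rust after: the bound travels with the pointer as a slice, and the
    // allocation becomes a Vec.
    fn sum(p: &[i32]) -> i32 {
        p.iter().sum()
    }

    fn main() {
        let buf: Vec<i32> = vec![1, 2, 3, 4];
        assert_eq!(sum(&buf), 10);
    }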

If you're going to publish papers on this, please address that problem.

Of course, once you have identified the bounds of each pointer, you could just do bounds checking in C.
That's not actually sufficient in the general case, where the pointer may not match the type of the underlying object. You also have to respect strict aliasing even if the bounds are correct. This isn't true in the same way in Rust, because memory is untyped; you only need to ensure basic memory validity (range, initialization, alignment, etc.).
Yes, you also do not want to do random casts, but that is even easier. I do not get your point about memory validity in Rust. What if you write where a pointer is stored, or even a boolean?
I'm talking about type punning specifically here. There's a lot of old C code out there that stores everything in int * buffers and casts pointers back to the correct type. I'm even aware of one toolchain for a widely used MCU that typedef'd char to int (i16).

I believe this would be legal in Rust today if you respected the other rules, with the caveat that it wouldn't be remotely idiomatic or possible without unsafe.
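
For instance, a hypothetical sketch of that pattern in today's Rust: storing a u32 in an untyped byte buffer and reading it back through a cast pointer is fine as long as the access stays in bounds and handles alignment, but it still needs unsafe.

    fn main() {
        let mut buf = [0u8; 8];
        // No strict-aliasing rule to violate; the cast pointer keeps the
        // buffer's provenance, and the unaligned accessors handle alignment.
        let p = buf.as_mut_ptr() as *mut u32;
        unsafe {
            p.write_unaligned(0xDEAD_BEEF);
            assert_eq!(p.read_unaligned(), 0xDEAD_BEEF);
        }
    }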

Rust does not have strict aliasing, that’s correct.
But Rust still has trap representations, right? In practice, this implies constraints similar to strict aliasing.
Rust has pointer provenance which implies very similar constraints to the "typed memory" wording of C/C++.
This isn’t correct. Just because Rust has aliasing rules doesn’t mean they’re the same sorts of rules.

C and C++ are also looking to adopt more formal provenance rules.

Does it? It's very unclear to me whether something like type punning is prohibited by provenance today. The docs don't provide much clarity, and the comments I can find by ralf suggest the details are undecided. I can't imagine it won't be eventually prohibited since we already have hardware designs prohibiting it and it's a terrible code pattern to begin with, but I don't know if the language currently does so.
The code I've seen that was autotranslated from C to Rust has an absolutely hopeless number of unsafe statements.

You're better off using Fil-C.

Fil-C is an innovative approach and a great technical achievement. However, I wouldn't suggest that it is a universal solution without caveats. For instance, the performance penalty of up to 4x is not acceptable in a lot of cases.

Also, the c2rust output is rough but not hopeless: There are real world success stories of rust projects that were bootstrapped via c2rust, e.g. https://tweedegolf.nl/en/blog/151/translating-bzip2-with-c2r...

bzip2 is tiny, has relatively low overhead in Fil-C (forget exactly what it is but not 4x), and last I checked this Rust version still has >100 uses of unsafe.
Fil-C doesn't stop the data race problems the borrow checker would catch, does it?

Has anyone tried pointing an agentic AI at recreating a C utility by looking only at the man page and using differential fuzzing? It isn't a port, so no licensing issues, and the code wouldn't use unsafe, and would presumably be more idiomatic. I have no idea if it would ever complete, or just get stuck in an endless loop. Or even if it did succeed, how many joules it would use.

> data race problems

No, Fil-C just makes races memory safe.

Also, this is sort of changing the topic a bit, since bzip2 is single threaded.

Even if it weren't single threaded, it would probably have used fine-grained, OpenMP-style multithreading, which runs into far fewer issues. I was just making sure I understood what Fil-C was doing; I hadn't heard of it. It seems like a great thing.
I would assume that these two use cases are basically completely separate.

Auto-translate from C to Rust would serve as a great step to starting a porting project. Now you can incrementally re-write the "basically C" auto-ported code to "proper Rust" without dealing with FFI and other pains that come from function-by-function ports.

Fil-C is great for running software that you don't want to port. (Or don't yet have the resources to port.)

Interestingly there is probably a gap between the two. When your project is pure C you can use Fil-C. However I don't think Fil-C supports Rust. So assuming that the initial C to Rust translation doesn't produce 100% safe code (I'm not aware of any current tools that do this) you have this middle state where you can no longer compile with Fil-C but have lots of unsafe Rust code. So maybe there is a use case for Fil-Rust where you compile your Rust program so that even unsafe blocks are in fact safe. This could be used until you complete the port.

Wonder if it would be better to auto-translate to broken Rust, i.e. forcing the user to fix the memory issues. I imagine that would lead to pretty big refactors in some cases though.
No. What comes out of C2Rust is awful. The Rust that comes out reads like compiler output. Basically, they have a library of unsafe Rust functions that emulate C semantics. Put in C that crashes, get Rust that crashes in the same way. Tried that on a JPEG 2000 decoder.
I find it funny AF that Fil-C is safer than languages with the unsafe keyword. Who knew C could be so safe with a proper compiler
It is well known that GC allows you to solve memory safety problems
> proper compiler

Not just the compiler but the GC as well. So it does not solve the same problem as Rust.

Would you rather have a gc or unsafe?

In just about every language I've seen people use .clone() rather than deal with the problem, so I suspect that in a lot of cases a GC can be just fine or even faster. Although I'm comfortable with memory management and would rather use C or C++ if I'm writing fast code.

> Would you rather have a gc or unsafe?

Like in a case where you can't use Rust (i.e. an existing codebase)? Sure, that is what Fil-C is good for. The point is that Fil-C does not solve the problem Rust does. It is more like a band-aid.

Also, I think there is a huge difference between a GC and the fact that some people use .clone() somewhere.

The memory model of C was intentionally designed to allow safe implementations (a holdover from the days of segmented-memory hardware).
Could you expand on that?
I believe the claim is that there's nothing in the C standard that requires implementations to be unsafe. If they wanted to, they could bounds check pointers, check allocations are still alive when pointers are dereferenced, etc. and still be conformant to the standard.
Nothing in the C standard requires bytes to have 8 bits either.

There's a massive gap between what C allows, and what real C codebases can tolerate.

In practice, you don't have room to store lengths alongside pointers without disturbing sizeof and pointer<->integer casts. Fil-C and ASan need to smuggle that information out of band.

Even more, certain rules are specifically designed to make such checks possible while being conformant to the standard.
Has anyone tried Claude Code with explicit instructions to create idiomatic code and avoid unsafe statements?
In a way this is strange, because there is a huge new area of vulnerabilities caused by LLMs writing code that dwarfs the read/write-out-of-bounds issues C has.
I agree.

But on the other hand, let's not kid ourselves: array out of bounds, use after free, resource leaks, and a bad type system - and that isn't even close to an exhaustive list of C's downsides. Beyond its direct limitations, C inspires an approach that is vastly inferior even if you follow all the best practices. Even compared to (modern) C++ it's much worse. I say this even though I kind of like C.

If the approaches described in the article save us 30% of the effort of translating C codebases to Rust, it's still worth trying; we're unfortunately not very close to complete automation, but that's something worthy of pursuit.

The code needs to pass integrity checks of the safe Rust subset, which is a different challenge than writing dangerous code without feedback.
I understand the issues related to LLMs leaking and re-distributing "private" information, but I'm curious which category of concerns you're referring to. Would you mind giving some context (genuinely curious)?
The other direction might be more interesting, in case Rust drops in popularity in a couple of years, leaving behind a bunch of "let's rewrite it in Rust" efforts.
I am not convinced that anyone would take a working Rust project and rewrite it in C. I don't see any good reason to do so.

When Rust loses popularity, and it is going to happen eventually, I would bet it will be in favour of a newer and more promising programming language. Not C.

I think Rust has hit critical mass. It's now basically the default choice for something you want to perform well but want to be reasonably secure. For example, uv in the python ecosystem.
If you read HN you might get that impression, but the vast majority of software that needs security and good performance is being written in Java.
I wouldn't be surprised if that was closer to the truth. A heck of a lot of boring software runs on the JVM. That said, it's a slightly different niche from command line tools.
If you were building a programming language, would you write it in Java or Rust?
Graal and Truffle make the JVM look attractive, especially for this case!
I'm not personally a fan of Java, but if I was implementing a compiler, I'd pick a language with GC. There's pretty much no downside to a GC in that context, and it gives you more flexibility when working with graph data structures.

If 'building a programming language' means writing an interpreter or VM, then I can see the attraction of Rust for that case. But writing interpreters and VMs is like 0.0001% of the programming that gets done in the world.

Rust is the clear winner of the LLM era. With code generation being so effortless, why would you write in any other language?
I don't use LLMs, but I've heard people complain that current LLMs are not good at writing Rust.
Current LLMs are not good at writing any language you actually understand, unless you do so much of the work that you might as well have written the whole program yourself.

They're excellent at doing things I'm not an expert at, though! https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

We should make calculators like this for kids to learn on. Every so often it makes mistakes that you will spot if you could have done the arithmetic yourself and are just saving time. That is where AI code is at right now.
This is exactly why I don't trust LLMs (and therefore why I don't use them). When dealing with something I know about I can see the many mistakes they make - I would have to be a complete fool to trust them to do better on subjects I don't know about.
Yeah, that narrative was popular last year. You can't go wrong with LLMs on Rust.
Maybe I'm doing it wrong (using a variety of models on GitHub Copilot) but in complex tasks I often find that they give me code that doesn't quite compile (often due to lifetime errors, sometimes other issues)
Try agents like Claude Code. My experience was that the initial code was conceptually correct with some type errors on the first pass. It then iterated on compile errors about 6 times, tweaking the code to resolve the issues. Then it compiled and ran correctly.

This was about 500 lines of working rust in about 10 minutes, approximately 25x my pace at writing rust. (I’m a bit of a beginner.)

That narrative is still popular with LLMs themselves. If you ask an LLM whether it can code Rust, it will tell you that it can but not very well.

They're good at web languages, python, and C/C++. As far as I can tell Rust works if you're already good at Rust and you can catch its screwups and strange architecture choices quickly.

New chips will always have a C compiler available long before anything else.
I would assume that an LLVM backend is created for new chips and then C is not the only thing getting support. There's very little point in just supporting C in that sense.
That doesn't seem to have been an issue for recent new CPU architectures. RISC-V has excellent Rust support for example.
Not really. Rust still doesn't support Arm SVE or RVV intrinsics.
I suppose so. I'd see that as more of a missing Rust language feature (SIMD support is still immature) rather than a platform support issue though.
Compile speed maybe the only one. But hopefully that keeps becoming less of a difference
That would also help use Rust in platforms that only have a C compiler.
People have used mrustc like that to put Rust on a C64. The number of targets that make sense from a word-length perspective and aren't already supported by LLVM is pretty small, I think? You aren't going to compile Rust for some fixed-point DSP where a long is 48 bits. And the C that anything like this is likely to generate won't compile in whatever odd not-quite-ANSI C compiler the chip maker provides.
That could be interesting. If some new language or tool appears that automatically figures out the correct lifetime and ownership of the resources in your program, people (might be the same people) will call for rewrites from Rust into the new language, as you would no longer have to assign memory ownership manually.
I just upgraded my Ubuntu to the new version with the Rust-written coreutils - this is insane:

    % size /usr/bin/ls
       text    data     bss     dec     hex filename
    10086795  731540    2104 10820439  a51b57 /usr/bin/ls

    % ls -sh /usr/lib/cargo/bin/coreutils/ls
    11M /usr/lib/cargo/bin/coreutils/ls

    % du -sh /usr/bin
    1.5G /usr/bin
The entire rust coreutils package, as installed, is 12 MB (https://packages.ubuntu.com/questing/rust-coreutils), which is nearly double the gnu coreutils package but still a complete nothing burger: https://packages.ubuntu.com/questing/gnu-coreutils

I think what's happening here is that they've all been compiled into one binary, and that one binary is then hardlinked to a variety of names like /usr/bin/ls, since they all show the same inode and the same size.

The other 1.5G of your 1.5G /usr/bin is unrelated to rust coreutils.

You are absolutely right!

    % du -sh /usr/lib/cargo/bin/
    13M /usr/lib/cargo/bin/
It's just a bit odd that they went for hard links instead of soft links; it makes it harder to tell that it's all the same file.
The key invention here would be to translate from idiomatic C to idiomatic - safe - Rust.

That also sounds exactly like the kind of invention that would make me fear for my job and claim AGI has all but arrived.

Just syntactically translating C code to mostly unsafe or non-idiomatic Rust seems like a pretty pointless exercise?
