Just doing memcpy or mmap would be even faster. But the same Rust advocates who brag about Rust's speed frown upon such insecure practices in C/C++.
They changed the persistence system completely. It looks like it went from a generic solution to something specific to what they're carrying across the wire.
They could have done it in Lua and it would have been 3x faster.
I wonder if it's just poorly worded and they meant to say something like "Replacing Protobuf with some native calls [in Rust]".
> Protobuf is fast, but not using Protobuf is faster.
The blog post reads like an unserious attempt to repeat a Rust meme.
Notably, Protobuf 2, a rewrite of Protobuf 1. Protobuf 1 was created by Sanjay Ghemawat, I believe.
[1] - https://github.com/protocolbuffers/protobuf: Google's data interchange format
[2] - https://github.com/google/flatbuffers: Also maintained by Google
AFAIK they have a bunch of production infra on protobuf/gRPC - not so sure about FlatBuffers, which came out of the game-dev side. That's the difference-maker to me: where a project is actually rooted.
If you've worked on Go projects that import Google's protobuf / gRPC / Kubernetes client libraries, you are often reminded of that fact.
It sounds weird, and it's totally dependent on your use case, but binary serialization can make a giant difference.
For me, I work with 3D data which is primarily tightly packed floats & ints. I have a bunch of options available:
1. JSON/XML: readable, easy to work with, relatively bulky (though not as bad as people think if you compress), no random access, slow floating-point parsing, extensible.
2. JSON/XML + base64: easy to work with, quite bulky, no random access, faster parsing, but no structure, extensible.
3. Manual binary serialization: hard to work with, OK size (especially compressed), random access, optimal parsing, not extensible (see the sketch after this list).
4. Flatbuffers/Protobuf/Cap'n Proto/etc.: easy to work with, great size (especially compressed), random access, close-to-optimal parsing, extensible.
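To make option 3 concrete, here's a minimal sketch of manual binary serialization for tightly packed floats. The format (a little-endian u32 count followed by raw f32s) and the function names are my own invention for illustration:

    // Hypothetical format: u32 little-endian count, then raw f32 values.
    fn write_floats(values: &[f32]) -> Vec<u8> {
        let mut buf = Vec::with_capacity(4 + values.len() * 4);
        buf.extend_from_slice(&(values.len() as u32).to_le_bytes());
        for v in values {
            buf.extend_from_slice(&v.to_le_bytes());
        }
        buf
    }

    fn read_floats(buf: &[u8]) -> Option<Vec<f32>> {
        let count = u32::from_le_bytes(buf.get(..4)?.try_into().ok()?) as usize;
        buf.get(4..4 + count * 4)?
            .chunks_exact(4)
            .map(|c| Some(f32::from_le_bytes(c.try_into().ok()?)))
            .collect()
    }

The fixed layout is what buys you random access: float i always lives at byte offset 4 + 4*i, so you can read it without parsing everything before it.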
Basically, if you care about performance, you'd ideally control the binary layout of your data yourself; but you generally don't want to design extensibility and random access on your own, so you end up sacrificing some performance for that by choosing a library.
We are a very regular-sized company, but our 3D data spans hundreds of terabytes.
Having a way to describe your whole API and generate bindings is a godsend. Yes, it can be done with JSON and OpenAPI, but there it isn't mandatory.
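As an illustration (not from the parent comment): with something like the `prost` crate in Rust, a schema-described message becomes a plain struct with encode/decode for free. Normally prost-build generates this from a .proto file; the `Point` message here is hypothetical:

    use prost::Message;

    // Hypothetical message; prost-build would normally generate
    // this struct from a .proto schema.
    #[derive(Clone, PartialEq, ::prost::Message)]
    pub struct Point {
        #[prost(float, tag = "1")]
        pub x: f32,
        #[prost(float, tag = "2")]
        pub y: f32,
        #[prost(float, tag = "3")]
        pub z: f32,
    }

    fn main() {
        let p = Point { x: 1.0, y: 2.0, z: 3.0 };
        // The schema gives you wire-format encode/decode for free,
        // in every language you generate bindings for.
        let bytes = p.encode_to_vec();
        let back = Point::decode(bytes.as_slice()).unwrap();
        assert_eq!(p, back);
    }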
I wrote memory-mapping-oriented protobuf software... in assembly. Then what? Am I allowed to say I'm going 1000 times faster than Rust now???
But I would just increase the stack size limit if it ever became a problem. As far as I know, the only reason it is so small is address-space exhaustion, which only affects 32-bit systems.
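For what it's worth, you don't even need to touch the process-wide limit: the standard library lets you pick a stack size per thread. A sketch, with a made-up recursive function just to eat stack frames:

    use std::thread;

    // Made-up deeply recursive function for demonstration.
    fn depth(n: u64) -> u64 {
        if n == 0 { 0 } else { 1 + depth(n - 1) }
    }

    fn main() {
        // Only the recursive work pays for the bigger allocation.
        let handle = thread::Builder::new()
            .stack_size(64 * 1024 * 1024) // 64 MiB instead of the default
            .spawn(|| depth(100_000))
            .expect("failed to spawn thread");
        println!("{}", handle.join().unwrap());
    }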
The `become` keyword has already been reserved and work continues to happen (https://github.com/rust-lang/rust/issues/112788). If you enable #![feature(explicit_tail_calls)] you can already use the feature in the nightly compiler: https://play.rust-lang.org/?version=nightly&mode=debug&editi...
(Note that enabling release mode on that link will have the compiler pre-calculate the result, so you need to set it to debug mode if you want to see the assembly this generates.)
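For a self-contained taste, a sketch along the lines of that playground link (nightly-only, and the feature could still change):

    #![feature(explicit_tail_calls)]

    // With `become`, the compiler must reuse the current stack frame
    // (or reject the program), so this runs in constant stack space
    // even without optimizations.
    fn sum_to(n: u64, acc: u64) -> u64 {
        if n == 0 {
            return acc;
        }
        become sum_to(n - 1, acc + n)
    }

    fn main() {
        // Deep enough that a plain `return` recursion in a debug
        // build would likely blow the default stack.
        println!("{}", sum_to(10_000_000, 0));
    }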
Isn't that just TCO or similar? Usually a part of the compiler/core of the language itself, AFAIK.
So I think there's value in providing it as an explicit opt-in; that way when you're reading the code, you know to account for it when you're looking at backtraces.
Additionally, if you're relying on TCO, it might be a major bug if the compiler isn't able to apply it - and optimizations that aren't applied are normally invisible. With an explicit keyword, you can get an error if you're expecting TCO and you or the compiler screwed something up.
Suppose I have a recursive function f(n: u8) where f(0) is 0 and otherwise f(n) is n * bar(n) + f(n-1).
I might well write that with a local temporary to hold bar(n) before doing the sum, but this would inhibit TCO because that temporary nominally still exists across the recursive call, even though it doesn't matter in practice.
A compiler could try to cleverly figure out whether it matters, destroy that local temporary earlier, and then apply TCO. But now your TCO is fragile: a seemingly minor code change might fool that "clever" logic by making the early drop incorrect, silently breaking your optimisation.
The `become` keyword is a claim by the programmer that we can drop all these locals and do TCO. Because the programmer claimed this should work, they're giving the compiler permission to attempt the early drop - and if it doesn't work and the call can't be TCO'd, to complain that the program is wrong.
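Connecting that back to the example: a sketch of the accumulator rewrite (nightly-only, with a hypothetical `bar`) where the tail call is explicit and the temporary is consumed before the call:

    #![feature(explicit_tail_calls)]

    // Hypothetical stand-in for the bar(n) from the comment above.
    fn bar(n: u8) -> u64 {
        n as u64 + 1
    }

    // f(n) = n * bar(n) + f(n - 1) has a pending addition after the
    // recursive call, so as written it isn't a tail call at all.
    // Threading an accumulator through makes the recursion a real
    // tail call that `become` can enforce:
    fn f(n: u8, acc: u64) -> u64 {
        if n == 0 {
            return acc;
        }
        let term = n as u64 * bar(n); // a Copy value, safe to drop early
        // `become` asserts all locals can die before the call. If
        // `term` had to stay alive across the call (say, because the
        // argument borrowed it), the compiler would reject this line
        // instead of silently skipping TCO.
        become f(n - 1, acc + term)
    }

    fn main() {
        println!("{}", f(10, 0));
    }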