I really don't know how LLVM picks between branches or conditional moves, but my guess is that it doesn't assume that float equality is any less likely than other conditions, and some optimization pass in O3 turns unpredictable branches into conditional moves. I base this on the fact that adding std::hint::unlikely to the "equal" branch produces the same assembly for the function in both modes.
https://godbolt.org/z/erGPKaPcx
Whether it's safe to assume in general that float equality is unlikely for the purpose of program optimization, I'll leave to the compiler engineers. If you know the data your program will be handling, adding hints will avoid these surprises.
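For reference, here's a rough sketch of what such a hint looks like on a float comparison. This is my own illustration, not code from the thread: the function is hypothetical, std::hint::unlikely is nightly-only at the time of writing, and the exact feature gate name is my assumption.

    #![feature(likely_unlikely)] // assumed nightly feature gate name
    use std::cmp::Ordering;
    use std::hint::unlikely;

    // Compare two distances, hinting that the "equal" case is rare.
    fn cmp_dist(a: f32, b: f32) -> Ordering {
        if unlikely(a == b) {
            Ordering::Equal
        } else if a < b {
            Ordering::Less
        } else {
            // NaN also falls through to here; this sketch ignores that.
            Ordering::Greater
        }
    }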
__builtin_expect_with_probability(..., 0.5)
https://github.com/protocolbuffers/protobuf/commit/9f29f02a3...
It ended up being > 2x faster in the debug build, but 2x-5x slower in the release build (??!!?) [1]. I haven't learned much about compilers/"lower level" C++, so I moved on at that point.
How it worked:
1.) The P.Q. created a vector and resized it to known bounds.
2.) The P.Q. kept track of and updated the "active sorting range" each time an element was inserted or popped.
2B.) So each time an element was added, it used the next unused vector element and updated the ".end" of the range to sort
2C.) Each time an element was removed, it updated the ".start" of the range
3.) In theory this should have saved reallocation overhead (see the sketch below).
[1] I believe Visual Studio uses -O0 for debug, and -O2 for release.
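For what it's worth, here is a minimal Rust sketch of the scheme described above. It's my own reconstruction with made-up names (the original was presumably C++, per the Visual Studio note), not the original code:

    // A preallocated Vec plus a sliding [start, end) "active" window that is
    // re-sorted on every push. Elements are (priority, value) pairs and the
    // smallest priority pops first (the min-ordering is my own choice).
    struct OffsetQueue {
        buf: Vec<(u32, u32)>, // (priority, value)
        start: usize,         // index of the first live element
        end: usize,           // one past the last live element
    }

    impl OffsetQueue {
        fn with_capacity(cap: usize) -> Self {
            // Resize up front to the known bound so pushes never reallocate.
            OffsetQueue { buf: vec![(0, 0); cap], start: 0, end: 0 }
        }

        fn push(&mut self, priority: u32, value: u32) {
            // Use the next unused slot and grow the active window
            // (assumes the known bound is never exceeded).
            self.buf[self.end] = (priority, value);
            self.end += 1;
            // Re-sort only the active range, comparing on the priority half
            // of the pair (the "custom sort" mentioned below).
            self.buf[self.start..self.end].sort_by_key(|&(p, _)| p);
        }

        fn pop(&mut self) -> Option<(u32, u32)> {
            if self.start == self.end {
                return None;
            }
            let item = self.buf[self.start];
            self.start += 1; // shrink the active window from the front
            Some(item)
        }
    }

Note that re-sorting the whole active range on every push is O(n log n) per insert, versus O(log n) for a binary heap, so the preallocation saving competes against much more sorting work.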
When adding an item, it gets added to the next unused vector element. The sorting range end offset gets updated.
Then it sorts (note you would actually need a custom comparator, since each priority-queue element is a (priority, value) pair)
std::sort( vec.begin() + startOffset, vec.begin() + endOffset ) [1]
Adding an item would be something like: endOffset++; vec.insert( vec.begin() + endOffset, $value ); [1]
Or maybe I just used endOffset++; vec[endOffset] = $value; [1]
Popping an item: startOffset++;
[1] I'm writing this from memory from my attempt many months ago. It may have mistakes, but it should communicate the gist.
I found that surprising, but not as surprising as suddenly reading about Rust!
The second one is your problem. Haswell is over a decade old now. Almost nobody owns a CPU that old. -O3 makes a lot of architecture-dependent decisions, and tying yourself to an antique architecture gives you very bad results.
https://godbolt.org/z/oof145zjb
So no, Haswell is not the problem. LLVM just doesn't know about the dependency thing.
I have a box that old that can run image diffusion models (I upgraded the GPU during covid.)
My point being that compiler authors do worry about “obsolete” targets because they’re widely used for compatibility reasons.
Since we're using Rust there's some decent syntactic sugar for this function you can use instead:
let cmp = |other: &Neighbor| -> Ordering {
    other.dist.partial_cmp(&neighbor.dist).unwrap()
        .then_with(|| other.id.cmp(&neighbor.id))
};
You probably won't get any CMOVs in -O3 with this function because there are NaN issues with the original code.
Just curious, since perf alone doesn't seem to be the factor.
https://browser.geekbench.com/processors/intel-core-i7-2600k
https://browser.geekbench.com/processors/intel-core-i9-14900...
The only compelling reason that I want to upgrade my Sandy Bridge chip is AVX2.
So it is the instruction set, not perf. Sure, there will be improved performance, but most of the things that are actually performance issues are already handed off to the GPU.
On that note, probably ReBAR and PCIe 4 as well, but those aren’t dramatic differences; if the CPU is really a problem (renders/compilation) then the work gets offloaded to different hardware.
When the numbers are that far apart, there is definitely room to perceive a performance improvement.
2011 era hardware is dramatically slower than what’s available in 2025. I go back and use a machine that is less than 10 years old occasionally and it’s surprising how much less responsive it feels, even with a modern high speed SSD drive installed.
Some people just aren’t sensitive to slow systems. Honestly a great place to be because it’s much cheaper that way. However, there is definitely a speed difference between a 2011 system and a 2025 system.
The big exceptions are things like “apt get upgrade”, but both boxes bottleneck on starlink for that. Modern games and compilation are the other obvious things.
> Modern games and compilation are the other obvious things.
I mean, if we exempt all of the CPU-intensive things, then the speed of your CPU doesn’t matter
I don’t have a fast CPU for the low-overhead things, though. I buy one because the speedup when I run commands or compile my code adds up across the 100 or 1000 little CPU-intensive tasks I do during the day. A few seconds or minutes saved here and there, multiplied by 100 or 1000 times per day and then by 200 working days per year, makes upgrading a CPU (after over a decade) very high on the list of ways you can buy more time. I don’t care so much about something rendering in 1 frame instead of 2 frames, but when I type a command and have to wait idly for it to complete, that’s just lost time.
That’s different than acknowledging that newer hardware is faster but deciding current hardware is fast enough.
Then again, not many Sandy Bridge mobos supported NVMe.
Only thing I'd want is a higher resolution display that's usable in daylight, and longer battery life.
Good post though.
Almost everything that people think is ugly about Rust's syntax exists for very specific reasons. Most of the time, imo Rust made a good decision, and is just making something explicit.
Some things take time to get used to (e.g. if let), but for most people that's less an issue of syntax, and more an issue of not understanding a powerful feature (e.g. pattern matching deconstructions).
I understand pattern matching deconstructions; I have seen them in other languages. Funnily enough, they were nowhere near as ugly / noisy / complicated as Rust's. Rust seems to have bolted on a lot of fancy shit that may be appealing to a lot of people, and that is it.
In your link, the first one is fugly, the last one is fine. Maybe Rust just encourages ugly (i.e. complicated) code a bit too much.
And despite that I do use Rust when I want something simple to deploy/deliver, as handing over a binary that just runs is such a nice experience, and it's real easy to make fast. As long as I don't have to maintain it long-term, Rust is fine for what it is.
I don't know, it feels like you're just saying that you don't like it, missed the point of the post, and are not giving us anything concrete. Can you list a very clear example of how you'd improve the syntax?
Again, see the post: You can remove things, but you're losing explicitness or other things. If you want a language that's more implicit, this is fine. I don't.
I really dislike them. Makes me wonder if you just got distracted and forgot to finish the function. Be explicit, don't make me have to spend time figuring it out.
let Coordinates(x, y) = get_coords();
But this is intended for irrefutable patterns, ones that always match. If the pattern can fail to match, like with an Option, then you can use let ... else:
let Some((x, y)) = get_coords() else { return };
if let is just an extension of this "let pattern" system. Once you internalize how patterns work (and they really work everywhere), it all starts to really make sense and feels a lot cleaner.
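A tiny self-contained illustration of both forms, assuming a hypothetical get_coords that returns an Option:

    fn get_coords() -> Option<(i32, i32)> {
        Some((3, 4)) // hypothetical source of coordinates
    }

    fn demo() {
        // `if let`: run the block only when the pattern matches.
        if let Some((x, y)) = get_coords() {
            println!("got {x}, {y}");
        }

        // `let ... else`: bind the names or bail out early.
        let Some((x, y)) = get_coords() else { return };
        println!("still got {x}, {y}");
    }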
The type of a variable and the return type of a function are the most important pieces of information regarding those, so they ought to be on the left. It also fits the data flow going right to left (i.e. how the '=' operator works). C's type declarations can get pretty gnarly, so there is definitely room for improvement there. I would say Java (and C#) got it as close to perfect as possible.
If you want to replace C/C++ you should make your language look as inviting as possible to users of the old one. I truly think this strange fetish for putting types on the right is what gives C programmers the ick when they see Rust for the first time and is hampering its adoption.
Each to their own, I guess.
It took me about 2 years to feel somewhat comfortable but I'd still run into code where someone decided to use their own set of unconventional favourite features, requiring me to learn yet-another-way to do the same thing I had seen done in other ways.
I just got tired of it, didn't feel more productive nor enlightened...
The insert function, for what it’s worth, has nonstandard formatting; the de facto standard rustfmt tool would use more newlines, making it clearer and less dense. The function also uses a couple of syntactic features like `if let`, which may seem unfamiliar at first but become second nature in a few days at most.
^* I don't particularly enjoy the Ruby-borrowed || either, especially because Alt-7 is a terrible key combination to type, but oh well.
On the QWERTY keyboard, the pipe is near the enter key and easily typed.
(This is how we end up with German cars, isn't it?)
Exactly which syntax should every language be using then? Everyone will give you a different answer.
> Randomly chosen bolts to use left hand threads just because it suits their arbitrary whim
Claiming the syntax was chosen randomly on a whim is very much not true: https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html
And there are times when it does make sense to use left-hand threads.
Just because someone looks at new syntax and doesn't immediately understand it doesn't mean that syntax doesn't exist for good reason.
Also, if your language's constructs and semantics don't exactly match those in other languages, then giving them the same syntax would be actively misleading, and therefore a design flaw.
Syntax is also only an issue for people who haven't taken the time to learn the language. If you haven't learned the language, then familiar syntax isn't going to help you anyway. You'll just think you understand something you don't.
But, like, even then, there's the screw drive (JIS vs Phillips vs Pozidriv vs a ton more) to worry about with screws. (never mind the obvious aspects of diameter & length; and whatnot else I happen to not know about)
The significant thing, I think, is that such trivial, impactless questions just do not exist for programming languages. Even something as simple as the comment-start character is a messy question - "//" nicely aligns with "/* */" using the slash char, but "#" is a char shorter, and interpreted languages need to handle "#!" as a comment for shebangs anyways. And things only get more complicated from there.
And, unlike with physical things, it's trivial to work with non-standardized things in software. Not like you're gonna copy-paste arbitrary code from one language into another and just expect it's gonna work anyway (else we wouldn't have more than one language).
I also don't think syntax is even a problem worth worrying about. It takes very little time to become accustomed to a new syntax. It's the semantics that are hard to learn. Unfamiliar syntax can often be helpful when learning a new language, as it serves as a reminder that the semantics are different.
(defun fib (n)
  (declare (type (Integer 0 100) n))
  (the (Integer 0 *)
       (if (< n 2)
           n
           (+ (fib (- n 1))
              (fib (- n 2))))))
I actually find the Rust syntax very natural, more than C in some areas.
typedef takes the identifier at the end of the statement.
The asterisk is used to dereference, but in a type it denotes a pointer.
While loops may take the condition after the block (do ... while).
Guess what? They would blame Forth, Common Lisp, etc., yet I could tell them the same things I am being told about Rust.
The latter's baroque syntax is not provably good or necessary, but I'd call it the other end of a compromise for safety and expressiveness. And once you learn it, you've learned much of the language by default.
Not so with the first two! The mental models involved in reasoning about a piece of code require much more cognition on the part of the programmer.
Please, every time you see someone praise Rust (every other day), tell them the same as you have told me, just the opposite way around.
Using Vec for arrays is also annoying, repeating the mistake from C++.
Neither Rust nor C++ uses vectors as arrays; they're distinct things.
An array is a fixed-size object, which never allocates and its elements thus always retain their memory address. In Rust they're `[T; N]`, in C++ `T[N]` or more recently `std::array<T, N>`.
A vector, on the other hand, is a dynamically sized object that may (re)allocate to accommodate new elements, and the address of its elements can change. In Rust they're `Vec<T>`; in C++ `std::vector<T>`.
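A small illustration of the difference (my own example, not from the thread):

    fn main() {
        let arr: [u8; 4] = [1, 2, 3, 4]; // array: length is part of the type, never reallocates
        let mut v: Vec<u8> = vec![1, 2]; // vector: heap-allocated, grows on demand
        v.push(3);                       // may reallocate and move its elements
        assert_eq!(arr.len(), 4);
        assert_eq!(v.len(), 3);
    }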
Good luck if you want to get into the code of a library to understand what a function does. You have to go through 3 macros and 5 traits across 5 files, when it could have been just a couple of function calls.
People don’t stop and think for five seconds about whether they really need that trait or macro; they just have to use it every time