That's an incredible find and once I saw the assembly I was right along with them on the debug path. Interestingly it doesn't need to be assembly for this to work, it's just that that's where the split was. The IR could've done it, it just doesn't for very good reasons. So another win for being able to read arm assembly.

Unsure if this would be another way to do it, but to save an instruction at the cost of a memory access, you could push then pop the stack size, maybe? Since presumably you're doing that pair of moves on function entry and exit. I'm not really sure what the garbage collector is looking for, so maybe that doesn't work, but I'd be interested to hear some takes on it.
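A rough sketch of what I mean, in GNU-style AArch64 syntax (untested, offset made up, and I have no idea whether Go's unwinder could cope with the extra slot):

    // prologue: build the oversized frame size once
    mov  x27, #0x0040            // hypothetical 0x10040-byte frame
    movk x27, #0x1, lsl #16
    sub  sp, sp, x27
    str  x27, [sp]               // stash the size at a known slot

    // epilogue: reload the size instead of rebuilding it
    ldr  x27, [sp]
    add  sp, sp, x27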

You would normally use the “LDR Rd, =expr” pseudo-instruction form [1]. For immediates not directly constructible, it puts a copy of the immediate value in a PC-relative memory location, then does a PC-relative load into register.

So that would turn the whole "add constant to SP" sequence into 2 executable instructions (1 to construct the immediate, 1 to add) for a total of 8 bytes, plus a 4-byte data area for the 17-bit immediate: 12 bytes of binary in total, which is 3 executable instructions' worth.
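Concretely, something like this (GNU-style syntax, offset invented for illustration):

    ldr  x27, =0x10040    // assembler spills the constant into a nearby literal pool
    add  sp, sp, x27      // SP still changes in a single instruction
    ...
    .ltorg                // the pool itself: 4 more bytes of data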

[1] https://developer.arm.com/documentation/dui0801/l/A64-Data-T...

comex · 3 hours ago
I've usually seen compilers handle large constants with MOV/MOVK sequences (encoding 16 bits of data per 32-bit instruction) instead of loading them from memory. Loading from memory was more common on 32-bit ARM.
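For example, a 48-bit constant costs three instructions but no load (values made up):

    movz x0, #0xcafe                // bits [15:0]
    movk x0, #0xbeef, lsl #16       // insert bits [31:16], keep the rest
    movk x0, #0x1234, lsl #32       // insert bits [47:32]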
I'm a little surprised that this bug wasn't fixed in the assembler as a special case for immediate adds to RSP. If the patch was to the compiler only, other instances of the bug could be lurking out there in aarch64 assembly code.
moefh · 4 hours ago
Would that be wise? The implemented solution uses a temporary register to hold the full value being added to rsp.

I don't know enough about how people use the go assembler, but I imagine it would be very surprising if `add $imm, rsp, rsp` clobbered an unrelated register when `$imm` is large enough. Especially since what's clobbered is the designated "temporary register", which I imagine is used all the time in handwritten go assembly.

Some architectures, and I believe aarch64 is one, have scratch registers reserved for being clobbered in special situations required by the assembler.
Not really, or at least not that I know of in the case of arm64. What you have is calling conventions that specify what one function/procedure/whatever can expect from both the caller's and the callee's side. I.e. some registers are caller-saved, some are callee-saved, which basically means the called function can treat them as "scratch".

Additionally, they call out interactions with the OS/execution environment. For example, x18 is the "platform register", and it's unspecified what the OS does with it. It's entirely possible that it clobbers it on context switch or during an interrupt or whatever. So don't use that one unless you have a contract with the OS itself.

But locally, i.e. "from instruction to instruction", no such convention exists to my knowledge, and you probably don't want to have registers that pseudo-instructions might trash inadvertently in general, because it means you can't optimally use these registers.

It's possible for pseudo-instructions or generally macros to be documented as, e.g., "this macro uses x3 as a temporary register and trashes it", but in my experience most macros that need additional temporary registers actually ask you to specify them as part of the macro invocation.

E.g. suppose you have a macro "weirdhash" that takes two registers and saves some kind of hash of them in a third register, but that also needs an extra register to perform its work. You would call it with:

    weirdhash x9, x10, x11, x0
Where x0 would be the scratch register you don't care about.
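A toy definition might look like this in GNU as syntax (the "hash" is nonsense, it's just to show the explicit scratch parameter):

    .macro weirdhash a, b, dst, tmp
        eor  \tmp, \a, \b             // mix the inputs in the caller-chosen scratch
        ror  \tmp, \tmp, #13
        add  \dst, \tmp, \a, lsl #3   // result lands in dst; tmp is trashed
    .endm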
No, I think that’s just a MIPS thing.
bloak · 11 hours ago
> So another win for being able to read arm assembly.

Yes, though that weird stuff with dollars in it is not normal AArch64 assembly!

The article could have mentioned the "stack moves once" rule.

pjmlp · 10 hours ago
Most likely it is due to the Plan 9 assembly dialect, because it wasn't enough that we already have differences between AT&T and Intel.

https://go.dev/doc/asm

Still, I find it great that Go got back the 1990s tradition of compiled languages having an assembler as part of their tooling, regardless of the syntax.

I've never heard of that rule (though tbh I'm not allocating > 64KB of stack when I'm in assembly) and it seems Google hasn't either. While I'm sure it makes sense, I don't think I've ever seen it enforced, at least in C/C++. Maybe it makes more sense for these stack-inspecting garbage collectors, but I've also heard of ones that just scan the stack without unwinding anything. I did a test asking Google's AI to generate a complicated C function, put it in godbolt, and there's plenty of push push push push ... pop pop pop pop going on.
JdeBP · 8 hours ago
You need to look at non-x86 architectures. It was common years ago on MIPS.

* https://jdebp.uk/FGA/function-perilogues.html#StandardMIPS

I wrote up the x86 equivalent of doing just two read-modify-write operations on the stack pointer over 16 years ago.

* https://jdebp.uk/FGA/function-perilogues.html#Standardx86

Did you compile with optimisations? I think GCC will do a bunch of activity on the stack with -O0, but it'll generally coalesce everything into one push/pop per function with optimisations (not because of any rule, but just because it's faster). alloca and other dynamic stack allocation may break this, but normal variables should in pretty much all cases just get turned into one block on the stack (with appropriate re-use of space if variable lifetimes don't overlap).
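For illustration, the optimized shape usually looks something like this on AArch64 (details vary by compiler and frame size):

    func:
        stp  x29, x30, [sp, #-16]!   // save FP/LR
        mov  x29, sp
        sub  sp, sp, #48             // one block for all locals
        ...                          // body reuses slots within the block
        add  sp, sp, #48
        ldp  x29, x30, [sp], #16
        ret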
ori_b · 3 hours ago
It will generate code to touch each page of the stack, because otherwise a very large stack allocation controlled by users (e.g. in the case of a variable-sized array) can be turned into a pointer to any location in memory by an attacker. Faulting in each page of the stack turns that into a crash.
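In rough terms, instead of one big SP drop the compiler emits a probe loop along these lines (sketch only; real probe sequences differ between compilers):

    // hypothetical 64 KiB frame, probed one 4 KiB page at a time
    mov  x27, #16                // 16 pages to claim
    probe:
        sub  sp, sp, #4096
        str  xzr, [sp]           // touch the page; a guard page faults here
        subs x27, x27, #1
        b.ne probe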

There was a userspace thread library I came across a long time ago that used variable length arrays to switch between thread stacks; the scheduler would allocate an array of the right size to bump the stack pointer to the different thread's stack.
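Since a VLA is ultimately just arithmetic on SP, I'd guess it compiled down to something like this (my reconstruction, not the actual library):

    // x0 = byte distance from the current SP down into the target thread's stack
    sub  x1, sp, x0
    mov  sp, x1     // SP now points into the other thread's stack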

Wow, that’s horrible.
Yes
pjmlp · 10 hours ago
Usually in runtimes like Java and .NET there are safepoints exactly to avoid changing context in the middle of a set of instructions.
Yeah, but we have codegen bugs in .NET as well. The biggest difference that stood out to me in this write-up is that we would have gone straight for “coredump” instead of other investigation tools. Our default mode of investigating memory corruption issues is dumps.
pjmlp · 9 hours ago
Sure, I have experienced them, e.g. once in 2006 using IBM's JVM implementation with Websphere.

However it is probably not as problematic, due to the way Go allows for Assembly to be used directly.

While the JVM and CLR don't allow for direct access to Assembly code, Go does, thus I assume expecting safepoints everywhere is not an option, as any subroutine call can land on code that was manually written.

I think the right fix is for the compiler to, e.g., load the constant into a register using two moves and then emit a single add. It's one more instruction, but then the adjustment is atomic (i.e. a single instruction). Another option is to do the arithmetic in a temp register and then move it back.
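Roughly, with a made-up offset (AArch64, GNU-style syntax):

    // racy: a preemption between these two sees SP mid-update
    add  sp, sp, #0x10, lsl #12
    add  sp, sp, #0x40

    // safe: SP still changes in exactly one instruction
    mov  x27, #0x40
    movk x27, #0x1, lsl #16
    add  sp, sp, x27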
cmckn · 9 hours ago
I noticed this when reviewing the linked issue: https://github.com/golang/go/issues/73259#issuecomment-31004...

Does the Go team have a natural language bot or is this just comment.contains(“backport”) type stuff?

etra0 · 6 hours ago
Kinda funny that it requires both "please" and "backport" for it to be considered haha.
9rx · 9 hours ago
One thing I worry about, probably unnecessarily, is anything with a sense of urgency.

HEY GUYS WE JUST FOUND A GOLANG COMPILER BUG AND FATAL PANICS!

Everyone is like “Hmm. I need to fix this now.”

So, 99% probability it’s what it is. 1% it’s some secret defensive thing because there was a bad stupid zero day someone would get fired over or that could leave the world in shambles if uncovered, or maybe something else needed to be swept under the rug, or maybe someone wants to distract while they introduce a new vulnerability.

I don’t think this with CVEs, but when someone’s like “install this patch everybody!” the dim red light flickers on.

Great technical blog. Good pathway for narrative, tight examples, and description so clear it makes me feel smarter than I am because it's so easy to follow, even though the last time I read assembly seriously was x86, years ago.

Also, fulfills the marketing objective because I cannot help but think that this team is a bunch of hotshots who have the skill to do this on demand and the quality discipline to chase down rare issues.

I assume these are Ampere Altra? I was considering some of those for web servers to fill out my rack (more space than power) but ended up just going higher on power and using Epyc.

I wonder if Go has a mode where you can make it single-step every instruction and trigger a GC interrupt on every opcode. That would make it easier to find these kinds of bugs.
You could vibe that.

Or you could pour water on your keyboard or into the air slots of your tower and just take that mofo down.

Your choice.

What ARM64 machines are you using and what are they used for? Last year you were announcing Gen 12 servers on AMD EPYC (https://blog.cloudflare.com/gen-12-servers/), but IIRC there weren’t any mentions of ARM64. But now it seems you’re running ARM64 in full production.
I'm not Cloudflare, I just read their blog too much. As they hint in the article when mentioning secure boot, they've been deploying Ampere in parallel to AMD for several years now. Purpose wise it seems to be Edge related for efficiency reasons, but maybe they use them for other things too. You can read some more here https://blog.cloudflare.com/designing-edge-servers-with-arm-... and here https://blog.cloudflare.com/arms-race-ampere-altra-takes-on-... along with the original evaluation of Qualcomm here https://blog.cloudflare.com/arm-takes-wing/
Yeah but those are pretty dated. I was under the impression those old Ampere servers are not efficient compared to modern EPYC anymore. So I’m wondering what their current generation of arm64 servers looks like :p
I seem to recall Cloudflare hosts some of their non-edge compute on public clouds? Like control plane stuff. Could be that.
I thought Cloudflare was 100% Rust, and x86 (EPYC) these days.

Interesting to hear Go & ARM in use.

I doubt any company is mono-language at that scale. Using ARM usually makes sense for a lot of horizontal-scaling workloads, so it's also not that surprising.
Cloudflare has long kept Arm builds of everything even when they deployed to x86 only, to make it easy to switch when it made sense.

And yeah, a lot of Rust but also a lot of Go.

Excellent article as always from the Cloudflare blog - engineering without magic infrastructure and ML. One day I will apply!

Compiler bugs are actually quite common ( I used to find several a year in gcc ), but as the author says, some of them only appear when you work at a very large scale, and most people never dive that far.

What's stopping you applying today?
Fair question. Location primarily (nothing in France), and I’m not sure how ‘we’re looking for people who enjoy doing that kind of thing’ (I very much do) relates to the actual job offers, i.e. what job offer should I actually apply to.

My background is not networking (it’s math, then HPC, then broader stuff) but I keep stumbling on similar problems (including a beautiful one related to Intel NICs a few years ago, which led me into a rabbit hole of eBPF and the kernel network layer, and which surfaced later on the Cloudflare blog), and the only tech company with which this seems to be a regular occurrence is Cloudflare. Their space is a bit unknown to me so I guess I’m having a hard time projecting something onto the job offers.

I’d happily chat with someone working at Cloudflare though - I guess this would help me understand what it is that actually happens over there. I guess I’m a bit intimidated by this unknown yet really good-looking world :-)

I interned at Cloudflare back in 2020 and had a great time - would highly recommend!

Can't speak to the locations but the stuff you're interested/experienced in seems extremely likely to overlap with what they do. They do a lot of very deep technical things in all kinds of areas.

My recommendation if you want to talk to someone about it: search GitHub/Twitter/LinkedIn for people who work there on stuff you like, and just send them a message and ask for a 20-minute call!

have done it plenty of times, has always been extremely positive

nevon · 10 hours ago
Similar to the previous commenter, every time I read a blog post from Cloudflare I end up checking the careers page thinking "this is exactly the kind of work I'd like to be doing". Sadly no openings in my country. I'll keep checking!
Pretty sure location is not a factor for these companies. You should apply anyway. I’ve worked with people living in active war zones.

If you have the skills, they have the coin.

They won’t hire some react guy in X country but someone who can find compiler bugs and save them XX+ million dollars a year? Heck yeah.

Unfortunately, in 95% of cases location IS a factor with bigger companies.

I'm in a similar position where I'd like to do something a lot more interesting, but the intersection between where the interesting companies have offices and where I'd be willing to live isn't really big enough to justify uprooting my life.

(Unless we're talking about "too good to ignore", that's a different story.)

I was explicitly talking about too good to ignore.

Anyone who can optimize a company’s bottom line will be hired.

Like I said, no random average mid React guy or dime-a-dozen Java developer is getting hired as a remote employee in some flyover country.

But if someone can provide like 50x value then hell yeah..

I thought that was obvious in my message considering we are discussing compiler optimization

(Yeah, I'd say your messaging was reasonably clear, but in the context of the whole thread it wasn't obvious whether the poster was putting themselves in that skill bucket.)

I think there's also quite a big spectrum of skill, even when we're talking about compiler optimization and highly skilled software developers. I'd put myself up there, but still I'm no Lars Bak (for whom Google allegedly created an office in Denmark).

How do you rate yourself as higher than a dime a dozen? I work as a fully remote dev but I am not sure I am anything special; I mean, how do you know that you are objectively good?
Where did I say anything about myself? Sounds like projection or some deep insecurities if you meant it _that_ way.

If you're asking what would constitute someone being special, it would depend on the role and skillset. As I said in my earlier comment, someone who is a beast and can find and fix bugs in compilers is a rare person. Especially if that skillset can help the company save boatloads of money that can be deployed elsewhere.

There are probably only a handful of people in the world who understand and can push the AI landscape forward. A lot of them are Chinese immigrants, and yet OpenAI/Meta/etc are paying them boatloads of money.

As for remote roles, I once worked on a project where we hired some dude for like $500/hr as a contractor because he was one of the few people who knew the inside/out of postgres and oracle rdbms because we were doing some very important migration.

With seemingly the whole world rolling out new RTO mandates, location may not have been a factor until recently, but it may be now.
Low compensation relative to many other companies. (It didn't stop me from applying, but it stopped me from accepting.)
Always adjust your stack pointer atomically, kids.
I guess those who wrote the preemption were on x86, where this doesn't happen thanks to variable-length instructions being able to hold the constant, and thus relied on the codegen to do it atomically; then the ARM port had an automatic "split" from a higher level to make things "easy", thus giving us this bug.
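For comparison, on x86-64 the whole adjustment fits in one instruction, since the 32-bit immediate rides along in the variable-length encoding (offset made up):

    add rsp, 0x10040    ; 7 bytes, but a single indivisible SP update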

Nobody's fault really, but bad results ensued.

> Nobody's fault really, but bad results ensued.

Uh, the fault is entirely in writing an assembler _that is not an assembler_, but rather something that is _almost_ like one but then 1% like an IR instead. It's an unforced error.

wbl · 6 hours ago
Assemblers used to do a ton of stuff back in the day
Oh yeah. S/360 assembly almost looks like a high level language sometimes. In MVS, functions of the OS and standard libraries (or its equivalent) were implemented as elaborate macros, with their own invocation syntax, whereas nowadays you'd expect a function that you'd call (dynamically linked or not), with parameters passed in registers.

At least in the 90s, there were actually macro assemblers that supported OOP programming in assembly. Borland Turbo Assembler 5.0 comes to mind; it was kind of fun.

Exactly what ran through my mind.
I don't get it, how were the machine threads being stopped in the middle of two instructions? This is bare metal, right?
Go uses interrupts for GC notifications
Signals.
I would have thought that unwinding would use the frame pointer and this wouldn't be a problem.
The frame pointer was updated non-atomically in two asm ops. An async interruption between the two ops would lead to a corrupt frame pointer.
So it was. The article never mentions the frame pointer and I'm familiar with compilers that load the saved value from the stack in the epilog, rather than adjusting it arithmetically. But they do have an assembly listing showing the two-step arithmetic adjustment for both the stack pointer and frame pointer.

But I'm not sure that matters, because the unwind code they show uses the stack pointer rather than the frame pointer anyway.

Really enjoyed reading this. Thanks for writing it!
> This was a very fun problem to debug.

I'm sure it was a relief to find a thorough solution that addressed the root cause. But it doesn't seem plausible that it was fun while it was unexplained. When I have this kind of bug it eats my whole attention.

Something this deep is especially frustrating. Nobody suspects the standard library or the compiler. Devs have been taught from a young age that it's always you, not the tools you were given, and that's generally true.

One time, I actually did find a standard library bug. I ended up taking apart absolutely everything on my side, because of course the last hypothesis you test is that the pieces you have from the SDK are broken. So a huge amount of time is spent chasing the wrong lead when it actually is a fundamental problem.

On top of this, the thing is a race condition, so you can't even reliably reproduce it. You think it's gone like they did initially, and then it's back. Like cancer.

It feels like this comment was almost a purely additive anecdote of your own experience with a similar kind of issue, but you've spoiled it by deciding to tell the author that they're incorrect about how they felt during the process?

Maybe different people find different things fun.

Not saying he's wrong, sometimes the word "fun" connotes something slightly different from what it literally means. "Satisfying" is something I'd use for the end state. Maybe "challenging" for the intermediate state. But while you're in a high-pressure situation that you don't understand, that is rarely "fun" in the literal sense.

You wouldn't pay to be given compiler race condition bugs, right?

klausa · 59 minutes ago
I wouldn't pay to be given any kind of work, but there are some aspects of my job that I find more or less 'fun'.

Hunting bugs that people have given up on or have no ideas on how to tackle is near the top of that list.

Maybe stop digging here and just let it be fun for the author?
I like these bugs. They’re intricate, technical puzzles, that can take weeks to figure out. You need a proper strategy to figure them out, cannot rely on simple tactics, and when you finally understand what’s going on, it’s immensely satisfying.

This, and now there's Pernosco, which makes everything much easier.

Now, under pressure, this is going to be a nightmare unless you have a high tolerance to stress.

a10c · 6 hours ago
> Not saying he's wrong

https://heinen.dev/ - I’m Thea “Teddy” Heinen (she/her or they/them)!

Some people are perverse individuals and actually enjoy debugging very esoteric things. What might be frustrating to you might be the very thing that gets someone else very excited.
The people who find the fun are often good at identifying when it is the standard library or the compiler.
Probably just meant satisfying instead of fun. I found a bug in sscanf for the gcc arm toolchain that ships with Ubuntu (and Debian), and it wasn't fun since I had deadlines to deal with. Workaround was to use the official ARM one. But after 2 days, it was satisfying to nail the exact problem and write a regression test.
> I'm sure it was a relief to find a thorough solution that addressed the root cause. But it doesn't seem plausible that it was fun while it was unexplained. When I have this kind of bug it eats my whole attention.

Yeah, and that's fun for me. Some of my most fun bugs to debug have been compiler, or even CPU issues.

Segfaults with no use of “Unsafe” equivalents in managed languages can give an immediate indication that it's not a code problem.
It becomes fun when you narrow down to the solution. Before that it's hell.

I don't think I'd be allowed to spend weeks debugging something like this. Credit to Cloudflare's PMs.

Apparently they have an "unexplained crashes must have an explanation determined" policy ever since there was a trend of uninvestigated unexplained crashes that were canaries in the mine for a security issue.

https://blog.cloudflare.com/however-improbable-the-story-of-...

> But [the Cloudbleed sensitive information disclosure security incident] wasn’t the only consequence of the bug. Sometimes it could lead to an invalid memory read, causing the NGINX process to crash, and we had metrics showing these crashes in the weeks leading up to the discovery of Cloudbleed. So one of the measures we took to prevent such a problem happening again was to require that every crash be investigated in detail.

Since then, they have a "no crashes go uninvestigated" policy, which for the scale Cloudflare operates at, seems pretty impressive.

Although I’m good enough at it, like you I hate this kind of debugging experience, and try hard to avoid putting myself in a position where I have to do it. It’s not fun for me at all.

I also don’t like many puzzle games, like Sudoku, because to me they feel like this kind of work. Many colleagues of mine have expressed bafflement that I don’t find such puzzles fun and give me all kinds of grief about how I ought to enjoy them, since they do.

It’s the same thing here, just flipped around: this person seems to enjoy the debugging experience; just let them be. Or recruit them, because that temperament is valuable.

I find this sort of thing to be tremendously fun. It can be frustrating as well, but overall it’s my favorite part of my job. I don’t see why this would be implausible. Different people enjoy different things.
> Devs have been taught from a young age that it's always you, not the tools you were given, and that's generally true.

That's not been my experience at all FWIW. Tools get things wrong all the time.

Simply that more mature projects with heavy use, e.g. gcc or clang/llvm, generally tend to have had major bugs stamped out by this point. They do still happen though.

More nascent language and compiler ecosystems are more likely to run into issues. Especially languages with runtimes.

> I'm sure it was a relief to find a thorough solution that addressed the root cause. But it doesn't seem plausible that it was fun while it was unexplained. When I have this kind of bug it eats my whole attention.

Hey; it could've been type-3 fun.

Did they ever explain why netlink was involved? Or was that a red herring?
The netlink function uses a larger stack than most.

Their repro case required a stack adjustment larger than 1<<12 (4 KiB).

The stack in that specific function was big enough to trigger the bug.
Seemed like a red herring. They were able to reproduce it without any libraries. Might have just been netlink forcing the stacks to a certain size, and that made the bug visible.
I've seen only one race condition in my career, and it always surprises me how these things are even found.
yalok · 7 hours ago
Classic problem of non-atomic stack pointer modification.

Used to have a lot of fun with those 3 decades ago.

I see something like this and I wonder "what testing methodology would have found this?" It has to be general, not something that would involve knowing what the bug was ahead of time.
When your scale is large enough, you move to "what monitoring methodology will find this?"

When you're doing enough transactions you start to see a noise floor of e.g. bit flips from cosmic rays, and looking for issues involves correlating/categorizing possible software failures and distinguishing them from the misbehavior of hardware.

This problem strikes me more as a debuginfo generation bug than a "compiler" bug.

> After this change, stacks larger than 1<<12 will build the offset in a temporary register and then add that to rsp in a single, indivisible opcode. A goroutine can be preempted before or after the stack pointer modification, but never during. This means that the stack pointer is always valid and there is no race condition.

Seems silly to pessimize the runtime, even slightly, to account for the partial register construction. DWARF bytecode ought to be powerful enough to express the calculations needed for restoring the true stack pointer if we're between immediate adjustments.
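For instance, GNU-style CFI directives can already describe the halfway state (sketch; I don't know offhand what Go's own unwind metadata can express):

    add sp, sp, #0x10, lsl #12
    .cfi_adjust_cfa_offset -0x10000   // unwind info stays correct between the adds
    add sp, sp, #0x40
    .cfi_adjust_cfa_offset -0x40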

> This problem strikes me more as a debuginfo generation bug than a "compiler" bug.

But isn't that the same thing here? The bug occurred in their production workflows, not in some specific debug builds, so it seems pretty reasonable to call it a compiler bug?

Thanks. I think of unwinder information as debuginfo even though, as you point out, it's used outside of debugging contexts all the time. :-)

As for the actual bug:

Unless you're unwinding the stack by walking the linked list of frames threaded through the frame pointer, then each time you unwind a level of the stack, you need to consult a table keyed on instruction pointer to look up how to compute the register contents of the previous frame based on register content of the current frame. One of the registers you can compute this way is the previous frame's stack pointer.

I haven't looked in depth at what the Go runtime is doing exactly, but at a glance, I don't see mention of frame pointers in the linked article, so I'm guessing Go uses the SP-and-unwind-table approach? If so, the real bug here is that the table didn't have separate entries for the two ADDs and so gave incorrect reconstruction instructions for one of them.

If, however, frame pointers are a load-bearing part of the Go runtime, and that runtime failed to update frame pointer (not just the stack pointer) in the contractually mandatory manner, well, that's a codegen bug and needs a codegen fix.

I guess I just don't like, as a matter of philosophy if not practical engineering, having frame pointers at all. Without the frame pointer, the program already contains all the information you need to unwind, at no runtime cost --- you pay for table lookups only when you unwind, not all the time, on straight-line code.

The purist in me doesn't like burning a register for debugging, but you have to use the right tool for the job I guess.

gok · 10 hours ago
The real lesson here should be that doing crazy shit like swizzling the program counter in a signal handler and writing your own assembler is not a good idea.
Neither of those are "crazy shit." It's just complex because the environment offers specific features like automatic GC with async preemption in a compiled language which pretty much requires it.

Complex engineering isn't something to be avoided by default.

Sorry, how exactly do you think compilers are supposed to work if not by 'writing [their] own assembler'? Someone has to write the assembler, and different compilers have different needs.
Those are both completely normal things to do when you're implementing a programming language. For example, the Hotspot JVM uses SIGSEGV to stop the world for garbage collection.
The general wisdom is that you shouldn't do this stuff yourself, and you should instead rely on tried and tested implementations. But sometimes you're the one who provides the tried and tested implementations. Implementing a compiled language is often one of those times.
This^. Keith W said it on the DTrace blog a decade ago: https://wesolows.dtrace.org/2014/12/29/golang-is-trash/

I like Go but I don't really like their NIH / replace-everything-with-our-stuff stance - especially on system tools like assemblers and linkers.
