• jjcm · 5 hours ago
This is entirely tangential to the article, but I've been coding in golang for going on 5 years now.

For four of those years, I was a reluctant user. In the last year I’ve grown to love golang for backend web work.

I find it to be one of the most bulletproof languages for agentic coding. I have two main hypotheses as to why:

- A very solid corpus of well-written code as training data. Compare this to vanilla JS or PHP: I find agents do a very poor job with both, which I suspect is down to the poorly written code they were trained on.
- Extremely self-documenting, because structs give agents really solid context on the shape of the data (sketch below).
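
To make that second point concrete, here's a made-up sketch (the type and field names are hypothetical, not from any real project): the struct alone tells an agent the exact shape of the data without leaving the file.

    package main

    // Hypothetical request payload: field names, types, optionality, and
    // JSON wire names are all documented by the struct itself.
    type CreateUserRequest struct {
        Email string   `json:"email"`
        Name  string   `json:"name"`
        Age   int      `json:"age,omitempty"`
        Tags  []string `json:"tags,omitempty"`
    }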

In any file an agent is editing, it has all the context it needs right there, and the training data shows it how to make the edit following best practices.

My main gripe with Go used to be that it was overly verbose, but now I actually find that to be a benefit, as it greatly helps agents. Would recommend giving it a spin for your next project if you haven't already.

Interesting. I've only dipped my toe in the AI waters, but my initial experience with a Go project wasn't good.

I tried out the latest Claude model last weekend. As a test, I asked it to identify areas for performance improvement in one of my projects. One of them looked significant and, truth be told, was an area I expected to see on the list.

I asked it to implement the fix. It was a dozen or so lines and I could see straightaway that it had introduced a race condition. I tested it and sure enough, there was a race condition.

I told it about the problem and it suggested a further fix that didn't solve the race condition at all. In fact, the second fix only tried to hide the problem.

I don't doubt you can use these tools well, but it's far too easy to use them poorly. There are no guard rails. I also believe they are marketed without any concern for how easily they can be used poorly.

Whether Go is a better language for agentic programming or not, I don't know. But it may be to do with what the language is being used for. My example was a desktop GUI application and there'll be far fewer examples of those types of application written in Go.

You need to tell it to create reproduction test cases first and iterate until the problem is truly solved. There's no need for you to be testing that sort of thing manually.

The key to success with agents is tight, correct feedback loops so they can validate their own work. Go has great tooling for debugging race conditions. Tell it to leverage those properly and it shouldn't have any problems solving it unless you steer it off course.
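
For example, here's a minimal sketch (not the poster's actual code) of the kind of reproduction test you'd have the agent write and then run under the race detector with `go test -race`:

    package main

    import (
        "sync"
        "testing"
    )

    // TestCounter reproduces the bug: `go test -race` flags the
    // unsynchronized writes below, and the agent iterates until both
    // the race detector and the assertion pass.
    func TestCounter(t *testing.T) {
        counter := 0
        var wg sync.WaitGroup
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++ // data race: the fix is to guard this with a sync.Mutex
            }()
        }
        wg.Wait()
        if counter != 100 {
            t.Fatalf("expected 100, got %d", counter)
        }
    }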

• 1 hour ago
+1. Half the time I see posts like this, the answer is "harness".

Put the LLM in a situation where it can test and reason about its results.

I do have a test harness. That's how I could show that the code suggested was poor.

If you mean put the LLM in the test harness: sure, I accept that that's the best way to use the tools. The problem is that there's nothing requiring me or anyone else to do that.

I accept what you say about the best way to use these agents. But my worry is that there is nothing that requires people to use them in that way. I was deliberately vague and general in my test, and I don't think Claude's response under those conditions was good at all.

I guess I just don't see what the point of these tools is. If I were to guide the tool in the way you describe, I don't see how that's better than just thinking about and writing the code myself.

I'm prepared to be shown differently of course, but I remain highly sceptical.

• treyd · 1 hour ago
If only there was a way to prevent race conditions by design as part of the language's type system, and in a way that provides rich and detailed error messages allowing coding agents to troubleshoot issues directly (without having to be prompted to write/run tests that just check for race conditions).

I don't believe the "corpus" argument that much.

I have been extending the Elm language with Effect semantics (à la ZIO/Rio/Effect-ts) for a new language called Eelm (extended-Elm or effectful-Elm), and both Haskell (the language the Elm compiler is written in) and Eelm (the target language, now with some fancy new capabilities) shouldn't have a particularly relevant corpus of code.

Yet, my experiments show that Opus 4.6 is terrific at understanding and authoring both Haskell and Eelm.

Why? I think it stems from the properties of the languages themselves: immutability makes them easier to reason about, they're fully statically typed, and the compiler and its diagnostics are excellent. On top of that, the syntax is rather small.

Two things make Go work so well with agents: the language is focused on simplicity, and gofmt plus the standard Go coding style mean that almost all Go code looks familiar, because everyone writes it in a very consistent style. Those two things make the experience pleasant and the LLM's work easier.
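
A toy example of what that consistency means in practice: however a contributor (or a model) writes a line, `gofmt -w` rewrites the file into one canonical layout, so training data and generated code all look the same.

    package main

    import "fmt"

    // Before gofmt a line might read:
    //   x:=map[string]int{ "a":1,"b" :2 }
    // After `gofmt -w main.go` there is exactly one accepted form:
    func main() {
        x := map[string]int{"a": 1, "b": 2}
        fmt.Println(x)
    }
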
Go’s design philosophy actually aligns with AI’s current limitations very well.

AI has trouble with deep complexity; Go is simple by design, with usually only one or two correct paths instruction-wise. Architecturally you can design your source however you like, but there's a pretty well-established standard.

I wonder what the experience of writing Rust or Zig with LLMs is like. I suspect Zig might not have enough training data, and Rust might struggle with compile times and the extra context required by the borrow checker.
• jwxz · 2 hours ago
I found Opus 4.6 to be good at Zig.

I got it to write me an rsync-like CLI for copying files to/from an Android device over MTP, all in a single ~45-minute sitting. It works incredibly well. OpenMTP was the only other free option on macOS; after being frustrated by it, I decided to try out Opus 4.6 and was pleasantly surprised.

I later discovered that I could plug in a USB-C hard drive directly into the phone, but the program was nonetheless very useful.

> I wonder what the experience of writing Rust or Zig with LLMs is like

I've had no issues with Rust, mostly (99% of the time) using codex with gpt-5.2 xhigh, and it does as well as with any other language. Not sure why you think compile times would be an issue; the LLM doesn't really care whether compilation takes 1 minute or 1 hour. That's more of a "your hardware + project" issue than an LLM one. I also haven't found it to struggle with the borrow checker: if it screws up, it sees the compilation errors and fixes them, just like in any other language I've tried with LLMs.

Yeah, in my experience Claude is significantly better at writing Go than the other languages I've tried (Python, TypeScript).
• dizhn · 3 hours ago
I'm having similarly good results with Go and agents. Another good fit, in my experience, is Flutter/Dart.
Perfectly happy with Go, my "Go should do X" / "Go should have Y" days are over.

But if I could have a little wish, "cargo check" would be it.

I always have the unfounded feeling that the Go compiler/linker doesn't remove dead code. Go binaries have a large minimal size; TinyGo, in contrast, can produce awesomely small binaries.
It's pretty good at dead code elimination. The size of Go binaries is in large part down to the runtime implementation. Remove a bunch of the runtime's features (profiling, stack traces, sysmon, optimizations that avoid allocations, maybe even multithreading...) and you'd end up with much smaller binaries. I would love it if there were a build tag like "runtime_tiny" that provided such an implementation.
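
There's no such tag today, but for what it's worth you can already trim the easy part at link time by stripping the symbol table and DWARF debug data. A standard trick, sketched below; it shrinks the file but doesn't touch the runtime itself, and sizes vary by project:

    // hello.go - compare the resulting binary sizes:
    //   go build hello.go                    # default build
    //   go build -ldflags="-s -w" hello.go   # strip symbol table and DWARF info
    //   tinygo build -o hello hello.go       # a smaller runtime entirely
    //
    // -s/-w shrink the file noticeably, but the scheduler, GC, and the
    // rest of the runtime are still linked in; only an alternative
    // runtime like TinyGo's removes those.
    package main

    import "fmt"

    func main() {
        fmt.Println("hello")
    }
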
Go has a runtime; that alone is over a megabyte. TinyGo, on the other hand, has a very limited (much smaller) runtime. In other words, you don't know what you're talking about.
• Surac · 6 hours ago
I can see no difference from an ordinary linker. Anyone care to explain it to me?
Yes, it is not especially different from other linkers. It has some extra tasks when building the final binary, including emitting special sections, and it is more aware of the specifics of the Go language. But there is nothing extremely different from other linkers. The whole point of the series is to explain a real compiler, and in general most parts of the Go compiler are techniques widely used in other languages: SSA, ASTs, escape analysis, inlining...
The difference is that Go has its own linker rather than using a system linker. Another article could explain the benefits of tighter integration and the drawbacks of this approach. Having its own toolchain is, I assume, part of what enables Go's easy cross-compilation.
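
To illustrate: because the compiler and linker are self-contained, cross-compiling is just a matter of two environment variables, with no cross-linker or target sysroot to install. A small sketch:

    // Build the same package for several targets from one machine:
    //   GOOS=linux   GOARCH=arm64 go build -o app-linux-arm64 .
    //   GOOS=windows GOARCH=amd64 go build -o app-windows.exe .
    //   GOOS=darwin  GOARCH=arm64 go build -o app-macos-arm64 .
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Reports the platform this binary was compiled for.
        fmt.Println(runtime.GOOS + "/" + runtime.GOARCH)
    }
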
What is there to explain? The author did not claim there is a difference in the article.
• pjmlp · 5 hours ago
Why should it be one?
The title is misleading
Misleading in what way? This is the linker part of a series of posts about understanding the Go compiler. I don't think there's much room for it to be misleading.
• vlinx · 1 hour ago
It's always fascinating to dive into the internals of the Go linker. One aspect I've found particularly clever is how it handles static linking by default, bundling everything into a single binary.
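
A quick way to see that for yourself (a small sketch; `file` and `ldd` are standard Linux tools, not part of Go): with cgo disabled, the Go linker emits a fully static binary with no dynamic loader dependencies.

    // main.go - build and inspect on Linux:
    //   CGO_ENABLED=0 go build -o app .
    //   file app    # reports "statically linked"
    //   ldd app     # reports "not a dynamic executable"
    // With cgo in play, the binary may instead link dynamically against libc.
    package main

    func main() {
        println("one self-contained binary")
    }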