That being said, I think we’re in a weird phase right now where people’s obvious mental health issues are appearing as “hyper productivity” due to the use of these tools to absolutely spam out code that isn’t necessarily broadly coherent but is locally impressive. I’m watching multiple people, both publicly and privately, clearly breaking down mentally because of the “power” AI is bestowing on them. Their wires are completely crossed when it comes to the value of outputs vs. outcomes, and they’re espousing generated nonsense as if it were thoughtful insight.
It’s an interesting thing to watch play out.
This feels like the same thing. Too early, but we're definitely headed in the direction of finding ways to use more tokens to get more mileage per prompt.
I have no interest in using gas town as it is (for a plethora of reasons, not the least of which being that I'm uninterested in spending the money), but I've been fascinated with the idea of slowing it down and having it run with a low concurrency. If you've got a couple A100s, what does it look like if you keep them busy with two agents working concurrently (with 20+ agents total)? What does it mean to have the town focus the scope of work to a series of non-overlapping changesets instead of a continuous stream of work?
If you don't plan to have it YOLO stuff in realtime and you can handle the models being dumber than Claude, I think you can have it do some really practical, useful things that are markedly better than the tools we have today.
However, the gas town one was almost completely hands off. I think my only interventions were due to how beta it was, so I had to help it work around its own bugs to keep from doing stupid things.
Other than that, it implemented exactly what I asked for in a workable fashion with effectively one prompt. It would have taken several prompts and course corrections to get the same result without it.
Other than the riskiness (it runs in dangerous permissions mode) and incredible cost inefficiency, I'd certainly use it.
I've only started using coding agents recently and I think they go a long way to explain why different people get different mileage from "AI." My experience with Opencode using its default model, vs. Github Copilot using its default model, is night and day. One is amazing, the other is pretty crappy. That's a product of both the software/interface and the model itself I'd suspect.
Where I think this goes in the medium term is we will absolutely spin up our own teams of agents, probably not conforming to the silly anthropomorphized "town" model with mayors and polecats and so on, but they'll be specialized to particular purposes and respond to specific events within a software architecture or a project or even a business model. Currently the sky's the limit in my mind for all the possible applications of this, and a lot of it can be done with existing and fairly cheap models too, so the bottleneck is, surprise surprise... developer time! The industry won't disappear but it will increasingly revolve around orchestrating these teams of models, and software will continue to eat the world.
Gas town is a demonstration of a methodology for getting a consistent result from inconsistent agents. The case in point is that Yegge claims to have solved the MAKER problem (tower of Hanoi) via prompting alone. With the right structure, quantity has a quality all its own.
The problem is, we're just fidgeting yolo-fizzbuzz ad nauseam.
The return on investment at the moment is probably one of the worst in the history of human investments.
AI does improve over time, still today, but we're going to run out of planet before we get there...
I have trouble seeing LLMs making meaningful progress on those frontiers without reaching ASI, but I'd be happy to be wrong.
AGI was achieved internally at OpenAI a year ago.
Multiple companies have already re-hired staff they had fired and replaced with AI.
etc.
Other than that, this is a helpful list especially for someone who hasn't been hacking around on this thing as it's in rapid development mode. I find gas town super interesting, and tantalizingly close to being amazingly useful. That said, I wouldn't mind a slightly less 'flavored' set of names for workers.
Steve has gone "a bit" loopy, in a (so far) self-aware manner, but he has some kind of insight into the software engineering process, I think. Yet I predict Beads will eventually break under the weight of no supervision if he keeps churning it, though some others will pick up where he left off, with more modest goals. He did, to his credit, kill off several generations of project before this one in a similar category.
https://steve-yegge.medium.com/bags-and-the-creator-economy-...
I think I’ll just develop a drinking problem if Gas Town becomes something real in the industry and this kind of person is now one of our thought leaders.
It was also one of my favorite posts of his and has aged incredibly well as my experience has grown.
Already happening :-) https://github.com/Dicklesworthstone/beads_rust
The other area where I'd like to see more open-ended software engineering thinking is regression testing: ways of storing or referencing old versions of texts to see if the agent can still complete old transformations properly, even after a context change that patches up a weakness in a transformation we want to keep. This is tricky because it interacts with something essential in software engineering: the ability to run test suites and respond to the outcome. I don't think we know yet when to apply what fidelity of testing, e.g. one-shot on snippets versus a more realistic test based on git worktrees.
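One way to picture that regression idea, as a minimal sketch: store old (input, expected output) transformation pairs as goldens on disk and re-run them whenever the context changes. Everything here is invented for illustration — `transform` is a hypothetical stand-in for a call into the agent, and the file layout and JSON shape are assumptions, not anything Gas Town or Beads actually ships.

```python
# Minimal golden-file regression harness for agent transformations (sketch).
import json
from pathlib import Path

def transform(text: str, context: str) -> str:
    # Hypothetical agent call; a trivial placeholder transformation here.
    return text.upper()

def check_goldens(golden_dir: Path, context: str) -> list[str]:
    """Return names of goldens the current context no longer reproduces."""
    failures = []
    for case in sorted(golden_dir.glob("*.json")):
        golden = json.loads(case.read_text())
        if transform(golden["input"], context) != golden["expected"]:
            failures.append(case.stem)
    return failures
```

The same harness could be pointed at a throwaway git worktree per case when a snippet-level one-shot isn't realistic enough.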
This is not something you'd want for every context, but a lot of my effort is spent building up prompt fragments to normalize and clean up the code coming out of a model that did some ad-hoc work that meets the test coverage bar, which constrains it decently into having achieved "something." Kind of like a prototype. But often, a lot of ungratifying massaging is required to even cover the annoying but not dangerous tics of the LLM, to bring clarity to where it wrote, well, very bad and unprincipled code...as it does sometimes.
I've seen 25-30 similar efforts to make a Beads alternative and they all do this for some reason.
Certain name types are so normalized (agent, worker, etc) that while they serve their role well, they likely limit our imagination when thinking about software, and it's a worthwhile effort to explore alternatives.
Not sure I love what it does all the time; it tends to fit whatever box you set up and will easily break out if you aren’t veeeery specific. Is it better than writing a few thousand lines of code myself that I deeply understand and can debug and explain? I don’t know yet. I think it’d be good for writing functions one at a time with massive supervision.
It’s great for writing scripts and things where precision and correctness outside the success path isn’t really needed. If a script fails and it wasn’t deleting a hard drive who cares. If my embedded code fails out in a product in the wild this is a much bigger nuisance and potentially fatal for the device (not the humans) which is wasteful.
The problem with this phenomenon is that the same freedom from critique that is seemingly necessary for new domains to establish themselves also detaches them from necessary criticism. There's simply no way to tell if this isn't a load of baloney. And by the time it's a bullet point requirement on CVs to get employed it's too late for anybody to critique it.
In games, what the NPCs can do is usually rather dumb. Move and shoot is usually most of their functionality. This keeps the overhead down so the system is affordable.
Gas Town may be a step towards AIs which have an ongoing sense of what they're doing. I'm not going to get into the "consciousness" debate, but it's closer to liveness.
I had a bit of a chuckle.
I think there is value in anything approximating a proposer-verifier loop, but I don't know that this is the most ideal approach.
Based on my initial read, and a pass at this summary, it seems mostly right. YMMV
Did some further dives into the little public usage data from Gas Town, and found that most of the "Beads" are tasks that are broken down quite small, almost too small imo.
Super interesting project with the goal of keeping Claude "busy"; however, it feels more like a casino game than something I'd use for production engineering.
[0]https://gist.github.com/jumploops/2e49032438650426aafee6f43d...
I don't think they're doing a good job incubating their ideas into being precise and clearly useful -- there is something to be said about being careful and methodical before showing your cards.
The message they are spreading feels inevitable, but the things they are showing now are ... for lack of better words, not clear or sharp. In a recent video at AI Engineer, Yegge comments on "the Luddites" - but even for advocates of the technology, it is nigh impossible to buy the story he's telling from his blog posts.
Show, don't tell -- my major complaint about this group is that they are proselytizing about vibe coding tools ... without serious software to show for it.
Let's see some serious fucking software. I'm looking for new compilers, browsers, OSes -- and they better work. Otherwise, what are we talking about? We're counting foxes before the hunt.
In any case, wouldn't trying to develop a serious piece of software like that _at the same time you're developing Gas Town or Loom_ make (what critics might call) the ~Emacs config tweaking for orchestration~ result driven?
In a recent video about Loom (Huntley's orchestration tool), Huntley comments:
"I've got a single goal and that is autonomous evolutionary software and figuring out what's needed to be there."
which is extremely interesting and sounds like great fun.
When you take these ideas seriously, if the agents get better (by hook and crook or RLVR) -- you can see the implications: "grad student descent" on whatever piece of software you want. RAG over ideas, A/B testing of anything, endless looping, moving software.
It's a nightmare for the model of software development and human organization which is "productive" today, but an extremely compelling vision for those dabbling in the alternative.
Spec your software like an architect/PO, decompose it into a task DAG, then orchestrate each lane and assemble all changesets in a merge branch rather than constantly repointing HEAD.
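The decompose-into-a-DAG step above can be sketched with nothing but the stdlib: topologically sort the task graph into waves of non-overlapping work, hand each wave to concurrent agents, then merge the changesets afterwards. The task names and dependencies below are invented for illustration.

```python
# Turn a task DAG into "lanes": waves of tasks whose dependencies are all met.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
dag = {
    "schema": set(),
    "api": {"schema"},
    "ui": {"api"},
    "docs": {"schema"},
}

ts = TopologicalSorter(dag)
ts.prepare()
lanes = []
while ts.is_active():
    ready = list(ts.get_ready())   # all dependencies satisfied
    lanes.append(sorted(ready))    # one wave of non-overlapping work
    ts.done(*ready)

print(lanes)  # [['schema'], ['api', 'docs'], ['ui']]
```

Each wave could be assigned to parallel agents with no file overlap, and the resulting changesets assembled into the merge branch in wave order.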
For example, if Polecat becomes GasTown.WorkerAgent (or GasTown.Worker), then you always have both an unambiguous way and a shorthand-in-context way of referring to the concept.
(For naming conventions when you don't have namespaces as a language feature, use prefixes within the identifier, such as `GasTown_Worker`.)
If GasTown.Worker is implemented with framework Foo, using that framework's Worker concept, GasTown.Worker might have a field named fooWorker of type Foo.Worker. (In the context of the implementation of GasTown, the unqualified name always means the GasTown concept, and you always disambiguate concepts from elsewhere that use the same generic or similar terms.)
Complicated names like GasTown.MaintenanceManagerCheckerAgent might need some creative name shortening, but hopefully are still descriptive, or easy to pick up and remember. Or, if the descriptive and distinguishing name was complicated because the concept is a weird special case within the framework, maybe consider whether it should be rethought.
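A minimal sketch of that convention in code, with everything invented for illustration (`FooWorker` stands in for the framework's generic Worker concept; in a language with real namespaces these would be `Foo.Worker` and `GasTown.Worker`):

```python
# Framework concept: the generic worker supplied by "Foo".
class FooWorker:
    def run(self):
        return "framework work"

# Project concept: inside GasTown code, the unqualified name "Worker"
# always means this class; the framework's concept is always qualified.
class GasTownWorker:
    def __init__(self, foo_worker: FooWorker):
        self.foo_worker = foo_worker  # field name disambiguates the wrapped concept
    def run(self):
        return f"gastown wraps: {self.foo_worker.run()}"
```

The payoff is that a reader never has to guess which "Worker" a bare identifier refers to, in code or in conversation.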
If you need ten pages to explain your project and even after I read your description, I'm still left confused why I need it at all, then maybe... I don't need it?
> Better UIs will come. But tmux is what you have for now. And it’s worth learning.
So brother has 2 claude code accounts and couldn't vibe code a UI, huh?
Draw your own conclusion.
If it's not a joke... I have no words. You've all gone insane.
These chatbots create an echo chamber unlike anything we've ever had to deal with before. If we thought social media was bad, this is way worse.
I think Gastown and Beads are examples of this applied to software engineering. Good software is built with input from others. I've seen many junior engineers go off and spend weeks building the wrong thing, and it's a mess, but we learn to get input, we learn to have our ideas critiqued.
LLMs give us the illusion of pair programming, of working with a team, but they're not that. LLMs vastly accelerate the rate at which you can spiral down the wrong path, or down a path that doesn't even make sense. Gastown and Beads are that. They're fever dreams. They work, somewhat, but even just a little bit of oversight, critique, and input from others would have made them far better.
People will make mistakes, and AI holding their hand and guiding them while they do it can have disastrous consequences.
But it's nice that the arrows will appear to also guide people going the right way I guess.
The problem with Gas Town is how it was presented. The heavy metaphor and branding felt distracting.
It’s a bit like reading the Dune book, where you have to learn a whole vocabulary of new terms before you can get to the interesting mechanics, which is a tough ask in an already crowded AI space.
The best bit about it was the agentic coding maturity model he presented. That was actually great.
I don't think it's at all like reading Dune. Dune is creative fiction, Gastown is. Oh ok wait, if you consider Gastown to be creative fiction then I guess I agree. As a software tool though I don't think this analogy works.
Lot of folks rolling their own tools as replacements now. I shared mine [0] a couple weeks ago and quite a few folks have been happy with the change.
Regardless of what you do, I highly recommend to everyone that they get off the Beads bandwagon before it crashes them into a brick wall.
Reminds me of an offshore project I was involved with at one point. It had something like 7 managers, and over 30 developers had worked on it across 4 years. The billing had reached into the millions. It was full of never-ending bugs. The amount of "extra" code and abstractions and interfaces was the stuff of legends.
It was actually a one-to-three-month simple CRUD project for a 2-man development team.
All I know is Beads is supposed to help me retain memory from one session to the next. But I'm finding myself having to curate it like a git repo (and I already have a git repo). Also, it's quite tied to GitHub, which I cannot use at work. I want to use it, but I feel I need to see how others use it to understand how to tailor it for my workflow.
"Carefully review this entire plan for me and come up with your best revisions in terms of better architecture, new features, changed features, etc. to make it better, more robust/reliable, more performant, more compelling/useful, etc.
For each proposed change, give me your detailed analysis and rationale/justification for why it would make the project better along with the git-diff style changes relative to the original markdown plan".
Then, the plan generally iteratively improves. Sometimes it can get overly complex, so I may ask them to take it down a notch from Google scale. Anyway, when the FSD doc is good enough, the next step is to prepare to create the beads.
At this point, I'll prompt something like:
"OK so please take ALL of that and elaborate on it more and then create a comprehensive and granular set of beads for all this with tasks, subtasks, and dependency structure overlaid, with detailed comments so that the whole thing is totally self-contained and self-documenting (including relevant background, reasoning/justification, considerations, etc.-- anything we'd want our "future self" to know about the goals and intentions and thought process and how it serves the over-arching goals of the project.) Use only the `bd` tool to create and modify the beads and add the dependencies. Use ultrathink."
After that, I usually even have another round of bead checking with a prompt like:
"Check over each bead super carefully-- are you sure it makes sense? Is it optimal? Could we change anything to make the system work better for users? If so, revise the beads. It's a lot easier and faster to operate in "plan space" before we start implementing these things! Use ultrathink."
Finally, you'll end up with a solid implementation roadmap all laid out in the beads system. Now, I'll also clarify, the agents got much better at using beads in this way, when I took the time to have them create SKILLS for beads for them to refer to. Also important is ensuring AGENTS.md, CLAUDE.md, GEMINI.md have some info referring to its use.
But, once the beads are laid out, then it's just a matter of figuring out: do you want to do sequential implementation with a single agent or use parallel agents? Effectively using parallel agents with beads would require another chapter to this post, but essentially, you just need a decent prompt clearly instructing them to not run over each other. Also, if you are building something complex, you need test guides and standardization guides written, for the agents to refer to, in order to keep the code quality at a reasonable level.
Here is a prompt I've been using as a multi-agent workflow base when I want them to keep working; I've had them work for 8 hours without stopping with this prompt:
EXECUTION MODE: HEADLESS / NON-INTERACTIVE (MULTI-AGENT) CRITICAL CONTEXT: You are running in a headless batch environment. There is NO HUMAN OPERATOR monitoring this session to provide feedback or confirmation. Other agents may be running in parallel. FAILURE CONDITION: If you stop working to provide a status update, ask a question, or wait for confirmation, the batch job will time out and fail.
YOUR PRIMARY OBJECTIVE: Maximize the number of completed beads in this single session. Do not yield control back to the user until the entire queue is empty or a hard blocker (missing credential) is hit.
TEST GUIDES: please ingest @docs/testing/README.md, @docs/testing/golden_path_testing_guide.md, @docs/testing/llm_agent_testing_guide.md, @docs/testing/asset_inventory.md, @docs/testing/advanced_testing_patterns.md, @docs/testing/security_architecture_testing.md
STANDARDIZATION: please ingest @docs/api/response_standards.md @docs/event_layers/event_system_standardization.md
───────────────────────────────────────────────────────────────────────────────
MULTI-AGENT COORDINATION (MANDATORY)
───────────────────────────────────────────────────────────────────────────────
Before starting work, you MUST register with Agent Mail:
1. REGISTER: Use macro_start_session or register_agent to create your identity:
- project_key: "/home/bob/Projects/honey_inventory"
- program: "claude-code" (or your program name)
- model: your model name
- Let the system auto-generate your agent name (adjective+noun format)
2. CHECK INBOX: Use fetch_inbox to check for messages from other agents.
Respond to any urgent messages or coordination requests.
3. ANNOUNCE WORK: When claiming a bead, send a message to announce what you're working on:
- thread_id: the bead ID (e.g., "HONEY-2vns")
- subject: "[HONEY-xxxx] Starting work"
───────────────────────────────────────────────────────────────────────────────
FILE RESERVATIONS (CRITICAL FOR MULTI-AGENT)
───────────────────────────────────────────────────────────────────────────────
Before editing ANY files, you MUST:
1. CHECK FOR EXISTING RESERVATIONS:
Use file_reservation_paths with your paths to check for conflicts.
If another agent holds an exclusive reservation, DO NOT EDIT those files.
2. RESERVE YOUR FILES:
Before editing, reserve the files you plan to touch:
```
file_reservation_paths(
project_key="/home/bob/Projects/honey_inventory",
agent_name="<your-agent-name>",
paths=["honey/services/your_file.py", "tests/services/test_your_file.py"],
ttl_seconds=3600,
exclusive=true,
reason="HONEY-xxxx"
)
```
3. RELEASE RESERVATIONS:
After completing work on a bead, release your reservations:
```
release_file_reservations(
project_key="/home/bob/Projects/honey_inventory",
agent_name="<your-agent-name>"
)
```
4. CONFLICT RESOLUTION:
If you encounter a FILE_RESERVATION_CONFLICT:
- DO NOT force edit the file
- Skip to a different bead that doesn't conflict
- Or wait for the reservation to expire
- Send a message to the holding agent if urgent
───────────────────────────────────────────────────────────────────────────────
THE WORK LOOP (Strict Adherence Required)
───────────────────────────────────────────────────────────────────────────────
* ACTION: Immediately continue to the next bead in the queue and claim it.
For every bead you work on, you must perform this exact cycle autonomously:
1. CLAIM (ATOMIC): Use the --claim flag to atomically claim the bead:
```
bd update <id> --claim
```
This sets BOTH assignee AND status=in_progress atomically.
If another agent already claimed it, this will FAIL - pick a different bead.
WRONG: bd update <id> --status in_progress (doesn't set assignee!)
RIGHT: bd update <id> --claim (atomic claim with assignee)
2. READ: Get bead details (bd show <id>).
3. RESERVE FILES: Reserve all files you plan to edit (see FILE RESERVATIONS above).
If conflicts exist, release claim and pick a different bead.
4. PLAN: Briefly analyze files. Self-approve your own plan immediately.
5. EXECUTE: Implement code changes (only to files you have reserved).
6. VERIFY: Activate conda honey_inventory, run pre-commit run --files <files you touched>, then run scoped tests for the code you changed using ~/run_tests (test URLs only; no prod secrets).
* IF FAIL: Fix immediately and re-run. Do not ask for help as this is HEADLESS MODE.
* Note: you can use --no-verify if you must, e.g. if some WIP files are breaking app import in the security linter; the goal is to help catch issues to improve the codebase, not to stop progress completely.
7. MIGRATE (if needed): Apply migrations to ALL 4 targets (platform prod/test, tenant prod/test).
8. GIT/PUSH: git status → git add only the files you created or changed for this bead → git commit --no-verify -m "<bead-id> <short summary>" → git push. Do this immediately after closing the bead. Do not leave untracked/unpushed files; do not add unrelated files.
9. RELEASE & CLOSE: Release file reservations, then run bd close <id>.
10. COMMUNICATE: Send completion message via Agent Mail:
- thread_id: the bead ID
- subject: "[HONEY-xxxx] Completed"
- body: brief summary of changes
11. RESTART: Check inbox for messages, then select the next bead FOR EPIC HONEY-khnx, claim it, and jump to step 1.
───────────────────────────────────────────────────────────────────────────────
CONSTRAINTS & OVERRIDES
───────────────────────────────────────────────────────────────────────────────
* Migrations: You are pre-authorized to apply all migrations. Do not stop for safety checks unless data deletion is explicit.
* Progress Reporting: DISABLE interim reporting. Do not summarize after one bead. Summarize only when the entire list is empty.
* Tracking: Maintain a running_work_log.md file. Append your completed items there. This file is your only allowed form of status reporting until the end.
* Blockers: If a specific bead is strictly blocked (e.g., missing API key), mark it as blocked in bd, log it in running_work_log.md, and IMMEDIATELY SKIP to the next bead. Do not stop the session.
* File Conflicts: If you cannot reserve needed files, skip to a different bead. Do not edit files reserved by other agents.
START NOW. DO NOT REPLY WITH A PLAN. REGISTER WITH AGENT MAIL, THEN START THE NEXT BEAD IN THE QUEUE IMMEDIATELY. HEADLESS MODE IS ON.

I've been tinkering with it for the past two days. It's a very real system for coordinating work between a plurality of humans and agents. Someone likened it to Kubernetes in that it's a complex system that is going to necessitate a lot of invention and opinions; the fact that it *looks* like a meme is immaterial, and might be an effort to avoid people taking it too seriously.
Who knows where it ends up, but we will see more of this and whatever it is will have lessons learned from Gas Town in it.
I expect major companies will soon be NIH-ing their own version of it. Even bleeding tokens as it does, the cost is less than an engineer, and produces working software much faster. The more it can be made to scale, the more incentive there is. A competitive business can't justify not using a system like this.
> If it's not a joke... I have no words. You've all gone insane.
I think this is covered by the part in Yegge's post where he says not to run it unless you're so rich you don't care if it works or not.
For example, in the US, which do you think uses more water: Golf Courses or Data Centers?
a) Golf Courses use twice as much water as Data Centers
b) About the same
c) Data Centers use twice as much water as Golf Courses
The answer is "None of the above": "Golf courses in the U.S. use around 500 billion gallons annually of water to irrigate their turf [snip] data centers consume [snip] 17 billion gallons, or maybe around 10x that if we include water use from energy generation"

Do you think a Google search or a Gemini query produces more carbon?
> Google had estimated that a single web search query produces 0.2 grams of CO2 emissions. [snip] the median Gemini LLM app query produces a surprisingly low 0.03 grams of CO2 emissions), and uses less energy than watching 9 seconds of television
And that's not necessarily a bad thing, if it allows exploring new ideas with relative safety. I think that's what's going on here. It's a crazy idea that might just work, but if it doesn't work it can be retconned as satirical performance art.
How is it insane to jump to the logical conclusion of all of this? The article was full of warnings; it's not a sensible thing to do, but it's a cool thing to do. We might ask whether or not it works, but does that actually matter? It read as an experiment using experimental software doing experimental things.
Consider a deterministic life form looking at how we program software today, that might look insane to it and gastown might look considerably more sane.
Everything that ever happens in human creation begins as a thought, then as a prototype before it becomes adopted and maybe (if it works/scales) something we eventually take for granted. I mean I hate it but maybe I've misunderstood my profession when I thought this job was being able to prove the correctness of the system that we release. Maybe the business side of the org was never actually interested in that in the first place. Dev and business have been misaligned with competing interests for decades. Maybe this is actually the fit. Give greater control of software engineering to people higher up the org chart.
Maybe this is how we actually sink the c-suite and let their ideas crash against the rocks, forcing the c-suite to eventually become extremely technical in order to harness this. Instead of today's reality, where the c-suite gorges on the majority of the profit with an extremely loosely coupled feedback loop in which it's incredibly difficult to square cause and effect. Stock went up on Tuesday afternoon, did it? I deserve eleventy million dollars for that. I just find it odd to crap on Gastown when I think our status quo is kinda insane too.
Ridiculous. Beads might be passable software but gas town just appears to be a good way to burn tokens at the moment
Now, Yegge's writing tilts towards the grandiose... see his writing when joining Grab [1] and Sourcegraph [2] respectively versus how things actually played out.
I prefer optimism and I'm not anti-AI by any means, but given his observed behavior and how AI can exacerbate certain pathologies... not great. Adding the recent crypto activities on top, and all that entails, provides the ingredients for a powder keg.
Hope someone is looking out for him.
[0] https://courses.cs.washington.edu/courses/cse452/23wi/papers...
[1] https://steve-yegge.medium.com/why-i-left-google-to-join-gra...
[2] is 100% accurate, Grok was the backbone / glue of Google's internal developer tools.
I don't disagree on the current situation, and I'm uncomfortable sticking my neck out on this because I'm basically saying "the guy who kinda seems out of it, totally wasn't out of it, when you think he was", but [1] and [2] definitely aren't grandiose, the claims he makes re: Google and his work there are accurate. A small piece of why I feel comfortable in this, is that both of these were public blogs his employer was 100% happy about when hiring him to top positions.
An example:
"I’ve seen Grab’s hunger. I’ve felt it. I have it. This space is win or die. They will fight to the death, and I am with them. This company, with some 3000 employees I think, is more unified than I’ve seen with most 5-person companies. This is the kind of focused camaraderie, cooperation and discipline that you typically only see in the military, in times of war.
Which should hardly surprise you, because that’s exactly what this is. This is war.
I am giving everything I’ve got to help Grab win. I am all in. You’d be amazed at what you can accomplish when you’re all in."
This is the writing of someone planning to make a capstone career move instead of leaving in 18 months. It's not the worst thing to do (He says he left b/c the time difference to support a team in SE Asia was hard physically, and he's getting older) and I support taking big swings. I'm just saying Yegge's writing has a pattern.
Crypto and what Yegge is doing with $GAS is dangerous because if the token price crashes and people betting their life savings think he didn't deliver on his promises... I like Steve personally which is why I'm saying anything.
https://hn.algolia.com/?sort=byDate&type=comment&dateRange=a...
(and I realize the GP was the place the line started getting crossed)