So you can draw layouts like this and prompt Claude or Gemini with them, and get back working versions, which to me is space alien technology.
Then just prompt Claude to "use tmux to interact with and test the TUI rendering", prompt it through anything it gets hung up on (for instance, you might remind Claude that it can create a tmux pane with fixed size, or that tmux has a capture-pane feature to dump the contents of a view). Claude already knows a bunch about tmux.
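For reference, the tmux plumbing the agent ends up relying on is only a handful of commands. A minimal sketch in Python (session names, sizes, and the helper names are illustrative, not part of any real workflow):

```python
import subprocess

def new_fixed_pane_cmd(session: str, width: int, height: int) -> list[str]:
    # Create a detached session with a fixed-size pane, so the TUI renders
    # deterministically regardless of the agent's own terminal size.
    return ["tmux", "new-session", "-d", "-s", session,
            "-x", str(width), "-y", str(height)]

def send_keys_cmd(session: str, keys: str) -> list[str]:
    # Drive the TUI: type text, then press Enter.
    return ["tmux", "send-keys", "-t", session, keys, "Enter"]

def capture_pane_cmd(session: str) -> list[str]:
    # Dump the visible pane contents as plain text ("-p" prints to stdout),
    # which the agent can diff against an expected layout.
    return ["tmux", "capture-pane", "-t", session, "-p"]

def run(cmd: list[str]) -> str:
    # Execute one of the commands above and return its stdout.
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```

The capture step is the key trick: it turns "what does the screen look like?" into a plain-text string the model can actually read.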
Once it gets anything useful done, ask it to "write a subagent definition for a TUI tester that uses tmux to exercise a TUI and test its rendering, layout, and interaction behavior".
Save that subagent definition, and now Claude can do closed-loop visual and interactive testing of its own TUI development.
Since I have e2e tests, I only use the agent for two things: guiding it on how to write the e2e test ("use tmux to try the new UI and then write a test") and evaluating overall usability (fake user testing, before actual user testing): "use tmux to evaluate feature X and compile a list of usability issues"
I agree that "TUI" is a better fit though. But not TUI-driven-development, more like TUI-driven-design, followed by using the textual design as a spec (i.e. spec-driven development) to drive GUI implementation via coding agents.
I’m not affiliated, but to clean them up you can use something like ascii-guard (https://github.com/fxstein/ascii-guard), a linter that will fix the alignment. Beats doing it by hand after multiple attempts at telling the AI to do it and watching it repeatedly fail.
That said, the edge misalignment is, I believe, caused by the fact that LLMs are involved in the process. The LLMs never "see" the final visual representation that humans see. Their "view" of the world is text-based, and in the text file, those columns line up because they have the same number of Unicode codepoints in each row. So the LLMs do not realize that the right edges are misaligned visually. (And since the workflow described is for an LLM to take that text file as input and produce an output in React/Vue/Svelte/whatever, the alignment of the text file needs to stay LLM-oriented for it to work properly. I assume, of course, since I haven't tried this myself.)
The number of codepoints never did correspond exactly to the number of fixed-width blocks a character should take up (U+00E9 é is the same as U+0065 e plus U+0301 COMBINING ACUTE ACCENT, so it should be rendered in a single block but it might be one or two codepoints depending on whether the text was composed or decomposed before reaching the rendering engine). But with emojis in play, the number of possibilities jumps dramatically, and it's no longer sufficient to just count base characters and ignore diacritics: you have to actually compute the renderings (or pre-calculate them in a good lookup table, which IIRC is what Ghostty does) of all those valid emoji combinations.
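The composed/decomposed point is easy to check in Python, where `len()` counts codepoints:

```python
import unicodedata

composed = "\u00e9"     # é as a single codepoint (NFC form)
decomposed = "e\u0301"  # e + COMBINING ACUTE ACCENT (NFD form)

print(len(composed), len(decomposed))  # 1 2 -- same glyph, different counts
print(composed == decomposed)          # False as raw strings...
print(unicodedata.normalize("NFC", decomposed) == composed)  # ...True after normalizing
```

Both strings render as one visual block, so any tool that equates codepoint count with display width will drift by one column depending on which form reached it.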
P.S. The Hacker News comments stripped out those emojis; fair enough. They were, in order:
- a US flag emoji (made up of two codepoints)
- a heart-on-fire symbol (two distinct symbols combined into a single image, made up of four codepoints total)
- a woman and a man with a heart between them (three distinct symbols combined into a single image, made up of six codepoints total)
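The counts above can be verified directly in Python, since `len()` on a string counts codepoints (the escape sequences below are the standard Unicode sequences for those three emoji):

```python
# US flag: two regional-indicator codepoints (U, S)
flag = "\U0001F1FA\U0001F1F8"

# Heart on fire: heart + variation selector + zero-width joiner + fire
heart_on_fire = "\u2764\uFE0F\u200D\U0001F525"

# Woman + ZWJ + heart + variation selector + ZWJ + man
couple = "\U0001F469\u200D\u2764\uFE0F\u200D\U0001F468"

print(len(flag), len(heart_on_fire), len(couple))  # 2 4 6
```

Each one renders as a single glyph, yet none of them is a single codepoint, which is exactly why width-by-counting breaks down.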
I won't speculate on whether the post is AI-written or whether the author has adopted quirks from LLM outputs into their own way of writing because it doesn't really matter. Something about this "feeling" in the writing causes me discomfort, and I don't even really know why. It's almost like a tightness in my jaw or a slight ache in my molars.
Every time I read something like "Not as an aesthetic choice. Not as nostalgia. *But as a thinking tool*" in an article I had until then taken on faith as written in the voice of a human being, it feels like a letdown. Maybe it's just the sense that I believed I was connecting with another person, albeit indirectly, and then I feel the loss of that. But that's not entirely convincing, because I genuinely found the points this article was making interesting, and no doubt they came originally from the author's mind.
Since this is happening more and more, I'd be interested to hear what others' experiences with encountering LLM-seeming blog posts (especially ones with inherently interesting underlying content) have been like.
In this particular case, if the facts about how many years ago various products came out are wrong, it doesn't matter since I'm never going to be relying on that fact anyway. The fact that what the author is proposing isn't ASCII, it's UTF-8-encoded Unicode (emojis aren't ASCII) doesn't matter (and I rather suspect that this particular factual error would have been present even if he had written the text entirely by hand with no LLM input), because again, I'm not going to be relying on that fact for anything. The idea he presents is interesting, and is obviously possible.
So I care less about the "voice" of an article, but a LOT about its accuracy.
And if I'm wrong: so be it. I'm comfortable living dangerously.
(Reading it again, I probably should have noticed by "But here’s the thing: AI-generated UIs are high-fidelity by default", a couple of sentences previously. And in fact, there's "Deliberately sketchy. Intentionally low-fidelity. The comic-sans-looking wireframes were a feature, not a bug" in the very first paragraph - god, I'm so stupid! Still, each time I get this wrong, I'm that bit more likely to spot it in future.)
I think we do develop "antibodies" against this kind of thing, like listicles, clickbait, and random links that rickroll you. It's the same reason the article isn't titled, "5 examples of ASCII-Driven Development. You'll never guess #2!"
Every article is a little mentor, and the thing with mentors and teachers is you have to trust them blindly, suspend disbelief, etc. But the AI voice also triggers the part of the brain designed to spot scams.
When LLMs reuse the same patterns dozens of times in a single article, the patterns stop being interesting or surprising and just become obnoxious and grating.
Designers have learned Figma and it's the de facto tool for them; doing something else is risky for them.
Product leaders want high fidelity. They love the AI tools that let them produce high fidelity prototypes.
Some (but not all) engineers prefer it because it means less decision making for them.
- Problem: AI UI generators are high-fidelity by default → teams bikeshed aesthetics before structure is right.
- Idea: use ASCII as an intentionally low-fidelity “layout spec” to lock hierarchy/flow first.
Why ASCII:
- forces abstraction (no colors/fonts/shadows)
- very fast to iterate (seconds)
- pasteable anywhere (Slack/Notion/GitHub)
- editable by anyone
Workflow:
- describe UI → generate ASCII → iterate on structure/states → feed into v0/Lovable/Bolt/etc → polish visuals last
It also facilitates discussion:
- everyone argues about structure/decisions, not pixels
- feedback is concrete (“move this”, “add a section”), not subjective
More advanced setups could integrate user/customer support feedback to automatically propose changes to a spec or PRD, enabling downstream tasks to later produce PRs.
Example 2 has five boxes in a row, each with a number 1 to 5 in it, and each box is missing a single space before the second vertical bar... I think the problem might be centering: it needs to distribute 3 spaces of padding across the two sides of the text, divides by 2 to get 1.5, then truncates both sides to 1 instead of putting 1 on one side and 2 on the other. Doesn't quite fit with how many are missing in [PRODUCT IMAGE] right above that, though.
(Also I'm just eyeballing it from mobile so I may be wrong about exact counts of characters)
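A minimal reproduction of that hypothesis (both functions are hypothetical sketches of the suspected behavior, not anyone's actual code): the buggy version truncates the padding on both sides, while the fix gives the leftover column to one side, which is also what Python's `str.center` does.

```python
def center_buggy(text: str, width: int) -> str:
    # Suspected bug: halve the padding and truncate BOTH sides,
    # losing a column whenever the total padding is odd.
    pad = (width - len(text)) // 2
    return " " * pad + text + " " * pad

def center_fixed(text: str, width: int) -> str:
    # Correct: one side gets the extra column when the padding is odd.
    total = width - len(text)
    left = total // 2
    return " " * left + text + " " * (total - left)

# A "1" centered in a 4-column cell: 3 spaces of padding to distribute.
print(repr(center_buggy("1", 4)))  # ' 1 '  -- only 3 columns, edge drifts left
print(repr(center_fixed("1", 4)))  # ' 1  ' -- the full 4 columns
```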
Long before Unicode code points were assigned to them, we were using emojis in text communication in email and SMS.
You can always be quite expressive with ones like :) :D :-( or even ¯\_(ツ)_/¯, although that last one isn't strictly ASCII.
Even the very first one (ASCII-Driven Development) which is just a list.
I guess this is a nitpick that could be disregarded as irrelevant since the basic structure is still communicated.
I've had it on a number of projects now where high-quality assets were pushed into early builds, causing execs' eyes to light up as they feel like they're seeing a near-final product, blind to the issues and underdeveloped systems below. This can start projects off on a bad footing, because expectations quickly become skewed and focus goes to the wrong places.
At one studio there was a running joke about trees swaying because someone had decorated an outdoor level with simulated trees. During an early test the execs got so distracted by how much they swayed and if it was too much or too little that they completely ignored the gameplay and content that was supposed to be under review. This issue repeated itself a number of times to the point where meetings would begin with someone declaring "We are not here to review the trees, ignore the trees!"
I've brought this issue up more recently with the advent of AI: with things like Sora, generated video clips can be stitched together into subjectively exciting movie trailers. This now has people declaring that AI movies are around the corner. To me this looks like a similar level of excitement to seeing the trees sway. An AI trailer looks much closer to a shipping product than it should, because the underlying challenges are far from solved; nothing is said about the script, pacing, character development, story, etc.
You might believe that TUI is neutral, but it really isn't - there are a bajillion different ways to make a TUI / CLI.
+--------+
| |
| ASCII! |
| |
+--------+

claude(1) with Opus 4.5 seems to be able to take the examples in that article, and handle things like "collapse the sidebar" or "show me what it looks like with an open modal" or "swap the order of the second and third rows". I remember not long ago you'd get back UI mojibake if you asked for this.
Goes to show you really can't rest on your laurels for longer than 3 months with these tools.
None are mentioned. E.g., I made https://cascii.app for exactly this purpose.
The examples are using non-ASCII characters. They also don’t render with a consistent grid on all (any?) browsers.
Maybe they meant plain-text-driven development?
Is an ASCII/Unicode text UI the way to go here, or are there other UI formats even better suited to LLMs?
I wonder if this has any real benefits over just doing very simple html wireframing with highly constrained css, which is readily renderable for human consumption. I guess pure text makes it easier to ignore many stylistic factors as they are harder to represent if not impossible. But I'm sure that LLMs have a lot more training data on html/css, and I'd expect them to easily follow instructions to produce html/css for a mockup/wireframe.
I may suffer from some kind of PTSD here, but after reading a few lines I can't help but see the patterns of LLM style of writing everywhere in this article.
This writing says something, has a point, and you could even say it gets to the point in the right way, but it lacks any voice, any personality. I may be wrong (I stopped reading midway) but I really don't think so.
GUIs were supposed to be the big thing that would let non-technical staff use computers without needing to grasp TUIs.
graph-easy.online
printscii.com