Stock Vim (without `tmux`) can actually do most of what's shared in this post with `rg --vimgrep restore_tool | vim -c cb -` (`vim -c cb -` is my favorite feature in Vim; I find it strange that it's so rarely used or talked about).
(Re-running the `rg` search can be undesirable, and I often like to analyze results in a terminal before opening them in Vim, so I use a custom `tmux` command to copy the output of the last command [using this trick that involves adding a Unicode character to your prompt: https://ianthehenry.com/posts/tmux-copy-last-command/], then I send that into Vim with e.g. `tmux saveb - | vim -c cb -`.)
For example, why is the default vim cursor movement on hjkl? Well, it's just that arrows were drawn on those keys on the physical keyboard one of the vi designers used. That's it. There is no deep thought in search of the best cursor position, and understanding the why is just learning a useless piece of trivia.
To address your example: Why were the arrow keys on those particular keys? Who put them there? hjkl are on the home row, and touch typists end up having the movement keys under their right hand’s resting fingers. That’s suddenly quite convenient.
This is false, h isn't under a resting finger. So go back and spend more time trying to explain that historic tidbit of design before trying to defend it (I'd also be curious to know why they shifted left instead of using the resting positions)!
Or don't, and apply this obvious principle directly by changing keybinds to jkl;
Or go with the muscle memory of inverted T and use ijkl
But whatever you do, prioritizing the original design is a common bad heuristic, because there is no reason to think that the original designer was great (not perfect! - don't twist it), so trying to understand the original reasons is a waste of "productivity" time (but if you're curious, it's not a waste of regular time)
They were placed on those particular keys by Lear Siegler who made the ADM-3A terminal Bill Joy was using at the time he made vi. End of story.
← ↓ ↑ →
makes little sense to me. ← ↑ ↓ →
would be way better, IMO. Not only because the most used direction (↓) would be closer to my "neutral" finger position, but mainly because the keys for progressing "back" and the keys for progressing "forwards" would be grouped together.
Honestly, I wouldn't even mind having them spread across two rows, like U I J K
↑ ↓
← →
or something. (Personally, I have a global WASD-like arrow mapping bound to IJKL through a Capslock combo in AutoHotkey, since sometimes cursor keys are really inconveniently far away when typing.)

I don't know what you mean by "Not only because the most used direction (↓) would be closer to my 'neutral' finger position" - what is your neutral finger position?
I also got lost in the sentence about "back" and "forwards" - what is back and forwards?
Sorry, I didn't realise that was unclear. By "back" and "forwards", I mean movement through the flow of text relative to the cursor position. Given any reference point in linear text, all surrounding content either precedes or follows that point. Moving through preceding content is "going back", and moving through following content is "going forwards". When we move to the preceding line (↑) or preceding character (←), we're going "back". When we move to the following line (↓) or following character (→), we're moving "forwards". My point was that ← ↓ ↑ → effectively represents "back one character — forward one line — back one line — forward one character", which feels counter-intuitive to me.
> strongest finger, the index finger.
Interesting. As far as I know, the middle finger is usually considered stronger than the index finger. The index finger might be more dextrous, though (?). Personally, I also slightly prefer the middle finger for rapid pressing over the index finger, but cannot see a strong definitive advantage of either one. (I guess most of us use the middle finger for the regular ↑↓ keys, as well as W/S in WASD bindings in games, just fine, and using the index finger in that context instead would feel odd.)
> what is your neutral finger position?
Mostly index finger on "K". (So I guess I'd prefer having "down" (being the most frequently used when VIM binding is involved) on "L", where my middle finger usually dwells, and "K" for moving up, if I had to invent it from scratch.)
P(old thing being good | old thing still being used after N years) is pretty high. Certainly higher than the base rate of P(new design fad being good).
Keeping your fingers on the home row is great design for keyboard-first navigation.
All software design is historically contingent so you do have a point but assuming that old design = bad design is just wrong. Some things haven't changed.
Except you've just made that reason up since vim's defaults don't follow this logic. For example, the most frequent commands of going back/forward by word are not on the home row, they follow a different principle of name-based mnemonics.
Strictly speaking, this isn't even true for hjkl; that was due to the fact that arrows were drawn there, not because the designer followed some good design principles (granted, at least the physical arrows were likely driven by that principle, though there is still a mystery of moving off the resting/home keys, which might be related to the fact that the ASCII control code for H is backspace https://news.ycombinator.com/item?id=3684763).
> Some things haven't changed.
Indeed, and that universal fundamental thing that hasn't changed is that a few random people doing design blindly can't universally create a world of good design!
Vim command layout is not perfect, the worst offender in my opinion is $ (move cursor to the end of the line), which is commonly needed, but somewhat hard to reach.
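Since $ is a stretch, one common style of workaround (the mappings below are my illustration, not from the thread) is to move line-start/line-end onto easier keys, at the cost of shadowing the default H/L screen-top/screen-bottom motions:

```
" Illustrative remaps: put line-start (^) and line-end ($)
" on home-row-adjacent keys; shadows the stock H/L motions
noremap H ^
noremap L $
```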
HOWEVER - when you start to use a surviving piece of still-used old software, a bit of humility goes a long way. Not because software or designers were necessarily better in the past, but because the reason the software is still in use is probably that there are some benefits to it. So learn the defaults first, modify later when you understand them.
Right, just like w/b are "close enough" outside the home row completely
> but the reason that the software is still in use is probably because there are some benefits to it.
You forgot to connect this principle to this discussion. How does it make the defaults good to support your faulty conclusion that you need to learn them first?
Yes, they are close enough, and w/b are executed often.
> How does it make the defaults good to support your faulty conclusion that you need to learn them first?
You need to at least study them because assuming a priori that they are either good or "universally bad" is a faulty conclusion - and some stuff doesn't become obvious until you use them for some time.
When you start learning VIM, you will be faster with arrow keys, because you are used to them from other programs. When you get enough muscle memory, you may discover that there is some value in having movement keys near where your fingers rest on the keyboard - or maybe you will still find that you are faster with arrow keys - whatever the conclusion, it cannot be made objectively before you have enough proficiency with both.
Now, you get to say it is faulty, and when one thinks about it for a minute, that idea packs a whole lot less punch.
Just want to dilute some unnecessary implied authority out of this otherwise interesting discussion.
Frankly, I always learn the defaults for the high value idea of reducing my overall configuration and maintenance workload!
Secondly, when I walk others through a workflow, I don't have to do as much because the defaults are in place and useful.
These days, I tend to run defaults everywhere I can. Doing this means I do not have to do a ton of configuration when setting up new environments.
I have also found those defaults do make a lot of sense. Maybe not the absolute peak sense, but more than enough.
I end up able to move and do a lot very reasonably quickly.
I also find my skills do not need refreshing as often, either.
In any case there is plenty of room to disagree here, sans the idea of someone's conclusion being "faulty".
... one still can't correct the fault. You haven't answered the question of the original disconnect in arguments.
Your new arguments aren't relevant either since they're also NOT connected to the original re. bad defaults. You can have bad defaults that aren't there for any good reason and still think that reducing maintenance is more valuable! Fine, but that's a different argument!
That needs evaluation, and that need to be done in context.
In many cases, the defaults are not bad. One finds that out by working with them.
Really, I just don't feel "faulty" makes any sense.
If anything, it is more establishing a baseline.
Whether someone bothers with defaults first is debatable. I am on the work-with-them-first side for what should be obvious reasons.
Passersby can arrive at their own conclusions and life carries on.
Re: still think reducing config and maintenance...
Yup. I have shown it many times. This also depends on context. For most of my career, no brainer.
```
set tabstop=4
set shiftwidth=4
set expandtab
set showmatch
set nohlsearch
set background=dark
syntax on
```
Typing that config into a file is emotionally associated with a system feeling "ready" for me. "Ah, now I can _do_ things."

I'm also an IDE user though. I tend to maintain a dichotomy between emacs (with evil-mode, of course) as the "kitchen sink" setup, with all the fixings, and vim with a config so short I can type it in as commands if I need to.
Vanilla vim is really perfect for quick edits to config files, scripts on random servers/VMs etc.
Bigger projects, at least for my usage, all happen on the same system, and having a bit more involved emacs setup makes sense there.
I suppose one could do a similar dichotomy with vim/neovim, if one had a distaste for emacs.
(I keep most of my dotfiles in a repository called "dotfiles".)
I get the emotional value of / desire for a minimalistic .vimrc, but I also need the usefulness, and that necessitates pulling in some plugins. E.g., language servers are just so valuable for immediate feedback in the editor.
Over time, some of my vimrc has been pruned away just by development that has happened in/on vim itself, which is always lovely to see.
I'm not arguing for it, just saying I've seen it at multiple billion-dollar+-a-quarter companies.
There's no fixing it, though. I can know the "base tooling with zero config" … and I'm just less productive, that's all there is to it. Customized tooling makes me faster than the base tooling. (I did start trying to find "inventive" ways to work around the problem, of course. My case wasn't military air-gapped or anything; the only connection was via RDP. So, for example, copy & paste is a communication channel.)
Basically, slowly "evolving" my environment by forcing me to try new things daily, without my doing massive "learning" runs where I try batches of new things at once.
Seems like using a tool to its fullest potential to get more work done is better advice.
> I would be just as quick and comfortable on any system I would likely encounter.
How often are we encountering other systems…? And even where I am rarely ssh'd into something else … are we doing so much editing of code (live in production…?) that it matters? (I heavily customize my vim, but it isn't like I'm lost on a remote system with stock vim, or nano. ed is another matter.)
But if I need tons and tons of editing, … sshfs+local vim/terminal? But this just such a rare case, it seems like one of those "we should optimize for the common case" — which this is not.
For me personally it's a classic old timer habit from the days when you had to be prepared to fix a system using only the tools in /sbin. That doesn't mean you should operate like that all the time, but you should certainly know how to do so and be comfortable doing it.
Some examples:

```
" Quick access to commonly edited config files (and a directory for my Shell scripts!)
map <leader>v :e ~/.vimrc<cr>
map <leader>V :source ~/.vimrc<cr>
map <leader>w :e ~/Workspace/myCo/tmuxp-session.yaml<cr>
map <leader>W :e ~/.tmux.conf<cr>
map <leader>z :e ~/Shell<cr>

" Super simple in-editor note setup, amazing
map <leader>x :vs<cr>:e ~/Documents/notepad.txt<cr>
map <leader>X :vs<cr>:e ~/Documents/notes<cr>

" Quick terminal pane
map <leader>t :vs<cr><c-w>l:term<cr><c-w>j:q<cr>

" Pull file path into clipboard
nmap <leader>b :let @+ = expand("%")<cr>

" Pull current line into clipboard
nmap <leader>B "*yy
```
Quick disposable terminals, tons of short cuts to get me into the config files that make it all happen (vimrc, zshrc, tmux.conf, a tmuxp session file) and to reload them, and super quick access to a well organized directory of notes are all huge boons during my workday.
Care to explain what it does? Trying `ls | vim -` and `ls | vim -c cb -` I don't immediately see a difference.
E.g., your example doesn't do anything because `ls` doesn't output `grep`-format lines. So try piping the output of `grep` (you'll need flags for the line number and column number with `grep`, hence the `--vimgrep` flag above) matching the above format (or you could try `ls | sed 's/$/:0:0/' | vim -c cb -`, which hacks `ls` output into grep format, and is occasionally useful).
(Note that the above hints at another useful tip: `grep` parsing is only part of what `cb[uffer]` does, it can also parse compiler output, e.g., something like `gcc foo.c 2>&1 | vim -c cb -` (compiler diagnostics go to stderr, hence the redirect) will jump to the first compile error in your program and put the rest of the errors in the quickfix list.)
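To make the expected format concrete, here's a small sketch (the filenames are made up) of producing grep-style `file:line:col:text` lines that `cbuffer` can parse:

```shell
# Emit quickfix-parsable "file:line:col:text" lines from a plain
# file list; line/column are dummies, as in the ls hack above.
# The result can then be piped into `vim -c cb -`.
printf '%s\n' main.c util.c | sed 's/$/:0:0: (listed)/'
# → main.c:0:0: (listed)
#   util.c:0:0: (listed)
```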
You get similar behavior with :grep inside Vim, and you can change 'grepprg' to rg if you like.
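If you go that route, a minimal sketch of the settings (assuming `rg` is on your PATH) might look like:

```
" Use ripgrep for :grep, and teach Vim its file:line:col:text output
set grepprg=rg\ --vimgrep
set grepformat=%f:%l:%c:%m
```

After that, `:grep restore_tool` populates the quickfix list much like the shell pipelines above.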
I've got a feeling the `| vim -c cb -` trick isn't as commonly known because the Vim-initiated versions are somewhat better known. It's handy to know that vim can do it in both "directions" (initiated from the external shell / initiated from within Vim's shell support).
I can't help but see it as the tiling window manager of text editors.
Even with plugins it's less featureful than Kate or Jetbrains IDEs. And the last time I really gave it a go, it was slow, which is surprising for a terminal text editor.
If I need to edit text via the terminal, micro has mouse support and keybindings that match what you'd expect in any OS.
I don't like the idea of thinking people who use vim are insane, it obviously has value and people who use it have good reasons for using it, but I can't see it as more than a niche nerd thing a la tiling window managers.
- Fast and precise navigation (jumping to files and jumping to some point in the file)
- Fast editing commands (there’s no need to select first, and when you need to, selection is fast due to the above)
- Easy extensibility (either custom commands which you can bind to keys, or hooking some logic to editor’s event)
- Integration with external tooling (using the text as input, collecting the output, interpreting the output to find locations to jump to)
All of these are first-class citizens in Vim, whereas in some editors, they can only be done with plugins.
https://stackoverflow.com/questions/1218390/what-is-your-mos...
With a bit of practice, vim/vi-style editing can be developed into muscle memory which makes it incredibly fast to use.
There is a reason why most popular text editors and IDEs have a vim plugins.
There was likely something wrong with your configuration. It's really hard to be faster than vanilla vim, but you can get weird performance issues from external plugins sometimes.
The problem is without some IDE-like features, I don't have a need for a text editor. Micro supports LSP servers by default and that gets me pretty far, before jumping to Kate or a real IDE.
There's of course also the "hacker street cred" aspect of it, to feel like a real serious developer. Or simply being fed up with churn and saying "I'm too old for this shit". JetBrains IDEs might change next year due to some new design fad. Or they may go bankrupt.
In my opinion, valuing boring old tech is good, but you shouldn't make a crusade of it. I choose to put up with some churn and inevitable tool changes for practicality. Yeah, some things have to be relearned this way, some changes seem pointless, but the overall effort may be less if you just learn to be flexible and say "ok if JetBrains goes bankrupt, I'll just learn the next popular IDE like everyone else and be done with it" instead of prepping in my bunker with my vim config files.
Thanks for the perspective.
Edit: I doubt there’s any feature in the more specialised editors, which you can’t also get from Vim.
I find that if I’m already piping into a buffer, I just leave it as a buffer. Vim’s gf and gF keybindings let me jump to the filename under the cursor, and it being a buffer makes it easier for me to edit (reorganize, group, further filter, etc).
I do think people undervalue the quickfix buffer though!
nvim -q (rg --vimgrep needle --color=never | psub) -c copen

(That's fish shell syntax; `psub` gives the pipeline's output a file name for `-q` to read.)
rg --vimgrep restore_tool | vim -c cb! -
You might also wanna open the quickfix list by default: rg --vimgrep restore_tool | vim -c cb! -c copen -
You can learn more about how to navigate it using `:h quickfix`.

The quickfix window is so small, so I added the "-c only" option to make it the only window that first pops up. Then I made it a function so it's easier to call:
```
vgrep() {
  rg --vimgrep "$1" | vim -c cb! -c copen -c only -
}
```
$> vgrep "restore_tool"
find "$1" -name "$2" | vim -c cb! -c "vert 40copen" -
40 means open the vertical split 40 columns wide.

atuin is make-or-break, it's a bigger deal than zoxide, and being a coder without zoxide is like being an athlete with shoes for a different sport.
asciinema is a better way to do terminal videos.
It's weird that this is weird now: having your tools wired in used to be called "being a programmer". VSCode and Zed and Cursor and shit are useful additions to the toolbox, you gotta know that stuff by heart now too, and you have to know which LLM to use for what, but these things are the new minimum, they aren't a replacement for anything. Even with Claude Code running hot at 4am when the PID controller is wide open, sometimes it's going to trash your tree (and if it doesn't, you've got it on too short a leash to be faster than gptel), and without magit? gl.
If you think you're faster than OP with stock Cursor? Get them to make a video of how to use an LLM with chops.
That's not to say that tooling doesn't matter at all. Just that, historically, it's been a relatively minor factor. Maybe LLMs have changed that, or are about to.
An athlete with shoes for a different sport might run 5% slower. In a winner-takes-all competitive environment, that's fatal; a sprinter that ran 5% slower than the gold medalist is just another loser. Most programmers, however, win by collaboration, and on a relatively smooth fitness landscape, not a winner-takes-all spike. Even in winner-takes-all regions like startups, failure always results from bigger errors. I think nobody has ever said, "My startup would have succeeded if we'd used Dvorak keyboards instead of QWERTY", or vim instead of VSCode, or vice versa. It's always things like feuding cofounders, loss of motivation, never finding product-market fit, etc.
You sure? Programming is an act of creation. Any [good] creative worker - artists, sculptors, novelists, potters, bakers, et al. would agree that being an artist means finding joy in refining your technique, investing in your tools, searching for new recipes, and experimenting. Being a programmer is not about achieving better productivity percentages. As far as I know, most of the best-known programmers have never participated in competitive programming challenges. Tooling may not matter to building a product, yet the product is built by programmers, and tooling is very much everything to them. Good programmers do invest in their tooling, not because it's a universal rule they have to follow or because it gives them a competitive edge. They do it simply because they enjoy the process.
Though it's challenging to determine whether someone who loves exploring and refining their tools will excel in a specific team, one truth remains: those who don't engage with their tools likely aren't strong programmers, as they most likely fundamentally lack passion for programming itself.
I could just as easily say that good programmers are the ones who don't have sophisticated tooling setups because it means that they spend more time programming.
I'm inclined to agree with other comments that the baseline for productivity is probably lower than we think. It's fine to enjoy the process of making a perfect setup, but I don't see it as a prerequisite or strong indicator for being a strong programmer.
I have never said that. However, since you decided to go in that direction, I can bite and entertain you. Here is a list of programmers, some of whom I'm sure you'd recognize: Donald Knuth, Rob Pike, Ken Thompson, Steve Yegge, Gary Bernhardt, Paul Graham, Rich Hickey, Bram Moolenaar, Richard Stallman, Anders Hejlsberg, Guido van Rossum, John Carmack, Tim Pope, Drew Neil, Sindre Sorhus, TJ Holowaychuk, Guillermo Rauch, Ryan Dahl, Fabrice Bellard.
The pattern is clear: many of the best programmers are also prolific tool-builders.
My point is that being a programmer is also about configuring one's development environment. Exactly because, like you said, "the essence of programming is tool making". I just don't understand how configuring a tool, extending its functionality, and adding more features to it is different from developing a [different] tool from scratch. Both are programming. Shit done by programmers. You don't call one bunch "pseudo-programmers" and the other "alpha-programmers" or some shit like that, right?
Then I misunderstood your comment. I read it as "not invested in their tools => not a good programmer."
Reading the replies to my sibling comments, I don't think we really disagree but we probably have different pictures in our heads when reading the context of this thread.
Yes, I'm sure. Being a painter is not about decorating your studio and choosing paints and paintbrushes. Being a sculptor is not about using the best chisels and rasps. Being a novelist is not about configuring your word processor. Being a potter is not about the selection of clay bodies and kiln accessories in your workshop. Being a baker is not about where you place your oven or what brand of mixing bowls you use.
It's surely true that any accomplished potter will have enough opinions about clay bodies, glazes, wheels, scrapers, and other tools to talk about all afternoon. But that's not what being a potter is about. 99% of potters have never made anything as beautiful or as useful as many of the pots that María Poveka Montoya Martínez coil-built from clay she dug up near her house and pit-fired with dried cow manure. She engaged with her tools, and I bet she would have yelled at you if you left one of them in the wrong place, but she wasn't defined by them.
That's what being a potter is about.
It's the same for programmers.
But, at the same time, sometimes, and quite often, working on tools IS the craft itself.
Building, configuring and improving programming tools is literally programming - you're writing code, solving problems, thinking about abstractions and interfaces. Every script you write, every editor configuration you tweak, every workflow you automate exercises the same skills you use in your "real" work. Understanding how tools work (and building your own) deepens your understanding of systems, APIs, and software design.
So, in essence, working on your tooling could actually make a better programmer out of you. In fact, many great, well-known programmers do actively work and maintain their tools.
The craftsmanship is better left for small businesses, or FAANGs with engineer playgrounds.
We are in a better world today specifically because of legendary programmers who invested heavily in tooling, and even created their own from scratch. Prof. Knuth spent decades perfecting typesetting, made TeX/LaTeX, METAFONT, and literate programming tools. Linus built Git and maintains his own terminal emulator and scuba diving log app among many other things. Ken Thompson is famous for building tools to build tools. Rob Pike created text editors, window systems and numerous Unix tools. Carmack built custom level editors and dev tools for each game engine he created, he is known for obsessing over development workflows.
Can you name one person who "blows most other engineers out of the water" while not "being obsessed with tech", using nothing but "the soft skills"?
I dunno about you, I, as a software developer, rather want to be like these guys, and I think any aspiring software dev would. I spent my last weekend figuring out my new WM. Not because I had to, forced to do it, offered money for it, or because I perceive it as my "bottleneck". I don't fetishize over my tools to "get a slight edge". I do it purely out of enjoyment for the process, nothing else.
And I have watched many of my peers going through their career ladders. Sure, many of them might be perceived as more successful because while I was busy "sharpening my sword," they went into "the real world" and "touched grass" and "talked to people." Most of those are no longer doing engineering. If the goal is to "grow out" of being a software engineer, sure, then focus on the tooling might be overrated. That's not for me though; I'm happy where I am.
These famous people you cite are famous because they are "devs for devs". These guys were working on a _product_ (be it LaTeX, games, text editors...) whose users happened to be developers: by improving devx a little they had a lot of impact. That has little to do with devs individually spending a lot of time on their own tooling (for whom improving their devx a little has little impact).
> Can you name one person who "blows most other engineers out of the water" while not "being obsessed with tech", using nothing but "the soft skills"?
The best engineers I've worked with were comfortable with their tools (of course, that's a requirement) but wouldn't be spending much time on incremental improvements. They'd spend time talking to customers, to PMs, to support, to leadership, learning about their domain and about how other engineers in the industry approached it. They _would_ spend time investing in tools that might be a game-changer (10x or whatever), but not things like vim configs and keyboards.
It always comes down to the same thing: you can have the best tech skills and tooling, yet a (decent) engineer who understands the domain and can communicate effectively will build a better product than you. That's hardly controversial.
> That's not for me though; I'm happy where I am.
I don't agree that it's a question of "growing out of engineering" at all (in fact I'd say it's "growing into" engineering from programming), but at the end of the day I'd totally agree that you should do what makes you happy! If you enjoy what you do and don't want to be doing the other stuff, who cares, I certainly won't be the one to tell you to change your ways!
Yeah, that's all what I'm saying.
> but wouldn't be spending much time on incremental improvements.
What is "an incremental improvement"?
My tools change depending on the work I do, and it so happens, the type of tasks I do sometimes change. When I say "working on tools," to me it's like knife sharpening. Sure, it's not impossible to cook with a dull knife, but it's helluva uncomfortable; why just not sharpen it? For me, it's like a chore I can't avoid, the only choice I can make is how to respond to it - and I choose to enjoy it. It takes less than two minutes to sharpen my knife - I have an electric sharpener. It's not tiny, but small enough so I can hold it in my hand, and it's not expensive. I also have regular sharpening stones. That process takes longer. I think I enjoy it, why not; it feels like meditation. But I don't do it every day, or even every week. I think I like how knives feel after.
So basically, I don't even understand what we are arguing about. Some engineers like extending the tools they use. Some, maybe not so much. Some spend a good amount of time doing that. They do it because they love it, and there are mostly benefits.
Some don't sharpen their knives at all - they simply throw them away and buy new set - I just don't know anyone like that in my circles. And similarly, I have never met engineers who never cared about improving their tooling even a little bit.
At the end of the day, we can probably all agree that doing the work without nicely "sharpened" tools is like cooking with a dull knife - it's not impossible; it's just why would anyone do this to themselves? It sounds anguishing.
I know what OP is referring to. Back in the day, a programmer was expected to have built their own toolbox of utility scripts, programs and configurations that would travel with them as they moved from project to project or company to company. This is akin a professional (craftsman, photographer, chef, electrician, etc.) bringing their own tools to a jobsite.
But, on the other hand, time spent on sharpening your tools is time not spent using them, or learning how to use them, and the sharpest tools won't cut in the hands of the dullest apprentice. And sometimes spending five hours automating something will save you five seconds a week. All the work I spent customizing window manager settings in the 90s, or improving my Perl development experience early this millennium, produced stuff I don't use now—except for the skills.
If you enjoyed the process it was time well spent.
It's like when people complain that leetcode is nothing like the job: yeah, it's a pretty bad test, but you're making a bad argument about that because CS knowledge is extremely relevant to the job, unless the job is assembly-line library composition.
I've consulted for companies that had all their dev boxes on coder or something, and you get really good at vscode really fast or you don't have much luck. It's more than 5%, but even stipulating that it's 5%, whoa, 5 percent?! By installing a bunch of stuff off a list on GitHub and dropping some aliases in your bashrc or zshrc? in a few weeks you're five percent more effective? Anyone in any field from banking to bioinformatics would think you were joking or a scam artist if you offered them 5% more efficient outcomes at that price.
Regarding OG editors like ed and ex and sam and stuff? I can absolutely believe that someone with a lifetime mastery of one of those tools could smoke a VSCode user, it's not at all obvious that VSCode is an upgrade to vim/etc. along any dimension other than low barrier to entry. Ditto emacs, which is about 1000x more powerful than vscode, and the difference in library/extension ecosystem is not measured by size: the emacs one is what you'd get by slicing the bottom 90% by quality off the vscode one.
And this stuff compounds, you get a bit better at the tools and you can read more code faster, read the right code faster, start moving the derivative of the derivative.
It's also extremely non-obvious that most software work involves collaboration at the kinds of scales and complexities that would make exceptional skills or experience in it a top 3-5 core competency. Study after study going back to Fred Brooks and The Mythical Man-Month has demonstrated that the golden ticket is relatively small teams of people who get along very well, coordinated by a relatively small number of highly technical managers who take the offload on the high-complexity cross-org collaboration. It's the same reason computers have dedicated routing tables in the NIC and that those things talk to even bigger and more specialized machines that do all routing all day: you don't want to scale O(n^M) with the communication overhead.
A hacker needs to work really well with their immediate team, and have a good dialog with a manager who is both technical (to understand the work) and who devotes a lot of energy to complex collaboration.
> And this stuff compounds, you get a bit better at the tools and you can read more code faster, read the right code faster, start moving the derivative of the derivative.
I think you're overestimating how much tool choice impacts developer productivity.
Yes, using good tools is important. Being comfortable with and knowing your way around your toolset will make your life easier. But it won't make you a better programmer. It won't "smoke" anyone else. It's not a competition.
The time we spend reading and writing code is relatively minor compared to the time we spend thinking, designing, communicating, and collaborating with others. Reading and writing code is not the productivity bottleneck. It's understanding what you're reading, making a mental model of the system, and thinking about how to translate your thoughts into working code. The mechanical aspects of it are relatively insignificant. Especially nowadays when we have LLMs to automate those mechanical parts, for better or worse.
Not my cup of tea. I'm trying to become the absolute best I can be every single day at the absolute limit of my abilities and hopefully just a little bit past them as measured yesterday. I think competence as a criterion is going to make a big comeback fairly soon.
For professionals it's absolutely a competition, but I'll also agree that engineers overvalue their environment setup.
A good dev can produce excellent results rapidly in an entirely "naked" environment. Sure, good tools might help, but you're looking at improvements in the margins - and a lot of this is about your personal joy, not productivity.
If the rate at which you generate value is noticeably gated by which IDE you use... well, you've got a long and exciting journey ahead of you. It's going to be fun, and I don't mean that facetiously.
"knowing your tools" was never called "being a programmer". Best devs I've ever worked with all did absolutely amazing work with more/grep/vi and a lot of thinking. And the thinking is where the value was created. That's still true, even if you throw an LLM into the mix.
I'm not knocking the classic tooling, it was designed by geniuses who obsessed about tools and tools that build tools (everyone from Ken Thompson to John Carmack are on the record about the importance of tooling).
It's the stock VSCode and Cursor people with an extension or two that are accepting a totally voluntary handicap. Someone in the thread called me a "shell bro", and I mean, that's just asking to get rocked if it's ever on the line against someone serious.
While I get your point, please consider how absurd this sounds to someone who doesn't recognize most of the names.
I love this place.
I know. It was more a commentary on how whimsical and uninformative all the names are.
Maybe you have no idea how much Elisp code is out there in the wild. There is a LOT written in Emacs Lisp. Elisp is probably the most widely used Lisp, with a global codebase exceeding both Clojure and Common Lisp combined.
There are things built into Emacs that some specialized apps don't even have. Did you know that Emacs has built-in lunar and solar calendars, for example?
Just by comparing the sheer amount of code, of course vim+tmux may feel less complicated - they give you fewer opportunities to encounter complexity because they offer significantly less functionality out of the box.
That said, there is something nice about composing simple programs and keeping things small. With vim + tmux, I have terminal multiplexing and text editing done and haven't touched that configuration in at least 5 years for vim, over 12 for tmux. I know they offer 'less functionality' out of the box, but after years of tweaking and tons of installed packages I was never able to get that same functionality in Emacs, try as I might.
The main pain points were:

- Not having tmux's <c-b z> functionality. You can kinda-sorta-almost jerry-rig it with some custom bindings and winner-undo, but you'll still have weird edge cases to think about because you're technically doing a weird undo-tree thing, and all the context (projects, windows, buffers, etc.) that emacs maintains will have something to say about that.

- ACTUAL vim emulation. Evil mode is really not good enough, and I never had a package that was so hard to integrate without bugs; when it breaks you usually have to peel back a massive layer cake of competing configurations to try to make it work. Even then, evil-magit has tons of issues, the dot operator doesn't work correctly, and none of the ctrl-i/ctrl-o plugins work across enough contexts to make it make sense like it does in vim. That's a killer feature for me.

- Embedded terminals that feel anything like tmux in wezterm. In the emacs terminals, there was always so much shit going on navigating in them (and again the layer of evil there) that it felt like walking on eggshells using them; they tended to have performance issues and usually couldn't run TUI programs properly. I ended up having a separate drop-down terminal I used in tandem with emacs.

- Process management. It was always a huge headache for me keeping track of the processes Emacs was spinning up / down through its various plugins and debugging them.
I'd say most tmux+vim setups have something more like 0.00001% of emacs implemented, but sometimes that's a good thing. And again, I love and use both setups for various things.
What the heck are you even talking about? The (delete-other-windows) command has existed since at least Emacs 18 (released in 1987).
Okay, yeah, I get it, you're complaining that Emacs sorta lacks a dedicated "zoom toggle" that cleanly saves your exact window layout, maximizes one window, then perfectly restores the layout when toggled again. I personally never had any problem with my (toggle-maximize-buffer) command that builds on top of (delete-other-windows).
But do you even realize that you're comparing a bicycle to a Bugatti Veyron? tmux is a simple pane grid with position/size, while Emacs windows carry much more state. There are buffer associations, point positions in each buffer, window-local vars, display parameters (margins, fringes, etc.), there could be integrations with modes that manage window layouts, and there are multiple abstraction layers: frames, posframes, buffer display rules.
tmux can get away with a simple layout save/restore because panes are just rectangular terminal regions.
> Evil mode is really not good enough
Ha, I'm a die-hard vimmer, and I laugh at this sentiment. You, my good Sir/Madam/etc., respectfully, have no idea what you're talking about. Evil-mode is the only ACTUAL vim emulation outside of vim/neovim; everything else merely pretends to be one. And I've tried them all - IdeaVim for IntelliJ, Sublime Vim plugins, VSCode extensions, etc. All of them are pretty much filled with laughable deficiencies; they are not even shadows of the actual Neovim experience. With one notable exception, and that is the vim implementation in Emacs. In Emacs, Evil-mode doesn't even feel like an extension or an afterthought; it feels like a baked-in, major feature of the editor. More than that, it can do certain things even better than you can in Neovim.
Anyway, it seems you're complaining for just the sake of complaining, without any evidentiary input. Like I said before, it's of course absolutely obvious that things of lesser capacity will have a smaller surface area and thus feel more stable.
What would be your reaction if I said: my bicycle of 15 years has never needed an oil change or brake fluid drainage; my new car, in comparison, is so much more complicated and requires constant attention? You'd probably laugh and call me names. I hope you'd realize how vain this kind of argumentation is - comparing things of completely different caliber.
> Anyway, it seems you're complaining for just the sake of complaining, without any evidentiary input.
I have given multiple specific examples of every claim that I made. HN's formatting did make my comment difficult to read, but the content is there.
> But do you even realize that you're comparing a bicycle to a Bugatti Veyron - tmux is a simple pane grid with position/size and Emacs windows carry much more state.
Yes, yes, this is exactly what I'm saying. It's cumbersome and much more difficult to maintain. There are reasons to like both things (and I have repeatedly made efforts to re-iterate that I do like both options and use both for different purposes).
Your screed about how great evil-mode is does not address any of the specific issues that I have and was never able to resolve wrt using evil-mode in emacs.
> What would be your reaction if I say - my bicycle of 15 years has never needed an oil change or brake fluid drainage. My new car in comparison, is so much more complicated and requires constant attention? You'd probably laugh and call me names.
Incidentally I wouldn't, especially if your bicycle worked for your lifestyle and the constant attention and expense of motor vehicles factored into your decision making process. That's neither here nor there, though, because I happen to own both a bicycle and a car, and I never made the argument you seem to be fixated on. Like my car and bicycle, I use both a tmux / terminal app based workflow and emacs in different contexts and have swapped back and forth between the two at different points depending on my mood and what I'm doing. Sometimes I even (gasp!) discuss my experiences with the two tools in a comments section.
___
Emacs has TRAMP mode - it stands for "Transparent Remote (file) Access, Multiple Protocol" - and it lets you:
- Edit files as if they were local: /ssh:user@host:/path/to/file
- Chain connections: /ssh:jumphost|ssh:target:/file for bastion hosts
- Access Docker containers: /docker:container:/etc/config
- Edit Kubernetes pods: /kubectl:pod:/app/settings
- Sudo seamlessly: /sudo::/etc/hosts or /ssh:host|sudo::/etc/config
- And even combine them: /ssh:server|docker:container|sudo::/etc/nginx/nginx.conf
What do you get? Transparent integration - Dired, Magit, etc. just work. There's no context switching - you stay in your configured Emacs environment. Your keybindings, packages, and customizations remain the same. And it's multi-protocol: it supports SSH, FTP, SMB, ADB (Android), and more.
ansi-term, however, doesn't work through TRAMP out of the box, though there are workarounds, like https://github.com/cuspymd/tramp-term.el (haven't tried it yet)
not really TRAMP's fault, of course...
it also doesn't quite work with macOS remotes usually.
again, not purely TRAMP's fault, but a default config and habits issue with ~/.zshrc vs ~/.zprofile and maybe /etc/sshd/config settings, i think. i haven't fully figured it out yet.
using this kind of "full-screen terminal screen sharing" approach gives a more predictable experience, because the amount of data transferred is usually one screen's worth of characters (and colors) max, on most keystrokes.
that's both a pro and a con.
it imposes a fixed, network-connection-dependent input lag, and the output is also often redrawn by retransmitting the same data over and over again...
"i don't use Nix partly because all my friends who use Nix have even weirder bugs than they already had and partly because i don't like the philosophy of not being able to install things at runtime."
The first part is mostly true. Nix installs things in a readonly store (/nix/store) so regular dynamically linked binaries don't work. Packaging takes a different approach and when things break, it can be difficult to work around. That said, I've run NixOS for over a year now and I find the benefits are far preferable to these downsides. It's not often I run into bugs, let alone show-stopping ones. What is annoying is how many tools are distributed without the source, so I have to run patchelf on them or use something like nix-ld.
As for the latter part, I think that using Nix will change that mentality. (Note that you can do `nix-env -iA $pkg` but it's not recommended). See, I don't even install things like rust at a global level anymore. I can always do `nix-shell -p $pkg` for an ephemeral shell if I need that, or I encode that dependency directly in the project's flake.nix. If I end up using that program a lot I will make the effort to add it to my NixOS config.
A great deal of things are already packaged, but for the most part I find it pretty fast to package something. Once you write a derivation [2] or two, it's not that bad. I never packaged for other distros because they all seemed quite tedious, but the nixpkgs reference [3] lists most things and I can look at the source of similar packages in nixpkgs. It is a time commitment though to learn so it's understandable that it's not really appealing.
[0]: https://github.com/svanderburg/nix-patchtools
[1]: https://blog.thalheim.io/2022/12/31/nix-ld-a-clean-solution-...
Nowadays I start every project with `nix flake init --template templates#utils-generic` and put everything related to the project in it. I even had some projects where I had to pin 'ssh' as a package, as it was used in some scripting and the default macOS and Linux versions accepted different flags.
I also love that I can do something like `nix run nixpkgs#nmap` on any machine I'm on to instantly run a program without worrying where to get it from. I also use this feature in some of our projects, so you can click a link in the admin web interface which is a 'command url' for iTerm2[0], like: `nix run gitlab.com/example/example/v1.0 -- test http://example.com`, which will prompt to run that specific version of the command in your terminal, without having to check out the source repo. In this case it is to rerun a specific task locally for debugging purposes.
https://www.threads.com/@kunalb_/post/C6ZQIOVpwMd https://gist.github.com/kunalb/abfe5757e89ffba1cf3959c9543d9...
which has been really useful.
But I was curious for your approach... so I asked Claude to convert it to bash: https://claude.ai/public/artifacts/01a49347-1617-4afe-8476-0...
Works like a charm - pinned it to Ctrl-k, which was free in my setup. I guess I don't have to depend on XTerm for this any more :-)
Thanks!
We still do horses, but hardly anyone is favouring them for travelling around the continent delivering mail.
Kudos to the people that would rather experience that, I guess.
My thing is that most UIs make such poor use of screen real estate. I get it, pretty is the priority. But when I want to do work, the padding on everything gets to be excessive. Additionally, it’s really nice to be able to host my dotfiles on GitHub so I can be in a 100% familiar environment within a couple minutes. I have a variety of machines I work on. Same thing goes for mouse usage. I’m much faster operating the keyboard, whereas I switch between mice and trackpads and sensitivities when I switch devices. Keyboards are mostly the same. Maybe it’s a skill issue on my part but oh well ;)
Graphical programs look nice but are a nightmare for interoperability.
Having said that, as an Emacs user I'm surprised that anyone goes to this much effort to not use Emacs. This is what it's made for and it's all built in the most hacker-friendly way imaginable.
Back in '05 I realised how crazy it was that everyone was using these shitty editors built in to bloated IDEs, all slightly different from each other. It's all just text! This caused me to discover vim and Emacs. This was about 10 years before editors like Atom and then VS Code caught on.
I tried vim for a while, did the tutorials and tried to believe that if I practised the keys I'd become a wizard. But it never paid off. But I'm glad I learnt to enter insertion mode and exit vi/m at least.
Emacs was not presented as well back then. It had (has?) a terrible looking GUI by default. But once I'd switched that off the keyboard interface and major/minor modes made so much sense. No surprise that VS Code uses the same model.
But then when I got into Elisp I can say I truly fell in love. I liked GNU/Linux before, but Emacs is what Free Software was always meant to be. Not just technically hackable but practically so. How many people edit their VS Code plugins to do exactly what they want? With Emacs you can hack everything right there in Emacs while it's running and then just go right back to where you were.
This is what systems like those from Xerox PARC, TI, Genera, ETHZ, Inferno had to offer, and we still aren't quite there in mainstream systems.
Instead, an experience like MS-DOS, CP/M, VMS and UNIX without X is all that we can get hold of.
Most alternatives are much faster, easier to use, and more reliable than the horse. Just like the terminal, in fact: your metaphor was perfect, you just had it the wrong way round :)
It's literally what brought computing into the mainstream, so you can click options that are presented to you rather than memorize a bunch of incantations.
E.g.: not using Xcode for iOS dev doesn’t make much sense to me. At work we have a fully set-up IDE that works with our build system. I like QtCreator for C++.
But for web dev, scripting and blog writing, I find programs like VSCode extremely slow even with just minimal plugins. After a couple of years I got fed up and went back to vim in terminal.
GUI is not a superset or complete replacement of TUI. There are many reasons; one is that GUI is much harder and more finicky to automate.
Every GUI automation is highly non-standard, ad hoc, finicky (usually, depends on exact pixel positions), possibly Turing complete, but even if it is, it's harder to use compared to writing a script.
> Every GUI automation is highly non-standard, ad hoc, finicky (usually, depends on exact pixel positions), possibly Turing complete, but even if it is, it's harder to use compared to writing a script.
The same applies to TUI applications. How do you automate top or mc? Don't conflate presentation (which is silly to even attempt automating) with internal logic.
The entire reason for TUI to exist is that TUI apps can be used in a terminal window, alongside CLI, so you don't have to switch to a separate window. But fundamentally it's just "GUI on a character grid".
I think it's more like walking. For some things walking is preferable to driving there. It's even less effort for things that are nearby, it uses fewer resources, I can do it after having a drink, I can go through places that are not accessible by car (foot paths)...
Except that if you've ever seen someone who truly knows how to use a terminal, you'd be surprised how many times faster he can work than with any other UI-based workflow.
This may change with AI, but so far I'm not holding my breath.
Had I chosen to script it, I would probably still be hunting for the CLI to convert PDFs to PNGs. Sometimes, the trade-off is not so clear.
Assuming very basic knowledge of CLI on Linux
for i in *.pdf; do convert "$i" "$i".png; done
10 seconds flat.

If you've never heard of convert (part of imagemagick) then:
open chatgpt/gemini/claude/whatever, enter prompt:
linux cli tool to convert pdf to png
Adds 30sec tops.

Thank you for making my point for me.
1. Install sh/bash to get `for`, or convert your 'handy one liner' to PowerShell, and verify it works
2. Install ImageMagick, then ask an LLM to tell me what to do
2.5 And somehow LLMs don't present `convert`, but `pdftoppm`, or worse, both, and give me one giant essay, without my pre-configuring the system prompt to be concise and not overly wordy and avoid meta-sentences like 'if you want more help let me know';
3. Then navigate to my website anyway, and upload everything.
So no, point not made for you. With my existing environment and existing knowledge, it took me two minutes to get the work done. Even the slightest search would have brought me above that measure. There is a local minimum for a certain n at which discoverable GUIs absolutely excel over CLIs.
Well, what can I say other than people don't usually get in races with lead weights tied to their ankles.
A code analogy: it's a bit like how O(n^2) sort algorithms are the fastest for small n, and how most library sorts choose a different algorithm for different `n`. This is what I meant in my initial comment when I said
> Sometimes, the trade-off is not so clear.
Took less than a minute to write, test, and get the right dependencies (nix-shell -p imagemagick ghostscript). Scales to any number of pdfs.
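Presumably the one-liner looked something like the sketch below - `nix-shell -p imagemagick ghostscript` is taken from the comment above, while the loop body and the output naming are my own assumptions:

```shell
# Roughly what the parent's one-liner may have looked like: an ephemeral
# shell with ImageMagick + Ghostscript, running a conversion loop over
# every PDF in the directory. Skips silently if Nix isn't installed.
if command -v nix-shell >/dev/null 2>&1; then
  nix-shell -p imagemagick ghostscript \
    --run 'for f in *.pdf; do convert "$f" "${f%.pdf}.png"; done'
fi
```

The `${f%.pdf}` expansion strips the extension, so `report.pdf` becomes `report.png` rather than `report.pdf.png`.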
The terminal just wins probably 95% of the time as long as you know about available tools.
So I have to install NixOS to get your 'took me less than a minute to write, test, and get the right dependencies'. Given my work-flow, and since I don't use Nix, doing what I did was absolutely 100% faster.
That, plus the terminal (itself) is at your disposal because, uh, you're in it.
That, plus vim emulation in all major ides be it vscode or jetbrains is pretty wonky and not comparable to the real thing.
Not much to do with 'true hackers'; it's sad that there are people who actually think that.
(all of the above being purely subjective ofc; this can be an infinite argument)
You mentioned the arcane keyboard shortcuts of tmux. I'm curious if you or others here have tried/use byobu (which I think of as a wrapper around tmux, basing most commands on the F# row). I was shown it a decade ago and have used it since (after a couple prior years of primitive tmux use).
> You mentioned the arcane keyboard shortcuts of tmux.
oh, i've remapped almost all the shortcuts in tmux. `ctrl-k` is not the default prefix and `h` is not the default key for "select pane left".
i haven't tried byobu but from skimming the readme i expect it not to have a ton other than nicer default key bindings, and i'd rather not add more layers to my terminal.
In my case, we have dev robots with byobu installed, and it's much easier to train non-SW engineers (i.e. HW folks, technicians, QA) on its use (primarily for remote session persistence).
(This is also why I don't do much/heavy customization these days: for uniformity between local and robot machines...)
https://github.com/tmux-plugins/tmux-fpp
https://github.com/tmux-plugins/tmux-copycat
https://github.com/Morantron/tmux-fingers
https://github.com/tmux-plugins/tmux-urlview
Any configuration or plugin that leans on the built-ins is probably going to be faster, so consider that w/r/t tmux-copycat.
I also really like tmux-resurrect, which saves and restores sessions for you; tmux-continuum, which runs those automatically; and the tmux-zen plugin for Oh-My-Fish:
https://github.com/tmux-plugins/tmux-resurrect
https://github.com/tmux-plugins/tmux-continuum
https://github.com/sagebind/tmux-zen/tree/master
It's pretty easy to get a very nice tmux setup going!
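As a concrete example, a minimal `~/.tmux.conf` using TPM (the tmux plugin manager) to load the plugins above might look like this sketch; the `~/.tmux/plugins/tpm` path is TPM's documented default clone location:

```shell
# Declare plugins for TPM to manage
set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-resurrect'
set -g @plugin 'tmux-plugins/tmux-continuum'
set -g @continuum-restore 'on'   # auto-restore the last saved environment

# Initialize TPM (keep this as the last line of .tmux.conf)
run '~/.tmux/plugins/tpm/tpm'
```

After reloading the config, prefix + I fetches and installs the declared plugins.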
https://github.com/laktak/extrakto
(similar to copycat) and
Steps:
1. have two panes open
2. In pane 0, put your cursor on a line (or select multiple lines) of code
3. :VtrSendLinesToRunner
4. In pane 1, the lines are executed
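Under the hood, this style of plugin essentially boils down to `tmux send-keys` aimed at the runner pane. A hand-rolled sketch of the same round trip, using a throwaway session name (`demo`) of my own choosing:

```shell
tmux new-session -d -s demo                 # detached scratch session
tmux send-keys -t demo "echo hi-from-send-keys" Enter
sleep 1                                     # give the shell time to run it
out="$(tmux capture-pane -pt demo)"         # read the pane contents back
tmux kill-session -t demo
printf '%s\n' "$out"
```

In real use you'd target the neighbouring pane (e.g. `-t 1`) instead of a scratch session.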
I used tmux for everything many years ago, but when I started the current job, I decided to try to do everything natively with Kitty instead. (IIRC one of my main reasons was that Kitty supports macOS' "secure input" features.)
But tbh I never got around to making my Kitty setup as nice as my old tmux one. I think I may switch back soon. It sounds like I may want to set things up differently in light of these new features!
i expect that kitty allows doing this kind of thing too, kitty and wezterm both seem very programmable. the main thing they’re both missing compared to tmux is session persistence, but if you don’t need that, tmux doesn’t gain you a lot.
(I could be wrong about that)
(In case not obvious, current title is 'I use my terminal')
Plus one for pro-terminal posts. As a chromebooker I've found that all I need is terminal and browser. Switching to my Mac OS seems kinda maximalist and I rarely use much outside of the terminal and, you know, more browsers
I know I’m far from alone in having skipped your post entirely upon opening. Nothing personal, but I have yet to find a single post by anyone written in this style where the content was worth the effort of parsing non-existent capitalisation.
You go through the trouble of adding aids like syntax highlighting, lists, coloured titles, and even differentiated notes and timestamps. Presumably those are there to help the reader. But then you throw away a lot of readability by making everything lowercase.
I used to write like that when I was a teenager. I guess it's a subtle way of rebelling against "the system". But seeing adults do that, especially in a professional setting, is cringey.
However, for speed, I have recently abandoned capitalization and punctuation when interacting with LLMs, unless they are critical for clarity. I wonder if this is why many folks in the AI crowd write everything in lowercase.
I think many of us abandoned it when we went professional. Or abandoned it in those contexts but still do it in others. I don't do it on HN, clearly - but I do it almost everywhere else. It's much more natural to me to skip capitals.
I believe there was also a period in the transition to ubiquitous smartphones where it wasn't an option to turn off auto-caps, or maybe there just wasn't the fine-grained control of which auto-correct you use on mobile devices that there is now. I suspect that killed some all-lowercase habits. I think that's why I ended up with a "normal" style on HN where I use caps and normal punctuation (I don't usually use periods for sentences that terminate a paragraph outside of HN.)
For me it wasn't about being professional, it was just about learning to type. As my typing speed improved it just became second nature to capitalise where appropriate. In other words, I capitalise everywhere out of laziness.
if shells exposed a scrollback api with line refs and structural tagging, we could annotate paths, link to buffers, diff the last two runs, all without re-executing anything. that would cut through half the indirection in current workflows. tmux's regex jump is a hack but it hints at a deeper model. scrollback should be its own memory layer
There is an xterm command for writing the scrollback buffer to a file so in theory if you wanted a hack to enable it today you could use that + grep (or even trigger it with something xdotool if you wanted to automate it.)
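If the scrollback lives in tmux rather than bare xterm, `capture-pane` already works as a poor man's version of that memory layer today; a sketch (the `hist` session name and the `restore_tool` pattern are illustrative only):

```shell
tmux new-session -d -s hist                   # stand-in for your real session
tmux send-keys -t hist "echo restore_tool ran ok" Enter
sleep 1
# -p prints to stdout; -S - starts at the very beginning of the history
tmux capture-pane -p -S - -t hist > scrollback.txt
grep -n 'restore_tool' scrollback.txt         # search/diff without re-running
tmux kill-session -t hist
```

Once it's a plain file you can diff two dumps, annotate it, or open it in an editor, none of which re-executes anything.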
Several attempts have been made to do similar things in Unix, but there's a massive ecosystem problem: four decades of classic tooling has to be totally rewritten to get this kind of support. I don't see it happening without some kind of industry champion who can throw their weight around to force it on people.
If you want a comprehensive demo with even more features, I have one in this community video [1]
Anyone can email me if they're game: abner at terminal dot click
[0] https://terminal.click/posts/2025/04/the-wizard-and-his-shel...
[1] https://vimeo.com/1091637660 (6-minute mark)
> interacting with the terminal requires either a dedicated plugin or opening another nested terminal emulator
This is not true, you can run any terminal command in vim's command mode like :!git show HEAD, you can autocomplete it, pipe output where you need it etc without ever getting away from the file you're currently editing. You can also use % to substitute the current open file path in the command like :!git add %
Emacs and Vim are perfectly capable of taking arbitrary strings (which can be lines from the same terminal buffer that match a given regex) and putting them onto the command line of a terminal buffer. And more importantly, you can customize that whole process without writing C.
Huh, I install things at runtime all the time.
> # i am so sorry
> # see `search-regex.sh` for wtf this means
> # TODO: include shell variable names
> bind-key f copy-mode \; send-keys -X search-backward \
>   '(^|/|\<|[[:space:]"])((\.|\.\.)|[[:alnum:]~_"-]*)((/[][[:alnum:]_.#$%&+=@"-]+)+([/ "]|\.([][[:alnum:]_.#$%&+=@"-]+(:[0-9]+)?(:[0-9]+)?)|[][[:alnum:]_.#$%&+=@"-]+(:[0-9]+)(:[0-9]+)?)|(/[][[:alnum:]_.#$%&+=@"-]+){2,}([/ "]|\.([][[:alnum:]_.#$%&+=@"-]+(:[0-9]+)?(:[0-9]+)?)|[][[:alnum:]_.#$%&+=@"-]+(:[0-9]+)(:[0-9]+)?)?|(\.|\.\.)/([][[:alnum:]_.#$%&+=@"-]+(:[0-9]+)?(:[0-9]+)?))'
This is pure quantum flux
Trying to bootstrap a Python setup "that just works™" is also a common struggle, e.g. in the Emacs world. Python tools are just a bunch of contraptions built with fiddlesticks and bullcrap. Anyone who tells you differently either has already learned how to navigate that confusing world and totally forgot "the beginner's journey"; or is too new and has not tussled with its tooling just yet; or simply doesn't know any better.
- in Emacs' Eshell, one can pipe results in and out of buffers, e.g., you can run a command, then pipe it to grep/ripgrep, then pipe the results into a buffer (without any intermediate files).
- Conversely, you can read a buffer content and pipe it into a command.
- Or you can do simple calculations, you just need to use prefix notation e.g. `* 3 42` or `(* 2 pi)`.
- You can run regular emacs functions, like `dired .` or `magit-status`, `find-file ~/foo` or `(calendar)`.
- Or you can use Emacs vars and functions directly, e.g., `cd $org-directory`, or `cd (projectile-project-root)`, or `mkdir (format-time-string "backup-%Y%m%d")`
- You can absolutely delegate some (potentially long running) tasks to run in an outside terminal. I wrote this command eshell-send-detached-input-to-kitty¹, it uses Kitty Terminal's socket control feature.
There are integrations that you can only dream about; one recent example is John Wiegley (creator of Eshell) and Karthik (author of the gptel package) experimenting with piping things in and out of LLMs.
Sure, the reverse is also possible - you can run Emacs from a terminal - but terminaling from inside Emacs is way cooler.
___
¹ https://github.com/agzam/.doom.d/blob/main/modules/custom/sh...
A pro tip, if I may: never argue about programmability against a Lisp system. The argument is unlikely to ever end up in your favor, because Lisp was designed from the ground up to be the ultimate programmable medium. The essence of Lisp lies in its incredible simplicity and flexibility - code is data, data is code, and everything can be manipulated programmatically.
The development experience is unmatched. You don't even need to save things - you can just eval them in a scratch buffer and see instant results. The REPL is unbelievably good, making programming feel like playing a video game where you get immediate feedback for every action. This interactivity transforms coding from a write-lint-link-compile-run cycle into a fluid, exploratory conversation with your system.
Sure, some things can be done equally well in shells, but not as nicely and elegantly as in a Lisp environment. The uniformity of S-expressions, the power of macros, and the ability to redefine anything at runtime give you a level of control that other systems struggle to match. Yours is a decades-old argument, yet Lisps are still prolific today, powering everything from Emacs configurations to modern web applications. The longevity speaks for itself - good ideas endure.
I love Clojure. I use it all the time. Sometimes through babashka, sometimes via nbb. Yeah, I would probably consider trading my dog if that gets me proper Clojure-like maps and other immutable structures in Elisp. Still, this would be far down in my wishlist for Emacs improvements. My biggest gripe is the improbability of removing the GIL - I would love to have proper concurrency, but that's not the Lisp machinery of Emacs, that's all C core.
In emacs, you can do this, because your email client is a package written in Lisp and running in the globally shared context of the emacs interpreter, so you can directly query its state. And if you don't know the names of the variables to query, you can trivially jump into the source code and start figuring it out. Furthermore, there is a 99.9% chance the state is implemented using well-known emacs primitives like buffers, whereas for general programs launched from the shell there is no standard set of UI and text manipulation libraries.
There isn't functionality in bash to magically jump to the C++ source code of Thunderbird or read variables from its internal state, which is the key difference.
That said, yeah, it certainly doesn’t Just Work out of the box the way something like vscode does.
Edit "file.txt" in server "FILES" if it exists, become server "FILES" otherwise:
gvim --servername FILES --remote-silent file.txt
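A few related client-server invocations, for context (these flags require a Vim built with +clientserver; a sketch, not an exhaustive list):

```shell
# List the Vim servers currently running
vim --serverlist

# Open another file in the existing FILES server
gvim --servername FILES --remote-silent other.txt

# Send keystrokes to the FILES server, e.g. write all buffers
gvim --servername FILES --remote-send '<Esc>:wa<CR>'
```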
Welcome to ANSI escape sequences. The left arrow key, for example, is really just <Esc>[D . You can see this stuff for yourself by running `cat >/dev/null` (cat isn't actually doing anything useful here, it's just allowing you to type without the shell getting in the way and actually, you know, making an attempt to interpret the keys you press). Press backspace to figure out which bytes are represented by 1 and which by 2 characters. 2-character sequences where the first is a caret (^) can be produced by ctrl + the second character, and correspond to the second character in ASCII minus 64. Hence ^A is byte 0x01. The escape key sends ASCII ESC, number 27, and is written ^[ .
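You can also inspect these bytes without fighting the terminal at all by piping them through `od`:

```shell
# A left-arrow press arrives as the three bytes ESC [ D.
# od -c dumps them one by one; ESC shows up as its octal value, 033.
printf '\033[D' | od -c

# Ctrl+A is a single byte: 'A' is 65 in ASCII, and 65 - 64 = 1, so 0x01.
printf '\001' | od -An -tx1
```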
https://en.wikipedia.org/wiki/ANSI_escape_code
Software distinguishes between a bare Escape key press and an ANSI escape sequence by waiting a couple of milliseconds and seeing if more bytes arrive. The number of milliseconds is often configurable, with e.g. the 'escape-time' config key in tmux and the 'ttimeoutlen' setting in vim.
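For example (the value is in milliseconds; 10 ms is a common choice for local sessions, where a real escape sequence's bytes arrive nearly instantly):

```conf
# ~/.tmux.conf - wait at most 10 ms for the rest of an escape sequence
set -s escape-time 10
```

The Vim-side equivalent is `set ttimeoutlen=10` in your vimrc.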
And for package persistence I have an extra configuration to use Brew. It all works beautifully and is very fast - no noticeable latency on a capable VM/VPS, etc.:
https://docs.linuxserver.io/images/docker-kasm/
https://gist.github.com/jgbrwn/3787259bc7d00fce3fdd4b5bd579c...
https://gist.github.com/jgbrwn/28645fcf4ac5a4176f715a6f9b170...
That's why I build off of the programs my Shell Bling Ubuntu [1] installs. Not everything I use in an average day is in there, but everything I find myself shocked not to have on a new Ubuntu VM out of the box is.
Well, they aren't really, and given the fundamentally flawed foundation of all text as the lowest common denominator, that power is not exposed.
For example, your regex path-detection logic is invariably flawed; and while there are ways to mark paths in the output with extra escape codes so you could just click on them to open in whatever editor, that requires the whole processing pipeline to stray from plain text (and might still not allow you to search for files only by type).
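For the curious, the "extra codes" in question are OSC 8 terminal hyperlinks; a minimal sketch of marking a path as clickable:

```shell
# Wrap a path in an OSC 8 hyperlink: ESC ] 8 ;; URI ST <text> ESC ] 8 ;; ST.
# Supporting terminals render "/etc/hosts" as a clickable link; others
# simply print the plain text, so the output degrades gracefully.
printf '\033]8;;file:///etc/hosts\033\\%s\033]8;;\033\\\n' /etc/hosts
```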
- one top-level tmux
- with a window for each workspace
- running a nested tmux session, one per workspace
- for any codebase that can be indexed by cscope(1), I run cscope on window #0 of each nested tmux session, with
- CSCOPE_EDITOR set to a script that launches $EDITOR in a new window in the same session, titled after the basename of the selected file
- that CSCOPE_EDITOR script exits as soon as it runs the `tmux new-window` command so that cscope gets focus back
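A minimal sketch of such a CSCOPE_EDITOR script (the exact calling convention - cscope passing `+<line> <file>` - and the names here are assumptions, not the commenter's actual script):

```shell
#!/bin/sh
# Hypothetical CSCOPE_EDITOR helper: cscope invokes it roughly as
#   $CSCOPE_EDITOR +<line> <file>
# Open the file in a new tmux window titled after its basename, then
# exit immediately so cscope gets focus back.
line="$1"
file="$2"
tmux new-window -n "$(basename -- "$file")" "${EDITOR:-vi} $line '$file'"
```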
This lets me use tmux+cscope as a poor man's mouse-less IDE.

bind-key -n M-F1 select-window -t :0
bind-key -n M-F2 select-window -t :1
bind-key -n M-F3 select-window -t :2
bind-key -n M-F4 select-window -t :3
bind-key -n M-F5 select-window -t :4
bind-key -n M-F6 select-window -t :5
bind-key -n M-F7 select-window -t :6
bind-key -n M-F8 select-window -t :7
bind-key -n M-F9 select-window -t :8
bind-key -n M-F10 select-window -t :9
bind-key -n M-F11 select-window -t :10
bind-key -n M-F12 select-window -t :11
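Those twelve bindings can also be generated rather than hand-written; for instance, paste the output of this loop into ~/.tmux.conf:

```shell
# Emit the M-F1..M-F12 window-selection bindings (windows are
# zero-indexed, so M-F1 maps to window :0, M-F12 to :11).
for i in $(seq 1 12); do
  printf 'bind-key -n M-F%d select-window -t :%d\n' "$i" "$((i - 1))"
done
```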
So relieved Claude Code and Aider exist now - I almost bought into the Cursor hype
rgf is a function
rgf ()
{
( unset rg;
function rg_normalize ()
{
sed -E 's#-(\x1b\[[0-9;]*m)*([0-9]+)(\x1b\[[0-9;]*m)*-#\1:\2:\3#g'
};
rg --color=always "${@:-${RG_LAST_ARGS[@]}}" --with-filename --line-number | rg_normalize | fzf --ansi --multi > /dev/shm/.tmp.fzf );
cmd_fuzzy_editor $(cat /dev/shm/.tmp.fzf | awk -F':' '{file=$1; line=$2;} {print file, line}') > /dev/shm/.my_editor.source;
source /dev/shm/.my_editor.source
}
type cmd_fuzzy_editor
cmd_fuzzy_editor is a function
cmd_fuzzy_editor ()
{
( set -eu;
{
local file="$1";
local line="$2"
} 2> /dev/null;
export EDITOR="${EDITOR:-vim}";
export FUZZY_EDITOR="${FUZZY_EDITOR:-${EDITOR}}";
case "$FUZZY_EDITOR" in
"vi" | "vim" | "gvim" | "nvim" | "neovim" | "spacevim")
echo "$FUZZY_EDITOR" "${file}" "+${line}"
;;
"emacs")
echo "$FUZZY_EDITOR" --no-splash "+${line}" "${file}"
;;
"nano")
echo "$FUZZY_EDITOR" "+${line}" "${file}"
;;
"nedit")
echo "$FUZZY_EDITOR" "${file}" -line "${line}"
;;
"sublime_text")
echo "$FUZZY_EDITOR" --new-window "${file}:${line}"
;;
"code" | "vscode")
echo "$FUZZY_EDITOR" --goto "${file}:${line}"
;;
"notepadqq" | "nqq")
echo "$FUZZY_EDITOR" -line "${line}" "${file}"
;;
"kate" | "leafpad" | "mousepad" | "gedit")
echo "$FUZZY_EDITOR" "${file}"
;;
*)
echo 'No FUZZY_EDITOR found!' 1>&2
;;
esac )
}
It's apparently black magic, according to team members. But it's extremely productive to be able to develop in a terminal. Not only is it extremely productive, but it's also extremely extensible -- you can export your workflow in a script and use that same workflow in CI/CD. Running the same thing in the cloud and on your local workstation is... beyond productive.
tl;dr: learn how to use vim (or emacs, you heathen!) and you will get waaaaay better at writing software. You'll start to miss having pipes when you're forced to use GUI apps.
I’m guessing some people already have these capabilities integrated into terminal workflows and I’d love to see a demo/setup.
I use aider + a neovim FITM plugin (with qwen coder). It's nice because it can help not only with code problems but with the often more frustrating tool problems.
I am on CC Pro, but I am thinking about getting the $100 or $200 subscription.
Indeed, this is one of the strongest arguments for tmux versus other approaches to this problem. It can run on the server you're remoted into and still do its job. https://hiandrewquinn.github.io/til-site/posts/tmux-is-worse...
You can fall down an unexpectedly vast rabbit hole with its customisability, however, since multiplexers see so much of what's going on in a very general way (by design).
But for whatever reason, I'd rather vimdiff when I have to resolve conflicts on rebase.
What I really hate about VSCode currently is how huge pylance is though, and how it's all up in my grill when I am trying to manually code.