He was writing some niche OS system and the blog was a collection of posts about that system.
I also remember from the discussion thread that the developer had passed away.
EDIT: Looks like the parent post entered the second chance pool (https://news.ycombinator.com/item?id=26998308). Not only did all the timestamps get rewritten, but apparently I can now edit this day-old comment :) Interestingly, it does show the correct timestamp when editing. Off-topic, but I thought it was interesting behavior worth mentioning.
I think 10x programmers are indeed usually smarter than ordinary programmers. They don't write 10x the amount of code, or code of 10x the quality, but work in COMPLETELY DIFFERENT DOMAINS -- OS, compilers, emulators, etc. They simply solve tougher problems.
And they also have the capacity to hold a lot of code in their heads. This is amazing to me. I usually lose myself after maybe a couple of dozen lines, or a few function hops. Admittedly, my program uses more libraries, so TBH the mental trace just stops whenever it enters a 2000-line library function. But my project is more trivial than theirs.
It's a bit disturbing to me that this is being voiced as a feat of some kind. Everybody should be doing this and most people are capable of it. I'm sure you are as well, despite your own pessimism. If you're not doing this, there's been some failing among your mentors. You can develop this skill with intent and practice and you really really really should.
That's what the programming part of our job is: not pasting in snippets from Stack Overflow, not asking Copilot for something that you hope is okay, not vomiting code until it gets a big green OK from your tooling, not gluing together library functions whose implementation and constraints we don't understand. You should be developing a clear mental model of your code, what supporting utilities it references, how those utilities work and what constraints they impose, as well as actively considering edge cases that need specific attention and the constraints your code projects outward to its callers.
You won't be able to do this consistently during your first couple years of working. And that's fine. But something's gone deeply wrong with your craft if you haven't gotten a handle on it once you're more professionally mature. But rather than feeling incapable of doing it yourself and in awe of those who do, you just need to commit yourself to practicing it as much as you can until it becomes second nature.
I agree with you that everyone should be doing this, but I disagree that most people are capable of it.
I have two points of argument:
1) People get tired when running code in their heads, so they can't do it for very long. What I observe is that 10x programmers can do this consistently, even when they are tired. John Carmack is a very good example. I won't say it's in the genes, but I suspect most people don't have a huge room to improve -- or, if theoretically they do, realistically they don't.
2) Even when I can run code in my head -- when I'm in good condition -- I can only run MY part of the code; I simply cannot run any library code. I'm working on a simple hex editor with C++/SDL2/ImGui, and running SDL2 or ImGui code in my head is way above my capability. Last night I kept running a piece of code in my head and couldn't find ANY place it could go wrong, until I figured it must be the scan code handling in ImGui, so I switched to a recent version (mine was from 2020) and the issue went away. What I observe about 10x programmers is that they usually work on very low-level problems, so they have the benefit of not relying on external libraries. David Cutler is a good example here.
That is not the defining characteristic of "10x developers" like Carmack. He's of an especially rarefied sort and not really someone to look at as a role model in the first place, because his experience and background are so idiosyncratic.
What we're talking about here is just an essential characteristic of "competent developers" and is among the array of characteristics that distinguish early-career developers from more genuinely senior ones. Your growth in this skill is very highly correlated with the value you'll provide as a developer (and therefore the roles and rates you can pursue).
Next time you encounter a frustration like you found in (2), don't just blindly update ImGui and hope for the best. If you think it might be the problem (a great insight, because you were modeling it!), open your local version of ImGui and see if you can find where it's causing the problem. You're now going to be very invested in reading ImGui closely (to find the issue), and you're going to learn so much that will help you better model ImGui in the future, as well as better model other utilities.
After you've put some effort into that, and hopefully even found the issue (but perhaps not), then go to GitHub and try to use changelogs, issues, PRs, etc. to see if you can find some specific commits that might be related to the issue. Analyze those commits yourself and see what you can learn from them, improving your understanding of utilities like ImGui, how they're implemented, and where bugs like the one you encountered might lurk, so you can track them down more quickly in the future. Only then should you consider updating your local version.
And while you might be reading this and thinking "who's got time for that?!", it doesn't really take that much time once you get the hang of it through practice, and every time you do it, you're making huge investments into your own proficiency and value (-> future roles and rates). Don't skimp, just do it!
(As for #1 - "getting tired" -- aspiring athletes, whether professional or amateur, get tired during training too. And that's okay, because again, it's all a process of development and growth. To train, you push as far through the tiredness as you can, and then make the compromises you need to, always trying to push yourself a little bit farther before you do. By doing this, that wall of tiredness moves further and further out and you become that much more capable and productive.)
Like I said, I agree with you in principle. Sometimes I feel guilty not finding the real issue and just glossing over it. Wouldn't the 10x programmers want to figure out the root cause of this kind of bug? But realistically -- this probably sounds defeatist -- it burns me out quickly when I do that. It might just be that I'm genetically prone to impatience and easy frustration, or that I'm not mentally well, or whatever. I know it sounds defeatist, so I would prefer not to say these things.
But again, thank you for your reply; I will keep trying to push a bit further every time.
Basically try to take advantage of when verification is easier than what you're asking AI to do for you.
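A toy sketch of that asymmetry (names and problem choice are mine, purely illustrative): checking a factorization is trivial even when producing one is the part you'd delegate.

```python
def factorize(n):
    """Stand-in for the part you delegate (e.g. to an AI): trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def verify(n, factors):
    """The part you keep: multiplying back is cheap and easy to trust."""
    product = 1
    for f in factors:
        product *= f
    return product == n

assert verify(360, factorize(360))
```

The same shape applies to generated code generally: if you can state a cheap check (round-trip, invariant, reference implementation), you can delegate the expensive half with much less risk.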
I can't complain about the code quality as it's decent, but I just want to get better at reading the source code, I guess.
If I'm writing 100 lines of greenfield code then yeah, anything is easy then.
> what supporting utilities it references, how those utilities work and what constraints they impose
The responsibility isn't to memorize and model every function called up and down your whole stack. Often you don't even have full insight into all of it, and of course you couldn't hold it all in your head even if you wanted to. But you don't need to.
The responsibility is simply to thoroughly understand how each function you call works insofar as you're using it.
You should be confident, not hopeful, that the state you've arranged for that function call is a valid state for it; you should have an informed, not incurious, sense of its general behavior characteristics (fast or slow, high or low resource demands, thread safety, etc.); and you should be able to make informed predictions about what its output should look like given the state you pass in.
Its actual implementation will often be opaque, or at least opaque at some depth, but between the function's documentation, any access to its source, and your own insight into how a function like that would likely or necessarily be implemented, you can and should be able to fully model it for the purposes of your own invocation.
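A minimal sketch of what "modeling a call insofar as you're using it" can look like in practice, using the stdlib's bisect as the library function (the wrapper name is mine):

```python
import bisect

def find_rank(sorted_scores, score):
    """Count of scores strictly less than `score`."""
    # Precondition bisect_left relies on and that we arranged ourselves:
    # the list must already be sorted. Confident, not hopeful.
    assert all(a <= b for a, b in zip(sorted_scores, sorted_scores[1:]))
    # Known behavior characteristics: O(log n), no mutation, no allocation.
    # Known output shape: the leftmost insertion point, i.e. the count of
    # elements strictly less than `score`.
    return bisect.bisect_left(sorted_scores, score)

assert find_rank([10, 20, 30], 25) == 2
```

You never need bisect's source in your head, just its contract at your call site: precondition, cost, and output shape.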
If you work close to the metal, you only have 1-2 levels of abstraction. If you need to call a library which calls a library which calls a VM which calls some syscall, you simply don't have the brainpower to trace down all of those -- and you aren't allowed the luxury of tracing them down anyway, because the ticket is wanted ASAP.
Getting the right job is the only thing needed to take you away from this hell.
I'm sure you're generalizing to make a point, but I can assure you this is often not the case. I know several PhDs who have a very hard time dealing with real-world problems that any practical engineer would catch. There is a lot to be said for theory, but there is also a very hard limit.
I don't know the details, but this could be exactly the proof, not the counter-proof. They do different things, and they may not perform ordinary jobs very well. What if we asked John Carmack to play with CSS, moving pixels around until the UI looks "perfect"?
Not necessarily to disagree with your overall point. But having a PhD is maybe not always as significant as we think it is. Getting a PhD (usually) means extreme specialization in a (possibly highly niche) area, and may well leave someone without a lot of basic skills you might expect them to have.
(1) Many of these "defiant" people merely didn't trust credentials -- they were perfectly fine with authority who earned deference by virtue of proving they really do know what they are doing, and
(2) That the psychology/psychiatry profession in general, consisting of people who have their Master's degrees and PhDs, has to "suck up" to a lot of credentialed authority, without question, to get those degrees -- and thus it's only natural for its members to expect everyone to unconditionally respect credentials!
(For the record, I have a PhD, but it's in pure math, which is possibly simultaneously both the least practical and most practical thing you could possibly learn -- but as such, I'm tangential to engineering and physics -- and I'm pretty sure that all three of these fields have a certain "fine, you have a credential, but can you really walk the walk?" element to them.)
While I haven't really been in forums debating the merits and perils of pair programming, I cannot help but be amused by this essay, which pretty much confirms the initial thought I had about pair programming!
I have been very interested in learning all kinds of details from the Archmages, so I gathered as much information as possible. From what I observe, great minds do great things.
I'm sure that was true for everyone back in the punchcard days. It would enforce a kind of rigor that I can blissfully ignore.
edit: I see the exact same story in the linked thread, so clearly a lot of Russians are very proud of that skill
Quite simply, when you had to walk across campus or at least to a different room to submit your card deck, wait (perhaps hours) for the results (printed on fan-fold paper, that again you had to go to a different building or room to pick up) only to find your program didn't compile due to a syntax error or didn't run due to a careless bug, you learned to "desk check" your code, run the program in your head, and be as sure as you could be that there were no errors.
Even when we got connected terminals, it could still take hours for your compile job to work its way through the queue of pending jobs, because it was in a development region that only got resources when the production queue was clear. You didn't use the compiler as a syntax checker in those days.
That all started to change when we got PCs or workstations, or at least good interactive multiuser development environments, and a "code-compile" loop or REPL became part of the standard toolkit.
The hard part for me is then translating his ideas into vectorized numpy for speed, but at least I get the right answer to check against.
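That workflow -- keep the straightforward version as an oracle and check the fast rewrite against it -- can be sketched like this (the function and problem are illustrative, not from the parent comment):

```python
import numpy as np

def pairwise_sq_diff_loop(xs):
    # Plain-loop reference: easy to get right in your head, but slow.
    n = len(xs)
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = (xs[i] - xs[j]) ** 2
    return out

def pairwise_sq_diff_vec(xs):
    # Vectorized rewrite via broadcasting: fast, but easier to get wrong.
    xs = np.asarray(xs)
    return (xs[:, None] - xs[None, :]) ** 2

xs = np.array([1.0, 3.0, 7.0])
assert np.allclose(pairwise_sq_diff_loop(xs), pairwise_sq_diff_vec(xs))
```

The loop version is the "right answer to check against"; once the allclose check passes on a few inputs, you keep only the vectorized one for speed.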
- Think through and write on paper in pseudo code first;
- Run written code in head, or on paper if they don't have the capacity, a couple of times before pressing that BUILD menu item;
- Refrain from using libraries if a custom, better solution is possible;
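The second habit -- running written code in your head or on paper before building -- looks something like this (my own tiny example, with the hand trace recorded as comments):

```python
def gcd(a, b):
    # Euclid's algorithm: replace (a, b) with (b, a mod b) until b is 0.
    while b:
        a, b = b, a % b
    return a

# Desk check for gcd(12, 18), worked out before ever running it:
#   a=12, b=18  ->  a=18, b=12   (12 % 18 == 12)
#   a=18, b=12  ->  a=12, b=6
#   a=12, b=6   ->  a=6,  b=0    -> return 6
assert gcd(12, 18) == 6
```

If the hand trace and the machine disagree, your mental model was wrong somewhere, and that's exactly the thing worth finding.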
But then I think it probably doesn't make a lot of sense if you work as a frontend dev pushing out JS, or as a data engineer writing Python code which calls some Scala code which runs in a JVM, which is highly complicated -- because the whole industry is "AGILE" and the chain is way too long. You need to get into the right job to nurture such a mindset. The old timers got lucky. They started with 6502 bare metal.
That's why I'm pushing myself to get out of Data Engineering, to do something lower level, until I get deep into the basement. I probably won't succeed here but it's fun to try.
I disagree about not using libraries. Libraries are almost always going to be better tested, better reviewed, and handle more edge cases than anything you come up with in-house. Of course if no suitable library exists you have no choice.
It's good to hear about the literate programming thing. I sometimes do that on paper when I need to clear my mind. Basically code in English, but with C-type {} as scope.
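For anyone who hasn't tried it, the notation described above might look something like this (a made-up sketch, not any real program):

```
open the input file {
    if it does not exist { report the error and stop }
    read the header {
        check the magic bytes
        if they don't match { reject the file }
    }
}
for each record in the body {
    validate it { skip and log anything malformed }
    append it to the in-memory table
}
```

It's cheap to write, cheap to rearrange, and the braces keep the scoping honest before any real syntax gets involved.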
It Can Be Done (2003) - https://news.ycombinator.com/item?id=39342143 - Feb 2024 (137 comments)
It Can Be Done (2003) - https://news.ycombinator.com/item?id=18415231 - Nov 2018 (18 comments)
(re the timestamps see https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... - only the front page and /item pages show the re-upped time. Good point about the edit window, I missed that twist!)
https://news.ycombinator.com/item?id=32395589
He also passed away relatively recently.
https://melsloop.com/docs/the-story-of-mel/pages/mel-kaye-cv
I cannot help but reflect on how my approach was a "hybrid" between both pencil-and-paper and modern-cli-and-ide -- we were coming out of the age of really simple home computers, but not yet in the age of super fast computers with large monitors.
This was all code in the heart of an OS - thread switching, interrupt dispatch, synchronization mechanisms - things where even the most rare and exotic error might actually occur and cause a disaster.
But some hazard/cost computation is needed. There was an article in the '90s about a team writing software for a robotic arm for space work (maybe on the Shuttle) -- they were hyper-careful. I figured out that if all of Windows had been written at that rate of code output, it would have taken 100 years to finish and cost several trillion dollars. Not long after that, the space arm suffered some kind of software failure, in space. It wasn't for want of effort by the dev team.
Remember that many errors arise from things outside the code you wrote/studied -- some other code corrupted something, buggy behavior in hardware, and so forth.
As for coding by hand, simulating by hand, flow graphing by hand, I don't think those were all that unusual, just one person took it to extremes and wrote about it.
It looks as if some of them might have been re-drawn for:
https://www.multicians.org/nss.html
and this is the source code in question?
I had totally forgotten about this. I still have the PASCAL book.
So I wrote BASIC programs in the back of my school notebooks and typed in some of them when I got to the computer.
https://www.history.com/news/in-1950-alan-turing-created-a-c...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
It can also be Mel
(Might have a lot of details wrong)
I've got a pile of notebooks full of hand assembled Z80 code that my dad wrote in pencil for the Exidy Sorcerer, which he got in 1979.
It was easier to do that and reason about your program on paper before running it on the actual computer.
The blog (in old-school design) had a lot of posts and pictures of that era and of the people involved in building that OS.