Back in the 70s, Hal Finney was writing a BASIC interpreter to fit in 2K of ROM on the Mattel Intellivision system. This meant every byte was precious. To report a syntax error, he shortened the message for all errors to:
EH?
I still laugh about that. He was quite proud of it. I feel like that would also make a good response from the text parser in an old-school interactive fiction game.
Slightly related, but I remember some older variants of BASIC using "?" to represent the PRINT statement - though I think it was less about memory and more just to save time for the programmer typing in the REPL.
All this being on a C64 of course, but I suspect most versions of Bill Gates's BASIC did something similar.
Each command could be typed in two ways: the full name, or the first two letters, with the second capitalized. Plus a few exceptions like "?" turning into the PRINT token ($99, nowhere near the PETSCII value for ?) and π becoming $FF.
The tokens were expanded into full text strings when you LISTed the program, which was always amusing if a very dense multi-statement line expanded to more than the 80 characters the C64's tokenizer routine could handle: you'd have to go back and replace some or all commands with the short form before you could edit it.
The story goes that ed was designed for running over a slow remote connection where output was printed on paper, and the keyboard required very firm presses to generate a signal. Whether this is true or folklore, it would explain a lot.
GNU Ed actually has optional error messages for humans, because why not.
"Note the consistent user interface and error reportage. Ed is generous enough to flag errors, yet prudent enough not to overwhelm the novice with verbosity."
that doesn't make complete sense, in unixland it's old-timers who understand the beauty of silence and brevity, while novices scan the screen/page around the new prompt for evidence that something happened
When each line of code was its own punch card, having a { stand alone on a line was somewhere between stupid and pointless. It also explains why Lisps were so hated for so long.
By the same token, today you can tell which projects treat an IDE as the only way to work on them by their terrible documentation. It is, after all, not the end of the world to have to read a small function when you can just tab to see it. Which is true enough until you end up with those small functions calling other small functions and you're 30 deep in a stack trying to figure out where the option you passed at the top went.
It made the option to print file content with line numbers very useful (personally I only used very dumb terminals instead of a physical teletype, but the experience is a bit similar, just with shorter scrollback :D)
And taking a printed listing before heading home with the terminal.
Not only a story; some of these are running, resurrected, today.
(On certain architectures, you could use 1-byte soft-interrupt opcodes to call the most used subroutine, but 8080 lacked it IIRC; on 6502 you could theoretically use BRK for that. But likely you had other uses for it than printing error diagnostics.)
https://en.wikipedia.org/wiki/JOSS#/media/File:JOSS_Session....
https://en.wikipedia.org/wiki/JOSS
They had about 5KB of memory, but compared to the Intellivision the machine weighed about 5,000lbs.
There was once a contest between Caltech and MIT. Each was to write a program to play Gomoku, and they'd play against each other. Hal wrote a Gomoku-playing program in a weekend, and it trashed MIT's program.
It was never dull with Hal around.
Add another seven Easter eggs, and people could love that byte.
RIP.
If the original setting had been named something bool-y like `help.autocorrect_enabled`, then the request to accept an int (deciseconds) would've made no sense. Another setting `help.autocorrect_accept_after_dsec` would've been required. And `dsec` is so oddball that anyone who uses it would've had to look it up.
I insist on this all the time in code reviews. Variables must have units in their names if there's any ambiguity. For example, `int timeout` becomes `int timeout_msec`.
This is 100x more important when naming settings, because they're part of your public interface and you can't ever change them.
Same here. I'm still torn when this gets pushed into the type system, but my general rule of thumb in C++ context is:
void FooBar(std::chrono::milliseconds timeout);
is OK, because that's a function signature and you'll see the type when you're looking at it, but with variables, `timeout` is not OK, as 99% of the time you'll see it used like:
auto timeout = gl_timeout; // or GetTimeoutFromSomewhere()
FooBar(timeout);
Common use of `auto` in C++ makes it a PITA to trace down the exact type when it matters. (Yes, I use an IDE or a language-server-enabled editor when working with C++, and no, I don't have time to stop every 5 seconds to hover my mouse over random symbols to reveal their types.)
// assuming <chrono> and <thread> are included
using namespace std::chrono_literals;
std::this_thread::sleep_for(10ms); // sleep for 10 milliseconds
std::this_thread::sleep_for(1s); // sleep for one second
std::this_thread::sleep_for(50); // does not compile: the unit is required by the type system
That's such a cool way to do it: instead of forcing you to specify the exact unit in the signature (milliseconds or seconds), you just say that it's a time duration of some kind, and let the user of the API pick the unit. Very neat!
someMethodDealingWithTime(Duration.ofMillis(10));
someMethodDealingWithTime(Duration.ofSeconds(1));
someMethodDealingWithTime(50); // does not compile
Since these often come from config, I also have a method parseDuration which accepts a variety of simple but unambiguous string formats for these, like "10ms", "1s", "2h30m", "1m100us", "0", "inf", etc. So in config we can write:
galactus.requestTimeout=30s
No need to bake the unit into the name, but also less possibility of error.
I did that too with parsers for configuration files; my rule of thumb is that the unit always has to be visible wherever a numeric parameter occurs: in the type, in the name, or in the value. Like e.g.:
// in config file:
{ ..., "timeout": "10 seconds", ... }
// in parsing code:
auto ParseTimeout(const std::string&) -> Expected<std::chrono::milliseconds>;
// in a hypothetical intermediary if, for some reason, we need to use a standard numeric type:
int timeoutMsec = ....;
Wrt. string formats, I usually allowed multiple variants for a given time unit, so e.g. all these were valid and equivalent values: "2h", "2 hour", "2 hours". I'm still not convinced it was the best idea, but the Ops team appreciated it. (I didn't allow mixing time units like "2h30m" in your example, so as to simplify parsing into a single "read double, read rest as string key into a lookup table" pass, but I'll think about allowing it the next time I'm in such a situation. Are there any well-known pros/cons to this?)
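For what it's worth, mixing units doesn't force a second pass: the same "read double, read unit, look it up" step can simply run in a loop until the string is exhausted. A rough sketch, where the ParseDuration name, the unit table, and the choice of milliseconds as the result type are all made up here, not taken from git or any library mentioned in this thread:
#include <cctype>
#include <chrono>
#include <istream>
#include <map>
#include <optional>
#include <sstream>
#include <string>
// Parses strings like "90s", "2h30m", "2 hours", "1m 30s" into one duration.
// Returns std::nullopt for anything it does not understand.
std::optional<std::chrono::milliseconds> ParseDuration(const std::string& text) {
    static const std::map<std::string, double> unit_to_ms = {
        {"ms", 1.0},
        {"s", 1000.0}, {"sec", 1000.0}, {"second", 1000.0}, {"seconds", 1000.0},
        {"m", 60000.0}, {"min", 60000.0}, {"minute", 60000.0}, {"minutes", 60000.0},
        {"h", 3600000.0}, {"hour", 3600000.0}, {"hours", 3600000.0},
    };
    if (text.empty()) return std::nullopt;
    std::istringstream in(text);
    double total_ms = 0.0;
    double value = 0.0;
    while (in >> value) {                  // read the numeric part
        in >> std::ws;                     // allow "2 hours" as well as "2h"
        std::string unit;
        while (std::isalpha(in.peek())) unit.push_back(static_cast<char>(in.get()));
        auto it = unit_to_ms.find(unit);   // look up the unit part
        if (it == unit_to_ms.end()) return std::nullopt;
        total_ms += value * it->second;
    }
    if (!in.eof()) return std::nullopt;    // trailing garbage
    return std::chrono::milliseconds(static_cast<long long>(total_ms));
}
With this, ParseDuration("90s") and ParseDuration("1m 30s") come out identical, which sidesteps most of the normalization question discussed below.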
https://en.wikipedia.org/wiki/ISO_8601#Durations
One place I have run into confusion is being able to express a given span of time in multiple ways. 1m30s and 90s are the same length, but are they the same thing? Should we always normalise? If we do, do we normalise upwards or downwards? This hasn't actually been a problem with time, but I do similar handling with dates, and it turns out we often want to preserve the distinction between 1y6m and 18m. But also sometimes don't. Fun times.
Don't know why I never noticed it before; thanks for posting this! That does give the idea more weight, so I'll consider mixed-unit durations next time I find myself coding up parsing durations in config files.
> Should we always normalise? If we do, do we normalise upwards or downwards?
I'd say normalize, but on the business side, down to regular units - e.g. the config or UI can keep its "1m30s" or "90s" or even "0.025h", but for processing, this gets casted to seconds or millis or whatever the base unit is. Now, this is easy when we're only reading, but if we need to e.g. modify or regenerate the config from current state, I'd be leaning towards keeping around the original format as well.
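In chrono terms that cast at the processing boundary is trivial; a minimal sketch, assuming C++14 or later and nothing beyond the standard library:
#include <chrono>
using namespace std::chrono_literals;
// However the config spells it ("1m30s", "90s", even "0.025h"), normalize at the
// boundary to one base unit, and let downstream code deal only with that:
constexpr auto a = std::chrono::duration_cast<std::chrono::milliseconds>(1min + 30s);
constexpr std::chrono::milliseconds b = 90s;   // lossless, so the conversion is implicit
static_assert(a == b, "both are exactly 90000 ms");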
> i do similar handling with dates, and it turns out we often want to preserve the distinction between 1y6m and 18m
Can you share specific examples where this matters, other than keeping the user input in the format it was supplied even when the underlying data values get regenerated from scratch?
You cannot compile FooBar(5000), so there is never confusion in C++ like there is in C. You have to be explicit: "FooBar(std::chrono::milliseconds(500))", or "FooBar(500ms)" if you have the literals enabled. And this will handle conversion if needed: you can always write FooBar(500ms) and it will work even if the actual parameter type is microseconds.
Similarly, your "auto" example will only compile if gl_timeout is a compatible type, so you don't have to worry about units at all when all your intervals are using std::chrono.
I feel like Go strikes a good balance here with the time.Duration type, which I use wherever I can (my _msec example came from C). Go doesn’t allow implicit conversion between types defined with a typedef, so your code ends up being very explicit about what’s going on.
JetBrains does a great thing where they show types for a lot of things as labels all the time instead of having to hover over all the things.
Then you end up with something where you can write "TimoutSec=60" as well as "TimeoutSec=1min" in the case of systemd :)
I'd argue they'd have been better off not putting the unit there. But yes, aside from that particular weirdness I fully agree.
But that's wrong too! If TimeoutSec is an integer, then don't accept "1min". If it's some sort of duration type, then don't call it TimeoutSec -- call it Timeout, and don't accept the value "60".
The best alternative I've found is to accept units in the values, "5 seconds" or "5s". Then just "1" is an incorrect value.
[1] https://www.joelonsoftware.com/2005/05/11/making-wrong-code-...
I don't want to have a type for an integer in seconds, a type for an integer in minutes, a type for an integer in days, and so forth.
Just like I don't want to have a type for a float that means width, and another type for a float that means height.
Putting the unit (as opposed to the data type) in the variable name is helpful, and is not the same as types.
For really complicated stuff like dates, sure make a type or a class. But for basic dimensional values, that's going way overboard.
This is not how a typical Duration type works.
https://pkg.go.dev/time#Duration
https://doc.rust-lang.org/nightly/core/time/struct.Duration....
Not everything should be a type.
If all you're doing is calculating the difference between two calls to time(), it can be much more straightforward to call something "elapsed_s" or "elapsed_ms" instead of going to all the trouble of a Duration type.
> For really complicated stuff like dates, sure make a type or a class.
Pick one. How are you separating days from dates? Not all days have the same number of seconds.
Personally I flag any such use of int in code reviews, and instead recommend using value classes to properly convey the unit (think Second(2) or Millisecond(2000)).
This of course depends on the language, its capabilities, and its norms.
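A minimal sketch of what such a value class can look like in a language without a built-in duration type (C++ here purely for illustration; the types and the SetTimeout API are made up, and in real C++ you'd just reach for std::chrono):
#include <cstdint>
// The unit lives in the type, not in the variable name.
struct Milliseconds {
    std::int64_t value;
    explicit constexpr Milliseconds(std::int64_t v) : value(v) {}
};
struct Seconds {
    std::int64_t value;
    explicit constexpr Seconds(std::int64_t v) : value(v) {}
    constexpr operator Milliseconds() const { return Milliseconds(value * 1000); } // widening only
};
void SetTimeout(Milliseconds timeout);   // hypothetical API
// SetTimeout(2000);                     // does not compile: a bare int carries no unit
// SetTimeout(Seconds(2));               // OK, arrives as Milliseconds(2000)
// SetTimeout(Milliseconds(2000));       // OK and explicit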
I suppose this is the "actual" problem with the git setting, in so far as there is an "actual" problem: the variable started out as a boolean, but then quietly turned into a timespan type without triggering warnings on user configs that got reinterpreted as an effect of that.
In fact, our French keyboards do have a "µ" key (as far as I remember, it was done so as to be able to easily write all SI prefixes) but using non-ASCII symbols is always a bit risky.
xmobar uses deciseconds in a similar, albeit more problematic place: to declare how often to refresh each section. Using deciseconds is fantastic if your goal is for the example configs to have numbers small enough that they clearly can't be milliseconds, resulting in people making the reasonable assumption that they must thus be seconds, and running their commands 10 times as often as they intended to. I've seen a number of accidental load spikes originating from this issue.
EDIT: 1) is the result of my misreading of the article, the "previous value" never existed in git.
1) Pushing a change that silently breaks things by reinterpreting a previous configuration value (1 = true) as a different value (1 = a 0.1 s, i.e. 100 ms, confirmation delay) should pretty much always be avoided. Obviously you'd want to clear old values if they existed (maybe this did happen? it's unclear to me), but you also probably want to rename the configuration label.
2) Having `help.autocorrect`'s configuration argument be a time, measured in a non-standard (for most users) unit, is just plainly bad. Give me a boolean to enable, and a decimal to control the confirmation time.
The mistake was here. Instead of retargeting the existing setting for a different meaning, they should have added a new setting.
help.autocorrect - enable or disable
help.autocorrect.milliseconds - how long to wait
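Spelled out in git-config file syntax, that proposal would look something like this (help.autocorrect is a real setting, but treating it as a pure boolean and adding a separate delay key is just the suggestion above, not something git actually ships):
[help]
        autocorrect = true              # plain on/off, as proposed
[help "autocorrect"]
        milliseconds = 1500             # how long to wait before running the corrected command
(A three-part key like help.autocorrect.milliseconds maps to section "help", subsection "autocorrect", key "milliseconds" in the config file.)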
There are similar mistakes in other systems, e.g., MySQL has innodb_flush_log_at_trx_commit
which can be 0 if disabled, 1 if enabled, and 2 was added as something special.
Is "was" before the change described at the end of the article, or after it?
Before the change, any positive number implied that the feature is on, because that's the only thing that makes sense.
After the change, you could say that 1 stops being treated as a number, but it's simpler to say it's still being treated as a number and is getting rounded down. The interpretation of various types is still messy, but it didn't get more messy.
Elsewhere, 1 is still allowed as a true equivalent.
Now, because of this confusion, they're special-casing 1 to actually mean 0. But other integers are still themselves. They've also added logic so that the strings "yes", "no", "true", and "off" are interpreted as booleans too.
So if you wanted to go that fast, you could; the invocation should have relatively stable speeds (on the order of some milliseconds...
1. it does not distinguish between dangerous and safe actions
2. it pollutes my shell history with mistyped commands
Reading this article gave me just enough of a nudge to just disable it after a year.
It's a terrible idea for fish not to save failed commands in history (even if the way bash does it is not optimal, ignoring/obliterating the fact that an error was returned), because running a command to look up the state of something can easily return the state you are checking along with an error code. "What was that awesome three-letter domain I looked up yesterday that was available? Damn, 'not a valid domain' is an error code," and just like that SEX.COM slips through your grasp, and your only recourse would be to hijack it.
but it compounds the problem to feel like it's solved by autocorrect further polluting your history.
I would not want to be fixing things downstream of you, where you would be perfectly happy downstream of me.
Note: English is not my mother tongue, but I am from the civilised part of the world that uses the metric system, FWIW.
I mean, you probably cannot sense the difference in duration between 20 and 30 ms without special equipment.
But you can possibly sense the difference between 2 and 3 deciseconds (200 ms and 300 ms) after some practice.
I think the issue in this case was rather the retrofitting of a boolean setting into a numerical setting.
Several top players have multiple "perfect full combos" under their belt, where they hit every note in the song within 50ms of the target. I even have one myself on one of the easier songs in the game.
At 120bpm a sixteenth note is 125ms; the difference is very obvious, I would think.
The concept is briefly alluded to in the prologue, and then...nada, not relevant to the rest of the plot at all (the _effects_ of the archeology are, but "software archeologists" are not meaningful characters in the narrative). I felt bait-and-switched.
As for 100WPM, which is a very respectable typing speed, it translates to 500 CPM, less than 10 characters per second, and thus slightly above 100ms per keypress. But Ctrl+C is two key presses: reacting to type them both in under 100 ms is equivalent to a typing speed above 200WPM.
Even the fastest pro-gamers struggle to go faster than 500 actions (keypresses) per minute (and they use tweaks on repeat rates to get there), still more than 100ms for two key presses.
I think people don't really type/press buttons at a constant speed. Instead we do combos. You do a quick one-two punch because that's what you're used to ("you've practiced"). You do it much faster than that 100ms, but after that you get a bit of a delay before you start the next combo.
I dare anyone to make a script that, after launching, will ask you to press Ctrl+C after a random wait between 1000 and 3000 ms, and record your reaction time measured after key release. It's allowed to "cheat" and have your fingers ready over the two keys. Unless you jump start and get lucky, you won't get better than 150ms.
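Something like this would do it (a small C++ sketch rather than a shell script; it measures from the prompt to SIGINT delivery, i.e. key press rather than key release, and it is deliberately sloppy about async-signal safety, but it's enough to settle the bet):
#include <chrono>
#include <csignal>
#include <cstdio>
#include <cstdlib>
#include <random>
#include <thread>
// Time of the "press Ctrl+C now" prompt; read from the signal handler.
static std::chrono::steady_clock::time_point g_go_time;
void OnSigint(int) {
    // printf and a non-atomic global in a signal handler are not strictly kosher; fine for a toy.
    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - g_go_time);
    std::printf("\nReaction time: %lld ms\n", static_cast<long long>(elapsed.count()));
    std::_Exit(0);
}
int main() {
    std::signal(SIGINT, OnSigint);
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> wait_ms(1000, 3000);
    g_go_time = std::chrono::steady_clock::now();   // a jump start just shows time since launch
    std::puts("Wait for it...");
    std::this_thread::sleep_for(std::chrono::milliseconds(wait_ms(rng)));
    g_go_time = std::chrono::steady_clock::now();
    std::puts("NOW: press Ctrl+C!");
    std::this_thread::sleep_for(std::chrono::seconds(10));  // give up after 10 s
    std::puts("Too slow.");
}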
You start reacting to the typo as you're typing. You just won't get to the end of your reaction before you've pressed enter.
The point of my combo comment is that pressing Ctrl + C is not the same thing as typing two random letters of a random word.
Combine these two things and I think it's possible for somebody to interrupt a command going out. The question is whether you can press Ctrl+C while typing faster than 100ms, not whether you can react to it within 100ms.
Also, people regularly type faster than the speed that pro StarCraft players play at. The sc2 players need the regedit because they will need to press Z 50 times in a row to make 100 zerglings as fast as possible, but you don't need that to type.
For a single mouse click, 225ms is pretty typical for me after a bit of warmup. Sub-200 is not consistently reproducible. I don't think I've ever cracked < ~185ms.
That being said, there are obviously cases where you mistype (usually a fat-finger or something, where you don't physically recognise that you've pressed multiple keys) and don't appreciate it until you visually notice it or the application doesn't do what you expected. 100ms to react to an unexpected stimulus like that is obviously not useful.
also I typed this entire thing that way without looking at it other than for red squiggles.
So, you type `git pshu<enter>` and realise you made a typo before you've finished typing. You can't react fast enough to stop hitting enter, but you can absolutely Ctrl+C before 100 more ms are up.
Similar to the other reply, I also commonly do that when typing, where I know I've fat fingered a word, exclusively from the feeling of the keyboard.
But also, you're not just trying to beat the fork/exec. You can also successfully beat any number of things: the pre-commit hook, the DNS lookup, the TLS handshake. Adding an additional 100ms of latency to that could easily be the difference between preempting some action, interrupting it, or noticing after it was completed.
I wrote this bash script:
#!/usr/bin/env bash
start_time=$(gdate +%s%3N)
# Function to handle Ctrl+C (SIGINT)
on_ctrl_c() {
end_time=$(gdate +%s%3N)
total_ms=$((end_time - start_time))
# Calculate integer seconds and the remaining milliseconds
seconds=$((total_ms / 1000))
millis=$((total_ms % 1000))
# Print the runtime in seconds.milliseconds
echo "Script ran for ${seconds}.$(printf '%03d' ${millis}) seconds."
exit 0
}
# Trap Ctrl+C (SIGINT) and call on_ctrl_c
trap on_ctrl_c INT
# Keep the script running indefinitely
while true; do
sleep 1
done
And then I typed "bash sleep.sh git push origin master<enter><ctrl+C>" and got "Script ran for 0.064 seconds."
I'm not a competitive speed typist or anything but I struggle to get above 110 on a standard keyboard and I don't think I've ever seen anyone above the 125-130 range.
[0] https://www.typingpal.com/en/documentation/school-edition/pe...
Not gonna believe that without empirical evidence.
Regarding reaction time, below 120ms (on a computer, in a browser(!)) is consistently achievable, e.g. this random yt video https://youtu.be/EH0Kh7WQM7w?t=45 .
For some reason, I can't find more official reaction time measurements (by scientists, on world champion athletes, e-athletes), which is surprising.
> So it is not a reaction time, but an enter to ctrl+c scenario.
At minimum, if we ignore the whole "changing your mind" thing. And for comparison: the world record for typing speed (over 15 seconds and without using any modifier keys) is around 300wpm, which translates to one keypress every 40ms - you really think 100ms to press two keys is something "you can absolutely" do? I'd believe that some* people could sometimes do it, but certainly not just anyone.
The delay is intended to let you abort execution of an autocorrected command, but without reading the output you have no idea how the typos were corrected.
Reaction to unreasonable, unexpected events will be very slow due to processing and trying to understand what is happening and how to respond. Examples: you are a racecar driver, participating in a race, driving your car on a racetrack in a peaceful country.
An armed attack: Slow reaction time, identifying the situation will take a long time, selecting an appropriate response will take longer.
A kid running into the road on the far side of the audience stands: Faster.
Kid running into the road near the audience: Faster.
Car you're tailing braking with no turn to come: Faster.
Crashed car behind a turn with bad overview: Faster.
Guy you're slipstreaming braking before a turn: Even faster.
For rhythm games, you anticipate and time the events, and so you can say these are no longer reactions, but actions.
In the git context, where you typed something wrong, the lines are blurred: you're processing while you're acting, typing while you're evaluating what you're typing. The first line of defence is feeling/sensing that you typed something wrong, either from the feedback that your fingers touched too many keys, or from sensing that the rhythm of your typing was off; at least for me, this happens way faster than my visual input. I'm making errors as I type this, and they're corrected faster than I can really read them; sometimes I get it wrong and delete a word that was correct. But still, watching people type, I see this all the time: they're not watching and thinking about the letters exclusively, there's something going on in their minds at the same time. 100 ms is a rather wide window in this context.
Also, that said, we did a lot of experiments at work with a reaction time tester; most people got less than 70 ms after practice (an LED lights up after a random interval between 2 and 10 seconds).
But often you type something and realize it's wrong while you are typing, but not fast enough to stop your hand from pressing [Enter].
That is one of the only situations where 100ms would be enough to save you.
That being said, the reason in the article for 100ms is just confused commander. Why would anyone:
1) encode a Boolean value as 0/1 in a human readable configuration
2) encode a duration as a numeric value without unit in a human readable configuration
Both are just lazy
It may be lazy, but it's very common!
why demand many char when few char do trick?
also
> Why would anyone [...] encode a duration as a numeric value without unit in a human readable configuration
If I'm only implementing support for a single unit, why would you expect or want to provide a unit? What's the behavior when you provide a unit instead of a number?
> but not doing that extra work is lazy
No: while I'm not implementing unit parsing for a feature I wouldn't use, I'm spending that time implementing a better, faster diff algorithm. Or implementing a new protocol with better security, or sleeping. It's not lazy to do something important instead of something irrelevant. And given we're talking about git, which is already very impressive software, provided for free by volunteers, I'm going to default to assuming they're not just lazy.
https://www.formula1.com/en/video/valtteri-bottas-flying-fin...
"World Athletics rules that if an athlete moves within 100 milliseconds (0.1 seconds) of the pistol being fired to start the race, then that constitutes a false start."
https://www.nytimes.com/athletic/5678148/2024/08/03/olympics...
The argument for it being what it is is that our auditory processing (when using a starter pistol) or visual processing (looking at start lights) takes time, as does transferring that message to the relevant muscles. 100 milliseconds is a pretty good average, actually.
Someone new to the “gunshot, run” dynamic could take longer, a soldier trained via repetition to react to a gunshot could be shorter, and a veteran with PTSD could be shorter still.
100ms is both too long and too short (or so I’ve heard, I’m not an expert).
I don't have extensive resources/references at hand, but I've read about this a few times over the years.
Yeah well, I did a psych BSc and I'm telling you that it's impossible.
It's certainly possible for people to do and notice things way faster than that, like a musician noticing a drummer being a few ms off beat, or speedrunners hitting frame perfect inputs, but in those cases the expectation and internal timekeeping is doing most of the heavy lifting.
I'm willing to bet Bottas fouled that, too late (or late enough).
With your git commands it is fairly predictable what happens next, it is not as if the computer is randomly taunting you with five lights.
I suggest a further patch where you can put git in either 'F1 mode', or, for our American cousins, 'Drag Strip mode'. This puts it into a confirmation mode for everything, where the whole timing sequence is shown in simplified ASCII art.
As a European, I would choose 'F1 mode' to have the five lights come on in sequence, wait a random delay and then go out, for 'git push' to happen.
I see no reason not to also have other settings such as 'Ski Sunday mode', where it does the 'beep beep beep BEEEP' of the skiing competition. 'NASA mode' could be cool too.
Does anyone have any other timing sequences that they would like to see in the next 'patch'?
In any case it is not a good idea to have a CLI command happen without your approval, even if the intention was really obvious.
The fact that this guy has been the Git maintainer for so long and designs settings like this explains a lot!
Deciseconds are just uncommon. But the problem here is that the user didn't expect the "1" to be a unit of time but instead a boolean value. He never wanted a timer in the first place.
By the way, not making the unit of time clear is a pet peeve of mine. The unit is never obvious, seconds and milliseconds are the most common, but you don't know which one unless you read the docs, and it can be something else.
My preferred way is to specify the unit during the definition (ex: "timeout=1s") with a specific type for durations, second is to have it in the name (ex: "timeoutMs=1000"), documentation comes third (that's the case of git). If not documented in any way, you usually have to resort to trial and error or look deep into the code, as these values tend to be passed around quite a bit before reaching a function that finally makes the unit of time explicit.
Do some think that 900ms, or 800, or some other sub-second value is really what we need for this error condition? Instead of, you know, not creating errors?
You might be more familiar with decimeters, deciliters, decibels or the base-10 (decimal) numbering system.
One day it'll dump the recent bash and git history into an LLM that will say something along the lines of "alright dumbass here's what you actually need to run"
> introduced a patch
> the Git maintainer, suggested
> relatively simple and largely backwards compatible fix
> version two of my patch is currently in flight to additionally
And this is how interfaces become unusable: through a thousand small "patches" created without any planning or oversight.
Then it might enjoy some modicum of success, instead of languishing in its well-deserved obscurity!
Also, if you see it as insult, that's your mistake. It is just a simple empirical observation. I'm not saying it's an original thought - feel free to Google more about this topic.
I won't waste any more time since you obviously aren't interested in discussion.
Pot. Kettle. Black.
Anyway, 0.1 seconds would be far too short even for them, who have a job based on fast reaction times.
I'm so sick of commands with --timeout params where I'm left guessing if it's seconds or millis or what.
All because someone thought surely nobody would ever want something to happen on a quarter of a second delay/interval, or a 250 microsecond one.
I mostly say this because I find it somewhat fun that they have raced _each other_ at Le Mans last year, but also because I've personally seen both of them type Git commands, so I know it's true.
Even the original documentation for the feature back when it was introduced in 2008 (v1.6.1-rc1) is pretty clear what the supported values are and how they are interpreted.
That aside, I feel the reason is to advertise the feature so that the user gets a chance to set the timer up to his preference or disable autocorrect entirely.
In video games it may seem like a lot of time for a reaction, but a lot of that "reaction time" is based off previous context of the game, visuals and muscle memory and whatnot. If you're playing Street Fighter and, say, trying to parry an attack that has a 6 frame startup, you're already anticipating an attack to "react" to before their attack even starts. When typing git commands, you will never be on that type of alert to anticipate your typos.
git good.
(the parent post was a set up for this)
I mean, for all practical purposes, a value of 1 amounts to unconditional execution.