One rather big difference here, though: the limit is on the size of the source code, not the compiled binary. Most of the "optimizations" here have little to no effect on the code once it's compiled.
With older computers, instead of removing line breaks from the source, you'd be doing things like tweaking compiler flags to shrink the binary.
I wonder if anyone these days is using multi-code-point emoji (ZWJ sequences, I believe they're called) to compress more data into 1024 "bytes"?
Since HackerNews does not allow emojis, here is a demo of what I mean:
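Roughly this, as a Python sketch (the family emoji has to be spelled out as escape sequences since it gets stripped here): one displayed glyph is really several code points joined by zero-width joiners, so a character-counting limit sees one "character" while the string carries far more bytes.

    # One family emoji: four person code points joined by U+200D (zero-width joiner).
    family = "\U0001F468\u200D\U0001F469\u200D\U0001F467\u200D\U0001F466"

    print(len(family))                  # 7  -- code points in the string
    print(len(family.encode("utf-8")))  # 25 -- actual bytes in UTF-8
    # Rendered, it's a single glyph; a grapheme-cluster count would say 1.

So if the contest counts grapheme clusters rather than bytes, each glyph like that sneaks ~25 bytes past the 1024-character limit.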
Unicode is a nightmare, but I’m glad everyone agrees to share the same nightmare.