See also: The Ultimate Oldschool PC Font Pack from VileR at <https://int10h.org/oldschool-pc-fonts/fontlist/>.
I came across this website when I was looking for IBM PC OEM fonts for a little HTML + Canvas-based invaders-like game I was developing a few years ago. It is impressive how much effort VileR has poured into recovering each OEM font and their countless variants, from a wide range of ROMs. The site not only archives them all with incredible attention to detail, but also offers live previews, aspect ratio correction and other thoughtful features that make exploring it a joy. I've spent numerous hours there comparing different OEM fonts and hunting down the best ones to use in my own work!
I'm envious of the level of nerdiness and genius on display, and I hope some of it rubbed off on me from watching that demo.
I ended up writing a Rust parser for the .hex file format for use in my kernel[1]. So I can now display the fantasy font on bare metal :)
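The .hex format itself is simple enough to parse in a few lines. Here's a minimal sketch in Python (not the linked Rust code; the glyph data in the example below is made up for illustration, not copied from any real font):

```python
def parse_hex_line(line: str):
    """Parse one line of the GNU Unifont .hex format.

    Each line is CODEPOINT:BITMAP, where BITMAP is 32 hex digits
    (an 8x16 glyph, one byte per row) or 64 hex digits (a 16x16
    glyph, two bytes per row).
    """
    cp_hex, bitmap_hex = line.strip().split(":")
    codepoint = int(cp_hex, 16)
    row_bytes = len(bitmap_hex) // 32          # 1 for 8px wide, 2 for 16px
    width = 8 * row_bytes
    rows = [int(bitmap_hex[i:i + row_bytes * 2], 16)
            for i in range(0, len(bitmap_hex), row_bytes * 2)]
    return codepoint, width, rows

def render(width, rows):
    """Render a glyph as ASCII art for a quick sanity check."""
    return "\n".join(
        "".join("#" if row >> (width - 1 - x) & 1 else "." for x in range(width))
        for row in rows)
```

For a bitmap renderer (e.g. in a kernel framebuffer console), each set bit in a row becomes one lit pixel, scanning left to right from the most significant bit.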
[1]: https://github.com/LevitatingBusinessMan/runix/blob/limine/s...
Out of curiosity I checked with lsof; apparently other fonts are used as fallbacks:
/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf
/usr/share/fonts/truetype/droid/DroidSansFallbackFull.ttf
/usr/local/share/fonts/MS/segmdl2.ttf
/usr/local/share/fonts/MS/seguisym.ttf
/usr/local/share/fonts/nerd/Iosevka/IosevkaNerdFont-Regular.ttf
/usr/local/share/fonts/nerd/JetBrainsMono/JetBrainsMonoNerdFontMono-Regular.ttf
At least the result is perfect!
So is Webdings: https://www.dafontfree.io/webdings-font/
Webdings even got integrated into Unicode 7.0, so all the Noto fonts support it: https://en.wikipedia.org/wiki/Webdings
And recode(1) has full support for ISO-8859-*, as do iconv and the Python 3 codecs module. I'm pretty sure browsers can render pages in them, too; Firefox keeps rendering UTF-8 pages as if they were ISO-8859-1 encoded whenever I screw up setting the charset parameter in their Content-Type.
That's the point. Think again.
Do you have a link to the MUD you're working on?
https://en.wikipedia.org/wiki/Sixel
We've come full circle, 40 years later.
Today, when we're sending it to terminal emulators running on teraflops supercomputers over gigabit-per-second links, it's only a waste of CPU and software complexity instead of user time and precious bandwidth. But it's still a waste.
Why couldn't we have FTP and Gopher support in web browsers instead?
I mean, not really; they're ancient and horribly insecure protocols without enough users to justify improving them.
Also, you may not have noticed this, but you're commenting on a thread that's largely about PETSCII and Videotex.
Fortunately, AFAIK, there isn't any significant body of existing Sixel art we need to preserve access to.
The browser support would have needed continuous security fixes and rewrites, unfortunately; the protocol specs and the code were written in the day and age of a much less adversarial internet. It's much safer to handle those sorts of protocols with an HTTPS proxy in front these days. There are dedicated Gopher and FTP clients still out there. IMHO browsers are too big and bloated as they are; they need more stuff taken out of them, not more added without taking anything away, particularly stuff that's old and insecure and not used much anymore.
And yes, I'm also here for the retro factor :-) My pet project is Z80/6502 emulation in Unreal Engine with VT100 and VGA support, and running BBSes in space. So I'm all over stuff about old ANSI, PETSCII, and anything even tangentially related to 8x8 character sets.
Each UTF-8 character (1 to 3 bytes) corresponds to 1 byte of input data. The average increase in data size is about 70%, but you gain binary independence in any medium that understands UTF-8 (email, the terminal, unit tests, etc.).
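The exact mapping isn't spelled out here, but a minimal sketch of one way such a scheme can work (the codepoint range and the resulting overhead figure are my assumptions, not the actual encoder): printable ASCII bytes pass through as one UTF-8 byte each, and everything else is remapped into a range that encodes as two bytes:

```python
def encode(data: bytes) -> str:
    out = []
    for b in data:
        if 0x20 <= b < 0x7f:           # printable ASCII: 1 UTF-8 byte
            out.append(chr(b))
        else:                           # remap to U+0100+b: 2 UTF-8 bytes
            out.append(chr(0x0100 + b))
    return "".join(out)

def decode(text: str) -> bytes:
    return bytes(ord(c) - 0x0100 if ord(c) >= 0x0100 else ord(c)
                 for c in text)
```

For uniformly random input, 95 of the 256 byte values stay at one byte and the other 161 cost two, an average of about 1.63 output bytes per input byte; the exact overhead depends on which codepoint range the real scheme picks.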
Nice work! But if you want something like this in production, base64 only increases the size by 33%.
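The base64 figure is easy to check: every 3 input bytes become 4 output characters, padded up to a multiple of 4.

```python
import base64

data = bytes(range(256))
encoded = base64.b64encode(data)
print(len(encoded), len(encoded) / len(data))   # 344 1.34375
```

256 bytes round up to 86 groups of 3, giving 344 output bytes, roughly the 33% overhead quoted.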
CNXT = Constantine's Nine x Twenty
It took several seconds to load for me, so here's the first paragraph. It's a good first paragraph, though!
Also:
I won't have to wait seconds (!!!) to read it
I come to the comments to find out what these "clickbait title" articles (meaningless words with no context) really are before clicking.
Secondly, the site appears to be "hug of death"'d at the moment. I presume it was still accessible but struggling when OP posted.
I'll definitely give this a try in my Linux TTY. Thanks for sharing!
A great deficiency of Unifont mentioned several times in the other thread was its lack of combining-character support, and the absence of alternative glyphs for the code points in scripts like Arabic (well, and Engsvanyáli) whose form is affected by joiner or non-joiner context. Does anyone know if Unscii does better at this?
From opening it in Fontforge, Unscii seems to have pretty broad coverage, including things like Bengali, Ethiopic, and even runic, plus pretty full CJK(V) coverage. It seems to have some of the CSUR https://www.evertype.com/standards/csur/ assignments, such as the Tengwar of Feanor in the range U+E000 to U+E07F, but has conflicting assignments for some other ranges, like the Cirth range U+E080 to U+E0FF (present in Unifont but arguably duplicative with the runic block), which is assigned to Teletext/Videotex block mosaics. I note that my system has different conflicting assignments for this range, with Tux at U+E000 followed by a bunch of dingbats, while the Cirth range is a bunch of math symbols.
Given that astral-plane support is virtually universal in Unicode implementations these days (thanks largely to emoji) it might be better for future such efforts to use SPUA and SPUB to reduce the frequency of such codepoint clashes. SPUA and SPUB are each the size of the entire BMP: https://en.wikipedia.org/wiki/Private_Use_Areas
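The arithmetic checks out, give or take the two noncharacters at the end of each plane:

```python
bmp    = 0xFFFF - 0x0000 + 1          # Basic Multilingual Plane: 65,536
spua_a = 0xFFFFD - 0xF0000 + 1        # Plane 15 private use: 65,534
spua_b = 0x10FFFD - 0x100000 + 1      # Plane 16 private use: 65,534
```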
For day-to-day use of semigraphic characters, I ran into the problem two hours ago in https://news.ycombinator.com/item?id=46277275 that the "BOX DRAWING" vertical lines don't connect, consequently failing to draw proper boxes. I had the same problem in Dercuano, where I fixed it by reducing the line-height for <pre> elements. The reason seems to be that Firefox defaults line-height to "normal", which is apparently equivalent to "1.41em", which doesn't sound very normal to me (isn't an "em" defined as the normal line height?), and, although the line-drawing characters in my font (which seems to be Noto Sans Mono) are taller than 1em, they still don't reliably join up if the line-height is taller than 1.21em.
Chromium does the same thing, except its abnormal definition of "normal" is evidently more like 1.35em.
It's probably too late to make a change to the standard HN stylesheet so major as
pre { line-height: 1.2em }
since it would change the rendering of the previous decades of comments. It would be a significant improvement for things like what I was doing there, and I don't think it would be worse for normal code samples. However, given the lengths to which the HN codebase goes to limit formatting (replacing characters like U+2009 THIN SPACE with regular spaces, stripping out not just emojis but most non-alphanumeric Unicode such as U+263A WHITE SMILING FACE, etc.), maybe discouraging the use of these semigraphics is intentional? If not, though, perhaps the fact that the line-height is already different between Chromium and Firefox represents a certain amount of possible flexibility...
Obviously the line-height would be a much more serious problem for the kinds of diagonal semigraphic characters that viznut is largely focusing on here; those would strictly require a line-height of exactly 1em, which I think would substantially impair the readability of code samples.