Phenomenal for those low-powered servers you just want to leave on, running some tiny batch of cron jobs [1] or something, for months or years at a time without worrying too much about wear on the SD card rendering the whole installation moot.
This is actually how I have powered the backend data collection and processing for [2], as I wrote about in [3]. The end result is a static site built with Hugo, but I was careful to pick parts I could safely leave to tick along on their own for a long time.
[1]: https://til.andrew-quinn.me/posts/consider-the-cronslave/
[2]: https://hiandrewquinn.github.io/selkouutiset-archive/
[3]: https://til.andrew-quinn.me/posts/lessons-learned-from-2-yea...
Before the RPi existed, I always made filesystem images for USB sticks in NetBSD so that writes never touched "disk" ("diskless"). This lets me remove the USB stick after boot, freeing up the slot for something else
BSD "install images" work this way
I have been using the RPi with a diskless NetBSD image since around 2012; there are no SD card writes, the userland is extracted into RAM
I can pull out the SD card after boot and use the slot for something else
If I want data storage, I connect an external drive
It's been wild to read endless online complaints from so-called "technical" RPi users for the last 13 years about SD card wear and tear
To me, it's another example of how it's possible to have a solution that is as old as the hills and have it be completely ignored in favor of a "modern" approach that is fatally-flawed
A lot of the SD-card wear issues come from people running “normal PC workflows” on a storage medium that was never designed for that pattern.
Something I’ve seen help many newcomers is simply enabling an overlay filesystem or tmpfs-based writes. It’s basically the middle ground between a full RAM-boot distro (piCore, Alpine diskless, NetBSD) and a standard SD-based Raspberry Pi OS.
You still get the normal ecosystem and docs, but almost no writes hit the card unless you explicitly commit them.
For anyone stuck between “I want something simple” and “I don’t want my SD to die,” overlays are the easiest win.
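For the simplest version of that middle ground, here's a sketch of the sort of /etc/fstab entries people add (sizes purely illustrative) so the chattiest paths never touch the card:

    tmpfs  /tmp      tmpfs  defaults,noatime,size=64m  0  0
    tmpfs  /var/log  tmpfs  defaults,noatime,size=32m  0  0
    tmpfs  /var/tmp  tmpfs  defaults,noatime,size=16m  0  0

The full overlay option goes further and layers all root filesystem writes into RAM, discarding them on reboot unless you explicitly commit them.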
The point I'm making is that putting the rootfs on a memory filesystem, e.g., tmpfs, mfs, etc. avoids the problem with SD cards^1
This can be done with a variety of operating systems. IMO, the advantage of the RPi hardware is that it is supported by so many different operating systems
When I want to run additional, larger programs that are not in the rootfs I have embedded into the kernel, I either (a) run them from external storage or (b) copy them to the mfs/tmpfs
It depends on how much RAM I have available
1. There are probably other ways to avoid the problem, too
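In case it helps anyone picture option (b), it's nothing more exotic than something like this (paths made up for the example; exact mount options vary by OS, this is the Linux form):

    # create a memory filesystem and run the program from it
    mkdir -p /mnt/ram
    mount -t tmpfs -o size=128m tmpfs /mnt/ram
    cp /media/usb0/bin/mytool /mnt/ram/
    /mnt/ram/mytool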
NetBSD and Tiny Core Linux, even with all their benefits, are a harder experience to get into if you haven't already dipped your toes into Linux, and don't have the same wide community and boundless online resources.
But NetBSD ISOs are much heavier than TCL ISOs, so while I'm sure there's a way to get just what I want working in diskless mode, I'm not confident I'll have any RAM left to run what I actually want to run on top of it.
https://www.digitalreviews.net/reviews/pc/norhtec-xcore-geck...
I've noticed Puppy is still around but I have no idea whether it can still be comparable to Tiny Core.
As compared to TC, the "out of the box" NetBSD images contain many things I wouldn't need, so customizing it has been a recurring thought, but oh well. The documentation and careful modularity is, obviously, a huge bonus of NetBSD in that regard (even an end-user like me could do some interesting modifications of the kernel solely by reading the manual). TC seems much more ad-hoc, but I assume this, too, is intentional, by design.
Though I don't explicitly load the entire userspace into RAM, since this is a laptop and I don't foresee a need to remove the SSD after boot.
Yes, this is exactly what I want, except I need some simple Node servers running, which is not so ultra-light. Would you happen to know if this all still works in RAM out of the box, or does it require extra work?
You can run Node.js fine on a Pi with "Raspberry Pi OS Lite". In raspi-config, look for "Overlay File System" and enable it for both the boot partition and the main partition. The Pi will boot from the SD card and run entirely in RAM.
Be sure to run something to clear your logs occasionally, or reboot once in a while, or you'll run out of RAM. Still, get a quality SD card and power supply. You can get years out of a setup like this.
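If you go that route, one low-effort way to keep logs from eating RAM (a sketch, assuming systemd/journald as on stock Raspberry Pi OS) is to cap the journal and trim it on a schedule:

    # /etc/systemd/journald.conf: keep the journal in RAM and small
    [Journal]
    Storage=volatile
    RuntimeMaxUse=20M

    # and a root crontab entry to trim anything that still piles up, daily at 03:00
    0 3 * * * /usr/bin/journalctl --vacuum-size=20M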
I also like SliTaz: http://slitaz.org/en, and Slax too: https://www.slax.org/
Oh and puppy Linux, which I could never get into but was good for live CDs: https://puppylinux-woof-ce.github.io/
And there's Alpine, too.
The most responsive one, unexpectedly, was Raspberry Pi OS.
It will increase the size of the VM, but the template would still be smaller than a full-blown OS
Aside from dev containers, what are the other options? Running IntelliJ on my laptop is not an option.
I SSH into my machine and work in Nvim, which is fine. But I really miss the full capabilities of IntelliJ
In my experience, by the time you’re compiling and running code and installing dev dependencies on the remote machine, the size of the base OS isn’t a concern. I gained nothing from using smaller distros but lost a lot of time dealing with little issues and incompatibilities.
This won’t win me any hacker points, but now if I need a remote graphical Linux VM I go straight for the latest Ubuntu and call it a day. Then I can get to work on my code instead of chasing my tail with all of the little quirks that appear from using less popular distros.
The small distros have their place for specific use cases, especially automation, testing, or other things that need to scale. For one-offs where you’re already going to be installing a lot of other things and doing resource intensive work, it’s a safer bet to go with a popular full-size distro so you can focus on what matters.
I'm all for suggestions for a better base OS in small docker containers, mostly to run nginx, php, postgres, mysql, redis, and python.
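Not a recommendation, but an easy way to see the trade-off for yourself is to pull a couple of candidate bases and compare (the tags are just what I'd reach for today):

    docker pull alpine:3.20
    docker pull debian:stable-slim
    docker images | grep -E 'alpine|debian'

Alpine wins on size; the Debian slim image wins on glibc compatibility, which matters for things like prebuilt Python wheels.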
> Alpine uses musl instead of glibc for the C standard library. This has caused me all types of trouble in unexpected places.
I have no experience with alternative C libs. Can you share some example issues?
https://purplecarrot.co.uk/post/2021-09-04-does_alpine-resol...
Question: I use VirtualBox, but I feel it's kind of laggy sometimes. What do you use? Any suggestions on performance improvements?
Never really got what it’s for.
It'd be best with a hardwired network though.
thank you for this reminder! I had completely forgotten about SliTaz, looks like I need to check it out again!
In what way? Do you mean you didn't get the chance to use it much, or something about it you couldn't abide?
I used both the FLTK desktop (including my all-time favorite web browser, Dillo, which was fine for most sites up to about 2018 or so) and the text-only mode. TC repos are not bad at all, but building your own TC/squashfs packages will probably become second nature over time.
I can also confirm that a handful of lengthy, long-form radio programs (a somewhat "landmark" show) for my Tiny Country's public broadcasting are produced -- and, in some cases, even recorded -- on either a Dell Mini 9 or a Thinkpad T42 and Tiny Core Linux, using the (now obsolete?) Non DAW or Reaper via Wine. It was always fun to think about this: here I am, producing/recording audio for Public Broadcasting on a 13+ year old T42 or a 10 year old Dell Mini netbook bought for 20€ and 5€ (!) respectively, whereas other folks accomplish the exact same thing with a 2000€ MacBook Pro.
It's a nice distro for weirdos and fringe "because I can" people, I guess. Well thought out. Not very far from "a Linux that fits inside a single person's head". Full respect to the devs for their quiet consistency - no "revolutionary" updates or paradigm shifts, just keeping the system working, year after year. (FLTK in 2025? Why not? It does have its charm!) This looks to be quite similar to the maintenance philosophy of the BSDs. And, next to TC, even NetBSD feels "bloated" :) -- even though it would obviously be nice to have BSD Handbook level documentation for TC; then again, the scope/goal of the two projects is maybe too different, so no big deal. The Corebook [1] is still a good overview of the system -- no idea how up-to-date it is, though.
All in all, an interesting distro that may "grow on you".
Before encryption by default: getting files off Windows machines for family when they messed up their computers, or changing their passwords.
Before browser profiles and containers I used them in VMs for different things like banking, shopping, etc.
Down to your imagination really.
Not to mention just playing around with them, too.
Booting a dedicated, tiny OS with no distractions helped me focus. Plus since the home directory was a FAT32 partition, I could access all my files on any machine without having to boot. A feature I used a lot when printing assignments at the library.
Or 128K of RAM and a 400 KB disk for that matter.
The "high color" (16 bit) mode was 5:6:5 bits per channel, so 16 bits per pixel.
> So 153,600 bytes for the frame buffer.
And so you're looking at 614.4 KB (600 KiB) instead.
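Spelling out the arithmetic, assuming the same 640×480 resolution:

    640 × 480 pixels × 2 bytes/pixel = 614,400 bytes = 614.4 KB ≈ 600 KiB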
To be frank, I wasn't aware such a mode was a thing, but it makes sense.
In 1985, and with 512K of RAM. It was very usable for work.
Games used either 320- or 640-pixel-wide resolutions, with 4-bit color or the fake 5-bit mode known as HalfBrite, which was basically 4-bit with the other 16 colors being the same but at half brightness. The fabled 12-bit HAM mode was also used, even in some games, even for interactive content, but not too often.
For example, NVIDIA GPU drivers are typically around 800M-1.5G.
That math actually goes wildly in the opposite direction for an optimization argument.
They also pack in a lot of game-specific optimizations for whatever reason. Could likely be a lot smaller without those.
The EGA (1984) and VGA (1987) could conceivably be considered GPUs, although not Turing complete. The EGA had 64, 128, 192, or 256 KB of memory and the VGA 256 KB.
The 8514/A (1987) was Turing complete, although it had 512 KB. The Image Adapter/A (1989) was far more powerful, pretty much the first modern GPU as we know them, and came with 1 MB expandable to 3 MB.
The PGC was kind of a GPU if you squint a bit. It didn't work the way a modern GPU does where you've got masses of individual compute cores working on the same problem, but it did have a processor roughly as fast as the host processor that you could offload simple drawing tasks to. It couldn't do 3D stuff like what we'd call a GPU today does, but it could do things like solid fills and lines.
In today's money the PGC cost about the same as an RTX PRO 6000, so no-one really had them.
No SSL, probably so you can still access that site in the browser
That said, OSs came with a lot less stuff then.
Sure, we could go back... Maybe we should. But there's a lot of stuff we take for granted today that wasn't available back then.
It's hinted at in this tutorial, but you'd have to go through the Programmer's Reference Manual for the full details: https://www.stevefryatt.org.uk/risc-os/wimp-prog/window-theo...
RISC OS 3.5 (1994) was still 2MB in size, supplied on ROM.
P.S. I should probably mention that there wasn't room in the ROM for the vector fonts; these needed to be loaded from some other medium.
Windows 3.1 was only something like 16MB of storage.
Imagine the Cray supercomputer in those days being used to run a toaster or doorbell…
I prefer to use additional RAM and disk for data, not code
Probably not due to DMA buffers. Maybe a headless machine.
But would be funny to see.
If you were someone special, you got 1024x768.
Or 32K of RAM and 64KB disk for that matter.
What's your point? That the industry and what's commonly available gets bigger?
It's 20 years later and I've been running Linux for most of that time, so I probably would have even more fun revisiting DSL and Tiny Core Linux.
I don’t think that had the X Window System. https://web.archive.org/web/19991128112050/http://www.qnx.co... and https://marc.info/?l=freebsd-chat&m=103030933111004 confirm that. It ran the Photon microGUI Windowing System (https://www.qnx.com/developers/docs/6.5.0SP1.update/com.qnx....)
They were expensive too. You had to pay for each device driver you used.
Some businesses stick with markets they know, as non-retail customer revenue is less volatile. If you enter the consumer markets, there are always 30k irrational competitors (likely with 1000X the capital) that will go bankrupt trying to undercut the market.
It is a decision all CEOs must make eventually. Best of luck =3
"The Rules for Rulers: How All Leaders Stay in Power"
Stuff that is better designed and implemented usually costs money and comes with more restrictive licenses. It’s written by serious professionals later in their careers working full time on the project, and these are people who need to earn a living. Their employers also have to win them in a competitive market for talent. So the result is not and cannot be free (as in beer).
But free stuff spreads faster. It’s low friction. People adopt it because of license concerns, cost, avoiding lock in, etc., and so it wins long term.
Yes I’m kinda dissing the whole free Unix thing here. Unix is actually a minimal lowest common denominator OS with a lot of serious warts that we barely even see anymore because it’s so ubiquitous. We’ve stopped even imagining anything else. There were whole directions in systems research that were abandoned, though aspects live on usually in languages and runtimes like Java, Go, WASM, and the CLR.
Also note that the inverse is not true. I’m not saying that paid is always better. What I’m saying is that the worse stuff was free and the better stuff was usually paid, but some crap was also paid. Very little of the better stuff was free.
Conversely, I remember Maya or Autodesk used to have a bounty program for whoever would turn in people using unlicensed/cracked versions of their product. Meanwhile Blender (despite its commercial past) kept its free nature and has consistently grown in popularity and quality without any such overtures.
Of course nowadays with SaaS everything gets segmented into weird verticals, and revenue upsells are across the board, with the first hit usually also being free.
They turned into legal-services firms along the way, and stopped real software development/risk at some point in 2004.
These firms have been selling the same product for decades. Yet once they get their hooks into a business, few survive the incurred variable costs of the 3000lb mosquito. =3
In *nix, most users had a rational self-interest to improve the platform. "All software is terrible, but some of it is useful." =3
Tiny Core ran surprisingly well and I could actually use it to browse the web and use IRC.
I don't know if there are any other options for older machines other than stripped down Linux distros.
EDIT: nevermind, I see that it has the md5 in a text file here: http://www.tinycorelinux.net/16.x/x86/release/
https://distro.ibiblio.org/tinycorelinux/downloads.html
And all the files are here
https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
Over an HTTPS connection. I am not at a terminal to check the cert with OpenSSL.
I don’t see any way to check the hash OOB
Also this same thing came up a few years ago
https://www.linuxquestions.org/questions/linux-newbie-8/reli...
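For what it's worth, once you've grabbed the ISO and its .md5.txt from the ibiblio mirror, the in-band checks look roughly like this (filenames illustrative):

    # peek at the mirror's certificate details
    openssl s_client -connect distro.ibiblio.org:443 -servername distro.ibiblio.org </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates
    # check the ISO against the published md5
    md5sum -c TinyCore-current.iso.md5.txt

Of course, an in-band hash only catches corruption, not a compromised or MITM'ed source.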
> this same thing came up a few years ago
Honestly, that makes this inexcusable. There are numerous providers of free TLS certificates these days, and if that’s antithetical to them, they can use a self-signed certificate and provide an alternative method of verification (e.g. via mailing list). The fact that they don’t take this seriously means there is 0 chance I would install it!
Honestly, this is a great use for a blockchain…
Are any distros using blockchain for this?
I am used to using code signing with HSMs
> are any distros using blockchain
I don’t think so, but it’s always struck me as a good idea - it’s actual decentralised verification of a value that can be confirmed by multiple people independently without trusting anyone other than the signing key is secure.
> I am used to code signing with HSMs
Me too, but that requires distributing the public key securely which… is exactly where we started this!
> for extra high security,
No, sending the hash on a mailing list and delivering downloads over https is the _bare minimum_ of security in this day and age.
And all the files are here https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/
I posted that above in this thread.
I will add that most places, forums, and sites don’t deliver the hash OOB. Unless you mean something like GPG, but that would have come from the same site. For example, if you download a Packer plugin from GitHub, the files and the hash all come from the same site.
This thread started by talking about the site serving the download (and hash) over http. Github serves their content over https, so you're not going to be MITM'ed. There are other attack vectors, but if the delivery of the content you're downloading is compromised/MITM'ed, you've lost.
Download from at least one more location (like some AWS/GCP instance) and checksum.
Download from the Internet Archive and checksum:
https://web.archive.org/web/20250000000000*/http://www.tinyc...
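In practice that looks something like this (ISO filename assumed; the ibiblio mirror linked earlier in the thread works as a second location, and the same pattern applies to an Archive capture if one exists):

    # fetch the same release from two independent locations
    curl -LO http://www.tinycorelinux.net/16.x/x86/release/TinyCore-current.iso
    curl -L -o TinyCore-mirror.iso https://distro.ibiblio.org/tinycorelinux/16.x/x86/release/TinyCore-current.iso
    # identical digests make a single-path tamper much less likely
    sha256sum TinyCore-current.iso TinyCore-mirror.iso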
It was a little tricky to install on disk, and even on disk it behaved mostly like a live CD: file changes had to be committed to disk, IIRC.
Hope they've improved the experience by now.
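From memory, the "commit" step is Tiny Core's backup tool, roughly:

    # list what should survive a reboot (paths relative to /)
    echo 'home/tc/projects' >> /opt/.filetool.lst
    # write the backup (mydata.tgz) to the persistent store
    filetool.sh -b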
https://en.wikipedia.org/wiki/Tiny_Core_Linux#System_require...
Showcase video https://www.youtube.com/watch?v=8or3ehc5YDo
iso https://web.archive.org/web/20240901115514/https://pupngo.dk...
2.1 MB, 2.2.26 kernel
>The forth version of xwoaf-rebuild is containing a lot of applications contained in only two binaries: busybox and mcb_xawplus. You get xcalc, xcalendar, xfilemanager, xminesweep, chimera, xed, xsetroot, xcmd, xinit, menu, jwm, desklaunch, rxvt, xtet42, torsmo, djpeg, xban2, text2pdf, Xvesa, xsnap, xmessage, xvl, xtmix, pupslock, xautolock and minimp3 via mcb_xawplus. And you get ash, basename, bunzip2, busybox, bzcat, cat, chgrp, chmod, chown, chroot, clear, cp, cut, date, dd, df, dirname, dmesg, du, echo, env, extlinux, false, fdisk, fgrep, find, free, getty, grep, gunzip, gzip, halt, head, hostname, id, ifconfig, init, insmod, kill, killall, klogd, ln, loadkmap, logger, login, losetup, ls, lsmod, lzmacat, mesg, mkdir, mke2fs, mkfs.ext2, mkfs.ext3, mknod, mkswap, mount, mv, nslookup, openvt, passwd, ping, poweroff, pr, ps, pwd, readlink, reboot, reset, rm, rmdir, rmmod, route, sed, sh, sleep, sort, swapoff, swapon, sync, syslogd, tail, tar, test, top, touch, tr, true, tty, udhcpc, umount, uname, uncompress, unlzma, unzip, uptime, wc, which, whoami, yes, zcat via busybox. On top you get extensive help system, install scripts, mount scripts, configure scripts etc.
https://forum.tinycorelinux.net/index.php/topic,26713.0.html
I recommend asking on that forum. Folks are helpful.
But can they please empower a user interface designer to simply improve the margins and paddings of their interface? With a bunch of small improvements it would look significantly better. Just fix the spacing between buttons and borders and other UI elements.
Any project that rejects those trends gets bonus points in my book.
In my opinion, I believe the Tiny Core Linux GUI could use some more refinement. It seems inspired by 90s interfaces, but when compared to the interfaces of the classic Mac OS, Windows 95, OS/2 Warp, and BeOS, there’s more work to be done regarding the fit-and-finish of the UI, judging by the screenshots.
To be fair, I assume this is a hobbyist open source project where the contributors spend time as they see fit. I don’t want to be too harsh. Fit-and-finish is challenging; not even Steve Jobs-era Apple with all of its resources got Aqua right the first time when it unveiled the Mac OS X Public Beta in 2000. Massive changes were made between the beta and Mac OS X 10.0, and Aqua kept getting refined with each successive version, with the most refined version, in my opinion, being Mac OS X 10.4 Tiger, nearly five years after the public beta.
I thought that would be immediately clear to the HN crowd but I might have overestimated your aesthetic senses.
I know that not everybody spent 10 years fiddling with CSS so I can understand why a project might have a skill gap with regards to aesthetics. I'm not trying to judge their overall competence, just wanted to say that there are so many quick wins in the design it hurts me a bit to see it. And due to nature of open source projects I was talking about "empowering" a designer to improve it because oftentimes you submit a PR for aesthetic improvements and then notice that the project leaders don't care about these things, which is sad.
Too much information density is also disorienting, if not stressing. The biggest problem is finding that balance between multiple kinds of users and even individuals.
If you are trying to maximize for accessibility, that is.
I imagine the sign-off date of 2008, the lack of even simple-to-apply mobile CSS, and no HTTPS to secure the downloads (and if it had it, it would probably be SSL).
This speaks to me of a project that's 'good enough', or abandoned, for/by those who made it. Left out to pasture as 'community dev submissions accepted'.
I've not bothered to look, but wouldn't surprise me if the UI is hardcoded in assembly and a complete ballache to try and change.
Its documentation is a free book: http://www.tinycorelinux.net/book.html
[1] https://wiki.tinycorelinux.net/doku.php?id=dcore:welcome
A few weeks before, when the topic came up elsewhere, I had to go through one of my Tailscale exit nodes in another location.
It wouldn't work from Japan. Not from home, not from the office, not from a phone network either.
I remember booting Linux off a 1.44 MB floppy
All of the minilanguages exposed there will run on TC even with 32 MB of RAM.
On TC, set IceWM as the default WM, with opaque moving/resizing disabled by default, and get rid of that horrible dock.
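Roughly, from memory (double-check the extension name and preference keys):

    # install IceWM from the TC repo and make it the session WM
    tce-load -wi icewm.tcz
    echo 'icewm' > /etc/sysconfig/desktop
    # turn off opaque move/resize
    printf 'OpaqueMove=0\nOpaqueResize=0\n' >> ~/.icewm/preferences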
Handmade parchment, or leather carvings if you don’t mind.