btw, I have a project in my drawer: placing the rootfs of my NixOS system on IPFS.
oh that would be fun! have you made much progress? is there a repo or something I can follow for updates?
The problem with booting Linux off very high-latency devices is that the kernel tends to time out I/O requests after too short an interval (60 seconds, I think), so you have to adjust those timeouts upwards.
This was basically an option of the OpenBoot PROM firmware of the SPARC machines.
It looked like this (ok is the forth prompt of the firmware):
ok setenv network-boot-arguments dhcp,hostname=myclient,file=https://192.168.1.1/cgi-bin/wanboot-cgi
ok boot net
This doesn't only load the initramfs over the (inter)network but also the kernel.
https://docs.oracle.com/cd/E26505_01/html/E28037/wanboottask...
https://docs.oracle.com/cd/E19253-01/821-0439/wanboottasks2-...
I've seen organizations do something similar to this for trade shows when they want a bunch of machines that visitors can interact with and don't want to have to keep them updated individually. Just update the image once and reboot each machine.
Or, even better, a magnet link.
1. bootp says the boot image is at <ip address>/path/to/img
2. The PXE network stack fetches that image via TFTP (which is awful)
3. The PXE network stack boots that image
In most cases, the boot image would be a chainloader like pxelinux, which would fetch a config file telling it the kernel path, the initrd path, and the command line. The user could then choose an image to boot, and pxelinux would fetch the files via TFTP (which is still awful) and boot them.
In this new, HTTP-based case, we replace each instance of "TFTP" with "HTTP", which we can authenticate (ish), which we can easily firewall, which doesn't have weird compatibility issues, and so on.
Note that, before now, you could replace pxelinux with iPXE, and iPXE could fetch files via HTTP (which is awesome), but you still had to fetch iPXE and its config file via TFTP.
Note that TFTP is an unauthenticated, UDP-based, extremely limited protocol which has almost no support for anything but the most basic "get this file" or "take this file" functionality. Being able to replace it is a joy and a wonder.
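For flavor, a minimal iPXE boot script served over HTTP might look something like this (the hostname and paths are hypothetical):

```
#!ipxe
dhcp
# Fetch both the kernel and the initrd over HTTP instead of TFTP
kernel http://boot.example.com/vmlinuz
initrd http://boot.example.com/initrd.img
boot
```

The same script served from TFTP would work too, but over HTTP you get authentication(ish), sane firewalling, and none of TFTP's compatibility weirdness.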
It worked really well, but I haven't worked on large-scale DCs for a few years now, so maybe some new stuff has happened since.
Your fleet is ready.
> While standard PXE clients use only TFTP to load parameters and programs from the server, iPXE client software can use additional protocols, including HTTP, iSCSI, ATA over Ethernet (AoE), and Fibre Channel over Ethernet (FCoE). Also, on certain hardware, iPXE client software can use a Wi-Fi link, as opposed to the wired connection required by the PXE standard.
Does iPXE have a ca-certificates bundle built-in, is there PKI with which to validate kernels and initrds retrieved over the network at boot time, how does SecureBoot work with iPXE?
For HTTPS booting, yes.
> how does SecureBoot work with iPXE?
It doesn't, unless you manage to get your iPXE (along with everything else in the chain of control) signed.
There was a recent article about somebody doing this on an Orange Pi [1]. IIUC, you can have one RasPi with an SD card (I use USB drives, but w/e) be the PXE server, and then the rest can all network boot.
This is technically neat, but... How often does the memory card break on a Raspberry? How often does the network break (either Raspberry hardware or upstream)? There are fewer things to break when you run from local hardware.
Keep log files and other frequently written things in RAM and the SD card will last forever.
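One common way to do that is a tmpfs entry in /etc/fstab; a minimal sketch (the size is an assumption, tune it to taste, and note that the logs vanish on reboot):

```
tmpfs  /var/log  tmpfs  defaults,noatime,nosuid,mode=0755,size=64m  0  0
```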
When was the last time an sdcard in a digital camera wore out?
As others have said, you can also use PXE, but http is a bit easier to deal with.
There is a hosting company with something like 44k Raspberry Pis. Are you going to be the guy to update them?
Many of us have a handful of Pis at home doing whatever they do, each with their own unique MicroSD card. In this configuration, every time the number of Pis doubles, the overall MTBF for their collective storage halves. Backups are a pain since each Pi is a unique and special snowflake, and are thus somewhat unlikely to actually get accomplished. When a MicroSD card does die, that Pi's configuration, and all of the work that went into making it do whatever it does, likely disappears with it.
However, when booting over the network:
A handful of Pis are at home doing whatever they do, and booting from a reasonably-resilient NAS (eg a ZFS RAIDZ2 box) somewhere in the house (which is a great idea to have around for all kinds of other reasons, too). Adding more Pis does not decrease storage MTBF at all, since there is no MicroSD card to die. Backups become simple, since ZFS snapshots make that kind of thing easy even if each Pi's disk image is a unique and special snowflake. Space-efficient periodic snapshots become achievable, making it easy to unfuck a botched change -- just roll back to an hour ago or yesterday or whenever things last worked, and boot from that snapshot instead. Undetected bitrot becomes a non-issue. Speeds (for many workloads) might even increase, since at least the Pi4 can handle wire-speed network traffic without any real sweat.
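As a sketch of that rollback workflow (the pool and dataset names here are hypothetical):

```shell
# Take a periodic snapshot of one Pi's netboot dataset
zfs snapshot tank/netboot/pi-livingroom@hourly-2024010112
# See what there is to roll back to
zfs list -t snapshot -r tank/netboot/pi-livingroom
# Botched a change? Roll back to the last snapshot that worked
zfs rollback tank/netboot/pi-livingroom@hourly-2024010112
```

In practice you'd drive the snapshot step from cron or a tool like sanoid rather than by hand.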
It's not a great fit for everyone, but it may result in a net long-term time savings for some of us folks here who tinker with stuff at home if enough steps are automated, and it seems likely to result in fewer frustrating surprises.
I have no data on this, only anecdata - since I've started being interested in the Home Assistant project, I've seen countless problems with people who've done an upgrade, rebooted their HA Pi and had some kind of Disk IO issue because the card died.
As I understand it, it's the constant logging to disk for logfiles and databases that ends up killing MicroSD cards. It seems to be particularly bad for clones and cheap ones off eBay/Amazon, but it's apparently still a problem even for high-quality "endurance" MicroSD cards.
If I were to netboot these things, then I'd have fewer points of failure than I do now.
Of course modern servers have far more CPU power than even the largest LANs back in 1986. Still, those of us who remember when Sun was a big deal miss the power of the network.
Sun did this through NIS, originally called Yellow Pages/YP, but renamed for trademark reasons.
When I worked at Yahoo, corp machines typically participated in an automounter config so your home directory would follow you around, which was super convenient (well, except when the NFS server, which might be your personal corp dev machine under your desk, went away, and there was no timeout for NFS operations... retry until the server comes back, or until the heat death of the universe). They used a sync script to push passwords out rather than NIS, though --- a properly driven sync script works almost as fast but has much better availability, as long as you don't hit an edge case (I recall someone having difficulty because they left the company and came back, and were still listed as a former employee in some database, so production access kept being removed automatically).
https://stackoverflow.com/questions/37453841/download-a-file...
[1] http://git.annexia.org/?p=libguestfs-talks.git;a=tree;f=2016...
Of course, in the real world, the people who select your hardware don't talk to the people who care about software. So you are stuck with slow boots just because it is too late to go back and do a full million-dollars-each board re-spin now that we know our boot times are too slow.
It gets worse: even if you select hardware with fast init, that doesn't mean it really is fast. I've seen hardware that claims not to need long init delays, but if you don't insert waits during boot, there are bugs.
Pretending there is no problem is part of the problem.
It's simply not rocket science.
I'm surprised to see this, in what way does it require hard CS research? Isn't it just debugging and implementation pain?
(Note I'm not saying you can't do a PhD in it and improve the situation -- you could probably do that for any problem, honestly. Just saying that I think you could get most of the way there by just paying the engineering cost.)
EDIT: https://hal.science/hal-00870846/file/W5_PX_Le_Berre_On_SAT_...
But still, nicely done!
I read the “How to shrink a file system without a live CD” one. So here’s my one: how to shrink a file system without a live CD, as part of a single-command install script for a program.
My sbts-aru sound localizing recorder program does that on the pi.
I’m willing to bet that no other project on the Internet does this, but I’d love to be surprised. Let me know.
It installs the majority of the code, then reboots and shrinks the file system. It then creates additional partitions, labels them, and installs file systems on them. Finally it finishes the install and comes up running.
So the procedure goes as follows.
sudo apt install -y git
git clone https://github.com/hcfman/sbts-aru.git
cd sbts-aru
sudo -H ./sbts_install_aru.sh
That’s it. It comes up running a recorder on a system with multiple partitions, running an overlayFS in memory on the first one. It will even work on a Raspberry Pi Zero (it works on all Pi versions), and it doesn't matter if it's Raspbian or Bookworm.
Back in the CyanogenMod days, I had an even better setup: there was an app that also let you emulate a USB keyboard and mouse, so I could, with some command-line trickery, boot a computer from an ISO on my phone, then use that same phone as a keyboard and mouse/trackpad, including in the BIOS.
https://github.com/nitanmarcel/isodrive-magisk
It needs root, and your kernel needs the USB mass storage gadget support module enabled, which, sadly, LineageOS doesn't enable by default.
But Lineage does not enable it on all kernels, even if it could just be enabled. I observe this on all of my Samsungs, for example.
You can use this app to see which USB gadget options are enabled on your kernel: https://github.com/tejado/android-usb-gadget
This and more can be seen in the `device info` screen of the app mentioned above.
You can also run `zcat /proc/config.gz | grep CONFIGFS_` in a root shell (su) inside Termux to see which options are set in the default kernel.
It doesn't work on all smartphones.
[0] https://play.google.com/store/apps/details?id=com.softwareba...
Rootlessly booting a Linux ON (not from) your phone is possible via the Termux app.
Search for "rootless kali nethunter" on YouTube. See here: https://m.youtube.com/watch?v=GmfM8VCAu-I
I did find this, but it's ancient and may not meet your needs anyway... https://xdaforums.com/t/app-root-usb-mass-storage-enabler-v1...
Fundamentally, accessing files on a live filesystem is a solved problem, and has been since before smart phones. I don't even know how they made such a broken setup.
(I believe the problem with USB mass storage is that it's closer to an IDE/SCSI protocol than a filesystem protocol. You can't have one bit of the system running around "accessing files" while you've got something else "moving the simulated drive head and writing this sector". In principle you could put the work in to make it all work out, but then it would be as flaky as the media access is now, only for a good reason rather than laziness/lockin.)
One question I have is the following: if I take two Linux PCs, why can't I just plug a USB cable between the two and transfer files between them?
Instead, the only solution I know is to run an SSH server on one of them and use sshfs on the other.
You can do this with an ethernet cable, if you have one and an ethernet port on both ends: manually set up a network on just that cable and transfer at full speed. (AFAIK all modern ethernet ports can figure out that they need to cross over in this situation, so you haven't needed a special crossover cable in a long time.)
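Concretely, something like this should work (the interface names and addresses are assumptions; check yours with `ip link`):

```shell
# On machine A:
sudo ip addr add 10.0.0.1/24 dev enp3s0
sudo ip link set enp3s0 up
# On machine B (same steps, different address):
sudo ip addr add 10.0.0.2/24 dev enp3s0
sudo ip link set enp3s0 up
# Then copy files over the link, e.g. from A:
rsync -av ./files/ user@10.0.0.2:/tmp/
```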
If the sd card is mounted by your computer, you can't run any apps on the phone that need to use the sd card. That means, apps you moved to the SD card for space reasons, or apps that might save photos to the SD card (such as messengers).
If your computer messes up the filesystem, then you're in a world of hurt.
If multiple apps can access the filesystem at the same time, why couldn't some app (a background system process) also read from / write to the filesystem in an Android multi-access-compatible way, while serving the mass storage device API on the other side?
The paradigm for block-oriented filesystem access is exclusive access, and filesystem code is built around that. There are some niche filesystems for multiple simultaneous access to block devices, but I don't know if any are open source; mostly people don't set up SCSI/SAS/DAS disk arrays with two hosts anymore, and when they do, they're much more likely to have exclusive access with failover than simultaneous access.
If you had a team of really detail oriented developers capable of getting this done for Android and desktop platforms, wouldn't you rather they work on something else?
Another approach might be to build a virtual filesystem to export as a block device on usb connection that's essentially a snapshot of the current one, and then you sync any changes that were written on usb disconnect, but then you need to manage divergent changes and that's unfun too.
SMB over USB would be terrible in many ways, but probably handle this use case much better.
If you can improve the world this way (mass storage device support is really widespread), why not?
Top of my list would be getting the touch screen to just work every time. I can't count the number of times I've had to turn the screen off and on because the touch screen came up in a state where I couldn't swipe up to get the code entry: swiping from the bottom to the top of the screen doesn't move it enough. I've had this happen on pretty much all my Androids.
Things like booting faster would actually be nice, especially since sometimes phones reboot in pockets. Setting up applications for faster starting would be amazing. It's not all in Google's court, but a basic hello world built with the Android IDE starts up rather slow, even if you remember to compile a release build.
https://www.slashgear.com/samsung-linux-on-dex-is-dead-here-...
Would carrying an extra USB stick be that big of a hassle? No, but I can see the need for booting up a ready Linux image being extremely situational so the vast majority of time you're just carrying dead weight.
You're only allowed to use it in the prescribed fashion.
https://www.amazon.com/Verbatim-8GB-Clip-Flash-Drive/dp/B00N...
Using the phone directly still seems the cleanest and most reliable way. Or maybe a combination of both, like those magnetic plugs [2] but with an integrated USB stick. Bonus points if you don't have to take it out at all (until needed) by either connecting the other magnetic part for data transfer and charging or data through USB OTG and wireless charging. One can dream... but the technology will shrink even further so who knows.
1. https://www.amazon.com/Enfain-Flash-Drives-Memory-Credit/dp/...
# Mount the target root filesystem
sudo mount /dev/sdb2 /mnt
# Allow local connections to the host's X server
sudo xhost +local:
# Share the host's X socket and credentials with the chroot
sudo mount --bind /tmp/.X11-unix /mnt/tmp/.X11-unix
sudo cp ~/.Xauthority /mnt/root/.Xauthority
# Bind the usual virtual filesystems into the chroot
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo mount --bind /dev/pts /mnt/dev/pts
# Enter the chroot as the target user, in a private hostname namespace
sudo unshare --uts chroot /mnt su -l timonoko
# Tear everything down afterwards (-l = lazy unmount)
sudo umount /mnt/proc
sudo umount /mnt/sys
sudo umount /mnt/dev/pts
sudo umount -l /mnt/dev
sudo umount -l /mnt/tmp/.X11-unix
sudo umount -l /mnt
I posit that, because wimlib supports pipable WIMs, we could pipe an endless stream of QR codes to it (thus making "installing Windows from QR codes" possible)...
I think we managed to only ever install it once successfully without error.
Also, fun semi-related fact: In my country we called 8" and 5.25" floppies "floppies", and the smaller 3.5" ones were called "stiffies" - because the larger ones were floppy, and the smaller were, well, stiffer. Do with this information as you please.
I believe it worked by supplying the recovery software as a bootable ISO image in ROM on the drive and emulating a bootable (USB? SCSI?) CD-ROM drive at boot.
In the late 90s I worked in the server support line for DEC, and the number of times we had to talk people through the "invisible F6 prompt" was nuts.
This process was specific to installing storage drivers needed for the system volume. All other driver installation happened elsewhere.
My memory says there was actually a "Press F6 to load system storage drivers" prompt or something displayed by the installer, but it wasn't displayed for all that long a time and I imagine it was effectively invisible for many people. I recall spamming F6 to make sure I wouldn't miss the prompt.
Here's how I remember it: The Windows CD itself had drivers built into the installer so that it could discover hardware. However, if you had a brand new storage controller, you might find that even Windows NT CD's installer wouldn't recognise it, so it would tell you that there were no storage devices found. To get around this you had to press F6 right at the start of the CD boot, before the Windows logo appeared. After a few seconds you could provide your storage drivers on a floppy disk, and the Windows installer program would continue to load. This time, the installer would recognise your disks. Then during the installation you would get a visible F6 prompt to provide your storage drivers. This allowed you to provide extra storage drivers that would be bundled with the installed OS.
Most people didn't know about the first F6, because I think NT installer had some sort of very basic, generic storage drivers that would work in most cases. If you had some very recent array controller, you would likely need to know about the "invisible F6 prompt".
Old time OS programmers typically didn't need to think about these things...
Current and future OS designers might wish to consider these things in their designs, if they desire maximum flexibility in their current or future OS...
Anyway, an excellent article!
Related:
https://en.wikipedia.org/wiki/Coupling_(computer_programming...
https://thenewstack.io/how-decoupling-can-help-you-write-bet...
https://softwareengineering.stackexchange.com/questions/2444...
My initial goal was to write a fuse filesystem for mounting RPM packages, but I wanted to see how far it goes. Turns out, pretty far indeed: https://github.com/Vogtinator/repomount/commit/c751c5aa56897...
The system boots to a working desktop and it appears like all packages available on the DVD are installed.
That was an "amazing" thing to me back in the day. I had the bandwidth to do it, a simple floppy to start the whole process and...there it was! BSD on my machine.
I'm not sure if you can still do that today. Pretty sure the files were FTP hosted somewhere (or even TFTP). I think today it's all ISOs.
pulitzer prize nomination material
That sounds cool!
oh.
> Competitiveness is a vice of mine. When I heard that a friend got Linux to boot off of NFS, I had to one-up her. I had to prove that I could create something harder, something better, faster, stronger.
sounds like they're well aware of the traditional way to do it, and are deliberately going out of their way to do something different and weird.
If you want a fuller system you could try 1) converting the filesystem to tmpfs after boot and installing packages to RAM, or 2) mounting a remote disk image as your rootfs rather than keeping individual files remote. The former will be blazing fast but you're limited by your RAM. The latter will be faster than FUSE, benefit from I/O caching, and not have the bugs mentioned.
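For the second option, a rough sketch of mounting a remote disk image using sshfs (mentioned earlier in the thread) plus a loop mount; the host and image path are hypothetical, and using the image as the actual rootfs at boot additionally needs initramfs support:

```shell
# Mount the remote directory that holds the disk image
sshfs user@server.example:/srv/images /mnt/images
# Loop-mount the image itself as a filesystem
sudo mount -o loop /mnt/images/rootfs.img /mnt/root
```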