I wouldn't technically call this "boot" since the kernel has already booted... If you get google-drive "mounting" support into grub, then I'll concede. This just places the rootfs in some strange place.

btw, I have a project in my drawer to place the rootfs of my NixOS on IPFS.

This is easy to fix. We just call the first kernel a boot loader and kexec a new kernel from Google Drive.
> NixOS on IPFS

oh that would be fun! have you made much progress? is there a repo or something I can follow for updates?

How about booting Linux off bittorrent? https://libguestfs.org/nbdkit-torrent-plugin.1.html#EXAMPLES

The problem with booting Linux off very high latency devices is the kernel tends to time out I/O requests after too short a time (60 seconds I think) so you have to adjust those timeouts upwards.

If that's a huge problem, you can wedge FUSE in there somehow; as far as I know there's no automatic kernel-side timeout for requests sent to FUSE.
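Concretely, something like this (device names and the 300-second value are illustrative; see the linked man page for exact plugin syntax):

    # serve a torrent as a network block device
    nbdkit torrent /path/to/image.torrent
    # attach it, with a much more generous timeout than the default
    sudo nbd-client -timeout 300 localhost /dev/nbd0
    # for ordinary SCSI-attached disks the equivalent knob lives in sysfs:
    echo 300 | sudo tee /sys/block/sda/device/timeout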
Back in the day it was possible to boot Sun Solaris over HTTP. This was called wanboot. This article reminded me of that.

This was basically an option of the OpenBoot PROM firmware of the SPARC machines.

It looked like this (ok is the forth prompt of the firmware):

    ok setenv network-boot-arguments dhcp,hostname=myclient,file=https://192.168.1.1/cgi-bin/wanboot-cgi
    ok boot net
This loads not only the initramfs but also the kernel over the (inter)network.

https://docs.oracle.com/cd/E26505_01/html/E28037/wanboottask...

https://docs.oracle.com/cd/E19253-01/821-0439/wanboottasks2-...

Modern UEFI can do that too!

https://ipxe.org/appnote/uefihttp
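The gist, if you want to try it: the DHCP server tags UEFI HTTP clients and hands them a URL instead of a TFTP filename. A dnsmasq sketch (tag name and URL made up; the appnote above is the authoritative recipe):

    # x86-64 UEFI HTTP-boot clients announce client architecture 16
    dhcp-match=set:httpboot,option:client-arch,16
    # they expect "HTTPClient" echoed back in option 60 (vendor class)
    dhcp-option-force=tag:httpboot,60,HTTPClient
    # hand them a URL rather than a TFTP filename
    dhcp-boot=tag:httpboot,http://boot.example.com/boot.efi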

First thing I disable on a new PC.
I was going to say, booting from a random website image sounds like a terrible idea.
It's possible to require that any images used be signed using a specific key that is configured in the hardware ahead of time. Even if you don't do that, the same setup can be helpful for provisioning a bunch of machines without accessing any external network. You can configure a small box to act just as a DHCP server and to serve a machine image for network boot. Then you can have all the machines on this subnet automatically load that image as it is updated without the need for any further configuration on each device.

I've seen organizations do something similar to this for trade shows when they want a bunch of machines that visitors can interact with and don't want to have to keep them updated individually. Just update the image once and reboot each machine.
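For the signing part, iPXE has a ready-made mechanism: build the trusted certificate into the iPXE binary at compile time, then demand signatures in the boot script. A sketch (URLs hypothetical; see ipxe.org's code-signing docs):

    #!ipxe
    # refuse to execute anything that hasn't been verified from here on
    imgtrust
    kernel http://boot.example.com/vmlinuz
    imgverify vmlinuz http://boot.example.com/vmlinuz.sig
    initrd http://boot.example.com/initrd.img
    imgverify initrd.img http://boot.example.com/initrd.img.sig
    boot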

Ideally it would be possible to just specify an image url and a hash.

Or, even better, a magnet link.

I dunno, I actually think a public key is better than a hash, because it lets you sign updated images without having to update things on the client. Obviously it should be user-controlled, but this feels like a legitimate use.
It is more flexible than a hash, but it's also more complicated.
I don't really see it being that much more complicated. Signing the image is just one extra step when you publish, but it also means that you never need to update client machines unless the key is compromised.
Okay but why not just use PXE? Why does everything have to be HTTP?
Well, it kind of does. Normally, the PXE network booting will use DHCP (or bootp or whatever) to fetch the boot image location, then it will fetch that boot image. Historically, that has worked this way:

1. bootp says the boot image is at <ip address>/path/to/img
2. The PXE network stack fetches that image via TFTP (which is awful)
3. The PXE network stack boots that image

In most cases, the boot image would be a chainloader like pxelinux, and that would fetch a config file which told it the kernel path, the initrd path, and the commandline, and then the user could choose to boot that image, and then pxelinux would fetch the files via TFTP (which is still awful) and boot them.

In this new, HTTP-based case, we replace each instance of "TFTP" with "HTTP", which we can authenticate (ish), which we can easily firewall, which doesn't have weird compatibility issues, and so on.

Note that, before now, you could replace pxelinux with iPXE, and iPXE could fetch files via HTTP (which is awesome), but you still had to fetch iPXE and its config file via TFTP.

Note that TFTP is an unauthenticated, UDP-based, extremely limited protocol which has almost no support for anything but the most basic "get this file" or "take this file" functionality. Being able to replace it is a joy and a wonder.

PXE is one layer higher than what you're thinking of. The old-school analog to HTTP in this case is TFTP, and it sucks.
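To make the chainloading concrete: the usual dnsmasq recipe bootstraps iPXE over TFTP exactly once and does everything else over HTTP (filenames and URL are illustrative):

    # iPXE clients mark themselves with DHCP option 175
    dhcp-match=set:ipxe,175
    # plain PXE ROMs get iPXE itself, via TFTP (the one unavoidable TFTP fetch)
    dhcp-boot=tag:!ipxe,undionly.kpxe
    # once iPXE is running, hand it an HTTP URL for everything else
    dhcp-boot=tag:ipxe,http://boot.example.com/boot.ipxe
    enable-tftp
    tftp-root=/srv/tftp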
You can do either
Anyone know why this comment is collapsed by default?
I'm wondering if this is how we did a net install of a custom Distro back in a former job, but I don't recall. I just remember it being insanely easy to install the distro over the network, even on a VM.
If it was a decade ago, PXE/TFTP booting was pretty common. (At MetaCarta we shipped Dell 2650/6650 servers around then, and while field upgrades were from DVD, the QA lab had a "synthesize keystrokes through a KVM to select netbooting" setup plus a TFTP server that held the image you wanted to install under a MAC-address-specific filename, so each machine picked up the intended image. We got the idea from another Boston-area startup (Vanu Inc) that put similar Dell servers in software-configurable cellphone towers, iirc.)
As far as I know most places are still using iPXE and TFTP to load an image with some custom provisioning framework.

It worked really well but I haven’t worked on large scale DCs for a few years now so maybe some new stuff happened

PXE is still the king in large DCs. I can install ~250 servers in 15 minutes with a single xCAT node over traditional gigabit Ethernet. Give another 5 minutes for post-install provisioning and presto!

Your fleet is ready.

Nice! That was my experience as well. Just wanted to make sure I wasn’t falling too far out of date while I switched to some MLops roles for a while.
I don't know about servers and stuff but I'm using PXE to image a Surface Pro right now.
Booting over HTTP would be interesting for devices like the Raspberry Pi. Then you could run without a memory card and have fewer things to break.
I would also prefer HTTP, but Pis can use PXE boot and mount their root filesystem over NFS already :) Official docs are https://www.raspberrypi.com/documentation/computers/raspberr... and they have a tutorial at https://www.raspberrypi.com/documentation/computers/remote-a...
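The Pi end of an NFS root then boils down to one line in cmdline.txt; a sketch with a made-up server address and export path (the docs above have the full walkthrough):

    console=serial0,115200 root=/dev/nfs nfsroot=192.168.1.2:/srv/nfs/pi,vers=3 rw ip=dhcp rootwait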
Once you have PXE you can do all the things -- NFS boot, HTTP boot, iSCSI boot, and so on. There are several open source projects that support this. I think the most recent iteration is iPXE.
That's true, though I always have felt that if I needed PXE+TFTP to boot the bootloader I might as well just load a kernel+initrd from the same place and be done with it; I couldn't remove the TFTP requirement so anything else would just be extra things to configure. If UEFI can really do pure HTTP (as discussed upthread) then I may need to reevaluate. (Well, for Raspberry Pis I'll have to keep TFTP, but maybe in other contexts I can drop it)
iPXE: https://en.wikipedia.org/wiki/IPXE :

> While standard PXE clients use only TFTP to load parameters and programs from the server, iPXE client software can use additional protocols, including HTTP, iSCSI, ATA over Ethernet (AoE), and Fibre Channel over Ethernet (FCoE). Also, on certain hardware, iPXE client software can use a Wi-Fi link, as opposed to the wired connection required by the PXE standard.

Does iPXE have a ca-certificates bundle built-in, is there PKI with which to validate kernels and initrds retrieved over the network at boot time, how does SecureBoot work with iPXE?

> Does iPXE have a ca-certificates bundle built-in, is there PKI with which to validate kernels and initrds retrieved over the network at boot time

For HTTPS booting, yes.

> how does SecureBoot work with iPXE?

It doesn't, unless you manage to get your iPXE (along with everything else in the chain of control) signed.

https://www.google.com/search?q=raspberry%20pi%20pxe%20booti...

There was an article recently about somebody doing it on an Orange Pi [1]. IIUC, you can have one RasPi with an SD card (I use USB drives but w/e) be the PXE server and then the rest can all network boot.

[1]: https://news.ycombinator.com/item?id=40811725

Welcome back, diskless workstations! We've missed you... oh, wait, no, we really haven't.

This is technically neat, but... How often does the memory card break on a Raspberry? How often does the network break (either Raspberry hardware or upstream)? There are fewer things to break when you run from local hardware.

I'd say SD card failures are the most common RPi failures.
Only because people treat them like hard drives and stubbornly run desktop distros on them.

Keep log files and other frequently written things in RAM and the SD card will last forever.

When was the last time an SD card in a digital camera wore out?
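A couple of fstab lines go a long way here (sizes are just examples):

    # /etc/fstab: keep chatty paths in RAM so the SD card stays mostly idle
    tmpfs  /tmp      tmpfs  defaults,noatime,size=128m  0  0
    tmpfs  /var/log  tmpfs  defaults,noatime,size=64m   0  0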

You are thinking about this wrong. Imagine having a single disk image for 100 Pis. Now imagine having to burn that image to a hundred MicroSD cards, and then suddenly you want to update the disk image.

As others have said, you can also use PXE, but HTTP is a bit easier to deal with.

There is a hosting company with something like 44k Raspberry Pis. Are you going to be the guy to update them?

That's one improvement, but network booting can also help us home-gamers who don't have a hundred Raspberry Pis that are all doing the same thing.

Many of us have a handful of Pis at home doing whatever they do, each with their own unique MicroSD card. In this configuration, every time the number of Pis doubles, the overall MTBF for their collective storage halves. Backups are a pain since each Pi is a unique and special snowflake, and are thus somewhat unlikely to actually get accomplished. When a MicroSD card does die, that Pi's configuration and all of the work that went into making it do whatever it does likely disappears with it.

However, when booting over the network:

A handful of Pis are at home doing whatever they do, and booting from a reasonably-resilient NAS (eg a ZFS RAIDZ2 box) somewhere in the house (which is a great idea to have around for all kinds of other reasons, too). Adding more Pis does not decrease storage MTBF at all, since there is no MicroSD card to die. Backups become simple, since ZFS snapshots make that kind of thing easy even if each Pi's disk image is a unique and special snowflake. Space-efficient periodic snapshots become achievable, making it easy to unfuck a botched change: just roll back to an hour ago or yesterday or whenever things last worked and use that snapshot instead. Undetected bitrot becomes zero. Speeds (for many workloads) might even increase, since at least the Pi4 can handle wire-speed network traffic without any real sweat.

It's not a great fit for everyone, but it may result in a net long-term time savings for some of us folks here who tinker with stuff at home if enough steps are automated, and it seems likely to result in fewer frustrating surprises.
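And the rollback story really is that short with ZFS (dataset name hypothetical):

    # snapshot before poking at a Pi's image, roll back if the change goes sideways
    zfs snapshot tank/pis/pihole@pre-upgrade
    zfs rollback tank/pis/pihole@pre-upgrade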

> How often does the memory card break on a Raspberry?

I have no data on this, only anecdata - since I've started being interested in the Home Assistant project, I've seen countless problems with people who've done an upgrade, rebooted their HA Pi and had some kind of Disk IO issue because the card died.

As I understand it, it's the constant logging to disk for logfiles and databases that ends up killing MicroSD cards. It seems to be particularly bad for clones and cheap ones off eBay/Amazon. It's still apparently a problem even for high quality "endurance" MicroSD cards.

Amusingly, most of the things I regularly use Raspberry Pi hardware for require a functional network as well as functional storage on that network.

If I were to netboot these things, then I'd have fewer points of failure than I do now.

I always put the rootfs in the kernel. It mounts on mfs or tmpfs. SD card is read-only. After boot, I can pull out the card. No need to boot over HTTP.
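For reference, on Linux that's two kernel config options (the cpio path is illustrative; whatever you build your rootfs into):

    # embed the root filesystem inside the kernel image itself
    CONFIG_BLK_DEV_INITRD=y
    CONFIG_INITRAMFS_SOURCE="/path/to/rootfs.cpio"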
I remember the glorious AIX machines we had which could boot from tape backups made with a simple "mksysb" command. :)
How slow was that?
If it is pulling a filesystem from tape into memory and booting from that, it could be pretty quick. Reading sequentially from tape, if you are already at the right location which is easy if that location is the start of the tape, isn't particularly slow at all – non-sequential access is where tape storage becomes very slow due to massive latency in the physical mechanisms.
"The network is the computer." It was a shortlived thing.
"Short-lived" depends on your perspective. Cloudflare owns the rights to that trademark now; because they believe their mission furthers that vision: https://en.wikipedia.org/wiki/The_Network_is_the_Computer (and John Cage, the Sun employee who coined the phrase, said he was fine with Cloudflare picking it up: https://spectrum.ieee.org/does-repurposing-of-sun-microsyste...)
I guess Chromebooks are the resurrection of the idea
Thanks to Crostini, Chromebooks are also excellent local computing devices.
Not really. Chromebooks don't use the LAN. They can run code locally, or on a server in a different timezone. However, with Sun, if you needed more CPU you could log into all the machines on your local network - all machines shared the same filesystem (NFS) and passwd (I forget what this was called), so using all the CPUs in the building was easy. It was unencrypted, but generally good enough until the Morris worm.

Of course modern servers have far more CPU power than even the largest LANs back in 1986. Still, those of us who remember when Sun was a big deal miss the power of the network.

> all machines shared the same filesystem(NFS) and passwd (I forget this was), so using all the CPUs in the building was easy.

Sun did this through NIS, originally Yellow Pages/YP, but the name was changed for trademark reasons.

When I worked at Yahoo, corp machines typically participated in an automounter config so your home directory would follow you around; it was super convenient (well, except when the NFS server, which might be your personal corp dev machine under your desk, went away, and there was no timeout for NFS operations... retry until the server comes back or the heat death of the universe). They used a sync script to push passwords out rather than NIS, though --- a properly driven sync script works almost as fast, but has much better availability, as long as you don't hit an edge case (I recall someone having difficulty because they left the company and came back, and were still listed as a former employee in some database, so production access would be removed automatically).

That's because Sun just bolted stuff on to Unix. Bell Labs actually achieved that goal in Plan 9 which is still very much alive.
GRUB can boot a kernel from HTTP too.
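Roughly, assuming a GRUB built with its network modules (server address and paths are placeholders):

    # grub.cfg sketch: fetch kernel and initrd over HTTP
    insmod net
    insmod efinet
    insmod http
    net_bootp
    linux (http,192.168.1.1)/vmlinuz ip=dhcp
    initrd (http,192.168.1.1)/initrd.img
    boot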
I remember doing this to install Solaris while resurrecting an old sparcstation. Fun times!
I didn't realize that. I booted over BootP many times but this is even cooler.
Can you really say you are booting off of something remote when you are really booting a rootfs from a local initramfs of several megabytes?
Not any worse than 32+ megabytes of UEFI booting off of an iPXE bootrom.
That's what I'm saying about hard drives and ROMs
Yeah we didn't need those silly hard drives with their crufty filesystems.
To close the loop, they really need an EFI stub that loads a combined kernel image/ramfs from Drive.
iPXE can already boot from a web server: https://ipxe.org/
Should be possible then, if you "share" the initrd and Linux image?

https://stackoverflow.com/questions/37453841/download-a-file...

Perhaps that's what this "off of" preposition means. I've often wondered.
What people really want is sub-second booting, especially in embedded. It is a hard problem but somehow nobody seems interested in doing the hard CS research to solve it.
There's tons of work on millisecond boot times going on, in kata-containers, confidential computing, and various "serverless" implementations. I wrote a paper about it nearly a decade ago too[1].

[1] http://git.annexia.org/?p=libguestfs-talks.git;a=tree;f=2016...

And I still can't boot my Linux system in a reasonable time. Perhaps the true problem that needs to be solved is that everybody is somehow (forced into) reinventing the wheel every time.
The real problem is Linux is just a kernel - it cannot force you to have good hardware. If you want fast boot you need to start with the hardware: a lot of hardware has a long init sequence, so there is no way the kernel can boot fast, as it cannot finish booting until that hardware is initialized. Then you can look at the kernel; step one is to strip out all the drivers for slow-to-init hardware you don't have (since those drivers have to insert waits into the boot while they check for the hardware you don't have). If you do this you can save a lot of boot time.

Of course in the real world the people who select your hardware don't talk to the people who care about software. So you are stuck with slow boots just because it is too late to go back and do a board re-spin, at a million dollars each, now that we know our boot times are too slow.

It gets worse: even if you select fast-init hardware, that doesn't mean it really is fast. I've seen hardware that claims not to need long inits, but if you don't insert waits in the boot there are bugs.

I haven't kept up with modern Linux - is there a tool that automates that? E.g., records what drivers have been used over some number of boots, and then offers to disable all the drivers that haven't been used.
systemd-analyze records the boot time after the kernel is started, but I don't know if there's an equivalent for the kernel startup.
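For the kernel side there's at least booting with `initcall_debug` on the command line; roughly (commands as I understand them):

    systemd-analyze time            # userspace totals (plus firmware/loader on EFI)
    systemd-analyze blame           # slowest units first
    systemd-analyze critical-chain  # what actually gated reaching the default target
    # after booting with initcall_debug on the kernel command line:
    dmesg | grep 'initcall.*returned'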
I don't think this is related to slow hardware; maybe bad drivers, but not slow hardware. I consistently get a faster boot on both Windows and macOS with noticeably lower specs than my Linux desktop. The Linux boot is fast, some 5 sec maximum. But Windows is almost instant - of course it uses the notorious fast startup, but even so I expected more from Linux, being as lightweight as it is.
Well, in many cases people __can__ get a kernel to have decent boot times if they pour sufficient time and energy into it.
At least on my completely unoptimized desktop, the majority of boot time is already spent in UEFI firmware, not in kernel or userspace startup. So realistically there is limited opportunity to optimize the boot times.
Linux boots to your application in 125 ms. There's no hard problem there, just bloat, general-purpose systems, and hardware not designed to boot fast.
"Linux" is more than just the kernel.

Pretending there is no problem is part of the problem.

That's not what I'm doing. I'm saying if your distro, your hardware, or your setup takes significantly more time, examine why. The hardware part is tough because we're effectively locked into whatever is cheap on the market; everything else is 100% fixable. Fast booting is not really a hard problem, especially in embedded where you know & control the hardware.
Yet, just about any system I've used boots slowly. Your argument is like saying that software bugs are not a real problem because you can simply find and fix them if you look hard enough.
My Framework laptop took 2.423s after starting userspace to be "done", without me making any effort on that. (Measurements for the part before that aren't useful on this setup because my initrd waits for a user-entered passphrase.)

It's simply not rocket science.

2.423 seconds is on the fast end of what I've seen, congratulations. For most systems I've seen it would be at least 5 seconds and when comparing that to loading a webpage, I would consider closing the tab.
> hard CS research

I'm surprised to see this, in what way does it require hard CS research? Isn't it just debugging and implementation pain?

I can only guess here. But remember that software package management was a pain too, and it took someone doing a Ph.D. on the topic to give us Nix (and it still isn't perfect).
Ah I see where you're coming from. I don't see any reason to expect that's the case here though. Package management has some fairly obvious tough CS problems inherent in it -- dependency resolution with version upgrades inherently feels NP-hard, for example. Whereas booting is about making hardware that initializes quickly and then making software that abstracts over a variety of hardware well... within the development budget you have. And then you're stuck with backward compatibility as everything changes. I could be wrong here but it feels like a costly engineering problem more than anything else.

(Note I'm not saying you can't do a PhD in it and improve the situation -- you could probably do that for any problem, honestly. Just saying that I think you could get most of the way there by just paying the engineering cost.)

Dependency resolution with versions is indeed NP-hard, if versions "conflict" (2 versions of the same package can't be installed at the same time). What if they don't conflict, and you just wanna install the fewest possible package versions to satisfy all dependencies? That's NP-hard too.
I suppose you could use a generic SAT solver for that.

EDIT: https://hal.science/hal-00870846/file/W5_PX_Le_Berre_On_SAT_...

I'm just seeing that this is a forever lingering problem and I think if only engineering costs were involved the problem would have been solved by now.
It is not hard research, it is "just" a lot of plain old boring engineering.
He casually mentions he boots off S3 as well. Swapping S3 for Google Drive mostly adds latency, apparently.

But still, nicely done!

Redundant S3 is easy-ish to self-host, though, so that could actually be a decent way to set up reliable diskless workstations.
At that point you might as well run Ceph and give your diskless workstations a writable block device via RBD. The overhead of an S3 operation per file is quite high.
There are some easier solutions for just S3, like MinIO, which I imagine is likely much easier to set up than Ceph (though Ceph is not that hard with cephadm).
By the time you add the word "redundant" in the mix, nothing is really easy anymore.
His S3-compatible bucket was locally hosted; it did not go over the internet.
They, not he.
Love the one-upmanship!

I read the "How to shrink a file system without a live CD" one. So here's mine: how to shrink a file system without a live CD, as part of a single-command install script of a program.

My sbts-aru sound localizing recorder program does that on the pi.

I’m willing to bet that no other project on the Internet does this, but I’d love to be surprised. Let me know.

It installs the majority of the code, then reboots and shrinks the file system. It creates additional partitions, labels them, and installs file systems. Then it finishes the install and comes up running.

So the procedure goes as follows.

  sudo apt install -y git
  git clone https://github.com/hcfman/sbts-aru.git
  cd sbts-aru
  sudo -H ./sbts_install_aru.sh
That’s it. It comes up running a recorder on a system with multiple partitions running an overlayFS on memory on the first one.

It will even work on a Raspberry Pi Zero (it works on all Pi versions) and it doesn't matter if it's Raspbian or Bookworm.

Speaking of booting Linux from places, what I would like to be able to do is carry a Linux image around with me on my (Android) smartphone, plug the phone into a USB port on a laptop and boot the Linux image from there on the laptop. Does such a thing exist?
This really is nice to have and a sibling comment has already linked to DriveDroid, the solution I'm using for this.

Back in the CyanogenMod days, I had an even better setup: there was an app that also let you emulate a USB keyboard and mouse, so I could, with some command-line trickery, boot a computer from an ISO on my phone, then use that same phone as a keyboard and mouse/trackpad, including in the BIOS.

A magisk module to do just that:

https://github.com/nitanmarcel/isodrive-magisk

needs root, and your kernel needs the USB mass storage gadget support module enabled, which, sadly, LineageOS doesn't enable by default.

I have used this many times on my phone running LineageOS. Did not have to enable any kernel features.
On phones where the vendor kernel has this option enabled, Lineage also enables it, e.g. most LGs.

But Lineage does not enable it on all kernels, even if it could just be enabled. I observe this on all of my Samsungs, for example.

You can use this app to see which USB gadget options are enabled on your kernel: https://github.com/tejado/android-usb-gadget

Makes sense. My phone model is a Xiaomi. Don't know why Samsung would ship their kernels without ConfigFS support but I have never had such issues.
It's not about `ConfigFS` as a whole, but specifically `CONFIG_USB_CONFIGFS_MASS_STORAGE`, that is left disabled, while lots of other `CONFIG_USB_CONFIGFS_BLA` are enabled.

This and more can be seen in the `device info` screen of the App mentioned above

Should have said *proper ConfigFS support. Anyway, had no prior interest in this kernel feature until you mentioned the anomaly that is specific to certain vendors.

You can also do `zcat /proc/config.gz | grep CONFIGFS_` in a root shell (su) inside Termux to see what options are set in the default kernel.

Also requires Root access
not sure if such a thing can work w/o root
I used DriveDroid [0] in the 2010s for this purpose. Handy but never essential. Requires root though.

[0] https://play.google.com/store/apps/details?id=com.softwareba...

Booting Linux off a smartphone would take drive emulation, which is possible, but not easily available.

Rootlessly booting a Linux ON (not from) your phone is possible via the Termux app.

Search for "rootless kali nethunter" on YouTube. See here: https://m.youtube.com/watch?v=GmfM8VCAu-I

That is not booting a Linux kernel at all. It is just using the existing kernel which Android is based on (also Linux).
Glue a bootable usb to your phone.
Yes, do this. Don't under any circumstances try to solve a cute technical challenge -- that would only lead to fun, or worse yet, satisfaction.
It sounds to me like software enlightenment:

https://xkcd.com/1988/

Android stopped exposing USB Mass Storage because it's problematic for the core use case of letting you grab pictures and whatnot from your phone: it requires exclusive access to a filesystem. That wouldn't be a big deal for you, I don't think - you probably just want to create a large file and expose that as a device - but the implications of exposing the SD card (or the virtual SD card) as mass storage are why it went out of style.

I did find this, but it's ancient and may not meet your needs anyway... https://xdaforums.com/t/app-root-usb-mass-storage-enabler-v1...

What do you mean? USB mass storage was much better for the core use case of getting pictures off your phone than the flaky MTP now is.
What I'd like to know is why my 2023 phone is still every bit as flaky as my 2018 phone was. For a while I was blaming my Linux solution but every time I try to use it on Windows it's just as flaky.

Fundamentally, accessing files on a live filesystem is a solved problem, and has been since before smart phones. I don't even know how they made such a broken setup.

(I believe the problem with USB mass storage is that it's closer to an IDE/SCSI protocol than a filesystem protocol. You can't have one bit of the system running around "accessing files" while you've got something else "moving the simulated drive head and writing this sector". In principle you could put the work in to make it all work out, but then it would be as flaky as the media access is now, only for a good reason rather than laziness/lockin.)

Regarding accessing files on a live filesystem being a solved problem:

One question I have is the following: if I take two Linux PCs, why can't I just plug a USB cable between the two and transfer files from one to the other?

Instead the only solution I know is to run an ssh server on one of them and use sshfs on the other.
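Which, to be fair, is only a couple of commands once sshd is running (paths hypothetical):

    # mount the other machine's home directory here, then detach when done
    sshfs user@otherpc:/home/user /mnt/otherpc
    fusermount -u /mnt/otherpc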

The situation with USB is a lot more complicated than it appears. See things like this for instance: https://unix.stackexchange.com/questions/120368/make-a-compu... The controllers that sit between the port and the computer can create significant limitations versus what is theoretically possible if you were directly bit-banging the cords.

You can do this with an ethernet cable, if you have one and an ethernet port on both ends. You can manually set up a network on just that cable and transfer at full speed. (AFAIK all modern ethernet ports are capable of figuring out that they need to crossover in this situation and you haven't needed a special crossover cable in a long time.)
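A minimal sketch of that manual setup (addresses and interface names illustrative):

    # machine A
    sudo ip addr add 10.0.0.1/24 dev eth0
    sudo ip link set eth0 up
    # machine B
    sudo ip addr add 10.0.0.2/24 dev eth0
    sudo ip link set eth0 up
    # then plain old rsync/scp between 10.0.0.1 and 10.0.0.2 at wire speed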

I mean, yes, but ...

If the sd card is mounted by your computer, you can't run any apps on the phone that need to use the sd card. That means, apps you moved to the SD card for space reasons, or apps that might save photos to the SD card (such as messengers).

If your computer messes up the filesystem, then you're in a world of hurt.

Couldn't they emulate it then?

If multiple apps can access the filesystem at the same time, why couldn't some app (a background system process) also read from / write to the filesystem in an Android multi-access-compatible way, while serving the mass storage device API on the other side?

That's possible, but really challenging to do because mass storage is block oriented, the host (desktop) is likely to do read and write caching, and there's no mechanism for a host to say I finished writing this file, and there's no mechanism for a device (phone) to say blocks changed from under you.

The paradigm for block-oriented filesystem access is exclusive access, and filesystem code is built around that. There are some niche filesystems around multiple simultaneous access to block devices, but I don't know if any are open source; mostly people don't set up SCSI/SAS/DAS disk arrays with two hosts anymore, and when they do, they're much more likely to have exclusive access with failover than simultaneous access.

If you had a team of really detail oriented developers capable of getting this done for Android and desktop platforms, wouldn't you rather they work on something else?

Another approach might be to build a virtual filesystem to export as a block device on usb connection that's essentially a snapshot of the current one, and then you sync any changes that were written on usb disconnect, but then you need to manage divergent changes and that's unfun too.

SMB over USB would be terrible in many ways, but probably handle this use case much better.

> If you had a team of really detail oriented developers capable of getting this done for Android and desktop platforms, wouldn't you rather they work on something else?

If you can improve the world (mass storage device is really wide spread) this way, why not?

For better or worse, most people's photos never get transferred to a computer. Heck, there's a ton of people who don't have a non-phone computing device. Pushing their photos to the cloud so they see them after their current device dies is better than working on a new filesystem that allows for multiple host simultaneous access and porting that to everything. I could make a huge list of more tractable things that this hypothetical team could work on to make Android better for way more people.

Top on my list would be getting it so the touch screen just works every time. I can't count the number of times I've had to turn the screen off and on, because the touch screen came up in a way that I can't swipe up to get the code entry because swiping from the bottom to the top of the screen doesn't move it enough. I've had this happen on pretty much all my androids.

Things like booting faster would actually be nice. Especially since sometimes phones reboot in pockets. Setting up applications for faster starting would be amazing. It's not all in Google's court, but a basic hello world with the Android IDE starts up rather slow, even if you've noticed you need to compile a release build.

You can just use Termux+rsync to get files to or from your phone.
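A sketch, assuming Termux's sshd (which defaults to port 8022; phone IP is illustrative):

    # on the phone, in Termux: set a password, then start sshd
    pkg install openssh rsync
    passwd && sshd
    # on the computer:
    rsync -av -e 'ssh -p 8022' 192.168.1.50:/sdcard/DCIM/ ~/phone-photos/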
Why not just use Samsung's DeX, which gets you a Linux desktop when you plug your phone into a USB-C monitor/console?
Wasn't Linux on DeX discontinued?
DeX does not need an underlying OS. You're conflating features. DeX simply requires a monitor. No computer.
Yes it was.
Different use case and requirement for a Samsung device?
You could set up a PXE boot server on the Android phone, then set up the computer to boot off it.
Why does it need to be on the phone? Carry a normal USB stick.
It doesn't, but consider that the vast majority of us already carry our phones everywhere.

Would carrying an extra USB stick be that big of a hassle? No, but I can see the need for booting up a ready Linux image being extremely situational so the vast majority of time you're just carrying dead weight.

You can have a stick with one boot partition and one commonly formatted (FAT32/exFAT/ext) partition, with the Linux image stored in the latter. Then it's like a normal stick that can also be used to boot Linux. Ventoy automates this process, allowing you to throw any ISO in a specific directory and boot it.
What do you do about the USB cable though? A flash drive you can plug in directly, it's guaranteed to work. A phone requires you to either carry around an extra cable (arguably more annoying to carry than a flash drive) or take the risk that you won't have the right cable available nearby when you're trying to (say) boot a laptop.
Wouldn't it be cool if these general purpose computers in our pockets were useful in novel ways?

You're only allowed to use it in the prescribed fashion.

The USB stick will be forgotten or lost much quicker than the phone.
I have a few Verbatim "Tuff and Tiny" USB drives. Like this but without the plastic clip part. I can fit them in my wallet because it's about the thickness of 2 credit cards, which are also in my wallet.

https://www.amazon.com/Verbatim-8GB-Clip-Flash-Drive/dp/B00N...

Reminds me of the credit card sized (literally [1]) USB stick I still have somewhere but it was too annoying to carry around and hope that next time that cheap stick still works...

Using the phone directly still seems the cleanest and most reliable way. Or maybe a combination of both, like those magnetic plugs [2] but with an integrated USB stick. Bonus points if you don't have to take it out at all (until needed) by either connecting the other magnetic part for data transfer and charging or data through USB OTG and wireless charging. One can dream... but the technology will shrink even further so who knows.

1. https://www.amazon.com/Enfain-Flash-Drives-Memory-Credit/dp/...

2. https://www.adafruit.com/product/5524

USB sticks attached to keychains are already widespread in some communities (DJs for example), I'm sure us software people could do it too if we wanted to :)
I leave my keychain at the door when I get home. This is probably a common practice.
That makes sense. I once got falsely identified as a DJ, but it was just a YubiKey.
Also attach a USB killer for extra thrill
I glue phones to all my USB sticks for just this reason.
This inspired me to study the possibility of booting one Linux and then chrooting into another. The reason being that I cannot update the first one, it being too old, but it has important janitorial purposes. With the help of ChatGPT I made this script, where everything seems to work, including X-windowed programs.

    # mount the other distro's root partition
    sudo mount /dev/sdb2 /mnt
    # allow local connections to this X server (run as the logged-in user, not root)
    xhost +local:
    # share the X socket and credentials with the chroot
    sudo mount --bind /tmp/.X11-unix /mnt/tmp/.X11-unix
    sudo cp ~/.Xauthority /mnt/root/.Xauthority
    # bind the virtual filesystems the guest expects
    sudo mount --bind /dev /mnt/dev
    sudo mount --bind /proc /mnt/proc
    sudo mount --bind /sys /mnt/sys
    sudo mount --bind /dev/pts /mnt/dev/pts
    # enter the chroot with its own hostname namespace
    sudo unshare --uts chroot /mnt su -l timonoko
    # tear everything down again after the session ends
    sudo umount /mnt/proc
    sudo umount /mnt/sys
    sudo umount /mnt/dev/pts
    sudo umount -l /mnt/dev
    sudo umount -l /mnt/tmp/.X11-unix
    sudo umount -l /mnt
Mid-90s, a friend of mine installed Windows NT to, and booted it from, a DAT tape
While not booted from tape, wimlib's support for pipable WIMs means that, through some shenanigans, you can install modern Windows from tape. I had a bootstrap ISO that would fire up Windows PE, mount my USB DAT tape drive, rewind it, prep the onboard storage, then image direct from tape to disk and make it bootable.

I posit that because wimlib supports pipable WIMs, we could pipe an endless stream of QR codes to it (thus making "installing Windows from QR codes" possible)...

I got PTSD from installing Windows 95 from floppy and after 40 floppies getting read errors...
My first IT job involved installing a lot of Windows 95 from floppy disk. Luckily each PC I bought came with a set, so I'd build up some "good sets" over time after discarding all the disks that had read errors.
The first time I installed SLS Linux (pre-Slackware), it took some 25 1.44MB floppies and I owned ~20 empty ones. I left the installer running overnight and downloaded more floppies the next day at school. It took an extra day because some floppies had bad sectors, and had to be re-downloaded..
Somewhere in my parents' house there is a massive box with floppies for Office 95 (or whatever it was called back then). Not 40-floppies massive, but still a large number.

I think we managed to only ever install it once successfully without error.

Also, fun semi-related fact: In my country we called 8" and 5.25" floppies "floppies", and the smaller 3.5" ones were called "stiffies" - because the larger ones were floppy, and the smaller were, well, stiffer. Do with this information as you please.

i need to know which country this is, please!
Happened also in Finland. It was "lerppu" (floppy) for the flexible ones and "korppu" (hard biscuit) for the hard ones.
I'm going to wager South Africa based on this blog post: https://jasonelk.com/2015/12/who-knew-that-the-rest-of-the-w...
Certainly not the UK, where inserting your stiffie into something has rather a different connotation….
South Africa!
How long did it take? Seek times for tapes can be minutes, so fragmentation matters a great deal here.
Installation took more than overnight. Once it was up and running, it was remarkably responsive in the short run, but would invariably need to do a lot of seeking to launch any app. The sort of thing where, if you were sitting nearby, you could give it some input every couple of minutes.
For some fringe use cases one could drop a readily installed (and defragmented!) OS image to the tape and boot it up. I've only had some floppy tape drives and parallel-port attached Ditto. They didn't support random access, or at least I never had a driver that could do that.
I seem to recall some vendor (HP?) selling external tape drives at some point that supported bootable, bare metal Windows restore from tape.

I believe it worked by supplying the recovery software as a bootable ISO image in ROM on the drive and emulating a bootable (USB? SCSI?) CD-ROM drive at boot.

DAT tapes presented as disk drives, sector addressable. And you can dump a .iso to a disk to make it bootable, so that sounds right.
That must have been fun.

In the late 90s I worked in the server support line for DEC, and the number of times we had to talk people through the "invisible F6 prompt" was nuts.

can you explain?
If your intended system volume was going to require drivers that weren't built into WinNT, you needed to press F6 at a specific point during installation. This would allow you to load a driver that makes the volume visible / usable.

This process was specific to installing storage drivers needed for the system volume. All other driver installation happened elsewhere.

My memory says there was actually a "Press F6 to load system storage drivers" prompt or something displayed by the installer, but it wasn't displayed for all that long a time and I imagine it was effectively invisible for many people. I recall spamming F6 to make sure I wouldn't miss the prompt.

Actually there were two separate times during the installation process that you could press F6 to provide storage drivers. The first had no visible prompt! The second has the prompt you remember.

Here's how I remember it: The Windows CD itself had drivers built into the installer so that it could discover hardware. However, if you had a brand new storage controller, you might find that even Windows NT CD's installer wouldn't recognise it, so it would tell you that there were no storage devices found. To get around this you had to press F6 right at the start of the CD boot, before the Windows logo appeared. After a few seconds you could provide your storage drivers on a floppy disk, and the Windows installer program would continue to load. This time, the installer would recognise your disks. Then during the installation you would get a visible F6 prompt to provide your storage drivers. This allowed you to provide extra storage drivers that would be bundled with the installed OS.

Most people didn't know about the first F6, because I think NT installer had some sort of very basic, generic storage drivers that would work in most cases. If you had some very recent array controller, you would likely need to know about the "invisible F6 prompt".

Any current or future OS should have its filesystem completely decoupled from the OS itself -- thus allowing booting/running the OS off of any kind of plain or esoteric storage device, local or network, present or remote, physical or cloud-based, interrupt/DMA based or API/protocol based, block-based or file-based, real or virtualized, encrypted or not encrypted, tunnelled or not tunnelled, over another protocol or not over another protocol, using TCP/IP or UDP or even just raw 1's and 0's over whatever electronic communication channel someone invents next, etc., etc.

Old time OS programmers typically didn't need to think about these things...

Current and future OS designers might wish to consider these things in their designs, if they desire maximum flexibility in their current or future OS...

Anyway, an excellent article!

Related:

https://en.wikipedia.org/wiki/Coupling_(computer_programming...

https://thenewstack.io/how-decoupling-can-help-you-write-bet...

https://softwareengineering.stackexchange.com/questions/2444...

I did something similar some time ago: Booting from an RPM repository on a Tumbleweed installation DVD.

My initial goal was to write a fuse filesystem for mounting RPM packages, but I wanted to see how far it goes. Turns out, pretty far indeed: https://github.com/Vogtinator/repomount/commit/c751c5aa56897...

The system boots to a working desktop and it appears like all packages available on the DVD are installed.

We do this all the time in Windows with Citrix. It's called PVS (Provisioning Services). It does a small PXE boot and then streams down the Windows Server image.
A few days ago I was able to boot Armbian on a TV box I got from the trash; it felt so great, now it feels so pedestrian...
Can you boot Google off a Linux drive?
you have no idea how much time gets spent considering how to cold boot Google
I remember first getting my cable modem at the house, and I was able to install BSD over the network using a boot floppy.

That was an "amazing" thing to me back in the day. I had the bandwidth to do it, a simple floppy to start the whole process and...there it was! BSD on my machine.

I'm not sure if you can still do that today. Pretty sure the files were FTP hosted somewhere (or even TFTP). I think today it's all ISOs.

> On the brink of insanity, my tattered mind unable to comprehend the twisted interplay of millennia of arcane programmer-time and the ragged screech of madness, I reached into the Mass and steeled myself to the ground lest I be pulled in, and found my magnum opus.

pulitzer prize nomination material

“…booting Linux off of a Git repository and tracking every change in Git using gitfs.”

That sounds cool!

i keep parsing this headline as "kicking Linux off google drive". huh?

oh.

Tfpt boot gets rediscovered.
You meant to say tftp right? I'm just checking if there is some long lost technology called Tfpt that I have never heard of.
Typo.
But now with someone else's computer (aka "the cloud")
It was always with someone else's computer; we used to call it timesharing and thin clients. :)
It was especially fun when you used someone's entire computer lab during night hours ;)
Yep, I had some fun with PVM - for the audience, that's the alternative that somehow lost to MPI.
I mean,

> Competitiveness is a vice of mine. When I heard that a friend got Linux to boot off of NFS, I had to one-up her. I had to prove that I could create something harder, something better, faster, stronger.

sounds like they're well aware of the traditional way to do it, and are deliberately going out of their way to do something different and weird.

Considering how slow and buggy it is to use as a rootfs, you can instead put an initrd on Google Drive and just boot that. You'll need to make it by hand to get it to a reasonably small size, so picking up a copy of Linux From Scratch, and using musl or uClibc along with BusyBox, will go a long way towards a functional system in a small size.

If you want a fuller system you could try 1) converting the filesystem to tmpfs after boot and installing packages to RAM, or 2) mounting a remote disk image as your rootfs rather than keeping individual files remote. The former will be blazing fast but you're limited by your RAM. The latter will be faster than FUSE, benefit from I/O caching, and not have the bugs mentioned.
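A minimal BusyBox initrd is only a few commands if you have a statically linked busybox binary on hand (applet list and init script are just a sketch):

    mkdir -p initramfs/{bin,dev,proc,sys}
    cp busybox initramfs/bin/
    for applet in sh mount switch_root wget; do
        ln -sf busybox initramfs/bin/$applet
    done
    cat > initramfs/init <<'EOF'
    #!/bin/sh
    mount -t proc none /proc
    mount -t sysfs none /sys
    exec /bin/sh
    EOF
    chmod +x initramfs/init
    (cd initramfs && find . | cpio -o -H newc | gzip) > initrd.gz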

How do you load the initrd?
UEFI provides a pretty complete environment; it would probably not be too hard to write a .efi program that connects to the network and downloads whatever you want from Google Drive (or anywhere else) into RAM and runs it. For that matter, IIRC Linux can already build a combined kernel+initrd into a .efi, so you could make this semi-generic by writing a gdrive.efi that downloads an arbitrary .efi from gdrive and boots it.
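For the combined image part, recent systemd ships ukify, which should make it roughly a one-liner (flags and paths illustrative; assumes a recent systemd):

    # glue kernel + initrd + command line into one bootable UEFI executable
    ukify build --linux=vmlinuz --initrd=initrd.gz \
        --cmdline="console=ttyS0" --output=combined.efi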
That would be a very interesting article.