You couldn't map all the properties of the resource fork into a UFS inode block of the era. It held things like the icon. More modern filesystems may have larger directory structures and can handle that data better.
I’d say this is not the right way to describe a resource fork. Instead, think of it as two sets of file contents—one called "data" and one called "rsrc". On-disk, they are both just bytestreams.
The catch is that you usually store a specific structure in the resource fork—smaller chunks of data indexed by 4-byte type codes and 2-byte integer IDs. Applications on the 68K normally stored everything in the resource fork. Code, menus, dialog boxes, pictures, icons, strings, and whatever else. If you copied an old Mac application to a PC or Unix system without translation, what you got was an empty file. This meant that Mac applications had to be encoded into a single stream to be sent over the network… early on, that meant BinHex .hqx or MacBinary .bin, and later on you saw StuffIt .sit archives.
That’s why these structures don’t fit into an inode—it’s like you’re trying to cram a whole goddamn file in there. The resource fork structure had internal limits that capped it at 16 MB, but you could also just treat it as a separate stream of data and make it as big as you want.
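As a mental model only (this is a toy sketch, not the real Resource Manager API or its on-disk resource map, which also carries names, attribute flags, and offset tables), the structured view is roughly a map from a 4-byte type code plus a 16-bit ID to a blob:

```python
# Toy model of the structured view a resource fork presents.
# All names and sample data here are hypothetical.
resources = {
    (b"ICON", 128): b"...icon bitmap bytes...",
    (b"STR ", 0):   b"\x05Hello",  # Pascal string: length byte + text
    (b"MENU", 1):   b"...menu template bytes...",
}

def get_resource(res_type: bytes, res_id: int) -> bytes:
    """Look up one resource by its 4-byte type code and integer ID."""
    return resources[(res_type, res_id)]

print(get_resource(b"STR ", 0))  # b'\x05Hello'
```

The point is that lookups are by (type, ID), not by byte offset—the offsets are an implementation detail the Resource Manager hides.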
> While the data fork allows random access to any offset within it, access to the resource fork works like extracting structured records from a database.
So, whatever the on-disk structure, the motivation here is that from an OS API perspective, software (including the OS itself) can interact with files as one "seekable stream of bytes" (the data fork), and one "random-access key-value store where the values are seekable streams of bytes" (the resource fork).
So not quite metadata vs data, but rather "structured data" (in the sense that it's in a known format that's machine-readable as a data structure to the OS itself) and "unstructured data."
The on-disk representation was arbitrary; in theory, some version of HFS could have stored the data and resource forks contiguously in a single extent and just kept an inode property to specify the delimiting offset between the two. Or could have stored each hunk of the resource fork in its own extent, pre-offset-indexed within the inode; and just concatenated those on read / split them on write, if you used the low-level API that allows resource forks to be read/written as bytestreams.
This in mind, it's curious that we never saw an archive file format that sends the hunks within the resource fork as individual files in the archive beside the data-fork file, to allow for random access / single-file extraction of resource-fork hunks. After all, that's what we eventually got with NeXT bundle directories: all the resource-fork stuff "exploded" into a Resources/ dir inside the bundle.
There are multiple layers to the OS API. There is the Resource Manager, which provides the structured view. Underneath it is the File Manager, which gives you a stream of bytes. You can use either API to access the resource fork, and there are reasons why you would use the lower-level API.
One example from the documentation was to provide a backup. For various reasons, it was possible for a resource fork to become corrupt—this was back in the day when classic Mac OS had no protected memory (for shame!), disk was slow, and we didn’t use journaling filesystems. Some programs kept around backup copies of whatever file you were working on. If your data was stored in the resource fork, well, there’s an easy way to get a backup… just open the resource fork as a stream of bytes and copy it to another place on disk. You could copy it to a data fork, and some people even copied it to the data fork of the same file.
The other main reason you would use the lower-level API is because you are writing a program like MacBinary or Stuffit.
> This in mind, it's curious that we never saw an archive file format that sends the hunks within the resource fork as individual files in the archive beside the data-fork file,
Well, there are advantages and disadvantages to that approach. You can already access resources inside a resource fork inside various archive formats, like MacBinary, AppleDouble, and AppleSingle. But you probably do want to preserve the actual byte stream of the resource fork itself. (And there’s also an undocumented compression format for single resources.)
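On the container side, those formats are simple: AppleSingle (and its sibling AppleDouble) is basically a header plus an entry table pointing at the data fork, the resource fork, and metadata blobs. Here's a minimal parser sketch following the layout in RFC 1740 (error handling omitted; the sample payloads are made up):

```python
import struct

APPLESINGLE_MAGIC = 0x00051600   # AppleDouble uses 0x00051607
DATA_FORK, RESOURCE_FORK = 1, 2  # well-known entry IDs from RFC 1740

def parse_applesingle(blob: bytes) -> dict[int, bytes]:
    """Return {entry_id: payload} for an AppleSingle blob."""
    magic, version = struct.unpack_from(">II", blob, 0)
    assert magic == APPLESINGLE_MAGIC
    (count,) = struct.unpack_from(">H", blob, 24)  # after 16 filler bytes
    entries = {}
    for i in range(count):
        entry_id, offset, length = struct.unpack_from(">III", blob, 26 + 12 * i)
        entries[entry_id] = blob[offset : offset + length]
    return entries

# Build a tiny two-entry AppleSingle blob by hand and round-trip it.
payloads = {DATA_FORK: b"data!", RESOURCE_FORK: b"rsrc!"}
header_size = 26 + 12 * len(payloads)
body, entry_table, pos = b"", b"", header_size
for eid, data in payloads.items():
    entry_table += struct.pack(">III", eid, pos, len(data))
    body += data
    pos += len(data)
blob = (struct.pack(">II", APPLESINGLE_MAGIC, 0x00020000) + b"\0" * 16
        + struct.pack(">H", len(payloads)) + entry_table + body)
print(parse_applesingle(blob)[RESOURCE_FORK])  # b'rsrc!'
```

Note the resource fork travels as one opaque byte stream here, which is exactly the "preserve the actual byte stream" property mentioned above.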
The Resource Manager made it to Mac OS X as part of Carbon. The main part of Carbon is gone, but a part of it called CarbonCore survives, and that contains the Resource Manager. If you dig through the docs, you can find it. It was deprecated in 10.8 (which seems really late… the writing was on the wall for resources back when 10.0 hit).
https://developer.apple.com/documentation/coreservices/carbo...
The modern Resource Manager functions in CarbonCore, I think, just use the POSIX API underneath. Undoubtedly, there’s some test suite at Apple that makes sure they work correctly. Also undoubtedly, there are application vendors who wrote code using resources in the 1990s and still have some of it shipping today.
This adage translated to classic Mac OS becomes "Everything is a resource". The Resource Manager started out as developer cope from Bruce Horn for not having access to Smalltalk anymore[0], but turned out to completely overtake the entire Macintosh Toolbox API. Packaging everything as type-coded data with standard-ish formats meant cross-cutting concerns like localization or demand paging were brokered through the Resource Manager.
All of this sounds passe today because you can just use directories and files, and have the shell present the whole application as a single object. In fact, this is what all the ex-Apple staff who moved to NeXT wound up doing, which is why OSX has directories that end in .app with a bunch of separate files instead. The reason why they couldn't do this in 1984 is very simple: the Macintosh File System (MFS) that Apple shipped had only partial folder support.
To be clear, MFS did actually have folders[1], but only one directory[2] for the entire volume. What files went in which folders was stored in a separate special file that only the Finder read. There was no Toolbox support for reading folder contents, just the master directory, so applications couldn't actually put files in folders. Not even using the Toolbox file pickers.
And this meant the "sane approach" NeXT and OSX took was actually impossible in the system they were developing. Resources needed to live somewhere, so they added a second bytestream to every file and used it to store something morally equivalent to another directory that only holds resources. The Resource Manager treats an MFS disk as a single pile of files that each holds a single pile of resources.
[0] https://www.folklore.org/The_Grand_Unified_Model.html?sort=d...
[1] As in, a filesystem object that can own other filesystem objects.
[2] As in, a list of filesystem objects. Though in MFS's case it's more like an inode table...
> I’d say this is not the right way to describe a resource fork. Instead, think of it as two sets of file contents—one called "data" and one called "rsrc". On-disk, they are both just bytestreams.
I think it's a perfectly fine way. You're just coming at it from a wildly different level of abstraction.
One could say yours is not the right way either and jump down into quantum fields as another level.
Once you pushed an app beyond the level of usage the developer had performed in their initial tests, it would crawl to a near-halt, thrashing the disk like crazy on any save. Apple's algorithm would shift huge chunks of the file multiple times per set of updates, when usually it would be better to just rewrite the entire file once. IIRC, part of the problem was an implicit commitment to never strictly requiring more than a few KBs of available disk space.
In a sense, the resource fork was just too easy and accessible. In the long run, Mac users ended up suffering from it more than they benefited. When Apple finally got rid of it, the rejoicing was pretty much universal. There was none of the nostalgia that usually accompanies disappearing Apple tech, especially the kind that gets removed outright instead of upgraded (though one could argue that's what plists, XML, and bundles did.)
Personally, I thought Mac OS X bundles (directories that were opaque in the Finder) seemed like a decent enough replacement for resource forks. The problem was that lots of NeXT-derived utilities munged old Mac files by being ignorant of resource forks, and that was not OK.
A large amount of transition code was written in those years. One well-placed design failure could have cratered the whole project. Considering that the Classic environment was a good-enough catch-all solution, I would have also erred on the side of retiring things that were redundant in NeXT-land.
Resource forks were one of the best victims, 1% functionality and 99% technical debt. The one I mourned for was the Code Fragment Manager. It was one of Apple's best OS9 designs and was massively superior to Mach-O (and even more so wrt other unices.) Alas, it didn't bring enough value to justify the porting work, let alone the opportunity cost and risk delta.
https://www.usenix.org/techsessionssummary/challenges-integr...
Here's some background: https://arstechnica.com/gadgets/2001/08/metadata/
> Once you pushed an app beyond the level of usage the developer had performed in their initial tests, it would crawl to a near-halt
With HFS (unsure about HFS+) the first three extents are stored in the extent data record. After that, extents get stored in a separate "overflow" file at the end of the filesystem. How much data goes in those three extents depends on a lot of things, but it does mean that it's actually pretty easy for things to get fragmented.

A bit more detail: the first three extents of the resource and data forks are stored as part of the entry in the catalog (for a total of up to six extents). On HFS each extent can be 2^16 blocks long (I think HFS+ moved to 32-bit lengths). Anything beyond that (due to size or fragmentation) will have its info stored in an overflow catalog. The overflow catalogs are a) normal files and b) keyed by the id (CNID) of the parent directory. If memory serves, this means that the catalog file itself can become fragmented, but also the lookups themselves are a bit slow. There are little shortcuts (threads) that are keyed by the CNID of the file/directory itself, but as far as I can tell they're only commonly written for directories, not files.
tl;dr For either of the forks (data or resource) once you got beyond the capacity of three extents or you start modifying things on a fragmented filesystem performance will go to shit.
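For the curious, that extent record is tiny: three (start, count) pairs of big-endian 16-bit allocation-block numbers, 12 bytes total (mirroring Apple's `ExtDataRec` type; the sample values below are made up). A decoding sketch:

```python
import struct

def parse_ext_data_rec(raw: bytes) -> list[tuple[int, int]]:
    """Decode an HFS extent data record: three (start block, block count)
    pairs of big-endian 16-bit values, 12 bytes total."""
    fields = struct.unpack(">6H", raw)
    return [(fields[i], fields[i + 1]) for i in range(0, 6, 2)]

# A fork whose first extents start at block 10 (8 blocks long) and
# block 50 (4 blocks); the third slot is unused. Anything more spills
# into the extents overflow file.
rec = struct.pack(">6H", 10, 8, 50, 4, 0, 0)
print(parse_ext_data_rec(rec))  # [(10, 8), (50, 4), (0, 0)]
```

With 16-bit counts and only three slots inline, you can see why a fragmented fork quickly overflows out of the catalog entry.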
Oh, they're not gone -- still very much part of APFS. You can read the contents of the resource fork for a file at path `$FILE` by reading `$FILE/..namedfork/rsrc`
The resource fork is still how custom icons for files and directories are implemented! (Look for a hidden file called `Icon\r` inside any directory with a custom icon, and you can dump its resource fork to a `.icns` file that Preview can open)
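A quick way to poke at this from a script (the `..namedfork/rsrc` path is macOS-specific, so the read is guarded; the helper just builds the path, and the folder path below is a hypothetical example):

```python
import os

def resource_fork_path(path: str) -> str:
    """Path of the resource fork of `path` on macOS (APFS/HFS+)."""
    return os.path.join(path, "..namedfork", "rsrc")

# On a Mac you could dump a custom folder icon's resource fork like so.
# Note the file really is named "Icon" followed by a carriage return.
icon = resource_fork_path("/path/to/folder/Icon\r")
if os.path.exists(icon):  # only ever true on macOS
    with open(icon, "rb") as f:
        print(len(f.read()), "bytes of resource fork")
```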
It was a lot of fun and something I’ve missed in modern computing. Not even desktop Linux really fills that void. ResEdit and the way it exposed everything, complete with built-in editors, was really something special.
One of the big problems with resource forks was that no other system supported them so to host a mac file on a non-mac drive or an ftp server, etc, the file had to be converted to something that contained both parts, then converted back when brought to the mac. It was a PITA.
https://en.wikipedia.org/wiki/NTFS#Alternate_data_stream_(AD...
Every FILE object in the database is ultimately (outside of some low-level metadata) a map of Type-(optional Name)-Length-Value entries, of which file contents and what people think of as "extended attributes" are just random DATA type entries (an empty DATA name marks the default stream you get when you do file I/O).
It's similar to ZFS (in default config) and Solaris UFS where a file is also a directory
Except actually NTFS does have "extended attributes" in the HPFS sense, which were added to support the OS/2 subsystem in Windows NT. And went on to be used by other stuff as well, including the POSIX subsystem (and its successors Interix/SFU/SUA) and more recently WSL (at least WSL1, not sure about WSL2), for storage of POSIX file metadata.
In NTFS, the streams of a regular file are actually attributes of `$DATA` type; the primary stream is an unnamed `$DATA` type attribute, and any alternate data stream (ADS) is a named `$DATA` type attribute. By contrast, extended attributes are not stored in `$DATA` type attributes, they are stored in the file's `$EA` and `$EA_INFORMATION` attributes. I believe `$EA` contains the actual extended attribute data, whereas `$EA_INFORMATION` is an index to speed up access.
Alternate data streams are accessed using ordinary file APIs, suffixing the file name with `:` then the stream name. Actually, in its fullest form, an NTFS file or directory name includes the attribute type, so the primary stream of a file `foo.txt` is called `foo.txt::$DATA`, and an ADS named `bar` has the full name `foo.txt:bar:$DATA`. For a directory, the default stream is called `$I30` and its type is `$INDEX_ALLOCATION`, so the full name of `C:\Users` is actually `C:\Users:$I30:$INDEX_ALLOCATION`. You will note that in `CMD.EXE`, `dir C:\Users:$I30:$INDEX_ALLOCATION` actually works and returns identical results to `dir C:\Users`, while other suffixes (e.g. `:$I31` or `:$I30:$DATA`) give you an error instead. Windows will let you create named `$DATA` streams on a directory, but not an unnamed one.
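The naming scheme composes mechanically; a small helper makes the layering explicit (the semantics are Windows-only, so only the string-building runs anywhere—the `open` call in the comment is what you'd do on NTFS):

```python
def ntfs_full_name(path: str, stream: str = "", attr_type: str = "$DATA") -> str:
    """Fully qualified NTFS name: path, optional stream name, attribute type."""
    return f"{path}:{stream}:{attr_type}"

print(ntfs_full_name("foo.txt"))         # foo.txt::$DATA    (primary stream)
print(ntfs_full_name("foo.txt", "bar"))  # foo.txt:bar:$DATA (an ADS)
print(ntfs_full_name(r"C:\Users", "$I30", "$INDEX_ALLOCATION"))

# On Windows, the short form works with ordinary file APIs, e.g.:
#   open(r"foo.txt:bar", "w")   # creates/opens the ADS named "bar"
```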
By contrast, extended attributes are accessed using dedicated Windows NT APIs, namely `NtQueryEaFile` and `NtSetEaFile`.
I'm not sure why Windows POSIX went with EAs instead of ADS; I speculate it is because if you only have a small quantity of data to store, but want to store it on a huge number of files and directories, EAs end up being faster and using less storage than ADS do.
HPFS had a different approach, handling EAs internally, but OS/2 did create an extra file on FAT16 filesystems to store EAs, which could point to the origin of `$EA`. (HPFS itself has special EA handling implemented in its FNODE, the equivalent of an inode/FILE entry.)
I do not recall the EAs actually being used anywhere by new code, though; I'm quite shocked by the mention of WSL. The old POSIX subsystem originated before ADSes, I think, and might have decided to avoid creating more data types.
My quip about the difference between Linux/Irix xattrs relates to the architectural design of the APIs: the Irix-style xattr API (copied by Linux) is rather explicitly designed for short attributes. I don't know if it's still current, but I recall something about the API itself limiting it to a single page per attribute. Come to think of it, that would match certain aspects of Direct I/O that AFAIK were also imported from Irix...
Oh, and BTW - NTFS internal structures being accessible as "normal" files is one of the design decisions inherited from Files-11 on VMS, one I quite like from architecture cleanliness pov at the very least.
This explains it: https://learn.microsoft.com/en-au/archive/blogs/wsl/wsl-file...
uid, gid, mode, and POSIX format timestamps are stored in an EA. It also mentions file capabilities being stored in an ADS. On Linux, capabilities and ACLs are stored in xattrs, so that seems to imply that xattrs are stored in ADS not EA.
> Old POSIX subsystem originated before ADSes I think, and might have decided to avoid creating more data types.
I'm not sure about that, I think support for ADS has been in NTFS from its very beginnings, it was designed to support it from the very start.
Actually, from what I understand, the original design for NTFS – which was never actually implemented, at least not in any version that ever shipped to customers – was to let users define their own attribute types. The reason why their names all start with $, is that was supposed to reserve the attribute type as "system", user attribute types were supposed to start with other characters (likely alphabetic). And that's the reason why they are defined in a file on the filesystem, $AttrDef, and why the records in that file contain some (very basic) metadata on validating them (minimum/maximum sizes, etc). If they were never planning to support user-defined attribute types, they wouldn't have needed $AttrDef, they could have just hardcoded it all in the code.
Looking at NTFS from the on-disk structure side, it always seemed quite obvious to me that a lot of the accolades given to BeFS applied to NTFS; what was lacking was anything actually using those abilities. IIRC a lot of the indexing system is actually used by Windows Search, which in tech spaces I always found mentioned as a "useless thing I disabled", yet I later found offices where people are very much dependent on the component (it helps that MS Office installed document handlers to index its documents in it).
Microsoft had some very grand plans in this area... Cairo, OFS, WinFS... but they just kept on getting delayed, cancelled, pulled from the beta for too many issues. I think contemporary Microsoft has lost interest in this (it was something Bill Gates was big on) and moved on to other ideas.
It got used occasionally - not a lot. I had a newsgroup reader that would store the date of the last time you downloaded items for a group in an EA (of the file that had the items).
E.g. Dropbox, which syncs some extended attributes (and uses some for internal metadata), seems to store them in the ADS on Windows.
You can't actually open a security descriptor attribute and modify select bytes of it to create an invalid security descriptor, as you would if it were a general purpose stream.
If you look at the Linux kernel source code, `fs/ntfs3/ntfs.h` contains the following:
struct ATTRIB {
enum ATTR_TYPE type; // 0x00: The type of this attribute.
__le32 size; // 0x04: The size of this attribute.
u8 non_res; // 0x08: Is this attribute non-resident?
u8 name_len; // 0x09: This attribute name length.
__le16 name_off; // 0x0A: Offset to the attribute name.
__le16 flags; // 0x0C: See ATTR_FLAG_XXX.
__le16 id; // 0x0E: Unique id (per record).
union {
struct ATTR_RESIDENT res; // 0x10
struct ATTR_NONRESIDENT nres; // 0x10
};
};
So the name field isn't specific to `$DATA` attributes; every attribute has it. However, for most attributes either the name is zero bytes, or it is a hardcoded name (like `$I30` for directories). Is `$DATA` the only one that can have different instances of the attribute with arbitrary names?

Now, the implementation in ntfs.sys is another thing, and I have no idea if it's just an unused code path or if something would explode. From what I've heard, Microsoft ended up in a situation where people are scared to touch it, not because of code quality but because they're scared of breaking something.
ntfs.sys has validation checks in it which prevent you from directly creating anything other than named or unnamed `$DATA` attributes on a regular file, and named `$DATA` attributes on a directory, and which route (indirectly) the creation of other stuff (directories, file names, standard attributes, EAs) through the appropriate APIs. If you try to do anything funky, you'll get an "Access Denied" error code returned by ntfs.sys.
The API preventing arbitrary messing up is a separate (and good and valid) concern.
From reading the source code of the Linux kernel NTFS driver (the ntfs3 one in the latest Linux kernel, not the older one it replaced), its (pretty reasonable) strategy is just to ignore things it doesn't expect. But I don't know what ntfs.sys does in such a scenario, I've never tried.
Application metadata—describing what file types an application could open and what icons to use for file types matching the application's creator code—was stored in the resource fork of the application, but file metadata was never stored in the resource fork. File types, creator codes, and the lock, invisible, bozo, etc. bits were always stored in the file system.
See for example the description of the MFS disk format at https://wiki.osdev.org/MFS#File_Directory_Blocks
I can probably look it up and figure it out myself, ah, the joys of learning about obsolete tech!
Microsoft went a different route with its long filename extensions (Joliet) – they simply created a whole different (UCS-2/UTF-16 encoded) directory tree. An ISO 9660 implementation that's compatible with Joliet will prefer the Unicode directory hierarchy and look there for files.
If you want to know about the different types of CDs, you'll want to know about the various colors: https://en.wikipedia.org/wiki/Rainbow_Books
In addition, the first 32 kB of an ISO 9660 volume are unused, which allowed tricks like putting another filesystem's metadata there.
By carefully arranging metadata on disk, it was then possible to make essentially overlapping partitions, stuffing each filesystem's metadata into areas unused by the other, with files reusing the same space.
Prefixing the file name with a single dot - is this a file system convention ? Or just a "good idea" ?
But what I'm wondering about is the idea of associating (for example) "myfile.xyz" and ".myfile.xyz". I've never heard of this as a convention for associating metadata.
NTFS ADS were created to accommodate Mac OS resource forks on network volumes when using AFP.
From HPFS it was taken by SGI XFS (the ancestor of Linux XFS) and MS NTFS, both in 1993.
From there it has spread to various other file systems and specifications.
The concept of resource forks is earlier, but both are examples of using alternate data streams in a file.
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE
https://support.apple.com/en-us/102064

I don't recall there ever being a way to turn it off for local volumes.
It seems to be the first time I'm seeing Apple themselves officially recommending a "defaults write" command.
defaults write com.apple.desktopservices DSDontWriteUSBStores -bool TRUE
Re: defaults: https://support.apple.com/guide/terminal/edit-property-lists...
> At core Asepsis provides a dynamic library DesktopServicesPrivWrapper which gets loaded into every process linking against DesktopServicesPriv.framework. It interposes some libc calls used by DesktopServicesPriv to access .DS_Store files. Interposed functions detect paths talking about .DS_Store files and redirect them into a special prefix folder. This seems to be transparent to DesktopServicesPriv.
> Additionally Asepsis implements a system-wide daemon asepsisd whose purpose is to monitor system-wide folder renames (or deletes) and mirror those operations in the prefix folder. This is probably the best we can do. This way you don’t lose your settings after renaming folders because rename is also executed on folder structure in the prefix directory.
Unsurprisingly, you can no longer do anything like this with SIP. If you're willing to disable SIP, there are forks of the project that apparently still work.
I can see after .DS_Store was allowed, it was no problem for other engineers to approve .fseventsd or .Spotlight-V100 or other nonsense that has cropped up over the years.
And I can't tell you how many filesystems I've had "corrupted" with these sorts of files.
Mostly SD cards, usb flash drives, but occasionally something horrible.
for these kinds of things I usually run:
rm -rf .DS_Store .Trashes ._.Trashes .fseventsd .Spotlight-V100
and quickly eject the drive before something else is written.

If you've had a disk that is going bad and you need to copy stuff off of it, the LAST thing you want is to index the whole thing and start writing to it.
seriously, there should be a setting.
Put that in a script and add it to your crontab.
Besides, there are .DS_Store I really don't wanna delete. Notably, there are git repos which have erroneously committed .DS_Store files; I don't wanna make those repos dirty by deleting them.
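A sketch of that cleanup as a script, with the git-repo caveat handled by skipping any directory tree that contains a `.git` (the junk list and demo layout below are illustrative; adjust to taste):

```python
import os
import shutil
import tempfile

JUNK = {".DS_Store", ".Trashes", "._.Trashes", ".fseventsd", ".Spotlight-V100"}

def clean(root: str) -> list[str]:
    """Remove macOS junk files/dirs under root, leaving git repos untouched."""
    removed = []
    for dirpath, dirnames, filenames in os.walk(root):
        if ".git" in dirnames:  # inside a repo: leave the whole tree alone
            dirnames[:] = []
            continue
        for name in list(dirnames):
            if name in JUNK:    # e.g. .fseventsd is a directory
                shutil.rmtree(os.path.join(dirpath, name))
                dirnames.remove(name)
                removed.append(os.path.join(dirpath, name))
        for name in filenames:
            if name in JUNK:    # e.g. .DS_Store is a plain file
                os.remove(os.path.join(dirpath, name))
                removed.append(os.path.join(dirpath, name))
    return removed

# Demo on a throwaway tree: junk at the top level goes, the committed
# copy inside a repo stays.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "repo", ".git"))
open(os.path.join(tmp, "repo", ".DS_Store"), "w").close()
open(os.path.join(tmp, ".DS_Store"), "w").close()
removed = clean(tmp)
repo_copy_kept = os.path.exists(os.path.join(tmp, "repo", ".DS_Store"))
print(len(removed), repo_copy_kept)  # 1 True
```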
You have a good coding style.
It wasn't Apple's fault, but it still would have been nice if there was a way to turn them off.
When is someone going to copyright .gitignore? You could register gitignore.me right now! Fame, riches, lunch with Myhrvold[1][2]!
[0]: https://www.bbc.com/news/technology-42580523
[1]: https://en.wikipedia.org/wiki/Intellectual_Ventures
[2]: https://www.amazon.com/Modernist-Cuisine-Science-Stainless-S...
1. Pirates uploaded a folder full of copyrighted files to Google Drive, accidentally including some DS_Store files along with the actual media.
2. The copyright owner filed a DMCA takedown on the whole folder, accidentally claiming ownership of a bunch of generic DS_Store files.
3. The above two steps have likely happened many times, not just once.
4. Google's takedown system now automatically flags DS_Store files as having multiple copyright violations.
5. A Google employee might be able to whitelist a user's individual DS_Store files to temporarily suppress the violation on their account, but since they can appear in different folders with different data and are constantly receiving new copyright claims, their system likely errs on the side of caution and continues to flag them as copyright violations so that Google doesn't accidentally lose its safe harbor protections.
In theory, a Google engineer could code in a special case to avoid this problem, but good luck finding and talking to one who's authorized to do so; Google is notorious for having one of the lowest employee-to-revenue ratios in the world and for writing useless FAQs instead of having a proper support channel for when things go wrong.
And then in this alternate universe, pirates start naming all of their files ".DS_Store"!
If the person who finally managed to figure it out ends up reading this, thanks for the resolution :)
Growing up with both System 7.5 / OS X and Windows machines, the Macs never seemed inclined to show me extraneous files, filetypes, and other “how the computer works” implementation details. It’s just so odd to my mental model of it all to see this file end up everywhere.
It is very ugly when files are shared from a Mac to people on Windows though. I think it gives a bad first impression for anyone who might be thinking of transitioning to the Mac.
Same place you should put rules for Emacs / Vim swap files.
Maybe the benefits / drawbacks would be different for an open-source project with a lot of contributors.
Banning someone because they commit a file you don’t like is definitely a sign of a controlling person.
Thanks for giving me a new interview question.
Apple's polish has always been more about the surface than the internals.
For operating systems it must be straight up impossible.
Mostly only due to misbehaving hardware. Something that should really not happen on a Mac. And "filled" is way hyperbolic, there usually isn't a lot of it.
It seems like a better solution.
This is my number one frustration with the Finder.
You can customize the look and size of individual folder windows in many interesting ways, à la the Classic Mac OS Finder, which is a really great feature. But if you blow through that same folder in a browser window, most of those customizations are lost, overwritten with the settings of that browser window, even if you never change anything.
What's the point of allowing all of these great customizations when they're so easily clobbered?
I have a global hot key to bring up the Applications folder. I'd love to customize the look of that window, but it's pointless. Whenever I hit that hot key I have no idea what I'm going to get. It's always getting reset.
By the way, the reason it does this is because the Finder has no way to set a default browser window configuration. So instead, it just leaves behind the current browser settings in each folder it visits. Super frustrating.
Not global, but as long as you're in the Finder cmd-shift-A opens the Applications folder. cmd-shift-U opens the Utilities folder.
I guess macOS probably just uses GNU tar? It's kind of surprising it wasn't modified or configured by default to ignore .DS_Store.
It was, but not by default.
If you export COPYFILE_DISABLE=true then tar will skip .DS_Store files.
https://old.reddit.com/r/MacOS/comments/lvju40/comment/gpc8i...
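Scripted, the call shape looks like the following (per the tip above, the env var matters on Apple's bsdtar, which then skips the extra metadata members; GNU tar elsewhere just ignores it, and the directory layout here is a throwaway example):

```python
import os
import subprocess
import tempfile

def make_tarball(src_dir: str, out_path: str) -> None:
    """Create a gzipped tarball with COPYFILE_DISABLE set in the environment
    (a no-op for GNU tar; affects Apple's bsdtar as described above)."""
    env = {**os.environ, "COPYFILE_DISABLE": "1"}
    subprocess.run(
        ["tar", "-czf", out_path, "-C", src_dir, "."],
        check=True, env=env,
    )

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "project")
    os.mkdir(src)
    open(os.path.join(src, "file.txt"), "w").close()
    out = os.path.join(tmp, "project.tar.gz")
    make_tarball(src, out)
    created = os.path.exists(out)
print(created)  # True
```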
Since my network shares are on a local Synology, it's not a big deal for me. I have run into them at work before, and it does create quite the mess.
[0] https://superuser.com/questions/212896/is-there-any-way-to-p...
(setq dired-omit-mode t
      dired-omit-files "^\\.DS_Store$\\|^.+\\.\\(aux\\|bak\\|bbl\\|bcf\\|blg\\|dvi\\|ent\\|idx\\|ilg\\|ind\\|log\\|orig\\|out\\|pdf-view-restore\\|pdf#\\|reg\\|run.xml\\|synctex.gz\\|toc\\)$")
And a follow up article - https://rixstep.com/2/20061212,00.shtml
I would think that the file manager for an entirely separate operating system being written from scratch would be a foregone conclusion.
MacOS creates a junk file/folder just by visiting any folder. It's not comparable.
I think people that get upset about this just need something to fret about, at least in my experience. They probably trim their speaker cables to the same length to match impedance, too.
I'll never get how some rocket scientist (Ive, I suspect) removed Apple's best Finder feature, colored file folders, which made for easy sorting. To make matters worse, they added stupid dot labels instead. What a cluster.
Oh well. Still a bad day on a Mac is better than a great day in Windows.
Also, I think only the desktop allows moving icons around freely.
I'm pretty sure Windows used to allow you to move icons around, I clearly remember making a mess on some Windows 98 folders. Maybe they removed that feature recently?
Interesting to see that apps were split into front and back ends (indeed, I'm surprised the terms even existed) back in 1999.
Originally a central DB and a PC front end. But the server could be doing business processing e.g. feeds and processing of stock prices.
Client Server predates the web.
If Apple wanted to store view settings for remote volumes (or even local volumes), the competent design would have been to store them locally (and per user) in a central location on the machine doing the browsing.
I remember the promised re-write of Finder and thought it never happened. Nothing seems to have improved for the user. I could post a list of decades-old defects that persist today.
The one thing I can think of that has finally been fixed (and this was long after the "rewrite") was that you can now finally sort the file list properly: with folders at the top.
Now I wish someone would explain something that might actually be worse than DS-turds: the presence of a "Contents" subdirectory in every goddamned Apple package. I mean... who thought you needed to create a directory called "Contents" to hold the contents of the parent directory? It's mind-boggling.
It also kind of reveals an underlying attitude of the OS developers: That it's OK to use the user's filesystem (particularly directories owned by the user as opposed to the OS) as their dumping ground for all this metadata. As if it's their hard drive rather than mine.
I'm OK with Apple putting whatever it wants in /System and /Library, but I'd expect the rest of my filesystem to contain only files I put there.
Same goes for you, Microsoft: You can have C:/WINDOWS and I should get the rest of the filesystem.
There are more of this type of offender than I can possibly count that dump myriad dotfiles and dotfolders in your home folder on nixes instead of adhering to platform conventions or XDG or anything, really. Worse, these programs won't function properly if you set your home folder to be read-only (leaving subdirectories writable) to keep it clean. Drives me nuts.
If it's kept in a central place, what happens when the original directory is moved, and how does the metadata get updated? On Unix it's another file somewhere; on Windows it can be in the registry.
With Apple it is kept with the directory.
The issue is that a directory needs some metadata, and the Unix design of "everything is a file" doesn't let the directory carry it without adding another file somewhere.
The POSIX file system is not perfect.
It really should be turned off by default on network volumes though.
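For what it's worth, there is a documented per-user preference that does exactly this; Apple describes these `com.apple.desktopservices` keys in their support notes (this is configuration, so it only takes effect for the current user after relaunching Finder or logging back in):

```shell
# Stop Finder from writing .DS_Store files to network volumes
# for the current user (documented com.apple.desktopservices key).
defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool true

# A sibling key exists for removable (USB) volumes.
defaults write com.apple.desktopservices DSDontWriteUSBStores -bool true
```

It remains opt-out rather than the default, which is the complaint here.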
> There is also an unfortunate bug, not fixed to this day, that results in excessive creation of .DS_Store files. Those files should only be created if the user actually adjusts the view settings or sets a manual location for icons in a folder. That's unfortunately not what happens, and visiting a folder pretty much guarantees that a .DS_Store file will get created
I get the sense that if you are annoyed by it, you aren't the target audience of Mac OS. The target audience is technologically illiterate people for whom it really doesn't matter (they barely know what folders are anyway), so to Apple there is no reason to ever invest any effort in fixing it.
It’s because not every bundle includes that folder. Here you go: https://en.m.wikipedia.org/wiki/Bundle_(macOS)
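For reference, a typical application bundle looks something like this (Safari is just an example; the exact contents vary per app):

```
Safari.app/                  <- the directory Finder presents as one item
└── Contents/
    ├── Info.plist           <- bundle metadata: identifier, version, entry point
    ├── MacOS/               <- the executable(s)
    ├── Resources/           <- icons, localized strings, other assets
    └── Frameworks/          <- bundled libraries (optional)
```

The `Contents/` level is what distinguishes this "new-style" bundle layout from formats where the bundle directory itself holds the payload directly.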
The issue is the file system.
Apple file systems allow a file to have extended attributes or resource forks. Thus a file is not a simple stream of bytes.
When you copy a file to a file system that does not understand these attributes (e.g. FAT), macOS copies them into a hidden ._ sidecar file. (If the target file system were NTFS you could probably convert them to something native, but I don't think anyone does.)
Copying a file out of an Apple environment loses data (OK, the data is metadata, and usually no one cares).
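Those `._` sidecars use the AppleDouble container format (magic number and entry IDs from the AppleSingle/AppleDouble spec). Here is a minimal sketch of building and parsing one with just a Finder-info entry and a resource-fork entry; real files carry more entries, so this is illustrative, not a complete implementation:

```python
import struct

ADOUBLE_MAGIC   = 0x00051607   # AppleDouble (AppleSingle is 0x00051600)
ADOUBLE_VERSION = 0x00020000
ENTRY_RESOURCEFORK = 2         # raw resource-fork bytes
ENTRY_FINDERINFO   = 9         # 32 bytes of Finder metadata

def build_appledouble(finder_info: bytes, rsrc: bytes) -> bytes:
    """Serialize a minimal two-entry AppleDouble ("._") blob."""
    entries = [(ENTRY_FINDERINFO, finder_info), (ENTRY_RESOURCEFORK, rsrc)]
    # Header: magic(4) + version(4) + filler(16) + count(2), then
    # one 12-byte descriptor (id, offset, length) per entry.
    header_len = 4 + 4 + 16 + 2 + 12 * len(entries)
    out = struct.pack(">II16xH", ADOUBLE_MAGIC, ADOUBLE_VERSION, len(entries))
    offset = header_len
    for entry_id, data in entries:
        out += struct.pack(">III", entry_id, offset, len(data))
        offset += len(data)
    for _, data in entries:
        out += data
    return out

def parse_appledouble(blob: bytes) -> dict:
    """Return {entry_id: bytes} from an AppleDouble blob."""
    magic, _version, count = struct.unpack_from(">II16xH", blob, 0)
    assert magic == ADOUBLE_MAGIC, "not an AppleDouble file"
    result, pos = {}, 26  # 26 = fixed header size
    for _ in range(count):
        entry_id, off, length = struct.unpack_from(">III", blob, pos)
        result[entry_id] = blob[off:off + length]
        pos += 12
    return result
```

This also shows why the copy is lossy in practice: the receiving file system just sees one opaque extra file, and anything that doesn't copy the sidecar along drops the metadata.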
Not sure, but it could be the case that when you mount a network drive there isn't a stable identifier that can be used to track it.
You don't want user prefs to apply to multiple locations solely based on URI.
Also the conflict resolution to support concurrent updates would be crazy.
But the stakes are very low here, so settings can be invalidated and discarded if they can't be resolved or they age out of the local cache. And if the mount is of a type that can't be reliably identified later, the default should have been to do nothing. Spewing junk all over every computer visited, especially junk that won't even survive the next Mac user's visit... is amateur-hour and obnoxious at best.
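The design argued for above (local, per-user, keyed by a stable volume identifier, aged out of a cache, and a no-op when the mount can't be identified) can be sketched in a few lines. Every name here is invented for illustration; this is not how Finder actually works:

```python
import json
import time
from pathlib import Path

class ViewSettingsStore:
    """Hypothetical local per-user store for folder view settings,
    keyed by (stable volume id, path). Stale entries age out of the
    cache instead of littering every visited volume."""

    def __init__(self, store_path: Path, max_age: float = 180 * 24 * 3600):
        self.store_path = Path(store_path)
        self.max_age = max_age  # drop settings untouched for ~6 months

    def _load(self) -> dict:
        try:
            return json.loads(self.store_path.read_text())
        except FileNotFoundError:
            return {}

    def save(self, volume_id, path: str, settings: dict) -> None:
        if volume_id is None:
            return  # mount can't be reliably identified later: do nothing
        db = self._load()
        db[f"{volume_id}:{path}"] = {"t": time.time(), "settings": settings}
        now = time.time()
        # Invalidate-and-discard: age out old entries on every write.
        db = {k: v for k, v in db.items() if now - v["t"] < self.max_age}
        self.store_path.parent.mkdir(parents=True, exist_ok=True)
        self.store_path.write_text(json.dumps(db))

    def load(self, volume_id: str, path: str):
        entry = self._load().get(f"{volume_id}:{path}")
        if entry is None or time.time() - entry["t"] > self.max_age:
            return None  # missing or aged out: fall back to defaults
        return entry["settings"]
```

Note the deliberate asymmetry: losing a setting just falls back to defaults (low stakes), while the alternative, writing junk onto the remote volume, imposes a cost on everyone else.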
Not that I don't appreciate your work from back then, but as a longtime daily Mac user I cannot wait for the day that this is done once again. The Finder has so many bizarre quirks, and it's so slow to propagate updates that it's just embarrassing. Not to mention it's actually capable of locking up waiting for network access in some circumstances.
I don't know what the Finder source code looks like today but I bet it's a similar kind of hell project as the Classic Finder was back then when they first rewrote it, considering how reluctant they are to do anything to it.
Apple unfortunately isn’t in the business of making powerful, efficient (user-facing) software anymore.
Say what you will about Windows, but the Explorer file manager has always been pretty rock solid.
Windows 11 has pretty severely fucked up Explorer. Named directories can't have their path copied (I think 10 did this bullshit, too). The context menu getting insane whitespace, missing options, and dynamically loading things into itself is a travesty. It is heartbreaking that mobile-inspired trash is ultimately going to be the way you're forced to interact with a computer.
People let their distaste for somebody's bad behavior and/or old things stop them from admitting that we're in a pretty severe backward slide.
About that part... Modern computers are insanely fast. How does every single piece of software manage to fill half a minute of CPU or disk I/O enumerating some 3 or 4 items?
It's absurd.
I use Firefox inside eatmydata nowadays, because it spends 10 minutes enumerating the same 2 directories every time it starts up (hundreds of thousands of times). The start menu and its equivalents everywhere are already famous. Windows can't search files nowadays: not only does it not work, it never ends either... The list is endless.
What have you got, like a 10-year-old profile or something?
Librewolf starts up instantly for me, and I saw no performance difference using eatmydata.
Anyway, there are a lot of people reporting the same thing on the internet. I've found three different bug reports open for the same thing.
But yeah, as far as I remember, Iceweasel doesn't do it either. Maybe I should change my browser.
I don't know any more because I use Total Commander on Windows...
I’ve long since moved to command line or dual pane explorers but it’s something that makes me pause every time I do find myself in Finder for some reason.
For MacOS I can recommend Forklift [0]. I've been using it for years and it is a bit closer to the Windows Explorer way of doing things. Does what it is meant to do. Affordable. No nags. Gets out of the way. Not perfect, but soooo much better than the horrific experience that is Finder.
I have a paid Forklift 3, and it’s nagging me to upgrade and pay for next version.
I mostly went back to Finder for now, as I remember having some kind of issue with Forklift 3 not being performant, though I don't remember the details.
That said I only work on local files and don't use any of the remote workflows. The most advanced feature I use is synchronising files between local storage and SD card. And that works fine.
One thing that did break in v4 is that search doesn't work anymore when using the text only toolbar. I reported that ~10 months ago but it's still broken. Maybe I'm the only person who was actually using it.
It does make me wonder though, how do you feel about System 7.0 Finder?
NeXT/Mac column view are great and should be table stakes in a file manager in my opinion.
Command + O to open files/folders in Finder was a bit challenging to remember since Enter/Return just works in Explorer
Command + up arrow is a good shortcut to go up one level, which is surprisingly hard via the GUI
...and in Finder, Enter is rename, which is a lot more puzzling, so much that many others have commented on the same and some even tried to justify it:
https://apple.stackexchange.com/questions/6727/why-does-the-...
https://old.reddit.com/r/MacOS/comments/16hxjrn/why_is_the_d...
Apologies for my post getting snipped; the latest iOS beta keeps randomly eating my text. Apple is aware.
Unix file systems are not sufficient, you need a layer on top.
Agreed on dual-pane file managers, though. I used them on Windows from Windows 3 onwards, plus various macOS ones, except the writers of the macOS ones shipped nice early versions and then decided to rewrite them into memory hogs that stopped working (e.g. Cocoatech Path Finder). It's simple: it's just a file browser, don't keep adding stuff.
That's how I see these files. And maybe one day we can have and edit our own .gitignore-like files for such inattentional blindness[0].
[0]: Inattentional blindness
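The ".gitignore-like" idea above amounts to a per-user list of glob patterns the file manager would hide. A minimal sketch, with an example pattern list that is in no way complete or official:

```python
import fnmatch

# Example ignore patterns for well-known metadata turds; a real tool
# would read these from a user-editable file, .gitignore-style.
IGNORE_PATTERNS = [
    ".DS_Store",
    "._*",          # AppleDouble sidecar files
    "Thumbs.db",    # Windows Explorer thumbnail cache
    "desktop.ini",
]

def visible(names, patterns=IGNORE_PATTERNS):
    """Filter out any name matching an ignore pattern."""
    return [n for n in names
            if not any(fnmatch.fnmatch(n, p) for p in patterns)]
```

Git itself proves the model works: the junk still exists on disk, but it stops demanding attention.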