For those unaware, the article hints at a system that really does believe everything is a file: Plan 9 from Bell Labs, the “second system”/spiritual successor to Unix. But it’s also worth pointing out that NT’s kernel is designed around a hierarchical namespace of “objects,” where various subsystems slot in at different levels to take over responsibility for the rest of the path. Unlike Plan 9, this is separate from the userland filesystem. It might be most familiar to people who have installed NT 4 (or maybe 3.51?) through XP via bootable floppy: SETUP.EXE shows strings like `\Device\HardDisk0` in the status bar.
Just pointing out how the same general idea can take distinct forms of implementation.
Under ReactOS' explorer.exe (IDK if it's possible to run it under Windows) you can see all the NT objects, even the Registry. So you can browse the Registry hierarchy as if it were a filesystem path.
I've never quite understood why the idea "everything is a file [descriptor]" is often revered as some particularly great insight. Perhaps it was for its time, but I think we have to be honest and say that it is a really awkward abstraction in 2025.
It can mean a few things:
- Kernel objects have an opaque 32-bit ID local to each process.
- Global kernel objects have names that are visible in the file system.
- Kernel objects are streams of bytes (i.e. you can call `read()`, `write()` etc.).
The first is a kind of arbitrary choice that limits modern kernels. (For example, a kernel might want to pack tag bits into 64-bit handles - still possible with 32 bits, but now you are close to the limit.)
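To make the tag-bit point concrete, here is a purely hypothetical sketch (not any real kernel's handle format) of what 64-bit handles buy you: spare bits for a type tag or a generation counter, headroom a 32-bit ID doesn't leave.

```c
/* Hypothetical 64-bit handle layout, for illustration only: the top 16
 * bits carry a tag (say, an object type or a generation counter to
 * catch use-after-close), the low 48 bits index the object table. */
#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT  48
#define INDEX_MASK ((1ULL << TAG_SHIFT) - 1)

typedef uint64_t handle_t;

static handle_t make_handle(uint64_t index, uint64_t tag) {
    return (tag << TAG_SHIFT) | (index & INDEX_MASK);
}

int main(void) {
    handle_t h = make_handle(42, 7);
    printf("tag=%llu index=%llu\n",
           (unsigned long long)(h >> TAG_SHIFT),
           (unsigned long long)(h & INDEX_MASK));
    return 0;
}
```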
The second and third are mostly wrong. Something like a kernel synchronization primitive or an I/O control primitive does not behave anything like a file or a stream of bytes, and indeed you cannot use any normal stream operations on them. What's the point of conflating the concept of a file system path and kernel object namespacing? It makes a kind of sense to consider the latter a superset of the former, but they are clearly fundamentally different.
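Linux's eventfd is a concrete example of the mismatch: it hands you a "file descriptor", but read() and write() just move a single 8-byte counter around, with semantics nothing like a byte stream. A minimal sketch:

```c
/* eventfd: a synchronization primitive wearing a file-descriptor
 * costume. write() adds to an internal counter; read() returns the
 * accumulated value and resets it to zero. No byte-stream semantics. */
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <unistd.h>

int main(void) {
    int efd = eventfd(0, 0);          /* counter starts at 0 */
    uint64_t val = 3;
    write(efd, &val, sizeof(val));    /* "write": counter += 3 */
    val = 5;
    write(efd, &val, sizeof(val));    /* "write": counter += 5 */
    read(efd, &val, sizeof(val));     /* "read": return 8, reset to 0 */
    printf("counter drained: %llu\n", (unsigned long long)val);
    close(efd);
    return 0;
}
```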
The end result is that the POSIX world is full of protocols. A lot of things are shoehorned into file-like streams of bytes (see for example: the Wayland protocol), even when a proper RPC/IPC mechanism would be more appropriate. Compare with the much-maligned COM system on Windows, which though primitive and outdated does provide a much richer - and safer - channel of communication.
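To see what that shoehorning looks like in practice, here is a minimal sketch (deliberately not Wayland's actual wire format) of a protocol layered over a byte-stream fd: the kernel only hands you bytes, so every message needs hand-rolled framing before the other side can recover any structure.

```c
/* Length-prefixed framing over a Unix socketpair - the bare-bones
 * version of what every "protocol over an fd" ends up reinventing. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void send_msg(int fd, const char *payload) {
    uint32_t len = (uint32_t)strlen(payload);
    write(fd, &len, sizeof(len));   /* frame header: payload length */
    write(fd, payload, len);        /* frame body */
}

int main(void) {
    int sv[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
    send_msg(sv[0], "set_title: hello");

    uint32_t len;
    char buf[128] = {0};
    read(sv[1], &len, sizeof(len)); /* reassemble the frame by hand */
    read(sv[1], buf, len);          /* short reads ignored for brevity */
    printf("got message: %s\n", buf);
    close(sv[0]);
    close(sv[1]);
    return 0;
}
```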
That's why ioctl exists - it's essentially an RPC. For example, NetBSD even supports sending messages created with its proplib - property lists of Apple fame - over ioctl.
Also, I always found it weird that a lot of things are "files" in Linux, but Ethernet interfaces are not, so you have to do that enumeration dance before getting an fd to ioctl on. I remember HP-UX having them as files in /dev, which was neat.
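For anyone who hasn't done it, the dance looks roughly like this: since there is no /dev node for the NIC, you open an unrelated socket purely to have an fd to aim ioctl() at, and name the interface inside the request struct ("eth0" below is just an example name):

```c
/* Querying an Ethernet interface on Linux: the fd is any old socket;
 * the interface is identified by name inside struct ifreq, not by
 * anything resembling a file. */
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket will do */
    if (fd < 0) { perror("socket"); return 1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

    if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0)     /* fetch interface flags */
        perror("ioctl(SIOCGIFFLAGS)");
    else
        printf("eth0 is %s\n", (ifr.ifr_flags & IFF_UP) ? "up" : "down");

    close(fd);
    return 0;
}
```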
Yeah, let's see how Xous fares. The approach is interesting, and maybe the future is in those small, hardened microkernels.
Note that these books were written when design patterns were still a buzzword.
Unfortunately, in some parts of the industry it still is.
Careful selection also implies rejection. I wonder about the technologies that have been lost to time because they didn't pass this historical filter. I learned never to underestimate the accomplishments of our predecessors after reading about old mainframe systems.