This is what Prof. Andrew S. Tanenbaum was working on when he made the famous "LINUX is obsolete" post in 1992. From the end of his post:
P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user
space), but it is far from complete. If there are any people who would
like to work on that, please let me know. To run Amoeba you need a few 386s,
one of which needs 16M, and all of which need the WD Ethernet card.
https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...
"A bunch of high-spec 386s with specific network cards" was pretty unaffordable for hobbyists in '92, especially young ones (and most were, at the time). We know how it went with that and Linux (which ran on most cheap 386s).
I seem to recall that early versions of Linux wouldn’t run on a cheap 386SX until the kernel started to incorporate work from widespread embedded use in the late 90s. Also, I remember the minimum RAM was 4 meg, but it could go down to 2 meg if networking wasn’t installed, even though it was built into the kernel. Until the dot-com boom, I don’t recall Linux being a particularly flexible operating system. I think this is what kept Windows 3.1 and MS-DOS going until the late 90s.
> I seem to recall that early versions of Linux wouldn’t run on a cheap 386SX until the kernel started to incorporate work from widespread embedded use in the late 90s.
I ran Linux on a 486SX with 4MB RAM in early 1994. Prior to that (1993?) I had been running 386BSD. Both OSes ran quite well on that hardware: certainly sh, vi, gcc, and make ran well, and what else do you really need?
4MB wasn't good enough for X Windows, but I had a VT102 connected to a serial port so that I could have two terminal sessions going at the same time.
Coming from Commodore Amiga, my first PC was a 486dx with 4MB to run Linux. I did set it up the same day I bought it - noticed 4MB was not enough to run XFree86 on top of Linux 0.99.6 or whatever, so I grumbled, checked my wallet and bank account, went back to the dealer and asked him to take back the 4MB memory and sell me 16MB of RAM.
I was a student and ate only very cheap food the rest of that month.
Oh, and a bit later (two or three years) Tanenbaum visited my University and held a lecture about Amoeba.
He was only describing hardware that was available at the time in businesses. Around that same time, Digital had a product called Pathworks that was very popular. All the PCs used standard unshielded twisted-pair Ethernet. It wasn't that expensive. The Pathworks license was about $3,000 per VMS server, though. This was great because Windows NT didn't become popular until 3.51 in 1995 or so. Even then it was 100% interoperable with VMS LANMAN file servers and vice versa. NetWare 3.1 was also popular starting in 1991, although that far back thin coax cable was still in use.
It was only $329 (1984 USD) for a 3Com Ethernet ISA network adapter card at Fey's Electronics.
Or about two months' revenue from a newspaper route.
Linux 0.98 came shortly afterward, along with Donald Becker's Ethernet drivers.
My take on "worse is better" is that "worse is cheap / free / low friction."
Better -- meaning polished, thoroughly engineered, correct, etc. -- is often heavier in terms of hardware requirements. It's also often a lot of work by serious career devs or big companies and costs money and has a non-OSS license.
All that increases not only cost but more importantly friction. So the "worse" cheap/free unencumbered thing goes viral and takes over instead.
I should add the caveat though -- it's cheap/free up front but you pay later in bugs and ugly hacks to make it work at larger scale. But up front friction matters more than friction later. If it's cheap/free in terms of cost or friction now, you'll already have sunk cost by the time the real cost becomes apparent.
I 100% agree. In fact, even in the original C/Unix versus Lisp/Lisp-machines comparison in Richard Gabriel's "The Rise of Worse is Better" article, C and Unix were inexpensive compared to Lisp implementations and Lisp machines. Unix's relatively liberal licensing rules in the 1970s and early 1980s helped lead to its embrace in academia, and it also gained a footing in industry, especially with the rise of Unix workstations from companies such as Sun.
Another example is how C++ and Java, but not Smalltalk, became the dominant object-oriented programming languages in the 1990s, despite Smalltalk being older and (debatably) closer to Richard Gabriel's "right thing". There were affordable C++ implementations from Borland and Microsoft, and Sun released the Java Development Kit for free. However, the leading Smalltalk implementations of the 1990s were much more expensive. Had there been a Borland Turbo Smalltalk or a Microsoft Visual Smalltalk in the 1990s, perhaps things would have turned out differently.
March 7, 1988 — "Smalltalk/V 286 is available now and costs $199.95, the company said. Registered users of Digitalk's Smalltalk/V can upgrade for $75 until June 1."
https://books.google.com/books?id=CD8EAAAAMBAJ&lpg=PA25&dq=d...
September 1991 — "Smalltalk/V code is portable between the Windows and the OS/2 versions. And the resulting application carries no runtime charges. All for just $499.95."
(Advert on the last page of "The Smalltalk Report")
https://rmod-files.lille.inria.fr/Archives/TheSmalltalkRepor...
September 1991 — "Digitalk, Inc. announced new versions of Smalltalk/V DOS and Smalltalk/V Mac that include royalty-free runtime. Smalltalk/V Windows and Smalltalk/V PM are already royalty free. … Prior to this new policy, there was a per-copy charge for runtime applications."
"The Smalltalk Report" p25
https://rmod-files.lille.inria.fr/Archives/TheSmalltalkRepor...
~
https://wirfs-brock.com/allen/posts/914
Lest we forget applets: https://dev.java/duke/
Engineers like to work from the bottom up because nature isn't made out of straight lines and regularity. Breaking things apart into abstractions works to a certain point; then the natural irregularity of reality steps in, and you need to be able to react to that and adjust your foundation. So yes, fundamentally it is about the thermodynamic knock-on effects of handling things from the top down.
Messing around with Lisp and Smalltalk environments is fun. But the moment you have to stick your hand beneath the surface, you can very quickly end up in someone else's overengineering hell. Most serious GNU Emacs users shudder at the thought of hitting Doom Emacs with a wrench until it suits their tastes; it's widely agreed that building your Emacs environment up from the defaults is fundamentally the wiser choice. Now realize that a full Lisp operating system is several orders of magnitude more complex than Doom Emacs, and most things you're going to program Emacs to do are absolutely trivial compared to most industrial-grade application logic.
Thanks for sharing! I'd never heard about it, but it seems like an interesting project. Also, another interesting tidbit from Wikipedia: "The Python programming language was originally developed for this platform."
It seems the source code was archived; I wonder if it's bootable in a VM. Well, I know what I'll do this weekend!
https://web.archive.org/web/20000901081815/http://www.cs.vu....
Edit: even better, I found a mirror on GitHub (original file dates are not preserved, though).
In the "getting help" section of the readme:
"Note that the RTFM system applies to these reports/questions (see the FAQ)."
I wonder if the time is right for a distributed OS research rebirth, now that we have many layers of infra advancement to develop and host it on. Kubernetes needs some fresh ideas.
If I'm reading it right, security of the entire system depends on an assumption that 48-bit numbers are so big it's infeasible to find what you need through brute force.
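That's the "sparse capability" idea: the server hands out a random 48-bit check field, and anyone presenting a capability must produce the matching value. A minimal sketch of why guessing is considered hopeless (the class and method names here are illustrative, not Amoeba's actual code):

```python
import secrets

CHECK_BITS = 48

class ObjectServer:
    """Toy sparse-capability server: stores a secret check field per object."""

    def __init__(self):
        self._checks = {}  # object number -> secret 48-bit check field

    def create_object(self, obj_num):
        check = secrets.randbits(CHECK_BITS)
        self._checks[obj_num] = check
        return (obj_num, check)  # the "capability" handed to the client

    def verify(self, capability):
        obj_num, check = capability
        return self._checks.get(obj_num) == check

server = ObjectServer()
cap = server.create_object(42)
assert server.verify(cap)                  # the legitimate holder succeeds
assert not server.verify((42, cap[1] ^ 1)) # a wrong check field fails

# The brute-force assumption: at 1 million guesses per second, searching
# half the 48-bit space takes roughly 4.5 years on average.
expected_seconds = (2 ** CHECK_BITS / 2) / 1_000_000
print(round(expected_seconds / (365 * 24 * 3600), 1))  # ~4.5
```

Of course, that assumption is calibrated to early-90s network speeds; a server that doesn't rate-limit failed guesses weakens it considerably.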
There was so much interesting research on distributed systems back then that was wholly abandoned.
There was also Mosix, OpenMosix, Beowulf, etc. All those are worth a look. It's a path that was not taken, and IMHO could have been better, but "worse is better" won once again.
Had we taken this path, you might be able to create a distributed instance of an OS and just throw hardware at it -- boxes or VMs -- and tasks would run on it like a single OS with little or no modification. It'd be like having an infinite giant box.
Of course as soon as you started trying to do anything like geo-distributed or even multi-DC/multi-AZ work loads you'd be back to orchestration and the like. You'd also run into problems if you want to rev the hardware in any big way, since these systems generally depended on all the boxes being at least nearly the same. So if you threw in, say, some new boxes with AVX512 ISNs, you would not be able to use AVX512 until you'd rotated out all boxes without it. Bigger architectural shifts would require what amounts to a "reboot" of the big virtual OS instance. Mixing boxes with different performance characteristics or RAM amounts was also problematic. Making that work well would be hard to get right.
Under the hood, these distributed OSes were orchestrators like K8s, just hidden from you for the most part behind a POSIX emulation layer that made it look like one box.
Ultimately I think the type of architecture you see with "serverless" where everything is a "function" with no side effects that can talk to one or more shared global databases is the better architecture. Do away with the entire concept of a "box" in favor of a sea of functions that call each other, event queues, schedulers, etc. We do kind of have that, but only in the form of proprietary serverless platforms.
The amount of complexity we inflict on ourselves by dragging along the concept of unmanaged / unbounded global state, static state, and side effects is mind boggling. A function should only operate on what you give it.
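As a toy illustration of that "sea of functions" shape (all names here are hypothetical, not any real serverless platform's API): state lives in a shared store, events arrive on a queue, and each handler is a pure function of its inputs, so any worker on any box can run it.

```python
def handle_deposit(balance: int, amount: int) -> int:
    # Pure: no globals, no I/O. The result depends only on the arguments,
    # which makes the handler safe to retry, parallelize, or run anywhere.
    return balance + amount

store = {"acct:1": 0}                    # stand-in for a shared global database
queue = [("acct:1", 10), ("acct:1", 5)]  # stand-in for an event queue

# Stand-in for the scheduler: drain events, applying the pure handler.
while queue:
    key, amount = queue.pop(0)
    store[key] = handle_deposit(store[key], amount)

print(store["acct:1"])  # 15
```

The point is that all the hard distributed-systems problems are pushed into the store and the queue, which are shared infrastructure, while application logic stays trivially relocatable.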
Glad to see Amoeba here. Last time I heard about it I was still in college X-D
I guess academia was too far from what industry needed. They were thinking about the future, and we needed more stable boxes and sometimes a cluster for critical loads.
All the microkernel stuff Tanenbaum defended makes a lot of sense if you have a distributed OS: you just need to talk with your "local" service with the API you have at hand... and it is the same everywhere (for instance, your SOAP server - which is the equivalent of etcd in K8s).
Very surprised to learn that Guido van Rossum worked on Amoeba!
In fact, he originally developed Python for Amoeba.
Curious that it was developed for a distributed OS, and yet we still have the GIL...
Part of my final Diplom (Master) exams, oh those were the times! :-)