I find it incredible that we all now have access to an SGI-level machine at home, thanks to Nvidia. This reminds me of a previous thread on HN: https://news.ycombinator.com/item?id=39945487
It's more like thanks to 3dfx?
Not really. They initially went with Glide, and their great boards eventually were no match for Nvidia, which had enough cash to buy 3dfx.
I was disappointed that I couldn't make my newly bought Voodoo card work on my motherboard due to a PCI connection issue, but the Riva TNT that the shop offered me as a possible alternative did, and thus Nvidia got one more customer.
I think what the parent means is that 3dfx was founded by three former Silicon Graphics engineers, so I'd guess the 3dfx hardware had a lot more SGI DNA than Nvidia's chips.
Nvidia's first 3D chip ~~Riva 128~~ (my bad, it was the 'NV1') also was a weird design, and its successor the Riva 128 wasn't remarkable performance-wise, especially compared to what 3dfx had to offer (Nvidia's only good decision was that they bet on D3D early on when everybody else was still doing their own 3D APIs - even though early D3D versions objectively sucked compared to Glide or even OpenGL, it turned out to be the right long-term decision).
Nvidia's first remarkable chip was the Riva TNT, which came out in 1998 (hardware progress really was unbelievably fast back then - 3dfx Voodoo in 1996, Riva 128 in 1997, Riva TNT in 1998, and both the Riva TNT2 and GeForce in 1999).
edit: fixed my NV1 vs Riva 128 mistake, somehow I merged those two into one :)
A better point is that the consumer 3d cards were created specifically because SGI was designing 3d machines the wrong way. It became obvious that the way to design a consumer 3d card was to create a small multithreaded chip that you could scale up and down based on the workload. SGI instead created special designs for every computer, sometimes multiple graphics board designs for every computer.
Sometimes (as with VICE in the O2) SGI would also put old processors in odd configurations inside new computers in the hope of offloading things such as DVD decompression, but then drop DVD support before releasing the machine while keeping the hardware.
SGI was just very unfocused around 1998-2004, when consumer 3D chips became realistic, and they just refused to do things in a sane way. They even knew it but did it anyway, betting the company on web servers instead.
There were like a bazillion companies competing for consumer 3D accelerators in the 90s. 3dfx was the most successful thanks to their Glide API and vertical integration but they weren't the only one on the market, which is why they were so affordable despite the novelty. Unlike today.
It's not because of NVIDIA but because of Moore. We have SGI-level five-dollar microcontroller boards now.
https://wiki.preterhuman.net/SGI_Maximum_IMPACT says:
> Maximum Impact graphics are the highest tier of SGI's IMPACT graphics offered both on the SGI Indigo2 and SGI Octane workstations. They include a 27MB frame buffer and have 2 raster engines (i.e. are "2RSS" boards).
...
> Two GE11 Geometry/Image Engines:
>> Power the graphics subsystem
>>> 960 MFLOPS for transforming triangles
>>> 960 MIOPS for processing pixels
>>> 600,000 gates each
>>> Note: The refreshed Octane 'E-series' Geometry engines were capable of 1344 MFLOPS
>> Two RE4 Raster Engines:
>>> Provide the pixel-fill capabilities
>>> 234 Mpixels/sec gouraud fill rate
I think the Raspberry Pi 4B has 8000 megaflops https://www.reddit.com/r/raspberry_pi/comments/fsc3fw/perfor... and I think that's just the CPU. That's roughly 4× the performance of the Indigo²'s Maximum Impact card, counting both GE11s. The Pi 3 CPU came in at 2700 megaflops: https://raspberrypi.stackexchange.com/questions/55862/what-i... and I think the GPU is something like four times that.
Of course, benchmarks can be misleading, but if anything I'd expect this number to understate the difference, since the SGI card was a fixed-function pipeline. You can do all kinds of crazy visual effects on the Pi's CPU that the Indigo² couldn't touch. And of course the Pi's texture memory and framebuffer are measured in gigabytes now, not megabytes.
Compare the specs on the ESP32-S3 IoT microcontroller: 480 megaflops (counting multiply-accumulates as two flops, as is stupid but traditional) https://www.reddit.com/r/esp32/comments/t46960/whats_the_esp.... It only comes with 320K of RAM, but if you want a 27MB framebuffer, it supports 32MiB external RAM (PSRAM): https://docs.espressif.com/projects/esp-idf/en/stable/esp32s... but that's still less than half as fast as the Indigo²'s Maximum Impact card. They cost US$2.74 though https://www.digikey.com/en/products/detail/espressif-systems... so you might be able to afford more than one. They're commonly used for things like opening cat flaps in doors so your cat can go outside: https://hackaday.com/2025/06/12/2025-pet-hacks-contest-cat-a...
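For the curious, here's the back-of-the-envelope arithmetic in Python, using the figures quoted above. These are headline numbers from different eras and benchmarks, so treat the ratios as order-of-magnitude estimates only:

    # Rough FLOPS comparison from the figures quoted above.
    max_impact_mflops = 2 * 960   # two GE11 geometry engines at 960 MFLOPS each
    pi4_cpu_mflops = 8000         # Raspberry Pi 4B, CPU only
    esp32s3_mflops = 480          # ESP32-S3, counting MACs as two flops

    print(f"Pi 4B vs Maximum Impact:    {pi4_cpu_mflops / max_impact_mflops:.1f}x")   # ~4.2x
    print(f"ESP32-S3 vs Maximum Impact: {esp32s3_mflops / max_impact_mflops:.2f}x")  # ~0.25x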
But this web page is about SGIs that long predate the Indigo² (which was circa 01994), such as the 4D/60 from 01987 built around an 8MHz MIPS R2000, and so are dramatically slower than the ESP32. The "G" card described could fill 5500 Gouraud-shaded polygons per second, while the "GTX" could hit 100,000, about 2,000 per frame.
This is very timely for me, because I've just come into possession of a 4D/60 "buttons box" (which normally would go with the dials). It is quite unlike the later button boxes, since it's a giant cheese wedge with the power supply integrated, which seems to be very rare as there's no reference to it on the net anywhere. It even has a display that says it's rev A when the unit powers on. I'm hoping the DB-9 on it is RS232 and can be spoken to by a modern machine, but my one RS232 cable is the wrong gender, of course.
Many years ago I had an Indigo, and even 25 years ago that was an exercise in difficult interfacing with modern equipment. The monitor was amazing.
Edit to add: the notes here about 20A power requirements remind me of when a VR company I was consulting for hired a notable CTO from the VFX business and all he cared about was making sure there was enough electricity supplied to the office. That was in about 2015 and I remember thinking he was clearly scarred by previous events and was long out of date.
> [...] I was consulting for hired a notable CTO from the VFX business and all he cared about was making sure there was enough electricity supplied to the office. That was in about 2015 and I remember thinking he was clearly scarred by previous events and was long out of date.
You're likely being overly dismissive. Ensuring you have enough power is vital, and being wrong can set you back a year or longer while you wait for permitting, the actual service upgrade, and inspections to be completed.
I'm currently dealing with this right now with my startup and wish I'd begun the process a year earlier when I first expected it to be necessary; at the time, I'd assumed it could happen in a few months.
It is RS232. Try 9600 baud, N, 8, 1.
It should start streaming events in ASCII the moment you do anything with the buttons.
The joystick ran at that bitrate, which I thought was slow, but it wasn't.
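If you want to try it from a modern machine, here's a minimal sketch with pyserial, assuming 9600 8N1 as above. The device path is an assumption (substitute whatever your USB-RS232 adapter shows up as), and I'm only assuming the box streams plain ASCII as described:

    # Minimal listener for the button box, assuming 9600 baud, 8N1.
    # Requires: pip install pyserial
    import serial

    # /dev/ttyUSB0 is a guess - use your adapter's device
    # (COM3 on Windows, /dev/tty.usbserial-* on macOS).
    with serial.Serial("/dev/ttyUSB0", baudrate=9600,
                       bytesize=serial.EIGHTBITS,
                       parity=serial.PARITY_NONE,
                       stopbits=serial.STOPBITS_ONE,
                       timeout=1) as port:
        while True:
            data = port.read(64)  # grab whatever the box streams
            if data:
                print(data.decode("ascii", errors="replace"), end="", flush=True)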
A full SGI setup (Onyx RealityEngine with edge-blended display; joystick, buttons, dials, and Spaceball 3D control, all RS232; plus a sprinkling of workstations) is kind of a magical environment.
At one point, I had everything except the RealityEngine; Origin servers instead of an Onyx. Fantastic computing environment.
A few things possible on that setup:
Pull a sick SCSI drive right out of a group setup with error correction and full XFS journaling. Nothing bad happens except disk activity goes up a little. Then insert another one, rescan the SCSI bus, add the drive to the group, and watch disk activity go way up as the system repopulates that drive to replace the sick one.
Want an incremental tape backup? You can ask IRIX to back up a subset of any given file system. No big deal, lots of systems do that, right? Well, read it back into your home directory only to find out that the incremental backup is a valid mini-filesystem that can be read, written to, and so forth. That feature makes doing backups simple with a few scripts, same with file recalls.
Start one's career with an XFS disk created on an Indy under IRIX 5.3. Take that same file system through a career (Indy, O2, Octane, Fuel, Origin) and end it on IRIX 6.5.30.
Each time I leveled up, I cloned my original environment onto a disk suitable for the new machines, ran swmgr to sort out OS components, dependencies, libraries, dev environment, and then it was off to the races!
I made many different file systems, but my personal one only needed to be made once!
Linux window managers and fonts were kind of crappy compared to how nice the Indigo Magic Desktop was. I had an Indy managing windows and serving fonts to my Linux boxes for a few years.
X was network transparent. Still is, and I hope the effort to save it sees success!
Once, just for fun I distributed a high end CAD application across many machines:
One machine was my primary user display. Another machine managed the windows, yet another handled fonts, another shared storage for the application, which ran on yet another machine, which got data from still another machine!
Double click an icon, hear that kerechka! SGI app launch sound and see 6 machines contribute to the spinning model on my screen!
Could run an X server with the GLX extension enabled to make it 7 machines, one being a PC running an X server to view the product of the other 6 machines.
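The plumbing behind that trick is mostly just the DISPLAY variable plus remote font and file servers. A rough modern sketch of the same idea in Python, where the hostname and the launched app are both hypothetical stand-ins:

    # Launch an X client so its windows appear on a different machine.
    # X's network transparency means the app only needs DISPLAY pointed
    # at whichever host runs the X server.
    import os
    import subprocess

    env = dict(os.environ)
    env["DISPLAY"] = "mydesktop:0"  # the machine that draws the windows

    # The binary and its data can live on yet other hosts via NFS,
    # which is how you end up with six machines behind one icon.
    subprocess.run(["xclock"], env=env)  # stand-in for the CAD app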
Record video using S-video input while compiling MAME. Write it out on an S-video capable VCR with quality equal to commercial VHS movies, or sometimes better.
On that note, build XMAME on an Indy. Using GCC it would take close to a day. Using MIPSpro, it could take even longer with -O3 enabled, to get a binary 10 to 15 percent faster.
One time I got behind on a movie project and needed many machines rendering frames to hit the deadline. I had set one up to do the work over a weekend, and the job died 100 frames in.
OOF!
After management secured some temp licenses for the Alias renderer I was using, I spent 16 or so hours straight using every machine in the building to render frames.
Some had users on them who never knew I had unloaded whole subsystems they were not using, to make room for the renderer to be loaded and work. I would kick it off and then renice the process priority low enough to mooch every cycle the user did not need. Then, when done, I put it all back how I found it, most users none the wiser. Our sysadmin, who was training me to do systems work, loved it and spent a fair amount of time looking at the various boxes and how they performed under the high loads I subjected the ones without active users to.
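The renice half of that is easy to reproduce today. A rough sketch of the idea in Python (the PID is hypothetical, and note that on Unix you need privileges to lower a niceness back down, just as with renice):

    # Drop a batch job's priority so interactive users keep every cycle
    # they ask for: the same idea as renice +19 on the render process.
    import os

    render_pid = 12345  # hypothetical PID of the render job

    # Niceness 19 is the lowest priority; the scheduler only hands the
    # job cycles nobody else wants, which is what "mooching" means above.
    os.setpriority(os.PRIO_PROCESS, render_pid, 19)

    # ... render overnight ...

    # Put it all back how you found it (0 is the default niceness;
    # lowering it again requires root, just like renice).
    os.setpriority(os.PRIO_PROCESS, render_pid, 0)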
I spent the time in front of my O2 (my main desktop at the time), using virtual desktops to manage all the environments I had remoted to my display. I copied batches of frames to my local storage to be burned onto optical disk every so often to hedge against catastrophe, and otherwise slotted them into my Alias Composer movie timeline, doing test writes to VHS as chapters got done. It was brutal! And on the eve of a major holiday, with the family a total mess because there was no way I could go home!
Wrote it all out to video tape just an hour before the person who bought the time was going to catch a flight to Taiwan. Made them 2 copies, just in case. I literally hit play, saw they were complete, hit rewind, and when the VCR did that finish click, the guy walked in, his anxious expression melting into a smile as I handed him the tapes!
By the way, that experience, about 5 years into my serious computing phase, was when I really committed to UNIX. It was so damn powerful compared to Windows at the time. Still is, but it is harder to tell these days.
I had the best computing experiences ever on SGI machines! Learned a ton along the way, and I miss all that big time sometimes.
My UNIX knowledge ebbs and flows, depending on where project work may take me. But what I know of UNIX and Linux at any given time is more than enough to kick ass and get shit done.
Further, most of that has remained useful without too many changes.
Take Linux and the body of Open Code we have today and it is a lot of great software offering up a ton of capability to anyone who bothers to load it up and just start using it.
Nothing compares. Don't get me wrong here. Windows and MacOS are really good now, but they were not back then when it really mattered.
What I like most is not having to constantly remap skills as much as I sometimes find myself doing in Windows and to a fair degree now and then in Apple land.
First thing I do on new hardware is spin up a Linux, then install Windows. I mostly leave Macs on MacOS, though I can't wait to run Linux on my M1. Just need some time...
Then I go get all my open code, settle all that in, and then finally whatever closed thing I gotta use gets set up and I am ready to go. Until recently, I was proud to have never purchased Microsoft OS or App licenses. Happy to do the work as long as someone else clicks the EULA and pays the money...
This time around I bought Win 10/11, though I am gonna try and avoid 11, and permanent licenses to Office because those may go away.
These things all come from SGI influences.
Edit: Years later a friend brought me an Indigo Elan! Beautiful machine running IRIX 5.3. 30MHz R8000, 256MB of RAM.
On a whim, I compiled amp, an optimized MP3 decoder that could play back in real time or write the output to a WAV or AIFF file.
That 30MHz machine could play back up to 256kbps Joint Stereo files without stuttering, at 90 percent CPU utilization! Yes, that left just enough to do it from an xterm on a logged-in desktop, reading from a shared file repository over NFS. Basically full utilization doing that! But hey, quality stereo MP3 playback from a 30MHz workstation was sweet! Really showed the quality of the systems. That particular box was produced in the very early 90's, I believe.
It is no surprise to see Nvidia doing what it does today. SGI had many of the best and brightest in the building and often funded what it took to get the most out of those people.
Heh, the shared memory design in the O2 workstation could throw moving video onto moving surfaces with relative ease, and do so before 2000. Heck, it could do the same with a 700MB image.
The Apple M1 is a shared memory design that warms my heart. I know they are up to, what, M4? M5? I am happy with my M1 MacBook Air for now.
Sorry for the ramble. Sometimes an SGI topic gets me remembering so many damn good and fun things...
If you made it this far, thank you for reading! Please put any cool IRIX experiences you are having or had here where I can see 'em.
> Please put any cool IRIX experiences you are having or had here where I can see 'em.
My mind was blown when I started university in 1993. We had rooms filled with brand new Indys. We had Sun boxes running Solaris. We had Snakes (HP 9000) running HP-UX.
All as one heterogeneous system. You logged in to any workstation running X. Everything was seamlessly connected. One login and everything just worked.
That newfangled Mosaic was then also primarily used to view scantily clad women. Serious students found serious information on Gopher. The geeks of the geeks wandered the MUDs.
Mosaic ran best (only?) on the Sun boxes. But through the power of X you could use it anywhere.
I saw the future. I thought that was how things were supposed to work. I have never experienced such a smooth and well-running system since. So nostalgia hits hard with any mention of the old boxes.
The only thing I do not remember seeing at the time was an IBM RS/6000 running AIX. And with full access to everything for a first-year student. Glory days.
This was at DAIMI in Aarhus. First-year students at DIKU in Copenhagen were envious, as they had to share 1 HP workstation among around 20 terminals.
> Some had users on them who never knew I had unloaded whole subsystems they were not using, to make room for the renderer to be loaded and work. I would kick it off and then renice the process priority low enough to mooch every cycle the user did not need. Then, when done, I put it all back how I found it, most users none the wiser. Our sysadmin, who was training me to do systems work, loved it and spent a fair amount of time looking at the various boxes and how they performed under the high loads I subjected the ones without active users to.
Huh. That would be an interesting case of load testing.
I remember getting an Indigo2 Max Impact that had been in a fire (smoke damage) and restoring it. Very cool to own a workstation that was worth more than my car :-)
The IRIX 4Dwm desktop can still hold its own today.
The O2 workstation was very clever in how easy all the components were to swap out.