For those who don't recognize the title, this is a reference to the classic:
https://www.goodreads.com/book/show/7090.The_Soul_of_a_New_M...
which was one of the first computer books I ever read --- I believe in an abbreviated form in _Reader's Digest_ or in a condensed version published by them (can anyone confirm that?)
EDIT: or, maybe I got a copy from a book club --- if not that, must have gotten it from the local college after prevailing upon a parent to drive me 26 miles to a nearby town....
“I am going to a commune in Vermont and will [In my mind I've always heard a 'henceforth' inserted here for some reason] deal with no unit of time shorter than a season”
One of my favorite quotes in the book - when an overworked engineer resigns from his job at DG. The engineer, coming off a death march, leaves behind a note on his terminal as his letter of resignation. The incident occurs during a period when the microcode and logic were glitching at the nanosecond level.
He didn't join a commune though, still working [1].
His resume doesn't really indicate he didn't join a commune. Leaving Data General in 1979 and joining ComputerVision in 1979 wouldn't preclude joining a commune for a season. However, a letter to the editor in 1981 [1] provides evidence that he left Data General specifically to work for ComputerVision, with both events happening in Spring.
[1] https://www.nybooks.com/articles/1982/02/18/man-and-supermin...
There is a paper on the ComputerVision design in an AMD handbook from 1986 [1] (big PDF).
[1] http://www.mirrorservice.org/sites/www.bitsavers.org/compone...
Wow, I love that web page / resume and that era of HTML authorship. Brings back memories.
I don't know about an abridged version but it's one of the best books about product development ever written. I actually dotted-line reported to Tom West at one point, though a fair bit after the events of "the book." (Showstopper--about Windows NT--is the other book I'd recommend from a fairly similar era from the perspective of today.)
Thanks for sharing this. I forgot to add the link in the original post.
I also highly recommend the TV show Halt and Catch Fire. It's not related to the book but very similar spiritually.
The Halt and Catch Fire Syllabus[0] has a lot of awesome content worth checking out as well.
[0] https://bits.ashleyblewer.com/halt-and-catch-fire-syllabus/
While far from perfect, Halt and Catch Fire definitely captured a lot of the spirit of the early PC industry at about the same time (early 80s).
I'm about 75% through the audiobook, and it's absolutely fantastic.
The most surprising thing so far is how advanced the hardware was. I wasn't expecting to hear about pipelining, branch prediction, SIMD, microcode, instruction and data caches, etc. in the context of an early-80s minicomputer.
Yes, though that stuff cost big bucks back then. PCs were a big step backward for a long time.
Another great Kidder book is "House", which applies the same perceptive profiling to the people involved in building a single house -- the couple, the architect, and the construction workers. There's an argument that a lot of the best work is done not in pursuit of an external reward but because doing good work is itself rewarding, and one way of viewing these books is as immersive explorations of how that plays out.
Such a great book… should be required reading for anyone managing engineers.
The full-length version is a really good read.
Indeed, one of the more memorable set pieces in chapter 1:
"He traveled to a city, which was located, he would only say, somewhere in America. He walked into a building, just as though he belonged there, went down a hallway, and let himself quietly into a windowless room. The floor was torn up; a sort of trench filled with fat power cables traversed it. Along the far wall, at the end of the trench, stood a brand-new example of DEC’s VAX, enclosed in several large cabinets that vaguely resembled refrigerators. But to West’s surprise, one of the cabinets stood open and a man with tools was standing in front of it. A technician from DEC, still installing the machine, West figured.
Although West’s purposes were not illegal, they were sly, and he had no intention of embarrassing the friend who had given him permission to visit this room. If the technician had asked West to identify himself, West would not have lied, and he wouldn’t have answered the question either. But the moment went by. The technician didn’t inquire. West stood around and watched him work, and in a little while, the technician packed up his tools and left.
Then West closed the door, went back across the room to the computer, which was now all but fully assembled, and began to take it apart.
The cabinet he opened contained the VAX’s Central Processing Unit, known as the CPU—the heart of the physical machine. In the VAX, twenty-seven printed-circuit boards, arranged like books on a shelf, made up this thing of things. West spent most of the rest of the morning pulling out boards; he’d examine each one, then put it back.
...He examined the outside of the VAX’s chips—some had numbers on them that were like familiar names to him—and he counted the various types and the quantities of each. Later on, he looked at other pieces of the machine. He identified them generally too. He did more counting. And when he was all done, he added everything together and decided that it probably cost $22,500 to manufacture the essential hardware that comprised a VAX (which DEC was selling for somewhat more than $100,000). He left the machine exactly as he had found it."
That reminds me of the US, during the Cold War, intercepting the Soviet "Lunik" satellite, in transit by truck while it was being exhibited in the US(!), and overnight completely disassembling/reassembling it before letting it go on its way with the Soviets none the wiser.
There is a "Where are they now" post in Wired magazine about the team 20 years later: https://www.wired.com/2000/12/eagleteam/
It's also the best non-fiction book I've ever read. And won the Pulitzer, I think.
I like the idea of identifying ‘bit flips’ in papers, which are (if I am following along) statements which precipitate or acknowledge a paradigm shift.
Perhaps the most important bit-flip of this paper’s time (and perhaps first fully realized in it) might be summarized as ‘instructions are data.’
This got me thinking: today, we are going through a bit-flip that might be seen as a follow-on to the above: after von Neumann, programs were seen to be data, but different from problem/input data, in that the result/output depends on the latter, but only through channels explicitly set out by the programmer in the program.
This is still true with machine learning, but to conclude that an LLM is just another program would miss something significant, I think - it is training, not programming, that is responsible for their significant features and capabilities. A computer programmed with an untrained LLM is more closely analogous to an unprogrammed von Neumann computer than it is to one running any program from the 20th century (to pick a conservative tipping point.)
One could argue that, with things like microcode and virtual machines, this has been going on for a long time, but again, I personally feel that this view is missing something important - but only time will tell, just as with the von Neumann paper.
This view puts a spin on the quote from Leslie Lamport in the prologue: maybe the future of a significant part of computing will be more like biology than logic?
This is essentially the paradigm that Karpathy deemed "Software 2.0" in a 2017 article:
> instructions are data.
is an insight I've often seen attributed to von Neumann, but isn't it just the basic idea of a universal Turing machine? - one basic machine whose data encode the instructions+data of an arbitrary machine. What was von Neumann's innovation here?
Turing machines' instructions are different from their data; a Turing machine has a number of states that it flips between depending on what it reads from the tape.
GP is referring to universal Turing machines (those that can emulate an arbitrary Turing machine) and not a specific Turing machine.
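To make that concrete, here is a minimal, hypothetical sketch of the universal-machine idea: one fixed interpreter loop, with the "program" of an arbitrary Turing machine supplied as plain data. The interpreter and the sample table below are illustrative assumptions, not anything from the paper or the thread.

```python
def run_tm(table, tape, state="start", blank="_", max_steps=1000):
    """Run a Turing machine whose 'program' is just a data structure.

    table maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), 0 (stay), or +1 (right). Halts in state 'halt'.
    """
    cells = dict(enumerate(tape))  # sparse tape
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        new_symbol, move, state = table[(state, symbol)]
        cells[pos] = new_symbol
        pos += move
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

# The "program" is ordinary data: this table increments a binary number,
# with the head starting on the most significant bit.
increment = {
    ("start", "0"): ("0", +1, "start"),  # scan right to the end of the number
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),  # step back onto the last digit
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1",  0, "halt"),   # 0 + carry -> 1, done
    ("carry", "_"): ("1",  0, "halt"),   # ran off the left end: new leading 1
}

print(run_tm(increment, "1011"))  # prints 1100 (11 + 1 = 12)
```

The same `run_tm` loop runs any machine you hand it as data, which is the sense in which a universal machine already treats instructions as data.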
And the next paradigm shift after that will probably be "programs' own outputs are their input"
That sounds like a feedback control loop. That's the basis of a programming language I'm writing ^_^
> When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. And since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.
For some time I've been drawing parallels between ML/AI and how biology "solves problems" - evolution. And I am also a bit disappointed that the future might lead us in a direction other than the mathematical elegance of solving problems.
I really loved that quote in the article. It’s such an excellent description of the “computer voodoo“ users come up with to explain what is, to them, unexplainable about computers.
You’re right though, we’re basically there with AI/ML, aren't we? I mean I guess we know why it does the things it does in general, but the specific “reasoning” on any single question is pretty much unanswerable.
Are you able to share what these parallels are that you've drawn? I've always thought them to be slightly different processes.
the paper: https://www.ias.edu/sites/default/files/library/Prelim_Disc_...
> 6.8.5 ... There should be some means by which the computer can signal to the operator when a computation has been concluded. Hence an order is needed ... to flash a light or ring a bell.
(later operators would discover that an AM radio placed near their CPU would also provide an audible indication of process status)
Similarly, on-board soundcards are notorious for this. Even in my relatively recent laptop I can judge activity by the noise in my headphones if I use the built-in soundcard, thanks to electrical interference. Especially on one motherboard I had, I could relatively accurately monitor CPU activity this way.
There's also audible noise that can sometimes be heard from singing capacitors[1] and coil whine[2], as mentioned in a sibling comment.
[1]: https://product.tdk.com/system/files/contents/faq/capacitors...
[2]: https://en.wikipedia.org/wiki/Electromagnetically_induced_ac...
I jokingly refer to my GPU's coil whine as a HDD emulator as it sounds like a spinning disk to me.
Depending on workload my mainboard's VRMs (I think) sound like birds chirping.
The voltage regulators are really just buck converters[1], one per phase, and so contain both capacitors and inductors in a highly dynamic setting.
Thus they're very prone to the effects mentioned.
I have a Dell laptop that makes a high-pitched whine whenever I'm scrolling. Really strange. GeForce RTX 2060.
The connection to scrolling is usually increased activity on the bus. CPU feeding the graphics card new bitmaps is just one source of that traffic. In a previous job, I could tell whether a 3-machine networking benchmark was still running based on the sound.
On my laptop the same thing happens, but only with one particular power supply connected. The noise itself definitely comes from the laptop though.
There was a Lenovo docking station compatible with the T420 (around 2012). The headphone jack had shrieky noise on it whenever the Gigabit Ethernet connection got going. Took a while to figure that one out.
The other two docking stations were built differently and had the wires run somewhere else.
EDIT: Typo
I have experienced two or three computers in my life (at least one of them a laptop) that produced barely audible sound when CPU activity changed. The most memorable was a PowerBook G4 with a touchpad, and as I slid my finger slowly across the pad in a quiet enough room, I could hear kind of a tick each time the pointer moved a pixel.
Still happens to me when using software that is GPU intensive, like Blender. When I drag a slider, I hear the buzzing of the GPU.
That's not what's being described in this thread. You're referring to fan noise; the other comments are discussing electrical interference.
Not necessarily. My GPU audibly sings in the kHz range whenever it comes off idle, a long time before the fans actually start up. It could be electrical noise from the fan drivers and/or motor coils if they're running at low speed, but it's definitely not the sound of air movement. And if you're e.g. processing photos on the GPU, you can hear it starting and stopping, exactly synced to the progress of each photo in the UI.
This was very common with the first sound cards (at least on the PC). As I remember, it was only with the Creative Sound Blaster that the era of continuously hearing the sounds of the "machine soul" finally ended.
My favourite was troubleshooting an already ancient original IBM XT in 1992... The operator was complaining about the screen giving her headaches.
Sure enough - when I went onsite to assist, that "green" CRT was so incredibly wavy, I could not understand how she could do her job at all. First thing I tried was moving it away from the wall, in case I had to unplug it to replace it.
It stopped shimmering and shifting immediately. Moved it back - it started again.
That's when we realised that her desk was against a wall hiding major electrical connectivity to the regional Bell Canada switching data-centre on the floors above her.
I asked politely if she could have her desk moved - but no... that was not an option...
... So - I built my first (but not last) solution using some tin-foil and a cardboard box that covered the back and sides of her monitor - allowing for airflow...
It was ugly - but it worked, we never heard from her again.
My 2nd favourite was with GSM mobile devices - and my car radio - inevitably immediately before getting a call, or TXT message (or email on my Blackberry), if I was in my car and had the radio going, I would get a little "dit-dit-dit" preamble and know that something was incoming...
(hahahaha - I read this and realise that this is the new "old man story", and see what the next 20-30 years of my life will be, boring the younglings with ancient tales of tech uselessness...)
> My 2nd favourite was with GSM mobile devices - and my car radio - inevitably immediately before getting a call, or TXT message (or email on my Blackberry), if I was in my car and had the radio going, I would get a little "dit-dit-dit" preamble and know that something was incoming...
I remember this happening all the time in meetings. Every few minutes, the conversation would stop because a phone call was coming in and all the conference room phones started buzzing. One of those things that just fades away so you don't notice it coming to an end.
The “GSM chirp”. I never got to hear it much because my family happened to use CDMA phones in that era, but I do remember hearing it a few times. I know it was well known.
I haven’t thought about that in a long time.
At a certain large tech company, at the peak of GSM data-capable phones (e.g. Palm Treo, Blackberry), it was accepted practice for everyone to turn off data or leave their phone on a side table, due to the conference speakerphones on the tables amplifying the data interference.
Also, during this era I was on a flight and the pilot came over the PA right before pushing from the gate saying exasperatedly "Please turn your damn phones off!" (I assumed the same RF noise was leaking into his then-unshielded headset).
I had an IBM in an electrical generating station that had a HUGE 480-volt 3-phase feed not far from the monitor, in the next room. The display would swirl about 1/2 of an inch at quite a clip.
The solution I picked was to put the machine in Text/Graphics mode (instead of normal character rom text mode, this was back in the MS-DOS days), so the vertical sync rate then matched up with the swirling EM fields, and there was almost zero apparent motion.
Many years ago, I was a user of an IBM 1130 computer system. It filled a small room, with (as I recall) 16K 16-bit words and a 5MB disk drive, which was quite noisy. I would feed in my Fortran program, then head down to the other end of the building to get coffee. The computer would think about the program for a while, and then start writing the object file. This was noisy enough that I'd hear it, and head back to the computer room just in time to see the printer burst into action.
(Please feel free to reference the Monty Python “Four Yorkshiremen” sketch. But this really happened.)
Ahh… the good old days. Usually when I walked all the way to the computer center to pick up my printout, it would say “syntax error on line 1”.
Then I had to walk all the way back to the terminal room, and was a lot more careful the next time.
Some computers ship with an implementation of this using inductor coils which whine depending on the load :D
I remember doing this back on the IBM 1130. We didn't see that there was any value in using it as a monitor.
Much of my earliest computer experience was on an 1130 clone, the General Automation 18/30. Never did the AM radio thing, but you could see what phase the Fortran compiler was up to just by the general look of the blinkenlights.
This is a great read. von Neumann was pivotal in the design of the architecture; I think his contribution is way underappreciated - did you know he also wrote a terrific book, The Computer and the Brain, and coined the term the Singularity? https://onepercentrule.substack.com/p/the-architect-of-tomor...
I see this in wikipedia:
"The attribution of the invention of the architecture to von Neumann is controversial, not least because Eckert and Mauchly had done a lot of the required design work and claim to have had the idea for stored programs long before discussing the ideas with von Neumann and Herman Goldstine[3]"
Yes, von Neumann was tasked with doing a write-up of what Eckert and Mauchly were up to in the course of their contract building the ENIAC/EDVAC. It was meant to be an internal memo. Goldstine leaked the memo and the ideas inside were attributed to the author, von Neumann. This prevented any of the work from being patented, btw, since the memo served as prior art.
The events are covered in great detail in Jean Jennings Bartik's autobiography "Pioneer Programmer", according to her von Neumann really wasn't that instrumental to this particular project, nor did he mean to take credit for things -- it was others that were big fans of his that hyped up his accomplishments.
I attended a lecture by Mauchly's son, Bill, "History of the ENIAC", in which he explains how ENIAC was a dataflow computer that, when it was just switches and patch cables, could do operations in parallel. There's a DH Lehmer quote: "This was a highly parallel machine, before von Neumann spoiled it." https://youtu.be/EcWsNdyl264
I think the stored-program concept was also present internally at IBM around those times, although as usual no single person got the credit for that: https://en.wikipedia.org/wiki/IBM_SSEC
> We should not accept this. If we don’t, then the future of computing will belong to biology, not logic.
In this quote from Leslie Lamport, I took "If we don't" to mean "If we don't accept this". But the rest of the sentence made no sense then.
Could it be a really awkward way to say: If we don't "not accept this", i.e. if we accept this?
I believe it was probably a typo, and one that existed in the original paper. The tricky thing about it is, a person might not notice it until it was pointed out. (I didn't - but I verified afterward that it's in Lamport's paper)
Maybe an earlier draft said "We should reject this. If we don't [...]", and was edited to be less harsh.
I think you’re right, it means something like “if we don’t refuse to accept this”.
That is rather awkward isn’t it.
I really liked that whole block quote though.
If you've read the article, and thought to yourself... I wonder what it was like back then, and if there might be some other direction it could have gone, oh boy do I have a story (and opportunity) for you!
It's my birthday today (61)... I'm up early to get a tooth pulled, and I read this wonderful story, and all I have is a tangent I hope some of you think is interesting. It would be nice if someone who isn't brain-foggy could run with the idea and make the billion dollars or so I think can be wrung out of it. You get a bowl of spaghetti that could contain the key to petaflops, secure computing, and a new universal solvent of computing like the Turing machine, as an instructional model.
The BitGrid is an FPGA without all the fancy routing hardware, that being replaced with a grid of latches... in fact, the whole chip would consist almost entirely of D flip-flops and 16:1 multiplexers. (I lack the funds to do a TinyTapeout, but started going there, should the money appear.)
All the computation happens in cells that are 4-bit-in, 4-bit-out look-up tables. (Mostly so signals can cross without XOR tricks, etc.) For the times you need a chunk of RAM, the closest I got is using one of the tables as a 16-bit-wide shift register, which I've decided to call isolinear memory[6]. (A rough emulation sketch of the cell model follows the links below.)
You can try the online React emulator I'm still working on [1], and see the source[2]. Or the older one I wrote in Pascal, that's MUCH faster[3]. There's a writeup from someone else about it as an Esoteric Language[4]. I've even written a blog about it over the years.[5] It's all out in public, for decades up to the present... so it should be easy to contest any patents, and keep it fun.
I'm sorry it's all a big incoherent mess.... completely replacing an architecture is freaking hard, especially when confronted with 79 years of optimization in another direction. I do think it's possible to target it with LLVM, if you're feeling frisky.
[1] https://mikewarot.github.io/bitgrid-react-app/
[2] https://github.com/mikewarot/bitgrid-react-app
[3] https://github.com/mikewarot/Bitgrid
[4] https://esolangs.org/wiki/Bitgrid
[5] https://bitgrid.blogspot.com/
[6] https://bitgrid.blogspot.com/2024/09/bitgrid-and-isolinear-m...
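For anyone just trying to picture the cell model, here's a rough, hypothetical sketch of a grid of latched 4-bit-in / 4-bit-out LUT cells. It is not one of the real emulators linked above; the bit ordering, neighbour wiring, and edge handling here are arbitrary simplifications.

```python
class BitGrid:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # Each cell: a 16-entry table mapping a 4-bit input to a 4-bit output,
        # initialised to "output zero for everything".
        self.luts = [[[0] * 16 for _ in range(cols)] for _ in range(rows)]
        # Latched outputs: one 4-bit value per cell, one bit per direction (N, E, S, W).
        self.out = [[0] * cols for _ in range(rows)]

    def program_cell(self, r, c, table):
        """Program one cell with a 16-entry lookup table (the 'bitstream')."""
        self.luts[r][c] = list(table)

    def _input_of(self, r, c):
        """Gather the 4-bit input from the neighbours' latched outputs.
        Assumed bit order: from north, east, south, west; off-grid edges read 0."""
        def bit(rr, cc, direction):
            if 0 <= rr < self.rows and 0 <= cc < self.cols:
                return (self.out[rr][cc] >> direction) & 1
            return 0
        N, E, S, W = 0, 1, 2, 3
        return (bit(r - 1, c, S)          # north neighbour's south-facing bit
                | bit(r, c + 1, W) << 1   # east neighbour's west-facing bit
                | bit(r + 1, c, N) << 2   # south neighbour's north-facing bit
                | bit(r, c - 1, E) << 3)  # west neighbour's east-facing bit

    def tick(self):
        """One clock: every cell looks up its new 4-bit output simultaneously."""
        new = [[self.luts[r][c][self._input_of(r, c)] for c in range(self.cols)]
               for r in range(self.rows)]
        self.out = new

# Example: program cell (0,0) as a "wire" that copies its west input bit to its
# east-facing output bit, then clock the grid once. Purely illustrative.
g = BitGrid(2, 2)
g.program_cell(0, 0, [((i >> 3) & 1) << 1 for i in range(16)])
g.tick()
```

Because every output is latched, a result moves one cell per tick; that is the latency-for-throughput trade discussed in the replies.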
Please also note that the ENIAC slowed down by a factor of 6 (according to Wikipedia) after its conversion to von Neumann's architecture. BitGrid is slower than an FPGA, but far more flexible for general-purpose computation.
I don't like to rain on anyone's parade, but this is at least the third time I've seen one of these comments, and if it were me I'd want someone to point out a drawback that these young whippersnappers might be too respectful to mention. Isn't it a truism in modern computer architecture that computation is cheap whereas communication is expensive? That "fancy routing hardware" is what mitigates this problem as far as possible by enabling signals to propagate between units in the same clock domain within a single clock period. Your system makes the problem worse by requiring a number of clock periods at best directly proportional to their physical separation, and worse depending on the layout. If I were trying to answer this criticism, I'd start by looking up systolic arrays (another great idea that never went anywhere) and finding out what applications were suited to them if any.
Not sure why you're saying that systolic arrays never went anywhere. They're widely used for matrix operations in many linear algebra accelerators and tensor units (yes, largely in research), but they are literally the core of Google's TPU [1] and AWS EC2 Inf1 instances [2].
[1] https://cloud.google.com/blog/products/ai-machine-learning/a...
[2] https://aws.amazon.com/blogs/aws/amazon-ecs-now-supports-ec2...
Sorry to burst your bubble but modern FPGAs are already designed this way (latches paired with muxes and LUTs across the lattice). Take a look at the specs for a Xilinx Spartan variant. You still need the routing bits because clock skew and propagation delay are real problems in large circuits.
Yes, I know about clock skew and FPGAs. They are an optimization for latency, which can be quite substantial and make/break applications.
The goal of an FPGA is to get a signal through the chip in as few nanoseconds as possible. It would be insane to rip out that hardware in that case.
However.... I care about throughput, and latency usually doesn't matter much. In many cases, you could tolerate startup delays in the milliseconds. Because everything is latched, clock skew isn't really an issue. All of the lines carrying data are short, VERY short, thus lower capacitance. I believe it'll be possible to bring the clock rates up into the gigahertz range. I think it'll be possible to push 500 MHz on the Skywater 130 process that Tiny Tapeout uses. (Not outside of the chip, of course, but internally)
[edit/append]
Once you've got a working BitGrid chip rated for a given clock rate, it doesn't matter how you program it: because all logic is latched, the clock skew will always be within bounds. With an FPGA, you've got to run simulations, adjust timings, etc. and wait for things to compile and optimize to fit the chip; this can sometimes take a day.
The goal of an FPGA is cheap prototyping, and/or applications whose volumes aren't large enough for an ASIC to become cost effective.
Low-latency links are just a byproduct, added to improve FPGA performance (which is definitely bad compared to an ASIC); you don't have to use them in your design.
If you make a working FPGA prototype, it would not be hard to convert it to an ASIC.
Happy birthday!
Latency is only part of the problem. Unmanaged clock skew leads to race conditions, dramatically impacting circuit behaviour and reliability. That's the reason the industry overwhelmingly uses synchronous circuit designs and clocks to trigger transitions.
BTW, have you heard about asynchronous FPGA programming (and asynchronous HDL design)? It was a very popular subject some years ago.
Happy birthday!
I have no idea what you’re talking about but I’m gonna follow these links.
There is a terrific interview with Tracy Kidder and Tom West from the Computer Museum.
> 10^5 flip-flops is about 12.2KiB of memory
It is a reasonable amount for a CISC CPU cache, as it is normal for a CISC to have a small number of registers (a RISC is definitely a multiple-register machine, as memory operations are expensive under the RISC paradigm).
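(For the arithmetic behind the quoted figure, assuming one bit of storage per flip-flop: 10^5 bits ÷ 8 = 12,500 bytes ≈ 12.2 KiB.)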
> for CISC normal to have small number of registers
A very common fallacy. For CISC it's normal to have a small number of programmer-visible (architectural) registers. But nothing stops them from having large register files with a hundred or two physical registers that are renamed during execution. IBM has been doing this sort of thing since the late sixties.
Modern x64 CPUs have, IIRC, 180 integer general-purpose registers, and I imagine this number can be increased further: say, AMD's Am29000 from 1985 had 192 registers... and honestly, having more than 16 programmer-visible registers is not that useful when 99.999% of the code is written in high-level languages with separate-compilation schemes: any function call requires you to spill all your live data from the registers to memory; having call-preserved registers means that you can do such a spill only once, at the start of the outer function, but it still has to happen. And outside of function calls, in real-world programs most of the expressions that use only built-in operators either don't require more than 4 registers to calculate, or can be done much more efficiently using vector/SIMD instructions and registers anyway.
> For CISC it's normal to have small number of programmer-visible (architectural) registers
You compare apples with carrots.
The question is what you call CISC/RISC. If by CISC you mean early single-chip stubs like the 8080, or even x86 before AMD64, that is one history; but if CISC means the IBM /360, /370, /390 and the DEC PDP-11 or later, that is a totally different drama. The same goes for RISC.
What is really important is that genuine CISCs typically have a human-friendly instruction set, and even /360 assembler is comparable to the C language; many RISCs are load/store architectures without rich memory addressing modes, so by definition they are tied to working inside registers. Also, when you compare a similar level of manufacturing and a similar timeline, you will easily see that the 68k series has just 8 registers, while RISCs of the same era have at least 16.
> You compare apples with carrots
This analogy, I swear. Yes, I do, and this is an entirely reasonable comparison. Both are foodstuffs, and you can make pies out of both. But which pie would be healthier, and/or more nutritious? Now take the relative prices (or comparative difficulty of growing your own) of the ingredients into account.
> 68k series have just 8 registers
Didn't they have 16?
The 68k claimed to be a 32-bit CPU, so it has 8 32-bit registers. Comparing it with a 16-bit CPU is just apples vs carrots.
The 68000 had sixteen 32-bit registers, but no 32-bit ALU; it used three 16-bit ALUs instead. This was rectified in the 68020, which had a fully 32-bit ALU.
And Western Electric 32000, a CISC processor from the same period, also had 16 32-bit registers.
> 68000 had sixteen 32-bit registers
That is your own opinion. Wikipedia has a different definition.
> Address storage and computation uses 32 bits internally; however, the 8 high-order address bits are ignored due to the physical lack of device pins.
> The CPU has eight 32-bit general-purpose data registers (D0-D7), and eight address registers (A0-A7).
And to the best of my knowledge, moving data from a data register to an address register and back did not destroy the top 8 bits.
> moving data from a data register to an address register and back did not destroy the top 8 bits
Yes, you could use address registers for data storage, but you cannot do general computations on them, and because of this the wiki definition doesn't count the address registers in the list of general-purpose registers.
I don’t think that’s correct. Ideally, your working set fits into the cache, and that doesn’t get smaller or larger when you use a RISC CPU.
(There may be an effect because of instruction density, but that’s not very big, and will have less of an effect the larger your working set)
> Ideally, your working set fits into the cache
Ideally, your working set fits into the registers, and that was the biggest winning feature of RISC: having relatively simple hardware (compared to genuine CISC, like the IBM 360 mainframes), they could spend significant chip area on more registers.
Register-register operations definitely avoid the memory wall, so the data cache becomes insignificant.
Sure, if we compare comparable things, not apples vs carrots.
If you are unlucky and your working set doesn't fit into the registers, then yes, you will have to deal with the memory bus and the cache hierarchy. But it is best if you just don't touch memory at all, doing all the processing in your code in registers.
The main topic for this blog is a paper that describes what is now more commonly known as the "von Neumann architecture". This architecture is used in almost every computing device on Earth.
I'd argue that MCUs, the highest-volume of which are largely based on the Harvard architecture, far outnumber von Neumann machines.
I just realized that the word for the musical instrument and the body part is one and the same in English: organ. (In Dutch they're called orgel and orgaan respectively.) Which of these meanings is being referred to in the article? To me both could make sense.
The definition of organ in this article is closest to the body part definition. The use of organ here relies on a less common definition: roughly a constituent part of some larger whole that performs a specific function.
Not just a specific function, but a necessary function. Usually, without an organ, the organism won't survive. So your fingers aren't organs, because you can survive (with some difficulty) without them, but without your stomach or heart or skin, you'll die.
I think that’s a great summary.
As a native English speaker, it was understandable but feels rather foreign because you never hear parts of computers referred to that way these days.