At VCFMW last month, my table was adjacent to Lorraine and her friends.
Ben Heck walked by during setup, and asked me what it was. I was clueless, so we started making educated guesses. The Amiga poster was a start.
I do wire-wrap. This thing is a marvel to behold. It is quite orderly, but could have used colors more effectively.
The three units implement the VLSI chips and the main board of the Amiga that was first shown at CES (I believe).
Each VLSI is a stack of PCBs such as you might get from Vector, with columns of pads for ICs in wire-wrap sockets, bus bars, and edge areas with mounting holes for connectors. The layers are connected by ribbon cables.
(They are not called breadboards!)
Wire wrap is a superior technology. There are no cold solder joints. They are gas-tight.
It is not hard to debug, if you follow some rules and don't make a spaghetti bird's nest.
Such workmanship can be seen on minicomputers of the early 1970s.
Whole computers were made by wire-wrap around MSI chips. My wire-wrapped PDP-11/10 functioned perfectly through the 1990s.
Recently, I implemented a microcomputer design in wire-wrap. That was enjoyable!
My design was captured in KiCad and laid out as a PCB, which I then translated to perf-board and wire-wrap sockets.
This approach is perfect for prototyping, as you can simply add new blocks.
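In case it's useful to anyone trying the same workflow, here is a minimal sketch of the kind of script I mean for going from a netlist to a point-to-point wrap list. It is not my actual tooling: the input format shown in the comment is made up, and a real KiCad netlist export would need its own parsing.

    # toy netlist -> wire-wrap running list
    # assumed input, one net per line: "NETNAME U1.3 U2.7 U5.1"
    import sys

    def wrap_list(lines):
        for line in lines:
            parts = line.split()
            if len(parts) < 3:              # need a net name plus at least two pins
                continue
            net, pins = parts[0], parts[1:]
            # daisy-chain the net: one wire from each pin to the next
            for a, b in zip(pins, pins[1:]):
                yield f"{net:<12} {a:>8} -> {b}"

    if __name__ == "__main__":
        for row in wrap_list(sys.stdin):
            print(row)

Working net by net from a sorted list like that is also what keeps the wiring tidy instead of a bird's nest.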
I probably still have my wire-wrap gun in storage somewhere, but I would much rather use an FPGA. If you don't mind answering: why do you still use this tech instead of FPGAs?
As for cold solder joints: no, they don't do that. But when you start making modifications you have to be extremely careful not to cause any damage because tracing a loose joint on a wire wrap board is the stuff of nightmares.
For a clumsy person like me, wire-wrap seems even harder than the usual stuff around soldering. That said, I'll look into it more because it seems incredibly interesting.
Also, it's nice hearing about Ben Heck in the wild; when I first started fiddling with Arduino I watched his YouTube videos on the element14 channel, though from what I remember he's long since left. Another channel I remember fondly is EEVblog; even though both were about electronics, the content itself didn't have much overlap, if I recall correctly.
FWIW, when wire wrapping you can get handy little hollow tools. You feed the wire into a hole in the tool, drop the tool over the pin, and just spin the tool to wrap the wire round the pin. It's all very neat and tidy and requires pretty minimal hand-eye coordination to get it looking nice.
I bought a tiny little one of those tools a while ago when doing some Raspberry Pi prototyping. It makes it easy to attach a wire to the GPIO header if it's not a Dupont lead/wire.
The display was in a booth replicating the 1984 Winter CES booth. Here is a Creative Computing article from the 1984 event.
https://www.atarimagazines.com/creative/v10n4/150_Amiga_Lorr...
Forebodingly, the article signs off with "Amiga, please don't join the sorrowful ranks that have wasted technological superiority through marketing muck-ups."
Well, lucky Amiga, it was wasted by executive muck-ups rather than mere marketing foibles.
The marketing managed to miss most of the time, too. (Mostly because of exec-level misguidance, though.)
What a godawful mess that must have been to debug. I've never used wirewrap, it looks awful to me.
I am trying to imagine what it would have been like to design such a system using only pencil and paper. Going from block diagram to the lowest level, just on big sheets of paper... the pencil sharpeners must have been emptied twice a day.
I made wirewrap boards back in the early 90s and it was extremely tedious, both following the netlist instructions to do the initial work, and then the inevitable debugging when it didn't work. Of course you can have two kinds of bugs - either you made a mistake doing the wirewrap, or your design has a problem! For the complexity of the boards I did, I think it took 2-3 days to do the initial wirewrap, and then days or weeks to debug and get it working.
It was, however, both cheaper and easier than doing a prototype PCB. For that, I'd have to use the institution's darkroom with their flatbed photoplotter connected to a PDP-something that you had to boot from reel-to-reel tape. The plot happened overnight, and then had to be developed next day in the darkroom, and then if you were lucky you'd have transparencies of each layer of the PCB that you could send off to a local company who would etch you a single PCB for a lot of money in a few weeks. Even that wasn't trouble-free, since PCBs can have manufacturing faults, or you could screw up when soldering the components to the board, or your design could be wrong.
It was very rare that I as the most junior person was allowed to go the PCB route. I think for my boards it happened only once on an ECL design that simply wouldn't have been possible with wirewrap. Although I was tasked with doing the transparencies for other team members. Since I was being paid only £40/week through a government benefits scheme, it was much cheaper to pay for my time than to pay an external company.
Also as the other reply says, I used CPLDs a lot which were much faster to iterate. With practice you could pull out the QFP package, put it in the programmer, recompile and upload the new logic, and put it back into the board in an hour. Luxury!
We never used pencil and paper (except for notes). The software for drawing schematics, laying out PCBs, making netlists, and compiling CPLDs was pretty advanced even then. Although all of it was horribly proprietary. No KiCAD for you.
Taught you to check everything in your design early and often.
The existence of wire wrap tells you a lot about how painfully tedious it was to lay out PCBs at the time. I did a couple of wire wrap boards, but eventually just started soldering wire-wrap wire to the sockets. By the early 90s it was faster to lay out a PCB and have it fabbed. Bonus: you could outsource that and use the 3-4 weeks to do less tedious things.
This is just around the time that programmable logic became readily available. It'd be much easier to iterate with that than wiring up logic gates. For the last 30 years you've been able to do all this debugging with simulations and then test using FPGAs.
Present day: I can fabricate a wire-wrap version from my PCB footprints, much faster than I can route the PCB!
With wire-wrap, you can route multiple traces between the same pins, or I like to neatly bundle a whole bus' worth.
I'm far more pleased with the results of my wire-wrap than with the quality of my SMT soldering once I get a PCB made.
I had a tutor for wire-wrap in the 1980s. But I'm self-taught in PCB routing, and I start it over at least 3 times.
When I was a kid helping my father with in-house electronics in the early 1980s, this meant using special pens and paper to lay the designs out onto boards, then doing chemical baths, drying the boards, and hoping for the best when starting to solder components on top.
Usually a mistake in any step meant throwing away the whole board and starting from scratch.
I'm old-young enough to be aware of the evolution of minicomputers implemented in MSI TTL with wire-wrap (1970) to VLSI integration (1975). Examples are the LSI-11 and TMS9900.
My first home-brew micro was done in 1987 using the Radio Shack hand-tool and an OK Industries' motorized wrap gun.
Now, I add CPLDs to my wire-wrap designs! Just like on an iterated PCB, you must lock the physical pins to functions.
Some DEC CPUs were wire-wrapped until VLSI became a thing. The KL-10 was notoriously an ECL wire-wrap design.
But they had a semi-automated machine to handle it. I don't think anyone mass-produced non-trivial numbers of wire-wrap boards by hand. It's fine for prototypes, but it's a very error-prone process and makes it difficult to handle noise, especially at fast clock speeds.
Never seen wire wrapped boards besides photos of this and maybe some other early micro. So of course I had to do a little search and one of the first results has Bil Herd from Commodore (Plus/4, C128...) explaining it.
Thanks for sharing that! Never got to see any pro wrapping.
By the time this stuff started, I'd started forgetting all the hobbyist hardware electronics I'd learned (thinking it would last) and had moved to software ... at right about the time that manufacturers stopped documenting their internals ... but while disassemblers still existed.
In retrospect, it seems almost nuts that appliances from the past came with not only a block diagram but a schematic. Especially now that everything is too complex to be documented, is kept undocumented "for your own good" (even from potential repair techs), and is locked down with DMCA-protected software.
I still sometimes use the old Scott amplifier my parents got 50 years ago, with the manual - and it has everything you need for a repair (besides a list of modern-day replacements for some of the parts, of course). Same with the ol' Amiga 500 in a box over at their house. A lot of tinkering you could scheme on your own without going online (my quartz oscillator overclocking replacement never materialized, but hey).
I love this, it’s like a holy relic. :D
I’m amazed someone preserved that! In whose ownership is it currently?
Dale Luck (from the original team) is preserving it, apparently.
I remember seeing old photos of the prototype. I assumed it was lost decades ago.
Wire wrap is/was an underrated prototyping technique prior to PCB automation. NASA flew missions with wire-wrap boards.
I had an old Kenwood amplifier for years that had wire wrap board to board connectors; it worked great.
I was there and it was glorious to watch. Beautiful to see an interesting part of history up close.
If they could build that then, imagine what Ben Eater could build today. ;)
When I was a kid I had a Dragon32 and my little brother had an Amiga 500. I thought it was so cool with the demos and the sound but he was always getting worms that spread via floppy disc.
Yes, that was a real problem. The Amiga would access disks every time they were inserted to identify them, but viruses used that same feature to spread: they would stay resident in memory (even across warm resets) and then write themselves onto the bootblock of every inserted floppy. Seeing the computer access every inserted disk was therefore normal, with no way to know whether it was reading or writing, so back then I built a small device that took the write pulse on the floppy port, stretched it a bit, and drove a beeper. If I inserted a floppy and it started beeping, that meant that floppy and the one I had read before it were infected, so I could narrow the suspect list and clean them. It was very effective; I built some for friends and also sold a few through local listings.
Yup, heh. And you'd infect all your floppies if you just warm booted, and put in the next game.
Flipping the read only tab on every floppy was the first thing I did, after my first infection.
There was a distinct floppy sound when the filesystem was updated after a write.
I noticed my first virus pretty quickly, and even though I couldn't remove it, I could disable it in some files that I couldn't reproduce (no internet back then, and I was on the wrong side of the Iron Curtain as a child).
Always blows my mind that the prototype was made with discrete components.
If you really take seriously what could have been done for a home computer by starting with fully integrated chips, it's actually insane.
Imagine if you had the Amiga chipset and combined it with a RISC-like chip, done in the late 70s with 3.5μm HMOS (like the 68k). The resulting system would be insane in terms of performance to cost; you could outperform minicomputers that cost 10-100x more.
An ARM2-like chip plus the complete Amiga chipset seem to have fewer transistors than a single 68k, so the price of such a system would be very low. And we can see that with the Amiga: what really blows my mind is how cheap the Amiga ended up being, an unbelievable achievement.
It seems the issue really was that the companies with the chip design knowledge and the finances to do that amount of work were not interested in making home computers/workstations. Workstations ended up being made by startups who didn't have the resources to do so much custom work. Apollo was a group split off from DEC because DEC was not interested in workstations. IBM was just too slow and couldn't really do product design, and we all know how they eventually got around that problem with the PC. Apple did try one ambitious VLSI chip for the Mac but didn't end up using it.
The split between computer companies and chip design companies was just too big to get the needed amount of integration, and there was clearly a lack of vision for what a home computer could be. Jobs' vision for the Macintosh went in the right direction, but really Jay Miner had the right vision, and he had it because he built a computer for himself. He wanted a home computer that was fast, had a proper operating system, and had enough media capability to run flight simulator software. Sadly, management most of the time wanted him to develop a console, and later, when they allowed a home computer, they didn't share his full vision.
But then actually pulling this vision off, a multi-chip custom design with very few resources, is just an amazing achievement. And many of the people didn't even have that much knowledge of chip design; there was a lot of competition for chip design people. Ending up at Commodore, which had the actual semiconductor teams to get these designs over the line, was lucky; many other companies that could have bought them might have messed it up.
In a perfect world you add an ARM2-like RISC chip and a Sun-like custom MMU to something like the Amiga chipset, and you move computing forward by 10+ years. In reality, the exact opposite won: a 16-bit PC with basically no custom design in it whatsoever.
For 1985, I'd think that you'd also have to imagine that DRAM was cheaper than what it really was.
RISC and especially MMU with paging increase memory requirements. For comparison, the first Amiga was designed for 128K, got 256K. Linux/M68K with MMU on the Amiga required 4MB to be usable.
Yes. When playing around with different configurations for what a machine could have looked like back then, you realize that the RAM cost dominates. You can develop as many chips as you want; that leads to high fixed costs, but the marginal cost is tiny.
RAM, on the other hand, easily costs as much as all the other chips on the board combined.
So you desperately need to do everything you can to limit RAM usage. That's why the Amiga was smart to share RAM between the CPU and the chipset. The Archimedes did the same thing. And ideally you get some SRAM in there so you don't depend on the DRAM too much; the ARM3 did that (as on-chip cache) and speed improved enormously.
And you really need a good operating system that could give you advanced features without completely blowing up your RAM.
That is why it is sad that nobody with sufficient capital to pump into custom chip design took on a project like that; it needs to be somebody with enough volume for the economics to work out.
But nobody did, and instead we got there through an iterative process where PC manufacturers pumped out PCs and bought the cheapest chips, while Intel and the chip companies integrated the board more and more. I remember learning about the southbridge and northbridge, and that's essentially the design you end up with.
This reminds me of how expensive RAM was back then. I remember spending $180 to add 512KB to my Amiga 500 in 1988 or so. $180 was a lot of money back then.
RAM was relatively cheap between 1985 and 1987, hovering around $100-150 using 256Kbit chips. Then in 1987, anti-dumping laws lined up with fabs upgrading to lower-yielding new 1Mbit chips, and things got crazy. In 1988, 256K chips went from $3.5 to $7 in less than a month. Some companies coped better than others: Atari was the first to offer a computer shipping with 1MB below $1000. The Tramiels' little secret was smuggling RAM from Japan and skirting the anti-dumping restrictions :)
https://forums.atariage.com/topic/207245-secret-atari-dram-r... https://web.archive.org/web/20180817061246/https://www.atari...
TL;DR: up to $6M a month of RAM smuggled from Japan by Atari to make money on the side :)
https://web.archive.org/web/20171220171525/https://www.atari...
"Source [an informant to the FBI who is apparently an Atari employee] remembers the first DRAM shipment arriving in 1988 was sold to Sun Micro Systems in Milpitas, CA. Source was told to deliver this shipment by [redacted]. [redacted] told Source to leave all [redacted] including his [redacted] and to take [redacted] which had an Atari logo on them. When Source delivered the Integrated Circuits, [redacted] cashier's check from Sun Microsystems to deliver to Atari, and the payee's name was left blank."
I wonder how much faster the ARM2 would have been compared to the 68k in a first-generation Amiga. The Amiga's chip memory only delivered 7 MBytes/s, shared between the CPU and the chipset! With its 32-bit instruction words, the ARM2 would have been very far from its theoretical performance.
> I wonder how much faster the ARM2 would have been compared to the 68k in a first-generation Amiga.
Considerably faster. I looked at both (and the ST) and bought an Archimedes.
ARM chips benchmarked from the ARM2 up to the RasPi 3B+:
https://stardot.org.uk/forums/viewtopic.php?t=20379
68000 benchmarks around that time:
http://www.faqs.org/faqs/motorola/68k-chips-faq/
Dhrystones/sec:
ARM2: 5463
68000 @ 8MHz: 2100
MIPS:
https://en.wikichip.org/wiki/acorn/microarchitectures/arm2
https://en.wikipedia.org/wiki/Instructions_per_second
ARM2: from 6 to 10 million instructions per second, depending on instruction mix
68000: 1.4 MIPS typical.
(For comparison: Intel 8086 at the same speed, something like 300 Whetstones, 0.5 MIPS. So either of them stomped all over a comparable x86 machine from that time.)
So, very roughly, ARM2 was between 2-3x faster in typical use.
Note:
. Neither CPU could do FP in hardware.
. Neither had cache memory.
. The Amiga had a lot of complex hardware acceleration for graphics; the original ARM2 machines from Acorn (Archimedes A305, A310, A400) had essentially none.
So Amiga games could do things that on the Arc required raw CPU power, typically in carefully hand-coded assembler.
Yes, but that's my point: the ARM2 cannot get faster than 1.25 MIPS in an Amiga because of the memory bandwidth (assuming that the CPU uses 5 MBytes/s of the available 7 MBytes/s that it has to share with the graphics chipset).
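Back-of-the-envelope version of that, under the same assumptions (roughly 5 of the 7 MBytes/s left for the CPU, 4 bytes fetched per instruction, and ignoring data accesses, which would only push it lower):

    cpu_bandwidth = 5_000_000          # bytes/s assumed available to the CPU
    insn_size = 4                      # ARM2 instructions are 32 bits wide
    print(cpu_bandwidth / insn_size)   # 1250000.0 instruction fetches per second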
That is definitely the limitation you would need to improve over time. Eventually you would likely split the graphics memory off from the main memory.
To improve performance you need to start adding cache in or around your CPU/MMU, and you need an increasingly clever memory controller to arbitrate memory access.
But you are right, you are always going to run into limitations. Since RAM is the most expensive part of the BOM, literally anything you can do to get maximum utility out of the memory bandwidth is where you need to spend design time.
A 16-bit encoding for your 32-bit RISC, like ARM later did with Thumb, would have been a great design feature to further reduce how much bandwidth it takes to keep the CPU busy. Of course, nobody had thought of that combination yet; that was an early-90s thing.
RAM was just so expensive back then that you built your machine around sharing it as well as you could and lived with the downsides.
I do not follow.
There was never any ARM-based Amiga from Commodore or any Commodore partner.
Any CPU performance is 100% theoretical because there was no such hardware.
The 68000 performance numbers I cited are from contemporary benchmarks and they _favour_ the 68K. The real chip in real Amigas ran slower.
The Acorn Archimedes used the ARM2; the ARM was developed for the Archimedes range. Its display, sound, memory controller, etc. are all pretty much unaccelerated.
Now there are Arm-based Amigas but they run the Amiberry emulator on top of Linux.
The whole idea behind ARM was running at the full speed of RAM. The Amiga had very little RAM bandwidth to spare for the CPU, so an ARM2 would have been throttled by the narrow, slow RAM bus in the Amiga.
To this day my pet peeve is that Commodore didn't ship Amigas with a tiny amount of scratch RAM independent of the shared bus. Would have been so useful.
Is that not the distinction between Chip RAM and Fast RAM in real machines?
A large amount of Fast RAM would have been even more useful, but if you are cost-optimizing the BOM, just a tiny sliver of Fast RAM would have been effectively a "manual cache" managed by the developer, and would have been incredibly useful for anything computation-intensive: flight simulators, spreadsheets, anything not involving just blitting graphics, really.
AFAIK only the A3000 and A4000 shipped with Fast RAM, and both computers weren't really price-competitive with PCs.
Well - I used Archimedes computers with ARM2 and owned an Amiga 500+ and honestly, I couldn't tell you the Arcie was faster. It certainly didn't have the custom chips, so it is probably not a fair comparison.
You mean BeBox or a NeXT?
Because that was basically the spirit behind them.
Also note that one of the original ideas for the Amiga was to get into the UNIX workstation market; it then ended up pivoting (thankfully) into a multimedia machine, into what we now know as the Amiga.
I mean, the NeXT machine was much more expensive than what you could have done, and needed far more RAM. And of course it also used mostly commodity chips. As far as I know (though I haven't looked) they didn't have fully integrated I/O chips; those were just conventional 68k workstations. Did NeXT design a single chip themselves?
> Also note that one of the original ideas for the Amiga, was to get into the UNIX workstation market
Could you source that for me? It might be true. I heard from interviews with Jay Miner that they were developing a game console, and then when the 1982 crash happened, he convinced them to pivot to a home computer. And specifically he wanted it to run games like flight simulators.
Commodore explicitly cancelled its own workstation project, the Z8000-based Commodore 900 Unix workstation.
When the Amiga launched, it launched with games and with pixel editors. It was endorsed by EA.
Doesn't sound like 'Unix workstation' to me. It was later in the 1980s that they started wanting to get into that market.
PS: The BeBox was much later; by then the PC ecosystem had already done most of that stuff.
"custom" not "costume" (...I'm only adding this note to help, not criticise)
Thank you, damn my dyslexia. My hands type by themselves and my brain has no idea what's going on.
Thank you!
> Imagine if you had an Amiga Chipset and you had combined it with a RISC like chip
This was the plan for the successor machine, codenamed HOMBRE and never released.
An Amiga-like chipset closely coupled with an HP PA-RISC CPU.
https://en.wikipedia.org/wiki/Amiga_Hombre_chipset
A little more info in German:
Yeah that was in the 90s.
I always thought PA-RISC was a great RISC, and it was an interesting concept.
But even in that case that wouldn't have been the CPU. Commodore never dared to do that after the 6502 (of course that team left Commodore).
>what really blows my mind is how cheap Amiga ended up being
Commodore was buying 68000 CPUs from Motorola at $2 a pop thanks to owning a fab, knowing how much it cost to make such a chip in-house, and using that knowledge in negotiations.
That, and the fact that the CPU was nearly a decade old by the time they were buying them in bulk.
You're not hardcore until you've wire-wrapped your own Amiga:
https://www.amiga-news.de/pics/L/lorraine/FB_IMG_17610203985...
I never owned an Amiga, but it was clear at the time it was a masterpiece, and this just confirms it.
Designing the chips in MSI, wire wrapping a prototype, and getting it all to work to the point where you get a functional chip run is the stuff of legends.
The failure of the Amiga and the near-failure and resurrection of Apple is what makes me believe in parallel universes/alternate timelines more than anything :)
> makes me believe in parallel universes
And we're obviously in the less cool branch. :-)
Reminds me of that LLM computer built in Minecraft that was discussed here a few weeks back.
Looks like a beehive. Very cool
a C-hive
Does it still boot?
At VCFMW it was not powered on.
Still incredibly cool. My Amiga 500 is the computer I have the fondest memories of. I even had a 1200 for 4 years before I reluctantly transitioned to PC.