Nice, these ideas have been around for a long time but never commercialized to my knowledge. I've done some experiments in this area with simulations and am currently designing some test circuitry to be fabbed via Tiny Tapeout.
Reversibility isn't actually necessary for most of the energy savings. It saves you maybe an extra 20% beyond what adiabatic techniques can do on their own. The reason is that the energy of the information itself pales in comparison to the resistive losses that dominate in adiabatic circuits, and it's actually a (device-dependent) portion of these resistive losses, not the energy of information itself, that the reversible aspect helps to recover.
I'm curious why Frank chose to go with a resonance-based power-clock, instead of a switched-capacitor design. In my experience the latter are nearly as efficient (losses are still dominated by resistive losses in the powered circuit itself), and are more flexible as they don't need to be tuned to the resonance of the device. (Not to mention they don't need an inductor.) My guess would be that, despite requiring an on-die inductor, the overall chip area required is much less than that of a switched-capacitor design. (You only need one circuit's worth of capacitance, vs. 3 or more for a switched design, which quadruples your die size....)
I'm actually somewhat skeptical of the 4000x claim though. Adiabatic circuits can typically only provide about a single order of magnitude power savings over traditional CMOS -- they still have resistive losses, they just follow a slightly different equation (f²RC²V², vs. fCV²). But RC and C are figures of merit for a given silicon process, and fRC (a dimensionless figure) is constrained by the operational principles of digital logic to the order of 0.1, which in turn constrains the power savings to that order of magnitude regardless of process. Where you can find excess savings though is simply by reducing operating frequency. Adiabatic circuits benefit more from this than traditional CMOS. Which is great if you're building something like a GPU which can trade clock frequency for core count.
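To make that concrete, here's a back-of-envelope sketch in Python (R, C, and V are placeholders rather than any real process corner, and the factor of 4 comes from the derivation later in the thread):

    # Energy per op: conventional CMOS ~ CV^2; adiabatic ~ (4fRC) * CV^2.
    R, C, V = 10e3, 1e-15, 1.0        # ohms, farads, volts -- illustrative only
    f_max = 1 / (5 * R * C)           # rough max frequency for working CMOS
    for f in (f_max, f_max / 10, f_max / 100):
        print(f"f = {f:.3g} Hz: adiabatic/CMOS energy ratio = {4 * f * R * C:.3g}")

The ratio improves linearly as f drops below the CMOS maximum, which is the frequency-scaling benefit described above.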
Hi, someone pointed me at your comment, so I thought I'd reply.
First, the circuit techniques that aren't reversible aren't truly, fully adiabatic either -- they're only quasi-adiabatic. In fact, if you strictly follow the switching rules required for fully adiabatic operation, then (ignoring leakage) you cannot erase information -- none of the allowed operations achieve that.
Second, to say reversible operation "only saves an extra 20%" over quasi-adiabatic techniques is misleading. Suppose a given quasi-adiabatic technique saves 79% of the energy, and a fully adiabatic, reversible version saves you "an extra 20%" -- well, now that's 99%. But if you're dissipating 1% of the energy of a conventional circuit, and the quasi-adiabatic technique is dissipating 21%, that's 21x more energy efficient! And so you can achieve 21x greater performance within a given power budget.
Next, to say "resistive losses dominate the losses" is also misleading. The resistive losses scale down arbitrarily as the transition time is increased. We can actually operate adiabatic circuits all the way down to the regime where resistive losses are about as low as the losses due to leakage. The max energy savings factor is on the order of the square root of the on/off ratio of the devices.
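To put both numeric points side by side, a quick sketch in Python (the 79%/99% split is the example from the previous paragraph; the 1e7 on/off ratio is an assumed illustrative device figure, not a measurement):

    # Savings percentages mislead; residual dissipation is what matters.
    quasi, full = 0.79, 0.99
    print((1 - quasi) / (1 - full))   # 21.0 -> fully adiabatic is 21x more efficient

    # Max adiabatic savings factor ~ sqrt(on/off ratio of the devices).
    on_off = 1e7                      # assumed, for illustration only
    print(on_off ** 0.5)              # ~3162x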
Regarding "adiabatic circuits can typically only provide an order of magnitude power savings" -- this isn't true for reversible CMOS! Also, "power" is not even the right number to look at -- you want to look at power per unit performance, or in other words energy per operation. Reducing operating frequency reduces the power of conventional CMOS, but does not directly reduce energy per operation or improve energy efficiency. (It can allow you to indirectly reduce it though, by using a lower switching voltage.)
You are correct that adiabatic circuits can benefit from frequency scaling more than traditional CMOS -- since lowering the frequency actually directly lowers energy dissipation per operation in adiabatic circuits. The specific 4000x number (which includes some benefits from scaling) comes from the analysis outlined in this talk -- see links below -- but we have also confirmed energy savings of about this magnitude in detailed (Cadence/Spectre) simulations of test circuits in various processes. Of course, in practice the energy savings is limited by the resonator Q value. And a switched-capacitor design (like a stepped voltage supply) would do much worse, due to the energy required to control the switches.
https://www.sandia.gov/app/uploads/sites/210/2023/11/Comet23... https://www.youtube.com/watch?v=vALCJJs9Dtw
Happy to answer any questions.
Thanks for the reply, was actually hoping you'd pop over here.
I don't think we actually disagree on anything. Yes, without reverse circuits you are limited to quasi-adiabatic operation. But, at least in the architectures I'm familiar with (mainly PFAL), most of the losses are unarguably resistive. As I understand PFAL, it's only when the operating voltage of a given gate drops below Vth that the (macro) information gets lost and reversibility provides benefit, and that's only a fraction of the switching cycle. At least for PFAL, the figure is somewhere in the 20% range IIRC. (I say "macro" because of course the true energy of information is much smaller than the amounts we're talking about.)
The "20%" in my comment I meant in the multiplicative sense, not additive. I.e. going from 79% savings to 83.2%, not 99%. (I realize that wasn't clear.)
What I find interesting is that reversibility isn't actually necessary for true adiabatic operation. All that matters is that the information about where charge needs to be recovered from can be derived somehow. It could come from information available elsewhere in the circuit, not necessarily from the subsequent computations run in reverse. (Thankfully, quantum non-duplication does not apply here!)
I agree that energy per operation is often more meaningful, BUT one must not lose sight of the lower bounds on clock speed imposed by a particular workload.
Ah, thanks for the insight into the resonator/switched-cap tradeoff. Yes, I know that capacitive switching designs which are themselves adiabatic are a bit of a research topic. In my experience their losses aren't comparable to the resistive losses of the adiabatic circuitry itself, though. (I've done SPICE simulations using the sky130 process.)
It's been a while since I looked at it, but I believe PFAL is one of the not-fully-adiabatic techniques that I have a lot of critiques of.
There have been studies showing that a truly, fully adiabatic technique in the sense I'm talking about (2LAL was the one they checked) does about 10x better than any of the other "adiabatic" techniques. In particular, 2LAL does a lot better than PFAL.
> reversibility isn't actually necessary
That isn't true in the sense of "reversible" that I use. Look at the structure of the word -- reverse-able. Able to be reversed. It isn't essential that the very same computation that computed some given data is actually applied in reverse, only that no information is obliviously discarded, implying that the computation always could be reversed. Unwanted information still needs to be decomputed, but in general, it's quite possible to de-compute garbage data using a different process than the reverse of the process that computed it. In fact, this is frequently done in practice in typical pipelined reversible logic styles. But they still count as reversible even though the forwards and reverse computations aren't identical. So, I think we agree here and it's just a question of terminology.
Lower bounds on clock speed are indeed important; generally this arises in the form of maximum latency constraints. Fortunately, many workloads today (such as AI) are limited more by bandwidth/throughput than by latency.
I'd be interested to know if you can get energy savings factors on the order of 100x or 1000x with the capacitive switching techniques you're looking at. So far, I haven't seen that that's possible. Of course, we have a long way to go to prove out those kinds of numbers in practice using resonant charge transfer as well. Cheers...
PFAL has both a fully adiabatic and quasi-adiabatic configuration. (Essentially, the "reverse" half of a PFAL gate can just be tied to the outputs for quasi-adiabatic mode.) I've focused my own research on PFAL because it is (to my knowledge) one of the few fully adiabatic families, and of those, I found it easy to understand.
I'll have to check out 2LAL. I haven't heard of it before.
No, even with a fully adiabatic switched-capacitance driver I don't think those figures are possible. The maximum efficiency I believe is 1-1/n, n being the number of steps (and requiring n-1 capacitors). But the capacitors themselves must each be an order of magnitude larger than the adiabatic circuit itself. So it's a reasonable performance match for an adiabatic circuit running at "max" frequency, with e.g. 8 steps/7 capacitors, but 100x power reduction necessary to match a "slowed" adiabatic circuit would require 99 capacitors... which quickly becomes infeasible!
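A quick sketch of that scaling (the 1 - 1/n efficiency figure and the roughly 10x-per-capacitor sizing are taken from the reasoning above; everything is normalized to the driven circuit's capacitance):

    # Stepped (switched-capacitor) supply: n steps dissipate ~1/n of the
    # hard-switched CV^2 loss, and need n-1 tank capacitors, each ~10x the load.
    for n in (8, 100, 1000):
        dissipation = 1 / n           # fraction of the hard-switched loss
        tank = (n - 1) * 10           # total tank capacitance, in units of the load
        print(f"n={n}: ~{dissipation:.1%} residual loss, ~{tank}x the circuit in tank capacitance")

which makes the area cost of chasing 100x or 1000x savings with voltage steps pretty vivid.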
Yeah, 2LAL (and its successor S2LAL) uses a very strict switching discipline to achieve truly, fully adiabatic switching. I haven't studied PFAL carefully but I doubt it's as good as 2LAL even in its more-adiabatic version.
For a relatively up-to-date tutorial on what we believe is the "right" way to do adiabatic logic (i.e., capable of far more efficiency than competing adiabatic logic families from other research groups), see the below talk which I gave at UTK in 2021. We really do find in our simulations that we can achieve 4 or more orders of magnitude of energy savings in our logic compared to conventional, given ideal waveforms and power-clock delivery. (But of course, the whole challenge in actually getting close to that in practice is doing the resonant energy recovery efficiently enough.)
https://www.sandia.gov/app/uploads/sites/210/2022/06/UKy-tal... https://tinyurl.com/Frank-UKy-2021
The simulation results were first presented (in an invited talk to the SRC Decadal Plan committee) a little later that year in this talk (no video of that one, unfortunately):
https://www.sandia.gov/app/uploads/sites/210/2022/06/SRC-tal...
However, the ComET talk I linked earlier in the thread does review that result also, and has video.
How do the efficiency gains compare to the speedups from photonic computing, superconductive computing, and maybe room-temperature fractional quantum Hall effect computing? Given rough or stated production timelines, for how long will investments in reversible computing justify the relative returns?
Also, FWIU from "Quantum knowledge cools computers", if the deleted data is still known, deleting bits can effectively thermally cool, bypassing the Landauer limit of electronic computers? Is that reversible or reversibly-knotted or?
"The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123 ... https://www.sciencedaily.com/releases/2011/06/110601134300.h... ;
> Abstract: ... Here we show that the standard formulation and implications of Landauer’s principle are no longer valid in the presence of quantum information. Our main result is that the work cost of erasure is determined by the entropy of the system, conditioned on the quantum information an observer has about it. In other words, the more an observer knows about the system, the less it costs to erase it. This result gives a direct thermodynamic significance to conditional entropies, originally introduced in information theory. Furthermore, it provides new bounds on the heat generation of computations: because conditional entropies can become negative in the quantum case, an observer who is strongly correlated with a system may gain work while erasing it, thereby cooling the environment.
I have concerns about density & cost for both photonic & superconductive computing. Not sure what one can do with quantum Hall effect.
Regarding long-term returns, my view is that reversible computing is really the only way forward for continuing to radically improve the energy efficiency of digital compute, whereas conventional (non-reversible) digital tech will plateau within about a decade. Because of this, within two decades, nearly all digital compute will need to be reversible.
Regarding bypassing the Landauer limit: theoretically yes, reversible computing can do this -- not by thermally cooling anything, really, but rather by avoiding the conversion of known bits to entropy (and their energy to heat) in the first place. This must be done by "decomputing" the known bits, which is a fundamentally different process from just erasing them obliviously (without reference to the known value).
For the quantum case, I haven't closely studied the result in the second paper you cited, but it sounds possible.
Can one define the process an adiabatic circuit goes through analogously to how one would for the Carnot engine? The idea being to come up with a theoretical ceiling for the efficiency of such a circuit in terms of circuit parameters?
Yes, a similar analysis is where the above expression f²RC²V² comes from.
Essentially -- (and I'm probably missing a factor of 2 or 3 somewhere as I'm on my phone and don't have reference materials) -- in an adiabatic circuit the unavoidable power loss for any individual transistor stems from current (I) flowing through that transistor's channel (a resistor R) on its way to and from another transistor's gate (a capacitor C). So that's I²R unavoidable power dissipation.
The current I must be sufficient to fill and then discharge the capacitor to/from the operating voltage (V) in the time of one cycle (1/f). So I = 2fCV. Substituting this gives 4f²RC²V².
Compare to traditional CMOS, wherein the gate capacitance C is charged through R from a voltage source V. It can be shown that this dissipates ½CV² of energy through the resistor in the process, and the capacitor is filled with an equal amount of energy. Discharging then dissipates this energy through the same resistor. Repeat this every cycle for a total power usage of fCV².
Divide these two figures and we find that adiabatic circuits use 4fRC times as much energy as traditional CMOS. However, f must be less than about 1/(5RC) for a CMOS circuit to function at all (else the capacitors don't charge sufficiently during a cycle) so this is always power savings in favor of adiabatics. And notably, decreasing f of an adiabatic circuit from the maximum permissible for CMOS on the same process increases the efficiency gain proportionally.
(N.B., I feel like I missed a factor of 2 somewhere as this analysis differs slightly from my memory. I'll return with corrections if I find an error.)
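In the meantime, here's the arithmetic above in script form so the factors are easy to audit (component values are placeholders, and any stray factor of 2 flagged above is left as-is):

    # Adiabatic: move Q = CV onto and back off the gate each cycle -> I = 2fCV,
    # dissipating P = I^2 * R = 4f^2RC^2V^2 in the channel resistance.
    # Conventional: ½CV² lost charging + ½CV² lost discharging -> P = fCV^2.
    R, C, V = 10e3, 1e-15, 1.0        # illustrative values
    f = 1 / (5 * R * C)               # the rough CMOS upper bound from above
    I = 2 * f * C * V
    print((I**2 * R) / (f * C * V**2))   # = 4fRC = 0.8 at f = 1/(5RC)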
Maybe this would work better with superconducting electronics?
There indeed has been research on reversible adiabatic logic in superconducting electronics. But superconducting electronics has a whole host of issues of its own, such as low density and a requirement for ultra-low temperatures.
When I was at Sandia we also had a project exploring ballistic reversible computation (as opposed to adiabatic) in superconducting electronics. We got as far as confirming to our satisfaction that it is possible, but this line of work is a lot farther from major commercial applications than the adiabatic CMOS work.
Possibly, that's an interesting thought. The main benefit of adiabatics as I see it is that, all else being equal, a process improvement in the RC figure can be used to enable either an increase in operating frequency or a decrease in power usage (this is reflected in the additional factor of fRC in the power equation). With traditional CMOS, this can only benefit operating frequency -- power usage is independent of the RC product per se. Superconduction (or near-superconduction) is essentially a huge improvement in RC which couldn't be realized as an increase in operating frequency due to speed-of-light limitations, so adiabatics would see an outsize benefit in that case.
Notably, the physical limit is
https://en.wikipedia.org/wiki/Landauer%27s_principle
it doesn't necessarily take any energy at all to process information, but it does take roughly kT worth of energy to erase a bit of information. It's related to
https://en.wikipedia.org/wiki/Maxwell%27s_demon
as, to complete cycles, the demon has to clear its memory.
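For scale, that limit is astonishingly small at room temperature (a quick sketch; 300 K assumed):

    from math import log
    k = 1.380649e-23              # Boltzmann constant, J/K
    T = 300.0                     # assumed room temperature
    print(k * T * log(2))         # ~2.9e-21 J to erase a single bit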
Does it not take energy to process information? Can any computable function be computed with arbitrarily low energy input/entropy increase?
No, and yes, so long as you don't delete information.
Think of a marble-based computer, whose inner workings are frictionless and massless. The marbles roll freely without losing energy unless they are forced to stop somehow, but computation is nonetheless performed.
I don't know how to compute with marbles without mass and stopping. Marble computers I've seen rely on gravity and friction, though I'd love to see one that didn't.
Henry G. Baker wrote this paper titled "The Thermodynamics of Garbage Collection" in the 90s about linear logic, stack machines, reversibility and the cost of erasing information:
https://wiki.xxiivv.com/docs/baker_thermodynamics.html
A subset of FRACTRAN programs are reversible, and I would love to see rewriting computers as a potential avenue for reversible circuit building (similar to the STARAN CPU).
This is really cool, I never expected to see reversible computation done in electrical systems. I learned about it in undergrad, taking a course by Bruce MacLennan,* though it was more applied to "billiard ball" or quantum computing. It was such a cool class though.
*Seems like he finally published the textbook he was working on when teaching the class: https://www.amazon.com/dp/B0BYR86GP7?ref_=pe_3052080_3975148...
Wow. This whole logic sounds like something really harebrained from a Dr Who episode: "It takes energy to destroy information. Therefore if you don't destroy information, it doesn't take energy!" - sounds completely illogical.
I honestly don't understand from the article how you "recover energy". Yet I have no reason to disbelieve it.
Someone else here compared it to regenerative braking in cars, which is what made it click for me. If you spend energy to accelerate, then recapture that energy while decelerating, then you can manage to transport yourself while your net energy expenditure is zero (other than all that pesky friction). On the other hand, if you spend energy to accelerate, then shed all that energy via heat from your brake pads, then you need to expend new energy to accelerate next time.
> The main way to reduce unnecessary heat generation in transistor use—to operate them adiabatically—is to ramp the control voltage slowly instead of jumping it up or down abruptly.
But if you change the gate voltage slowly, then the transistor will be for a longer period in the resistive region where it dissipates energy. Shouldn't you go between the OFF and ON states as quickly as possible?
The trick is not to have a voltage across the channel while it's transitioning states. For this reason, adiabatic circuits are typically "phased" such that any given adiabatic logic gate is either having its gates charged or discharged (by the previous logic gate), or current is passing through its channels to charge/discharge the next logic gate.
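A toy simulation makes this concrete (an idealized RC model with placeholder values, not any real gate): charging a capacitor through a resistor from an abruptly stepped source always loses ½CV² in the resistor, while a source ramped slowly over a time T loses only about (RC/T)·CV².

    # Euler-integrate a capacitor C charging through channel resistance R from
    # a source that either steps to V instantly or ramps linearly over t_ramp.
    R, C, V = 10e3, 1e-15, 1.0    # placeholder values

    def resistive_loss(t_ramp, steps=200_000):
        t_total = max(t_ramp, 10 * R * C)   # integrate long enough to settle
        dt = t_total / steps
        q = loss = 0.0
        for i in range(steps):
            v_src = V if t_ramp == 0 else min(V, V * (i * dt) / t_ramp)
            i_ch = (v_src - q / C) / R      # current through the channel
            loss += i_ch**2 * R * dt        # dissipated in the resistance
            q += i_ch * dt
        return loss

    half_cv2 = 0.5 * C * V**2
    print(resistive_loss(0) / half_cv2)             # ~1.0 for the abrupt step
    print(resistive_loss(100 * R * C) / half_cv2)   # ~0.02 for a slow ramp

Slowing the ramp another 10x cuts the loss roughly another 10x, which is the frequency-scaling benefit discussed upthread.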
Interesting, thanks!
The ideas are neat, and both Landauer and Bennett did some great work and left a powerful legacy. But the energetic limits we are talking about are not yet relevant in modern computers. The excess thermal energy from performing 10^26 erasures associated with some computation (of, say, an LLM that would be too powerful under the current presidential orders) would only be about 0.1 kWh, so 10 minutes of a single modern GPU. There are other advantages to reversibility, of course, and maybe one day even that tiny amount of energy savings will matter.
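The figure checks out, for what it's worth (a quick verification sketch; 300 K room temperature assumed):

    from math import log
    k, T = 1.380649e-23, 300.0    # Boltzmann constant, assumed temperature
    erasures = 1e26
    print(erasures * k * T * log(2) / 3.6e6)   # ~0.08 kWh, i.e. ~10 min of a ~500 W GPU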
Also an Edward Fredkin (https://en.wikipedia.org/wiki/Edward_Fredkin) interest: https://en.wikipedia.org/wiki/Fredkin_gate
As well as Tommaso Toffoli, Norman Margolus, Tom Knight, Richard Feynman, and Charles Bennett:
Reversible Computing, Tommaso Toffoli:
https://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TM-1...
>Abstract. The theory of reversible computing is based on invertible primitives and composition rules that preserve invertibility. With these constraints, one can still satisfactorily deal with both functional and structural aspects of computing processes; at the same time, one attains a closer correspondence between the behavior of abstract computing systems and the microscopic physical laws (which are presumed to be strictly reversible) that underly any concrete implementation of such systems. According to a physical interpretation, the central result of this paper is that it is ideally possible to build sequential circuits with zero internal power dissipation.
A Scalable Reversible Computer in Silicon:
https://www.researchgate.net/publication/2507539_A_Scalable_...
Reversible computing:
https://web.eecs.utk.edu/~bmaclenn/Classes/494-594-UC-F17/ha...
>In 1970s, Ed Fredkin, Tommaso Toffoli, and others at MIT formed the Information Mechanics group to the study the physics of information. As we will see, Fredkin and Toffoli described computation with idealized, perfectly elastic balls reflecting o↵ barriers. The balls have minimum dissipation and are propelled by (conserved) momentum. The model is unrealistic but illustrates many ideas of reversible computing. Later we will look at it briefly (Sec. C.7).
>They also suggested a more realistic implementation involving “charge packets bouncing around along inductive paths between capacitors.” Richard Feynman (Caltech) had been interacting with Information Mechanics group, and developed “a full quantum model of a serial reversible computer” (Feynman, 1986).
>Charles Bennett (1973) (IBM) first showed how any computation could be embedded in an equivalent reversible computation. Rather than discarding information (and hence dissipating energy), it keeps it around so it can later “decompute” it back to its initial state. This was a theoretical proof based on Turing machines, and did not address the issue of physical implementation. [...]
>How universal is the Toffoli gate for classical reversible computing:
https://quantumcomputing.stackexchange.com/questions/21064/h...
Calling the addition of an energy storage device into a transistor "reverse computing" is like calling a hybrid car using regenerative braking "reverse driving".
It's a very interesting concept -- best discussed over pints at the pub on a Sunday afternoon, along with over-unity devices and the sad lack of adoption of bubble memory.
Well actually, "reversible driving" is perfectly apt in the sense of acceleration being a reversible process. It means that in theory the net energy needed to drive anywhere is zero because all the energy spent on acceleration is gained back on braking. Yes I know in practice there's always friction loss, but the point is there isn't a theoretical minimum amount of friction that has to be there. In principle a car with reversible driving can get anywhere with asymptotically close to zero energy spent.
Put another way, there is no way around the fact that a "non-reversible car" has to have friction loss because the brakes work on friction. But there is no theoretical limit to how far you can reduce friction in reversible driving.
Cars specifically dissipate energy on deformation of the tires; this loss is irreversible at any speed, even if all the bearings have effectively zero losses (e.g. using magnetic levitation).
A train spends much less on that because the rails and the wheels are very firm. A maglev train likely recuperates nearly 100% of its kinetic energy during deceleration, less the aerodynamic losses; it's like a superconducting reversible circuit.
Actually, a non-reversible car also has no lower energy limit, as long as you drive on a flat surface (same for a reversible one) and can get to the answer arbitrarily slowly.
An ideal reversible computer also works arbitrarily slowly. To make it go faster, you need to put energy in. You can make it go arbitrarily slowly with arbitrarily little energy, just like a non-reversible car.
This is glorious.
The reverse computation is independent of the energy storage mechanism. It's used to "remember" how to route the energy for recovery.
A pub in Cambridge, perhaps! I doubt you'd overhear such talk in some Aldershot dive.
The Falling Edge, maybe? The Doped Wafer?
The Flipped Bit? The Reversed Desrevereht?
(I once read a fiction story about someone who, instead of having perfect pitch, had perfect winding number: he couldn't get to sleep before returning to zero, so it took him some time to realise that when other people talked about "unwinding" at the end of the day, they didn't mean it literally)
Sounds like a good time :)
The concept completely flummoxed me, but how does this play with quantum computers? That's the direction we're going, isn't it?
Quantum computations have to be reversible, because you have to collapse the wave function and take a measurement to throw away any bits of data. You can accumulate junk bits as long as they remain in a superposition. But at some point you have to take a measurement. So, very much related.
The miniscule amount of energy retained from the "reverse computation" will be absolutely demolished by the first DRAM refresh.
I doubt it would use DRAM. Maybe some sort of MRAM/FeRAM would be a better fit. Or maybe a tiny amount of memory (e.g. Josephson junction) in a quantum circuit at some point in the future.
SRAM is actually very architecturally similar to some adiabatic circuit topologies.
Reversible Computing (2016) [video] (youtube.com)
https://news.ycombinator.com/item?id=16007128
https://www.youtube.com/watch?v=rVmZTGeIwnc
DonHopkins on Dec 26, 2017:
Billiard Ball cellular automata, proposed and studied by Edward Fredkin and Tommaso Toffoli, are one interesting type of reversible computer. The Ising spin model of ferromagnetism is another reversible cellular automata technique. https://en.wikipedia.org/wiki/Billiard-ball_computer
https://en.wikipedia.org/wiki/Reversible_cellular_automaton
https://en.wikipedia.org/wiki/Ising_model
If billiard balls aren't creepy enough for you, live soldier crabs of the species Mictyris guinotae can be used in place of the billiard balls.
https://www.newscientist.com/blogs/onepercent/2012/04/resear...
https://www.wired.com/2012/04/soldier-crabs/
http://www.complex-systems.com/abstracts/v20_i02_a02.html
Robust Soldier Crab Ball Gate
Yukio-Pegio Gunji, Yuta Nishiyama. Department of Earth and Planetary Sciences, Kobe University, Kobe 657-8501, Japan.
Andrew Adamatzky. Unconventional Computing Centre. University of the West of England, Bristol, United Kingdom.
Abstract
Soldier crabs Mictyris guinotae exhibit pronounced swarming behavior. Swarms of the crabs are tolerant of perturbations. In computer models and laboratory experiments we demonstrate that swarms of soldier crabs can implement logical gates when placed in a geometrically constrained environment.
https://news.ycombinator.com/item?id=35366971
Tipler's Omega Point cosmology:
https://en.wikipedia.org/wiki/Frank_J._Tipler#The_Omega_Poin...
>The Omega Point cosmology
>The Omega Point is a term Tipler uses to describe a cosmological state in the distant proper-time future of the universe.[6] He claims that this point is required to exist due to the laws of physics. According to him, it is required, for the known laws of physics to be consistent, that intelligent life take over all matter in the universe and eventually force its collapse. During that collapse, the computational capacity of the universe diverges to infinity, and environments emulated with that computational capacity last for an infinite duration as the universe attains a cosmological singularity. This singularity is Tipler's Omega Point.[7] With computational resources diverging to infinity, Tipler states that a society in the far future would be able to resurrect the dead by emulating alternative universes.[8] Tipler identifies the Omega Point with God, since, in his view, the Omega Point has all the properties of God claimed by most traditional religions.[8][9]
>Tipler's argument of the omega point being required by the laws of physics is a more recent development that arose after the publication of his 1994 book The Physics of Immortality. In that book (and in papers he had published up to that time), Tipler had offered the Omega Point cosmology as a hypothesis, while still claiming to confine the analysis to the known laws of physics.[10]
>Tipler, along with co-author physicist John D. Barrow, defined the "final anthropic principle" (FAP) in their 1986 book The Anthropic Cosmological Principle as a generalization of the anthropic principle:
>Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, will never die out.[11]
>One paraphrasing of Tipler's argument for FAP runs as follows: For the universe to physically exist, it must contain living observers. Our universe obviously exists. There must be an "Omega Point" that sustains life forever.[12]
>Tipler purportedly used Dyson's eternal intelligence hypothesis to back up his arguments.
Cellular Automata Machines: A New Environment for Modeling:
https://news.ycombinator.com/item?id=30735397
>It's also very useful for understanding other massively distributed locally interacting parallel systems, epidemiology, economics, morphogenesis (reaction-diffusion systems, like how a fertilized egg divides and specializes into an organism), GPU programming and optimization, neural networks and machine learning, information and chaos theory, and physics itself.
>I've discussed the book and the code I wrote based on it with Norm Margolus, one of the authors, and he mentioned that he really likes rules that are based on simulating physics, and also thinks reversible cellular automata rules are extremely important (and energy efficient in a big way, in how they relate to physics and thermodynamics).
>The book has interesting sections about physical simulations like spin glasses (Ising Spin model of the magnetic state of atoms of solid matter), and reversible billiard ball simulations (like deterministic reversible "smoke and mirrors" with clouds of moving particles bouncing off of pinball bumpers and each other).
Spin Glass:
https://en.wikipedia.org/wiki/Spin_glass
>In condensed matter physics, a spin glass is a magnetic state characterized by randomness, besides cooperative behavior in freezing of spins at a temperature called 'freezing temperature' Tf. Magnetic spins are, roughly speaking, the orientation of the north and south magnetic poles in three-dimensional space. In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. Spin glass when contrasted with a ferromagnet is defined as "disordered" magnetic state in which spins are aligned randomly or not with a regular pattern and the couplings too are random.
Billiard Ball Computer:
https://en.wikipedia.org/wiki/Billiard-ball_computer
>A billiard-ball computer, a type of conservative logic circuit, is an idealized model of a reversible mechanical computer based on Newtonian dynamics, proposed in 1982 by Edward Fredkin and Tommaso Toffoli. Instead of using electronic signals like a conventional computer, it relies on the motion of spherical billiard balls in a friction-free environment made of buffers against which the balls bounce perfectly. It was devised to investigate the relation between computation and reversible processes in physics.
Reversible Cellular Automata:
https://en.wikipedia.org/wiki/Reversible_cellular_automaton
>A reversible cellular automaton is a cellular automaton in which every configuration has a unique predecessor. That is, it is a regular grid of cells, each containing a state drawn from a finite set of states, with a rule for updating all cells simultaneously based on the states of their neighbors, such that the previous state of any cell before an update can be determined uniquely from the updated states of all the cells. The time-reversed dynamics of a reversible cellular automaton can always be described by another cellular automaton rule, possibly on a much larger neighborhood.
>[...] Reversible cellular automata form a natural model of reversible computing, a technology that could lead to ultra-low-power computing devices. Quantum cellular automata, one way of performing computations using the principles of quantum mechanics, are often required to be reversible. Additionally, many problems in physical modeling, such as the motion of particles in an ideal gas or the Ising model of alignment of magnetic charges, are naturally reversible and can be simulated by reversible cellular automata.
Theory of Self-Reproducing Automata: John von Neumann's Quantum Mechanical Universal Constructors:
https://news.ycombinator.com/item?id=22738268
[...] Third, the probabilistic quantum mechanical kind, which could mutate and model evolutionary processes, and rip holes in the space-time continuum, which he unfortunately (or fortunately, for the sake of humanity) didn't have time to fully explore before his tragic death.
>p. 99 of "Theory of Self-Reproducing Automata":
>Von Neumann had been interested in the applications of probability theory throughout his career; his work on the foundations of quantum mechanics and his theory of games are examples. When he became interested in automata, it was natural for him to apply probability theory here also. The Third Lecture of Part I of the present work is devoted to this subject. His "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" is the first work on probabilistic automata, that is, automata in which the transitions between states are probabilistic rather than deterministic. Whenever he discussed self-reproduction, he mentioned mutations, which are random changes of elements (cf. p. 86 above and Sec. 1.7.4.2 below). In Section 1.1.2.1 above and Section 1.8 below he posed the problems of modeling evolutionary processes in the framework of automata theory, of quantizing natural selection, and of explaining how highly efficient, complex, powerful automata can evolve from inefficient, simple, weak automata. A complete solution to these problems would give us a probabilistic model of self-reproduction and evolution. [9]
[9] For some related work, see J. H. Holland, "Outline for a Logical Theory of Adaptive Systems", and "Concerning Efficient Adaptive Systems".
https://www.deepdyve.com/lp/association-for-computing-machin...
https://deepblue.lib.umich.edu/bitstream/handle/2027.42/5578...
https://www.worldscientific.com/worldscibooks/10.1142/10841
perl4ever on Dec 26, 2017:
Tipler's Omega Point prediction doesn't seem like it would be compatible with the expanding universe, would it? Eventually everything will disappear over the speed-of-light horizon, and then it can't be integrated into one mind.
DonHopkins on Dec 26, 2017:
It also wishfully assumes that the one mind can't think of better things to do with its infinite amount of cloud computing power than to simulate one particular stone age mythology.
Then again, maybe it's something like the 1996 LucasArts game Afterlife, where you simulate every different religion's version of heaven and hell at once.
https://en.wikipedia.org/wiki/Afterlife_(video_game)
The primary goal of the game is to provide divine and infernal services for the inhabitants of the afterlife. This afterlife caters to one particular planet, known simply as the Planet. The creatures living on the Planet are called EMBOs, or Ethically Mature Biological Organisms. When an EMBO dies, its soul travels to the afterlife where it attempts to find an appropriate "fate structure". Fate structures are places where souls are rewarded or punished, as appropriate, for the virtues or sins that they practiced while they were alive.
https://news.ycombinator.com/item?id=30735397
DonHopkins on March 19, 2022, on: Ask HN: What book changed your life?
Cellular Automata Machines: A New Environment for Modeling. Published April 1987 by MIT Press. ISBN: 9780262200608.
http://mitpress.mit.edu/books/cellular-automata-machines
http://www.researchgate.net/publication/44522568_Cellular_au...
https://donhopkins.com/home/cam-book.pdf
https://github.com/SimHacker/CAM6/blob/master/javascript/CAM...
themodelplumber on March 20, 2022:
I'm curious, how did the book change your life? What kind of problems did the authors model using their approach? I'm new to the topic, thanks for any input.
DonHopkins on March 22, 2022:
It really helped me get my head around how to understand and program cellular automata rules, which is a kind of massively parallel distributed "Think Globally, Act Locally" approach that also applies to so many other aspects of life.
But by "life" I don't mean just the cellular automata rule "life"! Not to be all depressing like Marvin the Paranoid Android, but I happen to think "life" is overrated. ;) There are so many billions of other extremely interesting cellular automata rules besides "life" too, so don't stop once you get bored with life! ;)
https://www.youtube.com/watch?v=CAA67a2-Klk
For example, it's kind of like how the world wide web works: "Link Globally, Interact Locally":
https://donhopkins.medium.com/scriptx-and-the-world-wide-web...
Also I've frequently written on HN about Dave Ackley's great work on Robust-First Computing and the Moveable Feast Machine, which I think is brilliant, and quite important in the extremely long term (which is coming sooner than we think).
https://news.ycombinator.com/item?id=22304110
Interesting.
On a basic level, with the gates, it seems that if you put in at most two inputs' worth of work and get at most one output's worth out, then storing the otherwise-lost work for later reuse makes sense.
The simplest, dumbest alternative to reversible computing is to install datacenters in the ex-USSR, where there is still a (slowly disappearing) rich infrastructure for central hot water. Instead of charging only people, utilities can charge both people and datacenters and yet lower the carbon footprint.
I believe it would be more efficient to use a heat pump for the district heating even if the datacenter heat is just dumped. Heat pumps can get up to 400% efficiency.
What do you mean by efficient?
The heat emitted by the electronics will always be emitted and needs to go somewhere. If 1MWh of that heat is dumped into district heating how would that be less efficient than the 1MWh being dumped in the atmosphere to (hopefully) be reclaimed by a heat pump elsewhere?
Or, alternatively, that 1MWh could be absorbed by the already existing datacenter AC coils which could ultimately still be used to heat up district water as it cools the refrigerant. (People actually do this with swimming pools, using the coils from their AC to heat the pool).
Resistive heating just converts electricity into heat: it cannot deliver more heat than the energy put into the resistor, because of conservation of energy.
In contrast, heat pumps move heat, so they can move more than the energy put into them. Even cold air (around freezing) has a lot of heat in it to siphon off.
If you're going to run 1MWh worth of compute anyway, then selling the waste heat is still a good idea. But if you weren't, a heat pump will get you more heat energy than a bank of computers with the same energy budget.
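As a sketch of the arithmetic (the COP of 4 matches the "400% efficiency" figure mentioned above):

    # One unit of electricity spent on compute becomes ~1 unit of heat, while
    # the same unit driving a COP-4 heat pump moves ~4 units of heat indoors.
    electricity_mwh = 1.0
    heat_resistive = electricity_mwh * 1.0    # datacenter-as-heater
    heat_pumped = electricity_mwh * 4.0       # assumed COP of 4
    print(heat_resistive, heat_pumped)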
istjohn is right. Using a heat pump instead of resistive heating (which is basically what a data centre is) is many times more efficient.
That doesn't mean we shouldn't use the heat a data centre provides. It just means that it is not a good idea to neglect the development of energy-saving technology because the heat produced can be used somewhere else.
The issue is that there is an upper limit to how much heat can be removed from a system each cycle, so even if you have a way to disperse the removed heat in a useful way, you still can't grow compute beyond a certain point. And because scaling is exponential, even immersing the whole rack in liquid nitrogen would only buy a few years of computing growth post-Moore's law.
Wait, you mean there is no central hot water infrastructure in the rest of the world? Poland is not ex-USSR, but it is commonplace here, and I always assumed this was a normal thing everywhere.