At the current rate, we will not be able to keep up with the energy requirements of our computers; moreover, circuits simply cannot shrink beyond a point, due to heat limitations. And as you know very well, the world now depends on ever-increasing computational power to sustain development.
The Point Of No Return
So just how far away is the point at which our computers won't be able to get any faster? It's way into the future-if you consider alternative computing paradigms such as DNA computing (see Digit, November 2005), quantum computing, and others. But we're not talking about those here. Nor can we count on nanotech giving Moore's Law a new lease of life-because there is a fundamental physical limitation we must now talk about. Physics guarantees that a computer will not be able to discard more than 3.5 × 10²² bits of physical information per second.
Umm…"discard"? Well, we're edging closer to our point. What's happening when your computer is doing its computations is that it's throwing away bits. For example, when you add 5 and 3 to get 8, the 5 and the 3 are gone-forever. They're dissipated as heat. This is pretty simple to understand-after all, your processor gets hot because it's doing computations! Now think about it-can you possibly get back a 5 and a 3 from an 8? Given an 8, it could have been 1 + 7, or 2 + 6, or… in fact, it could have been any of an infinite number of combinations. So let's tie this up with what we said: our wasteful computers, working harder and harder, are throwing away more and more bits, but there's a limit. We won't derive the limit here because there are horrible equations involved-we'll just call it "the limit."
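As a toy illustration (not how a processor actually works), here's a Python sketch of why plain addition is a many-to-one, information-destroying operation-and how keeping one operand alongside the sum makes it reversible:

```python
# Toy sketch: addition is many-to-one, so the inputs cannot be
# recovered from the output alone.

def pairs_summing_to(total, limit=10):
    """All non-negative pairs (a, b) with a + b == total and a, b < limit."""
    return [(a, total - a) for a in range(limit) if 0 <= total - a < limit]

# 5 + 3 and 1 + 7 both give 8 -- once you only have the 8,
# the original pair is gone.
print(pairs_summing_to(8))

# An in-place add that keeps one operand, (a, b) -> (a + b, b),
# IS reversible: subtract b to get a back.
def add_keep(a, b):
    return (a + b, b)

def unadd(s, b):
    return (s - b, b)

assert unadd(*add_keep(5, 3)) == (5, 3)
```

The second mapping throws away no information, which is exactly the property reversible computing demands of every step.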
To put the above figure into perspective, today's fastest processors discard bits at a rate only about 1,00,000 times lower than the limit permits.
Isn't 1,00,000 large enough for, say, another 200 years? The simple answer is no. The researchers have it all worked out-they say that in 20 to 30 years, The Limit will have been reached. And that's in theory: in practice, The Limit will be reached even sooner. This is because of things such as thermal noise-the heat dissipated by the elements on an IC because of the motion of electrons-which will become a concern within 20 years, while problems with leakage could arrive within ten years or less.
The answer? Obviously, stop throwing away the bits. Recycle them. "Uncompute" the unneeded bits. Go green. In other words, get started on Reversible Computing.
Dr Michael Frank, Junior Faculty Member, FAMU-FSU College of Engineering
To understand how we can do this, we need to understand "ballistic" circuits. Reversible Computing is not simple to explain, but the first step is the broad picture. For that, all you need to do is to think of computation as a physical process. Once you're in "something-physical-is-going-on-in-the-processor" mode, instead of thinking "software-is-running-on-the-processor," you'll find the following easy enough to understand.
A ballistic physical process is one where the system moves forward propelled by its own momentum, without much of its energy being converted into heat. Think of an ICBM-an intercontinental ballistic missile: once it's been propelled into the atmosphere, it "coasts" along, not heating up too much. Now think of a ballistic circuit, hard though it might be to imagine.
Since we're talking physics now, one needs to see that an ICBM isn't all that different from a circuit-they both follow the laws of physics! How ballistic a process can be is quantified by its "quality factor," Q. This is simply the ratio between the amount of energy involved in carrying out the process and the amount that gets dissipated as heat. The idea here is to construct computers with tiny, nanoscale oscillators with a high Q. (Oscillators are spring-like devices: what makes your CPU run at 3.2 GHz is an oscillator, which generates a recurring signal.) The nano-oscillators would sit on the silicon, and would enable the chip to run in a ballistic way: they would capture most of the energy used in one calculation and then reuse it in the next-at least in pure theory. One feeds in the initial data, gives the processor a kick-start, and it coasts (like an ICBM) from one step of the computation to the next. This means only a small fraction of its energy is lost to heat at each step.
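As a back-of-the-envelope sketch (the numbers below are made up purely for illustration), the heat lost per step in such a ballistic scheme scales as the circulating energy divided by Q:

```python
# Illustrative sketch with hypothetical numbers: if each step of a
# ballistic computation cycles energy E through an oscillator of
# quality factor Q, roughly E/Q is dissipated as heat per step.

def heat_per_step(energy_joules, q_factor):
    """Energy dissipated per step, given the oscillator's Q."""
    return energy_joules / q_factor

# With Q = 1,000, a step that circulates a nanojoule
# loses only about a picojoule as heat.
loss = heat_per_step(1e-9, 1000)
print(loss)  # ~1e-12
```

The engineering challenge, as the article notes below, is building oscillators whose Q is high enough for this division to pay off.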
The concept is slightly analogous to hybrid cars that take the energy generated during braking and convert it into electricity that can be used to power the car.
The biggest challenge facing reversible computing is probably that of designing oscillators that have a high enough Q. There is no fundamental reason that it cannot be done-we can leave that to circuit engineers!
A reversible computing problem: (a) Set up the balls (programs and data) according to the specifications (b) Give the system an initial push, and let the balls move around (c) Your answer is in the final configuration
…But Keeping Your Cool
Coming back to heat, a physical process that is highly ballistic (such as what we've described above) means that it is also adiabatic to a high degree. "Adiabatic" means "without flow of heat." A field called adiabatic circuits studies logic circuits that operate mostly in adiabatic mode.
Link that to the fact that a physical process can be adiabatic only if it is logically reversible-that is, it must employ "reversible logic." The logic operations we commonly use-such as "1 AND 0 = 0", "1 OR 0 = 1", and so on-are not reversible. For example:
- When you do an AND operation on a 0 and 1, you get a 0
- You also get a 0 when you do an AND on a 0 and a 0
It turns out that we can have reversible logic-logic that doesn't discard bits. Note that in the above example, the two bits that went in as inputs were lost; with reversible logic, they won't be.
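A quick Python sketch makes the information loss concrete: tabulate AND over all four input pairs and see how many inputs collapse onto the output 0.

```python
from itertools import product

# AND collapses four input pairs onto two outputs, so the inverse
# mapping is one-to-many -- the gate forgets its inputs.
and_table = {(a, b): a & b for a, b in product((0, 1), repeat=2)}
print(and_table)
# {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

# Three different inputs all map to 0 -- given a 0 at the output,
# there's no way to tell which pair went in.
preimages_of_zero = [inp for inp, out in and_table.items() if out == 0]
print(preimages_of_zero)  # [(0, 0), (0, 1), (1, 0)]
```

A reversible gate, by contrast, must be a one-to-one mapping: every output configuration has exactly one possible input.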
What we have thus far: we need ballistic circuits; ballistic circuits are highly adiabatic; and adiabatic circuits should employ reversible logic. But before we go further, here's another way of looking at what's going on in Reversible Computing. We need to ask: what is a computation? You feed in initial data, do some operations, and whatever state the computer reaches is the result.
In a regular setup, this happens in a step-by-step, irreversible fashion. The system moves from one state to another, but it's all in the forward direction. And this is the only way we currently think about computation.
Instead, think of a perfectly smooth pool table. Divide the table into a grid. At the intersections of the grid, the presence of a ball represents a 1; the absence of a ball is a 0.
Now, to describe a problem, you'll need to figure how to configure the pool balls. Once that's figured out, place the balls on the table, and then give the cue ball a slight push. Since it's a pool table, what will ensue will be ballistic-the balls will move in all sorts of directions, bouncing off each other and the edges of the table, without too much friction (which translates to heat). When the process is done, the configuration of the balls (what balls are at what intersection points) represents the result of the computation.
That's pretty much what happens in Reversible Computing-but since we've looked at it without thinking of the actual steps involved in the computation, we need yet another way of looking at it.
Walking On Water
When we get back to the traditional way of looking at it, we can see that we would need lots of memory to store intermediate results: at each point in the computation, the results would be stored somewhere, instead of being thrown away (and therefore dissipated as heat). The next step would generate more results, and so on, and the memory would soon fill up. For a really complex calculation, you would need tons and tons of RAM! So what we do is go forward a few steps and reach a certain stage x. Then we uncompute the intermediate results we no longer need, freeing up memory, and use what remains-together with the information at x-to do some more calculations. We thus proceed a little further, then backtrack again, and…
To give a simple analogy, think of crossing a river using only a few stepping stones (which represent the limited memory). You place, say, five of them in the river, and walk to a point. Then you stand on the fifth stone, pick up the other four, and put them in front of you. You discard what was the fifth stone, then walk ahead, pick up the stones again, and…
Think of it thus:
5 + 3 = 8 (the first step; we don't throw away the knowledge that 5 plus 3 equals 8)
6 + 1 = 7 (the second step; we don't throw away the knowledge that 6 plus 1 equals 7)
Say we get to a step that involves adding 5 and 3 and 9: we don't compute 5 + 3 + 9; we use our prior knowledge and just do 8 + 9.
That's a crude (and strictly speaking, incorrect) way of looking at it, but it does reveal the worth of "uncomputing" values. The challenge lies in devising logic where uncomputed values can be put to use.
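Here's a toy Python sketch of the idea (hypothetical steps, not real reversible hardware): run a chain of invertible steps forward, copy out the answer, then retrace the steps backward to "uncompute" the intermediates instead of discarding them as heat.

```python
# Toy sketch of "uncomputation": run reversible steps forward,
# copy out the answer, then run the exact inverse steps backward
# so the working register returns to its initial state instead of
# its contents being thrown away.

def step(x):        # a reversible step: here, simply add 3
    return x + 3

def unstep(x):      # its exact inverse
    return x - 3

def run_and_uncompute(x0, n_steps):
    x = x0
    for _ in range(n_steps):    # compute forward
        x = step(x)
    answer = x                  # copy out the result
    for _ in range(n_steps):    # uncompute: retrace steps in reverse
        x = unstep(x)
    assert x == x0              # the "scratch" register is clean again
    return answer

print(run_and_uncompute(5, 4))  # 17
```

The stepping-stone trick in the article is a refinement of this: instead of keeping every intermediate until the end, you uncompute segments as you go, trading extra (re)computation for memory.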
We've given you two fancy, fun ways of looking at how a reversible computation would work. But what would a reversible circuit actually look like?
We need to get back to logic-we need reversible logic gates. As a simple example, the AND gate is irreversible (like we said), but the NOT gate is reversible-you take the output and feed it back, and you get the original input.
Now think of a CNOT (controlled-NOT) gate: it has two inputs A and B, and it flips B if and only if A is 1. In other words, it's a NOT gate with a control line. Then think of the Toffoli gate, a CCNOT (controlled-controlled-NOT) gate: it has inputs A, B, and C, and it inverts C if and only if A and B are both 1. It turns out that the Toffoli gate is a "universal" reversible gate, in the sense that (in simple terms) any reversible computer can be built using Toffoli gates alone.
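These gates are easy to model as permutations of bit-tuples; a small Python sketch shows both their reversibility and how the Toffoli gate computes AND without discarding its inputs:

```python
# Sketch of reversible gates as one-to-one mappings of bit-tuples.

def cnot(a, b):
    """Flip b iff a is 1. Self-inverse: apply twice, get the input back."""
    return (a, b ^ a)

def toffoli(a, b, c):
    """Flip c iff a AND b are both 1 (the CCNOT gate)."""
    return (a, b, c ^ (a & b))

# Reversibility: each gate is its own inverse.
for bits in [(0, 1), (1, 1)]:
    assert cnot(*cnot(*bits)) == bits
for bits in [(1, 1, 0), (1, 0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits

# Universality in action: with the third input fixed to 0, Toffoli
# computes AND reversibly -- the inputs a and b ride along unharmed.
a, b = 1, 1
print(toffoli(a, b, 0))  # (1, 1, 1): third bit holds a AND b
```

Because the original a and b come out alongside the AND result, the operation can always be undone-unlike the ordinary AND gate pictured below.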
But that's only in theory. The real gates that a reversible computer would be based on are more complex, and space does not permit a discussion of them here.
Adiabatic circuits follow two golden rules of switching:
- Never turn on a transistor when there is a voltage difference between its terminals
- Never turn off a transistor when there is a current passing through it
An AND gate. Different input sets give the same output. How can you possibly get back what you put in?
A CNOT gate. Reverse the operation, and you get back what you had. Of course, even the NOT gate is an example
We now know how reversible computing has the potential to reduce heat production, and to enable chips to shrink further. But is it just another fad? A toy technology, so to speak?
It simply is not. The general ideas of the technology date back to the 1970s! ITArchitect.com surmised two years ago that in 2013, adiabatic techniques would become more cost-effective than traditional methods for desktops. With performance per watt becoming more and more of a benchmark, the trend is clearly towards "green computing."
The trend is also towards quantum computing, which has been much in the press over the last few years (see our article on quantum cryptography, Digit February 2006). And there is a close relationship between quantum and reversible computers: every quantum computer is a reversible computer. But before quantum computing arrives, the problems in reversible computing need to be sorted out.
We'll give the last word to Dr Michael Frank, now a junior faculty member in the ECE department in the FAMU-FSU College of Engineering, and a Reversible Computing pioneer. Dr Frank said in an interview with Digit in 2004: "It is not an exaggeration to say that the entire long-term future of computing technology, and indeed of our entire civilisation, will depend on exploring the problems that reversible computing poses."
Essentially, the merger of physics and computer science is the new convergence we're headed towards.