It's Morphin' Time!

Published: 01 Sep 2006 | Last Updated: 01 Sep 2006
And a dark time came, for RAM was scarce, and the swap was reduced to naught.
And the OS said unto the Chip, "Be Memory!", and lo! For the Chip was now Memory.

- The E-Testament, Chaos 21:12

All right, so there's no such thing as the E-Testament. Even so, imagine a time when you are running the immensely taxing ~Doom V~, and your system finds itself desperately in need of memory. All it needs to do now is tell one of the other chips (how about your Ethernet card?) to transform itself into a memory chip, and all is well in the gaming world again. Even as these words spill out, the idea seems quite absurd. After all, whoever heard of hardware running around transforming itself into new components? We ranted about adaptive software not too long ago (~The Lizard of Oz, Digit January 2006~); how about hardware that can adapt itself to the demands of the situation? If William Ditto, Chair of the Department of Biomedical Engineering at the University of Florida, and his colleagues, have their heads screwed on right (and many, including the US Navy, believe they do), we might actually see a prototype chip that can transform itself from CPU to Memory to Graphics card to take-your-pick as early as next year! The idea is called Chaos Computing, and will, as the name suggests, use a phenomenon that scientists are otherwise known to hate – chaos.

The Butterfly Effect And Other Stories
Chaos, in the scientific sense, isn't quite the same as we know it in real life. While life's chaos is completely random and unpredictable, science defines a chaotic system as one that is perfectly deterministic, but extremely sensitive to its initial conditions. To understand the concept better, try this: get your hands on a crazy ball (those little unnaturally bouncy balls) and bounce it down a sufficiently bumpy slope. If you can find it after that, repeat the exercise ~exactly~ as you did it before, right down to the movement of your arm. Most likely, it won't follow the same path down as it did before; in fact, it will probably go so astonishingly off course, you'd think it has a mind of its own. This is because even the slightest extra movement or the tiniest irregularity on the surface of the ball has a dramatic effect on the way it comes up from the first bounce. This then affects the next bounce and the one after that, and so on, and now nobody will believe your "exact reproduction" story. Note, however, that calculating these paths is quite possible, though horribly complex, so the system isn't really random – just finicky.
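If you'd rather not chase a crazy ball down a hill, the same finickiness shows up in the logistic map, a textbook chaotic system (our stand-in here, not something from the article): nudge the starting value by one part in a billion and watch the two trajectories part ways. A quick sketch in Python:

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x) and return the whole path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)  # a one-part-in-a-billion nudge

# Every step is perfectly deterministic, yet within 50 bounces the two
# "balls" are nowhere near each other.
print(max(abs(x - y) for x, y in zip(a, b)))
```

Run it and the maximum gap between the two runs is of the same order as the values themselves, even though the starting points were all but identical: deterministic, not random, just finicky.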

Such fickleness is what led Edward Lorenz to talk about the ~Butterfly Effect~ while studying chaos in weather. It's a little like the Domino Effect, only with a bigger, meaner domino falling each time. The tiniest change in his simulation caused major changes in the weather system, and his subsequent paper read, "One meteorologist remarked that if the theory were correct, one flap of a seagull's wings could change the course of weather forever." The seagull was soon replaced by the prettier butterfly.

Chaos And The Chip
The main idea behind Chaotic Computing is the manipulation of ~Logic Gates~, the basic elements of computing. You might have heard of these – the AND gate, for example, outputs a binary 1 only if all its inputs (A ~and~ B ~and~ C ~and~ so on) are also 1. To construct any chip, you're going to need the most basic AND, OR, NOT and XOR (eXclusive OR) gates. Today, each gate serves one specific purpose, so if you wanted a chip that could perform AND operations and OR operations, you'd need to fabricate one with both gates, even if you don't need them both at the same time. With Ditto's idea, however, you'd just need to fabricate one circuit, which can then become any gate you want it to be with but a slight programmatic adjustment.
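To see why being able to morph between a handful of basic gates is enough to become "any chip", note that larger circuits are just wirings of these primitives. A toy illustration (ordinary Python bitwise operations, nothing chaotic yet):

```python
# The basic gates, modelled on single bits.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a
def XOR(a, b): return a ^ b

# Even XOR can itself be wired up from AND, OR and NOT, which is why a
# circuit that can become any one of the basic gates can, in principle,
# be composed into anything at all.
def xor_from_basics(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        assert xor_from_basics(a, b) == XOR(a, b)
```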

We always talk about chips dealing with ones and zeroes, but we overlook the fact that at the bottom of it all, they're actually dealing with electric current. A typical IC today treats any voltage below 0.8 volts (approx) as a logical 0, and any voltage above 3.1 volts (approx) as a logical 1. Life between these two thresholds is a grey area, and will generally confound the chip in question. Now, imagine if you played around with these thresholds a little. Let's take an example: we have at our disposal an OR gate, and give it one input of 0.4 volts (a logical 0), and 5 volts (a logical 1) at the other input, resulting in an output of, say, 4 volts (still a logical 1). Now let's change the threshold for 1 to 4.5 volts. The inputs are still 0 and 1, but the output will now be taken as 0, and poof! Our OR gate is now an AND gate. The example is crude, and purists would probably come after us with torches lit, but you get the picture. By manipulating thresholds in this manner, gates can be transformed on a wide scale, making it possible to create complex chips that can change their behaviour whenever we want them to. These chips will be built using ~chaotic elements~ – devices with inherently chaotic behaviour – and the approach isn't limited to electrical circuits; we'll soon be seeing lasers and even neurons joining in. Ditto's company, ChaoLogix Inc., has already begun work on the Reconfigurable Chaotic Logic Gate Array (RCLGA), which will subsequently be interfaced with an existing operating system to show off its capabilities.
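The crude example above can be sketched in a few lines of Python. The voltages and the summing behaviour are invented for illustration (a real chaotic element is far subtler): one fixed circuit adds its two input voltages, and moving the threshold alone decides whether it behaves as OR or AND.

```python
LOW, HIGH = 0.4, 5.0   # volts standing in for logical 0 and 1

def gate(v_a, v_b, threshold):
    """One fixed circuit: sum the inputs, compare against a movable threshold."""
    return 1 if v_a + v_b > threshold else 0

inputs = [(LOW, LOW), (LOW, HIGH), (HIGH, LOW), (HIGH, HIGH)]

# Threshold at 3.0 V: a single HIGH input tips the sum over, so it's an OR gate.
print([gate(a, b, 3.0) for a, b in inputs])   # [0, 1, 1, 1]

# Raise the threshold to 7.0 V: both inputs must be HIGH, so it's an AND gate.
print([gate(a, b, 7.0) for a, b in inputs])   # [0, 0, 0, 1]
```

No rewiring happens between the two print statements; only the threshold moves, and the gate's logical identity changes with it.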

Programming The Madness
It's probably not that awe-inspiring to talk of reconfiguring one or a few gates, but think about the chaos that will ensue when you deal with circuits that contain thousands, even millions of gates – one false move and you could turn your CPU into a sound card instead of a memory chip, and you know you don't want that! Programming these chips is going to be quite difficult, and the success of the concept hinges on it. If programmers find that programming for the chaotic chip is more trouble than it's worth, we wouldn't put it past them to reject the idea altogether and carry on writing code for existing single-purpose hardware. ChaoLogix will also be developing a programming language for the RCLG arrays; time will tell how programmers react to it.

Been There, Done That?

The idea of a reconfigurable chip that can take on many different roles isn't new, however. It's been around in the form of the Field Programmable Gate Array (FPGA), a semiconductor device built from logic gates, with reconfigurable interconnects between them. These were the original "stem cells" of the electronic world – out of the foundry in no particular form, and then configured depending on their intended use; the first FPGAs were configurable only once.

Then came the era of the reprogrammable FPGA, which allowed you to rewire the interconnects as and when you needed to. Soon enough, the FPGA was proposed as a replacement for embedded microcontrollers, culminating in the manufacture of fully functional CPUs. Xilinx's MicroBlaze and Altera's Nios II are both "soft CPU cores" based on FPGAs (called soft because you could reconfigure them if you wanted to), but they found little mainstream use as they were remarkably slower than their specifically built counterparts.

You'd expect that a chaotic chip would be based on the FPGA, but you would be quite wrong. The trouble with the FPGA is that it takes a few milliseconds to reconfigure the interconnects for a new operation – highly unacceptable. A chaotic chip, however, will be able to change its role in just one clock cycle, making it (quite literally) a million times faster.

Doubtless you're trying to see what's so great about chaotic chips other than their "I'm-so-cool-I-can-do-anything" claim to fame. The most obvious advantage is, of course, the end of the single-purpose chip. Instead of wasting years developing the innards of an otherwise silly device like, say, the next revolutionary toaster, manufacturers will be able to use a single, generic chip and program it to serve their purpose, leaving them time and money to focus on more important things.

Designing and fabricating a chaotic chip is going to be a lot easier and cheaper than it is for current chips, especially now that everyone's straining to get as many transistors into as small an area as possible. And really, who doesn't love cheap technology?

Finally, Ditto claims that because the chaotic chip can switch roles in a single cycle of the system's clock signal, smart design can enable it to perform many more operations per second than existing chips, resulting in faster computing than we know today. Imagine a computer with two chaotic chips – at a given time, we could have one dedicated to processing and one playing the role of system memory. If we need more number-crunching power, we could then have both playing the role of processor!

Pandemonium Ahead
In March 2005, Indian scientists K. Murali and Sudeshna Sinha, along with William Ditto, published their first experiments with building a real-life reconfigurable cell, and implemented a binary half-adder with it. They've also had some success with building a chaotic element using a leech neuron, though it will still be some time before we see nerve cells in our PCs.
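A half-adder, for the curious, adds two bits and produces a sum bit and a carry bit. In conventional-gate terms it is simply one XOR and one AND working on the same inputs; the sketch below uses ordinary bitwise operations, not the chaotic element from the paper.

```python
def half_adder(a, b):
    """Add two bits: the sum is a XOR of the inputs, the carry is an AND."""
    return a ^ b, a & b   # (sum, carry)

# The two output bits together really do encode a + b.
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        assert a + b == 2 * c + s
```

Trivial with ordinary gates, but demonstrating it with a reconfigurable chaotic cell showed that the morphing-gate idea could be composed into genuinely useful arithmetic.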

If all goes well (and that's a big if), we'll probably see the first commercially viable chaotic chips before the decade runs out. But then, they said similar things about quantum computers, and we all know that story. The truth is, there's a lot that might hinder chaos computing – from design issues to inherent electrical flaws that can't be removed, to the possible difficulty in programming such circuitry. Still, the idea is a lot more appealing than building new chips from the ground up, and definitely shows more promise of becoming a reality sooner than quantum or DNA computing. When it does get off the ground, well, your imagination will be the only thing limiting the possibilities – smaller, smarter cell phones with days' worth of talk-time, faster CPUs that spend their spare time as graphics and sound cards, PCs that can heal themselves – take your pick!

And you thought chaos was a bad thing.

Team Digit

All of us are better than one of us.