How processors go from idea to your PC
If you’ve ever handled a transistor—in your college lab, for instance—you’ve probably had a brain cell or two pop trying to imagine 291 million of them inside the diminutive package that is the Intel Core 2 Duo. And if it isn’t enough to imagine just transistors on that little chip, try to imagine that those transistors are actually connected to each other in circuits, which ultimately power all our PCs. Hurts, doesn’t it?
So how do you design circuitry when millions of transistors are involved? And how do you fit the lot on a chip that’s barely the size of a fingernail?
Before the actual design of your microchip commences, you first have to decide what it’s going to do—and not just in vague terms like “Oh, it’ll decode video files.” You’ll have to draw up algorithms for how exactly the chip will decode video—which codec it’ll use and so on.
The next step is dividing the chip’s job into hardware and software—what tasks will be performed by the chip directly, and what you’ll write drivers or software for. If you implement all the chip’s functionality on the hardware itself, you’ll have yourself a real cracking performer, but you won’t be able to add to those functions later. For example, your mobile phone manufacturer could very well incorporate all the functions of your phone on to the same chip, but if they had to make even the simplest change to the phone—like adding an FM radio function—they’d have to design and manufacture a whole new chip: an embarrassingly expensive proposition. Instead, they build a chip with all the essential features—making phone calls, for instance—and write software to implement the rest of the functions.
Once the hardware’s role in the Big Picture is fixed, it’s time to design.
Drawing Board, II
We talk about circuitry with millions of transistors, but the truth is that even in the early days of chip manufacture, designers never had to bother with individual transistors—the smallest units they had to care about were logic gates. Of course, that was a time when digital circuits weren’t nearly as complex as today’s. If you were to design a microprocessor at that minute level today, it would take years just to finish the design, let alone test and refine it. The solution? Don’t bother with circuit diagrams at all—tell a computer what you want, and let it do the dirty work.
Today, designing hardware has become a matter of writing code—using a special “Hardware Description Language” (HDL), you describe every function of the chip you’re designing, right down to what the chip does in each clock cycle. That code, once interpreted by a logic synthesiser, results in a circuit diagram, which can then be refined by hand if necessary. Before your design is shipped off for manufacture, it gets a manual once-over to see whether it can be optimised beyond what the logic synthesiser came up with. Finally, it’s tested in a simulation to see if it really does what it’s supposed to.
And now, it’s off to the fabrication plant...
In The Clean Room
Even the smallest speck of dust can turn an IC into a worthless piece of silicon, so everything we’re talking about here takes place in a “cleanroom,” which is, well, a very clean room. We’re talking millions of dollars’ worth of dust filters, special furniture that won’t shed particles, even specialised cleaning materials.
It all starts with a wafer of pure silicon; the wafers are rinsed in pure water and a special cleaning solution. The huge number of transistors that will be made on this chip are going to be packed together real close, so it’s necessary to isolate them from each other with an insulating material. Rather than slap a layer of insulator onto the silicon wafer, the wafer is exposed to heat and oxygen to grow a layer of silicon dioxide on it. Now, we’ll remove the oxide only from those areas where we need to create the transistors.
The layer of silicon dioxide is then coated with a material called photoresist, which turns into an incoherent goo when exposed to ultraviolet light. Here’s where you use the circuit diagram that came out of the design process. The circuit’s layout is etched on to a mask, which is then placed on the layer of photoresist. This is then exposed to ultraviolet light, which causes the photoresist under the mask’s transparent areas (which is where the transistors need to be) to turn gooey so it can be washed away. The wafer is then doused in acid, which eats up the exposed portions of the silicon dioxide layer, giving us access to the silicon underneath; this is called etching.
(Figure: the process described above, from a bare piece of silicon to a wafer with windows etched through its silicon dioxide layer.)
Now, through a process called ion implantation, ions are diffused into the silicon to create transistors. Note that we’re talking only transistors here—even resistors and capacitors are created using transistors connected in special configurations; the gory details are too much to fit in here.
So we’ve created our components on the wafer; now it’s time to bring in the wiring.
Connecting Them All
We use the term “wiring” rather loosely, but it captures the intent. The components have been created, but they’re not yet connected. The first step is to seal the transistors in insulating material—more silicon dioxide. The masking and etching process is then repeated, only this time the mask creates windows leading down to the transistors’ terminals. A layer of metal is then deposited, which establishes the connections.
This is only the first layer of metal, which establishes only part of the connections that need to be made—another cycle of insulating, masking, etching and metal depositing follows, creating the next set of connections—this goes on for up to twenty layers. This is how they’re able to connect so many components in such a small area.
And speaking of small...
The Nanometre Kerfuffle
If you’ve been following technology news for the past few months—specifically the antics of the processor giants—you’ve most likely encountered talk about the old 90 nm (nanometre) process, the on-its-way-out 65 nm and the new wave of 45 nm processes. What are these numbers and what’s the big deal?
To put it in non-engineering terms, these numbers represent the effective gate length of the transistors on the chip—so in the same chip area, it’s possible to fit more 45 nm transistors than 65 nm ones. However, it’s not just that bit that’s causing the hype—it’s the transistors themselves (more specifically, the transistors you’ll see on Intel’s upcoming Penryn and Nehalem processors). These are called high-k metal gate transistors, and their design reduces the amount of current leaked in the transistor, which in turn pulls down the power they consume and the heat they dissipate. A more detailed explanation would fill up a few issues of this magazine, so we won’t go that deep.
Even with the new technologies, processor designs are getting more complex—HDLs have become cumbersome, and designers now prepare their specifications in variants of the C programming language. Inevitably, though, even C is going to fall short in the face of the complexity of newer hardware, so manufacturers are looking to newer methods to make processor design easier.
One way is to modularise the processor—break it into parts that can be added or removed as necessary. An approach that looks promising is AMD’s Fusion, which lets them give you processors with as many CPU or graphics cores as you want on the same chip. With the earlier method, if they wanted graphics and CPU on the same chip, they’d have to design it from the ground up—with Fusion, all they need to do is add the extra graphics core to an existing CPU core design, and voila!
And then there’s talk of offloading even more of the design to computers. Who knows, maybe we’ll soon see processors whose sole purpose is to design better processors...