For now, while the highest-end commercial processors have no more than 12 cores, Intel says nothing is stopping it from creating efficient processors with over 1,000 processing elements.
Recent advances in the world of field programmable gate arrays (FPGAs) by the University of Glasgow have led to the creation of a 1,000-core FPGA processor. The team, led by Dr. Wim Vanderbauwhede, divided the chip's many millions of transistors into 1,000 separate elements, or mini-circuits, each of which is able to process its own instructions. This prototype FPGA processor has apparently already shown a 20x performance increase over conventional processors while using a fraction of the power.
Intel, on the other hand, has been thinking about creating a 1,000-core processor for a while now, and already has a design in mind based on its prototype 48-core Single-chip Cloud Computer (SCC). There are still computational limiting factors, however, such as Amdahl's law, a mathematical approximation of the speedup gained from splitting a program across parallel threads or processors.
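Amdahl's law makes the limiting factor concrete: if only a fraction p of a program can run in parallel, the speedup on n cores is capped at 1 / ((1 − p) + p/n). A minimal sketch (the values of p below are illustrative assumptions, not figures from the article):

```python
def amdahl_speedup(p, n):
    """Maximum speedup on n cores when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1,000 cores, a program that is only 90% parallel tops out
# at under a 10x speedup -- the serial 10% dominates.
for p in (0.50, 0.90, 0.99):
    print(f"p={p:.2f}: 48 cores -> {amdahl_speedup(p, 48):5.1f}x, "
          f"1000 cores -> {amdahl_speedup(p, 1000):5.1f}x")
```

This is why simply adding cores does not help unless software is restructured to shrink its serial fraction.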
If Intel does build on its SCC-derived plans for 1,000-core homogeneous multi-core central processing units, the question remains whether this is actually the right approach. Modern GPUs already have thousands of processing elements, but so far can't be used efficiently to solve general-purpose problems or run operating systems. It's for this reason that AMD and Intel have been working on heterogeneous multi-core microprocessors instead – chips that will contain both x86 processing cores and high-performance stream processors (and, in AMD's plans, even I/O controllers) – due to arrive later this decade in the form of Bulldozer and Sandy Bridge server CPUs.
While 1,000 cores seems like overkill right now, Intel's Timothy Mattson says there will probably come a time when applications require that many cores:
“Speaking from a technical perspective, I can easily see us using 1000 cores. The issue, however, is really one of product strategy and market demands. As I said earlier, in the research world where I work, my job is to stay ahead of the curve so our product groups can take the best products to the market, optimised for usage models demanded by consumers.”
You might also be wondering why the holy grail is such a round number as 1,000 cores. Mattson explains why 1,000 seems feasible in the next 8-10 years:
“I came up with that 1000 number by playing a Moore's Law doubling game. If the integration capacity doubles with each generation and a generation is nominally two years, then in four or five doublings from today's 48 cores, we are at 1000. So this is really a question of how long do we think our fabs can keep up with Moore's Law. If I've learned anything in my 17 years at Intel, it's never bet against our fabs.”
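Mattson's "doubling game" is easy to check: starting from today's 48 cores and doubling once per process generation, count the generations until the core count passes 1,000.

```python
# Moore's Law doubling game: 48 cores, doubling each ~2-year generation.
cores, generations = 48, 0
while cores < 1000:
    cores *= 2
    generations += 1

# 48 -> 96 -> 192 -> 384 -> 768 -> 1536
print(f"{generations} doublings (~{generations * 2} years) reach {cores} cores")
```

Five doublings overshoot to 1,536 cores, which is why Mattson says "four or five" generations, or roughly a decade at two years per generation.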