It is interesting to see how the lines are blurring between the Central Processing Unit (CPU), the heart of a modern computer, and the Graphics Processing Unit (GPU), the heart of video processing. GPUs are extremely fast at vector processing (matrix multiplication, cross products and the like) because that is what graphics workloads demand.

In terms of raw power, a GPU is a monster compared to a CPU of its class (Core 2 Duo: ~25 GFLOPS; NVidia GeForce GTX 280: ~933 GFLOPS, a gap of roughly 37x). GPUs also have huge memory bandwidth compared to what a CPU gets from main memory (DRAM). Consequently, they can be repurposed as general-purpose processors to do physics simulation or speech recognition, or be bunched together into a very powerful high-performance cluster supercomputer. Using GPUs for non-graphics purposes is termed GPGPU (General-Purpose computing on GPUs). GPGPU applications typically involve large, independent data sets and abundant data parallelism, so the GPU can bring all that raw power to bear and deliver results faster than a CPU can[1].
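To make the data-parallelism point concrete, here is a minimal sketch in NVidia's CUDA (more on CUDA below). The kernel, array size and launch geometry are my own illustrative choices rather than anything from the cited sources; the point is simply that each element of a large array gets its own GPU thread and no element depends on any other.

// saxpy.cu -- a small GPGPU-style workload: y = a*x + y over a large
// array, one GPU thread per element. Every element is independent,
// which is exactly the shape of problem GPGPU favors.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                    // guard threads past the end of the array
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;                 // 1M elements (illustrative)
    const size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);    // host-side data
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;                        // device-side copies
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    const int block = 256;                       // threads per block
    const int grid = (n + block - 1) / block;    // enough blocks to cover n
    saxpy<<<grid, block>>>(n, 3.0f, dx, dy);     // launch ~1M threads
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);  // syncs with kernel

    printf("y[0] = %.1f (expected 5.0)\n", hy[0]);      // 3*1 + 2

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

Compiled with nvcc, the same binary scales across however many multiprocessors the GPU has; the per-element independence is what lets the hardware throw hundreds of ALUs at what a CPU would run as a sequential loop.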

Once software folks hacked GPUs to their own purposes, the graphics vendors facilitated the trend by making the GPU easier to program (NVidia CUDA, AMD Stream SDK and Brook+), and now we see GPUs such as Tesla from NVidia and FireStream from AMD that are designed specifically to be great GPGPUs[2].

Things are changing on the CPU side as well. AMD, after acquiring ATI, is developing a heterogeneous multicore processor called Fusion, in which the GPU, which traditionally lived in the AMD north bridge, moves into the CPU as if it were another core. This should mean lower DRAM latency for the GPU and better performance per watt, especially in mobile architectures.

And guess what Intel is doing? In its upcoming Nehalem processors, Intel plans to put its own GPU on the same die as the CPU as well[3]. Dubbed iGFX, the on-CPU graphics core will interface with the display controller hardware located in Ibex Peak, the Nehalem south bridge[4] if you will, over the Flexible Display Interface (FDI)[5]. The GPU-in-CPU chips were supposed to debut in 2009 but may have been delayed until 2010.

While these hybrid multicores challenge, to some extent, the traditional roles of the CPU and GPU in the system, they definitely change things for the only stand-alone GPU vendor left standing: NVidia. Granted, Larrabee may be a circa-2006 GPU in NVidia's eyes, but it would be hard for them not to think about where the CPU bellwether is headed with it. Watch this YouTube video of NVidia's CEO commenting on Intel Larrabee.

While moving the GPU into the CPU can be seen as an evolution of the GPGPU ideas that diminished the CPU's role in the applications mentioned earlier, it can also be seen as an outcome of AMD buying ATI and thinking about how to marry the two technologies. It could also be a step toward AMD coming up with an x86 extension[6] to use the GPU core more directly from software, thereby making GPGPU less exotic and more mainstream. That would mirror how AMD came up with the 64-bit extensions to x86 first and Intel had to follow suit.
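To get a feel for what such an extension would change: SIMD extensions like SSE can already be driven directly from ordinary x86 code through compiler intrinsics, with no separate toolchain. The sketch below uses real, existing SSE intrinsics purely as an analogy; how a GPU-core extension would actually be exposed is, as footnote 6 implies, speculation.

// add4.c -- an existing x86 extension (SSE) used straight from C.
// An ordinary x86 compiler handles this; no GPGPU compiler involved.
// A GPU-core extension could plausibly surface the same way.
#include <xmmintrin.h>

void add4(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);             // load 4 floats from a
    __m128 vb = _mm_loadu_ps(b);             // load 4 floats from b
    _mm_storeu_ps(out, _mm_add_ps(va, vb));  // out[i] = a[i] + b[i], 4-wide
}

Today the GPGPU equivalent of those three lines requires a dedicated compiler and runtime; an x86-style extension would fold the GPU core into the normal toolchain the way SSE folded in vector math.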

How will Intel's own attempt at a discrete GPU, Larrabee, stack up against ATI's graphics core? NVidia raised eyebrows last month when it allowed Intel X58 chipsets to do native SLI without using any nForce 200 chips. Perhaps Intel will return the favor and invite NVidia's GPU core onto an Intel chip?[7]

The coming months and years will be pretty interesting times for sure. Whether GPU-in-CPU will be a short-lived fad that fizzles out or a long-term trend favored by end users and designers, only time will tell.

For now, AMD seems to be better off than Intel.

[1] General Purpose Computation on Graphics Processors (GPGPU), presentation by Mike Houston, Stanford University.

[2] A Brief Introduction to GPGPU, by Paolo Corsini, Hardware Upgrade.

[3] The CPU and the GPU have separate clocks in both Intel Nehalem and AMD Fusion.

[4] In place of separate north bridge and south bridge chips, the Nehalem architecture has Ibex Peak.

[5] Non-graphics traffic between the CPU and Ibex Peak travels over a 2 GB/s chip-to-chip interconnect called the Direct Media Interface (DMI).

[6] Both the Intel and AMD GPU cores need GPGPU compilers for applications to be able to use them; an x86 application cannot run on these cores today. An x86 extension would obviate the need for specialized compilers.

[7] This is wild speculation on my part. After all, an Intel exec has said NVidia's CUDA will be nothing more than a "footnote" in history.
