University of Utah experiments with GPU-acceleration in Linux kernel

By Kshitij Sobti | Published: 09 May 2011 | Last Updated: 09 May 2011

If you're a gamer, a large proportion of your computer's cost has probably gone into its GPU, the graphics card. It is one of the most powerful components in your system, and likely one of the most underused except when you are actually gaming. Even in a cheaper system with an entry-level graphics card, it is usually among the most powerful parts of the computer.

Graphics cards have, for a while now, supported ways to harness their power for non-graphical computation as well.

After all, "graphics card" / "GPU" is just a name; what it really is is a powerful, highly parallelised processor optimised for dealing with streams of data. This also makes it highly useful for accelerating tasks such as encryption.

Imagine, then, if the very core of your operating system, the kernel, could use GPU acceleration. That is exactly the research going on at the University of Utah, with sponsorship from NVIDIA.

Many applications, such as media encoders, Adobe Premiere Pro and MATLAB, already use these technologies to their advantage, though with such software there is a reasonable expectation that the user will have a decent graphics card. Still, it is hard to buy a computer these days that does not have at least a rudimentary GPU, so it is quite likely that a large number of applications in the future will support hardware acceleration. If graphics cards become truly ubiquitous, having kernel code run on them is likely to become a reality as well.

The University of Utah's "KGPU" project modifies the standard Linux kernel, adding support for accelerating some code using NVIDIA CUDA, NVIDIA's own GPU-computing platform. Included is an implementation of the AES (encryption) cipher for use in encrypted filesystems, which has managed to "reach a factor of 3x ~ 4x improvement over an optimized CPU implementation (using a GTX 480 GPU)."
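KGPU's actual AES code is CUDA running on the GPU, which isn't reproduced here. But as a rough illustration of why block encryption maps so well onto thousands of parallel threads, here is a toy Python sketch (a hypothetical hash-based keystream, not real AES, and not KGPU's design): each block's transformation depends only on the key and the block's index, never on neighbouring blocks, so in principle every block could be handed to a separate GPU thread.

```python
from hashlib import sha256

BLOCK = 16  # AES block size in bytes

def keystream_block(key: bytes, index: int) -> bytes:
    # Toy keystream: hash(key || block counter). A real counter-mode
    # cipher would encrypt the counter with AES instead; the point is
    # that each block is computed independently of all the others.
    return sha256(key + index.to_bytes(8, "big")).digest()[:BLOCK]

def encrypt(key: bytes, data: bytes) -> bytes:
    # Every block depends only on (key, index), not on any other block,
    # so all iterations of this loop could run in parallel -- e.g. one
    # GPU thread per 16-byte block, which is the shape of workload a
    # GPU handles well.
    out = bytearray(data)
    for i in range(0, len(data), BLOCK):
        ks = keystream_block(key, i // BLOCK)
        for j in range(min(BLOCK, len(data) - i)):
            out[i + j] ^= ks[j]
    return bytes(out)

key = b"example-key"
msg = b"plaintext that spans multiple 16-byte blocks"
ct = encrypt(key, msg)
# XOR keystreams are symmetric: encrypting twice restores the plaintext.
assert encrypt(key, ct) == msg
```

Block ciphers run in chaining modes (where each block depends on the previous one) do not parallelise this way for encryption, which is one reason counter-style constructions suit GPUs.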

Find out more at the project's page on Google Code.