We often have no idea how much data lies behind the content we consume. The information explosion is such that not even your uber-cool 2 TB hard disk is enough. We may not realise it, but some of the most significant data we use today would be infeasible to handle unless highly compressed.
To see how much compression goes on behind the scenes, take a normal full-colour image in BMP format. The image takes up 24 bits per pixel, which is 3 bytes per pixel, so to estimate the file size you simply multiply the number of pixels by 3 bytes. A common 2-megapixel image will therefore take up around 6 MB uncompressed.
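The arithmetic above can be sketched in a few lines of Python. The 1600x1200 resolution below is an assumed example of a roughly 2-megapixel frame, and real BMP files add a small header plus row padding, which we ignore here:

```python
# Back-of-the-envelope size of an uncompressed 24-bit image.
# 1600x1200 is an illustrative ~2-megapixel resolution; a real BMP
# also carries a small header and per-row padding, ignored here.
width, height = 1600, 1200
bytes_per_pixel = 3                 # 24 bits = 3 bytes (R, G, B)

size_bytes = width * height * bytes_per_pixel
print(f"{size_bytes / 1024**2:.1f} MB")   # roughly 5.5 MB, close to the 6 MB above
```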
While this may seem rather small considering the size of hard disks today, for an image used on the internet it is exorbitant. The same image as a JPEG would, in all probability, take up less than a megabyte.
Now a look at videos.
A simple VCD-quality video, hardly worth a scoff in today's HD world, has a resolution of 352x288 (PAL) at 24-bit colour, playing at 25 frames per second. A single frame here is 352 x 288 x 3 bytes = 304,128 bytes, or around 297 KB. Not much? Imagine that coming at you 25 times a second. That is around 7.25 MB per second, so a single VCD worth of content (around 80 minutes) would take over 34 GB to store!
Comparing that to the actual capacity of a VCD -- around 700 MB -- we see a compression ratio of almost 50 times. This high compression is possible even with an outdated and inefficient codec such as MPEG-1. A careful observer may notice that the 700 MB on a VCD also includes audio, something we have not factored in; with that included, we would only get a higher and more impressive compression ratio. Consider that today we have 1080p HD videos with resolutions of up to 1920x1080, around 20 times the resolution of a VCD.
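The VCD numbers above can be verified with the same kind of arithmetic; this sketch uses only the figures from the article (352x288 at 25 fps, 80 minutes, a 700 MB disc):

```python
# Rough VCD arithmetic: raw frame size, raw stream size, and the
# compression ratio relative to a ~700 MB disc (audio ignored).
width, height, fps = 352, 288, 25     # PAL VCD resolution and frame rate
bytes_per_pixel = 3                   # 24-bit colour
minutes = 80                          # one VCD worth of video

frame = width * height * bytes_per_pixel     # 304,128 bytes, ~297 KB
per_second = frame * fps                     # ~7.25 MB per second
total = per_second * minutes * 60            # ~34 GB uncompressed

ratio = total / (700 * 1024**2)              # versus a 700 MB disc
print(f"raw: {total / 1024**3:.1f} GB, ratio: {ratio:.1f}x")  # just under 50x
```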
Audio, on the other hand, is something we have all dealt with in uncompressed form, on audio CDs or as WAV files on our hard disks. Uncompressed audio is comparatively easier to deal with, since it is significantly smaller than video. To compare file sizes: a minute of uncompressed audio takes as much as 10 MB, while compressed it can shrink to as little as 600 KB (AAC at 80 kbps), a ratio of as much as 17 times!
To see how we arrive at this figure, take a one-minute, 16-bit stereo audio sample recorded at a sampling frequency of 44.1 kHz. A 44.1 kHz rate means 44,100 samples per second, each of 16 bits (2 bytes). Multiplying, we get 44,100 x 2 = 88,200 bytes, or about 86 KB, for a second of audio; for a minute we are looking at approximately 5 MB. And since we usually have a stereo track with two channels of audio, that is in effect 10 MB of data per minute. Nowadays audio is recorded at sampling rates as high as 192 kHz, at 32 bits per sample, with as many as 8 channels (an overall increase of around 34 times)!
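The same calculation for CD-quality audio, written out in Python:

```python
# CD-quality audio: 44.1 kHz sampling, 16-bit samples, 2 channels.
sample_rate = 44100          # samples per second
bytes_per_sample = 2         # 16 bits
channels = 2                 # stereo

per_second = sample_rate * bytes_per_sample * channels   # 176,400 bytes
per_minute = per_second * 60                             # ~10 MB, as above
print(f"{per_minute / 1024**2:.1f} MB per minute")
```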
It becomes fairly obvious that multimedia is not something to be messed with in its uncompressed state!
Containing the explosion
Even if you ignore the exorbitantly large sizes, there is another concern to be noted. With an uncompressed 1080p HD video reaching over a TB in size, how exactly is one to access such a volume of data in the two hours it takes to play?! Uncompressed HD video is like opening as many as 30 BMP images of 1920x1080 resolution every second, requiring a read speed of over 150 megabytes per second for the video alone.
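The throughput claim checks out with the same per-frame arithmetic used earlier, here at 30 frames per second:

```python
# Uncompressed 1080p throughput: why raw HD video is impractical to stream
# from disk. 30 fps matches the "30 BMP images a second" figure above.
width, height, fps = 1920, 1080, 30
bytes_per_pixel = 3

per_second = width * height * bytes_per_pixel * fps   # bytes per second
two_hours = per_second * 2 * 60 * 60                  # a full-length film

# ~178 MB/s sustained, and over a terabyte for two hours of video
print(f"{per_second / 1024**2:.0f} MB/s, {two_hours / 1024**4:.2f} TB for 2 hours")
```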
When dealing with data of this kind, even a compression ratio of three or four times is clearly not enough. While audio and images are still quite often compressed using lossless formats -- PNG for images; APE, FLAC etc. for audio -- traditional lossless compression becomes unfeasible when used on video. Lossless compression of video is only of use in media companies, where footage must go through great amounts of editing and compositing before it is finally encoded into the format in which it will be distributed.
Additionally, for audio and video, the compression format must be such that the entire file need not be decompressed just to play the media. These requirements are clearly not met by our normal compression utilities, which offer only lossless methods and require decompression of the entire file before use. Decompression speed is also very important, since huge amounts of data are to be handled; while it is possible to come up with innovative compression methods, it is vital that the resultant file be playable on current computers.
Besides audio and video, many other kinds of data demand great volumes of storage. Software distributed online is usually heavily packed, because increasing internet speeds are no excuse for inefficiency. The popular VLC media player, for example, is a ~17 MB download in its v1 releases, yet after installation it expands to take up as much as 70 MB! While such content needs to be compressed without any loss of information, fast decompression is not as important here.
Different kinds of data show redundancies in different places: an image might have an entire row or column of a single colour, a video might have a few repeated frames, and text might have some words appearing more frequently than others. Each data type requires a different approach to detect such redundancies, so that it may be stored in the minimum amount of space.