From a few favourite songs on magnetic tape on a Walkman, to wireless, portable MP4 movies; from beepers to cell phones; from SLRs to camera phones — in just a couple of decades, science has taken us beyond the predictions of futurology and into the realms of Asimov and Arthur C Clarke. In a technological environment in such intense flux, we’ve become so used to witnessing Spidey-like jumps in technology during our lifetimes that even touch screens are beginning to seem old hat. We at Digit share your impatience, so we decided to satisfy our curiosity — and yours — by ferreting out and laying before you ten of the most remarkable technologies being worked on today, all set to bring sci-fi to reality.
1. CUBIC CHIPS
Laying It On Thick
Stuck In The Moore
In 1965, Intel’s Gordon Moore stated what has come to be known as Moore’s Law — that the number of transistors on a chip will double about every two years (how many times have you heard that one before?). But as chips get smaller, engineers are already facing problems in trying to cram innumerable transistors into decreasing space.
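Moore's doubling rule is simple enough to put in a few lines of code. The sketch below projects a transistor count forward from a known starting point; the 1971 figure for Intel's 4004 is a commonly quoted historical number, used here purely for illustration.

```python
# Toy illustration of Moore's Law: transistor count doubling every two years.
def transistors(start_count: int, start_year: int, year: int) -> int:
    """Project a chip's transistor count, assuming a clean doubling every 2 years."""
    doublings = (year - start_year) // 2
    return start_count * (2 ** doublings)

# Intel's 4004 (1971) had roughly 2,300 transistors; project forward to 2008.
print(transistors(2300, 1971, 2008))  # 18 doublings -> about 600 million
```

That the naive projection lands in the same ballpark as real 2008 desktop chips is exactly why the law has held up so long — and why running out of horizontal room is such a pressing problem.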
The Rochester Chip
Enter the Rochester chip – a chip designed vertically, bottom up, specifically to maximise the main functions of a chip through the use of several layers of processors. However, this ‘3D’ chip is unlike the ‘stacking’ idea, where present-day chips are merely piled one on top of another. This one is built so that each layer interacts with every other layer as part of a single circuit, while performing different functions. Chips for audio, for example, differ in their requirements from chips that process digital photos or text. The Rochester chip is designed to deal simultaneously with the different speeds and power requirements of these processes.
The design of this cubic chip (not to be confused with Apple’s Power Mac G4 Cube, which was a computer in itself) is purportedly the first to integrate each layer in such an optimally seamless and efficient manner. Piling several integrated circuits together made it necessary to first ensure effective insulation between the chips, and then drill thousands of perforations in the insulating layer to allow vertical connectivity. The prototype of this ‘cube’ is already functioning at the University of Rochester at a speed of 1.4 gigahertz.
Eby Friedman, Distinguished Professor of Electrical and Computer Engineering at the University of Rochester, New York, USA, directs the project, with help from engineering student Vasilis Pavlidis. The chip, specially fabricated at MIT (Massachusetts Institute of Technology), is still in the prototype stage.
And If It Comes Through...
The continuous shrinking of integrated circuits augments speed, but connecting multiple chips horizontally means that more space is required. Since all the layers act as a single system, the Rochester chip functions like a folded-up circuit board. Imagine the motherboard of your computer shrunk to the size of a Rubik’s cube. Besides, the architecture of the cube is such that it could increase the speed of your iPod or cell phone by up to ten times that of chips today. More height means less width, so finally, perhaps, we’ll have flatter CPUs, smaller printers, minuscule iPods and the like — and as a result, more space to use around the room. Skepticism has been voiced about whether the industry will take to it well, but we at Digit feel that the future belongs to chipper chips and not whopper circuit boards.
2. PROTEIN SHAKERS
Hard Disks Go Organic
If nature can use proteins to help our brains store memories, why can’t we? Take a good look at our CDs and DVDs and you’ll realise that they are enhancements of the vinyl record, albeit on a microscopic scale. Perhaps, then, our reliance on synthetic materials is due to end. Memory devices made of biological materials have long been thought capable of processing information more quickly and storing more data than the options available to us today. Several experiments in the past have foundered, but the challenge draws humankind onwards.
Our traditional hard disks, CDs, etc. are either magnetic or optical data storage systems which are becoming, well, harder disks to put more data into. Rooms full of memory devices seem to be the only way of managing the mammoth databases the world is now dealing with. The most significant advancement in this area has been made by researchers in Japan who have managed to develop a new protein-based memory device.
Koji Nakayama, Tetsuro Majima and Takashi Tachikawa have succeeded in etching or ‘recording’ specific data on a glass slide, using a fluorescent protein. The combined use of light and chemicals effectively stored information that was later ‘read’ and then erased. Thus, recording, playback and deletion — the basic functions of memory storage instruments — were proved possible using biological materials. They define the material as ‘a biological device that enables us to spatiotemporally photoregulate the recording, reading, and erasing of information on a solid surface using protein’.
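The record/read/erase cycle described above maps neatly onto a software interface, so here is a toy model of it. To be clear, this only mimics the *interface* of the protein memory: the real device photoregulates fluorescent protein on glass with light and chemicals, not Python objects, and the class and method names below are our own invention.

```python
# A toy software model of the record/read/erase cycle of the protein slide.
# Purely illustrative: the real device uses light and chemicals, not dicts.
class ProteinSlide:
    def __init__(self, size: int):
        self.spots = [None] * size  # each spot: a fluorescent state, or blank

    def record(self, pos: int, bit: int) -> None:
        """'Photoregulate' a spot into a fluorescent (1) or dark (0) state."""
        self.spots[pos] = bit

    def read(self, pos: int):
        """Read back the spot's fluorescence without disturbing it."""
        return self.spots[pos]

    def erase(self, pos: int) -> None:
        """Reset the spot so it can be re-recorded."""
        self.spots[pos] = None

slide = ProteinSlide(8)
slide.record(0, 1)
print(slide.read(0))   # 1
slide.erase(0)
print(slide.read(0))   # None
```

The three operations together are what make it a genuine storage medium rather than a write-once curiosity.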
And If It Comes Through...
The scientists involved in the project have themselves suggested that the technology could be used for biosensing and diagnostic assays. But of most interest to us is their third suggestion: that it be utilised for ‘record-erasable soft material’. The possibilities, if this works, are limitless, and a reversal of sorts seems possible in the not-so-distant future, with bio-chips seamlessly entering our bodies to enhance human functionality. Contemplating the consequences of bio-memory instruments, one immediately fears ‘Terminator 3’ and ‘Matrix’-like dystopian scenarios — but never fear: the research is not even at the prototype stage, and has a long way to go before it is proved to work as well as today’s storage devices. Several other parallel efforts towards the protein chip are ongoing, so it remains to be seen who will come up with the best product.
3. SENSOR GLOVES
More Than Just Haptic
From the calculator watch to HMDs (Helmet-Mounted Displays), we have always been a little impatient, and have now firmly begun to believe that a person’s computer should be worn, much like eyeglasses or clothing, interacting with the user based on the context of the situation. Indeed, at a time when skin is being treated more and more like cloth, intelligent clothing is one sure-shot way to restore to clothes their primal status as functional accessories. While laptops and palmtops are steps in this direction, serious advances indicate that the dream may not remain a dream much longer.
Fits Like A Glove
Most technological advancements and breakthroughs, regrettably, emerge from conflict, war and the needs of the military (ARPANET, aviation technology, etc.). The latest example is an intelligent glove. US soldiers in Iraq already use wearable computer systems but lack efficient input devices. Now, a company called RallyPoint, based in Cambridge, MA, has developed a sensor-embedded glove that lets a soldier view and navigate digital maps, activate radio communications, and send commands without letting go of whatever he is holding. This isn’t entirely new: several groups have brought out sensor-filled gloves in the past, using accelerometers, gyroscopes and other high-tech sensors.
However, this one is a little different: it’s more practical, rugged, and made for the military. It has been designed so that a soldier can grip an object and still use its electronic capabilities. The glove has four custom-built push-button sensors sewn into the fingers. Radio can be activated by the sensors on the tips of the middle and fourth fingers, each finger keying a different channel. On the lower portion of the index finger is a tiny sensor used to switch between ‘map mode’ and ‘mouse mode’. Another sensor, on the little finger, zooms in and out of a map while in map mode; the same sensor in mouse mode acts as a mouse-click button.
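The button-to-action mapping just described is essentially a small state machine, which we can sketch in code. The class, sensor names and return strings below are our own illustrative stand-ins; RallyPoint has not published its actual firmware.

```python
# A hypothetical sketch of the glove's button-to-action mapping as described.
# Names and actions are illustrative, not RallyPoint's actual firmware.
class GloveController:
    def __init__(self):
        self.mode = "map"  # the glove starts out in map mode

    def press(self, finger: str) -> str:
        if finger == "index":               # lower index-finger sensor toggles modes
            self.mode = "mouse" if self.mode == "map" else "map"
            return f"switched to {self.mode} mode"
        if finger in ("middle", "fourth"):  # fingertip sensors key the radio
            return f"radio channel via {finger} finger"
        if finger == "little":              # dual-purpose sensor, depends on mode
            return "zoom" if self.mode == "map" else "mouse click"
        return "no action"

glove = GloveController()
print(glove.press("little"))  # zoom (still in map mode)
print(glove.press("index"))   # switched to mouse mode
print(glove.press("little"))  # mouse click
```

The dual use of the little-finger sensor is the interesting design choice: one physical button, two meanings, disambiguated entirely by the current mode.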
And If It Comes Through...
Although it probably hasn’t seriously been envisaged yet, the glove-computer has immense possibilities for the future of gaming. We all know about the magic of Apple’s Sudden Motion Sensor, the PlayStation 3’s Sixaxis motion detection and the like. But with the glove-computer, the depth of immersion and interaction in a game could increase ten-fold. No more handheld pads or joystick surrogates. Everything you need would literally be in your hands. Now if only they could find a way to make it wireless...
4. ANTI-VIRUS CLUSTER
A Cloud Full Of Silver Linings
If you’ve heard of Web 2.0, no doubt you’ve heard of Cloud Computing — but we’ll tell you anyway. Cloud computing is a concept that combines Web 2.0, SaaS and the latest trends in technology to deliver seamless, richer services over the Internet. No self-respecting computer today can get by without good anti-virus software installed, grappling with the trojans, malware, worms and hacker-go-lucky viruses that are trying to infiltrate your system.
How nice it would be if the task of checking the files and documents that you open was done by some software deep in the infinite web, which monitors your PC remotely! Researchers at the University of Michigan developed a new cloud-based approach to antivirus which they call ‘CloudAV’ and which, they claim, can outdo any anti-virus package on the market.
Prof. Farnam Jahanian, professor of computer science and engineering in the Department of Electrical Engineering and Computer Science, along with PhD student Jon Oberheide and postdoctoral fellow Evan Cooke, evaluated 12 different antivirus programs (including the popular McAfee, Avast and AVG) by pitting them against more than seven thousand malware samples. They found that, thanks to increasingly innovative viruses and the growing complexity of anti-virus software itself, detection of malicious software was remarkably low — about 35 per cent. Besides containing several vulnerabilities themselves, most of the packages took about seven weeks on average to equip themselves against new virus threats circulating on the Web.
Another major drawback in today’s anti-virus packages is that you can’t run more than one of them simultaneously in the same system. CloudAV is a single solution to all of these problems for the following reasons:
It analyses potential threats using several different antivirus engines at the same time, significantly increasing the degree of protection for your system.
It operates through a simple, lightweight software agent installed on your computer, mobile phone or laptop, which automatically detects when a new document or program is opened and sends it to the anti-virus cloud (somewhere on the Web) for analysis.
It runs 12 different detectors in parallel, each working independently, before telling your computer whether it’s okay to open a particular file. With CloudAV, it’s pouring antivirus agents.
It caches the results, so that detection becomes faster in future.
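The whole architecture — thin client, parallel detectors, cached verdicts — fits in a short sketch. Everything below is a toy: the "engines" are trivial byte checks standing in for the 12 commercial scanners CloudAV actually ran, and the cache is a plain dict rather than a networked service.

```python
# A minimal sketch of the CloudAV idea: a thin client ships a file to the
# "cloud", several detectors scan it in parallel, and verdicts are cached.
# Detector logic here is a stand-in for the 12 real antivirus engines.
from concurrent.futures import ThreadPoolExecutor
from hashlib import sha256

DETECTORS = {
    "engine_a": lambda data: b"EVIL" in data,       # signature match
    "engine_b": lambda data: len(data) == 0,        # flags empty files
    "engine_c": lambda data: data.startswith(b"MZ") and b"packer" in data,
}

cache = {}  # file hash -> verdict, so repeat scans are instant

def scan(data: bytes) -> bool:
    key = sha256(data).hexdigest()
    if key in cache:                # cached verdict: no rescan needed
        return cache[key]
    with ThreadPoolExecutor() as pool:
        verdicts = list(pool.map(lambda detect: detect(data), DETECTORS.values()))
    malicious = any(verdicts)       # flag the file if any engine objects
    cache[key] = malicious
    return malicious

print(scan(b"hello world"))   # False
print(scan(b"EVIL payload"))  # True
print(scan(b"hello world"))   # False (served from cache this time)
```

Note the two wins the list above promised: `any(verdicts)` gives you the union of all engines' coverage, and the hash cache means a file ever scanned by anyone need never be scanned again.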
And If It Comes Through...
The latest irritant in India is frequent virus attacks on our cell phones. Typically, cell phones lack the space and power to accommodate bulky antivirus software. Leaving the job of detection and quarantine to an external agent — and not just one, but twelve — would be a boon for users of mobile computing devices. And the rest of us — PC users — will stop cursing our favourite AV vendors for the viruses that weasel in, and start praising CloudAV.
5. TELESCOPIC PIXELS
Screening For More
As alternatives to CRTs become cheaper, more than half the globe has switched over to LCD monitors or TVs — not to mention the ubiquitous TFT screens in our cell phones and PDAs. Naturally, we have also begun to perceive the manifest errors and glitches of LCD technology. Not ones to rest on their laurels, scientists have already begun investigating possible new technologies to replace LCD screens. And this time, they are doing it with mirrors.
Whether it’s LCD, Plasma or CRT screens, we’re stuck with pixels. Pixels — short for ‘Picture Elements’ — are the tiny dots that make up the images on a screen. To cut a long story short, the quality and accuracy of the image is determined by the ‘resolution’ of the screen, so the greater the number of pixels, the sharper the image.
Even though LCD screens are all the rage, there are several drawbacks that noticeably hinder the achievement of a truly high-quality image:
The pixels in an LCD screen do not really turn completely off.
It’s virtually impossible to view the image on a TFT/LCD screen in bright, direct sunlight.
When images move fast, the pixels take a few milliseconds to switch between colours, and when the colours are very different, this leads to momentary blurring.
Dead or stuck pixels, which are damaged in such a way that they permanently stay in the on or off state, seriously affect visual accuracy.
Finally, by the time light passes through the three stages of an LCD screen (the polarising film, the liquid-crystal coat and the colour filters), almost 90 per cent of it is lost, leaving the displayed image dim.
Microsoft To The Rescue
Researchers at Microsoft have come up with a terrific new design for pixels (published online in Nature Photonics, 20th July, 2008) in which each individual pixel is made up of two opposing microscopic mirrors, one of which changes shape under an applied voltage, reflecting light through a hole in the primary mirror and onto the display screen. Both mirrors are made of aluminium, and the first one, with a hole in the centre, is only 100 micrometres wide and 100 nanometres thick!
When a pixel is ‘off’, both mirrors reflect the light back towards the source, so none emerges on the other side of the screen. When it’s switched on, the disc bends towards a transparent electrode (typically made of indium tin oxide) under a small applied voltage. The light therefore bounces towards the second mirror and emerges through the hole.
And If This Comes Through...
Michael Sinclair, senior researcher in the Hardware Devices group — under the direction of Turner Whitted at Microsoft — is convinced that once the design makes it past the prototype stage, it will replace conventional display units all over the world. Less powerful backlights would be necessary and this would bring down costs, while increasing the longevity of the battery on your cell phone or laptop. The telescopic pixels allow about 36 per cent of the light through, increasing brightness by three to six times as compared with the present-day LCD technology.
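The brightness and power claims follow directly from the two transmission figures quoted in this section, as a little arithmetic shows (the 10 per cent LCD figure is the flip side of the "almost 90 per cent lost" drawback listed earlier):

```python
# Back-of-envelope comparison using the figures quoted in the article:
# an LCD stack passes roughly 10% of its backlight, telescopic pixels ~36%.
lcd_transmission = 0.10
telescopic_transmission = 0.36

# Same backlight, brighter picture:
brightness_gain = telescopic_transmission / lcd_transmission
print(f"{brightness_gain:.1f}x brighter for the same backlight")  # 3.6x

# Or, equivalently, the same picture from a dimmer (cheaper, cooler,
# longer-lived) backlight:
power_needed = lcd_transmission / telescopic_transmission
print(f"{power_needed:.0%} of the LCD backlight power")  # 28%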
Just as happened with CRT monitors, people are going to sooner or later hope to get some more space to use on their shrinking office desks and the Telescopic Pixel technology could shrink the width of the screen to the thinness of a whiteboard. As the design is simple and the materials are cheaper, the fabrication as well as price should be substantially easier on the pocket. One possible drawback could be the mechanical nature of the parts — mechanical parts tend to wear out and break down — which may raise maintenance issues, but the positives far outweigh this single danger. So, though we’re not holding our breath, we’re definitely looking forward to the quick development and commercialisation of the Telescopic Pixel Screen.
6. SENSITIVE ARTIFICIAL LISTENERS
More Like People
We’re already quite familiar with our computers interacting through auditory means — voice commands, text-to-speech software, etc. — but most of us get a bit bugged by the monotony of the electronic voice speaking back to us. Science fiction writers have always dreamt of computers becoming emotional or talking like people (there was even this computer which fell in love in the 1984 Hollywood movie ‘Electric Dreams’). The fiction may be inching towards fact with the Sensitive Artificial Listener system (SAL) being developed by an international team including Queen’s University, Belfast.
Making Human Inputs More Acceptable
Humans do not communicate through words alone. Non-verbal communication, in fact, is thought to constitute more than 90 per cent of our oral interactions. Computers, however, only understand crystal-clear commands, and the ambiguity, fluidity and variant significations of body language and facial expressions have so far been beyond machines.
Using a unique blend of science, ethics, psychology and linguistics, scientists are attempting to overcome this obstacle too. SEMAINE (Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression) is a project undertaken by an international group of technologists led by DFKI, the German research centre for Artificial Intelligence, and including Imperial College London, the University of Paris, the University of Twente in the Netherlands, Queen’s University, Belfast and the Technical University of Munich. The team, with a European Commission grant of 2.75 million euros, aims to create a Sensitive Artificial Listener (SAL) system that will perceive a human user’s facial expression, gaze and voice while interacting with him or her. Just as humans do, the system will alter its own tone, behaviour and actions according to the non-verbal stimuli it receives (and actually perceives) from the user. For the first time, a project to create a machine-human interface is drawing on fields as diverse as psychology, linguistics and ethics at every step of its endeavour.
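To make the idea concrete, here is a crude, rule-based caricature of one slice of a SAL loop: read some nonverbal cues, pick a matching tone for the reply. The cue names and rules are entirely our own invention for illustration; SEMAINE's real models are statistical, multimodal and vastly richer.

```python
# A toy, rule-based sketch of a "sensitive listener" choosing its tone.
# Cue names and rules are invented for illustration only.
def choose_tone(cues: dict) -> str:
    """Map detected nonverbal cues to a tone for the system's reply."""
    if cues.get("gaze") == "averted" and cues.get("voice") == "flat":
        return "gentle, probing"          # user seems withdrawn
    if cues.get("expression") == "smile":
        return "upbeat, playful"          # mirror the user's good mood
    if cues.get("voice") == "raised":
        return "calm, de-escalating"      # don't match agitation with agitation
    return "neutral"                      # no strong signal detected

print(choose_tone({"expression": "smile"}))               # upbeat, playful
print(choose_tone({"gaze": "averted", "voice": "flat"}))  # gentle, probing
```

Even this caricature shows why the project needs psychologists and ethicists, not just engineers: every one of those rules embeds a judgement about how a machine *should* respond to a human state.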
And If This Comes Through...
Professor Roddy Cowie, from the School of Psychology at Queen’s University, gives a timeline of about 20 years. But given the scale and enthusiasm of the SEMAINE project, and given that several such projects are underway all over the world, Digit hazards a guess that we’ll be chatting and joking with our computers quite routinely within the next decade. And then, perhaps, they may just end up replacing dogs as man’s best friend.
7. WIRELESS ELECTRICITY
Power’s In The Air
Tired Of Being Wired
If phones, mice and keyboards could go wireless, why not everything else? In fact, about a hundred years ago, that untamed genius Nikola Tesla had already begun building a tower at Wardenclyffe, N.Y. to demonstrate the transmission of electricity without wires.
On a humbler scale, researchers at MIT are in the process of repeating the experiment with their own ideas and less ostentatious techniques.
Marin Soljacic, Assistant Professor of Physics at MIT, has spent a considerable number of years figuring out how to transmit power without cables. Radio waves lose too much energy in transit through the air, and lasers are constrained by the need for line-of-sight. Soljacic decided to use resonant coupling, in which two objects vibrating at the same frequency can exchange energy without affecting things around them. Using magnetic resonance, he and his colleagues John Joannopoulos and Peter Fisher succeeded in lighting up a 60-watt bulb two metres away. What they did was this: two resonant copper coils were hung from the ceiling, two metres apart. Both were tuned to the same frequency, and one had a light bulb attached to it. When current was passed through the first coil, it created a magnetic field; the other coil resonated, generating an electric current. And then there was light. The experiment succeeded even with a thin screen placed between the two copper coils.
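The tuning is the whole trick: energy transfers efficiently only when both coils sit at the same resonant frequency. Modelling each coil as a simple LC circuit, that frequency is f = 1/(2π√(LC)). The component values below are assumptions for illustration, not the MIT team's actual coil parameters.

```python
# Resonant frequency of a coil modelled as an LC circuit: f = 1/(2*pi*sqrt(L*C)).
# Component values are illustrative assumptions, not MIT's actual parameters.
import math

def resonant_frequency(inductance_h: float, capacitance_f: float) -> float:
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

L, C = 1e-6, 250e-12  # 1 microhenry, 250 picofarads (assumed values)
f = resonant_frequency(L, C)
print(f"{f / 1e6:.2f} MHz")  # roughly 10 MHz; both coils must be tuned here
```

Detune either coil and the resonance condition breaks, which is also why the field mostly ignores everything else in the room: nearby objects simply aren't tuned to it.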
And If This Comes Through...
One of the most obvious results is that we won’t have dozens of cables to trip over in our offices and rooms. The primary aim of the research team is a cable-free environment in which your laptops, PDAs and mobile phones could charge themselves (with all that electricity floating around) and even, maybe, shed the batteries that are so essential to our portable devices today. Magnetic fields interact very weakly with biological organisms, which makes the technique far safer for us. While the experiment happened about a year ago, the team is still hard at work trying out other materials to raise the efficiency of the power transfer from 50 per cent to 80 per cent. Once that happens, industry and individuals alike will grab hold of it and never let go.
8. SMART DESKS
Scribbling On The Desk
No More Dead Wood
There isn’t a student alive who hasn’t at some time scribbled his name (or a caricature of his prof.) on a school desk. How much more exciting would it be if your desk were actually a graphical user interface! Experts at Durham University are aiming for just that with their ‘Smart-Desk’ initiative.
The Active Learning in Computing department at Durham University, UK is designing interactive multi-touch desks at its TEL (Technology-Enhanced Learning) Research Group, hoping to replace the traditional desk with cell-phone-like touch-screens that can act as a multi-touch whiteboard, a keyboard, and a mobile screen that several students can use at the same time. Dr Liz Burd and her team have linked up with private enterprises to design software that will enable all these surfaces to be networked and connected to a main smartboard. The computer becomes part of the desk.
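Networking every desk surface to a central smartboard is, at heart, a publish/subscribe arrangement, which we can sketch in a few lines. The class names and event strings are our own invention; Durham's actual software stack hasn't been described in this kind of detail.

```python
# A toy sketch of the networking idea: each desk publishes touch events,
# and the teacher's smartboard collects them all. Names are invented here;
# this is not Durham's actual software design.
class Smartboard:
    def __init__(self):
        self.feed = []  # everything the class has done, in order

    def receive(self, desk_id: int, event: str) -> None:
        self.feed.append((desk_id, event))

class SmartDesk:
    def __init__(self, desk_id: int, board: Smartboard):
        self.desk_id = desk_id
        self.board = board  # every desk is wired to the main board

    def touch(self, event: str) -> None:
        self.board.receive(self.desk_id, event)

board = Smartboard()
desks = [SmartDesk(i, board) for i in range(3)]
desks[0].touch("solved step 1")
desks[2].touch("drew triangle")
print(board.feed)  # [(0, 'solved step 1'), (2, 'drew triangle')]
```

Because every event carries its desk ID, the teacher's board can attribute work to individual students, which is what enables the per-student tasks described below.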
And If This Comes Through...
Instant visual displays of topics being discussed, on-screen interactive mathematics, group problem-solving and more involvement of students in the task at hand — the possible advantages of the smart desk to teachers and students seem endless. Students who tend to isolate themselves or resist participation in class would be gently nudged into interacting. Teamwork would be a natural consequence of multiple users sharing single screens. Each student could be set a task or problem matched to his or her individual capacities. More active and creative tasks would replace passive listening. The team aims to fill all schools in the UK with these desks within a decade, and keeping in mind the pace of technological advance in India, we should see at least some schools in the country doing the same in the near future.
9. WRAP-AROUND COMPUTERS
Open New Folder
Why Should Screens Be Flat?
Flat screens are in. But what if we could have screens folded or curved around any surface that was convenient to reach? What if animated billboards could be folded round the pole of a street light? What if you could watch your favourite movie by stretching the screen over the back of a chair?
Bendable and flexible electronics are already all over the news. Trouble is, most of them cannot be tied up or wrapped around uneven surfaces or complicated shapes. Nanotechnology (what else?) has the answer.
Takao Someya, professor of engineering at the University of Tokyo, and his team of researchers added carbon nanotubes to a highly elastic polymer to make a conductive material, which they then used to connect organic transistors in a stretchable electronic circuit. To induce conductivity, Someya and his team combined single-walled carbon nanotubes with an ionic liquid to form a black paste. This paste was added to a liquid polymer, which was poured into a cast and dried. As a result, the nanotubes were evenly spread through the material, forming a network that allowed electrical signals to pass through in a controlled manner. To make the material more stretchable, it was perforated into a net and coated with a silicone-based material. One proposed application is an ‘electronic skin’ for robots.
In a paper published in ‘Science’ magazine, Someya reported that the material has the highest conductivity among soft materials in the world. Besides, it can stretch to about 134 per cent of its original size.
And If This Comes Through...
Mass production of nanotubes would, in turn, assist in the bulk production of these elastic conductors. The new material could be used to make displays, actuators or computers. With foldable keyboards already in the market for quite a while, a stretchable screen would make carrying around your laptop infinitely easier. It won’t be long before you’ll reach into your pocket for a handkerchief and pull out your PC. Look before you sneeze.
10. ULTRA-COMPRESSED MUSIC FILES
Just Forty Thousand Songs?
The Apple iPod (160 GB) can hold about forty thousand songs and, yes, people are buying it. As the capacity of MP3 players increases, strangely, our list of must-have favourite songs also expands exponentially. It doesn’t matter how many songs you’ve got with you; the song you want is always elsewhere. So for those of you out there who have a million favourite tunes, never fear, Rochester’s here. Again.
Zipped A Thousand Times
Researchers at the University of Rochester have succeeded in digitally reproducing a piece of music in a file almost 1,000 times smaller than a regular MP3 file. Mark Bocko, professor of electrical and computer engineering, and his team announced the achievement at the International Conference on Acoustics, Speech and Signal Processing in Las Vegas on April 1, 2008.
Even though the results are not perfect, they are nearly so. The team took as a sample a short musical piece, a 20-second clarinet solo, and compressed it to less than a kilobyte. The file was then replayed using a combination of physics and knowledge of how a clarinet works. They fed into the computer everything about clarinet playing — including the movement of the fingers, the pressure on the mouthpiece, and so on — to create a virtual clarinet based on real-world dimensions and parameters. They then made a virtual clarinet player for this virtual instrument by feeding in a model that tells the computer how a human player interacts with it — the fingerings, the force of breath, the pressure of the lips — and how each would affect the response of the virtual clarinet.
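The trick, in other words, is to store playing instructions rather than audio samples. Here is a toy version of that idea: a tiny "score" of note parameters regenerates a waveform that would cost vastly more to store as raw PCM. This is a crude sine-wave stand-in, nothing like the team's physical clarinet model, but the compression arithmetic is the same in spirit.

```python
# Toy demonstration of parameter-based compression: store a few note
# parameters, regenerate the waveform on playback. (A sine wave stands in
# for the Rochester team's far more sophisticated physical clarinet model.)
import math

def synthesise(notes, sample_rate=44100):
    """Regenerate audio samples from (frequency_hz, duration_s) pairs."""
    samples = []
    for freq, dur in notes:
        for n in range(int(sample_rate * dur)):
            samples.append(math.sin(2 * math.pi * freq * n / sample_rate))
    return samples

score = [(440.0, 0.5), (494.0, 0.5), (523.0, 1.0)]  # the tiny "file" we store
audio = synthesise(score)

score_bytes = len(score) * 16   # two 8-byte floats per note
audio_bytes = len(audio) * 2    # the same sound as raw 16-bit PCM
print(f"{audio_bytes // score_bytes}x smaller")  # thousands of times smaller
```

Two seconds of music shrinks to 48 bytes of parameters, and the ratio only improves with longer pieces, since the score grows with the number of notes rather than with playing time.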
And If This Comes Through...
Not only does this imply the possibility of ultra-compressed music files, but also the incredible prospect of recording the performer along with the performance. Once a computer figures out the typical style of a player, his every breath and movement, it could play a tune long after the player’s gone. According to Professor Bocko, improvement in quality is inevitable as the algorithms become more accurate and acoustic measurements are further refined. The day won’t be far off when your cell phone will hold all the music ever produced in the whole wide world – and the movies too.