According to an article in New Scientist, researchers at Carnegie Mellon University (Chris Harrison) and Microsoft Research (Dan Morris and Desney Tan) claim to be able to turn your skin into a touch-screen. Aptly named ‘Skinput’ (a blend of ‘skin’ and ‘input’), the system uses a bio-acoustic sensing array combined with a wrist-mounted pico-projector to turn your arm into a display and input device, with no implants required.
How does it work? The pico-projector (which can be mounted elsewhere than the wrist) projects images onto the skin of your arm, or onto any part of your body in the projector’s line of sight. When you press one of these image buttons, distinctive vibrations ripple through your skin, muscles and bones; the bio-acoustic sensing array picks them up, and its software interprets them as input signals. Specific locations/images/buttons can then be mapped to specific functions.
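To make the mapping step concrete, here is a minimal sketch in Python of the idea described above. Everything in it is hypothetical: the feature value, the location names, and the actions are invented for illustration, and the real Skinput software uses far richer acoustic features and machine-learning classification.

```python
# Toy model of "tap somewhere, classify where, run the mapped function".
# A single acoustic feature (a float) stands in for the real sensor data.

# Hypothetical acoustic-feature centroids for three skin locations.
CENTROIDS = {"forearm_upper": 0.2, "forearm_lower": 0.5, "palm_center": 0.8}

# Hypothetical function bindings, one per mapped location.
ACTIONS = {
    "forearm_upper": lambda: "play_song",
    "forearm_lower": lambda: "open_search",
    "palm_center":   lambda: "log_off",
}

def classify(feature: float) -> str:
    """Pick the location whose stored centroid is closest to the measured feature."""
    return min(CENTROIDS, key=lambda loc: abs(CENTROIDS[loc] - feature))

def dispatch(location: str) -> str:
    """Run the action mapped to a classified tap location."""
    action = ACTIONS.get(location)
    return action() if action else "unmapped"
```

A tap measured near 0.2, for example, classifies as the upper forearm and triggers whatever function is bound there; the projector only supplies the visual labels, which is why the system still works without it.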
What is interesting is that the projector itself is not critical to the process; only the sensing array and software are. This lets you ‘tap’ a chosen area with no image on it and still trigger an effect. From an observer’s perspective, though, this might look quite ridiculous: a person tapping themselves for no apparent reason. It also gives new meaning to the term ‘logging off’, when a touch could trigger anything from a song to a search.
This application joins the ranks of real-world digital interfaces such as Microsoft Surface, Project Natal, and Pranav Mistry’s SixthSense. Recent work on Microsoft’s Surface will also soon make that device portable, similar to SixthSense. Pranav Mistry, whose device arguably offers the widest range of applications as well as portability, warns that the Skinput sensor will have to be placed very precisely each time, so that the projected images and sensed sounds stay nearly identical between sessions, a constraint that limits the device’s practical functionality.