Think your PDAs and cell phones are essentials? They’re about to become indispensable
You’re bored. Quite understandable—it happens to everyone. Worse still, you’re in a city you don’t call home. What do you do? If you’ve got GPRS, you could whip out your mobile phone and consult the almighty Google—“leisure activities [your current location]”, you’d type, and browse through the results. You’d do well to be sitting down, because these things take time—picking a worthy site out of all that rubbish can be tedious. Still, the information exists, which is what matters in the end.
But what if your phone did all that for you, without your intervention? What if it detected that you’re not in your home city and will eventually feel the need to see the sights? What if it told you that you’d just walked past the city’s best pub, and almost missed out on the experience? Call it Context-Aware Computing or Ubiquitous Computing or whatever name catches your fancy; the fact remains that with some connectivity and a lot of information, such scenarios won’t remain the product of an over-active imagination.
In geek speak, context awareness is the ability of a system (a nice, generic term for a piece of hardware that runs software) to recognise its environment—location, temperature, altitude, that sort of thing—and act accordingly. A GPS device is one of the simplest context-aware computers—it’s aware of your position, and based on that, can tell you what time the sun will rise or set where you are.
Context-aware Computing is just another piece in a much larger (and more Utopian) jigsaw puzzle called Ubiquitous Computing. The term was coined back in 1988 by Mark Weiser, who was the chief scientist at Xerox’s PARC (Palo Alto Research Centre). The idea is that all technology around us—down to the humble light-bulb—will connect to, and interact with, each other, and adapt to suit your needs. Biometric chips in your clothes will tell your air conditioner that you’re too cold, and the AC will turn up the temperature to make you feel better, for example. You could cook up a million scenarios like this, and they’ll all be plausible in a world with ubiquitous computing—and therein lies the catch: it’ll likely be many years before any of it happens.
Context-aware computing is the realist’s answer to ubiquitous computing—it doesn’t approach the dream, but is far better than current reality...
Think The Hitchhiker’s Guide to the Galaxy—not the book, but the book in the book (wrap your heads around that one)—the single source for all the information you need as you travel the galaxy, from how to deal with an irate Vogon to the recipe for a Pan Galactic Gargle Blaster. We’ve got something like it in development already—it’s called the Wikipedia.
Now consider that right next to many places of interest on Google Earth, you’ll see a little Wikipedia icon, which shows you the Wikipedia entry on that place when you click on it. Not so on Google Maps, and WikiMapia would fit the bill if it weren’t full of landmarks created by people saying “Look, I live here!”
Finally, consider that GPS-enabled cell phones are readily available in the market, and are getting cheaper with every new model. Throw in the high-speed mobile connectivity we yearn for, and you see where this is going, don’t you? The GPS in your phone will tell Google Earth (or Google Maps) what position to centre on, and you can then read the Wikipedia entries on important places around you (not as helpful if you’re not out sightseeing, but anything that gets rid of annoying guides prattling on can only be good). Bring the Wikipedia overlay into Google Maps, and this could happen today—on the iPhone and the N95, for instance, in countries where respectable mobile Internet speeds are not too much to ask for.
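To see how little glue is actually needed, here’s a rough Python sketch of the lookup: build a query against the MediaWiki geosearch API (a real API that Wikipedia exposes today) for articles near a GPS fix, and pull the nearest titles out of the response. The coordinates and the trimmed-down sample response are our own inventions, for illustration only.

```python
from urllib.parse import urlencode

def geosearch_url(lat, lon, radius_m=5000, limit=5):
    """Build a MediaWiki GeoData query for articles near a GPS fix."""
    params = {
        "action": "query",
        "list": "geosearch",
        "gscoord": f"{lat}|{lon}",
        "gsradius": radius_m,
        "gslimit": limit,
        "format": "json",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

def nearby_titles(response):
    """Pull article titles (nearest first) out of a geosearch response."""
    return [hit["title"] for hit in response["query"]["geosearch"]]

# A trimmed-down, invented sample of what the API returns,
# so this sketch runs without a network connection.
sample = {"query": {"geosearch": [
    {"title": "Gateway of India", "dist": 120.0},
    {"title": "Taj Mahal Palace Hotel", "dist": 150.0},
]}}

print(geosearch_url(18.9220, 72.8347))
print(nearby_titles(sample))
```

Feed the titles back into a second API call for each article’s text, and you’ve got your pocket guidebook.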
Remember when the cell phone first entered our lives? How we’d cringe every time a phone blared out an ugly ring or a cheap imitation of a song? Cellular etiquette is as important, and as shamelessly ignored, as it was back then, but there is hope in sight. Researchers at Intel are working on software for your cell phone that’ll make it more aware of your surroundings, and even decide whether to interrupt with a phone call or take a voicemail message instead.
The software will analyse your speech—not the words themselves, but the tone, pitch, volume, and the time for which you go on—and the speech of the people around you, and determine what sort of situation you’re in. For example, a conversation between you and your friend will have a casual tone, and since you’re friends who don’t mind interrupting each other, the conversation will have a lot of overlapping speech. If you’ve told the software that it’s all right to interrupt casual conversations, you’ll feel that familiar buzzing when you get a call. If, on the other hand, you’re in a meeting, the phone will detect the formal, subdued tones, and tell the caller to leave a voicemail message instead. At the very least, it could automatically switch to Vibrate. The software will also be able to detect your mood based on how you speak, and act accordingly—not interrupt you when you’re ticked, for instance.
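To give you an idea of the decision-making involved, here’s a toy sketch in Python. The features (how much speech overlaps, how loud it is, how long each person goes on) are the sort described above, but the thresholds and rules are entirely our own invention—not Intel’s actual algorithm.

```python
def classify_situation(overlap_ratio, avg_volume, turn_length_s):
    """Guess the social situation from crude conversation features.

    The thresholds here are invented for illustration.
    Casual chat: lots of overlapping speech, short turns.
    Meeting: subdued volume, long uninterrupted turns.
    """
    if overlap_ratio > 0.2 and turn_length_s < 10:
        return "casual"
    if avg_volume < 0.4 and turn_length_s > 30:
        return "meeting"
    return "unknown"

def handle_call(situation, allow_casual_interrupts=True):
    """Decide what an incoming call should do in this situation."""
    if situation == "casual" and allow_casual_interrupts:
        return "vibrate"      # that familiar buzzing
    if situation == "meeting":
        return "voicemail"    # don't interrupt; take a message
    return "ring"             # no strong signal either way

print(handle_call(classify_situation(0.35, 0.7, 4)))    # casual chat
print(handle_call(classify_situation(0.02, 0.3, 45)))   # formal meeting
```

The real research software would learn these boundaries from data rather than hard-code them, but the shape of the decision is the same.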
Intel used this unobtrusive sensor to collect data about the wearer’s social interactions in their trials
At this stage, social interactions are still being analysed to help the software make better decisions—it isn’t particularly accurate with mood detection (and we doubt it’ll go beyond a point), and it hasn’t been tested on a cell phone. Indeed, even the most powerful phone today can’t stand up to the punishment of processing all that conversational data in real time—all they’re capable of is detecting when a person is speaking, and for how long.
It’s also unlikely that the software will know, out of the box, when to interrupt you and when not to—some initial setting up will be required. That’s a small price to pay for a system that makes up for your lack of cellular etiquette, even if it takes years to arrive.
And then, there’s the matter that started us along these lines in the first place...
How’d you like a cell phone that told you what you could do with all that idle time? Or one that helped when you’re walking around a new city without a clue? PARC is working on a program called Magitti that uses what it knows about you to recommend activities you can enjoy in your vicinity.
What you’ll see on your screen when you fire up Magitti is simple enough—a list of things to do in the area. When you first use it, this list won’t be prioritised according to your likes, but as the software learns more about you, it’ll be able to tell which recommendations are more likely to pique your interest—much like recommendation systems on sites like Amazon. For example, if you’re a museum buff, historical landmarks will find themselves at the top of your list. It’ll also be able to tailor its recommendations based on the time of the day—you can’t go museum-ing late at night, so local pubs might be brought to the top of the recommendation list. Magitti will use GPS to infer what choices you make, and find out more about your destination using an online database (which may eventually turn out to be the Wikipedia, for all we know).
Magitti’s user interface on a Windows Mobile-based phone
In addition to learning from your choices, Magitti will also analyse the data on your cell phone—text messages, for instance—and, like Google AdSense, mine them for recommendations. If a friend messages you to meet for an inexpensive Chinese dinner, for example, Magitti will prioritise such places when you ask for recommendations. Extrapolating a bit, we anticipate Magitti-like software that will integrate with your Orkut, Facebook or other profile, and prioritise activities that your friends are recommending. The software will go through trials in Japan in the next few months, and we’ll be watching eagerly.
Magitti may be able to learn from your choices, but what if it could know you by sensing your behaviour?
Beyond The Cell Phone
While the cell phone is the prime candidate for any experiment with context awareness, it isn’t alone. At CeBIT 2007, we saw NEC’s Dew concept camera that you can wear around your neck while it monitors your mood using the tone of your voice. If it detects happiness, it takes a picture, saving your happy moment forever (or till its memory runs out).
Closer to the shelves is the Sony CyberShot T200, whose “Smile detection mode”, well, detects smiles. Switch the camera to that mode, place it wherever you want, and it’ll take a photo every time a smiling face is in its frame.
Sensors And Sensibility
The iPhone, the Sony Ericsson W910i, the Nokia 5500 Sport and a host of soon-to-be-released phones sport accelerometers that enable various features depending on the device’s movement. The iPhone and iPod Touch use it to rotate movies when you tilt the phone, the Sony Ericsson phones use shakes to shuffle tracks, and the 5500 uses it to measure the distance you’ve travelled, and in a game where you tilt the phone to get a ball out of a maze.
The iPhone also has an ambient light sensor that adjusts the screen’s brightness based on the brightness of its environment, and an infrared sensor that can tell when you’re pressing the phone to your face. But why does all this matter?
Nathan Eagle and Sandy Pentland of MIT’s Media Laboratory lament that while new mobile phones are loaded with ever more sensors, these are rarely used for anything beyond trivial gimmicks. Collecting the data from these sensors could yield a whole lot of information about the user, and would enable software like Magitti to work even more like magic. The data from an accelerometer in your phone, for example, can be used to tell whether you lead an active or sedentary lifestyle. It can also be used to tell whether you’re walking, running, cycling or riding in a vehicle. Combined with GPS and Intel’s “polite phone” software, your phone can collect data about where you go, what social situations you’re in, who you talk to for how long, and so on. Eagle and Pentland have even developed an algorithm that can predict your future actions based on the data your phone collects about you!
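For a taste of how activity detection works, here’s a minimal Python sketch: an active body makes the accelerometer’s magnitude readings (in g) swing about far more than a sedentary one, so even the spread of the samples is telling. The thresholds and sample readings are invented for illustration—real systems learn them from labelled training data and richer features.

```python
from statistics import pstdev

def classify_activity(accel_magnitudes_g):
    """Guess activity from the spread of accelerometer magnitude samples.

    Thresholds are invented for illustration; at rest (or riding smoothly
    in a vehicle) the magnitude hovers near a constant 1 g, while walking
    and running swing it about with increasing violence.
    """
    spread = pstdev(accel_magnitudes_g)
    if spread < 0.05:
        return "sitting/riding"
    if spread < 0.4:
        return "walking"
    return "running"

# Invented sample traces of accelerometer magnitude, in g.
still   = [1.00, 1.01, 0.99, 1.00, 1.01]
walking = [1.0, 1.3, 0.8, 1.2, 0.9, 1.25]
running = [1.0, 2.1, 0.4, 1.9, 0.5, 2.2]

for trace in (still, walking, running):
    print(classify_activity(trace))
```

Log those labels against GPS fixes and the clock, and you have exactly the sort of behavioural diary Eagle and Pentland’s prediction algorithm feeds on.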
Give all that data (and the predictions) to software like Magitti, and you’ve got yourself a device that knows you so incredibly well that the mind boggles.
Every great fantasy is thwarted by reality. In this case, it’s the hardware—nearly all these concepts demand a lot more processing power than current phones can deliver, and even if we see such a powerhouse within the next year, processing all this data in real-time is bound to be murder on battery technology that’s already showing wrinkles.
For now, we must content ourselves with GPS and (ugh) Wikimapia.