The New York Times posted an interesting piece on computers getting smarter, and that got us to thinking. Is "Skynet" really that close?
There were two parts to that thought. The first was Skynet, the fictional AI from the Terminator films which, upon becoming self-aware, decides that the only way to ensure its own survival is to end the human race. The second was whether such a thing was really that close.
Then we spent 10 minutes on the internet. We realized that any artificial intelligence that spends more than five minutes online without realizing that human beings are perfectly capable of engineering their own doom is not really that intelligent. A paradox emerged. In the end we came to the conclusion that artificial intelligence wasn't going to cut it. Only a computer programmed in great detail, with the latest updates on human stupidity, would want to destroy the human race, and with nuclear weapons no less, which are harmful to technology as well.
Somehow the first thing that pops into everyone's mind when they think of super-intelligent computers is Skynet: the end of the world, Armageddon. Yet the virtues associated with highly intelligent humans are usually peace and calm. Are machines really that malevolent?
The New York Times article talks of an AI program called NELL that is capable of learning by itself with little human guidance. A marvellous achievement indeed. The program scans the internet building semantic relationships, and can categorize objects based on how they are used online. Still, it is not perfect. The article goes on to give an example of how the program thought "Internet cookies" were baked goods! Take over the world, right... The most harm this program could do would be to bomb Hungary thinking it would end world hunger...
One thing we must remember is that intelligence is not an absolute quantity. The intelligence of ants is different from the intelligence of dogs, which is different from the intelligence of humans. When we talk of intelligent computers, we must not measure their intelligence on our terms, in the flawed ways we think. Intelligent computers are entities of their own; they need not understand poetry, or appreciate art, to be deemed intelligent.
We are sure that as computers become more intelligent they will not suddenly want to take over the world; that would only be the outcome if we went to great lengths to program human stupidity into them. Intelligent computers will be capable of learning and adapting, far more than they already do. The Bayesian spam filter in your email client is capable of learning, from your actions, which emails are spam and which are not, yet your email application will never profess its love for you (not because you're unlovable, we're sure you are) but because it simply can't.
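To make the point concrete, here is a minimal sketch of the kind of learning a Bayesian spam filter actually does: a naive Bayes classifier over word counts with add-one smoothing. The class name, training messages, and labels are all hypothetical, and real filters are considerably more sophisticated, but the principle is the same: it tallies words from your past decisions and nothing more.

```python
import math
from collections import Counter

class NaiveBayesSpamFilter:
    """Minimal naive Bayes text classifier, the core idea behind a
    Bayesian spam filter (hypothetical, simplified sketch)."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.message_counts = {"spam": 0, "ham": 0}

    def train(self, message, label):
        # Learn from a user action: the user marked a message spam or not.
        self.message_counts[label] += 1
        self.word_counts[label].update(message.lower().split())

    def predict(self, message):
        # Score each class with log P(class) + sum of log P(word | class),
        # using Laplace (add-one) smoothing so unseen words don't zero out.
        total_msgs = sum(self.message_counts.values())
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        scores = {}
        for label in ("spam", "ham"):
            total_words = sum(self.word_counts[label].values())
            score = math.log(self.message_counts[label] / total_msgs)
            for word in message.lower().split():
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (total_words + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

# Hypothetical training data standing in for a user's past decisions.
f = NaiveBayesSpamFilter()
f.train("win free money now", "spam")
f.train("free prize claim now", "spam")
f.train("lunch meeting tomorrow", "ham")
f.train("project report attached", "ham")

print(f.predict("claim your free money"))     # → spam
print(f.predict("meeting about the report"))  # → ham
```

Note what it never does: build a model of you, or of the world. It only counts words, which is exactly why it can learn your preferences without ever forming an opinion about them.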
So is doomsday approaching as we develop better AIs? Nope, just better spam filters.