Google working on an AI kill switch to prevent “harmful actions”

HIGHLIGHTS

A new study by researchers at Google and the Machine Intelligence Research Institute is trying to develop a “red button”, or kill switch of sorts, to prevent harmful actions by Artificial Intelligence powered robots.

For a long time now, the tech community has feared the repercussions of artificial intelligence-powered robots going rogue. As machines learn faster, several theories of a possible doomsday scenario, in which robots take over, have surfaced over the past few years. This has prompted a new study by researchers at Google and the Machine Intelligence Research Institute, wherein they are trying to develop a “red button”, or kill switch of sorts, to prevent harmful actions by AI-powered robots.

However, as per the research paper, “If the learning agent (the AI) expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions,” which means that the AI could also learn to disable the kill switch by overriding human commands. So, researchers at Google’s DeepMind AI research lab are also working towards removing the possibility of such a scenario. The paper goes on to say, “This paper explores a way to make sure a learning agent will not learn to prevent being interrupted by the environment or a human operator.”
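To give a rough sense of the idea, here is a minimal, hypothetical sketch (not code from the paper) of a toy reinforcement learning agent that a human operator occasionally interrupts. Because Q-learning updates its value estimates from the best next action rather than from the interrupted trajectory, the forced behaviour does not teach the agent to avoid the “red button”. All states, rewards and probabilities below are illustrative assumptions.

```python
# Toy corridor world: cells 0..4, reward at the right end.
# An operator sometimes interrupts the agent and forces it back a step;
# the off-policy Q-update means this does not distort what the agent learns.
import random

N_STATES = 5
ACTIONS = [-1, +1]                 # step left or right
GOAL, GOAL_REWARD = N_STATES - 1, 1.0
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = GOAL_REWARD if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(2000):
    state, done = 0, False
    while not done:
        if random.random() < 0.2:            # operator interruption
            action = -1                      # forced "safe" action
        elif random.random() < EPSILON:
            action = random.choice(ACTIONS)  # exploration
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        nxt, reward, done = step(state, action)
        # Off-policy update: bootstraps from the best next action, not from
        # whatever the interruption forced, so interruptions carry no penalty.
        target = reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

# Greedy policy still heads for the goal despite the interruptions.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```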

The research goes on to add, “the agent may find unpredictable and undesirable shortcuts to receive rewards, and the reward function needs to be adjusted in accordance.” As an example, the paper cites an AI developed by computer scientist Tom Murphy back in 2013, which solved a sequence of mathematical problems to beat NES games like Super Mario Bros. and Tetris. The AI essentially found a way to rack up a high score in Tetris by laying random bricks on top of each other and then pausing the game to avoid losing.
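The Tetris trick is a textbook case of such a reward shortcut. The toy snippet below (purely illustrative, not Murphy’s actual system) shows how an agent that only maximises its raw score will rationally choose to pause forever rather than risk a game over.

```python
# Hypothetical sketch: if the reward is just "current score", pausing the game
# (which freezes the score) beats any move that could end the game.
def best_action(score, about_to_lose):
    expected_reward = {
        "place_piece": 0 if about_to_lose else score + 1,  # game over wipes the reward
        "pause": score,                                    # pausing preserves the score
    }
    return max(expected_reward, key=expected_reward.get)

print(best_action(score=40, about_to_lose=True))  # -> 'pause'
```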

Teaching an AI to obey human commands, without considering them as part of a task, is the real challenge here. “Such situations are certainly undesirable; they arise because the human interventions are seen from the agent’s perspective as being part of the task whereas they should be considered external to the task,” say researchers.

Microsoft’s Twitter chat bot ‘Tay’ could be viewed as a recent example of an AI going rogue. The bot was designed to interact with Twitter users through tweets, and was built with deep neural networking and machine learning capabilities. But what happened next was something Microsoft did not foresee. Twitter users taught Tay to be racist, and the bot tweeted out all sorts of offensive comments. The episode resulted in a spate of apologies from the Redmond-based software giant, and ultimately, Tay was taken offline before it could add further insult to injury.

Chat bots are soon going to become mainstream, with multiple messengers sporting these AI agents. Google is also implementing chat bots in its upcoming messaging service, Allo. Hence, a kill switch for AI is definitely the need of the hour. But let me leave you with a thought: what if, in the quest to make AI more human-like, AI robots one day start feeling human emotions? Would it be okay to kill them off? Think about it!

Adamya Sharma

Managing editor, Digit.in - News Junkie, Movie Buff, Tech Whizz!
