Microsoft and MIT develop new model that identifies AI blind spots


A new model built by Microsoft and MIT is capable of identifying exceptions or "blind spots" in AI learning patterns.

Highlights:

  • Microsoft and MIT researchers have developed a new AI assessment model.
  • The model can identify instances where autonomous systems have learnt from exceptions in training examples.
  • The study has been presented in a couple of papers.

A new model developed by researchers at Microsoft and MIT can identify instances where autonomous systems have learnt from training examples that don't match what actually happens in the real world, reports MIT News. The model could eventually be used to improve the safety of artificial intelligence systems such as driverless vehicles and autonomous robots. Researchers from the two organisations have described their findings in a pair of papers that will be presented at the upcoming Association for the Advancement of Artificial Intelligence (AAAI) conference.

"The model helps autonomous systems better know what they don’t know," says co-author Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory. "Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors."

The approach used by the researchers first puts an AI system through simulation training, where the system creates a "policy" that maps every situation to the best action it can take. The system is then made to work in the real world, where humans provide error signals whenever its actions are unacceptable. Humans can give these signals in more than one way, namely "corrections" and "demonstrations". For example, a driver behind the wheel of an autonomously driving car could take control in certain driving situations to tell the system that its actions were unacceptable there. The system then records which situation-and-action pairs were acceptable and which were unacceptable.
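To make the idea concrete, here is a minimal Python sketch of how such human feedback might be logged against a simulation-trained policy. The class name, situation and action labels, and the feedback format are hypothetical illustrations, not details taken from the papers.

```python
# Illustrative sketch only: the state/action names and feedback format
# are hypothetical, not taken from the papers.
from collections import defaultdict

class FeedbackLog:
    """Records whether the policy's action in each situation was
    judged acceptable or unacceptable by a human monitor."""

    def __init__(self):
        # (situation, action) -> list of human labels ("acceptable"/"unacceptable")
        self.labels = defaultdict(list)

    def record_correction(self, situation, policy_action):
        # The human intervened (e.g. took the wheel), so the policy's
        # chosen action in this situation is labelled unacceptable.
        self.labels[(situation, policy_action)].append("unacceptable")

    def record_demonstration(self, situation, human_action, policy_action):
        # The human demonstrated an action; if it matches what the policy
        # would have done, treat the pair as acceptable, otherwise not.
        verdict = "acceptable" if human_action == policy_action else "unacceptable"
        self.labels[(situation, policy_action)].append(verdict)

# Example: a simulation-trained policy represented as a simple lookup table.
policy = {"ambulance_behind": "keep_lane", "red_light": "stop"}

log = FeedbackLog()
log.record_correction("ambulance_behind", policy["ambulance_behind"])
log.record_demonstration("red_light", human_action="stop",
                         policy_action=policy["red_light"])
print(dict(log.labels))
```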

According to Ramakrishnan, the next step is to compile this information and ask, "How likely am I to make a mistake in this situation where I received these mixed signals?" Suppose the system gave way to an ambulance nine times out of ten in a given situation: a simple tally would treat that situation as one it handles correctly, and the single human signal flagging that it blocked the ambulance would be drowned out. In short, rare exceptions like blocking the way of a speeding ambulance could confuse, or worse, corrupt the system.
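The snippet below is a simplified sketch of how a blind-spot score could be estimated from such mixed labels. It uses a naive frequency count purely for illustration and is not the aggregation method used in the papers.

```python
# Simplified sketch: estimate how likely each situation is to be a "blind spot"
# from mixed human labels. This naive frequency estimate is for illustration
# only, not the method described in the papers.

def blind_spot_probability(labels):
    """labels: list of 'acceptable'/'unacceptable' verdicts for one situation."""
    if not labels:
        return 0.0  # no feedback yet, so no evidence of a blind spot
    return labels.count("unacceptable") / len(labels)

# Nine acceptable outcomes and one correction: a simple majority vote would
# call this situation safe, but the score stays non-zero so the rare
# exception is not lost.
mixed_signals = ["acceptable"] * 9 + ["unacceptable"]
print(blind_spot_probability(mixed_signals))  # 0.1
```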

"When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution," adds Ramakrishnan. The pair of papers that will be presented at the upcoming Association for the Advancement of Artificial Intelligence conference was originally presented at last year's Autonomous Agents and Multiagent Systems conference.

 


Digit NewsDesk