A new model built by Microsoft and MIT is capable of identifying exceptions or "blind spots" in AI learning patterns.
A new model developed by researchers at Microsoft and MIT is capable of identifying instances where autonomous systems have learnt from training examples that don't match what's actually happening, reports MIT News. This model could be used in the future to improve the safety of artificial intelligence systems like driverless vehicles and autonomous robots. Researchers from the two organisations have described their findings in a couple of papers that will be presented at the upcoming Association for the Advancement of Artificial Intelligence conference.
"The model helps autonomous systems better know what they don’t know," says co-author Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory. "Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors."
The approach used by the researchers first puts an AI system through simulation training, where the system creates a "policy" that maps every situation to the best actions it can take. The system is then made to work in the real world, where humans give error signals in situations where the system's actions are unacceptable. Humans are allowed to provide signals to the system in more than one way, namely "corrections" and "demonstrations". For example, a driver behind the wheel of an autonomously driving car could take control in certain driving situations to tell the system that it was acting unacceptably in those situations. The system then records which situation-and-action pairs were acceptable and which were unacceptable.
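The feedback-collection step described above can be sketched in a few lines of Python. The state and action names, and the `FeedbackLog` structure itself, are illustrative assumptions for this article, not the researchers' actual implementation.

```python
from collections import defaultdict


class FeedbackLog:
    """Records human 'acceptable'/'unacceptable' signals per situation-action pair."""

    def __init__(self):
        # (state, action) -> list of booleans (True = acceptable)
        self.signals = defaultdict(list)

    def record(self, state, action, acceptable):
        self.signals[(state, action)].append(acceptable)


log = FeedbackLog()
# A human driver corrects the policy's action in one situation...
log.record("ambulance_behind", "keep_lane", acceptable=False)
# ...and lets the same action stand in another.
log.record("clear_road", "keep_lane", acceptable=True)

print(log.signals[("ambulance_behind", "keep_lane")])  # [False]
```

Both "corrections" and "demonstrations" would ultimately feed entries of this kind into the log; only the acceptable/unacceptable labels matter for the next step.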
According to Ramakrishnan, the next step is compiling the information to ask, "How likely am I to make a mistake in this situation where I received these mixed signals?" Suppose the system's action in a given situation, such as how it responds to an ambulance, was judged acceptable nine times out of ten. If it simply sides with the majority, it will dismiss the single unacceptable signal as noise and treat the situation as safe. But that rare exception, like blocking the way of a speeding ambulance, may be exactly the blind spot the system needs to learn; ignoring it could confuse, or worse, corrupt the system.
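A naive stand-in for this aggregation step, assuming the simple boolean signal lists above, could look like the following. The articles don't specify the actual aggregation method, so this is only a sketch of the idea that a minority of unacceptable signals should not be rounded away to zero.

```python
def blind_spot_probability(signals):
    """Fraction of human signals marking a situation-action pair unacceptable.

    Even if most signals say 'acceptable', a minority of 'unacceptable'
    signals (e.g. blocking a speeding ambulance) keeps the estimate above
    zero rather than being dismissed as noise by a majority vote.
    """
    if not signals:
        return 0.0
    return sum(1 for ok in signals if not ok) / len(signals)


# Nine acceptable signals and one unacceptable one: the situation is
# not treated as perfectly safe.
print(blind_spot_probability([True] * 9 + [False]))  # 0.1
```

The papers' real model reasons about these mixed signals more carefully than a raw frequency count, but the output plays the same role: a per-situation estimate of how likely the system is to make a mistake.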
"When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently. If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution," adds Ramakrishnan. The research behind the pair of papers headed to the upcoming Association for the Advancement of Artificial Intelligence conference was originally presented at last year's Autonomous Agents and Multiagent Systems conference.
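The deployment-time behaviour Ramakrishnan describes can be sketched as a simple guard around the policy. The threshold value, the function names, and the lambda stand-ins are all illustrative assumptions, not details from the papers.

```python
BLIND_SPOT_THRESHOLD = 0.05  # illustrative cutoff, not from the paper


def choose_action(state, policy, blind_spot_prob, ask_human):
    """Defer to a human when the learned model flags the state as a likely
    blind spot; otherwise act on the simulation-trained policy."""
    if blind_spot_prob(state) >= BLIND_SPOT_THRESHOLD:
        return ask_human(state)
    return policy(state)


# Illustrative stand-ins for the policy, the blind-spot model and the human:
action = choose_action(
    "ambulance_behind",
    policy=lambda s: "keep_lane",
    blind_spot_prob=lambda s: 0.1,
    ask_human=lambda s: "pull_over",
)
print(action)  # pull_over
```

In a state the model considers safe, the same guard would simply pass the policy's own action through untouched.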