In a world that's progressively polarising into pro-AI and anti-AI netizens, a video dropped on YouTube by Alexander Reben last week has managed to send reactions of different kinds rippling through the vast internet. The thirty-second-long video shows an air gun mounted on a stand with a string attached to its trigger. The ends of the string are attached to a solenoid, which, in turn, is connected to a switch that can be controlled by Google Assistant. Within the gun's line of sight is an apple (read: a few centimetres away from the muzzle), also placed on a stand of equal height. So, when a voice commands, 'Activate Gun', the gun fires a pellet at the apple, knocking it down to the floor.
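To see why many viewers called the setup a trick, it helps to sketch the chain of events in code. The following is a minimal simulation, not Google's actual smart-home API: all the class and function names here (AirGun, Solenoid, SmartSwitch, handle_voice_command) are hypothetical, invented purely to illustrate that the assistant only ever flips a switch that happens to be named 'Gun'.

```python
class AirGun:
    """The air gun from the video; it only knows how to fire."""
    def __init__(self):
        self.fired = False

    def pull_trigger(self):
        self.fired = True


class Solenoid:
    """When energised, pulls the string tied to the gun's trigger."""
    def __init__(self, gun):
        self.gun = gun

    def energise(self):
        self.gun.pull_trigger()


class SmartSwitch:
    """A named switch the assistant can flip; here it drives the solenoid."""
    def __init__(self, name, solenoid):
        self.name = name
        self.solenoid = solenoid
        self.on = False

    def activate(self):
        self.on = True
        self.solenoid.energise()


def handle_voice_command(command, switches):
    """Crude parser: 'activate <switch name>' flips the matching switch.

    The assistant never knows it is 'firing a gun' -- it just matches
    a switch by its user-assigned name.
    """
    words = command.lower().split()
    if words and words[0] == "activate":
        name = " ".join(words[1:])
        if name in switches:
            switches[name].activate()
            return True
    return False


gun = AirGun()
switch = SmartSwitch("gun", Solenoid(gun))
handle_voice_command("Activate Gun", {"gun": switch})
print(gun.fired)  # True
```

The point the sketch makes is the commenters' point: the 'intelligence' involved is string matching on a switch name, and the same command could as easily have been wired to a lamp.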
Viewers of the video reacted in one of two ways: dismissing the stunt or spinning a fanciful story out of it. Some YouTube and Reddit users commented that it was a cheap trick to name a switch 'Gun' and have Google Assistant activate it, while online media outlets like Engadget and Shacknews ran with the story under anxious headlines like 'We're A Little Concerned'.
Where do we, at Digit, stand on the matter? Well, we believe that while the video itself doesn't convey anything about the malevolence of Google Assistant or artificial intelligence-enabled systems in general, it does let our imaginations drift towards the sombre image of a more sophisticated weapon setup, where a virtual assistant like Google Assistant or Amazon Alexa is instructed to pull the trigger of a gun and shoot a real bullet at something more substantial and more significant than a little red apple. When we covered Google's I/O 2018 earlier this month, Sundar Pichai was heard saying, after the demonstration of AI-enabled tools for health care, "It’s clear technology can be a positive force, but it’s equally clear we just can’t be wide-eyed about the innovation tech creates. We feel a deep sense of responsibility about how to get this right."
Given how unnervingly accurate Google Duplex was during its demonstration at I/O 2018 earlier this month, it makes us think harder and deeper about the direction in which artificial intelligence is headed in general. Could there be a day when an instruction to harm somebody is as easy as an instruction to turn on the kitchen lights? Are we on the path to a world of easy creations peppered equally with easy destructions? Well, SpaceX and Tesla CEO Elon Musk thinks we can relax, at least for the moment: "Absent a major paradigm shift - something unforeseeable at present - I would not say we are at the point where we should truly be worried about AI going out of control." In any case, it behoves us to be more cognizant of the machines we build.