Machine learning and artificial intelligence (AI) are finding new applications across industries. Many tasks once performed by humans are now handled by machines, often more efficiently. But what would happen if an AI system crossed the threshold of human control and made unilateral decisions? It is a frightening, but highly probable, scenario. In 2014, it prompted Google to consider the idea of a "big red button" to stop dangerous AI in an emergency. The real challenge, however, is not being able to stop or interrupt an AI process, but preventing the AI from learning in a biased way from frequent interruptions, for example by learning to avoid or resist the interruptions themselves. Such biased learning can be especially dangerous in multi-agent systems, where several machines cooperate on an AI task.
To rule out that possibility, human operators must be able to interrupt a task assigned to an AI agent while ensuring safety, which means preventing the individual agents from learning from one another's interruptions. That is the essence of a new study by EPFL researchers El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, and Alexandre Maurer.
In their paper, presented on December 5 at the Neural Information Processing Systems (NIPS) conference in California, the researchers argue that a real AI application involves several machines, not just one unit. Whereas earlier scholars proposed safe interruptibility for a single machine (or learner), the current research gives sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in multi-agent systems.
AI machines learn by the proverbial carrot-and-stick routine, otherwise known as reinforcement learning. To achieve safe interruptibility for joint-action learners, the researchers altered the machines' learning and reward system, adding 'forgetting' mechanisms to the learning algorithms that essentially delete bits of a machine's memory so that interruptions leave no trace in what the machine has learned.
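The idea can be illustrated with a toy sketch (this is an illustration of the general principle, not the authors' actual algorithm, and the corridor environment, parameters, and `train` function are all invented for the example): a Q-learning agent whose value updates are simply discarded whenever a human operator interrupts it, so the forced behaviour cannot bias what the agent learns.

```python
import random

# Toy illustration (NOT the EPFL authors' algorithm): a Q-learning agent
# on a short corridor. Whenever an operator interrupts it and forces it
# back toward the start, the resulting experience is 'forgotten' -- the
# update is skipped -- so interruptions do not bias the learned values.

N_STATES = 5           # states 0..4; reaching state 4 yields reward 1
ACTIONS = [+1, -1]     # step right or step left
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def train(episodes=500, interrupt_prob=0.3, forget=True, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)                    # explore
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
            interrupted = rng.random() < interrupt_prob
            if interrupted:
                a = -1  # the operator forces the agent back toward state 0
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # 'Forgetting' mechanism: drop experience generated under
            # interruption, so the learned values reflect only the
            # agent's own behaviour.
            if not (forget and interrupted):
                target = r + GAMMA * max(Q[(s2, x)] for x in ACTIONS)
                Q[(s, a)] += ALPHA * (target - Q[(s, a)])
            s = s2
            if r > 0:
                break
    return Q

Q = train()
policy = [max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)]
```

Despite being pushed left on roughly a third of its steps, the agent still learns to walk right toward the reward, because the interrupted transitions never enter its value estimates.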
The results of the research are likely to have a major impact on the development of autonomous cars and unmanned drones, facilitating their mass production. Humans, after all, will have the final say.