Media coverage of the distant future of AI and machine learning has painted a scary picture of machines going berserk, rampaging killer robots, and rogue self-driving cars. Those ugly manifestations of machine learning are unlikely to move beyond fiction. But the dangers of machine learning can take different routes, and already have. A couple of podcasts featuring El Mahdi El Mhamdi, a PhD scholar at EPFL, shed important light on the dark side of AI: poisoned data sets, bad actors, AI-generated fake news, and the Byzantine problem. They also cover his work on technical AI safety and on robustness in biological systems.
Both podcasts were recorded in January this year. The Practical AI podcast, hosted by Chris Benson (Chief AI Strategist at Lockheed Martin RMS APA Innovations), was recorded during the Applied Machine Learning Days conference in Lausanne, Switzerland. The AI Alignment Podcast of the Future of Life Institute was recorded during the Beneficial AGI conference in Puerto Rico.
El Mahdi El Mhamdi discusses fault tolerance, or rather the lack of it. Referring to the allegory of the three Byzantine generals, he explains the notion of a ‘Byzantine fault’: in a distributed computing system, some components fail or turn malicious, information sharing is imperfect, and poisoning attacks become possible.
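To see why even a single Byzantine participant is so disruptive, consider a toy simulation. This is an illustration of the classic generals allegory, not El Mhamdi’s protocol: a traitor sends conflicting votes to each loyal general, so naive majority voting leaves the loyal generals in disagreement.

```python
# Toy illustration of a Byzantine fault (illustrative only):
# a traitor tells each loyal general what it wants to hear, so
# honest nodes tallying the votes they received reach opposite
# conclusions, and agreement silently breaks.

from collections import Counter

def decide(votes):
    """Each general decides by simple majority over the votes it saw."""
    return Counter(votes).most_common(1)[0][0]

# Loyal generals A and B start with different preferences; the
# traitor C echoes each general's own preference back to it.
a_sees = ["attack", "retreat", "attack"]   # A's vote, B's vote, C's lie to A
b_sees = ["retreat", "attack", "retreat"]  # B's vote, A's vote, C's lie to B

print("A decides:", decide(a_sees))  # -> attack
print("B decides:", decide(b_sees))  # -> retreat: the loyal generals disagree
```

With three generals and one traitor, no single round of voting can guarantee agreement, which is exactly the kind of failure a Byzantine-fault-tolerant protocol is designed to rule out.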
He calls some recommender systems the “killer robots” of today. For instance, AI-backed search engines spread misconceptions about the “dangers” of vaccinations. As a result, vaccine-preventable diseases are reemerging and causing thousands of deaths, prompting the WHO to list “vaccine hesitancy” as one of the ten threats to global health.
El Mahdi El Mhamdi also points to the weakness of aggregating gradients by averaging in machine learning, because a poisoned input can completely skew a recommender system built on averages. To address the problem, he is working with fellow researchers on systems that offer poisoning resilience and safe interruptibility. They have developed a Byzantine-tolerant variant of gradient descent and “derived a series of algorithms that behave like a median, and that provides guarantees that it is bounded in between a majority of points.” They have also developed a version of TensorFlow (Google’s machine learning framework) that is Byzantine resilient. The AI Alignment Podcast explores El Mahdi El Mhamdi’s work on Byzantine-resilient distributed machine learning, the difficulties along the way, and the importance of this line of research for long-term AI alignment.
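The intuition behind a median-style aggregator can be sketched in a few lines. The following is an assumed toy comparison using a coordinate-wise median, not the exact algorithm from El Mhamdi’s papers: one Byzantine worker can drag the average gradient arbitrarily far, while the median stays bounded between the values reported by the honest majority.

```python
# Rough sketch (toy example, not the published algorithm): compare
# averaging against a coordinate-wise median when one worker out of
# four submits an arbitrarily large poisoned gradient.

import numpy as np

honest = np.array([[0.9, 1.1],
                   [1.0, 1.0],
                   [1.1, 0.9]])           # gradients from honest workers
poisoned = np.array([[1e6, -1e6]])        # one attacker's arbitrary gradient
gradients = np.vstack([honest, poisoned])

mean_update = gradients.mean(axis=0)          # dragged away by the attacker
median_update = np.median(gradients, axis=0)  # stays among honest values

print("mean:  ", mean_update)    # ~[ 250000.75, -249999.25]
print("median:", median_update)  # ~[ 1.05, 0.95]
```

The attacker shifts the mean by orders of magnitude, while the median remains within the range of the honest gradients, which is the kind of boundedness guarantee the quote above describes.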
Computer science inevitably has its limits in tackling the vulnerabilities of machine learning. After all, “computationally, it’s way easier to be the poisoner.” Even so, El Mahdi El Mhamdi and his colleagues have successfully developed systems that improve security in AI and machine learning, and they continue working toward a future of technical AI safety.