A novel model for neural networks based on robustness techniques drawn from nature


Team

  Rachid Guerraoui


Natural computing refers to problem-solving techniques inspired by nature. One of its subsets is bio-inspired computing, which draws heavily on three disciplines: biology, computer science, and mathematics. Among the many research areas within bio-inspired computing, biological (or natural) algorithms have seen considerable research activity. However, ongoing research at the Distributed Programming Laboratory (LPD) of EPFL's School of Computer and Communication Sciences suggests that existing natural algorithms have not yet been tested theoretically, which raises doubts about their robustness. The main objective of the research is to develop a novel model for neural networks based on robustness techniques drawn from nature.

The efficacy of neural networks has been amply demonstrated in advanced applications of artificial intelligence, such as diagnosing skin cancer and recognising images with a precision unmatched by humans. This is neural networks at work, much in line with the way the natural nervous system functions. However, the main problem with implementing a neural network is that it is treated as a mathematical abstraction simulated on top of Turing machines. This results in what researchers call a “computational bottleneck”, because each query directed at the hardware involves a changeover between digital and analogue computation. With neural networks being deployed in sensitive applications, it is crucial that systems be developed to guard against failures.
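To make the abstraction concrete, here is a minimal sketch (illustrative only, not taken from the project) of a single artificial neuron as it is typically simulated: real-valued inputs and weights, a weighted sum, and an activation function, all approximated in floating-point arithmetic on digital hardware.

    import math

    def neuron(inputs, weights, bias):
        # A single artificial neuron: a weighted sum of its inputs
        # followed by a sigmoid activation. The "analogue" quantities
        # (real-valued signals and weights) are simulated here with
        # floating-point arithmetic on a digital machine.
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    # Example: one neuron with two inputs.
    print(neuron([0.5, -1.2], [0.8, 0.3], bias=0.1))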

After evaluating various options for achieving robustness, lead researcher Rachid Guerraoui, Professor of Computer and Communication Sciences at EPFL, is working on new paradigms to enforce robustness in biological algorithms. Unlike past research that has relied on simulations or unrealistic testing, the current project advocates theoretically grounded robustness guarantees. In another departure from earlier work, the researchers adopt a model in which the unit of failure is a single neuron rather than the entire machine.
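As a toy illustration of this failure model (a sketch under assumed parameters, not the project's actual construction), the following Python snippet simulates a layer in which each neuron independently crashes with some probability and then contributes nothing. Because several redundant neurons compute the same feature, the averaged output degrades gracefully instead of failing outright.

    import random

    def layer_output(inputs, weights, crash_prob, rng):
        # Forward pass through one layer in which each neuron may
        # independently crash and stay silent (output 0.0). The unit
        # of failure is the neuron, not the whole machine.
        outputs = []
        for neuron_weights in weights:
            if rng.random() < crash_prob:
                outputs.append(0.0)  # crashed neuron contributes nothing
            else:
                z = sum(x * w for x, w in zip(inputs, neuron_weights))
                outputs.append(max(0.0, z))  # ReLU activation
        return outputs

    rng = random.Random(0)
    inputs = [1.0, -0.5, 0.25]
    # Redundant layer: eight neurons compute the same feature, so the
    # mean output degrades in proportion to the number of crashes.
    weights = [[0.4, -0.2, 0.1]] * 8
    for p in (0.0, 0.25, 0.5):
        out = layer_output(inputs, weights, crash_prob=p, rng=rng)
        print(f"crash_prob={p}: mean output = {sum(out) / len(out):.3f}")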

By enforcing robustness in biological algorithms, the findings of the study could also lead to vastly improved fault-tolerance techniques. The project is supported by the Swiss National Science Foundation.

Suggested readings:

https://infoscience.epfl.ch/record/229311
https://arxiv.org/pdf/1707.08167.pdf
http://www.snl.salk.edu/~navlakha/BDA2016/