Technology generally follows our desire for ever more accuracy. Our smartphones constantly promise more detailed images and higher-resolution video. In some contexts, however, lower resolution is better.
Dr. Chang Meng of the Laboratory of Integrated Systems (LSI) has a curious research goal: "My work is to deliberately introduce harmless errors in order to reduce power consumption.
"This can be applied to many error tolerant applications, like natural language processing, image processing, machine learning and many others. We try to use it to find benefits for powerful AI accelerators, Internet of Things devices, and embedded systems."

Chang works on ultra-efficient hardware: carefully crafted silicon for complex tasks.
In deep learning models, most of the computation is multiplication. Matrix multiplication lies at the heart of AI accelerators, but exact multipliers use a lot of power.
"I design very low‑power, approximate multipliers for neural networks," Chang explains. "They reduce energy use without losing too much accuracy."
One method removes certain logic gates from the multiplier circuit in a controlled way.
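To get a feel for what removing gates from a multiplier does, here is a toy Python sketch (an illustration only, not the LSI lab's actual circuit design): an 8-bit multiply built from partial products that simply skips the lowest-weight columns, the same bits whose gates would be deleted in hardware.

```python
# Toy model of gate removal (illustrative only, not the lab's design):
# an 8-bit unsigned multiplier that never generates the partial-product
# bits in its least-significant columns.

def exact_mult(a: int, b: int) -> int:
    """Reference 8-bit unsigned multiplication."""
    return a * b

def approx_mult(a: int, b: int, dropped_cols: int = 4) -> int:
    """Skip partial-product bits with weight below 2**dropped_cols,
    which is what removing their AND gates and adder cells would do."""
    result = 0
    for i in range(8):              # bit i of operand a
        for j in range(8):          # bit j of operand b
            if (a >> i) & 1 and (b >> j) & 1 and (i + j) >= dropped_cols:
                result += 1 << (i + j)
    return result

print(exact_mult(179, 58), approx_mult(179, 58))  # exact vs. slightly-off result
```

Because only the lowest-weight columns are dropped, the error stays bounded at a few tens of units out of a full 16-bit product, which is what makes it tolerable for error-resilient workloads.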
"We start from a pre-trained high-accuracy floating point deep neural network," Chang says. "We apply integer quantization to transform the computation into integers. Furthermore, we replace exact integer multipliers with approximate ones. That lowers accuracy a bit, but boosts speed and cuts power."
By moving from 32‑bit floats to 8‑bit or even 4‑bit integers, memory needs shrink dramatically. Approximate multipliers also have lower latency and draw less current. Altogether, this yields much greater efficiency.
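The memory saving is easy to see in a minimal quantization sketch. The snippet below uses a generic symmetric, per-tensor int8 scheme (an assumed textbook recipe, not necessarily the exact quantizer used in this work) to show the four-fold shrink from 32-bit floats to 8-bit integers and the small error it introduces.

```python
import numpy as np

# Generic symmetric per-tensor quantization (an assumed standard scheme,
# not necessarily the one used in the paper).

def quantize_int8(x: np.ndarray):
    scale = np.abs(x).max() / 127.0                      # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)         # toy weight matrix
q, scale = quantize_int8(w)

print(w.nbytes, "->", q.nbytes)                          # 262144 -> 65536 bytes: 4x smaller
print(np.abs(w - dequantize(q, scale)).max())            # worst-case error ~ scale / 2
```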
There is a limit to how much accuracy can be discarded, however.
"Because approximate multipliers introduce errors, there is a risk that the accuracy of deep neural networks will degrade. So to mitigate this we need a retraining process, to maintain overall levels of accuracy."
Designing this retraining step has been a major accomplishment, one that was presented at DATE 2025 in March.
"Our experimental results show that, compared to the state-of-the-art methods, our method raises Deep Neural Network accuracy after retraining by 4.10% and 2.93% on average," explains Chang.
"What is more, we cut power use by 51% compared to standard quantization methods with exact multipliers."
One real-world application of this kind of work would be in embedded devices. "Approximate multipliers allow us to build fast analysis systems with surprisingly low energy consumption. Because we can perform powerful image analysis at very low power, this kind of technology would benefit many autonomous, hand-held medical sensors."
Many portable devices now run deep neural networks at the edge to process visual, audio or kinetic data. Approximate multipliers can give them a particularly potent upgrade.
"This is very important research," says Giovanni De Micheli, director of EcoCloud Center and LSI lab, "to reduce the extremely high energy costs of AI applications, and make ML sustainable on a variety of platforms."
Dr. Chang Meng is joining Eindhoven University of Technology in the Netherlands as an Assistant Professor.
Reference: Chang Meng, Wayne Burleson, Weikang Qian and Giovanni De Micheli, "Gradient Approximation of Approximate Multipliers for High-Accuracy Deep Neural Network Retraining," 2025 Design, Automation & Test in Europe Conference (DATE), 2025.