Since the turn of the century, the word "analog" has most often referred to outdated technology, like radios from the pre-digital era, or dashboards with dials and needles. However, analog is coming back with a vengeance.

Nowhere is this more striking than in the field of optical computing. When it comes to computer processors, 64-bit technology is currently the norm. This means processors handle data in 64-bit words, a vast improvement over the 8-bit technology of the early 1980s. Nowadays, high-end GPUs (Graphics Processing Units) can process thousands of parallel channels, each carrying 64 bits of information.

Optical computing involves processing data encoded in a beam of light, and light can carry far more channels, as Prof. Christophe Moser of the Laboratory of Applied Photonics Devices (LAPD) explains: "We now have modulators that can process information with around 10 million pixels, which can be viewed as channels. So, we have 10 million channels, each carrying analog values."
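To make the idea concrete, here is a minimal sketch, with purely illustrative numbers and function names, of how modulator pixels can act as analog channels: each pixel attenuates the light passing through it, and a detector integrating the transmitted light effectively computes a weighted sum in a single pass.

```python
# Illustrative sketch only -- not LAPD's actual system.
# Each modulator pixel multiplies the incoming light amplitude by its
# transmittance; a detector summing the transmitted light then yields
# a dot product computed "for free" by the optics.

def optical_dot(signal, transmittance):
    """Simulate a detector integrating light through modulator pixels."""
    assert len(signal) == len(transmittance)
    return sum(s * t for s, t in zip(signal, transmittance))

# A 64-bit bus moves one 64-bit word per transfer; here, every pixel is
# an independent analog channel, so n pixels do n multiplies at once.
signal = [1.0, 2.0, 0.5, 4.0]           # input light amplitudes
weights = [0.5, 0.25, 1.0, 0.125]       # pixel transmittances (0 to 1)
result = optical_dot(signal, weights)   # 2.0
```

Scaling the same four-element toy up to the 10 million pixels Prof. Moser mentions is what makes the parallelism so striking: the optics perform all the multiplications simultaneously.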

From 64-bit binary to 10-million-pixel analog technology is akin to a jump to hyperspace. Of course, it is not as simple as that. We won't be using smartphones that run on optical computing in the near future, but certain tasks of computing will soon achieve rates so fast that this technology represents a paradigm shift.

The revival of optical neural networks

Prof. Christophe Moser has been working in the field of optical computing for almost thirty years, first at Caltech and then at EPFL. He works in close collaboration with the acknowledged pioneer of optical neural networks, Prof. Demetri Psaltis. Together they have led a revival of this technology. Invented in the 1980s, before hardware could fully support them, optical neural networks are now making a massive comeback.

Optical computing uses hybrid bonding: traditional chipsets control the inflow and outflow of data electronically, while the heavy computation takes place in the optical domain.
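The division of labour can be sketched as a simple pipeline. The stage names and numbers below are hypothetical, chosen only to illustrate the hand-off: electronics prepare and digitise the data, while the linear-algebra-heavy middle stage stands in for the optics.

```python
# Hypothetical hybrid pipeline sketch -- not the LAPD architecture.

def electronic_encode(data):
    """Chipset side: normalise data onto the modulator's analog range."""
    peak = max(abs(x) for x in data) or 1.0
    return [x / peak for x in data]

def optical_matvec(matrix, vector):
    """Optical side: a full matrix-vector product in one pass of light."""
    return [sum(w * v for w, v in zip(row, vector)) for row in matrix]

def electronic_readout(values, threshold=0.0):
    """Chipset side: detectors digitise the result; apply a nonlinearity."""
    return [v if v > threshold else 0.0 for v in values]

weights = [[0.5, -0.25], [0.75, 1.0]]
x = electronic_encode([2.0, -4.0])              # [0.5, -1.0]
y = electronic_readout(optical_matvec(weights, x))
```

The design point is that only `optical_matvec`, the expensive part, happens in light; everything at the boundary stays electronic, which is what lets such a system plug into conventional computing infrastructure.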

There are many potential applications for this kind of processing power, and neural networks are a great example of this. In a paper recently published in the journal Nanophotonics, researchers from LAPD demonstrate how they were able to build digital twins to train their hybrid neural network.

"We train our hybrid architecture with a neural model that approximates the very complex optical system," explains Prof. Moser. "The data acquired serve as a training dataset that constantly refines the digital twin. So that you can actually predict the hardware that you're going to need, and how to configure it."
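The digital-twin idea described above can be illustrated with a toy loop, purely a sketch and not the paper's method: a simple differentiable surrogate is repeatedly fitted to measurements from a "hardware" stage that cannot be backpropagated through directly, so the twin gradually learns to predict the hardware's behaviour.

```python
import random

# Illustrative digital-twin refinement loop (hypothetical names/values).
# The "hardware" is opaque; the twin is a one-parameter surrogate whose
# gain is fitted to acquired measurements by gradient descent.

def hardware(x, gain=1.7):
    """The physical system: unknown gain plus measurement noise."""
    return gain * x + random.gauss(0.0, 0.01)

def refine_twin(gain_est, samples, lr=0.05):
    """One refinement pass: fit the twin's gain to acquired data."""
    for x, y in samples:
        pred = gain_est * x
        gain_est -= lr * 2 * (pred - y) * x  # gradient of squared error
    return gain_est

random.seed(0)
gain_est = 1.0                               # twin's initial guess
for _ in range(50):                          # acquire data, refine twin
    samples = [(x, hardware(x)) for x in (0.5, 1.0, -1.0)]
    gain_est = refine_twin(gain_est, samples)
# gain_est now closely approximates the hardware's true gain of 1.7
```

Each round of acquired data refines the twin, mirroring the quote above: once the surrogate predicts the hardware well, it can be used to decide how the hardware should be configured.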

Using neural networks to design neural networks

An order of scholars reviving ancient wisdom, armed with beams of light, discovering mysterious twins and making jumps to hyperspace: if this all sounds a bit like Star Wars, the laboratory of Prof. Moser looks the part, too. The demonstrator would make a convincing hyperdrive for the Millennium Falcon.

However, this is anything but science fiction. Deploying digital neural networks to design optical neural networks, as a way to revolutionize the computing industry, is the daily work of Profs. Moser and Psaltis, and their many researchers.

The end product promises a leap forward in the ultra-fast processing of non-linear data such as images, sound and video. The manipulation of complex neural networks will be accelerated to astonishing rates. Prof. Moser notes that a by-product of this development can be used as a high-performance switch in data centers, with the added bonus that optical processing has a far lower carbon footprint and lower latency than traditional methods.

Leaders of industry take note: the return of the analog is on its way.