Featured

The coolest microchips around: designer swimwear by POWERlab

Having come through a heatwave, it is easy to forget that humans are not the only ones that need shelter from oppressive heat. Animals suffer too, and so do large pieces of infrastructure such as railways and even major runways. But what about computers?

From the beginning, data centers have faced problems with cooling. In the early days, Google was known for the set-up in its data centers, where supercomputers were laid out in open rows in a refrigerated environment. Technicians in California could go in and replace Velcro-mounted components while enjoying the cool air for a few refreshing minutes. The real benefit was that the rooms were kept at a low temperature, to lighten the load on each server’s individual cooling system. Nowadays, however, this would be regarded as extraordinarily inefficient: why waste energy refrigerating an entire room when it is only the computers that have to be kept cool?

Prof. Elison Matioli from POWERlab has taken this question a step further. Why should the entire PC be cooled by having air blown at it, when each individual chip could have its own liquid-cooling system?

As data center demand goes up, so does the cost of cooling

Unless you are unusually well-versed in data center technology, you are unlikely to be aware of the full extent to which you rely on data centers. If you use Gmail or Outlook.com, every email you send or receive is probably stored in a center in the USA; if your photos are backed up on Apple iCloud or Google Photos, they are stored in several of the many data centers these companies run all over the world. As more and more people make use of the Internet of Things, from smart cars to smart doorbells to fitness monitors, our data center usage is constantly growing.

At EPFL’s EcoCloud Center we have many professors working on ways to decrease the energy consumption of data centers, even as demand grows and global temperatures increase. At the smallest level, Prof. Moser and Prof. Psaltis have been researching light propagation in optical fibers and using it to perform practical computational tasks with much lower energy consumption than traditional digital techniques. At the city-wide level, Prof. Paolone has been building smart grids that turn static power networks into self-regulating, highly efficient intelligent systems.

Prof. Elison Matioli is working in between these two extremes, at the level of computer components. “Our vision is to contribute to the development of an integrated chip – a single unit for power electronics with loads of devices integrated – smaller and more energy-efficient than anything that can be achieved currently.”

Microchips continue to get smaller, so scientists around the globe are seeking alternatives to silicon as the natural limits of this tried-and-tested material impose themselves. Prof. Matioli has identified the best alternative, but it comes with an inevitable problem:

“Using gallium nitride allows us to build electronic devices such as power units, memory chips or graphics cards much smaller than can be achieved with silicon. We can deliver better performance in energy conversion within a much smaller volume but, as a consequence, we are generating greater amounts of heat over a smaller surface area.

“It is vital that components do not overheat, nor cause neighbouring devices to overheat.”
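
To make the trade-off concrete, here is a rough back-of-the-envelope calculation in Python. The power and die-area figures are entirely hypothetical, not POWERlab measurements; they serve only to show how shrinking the die at constant losses multiplies the heat flux the cooling system has to remove.

# Back-of-the-envelope comparison with made-up numbers: the same dissipated
# power over a smaller GaN die means a much higher heat flux for the cooling
# system to remove.

def heat_flux(power_w: float, area_mm2: float) -> float:
    """Heat flux in W/cm^2 for a given dissipated power and die area."""
    return power_w / (area_mm2 / 100.0)   # 100 mm^2 = 1 cm^2

silicon_sized_die = heat_flux(power_w=20.0, area_mm2=100.0)  # 20 W over 1 cm^2
gan_sized_die = heat_flux(power_w=20.0, area_mm2=10.0)       # same 20 W over 0.1 cm^2

print(f"Silicon-sized die: {silicon_sized_die:.0f} W/cm^2")  # -> 20 W/cm^2
print(f"GaN-sized die:     {gan_sized_die:.0f} W/cm^2")      # -> 200 W/cm^2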

Cooling revolutionary chips like these became a main focus of Prof. Matioli’s team and led to some radical solutions. In turn, these solutions opened new possibilities for cooling all kinds of chipsets, including data center infrastructure.

Walking through the lab feels like stepping into science fiction.

Just as an astronaut wears a space suit with built-in liquid cooling, so each of these microchips is housed in a liquid-cooled membrane. Traditional air cooling works, but liquid conducts heat far better than air, so the cooling is much more efficient. In the devices being pioneered by POWERlab, microchannels of varying diameter provide a cooling system tailored to the needs of each chip, as part of a cooling network designed for the entire machine – the hot spots having been identified in advance. Crucially, only the hot spots themselves are targeted – an ultra-efficient strategy.
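
As an illustration of the principle rather than of POWERlab’s actual design, the short Python sketch below applies the basic energy balance behind hot-spot-targeted liquid cooling: the coolant routed to each hot spot must carry its heat away within an allowed temperature rise. The hot-spot powers are invented for the example.

# Minimal sketch of the energy balance behind hot-spot-targeted liquid cooling:
# the flow routed to each hot spot must carry its heat away within an allowed
# coolant temperature rise, Q = m_dot * c_p * dT. The hot-spot powers below are
# invented for illustration, not measurements from POWERlab devices.

CP_WATER = 4186.0       # specific heat of water, J/(kg*K)
RHO_WATER = 997.0       # density of water, kg/m^3

def flow_for_hotspot(power_w: float, allowed_rise_k: float) -> float:
    """Volumetric flow (mL/min) needed to absorb power_w within allowed_rise_k."""
    m_dot = power_w / (CP_WATER * allowed_rise_k)   # mass flow, kg/s
    return m_dot / RHO_WATER * 1e6 * 60             # m^3/s -> mL/min

hotspots = {"power stage": 15.0, "gate driver": 3.0, "control logic": 1.0}  # W, hypothetical
for name, watts in hotspots.items():
    print(f"{name:13s}: {flow_for_hotspot(watts, allowed_rise_k=10.0):5.1f} mL/min")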

Getting mechanical and electronics engineers to work together

This “co-designing” of the ultra-compact microchips and their cooling system makes the approach unique, and it lies at the heart of Corintis SA, a start-up which has evolved out of POWERlab and is currently recruiting.

“Corintis brings a service to chip manufacturers, providing them with heat maps of their devices. Its experts can optimise the microfluidic cooling while the customer is still designing the microchip, and can then design a heat sink that is made to measure for the chipset.”

Interdisciplinarity is a key feature of this work: “Very often the departments looking at thermal issues and electronic devices are in different buildings: mechanical engineering and electrical engineering. So you build a chip and then send it to another department to find a way to cool it down. But by that time you have already missed many opportunities!

“In our lab I brought mechanical engineers and electrical engineers to work together,” explains Prof. Matioli, “and that is what makes us different.”

In discussion with Remco van Erp, CEO of Corintis SA

The annual increase in computing power of general-purpose chips has been slowing down in recent years. Many of the biggest tech companies in the world are now designing their own application-specific chips to meet their future needs: Apple designs chips for its phones and laptops, Amazon designs chips for its data centers, and YouTube even designs chips for video processing; there is a large amount of heterogeneity. The custom design of chipsets can greatly benefit from tailor-made cooling solutions that improve energy efficiency and performance, especially where data centers are concerned. Increasingly, companies are coming to us looking for better cooling solutions.

This is a very multidisciplinary problem, requiring expertise ranging all the way from mathematics to plumbing. At Corintis, we have computational and hardware experts working together to achieve our goals. The modelling aspect is very important, since we want to predict power and temperature distribution, and optimize the microfluidic cooling design, before a chip is even manufactured. It is also a multi-scale problem: on the one hand, we are dealing with channels at the micrometre scale; on the other, they are integrated into chips that are several centimetres across. This requires clever innovations in modelling and simulation.
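
To give a flavour of what such multi-scale prediction involves, the toy Python sketch below relaxes a coarse two-dimensional steady-state heat map of a chip floorplan with two hypothetical hot spots and a regular grid of cooled channel cells. It is a deliberately simplified stand-in, not the Corintis modelling software; the grid size, power map and boundary handling are all invented for illustration.

import numpy as np

# Toy steady-state heat map of a chip floorplan: a coarse finite-difference
# relaxation with fixed-temperature "channel" cells acting as heat sinks.
# Periodic boundaries (via np.roll) are used purely for simplicity.

N = 60                                   # coarse grid over a ~cm-scale chip
power = np.zeros((N, N))
power[10:18, 10:18] = 4.0                # hypothetical strong hot spot (arbitrary units)
power[40:46, 30:38] = 2.0                # hypothetical milder hot spot (arbitrary units)

T = np.full((N, N), 25.0)                # initial guess: 25 degC everywhere
channels = np.zeros((N, N), dtype=bool)
channels[:, ::10] = True                 # a cooled microchannel every 10 cells (toy layout)

for _ in range(5000):                    # Jacobi relaxation towards steady state
    neighbours = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                  np.roll(T, 1, 1) + np.roll(T, -1, 1)) / 4.0
    T = neighbours + power               # diffusion plus local heat source
    T[channels] = 20.0                   # channel cells pinned at coolant temperature

print(f"Peak temperature (arbitrary units): {T.max():.1f}")
print(f"Temperature at centre of hot spot 1: {T[14, 14]:.1f}")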

We keep strong links with EPFL: our microfabrication experts work in the clean rooms there, we have four interns from EPFL and other international institutions, and we are applying for research funding in collaboration with POWERlab.



Find out more:

POWERlab
https://powerlab.epfl.ch

Corintis
https://corintis.com

EcoCloud
https://ecocloud.epfl.ch

Related publications:

Multichannel nanowire devices for efficient power conversion
Nature, 25 March 2021

Co-designing electronics with microfluidics for more sustainable cooling
Nature, 9 September 2020


EcoCloud’s expanding mission

As of January 1st, 2022, the EPFL EcoCloud Center is headed by Professor David Atienza. Its mission has been expanded with a strong new focus on fundamental research and education in the domain of sustainable cloud computing.

“Historically, EcoCloud’s main focus has been to deliver technologies jointly with top companies in the information technologies (IT) sector to help them optimize the large cloud computing infrastructure of public cloud systems”, says Atienza. “We are now focusing on the whole IT ecosystem to develop sustainable multiscale computing from the cloud to the edge”, he adds. “Our goal is to rethink the whole ecosystem and how we can provide IT solutions that make computing more sustainable. In particular, the goal is to optimize the resources used for computing to minimize the environmental and social impact of IT infrastructures and practices. This includes the monitoring of materials, energy, water and other rare resources, and the creation of a circular economy for IT infrastructure, considering the impact of electronics on the environment from production to the recycling of cloud computing components.”

IT infrastructure as enabler for a sustainable society

“In collaboration with the School of Engineering (STI), the School of Computer and Communication Sciences (IC), the School of Architecture, Civil and Environmental Engineering (ENAC), and the School of Basic Sciences (SB), we have defined multi-disciplinary IT application pillars, or directions, that are strategic for them”, says Atienza.

Four multi-center projects are planned for 2022 in the following research areas: energy-constrained and sustainable deep learning (in collaboration with the Center for Intelligent Systems (CIS) and the Center for Imaging), computational and data storage sustainability for scientific computing (in collaboration with the Space Center and the Energy Center), sustainable smart cities and transportation systems (in partnership with the FUSTIC Association, CIS and CLIMACT Center) and energy-constrained trustworthy systems including Bitcoin’s technology (in collaboration with the Center for Digital Trust).

In addition to its multi-center research projects on specific applications, EcoCloud will also work on fundamental technologies to enable sustainable IT infrastructures, such as minimal-energy computing and storage platforms, or approaches to maximize the use of renewable energy in data centers and IT services deployment.

Moreover, in this new era of sustainable cloud computing research, EcoCloud will maintain and strengthen its long-standing collaboration with historical IT partners such as Microsoft, HPE, Intel, IBM, Huawei and Facebook through its Industrial Affiliates Program (IAP). These partners have confirmed their interest in continuing to collaborate with the center on its new research topics through their IAP membership.

A new facility for research on sustainable computing

“We plan to create an experimental facility dedicated to multi-disciplinary research on sustainable computing at EPFL”, says Atienza. In this facility, EcoCloud will provide specialized IT personnel to assist and support the EPFL laboratories in performing tests related to the proposed multi-center IT research projects and cloud infrastructures. “This year, research activities will focus on the agreed projects with the different schools and centers at EPFL, but in the future, we expect to make open calls for anyone at EPFL interested in research related to sustainable computing to be supported by EcoCloud.”

Best practices for IT infrastructure

The dissemination of best practices for sustainable IT infrastructure is another core mission of EcoCloud. “In cooperation with the Vice-Presidency for Responsible Transformation (VPT), we are going to develop a course on the fundamentals of sustainable computing for EPFL students at the master’s level, which will be offered by the Section of Electrical Engineering (SEL) and the Section of Computer Science (SIN) to the whole campus”, says Atienza. “Continuing education for professionals is also important. We plan to offer training to companies to support and assist them in their digitalization processes and help them understand how to implement the most sustainable IT technologies and processes possible.”

“IT is the engine of our digital world. With a compound annual growth rate of more than 16%, cloud computing must embrace a strategy of digital responsibility to support economic progress and societal development without compromising the future of our planet”, concludes Atienza.

Public cloud

The public cloud concept refers to an IT model in which on-demand computing services and infrastructure are managed by a third-party provider (e.g., Microsoft, Amazon, Google, IBM) and shared, for a fee, with multiple organizations over the Internet. A public cloud is thus a subscription service offered by a company to many customers who want similar services. By contrast, a private cloud is a service entirely controlled by a single organization for its internal use and not shared with others (e.g., the internal data center and IT infrastructure we have at EPFL).


Author: Leila Ueberschlag

Source: Computer and Communication Sciences | IC

This content is distributed under a Creative Commons CC BY-SA 4.0 license. You may freely reproduce the text, videos and images it contains, provided that you indicate the author’s name and place no restrictions on the subsequent use of the content. If you would like to reproduce an illustration that does not contain the CC BY-SA notice, you must obtain approval from the author.