
EPFL takes another step towards carbon neutrality

Today EPFL inaugurated its new heating plant, which has the capacity to heat and cool the Lausanne campus solely by drawing water from Lake Geneva and recovering excess heat from a connected data center. The ceremony was attended by local political leaders including Vassilis Venizelos, a Vaud Canton councilor and head of the Canton’s department of youth, the environment and security.

The plant – some three years in the making – marks a major step towards the School’s goal of becoming carbon neutral. EPFL began renovating the heating and cooling facilities at its Lausanne campus in 2019 after they had become obsolete.
“Our School has long been a pioneer in making efficient use of energy resources,” says Matthias Gäumann, EPFL’s Vice President for Operations. “We installed the first lake-water-fed cooling system in the late 1970s, and in 1986 we began heating parts of the Lausanne campus with lake water, too. Thanks to the plant inaugurated today, all of the campus’ heating and cooling needs will now be met through a combination of water pumped from Lake Geneva, heat exchangers, heat recovered from a data center and solar panels. All that corresponds to 54% of the campus’ total energy requirement.” The remaining demand is met by electricity (40% of the total) and natural gas (just 6%); the campus uses no fuel oil.

Recovering excess heat
The new heating plant was built through a project with Bouygues. Its innovative design includes an integrated system bringing together different types of renewable energy. The sides and roof of the plant’s main building (located near a metro track) are covered entirely with solar panels, while a new pumping station makes it possible to draw water from Lake Geneva. This water – drawn from deeper in the lake than with the previous plant – arrives at a constant temperature of 7°C, which next-generation heat pumps raise to 67°C through a thermodynamic cycle of compression, condensation, expansion and evaporation. The end result is significantly better energy performance.
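For a rough sense of why heat pumps make this arrangement efficient, the ideal (Carnot) coefficient of performance can be computed from the two temperatures quoted above. The following is a back-of-the-envelope sketch, not plant data:

```python
# Carnot bound for the heat pumps, using the temperatures quoted above
# (7 degC lake intake, 67 degC delivery). This is an idealized upper
# limit, not the plant's measured performance.

T_cold = 7 + 273.15   # lake water temperature, in kelvin
T_hot = 67 + 273.15   # delivery temperature, in kelvin

# Carnot COP for heating: heat delivered per unit of electrical work
cop_ideal = T_hot / (T_hot - T_cold)
print(f"Ideal heating COP: {cop_ideal:.1f}")  # ~5.7

# Real heat pumps typically reach 40-60% of the Carnot limit,
# i.e. a plausible real-world COP of roughly 2.5-3.5.
```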
Another major advance is that the plant makes use of the excess heat generated by a data center built on top of it, through a system that was started up early this year. The server-rack doors in the center are designed to accommodate filtered industrial water cooled by lake water. The system is energy-efficient and technically quite bold due to the challenges involved in running water near electrical fittings. By using the heat recovered from cooling the servers to warm the rest of the Lausanne campus, the School can cut its power bill considerably, particularly when compared to the conventional approach of using refrigeration units to cool servers.

“A crucial step”
Looking further ahead, the plant could one day make use of a nearby composting facility that converts organic waste from neighboring parks and gardens. A digester for food waste from campus cafeterias could also be installed, eventually leading to small-scale local biogas production.
Last but not least, the new plant could also serve research purposes. EPFL’s EcoCloud research center is working with the School’s Energy Center on a project to reduce the data center’s carbon emissions. The project entails incorporating the solar panels on the heating plant along with a battery located on campus, and setting up a system for controlling the data center directly.
“Few people in the EPFL community have ever really wondered where the energy they use to heat buildings, light up classrooms and run experiments comes from,” says Gisou van der Goot, EPFL’s Vice President for Responsible Transformation. “But times are changing, and we need to showcase our School’s energy strategy as an example worth following. Especially in today’s circumstances, it’s reassuring to know that our main campus uses no fuel oil and almost no natural gas, and runs primarily on renewable energy thanks to our new heating plant. This plant marks a crucial step towards our long-term objectives.”

Key figures
54% of the Lausanne campus’ energy comes from the heating plant, which uses water from Lake Geneva to heat and cool the entire campus
6% comes from natural gas
40% comes from electricity
218 GWh expected energy use in 2022
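Applying those percentages to the expected 218 GWh gives a rough breakdown in absolute terms. The short calculation below is illustrative and assumes the shares apply to total 2022 consumption:

```python
# Illustrative breakdown of the key figures above, assuming the
# percentages apply to the expected 2022 consumption of 218 GWh.

total_gwh = 218
shares = {
    "heating plant (lake water, heat recovery, solar)": 0.54,
    "electricity": 0.40,
    "natural gas": 0.06,
}
for source, share in shares.items():
    print(f"{source}: {share * total_gwh:.0f} GWh")
# -> roughly 118, 87 and 13 GWh respectively
```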


Author: Mediacom

Source: EPFL


The coolest microchips around: designer swimwear by POWERlab

Having come through a heatwave, it is easy to forget that humans are not the only ones that need to shelter from oppressive heat. Animals suffer too, as do huge mechanical infrastructures like railways and even major runways. But what about computers?

From the beginning, data centers have faced problems with cooling. In the early days, Google was known for a fantastic set-up in its data centers, where supercomputers were laid out in open rows in a refrigerated environment. Technicians in California could go and replace Velcro-mounted components while enjoying the cool air for a few refreshing minutes. The real benefit was that the rooms were kept at a low temperature, lightening the load on each server’s individual cooling system. Nowadays, however, this would be regarded as extraordinarily inefficient: why waste energy refrigerating an entire room when it is only the PCs that have to be kept cool?

Prof. Elison Matioli from POWERlab has taken this question a step further. Why should the entire PC be cooled down by having air blown at it, when each individual chip could have its own liquid-cooled system?

As data center demand goes up, so does the cost of cooling

Unless you are unusually well-versed in data center technology, you are probably unaware of the full extent to which you rely on data centers. If you use Gmail or Outlook.com, it is likely that every email you send or receive is stored in a center in the USA; if your photos are backed up on Apple iCloud or Google Photos, they are stored in several of the many data centers these companies run all over the world. And as more and more people adopt the Internet of Things, from smart cars to smart doorbells to fitness monitors, our data center usage keeps growing.

At EPFL’s EcoCloud Center we have many professors working on ways to decrease the energy consumption of data centers, even as demand grows and global temperatures increase. At the smallest level, Prof. Moser and Prof. Psaltis have been researching light propagation in optical fibers, and using it to perform practical computational tasks, with much lower energy consumption than traditional digital techniques. At the city-wide level, Prof. Paolone has been building smart grids that turn static power networks into self-regulating, highly-efficient intelligent systems.

Prof. Elison Matioli works between these two extremes, at the level of computer components. “Our vision is to contribute to the development of an integrated chip – a single unit for power electronics with loads of devices integrated – smaller and more energy-efficient than anything that can be achieved currently.”

Microchips continue to shrink, so scientists around the globe are seeking alternatives to silicon as the natural limits of this tried-and-tested material impose themselves. Prof. Matioli has identified a leading alternative – gallium nitride – but it comes with an inevitable problem:

“Using gallium nitride allows us to build electronic devices like power units, memory chips or graphics cards much smaller than can be achieved with silicon. We can deliver better performance in energy conversion within a much smaller volume but, as a consequence, we are generating greater amounts of heat over a smaller surface area.

“It is vital that components do not overheat, nor cause neighbouring devices to overheat.”

Cooling revolutionary chipsets like these became a main focus of Prof. Matioli’s team, and led to some radical solutions. In turn, these solutions opened new possibilities for cooling all kinds of chipsets, including data center infrastructure.

Walking through the lab feels like stepping into science fiction.

Just as an astronaut wears a space suit with built-in liquid cooling, each of these microchips is housed in a liquid-cooled membrane. Traditional air cooling works, but liquid conducts heat far better than air, so the cooling is much more efficient. In the devices being pioneered by POWERlab, microchannels of varying diameter provide a cooling system tailored to the needs of each chip, as part of a cooling network designed for the entire machine – the hot spots having been identified in advance. Crucially, only the hot spots themselves are targeted – an ultra-efficient strategy.
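To get a feel for the scale of the difference, Newton’s law of cooling (q = h·A·ΔT) with textbook order-of-magnitude heat-transfer coefficients suggests how much more heat a liquid can carry away from a hot spot. The numbers below are generic illustrations, not POWERlab measurements:

```python
# Rough comparison based on Newton's law of cooling, q = h * A * dT.
# The heat-transfer coefficients are textbook orders of magnitude,
# not POWERlab data.

A = 1e-4    # hot-spot area: 1 cm^2, in m^2
dT = 40.0   # chip-to-coolant temperature difference, in K

coefficients = {
    "forced air": 100.0,             # W/(m^2 K), typical fan-blown air
    "water microchannels": 20000.0,  # W/(m^2 K), single-phase liquid cooling
}
for name, h in coefficients.items():
    print(f"{name}: {h * A * dT:.1f} W removable from the hot spot")
# forced air: ~0.4 W vs water microchannels: ~80 W
```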

Getting mechanical and electronics engineers to work together

This “co-designing” of the ultra-compact microchips and their cooling system is what makes the approach unique, and it lies at the heart of Corintis SA, a start-up that has evolved out of POWERlab and is currently recruiting.

“Corintis is bringing a service to chip manufacturers, providing them with heat maps for their devices. Its experts can optimise the microfluidic cooling while the customer is still designing the microchip, and then design a heat sink that is made to measure for that chipset.”

Interdisciplinarity is a key feature of this work: “Very often the departments looking at thermal issues and electronic devices are in different buildings: mechanical engineering and electrical engineering. So you build a chip and then send it to another department to find a way to cool it down. But by this time you have already missed many opportunities!

“In our lab I brought mechanical engineers and electrical engineers to work together,” explains Prof. Matioli, “and that is what makes us different.”

In discussion with Remco van Erp, CEO of Corintis SA

The annual increase in computing power of general-purpose chips has been slowing down in recent years. Many of the biggest tech companies in the world are now designing their own application-specific chips to meet their future needs: Apple designs chips for its phones and laptops, Amazon designs chips for its data centers, and YouTube even designs chips for video processing – there is a large amount of heterogeneity. The custom design of chipsets can greatly benefit from tailor-made cooling solutions that improve energy efficiency and performance, especially where data centers are concerned. Increasingly, companies are coming to us looking for better cooling solutions.

This is a very multidisciplinary problem, requiring expertise ranging all the way from mathematics to plumbing. At Corintis, we have computational and hardware experts working together to achieve our goals. The modelling aspect is very important, since we want to predict power and temperature distribution, and optimize the microfluidic cooling design, before a chip is even manufactured. It is also a multi-scale problem: on the one hand we are dealing with channels at the micrometre scale, on the other they are integrated into chips that are several centimetres across. This requires clever innovations in modelling and simulation.
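As a toy illustration of the kind of pre-manufacture thermal prediction van Erp describes, here is a minimal one-dimensional steady-state model of a chip with a single hot spot, solved by finite differences. All geometry, power and material values are invented for illustration, not Corintis data:

```python
import numpy as np

# Toy 1D steady-state temperature profile along a 2 cm chip with one
# hot spot, solved by Jacobi iteration on the heat equation
# k * d2T/dx2 + q = 0. All numbers are illustrative.

n = 100                  # grid cells
dx = 0.02 / n            # cell size, in m
k = 150.0                # thermal conductivity, W/(m K), silicon-like
q = np.zeros(n)          # volumetric heat generation, W/m^3
q[45:55] = 5e8           # a 2 mm hot spot near the middle of the chip

T = np.full(n, 300.0)    # boundaries held at 300 K by the coolant
for _ in range(20000):   # simple Jacobi sweeps until (near) convergence
    T[1:-1] = 0.5 * (T[2:] + T[:-2] + q[1:-1] * dx**2 / k)

print(f"Peak temperature: {T.max():.1f} K")  # ~332 K for these numbers
```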

We keep strong links with EPFL: our microfabrication experts work in the clean rooms there, we have four interns from EPFL and other international institutions, and we are applying for research funding in collaboration with POWERlab.



Find out more:

POWERlab
https://powerlab.epfl.ch

Corintis
https://corintis.com

EcoCloud
https://ecocloud.epfl.ch

Related publications:

Multi-channel nanowire devices for efficient power conversion
Nature Electronics, 25 March 2021

Co-designing electronics with microfluidics for more sustainable cooling
Nature, 9 September 2020


EcoCloud’s expanding mission

As of January 1st, 2022, the EPFL EcoCloud Center is headed by Professor David Atienza. Its mission has been expanded with a strong new focus on fundamental research and education in the domain of sustainable cloud computing.

“Historically, EcoCloud’s main focus has been to deliver technologies jointly with top companies in the information technology (IT) sector to help them optimize the large computing infrastructure of public cloud systems,” says Atienza. “We are now focusing on the whole IT ecosystem to develop sustainable multiscale computing from the cloud to the edge,” he adds. “Our goal is to rethink the whole ecosystem and how we can provide IT solutions that make computing more sustainable. In particular, the goal is to optimize the resources used for computing to minimize the environmental and social impact of IT infrastructures and practices. This includes the monitoring of materials, energy, water and other rare resources, and the creation of a circular economy for IT infrastructure, considering the environmental impact of electronics from production through to the recycling of cloud computing components.”

IT infrastructure as enabler for a sustainable society

“In collaboration with the School of Engineering (STI), the School of Computer and Communication Sciences (IC), the School of Architecture, Civil and Environmental Engineering (ENAC), and the School of Basic Sciences (SB), we have defined multi-disciplinary IT application pillars, or directions, that are strategic for them,” says Atienza.

Four multi-center projects are planned for 2022 in the following research areas: energy-constrained and sustainable deep learning (in collaboration with the Center for Intelligent Systems (CIS) and the Center for Imaging), computational and data storage sustainability for scientific computing (in collaboration with the Space Center and the Energy Center), sustainable smart cities and transportation systems (in partnership with the FUSTIC Association, CIS and CLIMACT Center) and energy-constrained trustworthy systems including Bitcoin’s technology (in collaboration with the Center for Digital Trust).

In addition to its multi-center research projects on specific applications, EcoCloud will also work on fundamental technologies to enable sustainable IT infrastructures, such as minimal-energy computing and storage platforms, or approaches to maximize the use of renewable energy in data centers and IT services deployment.
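One concrete flavour of such approaches is carbon-aware scheduling: shifting deferrable workloads to the hours when the grid is cleanest. A minimal sketch, with entirely hypothetical forecast numbers:

```python
# Hypothetical sketch of carbon-aware scheduling: run a deferrable job
# in the hour with the lowest forecast grid carbon intensity. The
# forecast values below are invented for illustration.

forecast = {   # hour of day -> gCO2eq per kWh (hypothetical numbers)
    0: 120, 4: 90, 8: 180, 12: 60,   # midday dip: solar generation
    16: 150, 20: 200,
}

def best_start_hour(forecast):
    """Return the hour with the lowest forecast carbon intensity."""
    return min(forecast, key=forecast.get)

print(f"Schedule the deferrable job at hour {best_start_hour(forecast)}")  # 12
```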

Moreover, in this new era of sustainable cloud computing research, EcoCloud will maintain and strengthen its long-standing collaborations with historical IT partners such as Microsoft, HPE, Intel, IBM, Huawei and Facebook through its Industrial Affiliates Program (IAP); these partners have confirmed their interest in continuing to collaborate with the center on its new research topics through their IAP membership.

A new facility for research on sustainable computing

“We plan to create an experimental facility dedicated to multi-disciplinary research on sustainable computing at EPFL,” says Atienza. In this facility, EcoCloud will provide specialized IT personnel to assist and support the EPFL laboratories in performing tests related to the proposed multi-center IT research projects and cloud infrastructures. “This year, research activities will focus on the agreed projects with the different schools and centers at EPFL, but in the future we expect to make open calls for anyone at EPFL interested in research related to sustainable computing to be supported by EcoCloud.”

Best practices for IT infrastructure

The dissemination of best practices for sustainable IT infrastructure is another core mission of EcoCloud. “In cooperation with the Vice-Presidency for Responsible Transformation (VPT), we are going to develop a course about the fundamentals of sustainable computing for EPFL students at the master level, which will be offered by the Section of Electrical Engineering (SEL) and the Section of Computer Science (SIN) for the complete campus”, says Atienza. “Continuous education for professionals is also important. We plan to offer training to companies to support and assist them in their digitalization processes and help them understand how to implement the most sustainable IT technologies and processes possible.”

“IT is the engine of our digital world. With a compound annual growth rate of more than 16%, cloud computing must embrace a strategy of digital responsibility to support economic progress and societal development without compromising the future of our planet”, concludes Atienza.
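For a sense of scale, a compound annual growth rate above 16% implies that demand roughly doubles every five years, as a quick calculation shows:

```python
import math

# What a 16% compound annual growth rate implies: a doubling time of
# ln(2) / ln(1.16), i.e. cloud demand roughly doubles every five years.
cagr = 0.16
print(f"Doubling time: {math.log(2) / math.log(1 + cagr):.1f} years")  # ~4.7
```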

Public cloud

The public cloud concept refers to an IT model where on-demand computing services and infrastructure are managed by a third-party provider (e.g., Microsoft, Amazon, Google, IBM) and shared, for a fee, with multiple organizations over the Internet. In other words, a public cloud is a subscription service offered by a company to many customers who want similar services. By contrast, a private cloud is a service entirely controlled by a single organization for its internal use and not shared with others (e.g., the internal data center and IT infrastructure we have at EPFL).


Author: Leila Ueberschlag

Source: Computer and Communication Sciences | IC


Beating hackers at bug hunting with automated, far-reaching technology

On the 9th of December, 2021, the world of IT security abruptly went into a state of shock as an alarming message spread like wildfire:

    RCE 0-day exploit found in log4j   

For the uninitiated, there is a lot to unpack here. “RCE” stands for remote code execution: similar to when somebody takes control of your computer with TeamViewer to run programs of their choosing. In this context, however, control is exerted without the consent, or even the knowledge of the owner.

A zero-day exploit targets a major software vulnerability that was previously unknown to the developer, who must act quickly to release a patch because, by the time they learn of the flaw, adversaries could already be exploiting it.

The log4j library allows Java software to log (report) certain conditions, and is widely used in Java software. A vulnerability in it could allow an adversary to execute arbitrary code in the context of the underlying software.

Put it all together and you get this: at the time the above headline was published, a system tool used by companies all over the world – in cloud servers, game servers and financial platforms – was already being exploited by hackers, allowing them to take control of servers and data centers.

News spread fast about the staggering vulnerability

93% of the world’s cloud services affected

According to the Wall Street Journal, “U.S. officials say hundreds of millions of devices are at risk, hackers could use the bug to steal data, install malware or take control.”

One estimate stated that the vulnerability affected 93% of enterprise cloud environments. At EPFL, all IT administrators were sent instructions to patch their server software immediately. Even Oracle Corporation, a world leader in enterprise software, had to send out a distress call:

“Due to the severity of this vulnerability and the publication of exploit code on various sites, Oracle strongly recommends that customers apply the updates provided by [our] Security Alert as soon as possible.”

It is hard to gauge the full extent of the damage caused, but it is clear that these vulnerabilities have real-world impact: among confirmed victims of the log4j bug are the Belgian Ministry of Defence, the UK’s National Health Service and a range of financial trading platforms. So the question arises: what are corporations like Oracle doing about it?

As a matter of fact, Oracle had already been working against this kind of vulnerability long before the log4j zero day. The log4j library uses deserialization: a server receives structured data (a serialized representation of objects and their relationships) and reconstructs it for processing. If the checks performed during deserialization are insufficient and give the attacker leeway in how the data is interpreted, the result is often RCE. Identifying the vulnerabilities exposed during the deserialization process had long been a subject of interest to Oracle researchers by 2020, when they reached out to Prof. Mathias Payer of EPFL’s HexHive lab:

“We had already covered fuzzing and program analysis, and had worked on cloud security as part of EPFL’s EcoCloud Center,” explains Prof. Payer, “but we had not approached these deserialization bugs. Then we got to work with Oracle Labs (part of Oracle Inc.), who provided funding via a gift. François Gauthier and Kostyantyn Vorobyov, two researchers from Oracle Labs, introduced us to the complex technical issues they were facing. We then worked together and developed a platform for discovering deserialization vulnerabilities.

“People have been attempting to find and exploit vulnerabilities in deserialization code, including Oracle’s, for years: either intent on gaining some kind of direct advantage, or to earn money by submitting bug reports. Either way, these are dedicated manual attacks, in which the analyst thoroughly analyzes the source code of the target and then painstakingly crafts the exploit. What we have developed is a mechanism that automates the process, and allows Oracle to get ahead of the attackers.

Eight moves ahead, like a chess grandmaster

“In addition to this, the bugs that we are finding can be much more complex than the ones that experts are finding manually. Most analysts are trained to search to a depth of two manipulations: an entry and a control vector. Our platform creates an abstract dependency graph for all available classes, and can do a fuzzy search to a depth of up to eight manipulations.”
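A minimal sketch of that idea – enumerating chains of up to eight steps through a dependency graph, from a deserialization entry point to a dangerous sink – might look like the following. The graph and method names are hypothetical; this is not HexHive’s actual platform:

```python
from collections import deque

# Hypothetical bounded-depth search over a dependency graph: enumerate
# call chains of up to eight steps from a deserialization entry point
# to a dangerous sink.

graph = {  # method -> methods it can reach during deserialization
    "readObject": ["hashCode", "compare"],
    "hashCode": ["toString"],
    "toString": ["lookup"],   # "lookup" stands in for a dangerous sink
    "compare": [],
}

def chains_to_sink(graph, entry, sink, max_depth=8):
    """Breadth-first enumeration of chains with at most max_depth steps."""
    found, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        if path[-1] == sink:
            found.append(path)
            continue
        if len(path) <= max_depth:             # room for another step
            for nxt in graph.get(path[-1], []):
                queue.append(path + [nxt])
    return found

print(chains_to_sink(graph, "readObject", "lookup"))
# [['readObject', 'hashCode', 'toString', 'lookup']]
```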

The battle between IT security managers and attackers is one where the defenders hope to find bugs before the attackers do. However, Prof. Payer explains that security managers have one key advantage when it comes to using HexHive’s platform: “Although our tool is neutral, i.e., it can be used by both attackers and defenders, developers have full access to and understanding of their own code, which gives them a huge advantage over a hacker when it comes to interpreting the results. They therefore have a very good chance of finding weak points before the attacker.”
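To make the underlying bug class concrete, here is a Python analogue of the deserialization weakness described earlier. The Java mechanics differ, but the pattern is the same: deserializing attacker-controlled bytes executes code of the attacker’s choosing.

```python
import pickle

# Python analogue of the deserialization bug class discussed above.

class Payload:
    def __reduce__(self):
        # Tells pickle to "reconstruct" this object by calling an
        # arbitrary function -- a harmless print here, but it could
        # just as easily be a shell command.
        return (print, ("code executed during deserialization",))

malicious_bytes = pickle.dumps(Payload())
pickle.loads(malicious_bytes)  # merely deserializing runs the payload
```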

Negotiations are under way to set up internships for HexHive researchers at Oracle Corporation. “This will be good for Oracle because they will have people who actually developed some of the code on site, which will make it easier to integrate the platform into their pipeline. Another thing I appreciate is that our prototype will remain open source, and bug reports will be published.”

So long as information technology is around, the battle between security managers and hackers will rage on. Thanks to their collaboration with HexHive, however, Oracle will be able to keep one step ahead of the aggressor: faster, higher, stronger.


The Oracle Database Multilingual Engine (MLE) and APEX

Featured talk at EPFL: GraalVM’s full power in the Database

Starting with Oracle Database 21c, developers can execute JavaScript within the database. This functionality is enabled by the Multilingual Engine (MLE), powered by GraalVM, and makes Oracle APEX 20.2 the first (and only) low-code framework on the planet that natively supports server-side JavaScript, out of the box.

In this talk, we will get a glimpse into the architecture of MLE, which solves the challenge of embedding a multilingual compiler and runtime (GraalVM) into a database management system (Oracle Database). In addition, we will see how MLE enables low-code platforms like APEX to run dynamic languages like JavaScript natively in the database, both in theory and in a short practical demonstration within the Oracle Cloud Autonomous Database.

Speaker:

Dr. Lucas Braun is a Program Manager at Oracle Labs, Oracle’s research and development branch. He received his doctoral degree from the computer science department of ETH Zurich (Systems Group), and cloud databases are his primary research focus. Lucas is part of the team that develops the Oracle Database Multilingual Engine (MLE), which was first released in December 2020 as part of Oracle Database 21c.

Date: Wednesday 1st June 2022
Time: 10:15
Place: CM2


Using the matrix to help Meta gear up

Just 12 months after it was created, in December 2004, one million people were active on Facebook. As of December 2021, it had an average of 1.93 billion daily active users. EPFL is in a unique collaboration with its parent company, Meta, around distributed deep learning research.

For a user base of this size, large-scale automated systems are essential to understand the user experience and to ensure accuracy and success. EPFL’s Machine Learning and Optimization Laboratory (MLO), led by Professor Martin Jaggi, is actively collaborating with Meta Platforms, Inc., Facebook’s parent company, to solve this unique challenge.

With funding from EPFL’s EcoCloud Research Center, MLO collaborates with Meta through internships at the company for MLO researchers and the use by Meta of a pioneering MLO invention: PowerSGD. MLO is helping Meta to analyze and better understand millions of users’ experiences while at the same time respecting user privacy. This requires collaborative learning, that is, privacy-preserving analysis of information from many devices for the training of a neural network that gathers, and even predicts, patterns of behavior.

To do this, a key strategy is to divide the study of these patterns over “the edge”, using both the user’s device, and others that sit between it and the data center, as a form of distributed training. This requires a fast flow of information and efficient analysis of the data. PowerSGD is an algorithm which compresses model updates in matrix form, allowing a drastic reduction in the communication required for distributed training. When applied to standard deep learning benchmarks, such as image recognition or transformer models for text, the algorithm saves up to 99% of the communication while retaining good model accuracy.
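At its core, PowerSGD approximates each layer’s gradient matrix with a low-rank factorization obtained from a single step of power iteration, so that only two small factors need to be communicated. The following is a minimal single-worker sketch; the error feedback and the all-reduce across workers used by the full algorithm are omitted:

```python
import numpy as np

# Minimal sketch of the low-rank compression at the core of PowerSGD:
# one step of power iteration yields a rank-r factorization of a
# layer's gradient matrix, and only the two small factors are sent.

rng = np.random.default_rng(0)
grad = rng.standard_normal((1024, 512))  # one layer's gradient, n x m
r = 4                                    # compression rank

Q = rng.standard_normal((512, r))        # right factor, reused across steps
P = grad @ Q                             # left factor, n x r
P, _ = np.linalg.qr(P)                   # orthogonalize for stability
Q = grad.T @ P                           # updated right factor, m x r

approx = P @ Q.T                         # what the receiver reconstructs
sent = P.size + Q.size
print(f"{sent} floats sent instead of {grad.size} "
      f"({100 * (1 - sent / grad.size):.1f}% saved)")  # ~98.8% saved
```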

PowerSGD was used to speed up training of the XLM-R model by up to 2x. XLM-R is a critical Natural Language Processing model powering most of the text understanding services at Meta; Facebook, Instagram, WhatsApp and Workplace all rely on it for their text understanding needs. Use cases include: 1) Content Integrity: detecting hate speech, violence, bullying and harassment; 2) Topic Classification: classifying topics to enable feed ranking in products like Facebook; 3) Business Integrity: detecting policy violations in ads across all products; 4) Shops: providing better product understanding and recommendations for shops.

“There are three aspects to the process. The first is to develop gradient compression algorithms to speed up the training, reducing the time required to prepare this information for its transfer to a centralized hub. The second is efficient training of the neural network within a data center – it would normally take several weeks to process all the information, but we distribute the training, reducing computation from months to days,” said MLO doctoral researcher Tao Lin.

Tao Lin of MLO

As a third aspect, privacy is a constant factor under consideration. “We have to distinguish between knowledge and data. We need to ensure users’ privacy by making sure that our learning algorithms can extract knowledge without extracting their data and we can do this through federated learning,” continued Lin.
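A minimal sketch of federated averaging, the basic pattern behind the idea Lin describes, might look as follows: each device trains on data that never leaves it, and only the model updates are averaged centrally. The task and sizes here are invented for illustration:

```python
import numpy as np

# Hedged sketch of federated averaging: each device takes a local
# training step on its private data, and only the resulting models --
# never the raw data -- are averaged. Least-squares is a stand-in task.

def local_update(weights, device_data, lr=0.1):
    """One gradient step on a device's private data."""
    X, y = device_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(1)
weights = np.zeros(3)
devices = [(rng.standard_normal((20, 3)), rng.standard_normal(20))
           for _ in range(5)]

for _ in range(10):   # each round: local steps, then average the models
    weights = np.mean([local_update(weights, d) for d in devices], axis=0)

print(weights)        # the shared model; raw device data was never pooled
```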

The PowerSGD algorithm has been gaining recognition over the last few years. The developers of the deep learning framework PyTorch have included it in their software suite (PyTorch 1.10), which is used by Meta, OpenAI, Tesla and similar technology corporations that rely on artificial intelligence. The collaboration with Meta is due to run until 2023.


Related news: EPFL’s PowerSGD Algorithm Features in Software Industry

Authors: Tanya Petersen, John Maxwell

Source: EPFL


EcoCloud Annual Event

EcoCloud Annual Event

Registrations are open.

Date of the event:
24th May, 2022

Location:
Lausanne Palace Hotel 
Rue du Grand Chêne, 7-9
CH-1002 Lausanne
+41 21 331 31 31
http://www.lausanne-palace.com

Program

8:00 – 8:30  Pick-up badges/registration + welcome coffee
8:30  Introduction – David Atienza and Ed Bugnion (EPFL)

Session 1: Sustainable smart cities, transport systems and agriculture
9:00  Digital Twins at the service of Sustainable Cities & Transport systems: challenges? – Frédéric Dreyer (EPFL)
9:15  Towards a Data-driven Operational Digital Twin for Railway Wheels – Olga Fink (EPFL)
9:35  Computational aspects of climate-smart agriculture and forestry – Marilyn Wolf (University of Nebraska-Lincoln)
10:00  Data production in renewable energy hubs – François Maréchal (EPFL)
10:30  Coffee break

Session 2: Energy-constrained and sustainable deep learning
11:00  Advanced computational imaging: the rise and fall of deep neural nets – Michael Unser (EPFL)
11:35  Carbon-Aware Deep Learning to Promote Model Optimization: An Application in Global Health – Annie Hartley (EPFL)
12:00 – 13:00  Standing lunch

Session 3: Energy-constrained and trustworthy computing systems
13:00  A cryptocurrency to Save the Planet – Rachid Guerraoui (EPFL)
13:30  Solving Hard Optimization Problems with Light – Francesca Parmigiani (Microsoft)
14:00  Optical Computing With Optical Fibers – Christophe Moser (EPFL)
14:30 – 15:00  Break

Session 4: Sustainable Scientific Computing
15:00  Some aspects of scientific computing and data management in climate and environmental applications – Michael Lehning (EPFL)
15:30  Heating up while cooling down: How Decentralized Multi-Functional Computing Infrastructures can contribute to reaching Switzerland’s climate goals? – Berat Denizdurduran (DeepSquare)
16:00  Challenges in solving high-profile, high-impact scientific problems with HPC: the tokamak fusion reactors and the Square Kilometer Array cases – Gilles Fourestey (EPFL)
16:30 – 18:30  Poster session + networking aperitif

We stand with Ukraine

EcoCloud strongly condemns Russia’s military invasion and acts of war in Ukraine, as well as the dreadful violation of international humanitarian and human rights law. We are deeply shocked by the tragedy currently unfolding in Ukraine, and we fully support everyone affected by the war.

The EcoCloud community calls on governments to take immediate action to protect everyone in the country, particularly its civilian population and people affiliated with its universities. Now more than ever, we must promote our societal values – justice, freedom, respect, community, and responsibility – and confront this situation collectively and peacefully to end this senseless war.


ASPLOS is back – in person

After two years of virtual editions, ASPLOS is returning in person: the 2022 edition takes place in Lausanne from 28th February to 4th March.

The 2022 edition of ASPLOS marks its 40th anniversary. In 1982, ASPLOS emerged as the leading conference for researchers from a variety of software and hardware system communities to collaborate and engage technically. It has been ahead of the curve on technologies such as RISC and VLIW processors, small and large-scale multiprocessors, clusters and networks-of-workstations, optimizing compilers, RAID, and network-storage system designs.

Today, as we enter the post-Moore era, ASPLOS 2022 re-establishes itself as the premier platform for cross-layer design and optimization to address fundamental challenges in computer system design in the coming years.

We look forward to welcoming you at the SwissTech Convention Center, EPFL.

Official Website
