News

EPFL takes another step towards carbon neutrality

Today EPFL inaugurated its new heating plant, which has the capacity to heat and cool the Lausanne campus solely by drawing water from Lake Geneva and recovering excess heat from a connected data center. The ceremony was attended by local political leaders, including Vassilis Venizelos, a Vaud Canton councilor and head of the Canton’s department of youth, the environment and security.

The plant – some three years in the making – marks a major step towards the School’s goal of becoming carbon neutral. EPFL began renovating the heating and cooling facilities at its Lausanne campus in 2019 after they had become obsolete.
“Our School has long been a pioneer in making efficient use of energy resources,” says Matthias Gäumann, EPFL’s Vice President for Operations. “We installed the first lake-water-fed cooling system in the late 1970s, and in 1986 we began heating parts of the Lausanne campus with lake water, too. Thanks to the plant inaugurated today, all of the campus’ heating and cooling needs will now be met through a combination of water pumped from Lake Geneva, heat exchangers, heat recovered from a data center and solar panels. All that corresponds to 54% of the campus’ total energy requirement.” The rest of the campus’ energy comes from electricity (40%) and natural gas (just 6%); the campus uses no fuel oil.

Recovering excess heat
The new heating plant was built through a project with Bouygues. Its innovative design includes an integrated system bringing together different types of renewable energy. The sides and roof of the plant’s main building (located near a metro track) are covered entirely with solar panels, while a new pumping station makes it possible to draw water from Lake Geneva. This water – drawn from a greater depth than with the previous plant – arrives at a constant temperature of 7°C, and next-generation heat pumps then raise the temperature to 67°C through a thermodynamic cycle of compression, condensation, expansion and evaporation. The end result is significantly better energy performance.
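For a sense of the thermodynamics involved, the theoretical ceiling on such a heat pump’s efficiency can be computed from the two temperatures alone. The short Python sketch below is purely illustrative – idealized Carnot figures, not the plant’s measured performance:

    # Idealized (Carnot) coefficient of performance (COP) for a heat pump
    # lifting heat from 7 degree C lake water to a 67 degree C heating circuit.
    # Real machines achieve only a fraction of this theoretical bound.
    T_COLD = 7 + 273.15   # lake water temperature, in kelvin
    T_HOT = 67 + 273.15   # heating supply temperature, in kelvin

    cop_carnot = T_HOT / (T_HOT - T_COLD)
    print(f"Ideal COP: {cop_carnot:.1f}")  # ~5.7 units of heat per unit of electricity

The smaller the gap between source and supply temperatures, the higher the COP – one reason recovering warm data-center heat, described below, is so attractive.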
Another major advance is that the plant makes use of the excess heat generated by a data center built on top of it, through a system that was started up early this year. The server-rack doors in the center are designed to accommodate filtered industrial water cooled by lake water. The system is energy-efficient and technically quite bold due to the challenges involved in running water near electrical fittings. By using the heat recovered from cooling the servers to warm the rest of the Lausanne campus, the School can cut its power bill considerably, particularly when compared to the conventional approach of using refrigeration units to cool servers.

“A crucial step”
Looking further ahead, the plant could one day make use of a nearby composting facility that converts organic waste from neighboring parks and gardens. A digester for food waste from campus cafeterias could also be installed, eventually leading to small-scale local biogas production.
Last but not least, the new plant could also be used for research purposes. EPFL’s EcoCloud research center is working with the School’s Energy Center on a project to reduce the data center’s carbon emissions. The project entails drawing on the heating plant’s solar panels along with a battery located on campus, and setting up a system for controlling the data center directly.
“Few people in the EPFL community have ever really wondered where the energy they use to heat buildings, light up classrooms and run experiments comes from,” says Gisou van der Goot, EPFL’s Vice President for Responsible Transformation. “But times are changing, and we need to showcase our School’s energy strategy as an example worth following. Especially in today’s circumstances, it’s reassuring to know that our main campus uses no fuel oil and almost no natural gas, and runs primarily on renewable energy thanks to our new heating plant. This plant marks a crucial step towards our long-term objectives.”

Key figures
54% of the Lausanne campus’ energy comes from the heating plant, which uses water from Lake Geneva to heat and cool the entire campus
6% comes from natural gas
40% comes from electricity
218 GWh expected energy use in 2022


Author: Mediacom

Source: EPFL


Beating hackers at bug hunting with automated, far-reaching technology

On 9 December 2021, the world of IT security abruptly went into a state of shock as an alarming message spread like wildfire:

    RCE 0-day exploit found in log4j   

For the uninitiated, there is a lot to unpack here. “RCE” stands for remote code execution: similar to when somebody takes control of your computer with TeamViewer to run programs of their choosing. In this context, however, control is exerted without the consent, or even the knowledge, of the owner.

A zero-day exploit targets a major software vulnerability previously unknown to the developer, who must act quickly to release a patch because, by the time they learn of the flaw, adversaries may already be exploiting it.

The log4j library allows Java software to log (report) certain conditions, and it is used in countless Java applications. A vulnerability in it could allow an adversary to execute arbitrary code in the context of the host application.

Put it all together and you get this: at the time the above headline was published, a software library used by companies all over the world – in cloud servers, game servers and financial platforms – was already being exploited by hackers, allowing them to take control of servers and data centers.

News spread fast about the staggering vulnerability

93% of the world’s cloud services affected

According to the Wall Street Journal, “U.S. officials say hundreds of millions of devices are at risk, hackers could use the bug to steal data, install malware or take control.”

One estimate stated that the vulnerability affected 93% of enterprise cloud environments. At EPFL, all IT administrators were sent instructions to patch their server software immediately. Even Oracle Corporation, a world leader in information security, had to send out a distress call:

“Due to the severity of this vulnerability and the publication of exploit code on various sites, Oracle strongly recommends that customers apply the updates provided by [our] Security Alert as soon as possible.”

It is hard to gauge the full extent of the damage caused, but it is clear that these vulnerabilities have real-world impact: among the confirmed victims of the log4j bug are the Belgian Ministry of Defence, the UK’s National Health Service and a range of financial trading platforms. So the question arises: what are corporations like Oracle doing about it?

As a matter of fact, Oracle had already been working against this kind of vulnerability long before the log4j zero-day. The log4j library uses deserialization: a server receives structured data (a form of code and object relationships) for processing. If the checks during deserialization are insufficient and give the attacker leeway in how the data is interpreted, the result is often RCE. Identifying the vulnerabilities exposed during deserialization had long been a subject of interest to Oracle researchers when, in 2020, they reached out to Prof. Mathias Payer of EPFL’s HexHive lab:

“We had already covered fuzzing and program analysis, and had worked on cloud security as part of EPFL’s EcoCloud Center,” explains Prof. Payer, “but we had not approached these deserialization bugs. Then we got to work with Oracle Labs (part of Oracle Corporation), which provided funding via a gift. François Gauthier and Kostyantyn Vorobyov, two researchers from Oracle Labs, introduced us to the complex technical issues they were facing. We then worked together and developed a platform for discovering deserialization vulnerabilities.

“People have been attempting to find and exploit vulnerabilities in deserialization code, including Oracle’s, for years: either to gain some kind of direct advantage or to earn money by submitting bug reports. Either way, these are dedicated, manual attacks, in which the analyst thoroughly analyzes the source code of the target and then painstakingly crafts the exploit. What we have developed is a mechanism that automates the process and allows Oracle to get ahead of the attackers.

Eight moves ahead, like a chess grandmaster

“In addition to this, the bugs that we are finding can be much more complex than the ones that experts are finding manually. Most analysts are trained to search to a depth of two manipulations: an entry and a control vector. Our platform creates an abstract dependency graph for all available classes, and can do a fuzzy search to a depth of up to eight manipulations.”
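HexHive’s platform itself targets Java deserialization and is not reproduced here, but the underlying bug class is easy to demonstrate in any language. The deliberately unsafe Python sketch below shows how deserializing untrusted bytes with the standard pickle module turns into code execution:

    import os
    import pickle

    class Malicious:
        # pickle calls __reduce__ to learn how to rebuild an object;
        # a hostile payload can name any callable, such as os.system.
        def __reduce__(self):
            return (os.system, ("echo attacker-controlled code runs here",))

    payload = pickle.dumps(Malicious())  # bytes an attacker could send over the wire
    pickle.loads(payload)                # deserializing them executes the command

Real Java gadget chains are far subtler, stringing together several innocuous-looking classes – which is exactly why an automated search many manipulations deep pays off.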

The battle between IT security managers and attackers is one where the defenders hope to find bugs before the attackers do. However, Prof. Payer explains that security managers have one key advantage when it comes to using HexHive’s platform: “Although our tool is neutral, i.e., it can be used by both attackers and defenders, developers have full access to and understanding of their own code, which gives them a huge advantage over a hacker when it comes to interpreting the results. They therefore have a very good chance of finding weak points before the attacker.”

Negotiations are under way to set up internships for HexHive researchers at Oracle Corporation. “This will be good for Oracle because they will have people who actually developed some of the code on site, which will make it easier to integrate the platform into their pipeline. Another thing I appreciate is that our prototype will remain open source, and bug reports will be published.”

So long as information technology is around, the battle between security managers and hackers will rage on. Thanks to their collaboration with HexHive, however, Oracle will be able to keep one step ahead of the aggressor: faster, higher, stronger.


The Oracle Database Multilingual Engine (MLE) and APEX

Featured talk at EPFL: GraalVM’s full power in the Database

Starting with Oracle Database 21c, developers can execute JavaScript within the database. This functionality is enabled by the Multilingual Engine (MLE), powered by GraalVM, and makes Oracle APEX 20.2 the first (and only) low-code framework on the planet that natively supports server-side JavaScript out of the box.

In this talk, we will get a glimpse into the architecture of MLE, which solves the challenge of embedding a multilingual compiler and runtime (GraalVM) into a database management system (Oracle Database). We will also see how MLE enables low-code platforms like APEX to run dynamic languages like JavaScript natively in the database, both in theory and in a short practical demonstration within the Oracle Cloud Autonomous Database.

Speaker:

Dr. Lucas Braun is a Program Manager at Oracle Labs, Oracle’s research and development branch. He received his doctoral degree from the computer science department of ETH Zurich (Systems Group); cloud databases are his primary research focus. Lucas is part of the team that develops the Oracle Database Multilingual Engine (MLE), which was first released in December 2020 as part of Oracle Database 21c.

Date: Wednesday 1st June 2022
Time: 10:15
Place: CM2


Using the matrix to help Meta gear up

Just 12 months after it was created, in December 2004, one million people were active on Facebook. As of December 2021, it had an average of 1.93 billion daily active users. EPFL is engaged in a unique collaboration with Facebook’s parent company, Meta, on distributed deep learning research.

For a user base of this size, large-scale automated systems are needed to understand the user experience and ensure accuracy and success. EPFL’s Machine Learning and Optimization Laboratory (MLO), led by Professor Martin Jaggi, is actively collaborating with Meta Platforms, Inc., Facebook’s parent company, to solve this unique challenge.

With funding from EPFL’s EcoCloud Research Center, MLO collaborates with Meta through internships at the company for MLO researchers and the use by Meta of a pioneering MLO invention: PowerSGD. MLO is helping Meta to analyze and better understand millions of users’ experiences while at the same time respecting user privacy. This requires collaborative learning, that is, privacy-preserving analysis of information from many devices for the training of a neural network that gathers, and even predicts, patterns of behavior.

To do this, a key strategy is to divide the study of these patterns over “the edge”, using both the user’s device and others that sit between it and the data center, as a form of distributed training. This requires a fast flow of information and efficient analysis of the data. PowerSGD is an algorithm that compresses model updates, represented as matrices, allowing a drastic reduction in the communication required for distributed training. When applied to standard deep learning benchmarks, such as image recognition or transformer models for text, the algorithm saves up to 99% of the communication while retaining good model accuracy.
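The following PyTorch sketch illustrates the low-rank compression idea at the heart of PowerSGD. It is a simplified illustration, not Meta’s production code: the published algorithm also reuses factors across steps (warm start) and applies error feedback, both omitted here.

    import torch

    def low_rank_compress(grad: torch.Tensor, rank: int = 2):
        """Approximate an (m x n) gradient as p @ q.T using one power iteration.

        Only (m + n) * rank numbers must be communicated instead of m * n.
        """
        m, n = grad.shape
        q = torch.randn(n, rank)   # PowerSGD warm-starts this from the previous step
        p = grad @ q               # (m, rank)
        p, _ = torch.linalg.qr(p)  # orthonormalize the columns of p
        q = grad.T @ p             # (n, rank)
        return p, q                # decompress with: grad_hat = p @ q.T

    grad = torch.randn(1024, 512)  # a hypothetical layer's gradient
    p, q = low_rank_compress(grad)
    saved = 1 - (p.numel() + q.numel()) / grad.numel()
    print(f"communication saved: {saved:.1%}")  # roughly 99% for this shape and rank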

PowerSGD was used to speed up training of the XLM-R model by up to 2x. XLM-R is a critical Natural Language Processing model powering most of the text understanding services at Meta. Facebook, Instagram, WhatsApp and Workplace all rely on XLM-R for their text understanding needs. Use cases include: 1) Content Integrity: detecting hate speech, violence, bullying and harassment; 2) Topic Classification: the classification of topics enabling feed ranking of products like Facebook; 3) Business Integrity: detecting any policy violation for Ads across all products; 4) Shops: providing better product understanding and recommendations for shops.

“There are three aspects to the process. The first is to develop gradient compression algorithms to speed up the training, reducing the time required to prepare this information for its transfer to a centralized hub. The second is efficient training of the neural network within a data center – it would normally take several weeks to process all the information, but we distribute the training, reducing computation from months to days,” said MLO doctoral researcher Tao Lin.


As a third aspect, privacy is a constant factor under consideration. “We have to distinguish between knowledge and data. We need to ensure users’ privacy by making sure that our learning algorithms can extract knowledge without extracting their data and we can do this through federated learning,” continued Lin.
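The principle Lin describes can be sketched in a few lines: clients share model weights (knowledge), never raw data, and a server averages them into a global model. The setup and names below are hypothetical, not Meta’s pipeline.

    import torch

    def federated_average(client_weights: list) -> dict:
        """Average per-client model weights into a new global model."""
        avg = {name: torch.zeros_like(t) for name, t in client_weights[0].items()}
        for weights in client_weights:
            for name, t in weights.items():
                avg[name] += t / len(client_weights)
        return avg

    # Each client trains locally on its private data and uploads only its weights.
    clients = [{"layer.weight": torch.randn(8, 4)} for _ in range(3)]
    global_model = federated_average(clients)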

The PowerSGD algorithm has been gaining recognition over the last few years. The developers of the deep learning software PyTorch have included it in their software suite (PyTorch 1.10), which is used by Meta, OpenAI, Tesla and similar technology corporations that rely on artificial intelligence. The collaboration with Meta is due to run until 2023.


Related news: EPFL’s PowerSGD Algorithm Features in Software Industry

Authors: Tanya Petersen, John Maxwell

Source: EPFL

This content is distributed under a Creative Commons CC BY-SA 4.0 license. You may freely reproduce the text, videos and images it contains, provided that you indicate the author’s name and place no restrictions on the subsequent use of the content. If you would like to reproduce an illustration that does not contain the CC BY-SA notice, you must obtain approval from the author.


We stand with Ukraine

EcoCloud strongly condemns Russia’s military invasion and acts of war in Ukraine, as well as the dreadful violation of international humanitarian and human rights law. We are deeply shocked by the tragedy currently unfolding in Ukraine, and we fully support everyone affected by the war.

The EcoCloud community calls on governments to take immediate action to protect everyone in the country, particularly its civilian population and people affiliated with its universities. Now more than ever, we must promote our societal values (justice, freedom, respect, community, and responsibility) and confront this situation collectively and peacefully to end this senseless war.


ASPLOS is back – in person

After going virtual in 2020 and 2021, ASPLOS is returning to Lausanne – in person – for the 2022 edition, from 28 February to 4 March.

The 2022 edition of ASPLOS marks the conference’s 40th anniversary. ASPLOS emerged in 1982 as the leading conference for researchers from a variety of software and hardware system communities to collaborate and engage technically. It has been ahead of the curve on technologies such as RISC and VLIW processors, small and large-scale multiprocessors, clusters and networks-of-workstations, optimizing compilers, RAID, and network-storage system designs.

Today, as we enter the post-Moore era, ASPLOS 2022 re-establishes itself as the premier platform for cross-layer design and optimization to address fundamental challenges in computer system design in the coming years.

We look forward to welcoming you at the SwissTech Convention Center, EPFL.

Official Website


Heterogeneous computing creates new electrical-level vulnerabilities

Under the initiative of the armasuisse Cyber-Defence Campus, a team of EPFL scientists, including CYD Doctoral Fellow Dina Mahmoud of PARSA, recently presented the first proof of concept for undervolting-based fault injection from the programmable logic of a field-programmable gate array (FPGA) to the software executing on a processing system in the same system-on-chip (SoC). The team also proposes a number of future research directions which, if addressed, should help ensure the security of today’s heterogeneous computing systems.

Most cyberattacks, such as ransomware, exploit vulnerabilities in software. While often neglected, hardware-based attacks can be just as powerful, on top of being more difficult to patch, as the underlying vulnerability remains in the deployed hardware. Hardware attacks in which adversaries have physical access to their target devices have long been investigated. However, with the World Wide Web and the possibility of accessing computing resources remotely in the cloud, remotely controlled hardware attacks have become a reality. Examples include fault-injection attacks, which cause computation or data-manipulation errors, and side-channel attacks, which extract secrets from power or electromagnetic side channels.

With Moore’s law losing pace in recent years, customizable hardware combining various types of processing units in one heterogeneous system has become a global trend for increasing performance. Since heterogeneous computing is a relatively recent phenomenon, not all of its security vulnerabilities have been fully understood or investigated. To better understand the cybersecurity landscape of heterogeneous systems, we surveyed state-of-the-art research on electrical-level attacks and defenses, focusing on exploits that leverage vulnerabilities caused by electrical signals or their coupling. For example, demanding more power than the power supply can provide results in a lowered voltage for the entire system; this undervolting can affect the functioning of the circuits (e.g., in a computer) and cause faults. Alternatively, an adversary can monitor minute variations in the voltage waveform and use them to classify, or even fully uncover, the operations executed by the victim. Our survey, which will appear in ACM Computing Surveys, addresses electrical-level attacks on central processing units (CPUs), field-programmable gate arrays (FPGAs), and graphics processing units (GPUs), the three processing units frequently combined in heterogeneous platforms. We discuss whether electrical-level attacks targeting a single processing unit can extend to the heterogeneous system as a whole, and we highlight open research directions necessary for ensuring the security of these systems in the future.
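The undervolting mechanism comes down to simple IR-drop arithmetic on a shared power rail, as the toy Python model below shows. All numbers are invented for illustration; real power-delivery networks are far more complex:

    # Toy model: every block on a shared power rail sees the same IR drop,
    # so one greedy consumer can push the whole chip below its safe voltage.
    V_NOMINAL = 0.85  # nominal core voltage, volts (illustrative)
    R_PDN = 0.002     # effective power-delivery-network resistance, ohms
    V_MIN = 0.80      # below this, logic may miss timing and fault

    def rail_voltage(current_draw_a: float) -> float:
        return V_NOMINAL - current_draw_a * R_PDN

    for amps in (5, 15, 30):  # e.g., FPGA "power-waster" circuits ramping up
        v = rail_voltage(amps)
        print(f"{amps:>2} A -> {v:.3f} V {'ok' if v >= V_MIN else 'FAULT RISK'}")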

In the survey, we discuss a number of system-level vulnerabilities that have not yet been investigated. One of the open research questions we highlight is the possibility of inter-component fault-injection attacks. In our subsequent work, which will be presented in March at the Design, Automation and Test in Europe conference (DATE 2022), we demonstrate the feasibility of such an attack: we show the first undervolting attack in which circuits implemented in the FPGA programmable logic act as the aggressor while the CPU, residing on the same system-on-chip, is the victim. We program the FPGA with malicious hardware circuits that force it to draw considerable current and cause a drop in the power supply voltage. Since the power supply is shared, the voltage drop propagates across the entire chip. As a result, the computation performed by the CPU faults. If exploited in a remote setting, this attack can lead to denial of service or a data breach. With these findings, we further confirm the need for continued research on the security of heterogeneous systems in order to prevent such attacks.

 

Funding: The CYD Fellowships are supported by armasuisse Science and Technology.

References: Mahmoud, Dina Gamaleldin Ahmed Shawky; Hussein, Samah; Lenders, Vincent; Stojilović, Mirjana: FPGA-to-CPU Undervolting Attacks. 25th Design, Automation and Test in Europe – DATE 2022, Antwerp, Belgium [Virtual], March 14-23, 2022: https://infoscience.epfl.ch/record/291432?ln=en

Mahmoud, Dina G.; Lenders, Vincent; Stojilović, Mirjana: Electrical-Level Attacks on CPUs, FPGAs, and GPUs: Survey and Implications in the Heterogeneous Era. ACM Computing Surveys, Volume 55, Issue 3, April 2022, Article No.: 58. DOI: 10.1145/3498337



Intel funds EcoCloud Midgard-based research

An exciting new development for Midgard, a novel re-envisioning of the virtual memory abstraction ubiquitous in computer systems, sees a tech leader funding research that brings together experts from Yale, the University of Edinburgh and EcoCloud at EPFL.

Global semiconductor manufacturer Intel is sponsoring an EcoCloud-led project entitled “Virtual Memory for Post-Moore Servers”, part of the company’s wider research into improving power performance and total cost of ownership for servers in large-scale datacenters.

What is the Post-Moore era?

Moore’s law was conceived in 1965 by Gordon Moore, who would later become CEO of Intel. Moore predicted that the number of transistors in a dense integrated circuit would double roughly every two years. The prediction has been remarkably accurate up to now, but we are reaching the stage where physical limitations will curtail this pattern within the next couple of years: we are approaching the “Post-Moore era”. Many observers nevertheless remain optimistic about continued technological progress in a variety of other areas, including new chip architectures, quantum computing and artificial intelligence.

Midgard is a radical new technology that optimizes memory in data centers with an innovative, highly efficient namespace for access and protection control. Efficient memory protection is a foundation for virtualization, confidential computing, the use of accelerators, and emerging computing paradigms such as serverless computing.

The Midgard layer is an intermediate stratum that makes possible staggering performance gains for data servers as memory grows, and it is compatible with modern operating systems such as Linux, Android, macOS and Windows.

The Virtual Memory for Post-Moore Servers project aims to disrupt traditional server technology, targeting full-stack evaluation and hardware/software co-design based on Midgard’s radical approach to virtual memory.

Midgard is a consortium of the following principal investigators at EcoCloud, University of Edinburgh and Yale:
David Atienza, Abhishek Bhattacharjee, Babak Falsafi, Boris Grot and Mathias Payer.

Link to the Midgard Website

 


Compusapien: More computing, less energy


Today’s data centres have an efficiency problem – much of their energy is used not to process data, but to keep the servers cool. A new server architecture under development by the EU-funded COMPUSAPIEN project could solve this.

As the digital revolution continues to accelerate, so too does our demand for more computing power. Unfortunately, current semiconductor technology is energy-inefficient, and so are the servers and cloud technologies that depend on it. In fact, as much as 40% of a server’s energy is used just to keep it cool. “This problem is aggravated by the fact that the complex design of the modern server results in a high operating temperature,” says David Atienza Alonso, who heads the Embedded Systems Laboratory (ESL) at the Swiss Federal Institute of Technology Lausanne (EPFL). “As a result, servers cannot be operated at their full potential without the risk of overheating and system failures.”

To tackle this problem, the EU has issued several policies addressing the increasing energy consumption of data centres, including the JRC’s EU Code of Conduct for Data Centres. According to Atienza Alonso, meeting the goals of these policies requires an overhaul of computing server architecture and of the metrics used to measure its efficiency – which is exactly what the COMPUSAPIEN (Computing Server Architecture with Joint Power and Cooling Integration at the Nanoscale) project aims to do. “The project intends to completely revise the current computing server architecture to drastically improve its energy efficiency and that of the data centres it serves,” explains Atienza Alonso, who serves as the project’s principal investigator.

Cooling conundrum

At the heart of the project, which is supported by the European Research Council, is a disruptive, 3D architecture that can overcome the worst-case power and cooling issues that have plagued servers. What makes this design so unique is its use of a heterogeneous, many-core architecture template with an integrated on-chip microfluidic fuel cell network, which allows the server to simultaneously provide both cooling and power. According to Atienza Alonso, this design represents the ultimate solution to the server cooling conundrum. “This integrated, 3D cooling approach, which uses tiny microfluidic channels to both cool servers and convert heat into electricity, has proved to be very effective,” he says. “This guarantees that 3D many-core server chips built with the latest nanometre-scale process technologies will not overheat and stop working.”

A greener cloud

Atienza Alonso estimates that the new 3D heterogeneous computing architecture template, which recycles the energy spent on cooling through the integrated micro-fluidic fuel cell array (FCA) channels, could recover 30-40% of the energy typically consumed by data centres. With more gains expected as FCA technology improves, the energy consumption (and environmental impact) of a data centre will be drastically reduced, with more computing done using the same amount of energy. “Thanks to the integration of new optimised computing architectures and accelerators, the next generation of workloads on the cloud (e.g. deep learning) can be executed much more efficiently,” adds Atienza Alonso. “As a result, servers in data centres can serve many more applications using much less energy, thus dramatically reducing the carbon footprint of the IT and cloud computing sector.”
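A back-of-envelope calculation using the article’s own figures shows the scale of the opportunity; the data-centre size below is a hypothetical example:

    # 30-40% of a data centre's energy could be recovered via the FCA channels.
    annual_consumption_gwh = 100  # hypothetical data centre
    for recovery_rate in (0.30, 0.40):
        recovered = annual_consumption_gwh * recovery_rate
        print(f"at {recovery_rate:.0%} recovery: {recovered:.0f} GWh per year")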

 

ORIGINALLY PUBLISHED:
https://cordis.europa.eu/article/id/435313-more-computing-less-energy
