News

VLDB Women in Database Research Award 2020

Celebrating the Woman Researcher: Anastasia Ailamaki Receives VLDB Women in Database Research Award 2020

In a major recognition of Swiss innovation and excellence in database research, the VLDB Endowment has conferred the prestigious VLDB Women in Database Research Award on Anastasia Ailamaki, EPFL professor and co-founder of RAW Labs. In announcing the award, VLDB acknowledged Ailamaki’s “pioneering research on the interaction between hardware micro-architecture and database engine performance.”

Even as a budding researcher during her undergraduate days at a computer engineering school in Greece, Anastasia Ailamaki stood out in a field with little gender diversity: she was one of only 9 female students in a class of 154. Although she never encountered gender discrimination in those days, she admits that it was difficult for her to find a job as a system developer or network manager. That turned out to be a blessing in disguise, because it encouraged her to pursue her PhD. Today, she is a leading figure both as head of EPFL’s Data Intensive Applications and Systems Laboratory (DIAS) and for her excellence in database research. Still, she urges women researchers not to let thoughts of gender predominate, because it’s not about “women or men in science, but just great scientists.”

At EPFL, Ailamaki has worked on data-intensive systems and applications, particularly the interaction between database software and emerging hardware and I/O devices, and the automation of data management to support computationally demanding, data-intensive scientific applications. She is currently developing real-time analytics infrastructures (or real-time intelligent systems) that incorporate change as a core premise.

Reacting to the VLDB award, Ailamaki said that her students are her biggest achievement:

“It’s always humbling and fabulous to win awards and I have been fortunate to have my work recognized often, but I feel that my biggest achievement, and most important contribution to my research field, are my students and post-docs. I enjoy working with young people very much and I learn from them as much as I hope that they learn from me.”

For Ailamaki, life is often an extension of her computer science approach, and she applies her systematic computational and algorithmic thinking processes to make decisions in her everyday life. She believes that “when you make a decision with all the data at hand, there are no regrets.”

September had yet another achievement in store for Ailamaki; she was elected as a member of Academia Europaea, an association that aims to advance excellence in science and research for public benefit and education.

Professor Ailamaki’s achievement is yet another distinguished addition to EPFL’s long list of impressive accolades.



Data Centers Need to Consider their Carbon Footprint

Digital technology is running up against its physical limits. One solution is to build more data centers – but that needs to go hand in hand with a reduction in their carbon footprint.

For reasons you can imagine, much of what we used to do in the physical world is now being done virtually. That’s having an effect on energy-related pollution – CO2 emissions from manufacturing and transportation have fallen drastically, for example. But there has been a concomitant increase in energy use for digital services. Exact numbers aren’t yet available, but according to Babak Falsafi, the director of EPFL’s EcoCloud center, the trend is clear. “Behind every digital service we use lies a data center. And we’re heading towards a world where everything is done digitally, across all markets and industries.”

Falsafi continues: “A lot of business activities have been shifted online because of the pandemic, causing a huge surge in demand, mainly for video. Non-work-related demand for streaming has also exploded. What’s more, today’s ultra-high-resolution screens use up a lot of energy. People don’t understand everything that’s involved in watching a movie in 8k – a lot of power is needed for all that data processing and transfer. You put that all together and it’s huge!”

Relentless rise in demand

The current situation is set to last for a while longer: it’ll be weeks, or probably months, before a vaccine is ready – and that’s without factoring in a second wave. Many organizations, including schools and universities, have announced that they will keep holding classes online, at least partly. But the issue of data-center-related emissions was already a pressing one before the pandemic. “New technology like the internet of things, artificial intelligence, 5G and 4k televisions – which are now going to 8k – has pumped up demand, and therefore energy use,” says Falsafi. According to an article in MIT Technology Review last year, training a single Transformer artificial intelligence model can generate as much carbon as five American cars emit over their useful lives. In another example, Netflix announced that its electricity use jumped 84% in 2019 to 451,000 megawatt-hours – enough to power 40,000 homes in the United States for a year.

Some studies predict that digital technology will account for 8% of global electricity use by 2030 – up from 3–5% today – and 4% of CO2 emissions. This includes data centers, those huge buildings that house the servers we use to store, process, analyze and transfer data on the cloud (the biggest data centers already consume hundreds of megawatts). It also includes, in equal measure, the telecommunication systems that transport those data. Consumer electronics and the energy used to build computing facilities also play a role, albeit a smaller one.

The end of Moore’s law

While demand is skyrocketing, supply is bumping up against a ceiling. Moore’s law, which states that the number of transistors on a silicon chip doubles roughly every two years, has pretty much expired. We can’t keep packing more computing power onto chips like we’ve been doing over the past 50 years. The two options currently available are to build new data centers or expand existing ones. The data-center construction market is expected to swell to $57 billion over the next five years.

Who’s responsible for keeping a lid on digital-related emissions? “Nobody!” replies Falsafi. “Nobody is being held accountable for those emissions. We pay telecom operators for our internet connections, for example, but the services we use on the internet – like Google and Facebook – are free. And those service providers aim to collect as much data about us as possible to improve their services. At no point in this arrangement are carbon emissions taken into account, since power use is measured at data centers and not on telecom networks.” Edouard Bugnion, EPFL’s Vice President for Information Systems, adds: “Data centers are basically technological advancement wrapped up in a consumable format. They are the vector by which cyberspace can develop. Google wouldn’t exist without data centers. Neither would much of the research conducted at EPFL.”

Towards more sustainable data centers

Engineers at EPFL’s EcoCloud have been working since 2011 to find a way around the supply ceiling. Their approach involves not only making data centers more efficient, but also reducing their carbon footprint. Nobody cared much about this latter aspect when the centers were first being built – but times have changed. “There are three things that need to be factored into the equation. First, the energy efficiency of data centers, which needs to be improved. Second, the CO2 that’s emitted to run them, which can be reduced by switching to renewable energy. And third, the energy the servers give off in the form of heat – can’t we do more with that heat than open a window and warm up the parking lot?” says Bugnion.

EcoCloud, along with the other members of the Swiss Datacenter Efficiency Association (SDEA), introduced an energy-efficiency certification system in early 2020. Their program – called the SDEA Label (see article) – quantifies how much CO2 per kWh data centers emit, with the goal of encouraging operators to use renewable energy. EcoCloud also scouts opportunities for EPFL labs to work with businesses to develop advanced systems for cooling, energy management, energy storage and power generation – all within a local innovation ecosystem designed to help data center operators shrink their carbon footprint.

New Certification System Encourages Greener Data Centers


EPFL, along with other members of a tech-industry consortium, introduced the world’s first energy-efficiency certification system for data centers in January.

“When we created EcoCloud in 2011,” says director Babak Falsafi, “the goal was to cut data centers’ energy use and CO2 emissions – at the time, IT industry heavyweights cared only about the financial and business aspects. We developed pioneering technology that brought renewable energy into the data center ecosystem.” His research center aims to spur innovation across the ICT sector – from algorithms to infrastructure – to help meet today’s major challenges.

And data centers will play a growing role in those challenges as people rely more and more on digital technology. The amount of power consumed by data centers is set to expand rapidly, and by 2030 could account for 8% of global electricity use.

Making data centers carbon-free

To help keep that electricity use in check, a consortium of Swiss tech-industry organizations created the Swiss Datacenter Efficiency Association (SDEA). The initiative was spearheaded by digitalswitzerland and Hewlett Packard Enterprise (HPE); members include EcoCloud, HPE, Green IT Switzerland, the Lucerne University of Applied Sciences and Arts (HSLU), the Swiss datacenter association (Vigiswiss) and the Swiss telecom industry association (ASUT). The initiative is also being supported by the Swiss Federal Office of Energy (SFOE) through its SwissEnergy program.

In January, the SDEA introduced a green certification system specifically for data centers. The system involves calculating data centers’ carbon footprint based on the energy efficiency of the building and IT equipment, as well as the IT equipment’s power load. “Until now, there was no way to measure data centers’ impact on CO2 emissions,” says Falsafi. “Our certification system is unique because it also factors in the source of the power used and how well heat is recovered. Everything is connected – if a data center uses renewable energy, its performance improves.”

The SDEA uses three certification levels (bronze, silver and gold) to encourage data center operators to cut their power consumption. Pilot tests at ten sites in Switzerland show that the SDEA’s “toolkit” effectively takes into account their efforts to shift in full or in part to renewable energy.
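To make the label concrete, here is a minimal sketch of how a certification score of this kind could be computed from the factors the article names: facility efficiency, IT load, the carbon intensity of the power source, and heat recovery. The weights, thresholds, and tier boundaries are illustrative assumptions, not the SDEA’s actual methodology.

```python
# Hypothetical sketch of a data-center carbon score in the spirit of the SDEA
# label. All constants and tier cut-offs below are invented for illustration.

def carbon_footprint_kg(it_load_kw: float, pue: float,
                        grid_kg_per_kwh: float,
                        heat_recovery_fraction: float,
                        hours: float = 8760.0) -> float:
    """Annual CO2 in kg: total facility energy times grid carbon intensity,
    with credit for heat that is recovered instead of wasted."""
    facility_kwh = it_load_kw * pue * hours               # IT load plus overheads
    recovered_kwh = facility_kwh * heat_recovery_fraction
    return (facility_kwh - recovered_kwh) * grid_kg_per_kwh

def tier(footprint_kg: float, it_load_kw: float) -> str:
    """Map kg CO2 per kW of IT load to an illustrative bronze/silver/gold tier."""
    per_kw = footprint_kg / it_load_kw
    return "gold" if per_kw < 500 else "silver" if per_kw < 2000 else "bronze"

# A 1 MW facility on mostly renewable power (0.05 kg CO2/kWh), 30% heat reuse:
fp = carbon_footprint_kg(1000, pue=1.2, grid_kg_per_kwh=0.05,
                         heat_recovery_fraction=0.3)
print(f"{fp:,.0f} kg CO2/year -> {tier(fp, 1000)}")       # gold under these assumptions
```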

Arriving at just the right time

Since computer processors are reaching their maximum physical capacity, the only solution for managing the surging amount of data is to build more data centers. “Our certification system comes at just the right time,” says Falsafi. “We hope that it will encourage data center operators to build facilities that run on renewable energy, and stimulate innovation and investment in this field.”

Servers Designed to Work like Humans


David Atienza believes that when it comes to IT systems, everything that can be done locally, should be. That includes processing data where they are generated – thereby substantially reducing the amount of power required.

EPFL’s Embedded Systems Laboratory (ESL) is studying two major energy-related problems with servers. The first is that they aren’t being used anywhere near their maximum capacity. Actual use is closer to 60%, according to ESL head David Atienza. “Servers are designed for tasks that require a lot of processing power – such as running neural networks – but they’re being used mainly for watching movies or sending pictures via chat,” he says. As a result, they overheat. “It’s like driving a Ferrari at 40 km/h – it would burn up a lot more energy at that speed than a Twingo would,” he adds.

The problem is that even if the servers are the only equipment that overheats in a data center, operators still have to cool the whole center. To help find a solution, Atienza is working on the Compusapien research project, for which he received an ERC Consolidator Grant in 2016. His team found that cooling servers locally can cut a data center’s power requirement by 40%. They worked with engineers from IBM to develop a system where cooling water is used to lower the temperature of individual servers, as opposed to running fans to cool the entire room. With this system, heat is recovered in the cooling water and reused. The water runs through microfluidic channels that are just 50–100 µm high and sandwiched between two layers on a cooling plate. As the water absorbs heat from the servers, it transfers it to microfluidic fuel cells where it’s converted into electricity. The electricity can then be fed back to the servers as power, reducing the amount of power that the data center draws from the grid.
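As a rough back-of-the-envelope illustration of why this pays off, the sketch below compares a conventionally cooled rack with one that is cooled locally and feeds recovered electricity back. The cooling overhead, pump power, and fuel-cell efficiency are assumed round numbers, not measurements from the Compusapien prototype.

```python
# Toy energy balance: conventional room cooling versus local liquid cooling
# with partial heat-to-electricity recovery. All percentages are assumptions.

def net_grid_power(server_kw: float,
                   room_cooling_overhead: float = 0.40,   # assumed fan/chiller cost
                   pump_overhead: float = 0.05,           # assumed microfluidic pumps
                   fuel_cell_efficiency: float = 0.10):   # assumed heat-to-power yield
    baseline = server_kw * (1 + room_cooling_overhead)    # servers + room cooling
    recovered = server_kw * fuel_cell_efficiency          # heat fed back as power
    with_recovery = server_kw * (1 + pump_overhead) - recovered
    saving = 100 * (1 - with_recovery / baseline)
    return baseline, with_recovery, saving

base, local, pct = net_grid_power(100.0)
print(f"{base:.0f} kW -> {local:.0f} kW ({pct:.0f}% less from the grid)")
```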

Processing data locally

“The human brain works the same way. Blood carries nutrients to the brain and cools it. It’s just that with servers, the process is a little more complicated!” says Atienza. While a lot of data centers already use cooling water, his is the first system to use microfluidic fuel cells to recover heat and turn it into electricity. The technology – nicknamed “electronic blood” – was tested in a 3D prototype developed in association with IBM, and proved to be feasible from a technical point of view. Now ESL engineers are building a 3D version of the integrated system and plan to develop a full server through a joint project with another company.

The second component of Atienza’s approach is to process data as locally as possible since transmitting them takes up a lot of energy. One example of this approach is a next-generation Nespresso machine that the ESL team developed. Their machine uses an embedded artificial intelligence system to manage maintenance and restocking completely on its own. “More and more applications – especially those for smartphones – operate locally and don’t go through data centers,” says Atienza. “That’s a lot like the human body. Our bodies have lots of tiny modules that can carry out two or three simple functions; the brain gets involved only when important decisions need to be made. That’s a lot more efficient than today’s data centers where everything runs all the time.”

Using Mathematics to Manage Big Data


Most organizations run a huge variety of computer software and hardware, which bogs down their IT systems and wastes energy. But Anastasia Ailamaki has found a solution that works by giving all the different system components a common language.

“Sustainability means coming up with solutions for problems as efficiently as possible while using fewer resources,” says Anastasia Ailamaki, a professor at EPFL’s Data Intensive Applications & Systems Laboratory (DIAS) and founder of local startup RAW Labs SA. “And so we can say that our research is directly related to sustainability.” The engineers at DIAS are developing a data management system that makes as much use as possible of an organization’s existing hardware and software – a real challenge given the wide variety of hardware and software out there.

“Hardware that’s turned on but not used is a waste of energy,” says Ailamaki. In the same way that our bodies burn calories even when we’re just sitting there, computers burn a considerable amount of energy even when they’re idling. “Most computers are used to only 20% of their potential. It’s as if you filled up your fridge with food, let it sit there until it goes bad, and then complained that you don’t have anything to eat,” says Ailamaki. The same holds true for software. She explains: “Today most people use only 10% of the data they store. But before data can be stored, they have to be saved onto a server and standardized – and you generally need to know what you want to do with them afterwards.” She offers this example: “Suppose you have a series of interviews saved in different Word files, along with an Excel file listing all the companies that invest in EPFL startups. Imagine you wanted to search all the files and find the names of people who mention sustainability in their interview and who have invested in an EPFL startup. You couldn’t do it. The data would have to be stored in a database for that kind of query. And because the data are stored in two different formats – text and table – you’d have to standardize them before you could save them in the database.”

Ordering data à la carte

Ailamaki’s approach involves formatting the data even before this process begins. “Instead of standardizing the data, our system recognizes what kind they are and gives them a mathematical format based on how they will be searched. Then when it comes time to search the data, our program generates the exact code needed to execute the query – one query at a time,” she says. However, one potential drawback to this approach is that it’s significantly slower than conventional methods where data are already saved in a database and searches are done directly. But Ailamaki’s team found a solution for that, too. “Our system uses artificial intelligence and machine learning to remember the kinds of queries performed. It stores all the work done previously – much like a cache that lets programs respond much more quickly to queries of the same dataset,” she says.
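A minimal sketch can make the idea tangible: the toy engine below picks a reader for each raw file at query time and memoizes results, with a simple cache standing in for the machine-learning-driven one described above. It is an illustration of the concept only, not RAW Labs’ engine.

```python
# Toy just-in-time query engine over raw files: recognize the data's kind,
# build an access path per query, and cache results for repeated queries.
import csv, functools, pathlib

def make_reader(path: pathlib.Path):
    """Recognize the kind of data and return a row iterator factory for it."""
    if path.suffix == ".csv":
        return lambda: csv.DictReader(path.open())
    if path.suffix == ".txt":
        return lambda: ({"line": line.rstrip()} for line in path.open())
    raise ValueError(f"no reader for {path.suffix}")

@functools.lru_cache(maxsize=None)   # stands in for the learned query cache
def query(path_str: str, keyword: str) -> tuple:
    rows = make_reader(pathlib.Path(path_str))()
    return tuple(r for r in rows if keyword.lower() in str(r).lower())

# The first call scans the raw file; a repeated call is served from the cache:
# hits = query("interviews/session1.txt", "sustainability")
```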

The system, with its just-in-time approach, allows users to search any type of data in any way, combine data from any source and create a cache of the most frequently used data. It’s similar to ordering à la carte. And the system can be used with any kind of data, hardware or application since it doesn’t rely on code, but mathematics. RAW Labs used this approach to develop its RAW technology for combining different types of data on the fly and generating important information in a ready-to-use format for both businesses and consumers.

When it comes to hardware, the DIAS engineers are taking the same approach. Their just-in-time code-generation method can develop a program for mapping out hardware’s properties on the fly and helping organizations run their IT equipment more efficiently. “With our system, organizations can use their computers at up to 80% of their potential,” says Ailamaki.

IT ASSETS

EPFL gets a new data center

EPFL is building a new data center as part of the upgrade to its thermal power station. The high-density center, which will eventually have 3 megawatts of capacity, will be used to store, manage and process data collected by EPFL scientists during their lab experiments. Its sides and roof will be covered entirely in solar panels, and the heat generated by its servers will be recovered and used in the new power station. It’s scheduled to go into service in the second half of 2021.


Source: EPFL

Carmela Troncoso among Fortune’s 40 Under 40

Each year, Fortune magazine recognizes 40 top influencers or emerging leaders under the age of 40. In a departure from tradition, the magazine acknowledged the monumental challenges and changes witnessed this year by highlighting 40 influential people in each of five categories instead of one: finance, technology, healthcare, government and politics, and media and entertainment. Carmela Troncoso, head of the Security and Privacy Engineering Lab (SPRING) in EPFL’s School of Computer and Communication Sciences (IC), figures in the technology category for her leading role in building the Decentralized Privacy-Preserving Proximity Tracing system, or DP-3T.

DP-3T stores temporary, anonymized contact data on a user’s phone, rather than on a central server, making hacks or misuse much harder. The system’s design helped guide Apple and Google’s development of a shared contact-tracing protocol, which is now being used by COVID-19 tracing apps across Europe and the U.S. DP-3T is the basis for SwissCovid, a tracing app that serves as a useful tool in stemming the spread of the disease in Switzerland.
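The decentralized design can be sketched in a few lines: each phone rotates a daily secret key with a hash chain and derives the day’s broadcast identifiers from it, so beacons are unlinkable and an infected user reveals only keys, never locations or contacts. The sketch below follows the structure of the published DP-3T low-cost design, but substitutes HMAC-SHA256 for the AES-based generator to stay within Python’s standard library.

```python
# Simplified DP-3T-style key rotation and ephemeral ID derivation.
import hashlib, hmac

def next_day_key(sk: bytes) -> bytes:
    """Rotate the daily secret key with a hash chain: SK_t = H(SK_{t-1})."""
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk: bytes, n: int = 96) -> list:
    """Derive n unlinkable 16-byte broadcast IDs from the day's key."""
    seed = hmac.new(sk, b"broadcast key", hashlib.sha256).digest()
    return [hmac.new(seed, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
            for i in range(n)]

sk = hashlib.sha256(b"initial secret").digest()
print(ephemeral_ids(sk)[0].hex())   # first beacon of the day, rotated every ~15 min
sk = next_day_key(sk)               # tomorrow's key; past IDs stay unlinkable
```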

The work done by Troncoso and her team is extraordinary because privacy protection, a major concern for tech users, is at the core of DP-3T. Privacy has always topped Troncoso’s agenda. In a statement to Schweizer Illustrierte a few months ago, she voiced those very concerns: “The technological and social challenges [around the protection of privacy in IT systems] give me sleepless nights.” The result is DP-3T, a secure and privacy-preserving system that is playing a crucial role in fighting the pandemic.

Her inclusion in Fortune’s 40 Under 40 gives Troncoso a platform to showcase the SPRING lab’s work on alleviating the negative impact of technology on society, such as privacy concerns, by presenting purely system-based solutions rather than data-driven platforms. She believes the recognition by Fortune vindicates that approach. In her own words: “The SwissCovid tracing app is also purely a systems solution, it has no data. For the first time we have governments that have gone for data-less solutions and the fact that Fortune has recognized this paradigm change is key for privacy.”



Novel Frequency Division Technique to Generate Low-Noise Microwave Signals

Researchers at EPFL’s Laboratory of Photonics and Quantum Measurements (K-Lab), Trinity College Dublin (TCD), and Dublin City University (DCU) have teamed up to develop a new technique for generating variable low-noise microwaves with a single optical microresonator. The paper was recently published in Science Advances.

Optical frequency combs (OFCs) based on femtosecond pulse lasers have the potential to revolutionize the fields of optical metrology and spectroscopy. The development of frequency division techniques has made it possible to synthesize microwaves with the lowest phase-noise levels by photodetecting pulse trains. However, the use of mode-locked laser-based OFCs has been limited to the laboratory due to their unwieldy size, high power consumption, and delicate structure. Although some approaches have been proposed to make OFCs field-deployable, they have limitations that prevent wider application.

The new research proposes a frequency division scheme in which two compact frequency combs (a soliton microcomb and a semiconductor gain-switched comb) are combined to demonstrate low-noise microwave generation. Using the technique, the team generated microwaves with much lower phase-noise levels than those of a microresonator frequency comb oscillator and off-the-shelf microwave oscillators.
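The benefit of frequency division is easy to quantify: dividing a carrier’s frequency by N divides its phase excursions by N as well, which lowers the phase-noise power spectral density by 20·log10(N) dB. A quick numeric check, using round example frequencies rather than the paper’s exact values:

```python
# Phase-noise improvement from ideal frequency division by N = f_in / f_out.
import math

def division_gain_db(f_in_hz: float, f_out_hz: float) -> float:
    return 20 * math.log10(f_in_hz / f_out_hz)

# Dividing a 100 GHz repetition rate down to a 10 GHz microwave:
print(f"{division_gain_db(100e9, 10e9):.1f} dB lower phase noise")   # 20.0 dB
```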

The technique presented by the authors enables spectral purity transfer between different microwave signals. Lead author Wenle Weng explains:

“Traditionally, executing perfect microwave frequency division in a variable fashion has not been easy. Thanks to the fast-modulated semiconductor laser developed by our colleagues at TCD and DCU, now we can achieve this using a low-cost photodetector and a moderate control system.”

While the traditional optical injection locking method uses a continuous-wave (CW) laser as the master, the new scheme locks a semiconductor laser to the entire microcomb, transferring both the carrier phase coherence and the soliton repetition rate spectral purity to the gain-switched laser (GSL). Consequently, the GSL can generate additional comb teeth that are fully coherent and equally spaced, facilitating the application of high-repetition rate microcombs in metrology and spectroscopy.

Portable and amenable to mass production, the variable microwave oscillator and frequency comb generator developed by the team could revolutionize the market for portable low-noise microwave and frequency comb sources.

The research was funded by the Swiss National Science Foundation; the Defense Advanced Research Projects Agency, Defense Sciences Office (US); Science Foundation Ireland (SFI); and the SFI/European Regional Development Fund.



3D-ICE Thermal Modeling Research Wins Retrospective Most Influential Paper Award

Given the fast pace of research, very few scientific studies stand the test of time. Even rarer is a study that continues to influence research a decade after its first publication. That distinction goes to “3D-ICE: Fast Compact Transient Thermal Modeling for 3D ICs with Inter-Tier Liquid Cooling,” a paper presented at the 2010 IEEE/ACM International Conference on Computer-Aided Design (ICCAD). It has been selected as the winner of the ICCAD 2020 Ten Year Retrospective Most Influential Paper Award, one of the most prestigious honors in the Electronic Design Automation (EDA) community, recognizing the industrial and academic relevance of a technical paper.

The ICCAD Executive Committee has recognized this work on the open-source 3D Interlayer Cooling Emulator (3D-ICE) tool (https://www.epfl.ch/labs/esl/research/open-source-software-projects/3d-ice/) for compact transient thermal modeling of 2D/3D multi-processor systems-on-chip (MPSoCs) with liquid cooling as “the most influential on research and industrial practice in computer-aided design of integrated circuits over the ten years since its original appearance at ICCAD.” The authors of the paper are Arvind Sridhar, Alessandro Vincenzi, Martino Ruggiero, David Atienza (all from the Embedded Systems Laboratory – ESL at EPFL), and Thomas Brunschwiler (IBM Zurich Research Laboratory).

As the researchers argue in their paper, the vertical integration of high-performance integrated circuits into 3D stacks (3D ICs) is thermally demanding, since the effective areal heat dissipation increases with the number of stacked dies, producing high chip temperatures. Inter-tier integrated microchannel cooling is a promising and scalable solution to this thermal challenge. However, a robust 3D IC design and its subsequent thermal management require accurate modeling of how liquid cooling, compared with other cooling solutions, affects the thermal behavior of the IC. The authors therefore developed 3D-ICE, a compact transient thermal model (CTTM) that accounts for the non-linear thermal properties of the liquids and nano-scale materials used in 2D and 3D MPSoC architectures with multiple inter-tier microchannel liquid-cooling layers. The model offers significant speed-up over typical commercial computational fluid dynamics simulation tools while preserving accuracy. Based on 3D-ICE, the study presented a thermal simulator that runs in parallel on multicore architectures and can be further parallelized on GPUs, offering additional savings in simulation time and higher efficiency.
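To give a sense of what a compact transient thermal model is, the toy below steps a one-dimensional chain of thermal cells forward in time from a power map, the same equivalent-circuit idea that 3D-ICE scales up to full 3D stacks with microchannel layers. All constants here are illustrative, and the real tool’s numerics are far more sophisticated.

```python
# Minimal compact transient thermal model: cells with thermal capacitance,
# conductances to neighbours and ambient, stepped with explicit Euler.
import numpy as np

def simulate(power_w, c_j_per_k=0.01, g_w_per_k=0.5,
             t_amb=300.0, dt=1e-3, steps=2000):
    power = np.asarray(power_w, dtype=float)
    temp = np.full(len(power), t_amb)
    for _ in range(steps):
        flow = np.zeros_like(temp)
        flow[:-1] += g_w_per_k * (temp[1:] - temp[:-1])   # heat from right neighbour
        flow[1:] += g_w_per_k * (temp[:-1] - temp[1:])    # heat from left neighbour
        flow[0] += g_w_per_k * (t_amb - temp[0])          # ends leak to ambient
        flow[-1] += g_w_per_k * (t_amb - temp[-1])
        temp += dt / c_j_per_k * (power + flow)
    return temp

print(simulate([0.5, 2.0, 0.5]))   # ~[303, 305, 303] K: hotspot in the middle cell
```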

3D-ICE is a Linux-based thermal emulation library written in C that performs transient thermal analyses of vertically stacked 3D integrated circuits with inter-tier microchannel liquid cooling. The tool, developed at EPFL, was used in the design of Aquasar (the first chip-level water-cooled server, by IBM). A decade after it was developed, 3D-ICE remains the go-to tool for more than 1,500 teams worldwide.

The ICCAD award recognizes the worldwide relevance of EPFL’s work on micro-electronics and MPSoC thermal-aware design. It will be presented on November 2 at the opening session of the Virtual Event of ICCAD 2020.



Black-box Estimation of the Bayes Risk Using ML Methods

One of the key research problems in the domain of quantitative information flow (QIF) is to effectively estimate information leaks in a system in order to prevent adversarial attacks. Most existing approaches are white-box, but such approaches are often impractical due to the size or complexity of a system’s internals, or the presence of unknown factors. These and other challenges have shifted the focus to methods for measuring a system’s leakage in a black-box manner.

Thus far, the only approach to black-box estimation has been founded on the frequentist paradigm, which neither scales to real-world problems nor applies to systems with continuous outputs (e.g., time side channels, network traffic). To address this, EPFL’s Giovanni Cherubin and coauthors have proposed to exploit an analogy between machine learning (ML) and black-box leakage estimation, showing that the Bayes risk of a system can be estimated using a class of ML methods.

In their paper, presented at the IEEE Symposium on Security and Privacy (2019), the researchers established a fundamental equivalence between ML and black-box leakage estimation, demonstrating that any ML rule from a certain class (the universally consistent rules) can be used to estimate the leakage of a system with arbitrary precision. More specifically, their work builds on the nearest neighbor principle, which significantly reduces the number of black-box queries required for a precise estimate and exploits a metric on the output space to achieve considerably faster convergence than frequentist approaches. The research adds a completely new class of estimators that can be used in practical applications.
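A stripped-down version of the idea, run against a toy channel, shows the mechanics: sample (secret, output) pairs from the black box, fit a nearest-neighbor classifier, and read its held-out error as an estimate of the Bayes risk, the best possible adversary’s error. The channel and all parameters below are invented for illustration; the authors’ tool additionally runs frequentist estimates and keeps whichever converges faster, as described next.

```python
# Black-box Bayes risk estimation with a k-NN rule on a toy noisy channel.
import random
from collections import Counter

def channel(secret: int) -> float:
    """Toy black-box system whose continuous output noisily leaks its input."""
    return secret + random.gauss(0, 0.4)

drawn = [random.randrange(4) for _ in range(5000)]
samples = [(s, channel(s)) for s in drawn]
train, test = samples[:4000], samples[4000:]

def knn_predict(y: float, k: int = 15) -> int:
    """Guess the secret behind output y from its k nearest training outputs."""
    nearest = sorted(train, key=lambda sy: abs(sy[1] - y))[:k]
    return Counter(s for s, _ in nearest).most_common(1)[0][0]

error = sum(knn_predict(y) != s for s, y in test) / len(test)
print(f"estimated Bayes risk ~ {error:.3f}")   # approaches the true risk as data grow
```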

Based on their findings, the researchers have developed a tool called F-BLEAU (Fast Black-box Leakage Estimation AUtomated). The tool computes nearest neighbor and frequentist estimates, and selects the one converging faster. F-BLEAU considers a generic system as a black-box, taking secret inputs and returning outputs accordingly, and it measures how much the outputs “leak” about the inputs.

F-BLEAU is available as open-source software at https://github.com/gchers/fbleau

Giovanni Cherubin is a Postdoctoral Fellow in Machine Learning and Security at EPFL. He has held an EcoCloud post-doctoral fellowship since October 2018, which supported his work on F-BLEAU.


G. Cherubin, K. Chatzikokolakis and C. Palamidessi, “F-BLEAU: Fast Black-Box Leakage Estimation,” 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 2019, pp. 835-852, doi: 10.1109/SP.2019.00073.

Full text pdf at https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8835250


Microchannel Network Inspired by the Human Circulatory System

While scientists have successfully reduced the size and costs of electronic components, a major challenge faced by such tiny devices is the absence of an optimum thermal and energy management technology. To bridge that gap, Elison Matioli and his colleagues at EPFL’s Power and Wide-band-gap Electronics Research Laboratory (POWERlab) have developed a novel microchannel network that not only cools electronic components but also makes them energy efficient.

Since electronic components are averse to high temperatures, they are usually cooled with conventional fan-cooled heat exchangers or more complex fluid-carrying microchannels running through them. The microchannels need to be extremely narrow and small to have the required impact, but that calls for higher pressure to keep the fluid flowing properly, which translates into higher energy consumption. To address that energy challenge, Matioli and his colleagues have integrated microfluidics and electronics within the same semiconductor substrate. This embedded approach differs from state-of-the-art technology, where electronics and cooling are treated separately.

The EPFL researchers used a chip containing a thin layer of the semiconductor gallium nitride (GaN) on top of a thicker silicon substrate. In a departure from existing techniques, they carved the microchannels into the substrate and aligned them with the parts of the chip that tend to heat up the most, helping the system cool down efficiently. To reduce the energy needed to pump the fluid through the microchannels, the researchers drew inspiration from the human circulatory system, in which larger blood vessels narrow into capillaries in certain areas of the body. They designed the microchannel network with wider channels that taper exactly where the heat builds up most, radically reducing the total energy needed to push the fluid. Experimental results showed an unprecedented coefficient of performance (exceeding 10,000) for single-phase water cooling of heat fluxes exceeding 1 kilowatt per square centimetre – a 50-fold increase compared with straight microchannels.
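For context, the coefficient of performance quoted above is simply the heat removed divided by the pumping power spent removing it, so a COP above 10,000 means extracting a kilowatt of heat per square centimetre for roughly a tenth of a watt of pumping power. The numbers below are illustrative plug-ins, not the paper’s measurements:

```python
# Coefficient of performance: heat flux removed per unit of pumping power.
def cop(heat_flux_w_per_cm2: float, pump_power_w_per_cm2: float) -> float:
    return heat_flux_w_per_cm2 / pump_power_w_per_cm2

print(cop(1000.0, 0.1))   # -> 10000.0
```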

The research paper “Co-designing electronics with microfluidics for more sustainable cooling” is published in the latest issue of Nature.


van Erp, R., Soleimanzadeh, R., Nela, L. et al. Co-designing electronics with microfluidics for more sustainable cooling. Nature 585, 211–216 (2020). https://doi.org/10.1038/s41586-020-2666-1

https://www.scientificamerican.com/article/blood-and-silicon-new-electronics-cooling-system-mimics-human-capillaries/

 


Facebook-EPFL Joint ML Research Engagement

Facebook and EPFL have initiated a collaborative program that aims to carry out seminal research in areas of interest to both organizations. Facebook seeks to leverage EPFL’s proven expertise in computer science and engineering to enable the flow of technology from one of the most renowned research institutions to the leading American social media conglomerate. The collaboration will also help the latter strengthen its position in Switzerland and gain access to some of the best academic minds in Europe.

The following projects have already been lined up for the collaborative Full-System Accelerated and Secure ML Collaborative Research program:

  • Training for Recommendation Models on Heterogeneous Servers
  • Distributed Transformer Benchmarks
  • Full-System API Inference to Enforce Security
  • Communication Stacks for µServices in Datacenters

Each of these projects will be conducted by a member of the expert team from EPFL. The team includes David Atienza, Babak Falsafi, Martin Jaggi, and Mathias Payer. Babak Falsafi will be the point of contact for the engagement.

Training for Recommendation Models on Heterogeneous Servers   

This project aims to develop strategies for automatically selecting the best accelerator to run a specific DNN training job. The research by David Atienza and his team will develop the software libraries needed to allocate workloads efficiently under performance, power, and accuracy constraints. Meta-learning algorithms will be created to train DL models and configure their hyper-parameters automatically, outperforming current state-of-the-art approaches. This is expected to yield significant savings in total training time and improved robustness when models are minimized for designs with smaller memory sizes.
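One plausible shape for such a selection layer is sketched below: score each candidate device against the job’s time budget and an energy-weighted cost, then pick the cheapest feasible option. The device list, predictions, and scoring rule are hypothetical stand-ins, not the project’s actual library.

```python
# Hypothetical accelerator selection: minimize a weighted mix of predicted
# training time and energy, subject to a deadline constraint.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    est_hours: float   # predicted training time for this job
    avg_watts: float   # predicted average power draw

def pick_device(devices, max_hours: float, energy_weight: float = 0.5) -> Device:
    feasible = [d for d in devices if d.est_hours <= max_hours]
    return min(feasible,
               key=lambda d: (1 - energy_weight) * d.est_hours
                             + energy_weight * d.est_hours * d.avg_watts / 1000)

fleet = [Device("gpu-a", 4.0, 300), Device("gpu-b", 6.0, 180), Device("cpu", 30.0, 90)]
print(pick_device(fleet, max_hours=10).name)   # -> gpu-a under these assumptions
```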

Distributed Transformer Benchmarks

MLBench, a framework for distributed machine learning, aims to serve as an easy-to-use and fair benchmarking suite for algorithms as well as systems (software frameworks and hardware). It will provide re-usable and reliable reference implementations of distributed ML training algorithms. MLBench supports a wide range of platforms, ML frameworks, and machine learning tasks, with the goal of benchmarking most currently relevant distributed execution frameworks. Lead researcher Martin Jaggi and his team will soon release the first results and reference code for distributed training (starting with CIFAR-10 and ImageNet, in both PyTorch and TensorFlow).

Full-System API Inference to Enforce Security

Mathias Payer and his team aim to build an API flow graph (AFG) that encodes all valid API interactions and their parameters. The proposed algorithm builds the global AFG by analyzing all uses of a function in the system’s source code. The researchers will leverage test projects that provide a large corpus of test cases and input files for a wide variety of programs. This data set will help infer API usage by monitoring state construction through the provided seeds and examples.
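A toy version of the data structure conveys the idea: record which API call may validly follow which, then flag any execution that takes an unseen transition. The trace-based construction below is a simplification; the project derives the graph statically from source code and also tracks parameters.

```python
# Minimal API flow graph: edges record observed valid call successions.
from collections import defaultdict

def build_afg(traces):
    """traces: lists of API call sequences, e.g. [["open", "read", "close"]]."""
    afg = defaultdict(set)
    for trace in traces:
        for caller, callee in zip(trace, trace[1:]):
            afg[caller].add(callee)
    return afg

def is_valid(afg, trace) -> bool:
    """A trace is valid if every transition appears in the graph."""
    return all(b in afg.get(a, set()) for a, b in zip(trace, trace[1:]))

afg = build_afg([["open", "read", "close"], ["open", "write", "close"]])
print(is_valid(afg, ["open", "read", "close"]))   # True
print(is_valid(afg, ["read", "open"]))            # False: unseen transition
```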

Communication Stacks for µServices in Datacenters

In this study, Babak Falsafi and others will investigate technologies to support communication in microservices. The research extends their prior work on tighter integration of the network with memory, with support for memory pooling and RPC scheduling. It aims to tackle the software bottleneck in communication for microservices and address challenges such as memory scalability for RPCs, software stacks for high fan-out RPC processing, higher-level object access semantics via RPC to avoid multiple round trips, and support for data transformation across diverse language and software ecosystem boundaries. The researchers will investigate co-designed RPC technologies with hardware-terminating protocols that serve packets directly out of the CPU’s SRAM, eliminating DRAM capacity and bandwidth provisioning and enabling a new class of inherently technology-scalable RPC substrate. They also propose to investigate optimizations of data transformation for common-case data formats on conventional CPUs, and will delve into integrating data transformation into the optimized RPC stack to identify opportunities for data placement, reduced data movement, and buffering on commodity hardware. Technologies for hardware/software co-design of data transformers will also be within the scope of the work.

The Facebook-EPFL collaborative engagement has been approved for funding for an initial period of one year, with an expected renewal each year for at least three years. Each project includes a grant of CHF 200,000 per year, which will be used to financially support one student.

For more details of the individual projects, visit:

https://ecocloud.ch/project/training-for-recommendation-models-on-heterogeneous-servers/
https://ecocloud.ch/project/mlbench/
https://ecocloud.ch/project/full-system-api-inference-to-enforce-security/


Datashare Network: A Decentralized Search Engine for Journalists

EPFL researchers at the Security and Privacy Engineering (SPRING) Lab, School of Computer and Communication Sciences (IC), have developed a ‘Datashare Network’ that allows investigative journalists to exchange information securely and anonymously. A detailed paper on the subject will be presented by the scientists at the 29th Usenix Security Symposium (USENIX Security ’20), which will be held online from August 12 to 14. The event, which brings together specialists in the security and privacy of computer systems and networks, will undoubtedly draw worldwide attention to the EPFL research.

Important revelations with global implications require the active cooperation and sharing of data among investigative journalists across national borders. This is particularly true for cases involving fraud, deception, and tax evasion. A good example is the infamous Panama Papers case, which brought to light the existence of thousands of shell companies run by many noted politicians, businesspeople, and sports personalities to evade taxes. Such investigations imply the sharing of millions of sensitive documents among international journalists in a secure environment that precludes leaks of any kind. To address that challenge, the International Consortium of Investigative Journalists (ICIJ), comprising 200 members in 70 countries, sought the help of the SPRING Lab. The outcome is the Datashare Network, a fully anonymous, decentralized system for searching and exchanging information.

To ensure anonymity of shared information, the Datashare Network issues virtual secure tokens that journalists can attach to their messages and documents to prove to others that they are ICIJ members. All documents are typically stored on members’ servers or computers, and only essential information critical for further investigation is shared with other users. Using the search engine, users can look for relevant information and then contact, in complete anonymity on either side, the member(s) in possession of that information.

Since users work in different time zones, the network supports asynchronous searches and responses. In their paper, the research group describes two new secure building blocks they developed: an asynchronous search engine and a messaging system. The research also introduces the “multi-set private set intersection” (MS-PSI) protocol, which secures the search engine and mitigates the risk of leaks.
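To illustrate the flavor of such a building block, here is a toy Diffie-Hellman-based private set intersection: each journalist blinds hashed items with a secret exponent, and doubly blinded values match exactly on common items without revealing anything else. This is a didactic sketch of generic PSI, not the paper’s MS-PSI construction, and its parameters are far too weak for real use.

```python
# Toy DH-based private set intersection (illustration only; insecure parameters).
import hashlib, secrets

P = 2**127 - 1   # a Mersenne prime; fine for a demo, far too small in practice

def h(item: str) -> int:
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

a = secrets.randbelow(P - 2) + 1   # journalist 1's secret exponent
b = secrets.randbelow(P - 2) + 1   # journalist 2's secret exponent

set_1 = {"offshore-co", "shell-llc", "acme"}
set_2 = {"shell-llc", "acme", "globex"}

# Each side blinds its hashed items; the other side blinds them a second time.
once_1 = {pow(h(x), a, P): x for x in set_1}
twice_1 = {pow(v, b, P): x for v, x in once_1.items()}
twice_2 = {pow(pow(h(y), b, P), a, P) for y in set_2}

# Exponentiation commutes, so doubly blinded values collide on the intersection.
print({x for v, x in twice_1.items() if v in twice_2})   # {'shell-llc', 'acme'}
```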

As observed by Carmela Troncoso, head of the SPRING Lab:

“This system, which addresses real-world needs, has enabled SPRING to tackle some interesting challenges…. The hurdles we encountered during the development process…have paved the way to a new area of research with significant potential for other fields.”


Source: https://actu.epfl.ch/news/a-secure-decentralized-search-engine-for-journalis/


EPFL Authors Win ISVLSI 2020 Best Paper Award

The three-day IEEE Annual Symposium on VLSI (ISVLSI 2020) concluded on July 8 in Limassol, Cyprus. This year, the event attracted more than 190 submissions, of which only 34 were accepted after a stringent selection process. The contribution by a group of EPFL scientists not only made that select list but also won the Best Paper award at the prestigious event.

The paper “Enabling Optimal Power Generation of Flow Cell Arrays in 3D MPSoCs with On-Chip Switched Capacitor Converters” is collaborative research by Halima Najibi,1 Alexandre Levisse,2 Marina Zapater,3 and David Atienza,4 all associated with EPFL’s Embedded Systems Laboratory (ESL). Working with co-authors Jorge Hunter and Miroslav Vasic, the EPFL group designed an on-chip DC-DC converter to improve flow cell array (FCA) power generation in high-performance 3D MPSoCs. They used switched-capacitor (SC) technology and explored different design-space parameters to minimize area requirements and maximize power extraction.

The proposed converter maintains a stable, optimal voltage between FCA electrodes, allowing users to dynamically control FCA connectivity to 3D power delivery networks (PDNs) and to switch off power extraction when the chip is inactive. The study demonstrates that regulated FCAs generate up to 123% more power than FCAs directly connected to 3D PDNs. By connecting multiple flow cells to a single optimized converter, the area requirement drops to 1.26% while keeping IR drop below 5%. Experiments show that activity-based dynamic FCA switching extends electrolyte lifetime by over 1.8× and 4.5× for processor duty cycles of 50% and 20%, respectively.

The papers presented at ISVLSI 2020 explored emerging trends and novel ideas and concepts in the area of VLSI and brought the VLSI experience to new areas and technologies such as security, artificial intelligence and cyber-physical systems. A key area of emphasis of ISVLSI events is future design methodologies and new CAD tools. As in previous editions, the 2020 symposium brought together leading scientists and researchers from academia and industry.

Considering the reputation of the symposium, built over a period of three decades, it is no mean achievement to have a paper accepted and then selected as the Best Paper. Many congratulations to the EPFL authors for their singular academic triumph.

 


Best Paper winner (PDF)

1 Halima Najibi is a doctoral student in Electrical Engineering at EPFL

2 Alexandre Sébastien Julien Levisse is a scientist at ESL

3 Marina Zapater Sancho is currently an Associate Professor at the REDS Institute and collaborates with ESL

4 David Atienza is an Associate Professor and head of ESL
