This year's industrial session will feature prominent speakers from the IT industry, including:

image: Evangelos Eleftheriou

Cost-effective Cloud Storage Technologies

Evangelos Eleftheriou, IBM Research

Cost efficiency is one of the major driving forces of cloud adoption. Dense and cost-effective storage is critical to cloud providers, especially for storing large volumes of data on the cloud. In this talk, I will present storage technologies that enable cost efficiency for different types of cloud storage. Starting with the high-performance segment of storage, I will present SALSA, an I/O stack optimized for Flash that can elevate the performance and endurance of low-cost, consumer Flash-based SSDs to meet datacenter requirements, thereby enabling all-Flash cloud storage at low cost. Next, I will talk about archival storage for the cloud, focusing on IceTier, a research prototype that enables the seamless integration of tape as an archival back-end to cloud object stores, offering dramatically reduced cost for cold data. Finally, I will present MCStore, a cloud gateway technology that enables traditional storage systems to take advantage of the cloud, thus bringing the merits of cost-effective cloud storage to the traditional datacenter.
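To make the archival-tiering idea behind a prototype such as IceTier concrete, the sketch below shows a toy object store that keeps a single namespace while demoting objects that have not been accessed for a configurable period from a disk tier to a cheaper tape-like tier. This is an illustration only, not IBM code; all names and thresholds are hypothetical.

```python
# Illustrative sketch of cold-data tiering behind an object-store interface.
# Hypothetical names; real systems would talk to object-store and tape-library APIs.
import time

COLD_AFTER_SECONDS = 30 * 24 * 3600  # objects untouched for ~30 days count as "cold"

class TieredObjectStore:
    def __init__(self):
        self.disk_tier = {}    # object name -> data (fast, expensive)
        self.tape_tier = {}    # object name -> data (slow, cheap)
        self.last_access = {}  # object name -> last access timestamp

    def put(self, name, data):
        self.disk_tier[name] = data
        self.last_access[name] = time.time()

    def get(self, name):
        self.last_access[name] = time.time()
        if name in self.disk_tier:
            return self.disk_tier[name]
        # Transparent recall: the caller sees one namespace regardless of tier.
        data = self.tape_tier.pop(name)
        self.disk_tier[name] = data
        return data

    def demote_cold_objects(self):
        # Periodically move cold objects to the cheap archival tier.
        now = time.time()
        for name in list(self.disk_tier):
            if now - self.last_access[name] > COLD_AFTER_SECONDS:
                self.tape_tier[name] = self.disk_tier.pop(name)
```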

Evangelos Eleftheriou

Evangelos Eleftheriou received a B.S. degree in Electrical Engineering from the University of Patras, Greece, in 1979, and M.Eng. and Ph.D. degrees in Electrical Engineering from Carleton University, Ottawa, Canada, in 1981 and 1985, respectively.
He joined the IBM Research – Zurich laboratory in Rüschlikon, Switzerland, as a Research Staff Member in 1986. Since 1998, he has held various management positions and currently heads the Cloud and Computing Infrastructure department of IBM Research – Zurich, which focuses on enterprise solid-state storage, storage for big data, microserver/cloud server and accelerator technologies, high-speed I/O links, storage security, and memory and cognitive technologies.

He holds over 100 patents (granted and pending applications). In 2002, he became a Fellow of the IEEE. He was co-recipient of the prestigious 2005 Technology Award of the Eduard Rhein Foundation in Germany. Also in 2005, he was appointed an IBM Fellow and inducted into the IBM Academy of Technology. In 2009, he was co-recipient of the IEEE CSS Control Systems Technology Award.

image: Kenny Gross

Data Center Ambient Temperature Optimization: Key To Large-Scale Energy Savings

Kenny Gross, Oracle

ASHRAE has for many years recommended that data center owners save energy by reducing air conditioning and warming up the data center. For the first 25 years of the computer industry this was very sound advice, because there was no relationship between the operating temperature of enterprise computing servers and their energy efficiency. For the most recent four to five years, however, this is no longer the case. Extensive Oracle research has demonstrated that the latest generations of enterprise computing servers exhibit very temperature-sensitive "energy wastage" mechanisms in IT server and storage systems that not only waste significant energy in warm data centers, but also degrade compute performance. This presentation describes novel Oracle temperature-aware algorithms that enable intelligent optimization of data center ambient temperatures to minimize or avoid these previously unobservable energy wastage mechanisms in IT systems. Oracle's suite of "Energy Aware Data Center" (EADC) algorithms predicts an optimal ambient temperature set point, decreasing energy wastage throughout the data center, significantly increasing overall compute performance, reducing the carbon footprint, and increasing return on assets for data center owners.
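As a rough illustration of why an optimal ambient set point exists, the toy model below (invented coefficients, not Oracle's EADC algorithms or measured data) captures the trade-off: facility cooling power falls as the set point rises, while fan- and leakage-driven wastage inside the IT equipment grows, so total power is minimized at an intermediate temperature.

```python
# Toy model only: illustrates the existence of an optimal ambient set point.
# Coefficients are invented for illustration.

def facility_cooling_power(t):
    # Chiller/CRAC work falls roughly linearly as the set point is raised.
    return 80.0 - 2.0 * (t - 18.0)  # kW

def it_wastage_power(t):
    # Server fan power (roughly cubic in fan speed) and semiconductor leakage
    # both grow with temperature; modeled here as a single quadratic penalty.
    return 0.2 * (t - 18.0) ** 2  # kW

# Search candidate set points from 18 to 32 deg C for the minimum total power.
best = min(range(18, 33), key=lambda t: facility_cooling_power(t) + it_wastage_power(t))
print("Illustrative optimal ambient set point:", best, "deg C")  # 23 deg C in this toy model
```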

Kenny Gross

Kenny Gross is a Distinguished Engineer at Oracle and a researcher with the System Dynamics Characterization and Control team in Oracle's Physical Sciences Research Center in San Diego. Kenny specializes in advanced pattern recognition, continuous system telemetry, and dynamic system characterization for improving the reliability, availability, and energy efficiency of enterprise computing systems, as well as the datacenters in which the systems are deployed. Kenny has 227 US patents issued and pending and 186 scientific publications. He was awarded a 1998 R&D 100 Award, recognizing one of the top 100 technological innovations of that year, for an advanced statistical pattern recognition technique that was originally developed for nuclear and aerospace applications and is now used in a variety of applications to improve the quality of service, availability, and energy efficiency of enterprise computer servers. Kenny holds a Ph.D. in nuclear engineering from the University of Cincinnati.

image: Ippokratis Pandis

Impala: A Modern, Open-Source SQL Engine for Hadoop

Ippokratis Pandis, Cloudera

The Cloudera Impala project is pioneering the next generation of Hadoop capabilities: the convergence of fast SQL queries with the capacity, scalability, and flexibility of a Hadoop cluster. With Impala, the academic and Hadoop communities now have an open-source codebase for querying data stored in HDFS and Apache HBase in real time, using familiar SQL syntax. In contrast with other SQL-on-Hadoop initiatives, Impala's operations are fast enough to run interactively on native Hadoop data rather than in long-running batch jobs.

This talk starts out with an overview of Impala from the user's perspective, followed by a presentation of Impala's architecture and implementation. It concludes with a summary of Impala's benefits when compared with the available SQL-on-Hadoop alternatives.
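As a minimal sketch of the interactive-SQL workflow described above, the snippet below issues a query against Impala from Python using the impyla client library; the host, port, and table name are placeholders for illustration.

```python
# Minimal sketch: querying Impala over data stored in HDFS/HBase using familiar SQL.
# Assumes the impyla package and an Impala daemon at the hypothetical host below.
from impala.dbapi import connect

conn = connect(host='impalad.example.com', port=21050)
cur = conn.cursor()
cur.execute("""
    SELECT customer_id, SUM(amount) AS total
    FROM sales
    WHERE sale_date >= '2015-01-01'
    GROUP BY customer_id
    ORDER BY total DESC
    LIMIT 10
""")
for row in cur.fetchall():
    print(row)
```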

Ippokratis Pandis

Ippokratis Pandis is a software engineer at Cloudera working on the Impala project. Before joining Cloudera, Ippokratis was a member of the research staff at the IBM Almaden Research Center. At IBM, he was a member of the core team that designed and implemented the BLU column-store engine, which currently ships as part of IBM's DB2 LUW v10.5 with BLU Acceleration. Ippokratis received his PhD from the Electrical and Computer Engineering department at Carnegie Mellon University. He is the recipient of Best Demonstration awards at ICDE 2006 and SIGMOD 2011 and a Best Paper award at CIDR 2013. He has served as PC chair of DaMoN 2014 and DaMoN 2015.

image: Steve Pawlowski

New Approaches to Compute Architecture for Energy Optimized Data Movement

Steve Pawlowski, Micron

As the number of on-die transistors continues to grow, new computing models are needed to utilize this growing compute capacity despite a clock-frequency scaling wall and relatively sluggish improvements in I/O bandwidth. The spatial compute and programming model, as introduced by the OpenSPL specification, provides a method for taking advantage of the compute capacity offered by current and trending hardware technology. With spatial computing, compute processing units are laid out in space (either physically or conceptually) and connected by flows of data. The result of this approach is compute implementations that are naturally highly parallel and thus make very effective use of modern transistor-rich hardware. In this talk, I will describe both the spatial computing model and a practical realization of the OpenSPL specification: Multiscale Dataflow Engines. Multiscale Dataflow Engines are a platform that directly implements the spatial computing model in hardware while at the same time supporting tight integration with conventional CPU-based compute resources.
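To convey the "operators connected by flows of data" idea in ordinary code, the sketch below expresses a 3-point moving average as a pipeline of streaming stages using Python generators. This is an analogy only, not OpenSPL or Multiscale Dataflow Engine code; real kernels are written against the vendor toolchain.

```python
# Illustration of the spatial/dataflow style: each stage is a processing unit
# connected to its neighbors by a stream of data, rather than a sequential loop.

def source(values):
    for v in values:
        yield v

def moving_average3(stream):
    window = []
    for v in stream:
        window.append(v)
        if len(window) > 3:
            window.pop(0)
        if len(window) == 3:
            yield sum(window) / 3.0

def sink(stream):
    return list(stream)

# Conceptually, each stage could map to its own hardware unit, with data
# flowing between them every cycle.
result = sink(moving_average3(source([1, 2, 3, 4, 5, 6])))
print(result)  # [2.0, 3.0, 4.0, 5.0]
```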

Steve Pawlowski

Steve Pawlowski is Vice President of Advanced Computing Solutions at Micron Technology. He is responsible for defining and developing innovative memory solutions for the enterprise and high-performance computing markets. Prior to joining Micron in July 2014, Mr. Pawlowski was a Senior Fellow and the Chief Technology Officer of Intel’s Data Center and Connected Systems Group. Mr. Pawlowski’s extensive industry experience includes 31 years at Intel, where he held several high-level positions and led teams in the design and development of next-generation system architectures and computing platforms. Mr. Pawlowski earned bachelor’s degrees in electrical engineering and computer systems engineering technology from the Oregon Institute of Technology and a master’s degree in computer science and engineering from the Oregon Graduate Institute. He also holds 58 patents.

image: Daniele Tonella

Embrace Innovation, the Challenge of Big Corporations

Daniele Tonella, AXA Technology Services

This talk will highlight the challenges that big corporations face in embracing innovation: agility versus legacy, keeping up with the pace of innovation, and digital transformation and what it entails for a company like AXA-Tech.

Daniele Tonella

Daniele Tonella joined the AXA Group in 2013 as CEO of AXA Technology Services. In this role, he is responsible for the overall vision, strategy, and operation of AXA’s global IT infrastructure, including cloud and developer platforms, thus contributing to AXA’s digital transformation. Before joining AXA, Daniele was Global CIO of Evalueserve, a knowledge process outsourcing company headquartered in India. From 2002 to 2010 he held IT leadership roles of increasing responsibility at Swiss Life, notably CTO and CIO. He began his professional career as a consultant, first at Mercer Management Consulting and subsequently at McKinsey & Company. He is currently a member of the Foundation Board of the International Risk Governance Council (IRGC). Daniele Tonella was born in 1971 and is a Swiss citizen. He holds an engineering degree from the Swiss Federal Institute of Technology in Zurich (ETH).
