News (Classic Version)
Posted on March 31, 2020
The IBM PhD Fellowship Awards Program was initiated in the 1950s to recognize and support outstanding graduate students with an interest in solving problems that are important to IBM and fundamental to innovation across many academic disciplines and areas of study. As in previous years, the intensely competitive worldwide program received many nominations for the academic years 2020-2021 and 2021-2022. IBM has now announced the list of awardees; among the recipients is Atri Bhattacharyya, Doctoral Assistant at EPFL’s School of Computer and Communication Sciences.
Atri joined EPFL in 2016 and completed his MS in Computer Science in 2018. Since then, he has worked at EPFL in the areas of microarchitectural security and design, and datacenter architectures. Over his years at EPFL, Atri has worked under the supervision of Prof. Babak Falsafi, Prof. Paolo Ienne, and Prof. Mathias Payer, completing many projects along the way. His research on a speculative-execution attack using port contention as a side channel was published in the Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. He is also the winner of several awards, including the Best Research Presentation Award (2019) and the EPFL IC School Fellowship (2018).
The IBM PhD Fellowship will be a major boost for Atri’s academic career because of the international recognition associated with the prestigious award. The fellowship includes a competitive stipend covering living expenses, travel, and conference attendance for the two academic years. IBM will also match each recipient with an IBM Mentor who shares their technical interests.
We congratulate Atri Bhattacharyya on his achievement and hope that the IBM PhD Fellowship will lead him to pioneering work in the promising and disruptive technologies of the future.
Posted on March 16, 2020
Lana Josipović, Shabnam Sheikhha, Andrea Guerrieri, Paolo Ienne (all from EPFL’s Processor Architecture Lab), and Jordi Cortadella (Universitat Politècnica de Catalunya, Barcelona, Spain) are the winners of the Best Paper Award at the 28th ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA ’20), which concluded on February 25 in Seaside, California.
Their paper ‘Buffer Placement and Sizing for High-Performance Dataflow Circuits’ addresses a fundamental problem in dynamically scheduled high-level synthesis (HLS): how to strategically place buffers into dataflow circuits generated from high-level code so as to optimize their performance. The paper tackles the two requirements of high-performance circuits: constraining the critical path and maximizing throughput. Lana and colleagues discuss the difficulties of performing such optimizations in the context of dataflow designs and present a performance-optimization model based on marked graph theory. Their mixed-integer linear programming (MILP) model achieves maximum design parallelism at the desired clock frequency and with minimal resource cost. The authors also propose a computationally efficient strategy to decompose the problem that achieves near-optimal results. The optimizations presented in this paper are crucial to making dynamic scheduling truly competitive with existing HLS techniques.
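The intuition behind the marked-graph model can be illustrated with a small sketch: in a timed marked graph, steady-state throughput is limited by the worst cycle’s ratio of tokens to latency, which is the quantity the MILP maximizes under timing constraints. The graph, latencies, and token counts below are invented for illustration and are not taken from the paper:

```python
from itertools import permutations

# Toy dataflow graph: nodes are circuit units, edges carry
# (latency_in_cycles, initial_tokens). All names/values are illustrative.
edges = {
    ("A", "B"): (1, 0),
    ("B", "C"): (1, 0),
    ("C", "A"): (1, 1),   # feedback edge holding one initial token
    ("B", "B"): (2, 1),   # self-loop modelling an internal buffer stage
}

def cycles(edges):
    """Enumerate simple cycles by brute force (fine for a toy graph)."""
    nodes = sorted({n for e in edges for n in e})
    found = []
    for r in range(1, len(nodes) + 1):
        for perm in permutations(nodes, r):
            ring = list(zip(perm, perm[1:] + (perm[0],)))
            if all(e in edges for e in ring):
                found.append(ring)
    return found

def throughput(edges):
    """Marked-graph throughput = min over cycles of tokens / latency."""
    ratios = []
    for cyc in cycles(edges):
        latency = sum(edges[e][0] for e in cyc)
        tokens = sum(edges[e][1] for e in cyc)
        ratios.append(tokens / latency)
    return min(ratios)
```

Here the three-node feedback cycle carries one token over three cycles of latency, so throughput is 1/3. Placing a buffer (which adds a token) on an edge of that limiting cycle raises its ratio and hence the circuit’s throughput, which is exactly the lever the paper’s MILP optimizes.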
The annual ACM/SIGDA International Symposium is the premier conference for presenting advances in all areas related to FPGA technology, such as FPGA architecture, circuit design, high-level abstractions and tools, and design studies. This year’s Best Paper Award was selected from 149 submissions, of which 25% were accepted for presentation at the conference.
Lana is a Doctoral Assistant at the School of Computer and Communication Sciences and a winner of Google’s PhD Fellowship award in 2018 for outstanding research in the systems and networking domain.
With several publications to her credit, she works on bridging the gap between software and hardware to build efficient circuits for FPGAs.
Posted on March 9, 2020
The Facebook Fellowship Program, initiated in 2013 and awarded in 21 different categories, encourages and supports doctoral students engaged in innovative research in computer science, engineering, and allied domains. The winners for 2020 have been announced, and they include the first-ever awardees from EPFL. Panagiotis Sioulas and Merlin Nimier-David, both PhD students at the School of Computer and Communication Sciences, are winners in the categories of Structured Data Stores and Computer Graphics, respectively.
Panagiotis Sioulas’s core research interests lie in database management systems. He aims to design hardware- and workload-conscious analytical databases that run multiple concurrent data-intensive queries efficiently by scheduling their execution and by exploiting shared data and work across queries. The Fellowship will support his ongoing research in data management systems at the Data-Intensive Applications and Systems Lab (DIAS) under the supervision of Anastasia Ailamaki. Before joining EPFL, Panagiotis obtained a BSc at the National and Kapodistrian University of Athens.
The Facebook Fellowship covers two years of tuition and fees, and provides an annual stipend of $37,000 and up to $5,000 in conference travel support. Winners will also be eligible for a paid visit to Facebook headquarters for the annual Fellowship Summit.
EcoCloud wishes the awardees great success in their future research endeavors.
Posted on March 3, 2020
Computer Science is taking rapid strides in realigning itself to advancing technologies. It is beginning to emerge from the cocoon of traditional research to address new challenges posed by avant-garde technologies. In an article published in EPFL Magazine in December 2019, School of Computer and Communication Sciences (IC) Dean James Larus recorded his candid observations on the current status of the discipline at EPFL and the way forward.
As EPFL takes the next big step into the new decade and beyond, Larus has already initiated the crucial step of hiring young minds in diverse fields of Computer Science. His deanship has seen IC attain a “critical mass,” although the process needs to continue to ensure, in Larus’s words, the “outward growth” of the department and move beyond “the traditional avenues of research.” In keeping with this philosophy, the school has diversified into data science and machine learning, and the next few years will see more such expansions taking place at EPFL.
Apart from building the research team at IC, James Larus calls for greater interdisciplinary study in areas such as Artificial Intelligence to address ethical and humanitarian concerns about tech applications, both current and future.
Computer Science at EPFL also needs to gear up for nascent technologies that are likely to play a major role in the future. James Larus points to the upcoming quantum revolution and to bio-computing as major opportunities where Computer Science can scale new heights. While quantum computers could disrupt today’s data processing capabilities by handling an unprecedented volume of data, bio-computing represents a paradigm shift, turning the focus away from raw computing power and toward parallels between biological processes, such as the workings of the human brain or genetics, and computing systems.
James Larus has a vision of computer scientists at EPFL playing a pivotal role in converting plausible ideas of today into reality tomorrow.
Posted on February 25, 2020
Most discourses on the risks of Artificial Intelligence tend to focus on tech applications that are still on the horizon. The preoccupation with perceived threats such as sentient robots and AI consciousness diverts attention from AI-related issues that are already present, affecting simple daily activities such as reading the news, watching YouTube, or using a smartphone app. As School of Computer and Communication Sciences (IC) researchers Lê Nguyên Hoang and El Mahdi El Mhamdi emphasize in their new book, there is an urgent need to restate ethical questions related to algorithms in computational terms.
In their work The fabulous endeavor: making AI robustly beneficial, Hoang and El Mhamdi bring to the fore their expertise in machine learning systems and mathematics to provide a conceptual understanding of key algorithms. They believe that the need to make AI “robustly beneficial” applies to the present, not only to future applications. Without an ethical framework in place, algorithms are making millions of decisions that surface problematic content in domains such as communication, commerce, entertainment, and politics. The situation calls for an urgent, time-bound implementation of ethical guidelines, which the authors call “philosophy with a deadline.” Citing a real-world example of how people are exposed to problematic content every day, Hoang and El Mhamdi draw attention to YouTube’s recommender algorithms, which drive about 70% of users’ viewing decisions, leaving very little room for clicks based on a direct search.
The key question is how to deal with the ethical dilemmas associated with AI: is the onus on ethicists or on computer scientists? The authors argue that such questions fall in the domain of computer scientists and could open a new research area for budding scientists. In the words of Hoang, “Many computer science scholars today are focusing on performance, and that’s good, but these computational ethics problems are not only more urgent – they are also extremely challenging and fascinating.”
The book is currently available in French from EDP Sciences. An English edition will be published later this year.
Posted on February 11, 2020
Seaside in Monterey County, California, will host the 28th ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA) between February 23 and 25 this year. Recognized as the premier conference for advances in FPGA technology, the symposium draws research papers, tutorial papers on emerging applications and methodologies, and panel discussion proposals. Among the papers being presented at FPGA 2020 are several original submissions and a tutorial paper by computer scientists at EPFL, giving the school a strong presence at the prestigious global event.
The papers by EPFL researchers at FPGA 2020 cover some of the core areas of interest in FPGA technology. Mirjana Stojilović leads an exceptional all-women author team (with Seyedeh Sharareh Mirzargar and guest PhD student Zeinab Seifoori) to present an extension to the PathFinder FPGA routing algorithm that enables it to deliver FPGA designs free from the risk of crosstalk side-channel attacks. Such attacks represent a serious threat for large designs assembled from various IPs, but Mirjana and colleagues show, through several routing strategies, that their crosstalk-attack-aware router ensures that no information leaks, at a very small penalty.
In her second paper accepted for FPGA 2020, Mirjana, together with coauthors Ognjen Glamocanin, Louis Coulon, and Francesco Regazzoni (ALaRI, Lugano), addresses another security concern for FPGAs: side-channel attacks and the evaluation of a system’s resistance to them. They present the design and FPGA implementation of a built-in test that allows the FPGA to measure its own internal power-supply voltage and compute the t-test statistic in real time.
Mirjana, Sharareh, and Andrea Guerrieri also present a poster on how to accurately locate malicious power-wasting activities by introducing voltage sensors into FPGA circuits before deployment in the cloud. Their voltage-monitoring system and a novel sensor-measurement metric take advantage of the FPGA’s unused logic and routing resources to pinpoint all malicious power-wasting activities.
Moving from security and efficiency to performance, a group of EPFL researchers (Lana Josipović, Andrea Guerrieri, and Shabnam Sheikhha, a Summer@EPFL intern), led by Paolo Ienne, teams up with Jordi Cortadella (Universitat Politècnica de Catalunya, Spain) to show how to strategically place buffers into dataflow circuits to optimize their performance.
Considering the importance of scheduling in high-level synthesis, Lana Josipović, Paolo Ienne, and coauthors Jianyi Cheng, George A. Constantinides, and John Wickerson (Imperial College London) propose an approach that combines dynamic and static scheduling to obtain the performance benefits of both.
Staying with performance improvement, Stefan Nikolić and Paolo Ienne team up with Grace Zgheib (Intel Corporation) to show the usefulness of enhancing FPGA architectures with direct connections between Look-Up Tables (LUTs). They present an algorithm that automatically searches for the most promising patterns of such direct connections.
FPGA 2020 will also see Lana Josipović, Andrea Guerrieri, and Paolo Ienne come together once again for the Invited Tutorial on “Dynamatic: From C/C++ to Dynamically Scheduled Circuits.” The tutorial demonstrates Dynamatic, an open-source HLS framework developed at EPFL, which generates synchronous dataflow circuits out of C/C++ code. By describing some of the applications of Dynamatic, the tutorial will enable others to use the tool and allow them to contribute to its enhancement.
Last but not least, a collaborative effort by James Larus, Endri Bezati, and Seyedmahyar Emami will see an important poster presentation at FPGA 2020. Their work presents a single programming model and the StreamBlocks framework for hardware-software stream programs on heterogeneous platforms. The main advantage of their programming model is the direct support for hardware-software systems, in which an FPGA functions as a coprocessor to a CPU.
EPFL has always had a strong representation in previous editions of the FPGA conference. This year, however, the collection of papers, tutorial, and posters—comprising almost 20% of the conference program—significantly augments EPFL’s contribution to novel research and advancements in FPGA architecture, security and design.
Posted on February 3, 2020
Since its establishment in 2016, the EPFL International Risk Governance Center (IRGC) has not only drawn attention to increasingly complex risks that affect society, but also developed mitigation strategies for perceived risks. Given its fundamental role in the risk governance framework at EPFL, the appointment of James Larus, Dean of the School of Computer and Communication Sciences, assumes great significance.
With over 16 years of experience as a researcher, manager, and director at Microsoft Research, followed by his leadership role at EPFL, a substantial body of research papers, and achievements in the fields of programming languages, compilers, and computer architecture, James Larus was appointed to IRGC’s Advisory Board in 2018. In his new role as Academic Director, he will be well placed to help IRGC develop governance frameworks in areas such as risk assessment, emerging and systemic risks, and resilience-building. IRGC is already working on issues related to IT security in cyberspace through a series of expert workshops on cybersecurity, IoT, distributed ledger technologies, and decision-making algorithms. Professor Larus will bring his vast experience to bear on the growing share of the Center’s work devoted to digitalization and on the rapid evolution of the risk landscape.
Apart from digitalization, IRGC also focuses on risk domains such as nanotechnology, precision medicine, synthetic biology, and critical infrastructure resilience, most of which involve emerging technologies and therefore entail deeper risks. Since IRGC is an interdisciplinary unit dedicated to risk governance, James Larus will work closely with other specialists to augment the Center’s existing body of concepts, frameworks, and publications, bringing key stakeholders such as government, the corporate sector, citizens, and academia even closer together.
Posted on January 24, 2020
Alliance Announces the First Datacenter Efficiency Label
- Academia and industry leaders have come together to form the Swiss Datacenter Efficiency Association (SDEA) and announce the first datacenter efficiency label, which aims to decarbonize datacenters and significantly reduce the energy consumption of data platforms and infrastructures. École Polytechnique Fédérale de Lausanne (EPFL) is one of its founding members.
- The initiative is supported by the Swiss Federal Office of Energy through the program SwissEnergy.
Davos, January 23, 2020
Among the significant developments at the World Economic Forum Annual Meeting today was the announcement of a datacenter energy efficiency label launched by the Swiss Datacenter Efficiency Association (SDEA). SDEA represents an unprecedented alliance between academia and industry leaders to significantly reduce datacenter energy consumption and decarbonize datacenters around the world. Initiated by industry association digitalswitzerland and Hewlett Packard Enterprise (HPE), SDEA has EcoCloud (EPFL), HPE, Green IT Switzerland, the Lucerne University of Applied Sciences and Arts (HSLU), and the Swiss Datacenter and Telecommunication Associations, Vigiswiss and ASUT, among its founding members. SDEA is also being promoted through SwissEnergy, a program run by the Swiss Federal Office of Energy.
The label will be awarded for excellence in energy efficiency and environmental sustainability of datacenters by measuring their carbon footprint based on three main efficiency criteria:
- The hosting infrastructure accounting for the end-to-end energy flow, from ingest to output, including recycling capabilities of output energy;
- The IT infrastructure quantifying the efficiency of the processing, communication and storage equipment;
- The IT workloads accounting for the efficiency of IT usage for datacenter services.
Formation of the association was imperative given the surge in datacenter electricity consumption as the world goes digital. Current projections indicate that datacenters’ share of worldwide electricity consumption could increase from roughly 2% today to 8% by 2030, which calls for urgent action by the industry. SDEA aims to play a significant role in that process by helping datacenters reduce their overall energy consumption by up to 70%. This is a realistic goal, as illustrated by ten pilot runs involving the operations of several top brands in Switzerland. Legislative action is likely to follow, with new laws governing energy efficiency for new datacenters. Ultimately, the SDEA initiative could have a global impact as the concept is internationalized at world forums.
The scope and character of SDEA is expected to change and expand as the IT sector advances. With conventional silicon technologies having reached a saturation point, energy-efficient datacenters are crucial for sustainable IT development. That is where SDEA could play a pivotal role.
As observed by SDEA’s president Babak Falsafi, Professor in the School of Computer and Communication Sciences (EPFL) and founding director of EcoCloud:
“We are witnessing a paradigm shift in IT where conventional silicon technologies that historically resulted in doubling in chip density and efficiency every two years for five decades have reached their physical limits. As a result, future IT performance growth can only come from building more infrastructure, including datacenters spanning from the edge with closer proximity to data sources and real-time constraints all the way to the hyperscale public clouds. This label is timely and will facilitate the adoption of renewable energy in datacenters with quantifiable metrics.”
The label will be instrumental for countries like Switzerland, where future growth in IT is likely to come from edge clouds, with infrastructure for data analytics closer to data sources and users. The emergence of digital technologies with real-time constraints and 5G networks will lead to the proliferation of edge infrastructure. Post-Moore datacenters will bring in not only custom AI technologies that are vertically integrated in hardware and software, but also integrated hosting infrastructure, including renewable-energy and cooling technologies that work hand in hand with IT software and hardware.
With the accent on shifting away from conventional power-hungry datacenters, the future lies in using renewable energy to usher in a sustainable ecosystem for datacenters. This is an area that EcoCloud has focused on since its inception.
EcoCloud, an industrial/academic consortium at EPFL, is the only academic center investigating sustainable cloud technologies. For more, visit ecocloud.ch.
Posted on January 20, 2020
The much-anticipated list of Highly Cited Researchers for 2019 is out. Published annually by the Web of Science Group, a Clarivate Analytics company, the list, generally known as the Thomson Reuters list of Highly Cited Researchers, includes scientists who produced multiple papers ranking in the top 1% by citations for their field and year of publication. Among them is Tobias Kippenberg, Full Professor at EPFL’s Institute of Physics and Electrical Engineering and head of the Laboratory of Photonics and Quantum Measurements (K-Lab).
The papers considered for the HCR list are segmented into one or more of 21 main subject fields from the Essential Science Indicators (ESI). In addition, a cross-field category covers researchers who have produced exceptional publications across multiple subject fields. To determine this “who’s who” of influential researchers, the Web of Science Group uses data and analysis from bibliometric experts at the Institute for Scientific Information. Inclusion on the HCR list matters greatly to scientists: it confirms their position within a small fraction of the researcher population and shows that they contribute meaningfully to gains in society, innovation, and knowledge.
A breakdown of the HCR list by country and region shows a remarkable concentration of top talent. Although the highly cited researchers are spread across 60 countries, 85% are affiliated with institutions in just ten countries. Switzerland has a total HCR count of 155 this year, which includes 24 EPFL researchers.
A member of the HCR list since 2014, Tobias Kippenberg, who recently joined EcoCloud, has demonstrated significant research influence among his peers in physics through citations accumulated over many years of extensive publication.
Posted on January 14, 2020
EcoCloud welcomes Tobias J. Kippenberg, professor at EPFL’s Laboratory of Photonics and Quantum Measurements (K-Lab), among its faculty.
Prof. Kippenberg completed his doctorate at the California Institute of Technology, Pasadena, in 2004. After a stint at the Max Planck Institute of Quantum Optics in Garching, he joined EPFL in 2008 as a Tenure Track Assistant Professor. Since 2013, he has been a Full Professor with the Institute of Physics and Electrical Engineering and heads K-Lab. His core research interest is in photonics, notably high-Q optical microcavities and their use in cavity quantum optomechanics and frequency metrology. At K-Lab, Prof. Kippenberg has helped develop and pioneer microcombs, which provide access to equidistant optical carriers and represent a disruptive new technology with a proven track record. Earlier this year, he won a Proof of Concept grant from the European Research Council for this work.
Over the years, Prof. Kippenberg has published and presented numerous papers with exceptional impact on his area of expertise. That has earned him a place in the Thomson Reuters list of the 1% most highly cited authors (since 2014). His research has also brought him many awards, including the Helmholtz Prize for Metrology (2009), the EPS Fresnel Prize (2009), the EFTF Young Investigator Award (2010), the Swiss National Latsis Prize (2015), and the ZEISS Research Award (2018).
With his pioneering research in the field of cavity quantum optomechanics and photonic-integrated optical frequency combs based on optical microresonators, Prof. Tobias J. Kippenberg will undoubtedly play a crucial role in research and student-faculty interactions at EcoCloud.
Posted on January 7, 2020
Improving software security implies a two-pronged approach: testing the software security environment and mitigating attacks. While testing enables developers to detect bugs that are reachable through adversarially controlled input, mitigation involves patching the underlying bugs and prohibiting attacks. On the one hand, mitigations such as ASLR, DEP, or stack canaries protect against unknown or unpatched vulnerabilities by stopping an exploit; on the other, lightweight runtime guards detect a security violation and terminate the process. Nonetheless, despite rapid advancements in software security, attacks continue to expose software vulnerabilities by reusing existing code. In this context, a team of researchers at EPFL, led by tenure-track assistant professor Mathias Payer, is working on a project to enhance software security through multidimensional, input-guided software security testing based on sanitization.
The MultiSan project, which is being funded by an Eccellenza Grant from the Swiss National Science Foundation (SNSF), specifically focuses on code that is exposed to potentially adversary-controlled data. By targeting the immediate attack surface instead of testing all code, the research aims to prioritize the search for bugs on exposed code, thus enabling developers to take care of security vulnerabilities before they can be exploited.
The project hopes to improve software security along four different lines: policy-based sanitization, automatic (security) test inference, scaling testing, and guarding the hardware/software interface. First, policy-based sanitization will lead to faster and more accurate detection of security violations: a report is generated the moment a bug is triggered, not when the program eventually crashes. Second, automatic (security) test inference will customize input generation and modify the program to remove hard-to-trigger checks, such as checksums. Third, scaling testing will target complex environments by providing end-to-end testing for large code bases. Fourth, the MultiSan project will let developers test hardware/software interfaces, exposing an environment that is inherently attacker-controlled. The hardware testing approach proposed by the research team will virtualize drivers and let them interact with emulated mock hardware controlled by the testing framework.
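To make the idea of policy-based sanitization concrete, here is a minimal, hypothetical sketch of a sanitizer-style bounds check that reports a violation at the faulting access instead of letting the program run on toward a later crash. The class and its API are invented for illustration only; MultiSan itself targets compiled code:

```python
class SanitizedBuffer:
    """Illustrative memory-safety policy check: report the exact
    out-of-bounds access when it happens, rather than waiting for a
    crash. A sketch of the concept, not MultiSan's implementation."""

    def __init__(self, size):
        self._data = [0] * size
        self._size = size

    def _check(self, index, op):
        # The sanitization policy: every access must stay in bounds.
        if not 0 <= index < self._size:
            # Policy violation: emit a precise report at the faulting access.
            raise RuntimeError(
                f"sanitizer: out-of-bounds {op} at index {index} "
                f"(buffer size {self._size})")

    def read(self, index):
        self._check(index, "read")
        return self._data[index]

    def write(self, index, value):
        self._check(index, "write")
        self._data[index] = value
```

The payoff is diagnostic precision: the report names the exact operation and index that violated the policy, which is far more useful to a developer than a crash dump from whatever the corruption later broke.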
The findings of the research project, including prototypes, benchmarks, and code, will be available as open-source releases. This will enable the research community to build on the findings and improve them further over time. Meanwhile, end users and developers can use the documentation, reports, and prototypes produced during the research to protect their code.
Posted on December 17, 2019
Deep neural networks (DNNs) perform complicated tasks such as image classification, speech recognition, and natural language processing with high precision. However, most DNN accelerators are primarily based on a linear synapse model, which limits the accelerators’ density. Existing techniques to improve DNN accelerator density have generally fallen short. While analog neural nets represent a viable option to increase computational density, they display linear synapse characteristics and require bulky circuits. To circumvent these problems, a group of researchers at EPFL has proposed a novel nonlinear analog synapse circuit with a black-box training model that interpolates data from circuit simulation to compute gradients.
In their paper, recently published in the journal IEEE Micro, Ahmet Caner Yuzuguler, Firat Celik, Mario Drumond, Babak Falsafi, and Pascal Frossard demonstrate that their synapse circuit is not only more resilient to fabrication error than existing models but also has a low hardware footprint. The simulation results presented in the paper show that the circuit, trained with the black-box model, achieves consistent classification accuracy with minimal deviation across fabricated chips, without any need for calibration or retraining.
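The black-box training idea, estimating gradients from interpolated simulation data rather than from an analytic synapse model, can be sketched as follows. The sampled transfer curve and all values below are illustrative placeholders, not data from the paper:

```python
import bisect

# Sampled transfer curve of a hypothetical nonlinear synapse circuit,
# as might come from circuit simulation. Values are illustrative only.
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
ys = [0.0, 0.18, 0.33, 0.45, 0.52, 0.55]  # saturating response

def interp(x):
    """Piecewise-linear interpolation of the simulated curve."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

def grad(x, h=1e-4):
    """Black-box gradient estimate: central finite differences on the
    interpolated simulation data, usable during backpropagation."""
    return (interp(x + h) - interp(x - h)) / (2 * h)
```

With such a gradient estimate, the nonlinear synapse can be dropped into ordinary gradient-based training even though no closed-form derivative of the circuit exists.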
Compared to the baseline digital accelerator, the proposed circuit offers 12x better energy efficiency and 29x better computational density.
The research team plans to extend the precision of their weight-generator circuit to support DNN applications that require weight precision higher than 4 bits. They are also exploring different types of digital-to-analog converters for the weight-generator circuit. Although the proposed circuit is applicable to any type of neural network, the EPFL researchers aim to benchmark their design on a recurrent neural network (RNN) workload and achieve a significant improvement in performance and energy efficiency.
Posted on November 25, 2019
An eleven-member jury formed by the Swiss business magazine Bilanz has announced the 100 most important people in Switzerland at the forefront of digitization this year. The list of achievers is sorted into categories such as investors, blockchainers, scalers, transformers, administrators, drone acrobats, mentors, and data miners. Among the blockchainers is Professor Bryan Ford, who heads the Decentralized and Distributed Systems Lab (DEDIS) at EPFL’s School of Computer and Communication Sciences.
The list of Digital Shapers assumes great significance because of Switzerland’s push for innovation as a path to progress. Keeping pace with current technology calls for digital pioneers and visionaries with great potential, and the 2019 edition of the 100 Digital Shapers aims to highlight and inspire that potential.
With a doctorate from the renowned Massachusetts Institute of Technology, Professor Ford has worked extensively in areas such as secure distributed systems, anonymous communication, system security, and blockchain technology. But it is his research on blockchain and digital democracy that has attracted the most attention in the country. The blockchain-based e-voting system developed by his research group is used internally at EPFL.
Bryan Ford acknowledges the growing interest in his work on digital humanity, which focuses on digital identity. In fact, his approach moves well beyond the digitization of “identity attributes,” such as the age or training of people, and into the area of “fundamental digital rights” for equal rights and freedom in the network. He is one of the first advocates of the concept of delegative democracy, also called liquid democracy.
Professor Ford’s past accolades include the Jay Lepreau Best Paper Award, the NSF Career Award, and the AXA Research Chair.
Posted on November 18, 2019
The European Research Council (ERC) has awarded a Starting Grant for the open-source research proposal “Code Sanitization for Vulnerability Pruning and Exploitation Mitigation.” The Principal Investigator of the research is Professor Mathias Payer, IC tenure-track assistant professor and head of the HexHive lab on software systems security at EPFL.
Dubbed “CodeSan,” the project aims to improve computer code by automating the discovery of bugs and the sanitization of vulnerable software. The technology will apply to software in development as well as to software already in use. Since it will be open source, all implementation prototypes developed through the project can be deployed to protect browser software (the likes of Google Chrome and Mozilla Firefox) as well as Android and Linux systems from attacks.
The research is expected to make a significant contribution to building more resilient systems for unknown or unpatched vulnerabilities. It proposes to do so by employing sanitization techniques that can detect property violations and thus mitigate exploitable vulnerabilities.
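As a loose illustration of the sanitization idea described above (not CodeSan’s actual machinery, and with hypothetical names throughout), the sketch below instruments every access to a buffer so that out-of-bounds reads and writes are detected the moment they happen instead of silently corrupting neighbouring data:

```python
# Hypothetical illustration of the sanitization idea (not CodeSan's
# actual machinery): instrument every access to a buffer so that
# out-of-bounds reads and writes are caught the moment they happen
# instead of silently corrupting neighbouring data.

class SanitizedBuffer:
    """A fixed-size buffer whose every access is bounds-checked."""

    def __init__(self, size):
        self._data = [0] * size
        self.violations = []             # log of detected violations

    def write(self, index, value):
        if not 0 <= index < len(self._data):
            self.violations.append(("write", index))
            return                       # block the bad access
        self._data[index] = value

    def read(self, index):
        if not 0 <= index < len(self._data):
            self.violations.append(("read", index))
            return None
        return self._data[index]


buf = SanitizedBuffer(4)
buf.write(2, 42)                 # in bounds: succeeds
buf.write(7, 99)                 # out of bounds: detected and blocked
print(buf.read(2))               # 42
print(buf.violations)            # [('write', 7)]
```

Real sanitizers such as AddressSanitizer apply the same principle at the compiler level, which is the space CodeSan targets.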
Professor Payer has worked extensively on protecting applications in the presence of vulnerabilities. His research focus is on software security, system security, binary exploitation, effective mitigations, strong sanitization, and software testing using binary analysis and compiler-based techniques.
The ERC has awarded the prestigious funding to 480 early-career researchers for 2019. Each grant is worth up to €2.5 million, and the total value of this year’s grants is €621 million. They are awarded as part of the EU Research and Innovation programme Horizon 2020.
Posted on November 11, 2019
Bitcoin arguably represents one of the most robust computational structures in existence. However, many computer scientists today consider the bitcoin protocol overkill: it processes transactions inefficiently, suffers from latency, and guzzles energy. According to one estimate, the bitcoin network consumes as much energy as the Czech Republic or Denmark, and close to that of Austria. This high energy consumption is naturally drawing criticism amid concerns over climate change. Given these drawbacks, there is a growing need to explore more efficient decentralized transaction systems that do not compromise security or reliability.
After extensive research, Professor Rachid Guerraoui and colleagues at EPFL’s School of Computer and Communication Sciences (IC) have proposed a nearly zero-energy alternative to bitcoin. The system, dubbed Byzantine Reliable Broadcast, represents a paradigm shift in the approach to cryptocurrencies.
In contrast to the original bitcoin idea of solving the problem of ‘consensus’ to ensure secure transactions, the EPFL researchers work on the premise that there is no need to reach consensus. They demonstrate that it is possible to achieve safe and secure cryptocurrency transactions on a large scale with high energy efficiency. While bitcoin leaves a heavy carbon footprint of 300kg for a single transaction, the system developed by the EPFL team consumes only a few grams.
Byzantine Reliable Broadcast rests on communication and broadcasting: players exchange messages about each transaction, and this cross-checking prevents the system from accepting any payment from a malicious player.
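The quorum intuition behind such broadcast-based payments can be sketched in a few lines (a deliberately simplified model, not the published protocol): with N players, of whom at most f are malicious, a payment is accepted only once echoed by strictly more than (N + f)/2 players, so two conflicting payments can never both gather a quorum.

```python
# Toy echo phase of a Byzantine reliable broadcast (simplified model,
# not the published EPFL protocol). With N players and at most f of
# them malicious, a payment needs strictly more than (N + f) / 2
# matching echoes; two conflicting payments can never both qualify.

def accepted(echoes, payment, n, f):
    """True if `payment` gathered a Byzantine quorum of echoes."""
    return sum(1 for e in echoes if e == payment) > (n + f) / 2


n, f = 10, 3
honest = ["pay Alice 5"] * 7       # honest players echo the real payment
malicious = ["pay Bob 5"] * 3      # malicious players echo a conflict

print(accepted(honest + malicious, "pay Alice 5", n, f))  # True  (7 > 6.5)
print(accepted(honest + malicious, "pay Bob 5", n, f))    # False (3 <= 6.5)
```

Because a quorum requires overlap in more than f players, any two quorums share at least one honest player, which is what rules out conflicting acceptances.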
Guerraoui and colleagues have already published papers on the theoretical results of the system and its implementation. Their paper “Scalable Byzantine Reliable Broadcast” won the Best Paper Award at DISC 2019 and has already drawn considerable attention from the industry.
Posted on November 4, 2019
Today, we have a slew of media houses and streaming services that inundate consumers with audio, video, online, and print news. This has raised the specter of rampant misinformation and disinformation. The threat is amplified by broadcasters who use the same source to disseminate news to consumers: any bias in the original source is perpetuated by all secondary services, delivering a limited view of the news. However, researchers at EPFL’s Distributed Information Systems Laboratory (LSIR) in the School of Computer and Communication Sciences have developed an algorithm that can detect such biases and external influences in the news industry.
The research team—Jérémie Rappaz, Dylan Bourgeois, and Karl Aberer—is working with Swiss daily newspaper Le Temps to develop a web-based platform that will model news production around the world, bring transparency in news services, and thus increase public awareness about the threat of disinformation.
After feeding the algorithm about 500 million articles published by 8,000 different sources over the past three years, the LSIR team mapped the articles based on their similarities. Apart from classifying news services in terms of region and topic, the algorithm also detected the influence of media groups on the news they release. By associating a particular media group with its news content, the project aims to build awareness among people about such external influences that lead to a phenomenon called “media concentration.”
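As a rough illustration of the kind of similarity measure such a mapping can build on (the LSIR pipeline is far more sophisticated), articles can be compared as bags of words using cosine similarity; outlets republishing near-identical copy then score close to 1.0:

```python
# Rough sketch of a bag-of-words similarity (the LSIR pipeline is far
# more sophisticated): near-identical reprints score close to 1.0,
# unrelated stories close to 0.0.

import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


original  = "parliament passes new climate law after long debate"
reprint   = "parliament passes new climate law after long debate today"
unrelated = "local team wins the cup in dramatic final"

print(round(cosine_similarity(original, reprint), 2))    # 0.94
print(round(cosine_similarity(original, unrelated), 2))  # 0.0
```

Clustering articles by such pairwise scores is one simple way to surface groups of outlets that publish essentially the same content.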
Media Observatory is supported by the EPFL-based Initiative for Media Innovation (IMI). The online platform, to be launched next year, will be based on open-source technology and personalized algorithms. Users worldwide, including journalists and news consumers, can access an interactive map to identify patterns that influence the media industry and gain critical insights on how news is covered.
Posted on October 21, 2019
Anastasia Ailamaki, professor of Computer and Communication Sciences at EPFL and co-founder of RAW Labs SA, has been honored with the SIGMOD E.F. Codd Innovation Award. The award recognizes her “pioneering work on the architecture of database systems, its interaction with computer architecture, and scientific data management.” She joins a distinguished group of past awardees, all of whom are influential scientists in the field of database management.
The award, instituted in 1992 as the “SIGMOD Innovations Award,” was renamed in 2004 in honor of Dr. E.F. Codd (1923–2003) for his invention of the relational data model and significant role in developing database management as a scientific discipline. The award is an acknowledgment of Professor Ailamaki’s innovative, highly significant, and enduring contributions to the development, understanding, and use of database systems and databases. It adds to her bouquet of distinctions that include the EDBT Test of Time award (2019), the Nemitsas Prize in Computer Science (2018), ERC Consolidator Award (2013), the European Young Investigator Award from the European Science Foundation (2007), the Alfred P. Sloan Research Fellowship (2005), and ten best-paper awards in database, storage, and computer architecture conferences. She is also an ACM fellow, an IEEE fellow, and an elected member of the Swiss, Belgian, and Cypriot National Research Councils.
Professor Ailamaki received the award at the prestigious ACM SIGMOD conference held in Amsterdam between June 30 and July 5. The event is the foremost gathering of data management researchers from industry as well as academia.
Posted on October 9, 2019
The good old roll of the dice is the archetype of randomness. And then there are lottery drawings and competitions where the outcome depends on generating random numbers. However, verifiable randomness, or the lack of predictability, continues to be a deep-rooted problem in cryptography. The newly constituted League of Entropy, with EPFL as a founding member, has decided to tackle the problem head on.
The consortium also includes global organizations and individual contributors, such as web performance and security company Cloudflare, protocol and systems developer Protocol Labs (primarily researcher Nicolas Gailly), and the University of Chile. At EPFL, researchers Philipp Jovanovic and Ludovic Barman are chiefly involved in the project.
Many organizations have already developed publicly available randomness beacons, which are servers generating completely unpredictable 512-bit strings (about 155-digit numbers) at regular intervals. However, such single-source randomness has often led to biased results. In contrast, the League presents eight independent globally distributed beacons to guard the process against manipulation. Their network of servers runs a distributed randomness beacon software called drand, which originated from the Decentralized/Distributed Systems (DEDIS) lab at EPFL. It has now developed into a collaborative project across many organizations.
By ensuring properties like availability, unpredictability, unbiasability, and verifiability, drand generates publicly verifiable random values every 60 seconds, which translates to 1,440 fresh random values each day for users. Even if a few of the servers were compromised or unavailable, the remaining servers would continue to provide new, unbiasable, and unpredictable numbers.
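The property that no single server controls the output can be illustrated with a toy combiner (drand itself uses threshold cryptography, not this scheme): hash together independent contributions, and the combined value stays unpredictable as long as at least one contributor is honest and unpredictable.

```python
# Illustration only: drand uses threshold cryptography, but the core
# idea is that no single server controls the output. Hashing together
# independent contributions means any change to any contribution, or to
# the round number, completely changes the result.

import hashlib

def combine(round_number, contributions):
    """Derive one public random value from many server contributions."""
    h = hashlib.sha256()
    h.update(str(round_number).encode())
    for c in sorted(contributions):     # order-independent combination
        h.update(c)
    return h.hexdigest()


# Hypothetical contribution labels, for illustration only
servers = [b"epfl-node", b"cloudflare-node", b"uchile-node"]
value = combine(1, servers)

print(combine(1, servers) == value)                  # True: deterministic
print(combine(1, [b"evil"] + servers[1:]) != value)  # True: any change alters it
print(combine(2, servers) != value)                  # True: fresh every round
```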
The League is committed to ensuring user trust and providing end-to-end solutions for public entropy. It is working toward an even more secure and distributed system of generating randomness in readiness for the Internet of the future.
Posted on September 30, 2019
EPFL’s home-grown programming language Scala has won this year’s Programming Languages Software Award. The honor is awarded by ACM SIGPLAN each year to an individual or an institution to recognize the development of a software system that has had a significant impact on programming language research, implementations, and tools. Scala was originally developed by Professor Martin Odersky in 2004 at the School of Computer and Communication Sciences (IC). Professor Odersky now heads the Scala Center, an open-source foundation for the software based at EPFL.
The award was presented at the annual SIGPLAN PLDI (Programming Languages Design and Implementation) conference in Phoenix, Arizona, on June 24. The event was marked by the presence of several Scala collaborators and EPFL students.
Scala won the distinction because of its widespread acceptance by the programming language community. Among its early adopters were Twitter and LinkedIn, which helped Scala widen its reach. Today, Scala forms the basis of the popular Apache Spark data analytics platform. According to the SIGPLAN award citation, Scala has also served as the basis for research on metaprogramming, macros, staging, and embedded domain-specific languages, including DSLs for machine learning and GPU execution (Delite and OptiML).
Although Scala is co-owned by EPFL and the commercial Scala entity Lightbend, contributors around the world—including many companies, organizations, and individuals—continuously develop the code and push the frontiers of its language evolution.
Posted on July 29, 2019
The 2019 Spring Simulation Conference (SpringSim’19) concluded on May 2 in Tucson, Arizona. During the four-day event, many original papers were presented on the theory and practice of modeling and simulation in the scientific and engineering fields. The conference was especially significant for EPFL and EcoCloud because a paper co-authored by PhD scholar Yasir Mahmood Qureshi was selected for the “Runner-up Paper Award.”
In developing the paper “Gem5-X: A Gem5-Based System Level Simulation Framework to Optimize Many-Core Platforms,” Yasir Qureshi worked closely with co-authors William Andrew Simon, Marina Zapater, David Atienza (all from EPFL), and Katzalin Olcoz (Complutense University of Madrid). Addressing the two fundamental limitations of online services—power and latency—the paper presents gem5-X, a gem5-based system level simulation framework to optimize many-core systems. Gem5-X is a full-system simulation of ARM-64 in-order and Out-of-Order architectures running on a modern Linux OS.
The study showed that gem5-X can be used to identify bottlenecks and evaluate the potential benefits of architectural extensions such as in-cache computing and 3D-stacked High Bandwidth Memory (HBM). Using case studies of real-time video transcoding and image classification with convolutional neural networks (CNNs), the researchers achieved significant improvements in performance efficiency. They recorded a 15% speed-up using in-order cores with in-cache computing compared to a baseline in-order system, and 76% energy savings compared to an Out-of-Order system. With HBM, real-time transcoding and CNN inference were accelerated by a further 7% and 8%, respectively.
The gem5-X simulation framework could emerge as a major enabler for computer architects because it is open-sourced and accompanied by a technical whitepaper. It offers fast simulation of many-core ARM 64-bit architectures with innovative architectural extensions, and is available at esl.epfl.ch/gem5-x.
The eminent selection committee at SpringSim’19 selected the paper for the Runner-up Paper Award because it makes a major contribution to the state-of-the-art in modeling and simulation. The award highlights the work being done by EcoCloud and EPFL researchers on new open-source architectural simulation frameworks for ARM and RISC-V many-core systems.
Posted on July 22, 2019
Media coverage on the distant future of AI and machine learning has painted a scary picture of machines going berserk, rampaging killer robots, and rogue self-driving cars. Those ugly manifestations of machine learning are unlikely to go beyond fiction. But the dangers of machine learning can take, and already have taken, different routes. A couple of podcasts featuring El Mahdi El Mhamdi, PhD scholar at EPFL, shed important light on the dark side of AI—poisoned data sets, bad actors, AI-generated fake news, and the Byzantine problem—and on his work on technical AI safety and robustness in biological systems.
Both podcasts were recorded in January this year. The Practical AI podcast, hosted by Chris Benson (Chief AI Strategist at Lockheed Martin, RMS APA Innovations), was recorded during the Applied Machine Learning Days Conference in Lausanne, Switzerland. The AI Alignment Podcast of the Future of Life Institute was recorded during the Beneficial AGI conference in Puerto Rico.
El Mahdi El Mhamdi discusses fault tolerance, or the lack of it. Referring to the allegory of the three Byzantine generals, he explains ‘Byzantine fault,’ where components fail in a distributed computing system and there is imperfect information sharing and poisoning attacks.
He calls some “recommender systems” the “killer robots” of today. For instance, AI-backed search engines spread misconceptions about the “dangers” of vaccinations. As a result, vaccine-preventable diseases are reemerging and causing thousands of deaths, prompting the WHO to list “vaccine hesitancy” as one of the ten threats to global health.
El Mahdi El Mhamdi also points to the weakness of averaging gradients in machine learning, which can lead to completely skewed recommender systems. To address the problem, he is working with fellow researchers on systems that offer poisoning resilience and safe interruptibility. Building on the standard gradient descent protocol, they “derived a series of algorithms that behave like a median, and that provides guarantees that it is bounded in between a majority of points.” They have also developed a Byzantine-resistant version of TensorFlow (Google’s machine learning framework). The AI Alignment Podcast explains El Mahdi El Mhamdi’s work on Byzantine-resilient distributed machine learning, the difficulties along the way, and the importance of this line of research for long-term AI alignment.
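The median intuition can be sketched in a few lines (the published algorithms are more refined than a plain coordinate-wise median): a minority of poisoned gradients cannot drag a median-based aggregate arbitrarily far, whereas the average can be skewed without bound.

```python
# Sketch of the median intuition behind Byzantine-resilient aggregation
# (the published algorithms are more refined than a plain coordinate-wise
# median): one poisoned worker can drag the *average* gradient
# arbitrarily far, but not the median.

import statistics

def aggregate_mean(gradients):
    return [statistics.fmean(coord) for coord in zip(*gradients)]

def aggregate_median(gradients):
    return [statistics.median(coord) for coord in zip(*gradients)]


honest = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]    # all agree on ~[1, 2]
poisoned = honest + [[1000.0, -1000.0]]          # one Byzantine worker

print(aggregate_mean(poisoned))     # dragged far away from [1, 2]
print(aggregate_median(poisoned))   # stays near [1, 2]
```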
There are, perforce, limitations in computer science to tackle vulnerabilities in machine learning. After all, “Computationally, it’s way easier to be the poisoner.” However, El Mahdi El Mhamdi and his colleagues have successfully developed systems to improve security in AI and machine learning, and are working toward a future of technical AI safety.
Posted on June 11, 2019
Elison Matioli, assistant professor at EPFL’s Institute of Electrical Engineering and director of the POWERlab, has joined EcoCloud.
Prof. Matioli comes with rich and varied research experience. After obtaining a Bachelor’s degree in Applied Physics and Mathematics from Ecole polytechnique (Palaiseau, France) in 2002 and a B.Sc. in Electrical Engineering from Escola Politecnica, University of São Paulo (Brazil), he pursued his doctoral studies at the University of California, Santa Barbara (2006-2010). For his Ph.D. thesis, he worked on ‘Embedded Photonic Crystals for High-Efficiency GaN-Based Optoelectronic Devices.’ Over the next five years, Elison Matioli worked as a Postdoctoral Associate, first at UC Santa Barbara (2010) and then at the Massachusetts Institute of Technology (2010-14). He joined EPFL in 2015.
Through the years, Prof. Matioli has worked extensively on the conception of advanced semiconductor technologies for energy-efficient applications. More specifically, he has applied nanotechnology to semiconductors to demonstrate large-area nanostructured electronic and optoelectronic devices that outperform the state of the art.
His research work has been recognized on various platforms. In his years as a student, Elison Matioli received the Outstanding Graduate Student – Scientific Achievement Award at UC Santa Barbara (2009), Best Oral Presentation award at MIT/MTL Annual Research Conference (2012), and the George E. Smith Award from the IEEE Electron Device Society (2013). In December 2015, he received the ERC Starting Grant Award from the European Research Council (ERC) for his research on III-Nitride Nanostructures for energy-efficiency devices (In-Need).
EcoCloud welcomes Prof. Matioli to its fold. With his induction, EcoCloud adds a new dimension in the allied fields of materials science, applied physics, and electrical engineering.
Posted on May 27, 2019
Every weekday, avid followers of computer science wake up to a new writeup by the inimitable Adrian Colyer on his blog The Morning Paper. His insightful selections help bring practical ideas from academia to the computing practitioner. In a year, readers are exposed to concepts and ideas from more than 200 papers. In his latest post, Adrian Colyer presents a paper co-authored by Alexandros Daglis, Mark Sutherland, and EcoCloud Founder-Director Babak Falsafi.
The research (“RPCValet: NI-driven tail-aware balancing of µs-scale RPCs”) was conducted while Daglis and Sutherland were working on their PhD theses under the supervision of Prof. Falsafi. The authors present RPCValet, which operates at extremely low latencies and targets tail latency by minimizing the effects of queuing. As Adrian Colyer points out, the EPFL researchers offer “a glimpse of the limits for low-latency RPCs under load.” In other words, RPCValet operates close to the fundamental limits set by speed-of-light propagation, handling RPCs at the endpoints as soon as they are delivered from the network.
RPCValet is designed for “emerging architectures featuring fully integrated NIs and hardware-terminated transport protocols.” In terms of hardware features, the network interface has direct access to the server’s memory hierarchy, eliminating round trips over interconnects such as PCIe.
The authors discuss the implementation of RPCValet as an extension of the soNUMA architecture, including extension of the baseline protocol for native messaging and support for NI-driven load balancing. It works as a single-queue system bereft of the synchronization overheads generally incurred in single-queue implementations.
The Morning Paper considers the research an influential paper in the world of computer science, and with good reason. RPCValet is far more robust than current RPC load-balancing approaches, performing within 3–15% of the ideal single-queue system. It improves throughput under tight tail-latency goals by up to 1.4x and reduces tail latency before saturation by up to 4x.
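The single-queue advantage can be seen in a toy simulation (synthetic numbers, not the paper’s methodology or results): with statically assigned per-core queues, short requests stuck behind one slow request must wait for it, while a shared queue lets any idle core take the next request.

```python
# Toy illustration of why a single shared queue tames tail latency
# (synthetic numbers, not from the RPCValet paper).

import heapq

def completions(service_times, cores, shared_queue):
    """Completion time of each request; all requests arrive at time 0."""
    done = []
    if shared_queue:
        free = [0.0] * cores                 # next idle time of each core
        heapq.heapify(free)
        for s in service_times:
            t = heapq.heappop(free) + s      # any idle core takes the job
            heapq.heappush(free, t)
            done.append(t)
    else:
        free = [0.0] * cores
        for i, s in enumerate(service_times):
            core = i % cores                 # statically assigned queue
            free[core] += s
            done.append(free[core])
    return done


# 15 short requests with one slow request stuck in the middle
work = [1.0] * 7 + [50.0] + [1.0] * 8

shared = completions(work, 4, shared_queue=True)
static = completions(work, 4, shared_queue=False)

# Worst-case latency of the *short* requests only
short_tail = lambda done: max(t for t, s in zip(done, work) if s == 1.0)
print(short_tail(static))   # 53.0  (short requests wait behind the slow one)
print(short_tail(shared))   # 5.0   (idle cores drain the shared queue)
```

RPCValet’s contribution is to get this single-queue behavior without the synchronization overheads that usually come with a shared queue.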
Apart from writing The Morning Paper, Adrian Colyer is a Venture Partner at Accel Partners, London. He previously held CTO roles at SpringSource, VMware, and Pivotal. His widely read posts bridge the gap between deep academic research and real-world computer science applications.
Posted on May 20, 2019
Qualcomm Technologies has just announced four winners of the Qualcomm Innovation Fellowship (QIF) for 2019. Among them are Mario Paulo Drumond and Kaicheng Yu, students at EPFL’s School of Computer and Communication Sciences (EDIC). They have been recognized by Qualcomm for their outstanding research proposals on emerging technologies.
Mario Paulo Drumond is mentored by Babak Falsafi, Professor at EDIC and founding director of EcoCloud. He figures among the QIF winners for his proposal “ColTraIn: colocated deep learning training and inference,” in which he proposes an accelerator design for co-located training and inference. The most significant part of the research is the use of Block Floating Point (BFP) in tensor dot products, which account for most of the arithmetic in deep learning. The proposed design has two aspects: a shared exponent per tensor reduces memory-to-logic traffic and turns all tensor-product arithmetic into fixed-point for higher logic density, while using floating point for activations and the remaining calculations preserves accuracy.
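The shared-exponent idea can be sketched in software (a hedged illustration only; ColTraIn is a hardware design, and this code is not part of it): every value in a tensor shares one exponent, so the dot product runs entirely in integer arithmetic and is rescaled once at the end.

```python
# Hedged sketch of block floating point (illustrative Python, not the
# ColTraIn hardware): all values in a tensor share one exponent, so the
# dot product is pure integer arithmetic with a single final rescale.

import math

def to_bfp(values, mantissa_bits=8):
    """Quantize a tensor to integer mantissas plus one shared scale."""
    largest = max(abs(v) for v in values)
    exponent = math.frexp(largest)[1]        # exponent of largest value
    scale = 2.0 ** (exponent - mantissa_bits)
    return [round(v / scale) for v in values], scale

def bfp_dot(a, b):
    ma, sa = to_bfp(a)
    mb, sb = to_bfp(b)
    acc = sum(x * y for x, y in zip(ma, mb))  # pure integer arithmetic
    return acc * (sa * sb)                    # single rescale at the end


a = [0.5, -1.25, 2.0, 0.75]
b = [1.0, 0.5, -0.25, 2.0]
print(sum(x * y for x, y in zip(a, b)))   # 0.875 (exact dot product)
print(bfp_dot(a, b))                      # 0.875 (dyadic values quantize exactly)
```

Values that are not exact multiples of the shared scale incur a small quantization error, which is the accuracy/density trade-off the proposal manages by keeping activations in floating point.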
Mario conducts his research at EDIC as part of the Parallel Systems Architecture (PARSA) group. Apart from his research experience, he has published several papers and worked on teaching assignments. He has also completed internships at Microsoft Research (Redmond and Cambridge).
Kaicheng Yu is supervised by Mathieu Salzmann and Pascal Fua, experts in computer vision and machine learning. He was selected by Qualcomm for his proposal “Robust Neural Architecture Search with Soft Weight Sharing.” Conventional network designs use a simple heuristic in which different architectures are updated while sharing parameters. In contrast, Kaicheng introduces a new training scheme with soft weight sharing that enhances the use of neural architecture search (NAS) in deep learning. His proposal also presents a unified evaluation framework for assessing the ranking disorder of such algorithms.
QIF, now in its 8th year, is an annual program that focuses on recognizing, rewarding and mentoring the most innovative engineering PhD students across Europe, India, and the U.S. The awardees for 2019 were selected from diverse fields of research such as automated speech recognition, fingerprint recognition, 3D computer vision, and adversarial neural networks.
Posted on April 29, 2019
Google AI has announced the list of winners of its Faculty Research Awards (2018), and among them is Professor Volkan Cevher from the LIONS lab at EPFL. He has earned the distinction under the category “Machine learning and data mining” and is one of only two winners from Europe in that category.
Under the award program, Prof. Cevher’s research proposal “A Convex Optimization Perspective for GANs” will now receive “unrestricted gifts as support” from Google AI. The research calls attention to the over-reliance on heuristics and trial-and-error in training generative adversarial networks (GANs), which is causing GAN results to stagnate. Instead, Prof. Cevher’s work presents a novel algorithmic framework for GANs via an infinite-dimensional affine matrix game, which will not only address GAN optimization but also help attain the ‘mixed Nash equilibrium’ for the selected formulations. The proposed research further develops his earlier work on Langevin dynamics to improve the efficiency of training procedures, and outlines specific applications that could be built around the new approach to GAN formulations.
Apart from machine learning, Prof. Cevher’s other principal research interests are signal processing theory, convex optimization, and information theory. He has won several Best Paper awards at various forums, including IEEE Signal Processing Society (2016), CAMSAP (2015), and SPARS (2009). He has also won research grants from the European Research Council in 2011 and 2016.
Since 2005, Google’s Faculty Research Awards have helped foster new technologies worldwide by funding research in computer science, engineering, and allied fields. The awardees are selected through a rigorous and highly competitive process in which only 15% of applicants receive funding each year.
Posted on April 1, 2019
In 2009, EPFL professors Anastasia Ailamaki and Babak Falsafi collaborated with their doctoral and postdoctoral students to present Shore-MT, a scalable storage manager for the multicore era. A decade later, Shore-MT continues to be a robust open-source database storage manager preferred by many users worldwide. In recognition of its continued relevance and usage, the original research paper has been honored with the 2019 EDBT Test-of-Time Award.
Initiated in 2014, the Test-of-Time Award from EDBT (Extending Database Technology) is conferred on only one research work each year that is deemed to have had the biggest impact in terms of research, methodology, conceptual contribution, or transfer to practice since it appeared in the proceedings of EDBT.
The recognition of EPFL’s innovative data system Shore-MT is particularly rewarding because it is highly unusual for a systems paper to earn the award. Contrary to the usual winners from the theoretical domain, the paper on Shore-MT deals with “the implementation, the research questions and answers, and the new challenges ahead,” said Professor Ailamaki.
When multicore chips came to the fore years ago, they affected the internal scalability of database management systems (DBMS) optimized for operation with limited cores. Shore-MT was proposed by the EPFL researchers as a robust alternative to other established open-source storage managers such as Shore, BerkeleyDB, MySQL, and PostgreSQL. In comparison to its peers, Shore-MT exhibits superior scalability and 2-4 times higher absolute throughput.
The 2009 paper, presented at the 12th EDBT conference, not only showed the efficacy of scalability compared to single-thread performance, but also highlighted the principles for writing scalable storage engines with real examples from the development of Shore-MT. The fact that it is still being widely used as a research platform shows that it has truly survived the test of time. As rightly described by the EDBT committee members, the work “has catalyzed and enabled substantial follow-up research and has demonstrated its high relevance to industry.”
The 2009 paper (Shore-MT: A Scalable Storage Manager for the Multicore Era, EDBT 2009: 24-35) was authored by Ryan Johnson, Ippokratis Pandis, Nikos Hardavellas, Anastasia Ailamaki, and Babak Falsafi, whose work remains a lasting contribution in terms of methodology, impact, and influence.
Posted on March 25, 2019
The impact of scientific research findings remains limited unless they are disseminated among the research community as a whole. However, sharing research openly is not easy because of many cultural and technological barriers. In a bid to remove those impediments in the way of open research, EPFL President Martin Vetterli launched the Open Science Fund in September 2018.
The Fund has a dedicated corpus of CHF 3 million, which will be disbursed over the next three years. In its first call for proposals, the presidency received an overwhelming 50 submissions. Out of them, nine have been selected to receive funding for their open science projects, including two laboratories of the School of Computer and Communication Sciences (IC).
The Integrated Systems Laboratory (LSI), headed by Professor Giovanni de Micheli, was awarded funding for its open science project, “Promoting Open Benchmarks in Logic Synthesis.” The Open Logic Synthesis Libraries and Benchmarks are a collection of modular open source C++ libraries and benchmarks developed at LSI to improve optimization algorithms in the field of logic synthesis. The project aims to promote the research community’s adoption of benchmarking tools and open software libraries for reproducing and comparing performance between different technologies. The contact person for the project is Heinz Riener, a post-doc at LSI.
On the other hand, the winning proposal by the Distributed Information Systems Laboratory (LSIR) was for the project, “Evaluating the Quality of Science News Articles.” The laboratory, led by Professor Karl Aberer, will use the Open Science Fund to develop a platform called SciLens to verify the credibility of journalistic articles and social media content on scientific findings. SciLens automatically generates indicators to combat fake news by identifying news items that misrepresent the results of scientific studies.
The Open Science Fund initiative has been accompanied by other steps to widen the reach of research and make it more open. These include a dedicated web page providing information, news, and events related to open science at EPFL, and the organization of an Open Science Day on October 18 this year to mark EPFL’s 50th anniversary.
Posted on March 18, 2019
In a paper published earlier this month, a team of researchers from EPFL and IBM Research introduces SMoTher, a side channel induced by port contention. They show how it can be leveraged, in place of a cache-based side channel, in a powerful transient-execution attack, dubbed SMoTherSpectre, that leaks secrets held in registers or the closely coupled L1 cache.
The authors focus on ‘contention,’ in contrast to conventional research, which has produced a string of exploits leveraging caches. They demonstrate that, by leveraging contention, it is possible to detect a sequence as small as a single instruction that is tied at design time to a specific subset of ports.
The study dwells on Simultaneous Multi-Threading (SMT): threads with ready micro-ops may target the same port, and since they contend for that port in each cycle, a thread must wait whenever the contended port schedules a micro-op from another thread. That causes a detectable slowdown, sometimes up to 35% in the experiments conducted by the authors.
Since each instruction in a code sequence can be scheduled only on specific ports, the authors could create a port fingerprint for every sequence. By timing instructions specifically scheduled on these ports, an attacker can measure contention. SMoTherSpectre is a very powerful attack because the required gadgets are widely available: a BTI gadget (to trigger speculation) and a SMoTher gadget (to leak the secret).
The researchers have released the proof of concept to facilitate further research on SMoTher. They have also created a concept exploit for OpenSSL.
The authors of the study include EPFL scholars Atri Bhattacharyya, Babak Falsafi, and Mathias Payer, and IBM Research experts Alexandra Sandulescu, Matthias Neugschwandtner, Alessandro Sorniotti, and Anil Kurmus. While the full paper is available on arXiv, a summary of their findings is available in a blog post by the EPFL team. The research is a collaborative work of EPFL’s HexHive and PARSA labs, and IBM Research Zurich.
Posted on March 11, 2019
In 2016, Google created the Google Security and Privacy Research Awards as a pilot program. Since 2017, Google has made it much more broad-based to recognize researchers working on the next generation of security and privacy breakthroughs. The winners for 2018 have just been announced, and among them is Carmela Troncoso, tenure-track assistant professor in the EPFL School of Computer and Communication Sciences (IC).
As head of the Security and Privacy Engineering Laboratory (SPRING), Troncoso has earned the Google distinction for her work on digital privacy and the security of machine learning. This work is critical because it seeks to protect users from the downsides of machine learning and enable them to “fight back” against its pervasive collection of data. The SPRING lab works on tempering machine learning’s capability to amass and analyze too much user data and jeopardize privacy. To achieve this moderation, Troncoso and her colleagues are developing sets of modified data that can be introduced on social media platforms to prevent algorithms from gathering inappropriate information. The team is also developing “protective optimization technologies,” which tackle problems that the adoption of machine learning tools sometimes creates.
Living in the digital and machine-learning age need not imply that humans have to surrender their right to make decisions or retain privacy. This is the premise of Troncoso’s research.
Before this year’s announcement, Google had disbursed $1 million to 12 scholars for their work to improve online security and privacy. This year’s round adds Troncoso and six other winners to that list, each receiving about $75,000 in research funding. Troncoso plans to use the funds to develop open-access protective technologies, as well as privacy evaluation tools. In the process, she hopes to build a framework for testing and improving the security of machine-learning-based systems.
Posted on March 8, 2019
The 32nd Annual Conference on Neural Information Processing Systems (NeurIPS 2018) was held in Montreal between December 2 and 8. The proceedings brought together 8000 attendees and 1011 papers. It also included posters and workshops covering an array of algorithms, theories, experiments, and ideas presented by the crème de la crème of researchers on machine learning. Sifting through this massive database, the insightful platform Medium has shortlisted its influential list of papers and poster presentations. In the latter list is “Training DNNs with Hybrid Block Floating Point,” which was presented by EPFL researchers Mario Drumond, Tao Lin, Martin Jaggi, and Babak Falsafi.
Based on the NeurIPS 2018 poster sessions, Medium has prepared its list of influential contributions under five broad research categories: Understanding, Essentials, Progress, Big problems, and Future. The EPFL quartet is included under “Essentials” because their research deals with the critical aspect of training DNNs. While some data center operators employ densely packed full-precision floating-point arithmetic, others opt for fixed-point arithmetic to maximize performance density. Alternatively, block floating point (BFP) could be a viable option since it offers a wide dynamic range and enables most DNN operations to be performed with fixed-point logic. However, applying BFP directly to DNN training degrades accuracy, limiting its applicability. To address this problem, the EPFL researchers have proposed a hybrid BFP-FP approach, which delivers the best of both worlds: the high accuracy of floating point at the superior hardware density of fixed point.
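The core BFP idea can be illustrated with a minimal numpy sketch (a generic illustration of block floating point, not the authors’ hybrid scheme): a whole block of values shares a single exponent, while each value keeps only a small fixed-point mantissa.

```python
import numpy as np

def to_bfp(values, mantissa_bits=8):
    """Quantize a block of floats to block floating point (BFP):
    one shared exponent, small fixed-point mantissas."""
    max_val = np.max(np.abs(values))
    # Shared exponent taken from the largest magnitude in the block.
    shared_exp = int(np.floor(np.log2(max_val))) if max_val > 0 else 0
    scale = 2.0 ** (shared_exp - (mantissa_bits - 2))
    lim = 2 ** (mantissa_bits - 1)
    mantissas = np.clip(np.round(values / scale), -lim, lim - 1).astype(np.int64)
    return mantissas, scale

def from_bfp(mantissas, scale):
    return mantissas.astype(np.float64) * scale

block = np.array([0.5, -0.25, 0.125, 0.0625])
mantissas, scale = to_bfp(block)
approx = from_bfp(mantissas, scale)  # exactly recovers these power-of-two values
```

Because arithmetic on the mantissas within a block needs only fixed-point logic, BFP hardware can approach fixed-point density, while the shared exponent preserves dynamic range.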
Mario Drumond is Doctoral Assistant, Tao Lin is pursuing his doctoral program, Martin Jaggi is Tenure Track Assistant Professor, and Babak Falsafi is Full Professor. They are attached to EPFL’s School of Computer and Communication Sciences.
NeurIPS is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia, and oral and poster presentations of refereed papers. NeurIPS 2019 will be held in Vancouver.
Posted on March 4, 2019
The 25th International Symposium on High-Performance Computer Architecture was held in Washington, D.C., between February 16 and 20. Over the years, HPCA symposia have enabled scientists and engineers to present their latest findings in this dynamic field of research. The forum has also recognized outstanding work in the form of Best Paper and Test of Time awards. At HPCA 2019, the Best Paper Award went to a paper that introduced a simple ROB partitioning scheme called “Stretch.” The research was conducted by EPFL scholar Siddharth Gupta along with his coauthors Artemiy Margaritov, Rekai Gonzalez-Alberquilla, and Boris Grot.
Siddharth Gupta is pursuing his doctoral program at the School of Computer and Communication Sciences (IC) under the supervision of Professor Babak Falsafi, founding director of the EcoCloud research center. Siddharth’s special area of interest is systems and interdisciplinary systems problems in modern, large-scale datacenters. His current research focuses on providing architectural support for high-performance durable transactions with persistent memory. The award-winning paper stems from that focal area of his research engagement.
The research caught the eye of decision-makers at HPCA 2019 because it tackles a central dilemma of datacenter efficiency: how to maximize performance per total-cost-of-ownership (TCO) dollar. To strike that balance, modern datacenters are moving toward aggressive colocation of latency-sensitive and batch workloads. Colocation, however, degrades single-thread performance, which adversely affects quality of service (QoS). Siddharth and his co-researchers propose Stretch as a mechanism to boost the performance of batch workloads co-running with latency-sensitive services, thus balancing QoS and throughput for colocated server workloads on SMT cores.
The recognition at HPCA 2019, together with his teaching assignments on computer architecture, operating systems, and operating system implementation at IC, will undoubtedly reinforce Siddharth’s future research goals at, and beyond, EPFL.
Posted on February 11, 2019
Data centers are taking on huge workloads, including deep neural networks, data analytics, and video streaming. Even the most robust CPU- and GPU-based architectures struggle to keep up with today’s demanding computing environment. The current trend is therefore to turn to a different class of accelerator, the Field-Programmable Gate Array (FPGA), which offers superior energy efficiency. Commercial behemoths like Intel, Amazon, and Microsoft have added FPGAs to their data centers through acquisitions and system deployments. But are FPGAs safe from security attacks? If not, how can such attacks be tackled? A fresh research proposal by EPFL’s Mirjana Stojilovic seeks to address these and related concerns regarding FPGAs.
As a Scientist in the School of Computer and Communication Sciences, Mirjana has worked extensively on the susceptibility of FPGAs to various types of attacks that compromise security. In a cloud computing environment, where multitenancy is the norm, such attacks could have a significant impact on data security. In her research project, Mirjana takes a direct approach to the security hazards of deploying FPGAs for datacenter acceleration. These include denial-of-service (DoS) attacks (wherein an apparently valid design is downloaded and used to reset an FPGA or render it unresponsive), side-channel attacks (which steal secret information), and attacks that inject computational errors. The objective of the research is to propose tailored countermeasures that detect malicious attacks and carry out corrective and preventive steps to avoid functional impairment.
The research is important because very few studies have highlighted the security risks in using FPGAs in commercial cloud computing. With the increasing preference for FPGAs over GPUs, a better understanding of the security risks of FPGAs, and their countermeasures, will go a long way in enhancing the security environment in the cloud.
Posted on February 4, 2019
The collaborative engagement between Microsoft and EPFL goes back to 2008 when they came together, along with ETH Zurich, for the Microsoft Innovation Cluster for Embedded Software (ICES). That relationship has matured through the years with various phases of the Swiss Joint Research Center (JRC) projects. In the first two phases (2014-18), the Swiss JRC supported nine EPFL projects. After reviewing and ranking 29 proposals for phase III, including 13 from EPFL, the JRC has now confirmed nine proposals. Three of them are from EPFL, including two projects submitted by EcoCloud faculty.
The new research projects will be introduced by the Principal Investigators (PIs) at the 6th annual workshop of the Swiss JRC (January 31-February 1). The EPFL PIs for the proposal ‘Monitoring, Modelling, and Modifying Dietary Habits and Nutrition Based on Large-Scale Digital Traces’ are Robert West, Arnaud Chiolero, and Magali Rios-Leyvraz. The project will revolve around three sets of research questions: monitoring and modeling, quantifying and correcting biases, and modifying dietary habits.
Marios Kogias and Edouard Bugnion will introduce the project ‘TTL-MSR: Taming Tail-Latency for Microsecond-scale RPCs’ in their role as PIs for EPFL. The research proposes to make Remote Procedure Calls (RPCs) the “first-class citizens” of datacenter deployment by reorienting the overall architecture, application API, and network protocols involved. The project is based on a new RPC-oriented protocol called R2P2, which separates control flow from data flow and provides in-network scheduling opportunities to tame tail latency.
The third confirmed Swiss JRC project from EPFL is ‘Hands in Contact for Augmented Reality’ with Pascal Fua, Mathieu Salzmann, and Helge Rhodin as the PIs. Along with the PIs for Microsoft Research, they will work on accurately capturing the interaction between hands and objects they touch and manipulate. This is crucial for accurately modeling the world in which we live.
After the conclusion of Phase II of the Swiss JRC in 2018, the renewal of the association was announced last summer for five years, through 2022. The new round of projects is now raring to go, continuing the decade-long rich tradition of the Microsoft-EPFL-ETH Zurich research collaboration.
Posted on January 28, 2019
For four days (January 26-29), some of the best minds on Machine Learning and Artificial Intelligence congregated for the Applied Machine Learning Days (AMLD) conference at the SwissTech Convention Center at EPFL, Lausanne. With EPFL being the principal organizer of the event, professors Marcel Salathé, Martin Jaggi, and Bob West played a stellar role in the conduct of the event. AMLD2019 included talks, tutorials, and workshops, but it will be best remembered for introducing 16 different “AI & your domain” tracks, which featured talks by domain experts and interesting panels.
One of those sessions focused on AI & Computer Systems. Co-organized by EcoCloud Director Babak Falsafi and PhD student Mario Drumond, the session began with a presentation on ‘Value-Based Deep Learning Hardware Acceleration’ by University of Toronto Professor Andreas Moshovos. He has worked extensively on designs that offer a range of effective choices in terms of area cost, energy efficiency, and relative performance when embedded in server-class installations. The next talk, on ‘Catapult and Brainwave: Powering Microsoft’s Configurable Intelligent Cloud,’ was delivered by Michael Papamichael, Researcher in Microsoft Research’s ‘Project Catapult,’ an enterprise-level initiative in cloud computing. The third speaker was Hadi Esmaeilzadeh, professor at the University of California, San Diego, who is currently involved in developing new technologies and cross-stack solutions to build the next generation of computer systems. Thereafter, the three speakers participated in a panel discussion on their research themes.
The penultimate speaker of the session was Kevin Smeyers, Evolutionary Architect at ToThePoint. His paper titled ‘ToTheArcade: IoT and Machine Learning, a match made in heaven, a gamified PoC’ presented his experiments in combining machine learning with IoT. The session concluded with a presentation by IBM Developer Advocate Svetlana Levitan on ‘Defending deep learning from adversarial attacks.’ She has been at the forefront of many statistical and machine learning implementations and is currently representing IBM in the Data Mining Group.
The AI & Computer Systems track brought to the fore many new findings in the interplay between computers and Artificial Intelligence and, together with the other domain tracks at AMLD2019, redeemed the pledge of a dedicated organizational team at EPFL to push further in the realm of machine learning.
Posted on January 21, 2019
Teaching is an art, and not all teachers are blessed with that skill. It is one thing to deliver lectures to a classroom, and quite another to connect with the students in that classroom. Katerina Argyraki, Tenure Track Assistant Professor at EPFL’s School of Computer and Communication Sciences, clearly belongs to the rarer category of teachers who believe in understanding students’ aptitudes and tailoring lessons accordingly. It is, therefore, not at all surprising that she was recently chosen as ‘best teacher.’
Professor Argyraki received her MS and PhD in Electrical Engineering from Stanford University and joined EPFL in 2007. Since then, she has presented numerous research papers at conferences and workshops. In her six years of teaching, she has supervised many past and present PhD students in the core area of network neutrality and transparency. But what sets her apart is her deep understanding of how her course is being received by students. She derives instant gratification when students give positive feedback, and that comes from her innate ability to spark spontaneous interest in complex concepts. In her own words, the secret to her rapport with students is to encourage out-of-the-box thinking instead of focusing on theoretical knowledge.
Professor Argyraki has no hesitation in acknowledging the role of her mother—a philologist and high-school teacher—in shaping her outlook toward the teaching profession. She has imbibed her mother’s traits of maintaining visual contact with students and adding value to the time they invest in class. She interacts with students at a pace that doesn’t make lessons stressful, and conducts recap sessions to ensure that students retain past lessons. Her prime goal is to inculcate creativity and independence in her students.
Teaching skills need to be reinvented almost constantly, and the best way to achieve that is to interact with fellow teachers. In that aspect, Professor Argyraki is fortunate to have her husband George Candea as a teacher in the same school. They not only discuss their profession regularly, but also share a Master’s class on the principles of computer systems.
If one were to search for the underpinnings of her success as a teacher, it would be her desire to explore the internal workings of the Internet. That has helped Professor Argyraki strike the right notes at the right pace in her teaching career.
Posted on January 13, 2019
In early September, scientists, researchers, and industry leaders assembled in Rome for the 26th European Signal Processing Conference (EUSIPCO 2018). This year, the conference received 869 submissions, of which about 550 were accepted. From those hundreds of important research papers, the reviewers finally selected the winner of the EURASIP Best Student Paper Award. The authors of the winning paper are Mira Rizkallah (INRIA, visiting scholar at EPFL), Francesca De Simone (post-doctoral fellow at EPFL), Thomas Maugey (INRIA), Christine Guillemot (INRIA), and Pascal Frossard (Associate Professor, EPFL).
Their paper, titled “Rate Distortion Optimized Graph Partitioning for Omnidirectional Image Coding,” proposes a graph-based representation for omnidirectional images, which are widely used in virtual reality and immersive communications. The graph-based coder built by the researchers takes account of the spherical geometry and provides a flexible way to store and compress the visual data efficiently. Unlike coding schemes in which distortion is very difficult to control, the efficient graph partitioning strategy proposed in the paper optimizes the smoothness of the signals on the subgraphs. The study also proposes a complete GFT-based lossy compression scheme and compares its performance with classical DCT-based JPEG coding. Test results confirmed that the partitioning provides an effective tradeoff between the smoothness of signals on the subgraphs and the cost of coding the partition.
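The idea of coding a signal in a graph Fourier transform (GFT) basis can be sketched with a toy example (a generic illustration on a tiny graph, not the paper’s spherical construction): the eigenvectors of the graph Laplacian play the role of frequencies, and smooth signals compress well because their energy concentrates in the low-frequency coefficients.

```python
import numpy as np

# Toy graph of 4 connected pixels (a path); adjacency encodes neighbors.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # combinatorial graph Laplacian
_, U = np.linalg.eigh(L)             # GFT basis: Laplacian eigenvectors

signal = np.array([1.0, 1.1, 0.9, 1.0])   # a smooth signal on the graph
coeffs = U.T @ signal                      # forward GFT
# Smooth signals concentrate energy in low-frequency coefficients,
# so small high-frequency coefficients can be dropped for compression.
coeffs[np.abs(coeffs) < 0.1] = 0.0
reconstructed = U @ coeffs                 # inverse GFT
```

Partitioning the graph, as in the paper, keeps the signal on each subgraph smooth so that this kind of transform coding stays cheap and accurate.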
Mira Rizkallah worked on graph-based compression of omnidirectional images during her stint at EPFL (Oct-Dec 2017) under the supervision of Prof. Pascal Frossard and Dr. Francesca De Simone, both of whom are co-authors of the award-winning paper. She is currently a PhD candidate and researcher at INRIA, France. Her research areas include graph signal processing, sparse representations, and coding of multiview images and videos using graph-based representations (GBR). She is working on her thesis, “Multiview Video Coding using Graph-Based Representations,” under the supervision of Prof. Christine Guillemot and Dr. Thomas Maugey, who also worked on the award-winning paper.
EUSIPCO 2018 was organized by Roma Tre University. It featured inspiring plenary talks as well as tutorials on emerging topics in the field of signal processing, and a high-level technical program.
Posted on December 10, 2018
Browsing websites is not without perils. With each visit, you leave behind personal data that a website might store and even use to its advantage. Data protection policies posted on websites are meant to make visitors aware of the danger, but the policies are either wrapped in incomprehensible legalese or couched in seemingly innocuous generic terms that increase ambiguity about what a website does with your personal data. In February this year, researchers at EPFL launched an AI-backed program called Polisis that makes life simpler by automatically scanning thousands of websites and generating an accurate, intelligible summary of their data protection policies in a matter of seconds. A few months down the line, the unique program has attracted more than a score of licensing requests from all over the world.
The spurt of requests may have been partially triggered by the EU’s General Data Protection Regulation (GDPR), which took effect in May and has made customers wary about sharing personal data on websites. Speaking on the success of Polisis, Hamza Harkous confirmed that EPFL’s Technology Transfer Office (TTO) has received more than twenty license requests from companies offering data protection and data monetization services, lawyers in the process of drafting data protection policies, and advertisers keen to comply with the new regulatory requirements enforced by the EU. So far, Polisis has avoided entering into exclusivity agreements. However, it has made an exception in the case of U.S. search engine DuckDuckGo because the search engine has a proven track record of protecting personal data and not storing any personal information about users. Hitherto, DuckDuckGo relied on its Privacy Essentials extension, which generated a summary of key data protection information based on the policies of just a limited number of websites. Going forward, Polisis’ algorithms would enable the search engine to generate summaries based on thousands of websites.
Polisis is not, however, an overnight success story. Hamza Harkous and his team spent many painstaking hours over a period of 18 months to arrive at the end product. Since then, it has been tested by more than 30,000 enthusiasts.
Its popularity has been driven in great measure by the fact that it is immensely user-friendly. It has left all its competitors far behind because it is the only program that offers automatically generated summaries of how websites handle personal data. Today, it is helping users make data protection policies lucid and transparent, and thereby take necessary precautions while visiting websites. That is surely a major contribution in today’s world of data thefts and privacy concerns.
Posted on November 19, 2018
In early July this year, the Board of the Swiss Federal Institutes of Technology appointed Mathias Payer as Tenure Track Assistant Professor in EPFL’s School of Computer and Communication Sciences. In a later development, Prof. Payer agreed to become a member of EcoCloud and share his expertise in protecting computer systems from malicious attacks.
EcoCloud is at the forefront of innovation in cloud computing technologies, which accentuates the value of collaborations and interactions with experts like Prof. Payer. He brings a strong commitment to research and teaching, which will strengthen EcoCloud’s resolve in meeting and managing the current challenges to IT security.
Mathias Payer completed his D.Sc. at ETH Zurich in 2012 and joined the BitBlaze group at UC Berkeley as a post-doctoral scholar. Before joining EPFL, he was Assistant Professor in Computer Science at Purdue University (2014-18), where he mentored many Ph.D. students. His research on software security and system security has resulted in several publications, some of which went on to receive Best Paper awards at academic forums.
In his current position at EPFL’s School of Computer and Communication Sciences, Prof. Payer heads the HexHive group. His interests in software and system security, binary exploitation, sanitization, and fault isolation are in sync with EcoCloud’s objectives of delivering sustainable cloud computing solutions.
EcoCloud welcomes Prof. Payer to its fold and hopes that his skills will add new dimensions to the research center’s established goals.
Posted on November 8, 2018
The prestigious MICRO Test of Time (ToT) Award is an annual feature at the IEEE/ACM International Symposium on Microarchitecture. This year was the 51st edition of the conference, held between October 20 and 24 in Fukuoka City, Japan. In the course of the conference, the Awards Committee named Thomas Ball and James R. Larus as the winners of the fifth MICRO Test of Time Award. That is an honor for EPFL as well; Professor Larus is Dean of the School of Computer and Communication Sciences (IC).
The MICRO Test of Time award recognizes the most influential papers published in prior editions of the international symposium. Each award-winning paper has had a significant impact on research in the concerned field. Ball and Larus won the distinction for their paper titled Efficient Path Profiling, which was published in MICRO 29 (1996). Their research was chosen from amongst more than 150 eligible papers that were nominated or shortlisted based on recommendations by members of the computer architecture community. All of them were published between 1996 and 2000.
The paper addressed the problem that basic block and edge profiles often predict the frequencies of overlapping paths inaccurately. Such disparities were often ignored on the assumption that accurate path profiling must be far more expensive than basic block or edge profiling. Ball and Larus dispelled this assumption and presented a novel, efficient technique for path profiling. The algorithm they developed opened new avenues for program optimization, performance tuning, and software test coverage. Consequently, it has found wide acceptance among profile-driven compiler frameworks, as noted by Thomas Ball. Thus, the paper satisfied the ToT criterion of having influence 18-22 years after its initial publication.
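The heart of the technique is a clever numbering: edge increments are assigned so that summing them along any acyclic path through the control-flow graph yields a unique, compact path index. A small Python sketch of that numbering step (simplified to a DAG, ignoring the paper’s handling of loops and of instrumentation placement):

```python
def ball_larus_numbering(succ, entry):
    """Assign edge increments so that summing increments along any
    entry-to-exit path in a DAG yields a unique path index
    (the core numbering step of Ball-Larus path profiling)."""
    num_paths = {}   # node -> number of paths from node to exit
    inc = {}         # edge -> increment added when the edge is taken
    def visit(v):
        if v in num_paths:
            return num_paths[v]
        outs = succ.get(v, [])
        if not outs:                 # exit node: exactly one path
            num_paths[v] = 1
        else:
            total = 0
            for w in outs:
                inc[(v, w)] = total  # increment = paths already numbered
                total += visit(w)
            num_paths[v] = total
        return num_paths[v]
    visit(entry)
    return num_paths, inc

# Diamond CFG: A -> {B, C} -> D
succ = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
paths, inc = ball_larus_numbering(succ, 'A')
# paths['A'] == 2; path A-B-D sums to index 0, path A-C-D to index 1
```

At run time, the instrumented program only adds the edge increments into a register and bumps one counter per resulting index, which is why the technique is so cheap.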
Expressing his happiness on receiving the award, Professor Larus fondly referred to the publication as his “favorite paper” in which “all of the pieces fell together and the end result is very satisfying.”
Before his position at EPFL, Professor Larus was a researcher, manager, and director in Microsoft Research for over 16 years and an assistant and associate professor in the Computer Sciences Department at the University of Wisconsin, Madison. He has published more than 100 papers (including 9 best and most influential paper awards), received 30 US patents, and served on numerous program committees and panels.
Posted on October 22, 2018
Machine learning has become ubiquitous today with applications ranging from accurate diagnosis of skin cancers and cardiac arrhythmia to recommendations on streaming channels and gaming. However, in the distributed machine learning scheme, what if one ‘worker’ or ‘peer’ is compromised? How can the aggregation system be resilient to the presence of such an adversary?
Although a few solutions exist to make machine learning robust and efficient in the face of adversarial behavior, their success is limited. To tackle this problem, EPFL’s Rachid Guerraoui, Full Professor at the School of Computer and Communication Sciences, has proposed a new research project to account for all kinds of adversarial behavior and build practical, robust distributed learning solutions.
The research stems from Prof. Guerraoui’s past studies on adversarial (Byzantine) behavior. He has authored several papers on distributed machine learning and developed schemes that are resilient to malfunctions in both worker-server and peer-to-peer implementations. Two solutions introduced by Prof. Guerraoui and colleagues are Krum, an update aggregation rule that guarantees convergence despite Byzantine workers, and Bulyan, which hardens existing aggregation rules against attacks that would otherwise prevent convergence.
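For a flavor of how such aggregation rules work, here is a minimal numpy sketch of the Krum selection rule as described in the literature (variable names and the toy data are ours): among n worker updates with at most f Byzantine ones, Krum picks the update whose summed squared distance to its n − f − 2 nearest other updates is smallest.

```python
import numpy as np

def krum(updates, f):
    """Krum: pick the worker update with the smallest sum of squared
    distances to its n - f - 2 closest other updates."""
    n = len(updates)
    scores = []
    for i in range(n):
        dists = sorted(
            float(np.sum((updates[i] - updates[j]) ** 2))
            for j in range(n) if j != i
        )
        scores.append(sum(dists[: n - f - 2]))
    return updates[int(np.argmin(scores))]

# Five honest gradients near [1, 1]; one Byzantine outlier.
rng = np.random.default_rng(0)
good = [np.array([1.0, 1.0]) + 0.01 * rng.standard_normal(2) for _ in range(5)]
byzantine = [np.array([100.0, -100.0])]
chosen = krum(good + byzantine, f=1)
# Krum selects one of the honest updates, ignoring the outlier.
```

An outlier sits far from every honest update, so its score is enormous and it can never be selected, which is what makes the rule Byzantine-resilient.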
Apart from distributed machine learning, Prof. Guerraoui has worked extensively on secure distributed storage, transactional shared memory and distributed programming languages. He has also co-authored a book on Transactional Systems (Hermes) and another on reliable distributed programming (Springer).
Posted on September 24, 2018
Martin Jaggi, Tenure Track Assistant Professor at EPFL’s School of Computer and Communication Sciences, has won a Google Focused Research Award for 2018 in the area of Machine Learning. The award-winning proposal, “Large-Scale Optimization: Beyond Convexity,” was submitted jointly with Alexandre d’Aspremont and Francis Bach.
The project proposes convergence acceleration techniques for solving generic optimization problems, including those arising in deep learning. This is of immense relevance today because of the sheer number and complexity of deep learning applications. In their study, Martin Jaggi and his coauthors propose an approach that tackles non-convex problems and deep neural networks at reduced implementation cost: the complexity overhead is much smaller than that of the original training algorithms, and the proposed scheme allows existing methods, such as neural network training software, to be reused. Their approach to accelerated and distributed training could become a core component of modern deep neural network training pipelines.
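To give a flavor of acceleration wrappers that reuse an existing optimizer (a generic extrapolation sketch under our own simplifying assumptions, not the authors’ algorithm), one can combine the last few iterates of plain gradient descent into a better point:

```python
import numpy as np

def extrapolate(iterates, reg=1e-10):
    """Combine recent iterates of any optimizer into a better point by
    finding affine weights that minimize the combined residual
    (a regularized nonlinear-acceleration-style step)."""
    X = np.array(iterates)
    R = np.diff(X, axis=0)                 # residuals between iterates
    G = R @ R.T + reg * np.eye(len(R))     # regularized Gram matrix
    w = np.linalg.solve(G, np.ones(len(R)))
    w /= w.sum()                           # affine weights (sum to 1)
    return w @ X[:-1]

# Plain gradient descent on a quadratic; acceleration reuses its iterates.
A = np.diag([1.0, 10.0])
x = np.array([1.0, 1.0])
iterates = [x.copy()]
for _ in range(6):
    x = x - 0.05 * (A @ x)                 # ordinary gradient step
    iterates.append(x.copy())
x_acc = extrapolate(iterates)
# x_acc lands far closer to the optimum (0, 0) than the last plain iterate.
```

The appeal of this style of acceleration, echoed in the project description, is that the base optimizer is untouched: the wrapper only post-processes iterates it already produces.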
The research is of crucial interest to Google’s continuous commitment to back innovative research in computer science and engineering. To further that commitment, Google instituted the Focused Research Awards program in 2010. Since then, the program has supported collaborations in more than twenty key research areas that are of interest to both the academic community and Google. They include Machine Learning, Artificial Intelligence, Algorithms, Cloud Computing, Geomapping, and Networking.
Martin Jaggi’s core areas of expertise are machine learning, optimization algorithms for learning systems, and text understanding. Before joining EPFL, he completed his Ph.D. on Learning and Optimization from ETH Zurich and worked as a post-doctoral researcher at ETH Zurich, at the Simons Institute in Berkeley, US, and at Ecole Polytechnique in Paris, France.
Posted on September 17, 2018
In about two months’ time, participants will assemble in Seattle for the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (ACM SIGSPATIAL 2018). Apart from the academic discourses that will take place at the four-day conference (November 6-9), the event is also of particular interest for EPFL because two of its outstanding researchers will be awarded the Best Paper Award for their contribution to the previous edition of the annual event.
Mirjana Pavlovic, a senior Ph.D. student, and Anastasia Ailamaki, Professor and Director of DIAS Lab at EPFL, won the distinction for their paper “Dictionary Compression in Point Cloud Data Management.” Their co-authors were Kai-Niklas Bastian and Hinnerk Gildhoff from SAP SE.
In the paper, Pavlovic and her co-authors propose a time- and space-efficient solution for storing and managing point cloud data in a main-memory column-store DBMS. This could fill a long-felt gap in the management of point cloud data. It is extremely relevant in a world where the volume of point cloud data is growing rapidly thanks to the advanced data acquisition and processing technologies now available to data scientists. Conventional solutions are unable to handle this massive volume of point cloud data. In contrast, Space-Filling Curve Dictionary-Based Compression (SFC-DBC), the solution developed by the researchers, offers efficient query execution without putting extra pressure on storage resources: it minimizes the storage footprint and increases resilience to skew. The team evaluated the performance of SFC-DBC in the context of SAP HANA, a database management system developed by SAP SE. The results were extremely encouraging; compared to existing solutions, SFC-DBC fared 61% better in terms of space and up to 9.4x better in terms of query performance.
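A space-filling-curve key of the kind SFC-based schemes build on can be illustrated with a Z-order (Morton) code, which interleaves the bits of the coordinates so that spatially close points tend to get numerically close keys (a generic sketch of the space-filling-curve idea, not SAP HANA’s implementation):

```python
def morton_encode(x, y, bits=16):
    """Interleave the bits of x and y into a Z-order (Morton) code:
    bit i of x lands at position 2*i, bit i of y at position 2*i + 1."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

# binary x=011, y=101 -> interleaved bits 100111 = 39
key = morton_encode(3, 5)
```

Sorting points by such keys clusters spatially close points together, which is what makes dictionary compression and spatial range queries over the sorted keys efficient.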
Lead author Mirjana Pavlovic will be in Seattle to receive the meritorious award. Each year, the ACM SIGSPATIAL conference brings together researchers, developers, users, and practitioners to foster interdisciplinary research on geographic information systems.
Posted on September 10, 2018
The Dimitris N. Chorafas Foundation recognizes outstanding scientific work in selected fields in engineering sciences, medicine, and natural sciences. The winners are chosen each year from among the select list of graduating doctorate students submitted by the Foundation’s partner universities in Europe, North America, and Asia. One of this year’s awardees is Manos Karpathiotakis, who completed his PhD at EPFL’s Data-Intensive Applications and Systems (DIAS) Laboratory in 2017 and is currently a scientist at the laboratory.
The Foundation conferred the award on Manos Karpathiotakis for his groundbreaking research on just-in-time data management, which combines algorithmic innovation with substantial software-system contributions and enables voluminous, heterogeneous data to be analyzed efficiently for useful insights.
In 2017, Manos worked with Lab Director Anastasia Ailamaki to develop a thesis on data virtualization by designing and implementing systems that i) mask heterogeneity through the use of heterogeneity-aware, high-level building blocks and ii) offer fast responses through on-demand adaptation techniques. For the high-level building blocks, the researchers used a query language and algebra to handle multiple collection types, express transformations between these collection types, and express complex data cleaning tasks over them. In earlier research, Manos Karpathiotakis and his co-authors proposed data management with ViDa, a system that reads data in its raw format and processes queries using adaptive, just-in-time operators.
Manos Karpathiotakis, who is currently a research scientist at Facebook, has expertise in wide-ranging fields such as data management, query processing, spatial databases, geographic information systems, the semantic web, and linked data.
The Chorafas Foundation has been promoting excellence in scientific research since 1992. It screens many scientific studies and finally awards annual prizes of $5,000 each to exceptional doctoral students in each partner university.
Posted on September 3, 2018
In a press release last month, the Takis & Louki Nemitsas Foundation announced the selection of Anastasia Ailamaki, Professor and Director at EPFL’s Data-Intensive Applications and Systems Laboratory, as the Laureate of the NEMITSAS Prize 2018 in Computer Science.
The decision to honor Professor Ailamaki was made by an International Experts’ Committee, which included Prof. Joseph Sifakis (Université Grenoble Alpes, France), Prof. Tony Hey (Chief Data Scientist at Science and Technology Facilities Council, UK), and Prof. Constantinos Daskalakis (MIT, USA). It was unanimously approved by the foundation’s Board of Directors.
The Nemitsas Foundation was established in 2009 to honor Cypriot scientists who excel, whether in Cyprus or abroad, through their inventions and discoveries. This year, the selected stream was Computer Science.
The foundation invited nominations and applications from computer scientists worldwide and submitted the candidatures to the Experts’ Committee for evaluation in June. The Committee selected Professor Ailamaki for her outstanding work through the years in data-intensive systems and large-scale scientific and business applications. The citation reads:
“It has been decided to propose Ms. Anastasia Ailamaki, Professor at EPFL, for the 2018 Nemitsas Prize in Computer Science for her numerous seminal contributions at the intersection of computer architecture and database systems, showing how the design of modern processors impacts the performance of database systems.”
Professor Ailamaki will receive the award on October 4 in a special ceremony at the Presidential Palace in Cyprus. It will be presented in person by President Nicos Anastasiades.
The award is an important addition to Professor Ailamaki’s long list of distinctions, which include an ERC Consolidator Award (2013), a Finmeccanica endowed chair from the Computer Science Department at Carnegie Mellon (2007), a European Young Investigator Award from the European Science Foundation (2007), an Alfred P. Sloan Research Fellowship (2005), an NSF CAREER award (2002), and ten best-paper awards in scientific conferences.
Posted on July 24, 2018
The 48th International Conference on Dependable Systems and Networks (DSN-2018) was held in Luxembourg City. The four-day event (June 25-28) saw thematic workshops and a series of more than 60 presentations by scholars in the realms of dependability and security research, fields that have been the raison d’être of DSN conferences over the years.
It is a matter of great pride for EPFL that the organizers awarded the Best Paper Award to Kristina Spirovska, Diego Didona, and Willy Zwaenepoel for their seminal contribution, “Wren: Nonblocking Reads in a Partitioned Transactional Causally Consistent Data Store.” All of them are attached to the Operating Systems Laboratory (LABOS). They worked together earlier on “Optimistic Causal Consistency for Geo-Replicated Key-Value Stores,” a research paper presented at the International Conference on Distributed Computing Systems held in Atlanta last year.
Their research paper at DSN-2018 presented Wren, the first Transactional Causal Consistency (TCC) system that simultaneously implements nonblocking read operations with low latency, and allows an application to scale out within a replication site by sharding. The system introduces new protocols for transaction execution, dependency tracking, and stabilization.
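Wren's actual protocols are detailed in the paper, but the essence of a nonblocking read can be sketched in a few lines. In this hypothetical illustration (not Wren's implementation), clients read from a multi-versioned store at a pre-agreed stable snapshot timestamp; because every version up to that timestamp is already installed, the read answers immediately instead of waiting on in-flight writes:

```python
def read_nonblocking(store, key, snapshot_ts):
    """Return the newest version of `key` whose timestamp does not exceed
    the stable snapshot. The snapshot is already fully installed at this
    replica, so the read never blocks on concurrent transactions.

    `store` maps keys to lists of (timestamp, value) versions."""
    versions = store.get(key, [])
    visible = [(ts, val) for ts, val in versions if ts <= snapshot_ts]
    return max(visible)[1] if visible else None
```

For example, with versions `[(1, 'a'), (3, 'b'), (5, 'c')]` for key `x`, a read at snapshot 4 returns `'b'`: the write at timestamp 5 is simply invisible rather than a reason to wait.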
The Best Paper Award is a highly acclaimed honor conferred by the organizers on the most outstanding scientific paper among all papers included in the Main Track. The selection process involves three phases: shortlisting of papers by the Program Committee, the announcement of three finalists by the DSN Steering Committee prior to the conference, and voting by DSN attendees in a special plenary paper session.
Link to the award-winning paper:
Posted on July 18, 2018
Houston hosted this year’s annual conference of the ACM Special Interest Group on Management of Data (SIGMOD). During the five-day event (June 10-15), several awards were presented to a select group of participants. One of the most coveted of these awards is the Best Demonstration Award, won this year by Professor Anastasia Ailamaki and her student Eleni Tzirita Zacharatou, together with their collaborators from New York University, Harish Doraiswamy, Fabio Miranda, Marcos Lage, Claudio Silva, and Juliana Freire. Prof. Ailamaki is Lab Director and Ms Tzirita Zacharatou is pursuing her doctoral program in computer and communication sciences; both are attached to EPFL’s Data-Intensive Applications and Systems Laboratory.
The demonstration proposals received by the organizers went through rigorous and highly competitive selection criteria, leaving only one winner at the end of a six-month process: the team’s presentation on “Interactive Visual Exploration of Spatio-Temporal Urban Data Sets using Urbane.” It was selected for showcasing an exciting and novel data management technology, backed by the extraordinary research and development efforts of the presenters.
Elucidating their work in the paper, the researchers highlight the burgeoning number of datasets emerging in urban environments and from sensor applications associated with human interactions. That presents new opportunities for data-driven approaches to understand and improve cities. Visual analytics systems like Urbane help domain experts explore multiple datasets at different spatial and temporal resolutions. The main challenge in using systems like Urbane is attaining interactivity. Conventional approaches are not effective at supporting ad-hoc query constraints or polygons of arbitrary shapes. To overcome this limitation, the researchers propose the Raster Join approach, which converts a spatial aggregation query into a set of drawing operations on a canvas and leverages the rendering pipeline of the graphics hardware (GPU). In the process, Raster Join answers queries on the fly at interactive speeds on laptops and desktops. In their prize-winning demonstration, they integrated Raster Join with Urbane to enable interactivity.
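The GPU rendering pipeline is what makes Raster Join fast, but the underlying idea, turning a spatial aggregation into per-pixel accumulation on a canvas, can be mimicked on the CPU. The sketch below (a simplified stand-in, not the authors' implementation) "draws" each point into a grid cell and then answers an aggregation query by summing over the cells covered by the query region:

```python
def raster_count(points, cell, width, height):
    """Accumulate point counts on a grid 'canvas', the CPU analogue of
    rasterizing points with additive blending on a GPU."""
    canvas = [[0] * width for _ in range(height)]
    for x, y in points:
        cx, cy = int(x // cell), int(y // cell)
        if 0 <= cx < width and 0 <= cy < height:
            canvas[cy][cx] += 1
    return canvas

def aggregate(canvas, region):
    """Sum counts over the set of (cx, cy) cells covered by a query
    region, e.g. a rasterized arbitrary polygon."""
    return sum(canvas[cy][cx] for cx, cy in region)
```

Because the polygon itself is also rasterized into cells, arbitrary shapes and ad-hoc constraints cost no more than rectangles, which is the property that makes the approach interactive.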
The members of SIGMOD include software developers, academic and industrial researchers, practitioners, users, and students. Over the years, the SIGMOD/PODS conference has become one of the most significant and selective events in the domain of data-driven management systems and technologies.
Posted on July 9, 2018
The ACM Multimedia Systems Conference (MMSys 2018) was held between June 12 and 15 in Amsterdam. More than 30 papers were presented at the event under the “research track,” but there was only one winner of the Best Paper Award: research conducted by Xavier Corbillon, Francesca De Simone, Gwendal Simon, and Pascal Frossard on “Dynamic Adaptive Streaming for Multi-Viewpoint Omnidirectional Videos.”
Francesca De Simone and Pascal Frossard are well-recognized names at EPFL’s Signal Processing Laboratory (LTS4). Dr De Simone was involved in the research project during her affiliation with LTS4 as Visiting Researcher; she is currently Scientific Staff Member at CWI’s Distributed and Interactive Systems. On the other hand, Professor Frossard has been heading LTS4 since he joined EPFL in 2003.
The research paper delves into a multi-viewpoint (MVP) 360-degree video streaming system, where a scene is simultaneously captured by multiple omnidirectional video cameras. The user can switch positions only among predefined viewpoints (VPs). The authors present several options for video encoding with existing technologies and for the implementation of VP switching. Their study is a noteworthy advancement in the Virtual Reality space, where several questions are being asked about the implementation of six Degrees of Freedom (6DoF) applications. The results of the research highlight the importance of conducting further studies on VP switching prediction to reduce bandwidth consumption and to measure the impact of VP switching delay on the subjective Quality of Experience (QoE).
The paper was chosen by a committee appointed by Comcast and organizers of MMSys 2018. Their decision was based on the paper’s contribution, novelty, and presentation quality.
Posted on July 2, 2018
Each year, the Design Automation Conference (DAC) announces five winners of the Under-40 Innovators Award. This year, one of the winners of the coveted honor is David Atienza, Associate Professor of Electrical and Computer Engineering at EPFL’s Embedded Systems Laboratory.
Professor Atienza has carved a niche for himself as an expert in embedded systems design, 2D/3D thermal modeling and management for multi-processor systems-on-chip (MPSoC), electronic design automation (EDA), wireless body sensor networks (WBSN), memory optimizations, low-power hardware, and software co-design. In a career spanning over a decade, he has co-authored more than 250 publications in cutting-edge journals and conferences, many of which have been adjudged “Best Paper” at various events. His pioneering research has earned him distinctions such as the External Research Faculty Award of Oracle (2011), the ACM SIGDA Outstanding New Faculty Award (2012), the IEEE CEDA Early Career Award (2013), and the ERC Consolidator Grant (2016). He is also a Senior Member of the ACM and a Fellow of the IEEE.
Finding a place among DAC’s Under 40 Innovators is an exciting event for Professor Atienza because he will join an august list of achievers. Last year, the five innovators honored by DAC were John Arthur, Research Staff Member and Hardware Manager at IBM Research; Paul Cunningham, Vice President of R&D at Cadence Design Systems; Douglas Densmore, Associate Professor at Boston University; Yongpan Liu, Associate Professor at Tsinghua University; and Sasikanth Manipatruni, Senior Staff Physicist/Engineer at Intel.
Inaugurated in 2017, the Award recognizes design and automation innovators in industry, research labs, startups, and academia. Young achievers in these fields, like David Atienza, are redefining and shaping the future of design automation. The top five innovators will be honored at the upcoming 55th DAC conference in San Francisco.
Each year, DAC attracts representatives of more than 1,000 organizations. It is sponsored by the Association for Computing Machinery (ACM), the Electronic Systems Design Alliance (ESDA), and the Institute of Electrical and Electronics Engineers (IEEE).
Posted on June 20, 2018
Today’s information-based culture has introduced security challenges such as cyber attacks, privacy issues, and malware. Considering the many ongoing studies in this field, the Information Security Society Switzerland (ISSS) awards the ISSS Excellence Award each year to recognize and motivate students. The most recent winner of this prestigious award is Hamza Harkous, a post-doctoral researcher at EPFL.
The ISSS jury, comprising eminent experts in ICT and Internet security, bestowed the honor on Hamza Harkous in recognition of the novelty, quality, and practical significance of his Ph.D. thesis. The researcher pursued his Ph.D. between 2012 and 2017 at EPFL’s Distributed Information Systems Laboratory. His thesis “Data-Driven, Personalized, Usable Privacy” presents an innovative approach towards improved handling of privacy risks and proposes data-driven methods that enable the end user to protect private data and build strong privacy policies.
In his budding academic career, Hamza Harkous has already made rapid advances in the arena of AI-driven systems with a focus on privacy and security domains. The ISSS award adds to the series of grants and awards won by him in the past. These include the Outstanding Paper Award at ACM CODASPY 2017 and the Best Presentation Award at SwissText 2017. He continues to diversify into research areas ranging from deep learning to data-driven privacy interfaces, and from human-computer interactions to full-stack development.
The ISSS honors only two pieces of research out of the scores of nominations received for consideration from across all Swiss institutes. That emphasizes the novelty of the study by Hamza Harkous.
Going forward, he expects to develop many more online usable services, similar to the deep-learning chatbot ‘PriBot’ that answers questions about privacy policies and the AI research project Modemos that enhances child safety.
Posted on June 14, 2018
The 39th IEEE Symposium on Security and Privacy concluded at San Francisco on May 23. It is considered to be one of the most prestigious events in the academic calendar each year as far as computer security and privacy issues are concerned.
Sixty-two papers were presented at the symposium, but only two were singled out by IEEE for the Distinguished Paper Award. Among them was the research presented by EPFL scholars and faculty members belonging to the School of Computer and Communication Sciences.
The group that won the distinction for EPFL includes four scholars—Stevens Le Blond, Alejandro Cuevas, Juan Ramón Troncoso-Pastoriza, and Philipp Jovanovic—and Professors Bryan Ford and Jean-Pierre Hubaux. Their paper, “On Enforcing the Digital Immunity of a Large Humanitarian Organization,” dwelled on the computer-security challenges faced by a large humanitarian organization like the International Committee of the Red Cross (ICRC). Based on interviews with dozens of ICRC field workers, the researchers investigated the problems faced by humanitarian organizations in collecting, processing, transferring, and sharing data on sensitive activities. In their study, the authors highlight inhibiting issues such as trade-offs, legal barriers, and data leakages, all of which could severely compromise efficacy. Finally, they propose a set of technological safeguards that can help avoid such hindrances and enhance facilitating factors for effective humanitarian action, which include neutrality, impartiality, and independence.
Posted on June 11, 2018
Last month, Google announced the winners of its PhD Fellowship award for 2018. They include 39 researchers from North America, Europe, and the Middle East. Among them is Lana Josipović, a doctoral student in the Processor Architecture Laboratory led by Professor Paolo Ienne. She has been awarded for her outstanding research in the Systems and Networking domain.
Lana began her doctoral studies at the IC School in 2015. Her excellence in computing and technology, firm academic foundation, and proven leadership quickly came to the fore. In September 2015, she was awarded the Google Anita Borg Memorial Scholarship, becoming one of only two Swiss scholars to benefit from the scholarship that year.
Since then, Lana has focused on bringing software and hardware closer together by developing efficient circuits for Field Programmable Gate Arrays (FPGAs), which perform a key role at data centers. In one of her recent papers this year, she highlighted high-level synthesis tools that can be used in new FPGA applications and showed the demands of computing in broader application domains. In another work, she presented an innovative and practical method to organize the allocation for an out-of-order load-store queue for spatial computing. The research detailed the construction of the load-store queue and demonstrated its advantages over standard accelerator-memory interfaces.
Google established the Fellowship program in 2009 to affirm its commitment to supporting and building relationships with academia. Over the years, the Fellowship has fostered several hundred researchers, innovators, and entrepreneurs in Computer Science and allied fields. In Lana’s case too, the recognition by Google will reinforce her research potential and help her contribute new findings in her specialized domain.
Posted on May 30, 2018
The annual mega event at EcoCloud is just around the corner. In little over a fortnight, the Lausanne Palace Hotel will be a buzz of activity as it hosts the two-day EcoCloud annual event, slated for June 18–19. The venue’s prime location, which offers panoramic views of the city, Lake Geneva, and the magnificent Alps, will be an apt setting for industry experts to share insights on budding data and cloud computing platforms.
This year’s event will feature industrial speakers and presentations by EcoCloud researchers. Session I (June 18) will focus on Security, Privacy & Trust (Chair: Bryan Ford), Session II (June 19, morning) will include deliberations on Systems (Chair: Babak Falsafi), and Session III (June 19, afternoon) will be themed around Analytics (Chair: Martin Jaggi).
In his keynote address, Úlfar Erlingsson (Senior Staff Research Scientist in the Google Brain team) will introduce Google’s work on addressing privacy problems in systems and deep neural networks as well as the RAPPOR and Prochlo mechanisms for learning statistics in the Chromium and Fuchsia open-source projects. He will also present techniques for training deep neural networks with strong privacy guarantees.
In the industrial session that will follow, leading speakers from the IT industry will share their expertise. Simon Knowles of Graphcore will speak on designing processors for intelligence; cryptography expert Nick Sullivan of Cloudflare will share his findings about evolving web architecture and its impact on security, privacy, and latency; and Hong Wang of Intel Labs will address the question of reinvigorating foundational uArch research to boost IPC.
Interspersed with the astute observations of the industrial experts will be presentations by EcoCloud researchers on a range of topics such as distributed clinical and genomic data, distributed ledger technologies, using a central server to protect keys, durability for non-volatile memory, verified NAT, taming skew in large-scale analytics, the revelations of a “click,” machine learning, and taxonomy induction.
This is the seventh edition of the EcoCloud Annual Event. Like past years, the outcome of the interactions among researchers and industry stalwarts is bound to have a major bearing on future innovations in the cloud computing industry.
Posted on May 29, 2018
EcoCloud, the EPFL research center that drives today’s cloud computing technologies, warmly welcomes four new professors to its fold. They are Pascal Frossard, Carmela Troncoso, Robert West, and Paolo Ienne.
Prof. Pascal Frossard is Associate Professor in the Electrical Engineering Institute at EPFL and Associate Dean for Research in the School of Engineering. Before joining EPFL in 2003, he was stationed at the IBM TJ Watson Research Center at Yorktown Heights, NY, USA. His core research areas include interpretable machine learning, data science, graph signal processing, image representation and analysis, computer vision, and immersive communication systems. His most recent research contributions include analysis of the geometric properties of deep networks, deep net robustness analysis, and representation learning for graph signals.
Carmela Troncoso holds a PhD from KU Leuven, Belgium. Currently, she is a Tenure Track Assistant Professor at EPFL. She leads EPFL’s SPRING Lab, which focuses on Security and Privacy Engineering. Her ongoing research includes machine learning in security and privacy, privacy in crowdsourcing applications, anonymous communications, location privacy, and privacy engineering methodologies. In her past engagements, she has worked as a faculty member at the IMDEA Software Institute in Madrid and as Security and Privacy Technical Lead Engineer at Gradiant. She has also conducted post-doctoral research at the COSIC group.
Robert West steers the Data Science Lab in his capacity as Assistant Professor in the School of Computer and Communication Sciences at EPFL. He delves into large amounts of data and uses algorithms to work on aspects such as social and information network analysis, machine learning, computational social science, data mining, natural language processing, and human computation.
Paolo Ienne heads the Processor Architecture Laboratory (LAP). In the early 1990s, he was an undergraduate researcher at Brunel University, Uxbridge, U.K. Thereafter, he worked as Research Assistant at the Microcomputing Laboratory (LAMI) and at the MANTRA Center for Neuro-Mimetic Systems of EPFL. He joined the Semiconductors Group of Siemens AG, Munich, in December 1996. He has been a professor at EPFL since 2000. Prof. Ienne specializes in computer and processor architecture, FPGAs and reconfigurable computing, electronic design automation, and computer arithmetic.
The new members bring their expertise to EcoCloud’s distinguished faculty that already has leading names under its wings. They are expected to jumpstart several new studies and enrich EcoCloud’s research output in the coming years.
Posted on May 13, 2018
Each year, the IEEE Technical Committee on Cyber-Physical Systems (TCCPS) recognizes outstanding scientific contributions under various categories, including the Early- and Mid-Career awards. The winners for 2018 have just been announced by the Committee. Among the awardees is David Atienza, associate professor of electrical engineering and director of the Embedded Systems Laboratory (ESL) at EPFL. He has won the Mid-Career Award for “sustained contributions to thermal processor design and medical wearables.”
The award recognizes Professor Atienza’s extensive work on smart wearables for the Internet-of-Things (IoT), particularly in the medical domain. He has been working in this arena since 2010 to develop specialized multiprocessor designs and microprocessor controllers. These tools can be used to target electrocardiogram analysis and embed advanced features in the sensors to help doctors analyze data remotely. The system functions autonomously and maintains uninterrupted communication between the patient and the doctor.
In related research published earlier this year in the IEEE Journal of Biomedical and Health Informatics, Professor Atienza and colleagues developed a simple, modular, and effective algorithm to delineate and locate the peaks and boundaries of different ECG waves.
In an earlier milestone, Professor Atienza won the IEEE CEDA Early Career Award in 2013 for his singular contributions to design methods and tools for multi-processor systems-on-chip (MPSoC), particularly for work on thermal-aware design, low-power architectures, and on-chip interconnects synthesis. He continues to work in these domains and has gained expertise in 2D/3D thermal modeling, electronic design automation (EDA), wireless body sensor networks (WBSN), memory optimizations, low-power hardware and software co-design.
His work through the years has been acclaimed by renowned institutions and international conferences. These include the External Research Faculty Award of Oracle (2011), the ACM SIGDA Outstanding New Faculty Award (2012), numerous “Best Paper” awards, and—most recently—DAC’s Under-40 Award for innovative research on design and automation.
Posted on May 7, 2018
The IBM PhD Fellowship Award, instituted in 1950 to recognize outstanding PhD students who drive innovation, is one of the most sought-after distinctions worldwide. Each year, only a chosen few make it to the elite group. Among the awardees for 2018 is Lefteris Kokoris-Kogias from EPFL’s Laboratory of Decentralized and Distributed Systems. His achievement is all the more creditable because he figured among the awardees for 2017 as well.
In course of his research, Lefteris has built a body of literature that has been published by leading computer science conferences like USENIX Security and IEEE Security & Privacy. Under the supervision of Professor Bryan Ford, he has worked extensively on decentralized trust systems that help the Internet become more dynamic and accessible. Among his exemplary works are the development of scalable blockchain systems and innovative applications of threshold cryptography and distributed consensus.
Lefteris’s work is in sync with IBM’s declared goals toward academic excellence. Over the years, the Fellowship has been awarded across a wide range of disciplines and for innovations in the fields of, inter alia, cognitive computing and augmented intelligence, quantum computing, blockchain, data-centric systems, and brain-inspired devices and infrastructure.
Currently, Lefteris is developing transparent access control systems for blockchains. Later this month, he will present his latest blockchain solution ‘OmniLedger’ at the IEEE Security and Privacy conference in San Francisco. OmniLedger offers a decentralized payment system that performs on par with centralized systems like VISA and has a latency of seconds. It has already been embraced by startups like Emotiq and IOVO.
The IBM Fellowship Awards (2017 and 2018) are the latest in a string of distinctions for Lefteris. These include the EDIC PhD Fellowship (2015), Kary Award nomination (2015), and Thomaidion Award for Academic Excellence (2016).
Posted on March 12, 2018
In March 2016, EPFL and the International Committee of the Red Cross (ICRC) signed a seminal agreement to establish the Humanitarian Tech Hub. The four-year program has opened many avenues of collaboration between the scientific and humanitarian fields. To further cement that relationship, ICRC has just announced the appointment of EPFL’s Edouard Bugnion to the ICRC Assembly.
Professor Bugnion has worked as a faculty member in the School of Computer and Communication Sciences since 2012 and is currently Vice President for Information Systems. He presents the unique combination of a successful entrepreneur and a distinguished academician, and is expected to add new building blocks to the edifice of the Humanitarian Hub built over the last two years. Humanitarian crises of various genres grip more than 150 million people globally. Edouard Bugnion hopes to make a difference to their lives by fostering the work done by EPFL and ICRC under the collaborative framework.
EPFL and ICRC share much common ground, which includes the use of Big Data, digitization, and computer technology. In fact, ICRC is a founding member of the Center for Digital Trust launched in December. With his induction to the ICRC Assembly, Edouard Bugnion will bring to the table his expertise in digital technologies, which are critical in disseminating rehabilitative care and aid to prisoners and refugees. He will also be expected to safeguard ICRC’s digital infrastructure from military or spy attacks. With his vast experience in the IT industry, Prof. Bugnion will help ICRC adopt a strategy in the face of an intense debate that’s gathering momentum: Should existing legal frameworks (such as the Geneva Convention) be modified to meet the challenges of the digital age?
The Assembly is the supreme governing body of the ICRC with 15-25 members of Swiss nationality. They manage all activities of the organization, formulate strategies and policies, and approve budgetary requirements. Edouard Bugnion, one of two new appointees to the Assembly, will assume his non-remunerative role from April 1.
Posted on February 26, 2018
Web browsing has become almost second nature to us. Each day, we visit dozens of websites and unwittingly accept their long-winded privacy policies without bothering to peruse their stipulations. This is undoubtedly because those documents are shrouded in legalese too dense and cumbersome to read and digest. Yet, it is a well-known fact that many websites collect, store, and even use the private data that we inadvertently leave behind during our browsing sessions. Disturbingly, such practices are usually protected by the legal jargon contained in their privacy policies. So how do we ascertain the nature of data collected by a website? Is it possible to know how our data will be used by a website even before we start browsing that site?
To provide answers to questions like these, researchers from EPFL, University of Wisconsin, and University of Michigan have developed a program called Polisis that can read, decipher, and segmentize privacy policies of websites in a matter of seconds. The lead researcher is Hamza Harkous, Postdoctoral Researcher at EPFL’s Distributed Information Systems Laboratory.
Polisis is a free-to-use program available as a browser extension for Chrome and Firefox. It can also be accessed directly on the Polisis website.
Posted on February 8, 2018
Geo-replication is gaining ground for distributed services because it brings the services closer to the end users, reduces the page-load time, and increases user engagement. It also enables data platforms, such as that of Facebook, to survive data center failures. However, recent work has proven that no distributed data system can simultaneously guarantee all the desirable properties of low-latency access, partition tolerance, and strong consistency. There is an inevitable tradeoff among these properties, and that has prompted scholars to seek the consistency model that offers the most favorable tradeoff point. Most researchers concur that causal consistency, which lies in a sweet spot between strong consistency and eventual consistency, is the most attractive model. However, a new study by Diego Didona, post-doctoral researcher at EPFL’s Operating Systems Laboratory (LABOS), takes a contrarian viewpoint by arguing that causal consistency has inherent limitations and is slower and less scalable than commonly believed.
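Causal consistency requires that a replica make a write visible only after every write it causally depends on is visible. A common way to track this is with vector clocks; the sketch below (an illustrative textbook construction, not the project's implementation) shows the two checks involved:

```python
def happened_before(a, b):
    """Vector-clock test: the event with clock `a` causally precedes
    the event with clock `b` (componentwise <=, and not equal)."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def causally_ready(dep_clock, local_clock):
    """A remote write may be applied only once all of its causal
    dependencies are already reflected in the local replica's clock."""
    return all(d <= l for d, l in zip(dep_clock, local_clock))
```

The cost hinted at in the study comes from exactly this machinery: before exposing a write, a replica must wait for `causally_ready` to hold, and that waiting is where latency and scalability limits creep in.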
The project aims to demonstrate that causal consistency also suffers from a tradeoff between low latency and high scalability. But its main contribution will be in covering the entire gamut of research on data store consistency and proposing strong theoretical foundations and system designs for robust causal consistency implementations.
The project is being developed by amalgamating the resources and expertise of two laboratories at EPFL, viz., LABOS and Distributed Computing (LPD), and is funded by EcoCloud to strengthen multidisciplinary research among its laboratories. The two labs complement each other in terms of work done on distributed data platforms and protocols.
The project has set certain short-term goals based on the theoretical and experimental investigations at the two labs. This will entail both theory-side and system-side investigations. On the one hand, the study will show that there is an inherent trade-off between low latency and high scalability; on the other, it will propose and implement a design that offers the optimal trade-off between these performance goals.
In the long-term, the study could open new avenues of research and present new findings on Transactional Causal Consistency (TCC), besides raising new questions about other consistency models.
To involve the research community at large, the project leads will present their findings at various conferences and publish papers in reputed journals.
Posted on January 8, 2018
Training large-scale machine-learning models is extremely challenging because the training data often far exceeds the memory capacity of the machine. However, scientists at IBM and EPFL have collaborated on a novel scheme that enables the use of accelerators such as GPUs and FPGAs to speed up the training of machine-learning models. They presented their findings at the 31st Annual Conference on Neural Information Processing Systems (NIPS) in Long Beach, California.
As explained by the researchers Celestine Dünner, Thomas Parnell (IBM Research), and Martin Jaggi (EPFL), the scheme is particularly relevant in today’s milieu where computing systems are becoming increasingly heterogeneous. The lack of uniformity in terms of size, complexity, and power inhibits the development of efficient algorithms. However, the study proposes a new generic and reusable component to efficiently distribute the workload among heterogeneous compute units to accelerate large-scale learning.
GPUs and FPGAs typically have limited memory capacity, which was a major challenge for the researchers: they had to devise a method that lets scientists take advantage of the superior compute power of these accelerators anyway. Toward this objective, they demonstrated that the problem can be addressed by being selective about which data to train on; if one makes smart choices and leverages the heterogeneous character of the data, it is possible to accelerate the training process. In this context, the study proposes DUHL, an efficient duality gap-based strategy for selecting which part of the data to make available for fast processing. For their large-scale experiments, the scientists used a 30-gigabyte version of the Kaggle Dogs vs. Cats ImageNet dataset to show that it is possible to train on 40,000 photos of cats and dogs in less than one minute, 10x faster than existing methods for limited-memory training.
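The selection idea can be conveyed in a few lines of Python. This is our own illustrative stand-in, not IBM's implementation: the scoring function, sizes, and names are invented, and the real DUHL scheme computes proper duality gaps for the objective being trained.

```python
import random

# Stand-in for duality-gap-based data selection (illustrative only):
# ship to the accelerator only those training columns whose estimated
# "gap" -- how much further progress that column could still yield --
# is largest.

def gap_score(column, alpha_i, w):
    # stand-in gap: how strongly this column still disagrees with the
    # current model, offset by its current dual variable
    return abs(sum(a * b for a, b in zip(column, w)) - alpha_i)

random.seed(0)
n_cols, dim, budget = 200, 20, 30
columns = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_cols)]
alpha = [0.0] * n_cols                        # dual variables, one per column
w = [random.gauss(0, 1) for _ in range(dim)]  # current model

scores = [gap_score(c, a, w) for c, a in zip(columns, alpha)]
# the `budget` highest-gap columns fit in the accelerator's memory
selected = sorted(range(n_cols), key=lambda i: scores[i])[-budget:]
assert len(selected) == budget
```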
The speed and efficiency of the new algorithm can enable scientists to re-train the models frequently and even adapt to changes in real time. It also has financial implications because faster learning implies significant savings in costs for cloud applications. Thus, the novel scheme has immense potential for data science practitioners in research institutes and various industrial sectors.
Posted on December 21, 2017
Machine learning and artificial intelligence (AI) are finding new applications across industries. Many tasks that were once performed by humans are now handled by machines, adding efficiency to the output. But what would happen if AI crossed the threshold of human control and made unilateral decisions? It is a frightening, but highly probable, scenario. In 2014, it prompted Google to consider the idea of a “big red button” to stop dangerous AI in an emergency. However, the challenge is not in being able to stop or interrupt an AI process but in preventing the AI from learning a bias from such frequent interruptions. Such biased learning can be extremely dangerous in multi-agent systems, where several machines are involved in an AI task.
To negate that possibility, human operators must be able to interrupt a task assigned to an AI agent and simultaneously ensure safety by preventing individual agents from learning from each other based on the interruptions. That is the essence of a new study by EPFL researchers El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, and Alexandre Maurer.
In their paper presented on December 5 at the Neural Information Processing Systems (NIPS) conference in California, the researchers argued that an AI application involves several machines and not just one unit. Therefore, unlike the safe interruptibility proposed by earlier scholars for a single machine (or learner), the current research proposes sufficient conditions in the learning algorithm to enable dynamic safe interruptibility for multi-agent systems.
AI machines learn by the proverbial carrot and stick routine, otherwise known as reinforcement learning. To achieve safe interruption for joint-action learners, the researchers altered the machines’ learning and reward system by adding ‘forgetting’ mechanisms to the learning algorithms that essentially delete bits of a machine’s memory.
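As a rough illustration (not the authors' algorithm), the "forgetting" idea can be sketched with a toy Q-learner that simply discards experience gathered while an operator interruption is overriding its actions, so the interruptions themselves cannot bias its value estimates. The states, rewards, and interruption policy below are all invented.

```python
import random

# Toy sketch of interruption-robust Q-learning (illustrative only).

ACTIONS = (0, 1)

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    best_next = max(q[(s_next, b)] for b in ACTIONS)
    q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])

random.seed(1)
q = {(s, a): 0.0 for s in range(3) for a in ACTIONS}

for _ in range(1000):
    s = random.randrange(3)
    interrupted = random.random() < 0.3      # operator presses the button
    a = 0 if interrupted else random.choice(ACTIONS)
    r = 1.0 if a == 1 else 0.0               # action 1 is "better"
    s_next = random.randrange(3)
    if not interrupted:
        q_update(q, s, a, r, s_next)         # learn from free behaviour only

# despite 30% forced interruptions, the learner still values action 1 more
assert all(q[(s, 1)] > q[(s, 0)] for s in range(3))
```

Without the `if not interrupted` guard, the forced choices of action 0 would feed back into the value estimates, which is the bias the researchers' conditions are designed to rule out.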
The results of the research are likely to have a major impact on the development of autonomous cars and unmanned drones, facilitating their mass production. Humans, after all, will have the final say.
Posted on December 18, 2017
The program in French can be found here.
Posted on December 11, 2017
Anastasia Ailamaki, Professor and Lab Director at the Data-Intensive Applications and Systems Laboratory (School of Computer and Communication Sciences), has just added another feather to the cap of EPFL's research excellence. IEEE has named her a Fellow in its Class of 2018.
In placing Professor Ailamaki in the prestigious grade, IEEE has recognized her decisive contributions to hardware-conscious database systems and scientific data management. That is along expected lines because Professor Ailamaki has worked extensively on database systems and management for close to two decades. After completing her PhD thesis (University of Wisconsin, 2000) on Architecture-Conscious Database Systems, she has published many papers on scientific data management on modern hardware and devices, and cloud data management. These include studies that have won Best Paper awards from reputed organizations such as IEEE, ACM, USENIX, and VLDB. She was recognized as ACM Fellow in 2015 for her contributions to the design, implementation, and evaluation of modern database systems.
At EPFL, her intensive research program seeks to reinforce the interaction between database systems and modern processor hardware and disks. Besides, she is developing computational database support for scientific applications and delving into fields such as storage device modeling, performance prediction, and internet query caching.
IEEE is one of the largest professional organizations, with more than 423,000 members in 160 countries. Of these, only 0.1% are awarded the fellowship, the highest grade of membership. The selection process is based on peer nomination and backed by excellence in the profession. That underlines Professor Ailamaki's remarkable achievement.
The fellowship includes a certificate bearing the Fellow's name and a brief citation, as well as a gold-plated sterling silver Fellow lapel pin. These have become coveted items for IEEE members since the inception of the grade in 1912.
Congratulatory messages are pouring in for Professor Ailamaki from many quarters. However, figuring in the elite group is far from an end in itself for Professor Ailamaki. The award has probably opened new doors for greater accomplishments in her chosen field of research.
Posted on December 4, 2017
The Association for Computing Machinery (ACM) has named EPFL Professor Edouard Bugnion as ACM Fellow for 2017. This is ACM’s most prestigious member grade where only the crème de la crème of the computing research fraternity find admittance.
After obtaining his PhD in Computer Science from Stanford University, Professor Bugnion started his career at EPFL in 2012. Since then, he has focused his research on data center systems. As part of EPFL’s Swiss Data Science Center, he has cultivated and widened his expertise in operating systems, data center infrastructure, and computer architecture.
His inclination for computing excellence took root early in life. He showed his entrepreneurial acumen by cofounding two startups in the U.S.: VMware and Nuova Systems. At the latter company, which was acquired by Cisco in 2008, Professor Bugnion helped develop the Unified Computing System (UCS) platform, a core product for virtualized data centers. In 2008, Edouard Bugnion caught the eye of ACM's Awards Committee, which conferred on him the SIGOPS Hall of Fame Award for outstanding research. His association with ACM continued the following year when he bagged the Software System Award for developing VMware Workstation for Linux 1.0.
Becoming ACM Fellow this year is, thus, a culmination of ACM's recognition of Professor Bugnion's work over the years. His nomination is in keeping with ACM's philosophy of awarding the distinction only to those who have contributed significantly to the transformation of science and society. Fellows comprise a select group of researchers at the forefront of the digital revolution, which has a palpable impact on our lifestyles today. In fact, only about 1% of ACM's more than 100,000 members attain this grade. To be considered, a candidate must have been a Professional Member of ACM for at least five continuous years. Vicki Hanson, ACM President and a founding member of ACM-W Europe, summarizes the importance of being an ACM Fellow: “Fellows are chosen by their peers and hail from leading universities, corporations and research labs throughout the world. Their inspiration, insights and dedication bring immeasurable benefits that improve lives and help drive the global economy.”
As ACM Fellow, Professor Edouard Bugnion is positioned well to continue his research drive and scale new heights in innovative computing.
Posted on November 27, 2017
It’s been less than a decade since Bitcoin, the world’s first decentralized cryptocurrency, was born. Despite a rollercoaster ride, Bitcoin and the other cryptocurrencies that followed have steadily increased their influence in the world of finance. However, their adoption as a mode of payment has rested squarely on the synergy between financial and computational expertise. At the vanguard of such cross-disciplinary research is the Initiative for CryptoCurrencies and Contracts (IC3), based at the Jacobs Technion-Cornell Institute at Cornell Tech in New York City. IC3 draws on the experience of faculty members at Cornell University, Cornell Tech, UC Berkeley, the University of Illinois Urbana-Champaign, and the Technion (Israel). In November, IC3 added an important member to this panel by inducting Professor Bryan Ford, who heads the Decentralized/Distributed Systems (DEDIS) lab at EPFL.
Prof Ford has extensive experience in developing and working with secure decentralized systems, private and anonymous communication technologies, Internet architecture, and secure operating systems. This domain knowledge is in sync with IC3’s main deliverables: blockchain science and code. As part of IC3, Prof Ford will play a stellar role in the growth of a dynamic blockchain and crypto-finance ecosystem. Welcoming Prof Ford and the DEDIS lab aboard IC3, Dean Jim Larus observed, “The research will ultimately contribute to the next generation of financial services and likely even more innovative applications of the technology not only here in Switzerland but globally.”
IC3 is working on reducing the over-dependence of the existing cryptocurrencies and contracts on heuristic designs and developing scalable and reliable blockchain-based solutions based on scientific rigor. That’s where the experience of Prof Ford can provide a major fillip to IC3’s objectives. Conversely, as part of IC3, Prof Ford will be at an excellent vantage point to widen the geographic reach of his research to regions where significant transformations are taking place in blockchain technology.
Apart from Prof Ford, two other Europeans joined IC3 in November: Sarah Meiklejohn (Associate Professor at University College London) and Srdjan Capkun (Professor at ETH Zurich). The expansion of IC3’s core team will hopefully enable cryptocurrencies to fructify their promise for both business and society. Echoing this sentiment, Prof Ford said, “I am thrilled to work more closely with the stellar team of researchers at IC3, who collectively answer the urgent need in the blockchain community for world-class academic expertise in technology, economics, and policy.”
Posted on November 20, 2017
The digital revolution is now all-pervasive, charting breakthroughs in computing and information technology. Driving that change is a group of leading innovators across the world. Among them is David Atienza, associate professor of Electrical and Computer Engineering and director of the Embedded Systems Laboratory at the School of Engineering, EPFL. In recognition of his outstanding scientific contributions to computing, the Association for Computing Machinery (ACM) has acknowledged him as a “pioneering innovator” and a “2017 Distinguished Member.”
To figure among the chosen few in ACM’s recognition program is not only a distinctive honor but also an achievement par excellence. To make it to the group, a member needs to have a minimum domain experience of 15 years, backed by innovation that has left an indelible imprint in the computing world.
Professor Atienza’s inclusion in that august list is not surprising, considering his rich and varied experience in the field. After receiving his PhD in 2005, Professor Atienza went on to excel in several research arenas including, inter alia, system-level design methodologies for high-performance MPSoCs, design architectures for wireless body sensor networks, and memory optimization. The author of many research papers and book chapters, Professor Atienza has a decade of deep research experience in wearables and IoT-driven objects that can engender new business opportunities.
The announcement by ACM is the latest of many accolades received by Professor Atienza. These include the Oracle External Research Faculty Award (2011), the ACM SIGDA Outstanding New Faculty Award (2012), and the IEEE CEDA Early Career Award (2013). He is also an ACM Senior Member (2013).
The recognition cements EPFL’s position in a select club of leading universities and institutions across the world. ACM’s 43 Distinguished Members hail from Australia, Asia, Europe, the U.S., and South Africa, making it a truly global mix of leading exponents of computing technologies.
Posted on November 1, 2017
The widespread availability of video streaming services and the proliferation of smartphones have enabled users to do away with the need to download heavy content and thus save storage space on their devices. But the service provider—be it YouTube, Netflix, or any other—has to face serious challenges in offering a seamless experience to users. Two of the major concerns are storage space on their servers, and the resultant power consumption. Conversely, the user is confronted with challenges like bandwidth issues, unstable streaming flow, and video encoding issues. However, a solution is in the making to enhance the user experience and simultaneously minimize the worries of the service provider.
Marina Zapater Sancho and Arman Iranfar, researchers at EPFL’s Embedded Systems Laboratory (ESL), are working on a superior method for streaming that will have the twin advantages of better resource utilization (for the service provider) and user-specific output in terms of compression quality and encoding (for the end user). It is expected that the new method will reduce power consumption by a fifth and improve the user experience by about 37%. The impact of the research can be gauged from the fact that 80% of traffic on the Internet is in the form of video streams. Thus, it will be a win-win situation for both the provider as well as the user.
The researchers have adopted a machine learning-based approach to improve the functionality of embedded applications on multiprocessor systems-on-chip (MPSoCs), which is expected to manage power and temperature levels efficiently. As Arman Iranfar observes, computers and encoding systems will assimilate past experience to optimize power consumption, performance, and compression. The study envisions the machine-learning model calculating the best possible resource allocation without compromising the quality of streaming.
Instead of storing multiple copies of a video at different bitrates, which not only eats up storage space on servers but also increases power consumption, service providers only need to store one good-quality video. With the optimization method proposed by the researchers, streaming will occur with automatic adjustments depending on the individual user.
When complete and deployed, the study will indubitably open new frontiers in streaming technology. It is being conducted as part of the MANGO project, which is supported by the EU Horizon 2020 program. Audiences in Seoul will get a glimpse of the efficient streaming method this month at the 13th ACM/IEEE Embedded Systems Week.
Posted on October 25, 2017
Providers of payment systems and password-protected applications use advanced computation to ensure the security of their services. It is generally accepted that if large numbers are used in developing a code, it becomes extremely difficult to do the math and break the code. In this process, the computation of discrete logarithms plays a crucial part. Until recently, the record for computing a discrete logarithm was in the multiplicative group of a 596-bit prime field. However, that record has now been surpassed in a collaboration between EPFL and the University of Leipzig. The team has cracked an extremely lengthy code by using complex mathematical calculations.
The groundbreaking research was carried out by Thorsten Kleinjung, Claus Diem, Arjen K. Lenstra, Christine Priplata, and Colin Stahlke. They started their computation in February 2015 and, after almost a year and a half of hard work, announced the computation of a discrete logarithm in the multiplicative group of a 232-digit (768-bit) prime field. The researchers presented their findings at Eurocrypt 2017, held in Paris this May, and won the distinction of being runner-up for the Best Paper Award.
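For intuition, and at a microscopic scale compared to the 768-bit record (which relied on the far more sophisticated number field sieve), the classic baby-step giant-step method below shows what "computing a discrete logarithm" means: given a prime p, a generator g, and h = g^x mod p, recover x.

```python
import math

# Baby-step giant-step discrete logarithm on a toy prime field.
# Runs in O(sqrt(p)) time -- hopeless at 768 bits, instant here.

def discrete_log(g, h, p):
    m = math.isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j
    giant = pow(g, -m, p)                        # g^(-m) mod p (Python 3.8+)
    y = h
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if y in baby:
            return i * m + baby[y]
        y = (y * giant) % p
    return None

p, g = 1019, 2          # 2 is a primitive root modulo the prime 1019
x = 347
h = pow(g, x, p)
assert discrete_log(g, h, p) == x
```

The security argument in the article rests on exactly this asymmetry: computing h from x is cheap, but recovering x from h grows infeasible as p grows.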
The researchers dispelled any doubts about their result having a detrimental impact on the security of messaging on the Internet, given the extremely complicated and daunting nature of the task they accomplished. Apart from the long duration of the effort, they went through the arduous process of sieving, with calculations running on more than 3,500 cores, the equivalent of more than 300 computers.
The security of Internet protocols like https and Virtual Private Networks depends on discrete logarithm calculations. Therefore, more studies like the one completed by the EPFL-University of Leipzig team are necessary for ensuring the security of data and systems. That will push the frontiers of algorithmic computations even further, perhaps resulting in the publication of even a 1024-bit record in the future. But that will certainly take some doing.
Posted on October 18, 2017
In April this year, researchers at EPFL’s School of Computer and Communication Sciences (IC) gained recognition for exemplary work in computer science. While Vasileios Trigonakis was awarded the 2017 Eurosys Roger Needham Doctoral Dissertation Award, Immanuel Trummer bagged Honorable Mention for the 2017 SIGMOD Jim Gray Doctoral Dissertation Award.
Vasileios Trigonakis’s outstanding thesis, titled “Towards Scalable Synchronization on Multi-Cores,” was recognized at the Eurosys 2017 Conference in Belgrade, Serbia. The thesis was developed under the supervision of Professor Rachid Guerraoui of the Distributed Programming Laboratory at EPFL. The work explores ways to reduce the effects of synchronization on software scalability. The study highlights the fact that the scalability of synchronization is directly proportional to the capability of the underlying hardware; in other words, synchronization unequivocally inhibits the performance of concurrent software. However—and this is the main contribution of Trigonakis’s research—it is possible to achieve portability of software without forgoing performance if design patterns and abstractions are created that extract the most from the underlying hardware, without requiring developers to hand-tune their code for each platform. Trigonakis’s research is founded on a two-pronged approach. The first is centred on OPTIK, a design pattern that helps implement robust and scalable concurrent data structures. The second revolves around a multi-core topology abstraction called MCTOP, which goes a long way toward optimizing the policies implemented by developers.
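The flavour of OPTIK-style optimistic concurrency can be conveyed with a simplified Python sketch (our own simplification; the real pattern targets fine-grained concurrent data structures): read the state and a version number optimistically, compute without holding the lock, then validate the version before publishing the result.

```python
import threading

# Simplified version-based optimistic concurrency (illustrative only).

class OptimisticCell:
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

    def update(self, fn):
        while True:
            v, old = self.version, self.value   # optimistic read
            new = fn(old)                        # compute without the lock
            with self.lock:
                if self.version == v:            # validate: still unchanged?
                    self.value, self.version = new, v + 1
                    return new
            # another thread raced us: retry with fresh state

cell = OptimisticCell(0)
threads = [threading.Thread(target=lambda: [cell.update(lambda x: x + 1)
                                            for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert cell.value == 4000    # every increment applied exactly once
```

The lock is held only for the cheap validate-and-publish step, which is what makes patterns of this kind scale better than holding a lock for the whole computation.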
Immanuel Trummer, for his part, received the award for his PhD dissertation, “From Massive Parallelization to Quantum Computing: Seven Novel Approaches to Query Optimization.” He worked under the expert guidance of Professor Christoph Koch of EPFL’s Data Analysis Theory and Applications Laboratory.
Over the years, there have been seminal changes in the way queries are executed. Trummer’s study takes cognizance of these changes, which span query execution platforms, processing methods and models, and techniques such as cloud computing and crowdsourcing. The thesis proposes three major approaches to query optimization: moving query optimization before run time to relax constraints on optimization time, trading optimization time for relaxed optimality guarantees, and reducing optimization time by taking advantage of new software and hardware platforms.
Both researchers thus have a strong foundation as they venture forth in their academic and professional pursuits. Currently, Vasileios Trigonakis works as Senior Member of Technical Staff at Oracle, while Immanuel Trummer is Assistant Professor for computer science at Cornell University.
Posted on October 11, 2017
Many of our conscious decisions revolve around the extent of control exercised by stakeholders. This applies to developing a new project, nurturing a new company, or even building communities. Traditionally, the overarching drive in such activities has been the retention of centralized authority. But times are changing, and so is the Internet, with substantial research into the benefits of decentralized systems. At the forefront of such research is the work of PhD scholar and EPFL researcher Lefteris Kokoris-Kogias. His outstanding work has earned him the IBM PhD Fellowship for 2017.
Kokoris-Kogias is a student of Professor Bryan Ford, who heads the Laboratory of Decentralized and Distributed Systems at the Swiss Federal Institute of Technology in Lausanne (EPFL). At the core of his research is the interaction of computer systems, cyber security, and cryptography in an environment that fosters decentralization. He focuses on building greater transparency on the Internet and increasing scalability of security systems. Using the immense power of the Internet as a communication tool, Kokoris-Kogias is working to develop applications, platforms, and services that would increase the robustness of the Internet instead of making it a centralized and closed medium.
In the shift toward a large-scale decentralized Internet, blockchain’s role as a fast-growing organizational tool is apparent. In this context, Kokoris-Kogias is researching innovative applications of threshold cryptography. With his ongoing doctoral research, he is well on the way to creating algorithms that use threshold cryptography to increase Internet security while still allowing the move toward decentralization.
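To give a flavour of what threshold cryptography enables (a generic textbook primitive, not Kokoris-Kogias's own protocols), here is a toy Shamir (t, n) secret-sharing sketch: any t of the n shares reconstruct the secret, while fewer reveal nothing about it.

```python
import random

# Toy Shamir (t, n) secret sharing over a prime field (illustrative only).

P = 2**31 - 1  # a Mersenne prime, large enough for this toy

def make_shares(secret, t, n):
    # random degree-(t-1) polynomial with the secret as constant term
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

random.seed(7)
shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

Primitives of this kind let security decisions require the cooperation of several parties instead of trusting any single one, which is the decentralization property the article describes.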
Another fundamental concept on which he is working is distributed consensus. Consensus is already used in computer science to achieve system reliability in the presence of faulty processes, and the success of open systems like cryptocurrencies rides on it. Achieving consensus in an open, distributed setting is far more complicated, however, because it is difficult to induce the many nodes of a network to come to an agreement, as they must in a Bitcoin environment. This accentuates the importance of the work being done by Kokoris-Kogias.
It is, therefore, little surprise that IBM has recognized the research potential of Lefteris Kokoris-Kogias by awarding him the Fellowship. Since its inception in 1951, the Fellowship program has honoured thousands of PhD students, and more than 700 students in just the last decade.
Indubitably, researchers like Kokoris-Kogias promise a constructive disruption of technology to transform systems, industries, and societies.
Posted on October 4, 2017
In this age of online marketing, e-commerce companies have turned into mega advertisers on the Internet. They use web browsers and mobile apps as their hidden eye to target personalized offers based on browsing and buying habits of the user.
However, by yielding to an urge to click on a product advertisement, you could be giving away a lot about yourself. It is veritably a trade-off between utility and privacy, where utility wins more often than not, with privacy being the proverbial sacrificial goat.
But is that trade-off really inevitable? Must we always sacrifice privacy to take advantage of an online product recommendation? The answer to both questions is a firm ‘No,’ as shown in a recent study by EPFL researcher Mahsa Taziki. By means of a meticulously formulated algorithm, Taziki and her co-researchers Rachid Guerraoui and Anne-Marie Kermarrec have found an optimum method to surf the Internet and click on product recommendations without surrendering private information.
After running their algorithm on the MovieLens 100K dataset, the researchers concluded that most advertisements (78.40% of clicks) provoke a trade-off between utility and privacy. Conversely, and happily, a sizeable number of clicks (5.12%) promote utility without compromising privacy. The remainder of the clicks (16.48%) play no role in either enhancing your utility or shielding your privacy. In short, the “click advisor diagram” designed by Mahsa Taziki and team tells you the amount of information you reveal by clicking on a specific link. Armed with that knowledge, you can decide your clicking strategy. The algorithm enables you, in real time, to weigh the advantages gained from a click against the information you let out, and then make an informed choice. In contrast to an ad blocker, the algorithm doesn’t hide information from you; it only helps you choose the appropriate links.
The robust algorithm categorizes each potential click into one of four zones: Safe, Trade-off, Dangerous, and Deleterious. The user is then presented with the option to choose a clicking strategy. If you are a risk-taker, you could decide to confirm all the pre-clicks; if you are more careful, you could confirm only those clicks that are in the Safe and Trade-off zones.
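A purely hypothetical sketch of the zone idea might look like the following. The four zone names come from the article, but the utility/privacy scores, the thresholds, and the mapping of scores to zones are our invention for illustration.

```python
# Hypothetical four-zone click classifier (scores and thresholds invented).

def classify_click(utility_gain, privacy_loss, u_min=0.1, p_max=0.1):
    if utility_gain >= u_min and privacy_loss <= p_max:
        return "Safe"          # real gain, negligible leakage
    if utility_gain >= u_min:
        return "Trade-off"     # gain, but at a privacy cost
    if privacy_loss > p_max:
        return "Dangerous"     # leakage with nothing in return
    return "Deleterious"       # neither gain nor meaningful leakage

def confirm(pre_clicks, cautious=True):
    # a careful user confirms only Safe and Trade-off pre-clicks
    allowed = {"Safe", "Trade-off"} if cautious else None
    return [c for c in pre_clicks
            if allowed is None or classify_click(*c) in allowed]

clicks = [(0.8, 0.05), (0.6, 0.4), (0.02, 0.7), (0.0, 0.0)]
assert [classify_click(u, p) for u, p in clicks] == \
       ["Safe", "Trade-off", "Dangerous", "Deleterious"]
assert confirm(clicks) == [(0.8, 0.05), (0.6, 0.4)]
```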
The researchers are working toward developing a browser extension with the built-in algorithm to warn the user about the quantum of personal data likely to be compromised with a click on a particular product recommendation. That could be a major asset for discerning users because they could temper their clicking decisions based on the “click advisor.”
Posted on April 25, 2017
Martin Jaggi and fellow computer science professors Robert West and Marcel Salathé co-chaired the very first Applied Machine Learning Days at EPFL on January 30th and 31st 2017 at the SwissTech Convention Center. The event hosted more than 450 participants and gave an opportunity to industrial experts as well as academic researchers to share valuable insights on the role and future of artificial intelligence. More information on the EcoCloud co-sponsored event can be found here.
Posted on April 24, 2017
Martin Jaggi was invited by the Swiss Radio and TV Station RTS 1 to discuss artificial intelligence algorithms and biases. The program in French can be found here.
Posted on April 24, 2017
Please join industry experts from HP Enterprise, Google, IBM, Microsoft, and Xilinx, along with EcoCloud researchers, to share insights on future data and cloud computing platforms on June 12th and 13th, 2017 at the sixth EcoCloud annual event at the Royal Savoy Hotel in Lausanne, Switzerland. More information on the event, including the agenda and speakers, can be found here.
Posted on April 19, 2017
Posted on April 11, 2017
Martin Jaggi is a recipient of the prestigious 2016 Google Faculty Research Award for his proposal on “A Computational View on Sentence Embedding”. Martin and his team will attempt “to improve the quality, the computational performance and the theoretical understanding of learning representations for sequences of words from unsupervised machine training”. You can find more information on Martin’s work here.
The Google Faculty Research Award funds “world class technical research in computer science, engineering and related fields”.
Posted on February 28, 2017
Researchers from EPFL, ETH Zurich and Microsoft Research – all partners of the Swiss Joint Research Center (Swiss JRC) – assembled for a workshop at the UK-based Microsoft Research Cambridge Lab in February.
During the workshop, the researchers presented the 10 projects selected for funding by the Swiss JRC steering committee. The selection was made on the basis of the projects’ intellectual merit, potential scientific and societal impact, and evidence of strong collaborative interest between the project partners.
Four of the projects bring together researchers from ETH Zurich and Microsoft Research. The six projects between EPFL and Microsoft Research are: Towards Resource-Efficient Data Centers; Near-Memory System Services; Coltrain: Co-located Deep Learning Training and Inference; From Companion Drones to Personal Trainers; Revisiting Transactional Computing on Modern Hardware, and Fast and Accurate Algorithms for Clustering.
Posted on February 6, 2017
A project aiming to investigate an approach to effect checking that is fundamentally different from previous research is to receive funding from the Swiss National Science Foundation.
To understand a program that makes use of effects – the interaction of a procedure with its environment in a way that goes beyond just taking arguments and producing a result – its execution history must be taken into account.
Posted on February 6, 2017
The Swiss National Science Foundation (SNSF) is to fund a project to research ways to better express and export fundamental programming abstractions used in the interfaces between databases and programming languages.
Scala is the programming language of choice for many of the most popular and innovative big data frameworks and is used by hundreds of thousands of developers worldwide. A general trend of increasing confluence of programming and database technologies is currently built on shaky foundations. Interfaces between programming and databases are poorly understood, hard to maintain, and not future proof.
The project, led by Martin Odersky of the Programming Methods Laboratory, will explore three orthogonal research areas. The first concerns projecting data and will involve investigating how generic programming abstractions can best be embedded in Scala. The second focuses on projecting control by embedding easy-to-use yet hard-to-abuse meta-programming techniques. The third area of research concerns distributed programming abstractions.
Posted on January 23, 2017
Rachid Guerraoui appeared on the Swiss Radio and TV RTS 1 program CQFD to discuss algorithms.
The program, which is in French, can be found below.
Posted on January 11, 2017
The Swiss National Science Foundation (SNSF) has awarded a grant to a big data project at EPFL’s Operating Systems Laboratory (LABOS). Entitled “Building Flexible Large-Graph Processing Systems on Commodity Hardware”, the project aims to advance the state of the art in graph processing.
A great variety of information is naturally encoded as graphs. Large graphs are present in social networks as well as many other applications, including biology, forensics and logistics. Yet many first-generation graph processing systems are inflexible, restricting users to a particular environment and computation on static graphs.
The LABOS researchers, led by Willy Zwaenepoel, will build on their earlier work on out-of-core graph processing systems. Their project will involve building systems that scale gracefully between memory and storage and are capable of dealing with dynamic graphs. The team also intends to further optimize out-of-core execution, in terms of both performance and capacity.
Posted on January 11, 2017
EPFL’s Laboratory for Information and Inference Systems (LIONS), led by Volkan Cevher, is to receive funding from the Swiss National Science Foundation (SNSF) for its research project “Theory and Methods for Accurate and Scalable Learning Machines”.
The project focuses on applying machine learning – the ability of computers to learn from data – to the design of the next generation of online education systems. The researchers’ goal is to automatically adapt such systems to the background, skills and learning style of students to improve the delivery of knowledge.
Posted on January 11, 2017
A two-part paper by Jason Parker from Volkan Cevher’s Laboratory for Information and Inference Systems (LIONS) at EPFL has won the 2016 IEEE Signal Processing Society Best Paper Award.
The papers, “Bilinear Generalized Approximate Message Passing – Part I: Derivation” and “Bilinear Generalized Approximate Message Passing – Part II: Applications”, were published in IEEE Transactions on Signal Processing, Volume 62, No. 22, in November 2014.
The award ceremony will take place at the 2017 IEEE International Conference on Acoustics, Speech, and Signal Processing in New Orleans, USA in March.
Posted on January 9, 2017
EPFL’s Decentralized and Distributed Systems Laboratory under Bryan Ford has developed an innovative and effective solution to counter delays, inconsistencies and attacks encountered by users of the increasingly popular virtual currency Bitcoin.
Dubbed ByzCoin, the solution is inspired by protocols such as Practical Byzantine Fault Tolerance (PBFT). It is based on the idea that an active group of Bitcoin miners, using novel cryptographic algorithms, works collectively on the transaction blocks that, once verified, are added to the blockchain – the Bitcoin network’s public ledger. A miner would only require the approval of two-thirds of the other members of the group for each transaction to be processed.
This approach would guarantee a higher level of consistency within the blockchain and allow transactions to be irreversibly confirmed within seconds. Accelerated confirmation would mitigate dishonest practices such as double spending and selfish mining.
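The two-thirds rule at the heart of such PBFT-style protocols is easy to illustrate. The following Python sketch is a toy illustration, not ByzCoin’s actual code; the miner names and function names are invented. It shows how a block becomes final only once more than two-thirds of the current miner group has endorsed it:

```python
# Toy sketch of a two-thirds quorum rule (not the real ByzCoin implementation):
# a block is committed only once more than two-thirds of the current miner
# group has endorsed it, mirroring the quorum logic of PBFT-style protocols.

def quorum_size(group_size: int) -> int:
    """Smallest number of endorsements that exceeds two-thirds of the group."""
    return (2 * group_size) // 3 + 1

def is_committed(endorsements: set, group: set) -> bool:
    """A block is final once enough distinct group members have endorsed it."""
    valid = endorsements & group          # ignore signers outside the group
    return len(valid) >= quorum_size(len(group))

group = {f"miner{i}" for i in range(10)}            # 10 active miners
print(quorum_size(len(group)))                      # 7 endorsements needed
print(is_committed({f"miner{i}" for i in range(7)}, group))  # True
```

With ten active miners, six endorsements are not enough (two-thirds exactly), while seven commit the block irreversibly.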
Posted on January 9, 2017
Baris Kasikci has received the Patrick Denantes Memorial Prize 2016 for his PhD thesis, “Techniques for Detection, Root Cause Diagnosis, and Classification of In-Production Concurrency Bugs”.
Concurrency bugs are at the heart of some of the worst software bugs. They can slow down software development by weeks or even months, as they are difficult to identify and fix.
Baris’ thesis introduces techniques to automatically detect concurrency bugs during production and identify the root causes of in-production failures – particularly those caused by concurrency bugs. It also explores a technique that automatically classifies a data race based on its potential consequence.
The thesis was developed in EPFL’s Dependable Systems Laboratory under George Candea. A toolchain built to implement the techniques demonstrated their effectiveness, accuracy and precision.
Posted on January 9, 2017
Doctoral student David Kozhaya received the Best Presentation Award at the IEEE International Real-Time System Symposium in Portugal in December 2016.
David’s presentation was based on his paper “Right On Time Distributed Shared Memory”, which he co-authored with Rachid Guerraoui – who heads the Distributed Programming Laboratory – and ABB corporate researcher Yvonne-Anne Pignolet-Oswald.
The paper explores the construction of a shared-memory abstraction as a first step towards satisfying the growing demand for real-time data storage in distributed control systems (DCSs). Providing real-time guarantees in a DCS is particularly challenging as more and more sensor and actuator devices are connected to industrial plants and message loss must be taken into account. Find out more here.
Posted on January 9, 2017
Georgios Chatzopoulos, a student from EPFL’s Distributed Programming Laboratory, received the Best Paper Award at the 17th ACM/IFIP/USENIX International Middleware Conference in Italy in December 2016.
The winning paper, “Locking Made Easy”, presents GLS, a middleware designed to simplify and increase the efficiency of lock-based programming, which protects shared data from concurrent accesses.
GLS is based on the generic lock algorithm GLK. It offers debugging options for detecting various lock-related issues, such as deadlocks. Rachid’s team evaluated GLS and GLK on two modern hardware platforms, using several software systems, and demonstrated that GLK improves the performance of these systems by an average of 23% compared to their default locking strategies.
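The paper’s actual GLS interface is not described in the post, but the general idea of a debugging-friendly lock that guards shared data from concurrent access can be sketched as follows. All names here are hypothetical, and a simple bounded wait stands in for real deadlock detection:

```python
import threading

# Hypothetical sketch of a debugging-friendly lock wrapper (not GLS's real
# API). The wrapper serializes access to shared data and flags a likely
# deadlock when acquisition takes suspiciously long, instead of hanging.

class DebugLock:
    def __init__(self, name: str, timeout: float = 1.0):
        self._lock = threading.Lock()
        self.name = name
        self.timeout = timeout

    def __enter__(self):
        # A bounded wait lets us report a suspected deadlock rather than block forever.
        if not self._lock.acquire(timeout=self.timeout):
            raise RuntimeError(f"possible deadlock on lock '{self.name}'")
        return self

    def __exit__(self, *exc):
        self._lock.release()

counter_lock = DebugLock("counter")
counter = 0

def increment(n: int):
    global counter
    for _ in range(n):
        with counter_lock:          # all accesses to `counter` are serialized
            counter += 1

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: no increments lost to concurrent access
```

Without the lock, the four threads could interleave their read-modify-write sequences and lose updates; with it, the final count is always exact.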
Posted on December 27, 2016
EcoCloud professor Volkan Cevher has received an ERC Consolidator Grant for his research proposal “Time-Data Trade-Offs in High-Dimensional Statistical Learning via Convex Optimization”.
Computational power is growing slowly in relation to data sizes; consequently, large-scale problems require a long time to solve. Volkan’s research will explore and build on an emerging perspective that holds that data should be treated as a resource to be traded off with other resources, such as running time.
It is the first research project that aims to establish time-data trade-offs while characterizing their optimality. It is expected to change the way data is treated in statistical sciences and promises substantial computational flexibility for data-driven learning.
Volkan Cevher’s biography is available here.
Posted on December 27, 2016
EcoCloud professor David Atienza has received an ERC Consolidator Grant for his research proposal “COMPUSAPIEN: Computing Server Architecture with Joint Power and Cooling Integration at the Nanoscale”.
COMPUSAPIEN’s focus is the design of a three-dimensional computing server inspired by the mammalian brain. The project will involve developing and integrating breakthrough innovations in heterogeneous computing architecture, cooling-power subsystem design, combined microfluidic power delivery and temperature management in computers.
The integrated electronic-electrochemical design is expected to result in drastic energy savings and guarantee energy scalability in future server architectures.
David Atienza’s biography is available here.
Posted on October 21, 2016
We move towards the end of 2016 with pride, gratitude and optimism. The recent funding of several of our projects acknowledges the value of our work and allows us to go even further in our quest to drive innovation. We have also had the pleasure of welcoming another great talent to our faculty, while a leading global provider of ICT solutions has joined our Industrial Affiliates Program. You’ll find details of all this and more in our roundup of news, which we hope you’ll enjoy reading here.
Posted on September 7, 2016
Cyberhaven, a cybersecurity startup founded by Professor George Candea and some of his students, has raised more than $2 million in its first round of financing to bring simplicity and strength to enterprise security. The company pledges to protect enterprise clients from malware, malicious insiders, and social engineering. The technology, developed at EPFL’s DSLAB over seven years, has been validated in the market as well as through open-source projects. For more information on Cyberhaven, please click here.
Posted on September 7, 2016
Four EcoCloud projects received funding from Microsoft as part of the Joint Research Center between EPFL, ETH, and Microsoft. The projects include “Near-Memory System Services” by Babak Falsafi, “Co-located Deep Learning Training and Inference” by Babak Falsafi and Martin Jaggi, “Revisiting Transactional Computing on Modern Hardware” by Rachid Guerraoui, and “Toward Resource-Efficient Data Centers” by Florin Dinu. Pascal Fua and Michael Kapralov from IC also received funding for their projects. This round of the Joint Research Center also funded four projects from ETH.
Posted on June 6, 2016
Babak Falsafi featured on the CompuCast podcast on June 2nd, 2016. CompuCast is a podcast about computer science by computer scientists. You can find the podcast here.
Posted on May 23, 2016
John Thome won the Nusselt-Reynolds Prize for outstanding contributions in the field of “experimentation, visualisation and modelling of macro- and micro-scale two-phase flow and two-phase heat transfer, application of this science to the development of new thermal technologies of industrial importance, and for the broad dissemination of this work to the engineering community in five authored books”. The prize is bestowed for outstanding scientific and engineering contributions and eminent achievements in the fields of heat transfer, fluid mechanics and thermodynamics.
John is an outstanding researcher who has contributed significantly to the field of two-phase flow and heat transfer. You can find more information on John’s research here.
Posted on May 23, 2016
John Thome will moderate the ITHERM 2016 Panel on “Micro-Two-Phase Liquid Cooling Systems for Electronics” on June 1st, 2016 at the ITHERM 2016 Conference in Las Vegas. The distinguished panelists include Dr. David Copeland (Oracle), Dr. Thomas Brunschwiler (IBM), Dr. Todd Salamon (Nokia/Bell Labs), Dr. Abhinav Dixit (Eaton), Dr. Soheil Farshchian, and Dr. Jackson Marcinichen (EPFL) who will discuss the challenges and concerns associated with two-phase cooling. For more information on the panel, please visit the ITHERM 2016 webpage.
Posted on May 23, 2016
Pinar Tozun received an honourable mention from the ACM SIGMOD Jim Gray Doctoral Dissertation Award Committee for her thesis on “Transactions Chasing Scalability and Instruction Locality on Multicores”.
Pinar Tozun has been a Research Staff Member at IBM’s Almaden Research Center since January 2015. In November 2014, she received her PhD from École Polytechnique Fédérale de Lausanne (EPFL), working under the supervision of Prof. Anastasia Ailamaki in the Data-Intensive Applications and Systems Laboratory. Her research focuses on the scalability and efficiency of data management systems on modern hardware. Pinar was an intern at Oracle Labs (Redwood Shores, CA) during summer 2012. Before starting her PhD, she received her BSc degree from the Computer Engineering Department of Koç University in 2009.
Posted on May 20, 2016
Baris Kasikci received the Eurosys Roger Needham PhD Dissertation Award for his Ph.D. dissertation “Techniques for Detection, Root Cause Diagnosis and Classification of In-Production Concurrency Bugs”. The award is given for exceptional and innovative contributions to knowledge in the systems area by doctoral students at European universities.
Baris graduated from the Ph.D. program at EPFL under the guidance of George Candea in the Dependable Systems Lab. His research was centered around building techniques, tools, and environments that will ultimately help developers build more reliable software. He is interested in finding solutions that will allow programmers to debug their code in an easier way. In this regard, he strives to find efficient ways to deal with concurrency bugs in general, and data races in particular.
Posted on May 20, 2016
Jim Larus, along with his co-authors Manuel Fahndrich, Mark Aiken, Chris Hawblitzel, Orion Hodson, Galen Hunt, and Steven Levi, received the EuroSys Test of Time Award for their EuroSys 2006 paper “Language Support for Fast and Reliable Message-based Communication in Singularity OS”. The paper describes “language, verification, and run-time system features that make messages practical as the sole means of communication between processes in the Singularity operating system”.
Jim Larus is the Dean of the School of Computer and Communication Sciences (IC) at EPFL. His research focuses on the hardware and software challenges associated with very-large-scale systems. For more information on Jim’s research, please visit http://vlsc.epfl.ch.
Posted on May 20, 2016
Professor Katerina Argyraki, Tenure-Track Assistant Professor and head of the Network Architecture Lab, has won the second Eurosys Jochen Liedtke Young Researcher Award for outstanding contributions in the field of computer science. Katerina is the second EcoCloud and Computer Science faculty member at EPFL to receive this award since its inception, after George Candea received the very first award last year.
Katerina Argyraki’s research addresses fundamental questions in the design and building of dependable network systems, including what functionality is deployed within these systems and how it is implemented. For more information on Katerina’s research, please visit http://nal.epfl.ch.
Posted on April 26, 2016
Babak Falsafi has received funding from Google to integrate CloudSuite into PerfKit, Google’s open-source framework for measuring cloud performance. The framework contains automated benchmarking tools that allow for practical benchmarking at scale. Integrating CloudSuite into PerfKit will enable the measurement of representative benchmark metrics and allow for rapid, effective and practical benchmarking at scale. In an interview with the Google Cloud Performance Blog, Javier Picorel explains: “We believe that PerfKit Benchmarker (PKB) is a step towards the standardization of cloud benchmarking. In essence, we envision PKB as the ‘SPEC for cloud-server systems.’” You can find more information on the blog here.
Posted on April 26, 2016
Anastasia Ailamaki will investigate the potential of hardware/software co-design for efficient utilization of micro-architectural resources in collaboration with Huawei. Past research has shown that DBMSs severely under-utilize their micro-architectural resources, with more than 50% of CPU cycles going to memory stalls and the number of retired instructions per cycle barely reaching one on machines able to retire four instructions per cycle. Pure software-level optimizations are not enough to fully exploit the micro-architectural resources. This under-utilization limits the performance of DBMSs and leads to poor energy efficiency. The goal of the project is to reconsider the design of OLTP systems by making the utilization of micro-architectural resources the highest priority, so as to achieve high throughput, low latency, high hardware utilization and better energy efficiency.
Posted on April 26, 2016
David Atienza, our affiliate Eaton, and several European partners received Horizon 2020 funding to explore future HPC platforms. The project, named MANGO, aims at achieving extreme resource efficiency in future QoS-sensitive HPC workloads through ambitious cross-layer system exploration for better performance/power/predictability. The system architecture will be inherently heterogeneous as an enabler for efficiency and application-based customization, where general-purpose compute nodes are intertwined with heterogeneous acceleration nodes, linked by a homogeneous interconnect.
Posted on April 26, 2016
David Atienza and Babak Falsafi are members of a European-wide consortium to define a vision for the future of HPC, to bring together research communities spanning future and emerging platform technologies as well as data analytics, management and simulation tools for users, and to create a foundation for a center of excellence in high-performance computing in Europe. The consortium is led by Chalmers University and includes partners from U. of Augsburg, BSC in Barcelona, Edinburgh, ETH, FORTH in Crete, Ghent, INRIA, University of Manchester, RWTH in Aachen, Technion and University of Stuttgart.
Posted on April 26, 2016
EPFL is the lucky recipient of one of twenty Intel-Altera Heterogeneous Architecture Research Platforms (HARP). The HARP system contains an “Intel microprocessor and an Altera Stratix® V FPGA module that incorporates Intel® QuickAssist Technology”. Our group has also acquired an FPGA-enhanced Hybrid Memory Cube platform from Micron. The HMC system is Linux-based and comes with two Xilinx AC-510 modules, each with a 4GB HMC. These platforms will be used by EcoCloud researchers to develop accelerators for server platforms and serve as a hardware prototyping substrate for research on architectural mechanisms for in-memory rack-scale computing.
Posted on April 26, 2016
Immanuel Trummer has been pioneering novel multi-objective query optimization paradigms, with award-winning results through incremental algorithms, randomized algorithms, and parallel processing. He is now turning his focus to less conventional platforms, namely quantum computers, to make a real dent in the problem. “Our recent access to a D-Wave 2X adiabatic quantum annealer with over 1000 qubits at NASA Ames Research Center enabled us to experimentally evaluate the potential of quantum computing for solving optimization problems that arise in large-scale data analysis,” he says. Quantum computers harness the laws of quantum physics for computation, exploring multiple computational paths at the same time and solving search problems that are otherwise impractical in scale on conventional platforms. Immanuel’s preliminary results indicate speedups of up to four orders of magnitude compared to traditional approaches for multi-objective query optimization.
Posted on April 26, 2016
Ioannis Alagiannis’ research on “NoDB: Efficient Query Execution on Raw Data Files” was featured as a CACM research highlight. CACM selects a few contributions among the best in computer science for publication as research highlights. The paper presents a new paradigm, called data virtualization, which enables querying data in situ with all the features of modern databases but without the burden of loading the data into a database. You can find more information on the article here. Anastasia Ailamaki was also featured in WORK magazine, presenting RAW Labs, an EPFL startup that designs software for big data applications: “through efficient queries to never-before-seen data, we aim at maximizing efficiency of analytics applications and enabling new discoveries for sciences, businesses, and their users”. You can read the article, which appears in French, here.
Posted on April 26, 2016
Baris Kasikci and his collaborators at Intel and Microsoft have made a splash in the world of software development! Software complexity is now a major concern, not only due to the emergence of multicores a decade ago but also because the slowdown in silicon efficiency is pushing platforms and software towards heterogeneity. Baris’ proposal, “failure sketching”, is an automated debugging technique that provides developers with an explanation (i.e., a failure sketch) of the root cause of a failure that occurs in production. These results, which appeared at the flagship SOSP conference in 2015, are being integrated into software toolchains at Intel.
Posted on April 26, 2016
The encyclopedia of two-phase heat transfer and two-phase flow is now available in eight volumes. The encyclopedia is the first comprehensive summary of the fundamentals of two-phase flows, heat transfer mechanisms, and cooling. The latter technology has been pioneered by our own John Thome, the editor-in-chief of the encyclopedia, and is emerging as the only viable approach to heat removal in future high-density server platforms. You can find more information on the encyclopedia here. There will be a first-hand demo of two-phase liquid cooling in a server rack at our annual event.
Posted on April 26, 2016
EcoCloud sponsored a workshop on “Reconfigurable Computing for the Masses” at the FPL Conference in London on September 4th, 2015. The workshop hosted many distinguished speakers from academia and industry to discuss the recent trends and challenges of using FPGAs to accelerate computing tasks in embedded and server platforms. More information, including the workshop’s material, is available here.
Posted on April 26, 2016
CloudSuite 3.0 was released at the 2016 HiPEAC Conference in Prague in January. The third version is a major enhancement over prior releases, both in workloads and in infrastructure. It includes benchmarks that represent massive data manipulation with tight latency constraints, such as in-memory data analytics using Apache Spark, a new real-time video streaming benchmark following the setup of today’s most popular video-sharing websites, and a new web serving benchmark mirroring today’s multi-tier web server software stacks with a caching layer. To facilitate deployment, the benchmarks are integrated into the Docker container system and Google’s PerfKit Benchmarker. PerfKit enables automated benchmarking and performance comparison across a broad spectrum of cloud server systems. CloudSuite 3.0 runs on both real hardware and a QEMU-based emulation platform. A tutorial is scheduled at EuroSys in London for those interested. For further information on CloudSuite 3.0, please visit the website.
Posted on April 26, 2016
EcoCloud’s affiliate Swisscom just signed a strategic partnership with EPFL to establish a Swisscom Digital Lab at the EPFL Innovation Park. Swisscom will invest CHF 1M per year for seven years in research activities ranging from applications and software to infrastructure with specific focus on interconnected people and homes. Swisscom will also support the innovative ecosystem at EPFL by organizing events related to digitalization on campus. EcoCloud is delighted about the arrival of Swisscom on campus and looks forward to strengthening our research collaborations with them.
Posted on January 28, 2016
Welcome to this edition of EcoCloud’s electronic newsletter. This edition of the newsletter can be found here.
This year we are proud to announce that our faculty have been recognized by the leading professional organizations with high distinctions for their contributions to computer science and engineering, our students not only have won prestigious awards but have transferred their innovations to our affiliates, and our family of faculty and affiliates has grown with the addition of two accomplished professors and a leading company in storage solutions. We hope that you will enjoy browsing through this newsletter and wish everyone a productive successful 2016!
Posted on December 16, 2015
David Atienza was named an IEEE Fellow, effective January 1, 2016, for his contributions to design methods and tools for multiprocessor systems-on-chip. “The IEEE Grade of Fellow is conferred by the IEEE Board of Directors upon a person with an outstanding record of accomplishments in any of the IEEE fields of interest. The total number selected in any one year cannot exceed one-tenth of one percent of the total voting membership. IEEE Fellow is the highest grade of membership and is recognized by the technical community as a prestigious honor and an important career achievement.” David Atienza receives this recognition “for his sustained and outstanding contributions in the areas of thermal-aware design, hardware-software co-optimization methodologies for wireless body sensor nodes and low-power multi-core system architectures.” David is an associate professor of electrical engineering at EPFL and a member of the EcoCloud Executive Committee.
Posted on December 16, 2015
Anastasia Ailamaki and Babak Falsafi were named ACM Fellows. Fellowship is ACM’s most prestigious award, given to the top 1% of ACM members for their outstanding contributions in computing and information technologies. According to ACM, this year’s awardees’ achievements “are fueling advances in computing that are driving the growth of the global digital economy.” Anastasia Ailamaki, a professor of computer science at EPFL, received her award for outstanding contributions to the design, implementation, and evaluation of modern database systems. Her research interests are in database systems and applications, including strengthening the interaction between database software and emerging hardware and I/O devices, and automating database management to support computationally demanding and data-intensive scientific applications. She is also a founding member of EcoCloud.
Babak Falsafi received his award for outstanding contributions to multiprocessor and memory architecture design and evaluation. Babak is a professor of computer science at EPFL working on architectural innovation to address emerging challenges in the design and performance-scalability of future computer systems. Babak is also the founder and director of EcoCloud and a member of its executive committee. You can find more information on the ACM 2015 Fellows here.
Posted on June 15, 2015
George Prekas and Immanuel Trummer have won the Google European Doctoral Fellowship, which “recognizes and supports outstanding graduate students doing exceptional work in Computer Science and related disciplines.”
George Prekas’ research interest is in energy efficient resource control for datacenter applications with high-throughput and low-latency requirements.
Immanuel Trummer’s research focuses mainly on different variants of the multi-objective query optimization problem where the goal is to strike a good balance between conflicting cost metrics in query processing.
Posted on May 29, 2015
Welcome to this edition of EcoCloud’s electronic newsletter. This edition of the newsletter can be found here.
The EcoCloud Annual Event will be held on June 22nd and 23rd at Lausanne Palace. This year’s event features an exciting lineup of EcoCloud and industrial speakers and presenters. In this newsletter, we are also delighted to announce two additions to our Industrial Affiliate Program, a new EcoCloud faculty member expanding our research portfolio in energy management, and the latest news on our research, accomplishments and outreach.
Posted on May 11, 2015
We are pleased to announce the first Summer School on DSL Design and Implementation from July 12th to July 17th, 2015 at EPFL. The summer school is an opportunity for students to interact and learn from leading experts in the field of Domain Specific Languages.
The covered topics will include:
Lectures will be followed by hands-on sessions where students will work with state-of-the-art tools and technologies.
The summer school is open to any MSc/PhD student having an interest in the field. The student should have some basic knowledge of the field. An undergraduate compiler course is a plus. For more information, please click on the link above.
Posted on March 12, 2015
Tudor David, a PhD student working with Rachid Guerraoui in the Distributed Programming Lab, has won the prestigious VMware Graduate Fellowship for the 2015-2016 academic year. The VMware fellowships are awarded to outstanding students pursuing research related to VMware’s business interests which include core machine virtualization and cloud computing.
Tudor is doing research on concurrent search data structures and message-passing agreement on many-core systems.
Posted on March 12, 2015
Manos Karpathiotakis is a 2015-2016 winner of the prestigious IBM Ph.D. Fellowship Award. The IBM Ph.D. Fellowship Awards Program is a worldwide competitive program that honors exceptional Ph.D. students who have an interest in solving problems that are important to IBM and fundamental to innovation in many academic disciplines and areas of study. Award recipients are selected based on their overall potential for research excellence and their academic progress to date, as evidenced by publications and endorsements from their faculty advisor and department head.
Manos is from the Data Intensive Applications and Systems Lab under Professor Anastasia Ailamaki. His primary research is on database systems. Please visit his homepage for more information on Manos’ research.
Posted on March 5, 2015
EcoCloud’s article on “Clouds, Datacenters & the Future of IT” appeared in French under the title “Cloud, datacenters et l’avenir de l’informatique” on page 14 of an insert by SmartMedia in L’Hebdo #5 “Semaine du 29 Janvier 2015”. The French article can be found here. Please find below the English version of the article.
Clouds, Datacenters & the Future of IT
Information technology (IT) has been undergoing a data-centric revolution in recent years in which enterprises, governments, and research organizations alike use analytics on massive data to extract information and monetize data to improve their practices, products, and services. Data now lies at the core of the supply chain for both products and services in modern economies. Analyzing text and documents online has led to groundbreaking advances in language technologies and has enabled investment banks to identify financial trends. Graph analytics can help uncover insights in applications as broad as social media, telecommunications, healthcare, and utilities. Data-intensive scientific discovery now complements theoretical, empirical, and simulation-driven science as a fourth paradigm for scientific discovery.
Today, data-centric IT services, also referred to as cloud services, are provided through centralized infrastructure called datacenters to maximize resource sharing and exploit economies of scale. In contrast to supercomputers, which target the high-cost/high-performance scientific domain, datacenters consist of volume servers aimed at cost-effective data processing, communication and storage. Datacenter owners prioritize capital and operating costs over ultimate performance. While larger organizations are consolidating their IT infrastructure and services into privately owned clouds to guarantee data ownership, confidentiality and privacy, many are opting for public clouds primarily for economic reasons, setting aside legal implications and data governance.
The exponential growth of IT in recent years has led to unprecedented demands on datacenters worldwide. In 2013, Amazon Web Services added enough server capacity every day to support all of Amazon’s global infrastructure in 2003, when it was a $5.2 billion annual-revenue enterprise, according to its VP, James Hamilton. IDC projects that data will reach 40 zettabytes by 2020 (equivalent to 100 iPads for every woman, man and child in 2020). This growth in data far surpasses the exponential improvements in digital platform capabilities enabled by conventional semiconductor fabrication technologies over the past four decades. Semiconductor fabrication technologies have now hit fundamental physical barriers with economic, energy and environmental repercussions, necessitating fundamental research and groundbreaking new solutions to enable continued growth in IT.
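IDC’s iPad equivalence is easy to sanity-check with back-of-the-envelope arithmetic, assuming a world population of roughly 7.6 billion in 2020 and a 64 GB iPad (neither figure is stated in the article):

```python
# Back-of-the-envelope check of IDC's "100 iPads per person" comparison.
# Assumed figures: ~7.6 billion people in 2020 and a 64 GB iPad capacity;
# both are assumptions, not numbers taken from the article.
ZB = 10**21          # one zettabyte in bytes
GB = 10**9           # one gigabyte in bytes

data_2020 = 40 * ZB
population = 7.6e9
ipad_capacity = 64 * GB

ipads_per_person = data_2020 / population / ipad_capacity
print(round(ipads_per_person))  # ~82, i.e. on the order of 100 iPads each
```

The result lands within the same order of magnitude as IDC’s round figure, which is all such comparisons are meant to convey.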
The future of IT is of key strategic relevance not only to the world at large but also to Switzerland and other European countries, whose economies are primarily innovation- and service-based and highly dependent on data-centric IT. Moreover, as a top per-capita spender on IT and with a mandate to minimize its energy footprint by 2050, Switzerland must invest in large-scale IT infrastructure for both sustainability and digital sovereignty.
Ecocloud contributes to Google PerfKit benchmarker, a new open source cloud performance measuring tool
Posted on February 23, 2015
EcoCloud is one of several academic and industrial institutions that contributed to the new Google PerfKit Benchmarker, an open-source tool to measure cloud performance. The tool helps the community collaborate on a set of benchmarks and already includes common cloud workloads, including workloads from the CloudSuite benchmark suite developed at EPFL. For more information on PerfKit, please click here.
Posted on February 5, 2015
The Swiss National Science Foundation has awarded Katerina Argyraki a Starting Grant for her research on adapting and “evolving network functionality with the needs of its users and operators” through a virtual data plane.
Posted on November 10, 2014
Martin Odersky received the 2014 Swiss ICT Special Award for his development of Scala, a platform-independent, scalable programming language. According to the award committee, Martin is “representative of the innovative force and successful commercialisation of research projects in the industry in the best traditions of Swiss universities.”
Posted on November 10, 2014
Alexandra Olteanu, Anne-Marie Kermarrec and Karl Aberer received the 2014 Best Paper Award from the Web Information System Engineering (WISE) conference for their paper “Comparing the Predictive Capability of Social and Interest Affinity for Recommendations”. The paper highlights the importance of social affinity (how well connected people are on a social graph) as a predictor of a user’s taste, compared to interest affinity (how similarly users rate items).
Posted on October 21, 2014
Welcome to EcoCloud’s Electronic Newsletter! We are pleased to announce that the EcoCloud Newsletter will be semiannual henceforth. This edition of the newsletter can be found here.
In this issue, you will learn about new additions to our team, the latest about our research, accomplishments and outreach, and our visiting scholars this year. Last but not least, we are truly excited to announce in this newsletter the arrival of EcoCloud’s new Deputy Director.
Posted on September 5, 2014
Professors Babak Falsafi from EPFL and Boris Grot from University of Edinburgh have highlighted the challenges and opportunities of system designs in the era of Big Data in the IEEE Micro July/August 2014 edition. You can find the link to the introduction below: http://www.computer.org/csdl/mags/mi/2014/04/mmi2014040004.pdf.
Posted on July 14, 2014
Edouard Bugnion, co-founder of VMware and Nuova Systems (acquired by Cisco), was named Adjunct Professor in the School of Computer and Communication Sciences (IC) by the ETH Board (“Le Conseil des EPF”). Professor Bugnion joined EPFL in 2012. His research focuses on data center systems, including scale-out NUMA, domain-specific operating systems and virtual data planes.
Posted on July 4, 2014
Professor George Candea, Associate Professor and head of the Dependable Systems Lab, has won the first EuroSys Jochen Liedtke Young Researcher Award for outstanding contributions to the field of computer science.
George Candea’s research is on practical ways of achieving reliability and security in complex software systems. His main focus is on real-world large-scale systems, with hundreds of threads and millions of lines of code written by hundreds of programmers—going from a small program to a large system introduces fundamental challenges that cannot be addressed with the techniques that work at small scale. For more information, please visit http://dslab.epfl.ch.
Posted on July 2, 2014
Onur Kocberber from the PARSA Lab has won the Google PhD Fellowship award for 2014-2015. The Google European Doctoral Fellowship is awarded to outstanding doctoral students doing exceptional research in computer science or closely related areas.
Onur Kocberber’s main interest is in computer systems. His research is centered on server system architecture, particularly focusing on on-chip accelerators for database systems to improve the performance and energy efficiency of server processors. He is the release co-manager of CloudSuite and a co-developer of the Flexus simulation framework. Visit his homepage for more information.
Posted on July 2, 2014
Cansu Kaynak from the PARSA Lab has won the prestigious Anita Borg Memorial Scholarship Award for 2014-2015. The Anita Borg Memorial Scholarship “encourages women to excel in computing and technology, and become active role models and leaders in this field.”
The Scholarship is awarded to women who demonstrate leadership, strong academic credentials, and passion for increasing women’s involvement in computer sciences.
Cansu is mainly interested in computer architecture. Her research is centered around server system architecture, particularly focusing on memory system design to bridge the performance gap between processor and memory. To this end, she has been exploring ways to predict memory activity to proactively move instructions and data closer to the processor to hide the memory access latency from the processor. Please visit her homepage for more information on Cansu’s research.
Posted on March 17, 2014
Cansu Kaynak, a PhD student at the Parallel Systems Architecture Lab (PARSA), directed by Prof. Babak Falsafi, received a prestigious and highly competitive IBM Ph.D. Fellowship Award. The IBM Ph.D. Fellowship Awards Program is an intensely competitive worldwide program, which honors exceptional Ph.D. students who have an interest in solving problems that are important to IBM and fundamental to innovation in many academic disciplines and areas of study.
Award Recipients are selected based on their overall potential for research excellence, and their academic progress to-date, as evidenced by publications and endorsements from their faculty advisor and department head. The program also supports their long-standing commitment to workforce diversity. IBM values diversity in the workplace and encourages nominations of women, minorities and all who contribute to that diversity.
Cansu is mainly interested in computer architecture. Her research is centered around server system architecture, particularly focusing on memory system design to bridge the performance gap between processor and memory. To this end, she has been exploring ways to predict memory activity to proactively move instructions and data closer to the processor to hide the memory access latency from the processor. Please visit her homepage for more information on Cansu’s research.
Posted on February 20, 2014
The 2014 issue of IEEE Micro’s Top Picks from the Computer Architecture Conferences will feature “Clearing the Clouds”, a paper by researchers of the PARSA and DIAS labs, directed by Prof. Babak Falsafi and Prof. Anastasia Ailamaki respectively, as one of the most influential papers in computer architecture. According to Journal Citation Reports, IEEE Micro has one of the highest impact factors among computer science magazines.
Posted on January 18, 2014
Baris Kasikci, a PhD student working with George Candea, was selected as one of four recipients of the prestigious VMware Graduate Fellowship for the academic year 2014/2015.
The fellowships are awarded to outstanding students pursuing research related to VMware’s business interests, which include core machine virtualization and cloud computing. This is the first time EPFL has been eligible for these fellowships, and Baris is our first winner.
Before starting his PhD, Baris worked as a software engineer for four years, mainly developing real time embedded systems software. He received his B.S. and M.S. degrees in Electrical and Electronics Engineering from Middle East Technical University, Ankara, Turkey in 2006 and 2009, respectively. His research is centered around building techniques, tools, and environments that will ultimately help developers build more reliable software. He is interested in finding solutions that will allow programmers to debug their code in an easier way. In this regard, he strives to find efficient ways to deal with concurrency bugs in general, and data races in particular.
Posted on December 20, 2013
Onur Koçberber, Boris Grot, Javier Picorel and Prof. Babak Falsafi of EcoCloud, along with co-authors, were honored with the Best Paper Runner-Up award for their paper “Meet the Walkers” at the 46th International Symposium on Microarchitecture (MICRO-46). MICRO is the premier forum for presenting, discussing, and debating innovative microarchitecture ideas and techniques for advanced computing and communication systems. This year, MICRO was particularly competitive, with an acceptance rate of only 16%. Read the full article here.
Posted on December 18, 2013
Welcome to EcoCloud’s third annual electronic newsletter! A full version of the newsletter is available here.
In this issue, we are delighted to report EcoCloud’s achievements over the past year and what is new in 2014. We have new faculty members in our community bringing a wealth of knowledge, research and industrial expertise, over half a dozen projects spanning from data analytics to green infrastructure, and a number of prestigious awards won by EcoCloud researchers, making 2013 a fantastic year. Besides these accomplishments, including research highlights covered in international media, we also hosted collaborators and researchers from peer institutions in our Visiting Scholars program.
This year, our annual event will be on June 5th and 6th, 2014 in Lausanne Palace. We look forward to seeing you there.
Posted on November 21, 2013
EcoCloud professor Anastasia Ailamaki is a winner of an EU ERC Consolidator Grant in 2013. The grants “support researchers in consolidating their own independent research team or program and strengthen independent and excellent new individual research teams that have been recently created.” With project ViDa, Anastasia will pioneer Big Data technologies that defy the data deluge by enabling efficient queries on raw heterogeneous data, obviating the need to pre-format or load the data into a database. Anastasia’s ERC Consolidator Grant will be fully funded at a level of 2M euros over five years. With Anastasia, EcoCloud now boasts six current faculty members, and seven in total, who have received ERC Grants.
Posted on October 5, 2013
With ever growing demands on more efficient and cost-effective processing, communication and storage of data, the software and hardware technologies to help develop parallel, robust and efficient servers and data centers are going through major transformations. The CUSO Winter School on Data-Centric Systems in collaboration with EcoCloud will cover a series of lectures from internationally-recognized experts on emerging software and hardware technologies at the intersection of Big Data and efficiency.
Posted on September 17, 2013
EcoCloud professor Rachid Guerraoui is one of the winners of the EU’s ERC Advanced Grants in 2013. ERC Advanced Grants “allow exceptional established research leaders to pursue ground-breaking, high-risk projects that open new directions in their respective research fields or other domains.” With his new project, “Adversary-Oriented Computing”, Rachid will pioneer technologies for robust cloud computing, targeting a division of software into components that each implement a specific “adversarial” strategy and can be designed, implemented, verified, tested and debugged independently. ERC Advanced Grants are funded at a level of 2M euros over five years. With Rachid, EcoCloud now boasts five current faculty members, and six in total, who have received ERC Grants.
Posted on August 20, 2013
David Atienza is the recipient of the 2013 IEEE CEDA Early Career Award for his contributions to the area of design methods and tools for multiprocessor system-on-chip architectures, particularly for work on thermal-aware design, low-power architectures and on-chip interconnect synthesis. The award honors “an individual who has made innovative and substantial technical contributions to the area of Electronic Design Automation in the early stages of his or her career.”
This is the first time this award has been given outside the USA. Prof. Atienza will receive his prize at the opening ceremony of the IEEE/ACM 32nd International Conference on Computer-Aided Design (ICCAD) in San Jose, CA, USA, in November 2013. David was also the first recipient of the ACM SIGDA Outstanding New Faculty Award in 2012, likewise the first given outside North America.
Posted on July 27, 2013
With diminishing silicon design efficiency (i.e., the slowdown of Dennard scaling), cooling has taken center stage in server innovation, both to enable designs that can dissipate higher levels of power for better server performance and to improve cooling efficiency to reduce the Total Cost of Ownership, a metric that large datacenter owners strive to optimize. EcoCloud’s John Thome is a pioneer in two-phase liquid cooling for servers, in which the cooling liquid circulates in two phases to improve heat-removal efficiency while requiring a lower flow rate (for lower operating cost) and enabling better temperature uniformity across the chip. Together with Jackson Marcinichen, he has recently invented two-phase cooling at the chip level for maximum efficiency. Their technology is showcased on the cover of Electronics Cooling, a high-profile magazine dedicated to thermal management in the electronics industry.
Posted on July 15, 2013
We are delighted to announce that Vitaly Chipounov and Djordje Jevdjic are recipients of this year’s Intel Doctoral Fellowships.
EcoCloud is now four for four in nominations and wins for the Intel Fellowships since the program started in Europe, following last year’s two winners, Pejman Lotfi-Kamran and Cristian Zamfir.
The award letter states that “This was a highly competitive process with many outstanding quality applicants across several universities and exciting areas of research”. The awardees will gather at the Intel ERIC conference in October for a reception ceremony.
Congratulations to Vitaly and Djordje!
Posted on May 27, 2013
This year’s program includes a keynote entitled “Big Data is (at least) Four Different Problems” by the database visionary Mike Stonebraker of MIT, followed by presentations from EcoCloud researchers, a poster session, and an industrial perspectives session from a group of experts among EcoCloud’s industrial affiliates and partners.
The event brings together researchers and technologists from academia and industry interested in monetizing Big Data at maximum efficiency and minimal cost. EcoCloud’s research highlights this year include technologies for massive analytics and graph processing, real-time and performance-stable cloud services, scalable parallel software, and data-centric server chips and infrastructure.
The industrial session includes talks by experts from EcoCloud’s partners and affiliates including major IT vendors. Anne Holler from VMware and Paolo Faraboschi from HP Labs will each present their respective vision on Software-Defined Datacenters. Eric Chung from Microsoft Research will present hardware specialization for Big Data services. Peter Dickman of Google will present emerging efficiency challenges in Warehouse-Scale Computing.
The event is also a great opportunity for EcoCloud to showcase its Industrial Affiliates Program, promoting research collaborations with industry to help pave the way for impending technological challenges as well as problems on the horizon and outside industry’s immediate concerns. “We target solutions towards a long-term vision for efficient and scalable data-centric IT that are also of value and interest to industrial partners in the short- and medium-term,” says Babak Falsafi, EcoCloud’s Director and Professor in the School of Computer and Communication Sciences.
EcoCloud’s targeted research enables laboratories to work together towards a common goal, thereby propelling collaboration and the potential for trans-disciplinary innovation. In so doing, it echoes and reinforces the core ethos of EPFL itself.
Please see our event’s website for more info.
Posted on May 25, 2013
Computing Now, the online portal highlighting the IEEE Computer Society’s top articles, features Boris Grot’s recent results on Optimizing Datacenter TCO with Scale-Out Processors. The guest editor, Sundara Nagarajan, writes: “The article defines TCO as an optimization metric that considers the costs of real estate, power delivery and cooling infrastructure, hardware-acquisition costs, and operating expenses. This excellent study will have far-reaching impact on storage system architecture.” To read the article, click here.
Posted on May 16, 2013
Our second annual event (in June 2012) was a great success thanks to EcoCloud researchers and staff, keynote and industrial session speakers, and student presenters. In this issue, we are delighted to announce two outstanding new members in our research community, bringing a wealth of knowledge and expertise, and to report a number of achievements by EcoCloud members that made 2012 an even more productive year than the one in which we launched the center. Besides these accomplishments, ranging from research highlights covered in international media to new projects and faculty and student awards, we are also happy to report that in 2012 we introduced the EcoCloud Visiting Scholar program to attract world-renowned researchers to spend a sabbatical and collaborate with us.
This year, we will have our annual retreat on May 31st, 2013, at the same venue as last year, Hotel de la Paix in Lausanne. We look forward to seeing you there!
Posted on December 2, 2012
The Tech Tour Cloud & Big Data Summit, held at both the Lausanne Palace Hotel and EPFL Rolex Learning Center over 21-22 November 2012, has been a major showcase for the expertise of EcoCloud.
Prof. Babak Falsafi, Director, EcoCloud, was a member on Tuesday’s Panel “Cloud Computing: The Ups and Downs.” Additional speakers included: Rajas Gokhale, Capgemini, Tim Harper, Cientifica and Matthias Haendly, SAP. Prof. Falsafi also officially opened the Summit’s proceedings on Wednesday.
Also that day, EcoCloud Executive Committee member Prof. Anastasia Ailamaki presented her research in a talk entitled “Dias, Scientific Discovery through Raw Data Exploration”, and EcoCloud scientist, EPFL faculty member and VMware co-founder Dr. Edouard Bugnion gave the event’s keynote speech.
Behind the scenes, EcoCloud’s Deputy Director, Dr. Anne Wiggins, served on the Summit’s Selection Committee.
More information about the Tech Tour Cloud & Big Data Summit can be located here: http://www.techtour.com/Cloud-BigData-Summit-2012/Overview.htm
Posted on December 1, 2012
Cloud Computing has become the predominant way of delivering and consuming IT infrastructure (computation and storage), middleware and applications. Such a fundamental transformation, as with the advent of the web, will change how we communicate, do business, and offer services. EcoCloud’s Director, Prof. Babak Falsafi and Deputy Director, Dr. Anne Wiggins, collaborated on The Swiss Academy of Engineering Sciences’ topical platform “ICT – Computing in science and technology” to write a white paper about “Cloud Computing in Switzerland”.
Posted on November 30, 2012
Also of EPFL’s Embedded Systems Laboratory, Prof. Atienza was interviewed last week by RTS about his joint laboratory and EcoCloud-related research, which has resulted in a 50% reduction of energy consumption in Credit Suisse datacenters. In the interview, Prof. Atienza was also asked more generally about research and progress in optimizing cloud computing energy consumption.
In addition, he was asked to comment on the idea of “micro-clouds”, currently under discussion as a way to create localized hubs of cloud computing networks with specific restrictions on access to information.
Prof. Atienza indicated that this idea is very similar to the well-known concept of “private clouds”, which evidently makes sense in specific contexts as a particular way to provide access. However, EcoCloud treats the problem of scalable and secure access to data in a more general way.
Posted on November 29, 2012
Today’s data centers consume extraordinary amounts of power, often measured in tens of megawatts per installation and equal to the power draw of 40,000 residential homes.
The high power requirements are, in part, due to inefficiencies of existing server processors, which are deployed by the thousands in each data center, yet are poorly matched to the memory-intensive software applications powering online services (including web search, social networking and business analytics). With data centers already consuming approximately 2% of the global power budget, experts project exponential growth in data center power consumption in the coming decades.
The physical space and power limitations that inhibit the growth, and increase the costs, of large-scale data centers must be overcome. Optimal performance of these memory-intensive applications is hindered by inefficient chips, tight energy budgets and conventional server processors, which were designed for a broad range of workloads.
The EuroCloud project targets a 10x improvement in data center cost- and energy-efficiency, representing a major step toward sustainable data center IT. The EuroCloud team has been developing advanced low-power server architectures with many cores and integrated 3D DRAM to provide very dense, low-power microprocessor technologies adapted from those used in mobile phones. This technology can scale to hundreds of cores in a single server and makes a 1M-core data center feasible. The commercial application of these results would make European data center investment more affordable, thereby facilitating industrial growth. EuroCloud has laid the foundation for funding research on green data centers as a separate program in FP7 and in Horizon 2020, the upcoming next-generation EU funding program.
European Commission Vice-President Neelie Kroes said: “Today’s power-hungry cloud data centres are not sustainable in the long run. The EuroCloud chip addresses the core of this energy consumption problem. I hope further development of the EuroCloud chip will boost the position of European businesses in a sector currently dominated by non-Europeans.”
Dr. Max Lemke, Deputy Head of Unit for Embedded Systems and Control in the Directorate General Information Society and Media of the European Commission, referred to the project to illustrate how the main goal of research in computing systems is getting energy-efficient and low-cost computing technologies into the full spectrum of devices and systems, from mobile and embedded systems to data centers and supercomputers. “Computing is a key enabler for Europe’s competitiveness in engineering, which is a key driver for the European economy. Europe has to leverage its unique expertise in embedded and mobile computing systems to innovate in energy-efficient and low-cost computing technologies,” Dr. Lemke said.
Posted on November 28, 2012
The Association for Computing Machinery’s Special Interest Group on Design Automation has awarded its Outstanding New Faculty Award to EcoCloud’s David Atienza, of the Embedded Systems Laboratory (ESL). This marks the first time that the award has been won outside the USA.
The Outstanding New Faculty Award recognizes “a junior faculty member early in her or his academic career who demonstrates outstanding potential as an educator and researcher in the field of electronic design automation”.
Posted on November 28, 2012
The economic and environmental benefits are considerable.
Databases have revolutionized the business world. Every bottle of shampoo you buy, every purchase you make, is one more data point sent to your bank’s and your supermarket’s servers. This enormous quantity of detailed information allows merchants to optimize their inventories and displays, and bankers to optimize the flow of money. Gigantic farms of servers are deployed in an effort to keep up with this breakneck pace of information storage and transfer. Researchers in EPFL’s DATA Laboratory have developed DBToaster, a system that speeds up these operations by a factor of 100 to 10,000. The latest version has just been made available on www.dbtoaster.org.
“Ten years ago, CERN set up one of the world’s largest databases,” explains EPFL professor Christoph Koch, DBToaster’s creator. “Today, your average supermarket has a bigger system.” This inflation has escalated dramatically, to the point that optimizing databases has become an environmental issue. In the U.S., electricity use by server farms is growing exponentially, currently representing 2% of total electricity consumption.
Avoiding data jams by accelerating the flow of data
In a classic database, data are handled by a series of successive operators. For example, say a bank wants a list of all its clients who live in Zurich and have a balance of at least 5,000 francs. The user queries the database by selecting certain criteria, and this request is translated into a series of mathematical operations. Because every banking transaction results in a separate database entry, the amount of information that must be sorted is phenomenal: the first operator has to search through billions of entries. The resulting data set is then filtered by the second operator, and so on, until the list is reduced to the desired clients.
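The operator pipeline described above can be sketched in a few lines of Python. This is a deliberately simplified toy model (the client records and names are invented for illustration; a real database engine compiles queries into far more elaborate physical plans):

```python
# Toy model of a query plan as a chain of operators.
# Each operator consumes the full output of the previous one,
# which is why large intermediate results become a bottleneck.

clients = [
    {"name": "A. Muller", "city": "Zurich", "balance": 7200},
    {"name": "B. Rey",    "city": "Geneva", "balance": 9100},
    {"name": "C. Weber",  "city": "Zurich", "balance": 3100},
]

# Operator 1: select clients living in Zurich.
in_zurich = [c for c in clients if c["city"] == "Zurich"]

# Operator 2: of those, keep balances of at least 5,000 francs.
result = [c for c in in_zurich if c["balance"] >= 5000]

print([c["name"] for c in result])  # ['A. Muller']
```

On billions of entries, the intermediate list produced by the first operator is exactly the kind of result that overflows RAM and spills to disk.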
The data are so vast that often the server’s RAM is not large enough to temporarily store initial results, causing a data jam. The server must temporarily store intermediate results on the hard disk before sending them on to the next operator. This slows things down considerably, because accessing the hard disk is 10,000 times slower than accessing RAM. It also requires much more electricity.
The EPFL scientists were able to get their system to compile successive operators into one single operator. This extremely complex transformation makes it possible to avoid storing huge intermediate results. In doing so, DBToaster efficiently prevents data jams.
Keeping queries in memory so you don’t have to reinvent the wheel
DBToaster has a second innovation, as well. The researchers took into account the fact that queries are often repetitive. “In general, the same operator is used many times within brief periods of time,” explains Koch. Rather than having to recalculate everything each time, the system keeps the preceding result in memory and merges it with new entries. “The big innovation with DBToaster is its ability to generate efficient code that manages to figure out how previous queries should be changed in order to be updated.” In this way, only recently entered data has to be queried, rather than billions of entries.
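The incremental idea can be illustrated with a running aggregate. This is a minimal sketch under stated assumptions, not DBToaster’s actual generated code (the class and query here are invented for illustration): instead of re-scanning all entries, the stored result is merged with each new transaction’s delta:

```python
# Minimal sketch of incremental view maintenance:
# maintain "total balance per city" without ever re-scanning history.
from collections import defaultdict

class BalanceByCity:
    def __init__(self):
        self.view = defaultdict(int)  # materialized query result

    def on_transaction(self, city, amount):
        # Apply only the delta: O(1) work per new entry, instead of
        # O(n) to recompute the aggregate over all n transactions.
        self.view[city] += amount

v = BalanceByCity()
v.on_transaction("Zurich", 5000)
v.on_transaction("Geneva", 2000)
v.on_transaction("Zurich", -1500)
print(v.view["Zurich"])  # 3500
```

The query result is always up to date, yet each new entry costs a constant amount of work, which is the essence of the 100x to 10,000x speedups the article describes.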
DBToaster is available online for no charge. Financial institutions, in particular, are enthusiastic about the system. According to Koch, DBToaster “enables analytical processing in real time, which financial institutions need to perform automated trading or to enforce regulatory compliance – for instance to detect patterns of money laundering in their streams of financial transactions.” But the benefits go further than this. As data processing consumes escalating amounts of power, DBToaster is a solution that can be easily deployed on existing servers to reduce their electricity consumption and mitigate their impact on the environment.
Posted on November 27, 2012
Cloud computing has emerged as a dominant computing platform providing billions of users world-wide with online services. The software applications powering these services, commonly referred to as scale-out workloads and which include web search, social networking and business analytics, tend to be characterized by massive working sets, high degrees of parallelism, and real-time constraints – features that set them apart from desktop, parallel and traditional commercial server applications. To support the growing popularity and continued expansion of cloud services, providers must overcome the physical space and power constraints that limit the growth of data centers. Problematically, the predominant processor micro-architecture is inherently inefficient for running these demanding scale-out workloads, which results in low compute density and poor trade-offs between performance and energy. Continuing the current trends for data production and analysis will further exacerbate these inefficiencies.
Improving the cloud’s computational resources while operating within physical constraints requires server efficiency to be optimized so that server hardware meets the needs of scale-out workloads. To this end, the team of Babak Falsafi (a Professor in the School of Computer and Communication Sciences at EPFL, director of the EcoCloud research center, founded to innovate future energy-efficient and environmentally friendly cloud technologies, and a HiPEAC member) presented “Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware”, which received the Best Paper Award at ASPLOS 2012.
In this paper, the EPFL team explained how they used performance counters on modern servers to assess how well today’s predominant processor micro-architecture is aligned with the requirements of scale-out applications. What they discovered is that there is a significant mismatch between the two, stemming from inefficiencies in the instruction supply and execution logic as well as in memory system organization. Their research shows that efficiently executing scale-out workloads requires optimizing the instruction-fetch path for multi-megabyte instruction working sets, reducing core complexity, and shrinking the capacity of on-die caches to reduce area and power overheads. The authors also introduced CloudSuite, a benchmark suite of emerging scale-out workloads that is expected to benefit the broader research community.
The insights gleaned as part of the evaluation are now driving the team to develop server processors tuned to the demands of scale-out workloads. The team has recently proposed a processor organization that unlike current industrial chip design trends does away with power-hungry cores and large on-die caches and networks to free area and power for a large number of simple cores built around a streamlined memory hierarchy. Not only do these improvements lead to greater performance and efficiency at the level of each processor chip, they also enable significant cost and power savings at the level of an entire data center.
This work was partially funded by the EuroCloud Server Project under the European Commission’s FP7 Computing Systems Program and is deemed a European “flagship” project, led by major research centers and industrial partners such as ARM, IMEC, Nokia and the University of Cyprus. Running from January 2010 until December 2012, EuroCloud’s partners are focused on increasing server chip-level power efficiency by 10x. Dr. Max Lemke, Deputy Head of Unit for Embedded Systems and Control in the Directorate General Information Society and Media of the European Commission, referred to the project to illustrate how the main goal of research in computing systems is getting energy-efficient and low-cost computing technologies into the full spectrum of devices and systems, from mobile and embedded systems to data centers and supercomputers.
“Computing is a key enabler for Europe’s competitiveness in engineering, which is a key driver for the European economy,” Dr. Lemke said in his keynote address at the recent HiPEAC 2012 Conference. “Europe has to leverage its unique expertise in embedded and mobile computing systems to innovate in energy efficient and low-cost computing technologies,” he added.
Babak joined the School of Computer and Communication Sciences at EPFL in 2008. Prior to that, he was a full Professor of Electrical & Computer Engineering and Computer Science at Carnegie Mellon, where he led the Microarchitecture theme of the FCRP Center on Circuit and System Solutions, a multi-university consortium of over 50 academics investigating digital platform designs for the end of the CMOS roadmap. He is the founding director of the EcoCloud research center pioneering future energy-efficient and environmentally friendly cloud technologies at EPFL.
His research targets technology-scalable datacenters, design for dark silicon, architectural support for software and hardware robustness, and analytic and simulation tools for computer system performance evaluation. He is a recipient of an NSF CAREER award in 2000, IBM Faculty Partnership Awards in 2001, 2003 and 2004, and an Alfred P. Sloan Research Fellowship in 2004. He has been a member of ISCA Hall of Fame since 2003 and the Micro Hall of Fame since 2011 for contributions to the flagship IEEE/ACM conferences in computer architecture and microarchitecture respectively. He is a fellow of IEEE.
Posted on November 26, 2012
Born in 2008, the concept of "Big Data" is now a full-blown trend. There is no precise definition: the term encompasses both the challenges and the technologies involved in processing the gigantic volume of information generated by IT. The phenomenon is hard to quantify. According to The Economist, some 1,200 exabytes (billions of gigabytes) now circulate over computer networks, up from 150 exabytes in 2005. The energy consumption of data centers is also growing exponentially. Today, the carbon footprint of computing centers is estimated to equal that of international aviation. In 2010, information technologies accounted for 1.5% of the energy consumed in the United States, worth 4.5 billion dollars.
The economic viability of the IT sector now depends on its ability to rein in its energy needs, comments Babak Falsafi, professor at the Parallel Systems Architecture Laboratory and director of the EcoCloud research center, both at EPFL. To curb the current rate of growth, the efficiency of processors and memories must be improved by a factor of 100 within ten years.
That is the goal EcoCloud has set itself. Founded in May 2011, the consortium brings together 13 laboratories of the Ecole polytechnique fédérale de Lausanne (EPFL). Specializing in cloud computing and Big Data management, the 14 computer scientists involved collaborate on three key research axes: data, energy, and intelligence.
The paradox is that the cloud increases the circulation of data while also representing the IT sector's best opportunity for savings, observes Babak Falsafi. Until now, the industry has focused on reducing chip voltage to keep energy consumption from growing at the same rate as processor computing power. We are, however, reaching the limits of that approach and have no choice left but to pool resources to optimize the performance of processing centers.
This new development model requires rethinking the entire architecture and connectivity of data centers. It calls for a holistic approach and coordinated work spanning software as well as hardware, servers, and cooling systems.
Among the promising avenues pursued by EcoCloud is a project aimed at optimizing the operation of machines and the routing of data. Another research group is working on regulating processor temperature according to the scheduled workload.
Other laboratories are working on new ways of aggregating information, or on fully automated resolution of software bugs. The common denominator of this research: designing highly specialized tools that perform targeted tasks more efficiently and with less energy.
The three-dimensional chip
EPFL's Parallel Systems Architecture Laboratory is collaborating on the development of a new generation of electronic chip.
Until now, processors were designed to perform mathematical operations, Babak Falsafi explains. With the advent of the cloud, their performance is no longer measured solely by computing power, but also by their ability to access information held on remote servers. Mobile phones already use technology of this kind. Transferring it to the computing sector, however, requires optimizing interconnectivity and the procedures for processing large volumes of data. Whereas traditional processors use a two-dimensional architecture with computing units laid out side by side, EPFL researchers have designed a three-dimensional chip with stacked cores.
Based on Through Silicon Via technology, this new vertical architecture multiplies the connections and speeds up data processing by a factor of at least 10, the professor notes.
Improved transactions
For its part, the Data-Intensive Applications and Systems Laboratory (DIAS) is developing technologies designed to boost computer performance and ease the handling of Big Data. "Before the arrival of cloud computing, it was enough to have sufficient memory to read and organize a given amount of information," observes Anastasia Ailamaki, co-founder of EcoCloud and director of DIAS. "Today, systems must fetch data from remote storage units and combine it to produce a fast and reliable answer. Our concept helps increase the efficiency of systems and quantify the resources they need to operate." The innovation applies in particular to the management of financial transactions. In a traditional architecture, orders placed from multiple platforms cannot be executed simultaneously: the information is first filtered by software responsible for verifying and validating each change of value in turn. To get around this central lock, which contributes to slowing down traffic, DIAS has slipped an immaterial layer into its software architecture. "We do not perform a physical structuring, but a logical organization of the information," the director explains. "Data relating to the same object is grouped into modules and transferred in the form of a virtual image. The processors involved can then synchronize their operations and deliver near-instant results. The technology guarantees the reliability of transactions while assessing the computing power each operation requires. Companies thus gain the ability to plan the hardware resources needed to carry out the tasks they have set for themselves."
The work on this highly innovative architecture was published last year, and the EPFL laboratory has already built a convincing prototype. According to its instigator, the product is now ready for the market. "Deploying our solution, however, would require rethinking IT infrastructures in their entirety. The faster implementation path would be to use our research to build a new core banking system," Anastasia Ailamaki adds. For now, no candidate has come forward. The laboratory is currently working on a data-recovery tool. It is also refining its algorithms to ensure compatibility with the new phase-change memories. Better known by the abbreviation PCM, for Phase Change Memory, these storage units could prove up to 10 times faster than flash memory and 1,000 times more efficient than traditional mechanical disks.
Recently presented at prestigious IT conferences in Hanover and Athens, the DIAS laboratory's innovations have enthused experts and also secured valuable financial backing. Oracle has committed to funding a research program on the use of multicore processors, while IBM will subsidize the work on new information-storage media. The renewable contracts amount to a total of some 180,000 francs per year.
CS's ambitions
Like most academic laboratories, the members of the EcoCloud network sell their services to companies. They also benefit from a common pool allocated to the consortium by Swiss and European research funds and by industrial partners: Credit Suisse, HP, IBM, Microsoft, Nokia, Oracle, Swisscom, and Intel fund the project to the tune of 1.7 million francs per year. The first bank to set up in EPFL's Quartier de l'Innovation at the beginning of 2011, Credit Suisse makes no secret of its ambitions in IT development and is counting on cooperation to accelerate its innovation. "The research conducted within EcoCloud fits the logic of the projects our institution has launched," comments Hans Martin Graf, director of the CS IT Development Center. "We have been working on decentralizing our infrastructure for nearly a decade, while virtualization processes still hold many unknowns. Pooling the skills of EPFL's various specialists is all the more promising because it offers a global approach to the full range of problems raised by cloud computing."
A reservoir of fresh talent
Credit Suisse has launched a program with EPFL's Integrated Systems Laboratory to optimize the energy use of its data centers. The objective: select the hardware components and systems that are the most efficient and least energy-hungry (see also Green IT in BAS, September 2011). The bank is now negotiating new avenues of collaboration, notably in the field of security. New concrete projects should be launched in early 2012.
In the meantime, the company is pursuing its own research in document management and, in particular, portfolio-management software. With an investment of some 10 million francs per year, the IT Development Center intends to exploit the synergies offered by its location at the heart of EPFL and to assert its role as a training employer. Already some sixty staff strong, the new CS entity plans to recruit around fifteen young talents by the end of the year.
It is a strategy bound to replenish the bank's reservoir of talent, and one that will no doubt help shield it from the predicted shortage of specialists in information technology. CS's IT division currently employs some 17,000 people worldwide, including 6,000 in Switzerland.
Posted on November 26, 2012
CloudSuite is a benchmark suite for emerging scale-out applications. The first release consists of six applications that have been selected based on their popularity in today’s datacenters. The benchmarks are based on real-world software stacks and represent real-world setups. Please visit the CloudSuite web page for further information and instructions on how to download the suite.
Posted on November 25, 2012
The emergence of global-scale online services has galvanized scale-out software, characterized by splitting vast datasets and massive computation across many independent servers. In a paper appearing in ASPLOS 2012, Profs. Ailamaki and Falsafi and their teams identify the inefficiencies in modern server processors and memory systems when running emerging scale-out workloads (e.g., analytics, data serving, debugging as a service, video streaming and web) and advocate server chip architectures and hardware mechanisms that maximize silicon efficiency for these workloads. For more information, see Clearing the Clouds: A Study of Emerging Workloads on Modern Hardware by Ferdman et al., available as an EPFL Tech Report.
Posted on November 24, 2012
Intelligent feedback-control algorithms are emerging as instrumental in controlling temperature in enterprise servers. Until just a few years ago, servers relied only on trivial control actuators based on high-temperature thresholds for asset protection. This award is given to David for the introduction of an intelligent global feedback-control algorithm that combines multiple local controllers, with guaranteed operational stability, to improve thermal characteristics and reduce energy in Oracle's future servers. Congratulations, David!
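To make the idea concrete, the sketch below shows one simple way a global controller can combine per-core local controllers: each local proportional-integral (PI) loop suggests a frequency-scaling factor for its own core, and the global controller applies the most restrictive suggestion so that no core exceeds its temperature setpoint. This is only an illustrative toy under assumed gains and setpoints, not the award-winning design, which additionally provides formal stability guarantees.

```python
class LocalPIController:
    """Tracks one core's temperature and suggests a frequency scale in [0, 1]."""

    def __init__(self, setpoint_c, kp=0.05, ki=0.01):
        self.setpoint_c = setpoint_c  # target temperature for this core (deg C)
        self.kp = kp                  # proportional gain (illustrative value)
        self.ki = ki                  # integral gain (illustrative value)
        self.integral = 0.0           # accumulated error

    def update(self, temp_c):
        error = temp_c - self.setpoint_c      # positive when the core is too hot
        self.integral += error
        correction = self.kp * error + self.ki * self.integral
        # Map the correction to a frequency-scaling factor, clamped to [0, 1].
        return max(0.0, min(1.0, 1.0 - correction))


class GlobalController:
    """Combines local suggestions conservatively: the hottest core dictates
    the shared actuator setting, keeping every core below its setpoint."""

    def __init__(self, setpoints_c):
        self.local_ctrls = [LocalPIController(sp) for sp in setpoints_c]

    def update(self, temps_c):
        scales = [c.update(t) for c, t in zip(self.local_ctrls, temps_c)]
        return min(scales)  # most restrictive local decision wins


# One control step for a hypothetical four-core chip with 70 C setpoints;
# the second core runs above its setpoint, so it drives the global decision.
ctrl = GlobalController(setpoints_c=[70.0, 70.0, 70.0, 70.0])
scale = ctrl.update([65.0, 72.0, 68.0, 71.0])
```

Taking the minimum of the local outputs is the simplest combination rule that never violates any single core's constraint; the published controller coordinates the local loops in a more sophisticated way to also prove stability of the closed loop.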
Posted on November 23, 2012
The project’s goal is to enable the Swiss public to trust and use cloud storage infrastructure through the design and development of innovative technology that addresses the most crucial shortcomings of the current state-of-the-art.
Cloud computing is our new world, in which everything is a service and users subscribe without knowing where the disk that holds their data is or which processor performs their computation. Although most people use cloud services, many are still reluctant to entrust the cloud with their most private data. The reasons are slow and unpredictable cloud storage, limited privacy and security, and questionable cloud properties. The team aspires to remedy the very roots of these problems by developing innovative technology that improves the performance and predictability, as well as the security and verifiability, of cloud services.
The proposal’s PI is Anastasia Ailamaki (EPFL). CoPIs are George Candea (EPFL), Arjen Lenstra (EPFL), Fernando Pedone (Lugano), Pascal Felber (Neuchatel), and Srdjan Capkun (ETH).
Posted on November 22, 2012
In a recent paper in IEEE Micro special issue on Big Chips, July 2011, EcoCloud researchers project that server chips will not scale beyond a few tens to low hundreds of cores, and an increasing fraction of the chip in future technologies will be dark silicon that one cannot afford to power. Specialized on-chip architectures can leverage the underutilized die area to overcome the initial power barrier, delivering significantly higher performance for the same bandwidth and power envelopes.
For more information, see Toward Dark Silicon in Servers by Hardavellas et al., IEEE Micro, July 2011.