Title: Keeping Up with the Flood of Data in Extreme-Scale Simulations
Speaker: Franck Cappello (Senior Computer Scientist, Argonne National Laboratory)
Date: May 18, 2018 (Fri)
Venue: Center for Computational Sciences, Meeting Room A
Many extreme-scale scientific simulations are already generating more data than can be communicated, stored, and analyzed.
The data flood will get even worse with exascale systems. In this talk, we will discuss two projects of the U.S. Exascale Computing Project (ECP) that address this problem in the contexts of checkpoint/restart and simulation I/O. The first project is VeloC, a multilevel checkpoint/restart environment that dramatically reduces the checkpoint time perceived by the application. With minimal code modifications (generally about 10-20 lines of code), the VeloC environment leverages the different levels of the storage hierarchy to perform fast checkpointing and asynchronous checkpoint movement. We will present early performance results of this environment, which is co-designed with ECP applications and will serve as the main checkpoint/restart library for ECP simulations. The second project, EZ, develops and improves the SZ lossy compressor for scientific data. Scientific data reduction is necessary to dramatically accelerate I/O and reduce the data storage footprint. Lossy compression appears to be the only practical and effective way to significantly reduce scientific datasets across a broad spectrum of applications. Care must be taken, however, to preserve the information that matters to scientists. We will introduce the latest version of the SZ compressor and discuss its performance on several ECP datasets.
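The key property of an error-bounded lossy compressor such as SZ is that every reconstructed value is guaranteed to stay within a user-specified absolute error bound of the original. The sketch below illustrates only that core idea with plain uniform quantization; it is not the actual SZ algorithm (which additionally uses value prediction and entropy coding), and all function names here are hypothetical.

```python
import numpy as np

def quantize(data, abs_err):
    # Conceptual sketch (not SZ itself): map each value to an integer
    # bin of width 2*abs_err. The integer codes would then be
    # entropy-coded in a real compressor.
    return np.round(data / (2.0 * abs_err)).astype(np.int64)

def dequantize(codes, abs_err):
    # Reconstruct the bin centers; each value is within abs_err
    # of the original by construction.
    return codes * (2.0 * abs_err)

rng = np.random.default_rng(0)
data = np.cumsum(rng.normal(size=1000))   # a smooth-ish synthetic signal
eps = 1e-2                                # user-chosen absolute error bound
recon = dequantize(quantize(data, eps), eps)

# The error-bound guarantee holds pointwise:
assert np.max(np.abs(recon - data)) <= eps
```

The point-wise bound is what lets scientists reason about whether the retained information is sufficient for their analysis, independent of how much the data was reduced.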
Franck Cappello is a senior computer scientist at Argonne National Laboratory and an adjunct associate professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He is the director of the Joint Laboratory for Extreme Scale Computing, which gathers seven of the leading high-performance computing institutions in the world: Argonne National Laboratory (ANL), the National Center for Supercomputing Applications (NCSA), Inria, the Barcelona Supercomputing Center (BSC), the Jülich Supercomputing Centre (JSC), RIKEN CCS, and UTK/ICL. Franck is an expert in fault tolerance for high-performance parallel computing. Recently he started investigating lossy compression for scientific datasets to respond to the pressing needs of scientists performing large-scale simulations and experiments for significant data reduction. Franck is a member of the editorial board of IEEE Transactions on Parallel and Distributed Systems and has served on the steering committees of ACM HPDC and IEEE CCGRID. He is a Fellow of the IEEE.
Coordinator: Taisuke Boku