
The 30th Anniversary Symposium of the Center for Computational Sciences at the University of Tsukuba [Abstracts]

Preparing for Post-Exascale Computing
Jeffrey S. Vetter (Oak Ridge National Laboratory) 
DOE has just deployed its first Exascale system, so now is an appropriate time to revisit our Exascale predictions from a decade ago. What predictions did we get right, wrong, or omit entirely? Likewise, it is also time to start preparing for Post-Exascale Computing. We are seeing a Cambrian explosion of new technologies during this ‘golden age of architectures’; however, we also face a major challenge in preparing software and applications for this new era. In fact, we expect that software will need to be redesigned to exploit these new capabilities and to provide some level of performance portability across these diverse architectures. In this talk, I will revisit Exascale predictions, survey these Post-Exascale technologies, and discuss their implications for both system design and software.


High Precision Physics from High Performance Computing
Norman H. Christ (Department of Physics, Columbia Univ.)
Experimental measurements at high precision or of rare processes offer information at the frontier of particle physics. Confronting these experimental results with the predictions of the Standard Model may provide the key to understanding today's outstanding mysteries of Nature, from the excess of particles over antiparticles in the Universe to the origin of the Higgs particle's mass. We will describe some of the current challenging lattice QCD calculations that attempt to make these predictions, some with a precision now below 1%, including the effects of both quantum chromodynamics and electromagnetism.


Machine learning and quantum computing in physics research and education
Morten Hjorth-Jensen (University of Oslo/Michigan State Univ.)
Advances in machine learning methods provide tools with broad applicability in scientific research, and these techniques are being applied across essentially all fields of physics.
Combined with recent advances in quantum computing and quantum technologies, there is great potential for advances that will facilitate scientific discoveries and societal applications. In this talk I will emphasize research directions that focus on solving quantum mechanical many-body problems using both machine learning algorithms and algorithms from quantum computing. I will also outline how to improve basic physics education by introducing machine learning and quantum computing algorithms into our undergraduate and graduate curricula.
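As a concrete, minimal illustration of the first of these directions, the sketch below "learns" the ground state of a toy one-dimensional harmonic oscillator by variational Monte Carlo: a parametrized trial wavefunction is sampled with the Metropolis algorithm, and its energy expectation is minimized by stochastic gradient descent. The one-parameter ansatz psi_a(x) = exp(-a*x^2) is an illustrative stand-in for the neural-network wavefunctions used in research codes; none of this code comes from the talk itself.

```python
# Minimal variational Monte Carlo (VMC) for a 1D harmonic oscillator (hbar = m = omega = 1).
# Illustrative sketch only: the one-parameter trial wavefunction psi_a(x) = exp(-a*x^2)
# plays the role that a neural-network ansatz plays in actual research codes.
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, a):
    # E_L(x) = -(1/2) psi''/psi + (1/2) x^2  for  psi = exp(-a*x^2)
    return a + x**2 * (0.5 - 2.0 * a**2)

def metropolis_sample(a, n_steps=20000, step=1.0):
    # Draw samples from |psi|^2 = exp(-2*a*x^2) with a random-walk Metropolis chain.
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1.0, 1.0)
        if rng.uniform() < np.exp(-2.0 * a * (x_new**2 - x**2)):
            x = x_new
        samples.append(x)
    return np.array(samples[n_steps // 10:])   # discard burn-in

a, lr = 0.3, 0.5                               # initial parameter, learning rate
for it in range(30):
    xs = metropolis_sample(a)
    e_loc = local_energy(xs, a)
    dlogpsi = -xs**2                           # d ln(psi) / d a
    # Standard VMC gradient estimator: 2 * Cov(E_L, d ln psi / d a)
    grad = 2.0 * (np.mean(e_loc * dlogpsi) - np.mean(e_loc) * np.mean(dlogpsi))
    a -= lr * grad                             # stochastic gradient descent on <E>

print(f"a = {a:.3f} (exact 0.5)")
```

The same loop structure carries over when the ansatz is a neural network: only the wavefunction, its log-derivative, and the parameter update rule change.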


Computics approach to the development of next-generation semiconductor science
OSHIYAMA Atsushi (Nagoya University)
Computics is a word I coined to express the interdisciplinary collaboration between materials science and computer science (http://computics-material.jp/index-e.html). I will explain our efforts to develop a real-space scheme for density-functional-theory (DFT) calculations, as well as a neural-network-assisted DFT scheme that scales linearly with the system size N. These schemes are applied to issues in semiconductor science. In this talk, I will discuss the physics and chemistry of the lattice vacancy in silicon nitride, which is speculated, but not yet clarified, to be a principal element of flash memory. The important role of the floating electron state, peculiar to sparse materials, is emphasized.
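To make the "real-space scheme" concrete, here is a minimal sketch (an illustrative assumption, not code from the talk): a single-particle Hamiltonian is represented on a uniform grid, the kinetic operator is discretized with a finite-difference stencil, and the resulting matrix is diagonalized.

```python
# Toy illustration of a real-space electronic-structure scheme: a single-particle
# Hamiltonian is represented on a uniform grid, the kinetic operator is discretized
# with a 3-point finite-difference stencil, and the matrix is diagonalized.
# A production real-space DFT code adds 3D grids, pseudopotentials, and the
# self-consistent field loop; none of that is shown here.
import numpy as np

n, L = 400, 20.0                       # grid points, box length (atomic units)
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# Kinetic energy -(1/2) d^2/dx^2 via (psi_{i-1} - 2 psi_i + psi_{i+1}) / h^2
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2.0 * h**2)

V = np.diag(0.5 * x**2)                # harmonic model potential standing in for v_eff

eigvals, eigvecs = np.linalg.eigh(T + V)
print(eigvals[:4])                     # approx. [0.5, 1.5, 2.5, 3.5] for this potential
```

The neural-network-assisted, linearly scaling variant mentioned in the abstract would replace the O(N^3) dense diagonalization used here; this sketch is meant only to convey the grid representation.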


Persistent Memory Supercomputer Pegasus for Data-driven and AI-driven Science
TATEBE Osamu (Univ. Tsukuba)
This talk introduces Pegasus, a new persistent memory supercomputer for data-driven and AI-driven science. It will be the world's first system to combine next-generation Xeon CPUs, next-generation Optane persistent memory, and H100 Tensor Core GPUs, connected by an InfiniBand NDR network. Each compute node has more than 2 TB of memory space to support large-scale data analysis in Big Data and AI. This talk also introduces research activities on a caching file system that exploits the node-local persistent memory. It provides highly scalable file data and metadata performance, and accelerates the storage performance of HPC applications and of data analysis in Python without any modification of the code.
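As a rough illustration of the caching idea (the paths and helper below are hypothetical and do not represent the actual Pegasus file-system API): on first access a file is staged from shared storage into a node-local, persistent-memory-backed directory, and subsequent reads are served from the local copy. A transparent implementation would interpose at the file-system layer so applications need no changes, as the abstract describes; this sketch makes the staging logic explicit instead.

```python
# Simplified sketch of a node-local caching layer. The paths and the helper name are
# hypothetical; this is NOT the actual Pegasus file-system API. On a cache miss the
# file is staged from shared storage into a node-local, persistent-memory-backed
# directory; subsequent opens are served from the local copy.
import os
import shutil

SHARED_ROOT = "/shared/dataset"   # assumed shared parallel file system mount
PMEM_CACHE = "/pmem/cache"        # assumed node-local persistent-memory mount

def cached_open(relpath, mode="rb"):
    local = os.path.join(PMEM_CACHE, relpath)
    if not os.path.exists(local):                        # cache miss: stage in once
        os.makedirs(os.path.dirname(local), exist_ok=True)
        shutil.copy(os.path.join(SHARED_ROOT, relpath), local)
    return open(local, mode)                             # hit: fast node-local read

# Hypothetical usage: analysis code swaps open() for cached_open() and is
# otherwise unchanged.
# with cached_open("run42/particles.bin") as f:
#     data = f.read()
```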


Simulating the Evolution of the Universe and the Emergence of its Non-linear, Multi-scale, Interconnected Structures
Andreas Burkert (Ludwig-Maximilians Universität München)
Recent observations and numerical simulations have revolutionized our understanding of the evolution of the Universe. We now know that all structures in the Universe are intimately coupled by a universal cosmic flow of matter that starts in the cosmic web and ends in protoplanetary disks and planets. Dark matter dominates the cosmic web, the largest structure in the Universe, with dimensions of millions of light years. The cosmic web, in turn, results from tiny quantum fluctuations within the Big Bang. Galaxies form at its nodes and are fed by gas inflows from the web. This inflow regulates galaxy evolution and generates a new web-like structure, the filamentary molecular interstellar medium. In these filaments, dense cores of molecular gas form and condense into stars. Gas flows from the filaments penetrate deep into the collapsing cores and feed gas directly into protoplanetary disks, triggering planet formation. In my talk I will discuss progress and open puzzles in understanding our multi-scale, interconnected Universe.


Using and developing bioinformatics in deep-level phylogenomic reconstructions
Matthew Brown (Mississippi State University)
Recent advances in both sequencing technologies and sequence library production have revolutionized the fields of genomics and transcriptomics. With these new technologies, we are now able to examine the expression profiles of cell types at different developmental stages. We are also able to use single-cell RNAseq to robustly examine the evolutionary positions of organisms that are rare and/or difficult to culture. Here I detail the methodologies, pitfalls, costs, and benefits associated with the production of such data. I also provide several case studies of the use of this technology for both types of experiments. We apply these ultra-low-input RNAseq methods to phylogenomically examine the deep relationships of many protistan supergroups using a new bioinformatic tool called PhyloFisher, which will be explained. Additionally, with these methods, we examine developmental pathways using expression profiling of discrete developmental stages in the life cycles of aggregatively multicellular amoebae and of sporocarpic amoebae that make fruiting bodies individually. Using time-lapse microscopy together with single/few-cell transcriptomic methods, we are beginning to unravel the developmental programs of these disparate taxa, and to examine whether these organisms use underlying homologous mechanisms.


Developing Climate Resilient Cities: From Heat Islands to Digital Twins
Dev Niyogi (University of Texas at Austin, USA)
Cities are complex systems that house more than half of humanity and are also major sources of greenhouse gas emissions. Additionally, cities are disproportionately impacted by climatic extremes. In this changing climate, how should we design future cities that are resilient to climatic shocks, and how can we use computational approaches to aid that process? Using examples ranging from urban heat island mapping to rainfall extremes, this presentation will lay the foundation for the development of urban digital twins that can aid knowledge co-production and bridge urban climate with urban planning and engineering solutions for equitable socioeconomic outcomes.


Data Science for the Study of History: From Statistics to Machine Learning
Chuan XIAO (Osaka University)
Nowadays, data science is an indispensable research methodology for a variety of research fields. While computational methods have been applied to the natural sciences for centuries, interest in applying data science to the social sciences has grown rapidly in the last few decades, mainly under the names of cliodynamics, quantitative history, and digital history. This talk focuses on the case of using data science in the study of history. I first review the evolution of data science methods in historical research, from early attempts employing preliminary statistical approaches, through the era of mathematical modeling, to recent advances featuring machine learning technologies. Then, I outline the challenges and future directions in this transdisciplinary area. In particular, the opportunities of using knowledge bases, text mining, and natural language processing techniques will be discussed.