Virtual Institute – High Productivity Supercomputing

Keynotes - 10th Anniversary

Improving Scientific Software Productivity and Sustainability - The IDEAS Approach

9:30 - 10:30
Anshu Dubey
Argonne National Laboratory

Abstract

Computational science and engineering communities develop complex applications to solve scientific and engineering challenges. However, many science teams struggle with software productivity and sustainability, partly due to a lack of training and partly due to a lack of resources within the team. The DOE-funded IDEAS project (www.ideas-productivity.org) is working to help application teams improve developer productivity and software sustainability while ensuring continued scientific success. Members of the project serve as catalysts for engaging the broader community to curate, create, customize, and disseminate software methodologies, processes, and tools that lead to improved scientific software. For example, the project website provides concise characterizations of and best practices for topics such as testing, documentation, version control, and performance portability. We also conduct webinars and tutorials on relevant topics to engage with the computational science community and to help change the culture of scientific software development. In this presentation I will describe the challenges of improving software productivity in computational science projects, especially those that involve extreme-scale computing, and our efforts to mitigate these challenges through information dissemination and engagement with the relevant communities.

About the speaker

Anshu Dubey received her Ph.D. in computer science from Old Dominion University in 1993. She then joined the University of Chicago Astronomy & Astrophysics Department for postdoctoral work. In 2001 she joined the ASC/Flash Center, where she was computer science/applications group leader from 2003 and associate director from 2008. From 2013 to 2015 she was on the staff at Lawrence Berkeley National Laboratory, where she served as work lead and computer systems engineer in the Applied Numerical Algorithms Group. In 2015 she joined the Mathematics and Computer Science Division at Argonne National Laboratory as a computer scientist, and in the same year she was named a senior fellow of the Computation Institute. She has two decades of experience in the design, development, and management of scientific software for simulating multiphysics phenomena and has earned wide recognition for her contributions.



True convergence of HPC and Big Data/AI towards AI-Exaflops

11:30 - 12:30
Satoshi Matsuoka
Tokyo Institute of Technology

Abstract

Japanese investment in public, open-science HPC infrastructure for research and academia has a long history. There is now a government-level focus on Big Data / AI, with three national AI centers established by three Ministries at their national labs: AIST's AIRC (AI Research Center), Riken's AIP (AI Project), and NICT's brain-inspired AI research center. In particular, at AIST-AIRC, as a Fellow I lead a project to build one of the world's largest BD/AI-focused open public computing infrastructures, ABCI (AI Bridging Cloud Infrastructure). The machine's performance is slated to be well above 130 Petaflops for machine learning, with accelerated I/O and other properties desirable for BD/AI workloads. ABCI's architecture owes much to our next-generation TSUBAME3 supercomputer at Tokyo Tech, to be commissioned in August 2017, which in combination with its predecessor TSUBAME2 will deliver 66 Petaflops of performance for AI. Its design features and metrics were determined by earlier research in our JST CREST EBD (Extreme Big Data) project, in which we seek true convergence of HPC and BD/AI, reaching hundreds of Petaflops in early 2018 and approaching an Exaflop of AI performance shortly thereafter. In the talk I will cover these research results as well as the hardware and software architectures of TSUBAME3 and ABCI, and discuss the requirements for software tools in such a converged architecture.

About the speaker

Satoshi Matsuoka has been a Full Professor at the Global Scientific Information and Computing Center (GSIC), a Japanese national supercomputing center hosted by the Tokyo Institute of Technology, and since 2016 a Fellow at the AI Research Center (AIRC) of AIST, the largest national lab in Japan. He received his Ph.D. from the University of Tokyo in 1993. He leads the TSUBAME series of supercomputers, including TSUBAME2.0, which was the first supercomputer in Japan to exceed Petaflop performance and ranked 4th in the world on the Top500 in November 2010, and the more recent TSUBAME-KFC, which ranked #1 in the world for power efficiency on both the Green500 and Green Graph 500 lists in November 2013. He is also currently leading several major supercomputing research projects, including MEXT Green Supercomputing, JSPS Billion-Scale Supercomputer Resilience, and JST-CREST Extreme Big Data. He has written over 500 articles according to Google Scholar and has chaired numerous ACM/IEEE conferences, most recently serving as overall Technical Program Chair of the ACM/IEEE Supercomputing Conference (SC13) in 2013. He is a fellow of the ACM and of the European ISC, and has won many awards, including the JSPS Prize from the Japan Society for the Promotion of Science in 2006, awarded by His Imperial Highness Prince Akishino; the ACM Gordon Bell Prize in 2011; the Commendation for Science and Technology by the Minister of Education, Culture, Sports, Science and Technology in 2012; and most recently the 2014 IEEE-CS Sidney Fernbach Memorial Award, one of the highest honors in the field of HPC.