Virtual Institute — High Productivity Supercomputing

5th VI-HPS Tuning Workshop (TUM, Germany)

Date

March 8-10, 2010

Location

TUM, LRZ, RZG, Garching/München

Photographs from workshop

32 participants from Belgium, France and Germany


Goals

This workshop will:

  • give an overview of the VI-HPS programming tools suite
  • explain the functionality of individual tools, and how to use them effectively
  • offer hands-on experience and expert assistance using the tools

The workshop will be held in English and run from 09:00 to 17:30 each day, with breaks for lunch and refreshments. Additional presentations may be scheduled on demand.

Schedule

Day 1 Monday 8th March

08:30 (registration & set-up of notebook computers)
09:00 Welcome & Introduction to VI-HPS [Wylie, JSC]
  • Building and running the NPB-MPI-BT example code
  • PAPI library & utilities
09:30 Introduction to parallel application engineering [Gerndt, TUM]
10:15 Marmot correctness checking tool [Hilbrich, TUD-ZIH/GWT]
  • Marmot hands-on tutorial exercises
10:45 (break)
11:15 Scalasca performance analysis toolset [Wylie/Geimer, JSC]
  • Scalasca hands-on tutorial exercises
  • Scalasca case studies
12:30 (lunch)
13:30 Vampir performance analysis toolset [Doleschal/William, TUD-ZIH/GWT]
  • Vampir hands-on tutorial exercises
14:45 (break)
15:15 Periscope automatic performance analysis tool [Oleynik/Petkov, TUM]
  • Periscope hands-on tutorial exercises
16:30 Review of day and schedule for rest of workshop; individual preparation of participants' own code(s) and further exercises
17:30 (adjourn)

Day 2 Tuesday 9th March

09:00 Hands-on coaching with participants' own code(s)
  • Recap and review of local installation
12:30 (lunch)
13:30 Hands-on coaching with participants' own code(s) and/or additional presentations (scheduled on demand)
  • Advanced use of the Scalasca toolset
  • Advanced use of Marmot & Vampir
17:00 Review of day and schedule for tomorrow
17:30 (adjourn)
19:30 Social dinner at the Swagat Indian restaurant, sponsored by ParTec

Day 3 Wednesday 10th March

09:00 Coaching to apply tools to check, analyze & tune participants' own code(s); additional presentations covering in-depth and advanced tool use (scheduled according to demand)
12:00 (lunch)
13:00 Review of workshop and participants' experiences with the tools
15:00 (adjourn, or continue with work to 17:30)

Classroom capacity is limited, so priority will be given to applicants whose codes already run on the workshop computer systems and to those bringing codes from similar systems to work on. Participants are therefore encouraged to prepare their own MPI, OpenMP and hybrid OpenMP/MPI parallel application code(s) for analysis.
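
For orientation, here is a minimal hybrid MPI/OpenMP sketch of the kind of code the tools target; it is purely illustrative and not one of the workshop codes. Each MPI process opens an OpenMP parallel region and reports its rank and thread IDs.

    #include <stdio.h>
    #include <mpi.h>
    #include <omp.h>

    /* Minimal hybrid MPI/OpenMP example: each MPI process spawns an
     * OpenMP team and reports its rank and thread IDs. */
    int main(int argc, char **argv)
    {
        int provided, rank, size;

        /* Request thread support adequate for OpenMP regions between MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED)
            printf("warning: MPI library provides only thread level %d\n", provided);

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        #pragma omp parallel
        {
            printf("rank %d of %d, thread %d of %d\n",
                   rank, size, omp_get_thread_num(), omp_get_num_threads());
        }

        MPI_Finalize();
        return 0;
    }

Such a code would typically be built with an MPI compiler wrapper (e.g. mpicc) with OpenMP enabled, and run with one or a few MPI processes per node and several OpenMP threads per process.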

Participants are expected to use their own notebook computers with a Live-DVD for the first day's hands-on tutorial exercises. Alternatives may be arranged for those without access to an x86-compatible notebook computer with a DVD drive, provided the organizers are informed in advance.

VI-HPS Tools

  • MARMOT is a free correctness checking tool for MPI programs developed by TUD-ZIH and HLRS.
  • PAPI is a free library interfacing to hardware performance counters developed by UTK-ICL, used by Periscope, Scalasca, VampirTrace, and multiple other tools (a brief usage sketch follows after this list).
  • Periscope is a prototype automatic performance analysis tool using a distributed online search for performance bottlenecks being developed by TUM.
  • Scalasca is an open-source toolset developed by JSC that can be used to analyze the performance behaviour of parallel applications and automatically identify inefficiencies.
  • Vampir is a commercial framework and graphical analysis tool developed by TUD-ZIH to display and analyze trace files.
  • VampirTrace is an open-source library for generating event trace files which can be analyzed and visualized by Vampir.
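
As an illustration of the hardware-counter access that PAPI provides, here is a minimal sketch using PAPI's classic high-level counter interface (PAPI_start_counters / PAPI_stop_counters, as found in PAPI releases of that period). PAPI_TOT_CYC and PAPI_FP_OPS are preset events whose availability depends on the platform, and work() is only a placeholder kernel.

    #include <stdio.h>
    #include <stdlib.h>
    #include <papi.h>

    /* Placeholder kernel to be measured. */
    static double work(int n)
    {
        double s = 0.0;
        for (int i = 1; i <= n; i++)
            s += 1.0 / (double)i;
        return s;
    }

    int main(void)
    {
        int events[2] = { PAPI_TOT_CYC, PAPI_FP_OPS };  /* cycles and FP operations */
        long long counts[2];

        /* Initialize the library and check that header and library versions match. */
        if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
            fprintf(stderr, "PAPI initialisation failed\n");
            return EXIT_FAILURE;
        }

        /* Count events around the region of interest. */
        if (PAPI_start_counters(events, 2) != PAPI_OK) {
            fprintf(stderr, "PAPI_start_counters failed\n");
            return EXIT_FAILURE;
        }
        double result = work(10000000);
        if (PAPI_stop_counters(counts, 2) != PAPI_OK) {
            fprintf(stderr, "PAPI_stop_counters failed\n");
            return EXIT_FAILURE;
        }

        printf("result=%g cycles=%lld fp_ops=%lld\n", result, counts[0], counts[1]);
        return EXIT_SUCCESS;
    }

Building such a program requires the PAPI headers and linking against the PAPI library (typically -lpapi).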

Hardware and Software Platforms

The VI-HPS tools support, and are installed on, a wide variety of HPC platforms, including:

  • SGI Altix 4700 (LRZ HLRB-II): Itanium2 dual-core processors, SGI Linux, SGI MPT, Intel compilers
  • SGI Altix ICE (LRZ ICE1): Xeon quad-core processors, SGI Linux, SGI MPT (and MVAPICH2 & Intel MPI), Intel/GNU/PGI compilers
  • IBM p5-575 cluster (RZG VIP): Power6 dual-core processors, AIX OS, IBM POE MPI, IBM XL compilers
  • IBM BlueGene/P (RZG Genius): PowerPC 450 quad-core processors, BG-Linux compute kernel, IBM BG-MPI library, IBM BG-XL compilers
  • IBM BlueGene/P (JSC Jugene): PowerPC 450 quad-core processors, BG-Linux compute kernel, IBM BG-MPI library, IBM BG-XL compilers
  • Sun/Bull Nehalem cluster (JSC Juropa/HPC-FF): Intel Xeon X5570 quad-core processors, SLES Linux, ParaStation MPI, Intel compilers
  • IBM p5-575 cluster (JSC JUMP): Power6 dual-core processors, AIX OS, IBM POE MPI, IBM XL compilers
  • SGI Altix 4700 (ZIH): Itanium2 dual-core processors, SGI Linux, SGI MPT, Intel compilers
  • SGI Altix ICE (HLRN): Xeon quad-core processors, SGI Linux, SGI MPT (and MVAPICH2 & Intel MPI), Intel/GNU/PGI compilers
  • Intel Xeon cluster (RWTH): Xeon quad-core processors, Scientific Linux OS, Intel MPI, Intel compilers

The local LRZ/MPG systems are expected to be the primary platforms for the workshop, since they offer improved job turnaround and local system support. Other systems where up-to-date versions of the tools are installed can also be used when preferred. Participants are expected to already have user accounts on the systems they intend to use, and should be familiar with the procedures for compiling and running parallel applications in batch queues on those systems.

Further information and registration

Registration is closed.