43rd VI-HPS Tuning Workshop (CALMIP, Toulouse, France)
This workshop has been postponed
The workshop will take place at the CALMIP Mesocentre, Espace Clément Ader, 3, rue Caroline Aigle, 31400 Toulouse, France.
This workshop, organised by VI-HPS and the CALMIP Mesocentre, will:
- give an overview of the VI-HPS programming tools suite
- explain the functionality of individual tools, and how to use them effectively
- offer hands-on experience and expert assistance using the tools
On completion, participants should be familiar with common performance analysis and diagnosis techniques and how they can be employed in practice (on a range of HPC systems). Those who prepare their own application test cases will be coached in tuning their measurement and analysis, and will receive optimisation suggestions.
Presentations and hands-on sessions are planned on the following topics:
- Setting up, welcome and introduction
- TAU performance system
- MAQAO performance analysis & optimisation
- Score-P instrumentation and measurement
- Verificarlo numerical accuracy analysis
- ... and potentially others to be added
A brief overview of the capabilities of these and associated tools is provided in the VI-HPS Tools Guide.
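As a taste of the hands-on material, a typical Score-P session replaces the compiler with the `scorep` wrapper and then runs the instrumented binary. This is only an illustrative sketch: the application name, optimisation flags, and launcher invocation below are placeholders, not taken from the workshop materials.

```shell
# Build an MPI application with Score-P instrumentation
# (the 'scorep' compiler wrapper ships with the Score-P installation).
scorep mpicc -O2 -o myapp myapp.c    # 'myapp' is a hypothetical example code

# Enable runtime profiling and choose where the experiment is written.
export SCOREP_ENABLE_PROFILING=true
export SCOREP_EXPERIMENT_DIRECTORY=scorep_myapp_run

# Run as usual; the resulting profile (profile.cubex) can be opened with CUBE.
mpirun -np 4 ./myapp
```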
The workshop will be held in English and will run from 09:00 until no later than 18:00 each day, with breaks for lunch and refreshments.
Classroom capacity is limited; priority may therefore be given to applicants with parallel codes already running on the workshop computer system (TURPAN), and to those bringing codes from similar ARM Linux cluster systems to work on. Participants are encouraged to prepare their own MPI, OpenMP and hybrid MPI+OpenMP parallel application codes for analysis. Codes using multiple GPUs via OpenACC, OpenCL or CUDA may also be analysed.
Programme in Detail (provisional) - all times given as CEST (UTC+2)
Day 1: Monday 11 September
- 17:30  Schedule for remainder of workshop

Day 2: Tuesday 12 September
- 14:00  Hands-on coaching to apply MAQAO & Verificarlo to analyze participants' own code(s)
- 16:00  Hands-on coaching to apply TAU to analyze participants' own code(s)

Day 3: Wednesday 13 September
- 11:00  Hands-on coaching to apply Score-P/CUBE to analyze participants' own code(s)
- 14:00  Hands-on coaching to apply tools to analyze participants' own code(s)

Day 4: Thursday 14 September
- 11:00  Hands-on coaching to apply tools to analyze participants' own code(s)
Hardware and Software Platforms
- Within a node, the detailed architecture is as follows: around the CPU there are 512 GB of RAM (8 × 64 GB DIMMs on independent channels), two Nvidia A100-80 GPU cards connected via PCI Express x16, two InfiniBand 200 Gb/s network cards (each also connected via PCI Express x16), 6 TB of local storage, and standard connectivity (USB, Ethernet, etc.).
- In a Turpan node, the processor is an Ampere Altra Q80-30 with 80 cores at 3 GHz, implementing the Armv8.2 architecture, with a memory transfer rate of 3200 MT/s. Computing power is 1.9 TFlop/s per socket. Each node also has 2 Nvidia A100-80 GPU accelerators, each with 6912 CUDA cores spread over 108 Streaming Multiprocessors (SMs). The peak performance of a GPU is 19.5 TFlop/s. In total, at maximum load, using 80 CPU cores and 2 GPU accelerators, a node's peak performance is 40.9 TFlop/s. In theory, with 15 nodes, Turpan has a peak of 611.85 TFlop/s.
- In terms of storage, the Turpan machine has 343 TB on mechanical disks for scratch and project storage, plus 17 TB of SSDs used as a cache to accelerate I/O. Physically, there are 60 × 8 TB mechanical disks and 11 × 3.8 TB SSDs.
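The per-node peak-performance figure above combines the CPU and GPU contributions. A small sketch of that arithmetic, using only the numbers quoted above:

```python
# Peak performance arithmetic for one Turpan node, using the figures above.
cpu_peak_tflops = 1.9    # Ampere Altra Q80-30, per socket
gpu_peak_tflops = 19.5   # Nvidia A100-80, per accelerator
gpus_per_node = 2

node_peak = cpu_peak_tflops + gpus_per_node * gpu_peak_tflops
print(f"{node_peak:.1f} TFlop/s per node")  # 40.9 TFlop/s per node
```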
The local HPC system TURPAN is the primary platform for the workshop and will be used for the hands-on exercises. Course accounts will be provided during the workshop to participants without existing accounts. Other systems where up-to-date versions of the tools are installed can also be used when preferred, though support may be limited, and participants are expected to already possess user accounts on non-local systems. Whichever systems they intend to use, participants should be familiar with the relevant procedures for compiling and running their parallel applications (via batch queues where appropriate).
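Participants unfamiliar with batch submission may want to check their workflow in advance. Assuming a SLURM scheduler, a minimal hybrid MPI+OpenMP job script might look like the sketch below; the job name, resource counts, and application name are illustrative placeholders, not taken from the TURPAN documentation.

```shell
#!/bin/bash
#SBATCH --job-name=tuning-test     # illustrative values only
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4        # 4 MPI ranks per node
#SBATCH --cpus-per-task=20         # 20 OpenMP threads each (80 cores total)
#SBATCH --time=00:30:00

# Match the OpenMP thread count to the allocated cores per rank.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK}

# Launch the (hypothetical) application under the scheduler's MPI launcher.
srun ./myapp
```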