Virtual Institute — High Productivity Supercomputing


Numerous VI-HPS tutorials and workshops have already taken place on four continents, and further events are planned: see Training for details. These feature hands-on exercises on local HPC systems or using a provided Linux LiveISO (sometimes on DVD or USB memory stick) with a typical HPC development environment for MPI and OpenMP containing the VI-HPS tools.


Because time and network bandwidth are typically limited during tutorials, participants who intend to run a virtual machine on their notebook computers should install it and download the OVA archive (or ISO image) in advance.
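It is also worth verifying before the event that the download completed intact. The following is a minimal sketch of checksum verification with GNU coreutils; the file name `vihps.ova` is a placeholder, and since the actual download URL and published checksum are distributed with each tutorial announcement, a locally created stand-in file is used here:

```shell
# The real archive would be fetched in advance, e.g.:
#   wget https://example.org/vihps.ova        # hypothetical URL
# Here a small placeholder file stands in for the downloaded archive.
echo "placeholder OVA contents" > vihps.ova

# Record the expected checksum (normally published alongside the download).
sha256sum vihps.ova > vihps.ova.sha256

# Verify the archive against the recorded checksum; prints "vihps.ova: OK"
# on success.
sha256sum -c vihps.ova.sha256
```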

The latest OVA files are currently available only as 64-bit versions, requiring a 64-bit x86 processor and, when running a virtual machine, a 64-bit host OS. Your filesystem also needs to handle files larger than 4GB (e.g., exFAT rather than FAT32). For assistance and to report problems contact
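The 4GB limit can be checked before copying the archive to a USB stick. A sketch assuming GNU coreutils (`truncate`, `stat -c`); a sparse stand-in file is used in place of the real OVA, whose name here (`vihps.ova`) is a placeholder:

```shell
# Create a 5 GiB sparse stand-in for the real OVA archive
# (occupies almost no disk space, but reports a >4 GiB size).
truncate -s 5G vihps.ova

# Compare the file size against the 4 GiB FAT32 file-size limit.
size=$(stat -c %s vihps.ova)
limit=$((4 * 1024 * 1024 * 1024))
if [ "$size" -gt "$limit" ]; then
    echo "vihps.ova exceeds 4 GiB: target filesystem must support large files"
fi
```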


Generally it is most convenient to boot a downloaded ISO image or OVA archive within a virtual machine (e.g., VirtualBox): in this configuration performance is reduced and hardware counters are not accessible, but sessions can be saved and restored. Alternatively, the ISO image can be used to create a bootable DVD or USB memory stick: booting natively from such a device changes no files on the hard disk and gives better performance, including access to hardware counters. Both methods provide a safe way to experiment with the tools without installing them.
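The two approaches translate into commands along these lines. This is a sketch with hypothetical file names (`vihps.ova`, `vihps.iso`); the `dd` target device varies per system and is shown only as a comment, since writing to the wrong device destroys its contents:

```shell
# Import the OVA archive into VirtualBox, if both tool and archive exist.
if command -v VBoxManage >/dev/null 2>&1 && [ -f vihps.ova ]; then
    VBoxManage import vihps.ova
else
    echo "VirtualBox or vihps.ova not available; skipping OVA import"
fi

# Writing the ISO image to a USB memory stick is DESTRUCTIVE - check the
# device name with 'lsblk' first; /dev/sdX below is a placeholder:
#   sudo dd if=vihps.iso of=/dev/sdX bs=4M status=progress && sync
```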

Once booted, the running system provides a typical Linux HPC environment consisting of the GNU Compiler Collection (including support for OpenMP multithreading) and the Open MPI message-passing library, along with a variety of parallel debugging, correctness checking and performance analysis tools.

Depending on available memory, it should be possible to apply the provided tools and run small-scale parallel programs (e.g., 16 MPI processes or OpenMP threads on a system with 2GB of RAM). When the available processors are over-subscribed, however, measured execution performance will not be representative of dedicated HPC compute resources. Sample measurements and analyses of example and real applications from a variety of HPC systems (many at large scale) are therefore provided for examination and investigation of actual execution performance issues.
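Oversubscribed runs of the kind described above would be launched along these lines. A sketch assuming Open MPI's `mpirun` and a previously built executable `./app` (a hypothetical name), guarded so it degrades gracefully when MPI is absent:

```shell
# Launch 16 ranks even on a machine with fewer cores; Open MPI's
# --oversubscribe flag permits more ranks than available slots, at the
# cost of unrepresentative timings.
if command -v mpirun >/dev/null 2>&1 && [ -x ./app ]; then
    mpirun --oversubscribe -np 16 ./app
else
    echo "mpirun or ./app not available; skipping oversubscribed run"
fi
```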