The Dimemas simulator reconstructs the time behaviour of a parallel application, using as input an event trace that captures the application's CPU and network resource demands. The target machine is modelled by a reduced set of key performance factors, covering linear components such as the point-to-point transfer time as well as non-linear effects such as resource contention and synchronization. Because the model is simple, Dimemas can run parametric studies in a very short time frame. The supported target architecture is a cloud of parallel machines, each with multiple nodes and multiple CPUs per node; this allows a very wide range of alternatives to be evaluated, although the most common environment is a computing cluster. Dimemas can generate a Paraver trace file as part of its output, enabling the user to conveniently examine the simulated run and understand its behaviour.
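The linear part of such a target-machine model can be illustrated with a small sketch. This is not the Dimemas implementation, only an example of the standard latency-plus-bandwidth form of a point-to-point transfer-time model; the function name and all parameter values are made up for illustration.

```python
def transfer_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    """Linear point-to-point model: T = latency + size / bandwidth.

    Illustrative only -- not the actual Dimemas model or API.
    """
    return latency_s + size_bytes / bandwidth_bytes_per_s

# Example: a 1 MiB message over a hypothetical link with
# 5 microseconds latency and 1 GB/s bandwidth.
t = transfer_time(1 << 20, 5e-6, 1e9)
print(f"{t * 1e6:.1f} us")
```

Non-linear effects such as contention would, in a real simulator, make the effective bandwidth depend on concurrent traffic, which is why trace-driven simulation is needed rather than a closed-form estimate.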
Dimemas targets message-passing programming models as well as task-oriented programs. The current instrumentation allows Dimemas to be used with MPI or MPI+OmpSs applications.
Open source: LGPL
Barcelona Supercomputing Center