Application domains such as fluid dynamics, meteorology, nuclear physics, and materials science rely heavily on numerical simulations on HPC resources. A simulation and analysis process typically comprises three steps: first, domain decomposition by means of partitioning and mesh creation; second, the numerical computation of the simulation; third, visualisation and analysis of the resulting data. The performance of extreme-scale simulations on exascale computing systems will depend on the efficiency of massively parallel numerical computations and their scalability on hybrid architectures with acceleration hardware. However, the pre- and post-processing steps, including rendering, are equally crucial to the success of a simulation. CRESTA focused on these two steps.

In order to evaluate new scalable and fault-tolerant approaches to pre- and post-processing in heterogeneous exascale environments, a variety of simulation grid types were considered, each requiring optimised solutions. For instance, the domain decomposition and data mapping to processing units for the rectilinear grids used in weather prediction models may be completely different from the approaches needed for the unstructured grids employed in OpenFOAM. A particular challenge was data processing for HemeLB, UCL's sparse-geometry lattice Boltzmann code intended for hemodynamic simulations.
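To make the contrast concrete, the following is a minimal sketch of the kind of block decomposition that suits a rectilinear grid. The function names (`block_extents`, `decompose`) and the Python formulation are illustrative assumptions, not code from CRESTA or any of the codes mentioned; unstructured meshes such as those in OpenFOAM instead require graph partitioners, and sparse geometries such as HemeLB's need yet other strategies.

```python
# Hypothetical sketch: block decomposition of a 2D rectilinear grid,
# as commonly used for structured-grid codes. Not CRESTA code.

def block_extents(n, parts):
    """Split n cells into `parts` near-equal contiguous [start, end) ranges."""
    base, rem = divmod(n, parts)
    extents, start = [], 0
    for p in range(parts):
        size = base + (1 if p < rem else 0)  # spread the remainder evenly
        extents.append((start, start + size))
        start += size
    return extents

def decompose(nx, ny, px, py):
    """Return the (x-range, y-range) rectangular block owned by each of px*py processes."""
    xs = block_extents(nx, px)
    ys = block_extents(ny, py)
    return [(x, y) for y in ys for x in xs]

blocks = decompose(nx=10, ny=8, px=2, py=2)
# Each of the 4 processes owns a contiguous 5x4 block of the 10x8 grid.
```

The simplicity of this mapping (each rank's cells are a closed-form function of its coordinates) is exactly what is lost with unstructured or sparse grids, where the cell-to-rank assignment must itself be computed and stored.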