Systemware

While hardware advances towards exascale are outwith the scope of CRESTA, these advances will significantly influence software and systemware developments. Hence CRESTA strove to understand and track the underpinning exascale technology, following future architecture developments towards exascale and assessing their impact on the project's software developments.

In order to cope with the challenges of exascale computing, in particular the massive numbers of heterogeneous processing units, the deep memory hierarchies, and the deep and heterogeneous communication facilities, application developers need support in all phases of the application lifecycle. This includes programming models that allow the construction of efficient yet portable applications, advanced compilation techniques, and adaptive runtime environments.

CRESTA boasts an impressive range of European partner tools for debugging and performance analysis. This includes Allinea’s DDT debugger, KTH’s perfminer, and TUD’s Vampir tool-suite and MUST runtime error detection tool (developed in collaboration with LLNL and the ASC Tri-Labs).

Performance analysis tools. With the growing complexity and parallelism of HPC systems, it becomes increasingly challenging to understand the runtime behaviour of applications.
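For orientation, even a hand-rolled measurement of a single code region hints at what such tools automate at far larger scale. The sketch below is a trivial, hypothetical example using MPI_Wtime with a placeholder workload; it is not how Vampir or perfminer collect their data.

```c
/* Minimal, hand-rolled timing of a code region with MPI_Wtime.
 * Dedicated tools such as Vampir record far richer event traces;
 * the loop here is only an illustrative placeholder workload. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();

    /* Placeholder for the region of interest. */
    double acc = 0.0;
    for (int i = 0; i < 10000000; ++i)
        acc += 1e-9 * i;

    double t1 = MPI_Wtime();

    /* Report the slowest rank: a first hint at load imbalance. */
    double local = t1 - t0, slowest;
    MPI_Reduce(&local, &slowest, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("region took %.3f s on the slowest rank (acc=%g)\n",
               slowest, acc);

    MPI_Finalize();
    return 0;
}
```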

Debugging tools. Debugging tools are an essential requirement for coping with the complexity of parallel systems in general, and of future exascale systems in particular.

Many of the most common underlying algorithms are already performance-limited on current tera- and petascale platforms. These limitations will only increase, and in some cases become untenable, at exascale. For example, achieving a reasonable computation/communication balance without a major increase in problem size, coping with increasing memory latencies, avoiding global synchronisation points and load imbalance are all important considerations.
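One common mitigation for global synchronisation points is to overlap the synchronising communication with independent computation. The sketch below is a minimal, hypothetical illustration using the MPI-3 non-blocking collective MPI_Iallreduce; the local values and the "independent work" loop are placeholders, not taken from any CRESTA application.

```c
/* Minimal sketch: hiding a global reduction behind local work with a
 * non-blocking collective (MPI-3 MPI_Iallreduce). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_sum = rank + 1.0;   /* per-process contribution */
    double global_sum = 0.0;
    MPI_Request req;

    /* Start the reduction, but do not wait for it yet. */
    MPI_Iallreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                   MPI_SUM, MPI_COMM_WORLD, &req);

    /* Do work that does not depend on global_sum while the
     * reduction progresses in the background. */
    double busy = 0.0;
    for (int i = 0; i < 1000000; ++i)
        busy += 1e-6 * i;

    /* Only synchronise when the result is actually needed. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("global_sum = %f (busy = %f)\n", global_sum, busy);

    MPI_Finalize();
    return 0;
}
```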

Application domains, such as fluid dynamics, meteorology, nuclear physics, or material science, heavily rely on numerical simulations on HPC resources. A simulation and analysis process is typically composed of three steps: the first step is the domain decomposition by means of partitioning and mesh creation; the second step is the numerical computation of the simulation; the third step is the visualisation and analysis of the resulting data.
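As a rough orientation, the skeleton below sketches that three-step flow as a hypothetical driver program; the data types, function names, and stub bodies are placeholders and do not correspond to any specific CRESTA application code.

```c
/* Hypothetical skeleton of the three-step simulation workflow:
 * (1) domain decomposition / partitioning, (2) numerical computation,
 * (3) output for visualisation and analysis. All names are illustrative. */
#include <mpi.h>
#include <stdio.h>

typedef struct { int ncells; } LocalMesh;   /* local subdomain */
typedef struct { double t;   } State;       /* simulated fields */

/* Step 1: each rank receives its part of the decomposed domain. */
static LocalMesh partition_mesh(int rank, int size)
{
    (void)rank;
    LocalMesh m = { 1000 / size };           /* placeholder split */
    return m;
}

/* Step 2: advance the numerical solution on the local subdomain. */
static void run_solver(const LocalMesh *mesh, State *state)
{
    (void)mesh;
    state->t += 0.01;                        /* placeholder time step */
}

/* Step 3: write data for later visualisation and analysis. */
static void write_visualisation_output(const State *state, int step, int rank)
{
    if (rank == 0)
        printf("step %d: t = %f\n", step, state->t);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    LocalMesh mesh = partition_mesh(rank, size);              /* step 1 */
    State state = { 0.0 };

    for (int step = 0; step < 100; ++step) {
        run_solver(&mesh, &state);                            /* step 2 */
        if (step % 10 == 0)
            write_visualisation_output(&state, step, rank);   /* step 3 */
    }

    MPI_Finalize();
    return 0;
}
```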