News & Events

The CRESTA project is proud to announce the release of the CRESTA benchmark suite. The suite contains the co-design applications, with a range of appropriate input files. Used within the project for test and development purposes, these input files are representative of the types of simulations that may require exascale resources in the future.

Download the Benchmarks Here

After downloading the CRESTA_BENCH.tar.gz file, you can unpack it using

tar -xzvf CRESTA_BENCH.tar.gz

Instructions on how to use the benchmark suite are provided in the top-level README file. The application specific instructions are given in the README files located in each application directory (e.g. GROMACS specific instructions are inside the CRESTA_BENCH/applications/GROMACS/README file). 
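As an illustration of the layout described above, the following Python sketch builds a tiny stand-in archive with the same CRESTA_BENCH/applications/GROMACS/README structure, extracts it (as the tar command above would), and reads the per-application README. The file contents here are invented placeholders, not the real suite.

```python
import os
import tarfile
import tempfile

# Work in a scratch directory so the example is self-contained.
workdir = tempfile.mkdtemp()
os.chdir(workdir)

# Build a tiny stand-in archive mimicking the suite's layout
# (the real CRESTA_BENCH.tar.gz has the same top-level structure).
os.makedirs("CRESTA_BENCH/applications/GROMACS", exist_ok=True)
with open("CRESTA_BENCH/README", "w") as f:
    f.write("Top-level benchmark suite instructions\n")
with open("CRESTA_BENCH/applications/GROMACS/README", "w") as f:
    f.write("GROMACS-specific instructions\n")
with tarfile.open("CRESTA_BENCH.tar.gz", "w:gz") as tar:
    tar.add("CRESTA_BENCH")

# Extract it again, as `tar -xzvf CRESTA_BENCH.tar.gz` would.
with tarfile.open("CRESTA_BENCH.tar.gz", "r:gz") as tar:
    tar.extractall("unpacked")

# The per-application README sits under applications/<name>/README.
readme = os.path.join("unpacked", "CRESTA_BENCH",
                      "applications", "GROMACS", "README")
print(open(readme).read().strip())
```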

Read one of the CRESTA White Papers: Exascale Pre- and Post-Processing. You can read this White Paper, and the others that CRESTA has created, here. Some information on this White Paper:

Today's large-scale simulations deal with complex geometries and numerical data on an extreme scale. As computation approaches the exascale, it will no longer be possible to write and store full-sized result data sets. In-situ data analysis and scientific visualisation provide feasible solutions to the analysis of complex large-scale simulations. To bring pre- and post-processing to the exascale we must consider modifications to data structure and memory layout, and address latency and error resiliency.
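The in-situ idea above can be sketched in a few lines: instead of writing each time step's full field to disk, the toy "simulation" below reduces every field to running statistics the moment it is produced and then discards it. The field sizes and values are invented purely for illustration.

```python
import random

# Toy "simulation": each step produces a field too large to write out
# in full at exascale, so we reduce it in situ and discard it.
random.seed(0)

count = 0
running_sum = 0.0
running_max = float("-inf")

for step in range(100):                               # time steps
    field = [random.random() for _ in range(10_000)]  # per-step data
    # In-situ analysis: reduce immediately instead of storing.
    running_sum += sum(field)
    running_max = max(running_max, max(field))
    count += len(field)

mean = running_sum / count
print(f"mean={mean:.4f} max={running_max:.4f} over {count} values")
```

Only the reduced statistics survive each step, so the memory footprint stays constant no matter how many steps the simulation runs.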

For pre-processing, it is crucial to have a load-balancing strategy that supports multiple simulation phases and includes their costs, so that the computed data distribution leads to optimal performance for the full simulation. For distributed post-processing, in-situ processing is a key concept for performing scalable on-the-fly data analysis and user interaction with on-going simulations. Remote hybrid rendering (RHR) is suitable for accessing remote exascale simulations from immersive projection environments over the Internet. RHR decouples local interaction from remote rendering and thus guarantees smooth interactivity during exploration of large remote data sets. In this white paper, we present strategies, algorithms and techniques for pre- and post-processing in exascale scenarios. With software prototypes developed in CRESTA and integrated into CRESTA applications, we demonstrate the effectiveness of our pre- and post-processing concepts for extremely parallel systems.
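A minimal sketch of the multi-phase load-balancing idea, assuming invented per-partition costs for two hypothetical phases ("solve" and "particles"): the combined cost of all phases drives a greedy longest-processing-time assignment, so that no single phase's cost distribution dominates. This is an illustrative heuristic, not the CRESTA implementation.

```python
# Hypothetical per-partition costs for two simulation phases;
# all names and numbers are invented for the example.
partition_costs = {
    "p0": {"solve": 4.0, "particles": 1.0},
    "p1": {"solve": 3.0, "particles": 5.0},
    "p2": {"solve": 2.0, "particles": 2.0},
    "p3": {"solve": 5.0, "particles": 0.5},
    "p4": {"solve": 1.0, "particles": 4.0},
    "p5": {"solve": 2.5, "particles": 2.5},
}

def balance(costs, n_ranks):
    """Greedy longest-processing-time assignment using the *combined*
    cost of all phases, so the full simulation stays balanced."""
    ordered = sorted(costs.items(),
                     key=lambda kv: sum(kv[1].values()), reverse=True)
    loads = [0.0] * n_ranks
    assignment = {r: [] for r in range(n_ranks)}
    for name, phases in ordered:
        r = loads.index(min(loads))          # least-loaded rank so far
        assignment[r].append(name)
        loads[r] += sum(phases.values())
    return assignment, loads

assignment, loads = balance(partition_costs, n_ranks=3)
print(assignment, loads)
```

Balancing on the per-phase sums rather than on a single phase's cost is what lets the distribution serve every phase of the simulation reasonably well.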

We have published a White Paper - Operating Systems at the Extreme Scale - which can be viewed here.

Here is some information on the White Paper:

"As we move ever closer to the first exascale systems, understanding the current status of operating system development becomes increasingly important. In particular, being able to quantify the potential impact of the operating system on applications at scale is key. This white paper does just this, before evaluating and looking to drive developments in operating systems to address identified scaling issues. 

The current trend in operating system research and development of re-implementing existing APIs is likely to continue. However, this approach is incremental and driven by developments in hardware as well as the necessity to improve the operating system to make full use of current technologies. Unfortunately, improvements that enhance scalability of the operating system often reduce usability. 

This method of operating system development will provide scalability for the immediate future but it is likely to be limited by the original design decisions of modern HPC technology. Developments in hardware, operating systems, programming models and programming languages are all interdependent, which leads to cyclical improvements rather than novel approaches. The abstractions that have held true for hardware for several decades are no longer adequate to describe modern hardware. For example, procedural languages such as C and FORTRAN, assume single-threaded, sequential processing and memory isolation enforced by hardware protection. Operating systems now depend on this hardware protection mechanism to isolate the memory spaces for different processes, which requires an expensive context-switch when transferring control from one process to another. This cannot be avoided unless a disruptive technology breaks the dependency by introducing a novel way to protect process memory spaces. 
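The cost of the hardware-enforced context switches described above can be estimated with a classic ping-pong microbenchmark: two processes alternate over a pair of pipes, so every message forces the kernel to switch between them. This is a rough illustration for POSIX systems (it uses os.fork), not a rigorous measurement.

```python
import os
import time

# Ping-pong microbenchmark: parent and child alternate over two pipes,
# so each round trip includes at least two context switches plus the
# pipe I/O itself.
ROUNDS = 2000
r1, w1 = os.pipe()
r2, w2 = os.pipe()

pid = os.fork()
if pid == 0:                        # child: echo every byte back
    for _ in range(ROUNDS):
        os.read(r1, 1)
        os.write(w2, b"x")
    os._exit(0)

start = time.perf_counter()
for _ in range(ROUNDS):
    os.write(w1, b"x")              # wake the child...
    os.read(r2, 1)                  # ...then block until woken again
elapsed = time.perf_counter() - start
os.waitpid(pid, 0)

per_round_trip_us = elapsed / ROUNDS * 1e6
print(f"~{per_round_trip_us:.1f} us per round trip (2 switches + pipe I/O)")
```

The measured figure bundles scheduler and pipe overhead together, which is exactly the per-transfer cost that a disruptive memory-protection scheme would aim to eliminate.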

Similarly, disruptive technologies may be needed to solve other scalability and performance issues, in operating systems and hardware, without sacrificing usability."

The CRESTA project has published a new White Paper - Architectural Developments Towards Exascale.

You can download the paper and some of our other White Papers here. Here is some information on the White Paper:

Developing a computer system that can deliver sustained Exaflop performance is an extremely difficult challenge for the HPC and scientific community. In addition to developing hardware that can compute an Exaflop within a feasible power budget, scientific applications need to be able to exploit the performance that such a system can offer. 

The applications can only achieve this type of performance if they are supported by a complete stack of systemware, programming models, compilers, libraries and tools. However, developing this stack, not to mention the applications, takes many man-years of effort. In order to direct such efforts efficiently, it is important to try to predict what the architecture of an Exascale system may look like. Although it is obviously not possible to predict future developments with 100% accuracy, estimates based on the analysis of past trends in HPC system architecture development, in conjunction with trends in the current market, will provide some guidance.
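The kind of trend-based estimate described above can be sketched by fitting exponential growth to historical figures for the fastest TOP500 system and extrapolating to 1 Exaflop/s. The Rmax values below are rounded approximations, and the resulting year is purely illustrative of the method, not a prediction from the white paper.

```python
import math

# Approximate Rmax of the #1 TOP500 system by year (flop/s);
# figures are rounded and used only to illustrate the extrapolation.
history = {
    1997: 1.07e12,   # ASCI Red
    2002: 35.9e12,   # Earth Simulator
    2008: 1.03e15,   # Roadrunner
    2011: 10.5e15,   # K computer
    2013: 33.9e15,   # Tianhe-2
}

# Least-squares fit of log10(performance) against year.
xs = list(history)
ys = [math.log10(v) for v in history.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Year at which the trend line crosses 1 Exaflop/s (1e18 flop/s).
exa_year = (18 - intercept) / slope
print(f"trend: x{10 ** slope:.2f} per year; 1 Exaflop/s around {exa_year:.0f}")
```

A log-linear fit like this captures the "past trends" part of the argument; the market-driven corrections the paper mentions are what such a naive extrapolation cannot see.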

In this white paper, we give our own analysis of the architectural developments towards Exascale systems and discuss the implications for the CRESTA co-design applications.

The CRESTA project has just published a new White Paper - The Exascale Development Environment: State of the Art and Gap Analysis. You can download the paper and some of our other White Papers here.

Here is some information on the White Paper:

The development and implementation of efficient computer codes for exascale supercomputers will crucially depend on a combined advancement of all development environment components. This white paper presents the state of the art of programming models, compiler technologies, run-time systems, debuggers, correctness checkers, and performance monitoring tools and it identifies the common challenges and problems that need to be solved before the exascale era.

The main focus of this white paper is on emerging and novel technologies in programming models and tools. Together with the traditional approaches, the white paper presents the PGAS parallel programming models and new approaches for programming accelerators, such as OpenACC. It is important that these emerging programming models can be combined with traditional ones for their uptake on exascale supercomputers. For this reason, we discuss in detail the interoperability of different programming approaches. Because we recognize that hand-optimization of parallel codes will be significantly more complex on exascale machines, we present recent progress in software frameworks for automatic tuning and run-time systems that schedule processes on millions of computing units. Finally, we present an overview of the state of the art in parallel debuggers, correctness checkers, and performance monitoring and analysis tools, focusing on which approaches can provide scalability on exascale machines.
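The automatic-tuning idea mentioned above can be illustrated with a toy tuner: it times one candidate variant per block size and keeps the fastest, which is the core search loop of an autotuning framework. The kernel and the candidate sizes are invented for the example.

```python
import time

# One "code variant" parameterised by a tuning knob (the block size).
def summed_in_blocks(data, block):
    total = 0.0
    for i in range(0, len(data), block):
        total += sum(data[i:i + block])
    return total

data = [float(i % 7) for i in range(200_000)]
candidates = [64, 256, 1024, 4096]

# Empirical search: time each candidate and keep the fastest, as an
# autotuning framework would (real tuners repeat runs and search
# much larger parameter spaces).
timings = {}
for block in candidates:
    start = time.perf_counter()
    result = summed_in_blocks(data, block)
    timings[block] = time.perf_counter() - start

best = min(timings, key=timings.get)
print(f"best block size: {best}")
```

The point is that the winning parameter is discovered by measurement on the target machine rather than chosen by hand, which is what makes the approach attractive when hand-optimization becomes intractable.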

After discussing the state of the art in the field, we analyze the two main common challenges for the development environment on exascale supercomputers. First, all the components of the programming environment will deal with unprecedented amounts of data coming from executing, debugging, scheduling and monitoring codes running on millions of computing units, and they will be required to provide responsiveness and interactivity while introducing minimal overhead. Second, programming tools will need to provide support for novel programming models, such as PGAS, and hardware accelerators, such as GPU and Intel MIC, that will become more and more common on exascale machines.

This white paper provides an overview of different approaches with exascale potential and indicates the progress that is needed to fill the existing gap between petascale and exascale development environment technologies. The results of this white paper guide the current work on the development environment in the CRESTA project.