The Exascale Challenge – to build a supercomputer that can deliver an exaflop: a million million million (10^18) calculations per second.

Why do we need High Performance Computing?

High Performance Computing (HPC) uses very large-scale computers to solve some of the world’s biggest computational problems. Where a problem is too big, too small, too fast, too slow, or too complex to experiment on directly, science turns to models, and many of our scientific breakthroughs rely on such models being simulated on supercomputers.

Many countries worldwide are investing in HPC. In Europe, the HPC community has been collaborating on many projects, such as the DEISA project and the PRACE Research Infrastructure. But to maintain its key positions on the world stage in areas as diverse as the automotive, pharmaceutical, financial, biological and renewable energy sectors, Europe must invest in HPC to model and simulate the scientific advances needed to develop its future products and services. This case was made in 2006 by the HPC in Europe Taskforce (HET) in its white paper, ‘Scientific Case for Advanced Computing in Europe’.

It is quite clear that the global economies that invest in modelling and simulation are those that will, over time, gain the greatest competitive advantage and reap the largest economic benefits. Hence the current scramble to invest in large-scale leading-edge HPC systems worldwide.

Current HPC systems

For years, increases in HPC performance were delivered through higher clock speeds, larger, faster memories, and higher-bandwidth, lower-latency interconnects. However, from 2005 onwards we have witnessed an ever slower increase in, and most recently a reduction in, the clock speeds of microprocessor cores. This has resulted in higher and higher core counts – the largest systems today have in excess of 100,000 cores (systems in 1995 had around 512 microprocessors).

The scale of today’s leading HPC systems, which operate at the petascale, has put a strain on many simulation codes. Only a small number of codes worldwide have to date demonstrated petaflop/s performance. As one of Europe’s key strengths is in simulation and modelling applications, this is a key challenge, and one that CRESTA is keen to address.
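
One standard way to see why codes strain at this scale is Amdahl’s law, which bounds parallel speedup by the fraction of a code that must run serially. The sketch below is purely illustrative – the 99.9% parallel fraction is an assumed figure, not a CRESTA measurement:

    # Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N),
    # where p is the fraction of the runtime that parallelises.

    def speedup(p: float, cores: int) -> float:
        """Ideal speedup on `cores` cores under Amdahl's law."""
        return 1.0 / ((1.0 - p) + p / cores)

    # Even a code that is 99.9% parallel wastes most of a large machine:
    for n in (512, 10_000, 100_000, 1_000_000):
        s = speedup(0.999, n)
        print(f"{n:>9,} cores: speedup {s:7.0f}, efficiency {s / n:6.1%}")

At 512 cores the efficiency is still around 66%, but at a million cores it collapses to around 0.1%: as machines grow, applications need restructuring, not just recompiling.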

Tomorrow’s HPC systems: The Exascale Challenge

The current challenge is to move from 10^15 flop/s (a petaflop) to the next milestone of 10^18 flop/s – an exaflop. The exascale challenge has been articulated in detail at the global level by the International Exascale Software Project and in Europe by the European Exascale Software Initiative. The timescale for demonstrating the world’s first exascale system is estimated to be 2018. From a hardware point of view we can speculate that such systems will consist of (see the sizing sketch after the list):

  • Large numbers of low-power, many-core microprocessors (possibly millions of cores)
  • Numerical accelerators with direct access to the same memory as the microprocessors (almost certainly based on evolved GPGPU designs)
  • High-bandwidth, low-latency novel topology networks (almost certainly custom-designed)
  • Faster, larger, lower-powered memory modules (perhaps with evolved memory access interfaces)
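
A back-of-envelope calculation shows why the core counts must be so large. All the hardware figures below (clock speed, flops per core per cycle) are hypothetical assumptions for illustration, not projections from the project:

    # Target: an exaflop, i.e. 10**18 floating-point operations per second.
    EXAFLOP = 1e18

    # Hypothetical low-power core: 1.5 GHz, 8 flops per cycle (wide SIMD).
    clock_hz = 1.5e9
    flops_per_cycle = 8

    flops_per_core = clock_hz * flops_per_cycle   # 1.2e10 flop/s per core
    cores_needed = EXAFLOP / flops_per_core

    print(f"conventional cores needed: {cores_needed:,.0f}")  # ~83,000,000

Tens of millions of conventional cores would be impractical to build and program, which is why accelerators delivering far more flops per core are expected to carry much of the arithmetic.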

Only a small number of organisations will be able to build such systems. However, it is crucial to note that hardware is not the only exascale computing challenge: software and applications must scale too. Such systems could have over a million cores, but they must also excel in reliability, programmability, power consumption and usability (to name a few). This is the focus of the CRESTA project.