The Exascale Challenge

The Exascale Challenge: to build a supercomputer that can deliver an exaflop – a million million million (10^18) calculations per second.


Why do we need High Performance Computing?

High Performance Computing (HPC) uses very large-scale computers to solve some of the world’s biggest computational problems.

HPC is without doubt a key enabling technology for many technologically advanced nations in the 21st century, and many countries world-wide are investing heavily in it. In Europe, the HPC community, which has collaborated for more than two decades, has cemented this collaboration in the past five years with the establishment of the DEISA project and now the PRACE Research Infrastructure.

Over the past half century, modelling and simulation have come to complement theory and experiment as a key component of the scientific method. Where a problem is too big, too small, too fast, too slow or too complex to experiment on directly, many of our scientific breakthroughs rely on models simulated on supercomputers. Over this period, the scale and complexity of supercomputers have grown in a symbiotic relationship with the science: it is often unclear whether the technology has driven our ability to perform the science, or the science has driven the design and production of larger and more effective systems.

Being at the forefront of design and manufacturing is a central driver of the European economy. To maintain our key positions on the world stage in areas as diverse as the automotive, pharmaceutical, financial, biological and renewable energy sectors, Europe must invest in its ability to model and simulate the scientific advances needed to develop our future products and services. This case was clearly made in 2006 by the HPC in Europe Taskforce (HET), which produced a white paper entitled “Scientific Case for Advanced Computing in Europe” laying out a convincing case for the huge potential of HPC to support European scientific discovery and hence the European economy. Modelling and simulation is a key enabler of scientific and industrial innovation today, and it is quite clear that the global economies that invest in it are those that will, over time, gain the greatest competitive advantage and reap the largest economic benefits. Hence the current scramble worldwide to invest in large-scale, leading-edge HPC systems: there is a clear understanding that the net economic impact of such systems greatly outweighs the cost of procuring them.

Current HPC systems

From the mid-1990s to the mid-2000s, massively parallel supercomputer designs were remarkably similar. The fastest systems in 1995 had around 512 microprocessors, while ten years later a large system had only three or four times that number. Increases in performance were delivered through higher clock speeds, larger and faster memories, and higher-bandwidth, lower-latency interconnects. A small number of scientific grand challenges have driven the need to build larger and more powerful supercomputers over the past decade. From 2005 onwards we have witnessed ever slower increases in, and most recently a reduction in, the clock speeds of microprocessor cores. This has resulted in higher and higher core counts – the largest systems today have in excess of 100,000 cores. At the same time, the amount of memory bandwidth per core has decreased as core counts have increased. Interconnects have continued to improve – particularly those designed specifically for such systems.

The scale of today’s leading HPC systems, which operate at the petascale, has put a strain on many simulation codes – both scientific and commercial. Only a small number of codes worldwide have to date demonstrated petaflop/s performance. As one of Europe’s key strengths is in simulation and modelling applications, this is a key challenge, and one that CRESTA is keen to address.

Tomorrow’s HPC systems: The Exascale Challenge

Having demonstrated a small number of scientific applications running at the petascale, the HPC community, and particularly the hardware community, is naturally looking to the next challenge: moving from 10^15 flop/s to the next milestone of 10^18 flop/s – an exaflop. Hence the exascale challenge, which has been articulated in detail at the global level by the International Exascale Software Project and in Europe by the European Exascale Software Initiative. Many of the partners in CRESTA are leading members of one or both of these initiatives.

In tackling the delivery of an exaflop/s, formidable challenges exist not just in scale (such systems could have over a million cores) but also in reliability, programmability, power consumption and usability, to name a few.
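As a rough back-of-the-envelope illustration (the core count and clock rate below are assumptions chosen only to make the arithmetic concrete, not predictions about any particular machine), the short program below works out what sustaining an exaflop/s would demand of a system with a million cores.

    // back_of_envelope.cu – indicative arithmetic only; the core count and
    // clock rate are illustrative assumptions, not machine specifications.
    #include <cstdio>

    int main(void)
    {
        const double petaflop = 1e15;   // flop/s: the scale of today's leading systems
        const double exaflop  = 1e18;   // flop/s: the exascale milestone
        const double cores    = 1e6;    // "over a million cores", taken here as exactly one million
        const double clock_hz = 2e9;    // assumed 2 GHz clock per core

        printf("An exaflop is %.0f times a petaflop\n", exaflop / petaflop);
        printf("Required sustained rate per core: %.1e flop/s\n", exaflop / cores);
        printf("Flops per cycle per core at %.0f GHz: %.0f\n",
               clock_hz / 1e9, exaflop / (cores * clock_hz));
        return 0;
    }

On these indicative numbers each core would need to sustain roughly 500 floating-point operations per cycle – far beyond a conventional core design – which is one way of seeing why the speculative hardware list below leans so heavily on numerical accelerators.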

The timescale for demonstrating the world’s first exascale system is estimated to be 2018. From a hardware point of view we can speculate that such systems will consist of:

  • Large numbers of low-power, many-core microprocessors (possibly millions of cores)
  • Numerical accelerators with direct access to the same memory as the microprocessors (almost certainly based on evolved GPGPU designs) – a minimal programming sketch of this shared-memory model follows the list
  • High-bandwidth, low-latency novel topology networks (almost certainly custom-designed)
  • Faster, larger, lower-powered memory modules (perhaps with evolved memory access interfaces)
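
To make the second bullet a little more concrete, the fragment below is a minimal sketch, assuming a CUDA-style GPGPU accelerator with unified (managed) memory, of what “direct access to the same memory as the microprocessors” can look like to a programmer: the host cores and the accelerator touch a single allocation. This is one possible model among several, not a statement about what exascale systems will actually provide.

    // unified_memory_axpy.cu – illustrative only: host and accelerator share one
    // allocation via CUDA managed memory; the kernel computes y = a*x + y.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void axpy(int n, double a, const double *x, double *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        double *x, *y;

        // A single allocation visible to both the host cores and the accelerator.
        cudaMallocManaged(&x, n * sizeof(double));
        cudaMallocManaged(&y, n * sizeof(double));

        for (int i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }   // host writes

        axpy<<<(n + 255) / 256, 256>>>(n, 3.0, x, y);             // accelerator computes
        cudaDeviceSynchronize();

        printf("y[0] = %f (expected 5.0)\n", y[0]);               // host reads the result

        cudaFree(x);
        cudaFree(y);
        return 0;
    }

The sketch only illustrates the memory model itself; how such shared allocations behave across millions of cores, and how codes tolerate the reliability and power constraints noted above, are precisely the open exascale questions.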

Only a small number of companies will be able to build such systems. However, it is crucial to note that hardware is not the only exascale challenge: software and applications must also evolve to exploit such systems.

