News & Events CRESTA https://www.cresta-project.eu/table/news-events/feed/atom.html 2019-07-18T11:37:06+00:00 CRESTA lorna@epcc.ed.ac.uk Joomla! - Open Source Content Management Archives 2015-01-14T10:51:38+00:00 2015-01-14T10:51:38+00:00 https://www.cresta-project.eu/archives.html Doug Rocks-Macqueen doug.rocks-macqueen@ed.ac.uk <p>The CRESTA project completed at the beginning of 2015. Records and articles from the project have been saved. Please see the links below to view the archived webpages:</p> <p><a href="https://www.cresta-project.eu/index.php?option=com_content&amp;view=article&amp;id=67&amp;Itemid=173">Press Corner</a></p> <p><a href="https://www.cresta-project.eu/index.php?option=com_content&amp;view=category&amp;layout=blog&amp;id=14&amp;Itemid=120">Magazine Articles</a></p> <p><a href="https://www.cresta-project.eu/index.php?option=com_content&amp;view=category&amp;layout=blog&amp;id=15&amp;Itemid=121">Deliverables</a></p> <p><a href="https://www.cresta-project.eu/index.php?option=com_content&amp;view=category&amp;layout=blog&amp;id=25&amp;Itemid=172">CRESTA Newsletters</a></p> <p><a href="https://www.cresta-project.eu/index.php?option=com_content&amp;view=article&amp;id=69&amp;Itemid=174">CRESTA Publications</a></p> <p><a href="https://www.cresta-project.eu/index.php?option=com_content&amp;view=category&amp;layout=blog&amp;id=12&amp;Itemid=114">News &amp; Events</a></p> CRESTA Publications 2012-11-05T22:17:04+00:00 2012-11-05T22:17:04+00:00 https://www.cresta-project.eu/publications2.html Katie Urquhart katie.urquhart@ed.ac.uk <p>Here is a selection of publications created by the CRESTA project:</p> <h2>White papers</h2> <p><strong><a href="https://www.cresta-project.eu/images/cresta_whitepaper.pdf">The Exascale Development Environment: State of the Art and Gap Analysis</a></strong>: the state of the art and a gap analysis for the exascale development environment.</p> <p><strong><a href="https://www.cresta-project.eu/images/WhitePapers/cresta_whitepaper1_2014.pdf">Architectural Developments Towards Exascale</a></strong>: linking trends in HPC architectures with their potential impact on applications.</p>
<p><strong><a href="https://www.cresta-project.eu/images/WhitePapers/cresta_whitepaper_2_2014.pdf">Operating Systems at the Extreme Scale</a></strong>: quantifying the potential impact of the operating system on applications at scale, and evaluating and driving developments in operating systems to address the scaling issues identified.</p> <p><strong><a href="https://www.cresta-project.eu/images/WhitePapers/cresta_whitepaper_3_2014.pdf">Exascale Pre- and Post-Processing</a></strong>: considering the modifications to data structures and memory layout, and the approaches to latency and error resiliency, that will be required to bring pre- and post-processing to the exascale.</p> <p><strong><a href="https://www.cresta-project.eu/images/WhitePapers/cresta_whitepaper_4_2014.pdf">Benchmarking MPI Collectives</a></strong>: evaluating the impact of late arrivals on collective operations, and examining the benefit of overlapping computation and communication with non-blocking collectives.</p>
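<p>As a rough illustration of the "late arrival" effect that white paper measures, the minimal C/MPI micro-benchmark below delays one rank before a blocking collective and times the call on every rank. It is a sketch under assumed settings (the 10 ms delay, message size and output format are invented here, not taken from the white paper).</p>
<pre><code>/* Minimal late-arrival sketch (illustrative, not the CRESTA benchmark):
   rank 0 enters MPI_Allreduce ~10 ms after everyone else, and each rank
   reports how long the blocking collective took as a result. */
#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double in = (double)rank, out = 0.0;
    MPI_Barrier(MPI_COMM_WORLD);      /* line all ranks up before timing */
    if (rank == 0) usleep(10000);     /* simulate one late arrival */

    double t0 = MPI_Wtime();
    MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    printf("rank %d: allreduce took %.3f ms\n", rank, (t1 - t0) * 1e3);
    MPI_Finalize();
    return 0;
}
</code></pre>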
<h2>Case studies</h2> <p>Our case studies highlight the successes of our co-design approach.</p> <p><strong><a href="https://www.cresta-project.eu/images/cresta_case_study.pdf">Extreme Weather Forecasting with Extreme Computing</a></strong>: discusses the <a href="http://www.ecmwf.int">European Centre for Medium-Range Weather Forecasts</a>' (ECMWF) IFS and its use of co-array Fortran.</p> <p><strong><a href="https://www.cresta-project.eu/images/cresta_casestudy1_2014.pdf">Application soars above petascale after tools collaboration</a></strong>: The HemeLB research group at <a href="http://www.ucl.ac.uk">University College London</a> (UCL) has an exciting vision for HPC, one that will change the way neurosurgeons operate in the future and improve outcomes for patients. The group develops software to model intracranial blood flow, and collaboration with <a href="http://www.allinea.com">Allinea Software</a> is helping them to address the challenges of application development at scale.</p> <p><strong><a href="https://www.cresta-project.eu/images/CaseStudies/cresta_casestudy2_2014.pdf">Large-scale fluid dynamics simulations - towards a virtual wind tunnel</a></strong>: Numerical modelling is today one of the major tools of scientific and engineering work. It allows us to investigate and understand a variety of physical phenomena occurring in our environment, making it possible to design the efficient tools and devices we need in our everyday lives.</p> <p><strong><a href="https://www.cresta-project.eu/images/CaseStudies/cresta_casestudy3_2014.pdf">Massively parallel liquid crystal simulation</a></strong>: Liquid crystals (LCs) are widespread in technology (including displays and other optical devices) and also in nature, but much is yet to be understood about the range of possible LC configurations. Simulations are vital in paving the way to improved knowledge and exciting new applications.</p> <p><strong><a href="https://www.cresta-project.eu/images/CaseStudies/cresta_casestudy4_2014.pdf">Cray helps prepare ECMWF's Integrated Forecast System for exascale</a></strong>: The comprehensive Earth-system model developed at the European Centre for Medium-Range Weather Forecasts (ECMWF) in co-operation with Météo-France forms the basis for the centre's data assimilation and forecasting activities. All the main applications required are available through one computer software system called the Integrated Forecasting System (IFS).</p> <p><strong><a href="https://www.cresta-project.eu/images/CaseStudies/cresta_casestudy5_2014.pdf">Auto-tuning OpenACC directives within the NEK5000 application to fully exploit GPU architectures</a></strong>: Applications need to be tuned to get the best performance from a high-performance supercomputer. Doing this by hand can be time-consuming, with potentially thousands of parameter combinations to explore. Auto-tuners can speed the process up significantly, but existing methods are not well suited to HPC applications (a small illustrative sketch follows this list).</p>
<p><strong><a href="https://www.cresta-project.eu/images/CaseStudies/cresta_casestudy6_2014.pdf">CRESTA Benchmark Suite</a></strong>: The principle of co-design lies at the very heart of all work undertaken in the CRESTA project. Our co-design vehicles are a set of scientific applications, which drive the research. These applications have now been collected in a Benchmark Suite, which is available for public use.</p>
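<p>To make the auto-tuning case study above concrete, here is a hedged C/OpenACC sketch of the kind of directive such a tuner searches over. The loop body, clause values and parameter ranges are invented for illustration; they are not taken from NEK5000.</p>
<pre><code>/* Illustrative only: an OpenACC kernel whose clause values an auto-tuner
   would sweep (e.g. vector_length over {32, 64, 128, 256} and num_gangs
   over a small grid), timing each variant and keeping the fastest. */
#define N 1000000

void scale(double *restrict x, double a) {
    #pragma acc parallel loop gang vector num_gangs(256) vector_length(128) \
                copy(x[0:N])
    for (int i = 0; i < N; ++i)
        x[i] *= a;   /* stand-in for a real NEK5000-style kernel body */
}
</code></pre>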
SC12 Workshop Agenda 2012-10-25T11:28:49+00:00 2012-10-25T11:28:49+00:00 https://www.cresta-project.eu/sc12-workshop-agenda.html Katie Urquhart katie.urquhart@ed.ac.uk <p style="text-align: center;"><strong>Preparing Applications for Exascale Through Co-design</strong></p> <p style="text-align: center;">Workshop at SC12, Salt Lake City, Utah, Room 250-DE</p> <p style="text-align: center;">Friday November 16<sup>th</sup>, 8:30am - 12:30pm</p> <p style="text-align: center;">Organisers: Lorna Smith, Mark Parsons, Achim Basermann, Bastian Koller, Stefano Markidis, Frédéric Magoulès</p> <table border="1"> <tbody>
<tr> <td>08:30-08:35</td> <td colspan="2"><strong>Introduction and Welcome</strong><br />Mark Parsons, CRESTA Coordinator from EPCC, The University of Edinburgh</td> </tr>
<tr> <td>08:35-09:00</td> <td>Mark Parsons, EPCC, The University of Edinburgh</td> <td>CRESTA: Collaborative Research into Exascale Systemware, Tools and Applications</td> </tr>
<tr> <td>09:00-09:30</td> <td><em>Invited Speaker:</em> Dana Knoll, Los Alamos National Laboratory</td> <td>CoCoMANS: Computational Co-design for Multi-scale Applications in the Natural Sciences</td> </tr>
<tr> <td>09:30-10:00</td> <td>George Mozdzynski, European Centre for Medium-Range Weather Forecasts</td> <td>A PGAS implementation by co-design of the ECMWF Integrated Forecasting System (IFS)</td> </tr>
<tr> <td>10:00-10:30</td> <td colspan="2"><strong>Coffee Break</strong></td> </tr>
<tr> <td>10:30-11:00</td> <td><em>Invited Speaker:</em> Giovanni Lapenta, Katholieke Universiteit Leuven</td> <td>Codesign effort centred on space weather</td> </tr>
<tr> <td>11:00-11:30</td> <td>Richard Graham, Mellanox Technologies</td> <td>InfiniBand CORE-Direct and Dynamically Connected Transport Service: A Hardware-Application Co-Design Case Study</td> </tr>
<tr> <td>11:30-12:00</td> <td>Achim Basermann, German Aerospace Center (DLR)</td> <td>Enabling In-situ Pre- and Post-Processing for Exascale Hemodynamic Simulations - A Co-Design Study with the Sparse Geometry Lattice Boltzmann Code HemeLB</td> </tr>
<tr> <td>12:00-12:30</td> <td>Michael Schliephake, SeRC (Swedish e-Science Research Center) and PDC, Royal Institute of Technology</td> <td>Communication Performance Analysis of CRESTA's Co-Design Applications NEK5000 and IFS</td> </tr>
<tr> <td>14:00</td> <td colspan="2"><strong>Close</strong></td> </tr>
</tbody> </table>
style="margin-bottom: 0.0001pt; line-height: normal;">Communication Performance Analysis of CRESTA’s Co-Design Applications NEK5000 and IFS</p> </td> </tr> <tr> <td style="width: 69.2pt; border-right: 1pt solid windowtext; border-width: medium 1pt 1pt; border-style: none solid solid; border-color: -moz-use-text-color windowtext windowtext; padding: 0cm 5.4pt;" valign="top" width="69"> <p style="margin-bottom: 0.0001pt; line-height: normal;">14:00</p> </td> <td style="width: 392.9pt; border-width: medium 1pt 1pt medium; border-style: none solid solid none; border-color: -moz-use-text-color windowtext windowtext -moz-use-text-color; padding: 0cm 5.4pt;" colspan="2" valign="top" width="393"> <p style="margin-bottom: 0.0001pt; line-height: normal; text-align: center;" align="center"><strong>Close</strong></p> </td> </tr> </tbody> </table> Speakers 2012-10-25T10:59:21+00:00 2012-10-25T10:59:21+00:00 https://www.cresta-project.eu/speakers.html Katie Urquhart katie.urquhart@ed.ac.uk <h3>Invited Speakers</h3> <h4><span style="font-size: 11pt; line-height: 115%; font-family: Calibri;">Mark Parsons, CRESTA</span></h4> <p style="text-align: left;"><span style="font-size: 11pt; line-height: 115%; font-family: Calibri;"><strong>Title: CRESTA: Collaborative Research into Exascale Systemware, Tools and Applications</strong></span></p> <p style="text-align: left;">The CRESTA project uses a novel approach to exascale system co-design which focuses on the use of a small, representative set of applications to inform and guide software and systemware developments. By limiting our work to a small set of representative applications, we are developing key insights into the necessary changes to applications and system software required to compute at this scale.&nbsp; In CRESTA, we recognise that incremental improvements are simply not enough and we need to look at disruptive changes to the HPC software stack from the operating system, through tools and libraries, to the applications themselves. In this talk I will present an overview of our work in this area.</p> <h4><span style="font-size: 11pt; line-height: 115%; font-family: Calibri;">Dana Knoll, LANL </span></h4> <p><strong>Title: CoCoMANS: Computational Co-design for Multi-scale Applications in the Natural Sciences</strong></p> <p style="text-align: left;">CoCoMANS is a three year project at Los Alamos National Laboratory (LANL) intended to advanced LANL's understanding of useful computational co-design practices and processes. The exascale future will bring many challenges to current practices and processes in general large- scale computational science. The CoCoMANS project is meeting these challenges by forging a qualitatively new predictive-science capability exploiting evolving high-performance computer architectures for multiple national-security-critical application areas including materials, plasmas, and climate by simultaneously evolving the four corners of science, methods, software, and hardware in an integrated computational co-design process. We are developing new Sapplications-based, self-consistent, two-way, scale-bridging methods that have broad applicability to the targeted science. These algorithms will map well to emerging heterogeneous computing models (while concurrently guiding hardware and software to maturity), and provide the algorithmic acceleration necessary to probe new scientific challenges at unprecedented scales. 
Expected outcomes of the CoCoMANS project include: 1) demonstrating a paradigm shift in the pursuit of world-class science at LANL by bringing a wide spectrum of pertinent expertise in the computer and natural sciences into play; 2) delivering a documented computational co-design knowledge base, built upon an evolving software infrastructure; 3) fostering broad, long-term research collaborations with appropriate hardware partners; and 4) deepening our understanding of, and experience base with, consistent, two-way, scale-bridging algorithms. In this talk we will provide an overview of the CoCoMANS project and present some results from the project to date.</p> <h4>Giovanni Lapenta, KU Leuven</h4> <p><strong>Title: Codesign effort centred on space weather</strong></p> <p>Space weather, the study of the highly dynamic conditions in the coupled system formed by the Sun, the Earth and the interplanetary space, provides an exciting challenge for computer science. Space weather brings together the modelling of multiple physical processes happening in a system with hugely varying local scales and a multiplicity of concurrent processes. The high variety of methods and processes provides a testing ground for new and upcoming computer architectures. Our approach is based on taking one powerful and widely used method, the particle-in-cell method, as our central reference point. We investigate how it performs on existing and upcoming platforms, and we reconsider its design and implementation practice with an eye towards innovative (and even revolutionary) formulations centred around co-design: can we rethink the method so that the needs, the requirements and the possibilities in the hardware, in the software middle layer and in the physics itself can be combined in the best way? Insisting on algorithms and mathematical-physics models developed in the 1950s is probably not the best use of petascale and exascale. We report on our recent experience built on the best combination of algorithms (using implicit formulations), software implementations and programming languages, and the best perspectives in upcoming hardware. The work reported here is partly completed as part of the EC-FP7 networks DEEP (deep-project.eu) and SWIFF (swiff.eu) and of the Intel Exascience Lab (exascience.com).</p> <h3>Accepted Talks</h3> <p><strong>Title: InfiniBand CORE-Direct and Dynamically Connected Transport Service: A Hardware-Application Co-Design Case Study</strong></p> <p>Authors: Noam Bloch, Richard Graham, Gilad Shainer, Todd Wilde, Mellanox Technologies</p> <p>The challenges facing the High Performance Computing community as it moves towards exascale computing encompass the full system, from the hardware up through the application software. They require application software to use unprecedented levels of parallelism on systems in constant flux, while the cost of moving data at such scale, within the energy budget, poses a large challenge for hardware designers.
With the enormity of such challenges, it is essential that application development and hardware design be done cooperatively, both to help close the gap between today's programming practices and the design constraints facing hardware designers, and to enable applications to take full advantage of the hardware's capabilities. Over the past several years, Mellanox Technologies has been working in close cooperation with application developers on new communication technologies; CORE-Direct and the newly developed Dynamically Connected Transport are the outcome of such co-design efforts. This talk will describe these capabilities and discuss future co-design plans.</p> <p>The CORE-Direct technology has been developed to address scalability issues faced by applications using collective communication, including the effects of system noise. A large fraction of scientific simulations use such functionality, and the performance of collective communication is often a limiting factor for application scalability. With collective algorithms used to guide hardware and software development, the CORE-Direct functionality offloads collective communication progression to the Host Channel Adapter (HCA), leaving the central processing unit free to perform other work while the collective communication progresses. Such an implementation provides hardware support for asynchronous non-blocking collective communication, as well as a means of addressing some of the system-noise problems. This functionality has been available in Mellanox HCAs since ConnectX-2, and has been shown to provide both good absolute performance and effective asynchronous communication support.</p> <p>In addition to addressing the scalability of InfiniBand's collective communication support, hardware support has been added to the new Connect-IB for scalable point-to-point communications. A new transport has been added, called Dynamically Connected Transport (DC). With this transport, the hardware creates reliable connections dynamically, with the number of connections required scaling with application communication characteristics and single-host communication capabilities. As such, it forms the basis for a reliable, scalable transport substrate aimed at supporting application needs at the exascale. This talk will describe the co-design principles used in developing both the CORE-Direct and the Dynamically Connected Transport capabilities. Detailed results will be discussed from experiments performed using the CORE-Direct functionality, along with very early results from using the DC transport. Lessons learned, and continued future application-hardware co-design work, will also be discussed.</p>
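<p>For readers unfamiliar with the pattern CORE-Direct accelerates, the hedged C sketch below shows the generic shape of a non-blocking collective overlapped with computation, using the standard MPI-3 interface. It is an illustration, not Mellanox code; the function name, buffers and filler work are invented.</p>
<pre><code>/* Generic overlap pattern (illustrative): start a non-blocking collective,
   do independent work while the library/HCA progresses it, then wait. */
#include <mpi.h>

void overlapped_sum(double *local, double *global, int n,
                    double *work, int wn) {
    MPI_Request req;
    MPI_Iallreduce(local, global, n, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);     /* collective starts here */
    for (int i = 0; i < wn; ++i)              /* independent computation */
        work[i] = work[i] * 0.5 + 1.0;        /* overlaps the collective */
    MPI_Wait(&req, MPI_STATUS_IGNORE);        /* result valid after this */
}
</code></pre>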
<p><strong>Title: Enabling In-situ Pre- and Post-Processing for Exascale Hemodynamic Simulations - A Co-Design Study with the Sparse Geometry Lattice Boltzmann Code HemeLB</strong></p> <p>Authors: Fang Chen*, Markus Flatken*, Achim Basermann*, Andreas Gerndt*, James Hetherington⁺, Timm Krüger⁺, Gregor Matura⁺, Rupert Nash⁺</p> <p>*German Aerospace Center (DLR), ⁺University College London</p> <p>Today's fluid simulations deal with complex geometries and numerical data on an extreme scale. As computation approaches the exascale, it will no longer be possible to write and store the full-sized data set. In-situ data analysis and scientific visualization provide feasible solutions to the analysis of complex, large-scale CFD simulations. To bring pre- and post-processing to the exascale we must consider modifications to data structure and memory layout, and address latency and error resiliency. In this respect, a particular challenge is exascale data processing for the sparse geometry lattice Boltzmann code HemeLB, intended for hemodynamic simulations.</p> <p>In this paper, we assess the needs and challenges of HemeLB users and sketch a co-design infrastructure and system architecture for pre- and post-processing the simulation data. To enable in-situ data visualization and analysis during a running simulation, post-processing needs to work on a reduced subset of the original data. Particular choices of data structure and visualization techniques need to be co-designed with the application scientists in order to achieve efficient and interactive data processing and analysis. In this work, we focus on the hierarchical data structures and suitable visualization techniques which offer possible solutions to interactive in-situ data processing at the exascale.</p> <p><strong>Title: Communication Performance Analysis of CRESTA's Co-Design Applications NEK5000 and IFS</strong></p> <p>Authors: Michael Schliephake and Erwin Laure, SeRC (Swedish e-Science Research Center) and PDC, Royal Institute of Technology, Sweden</p> <p>The EU FP7 project CRESTA addresses the exascale challenge, which requires new solutions with respect to algorithms, programming models, and system software, amongst many others. CRESTA has chosen a co-design approach between the joint development of important HPC applications with proven high performance, and system software supporting high application efficiency.</p> <p>We present results from one of the ongoing co-design development efforts. They exemplify CRESTA's approach to co-design in general, and the challenges application developers face in the design of MPI communications on current and upcoming large-scale systems in particular. This co-design effort started with the analysis of the CRESTA application NEK5000, which represents important classes of numerical simulation codes.</p> <p>NEK5000 is an open-source solver for calculations in computational fluid dynamics and scales well to more than 250,000 cores. An analysis of the performance of its communication infrastructure is presented. It turns out that its implementation, based on an assumed hypergraph network topology, shows very good performance on different system architectures.</p> <p>Finally, we discuss conclusions drawn from the application analysis with respect to further development in order to make use of larger systems in the future. Another important use of the knowledge gained from the performance analysis will be its application in the implementation of run-time services to support dynamic load balancing.
This will close the co-design circle: application needs inspire system software developments that, in turn, improve the application significantly.</p> <p><strong>Title: A PGAS implementation by co-design of the ECMWF Integrated Forecasting System (IFS)</strong></p> <p>Authors: George Mozdzynski*, Mats Hamrud*, Nils Wedi*, Jens Doleschal (TUD), and Harvey Richardson (Cray)</p> <p>* ECMWF</p> <p>ECMWF is a partner in the Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project, funded by a recent EU call for projects in exascale computing, software and simulation (ICT-2011.9.13). The significance of the research carried out within the CRESTA project is that it will demonstrate techniques required to scale the current generation of petascale simulation codes towards the performance levels required for running on future exascale systems.</p> <p>Within CRESTA, ECMWF is exploring the use of Fortran 2008 coarrays; in particular, it is possibly the first time that coarrays have been used in a world-leading production application within the context of OpenMP parallel regions. The purpose of these optimizations is primarily to allow the overlap of computation and communication and, in the case of the semi-Lagrangian optimization, to reduce the volume of data communicated by removing the need for a constant-width halo when computing the trajectory of particles of air backwards in time. The importance of this research is such that, if these developments are successful, the IFS model can continue to use the spectral method to 2025 and beyond for the currently planned model resolutions on an exascale-sized system. This research is further significant because the techniques used should be applicable to other hybrid MPI/OpenMP codes with the potential to overlap computation and communication.</p> <p>In a nutshell, IFS is a spectral, semi-implicit, semi-Lagrangian weather prediction code, where model data exist in three spaces: grid-point, Fourier and spectral space. In a single time-step, data are transposed between these spaces so that the respective grid-point, Fourier and spectral computations are independent over two of the three co-ordinate directions in each space. Fourier transforms are performed between grid-point and Fourier space, and Legendre transforms are performed between Fourier and spectral space.</p> <p>At ECMWF, this same model is used in an Ensemble Prediction System (EPS) suite, where today 51 models are run at lower resolution with perturbed input conditions to provide probabilistic information that complements the accuracy of the high-resolution deterministic forecast. The EPS suite is a perfect candidate to run on future exascale systems, with each ensemble member being independent of the other such jobs: increase the number of members and their resolution, and trivially we can fill an exascale system. There will always be a need for a high-resolution deterministic forecast, which is more challenging to scale and is the reason for ECMWF's focus in the CRESTA project.</p> <p>Today ECMWF uses a 16 km global grid for its operational deterministic model, and plans to scale up to a 10 km grid in 2014-15, followed by a 5 km grid in 2020-21, and a 2.5 km grid in 2025-26.
These planned resolution increases will require IFS to run efficiently on about a million cores by 2025.</p> <p>The current status of the coarray scalability developments to IFS will be presented in this talk, including an outline of planned developments.</p>
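<p>The IFS work itself is written with Fortran 2008 coarrays inside OpenMP regions; as a rough C/MPI analogue of the overlap idea the abstract describes, the hedged sketch below starts a halo exchange early and updates the interior while the exchange completes. The 1-D decomposition, tags and update are invented for illustration and are not taken from IFS.</p>
<pre><code>/* Illustrative overlap of halo exchange and computation on a 1-D
   decomposition (not IFS code; IFS uses Fortran 2008 coarrays). */
#include <mpi.h>

void step(double *field, int n, int left, int right) {
    MPI_Request reqs[4];
    /* start one-point halo exchange with both neighbours */
    MPI_Irecv(&field[0],     1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&field[n - 1], 1, MPI_DOUBLE, right, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&field[1],     1, MPI_DOUBLE, left,  1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&field[n - 2], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[3]);

    for (int i = 2; i < n - 2; ++i)   /* interior update overlaps the exchange */
        field[i] *= 0.99;

    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);
    /* boundary points can now be updated using the received halo values */
}
</code></pre>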
In this talk we will provide an overview of the CoCoMANS project and provide some results from the project to date.</p> <h4><span style="font-size: 11pt; line-height: 115%; font-family: Calibri;">Giovanni Lapenta, KU Leuven </span></h4> <p style="text-align: justify;"><strong>Title: Codesign effort centred on space weather </strong></p> <p style="text-align: left;">Space weather, the study of the highly dynamic conditions in the coupled system formed by the Sun, the Earth and the interplanetary space, provides an exciting challenge for computer science. Space weather brings together the modeling of multiple physical processes happening in a system with hugely varying local scales and with a multiplicity of concurring processes. The high variety of methods and processes provides the testing grounds for new and upcoming computer architectures. Our approach is based on taking one powerful and widely used approach, the particle in cell method, as our central reference point. We then investigate how it performs on existing and upcoming platforms, we reconsider its design and its practice of implementation with an eye towards innovative (and even revolutionary) formulations centered around co-design: can we rethink the method so that the needs, the requests and the possibility in the hardware, in the software middle-layer and in the physics itself can be combined in the best way. Insisting on algorithms and mathematical-physcs model developed in the 50's is probably not the best use of petascale and exascale. We report on our recent experience built on the best combination of algorithms (using implicit formulations) of software implementations and programming languages and the best perspectives in upcoming hardware. The work reported here is partly completed as part of the EC-FP7 networks DEEP (deep-project.eu) and SWIFF (swiff.eu) and of the Intel Exascience Lab (exascience.com).</p> <h3 style="text-align: left;">AccEPTED Talks</h3> <p style="text-align: left;"><strong>Title: InifinBand CORE-Direct and Dynamically Connected Transport Service: A Hardware-Application Co-Design Case Study </strong></p> <p style="text-align: justify;">Authors<strong>:</strong> Noam Bloch, Richard Graham, Gilad Shainer, Todd Wilde Mellanox Technologies</p> <p style="text-align: left;">The challenges facing the High Performance Computing community, as it moves towards Exascale computing encompass the full system, starting at the hardware, and up through application software. The challenges require application software to use unprecedented levels of parallelism, using systems in constant flux, with the cost to move data at such scale, keeping within the energy budget, posing a large challenge for hardware designers. With the enormity of such challenges, it is essential that application development and hardware design be done cooperatively, to help close the gap between todays programming practices and the design constraints facing hardware designers as well as to enable application to take full advantage of the hardware capabilities. Over the past several years, Mellanox Technologies has been working in close cooperation with application developers in developing new communication technologies. CORE-Direct and the newly developed Dynamic Connected Transport being the outcome of such co-design efforts. 
<h3 style="text-align: left;">Accepted Talks</h3> <p style="text-align: left;"><strong>Title: InfiniBand CORE-Direct and Dynamically Connected Transport Service: A Hardware-Application Co-Design Case Study</strong></p> <p style="text-align: justify;">Authors<strong>:</strong> Noam Bloch, Richard Graham, Gilad Shainer, Todd Wilde, Mellanox Technologies</p> <p style="text-align: left;">The challenges facing the High Performance Computing community as it moves towards exascale computing encompass the full system, from the hardware up through the application software. Applications must exploit unprecedented levels of parallelism on systems in constant flux, while the cost of moving data at such scale within the energy budget poses a major challenge for hardware designers. Given the enormity of these challenges, it is essential that application development and hardware design proceed cooperatively, both to close the gap between today's programming practices and the design constraints facing hardware designers, and to enable applications to take full advantage of the hardware's capabilities. Over the past several years, Mellanox Technologies has been working in close cooperation with application developers on new communication technologies; CORE-Direct and the newly developed Dynamically Connected Transport are outcomes of these co-design efforts. This talk will describe these capabilities and discuss future co-design plans.</p> <p style="text-align: left;">The CORE-Direct technology has been developed to address scalability issues faced by applications using collective communication, including the effects of system noise. A large fraction of scientific simulations use such functionality, with the performance of collective communication often being a limiting factor for application scalability. With collective algorithms being used to guide hardware and software development, the CORE-Direct functionality offloads collective communication progression to the Host Channel Adapter (HCA), leaving the Central Processing Unit free to perform other work while the collective communication progresses. Such an implementation provides hardware support for asynchronous non-blocking collective communication, as well as a means of addressing some of the system-noise problems. This functionality has been available in Mellanox HCAs since ConnectX-2, and has been shown to provide both good absolute performance and effective asynchronous communication support.</p> <p style="text-align: left;">In addition to addressing the scalability of InfiniBand's collective communication support, hardware support has been added to the new Connect-IB for scalable point-to-point communications. A new transport, Dynamically Connected Transport (DC), has been added. With this transport, the hardware creates reliable connections dynamically, with the number of connections required scaling with application communication characteristics and single-host communication capabilities. As such, it forms the basis for a reliable, scalable transport substrate aimed at supporting application needs at the exascale. This talk will describe the co-design principles used in developing both the CORE-Direct and the Dynamically Connected Transport capabilities. Detailed results will be presented from experiments using the CORE-Direct functionality, along with very early results from the DC transport. Lessons learned, as well as continued future application-hardware co-design work, will also be discussed.</p>
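<p style="text-align: left;">The application-visible pattern that such offload targets is the non-blocking collective: post the operation, compute independently, and complete it only when the result is needed. A minimal sketch of that pattern (ours, using the standard MPI-3 interface; the CORE-Direct offload itself sits below this layer):</p> <pre><code>/* Sketch of the overlap pattern that collective offload is built to
 * serve: post a non-blocking reduction, compute, complete it when the
 * result is needed. Standard MPI-3 calls only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = rank + 1.0, global = 0.0;
    MPI_Request req;

    /* Post the collective; an offload-capable HCA progresses it from here. */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM,
                   MPI_COMM_WORLD, &req);

    /* Independent computation proceeds while the reduction is in flight. */
    double work = 0.0;
    for (int i = 0; i < 1000000; ++i) work += i * 1e-9;

    /* Block only at the point where the reduced value is actually used. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    if (rank == 0) printf("sum = %f (overlapped work = %f)\n", global, work);

    MPI_Finalize();
    return 0;
}
</code></pre>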
<p style="text-align: justify;"><strong>Title: Enabling In-situ Pre- and Post-Processing for Exascale Hemodynamic Simulations - A Co-Design Study with the Sparse Geometry Lattice Boltzmann Code HemeLB</strong></p> <p style="text-align: justify;">Authors<strong>:</strong> Fang Chen*, Markus Flatken*, Achim Basermann*, Andreas Gerndt*, James Hetherington⁺, Timm Krüger⁺, Gregor Matura⁺, Rupert Nash⁺</p> <p style="text-align: justify;">*German Aerospace Center (DLR), ⁺University College London</p> <p>Today's fluid simulations deal with complex geometries and numerical data on an extreme scale. As computation approaches the exascale, it will no longer be possible to write and store the full-sized data set. In-situ data analysis and scientific visualization provide feasible solutions for the analysis of complex, large-scale CFD simulations. To bring pre- and post-processing to the exascale we must consider modifications to data structures and memory layout, and address latency and error resiliency. In this respect, a particular challenge is exascale data processing for the sparse-geometry lattice Boltzmann code HemeLB, intended for hemodynamic simulations.</p> <p>In this paper, we assess the needs and challenges of HemeLB users and sketch a co-design infrastructure and system architecture for pre- and post-processing of the simulation data. To enable in-situ data visualization and analysis during a running simulation, post-processing needs to work on a reduced subset of the original data. Particular choices of data structure and visualization technique need to be co-designed with the application scientists in order to achieve efficient and interactive data processing and analysis. In this work, we focus on hierarchical data structures and suitable visualization techniques that offer possible solutions to interactive in-situ data processing at exascale.</p>
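<p>The core idea, reducing data in-situ so that only a summary leaves the simulation, can be illustrated with a toy sketch (ours; the uniform grid and the names are our simplifications, not HemeLB's sparse geometry or its API): each local block of the lattice is coarsened before being handed to the analysis side, and finer levels of the hierarchy are fetched only on demand.</p> <pre><code>/* Toy sketch of in-situ data reduction: coarsen a local block before it
 * leaves the simulation. Uniform grid and names are our simplification;
 * HemeLB itself uses sparse geometries and hierarchical structures. */
#include <stdio.h>

#define N 64   /* fine cells per side of the local block */
#define C 4    /* coarsening factor */

/* Average C x C tiles of an N x N field into an (N/C) x (N/C) summary. */
static void coarsen(const double *fine, double *coarse) {
    int nc = N / C;
    for (int J = 0; J < nc; ++J)
        for (int I = 0; I < nc; ++I) {
            double s = 0.0;
            for (int j = 0; j < C; ++j)
                for (int i = 0; i < C; ++i)
                    s += fine[(J * C + j) * N + (I * C + i)];
            coarse[J * nc + I] = s / (C * C);
        }
}

int main(void) {
    static double velocity[N * N];             /* full-resolution field */
    static double summary[(N / C) * (N / C)];  /* what analysis receives */
    for (int i = 0; i < N * N; ++i) velocity[i] = (double)i / (N * N);

    /* In-situ step: only the reduced level is shipped each time step;
     * finer levels of the hierarchy are requested on demand. */
    coarsen(velocity, summary);
    printf("summary[0] = %f (replaces %d fine values)\n", summary[0], C * C);
    return 0;
}
</code></pre>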
<p><strong>Title: Communication Performance Analysis of CRESTA's Co-Design Application NEK5000</strong></p> <p>Authors: Michael Schliephake and Erwin Laure, SeRC (Swedish e-Science Research Center) and PDC, Royal Institute of Technology, Sweden</p> <p>The EU FP7 project CRESTA addresses the exascale challenge, which requires new solutions with respect to algorithms, programming models, and system software, amongst many others. CRESTA has chosen a co-design approach built on the joint development of important HPC applications with proven high performance and of system software supporting high application efficiency.</p> <p>We present results from one of the ongoing co-design development efforts. They exemplify CRESTA's approach to co-design in general, and the challenges facing application developers in the design of MPI communications on current and upcoming large-scale systems in particular. This co-design effort started with the analysis of the CRESTA application NEK5000, which represents important classes of numerical simulation codes.</p> <p>NEK5000 is an open-source solver for computational fluid dynamics calculations and scales well to more than 250,000 cores. An analysis of the performance of its communication infrastructure is presented. It turns out that its implementation, based on an assumed hypergraph network topology, shows very good performance on different system architectures.</p> <p>Finally, we discuss conclusions drawn from the application analysis with respect to further development aimed at making efficient use of larger systems in the future. Another important use of the knowledge gained from the performance analysis will be the implementation of run-time services to support dynamic load balancing. This will close the co-design cycle: application needs inspire system software developments, which in turn improve the application significantly.</p> <p><strong>Title: A PGAS implementation by co-design of the ECMWF Integrated Forecasting System (IFS)</strong></p> <p style="text-align: justify;">Authors: George Mozdzynski*, Mats Hamrud*, Nils Wedi*, Jens Doleschal (TUD) and Harvey Richardson (Cray)</p> <p>* ECMWF</p> <p style="text-align: left;">ECMWF is a partner in the Collaborative Research into Exascale Systemware, Tools and Applications (CRESTA) project, funded by a recent EU call for projects in exascale computing, software and simulation (ICT-2011.9.13). The significance of the research carried out within the CRESTA project is that it will demonstrate techniques required to scale the current generation of petascale simulation codes towards the performance levels required for running on future exascale systems.</p> <p style="text-align: left;">Within CRESTA, ECMWF is exploring the use of Fortran 2008 coarrays; in particular, this is possibly the first time that coarrays have been used in a world-leading production application within the context of OpenMP parallel regions. The purpose of these optimizations is primarily to allow the overlap of computation and communication and, in the case of the semi-Lagrangian optimization, to reduce the volume of data communicated by removing the need for a constant-width halo when computing the trajectories of particles of air backwards in time. The importance of this research is such that, if these developments are successful, the IFS model can continue to use the spectral method to 2025 and beyond for the currently planned model resolutions on an exascale-sized system. The research is further significant in that the techniques used should be applicable to other hybrid MPI/OpenMP codes with the potential to overlap computation and communication.</p> <p style="text-align: left;">In a nutshell, IFS is a spectral, semi-implicit, semi-Lagrangian weather prediction code, where model data exists in three spaces, namely grid-point, Fourier and spectral space. In a single time-step, data is transposed between these spaces so that the respective grid-point, Fourier and spectral computations are independent over two of the three co-ordinate directions in each space. Fourier transforms are performed between grid-point and Fourier space, and Legendre transforms are performed between Fourier and spectral space.</p> <p style="text-align: left;">At ECMWF, this same model is used in an Ensemble Prediction System (EPS) suite, where today 51 models are run at lower resolution with perturbed input conditions to provide probabilistic information that complements the accuracy of the high-resolution deterministic forecast. The EPS suite is a perfect candidate to run on future exascale systems, with each ensemble member being independent of the others. Increasing the number of members and their resolution could trivially fill an exascale system. There will always, however, be a need for a high-resolution deterministic forecast, which is more challenging to scale and is the reason for ECMWF's focus in the CRESTA project.</p> <p style="text-align: left;">Today ECMWF uses a 16 km global grid for its operational deterministic model, and plans to scale up to a 10 km grid in 2014-15, followed by a 5 km grid in 2020-21, and a 2.5 km grid in 2025-26. These planned resolution increases will require IFS to run efficiently on about a million cores by 2025.</p> <p style="text-align: left;">The current status of the coarray scalability developments to IFS will be presented in this talk, including an outline of planned developments.</p>
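<p style="text-align: left;">The overlap at the heart of this work can be sketched generically. ECMWF's implementation uses Fortran 2008 coarrays inside OpenMP parallel regions; the C sketch below (ours, not ECMWF code, with illustrative names and a simplified decomposition) instead uses a standard MPI-3 non-blocking all-to-all, purely to show the shape of the optimization: post the transposition of one field and transform another while the data is in flight.</p> <pre><code>/* Sketch of overlapping a transposition with computation; illustrative
 * only. The IFS work uses Fortran 2008 coarrays rather than
 * MPI_Ialltoall, but the overlap pattern is the same. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 1024   /* elements exchanged with each rank */

/* Stand-in for the per-field computation (FFTs/Legendre transforms in IFS). */
static void transform(double *data, int n) {
    for (int i = 0; i < n; ++i) data[i] *= 2.0;
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *fieldA = malloc(sizeof(double) * CHUNK * size);
    double *fieldB = malloc(sizeof(double) * CHUNK * size);
    double *recvB  = malloc(sizeof(double) * CHUNK * size);
    for (int i = 0; i < CHUNK * size; ++i) fieldA[i] = fieldB[i] = 1.0;

    /* Post the transposition of field B without blocking ... */
    MPI_Request req;
    MPI_Ialltoall(fieldB, CHUNK, MPI_DOUBLE,
                  recvB,  CHUNK, MPI_DOUBLE, MPI_COMM_WORLD, &req);

    /* ... and transform field A while B's data moves across the network. */
    transform(fieldA, CHUNK * size);

    MPI_Wait(&req, MPI_STATUS_IGNORE);   /* B is now transposed */
    transform(recvB, CHUNK * size);

    if (rank == 0) printf("transposition and transform overlapped\n");
    free(fieldA); free(fieldB); free(recvB);
    MPI_Finalize();
    return 0;
}
</code></pre>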
CRESTA SC12 Workshop - The call for abstracts 2012-08-08T11:40:12+00:00 2012-08-08T11:40:12+00:00 https://www.cresta-project.eu/cresta-sc12-workhop-call-for-abstracts.html Katie Urquhart katie.urquhart@ed.ac.uk <h3>Preparing applications for exascale through co-design</h3> <p>The need for exascale platforms is being driven by a set of important scientific drivers: scientific challenges of global significance that cannot be solved on current petascale hardware but require exascale systems. Example grand challenge problems originate from energy, climate, nanotechnology and medicine, and have a strong societal focus. Meeting these challenges requires the associated application codes to utilise developing exascale systems appropriately. Achieving this demands close interaction between software and application developers. The concept of co-design dates from the late 18th century and recognises the importance of a priori knowledge. In modern software terms, co-design recognises the need to include all relevant perspectives and stakeholders in the design process. With application, software and hardware developers now engaged in co-design to guide exascale development, a workshop bringing these communities together is timely.</p> <p>Authors are invited to submit novel research and experience in all areas associated with co-design, and we particularly welcome research that brings together current theory and practice. This half-day workshop seeks contributions in the form of abstracts on relevant topics, including, but not limited to, co-design:</p> <p>• From an application scientist's perspective</p> <p>• From a software engineer's perspective</p> <p>• In fusion science</p> <p>• In QCD</p> <p>• In weather prediction</p> <p>• In computational fluid dynamics</p> <p>• In molecular simulation</p> <p>• And numerical algorithms</p> <p>• And pre- and post-processing</p> <p>• And programming models and libraries</p> <p>• Best practice and innovation</p> <p>A peer-review process will be used to select abstracts. Successful authors will be invited to present their work at the workshop and to submit a paper for publication in the SC 2012 digital proceedings.</p> <p><strong>Abstract submissions should be no longer than 500 words in length and should be submitted to paec12@cresta-project.eu no later than 16th September 2012.</strong></p> <p><a href="https://www.cresta-project.eu/index.php?option=com_content&amp;view=article&amp;id=53:cresta-sc12-workshop&amp;catid=2:uncategorised">Back to CRESTA SC12 Workshop pages</a></p>