Reinventing Parallel Programming for Massive Parallelism
Didem Unat, Postdoctoral Researcher, Lawrence Berkeley National Laboratory
Abstract: Parallel computing is growing in importance as more and more scientific and industrial challenges, such as drug discovery, climate change, and financial data analysis, are studied through very large-scale computer simulations. Supercomputer systems increasingly rely on on-chip parallelism, which requires dramatic changes to chip architecture, and application software must undergo extensive redesign to meet performance expectations. Support from programming models and performance prediction tools is therefore crucial to help application developers take advantage of this massive hardware parallelism.
The first part of this talk introduces the Mint programming model, which provides programming-model support for massive on-chip parallelism. Mint allows the programmer to express parallelism at a high level: its domain-specific compiler parallelizes loop nests, performs data locality optimizations for stencil methods, and relieves the programmer of tedious tasks such as thread management. The second part of the talk introduces the ExaSAT framework, a forward-looking tool designed to quantitatively assess hardware-software design trade-offs for potential hardware realizations in the exascale timeframe (2020). ExaSAT extracts data-movement information from source code through compiler analysis and then estimates performance, along with other important metrics, using a parameterized hardware/software model. ExaSAT is being used successfully in the combustion co-design project.
In addition to introducing Mint and ExaSAT, the talk will highlight a number of exciting research topics in the area of parallel programming models and performance modeling.
Bio: Didem Unat is a postdoctoral researcher and the recipient of the Luis Alvarez Fellowship at Lawrence Berkeley National Laboratory. Her research interests lie primarily in the areas of high-performance computing, parallel programming models, compiler analysis, and performance modeling. She is currently working on designing and evaluating programming models for future exascale architectures as part of a hardware-software co-design project. She received her Ph.D. from the University of California, San Diego, and holds a B.S. in computer engineering from Bogazici University.