Why isn’t the data close to the computation? Let’s understand and optimize the data locality problem.
Due to the evolving complexity of current hardware architectures, keeping the data close to the compute engine is becoming crucial for mastering the performance of numerical simulations. In this presentation we will walk through examples and use cases, from the node level down to the vector registers, covering the most common data locality issues. With the help of performance tools, we will also show how to detect them and find common solutions for the most typical patterns.
Fabio Baruffa is a software technical consulting engineer in the Developer Products Division (DPD) at Intel. He works in the compiler team and provides customer support in the high performance computing (HPC) area. Prior to joining Intel, he worked as an HPC application specialist and developer in some of the largest supercomputing centers in Europe, mainly the Leibniz Supercomputing Centre and the Max Planck Computing and Data Facility in Munich, as well as Cineca in Italy. He has been involved in software development, analysis of scientific codes, and optimization for HPC systems. He holds a PhD in Physics from the University of Regensburg for his research in the area of spintronic devices and quantum computing.