Distributed Computing and Grid Computing
Objectives
This lecture addresses programming and algorithmic issues for
large-scale parallel computers (hundreds to hundreds of thousands of cores).
Description
This module begins with a general introduction to high-performance
computing and programming. It describes the general concepts used in
the design of high-performance computers (from multicore, cache-based
shared-memory machines to large clusters of nodes), along with the
main issues in efficient high-performance programming (from sequential
code optimization techniques up to shared-memory parallel programming
and distributed computing). The module then presents notions of
architecture and execution modeling of a parallel program, aimed at
accurate performance prediction; the notions of speed-up,
isoefficiency, and scalability are introduced at this point. The
module concludes with a brief overview of grid computing and its
related issues.
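The speed-up and efficiency notions mentioned above can be illustrated with a minimal sketch (not part of the course materials): speed-up and efficiency under Amdahl's law, assuming a fixed serial fraction of the work. The function names and the 5% serial fraction are illustrative assumptions.

```python
def speedup(serial_fraction: float, p: int) -> float:
    """Amdahl's-law speed-up on p cores for a given serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

def efficiency(serial_fraction: float, p: int) -> float:
    """Parallel efficiency: speed-up divided by the core count."""
    return speedup(serial_fraction, p) / p

# With 5% serial work, 16 cores give well under a 16x speed-up,
# which is why scalability analysis matters for large machines.
print(round(speedup(0.05, 16), 2))    # → 9.14
print(round(efficiency(0.05, 16), 2)) # → 0.57
```

Isoefficiency analysis asks how fast the problem size must grow with the core count p so that this efficiency stays constant.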
The module consists of nine two-hour lectures and 2 to 4 hours of
supervised lab work, during which students experience
distributed-memory computing in a message-passing environment. PVM
("Parallel Virtual Machine") and XPVM (interactive trace analysis)
are used to develop and validate a relatively simple application,
such as the iterative block Jacobi method for the solution of banded
linear systems of equations. At the end of the lab sessions, students
are asked to design and develop a distributed-memory dynamic scheduler
that automatically adapts the distribution of the application's
parallel tasks to the load of the nodes of the target computer.
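To show the kind of kernel the lab distributes, here is a hedged sketch (not the lab's PVM code): a sequential Jacobi iteration on a tridiagonal system Ax = b, i.e. the serial computation that the lab partitions into blocks of rows across message-passing tasks. All names and parameters are illustrative assumptions.

```python
def jacobi_tridiag(low, diag, up, b, iters=200):
    """Jacobi iterations for a tridiagonal system; returns the iterate x.

    low, up: sub- and super-diagonal entries (length n-1);
    diag: main diagonal (length n); b: right-hand side (length n).
    """
    n = len(diag)
    x = [0.0] * n
    for _ in range(iters):
        x_new = [0.0] * n
        for i in range(n):
            # x_new[i] = (b[i] - sum of off-diagonal terms) / diag[i]
            s = b[i]
            if i > 0:
                s -= low[i - 1] * x[i - 1]
            if i < n - 1:
                s -= up[i] * x[i + 1]
            x_new[i] = s / diag[i]
        x = x_new
    return x

# Diagonally dominant example (4 on the diagonal, -1 off-diagonal),
# for which Jacobi is guaranteed to converge.
n = 5
x = jacobi_tridiag([-1.0] * (n - 1), [4.0] * n, [-1.0] * (n - 1), [2.0] * n)
```

In a distributed-memory version, each task would own a block of rows and exchange only the boundary entries of x with its neighbors after each sweep, which is what makes the method a natural first message-passing exercise.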
Pre-requisites
Computer architecture, operating systems and synchronization mechanisms, programming
Contact(s)
BUTTARI Alfredo
Places
- Toulouse