CoMPI Project Page at UIUC (Co-PI Torsten Hoefler)

Description

This is the page for the ASCR DOE X-Stack software research project "Compiled MPI" (short: CoMPI) at the University of Illinois at Urbana-Champaign (UIUC), led by Torsten Hoefler. It is a joint project with Lawrence Livermore National Laboratory (LLNL, Co-PIs Dan Quinlan and Greg Bronevetsky) and Indiana University (IU, Co-PI Andrew Lumsdaine). UIUC and IU are responsible for runtime optimization and integration, while LLNL handles the compiler infrastructure based on ROSE and the transformations.

Grad Student RAs Needed

We are actively looking for grad student RAs at UIUC for the following two subprojects: (1) datatype optimizations and (2) communication optimizations. A short description of each follows below; please contact Torsten Hoefler if you have questions or are interested in working on either project.

Communication Optimization

This project deals with the static and dynamic optimization of communication schedules. A communication schedule is a set of communication operations and dependencies that defines an order of execution; together, such operations and dependencies form a global communication graph. The goal of this project is to optimize the communication graph in a given model (e.g., LogGP) and to compare the quality of solutions. For example, a broadcast from node 0 to nodes 1..3 can be expressed as the flat set {(0,1), (0,2), (0,3)} (where a tuple (x,y) represents communication from x to y) or as the tree-like set {(0,1), (1,3), (0,2)}. Even in this trivial example, the broadcast tree can be more efficient. The project aims to develop model-based techniques for optimizing communication operations represented in the tuple form above. The main work is to develop algorithms that operate on this tuple form and to prove their optimality using well-known communication models such as LogGP. Reaching optimality is generally very hard.
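To make the tuple-form schedules above concrete, here is a minimal sketch of how one might score a schedule under a simplified LogP-style cost model. The parameter values (L, o, g) and the cost rules are hypothetical illustrations, not the project's actual model.

```python
# Sketch: evaluating tuple-form communication schedules under a
# simplified LogP-style model. Parameter values are hypothetical.
L, o, g = 1.0, 1.0, 4.0   # latency, per-message overhead, gap

def completion_time(schedule, root=0):
    """Time until every node in the schedule holds the data.
    `schedule` is an ordered list of (sender, receiver) tuples."""
    ready = {root: 0.0}   # time at which a node holds the data
    free = {root: 0.0}    # earliest time a node may start a send
    for src, dst in schedule:
        start = max(ready[src], free[src])  # sender needs the data and a free port
        free[src] = start + max(o, g)       # gap before the sender's next injection
        arrival = start + o + L + o         # send overhead + latency + recv overhead
        ready[dst] = arrival
        free[dst] = arrival                 # receiver may relay afterwards
    return max(ready.values())

flat = [(0, 1), (0, 2), (0, 3)]   # root sends to every node directly
tree = [(0, 1), (1, 3), (0, 2)]   # node 1 relays to node 3
print(completion_time(flat))      # 11.0
print(completion_time(tree))      # 7.0
```

With these parameters the tree schedule finishes earlier than the flat one because the root's injection gap, not the network latency, dominates; with a different L/o/g the comparison can flip, which is exactly why model-based optimization is needed.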
We plan to follow three avenues: (1) analytical algorithms and proofs, (2) well-known optimization methods (linear or integer programming), and (3) heuristics and learning-based methods. The results should be implemented in an MPI-like library. The student working on this project should know what MPI is, be familiar with the C and C++ programming languages, and be very familiar with linear optimization, (mixed) integer programming, and basic network models. The student should have a good understanding of the papers Alexandrov et al., "LogGP: Incorporating Long Messages into the LogP Model—One Step Closer Towards a Realistic Model for Parallel Computation," and Bruck et al., "Efficient Algorithms for All-to-All Communications in Multi-Port Message-Passing Systems." For previous work in this area see reference [3].

MPI Shared Memory Optimization

The goal of this project is to optimize MPI implementations for shared-memory supercomputers. The student working on this project should know MPI and be very familiar with the C/C++ programming languages and computer architecture. Please contact Torsten Hoefler for more information if you are a student at UIUC and interested in this project. For previous work in this area see references [1,2].
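As an illustration of the algorithmic avenue, a broadcast schedule can be built greedily in the LogP model: at each step, the informed node that can inject a message earliest sends to the next uninformed node (this greedy rule is known to be optimal for single-item LogP broadcast, per Karp et al.). The sketch below assumes hypothetical parameter values and is not the project's implementation.

```python
import heapq

def greedy_broadcast(P, L, o, g):
    """Greedy LogP broadcast from node 0 to nodes 1..P-1, returned as
    an ordered list of (sender, receiver) tuples: the node that can
    inject a message earliest always sends to the next uninformed node."""
    heap = [(0.0, 0)]   # (earliest time this node can start a send, node id)
    schedule = []
    informed = 1
    while informed < P:
        t, src = heapq.heappop(heap)
        dst = informed                  # next uninformed node
        informed += 1
        schedule.append((src, dst))
        heapq.heappush(heap, (t + max(o, g), src))   # sender busy for one gap
        heapq.heappush(heap, (t + o + L + o, dst))   # receiver may relay after arrival
    return schedule

print(greedy_broadcast(4, 1.0, 1.0, 4.0))   # [(0, 1), (1, 2), (0, 3)]
```

Note how the greedy rule automatically produces a tree-like schedule: node 1 relays to node 2 while the root is still stalled by its injection gap.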

© Torsten Hoefler