October 23, 2015
Speeding up Supercomputers: Optimizing Shapes to Minimize Communication
How do we make supercomputers even faster? One way is to write algorithms that use the supercomputers we already have more effectively. Most current supercomputers are heterogeneous: they are built from many processors, each of which may compute and communicate at a different speed. Programming these processors to collaborate on a single problem is known as parallel computing, and writing parallel programs has two main focuses: load balancing, giving each processor the right amount of work so that all of them finish at the same time, and minimizing the communication between them.

Supercomputers are commonly used to process huge amounts of data for scientific computing, and matrix multiplication sits at the core of many such computations. This summer, Emily designed algorithms for parallel matrix multiplication on heterogeneous computers. These algorithms assign a portion of the matrix to each processor in a way that minimizes communication volume. In this talk, Emily will discuss the challenges she encountered and the knowledge she gained through this research.
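To make the problem concrete, here is a minimal sketch in Python, with made-up processor speeds; it illustrates the general partitioning problem, not Emily's actual algorithms. In a common cost model for computing C = A x B with an n x n result, a processor that owns an h x w rectangle of C needs h·n entries of A and n·w entries of B, so its communication volume is roughly proportional to n(h + w), the rectangle's half-perimeter. Load balancing fixes each rectangle's area in proportion to its processor's speed; the rectangle's shape then determines how much communication it incurs.

```python
# Illustrative sketch (not the research code): partition an n x n result
# matrix C among heterogeneous processors as full-height vertical strips
# whose areas are proportional to processor speed, then measure the
# communication cost under the half-perimeter model described above.

def strip_partition(n, speeds):
    """Assign each processor a full-height strip with width ~ its speed."""
    total = sum(speeds)
    rects, used = [], 0
    for i, s in enumerate(speeds):
        # Give the leftover columns to the last processor to absorb rounding.
        w = n - used if i == len(speeds) - 1 else round(n * s / total)
        rects.append((n, w))  # (height, width) of this processor's rectangle
        used += w
    return rects

def communication_volume(n, rects):
    """Total cost: n*(h + w) summed over all rectangles."""
    return sum(n * (h + w) for h, w in rects)

n = 1024
speeds = [1.0, 2.0, 4.0]  # hypothetical relative processor speeds
rects = strip_partition(n, speeds)
print(rects)                           # [(1024, 146), (1024, 293), (1024, 585)]
print(communication_volume(n, rects))  # cost of the strip layout
```

Strips are only one possible shape: a partition with the same areas but squarer rectangles can have a smaller total half-perimeter, and finding the shapes that minimize it is exactly the kind of question this research addresses.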