Implementing Linear Algebra Routines on Multi-Core Processors with Pipelining and a Look Ahead
Jakub Kurzak and Jack Dongarra
Linear algebra algorithms commonly encapsulate parallelism in the Basic Linear Algebra Subroutines (BLAS). This solution relies on the fork-join model of parallel execution, which may result in suboptimal performance on current and future generations of multi-core processors. To overcome the shortcomings of this approach, a pipelined model of parallel execution is presented, and the idea of look-ahead is used to suppress the negative effects of the sequential formulation of the algorithms. Application to the one-sided matrix factorizations LU, Cholesky, and QR is described. A shared-memory implementation using POSIX threads is presented.
Published 2006-09-18 04:00:00 as ut-cs-06-581 (ID:139)