
EECS Publication

Optimizing Krylov Subspace Solvers on Graphics Processing Units

Hartwig Anzt, Stanimire Tomov, Piotr Luszczek, Ichitaro Yamazaki, Jack Dongarra, and William Sawyer

Krylov subspace solvers are often the method of choice when solving sparse linear systems iteratively. At the same time, hardware accelerators such as graphics processing units (GPUs) continue to offer significant floating-point performance gains for matrix and vector computations through easy-to-use libraries of computational kernels. However, as these libraries are usually composed of a well-optimized but limited set of linear algebra operations, applications that use them often fail to leverage the full potential of the accelerator. In this paper we target the acceleration of the BiCGSTAB solver for GPUs, showing that significant improvement can be achieved by reformulating the method and developing application-specific kernels instead of using the generic CUBLAS library provided by NVIDIA. We propose an implementation that benefits from a significantly reduced number of kernel launches and GPU-host communication events, achieved through increased data locality and the simultaneous reduction of multiple scalar products. Using experimental data, we show that, depending on how strongly the (unmodified) sparse matrix-vector products dominate the runtime, significant performance improvements can be achieved compared to a reference implementation based on the CUBLAS library. We feel that such optimizations are crucial for the subsequent development of high-level sparse linear algebra libraries.

Published 2014-02-17 as ut-eecs-14-725 (ID:583)

