
EECS Publication

A Note on Auto-tuning GEMM for GPUs

Yinan Li, Jack Dongarra, and Stanimire Tomov

The development of high performance dense linear algebra (DLA) critically depends on highly optimized BLAS, and especially on the matrix multiplication routine (GEMM). This is especially true for Graphics Processing Units (GPUs), as evidenced by recently published results on DLA for GPUs that rely on highly optimized GEMM [13, 11]. However, the current best GEMM performance, e.g. up to 375 GFlop/s in single precision and up to 75 GFlop/s in double precision arithmetic on NVIDIA's GTX 280, is difficult to achieve. The development requires extensive GPU knowledge and even reverse engineering to understand undocumented internals of the architecture that have been of key importance in the development [12]. In this paper, we describe GPU GEMM auto-tuning optimization techniques that allow us to keep up with changing hardware by rapidly reusing, rather than reinventing, existing ideas. Auto-tuning, as we show in this paper, is a very practical solution: in addition to easy portability, it often yields substantial speedups even on current GPUs (e.g. up to 27% in certain cases for both single and double precision GEMMs on the GTX 280).

Keywords: Auto-tuning, matrix multiply, dense linear algebra, GPUs.
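To illustrate the auto-tuning idea described in the abstract, the sketch below shows a minimal, hypothetical version of it (not the authors' actual generator or kernels): a tiled single-precision GEMM kernel parameterized by a compile-time tile size, and a host-side search that times each variant and reports the results so the fastest can be chosen. The kernel and helper names (tiled_sgemm, bench_tile) and the assumption that the matrix dimension is a multiple of the tile size are illustrative only; the paper's tuner explores a much larger parameter space (thread-block shape, register blocking, and so on).

```cuda
// Minimal auto-tuning sketch: benchmark several compile-time tile sizes for a
// simple shared-memory SGEMM and compare their run times. Illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

template <int TILE>
__global__ void tiled_sgemm(int n, const float* A, const float* B, float* C) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;
    for (int t = 0; t < n; t += TILE) {            // march over the K dimension
        As[threadIdx.y][threadIdx.x] = A[row * n + (t + threadIdx.x)];
        Bs[threadIdx.y][threadIdx.x] = B[(t + threadIdx.y) * n + col];
        __syncthreads();
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;                        // assumes n is a multiple of TILE
}

template <int TILE>
float bench_tile(int n, const float* A, const float* B, float* C) {
    dim3 block(TILE, TILE);
    dim3 grid(n / TILE, n / TILE);
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    tiled_sgemm<TILE><<<grid, block>>>(n, A, B, C);  // warm-up launch
    cudaEventRecord(start);
    tiled_sgemm<TILE><<<grid, block>>>(n, A, B, C);  // timed launch
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    const int n = 1024;  // one problem size; a real tuner sweeps many sizes
    float *A, *B, *C;
    cudaMalloc(&A, n * n * sizeof(float));
    cudaMalloc(&B, n * n * sizeof(float));
    cudaMalloc(&C, n * n * sizeof(float));
    // The "search space" here is only the tile size; pick the fastest variant.
    printf("TILE= 8: %.3f ms\n", bench_tile<8>(n, A, B, C));
    printf("TILE=16: %.3f ms\n", bench_tile<16>(n, A, B, C));
    printf("TILE=32: %.3f ms\n", bench_tile<32>(n, A, B, C));
    cudaFree(A);
    cudaFree(B);
    cudaFree(C);
    return 0;
}
```

In practice such a search is rerun on each new GPU generation, which is what lets the tuned code keep up with changing hardware without being rewritten by hand.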

Published 2009-01-14 as ut-cs-09-635

ut-cs-09-635.pdf
