
EECS Publication

High-Performance Tensor Contractions for GPUs

A. Abdelfattah, M. Baboulin, V. Dobrev, J. Dongarra, C. Earl, J. Falcou, A. Haidar, I. Karlin, Tz. Kolev, I. Masliah, and S. Tomov

We present a computational framework for high-performance tensor contractions on GPUs. High performance is difficult to obtain using existing libraries, especially for many independent contractions where each contraction is very small, e.g., sub-vector/warp in size. However, by batching the contractions in our framework and exploiting application-specific knowledge, we demonstrate close-to-peak performance. In particular, to accelerate large-scale tensor-formulated high-order finite element method (FEM) simulations, which are the main focus and motivation for this work, we represent contractions as tensor index reorderings plus matrix-matrix multiplications (GEMMs). This representation is key to achieving many-fold algorithmic acceleration, compared with not using it, because it enables reuse of data loaded into fast memory. In addition to using this contextual knowledge, we design tensor data structures, tensor algebra interfaces, and new tensor contraction algorithms and implementations that achieve 90+% of a theoretically derived peak on GPUs. For example, for contractions resulting in GEMMs on square matrices of size 8, we are 2.8x faster than CUBLAS on a K40c GPU, and 8.5x faster than MKL on 16 cores of Intel Xeon E5-2670 (Sandy Bridge) 2.60 GHz CPUs. Finally, we apply autotuning and code generation techniques to simplify tuning and to provide an architecture-aware, user-friendly interface.
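The report's framework provides its own batched kernels; what follows is only a minimal, hypothetical sketch of the two ideas named in the abstract. First, index reordering reduces a contraction to a GEMM: for example, C(i,j,k) = sum_l A(l,i) * B(l,j,k) is the single matrix product A^T * B once B is viewed as an l-by-(j*k) matrix. Second, many small independent contractions are grouped into one batched call. The sketch below issues a batch of the 8x8 GEMMs mentioned in the abstract through CUBLAS's cublasDgemmBatched; the batch count, data layout, and matrix contents are assumptions, not taken from the paper.

#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

int main() {
    const int n     = 8;     // square matrix size from the abstract's example
    const int batch = 1000;  // number of independent contractions (assumed)
    const double alpha = 1.0, beta = 0.0;

    // One contiguous slab per operand; matrix i lives at offset i*n*n.
    // (Input matrices would be filled by the application; omitted here.)
    double *dA, *dB, *dC;
    cudaMalloc((void**)&dA, sizeof(double) * n * n * batch);
    cudaMalloc((void**)&dB, sizeof(double) * n * n * batch);
    cudaMalloc((void**)&dC, sizeof(double) * n * n * batch);

    // cublasDgemmBatched takes device arrays of per-matrix pointers.
    std::vector<const double*> hA(batch), hB(batch);
    std::vector<double*>       hC(batch);
    for (int i = 0; i < batch; ++i) {
        hA[i] = dA + (size_t)i * n * n;
        hB[i] = dB + (size_t)i * n * n;
        hC[i] = dC + (size_t)i * n * n;
    }
    const double **dAarr, **dBarr;
    double       **dCarr;
    cudaMalloc((void**)&dAarr, sizeof(double*) * batch);
    cudaMalloc((void**)&dBarr, sizeof(double*) * batch);
    cudaMalloc((void**)&dCarr, sizeof(double*) * batch);
    cudaMemcpy(dAarr, hA.data(), sizeof(double*) * batch, cudaMemcpyHostToDevice);
    cudaMemcpy(dBarr, hB.data(), sizeof(double*) * batch, cudaMemcpyHostToDevice);
    cudaMemcpy(dCarr, hC.data(), sizeof(double*) * batch, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // C_i = alpha * A_i * B_i + beta * C_i for every contraction i in the batch.
    cublasDgemmBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                       n, n, n, &alpha,
                       dAarr, n, dBarr, n, &beta,
                       dCarr, n, batch);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    cudaFree(dA);    cudaFree(dB);    cudaFree(dC);
    cudaFree(dAarr); cudaFree(dBarr); cudaFree(dCarr);
    return 0;
}

Grouping all contractions into a single pointer-array call amortizes kernel-launch overhead over the whole batch, which matters when each GEMM is sub-warp in size; the report's custom kernels go further, reaching about 2.8x the CUBLAS rate at this size.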

Published 2016-01-21 as ut-eecs-16-738 (ID: 596)

ut-eecs-16-738.pdf
