Computes a matrix-matrix product with general matrices.
event gemm(queue &exec_queue, transpose transa, transpose transb, std::int64_t m, std::int64_t n, std::int64_t k, Ts alpha, const Ta *a, std::int64_t lda, const Tb *b, std::int64_t ldb, Ts beta, Tc *c, std::int64_t ldc, const vector_class<event> &dependencies = {});
gemm supports the following precisions and devices.
| Ts | Ta | Tb | Tc | Devices Supported |
|---|---|---|---|---|
| half | half | half | half | Host, CPU, and GPU |
| float | half | half | float | Host, CPU, and GPU |
| float | float | float | float | Host, CPU, and GPU |
| double | double | double | double | Host, CPU, and GPU |
| std::complex&lt;float&gt; | std::complex&lt;float&gt; | std::complex&lt;float&gt; | std::complex&lt;float&gt; | Host, CPU, and GPU |
| std::complex&lt;double&gt; | std::complex&lt;double&gt; | std::complex&lt;double&gt; | std::complex&lt;double&gt; | Host, CPU, and GPU |
The gemm routine computes a scalar-matrix-matrix product and adds the result to a scalar-matrix product, with general matrix inputs. The operation is defined as
C <- alpha*op(A)*op(B) + beta*C
where:
op(X) is one of op(X) = X, op(X) = X^T, or op(X) = X^H,
alpha and beta are scalars,
A, B and C are matrices:
op(A) is an m-by-k matrix,
op(B) is a k-by-n matrix,
C is an m-by-n matrix.
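As an illustration of the operation defined above (a plain reference sketch, not the oneMKL implementation), the non-transposed, column-major case can be written as three nested loops. The function name `gemm_reference` and the loop structure are this sketch's own; only the operation C <- alpha\*A\*B + beta\*C and the indexing rule (element (i, j) at `x[i + j*ld]` in column-major storage) come from the definition above.

```cpp
#include <cstdint>
#include <vector>

// Reference version of the gemm operation (no transposition, column-major):
//     C <- alpha*A*B + beta*C
// where A is m-by-k with leading dimension lda, B is k-by-n with leading
// dimension ldb, and C is m-by-n with leading dimension ldc.
// Column-major element (i, j) of a matrix x with leading dimension ld is x[i + j*ld].
template <typename T>
void gemm_reference(std::int64_t m, std::int64_t n, std::int64_t k,
                    T alpha, const T *a, std::int64_t lda,
                    const T *b, std::int64_t ldb,
                    T beta, T *c, std::int64_t ldc) {
    for (std::int64_t j = 0; j < n; ++j) {
        for (std::int64_t i = 0; i < m; ++i) {
            T acc = T(0);
            for (std::int64_t p = 0; p < k; ++p)
                acc += a[i + p * lda] * b[p + j * ldb];  // (op(A)*op(B))(i, j)
            c[i + j * ldc] = alpha * acc + beta * c[i + j * ldc];
        }
    }
}
```

The transposed and conjugate-transposed cases only change how `a` and `b` are indexed; the optimized library routine additionally blocks and vectorizes these loops.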
exec_queue: The queue where the routine should be executed.
transa: Specifies the form of op(A), the transposition operation applied to A. See Data Types for more details.
transb: Specifies the form of op(B), the transposition operation applied to B. See Data Types for more details.
m: Specifies the number of rows of the matrix op(A) and of the matrix C. The value of m must be at least zero.
n: Specifies the number of columns of the matrix op(B) and the number of columns of the matrix C. The value of n must be at least zero.
k: Specifies the number of columns of the matrix op(A) and the number of rows of the matrix op(B). The value of k must be at least zero.
alpha: Scaling factor for the matrix-matrix product.
a: Pointer to input matrix A. If A is not transposed, A is an m-by-k matrix, so the array a must have size at least lda*k (respectively, lda*m) if column (respectively, row) major layout is used to store matrices. If A is transposed, A is a k-by-m matrix, so the array a must have size at least lda*m (respectively, lda*k) if column (respectively, row) major layout is used to store matrices. See Matrix and Vector Storage for more details.
lda: The leading dimension of A. If matrices are stored using column major layout, lda must be at least m if A is not transposed, and at least k if A is transposed. If matrices are stored using row major layout, lda must be at least k if A is not transposed, and at least m if A is transposed.
b: Pointer to input matrix B. If B is not transposed, B is a k-by-n matrix, so the array b must have size at least ldb*n (respectively, ldb*k) if column (respectively, row) major layout is used to store matrices. If B is transposed, B is an n-by-k matrix, so the array b must have size at least ldb*k (respectively, ldb*n) if column (respectively, row) major layout is used to store matrices. See Matrix and Vector Storage for more details.
ldb: The leading dimension of B. If matrices are stored using column major layout, ldb must be at least k if B is not transposed, and at least n if B is transposed. If matrices are stored using row major layout, ldb must be at least n if B is not transposed, and at least k if B is transposed.
beta: Scaling factor for matrix C.
c: The pointer to input/output matrix C. It must have a size of at least ldc*n if column major layout is used to store matrices, or at least ldc*m if row major layout is used to store matrices. See Matrix and Vector Storage for more details.
ldc: The leading dimension of C. It must be positive, and at least m if column major layout is used to store matrices or at least n if row major layout is used to store matrices.
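To make the leading-dimension constraints above concrete, here is a small sketch (illustrative helper functions, not part of the oneMKL API) of how a leading dimension locates element (i, j) in each layout, and why it may exceed the matrix's own extent when the matrix is a sub-block of a larger buffer:

```cpp
#include <cstdint>

// Column major: consecutive elements of a COLUMN are contiguous, so
// element (i, j) of a matrix with leading dimension ld lives at i + j*ld.
// Row major: consecutive elements of a ROW are contiguous, so
// element (i, j) lives at i*ld + j.
// In both layouts, ld may be larger than the matrix dimension it bounds
// (e.g. addressing an m-by-k sub-block inside a bigger allocation).
inline std::int64_t col_major_index(std::int64_t i, std::int64_t j, std::int64_t ld) {
    return i + j * ld;
}

inline std::int64_t row_major_index(std::int64_t i, std::int64_t j, std::int64_t ld) {
    return i * ld + j;
}
```

For example, with column major layout and ld = 4, element (1, 2) sits at offset 1 + 2*4 = 9; with row major layout and the same ld, it sits at offset 1*4 + 2 = 6.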
dependencies: List of events to wait for before starting computation, if any. If omitted, defaults to no dependencies.
c: Pointer to the output matrix, overwritten by alpha*op(A)*op(B) + beta*C.
If beta = 0, matrix C does not need to be initialized before calling gemm.
Return value: Output event to wait on to ensure computation is complete.
An example of the USM version of gemm can be found in the Intel® oneMKL installation directory:
examples/sycl/blas/gemm_usm.cpp