Get Started with Intel® oneAPI Math Kernel Library

The Intel® oneAPI Math Kernel Library (oneMKL) helps you achieve maximum performance with a math computing library of highly optimized, extensively parallelized routines for CPU and GPU. The library provides C and Fortran interfaces for most routines on CPU, and DPC++ interfaces for some routines on both CPU and GPU. Comprehensive support for core math operations is available through the following interfaces:

  • C and Fortran interfaces on CPU
  • DPC++ interfaces on CPU and GPU (refer to the Intel® oneAPI Math Kernel Library—Data Parallel C++ Developer Reference for details)

Before You Begin

Visit the Release Notes page for known issues and the most up-to-date information.

Visit the Intel® oneAPI Math Kernel Library System Requirements page for system requirements.

Visit the Get Started with the Intel® oneAPI DPC++ Compiler page for DPC++ compiler requirements.

Step 1: Install Intel® oneAPI Math Kernel Library

Download the Intel® oneAPI Math Kernel Library as part of the Intel® oneAPI Base Toolkit.

Step 2: Select a Function or Routine

Select a function or routine from oneMKL that is best suited for your problem. Use these resources:


Intel® oneMKL Developer Guide for Linux*

Intel® oneMKL Developer Guide for Windows*

Intel® oneMKL Developer Guide for macOS*

The Developer Guide contains detailed information on several topics including:

  • Compiling and linking applications
  • Building custom DLLs
  • Threading
  • Memory management

Intel® oneMKL Developer Reference - C Language

Intel® oneMKL Developer Reference - Fortran Language

Intel® oneMKL Developer Reference - DPC++ Language

The Developer Reference (in C, Fortran, and DPC++ formats) contains detailed descriptions of the functions and interfaces for all library domains.

Intel® Math Kernel Library Function Finding Advisor

Use the LAPACK Function Finding Advisor to explore the LAPACK routines suited to a particular problem. For example, if you specify an operation as:

  • Routine type: Computational
  • Computational problem: Orthogonal factorization
  • Matrix type: General
  • Operation: Perform QR factorization

the advisor lists the matching LAPACK routines (in this case, the ?geqrf family).

Step 3: Link Your Code

Use the Intel® MKL Link Line Advisor to configure the link command according to your program features.
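The link lines in this section assume the oneAPI environment has been loaded so that MKLROOT is defined. A typical setup looks like the following; the install prefix shown is the default and may differ on your system.

```shell
# Load the oneAPI environment; this defines MKLROOT and updates
# PATH / LD_LIBRARY_PATH. Default Linux install prefix assumed.
source /opt/intel/oneapi/setvars.sh
# On Windows, run instead:
#   "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
echo "MKLROOT = ${MKLROOT}"
```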

Some limitations and additional requirements:

Intel® oneAPI Math Kernel Library for DPC++ supports only the mkl_intel_ilp64 interface library with sequential or TBB threading.

For DPC++ interfaces with static linking on Linux

dpcpp -fsycl-device-code-split=per_kernel -DMKL_ILP64 <typical user includes and linking flags and other libs> ${MKLROOT}/lib/intel64/libmkl_sycl.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_ilp64.a ${MKLROOT}/lib/intel64/libmkl_<sequential|tbb_thread>.a ${MKLROOT}/lib/intel64/libmkl_core.a -Wl,--end-group -lsycl -lOpenCL -lpthread -ldl -lm

For example, to build and statically link main.cpp with ILP64 interfaces and TBB threading:

dpcpp -fsycl-device-code-split=per_kernel -DMKL_ILP64 -I${MKLROOT}/include main.cpp ${MKLROOT}/lib/intel64/libmkl_sycl.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_ilp64.a ${MKLROOT}/lib/intel64/libmkl_tbb_thread.a ${MKLROOT}/lib/intel64/libmkl_core.a -Wl,--end-group -lsycl -lOpenCL -ltbb -lpthread -ldl -lm

For DPC++ interfaces with dynamic linking on Linux

dpcpp -DMKL_ILP64 <typical user includes and linking flags and other libs> -L${MKLROOT}/lib/intel64 -lmkl_sycl -lmkl_intel_ilp64 -lmkl_<sequential|tbb_thread> -lmkl_core -lsycl -lOpenCL -lpthread -ldl -lm

For example, to build and dynamically link main.cpp with ILP64 interfaces and TBB threading:

dpcpp  -DMKL_ILP64 -I${MKLROOT}/include main.cpp -L${MKLROOT}/lib/intel64 -lmkl_sycl -lmkl_intel_ilp64 -lmkl_tbb_thread -lmkl_core -lsycl -lOpenCL -ltbb -lpthread -ldl -lm

For DPC++ interfaces with static linking on Windows

dpcpp -fsycl-device-code-split=per_kernel -DMKL_ILP64 <typical user includes and linking flags and other libs> mkl_sycl.lib mkl_intel_ilp64.lib mkl_<sequential|tbb_thread>.lib mkl_core.lib sycl.lib OpenCL.lib

For example, to build and statically link main.cpp with ILP64 interfaces and TBB threading:

dpcpp -fsycl-device-code-split=per_kernel -DMKL_ILP64 -I"%MKLROOT%\include" main.cpp mkl_sycl.lib mkl_intel_ilp64.lib mkl_tbb_thread.lib mkl_core.lib sycl.lib OpenCL.lib

For DPC++ interfaces with dynamic linking on Windows

dpcpp -DMKL_ILP64 <typical user includes and linking flags and other libs> mkl_sycl_dll.lib mkl_intel_ilp64_dll.lib mkl_<sequential|tbb_thread>_dll.lib mkl_core_dll.lib sycl.lib OpenCL.lib

For example, to build and dynamically link main.cpp with ILP64 interfaces and TBB threading:

dpcpp -DMKL_ILP64 -I"%MKLROOT%\include" main.cpp -L"%MKLROOT%\lib\intel64" mkl_sycl_dll.lib mkl_intel_ilp64_dll.lib mkl_tbb_thread_dll.lib mkl_core_dll.lib sycl.lib OpenCL.lib

For C Interfaces with OpenMP Offload Support

Use the classic Intel® MKL interface, threading, and core libraries as recommended by the Intel® Math Kernel Library Link Line Advisor tool, and add -lOpenCL to the link line.

See the C OpenMP Offload Developer Guide for additional build and link options that may be necessary, such as -fiopenmp and -fopenmp-targets=spir64.

For example, to build and dynamically link main.cpp on Linux with ILP64 interfaces and OpenMP threading:

icx -fiopenmp -fopenmp-targets=spir64 -DMKL_ILP64 -I${MKLROOT}/include main.cpp -L${MKLROOT}/lib/intel64 -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -lOpenCL -lpthread -ldl -lm

Note: these two options (-fiopenmp, -fopenmp-targets=spir64) are also required on the build and link lines on Windows.

Find More


Intel MKL training courses

The online training site is an excellent resource for learning Intel® MKL basics with Get Started guides, videos, tutorials, webinars, and technical articles.

Tutorial: Using Intel® Math Kernel Library for Matrix Multiplication

This tutorial demonstrates how you can use the Intel® MKL to multiply matrices, measure the performance of matrix multiplication, and control threading.

Intel® oneAPI Math Kernel Library (oneMKL) Release Notes

The release notes contain information specific to the latest release of oneMKL, including new and changed features, along with links to the principal online resources for the release. You can also find information on:

  • What's new in the release
  • Product contents
  • Obtaining technical support
  • License definitions

Other Intel® MKL Documentation

Related documentation, such as performance data, application notes, and examples.

Intel® oneAPI Math Kernel Library

The Intel® oneAPI Math Kernel Library (oneMKL) product page. See this page for support and online documentation.

Notices and Disclaimers

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit www.intel.com/benchmarks.

Intel technologies may require enabled hardware, software or service activation.

No product or component can be absolutely secure.

Your costs and results may vary.

© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

Optimization Notice

Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.

Notice revision #20110804

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.