Use Automatic Vectorization

Automatic vectorization is supported on Intel® 64 architectures. The information below will guide you in setting up the auto-vectorizer.

Vectorization Speedup

Where does the vectorization speedup come from? Consider the following sample code, where a, b, and c are integer arrays:

for (i = 0; i <= MAX; i++)
   c[i] = a[i] + b[i];

If vectorization is disabled, for example because you compile with the O1 option or with the -no-vec (Linux) or /Qvec- (Windows) option, the compiler processes the code with unused space in the SIMD registers, even though each register could hold three additional integers. If vectorization is enabled (compile with O2 or higher), the compiler may use that unused register space to perform four additions in a single instruction. The compiler looks for vectorization opportunities whenever you compile at default optimization (O2) or higher.
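
A minimal, self-contained version of this fragment is sketched below so you can try the options yourself; the file name, array size, and initialization values are arbitrary choices for this sketch, not part of the original example.

    /* vec_add.c -- sketch of the integer-addition loop above.
     * Without vectorization: icx -O1 vec_add.c -o novec   (or -O2 -no-vec)
     * With vectorization:    icx -O2 vec_add.c -o vec
     * Windows:               icx-cl /O2 /Qvec- ...  and  icx-cl /O2 ...
     */
    #include <stdio.h>

    #define MAX 1023                      /* arbitrary array bound for this sketch */

    int a[MAX + 1], b[MAX + 1], c[MAX + 1];

    int main(void) {
        int i;
        for (i = 0; i <= MAX; i++) {      /* fill the input arrays */
            a[i] = i;
            b[i] = 2 * i;
        }
        for (i = 0; i <= MAX; i++)        /* the loop the compiler can vectorize */
            c[i] = a[i] + b[i];
        printf("c[%d] = %d\n", MAX, c[MAX]);  /* use the result so it is not optimized away */
        return 0;
    }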

Note

This option enables vectorization at default optimization levels for both Intel® microprocessors and non-Intel microprocessors. Vectorization may call library routines that can result in a greater performance gain on Intel® microprocessors than on non-Intel microprocessors. Vectorization can also be affected by certain options, such as /arch (Windows), -m (Linux), or [Q]x.

To get details about the type of loop transformations and optimizations that took place, use the [Q]opt-report-phase option by itself or along with the [Q]opt-report option.
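
For example, on Linux you might compile the sketch above with a report request such as the following; the phase name vec is an assumption in this sketch, so check the option reference for the phase names your compiler version accepts.

    icx -c -O2 -qopt-report=3 vec_add.c
    icx -c -O2 -qopt-report=3 -qopt-report-phase=vec vec_add.c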

Linux

To evaluate performance enhancement, run guided_matmul_opt_report:

  1. Source an environment script such as setvars.sh in the $ONEAPI_ROOT directory.
  2. Navigate to the oneAPI-samples/DirectProgramming/C++/CompilerInfrastructure/guided_matmul_opt_report directory. This application multiplies a vector by a matrix using the following loop:

        for (i = 0; i < size1; i++) {
            b[i] = 0;
            for (j = 0; j < size2; j++) {
                b[i] += a[i][j] * x[j];
            }
        }

  3. Build and run the application, first without enabling auto-vectorization. The default O2 optimization enables vectorization, so you need to disable it with a separate option.

    icx -qopt-report=3 -O2 -xAVX -no-vec Driver.c Multiply.c -o NoVectMult
    ./NoVectMult

  4. Build and run the application, this time with auto-vectorization.

    icx -qopt-report=3 -O2 -xAVX -vec    Driver.c Multiply.c -o VectMult
    ./VectMult

Windows

To evaluate performance enhancement, run guided_matmul_opt_report:

  1. Run an environment script such as setvars.bat in the %ONEAPI_ROOT% directory.
  2. Navigate to the oneAPI-samples/DirectProgramming/C++/CompilerInfrastructure/guided_matmul_opt_report directory. This application multiplies a vector by a matrix using the following loop:

        for (i = 0; i < size1; i++) {
            b[i] = 0;
            for (j = 0; j < size2; j++) {
                b[i] += a[i][j] * x[j];
            }
        }

  3. Build and run the application, first without enabling auto-vectorization. The default O2 optimization enables vectorization, so you need to disable it with a separate option.

    icx-cl /Qopt-report=3 /O2 /QxAVX /Qvec-  Driver.c Multiply.c -o NoVectMult
    NoVectMult.exe

  4. Build and run the application, this time with auto-vectorization.

    icx-cl /Qopt-report=3 /O2 /QxAVX /Qvec   Driver.c Multiply.c -o VectMult
    VectMult.exe

When you compare the timing of the two runs, you may see that the vectorized version runs faster. The non-vectorized version is only slightly faster than it would be if compiled with the O1 option.
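
One rough way to compare the two Linux builds is the shell's time utility (the sample itself may also report timing; results vary by system):

    time ./NoVectMult
    time ./VectMult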

Obstacles to Vectorization

Several issues, such as non-contiguous memory access and loop-carried data dependencies, do not always prevent vectorization, but they frequently cause the compiler to decide that vectorization would not be worthwhile.
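
As one common illustration (not taken from the original sample), a loop-carried data dependence forces each iteration to consume a value produced by the previous iteration, so the compiler usually cannot vectorize the loop without changing its result:

    /* Sketch of a loop-carried dependence: a[i] reads a[i-1], which was
     * written in the previous iteration, so the iterations cannot safely
     * execute together in SIMD lanes. */
    for (i = 1; i <= MAX; i++)
        a[i] = a[i - 1] + b[i];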

Help the Compiler Vectorize

Sometimes the compiler has insufficient information to decide to vectorize a loop. There are several ways to provide additional information to the compiler, for example by asserting that pointers do not alias or that loop iterations are independent.
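
For example (a general illustration, not the original list), the C99 restrict qualifier tells the compiler that pointers do not alias, and an OpenMP SIMD pragma asserts that the loop iterations are independent; the function name and signature below are hypothetical.

    /* restrict promises that dst, src1, and src2 do not overlap, which
     * removes a potential aliasing obstacle to vectorization. */
    void add_arrays(int *restrict dst, const int *restrict src1,
                    const int *restrict src2, int n) {
        /* The OpenMP SIMD pragma asserts independent iterations; it may
         * require an OpenMP option such as -qopenmp (Linux) or /Qopenmp
         * (Windows) to take effect. */
        #pragma omp simd
        for (int i = 0; i < n; i++)
            dst[i] = src1[i] + src2[i];
    }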
