Intel® Advisor Help
To model performance of a Data Parallel C++ (DPC++), OpenCL™, or OpenMP* target application on a graphics processing unit (GPU) device, run the GPU-to-GPU modeling workflow of the Offload Modeling perspective.
The GPU-to-GPU performance modeling workflow is similar to the CPU-to-GPU modeling workflow: Intel Advisor collects baseline performance data for the kernels of your application running on a GPU and then models their performance on a target GPU device.
Compared to the CPU-to-GPU performance modeling, the GPU-to-GPU performance modeling is more accurate because it accounts for the similarities in hardware configuration, compiler code generation, and software implementation between the baseline code and the modeled code. During the GPU-to-GPU performance modeling, Intel Advisor runs the Survey, Trip Counts, and FLOP analyses only for the GPU kernels of your application and models their performance on the selected target GPU.
You can run the GPU-to-GPU performance modeling only from the command line using the Intel Advisor Python* scripts. Use one of the following methods:
Run the collect.py and analyze.py Scripts
Run the scripts as follows. First, collect baseline performance metrics for the GPU kernels:
advisor-python <APM>/collect.py <project-dir> --collect=basic --gpu [<analysis-options>] -- <target-application> [<target-options>]
where <analysis-options> is one or several options that modify the script behavior. See collect.py Script for a full option list.
This command runs the Survey, Trip Counts, and FLOP analyses only for the GPU kernels.
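For example, the following command collects the baseline GPU metrics for a hypothetical application binary ./matrix_multiply with a project directory named ./advi_results (both names are placeholders; on Linux*, <APM> typically corresponds to the $APM environment variable set by the Intel Advisor environment scripts):
advisor-python $APM/collect.py ./advi_results --collect=basic --gpu -- ./matrix_multiply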
Then model performance of the GPU kernels on a target GPU:
advisor-python <APM>/analyze.py <project-dir> --gpu [--config=<config-file>] [--out-dir <path>] [<analysis-options>]
where <analysis-options> is one or several options that modify the script behavior. See analyze.py Script for a full option list.
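For example, the following command models performance of the collected GPU kernels for the same hypothetical project directory and writes the reports to a custom location (the target configuration name gen12_tgl and the output path are shown only as an illustration; see analyze.py Script for the supported values):
advisor-python $APM/analyze.py ./advi_results --gpu --config=gen12_tgl --out-dir ./gpu_modeling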
Run the run_oa.py Script
Collect baseline performance metrics for GPU kernels and model their performance on a target GPU:
advisor-python <APM>/run_oa.py <project-dir> --collect=basic --gpu [--config=<config-file>] [--out-dir <path>] [<analysis-options>] -- <target-application> [<target-options>]
where <analysis-options> is one or several options that modify the script behavior. See run_oa.py Script for a full option list.
This command runs the Survey, Trip Counts, and FLOP analyses only for the GPU kernels and models their performance on the selected target GPU.
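For example, the following single command both collects the baseline GPU metrics and models their performance for the same hypothetical ./matrix_multiply binary and ./advi_results project directory used above:
advisor-python $APM/run_oa.py ./advi_results --collect=basic --gpu --out-dir ./gpu_modeling -- ./matrix_multiply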
Once Intel Advisor finishes the analyses, it prints a result summary and the result file location to the command prompt. By default, if you did not use the --out-dir option to change the result location, Intel Advisor generates a set of reports, including an interactive HTML report, in the <project-dir>/e<NNN>/pp<NNN>/data.0 directory.
Examine the results with the interactive HTML report. See Explore Performance Gain from GPU-to-GPU Modeling for details.