Intel® Advisor Help

Model Offloading to a GPU

Find high-impact opportunities to offload/run your code and identify potential performance bottlenecks on a target graphics processing unit (GPU) by running the Offload Modeling perspective.

The Offload Modeling perspective can help you identify the code regions that are most profitable to offload to a GPU and estimate the performance you could gain by offloading them.

Note

You can model application performance only on Intel® GPUs.

How It Works

The Offload Modeling perspective runs the following steps:

  1. Get the baseline performance data for your application by running a Survey analysis.
  2. Identify the number of times loops are invoked and executed and the number of floating-point and integer operations, and estimate cache and memory traffic on the target device memory subsystem, by running the Characterization analysis.
  3. Mark up loops of interest and identify loop-carried dependencies that might block parallel execution by running the Dependencies analysis.
  4. Estimate the total program speedup on a target device, along with other performance metrics, according to Amdahl's law, considering speedup only from the most profitable regions, by running Performance Modeling. A region is profitable if its execution time on the target device is less than on the host.
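The Amdahl's-law estimate in the last step can be sketched as follows. This is a minimal illustration of the idea, not Advisor's actual model; the function name, inputs, and example times are hypothetical:

```python
# Sketch of an Amdahl's-law speedup estimate: only profitable regions
# (faster on the target than on the host) contribute to the modeled
# whole-program speedup; everything else keeps its host execution time.

def modeled_speedup(total_time, regions):
    """Estimate whole-program speedup from per-region host/target times.

    regions: list of (host_time, target_time) pairs, in seconds.
    Only profitable regions (target_time < host_time) are offloaded.
    """
    offloaded_host = 0.0    # host time replaced by offloading
    offloaded_target = 0.0  # modeled time of the same work on the GPU
    for host_time, target_time in regions:
        if target_time < host_time:  # profitability check
            offloaded_host += host_time
            offloaded_target += target_time
    new_total = (total_time - offloaded_host) + offloaded_target
    return total_time / new_total

# Example: a 10 s program with two candidate regions. The 6 s region
# runs 4x faster on the GPU; the 1 s region would slow down, so it
# stays on the host and does not count toward the speedup.
print(round(modeled_speedup(10.0, [(6.0, 1.5), (1.0, 2.0)]), 2))
# -> 1.82, i.e. 10 / (10 - 6 + 1.5)
```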

Offload Modeling Summary

The Offload Modeling perspective measures the performance of your application and compares it with its modeled performance on a selected target GPU, so that you can decide which parts of your application to execute on the GPU and how to optimize them to get better performance after offloading.

Example of a Summary report of the Offload Modeling perspective

See Also