Intel® Advisor Help

Manage Invocation Taxes

You can control how to model invocation taxes for your application.

When Intel® Advisor detects a high call count for a potentially profitable code region, it assumes that the kernel invocation tax is paid as many times as the kernel is launched. The result is a high invocation tax and a high cost of offloading, which means that the code region cannot benefit from offloading. This is a pessimistic assumption.

However, for simple applications that do not need to wait for a kernel instance to finish, this cost can be hidden for every launch except the very first one.
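To make the difference concrete, the two estimates can be sketched as below. This is an illustrative model only; the per-launch tax and call count are made-up values, not Advisor defaults:

```python
# Illustrative sketch of the two invocation-tax assumptions.
# All numbers are hypothetical, not Advisor defaults.

def pessimistic_tax(per_launch_tax_us, call_count):
    """Pessimistic: the invocation tax is paid on every kernel launch."""
    return per_launch_tax_us * call_count

def optimistic_tax(per_launch_tax_us, call_count):
    """Optimistic: the tax is paid only for the first launch; the rest is hidden."""
    return per_launch_tax_us

per_launch = 10.0  # hypothetical per-launch invocation tax, microseconds
calls = 100_000    # hypothetical kernel call count

print(pessimistic_tax(per_launch, calls))  # 1000000.0
print(optimistic_tax(per_launch, calls))   # 10.0
```

The gap between the two estimates grows linearly with the call count, which is why the pessimistic assumption can rule out offloading for frequently launched kernels.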

To reflect the two approaches to the kernel invocation tax, the Offload Modeling HTML report has two columns: Invocation Tax, which reports taxes assumed to be paid on each kernel launch, and Configuration Tax, which reports taxes assumed to be paid only once.

You can tell Intel Advisor how to handle invocation taxes for your application when modeling its performance on a target device.

Note

In the commands below, <APM> is an environment variable that points to the script directory. Replace it with $APM on Linux* OS or %APM% on Windows* OS.

Hide All Taxes

For simple applications, it is recommended to enable the optimistic approach for estimating invocation taxes. In this approach, Offload Modeling assumes the invocation tax is paid only the first time the kernel executes.

To enable this approach, use the --assume-hide-taxes option. For example:

advisor --collect=projection --assume-hide-taxes --project-dir=./advi_results

With this option, the HTML report shows the tax in the Configuration Tax column only, and the Invocation Tax column reports 0.

Do Not Hide Taxes

By default, Offload Modeling estimates invocation taxes using the pessimistic approach: it assumes the invocation tax is paid each time the kernel is launched. To enforce this behavior explicitly, use the --assume-never-hide-taxes option.

With this approach, the HTML report shows the tax in the Invocation Tax column only, and the Configuration Tax column reports 0.

Fine-Tune the Number of Hidden Taxes

You can fine-tune the number of invocation taxes to hide by setting the Invoke_tax_ratio parameter in a TOML configuration file to the fraction of invocation taxes you want to hide.

  1. Create a new TOML file, for example, my_config.toml. Copy and paste the following text there:

    [scale]
    # Fraction of invocation taxes to hide.
    # Note: The invocation tax of the first kernel instance is not scaled.
    # Possible values: 0.0 to 1.0
    # Default value: 0.0
    Invoke_tax_ratio = <float>

    where <float> is the fraction of invocation taxes to hide. For example, Invoke_tax_ratio = 0.95 means that 95% of the invocation taxes are hidden and only the remaining 5% are estimated.

  2. Save and close the file.

  3. Run the performance projection with the new TOML file using analyze.py --config <path> or advisor --collect=projection --custom-config=<path>. For example, with advisor:

    advisor --collect=projection --custom-config=my_config.toml --project-dir=./advi_results

    where ./advi_results is the path to your project directory. Replace it with the actual project directory where you collected results before running the command.

    Important

    If you use the configuration parameter to control the number of taxes to hide, do not use the --assume-hide-taxes or --assume-never-hide-taxes option. These options override the value of the configuration parameter.

In the generated HTML report, the Configuration Tax column reports the hidden fraction of the taxes as if paid only for the first kernel execution, and the Invocation Tax column reports the remaining taxes, assuming they are paid each time the kernel executes.
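The scaled accounting described above can be sketched as follows. This is a reading of the documented behavior, not Advisor's exact internal formula, and all numbers are illustrative:

```python
def scaled_invocation_tax(per_launch_tax_us, call_count, invoke_tax_ratio):
    """Total invocation tax when a fraction of taxes is hidden.

    The tax of the first kernel instance is never scaled (see the TOML
    comment above); the remaining call_count - 1 taxes are reduced by
    invoke_tax_ratio. Illustrative model, not Advisor's internal formula.
    """
    first = per_launch_tax_us
    rest = per_launch_tax_us * (call_count - 1) * (1.0 - invoke_tax_ratio)
    return first + rest

# Invoke_tax_ratio = 0.0 reproduces the pessimistic default ...
print(scaled_invocation_tax(10.0, 100_000, 0.0))   # 1000000.0
# ... and 1.0 reproduces the fully optimistic estimate.
print(scaled_invocation_tax(10.0, 100_000, 1.0))   # 10.0
# With Invoke_tax_ratio = 0.95, only 5% of the remaining taxes are estimated.
print(round(scaled_invocation_tax(10.0, 100_000, 0.95), 1))  # 50009.5
```

At the extremes the scaled model matches the two approaches described earlier, so the Invoke_tax_ratio parameter interpolates between them.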

Tip

If you want to model performance for a specific accelerator using a pre-defined configuration file and apply the invocation tax configuration parameter to it, you can specify several configuration files. For example, to model performance on an integrated Intel® Processor Graphics Gen9 configuration with the custom configuration tax, use the following command:
advisor --collect=projection --config=gen9_gt2 --custom-config=my_config.toml --project-dir=./advi_results

where ./advi_results is the path to your project directory. Replace it with the actual project directory where you collected results before running the command.

Related information
Run Offload Modeling Perspective from Command Line