This section shows example workflows for analyzing MPI applications with Intel® Advisor. In the commands below, <PATH> is a placeholder for the location of the sample MPI application.
This example shows how to run a Survey analysis to get a basic performance and vectorization report for an MPI application. The analysis is performed for an application run with four processes.
Collect survey data for all four ranks:
$ mpirun -n 4 "advisor --collect=survey --project-dir=./advi" <PATH>/mpi-sample/1_mpi_sample_serial
Alternatively, use the Intel® MPI Library -gtool option to collect data only for the rank of interest (rank 0 in this example):
$ mpirun -n 4 -gtool "advisor --collect=survey --project-dir=./advi:0" <PATH>/mpi-sample/1_mpi_sample_serial
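To collect data for several specific ranks, the -gtool rank set also accepts comma-separated lists and ranges. A minimal sketch that profiles ranks 0 and 2 (the rank numbers are illustrative):
$ mpirun -n 4 -gtool "advisor --collect=survey --project-dir=./advi:0,2" <PATH>/mpi-sample/1_mpi_sample_serial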
If you need to copy the data to the development system, do so now.
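For example, a minimal sketch that copies the whole project directory with scp (the host name and destination path are hypothetical):
$ scp -r ./advi user@dev-system:~/advisor-results/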
Import and finalize the result for the rank of interest (rank 3 in this example):
$ advisor --import-dir=./advi --project-dir=./new-advi --mpi-rank=3 --search-dir src:=<PATH>/mpi_sample
View the imported result in the Intel Advisor GUI:
$ advisor-gui ./new-advi
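If you want to compare several ranks side by side, you can import each rank result into its own project. A minimal sketch assuming four ranks; the ./advi-rank<N> project names are illustrative:
$ for rank in 0 1 2 3; do advisor --import-dir=./advi --project-dir=./advi-rank$rank --mpi-rank=$rank --search-dir src:=<PATH>/mpi_sample; done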
You can proceed to run other analyses one by one. After you finish, import and finalize the result for the MPI rank of interest to view it.
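For example, a Trip Counts and FLOP collection for the same project might look like the following sketch, which mirrors the Survey command above:
$ mpirun -n 4 "advisor --collect=tripcounts --flop --project-dir=./advi" <PATH>/mpi-sample/1_mpi_sample_serial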
For a full vectorization workflow, see the Analyze Vectorization and Memory Aspects of an MPI Application recipe in the Intel® Advisor Cookbook.
This example shows how to run Offload Modeling to get insights into your MPI application performance when modeled on a GPU. In this example, the application is run with four processes, performance is modeled for the gen9_gt2 target device configuration, and $APM is the environment variable that points to the directory with the Offload Modeling scripts; it is set when you set up the Intel Advisor environment.
To model performance:
Generate pre-configured command lines for the required analyses without running them by using the --dry-run option:
$ advisor-python $APM/collect.py ./advi --dry-run --config=gen9_gt2 -- <PATH>/mpi-sample/1_mpi_sample_serial
To run the generated analyses for an MPI application, prepend mpirun -n 4 to each command and enclose the advisor part in quotation marks, as in the commands below.
Run the Survey analysis:
$ mpirun -n 4 "advisor --collect=survey --project-dir=./advi --return-app-exitcode --auto-finalize --static-instruction-mix --stackwalk-mode=online" <PATH>/mpi-sample/1_mpi_sample_serial
Run the markup step for each rank result:
$ for x in ./advi/rank.*; do advisor-python $APM/collect.py $x --arch gen --markup generic; done
To run the markup for a single rank only, use:
advisor-python $APM/collect.py <project-dir>/rank.<n> --arch gen --markup generic
where <n> is the number of the MPI rank. You need to specify this path in all other analyses.
Run the Trip Counts and FLOP analysis with data transfer analysis and cache simulation enabled:
$ mpirun -n 4 "advisor --collect=tripcounts --project-dir=./advi --return-app-exitcode --flop --auto-finalize --ignore-checksums --stacks --enable-data-transfer-analysis --track-memory-objects --profile-jit --cache-sources --track-stack-accesses --enable-cache-simulation --cache-config=3:1w:4k/1:64w:512k/1:16w:8m" <PATH>/mpi-sample/1_mpi_sample_serial
Run the Dependencies analysis to check for loop-carried dependencies:
$ mpirun -n 4 "advisor --collect=dependencies --project-dir=./advi --return-app-exitcode --filter-reductions --loop-call-count-limit=16 --ignore-checksums" <PATH>/mpi-sample/1_mpi_sample_serial
Model performance for each rank result:
$ for x in ./advi/rank.*; do advisor-python $APM/analyze.py $x --config=gen9_gt2 -o $x/perf_models; done
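If you need the model for a single rank only (for example, rank 3), you can run the script for that rank directory alone; a sketch based on the loop above:
$ advisor-python $APM/analyze.py ./advi/rank.3 --config=gen9_gt2 -o ./advi/rank.3/perf_models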
The results are generated per rank in the ./advi/rank.<n>/perf_models directories. You can transfer them to the development system and view the reports.
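For example, a minimal sketch that archives only the generated models, keeping the per-rank directory structure (the host name and destination path are hypothetical):
$ tar -czf perf_models.tar.gz ./advi/rank.*/perf_models
$ scp perf_models.tar.gz user@dev-system:~/advisor-results/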
For all analysis types: when using a shared partition on Windows*, either use network paths to specify the project and executable locations, or use the MPI -mapall or -map options to specify these locations on the network drive.
For example:
$ mpiexec -gwdir \\<host1>\mpi -hosts 2 <host1> 1 <host2> 1 advisor --collect=survey --project-dir=\\<host1>\mpi\advi -- \\<host1>\mpi\mpi_sample.exe
$ advisor --import-dir=\\<host1>\mpi\advi --project-dir=\\<host1>\mpi\new-advi --search-dir src:=\\<host1>\mpi --mpi-rank=1
$ advisor --report=survey --project-dir=\\<host1>\mpi\new-advi
Or:
$ mpiexec -mapall -gwdir z:\ -hosts 2 <host1> 1 <host2> 1 advisor --collect=survey --project-dir=z:\advi -- z:\mpi_sample.exe
Or:
$ mpiexec -map z:\\<host1>\mpi -gwdir z:\ -hosts 2 <host1> 1 <host2> 1 advisor --collect=survey --project-dir=z:\advi -- z:\mpi_sample.exe
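After the collection finishes, importing and reporting follow the same pattern with the mapped drive. A sketch that reuses the z: drive from the example above:
$ advisor --import-dir=z:\advi --project-dir=z:\new-advi --mpi-rank=1
$ advisor --report=survey --project-dir=z:\new-advi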