# Documentation links
Note that documentation, and especially web-based documentation, is very fluid. Links change rapidly; they were correct when this page was developed right after the course, but there is no guarantee that they are still correct when you read this, and they are only guaranteed to be updated at a course event.
This documentation page is far from complete but bundles a lot of links mentioned during the presentations, and some more.
## Web documentation
- Slurm version 23.02.7, on the system after the August-September 2024 update
- HPE Cray Programming Environment web documentation contains a lot of HTML-processed man pages in an easier-to-browse format than the man pages on the system.
  The presentations on debugging and profiling tools referred frequently to pages on this web site. The manual pages mentioned in those presentations are also included in the web documentation, which is the easiest way to access them.
- Cray PE GitHub account with whitepapers and some documentation.
- Cray DSMML - Distributed Symmetric Memory Management Library
- Clang documentation (usually for the latest version)
- Clang 13.0.0 documentation (basis for aocc/3.2.0)
- Clang 15.0.0 documentation (cce/15.0.0 and cce/15.0.1 in 22.12/23.03)
- Clang 16.0.0 documentation (cce/16.0.0 in 23.09 and aocc/4.1.0 in 23.12/24.03)
- Clang 17.0.1 documentation (cce/17.0.0 in 23.12 and cce/17.0.1 in 24.03)
- AOCC 4.0 Compiler Options Quick Reference Guide (version 4.0 compilers will come when the 23.05 or later CPE release gets installed on LUMI)
- rocminfo application for reporting system info (a short usage sketch follows this list).
- Libraries:
    - Random number generation: rocRAND
    - Iterative solvers: rocALUTION
    - Machine learning libraries: MIOpen (similar to cuDNN), Tensile (GEMM autotuner), RCCL (ROCm analogue of NCCL) and Horovod (distributed ML)
    - Machine learning frameworks: TensorFlow, PyTorch and Caffe
- Development tools:
    - rocgdb resources:
        - 2021 Linux Plumbers Conference presentation, with a YouTube video of part of the presentation
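As a quick, hedged illustration of the rocminfo tool listed above: it queries the GPU driver, so it has to run on a LUMI-G compute node rather than on a login node. The partition and project names below are placeholders only; adjust them to your own allocation.

```bash
# Make the ROCm tools available (they may already be in the PATH)
module load rocm

# Report the GPU properties of one GCD on a GPU node;
# the partition and account values are examples only
srun --partition=dev-g --account=project_465000000 \
     --nodes=1 --gpus-per-node=1 --time=00:05:00 \
     rocminfo
```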
## Man pages
A selection of man pages explicitly mentioned during the course (a short example of viewing some of them follows the list):
- Compilers:

    PrgEnv | C | C++ | Fortran |
    ---|---|---|---|
    PrgEnv-cray | man craycc | man crayCC | man crayftn |
    PrgEnv-gnu | man gcc | man g++ | man gfortran |
    PrgEnv-aocc/PrgEnv-amd | - | - | - |
    Compiler wrappers | man cc | man CC | man ftn |
- Web-based versions of the compiler wrapper manual pages (the version on the system is currently hijacked by the GNU manual pages):
    - man cc (or latest version)
    - man CC (or latest version)
    - man ftn (or latest version)
- OpenMP in CCE
- OpenACC in CCE
- MPI:
    - MPI itself: man intro_mpi or man mpi (or latest version)
    - libfabric: man fabric
    - CXI: man fi_cxi
- LibSci:
    - man intro_libsci and man intro_libsci_acc
    - man intro_blas1, man intro_blas2, man intro_blas3, man intro_cblas
    - man intro_lapack
    - man intro_scalapack and man intro_blacs
    - man intro_irt
    - man intro_fftw3
- DSMML - Distributed Symmetric Memory Management Library: man intro_dsmml
- Slurm manual pages are also all on the web and are easily found by Google, but are usually those for the latest version. The links on this page are for the version on LUMI at the time of the course.
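A minimal sketch of consulting a few of the man pages listed above on LUMI (assuming a login shell with the default HPE Cray PE modules loaded):

```bash
# Introduction pages for Cray MPICH and Cray LibSci
man intro_mpi
man intro_libsci

# Compiler wrapper man page; as noted above, the on-system version is
# currently hijacked by the GNU manual pages, so prefer the web version
man cc
```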
## Via the module system
Most HPE Cray PE modules contain links to further documentation. Try `module help cce`, etc.
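For example, to see what the cce module itself points to (any other HPE Cray PE module name can be substituted):

```bash
# Print the help text of the cce module, which includes documentation links
module help cce

# A one-line description of the module
module whatis cce
```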
## From the commands themselves
PrgEnv | C | C++ | Fortran |
---|---|---|---|
PrgEnv-cray | craycc --help, craycc --craype-help | crayCC --help, crayCC --craype-help | crayftn --help, crayftn --craype-help |
PrgEnv-gnu | gcc --help | g++ --help | gfortran --help |
PrgEnv-aocc | clang --help | clang++ --help | flang --help |
PrgEnv-amd | amdclang --help | amdclang++ --help | amdflang --help |
Compiler wrappers | cc --craype-help, cc --help | CC --craype-help, CC --help | ftn --craype-help, ftn --help |
For the PrgEnv-gnu compilers, the `--help` option only shows a little bit of help information, but it mentions further options to get help about specific topics.
Further commands that provide extensive help on the command line: `rocm-smi --help`, even on the login nodes.
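A few examples of the commands from the table above, as one might run them on a login node (the comments reflect the usual behaviour of these flags):

```bash
# Options understood by the Cray compiler wrapper itself
cc --craype-help

# --help is forwarded to the underlying compiler of the currently loaded PrgEnv
ftn --help

# Extensive help for the ROCm System Management Interface,
# available even on the login nodes
rocm-smi --help
```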
## Documentation of other Cray EX systems
Note that these systems may be configured differently, and this especially applies to the scheduler, so not all documentation for those systems applies to LUMI. Yet these web sites do contain a lot of useful information.
- Archer2 documentation. Archer2 is the national supercomputer of the UK, operated by EPCC. It is an AMD CPU-only cluster. Two important differences with LUMI are that (a) the cluster uses AMD Rome CPUs with groups of 4 instead of 8 cores sharing L3 cache, and (b) the cluster uses Slingshot 10 instead of Slingshot 11, which has its own bugs and workarounds.
  It includes a page on cray-python referred to during the course.
- ORNL Frontier User Guide and ORNL Crusher Quick-Start Guide. Frontier is the first USA exascale cluster and is built from nodes that are very similar to the LUMI-G nodes (same CPU and GPUs but a different storage configuration), while Crusher is the 192-node early access system for Frontier. One important difference is the configuration of the scheduler, which has one core reserved in each CCD to give a more regular structure than LUMI.
- KTH Dardel documentation. Dardel is the Swedish "baby-LUMI" system. Its CPU nodes use the AMD Rome CPU instead of AMD Milan, but its GPU nodes are the same as in LUMI.
- Setonix User Guide. Setonix is a Cray EX system at Pawsey Supercomputing Centre in Australia. The CPU and GPU compute nodes are the same as on LUMI.