Changes to `website/sysadmin_julia.md` (9 changes: 6 additions & 3 deletions)

## Providing global package preferences

What you might want to do is to globally override binary dependencies of Julia packages, so-called [Julia artifacts](https://pkgdocs.julialang.org/v1/artifacts/) ([JLLs](https://github.com/JuliaBinaryWrappers)), such that users automatically get redirected to system binaries under the hood. This is especially relevant if vendor-specific binaries (e.g. a Cray MPI library) are the only ones that function properly on the HPC cluster. JUHPC enables doing this in a robust and automatic fashion by overriding package [preferences](https://github.com/JuliaPackaging/Preferences.jl) (see [here](https://github.com/JuliaParallel/JUHPC?tab=readme-ov-file#1-export-environment-variables-for-the-installation-of-some-hpc-key-packages)). Below we describe how to realize this manually. Note, however, that JLLs have the big advantage of being convenient and working nicely together; you should therefore only override preferences if there is a good reason for it.


### Example: MPI.jl and CUDA.jl
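
As a sketch of what such global preferences might look like, consider a `LocalPreferences.toml` placed next to the global `Project.toml`. The library path, ABI, and launcher below are hypothetical, and the canonical way to generate these entries is via `MPIPreferences.use_system_binary()` and `CUDA.set_runtime_version!(local_toolkit=true)` rather than writing them by hand:

```toml
# Hypothetical global preferences redirecting MPI.jl to a system MPI
# (here: a fictitious Cray MPICH path) and CUDA.jl to a local CUDA toolkit.
[MPIPreferences]
_format = "1.0"
abi = "MPICH"
binary = "system"
libmpi = "/opt/cray/pe/mpich/8.1.25/ofi/gnu/9.1/lib/libmpi_gnu_91.so"
mpiexec = "srun"

[CUDA_Runtime_jll]
# The exact keys depend on the CUDA.jl version;
# let CUDA.set_runtime_version! generate them.
local = "true"
version = "12.2"
```

Note that for these preferences to be picked up, the corresponding packages must also be listed in the global `Project.toml` (e.g. under `[extras]`).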
The only thing we then have to do is to append the path to this file to the `JULIA_LOAD_PATH`.

Since every user should get the modified `JULIA_LOAD_PATH` above, the environment variable should best be set directly in the Lmod module that also provides the Julia binaries as well as the MPI and CUDA installations we're pointing to. This way, any user who loads the module and then adds MPI.jl and CUDA.jl to one of their environments (i.e. `] add MPI CUDA`) will automatically use the system MPI and system CUDA under the hood without having to do anything else. That is to say that the global preference system is *opt-out* (the user can always override the global preferences with local preferences, e.g. via a `LocalPreferences.toml`, on a per-project basis).

JUHPC can also be used as part of a recipe for generating modules or uenvs (see [this example](https://github.com/JuliaParallel/JUHPC?tab=readme-ov-file#example-2-using-uenv), used on the ALPS supercomputer at the Swiss National Supercomputing Centre). All required environment variables can be set by sourcing an activate script; alternatively, they can be set by whatever mechanism the module or uenv provides.
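
With JUHPC, activating such a setup might then look roughly as follows (the installation path is hypothetical):

```shell
# Hypothetical path to a JUHPC-generated setup; JUHPC produces an activate
# script that exports the required environment variables
# (JULIA_LOAD_PATH, JULIA_CUDA_MEMORY_POOL, etc.).
source /apps/juhpc_setup/activate
```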

**Side note:** Speaking of setting environment variables in the module file, it is recommended to set `JULIA_CUDA_MEMORY_POOL=none` to disable the [memory pool](https://cuda.juliagpu.org/stable/usage/memory/#Memory-pool) that CUDA.jl uses by default (JUHPC sets this automatically). This is particularly advisable when using a system CUDA, since the memory pool is incompatible with certain CUDA APIs.
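
For illustration, the relevant part of an Lmod module file (Lua) might look like the following sketch, with hypothetical paths:

```lua
-- Hypothetical Lmod module file fragment.
prepend_path("PATH", "/opt/software/julia/1.10.4/bin")

-- A leading empty entry (":") in JULIA_LOAD_PATH expands to Julia's default
-- load path, so the global environment is appended rather than replacing it.
setenv("JULIA_LOAD_PATH", ":/opt/software/julia/global_env")

-- Disable the CUDA.jl memory pool, as recommended with a system CUDA.
setenv("JULIA_CUDA_MEMORY_POOL", "none")
```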

\note{You might want to check out [JuliaHPC_Installer](https://git.uni-paderborn.de/pc2-public/juliahpc_installer) which is a Julia script that automates the installation of a new Julia version via easybuild, including the global `Project.toml` setup discussed above. It is used on [Noctua 2](https://pc2.uni-paderborn.de/hpc-services/available-systems/noctua2) at [PC2](https://pc2.uni-paderborn.de/). However, it is currently not portable out of the box (since e.g. paths are hardcoded).}

At PC2 such a script is provided for every Julia version / Julia module (see [JuliaHPC_Installer](https://git.uni-paderborn.de/pc2-public/juliahpc_installer)).

## Potentially useful resources

* [JUHPC](https://github.com/JuliaParallel/JUHPC): a Julia HPC community scripting project hosted in the JuliaParallel organization. JUHPC creates, in a portable and automatic fashion, an HPC setup for juliaup, julia, and key HPC packages that require system libraries. It is used, among others, on the ALPS supercomputer at the Swiss National Supercomputing Centre.
* [JuliaHPC_Installer](https://git.uni-paderborn.de/pc2-public/juliahpc_installer) by [Carsten Bauer](https://carstenbauer.eu): a Julia script that automates the installation of a new Julia version via easybuild, including the global `Project.toml` setup discussed above. It is used on [Noctua 2](https://pc2.uni-paderborn.de/hpc-services/available-systems/noctua2) at [PC2](https://pc2.uni-paderborn.de/). However, it is currently not portable out of the box (since e.g. paths are hardcoded).
* [Johannes Blaschke](https://github.com/jblaschke) provides [scripts and templates](https://gitlab.blaschke.science/nersc/julia/-/tree/main/modulefiles) to set up module files for Julia on some of NERSC's systems (warning: potentially outdated?)
* [Samuel Omlin](https://github.com/omlins) and colleagues from CSCS provide their [uenv recipes](https://github.com/eth-cscs/alps-uenv) used for the ALPS supercomputer and their [Easybuild configuration files](https://github.com/eth-cscs/production/tree/master/easybuild/easyconfigs/j/Julia) used for the pre-ALPS Piz Daint.

[⤴ _**back to Content**_](#content)