Slurm Example Scripts

The following Slurm scripts are meant to be used with small adjustments for your use case. Save them as text files on GLiCID; by convention, their names end with .slurm or .sh.
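
For example, assuming you saved one of the scripts below as my_job.slurm (the file name is just an illustration), you would submit it with sbatch and follow its state with squeue:

# Submit the job script to the scheduler
sbatch my_job.slurm

# List your queued and running jobs
squeue -u "$USER"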

They contain some CHANGEME placeholders. Replace each one with a value that is valid for your project.

Many Slurm environment variables are available inside your scripts if you need them; take a look at the official Slurm documentation to see which ones you can use.
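
As a small illustration, the lines below print a few of these variables and could be dropped into the job section of any script on this page. The names are standard Slurm variables; SLURM_NTASKS is only set when --ntasks is requested, hence the fallback value.

# Print a few variables Slurm sets for every job
echo "Job ID:    ${SLURM_JOB_ID}"
echo "Job name:  ${SLURM_JOB_NAME}"
echo "Node list: ${SLURM_JOB_NODELIST}"
echo "Tasks:     ${SLURM_NTASKS:-1}"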

Basic examples

Hello world

#!/usr/bin/env bash

# BEGIN MANDATORY OPTIONS
#SBATCH --time=0-00:05:00       # Time limit
#SBATCH --qos=debug             # priority/quality of service
#SBATCH --account=CHANGEME      # Replace with your CLAM project name
# END MANDATORY OPTIONS

# BEGIN INFORMATIONAL OPTIONS
#SBATCH --job-name=CHANGEME        # Name for your job
#SBATCH --comment="Run CHANGEME"  # Comment for your job
#SBATCH --output=%x_%j.out      # Output file (%x = job name, %j = job ID)
#SBATCH --error=%x_%j.err       # Error file
#SBATCH --mail-type=BEGIN,END,FAIL   # Mail on job start, end, and failure
#SBATCH --mail-user=CHANGEME   # Email address for the job
# END INFORMATIONAL OPTIONS

# BEGIN RESOURCES OPTIONS
#SBATCH --partition=standard      # partition standard
#SBATCH --ntasks=1              # How many tasks (CPU cores) to use
# END RESOURCES OPTIONS


# BEGIN JOB

# Best practices for Bash
set -euo pipefail

# It's a good idea to always switch the job's working directory to a scratch space

cd /scratch/nautilus/users/"${USER}"/
# or
# cd /scratch/waves/users/"${USER}"/

# Print the name of the compute node running the job
hostname

Apptainer

#!/usr/bin/env bash

# BEGIN MANDATORY OPTIONS
#SBATCH --time=0-00:05:00       # Time limit
#SBATCH --qos=debug             # priority/quality of service
#SBATCH --account=CHANGEME      # Replace with your CLAM project name
# END MANDATORY OPTIONS

# BEGIN INFORMATIONAL OPTIONS
#SBATCH --job-name=CHANGEME        # Name for your job
#SBATCH --comment="Run CHANGEME"  # Comment for your job
#SBATCH --output=%x_%j.out      # Output file
#SBATCH --error=%x_%j.err       # Error file
#SBATCH --mail-type=BEGIN,END,FAIL   # Mail on job start, end, and failure
#SBATCH --mail-user=CHANGEME   # Email address for the job
# END INFORMATIONAL OPTIONS

# BEGIN RESOURCES OPTIONS
#SBATCH --partition=standard      # partition standard
#SBATCH --ntasks=1              # How many tasks (CPU cores) to use
# END RESOURCES OPTIONS


# BEGIN JOB

# Best practices for Bash
set -euo pipefail

# It's a good idea to always switch the job's working directory to a scratch space

cd /scratch/nautilus/users/"${USER}"/
# or
# cd /scratch/waves/users/"${USER}"/


# Load the Apptainer module
module load apptainer


# Run the container image (the .sif file must be executable)
./my_image.sif
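
Running the image this way executes the container's default runscript. If you prefer explicit Apptainer commands, here is a sketch of equivalent invocations; the image name, command, and input file are placeholders:

# Same effect as ./my_image.sif
apptainer run my_image.sif

# Run an arbitrary command inside the container, binding the scratch
# space so your data stays visible from inside the container
apptainer exec --bind /scratch my_image.sif mycommand --input data.txt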

MPI examples

OpenMPI

#!/usr/bin/env bash

# BEGIN MANDATORY OPTIONS
#SBATCH --time=0-00:05:00       # Time limit
#SBATCH --qos=debug             # priority/quality of service
#SBATCH --account=CHANGEME      # Replace with your CLAM project name
# END MANDATORY OPTIONS

# BEGIN INFORMATIONAL OPTIONS
#SBATCH --job-name=CHANGEME        # Name for your job
#SBATCH --comment="Run CHANGEME"  # Comment for your job
#SBATCH --output=%x_%j.out      # Output file
#SBATCH --error=%x_%j.err       # Error file
#SBATCH --mail-type=BEGIN,END,FAIL   # Mail on job start, end, and failure
#SBATCH --mail-user=CHANGEME   # Email address for the job
# END INFORMATIONAL OPTIONS

# BEGIN RESOURCES OPTIONS
#SBATCH --partition=standard      # partition standard
#SBATCH --ntasks=16             # How many MPI tasks to launch
# END RESOURCES OPTIONS


# BEGIN JOB

# Best practices for Bash
set -euo pipefail

# It's a good idea to always switch the job's working directory to a scratch space

cd /scratch/nautilus/users/"${USER}"/
# or
# cd /scratch/waves/users/"${USER}"/


# Load the necessary modules, if needed
module load openmpi/xxxxxx

# All of the complexity of mpirun is handled by srun.
srun myopenmpibinary

IntelMPI

#!/usr/bin/env bash

# BEGIN MANDATORY OPTIONS
#SBATCH --time=0-00:05:00       # Time limit
#SBATCH --qos=debug             # priority/quality of service
#SBATCH --account=CHANGEME      # Replace with your CLAM project name
# END MANDATORY OPTIONS

# BEGIN INFORMATIONAL OPTIONS
#SBATCH --job-name=CHANGEME        # Name for your job
#SBATCH --comment="Run CHANGEME"  # Comment for your job
#SBATCH --output=%x_%j.out      # Output file
#SBATCH --error=%x_%j.err       # Error file
#SBATCH --mail-type=BEGIN,END,FAIL   # Mail on job start, end, and failure
#SBATCH --mail-user=CHANGEME   # Email address for the job
# END INFORMATIONAL OPTIONS

# BEGIN RESOURCES OPTIONS
#SBATCH --partition=standard      # partition standard
#SBATCH --ntasks=16             # How many MPI tasks to launch
# END RESOURCES OPTIONS


# BEGIN JOB

# Best practices for Bash
set -euo pipefail

# It's a good idea to always switch the job's working directory to a scratch space

cd /scratch/nautilus/users/"${USER}"/
# or
# cd /scratch/waves/users/"${USER}"/


# Load the necessary modules, if needed
module load intel/mpi/xxxxxx

# All of the complexity of mpirun is handled by srun.
# You need to export this variable for IntelMPI to work correctly
export I_MPI_PMI_LIBRARY=/usr/lib64/libpmi2.so

srun --mpi=pmi2 myintelmpibinary

Hybrid examples

OpenMP + OpenMPI

#!/usr/bin/env bash

# BEGIN MANDATORY OPTIONS
#SBATCH --time=0-00:05:00       # Time limit
#SBATCH --qos=debug             # priority/quality of service
#SBATCH --account=CHANGEME      # Replace with your CLAM project name
# END MANDATORY OPTIONS

# BEGIN INFORMATIONAL OPTIONS
#SBATCH --job-name=CHANGEME        # Name for your job
#SBATCH --comment="Run CHANGEME"  # Comment for your job
#SBATCH --output=%x_%j.out      # Output file
#SBATCH --error=%x_%j.err       # Error file
#SBATCH --mail-type=BEGIN,END,FAIL   # Mail on job start, end, and failure
#SBATCH --mail-user=CHANGEME   # Email address for the job
# END INFORMATIONAL OPTIONS

# BEGIN RESOURCES OPTIONS
#SBATCH --partition=standard      # partition standard
#SBATCH --ntasks=2              # How many MPI tasks to use
#SBATCH --cpus-per-task=8       # How many OpenMP threads per MPI task
# END RESOURCES OPTIONS


# BEGIN JOB

# Best practices for Bash
set -euo pipefail

# It's a good idea to always switch the job's working directory to a scratch space

cd /scratch/nautilus/users/"${USER}"/
# or
# cd /scratch/waves/users/"${USER}"/


# Load the necessary modules, if needed
module load openmpi/xxxxxx

export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK}"

# All of the complexity of mpirun is handled by srun.
srun myhybridbinary

Application-specific examples

Coming soon! Please send us your Slurm scripts so we can add them to the docs!