Quickstart Beginner

The documentation is in progress and not yet stabilised.
This page is one of the pages dedicated to beginners.
1. Getting Started With GLiCID
Welcome to the GLiCID Quickstart. This section will guide you through the initial steps of logging into the GLiCID cluster, transferring files, loading software, and submitting your first job.
It is imperative that you read what Yoda has to say to you: French version / English version
2. How to access the GLiCID Cluster
In order to request access to GLiCID, follow these steps:
- Create an account on https://clam.glicid.fr (school account, or CRU account for external users).
- The account will be validated by an administrator.
- Generate an SSH key pair and upload the public key to the CLAM portal (in your profile’s SSH Access tab).
- Edit the ~/.ssh/config file and add the configuration described below.
- Finally, log in using SSH from a terminal (on Linux and macOS) or PowerShell (on Windows).
3. Login using Secure Shell Protocol (SSH)
At GLiCID, we use key-based SSH authentication for secure access. We’ll show you how to set it up on your system using OpenSSH, which is available on GNU/Linux, macOS, and Windows (PowerShell).
3.1. OpenSSH Key Generation
To ensure security, use ed25519 key pairs for key-based authentication. Generate a key with this command:
ssh-keygen -t ed25519
Note: Should the file ~/.ssh/id_ed25519 already exist, ssh-keygen will prompt for confirmation before overwriting it. It is advisable not to overwrite the file, especially if it serves as credentials for another system. Instead, opt for a distinct file name, such as ~/.ssh/id_glicid, and ensure consistency by using the same file name in all subsequent instructions provided in this document.
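For instance, if you choose the distinct file name suggested above, the key pair can be generated directly under that name (the -f option of ssh-keygen simply sets the output file):
ssh-keygen -t ed25519 -f ~/.ssh/id_glicid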
Following this, ssh-keygen will prompt you to create a passphrase. While it’s optional to enter a passphrase—simply pressing Enter allows you to proceed without one—it is advisable to provide a robust passphrase. In the future, you’ll need to enter this passphrase to unlock your private key. Using a password manager is recommended for securely storing your key and facilitating the use of complex passphrases.
Ensure the safety and confidentiality of the private key, located at ~/.ssh/id_ed25519, on your local host. Simultaneously, the generated public key, found at ~/.ssh/id_ed25519.pub, must be uploaded to the CLAM user portal at https://clam.glicid.fr.
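If you prefer to copy the public key into the CLAM portal by hand, you can simply print it in your terminal and paste it from there, for example:
cat ~/.ssh/id_ed25519.pub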
3.2. OpenSSH Configuration
To set this up, edit the ~/.ssh/config file and add the following:
Host Bastion                                              (1)
    Hostname bastion.glicid.fr
    User the-login-name                                   (2)
    IdentityFile ~/.ssh/id_ed25519                        (4)
    ForwardAgent yes

Host Nautilus                                             (1)
    Hostname nautilus-devel-001.nautilus.intra.glicid.fr
    User the-login-name                                   (2)
    ProxyJump Bastion                                     (3)
    IdentityFile ~/.ssh/id_ed25519                        (4)
(1) Please note that Nautilus (with a capital "N") is an alias; you must now use ssh Nautilus. This is just an example: the alias can take any name.
(2) To be replaced by the login assigned to you by CLAM. For example: doe-j@univ-nantes.fr for Nantes University, doe-j@ec-nantes.fr for École Centrale de Nantes, or doe-j@univ-angers.fr for Angers University, etc.
(3) Requires OpenSSH client >= 7.3. For older versions, ProxyCommand ssh the-login-name@bastion.glicid.fr -W %h:%p can be used instead.
(4) Be careful not to confuse the keys: it is the private key that must be referenced here.
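If you have just created the ~/.ssh/config file, it is also prudent to make sure it is only writable by you, since OpenSSH may refuse configuration files with overly permissive rights (a general OpenSSH precaution, not something specific to GLiCID):
chmod 600 ~/.ssh/config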
With this configuration you will log in to our Nautilus login node; additional login nodes are available for advanced usage.
Connect to the GLiCID cluster with:
ssh Nautilus
After a successful SSH connection, a new prompt or window will appear, reflecting the environment of the connected system. For instance, if you logged into Nautilus, you should observe the corresponding command-line interface associated with Nautilus.
<username>:~$ ssh Nautilus
#################################################################
# This service is restricted to authorized users only. All #
# activities on this system are logged. #
# Unauthorized access will be fully investigated and reported #
# to the appropriate law enforcement agencies. #
#################################################################
Last login: Tue Nov 21 11:06:05 2023 from 194.167.60.11
[ASCII-art "Nautilus" logo]
-----------------------------------------------------------------------------
Welcome to GLiCID HPC cluster Nautilus
=== Computing Nodes =========================================== #RAM/n = #C =
cnode[301-340] 40 BullSequana X440 (2 AMD EPYC 9474F@3.6GHz 48c) 384 3840
cnode[701-708] 8 BullSequana X440 (2 AMD EPYC 9474F@3.6GHz 48c) 768 768
visu[1-4] 4 BullSequana X450 (2 AMD EPYC 9474F@3.6GHz 48c) 768 384
with Nvidia A40 (48G) 2 GPUs per node
gnode[1-4] 4 BullSequana X410 (2 AMD EPYC 9474F@3.6GHz 48c) 768 384
with Nvidia A100 (80G) 4 GPUs per node
-----------------------------------------------------------------------------
Fast interconnect using InfiniBand 100 Gb/s technology
Shared Storage (scratch) : 427 TB (IBM/Spectrum Scale - GPFS)
Remote Visualization Apps through XCS portal @https://xcs.glicid.fr/xcs/
-----------------------------------------------------------------------------
User storage :
- user directory ......... /home/<username>
- project directory ...... /LAB-DATA/GLiCID/projects/<projectname>
- scratch directory ..... /scratch/users/<username>
- scratch SSD .......... /scratch-shared
- scratch Liger .......... /scratchliger/<old_liger_username> (temporary, ro)
- softwares directory .... /opt/software
-----------------------------------------------------------------------------
Softwares :
- use modules ......... module avail
- use GUIX ............ guix install <software> (documentation for details)
module load guix/latest first (Nautilus only)
-----------------------------------------------------------------------------
Useful Links :
- User DOC ........ https://doc.glicid.fr
- Support ......... https://help.glicid.fr or help@glicid.fr
- Chat ............ bottom right corner on CLAM when admins are available
- Admins .......... tech@glicid.fr
- Forum ........... coming soon
- Status page ..... https://ckc.glicid.fr
[<username>@nautilus-devel-001 ~]$
To close the SSH connection, simply type:
exit
Note: If you are a Windows user and find the terminal interface less familiar, you have the option to use MobaXterm.
5. Software modules
HPC cluster systems typically have a large number of software packages installed. On GLiCID, we use the modules package to manage the user environment for the installed packages. Software modules simplify the utilization of pre-installed software packages and libraries by configuring the necessary environment variables. This makes it simple to use different packages or switch between versions of the same package without conflicts.
5.1. To see all available modules, run:
$ module avail
---------------------------------------- /usr/share/Modules/modulefiles/applications ----------------------------------------
aspect/2.6.0  fidelity/23.20  lammps/15Jun2023  openmolcas/2024  pari/gp2c-0.0.13  turbomole/7.3
aspect/2.6.0_znver3  finemarine/10.1  lammps/29Aug2024  openmolcas/current  sharc/3.0  turbomole/7.5
castem/2021  finemarine/12.1  lammps/29Aug2024_gpu  orca/5.3  sharc/current  turbomole/7.41
castem/2023  gaussian/g16  mrcc/2023  orca/5.4  specfem2d/8.1.0  vasp/6.4.3
cfour/2.1  gaussian/g16reva03  neper/4.10.1  orca/6.0  specfem3d/4.1.1  visit/3.3.3
code-aster/14.6.0  gaussian/g16revc02  openfast/3.5.3  orca/6.1  telemac/v8p2r1
dftb/22.1  gcm/r3393  openfoam/11-openmpi  paraview/5.11.2-mpi  telemac/v8p4r0
dftb/23.1  hyperworks/2022.2  openfoam/v2312-openmpi  pari/2.15.5  telemac/v8p5r0
----------------------------------------- /usr/share/Modules/modulefiles/libraries ------------------------------------------
aocl-blis/4.0  hdf5/1.14.1-2_intel  intel/mkl/2023.1.0  pnetcdf/1.12.3_itl
blas/3.12.0_gnu  hdf5/1.14.1-2_intel_intelmpi  intel/mkl/latest  rdma/46.0_gnu
boost/1.82.0_gnu  intel/ccl/2021.9.0  intel/mkl32/2023.1.0  scalapack/2.2.0_gnu
boost/1.86.0_gnu  intel/ccl/latest  intel/mkl32/latest  scotch/7.0.6
cuda/12.2.0_535.54.03  intel/dnnl-cpu-gomp/2023.1.0  intel/tbb/2021.9.0  suitesparse/7.8.2
eigen/3.4.0  intel/dnnl-cpu-gomp/latest  intel/tbb/latest  szip/2.1_gnu
fftw/3.3.10_gnu_serial  intel/dnnl-cpu-iomp/2023.1.0  intel/tbb32/2021.9.0  ucx/1.14.1_gnu
fftw/3.3.10_intel_serial  intel/dnnl-cpu-iomp/latest  intel/tbb32/latest  ucx/1.17.0_gnu
fftw/3.3.10_intel_serial_sp  intel/dnnl-cpu-tbb/2023.1.0  lapack/3.12.0_gnu  zlib/1.2.8_gnu
fftw/mpi/3.3.10_gnu_13.1.0_openmpi  intel/dnnl-cpu-tbb/latest  libnsl/2.0.1
fftw/mpi/3.3.10_intel_2023.1.0_intelmpi  intel/dnnl/2023.1.0  libtool/2.4.6_gnu
fftw/omp/3.3.10_gnu_omp  intel/dnnl/latest  metis/4.0.3
fftw/omp/3.3.10_intel_omp  intel/dpl/2022.1.0  metis/5.1.0
gmp/6.2.1  intel/dpl/latest  mpfr/4.2.1_gnu
gmsh/4.11.1_gnu  intel/intel_ipp_ia32/2021.8.0  mumps/5.4.1
gmt/4.5.15_gnu  intel/intel_ipp_ia32/latest  nco/5.2.8_gnu
gmt/5.3.1_gnu  intel/intel_ipp_intel64/2021.8.0  ncview/2.1.10_gnu
graphviz/12.2.1  intel/intel_ipp_intel64/latest  netcdf/c-4.9.2_gnu
gsl/2.8_gnu  intel/intel_ippcp_ia32/2021.7.0  netcdf/f-4.6.1_gnu
hdf5/1.10.2_gnu  intel/intel_ippcp_ia32/latest  openssl/3.0.9_gnu
hdf5/1.14.1-2_gnu  intel/intel_ippcp_intel64/2021.7.0  petsc/3.21.5_gnu
hdf5/1.14.1-2_gnu_openmpi  intel/intel_ippcp_intel64/latest  pnetcdf/1.12.3_gnu
----------------------------------------- /usr/share/Modules/modulefiles/compilers ------------------------------------------
amd/4.0.0  intel/compiler-rt/2023.1.0  intel/compiler32/2023.1.0  julia/1.9.4  R-project/4.3.1_gnu_mkl
cmake/3.26.4  intel/compiler-rt/latest  intel/compiler32/latest  llvm/18.1.1  rust/1.77.2
gcc/12.4.0  intel/compiler-rt32/2023.1.0  intel/icc/2023.1.0  nvhpc/22.7
gcc/13.1.0  intel/compiler-rt32/latest  intel/icc/latest  nvhpc/23.9
gcc/14.1.0  intel/compiler/2023.1.0  intel/icc32/2023.1.0  nvhpc/24.5
gcc/14.2.0  intel/compiler/latest  intel/icc32/latest  python/3.11.4
------------------------------------------- /usr/share/Modules/modulefiles/tools --------------------------------------------
apptainer/1.1.6  intel/clck/2021.7.3  intel/dpct/2023.1.0  intel/oclfpga/2023.1.0  ompp/0.8.5_gnu
curl/8.9.1  intel/clck/latest  intel/dpct/latest  intel/oclfpga/latest  ompp/0.8.5_itl
expat/2.6.3  intel/dal/2023.1.0  intel/init_opencl/2023.1.0  intel/vtune/2023.1.0  texinfo/7.1
git/2.44.0  intel/dal/latest  intel/init_opencl/latest  intel/vtune/latest  valgrind/3.21.0
guix/latest  intel/debugger/2023.1.0  intel/inspector/2023.1.0  libffi/3.4.6
guix/v1.1  intel/debugger/latest  intel/inspector/latest  nano/8.3
intel/advisor/2023.1.0  intel/dev-utilities/2021.9.0  intel/itac/2021.9.0  numdiff/5.9.0
intel/advisor/latest  intel/dev-utilities/latest  intel/itac/latest  nvtop/3.1.0
------------------------------------------ /usr/share/Modules/modulefiles/parallel ------------------------------------------
intel/mpi/2021.9.0  openmpi/ucx/4.1.8_gcc_13.1.0_ucx_1.17.0_rdma_46.0_internal
intel/mpi/2021.11  openmpi/ucx/4.1.8_gcc_13.1.0_ucx_1.17.0_rdma_46.0_pmix4
intel/mpi/latest  openmpi/ucx/5.0.6_gcc_13.1.0_ucx_1.17.0_rdma_46.0
openmpi/ucx/4.1.5_gcc_8.5.0_ucx_1.14.1_rdma_46.0  openmpi/ucx/5.0.6_gcc_13.1.0_ucx_1.17.0_rdma_46.0-pmix4
openmpi/ucx/4.1.6_gcc_8.5.0_ucx_1.14.1_rdma_46.0-pmi2  pmix/3.2.2
openmpi/ucx/4.1.6_gcc_13.1.0_ucx_1.17.0_rdma_46.0-pmix3  pmix/4.2.9
openmpi/ucx/4.1.6_gcc_13.1.0_ucx_1.17.0_rdma_46.0-pmix4  pmix/4.2.9_gm
------------------------ /LAB-DATA/GLiCID/projects/modes/blondel-a@univ-nantes.fr/modulefiles/modes -------------------------
cfour/2.1/omp_nautilus  orca/5.3/omp_nautilus  psi4/1.8/omp_nautilus  turbomole/7.4/omp_nautilus
cfour/2.1SSD/omp_nautilus  orca/5.4/omp_nautilus  psi4/current/omp_nautilus  turbomole/7.5/mpi_nautilus
cfour/current/omp_nautilus  orca/6.0/mpi_nautilus  qchem/6.2/omp_nautilus  turbomole/7.5/omp_nautilus
modes_tools/2.0  orca/6.0/omp_nautilus  qchem/current/omp_nautilus  turbomole/7.8/mpi_nautilus
mrcc/2023/omp_nautilus  orca/6.1/mpi_nautilus  turbomole/7.3/mpi_nautilus  turbomole/7.8/omp_nautilus
mrcc/2023SSD/omp_nautilus  orca/6.1/omp_nautilus  turbomole/7.3/omp_nautilus  turbomole/current/mpi_nautilus
mrcc/current/omp_nautilus  orca/current/omp_nautilus  turbomole/7.4/mpi_nautilus  turbomole/current/omp_nautilus
This list is not automatically updated, so the output you get may differ from what is shown here.
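To check quickly whether a given package is installed, you can pass its name to module avail instead of scrolling through the full list; for example:
module avail gcc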
5.3. To load a module, use:
module load <module_name>
For example, to load Apptainer, run:
module load apptainer/1.1.6
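Before loading a module, you can also inspect what it will change in your environment with module show (a standard Modules command), for example:
module show apptainer/1.1.6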
5.4. To check the list of loaded modules, run:
module list
For example:
$ module list
No Modulefiles Currently Loaded.
$ module load gcc/13.1.0 openmpi/ucx/4.1.5_gcc_8.5.0_ucx_1.14.1_rdma_46.0
$ module list
Currently Loaded Modulefiles:
 1) gcc/13.1.0   2) rdma/46.0_gnu   3) ucx/1.14.1_gnu   4) openmpi/ucx/4.1.5_gcc_8.5.0_ucx_1.14.1_rdma_46.0
Some modules may be loaded automatically by others because of dependencies.
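To go back to a clean environment, all loaded modules can be unloaded at once with module purge (a standard Modules command):
module purge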
5.6. To switch between two versions of the same module, run:
module switch <old_module> <new_module>
For example:
$ module list
No Modulefiles Currently Loaded.
$ module load nvhpc/22.7
$ module list
Currently Loaded Modulefiles:
 1) nvhpc/22.7
$ module switch nvhpc/22.7 nvhpc/23.9
$ module list
Currently Loaded Modulefiles:
 1) nvhpc/23.9
6. File Transfer
For transferring files to or from the cluster, we recommend the scp command, using the host alias defined in your SSH configuration (Nautilus in our example). To transfer a file from your system to the cluster, run:
scp file_name Nautilus:/path/to/destination
For example, to transfer a file from your local machine to your /scratch space on Nautilus, run:
scp file_name Nautilus:/scratch/nautilus/users/username/path_to_folder
Conversely, to transfer a file from the cluster to your local machine, run:
scp Nautilus:/scratch/nautilus/users/username/path_to_file /local_path_to_folder
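To copy a whole directory rather than a single file, scp accepts the -r (recursive) option; for example (the directory name my_results is just an illustration):
scp -r my_results Nautilus:/scratch/nautilus/users/username/path_to_folder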
7. Slurm Workload Manager
GLiCID uses Slurm as its workload manager, a flexible and scalable batch system that manages compute node access. To use Slurm, create a job script specifying resources (time, nodes, memory) and commands for execution.
The Slurm script generally follows this format:
#!/bin/bash
# Declaring Slurm Configuration Options
# Loading Software/Libraries
# Running Code
7.1. Sample Slurm Script
For example, let’s create a sample Slurm job script and submit the job to the cluster. First, create a new file using the vim editor (or your favourite editor), insert the following script, and save it with a .slurm or .sh extension (for example, myjob.slurm or myjob.sh):
#!/bin/bash
#SBATCH --job-name=myjob           # Name for your job
#SBATCH --comment="Run My Job"     # Comment for your job
#SBATCH --output=%x_%j.out         # Output file
#SBATCH --error=%x_%j.err          # Error file
#SBATCH --time=0-00:05:00          # Time limit
#SBATCH --ntasks=2                 # Number of tasks
#SBATCH --cpus-per-task=2          # Number of CPUs per task
#SBATCH --mem-per-cpu=10g          # Memory per CPU
#SBATCH --qos=short                # Priority/quality of service

# Command to run
hostname                           # Run the command hostname
In this example, we simply run the command hostname.
7.2. To submit this job, run:
sbatch myjob.slurm
This will submit your job to Slurm for execution, and a message with the Job ID will be displayed.
$ sbatch myjob.slurm
Submitted batch job 3113275 on cluster nautilus
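With the --output pattern %x_%j.out used in the script, the job's output should end up in a file named after the job name and Job ID; once the job has finished, you can inspect it with, for example:
cat myjob_3113275.out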
7.3. To check the status of your jobs, run:
squeue -u $USER
or the equivalent:
squeue --me
For this example, one gets:
$ squeue --me
Fri Dec 15 15:13:26 2023
CLUSTER: nautilus
JOBID    PARTITION  NAME   USER     STATE    TIME  TIME_LIMI  NODES  NODELIST(REASON)
3113275  all        myjob  <login>  RUNNING  0:00  5:00       1      cnode328
CLUSTER: waves
JOBID    PARTITION  NAME   USER     STATE    TIME  TIME_LIMI  NODES  NODELIST(REASON)
7.4. To obtain complete information about a job (allocated resources and execution status), run:
scontrol show job $Job_ID
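For the job submitted above, this would be, for example:
scontrol show job 3113275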
7.5. To cancel a job, run:
scancel $Job_ID
For more information, please check the official documentation of Slurm.