CPU available


1. CPU Cluster Nautilus

1.1. CPU compute servers cnode[301-340,701-708]

1.1.1. Hardware configuration

cnode[301-340] are CPU compute servers with 384GB of memory; cnode[701-708] are CPU compute servers with 768GB of memory.

The cnode[301-340,701-708] servers are Bullx X440-A6 2U4N2S composed of:

2x AMD EPYC Genoa 9474F 48-Core

384GB or 768GB DDR5 memory @4800MT/s

1x 960GB SSD

2x 1GbE RJ45 ports

1x CNX4 25GbE DP OCP3.0 PCIe3.0 x16 SFP28 Ethernet Card

1x InfiniBand ConnectX-6 SP HDR EDR card 100Gb QSFP56 PCIe3 x16

Figure: CPU node view
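
To check how this hardware is exposed to the scheduler (core count, memory, and constraint features), you can query Slurm from a front end. The commands below are standard Slurm client tools; cnode301 is simply one node picked as an example:

# List nodes with their core count, memory (MB) and available features
sinfo -N -o "%N %c %m %f"

# Show the full Slurm view of a single node, including RealMemory and AvailableFeatures
scontrol show node cnode301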

1.1.2. Slurm Constraint

To make it easy to launch jobs from any front end, we have implemented constraints corresponding to particular cluster and node configurations. These constraints allow you to target the desired nodes, especially if you are not on the target cluster.

To use them, add the Slurm option --constraint=<constraint_name> to your submission.

If you do not specify a partition, jobs will start on the default partition; on Nautilus this is the "standard" partition.
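
For a quick interactive test, the same options can be passed to srun before writing a batch script. This is only a sketch: the hostname command stands in for a real workload, and the partition and constraint values are those mentioned above:

srun --partition=standard --constraint="loc_ecn&cpu_genoa" --ntasks=1 hostname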

Example: To request a feature/constraint, add the following line to your submit script:

#SBATCH --constraint="loc_ecn&cpu_genoa"
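
For context, such a constraint line fits into a submit script as in the minimal sketch below; the job name, resource requests, walltime and executable are placeholders to adapt to your own job:

#!/bin/bash
# Minimal example job: adapt the job name, resources, walltime and executable.
#SBATCH --job-name=cpu_genoa_test
#SBATCH --partition=standard
#SBATCH --constraint="loc_ecn&cpu_genoa"
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=48
#SBATCH --time=01:00:00

# Run the (placeholder) program on the allocated resources
srun ./my_program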