System architecture
The MesoBFC cluster is composed of two kinds of partitions: CPU and GPU partitions.
CPU partitions
mpi1 = Skylake_AVX512 processors
The first CPU partition, mpi1, is composed of 36 nodes based on Intel Skylake processors:
- 864 cores
- 3456 GB RAM
- 71.9 Tera FLOP/s
Hardware model | Dell C6420
---|---
CPU Model | Skylake_AVX512
Processor | Intel(R) Xeon(R) Gold 6126
Processors per node | 2
Cores per processor | 12
Cores per node | 24
Clock rate | 2.60 GHz
RAM | 92 GB
Nodes are interconnected via a 100 Gb/s Omni-Path network.
mpi2 = Sapphire Rapids processors
The second CPU partition, mpi2, is composed of 48 nodes based on Intel Sapphire Rapids processors:
- 2304 cores
- 12288 GB RAM
- 191.7 Tera FLOP/s
Hardware model | Dell C6620
---|---
CPU Model | Sapphire Rapids
Processor | Intel(R) Xeon(R) Gold 6442Y
Processors per node | 2
Cores per processor | 24
Cores per node | 48
Clock rate | 2.60 GHz
RAM | 252 GB
Nodes are interconnected via a 100 Gb/s Mellanox InfiniBand network.
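The aggregate Tera FLOP/s figures quoted above for mpi1 and mpi2 follow from the usual peak estimate of cores × clock rate × FLOPs per cycle. Below is a minimal sketch of that arithmetic, assuming 32 double-precision FLOPs per cycle per core (AVX-512 with two FMA units, supported by both the Gold 6126 and the Gold 6442Y); sustained application performance will be lower than this theoretical peak.

```python
# Theoretical peak double-precision throughput for the CPU partitions,
# derived from the per-node figures in the tables above.
# The factor of 32 FLOP/cycle/core (AVX-512, two FMA units per core:
# 8 doubles x 2 ops x 2 units) is an assumption, not a measured value.

def peak_tflops(nodes: int, cores_per_node: int, clock_ghz: float,
                flops_per_cycle: int = 32) -> float:
    """Peak DP performance of a homogeneous partition, in Tera FLOP/s."""
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

print(f"mpi1: {peak_tflops(36, 24, 2.60):.1f} Tera FLOP/s")  # -> 71.9
print(f"mpi2: {peak_tflops(48, 48, 2.60):.1f} Tera FLOP/s")  # -> 191.7
```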
nompi
The nompi partition is currently made up of 2 nodes taken from the mpi2 (Sapphire Rapids) partition.
This partition is dedicated to small jobs that cannot fill an entire node.
In the future, this partition may contain heterogeneous nodes (different CPUs, RAM, etc.) that are not interconnected with InfiniBand.
transfer
The 'transfer' partition should only be used for offline/batch data copy jobs, not for calculation jobs.
Currently, this partition comprises 4 slots located on the login node.
GPU partition
The 'gpu' partition contains nodes equipped with GPUs.
Currently, the partition is made up of 3 nodes, each equipped with 2 NVIDIA H100 GPUs.
Hardware model | HPE DL385
---|---
CPU Model | ZEN4
Processor | AMD EPYC 9254
Processors per node | 2
Cores per processor | 24
Cores per node | 48
Clock rate | 2.9 GHz
RAM | 772 GB
GPUs | 2x NVIDIA H100
GPU RAM | 2x 94 GB
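On a node of the 'gpu' partition, the GPUs visible to a job and their memory can be checked with nvidia-smi. The sketch below assumes the NVIDIA driver tools are on the node's PATH (expected on H100-equipped hosts, but an assumption here):

```python
# List the GPUs visible on a 'gpu' node (index, model name, total memory).
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)  # one line per GPU: index, model name, memory in MiB
```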
Coming soon: 17 nodes (544 cores, 34 GPUs) based on AMD processors and NVIDIA A100 GPUs:
Hardware model | Dell R7525
---|---
CPU Model | ZEN3
Processor | AMD EPYC 7313
Processors per node | 2
Cores per processor | 16
Cores per node | 32
Clock rate | 3.0 GHz
RAM | 252 GB
GPUs | 2x NVIDIA A100
GPU RAM | 2x 80 GB
Login node
CPU Model | CascadeLake
---|---
Processor | Intel(R) Xeon(R) Silver 4210R
Processors per node | 2
Cores per processor | 10
Cores per node | 20
Clock rate | 2.40 GHz
RAM | 64 GB
Local storage (/tmp) | 2.6 TB
Networks
- 100 Gb/s Omni-Path for MPI and RDMA applications
- 100 Gb/s Mellanox InfiniBand for MPI and RDMA applications
- 10 and 25 Gb/s Ethernet for data transfer and network file systems (NFS, BeeGFS)
Storage
- 4.0 TB for NFS over a 10 Gb/s network.
- 500 TB for BeeGFS over a 10 Gb/s network.
Operating System
- Rocky Linux 8 is deployed on all nodes.