SLIM’s OPTIMUM HPC Cluster (UBC)
In 2014, with the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) and matching support from SINBAD sponsors ION, CGG, Schlumberger, and Sub Salt Solutions, SLIM acquired an SGI Linux HPC cluster (based on Intel’s Ivy Bridge 2.8 GHz E5-2680v2 processor) with a cumulative theoretical peak floating-point performance of 25 Tflops. The compute subsystem of the cluster comprises 1120 CPU cores. High-speed inter-processor communication uses an FDR InfiniBand network at 56 Gb/s. The storage subsystem consists of 176 TB of high-throughput Lustre distributed parallel file-system storage over the InfiniBand network and 6 TB of NFS storage over an Ethernet network. The OPTIMUM cluster has 600 licenses for MATLAB Distributed Computing Server. The system is dedicated to research, development, and testing of SLIM’s algorithms. Configuration specifics:
- Compute subsystem (25 Tflops; see the arithmetic sketch after this list):
    - 56 compute nodes (1120 cores total)
        - 2 CPUs per node (112 total)
        - 10 cores per CPU (2.8 GHz E5-2680v2)
    - Memory (7.5 TB total)
        - 52 nodes with
            - 128 GB per node (6.5 TB total)
            - 6.4 GB per CPU core
        - 4 nodes with
            - 256 GB per node (1 TB total)
            - 12.8 GB per CPU core
    - Inter-processor communication
        - Mellanox InfiniBand network
        - FDR x4 aggregate (56 Gb/s)
        - Blocking factor 2:1
- Storage subsystem (182 TB):
    - Lustre distributed parallel file-system for /data and /scratch
        - 240 TB raw capacity
        - 176 TB formatted redundant capacity
        - 3 GB/s on write and 4 GB/s on read via InfiniBand
    - NFS file system for /home and software
        - 7.2 TB raw capacity
        - 6 TB formatted redundant capacity
        - 128 MB/s via Ethernet
- Access subsystem:
    - Three login nodes
        - 2 CPUs per node
        - 10 cores per CPU (2.8 GHz E5-2680v2)
        - 128 GB per node
        - 2 TB of redundant local storage
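The headline figures above follow directly from the per-node counts. The sketch below is a rough arithmetic check only, not part of the cluster documentation or software; the rate of 8 double-precision flops per core per cycle (the AVX rate commonly used for Ivy Bridge peak estimates) is our assumption rather than a figure from the spec.

```python
# Rough arithmetic check of the compute-subsystem figures quoted above.
# Assumption (not stated in the spec): 8 double-precision flops per core
# per cycle, the AVX rate commonly used for Ivy Bridge peak estimates.

nodes = 56
cpus_per_node = 2
cores_per_cpu = 10
clock_ghz = 2.8
flops_per_cycle = 8  # assumed AVX double-precision rate

cores = nodes * cpus_per_node * cores_per_cpu               # 1120 cores
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0  # ~25.1 Tflops
memory_tb = (52 * 128 + 4 * 256) / 1024.0                   # ~7.5 TB

print(f"{cores} cores, {peak_tflops:.1f} Tflops peak, {memory_tb:.1f} TB RAM")
```

Run as written, this prints 1120 cores, 25.1 Tflops peak, and 7.5 TB of RAM, consistent with the 25 Tflops and 7.5 TB totals quoted above.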
This purchase was made via UBC tender process RFP #2013010325.