YEMOJA (UBC)

The YEMOJA Supercomputer was established in May 2015 through a collaboration of the Brazilian Agência Nacional do Petróleo, Gás Natural e Biocombustíveis (ANP), BG Brasil (now Shell), the Brazilian Ministry of Science, Technology and Innovation (MCTI), the Financiadora de Estudos e Projetos (FINEP), SENAI Nacional, the Government of Bahia, and Intel. At the time of purchase, YEMOJA placed 95th on the global supercomputer TOP500 list and still ranks as the fastest supercomputer in South America.

Housed at the SENAI CIMATEC Supercomputing Centre for Industrial Innovation in Bahia, the YEMOJA Supercomputer was designed as the supporting facility for the International Inversion Initiative (III), an innovative research program to develop the technologies of Full Waveform Inversion (FWI) and apply them to seismic imaging and processing of Brazil's extensive offshore energy resources. The III collaboration draws together researchers from the Universidade Federal do Rio Grande do Norte (UFRN), Natal; the SLIM group of Prof. Felix Herrmann at the University of British Columbia; and the Fullwave Consortium of Prof. Mike Warner at Imperial College London (UK). BG's geophysicists C. Jones, H. Macintyre, and P. Nadin played key roles in conceiving this ambitious project and bringing it to fruition.

YEMOJA is a Silicon Graphics ICE-X Linux HPC cluster (based on Intel's Ivy Bridge 3 GHz E5-2690v2 processor) with a cumulative theoretical peak floating-point performance of 405 Tflops. The compute subsystem of the cluster comprises 17120 CPU cores. High-speed inter-processor communication uses an FDR InfiniBand network at 56 Gb/s in a 6D SGI enhanced hypercube topology. The storage subsystem consists of a 432 TB high-throughput Lustre distributed parallel file system served over the InfiniBand network. The YEMOJA cluster has 4,000 licenses for MATLAB Distributed Computing Server, among the largest installations of MATLAB in the world.

The system is used for research, development, and testing of FWI and related inversion technologies, and for the development of techniques in seismic data processing, big-data handling, and machine learning. As a key partner in the III project, SLIM is allocated a dedicated 40% of capacity on this resource, with the aim of field-testing SLIM's theoretical research results on industrial-scale (3D) datasets.

For more information:

  1. SENAI CIMATEC news release
  2. World Oil feature article
  3. Top 500

Cluster configuration (the aggregate figures are recomputed in a short sketch after the list):

  1. Compute subsystem (405 Tflops):
    1. 856 compute nodes (17120 cores total)
      1. 2 CPUs per node (1712 total)
      2. 10 cores per CPU (3 GHz E5-2690v2)
    2. Memory (132 TB total)
      1. 656 nodes with
        1. 128 GB per node (82 TB total)
        2. 6.4 GB per CPU core
      2. 200 nodes with
        1. 256 GB per node (50 TB total)
        2. 12.8 GB per CPU core
    3. Inter-processor communication
      1. Mellanox InfiniBand network
      2. FDR x4 aggregate (56 Gb/s)
      3. 6D SGI enhanced hypercube with max 7 hops
  2. Storage subsystem
    1. Lustre distributed parallel file system for /data and /scratch
      1. 432 TB raw capacity
      2. 324 TB formatted redundant capacity
      3. 30 GB/s write and 40 GB/s read via InfiniBand
  3. Access subsystem
    1. 10 login nodes
      1. 2 CPUs per node
      2. 10 cores per CPU (3 GHz E5-2690v2)
      3. 256 GB per node
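
As a sanity check, the aggregate figures above follow directly from the per-node specifications. The minimal Python sketch below recomputes them; the one assumption not stated in the source is the 8 double-precision FLOPs per core per cycle delivered by Ivy Bridge's AVX units, which gives a naive peak of roughly 411 Tflops, slightly above the officially quoted 405 Tflops.

    # Sanity check of YEMOJA's aggregate figures from its per-node specs.
    # Assumption: Ivy Bridge (E5-2690v2) delivers 8 double-precision FLOPs
    # per core per cycle via AVX; the official quoted peak is 405 Tflops.

    nodes = 856
    cpus_per_node = 2
    cores_per_cpu = 10
    clock_ghz = 3.0

    cores = nodes * cpus_per_node * cores_per_cpu   # 17120 cores
    cpus = nodes * cpus_per_node                    # 1712 CPUs

    # Memory: 656 nodes at 128 GB plus 200 nodes at 256 GB, in binary TB.
    memory_tb = (656 * 128 + 200 * 256) / 1024      # 132 TB

    # Storage: formatted redundant capacity as a fraction of raw capacity.
    usable = 324 / 432                              # 0.75, i.e. 75% of raw

    # Naive peak: cores x clock (GHz) x FLOPs/cycle, converted to Tflops.
    peak_tflops = cores * clock_ghz * 8 / 1000      # ~410.9 Tflops

    print(f"{cores} cores on {cpus} CPUs, {memory_tb:.0f} TB RAM, "
          f"{usable:.0%} usable storage, ~{peak_tflops:.0f} Tflops naive peak")

Running the sketch prints 17120 cores, 1712 CPUs, 132 TB of RAM, and 75% usable storage, all matching the list above; the small gap between the ~411 Tflops naive bound and the quoted 405 Tflops presumably reflects a more conservative official accounting.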