Page detailing the architecture of the current-generation Tufts High Performance Computing (HPC) cluster.

 

 Tufts Linux Research Cluster

Tufts Technology Services (TTS) provides a wide array of services in support of the Tufts research community. High Performance Computing (HPC) hardware from Cisco and IBM is used to build the cluster. The hardware complement includes Cisco blades, IBM M3 and M4 iDataPlex systems, NVIDIA GPUs, and a 10 Gb/s Cisco interconnect network.

m4 nodes:    Intel(R) Xeon(R) CPU E5-2670 0  @ 2.60GHz
Cisco nodes: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
m3 nodes:    Intel(R) Xeon(R) CPU X5675      @ 3.07GHz
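
To confirm which of these CPU types a given node carries, the model string can be read directly from Linux; a minimal sketch using standard commands (the example output is illustrative):

    # Print the CPU model string of the node you are on
    grep -m1 "model name" /proc/cpuinfo

    # Illustrative output on an m4 node:
    #   model name : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz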

 

As of late December 2014, the cluster comprises approximately 163 compute nodes. The total Slurm-managed CPU core count is roughly 3,600, with a peak performance of over 60 teraflops. In this HPC environment, TTS also provides researchers with access to commercial engineering software, popular open-source research applications, and tools for bioinformatics and statistics. Secure networked storage for research data (400+ TB of CIFS and NFS storage on NetApp appliances) is available.
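
Because the nodes are managed by Slurm, current node, core, and memory figures can be queried from the scheduler itself; a minimal sketch using standard Slurm commands (no site-specific partition names are assumed):

    # Summarize node counts and states per partition
    sinfo -s

    # List each node with its core count, memory (MB), and generic resources (e.g. GPUs)
    sinfo -N -o "%N %c %m %G"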

Each cluster node has 12, 16, or 20 cores, depending on which of the three Intel CPUs it uses. Compute node memory ranges from 24 to 384 gigabytes.
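
Since core counts and memory vary across node types, batch jobs should state their requirements explicitly so Slurm can place them on suitable hardware; a minimal batch-script sketch (job name, resource sizes, and the executable are hypothetical):

    #!/bin/bash
    #SBATCH --job-name=example
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=16   # ask for 16 cores on one node
    #SBATCH --mem=64G            # ask for 64 GB of memory
    #SBATCH --time=02:00:00

    ./my_program                 # hypothetical executable

The script is submitted with sbatch (e.g. sbatch example.sh); Slurm then matches the request against the 12-, 16-, and 20-core nodes described above.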

GPU computing is supported by 12 NVIDIA GPUs of the following models:

K20, M2070, M2050
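
To run on one of these GPUs, a job requests them through Slurm's generic-resource (GRES) mechanism; a minimal sketch (the resource name gpu is the conventional default but is an assumption about the local configuration):

    #!/bin/bash
    #SBATCH --ntasks=1
    #SBATCH --gres=gpu:1         # request one GPU on the allocated node
    #SBATCH --time=01:00:00

    nvidia-smi                   # show the GPU assigned to the job
    ./my_gpu_program             # hypothetical CUDA executable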