Tufts Linux Research Cluster

Tufts Technology Services (TTS) provides a wide array of services in support of the Tufts research community. The cluster is built from High Performance Computing (HPC) hardware from Cisco and IBM; the hardware complement includes Cisco blades, IBM M3 and M4 iDataPlex servers, NVIDIA GPUs, and a 10 Gb/s Cisco interconnect network.

...

GPU computing is supported by 12 NVIDIA GPUs drawn from the following models:

K20, M2070, and M2050
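
Because the GPUs reside on scheduler-managed compute nodes, they are normally requested through the Slurm job scheduler's generic resource (GRES) option. The sketch below is a minimal, illustrative example; the "gpu" partition name is an assumption, and the exact GRES strings depend on the site's Slurm configuration.

    # Request one GPU for an interactive session
    # (the "gpu" partition name is hypothetical).
    srun -p gpu --gres=gpu:1 --pty bash

    # Confirm which GPU was allocated.
    nvidia-smi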

 

The Linux operating system is configured identically on every node. In addition, a login node and a file transfer node support the compute nodes. Client workstations access the cluster over the Tufts network using SSH client software, and remote SSH access for researchers is also supported. The login node has an additional network interface that connects to the compute nodes using private IP addressing over 10 Gb/s network hardware. This scheme allows the compute nodes to be managed as a "virtual" resource by the Slurm job-queueing software, and it allows the cluster to scale to a large number of nodes, providing the structure for future growth.

The login node is reserved for running compilers and shell tools and for submitting programs to the compute nodes. It is not intended for running research programs or for general-purpose computing; all jobs must be submitted to the compute nodes through Slurm.
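
As a concrete sketch of this workflow, a job is described in a small batch script and handed to Slurm from the login node. The script below is a minimal, illustrative example; the job name, resource limits, and program name are placeholders rather than site-confirmed values.

    #!/bin/bash
    # myjob.sh -- minimal Slurm batch script (all values illustrative)
    #SBATCH --job-name=myjob       # name shown in the queue
    #SBATCH --ntasks=1             # a single task
    #SBATCH --time=01:00:00        # one-hour wall-clock limit
    #SBATCH --mem=2G               # memory request

    # Everything below runs on a compute node, not the login node.
    ./my_research_program

From the login node the script is submitted with sbatch myjob.sh, and squeue -u $USER reports its place in the queue.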

A separate file transfer node, xfer.cluster.tufts.edu, is also provided to accommodate large data transfers.
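
For example, large data sets can be staged through the transfer node with standard SSH-based copy tools; the username and remote paths below are placeholders.

    # Copy a local directory to cluster storage via the transfer node
    # (username and destination path are illustrative).
    scp -r ./my_dataset yourusername@xfer.cluster.tufts.edu:/path/to/project/

    # rsync can resume interrupted transfers, which helps with very large data.
    rsync -avP ./my_dataset yourusername@xfer.cluster.tufts.edu:/path/to/project/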