The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

Teraflops: 60+ (60+ trillion floating-point operations per second)
CPU: 4,000 cores
GPU: 6,784 cores
Interconnect: 40Gb low-latency Ethernet
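As a rough sanity check (my own arithmetic, not a figure taken from this page), the annual hour totals correspond to the listed core counts running around the clock, i.e. 8,760 hours per year:

\[
6{,}784 \text{ GPU cores} \times 8{,}760 \text{ hours/year} = 59{,}427{,}840 \text{ GPU hours/year}
\]
\[
35{,}845{,}920 \text{ CPU hours/year} \div 8{,}760 \text{ hours/year} = 4{,}092 \approx 4{,}000 \text{ CPU cores}
\]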

For additional information, please contact Research Technology Services at tts-research@tufts.edu


Restrictions

Research Cluster Usage Expectations and Restrictions

Expectations

no user root access

the supported OS is Red Hat Enterprise Linux 6

no user ability to reboot node(s)

all cluster login access is via the login headnode

no user access to cluster hardware in the machine room

no alternative Linux kernels other than the current Red Hat version

no access to 10Gig Ethernet network hardware or software

no user cron or at access

no user-run servers/daemons such as HTTP (Apache), FTP, etc.

cluster quality of service is managed through Slurm

all user jobs destined for compute nodes are submitted via Slurm commands (see the submission sketch after this list)

all compute nodes follow a naming convention

only Tufts Technology Services-approved NFS research storage is supported

idle nodes are scheduled by Slurm

no user-contributed direct-connect storage such as USB memory or external disks

only limited outgoing Internet access from the headnode will be allowed; exceptions must be reviewed

allow an approximate two-week turnaround for software requests

whenever possible, commercial software is limited to the two most recent versions

only user home directories and optional NFS-mounted research storage are backed up

temporary public storage file systems have no quota and are subject to automated file deletions

the cluster does not export file systems to user desktops

the cluster does not support virtual machine instances
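
To make the Slurm-only submission model above concrete, here is a minimal sketch of wrapping a batch script and handing it to sbatch from the login headnode. The partition name, resource requests, and workload are placeholders (assumptions), not documented defaults for this cluster.

#!/usr/bin/env python3
"""Minimal sketch: submit a batch job to Slurm from the login headnode.

The partition name, resource requests, and workload below are
placeholders (assumptions), not values documented for this cluster.
"""
import subprocess

# A conventional Slurm batch script: the #SBATCH lines describe the
# resource request; the commands after them run on a compute node.
JOB_SCRIPT = """#!/bin/bash
#SBATCH --job-name=example_job
#SBATCH --partition=batch          # placeholder partition name
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00
#SBATCH --output=example_job_%j.out

srun hostname                      # replace with the real workload
"""

def submit(script: str) -> str:
    """Pipe the script to sbatch on stdin and return Slurm's reply."""
    result = subprocess.run(
        ["sbatch"],                # all compute-node work goes through Slurm
        input=script,
        text=True,
        capture_output=True,
        check=True,
    )
    return result.stdout.strip()   # e.g. "Submitted batch job 12345"

if __name__ == "__main__":
    print(submit(JOB_SCRIPT))

Saving the same script to a file and running sbatch on it from the headnode is equivalent; either way, work reaches the compute nodes only through Slurm, which also decides when idle nodes are used.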


For additional information, please contact Research Technology Services at tts-research@tufts.edu