
Grant Application Information

...

Grant project support may leverage the research computing capacity of Tufts University to provide a dedicated computing cluster and access to the Tufts research storage area network (SAN).

The Tufts University Linux Research Cluster comprises Cisco

...

, IBM, and Penguin hardware running the Red Hat Enterprise Linux 6.9 operating system. Nodes are interconnected via a 10 Gb network

...

with future expansion planned to 100 Gb and 200 Gb Ethernet and InfiniBand. Memory configurations range from 32 GB to 1 TB per node (32 GB, 128 GB, 256 GB, 384 GB, 512 GB, and 1 TB), and each node has 16, 20, 40, or 72 cores.
There are 5 GPU nodes with 12 NVIDIA cards, including Tesla K20Xm and P100, with V100 cards coming soon.
The system is managed via the SLURM scheduler and provides a total of 7,636 CPU cores, 32 TB of memory, and 41,216 GPU cores.
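
For illustration only, a minimal SLURM batch script of the sort a user might submit on such a cluster is sketched below; the partition name, module name, and executable are hypothetical placeholders, not details taken from the Tufts configuration.

    #!/bin/bash
    #SBATCH --job-name=gpu-example      # illustrative job name
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G                   # request 16 GB of memory
    #SBATCH --gres=gpu:1                # request one GPU on a GPU node
    #SBATCH --time=02:00:00
    #SBATCH --partition=gpu             # hypothetical partition name

    module load cuda                    # module name is illustrative
    srun ./my_gpu_program               # hypothetical executable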

In addition to the research cluster, this resource is supported by Tufts research networked storage infrastructure. Currently, Tufts University offers a total of more than 400 TB of storage capacity on a Network Appliance (NetApp) SAN to help researchers safely store their data. These storage appliances are backed up daily, with up to one year of backups kept off site at any time. Storage requests of up to 500 GB will be provisioned free of charge; above 500 GB, storage will be provisioned at a fixed recovery rate, with financial details finalized at the time of award. Provisioned research storage space can also be mounted on the Tufts research cluster to leverage its computing power for data analysis. Details regarding the Tufts research cluster and associated research storage can be found at http://go.tufts.edu/cluster.

Cluster storage consists of a dedicated 600 TB parallel file system (GPFS) along with 600 TB of object storage (DDN WOS) for archival purposes. Dedicated login, management, file transfer, compute, storage, and virtualization nodes are available on the cluster, all connected via a dedicated network infrastructure. Users access the system locally and remotely through SSH clients as well as a number of scientific gateways and portals, supporting both experienced users and emerging interest across all domains. The system was also one of the first to support Singularity, the emerging container standard for high-performance computing, which has proven popular among users of machine and deep learning software stacks. Web-based access is provided via the Open OnDemand (OOD) web portal software, which Tufts has helped port, test, and deploy along with other HPC centers.
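
As a sketch of the access workflow described above (connecting over SSH, pulling a Singularity container, and running it on a GPU node through the scheduler), the commands below use a hypothetical hostname, partition, container image, and script; none of these names are drawn from Tufts documentation.

    # Connect to a cluster login node (hostname is a placeholder)
    ssh username@login.cluster.example.edu

    # Pull a containerized deep learning stack from a public registry
    singularity pull docker://tensorflow/tensorflow:latest-gpu

    # Run the container on a GPU node via SLURM; --nv exposes the NVIDIA driver
    srun -p gpu --gres=gpu:1 singularity exec --nv \
        tensorflow_latest-gpu.sif python train.py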

Tufts Technology Services (TTS) has been a leader in the adoption not only of tools and technology but also of training, workshop, and outreach programs within the university and at the regional and national level. Our staff continually participate in the Advanced Cyber Infrastructure – Research and Education Facilitators (ACI-REF) program, the Practice & Experience in Advanced Research Computing (PEARC) conference series, Supercomputing (SC), as well as the eXtreme Science and Engineering Discovery Environment (XSEDE) program. As a participant in XSEDE, Tufts has been involved in the Campus Champions, Student Champions, leadership team, and Region-7 (New England) programs. In the past year, Tufts has provided hundreds of hours of training and workshops on research computing via institutional instructors, XSEDE workshops, and intensive boot camps from partners such as the Pittsburgh Supercomputing Center (PSC) and the Petascale Institute from the National Energy Research Scientific Computing Center (NERSC).

Network Security in Relation to the Research Cluster and Storage Services

...