The Tufts High Performance Computing (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

Teraflops: 60+ (60+ trillion floating point operations per second)
CPU: 4000 cores
GPU: 6784 cores
Interconnect: 40Gb low-latency Ethernet

For additional information, please contact Research Technology Services at tts-research@tufts.edu


What is a Cluster?

Cluster computing connects many individual computers (nodes) via a high-speed network to provide a single shared resource. This distributed processing arrangement allows complex computations to run in parallel, with work shared among the nodes' processors and memory. Applications that can take advantage of a cluster break a large computational task into smaller components that run in serial or in parallel across the nodes, dramatically reducing the time required to process large problems and complex tasks.
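
For example, an application whose work items are independent can be expressed as a Slurm job array, with each array element processing one piece of the larger problem on whichever compute node the scheduler assigns. The script below is only a minimal sketch; the time limit and the analyze_chunk program are hypothetical placeholders, not actual Tufts cluster settings.

    #!/bin/bash
    #SBATCH --job-name=chunked-analysis   # hypothetical example job
    #SBATCH --ntasks=1                    # each array element is a single serial task
    #SBATCH --time=00:30:00               # placeholder time limit
    #SBATCH --array=1-100                 # split the problem into 100 independent pieces

    # Each array element works on its own slice of the input,
    # identified by the task ID that Slurm assigns to it.
    ./analyze_chunk --input data.part.${SLURM_ARRAY_TASK_ID}

Submitting this script once with sbatch queues all 100 pieces, and Slurm runs them concurrently as compute nodes become available.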

Cluster Research use cases
Typical Cluster Usage at Tufts

Faculty, research staff, and students use this resource in support of a variety of research projects.

Architecture

Research Grant Information

Cluster User Accounts

Click Account Information for additional information about cluster accounts.

Orientation for new cluster users

This content is intended for someone who has never used Linux, time-sharing mainframes, or supercomputing centers.

 

Research Cluster Restrictions

Conditions of use for the research cluster include, but are not limited to, the following expectations. Additional related details may be found throughout this page.

Expectations

no user root access

the supported OS is Red Hat Enterprise Linux 6

no user ability to reboot node(s)

all cluster login access is via the login headnode

no user machine room access to cluster hardware

no alternative Linux kernels other than the current Red Hat version

no access to 10-Gigabit Ethernet network hardware or software

no user cron or at access

no user servers/daemons such as HTTP (Apache), FTP, etc.

cluster quality of service is managed through Slurm

all user jobs destined for compute nodes are submitted via Slurm commands (see the sketch after this list)

all compute nodes follow a naming convention

only Tufts Technology Services approved NFS research storage is supported

idle nodes are scheduled by Slurm

no user-contributed direct-connect storage such as USB memory or external disks

only limited outgoing Internet access from the headnode will be allowed; exceptions must be reviewed

allow an approximate two-week turnaround for software requests

whenever possible, commercial software is limited to the two most recent versions

only user home directories and optional NFS-mounted research storage are backed up

temporary public storage file systems have no quota and are subject to automated file deletions

the cluster does not export file systems to user desktops

the cluster does not support virtual machine instances
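
In practice, these expectations translate into a simple workflow: log in to the headnode, set up your environment with modules, and hand work to Slurm rather than running it directly on the headnode. The commands below are a minimal sketch under those assumptions; the headnode hostname, module name, and job script name are placeholders, not actual Tufts values.

    # log in to the cluster headnode (replace <headnode> with the actual hostname)
    ssh your_username@<headnode>

    # load a software package into your environment (package name is a placeholder)
    module load example-package

    # submit a batch job to the compute nodes through Slurm
    sbatch myjob.sh

    # check the status of your queued and running jobs
    squeue -u $USER

Because cron, at, and user daemons are not permitted, recurring or long-running work should likewise be submitted as Slurm jobs.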

Please see SoftwareRequest for policy, details and timeline for software installation requests on the cluster.

Software request policy

Please send your request via email to tts-research@tufts.edu and address the following questions:

  • What is the name of the software?
  • Where can additional information about the software be found?
  • Who are the intended users of the software?
  • When is it needed by?
  • Will it be used in support of a grant, and if so, which grant?
  • What, if any, special requirements are needed?

Note: A software request normally takes up to two weeks. However, depending on the complexity of the installation and the number of packages requested, it may take longer. When an assessment of the tasks suggests that more than two weeks will be needed, we will contact you with an estimate so that the work can be prioritized.

 

Recent Cluster News

Click News

Cluster Storage Options

Click here for details.

Network Concurrent Software Licenses

Click here

Support venue

If you have any questions about cluster related usage, applications, or assistance with software, please contact tts-research@tufts.edu.

MODULES: Cluster software environment

Click here

Installed Cluster Software

Click here

Compilers, Editors, etc...

Click here

Frequently Asked Questions - FAQs:

Cluster Connections/Logins

Click here

Parallel programming related information

Click here

User Account related FAQs:

Click here

X based graphics FAQs

Click here

Application specific Information FAQs

Click here

Linux and cluster information FAQs

Click here

Compilation FAQs

Click here

Miscellaneous FAQs

Click here

 

 

For additional information please contact tts-research@tufts.edu.

 
