What is a Cluster?
Cluster computing is the result of connecting many local computers (nodes) together via a high-speed connection to provide a single shared resource. This distributed processing system allows complex computations to run in parallel, as the tasks are shared among the individual processors and memory. Applications that are capable of utilizing cluster systems break large computational tasks down into smaller components that can run in serial or parallel across the cluster, enabling a dramatic improvement in the time required to process large problems and complex tasks.
Faculty, Research Staff and students use this resource in support of a variety of research projects.
Tufts Linux Research Cluster
Tufts Technology Services (TTS) provides a wide array of services in support of the Tufts research community. High Performance Computing (HPC) hardware from Cisco and IBM is used to create the cluster. The hardware complement includes Cisco blades, IBM M3 and M4 iDataPlex systems, NVIDIA GPUs, and a 10 Gb/s Cisco interconnect network.
- IBM M4 nodes: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
- Cisco nodes: Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
- IBM M3 nodes: Intel(R) Xeon(R) CPU X5675 @ 3.07GHz
As of late December 2014, the cluster comprises approximately 163 compute nodes. The total Slurm-managed CPU/core count is over 4,000, with a peak performance of roughly 60+ teraflops. In this HPC environment, TTS also provides researchers with access to commercial engineering software, popular open-source research software applications, and tools for bioinformatics and statistics. Secure networked storage for research data (400+ TB of CIFS desktop storage on NetApp appliances and 511 TB of GPFS cluster storage) is available.
Each compute node has 12, 16, or 20 cores, depending on which of the three Intel CPU models it uses. Compute node memory ranges from 24 to 384 gigabytes.
GPU computing is supported by 12 NVIDIA GPUs (models K20, M2070, and M2050).
The Linux operating system (Red Hat 6.7) on each node is configured identically across every machine. In addition, a login node and a file transfer node support the compute nodes. Client/user workstations access the cluster over the Tufts network using SSH client software; remote SSH access for researchers is also supported. The login node has an additional network interface that connects to the compute nodes using private IP addressing over 10Gb network hardware. This scheme allows the compute nodes to be treated as a "virtual" resource managed by the Slurm job queuing software, and it also allows the cluster to scale to a large number of nodes, providing the structure for future growth. The login node of the cluster is reserved for running compilers and shell tools, and for launching and submitting programs to compute nodes. The login node is not intended for running research programs or for general computing purposes; all jobs are to be submitted to compute nodes using Slurm (see the sketch below).
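For orientation, the general pattern is to connect to the login node over SSH and then hand work to Slurm with a small batch script. The sketch below is illustrative only; the username, hostname, resource values, and program name are placeholders rather than actual Tufts settings, so consult TTS documentation or tts-research@tufts.edu for the correct values.

    # Log in to the cluster login node (hostname shown is a placeholder)
    ssh yourusername@login.cluster.tufts.edu

    # myjob.sh -- a minimal Slurm batch script (values are placeholders)
    #!/bin/bash
    #SBATCH --job-name=myjob       # name shown in the job queue
    #SBATCH --ntasks=1             # number of cores requested
    #SBATCH --mem=2000             # memory in MB
    #SBATCH --time=01:00:00        # wall-clock time limit
    ./my_program                   # the research program runs on a compute node

    # Submit the script from the login node and check its status
    sbatch myjob.sh
    squeue -u yourusername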
A separate file transfer node, xfer.cluster.tufts.edu, is also provided to accommodate large data transfers.
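For instance, files can be copied to or from the cluster through the transfer node with standard tools such as scp or rsync; the username and paths below are placeholders.

    # Copy a local data set to your cluster home directory via the transfer node
    scp -r ./mydata yourusername@xfer.cluster.tufts.edu:~/

    # Or synchronize a results directory back to your workstation
    rsync -av yourusername@xfer.cluster.tufts.edu:~/results/ ./results/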
See the Conceptual diagram and layout of cluster nodes.
Cluster User Accounts
Click Account Information for additional details about cluster accounts.
Orientation for new cluster users
This content is intended for users who have never used Linux, time-sharing mainframes, or supercomputing centers.
Research Cluster Restrictions
Conditions of use for the research cluster include, but are not limited to, the following expectations. Additional related details may be found throughout this page.
Expectations:
- no user root access
- the supported OS is Red Hat Enterprise Linux 6
- no user ability to reboot node(s)
- all cluster login access is via the login headnode
- no user machine-room access to cluster hardware
- no alternative Linux kernels other than the current Red Hat version
- no access to 10Gb Ethernet network hardware or software
- no user cron or at access
- no user servers/daemons such as HTTP (Apache), FTP, etc.
- cluster quality of service is managed through Slurm
- all user jobs destined for compute nodes are submitted via Slurm commands (see the example after this list)
- all compute nodes follow a naming convention
- only Tufts Technology Services NFS-approved research storage is supported
- idle nodes are scheduled by Slurm
- no user-contributed direct-connect storage such as USB memory or external disks
- only limited outgoing Internet access from the headnode is allowed; exceptions must be reviewed
- allow an approximate two-week turnaround for software requests
- whenever possible, commercial software is limited to the two most recent versions
- only user home directories and optional research NFS-mounted storage are backed up
- temporary public storage file systems have no quota and are subject to automated file deletion
- the cluster does not export file systems to user desktops
- the cluster does not support virtual machine instances
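As a hedged illustration of working through Slurm rather than on the headnode, an interactive shell on a compute node can be requested as sketched below; the resource values are placeholders, and partition names are omitted because they are site-specific.

    # Ask Slurm for an interactive shell on a compute node
    # (resource values here are illustrative placeholders)
    srun --ntasks=1 --mem=2000 --time=01:00:00 --pty bash

    # Inspect the state of the nodes and the job queue
    sinfo
    squeue -u $USER

Once the interactive shell exits, the allocated resources are returned to the scheduler.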
Please see SoftwareRequest for policy, details and timeline for software installation requests on the cluster.
Software request policy
Please send your request via email to tts-research@tufts.edu and address the following questions:
- What is the name of the software?
- Where can additional information about the software be found?
- Who are the intended users of the software?
- When is it needed by?
- Will it be used in support of a grant, and if so, which grant?
- What special requirements, if any, are needed?
Note: A software request normally takes up to two weeks. However, depending on the installation complexity and the number of packages requested, it may take longer. When an assessment of the tasks suggests it will take longer than two weeks, we will contact you with an estimate so that prioritization can be made.
Recent Cluster News
Click News
Cluster Storage Options
Click here for details.
Network Concurrent Software Licenses
Support venue
If you have any questions about cluster related usage, applications, or assistance with software, please contact tts-research@tufts.edu.
MODULES: Cluster software environment
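The cluster's software environment is managed with environment modules. As a brief, hedged illustration (the package name and version below are only examples; run module avail on the login node to see what is actually installed):

    # List the software packages available through the module system
    module avail

    # Load a package into your environment (name/version is an example only)
    module load gcc/4.9.2

    # Show what is currently loaded, and remove a module when finished
    module list
    module unload gcc/4.9.2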
Installed Cluster Software
Compilers, Editors, etc...
Frequently Asked Questions - FAQs:
Cluster Connections/Logins
Parallel programming related information
User Account related FAQs:
X based graphics FAQs
Application specific Information FAQs
Linux and cluster information FAQs
Compilation FAQs
Miscellaneous FAQs
How do Tufts students and faculty make use of the cluster?
For additional information please contact tts-research@tufts.edu.