The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

Teraflops: 60+ (more than 60 trillion floating-point operations per second)
CPU: 4,000 cores
GPU: 6,784 cores
Interconnect: 40 Gb/s low-latency Ethernet
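As a rough sanity check on the figures above, the annual totals appear to correspond to core counts multiplied by the 8,760 hours in a year; this derivation is an assumption about how the totals were computed, not a documented formula. A short Python sketch:

    # Assumption: annual compute hours = cores x 24 h x 365 days = 8,760 h per core.
    HOURS_PER_YEAR = 24 * 365  # 8,760

    gpu_cores = 6784
    print(gpu_cores * HOURS_PER_YEAR)   # 59,427,840 -- matches the GPU-hour figure exactly

    cpu_hours = 35_845_920
    print(cpu_hours / HOURS_PER_YEAR)   # 4092.0 -- consistent with the roughly 4,000 CPU cores quoted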
For additional information, please contact Research Technology Services at tts-research@tufts.edu
Research Grant Information
Institutional Computing Resources
Grant projects may leverage the research computing capacity of Tufts University, which provides a dedicated computing cluster and access to the Tufts research storage area network (SAN).
In addition to the research cluster, this resource is supported by Tufts' research networked storage infrastructure. Storage consists of a dedicated 600TB parallel file system (GPFS) along with 600TB of object storage (DDN WOS) for archival purposes. Dedicated login, management, file transfer, compute, storage, and virtualization nodes are available on the cluster, all connected via a dedicated network infrastructure. Users access the system locally and remotely through SSH clients as well as a number of scientific gateways and portals, enabling access not only for experienced users but also for emerging interest across all domains. The system was also one of the first to support Singularity, the emerging container standard for high-performance computing, which has proven popular among users of machine learning and deep learning software stacks. Web-based access is provided via the Open OnDemand (OOD) web portal software, which Tufts has helped port, test, and deploy alongside other HPC centers.
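For most users, access is simply an SSH session from a terminal. For scripted or programmatic access, a minimal sketch using the third-party paramiko library might look like the following; the login hostname is a hypothetical placeholder, not a confirmed Tufts endpoint:

    import paramiko

    HOST = "login.cluster.tufts.edu"  # hypothetical; substitute the actual login node

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # By default paramiko tries SSH keys or an agent; pass password=... for password auth.
    client.connect(HOST, username="your_utln")

    # Run a simple command on the login node and print its output.
    stdin, stdout, stderr = client.exec_command("singularity --version")
    print(stdout.read().decode())

    client.close()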
Network Security in Relation to the Research Cluster and Storage Services
Tufts University maintains a distributed information technology environment, with both central and local aspects of overall planning and control. Tufts' information security program is structured in a similar manner. Operationally, Tufts' central IT organization (Tufts Technology Services, or TTS) and each local IT group maintain standards of quality and professionalism in the operational processes and procedures that enable effective operational security. For TTS-managed systems, the emphasis is on centralized resources such as administration and finance, telecommunications, research computing and networking, systems and operations, as well as directory, email, LDAP, calendaring, storage, and Windows domain services. TTS also provides data center services and backups for all of these systems. Additionally, a large number of management systems (for patching), anti-virus, and firewall services are centrally provided and/or managed by TTS. Within TTS, processes and procedures exist for managed infrastructure changes, as change control is required for all critical central systems. Tufts University provides anti-virus software for computers owned by the University, and makes anti-virus software available at no charge to users who employ personally owned computers in the course of their duties at the University.
The Tufts Research Storage service is based on a Network Appliance (NetApp) storage infrastructure located in the Tufts Administration Building (TAB) machine room. Provisioned storage is NFS (Network File System) mounted on the Research Computing Cluster for project access. NFS shares are not exported outside of TTS-managed systems. The Tufts Research Computing Cluster is also co-located within TAB's machine room. Network-based storage is connected to the cluster via a private (non-public) network connection.
Access to the Tufts IP network itself is controlled via MAC address authentication, which is performed against Tufts login credentials and tracked in the TUNIS Cardinal system; this system uses an eight-character password scheme. A switched (rather than broadcast hub) network architecture is in place, limiting traffic to only the specific ports in use to transport data from source to destination. Access to Tufts LAN network resources is controlled via Active Directory where applicable, or LDAP, which requires the user to authenticate each time a system joins the domain. All of these controls are implemented identically on the wired and wireless Tufts networks.
Both the Research Storage and Linux-based cluster compute server operating systems are kept current via sound patch-management procedures. For example, PCs owned and managed by Tufts are automatically patched via Windows Server Update Services. All other computing platforms are required to be on a similar automated patching schedule. From an operational standpoint, most central and local systems are maintained and managed over encrypted communication channels: SSH is used for UNIX/Linux servers, and Microsoft Terminal Services on Windows. User access to cluster services is via SSH and LDAP. No direct user login access to the central Research Storage service is possible.
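As an illustration of the LDAP-backed authentication mentioned above, a simple bind check using the Python ldap3 library might look like the sketch below; the server hostname and directory layout are assumptions for illustration, not documented Tufts values:

    from ldap3 import Server, Connection

    # Hypothetical directory endpoint and DN structure -- illustrative only.
    server = Server("ldap.tufts.edu", use_ssl=True)
    conn = Connection(
        server,
        user="uid=your_utln,ou=people,dc=tufts,dc=edu",
        password="your_password",
    )

    # bind() returns True if the directory accepts the credentials.
    if conn.bind():
        print("Authenticated against the directory")
        conn.unbind()
    else:
        print("Bind failed:", conn.result)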
Additional user-related cluster information can be found here: http://go.tufts.edu/cluster
All devices and users are subject to the Tufts Acceptable Use Policy, found on the TTS website.
How to Reference the Computing and Storage Resources for Grant Purposes
Please reference this resource as: Tufts High-performance Computing Research Cluster
For additional information, please contact Research Technology Services at tts-research@tufts.edu