Update Announcement
UIT is pleased to announce the completion of the Tufts research cluster upgrade project. The new High-Performance Computing (HPC) research environment is now in production, bringing more than 1,000 cores of computing power to the Tufts community. Over the past three years, UIT has observed increasing demand for our HPC research cluster. In anticipation of the ongoing need for additional resources, we began a project to increase capacity more than threefold.
...
As of August 29, 2011, the new cluster environment is at 77% of total capacity, with 776 of 1,004 cores in production, and we anticipate reaching the full 1,004-core production capacity by early September.
Please read the information below carefully, since some of the changes to the new cluster environment may affect you.
User Interface:
There are no changes to your login home directory, user account names, or passwords. If you don’t remember your password, please visit Tufts Tools at http://tuftstools.tufts.edu/. Home directories have retained their previous backup schedules, and the default login shell remains bash.
Please note that you may receive a warning that the SSH key has changed when logging in for the first time to the new research cluster at cluster.uit.tufts.edu. This is a result of the new installation and is not a security issue. Your login credentials and login hostname have not changed, and simple fixes are detailed below.
Storage:
Data on temporary file systems (/tmp, /scratch, or /scratch2) were not migrated to the new cluster. Users were encouraged to transfer any important data prior to August 12. Temporary storage on file system /cluster/shared/your_user_name is available on the new cluster as before. Similarly, any directory on file system /cluster/tufts/ is available on the new cluster.
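As a quick sanity check after logging in, you can confirm that the persistent file systems above are visible from your session. This is only an illustrative sketch, not a UIT-provided tool; the loop and messages are our own, and $USER is assumed to expand to your cluster login name:

```shell
# Check that the persistent storage areas are visible on the new cluster.
# $USER expands to your cluster login name.
for d in "/cluster/shared/$USER" "/cluster/tufts"; do
    if [ -d "$d" ]; then
        echo "$d: available"
    else
        echo "$d: not found"
    fi
done
```

If either path reports "not found", contact cluster-support@tufts.edu rather than recreating the directory yourself.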
Compute Nodes:
There has been a slight change to the compute node naming convention to help reflect hardware differences:
...
Going forward, new iDataPlex nodes contributed by Tufts faculty will be named contribNN (contrib01, contrib02, etc.).
LSF Queues:
All LSF queues keep the same name on the new cluster.
Parallel Related:
Access to the new GPU resource will be activated in September after the new cluster is stable and in full production.
SSH-Related Login Key Issues:
We are aware that changes to the SSH keys have caused some connectivity problems with the new cluster head node. A simple solution is to remove the old cluster keys from your SSH client. This process depends on how you connect. For those who use ssh directly (Mac OS X or Windows with Cygwin), the old key can be removed or edited out in the following manner:
...
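On systems with OpenSSH, the removal can be sketched as follows. The key string below is a made-up placeholder, and the demo edits a throwaway copy in a temporary directory; on your own machine you would operate on ~/.ssh/known_hosts instead:

```shell
# Demo: remove a stale host key entry from a known_hosts file.
# (Placeholder key material; your real entry will differ.)
tmp=$(mktemp -d)
echo "cluster.uit.tufts.edu ssh-rsa AAAAB3NzaC1yc2EAAAADAQABplaceholder" > "$tmp/known_hosts"

# Delete every line whose host field is the old head node,
# keeping a .bak backup of the original file:
sed -i.bak '/^cluster\.uit\.tufts\.edu[ ,]/d' "$tmp/known_hosts"

# OpenSSH can also do this in one step against ~/.ssh/known_hosts:
#   ssh-keygen -R cluster.uit.tufts.edu
```

The sed form is equivalent to opening known_hosts in an editor and deleting the cluster.uit.tufts.edu line by hand; the next ssh connection will then record the new host key.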
Once you have connected to the new head node, you may need to clear old SSH keys to ensure connectivity with all the new cluster nodes. The easiest way to do this is to run the following command once connected:
cleanSSH.sh
Support:
Please check your mailbox for follow-up announcements. If you have any questions or concerns about Tufts' new HPC research cluster environment, please direct them via email to cluster-support@tufts.edu.