The Tufts High Performance Computing (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

Teraflops: 60+ (more than 60 trillion floating-point operations per second)
CPU: 4,000 cores
GPU: 6,784 cores
Interconnect: 40 Gb low-latency Ethernet
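
For reference, the yearly compute-hour figures quoted above appear to be the core counts multiplied by the 8,760 hours in a year; the CPU figure implies 4,092 cores, consistent with the rounded 4,000-core count listed here:

4,092 CPU cores × 8,760 hours/year = 35,845,920 CPU hours
6,784 GPU cores × 8,760 hours/year = 59,427,840 GPU hours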

For additional information, please contact Research Technology Services at tts-research@tufts.edu


Alexandre B. Sousa

Alexandre B. Sousa is a graduate student with the High Energy Physics Group and, as part of the MINOS experiment collaboration, has been one of the main people responsible for mass event reconstruction using the Fermilab fixed-target farm. Earlier this year, a Mock Data Challenge was issued to the experiment in order to shake out reconstruction and analysis shortcomings before real data collection starts in January. This effort required the generation of a rather large Monte Carlo (MC) sample, which was subsequently reconstructed at Fermilab. However, the generation of the MC sample was quite hard to set up at Fermilab, where space constraints, e-bureaucracy, and competition with other experiments meant it could not be done in a timely manner. That was when they decided to test the Tufts Linux Cluster for this task.

They were set up with an area on the /cluster/shared space within a day of their original request, and after a few tests they were able to generate 80% of the total necessary MC sample in less than a week. They were, of course, lucky to be almost the exclusive user of the cluster for that period, but they really had no problems setting things up and using it, in what is seen as a nice success for the Tufts High Energy Physics Group. Given this success, the group has volunteered to become one of the spearheading institutions taking part in the upcoming MC generation effort, which should start later this month, and the experience gained was written up and relayed to other institutions that are starting to run their own clusters and hope to join this effort.

They used the cluster a second time to do a customized reprocessing of data for the CC nue analysis group, of which they are a member. This required compiling the MINOS Offline Software on the cluster, installing a MySQL database, and assembling some shell scripts to handle the job output. That went quite well: the full data sample was processed in 2 hours, with about 1 day of setup. Having worked for two years with the Fermilab batch farm, they were mainly impressed by the speed of the network connection between the CPU nodes and the I/O node, almost 20 times the Fermilab data transfer speeds, and also by the great flexibility given to users, which meant minimal back-and-forth with the admins and dramatically improved work efficiency.
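
The second use case above, assembling scripts to handle the output of many batch jobs, is a common pattern on any cluster. The sketch below is only illustrative of that kind of post-job bookkeeping: the original work used shell scripts and the MINOS Offline Software, whereas this Python version, along with its directory layout, log-file naming, and success marker, is an assumption rather than a description of the actual MINOS setup.

#!/usr/bin/env python3
"""Sketch of post-job bookkeeping for a batch of cluster jobs.

All names here are illustrative assumptions, not details of the MINOS
workflow: each job is assumed to write
    output/job_NNN.root  -- its data product
    logs/job_NNN.log     -- a log containing "JOB COMPLETED OK" on success
Jobs missing either are flagged for resubmission.
"""

import glob
import os
import re

OUTPUT_DIR = "output"                # assumed location of data products
LOG_DIR = "logs"                     # assumed location of log files
SUCCESS_MARKER = "JOB COMPLETED OK"  # assumed end-of-job sentinel


def job_succeeded(log_path):
    """A job counts as successful if its log contains the sentinel line."""
    try:
        with open(log_path) as log:
            return any(SUCCESS_MARKER in line for line in log)
    except OSError:
        return False


def collect_results():
    """Classify every job with a log file as done or needing resubmission."""
    done, failed = [], []
    for log_path in sorted(glob.glob(os.path.join(LOG_DIR, "job_*.log"))):
        match = re.search(r"job_(\d+)\.log$", os.path.basename(log_path))
        if match is None:
            continue  # not one of our job logs
        job_id = match.group(1)
        data_file = os.path.join(OUTPUT_DIR, f"job_{job_id}.root")
        if job_succeeded(log_path) and os.path.exists(data_file):
            done.append(job_id)
        else:
            failed.append(job_id)
    return done, failed


if __name__ == "__main__":
    done, failed = collect_results()
    print(f"{len(done)} jobs completed, {len(failed)} need resubmission")
    for job_id in failed:
        print(f"  resubmit job_{job_id}")

The idea is simply to pair each job's log with its data product and flag anything incomplete for resubmission, which is the sort of bookkeeping the shell scripts mentioned above would handle.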

