The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.
Teraflops: 60+ (60+ trillion floating point operations per second)
CPU: 4,000 cores
GPU: 6,784 cores
Interconnect: 40 Gb low-latency Ethernet
For additional information, please contact Research Technology Services at tts-research@tufts.edu
XSEDE HPC Monthly Workshop - April 18-19, 2017 - OpenMPI
Info: https://portal.xsede.org/course-calendar/-/training-user/class/543/session/1201
Location: Tufts University
New Location
- 167 Holland St, Somerville, MA 02144
- Room 203B
- http://campusmaps.tufts.edu/medford/#fid=m030
- http://campusmaps.tufts.edu/medford/
- Parking: Guest parking is difficult near 167 Holland St (TAB), so please park on campus or take the T. The Dowling Parking Garage, http://publicsafety.tufts.edu/adminsvc/parking-garages/, is a 10-minute walk from TAB. Public transportation: take the Red Line to Davis Sq.; TAB is only a 5-minute walk from the station.
Previous Location
- Address: Collaborative Learning and Innovation Complex 574 Boston Ave Medford, MA 02155
- Map: https://goo.gl/maps/qZnMefhjtdJ2
- Collaborative Learning and Innovation Complex (CLIC) http://now.tufts.edu/articles/space-next-generation-thinking
- Parking: Guest parking is available adjacent to CLIC; however, you might also consider the Dowling Parking Garage, http://publicsafety.tufts.edu/adminsvc/parking-garages/, which is a short walk away.
- Public Transportation: Take the Red Line to Davis Sq. and the Jumbo Shuttle to the campus center, a short walk from CLIC. http://publicsafety.tufts.edu/adminsvc/shuttle-services-2/
Questions, comments, concerns: shawn.doughty@tufts.edu
Description: This workshop is intended to give C and Fortran programmers a hands-on introduction to MPI programming. Both days are compact, to accommodate multiple time zones, but packed with useful information and lab exercises. Attendees will leave with a working knowledge of how to write scalable codes using MPI, the standard programming tool of scalable parallel computing. The workshop includes a hands-on component using the Bridges computing platform at the Pittsburgh Supercomputing Center.
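To give a sense of the material, below is a minimal sketch of the kind of program covered on day one: an MPI "hello world" in C in which every rank reports its rank and the communicator size. The file name hello_mpi.c is illustrative, and compiler wrapper and module names on Bridges or the Tufts cluster are site-specific.

    /* hello_mpi.c -- minimal MPI sketch (illustrative, not official workshop material) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank within MPI_COMM_WORLD */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                         /* shut down the MPI runtime */
        return 0;
    }

A typical build and run with OpenMPI looks like: mpicc hello_mpi.c -o hello_mpi, then mpirun -np 4 ./hello_mpi (on a batch system such as Bridges, the launch would normally go through the scheduler instead).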
Agenda: All times given are Eastern time
Tuesday, April 18
11:00 Welcome
11:15 Computing Environment
12:00 Intro to Parallel Computing
1:00 Lunch Break
2:00 Introduction to MPI
3:30 Introductory Exercises
4:10 Intro Exercises Review
4:15 Scalable Programming: Laplace code
5:00 Adjourn/Laplace Exercises
Wednesday, April 19
All times given are Eastern
11:00 Advanced MPI
12:30 Lunch Break
1:30 Laplace Review
2:00 Outro to Parallel Computing
2:45 Parallel Debugging and Profiling Tools
3:00 Exercises
4:30 Adjourn
For additional information, please contact Research Technology Services at tts-research@tufts.edu