
The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

Teraflops: 60+ (60+ trillion floating-point operations per second)
CPU: 4,000 cores
GPU: 6,784 cores
Interconnect: 40GB low-latency Ethernet

For additional information, please contact Research Technology Services.


XSEDE HPC Monthly Workshop - Oct. 3-4, 2017 - OpenMPI


Location: Tufts University

Address: Collaborative Learning and Innovation Complex 574 Boston Ave Medford, MA 02155



Collaborative Learning and Innovation Complex (CLIC)

Parking: Guest parking is available adjacent to CLIC; however, you might also consider the Dowling Parking Garage, which is a short walk away.

Public Transportation: Take the Red Line to Davis Square, then the Jumbo Shuttle to the campus center, a short walk from CLIC.

Questions, comments, concerns:

Description: This workshop is intended to give C and Fortran programmers a hands-on introduction to MPI programming. Both days are compact, to accommodate multiple time zones, but packed with useful information and lab exercises. Attendees will leave with a working knowledge of how to write scalable codes using MPI – the standard programming tool of scalable parallel computing. It will have a hands-on component using the Bridges computing platform at the Pittsburgh Supercomputing Center.

Agenda: All times given are Eastern time

Tuesday, Oct. 3

11:00 Welcome
11:15 Computing Environment
12:00 Intro to Parallel Computing
1:00 Lunch Break
2:00 Introduction to MPI
3:30 Introductory Exercises
4:10 Intro Exercises Review
4:15 Scalable Programming: Laplace code
5:00 Adjourn/Laplace Exercises


Wednesday, Oct. 4
11:00 Advanced MPI
12:30 Lunch Break
1:30 Laplace Review
2:00 Outro to Parallel Computing
2:45 Parallel Debugging and Profiling Tools
3:00 Exercises
4:30 Adjourn

