
The Tufts High Performance Computing (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

  • Teraflops: 60+ (60+ trillion floating point operations per second)
  • CPU: 4000 cores
  • GPU: 6784 cores
  • Interconnect: 40GB low-latency Ethernet

For additional information, please contact Research Technology Services at tts-research@tufts.edu



XSEDE HPC Monthly Workshop - Mon. May 06 - Tue. May 07 2019 - MPI 

This workshop is intended to give C and Fortran programmers a hands-on introduction to MPI programming. Both days are compact, to accommodate multiple time zones, but packed with useful information and lab exercises. Attendees will leave with a working knowledge of how to write scalable codes using MPI – the standard programming tool of scalable parallel computing. It will have a hands-on component using the Bridges computing platform at the Pittsburgh Supercomputing Center.

Sign-up at: Coming Soon.

Date: 11:00am - 5:00pm, Mon. May 06 - Tue. May 07, 2019

Location: Collaborative Learning and Innovation Complex (CLIC) 114

Address: 574 Boston Ave Medford, MA 02155

Audience: All faculty, students, and staff

Agenda: All times given are Eastern time

Monday, May 06

11:00 Welcome
11:15 Computing Environment
12:00 Intro to Parallel Computing
1:00 Lunch Break
2:00 Introduction to MPI
3:30 Introductory Exercises
4:10 Intro Exercises Review
4:15 Scalable Programming: Laplace code
5:00 Adjourn/Laplace Exercises


Tuesday, May 07
11:00 Advanced MPI
12:30 Lunch Break
1:30 Laplace Review
2:00 Outro to Parallel Computing
2:45 Parallel Debugging and Profiling Tools
3:00 Exercises
4:30 Adjourn
