
The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

Cluster specifications:
  • Teraflops: 60+ (60+ trillion floating-point operations per second)
  • CPU: 4,000 cores
  • GPU: 6,784 cores
  • Interconnect: 40GB low-latency Ethernet

For additional information, please contact Research Technology Services at tts-research@tufts.edu



XSEDE HPC Monthly Workshop - April 18-19, 2017 - OpenMPI
 

Info:  https://portal.xsede.org/course-calendar/-/training-user/class/543/session/1201

Location: Tufts University


Questions, comments, concerns: shawn.doughty@tufts.edu

Description: This workshop is intended to give C and Fortran programmers a hands-on introduction to MPI programming. Both days are compact, to accommodate multiple time zones, but packed with useful information and lab exercises. Attendees will leave with a working knowledge of how to write scalable codes using MPI, the standard programming tool of scalable parallel computing. The workshop will have a hands-on component using the Bridges computing platform at the Pittsburgh Supercomputing Center.

Agenda: All times given are Eastern time

Tuesday, April 18

11:00 Welcome
11:15 Computing Environment
12:00 Intro to Parallel Computing
1:00 Lunch Break
2:00 Introduction to MPI
3:30 Introductory Exercises
4:10 Intro Exercises Review
4:15 Scalable Programming: Laplace code
5:00 Adjourn/Laplace Exercises

 

Wednesday, April 19
11:00 Advanced MPI
12:30 Lunch Break
1:30 Laplace Review
2:00 Outro to Parallel Computing
2:45 Parallel Debugging and Profiling Tools
3:00 Exercises
4:30 Adjourn

 

