The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 cpu hours and 59,427,840 gpu hours of free compute time per year to the user community.

Teraflops: 60+ (60+ trillion floating point operations per second)
cpu: 4000 cores
gpu: 6784 cores
Interconnect: 40Gb low-latency Ethernet
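As a quick sanity check on the figures above, annual core-hours are simply core count multiplied by the 8,760 hours in a (non-leap) year; the gpu figure works out exactly:

```shell
# Annual core-hours = cores x 8760 hours per year.
# 6784 gpu cores matches the 59,427,840 gpu hours quoted above.
echo $((6784 * 8760))   # prints 59427840
```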

For additional information, please contact Research Technology Services at


What is XSEDE? The Extreme Science and Engineering Discovery Environment (XSEDE), funded by the NSF (National Science Foundation), is the next-generation replacement for TeraGrid. Its goal is to provide US-based researchers with access to supercomputer resources, as well as to expand into new avenues such as research collaboration. Training, education, and outreach are a significant component of XSEDE, and there are many fantastic resources available.

What is available? XSEDE provides access to a number of very large systems, many with hundreds of thousands of cpu cores, high-speed interconnects, and fast storage. High-memory, shared-memory systems are available, along with a large number of GPGPU (General Purpose Computation on Graphics Processing Units) and acceleration units such as those available from Nvidia, AMD, and Intel MIC (Many Integrated Core). Rather than duplicate the list, please see the following

How to apply? XSEDE provides a number of ways to get a resource allocation on member systems. Please review the following pages.


When to apply? The application period for XSEDE resources is quarterly, so apply as early as possible.

  • Dec 15 thru Jan 15 (allocation begins Apr 1)
  • Mar 15 thru Apr 15 (allocation begins Jul 1)
  • Jun 15 thru Jul 15 (allocation begins Oct 1)
  • Sep 15 thru Oct 15 (allocation begins Jan 1)

How does Tufts participate in XSEDE? Currently, RGTS (Research and Geospatial Technology Services) aids in disseminating information regarding available XSEDE resources, how to apply for allocations, and basic information regarding access, usage, and file transfer. As part of Tufts' connection to Internet2, we have high-speed access to systems on XSEDE.

How to get help? Two avenues are available for receiving help.

  • Local: You can open a ticket at or email. Please specifically mention XSEDE so the ticket gets routed properly to RGTS (Research and Geospatial Technology Services). It is very important that we know locally who is accessing XSEDE and what questions you have.
  • XSEDE: For escalation to the XSEDE helpdesk, you can go to or email, either of which will open a ticket. You can also reach the XSEDE helpdesk 24/7 by telephone at 1-866-907-2383.

How to access resources? Check the XSEDE user guides for each system in question. Generally speaking, most XSEDE systems are configured in a similar fashion and are accessed via ssh, like other HPC systems around the world.
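As a sketch of what ssh access typically looks like (the host name and username below are placeholders, not real XSEDE endpoints; consult each system's user guide for the actual login node), a per-host entry in ~/.ssh/config keeps logins short:

```shell
# Hypothetical example: host name and user are placeholders.
# Written to a temp file here for illustration; in practice this
# entry would go in ~/.ssh/config.
cat > /tmp/xsede_ssh_config <<'EOF'
Host xsede-system
    HostName login.example-xsede-system.org
    User my_xsede_username
EOF
# With this entry in ~/.ssh/config, logging in is simply:
#   ssh xsede-system
```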

Can I log in with Tufts credentials, i.e. a UTLN? No. XSEDE does not allow logins using authentication credentials from other institutions, though the issue is being examined and this may become available in the future.

How to copy files? Similar to using ssh to log in, you can use sftp, scp, and rsync over ssh to copy files to a system on XSEDE. If your files are on the Tufts compute cluster, they will copy over to XSEDE much faster than from a typical desktop, workstation, or laptop on wifi or a wired 1Gb network drop. Please feel free to contact Tufts staff to make sure your file copy procedures are as efficient as possible.

  • Is Globus supported at Tufts as a file transfer mechanism? Currently no, though this is being examined. Globus is based upon GridFTP and allows for higher-speed transfer, via multiple data paths, than typical tools such as scp, sftp, and rsync. This is of particular importance as files and datasets get larger. Globus also allows for disconnected transfer and restart of large file copies.
  • Is Aspera supported at Tufts as a file transfer mechanism? Aspera is not deployed across XSEDE, but we will keep apprised of the matter and update this page as appropriate.

How do I choose between the Tufts HPC (High Performance Compute) cluster and XSEDE? The decision is not mutually exclusive, as both are general-purpose clusters running a wide variety of software for an equally broad user base. Typically, if you are running custom code, it should be compiled separately for each system.

