The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

  • Teraflops: 60+ (more than 60 trillion floating-point operations per second)
  • CPU: 4,000 cores
  • GPU: 6,784 cores
  • Interconnect: 40 Gb low-latency Ethernet

For additional information, please contact Research Technology Services at


A short orientation to the cluster: what Tufts HPC is and is not

High Performance Computing (HPC) refers to providing massive computing resources for tasks that are not suited to desktops, laptops, iPads, or other portable devices. The Tufts Cluster is designed as a typical research cluster, emphasizing resource scalability, redundancy, and consistency with other HPC centers. Compute resources are shared among users, and access to compute nodes is allocated by a job-scheduling program that provides different levels of service.

The Tufts cluster does not support the following:

  • Microsoft Excel, word processing, Microsoft Access, or Microsoft compilers
  • Web access; for example, there is no Web interface you can visit to use the cluster
  • Desktop integration: the cluster does not see local devices on your computer, such as printers
  • Adobe products
  • X11 desktops (various graphical interfaces)
  • Remote desktops

The Tufts cluster does support the following:

  • Red Hat Linux via a variety of command-line environments known as shells (bash, csh, tcsh, ...)
  • Public-domain research codes written in C, C++, Fortran, Python, Perl, and Java
  • Various compilers and language environments, such as GNU C and C++, Portland Group compilers, Intel compilers, Python, Perl, Java, and Lisp
  • Popular commercial software packages such as Matlab, Ansys, Abaqus, Mathematica, Maple, Comsol, Tecplot, and others
  • Parallel computation via several approaches: threads, MPI, and GPU
  • Distributed computing tasks via the Platform LSF job scheduler
  • High-performance network-attached storage