
The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

  • Teraflops: 60+ (60+ trillion floating-point operations per second)
  • CPU: 4,000 cores
  • GPU: 6,784 cores
  • Interconnect: 40 Gb/s low-latency Ethernet

For additional information, please contact Research Technology Services at


Grant Application Information

Institutional Computing Resources

Grant project support may leverage the research computing capacity of Tufts University to provide a dedicated computing cluster and access to the Tufts research storage area network (SAN).

The Tufts University Linux Research Cluster comprises Cisco, IBM, and Penguin hardware running the Red Hat Enterprise Linux 6.9 operating system. Nodes are interconnected via a 10 Gb/s network, with planned expansion to 100 Gb/s and 200 Gb/s Ethernet and InfiniBand. Memory configurations range from 32 GB to 1 TB per node (32 GB, 128 GB, 256 GB, 384 GB, 512 GB, and 1 TB), and each node has 16, 20, 40, or 72 cores. There are five GPU nodes with 12 NVIDIA cards, including Tesla K20Xm and P100, with V100 cards to be added soon. The system is managed via the SLURM scheduler and provides a total of 7,636 CPU cores, 32 TB of memory, and 41,216 GPU cores.
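As an illustration of how users typically submit work to a SLURM-managed cluster such as this one, a minimal batch script might look like the following sketch. The partition name, module name, and script name below are hypothetical placeholders, not Tufts-specific values:

```shell
#!/bin/bash
#SBATCH --job-name=example        # job name shown in the queue
#SBATCH --partition=batch         # hypothetical partition name
#SBATCH --nodes=1                 # run on a single node
#SBATCH --ntasks=1                # one task
#SBATCH --cpus-per-task=4         # four CPU cores for the task
#SBATCH --mem=8G                  # 8 GB of memory
#SBATCH --time=01:00:00           # one-hour wall-clock limit

# Load software via environment modules (module name is illustrative)
module load python

# Run the workload
python my_analysis.py
```

A script like this would be submitted with `sbatch myjob.sh` and monitored with `squeue -u $USER`.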

In addition to the research cluster, this resource is supported by Tufts' research networked storage infrastructure. Storage consists of a dedicated 600 TB parallel file system (GPFS) along with 600 TB of object storage (DDN WOS) for archival purposes. Dedicated login, management, file transfer, compute, storage, and virtualization nodes are available on the cluster, all connected via a dedicated network infrastructure. Users access the system locally and remotely through SSH clients as well as a number of scientific gateways and portals, enabling access not only for experienced users but also for emerging interest across all domains. The system was also one of the first to support Singularity, the emerging container standard for high-performance computing, which has proven popular among users of machine learning and deep learning software stacks. Web-based access is provided via the Open OnDemand (OOD) web portal software, which Tufts has helped port, test, and deploy along with other HPC centers.
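As a sketch of the container workflow described above, a user might pull a published image and run it on a compute node roughly as follows. The image and script names are illustrative examples, not a Tufts-maintained image:

```shell
# Pull a container image from a public registry into a local image file
# (image name is an illustrative example)
singularity pull docker://tensorflow/tensorflow:latest

# Run a command inside the container; --nv exposes the host's NVIDIA GPUs
singularity exec --nv tensorflow_latest.sif python train.py
```

On a SLURM-managed system, the `singularity exec` step would normally be placed inside a batch script rather than run on a login node.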

Tufts Technology Services (TTS) has been a leader in the adoption not only of tools and technology but also of training, workshop, and outreach programs within the university, at both the regional and national level. Our staff continually participate in the Advanced Cyber Infrastructure – Research and Education Facilitators (ACI-REF) program, the Practice & Experience in Advanced Research Computing (PEARC) conference series, Supercomputing (SC), and the eXtreme Science and Engineering Discovery Environment (XSEDE) programs. As a participant in XSEDE, Tufts has been involved in the Campus Champions, Student Champions, leadership team, and Region 7 (New England) programs. In the past year Tufts has provided hundreds of hours of training and workshops on research computing via institutional instructors, XSEDE workshops, and intensive boot camps from partners such as the Pittsburgh Supercomputing Center (PSC) and the Petascale Institute from the National Energy Research Scientific Computing Center (NERSC).

Network Security in Relation to the Research Cluster and Storage Services

Tufts University maintains a distributed information technology environment, with central as well as local aspects of overall planning and control. Tufts' information security program is structured in a similar manner. Operationally, Tufts' central IT organization (TTS) and each local IT group maintain standards of quality and professionalism regarding operational processes and procedures that enable effective operational security. For TTS-managed systems, the emphasis is on centralized resources such as administration and finance, telecommunications, research computing and networking, systems and operations, as well as directory, email, LDAP, calendaring, storage, and Windows domain services. TTS also provides data center services and backups for all of these systems. Additionally, a large number of management systems (for patching), anti-virus, and firewall services are centrally provided and/or managed by TTS. Within TTS, processes and procedures exist for managed infrastructure changes, as change control is required for all critical central systems. Tufts University provides anti-virus software for computers owned by the University, and makes anti-virus software available at no charge for users who employ personally owned computers in the course of their duties at the University.

The Tufts Research Storage service is based on a Network Appliance (NetApp) storage infrastructure located in the Tufts Administration Building (TAB) machine room. Provisioned storage is NFS (Network File System) mounted on the Research Computing Cluster for project access. NFS exports are not exported outside of TTS-managed systems. The Tufts Research Computing Cluster is also co-located within TAB's machine room. Network-based storage is connected to the cluster via a private (non-public) network connection.

Access to the Tufts IP network itself is controlled via MAC address authentication, which is performed using Tufts login credentials and tracked in the TUNIS Cardinal system; this system uses an 8-character password scheme. A switched (rather than broadcast hub) network architecture is in place, limiting traffic to just the specific ports in use to transport data from source to destination. Access to Tufts LAN network resources is controlled via Active Directory where applicable, or LDAP, which requires the user to authenticate each time a system joins the domain. All of these controls are implemented identically on the wired and wireless Tufts networks.

Both the Research Storage and Linux-based Cluster Compute server operating systems are kept current via sound patch management procedures. For example, PCs owned and managed by Tufts are automatically patched via the Windows Server Update Service. All other computing platforms are required to be on a similar automated patching schedule. From an operational standpoint, most central and local systems are maintained and managed using encrypted communication channels. For UNIX/Linux servers, SSH is utilized; on Windows, Microsoft Terminal Services is utilized. User access to cluster services is via SSH and LDAP. No direct user login access to central Research Storage services is possible.

Additional user-related cluster information can be found here:

All devices and users are subject to the Tufts Acceptable Use Policy, found on the TTS website:

How to Reference the Computing and Storage Resources for Grant Purposes

Please reference this resource as: Tufts High-performance Computing Research Cluster
