The Tufts High Performance Compute (HPC) cluster delivers 35,845,920 CPU hours and 59,427,840 GPU hours of free compute time per year to the user community.

  • Teraflops: 60+ (60+ trillion floating point operations per second)
  • CPU: 4,000 cores
  • GPU: 6,784 cores
  • Interconnect: 40 Gb low-latency Ethernet

For additional information, please contact Research Technology Services at tts-research@tufts.edu



UIT Research Computing Resources

For additional information, please contact Lionel Zupan, Associate Director for Research Computing, at x74933 or via email Lionel.Zupan@Tufts.edu.

Tufts UIT Research computing options

  • High-performance computing research cluster
  • Bioinformatics server
  • CarmaWeb server
  • Visualization Center
  • GIS Center

See overview for additional information about UIT Academic Technology Research Services.

1. Tufts High-performance computing research cluster

What is a Cluster?

Cluster computing is the result of connecting many local computers (nodes) via a high-speed network to provide a single shared resource. This distributed processing system allows complex computations to run in parallel, with tasks shared among the individual processors and their memory. Applications written for clusters break large computational tasks into smaller components that can run in serial or parallel across the cluster nodes, dramatically reducing the time required to process large problems and complex tasks. For more specifics check this IBM document.
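
To make the idea concrete, the toy sketch below (illustrative only, not cluster-specific code) uses Python's multiprocessing module to split one large summation into chunks that run on separate CPU cores and then combines the partial results. On the cluster, the same divide-and-combine pattern is applied across many nodes, typically via MPI libraries or by submitting many independent jobs through the scheduler.

    # Toy illustration: split one big summation into chunks that run on
    # separate CPU cores, then combine the partial results.
    from multiprocessing import Pool

    def partial_sum(bounds):
        """Sum of squares of the integers in [start, stop)."""
        start, stop = bounds
        return sum(i * i for i in range(start, stop))

    if __name__ == "__main__":
        n = 10_000_000
        workers = 4
        step = n // workers
        # Last chunk absorbs any remainder so the full range [0, n) is covered.
        chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
                  for i in range(workers)]
        with Pool(workers) as pool:
            print(sum(pool.map(partial_sum, chunks)))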

Tufts Linux Research Cluster

The Tufts Linux Research Cluster is composed of 56 IBM Linux systems (compute nodes) interconnected via an InfiniBand network. Each cluster node has eight or twelve 2.8 GHz Intel Xeon cores and 16, 32, or 50 gigabytes of memory. The 468 compute cores have a combined capacity of about 5 teraflops. Each node runs an identically configured Red Hat Enterprise Linux 5 operating system. In addition, there is a login node and a management node supporting the compute node array. Client/user workstations access the cluster via the Tufts Network or remotely with ssh. The user/login node has an additional network interface that connects to the compute nodes using private, non-routable IP addressing over the InfiniBand network hardware. This scheme allows the compute nodes to be treated as a "virtualized" resource managed by the queueing software LSF and abstracted away behind the login node. This approach also allows the cluster to scale to a large number of nodes and provides the structure for future growth.

The login node of the cluster is reserved for running compilers and shell tools and for launching and submitting programs to compute nodes. It is not intended for long-running programs; for computation, please use the compute nodes and the various queues.
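
As a concrete illustration of this workflow, the sketch below shows a small, self-contained Python batch job and the kind of bsub command used to submit it from the login node. The queue name and the availability of Python on the compute nodes are assumptions, not actual cluster settings; run bqueues to see the queues actually configured, and load any required software through the modules environment described later on this page.

    # estimate_pi.py -- a small batch job meant to run on a compute node,
    # prepared on the login node and submitted with something like:
    #   bsub -q normal -n 1 -o pi.%J.out python estimate_pi.py
    # ("normal" is a placeholder queue name; `bqueues` lists the real ones.)
    import random

    def estimate_pi(samples):
        """Monte Carlo estimate of pi from the fraction of random points
        that land inside the unit quarter-circle."""
        inside = sum(1 for _ in range(samples)
                     if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * inside / samples

    if __name__ == "__main__":
        print(estimate_pi(5_000_000))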

Grant Applications related information

Content to support applications can be found here.

Cluster User Accounts

Click Account Information for additional information about cluster accounts.

Contribute your own nodes to the new research cluster

Researchers who need their own high-performance computing (HPC) resources (and are applying for external grant funding to do so) may wish to consider contributing additional nodes to the research cluster rather than developing and supporting their own HPC infrastructure. The research cluster has been designed to allow this kind of compute node expansion. The obvious advantages to researchers are that they do not have to support a separate computing resource or obtain additional licensing, and they receive priority access to the nodes they contribute.

In order to participate, additional nodes need to be of a certain kind, consistent with the current cluster design (as described above). In addition, a special LSF queue will be structured to allow one or more designated researchers priority access to the contributed cores. In return, when those cores are unused, they will become part of the larger pool of LSF-managed compute node resources available to the Tufts research community.

For additional information, please contact Lionel Zupan, Associate Director for Research Computing, at x74933 or via email Lionel.Zupan@Tufts.edu.

Research Cluster Restrictions

Conditions of use of the research cluster include, but are not limited to, the following expectations. Additional related details may be found throughout this page.

Expectations

  • no user root access
  • supported OS is Red Hat Enterprise Linux 5
  • no user ability to reboot node(s)
  • all cluster login access is via the headnode
  • no user machine-room access to cluster hardware
  • no alternative Linux kernels other than the one provided by RHEL 5
  • no access to InfiniBand or Ethernet network hardware or software
  • no user cron or at access
  • no user servers/daemons such as HTTP, FTP, etc.
  • all user jobs destined for compute nodes are submitted via LSF's bsub command
  • all compute nodes follow one naming convention
  • only UIT NFS storage is supported
  • unused contributed node CPU time reverts to the cluster user community
  • no user-contributed direct-connect storage
  • only limited outgoing Internet access from the headnode is allowed; exceptions must be reviewed
  • allow a 2-week turnaround for software requests
  • only user home directories are backed up
  • temporary public storage file systems have no quota and are subject to automated file deletion
  • cluster quality of service is managed through LSF queues and priorities
  • the cluster does not export file systems to user desktops

Software request policy

Please send your request via email to cluster-support@tufts.edu and address the following questions:

  • What is the name of the software?
  • Where can additional information about the software be found?
  • Who are the intended users of the software?
  • When is it needed?
  • Will it be used in support of a grant, and if so, which grant?
  • What special requirements, if any, are needed?

Note: please allow up to two weeks for requests to be processed.

Recent Cluster News

Click News

Cluster Storage Options

Click here for details.

Network Concurrent Software Licenses

Click here

Support venue

If you have any questions about cluster related usage, applications, or assistance with software, please contact cluster-support@tufts.edu.

If you wish to provide general comments, use the feedback page found here.

MODULES: Cluster software environment

Click here

Installed Cluster Software

Click here

Compilers, Editors, etc...

Click here

Frequently Asked Questions - FAQs:

Cluster Connections/Logins

Click here

Parallel programming related information

Click here

Account related FAQs:

Click here

X based graphics FAQs

Click here

Application specific Information FAQs

Click here

Linux and LSF information FAQs

Click here

Compilation FAQs

Click here

Miscellaneous FAQs

Click here

How do Tufts students and faculty make use of the cluster?

See How

2. Bioinformatics services

A separate server is used to support these services in some cases; however, some software may require installation on the Linux research cluster. Check the Installed Software link on this page for bioinformatics software available on the cluster. To make a special request for software installation, please follow the software request policy noted above.

EMBOSS services can be found here

3. Tufts Center for Scientific Visualization (or VisWall)


A description may be found here. The user guide is available here.

See the attached EDUCAUSE document on visualization for general information and the direction this technology provides. You may find it under the "Page Operations" section of this wiki page, above.

The research cluster is available to VisWall users who need additional computational resources. Current connectivity follows standard practice using ssh and X11 forwarding: VisWall users with a cluster account may forward cluster-based application graphics output for display on the VisWall. As of mid-May 2009, a new, dedicated 10-gigabit fiber network connects the cluster nodes with the VisWall, providing a very high-speed connection for moving large amounts of data.
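
As a minimal sketch of that workflow, the short Python check below verifies that X11 forwarding is active (i.e., that DISPLAY is set) before a graphical application is launched on the cluster for display on the VisWall. The hostname in the comment is a placeholder, not the actual cluster address.

    # check_display.py -- confirm X11 forwarding is active before launching a
    # graphical application whose output should appear on the VisWall.
    # Typical connection (hostname is a placeholder for the real login node):
    #   ssh -X your_username@<cluster-login-node>
    import os
    import sys

    display = os.environ.get("DISPLAY")
    if display:
        print("X11 forwarding looks active; DISPLAY=" + display)
    else:
        sys.exit("DISPLAY is not set; reconnect with `ssh -X` (or `ssh -Y`).")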

Visit Tufts CS faculty member Alexandre Francois's wiki page for 3D stereo VisWall demonstration programs and discussion.

Monthly training classes on the use of the facility can be found here

4. GIS Center


Several GIS links can be found here.

The Tufts research cluster indirectly supports GIS spatial statistical computation through the modern spatial statistics packages available in R. This is a useful resource for complex estimation tasks, long runtimes, or jobs requiring more memory than is typically available on desktop workstations. R packages such as the following are available:

fields, ramps, spatial, geoR, geoRglm, RandomFields, sp, spatialCovariance, spatialkernel, spatstat, spBayes, splancs

For additional information please contact cluster-support@tufts.edu.

5. Tufts ICPSR data subscription

The Inter-university Consortium for Political and Social Research (ICPSR) is a unit of the Institute for Social Research at the University of Michigan. ICPSR was established in 1962 to serve social scientists around the world by providing a central repository and dissemination service for computer-readable social science data, training facilities in basic and advanced techniques of quantitative social analysis, and resources that facilitate the use of advanced computer technology by social scientists.

The Tufts community may obtain research data and related web services from the ICPSR while their computer is within the Tufts network domain; this is required for license authentication purposes. Special-case exceptions are possible but need to be arranged ahead of time. For additional information please contact cluster-support@tufts.edu.
