[Image: compute nodes in the Teal cluster]

The VCU High Performance Research Computing (HPRC) Core Facility occupies approximately 2,000 sq ft of total space, predominantly on the third floor of Harris Hall on the Monroe Park Campus. The mission of the HPRC is to provide high-performance computing services for the VCU research community. To accomplish this goal, the HPRC maintains four major supercomputing clusters, each specialized for a different computing environment. They may be summarized as follows (descriptions current as of April 2022):

  • teal.hprc.vcu.edu is the primary cluster intended for large-scale parallel computing, and is especially well suited for applications such as molecular dynamics simulations, quantum chemistry, and other physical-sciences jobs. Teal consists of ~4,500 64-bit Intel and AMD compute cores with 2-4 GB of RAM per core, 10.2 TB of total RAM, 180 TB of /home space, and tmp space of between 360 and 787 GB per node. High-speed network infrastructure is provided by a 20 Gb/second InfiniBand architecture.
  • huff.hprc.vcu.edu is the newest cluster for large-scale parallel and distributed computing. Huff includes regular computing nodes, large memory nodes, and GPU nodes. Huff uses new processors (AMD EPYC2) with a total of 2300 cores, new high-performance storage (a 2.1 PB Lustre filesystem) and integrates three new GPU systems (each with dual 32GB V100 GPUs).
  • godel.hprc.vcu.edu is a cluster optimized for bioinformatics applications, with 1,768 64-bit Intel and AMD cores with at least 3 GB of RAM per core, 4.8 TB of total RAM, 17 TB of /home space, tmp space of at least 180 GB per node, 40 Gb/second InfiniBand networking, and 1.2 TB of GPFS high-performance parallel file system storage.
  • fenn.hprc.vcu.edu is a cluster designed to support research using data that must comply with federal security and privacy requirements, with 1,016 64-bit Intel cores, 2 GB of RAM per core, 840 TB of GPFS high-performance parallel file system storage (expandable to 2.2 PB), and 54 Gb/second InfiniBand networking.

These clusters are collectively served by over 1.9 PB of networked NFS and GPFS high-speed storage.
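As a quick back-of-envelope illustration, the quoted totals for a cluster can be cross-checked against its per-core figures. The sketch below uses only the Teal numbers given above (core count is approximate, as quoted):

```python
# Sanity check: Teal's total RAM spread across its cores should fall
# inside the quoted 2-4 GB per core range.
teal_cores = 4500          # ~4,500 64-bit compute cores
teal_total_ram_tb = 10.2   # 10.2 TB of total RAM

gb_per_core = teal_total_ram_tb * 1024 / teal_cores
print(f"{gb_per_core:.2f} GB RAM per core")  # ~2.32, within the 2-4 GB range
```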
[Image: GPU systems]

To support this infrastructure, the HPRC currently has 5 FTE staff positions (Alberto Cano, Faculty Director; J. Mike Davis, Technical Director; Carlisle Childress & Brad Freeman, Systems Analysts; and John Layne, Applications Analyst). In addition to maintaining the hardware, the HPRC works collaboratively with the user base to maintain and optimize a large number of applications and development tools (BLAST, R, MATLAB, NAMD, Gaussian, Gromacs, Charm, and C/C++ and Fortran compilers), as well as other scientific, statistical, and development software.
