Using UVA’s High-Performance Computing Systems
Afton is the University of Virginia’s newest High-Performance Computing system. The Afton supercomputer comprises 300 compute nodes, each with 96 compute cores based on the AMD EPYC 9454 architecture, for a total of 28,800 cores. The increase in core count is augmented by a significant increase in memory per node compared to Rivanna: each Afton node has at least 750 gigabytes of memory, and some support up to 1.5 terabytes of RAM. The large amount of memory per node allows researchers to work efficiently with the ever-expanding datasets we are seeing across diverse research disciplines. The Afton and Rivanna systems provide access to 55 nodes with NVIDIA general-purpose GPU accelerators (RTX2080, RTX3090, A6000, V100, A40, and A100), including an NVIDIA BasePOD.
Allocations
Time on Rivanna/Afton is allocated as Service Units (SUs). One SU corresponds to one core-hour, and multiple SUs make up an allocation (e.g., a new allocation = 1M SUs). Allocations are managed through Grouper groups (access requires a VPN connection) that should be created by Principal Investigators (PIs) before they submit an allocation request.
Eligibility and Account Creation
All UVA faculty are eligible to serve as PI and request access to RC services (e.g., storage & HPC allocations, virtual machines, microservices) for their research group. Postdocs and staff are encouraged to use an allocation provided by a faculty sponsor, although they may request their own allocation pending departmental or RC approval.
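Because one SU is one core-hour, the cost of a job scales with both cores and time: for example, a job that runs on 40 cores for 25 hours draws 40 × 25 = 1,000 SUs from the group’s allocation. (Whether specialty hardware such as GPU or large-memory nodes is charged at a different rate is not covered here; consult the current RC documentation for the exact charging policy.)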
ACCORD: Jupyter Lab
Jupyter Lab allows for interactive, notebook-based analysis of data. It is a good choice for pulling quick results or refining your code in numerous languages, including Python, R, Julia, bash, and others.
Learn more about Jupyter Lab
ACCORD: RStudio
RStudio is the standard IDE for research using the R programming language.
Learn more about RStudio
ACCORD: Theia IDE
Theia Python is a rich IDE that allows researchers to manage their files and data, write code with an intelligent editor, and execute code within a terminal session.
Learn more about the Theia Python IDE
ACCESS: Advanced Cyberinfrastructure Coordination Ecosystem: Services and Support
The NSF’s ACCESS (Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support) program builds upon the successes of the 11-year XSEDE project, while also expanding the ecosystem with capabilities for new modes of research and further democratizing participation.
ACCESS Home: access-ci.org and access-ci.org/about
Allocations: allocations.access-ci.org
Documentation & Support: support.access-ci.org
Community Engagement: support.access-ci.org/affinity-groups
Campus Champions: https://campuschampions.cyberinfrastructure.org
UVA Research Computing has two Champions, Ed Hall and Katherine Holcomb. For more help, please feel free to contact RC staff to set up a consultation or visit us during office hours.
XSEDE: Extreme Science and Engineering Development Environment
XSEDE’s mission was to substantially enhance the productivity of a growing community of scholars, researchers, and engineers through access to advanced digital services that support open research, and to coordinate and add significant value to the leading cyberinfrastructure resources funded by the NSF and other agencies. The XSEDE project ended on August 31, 2022 and was succeeded by the ACCESS project.
XSEDE Home: www.xsede.org
Rivanna and Afton FAQs
Topics: General Usage, Allocations, Research Software, Job Management, Storage Management, Data Transfer, Downloading Files, and Other Questions.
General Usage
How do I gain access to Rivanna/Afton? A faculty member must first request an allocation on the HPC system. Full details can be found here.
How do I log on to Rivanna/Afton? Use an SSH client from a campus-connected machine and connect to login.hpc.virginia.edu. Instructions for using ssh and other login tools, as well as recommended clients for different operating systems, are here. You can also access the HPC system through our Web-based interface Open OnDemand or FastX.
Off Campus? Connecting to Rivanna and Afton HPC systems from off Grounds via Secure Shell Access (SSH) or FastX requires a VPN connection.
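For example, from a terminal on a campus-connected machine (or over the VPN), a login session looks like the following, where mst3k is a placeholder for your UVA computing ID:

ssh mst3k@login.hpc.virginia.edu

You will be prompted for your password (your Eservices password; see below) and then placed on one of the login nodes.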
Graphical SFTP/SCP Transfer Tools
Several options are available to transfer data files between a local computer and the HPC system through user-friendly, graphical methods.
Off Campus? Connecting to Rivanna and Afton HPC systems from off Grounds via Secure Shell Access (SSH) or FastX requires a VPN connection. We recommend using the UVA More Secure Network if available. The UVA Anywhere VPN can be used if the UVA More Secure Network is not available. Only Windows and macOS operating systems are supported by the Cisco client provided by ITS. Linux users should refer to these unsupported instructions to install and configure a VPN. The More Secure Network requires authentication through Duo; users should follow the instructions on the dialog box.
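Transfers can also be scripted from a command line; a minimal sketch, where mst3k is a placeholder for your computing ID and results.csv a placeholder file name:

scp results.csv mst3k@login.hpc.virginia.edu:~/

This copies the file into your home directory on the HPC system; sftp mst3k@login.hpc.virginia.edu works the same way for interactive transfer sessions.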
Logging in to the UVA HPC systems
The UVA HPC systems (Rivanna and Afton) are accessible through a web portal, secure shell terminals, or a remote desktop environment. For all of these access points, your login is your UVA computing ID and your password is your Eservices password. If you do not know your Eservices password, you must change it through ITS.
MobaXterm
MobaXterm is the recommended login tool for Windows users. It bundles a tabbed ssh client, a graphical drag-and-drop sftp client, and an X11 window server for Windows, all in one easy-to-use package. Some other tools included are a simple text editor with syntax coloring and several useful Unix utilities such as cd, ls, grep, and others, so that you can run a lightweight Linux environment on your local machine as well as use it to log in to a remote system.
Download
To download MobaXterm, click the link below. Select the “Home” version, “Installer” edition.
Download MobaXterm
Run the installer as directed.
Slurm Job Manager
Overview
UVA HPC is a multi-user, managed environment. It is divided into login nodes (also called frontends), which are directly accessible by users, and compute nodes, which must be accessed through the resource manager. Users prepare their computational workloads, called jobs, on the login nodes and submit them to the job controller, a component of the resource manager that runs on the login nodes and is responsible for scheduling jobs and monitoring the status of the compute nodes.
We use Slurm, an open-source tool that manages jobs for Linux clusters.
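As a concrete illustration, a minimal batch script might look like the following sketch. The account and partition names (myallocation, standard) are placeholders, and the module load line assumes a Python module is available; check the RC documentation for the values that apply to your group.

#!/bin/bash
#SBATCH --account=myallocation   # SUs are charged to this allocation (placeholder name)
#SBATCH --partition=standard     # placeholder partition name; see RC docs for options
#SBATCH --nodes=1                # one compute node
#SBATCH --ntasks=1               # one task (process)
#SBATCH --cpus-per-task=4        # four cores for that task
#SBATCH --time=01:00:00          # one hour of wall time
#SBATCH --job-name=myjob

module load python               # load software through the module system
python myscript.py               # the actual computation

Saved as job.slurm, the script is submitted from a login node with sbatch job.slurm, and its status can be checked with squeue -u $USER. On Slurm systems generally, GPU nodes such as those described above are requested with an additional directive like --gres=gpu:1; the exact syntax and partition depend on the site configuration.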