Afton Cluster Dedicated to Prof. John Hawley
On September 16, 2024, RC dedicated the new Afton computing cluster to the memory of John F. Hawley (1958-2021), late Professor of Astronomy and a leading researcher in computational astrophysics. He also served in the Office of the Dean of the College and Graduate School of Arts and Sciences for nine years, first as Associate Dean for the Sciences and later as Senior Associate Dean for Academic Affairs. The ceremony featured remarks by Josh Baller, Associate Vice President for Research Computing, and Scott Ruffner, Director of Infrastructure for Research Computing, along with a recorded message from Provost Ian Baucom.
High-Security Standard Storage Maintenance: Oct 15, 2024
The Ivy Virtual Machines (VMs) and the high-security zone (HSZ) HPC system will be down for storage maintenance on Tuesday, Oct 15, 2024, beginning at 6 a.m. The system is expected to return to full service by 6 a.m. on Wednesday, Oct 16.
IMPORTANT MAINTENANCE NOTES
During the maintenance, all VMs will be down, as will the UVA Ivy Data Transfer Node (DTN) and Globus services. The High-Security HPC cluster will also be unavailable for all job scheduling and viewing.
If you have any questions about the upcoming Ivy system maintenance, you may contact our user services team.
Ivy Central Storage transition to HSZ Research Standard
To transition away from old storage hardware, we have retired Ivy Central Storage and replaced it with the new High-Security Zone Research Standard storage.
HPC Maintenance: Oct 15, 2024
The HPC cluster will be down for maintenance on Tuesday, Oct 15, 2024, beginning at 6 a.m. You may continue to submit jobs until the maintenance period begins, but if the system determines your job will not have time to finish, it will not start until the cluster is returned to service.
All systems are expected to return to service by Wednesday, Oct 16 at 6 a.m.
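If you want a job to start before the outage rather than wait in the queue, one option is to request only as much walltime as remains before the maintenance window. The sketch below is illustrative only, not an RC-provided tool; it assumes Slurm's sbatch command is available on the login nodes and uses job.slurm as a placeholder name for your own batch script.

```python
#!/usr/bin/env python3
"""Cap a job's walltime so it can finish before the Oct 15 maintenance.

A minimal sketch, not an RC-provided tool: it assumes Slurm's `sbatch`
is on your PATH and that `job.slurm` is a placeholder for your script.
"""
from datetime import datetime
import subprocess

MAINTENANCE_START = datetime(2024, 10, 15, 6, 0)  # Tuesday, Oct 15, 6 a.m.

def submit_before_maintenance(script="job.slurm"):
    remaining = MAINTENANCE_START - datetime.now()
    whole_hours = int(remaining.total_seconds() // 3600)
    if whole_hours < 1:
        raise SystemExit("Less than an hour before maintenance; submit after Oct 16.")
    # Request no more walltime than remains before the outage, so the
    # scheduler can start the job now instead of holding it until Oct 16.
    subprocess.run(["sbatch", f"--time={whole_hours}:00:00", script], check=True)

if __name__ == "__main__":
    submit_before_maintenance()
```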
IMPORTANT MAINTENANCE NOTES
Expansion of /home
To transition away from the Qumulo filesystem, we will migrate all /home directories to the GPFS filesystem and automatically increase each user's /home directory limit to 200GB.
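To see how close you are to the new 200GB limit, a rough usage check like the one below may help. It is a sketch for illustration only; actual quota accounting is done by the filesystem and may count space differently than a simple file-size walk.

```python
#!/usr/bin/env python3
"""Rough estimate of /home usage against the 200GB limit.

Illustrative sketch only; real quota enforcement is done by the
filesystem and may differ from a file-size walk.
"""
import os
from pathlib import Path

LIMIT_GB = 200

def home_usage_gb(top=Path.home()):
    total_bytes = 0
    for root, _dirs, files in os.walk(top):
        for name in files:
            try:
                total_bytes += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass  # skip unreadable or vanished files
    return total_bytes / 1e9

if __name__ == "__main__":
    used = home_usage_gb()
    print(f"~{used:.1f} GB used of the {LIMIT_GB} GB /home limit")
```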
Research Computing Open House 2024
UPDATE: The Research Computing Open House was held on a blustery, rainy day, but the spirits of staff and attendees were not dampened. Turnout exceeded expectations despite the wet weather, and attendees enjoyed the buffet and their conversations with RC staff.
The winners of the random-drawing prizes were:
Maria Luana Morais, SOM
Matt Panzer, SEAS
Artun Duransoy, SEAS
HPC Maintenance: Aug 13, 2024
The HPC cluster will be partially down for maintenance on Tuesday, Aug 13, 2024 beginning at 6 a.m. The following nodes will be unavailable during this period:
all of the parallel partition
Afton nodes in the standard and interactive partitions
A40 GPU nodes in the gpu partition
The nodes are expected to return to service by Wednesday, Aug 14 at 6 a.m.
There is no impact on other nodes or services. Jobs on other nodes will continue to run.
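To check which partitions still have nodes available while the affected hardware is offline, something like the following can be run from a login node. It is a sketch that assumes Slurm's sinfo command is available and uses the partition names mentioned above (parallel, standard, interactive, gpu).

```python
#!/usr/bin/env python3
"""Show partition availability during the partial outage.

A minimal sketch assuming Slurm's `sinfo` is available on the login
nodes and that the partition names below match the announcement.
"""
import subprocess

PARTITIONS = "parallel,standard,interactive,gpu"

# %P partition, %a availability, %t node state, %D node count
subprocess.run(
    ["sinfo", "-p", PARTITIONS, "--format=%P %a %t %D"],
    check=True,
)
```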
Reinstatement of file purging of personal /scratch files on Afton and Rivanna
On Sep 1, 2024, RC system engineers will reinstate a file purging policy for personal /scratch folders on the Afton and Rivanna high-performance computing (HPC) systems. From Sep 1 forward, scratch files that have not been accessed for over 90 days will be permanently deleted on a daily rolling basis. This is not a new policy; it is a reactivation of an established policy that follows general HPC best practices.
The /scratch filesystem is intended as a temporary work directory. It is not backed up and old files must be removed periodically to maintain a stable HPC environment.
Key Points:
Purging of personal scratch files will start on Sep 1, 2024.
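To see which of your files would fall under the policy, you can look for anything not accessed in the last 90 days. The sketch below is illustrative and assumes personal scratch lives at /scratch/$USER; adjust the path if your layout differs.

```python
#!/usr/bin/env python3
"""List personal /scratch files not accessed in the last 90 days.

Illustrative sketch; assumes personal scratch is at /scratch/$USER.
"""
import os
import time

SCRATCH = os.path.join("/scratch", os.environ.get("USER", ""))
CUTOFF = time.time() - 90 * 24 * 3600  # 90 days ago

def stale_files(top=SCRATCH):
    for root, _dirs, files in os.walk(top):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.lstat(path).st_atime < CUTOFF:
                    yield path
            except OSError:
                pass  # file may vanish while walking

if __name__ == "__main__":
    for path in stale_files():
        print(path)
```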
Production Release of the Afton HPC System: July 2, 2024
Our new supercomputer, “Afton,” is now available for general use. This is the first major expansion of RC’s computing resources since Rivanna’s last hardware refresh in 2019. Afton represents a substantial increase in the High-Performance Computing (HPC) capabilities available at UVA, more than doubling the available compute capacity. Each of the 300 compute nodes in the new system has 96 compute cores, up from a maximum of 48 cores per node in Rivanna. The increase in core count is complemented by a significant increase in memory per node. Each Afton node boasts a minimum of 750GB of memory, with some supporting up to 1.
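As a rough illustration of what those figures mean per core (using only the numbers quoted above, not official specifications):

```python
# Back-of-the-envelope check using the figures above: 96 cores and at
# least 750GB of memory per Afton node.
afton_cores = 96
afton_min_mem_gb = 750
print(f"~{afton_min_mem_gb / afton_cores:.1f} GB of memory per core")  # ~7.8 GB/core
```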