Description
ANSYS simulation software enables organizations to confidently predict
how their products will operate in the real world. We believe that every product is
a promise of something greater.
Software Category: cae
Local support is minimal; users should make an account at the student forum through the ANSYS
website for technical support and for obtaining detailed information.
Available Versions
The current installation of ANSYS incorporates the most popular packages. To find the available versions and learn how to load them, run:
module spider ansys
The output of the command shows the available ANSYS module versions.
For detailed information about a particular ANSYS module, including how to load it, run the module spider command with the module's full version label. For example:
module spider ansys/2023r1
| Module | Version | Module Load Command |
|--------|---------|---------------------|
| ansys  | 2023r1  | module load ansys/2023r1 |
Licensing
The current general UVA license can be used for research, but it limits the size of the models you can run, and some of the more advanced features are not available. Users who have their own research licenses with greater capabilities must specify that license. To use such a research license on the UVA HPC system, set the following environment variable before running ANSYS:
export ANSYSLMD_LICENSE_FILE=1055@myhost.mydept.virginia.edu
You may also need
export ANSYSLI_SERVERS=2325@myhost.mydept.virginia.edu
You must obtain the full host names and port numbers from your group's license administrator. The numbers in the lines above are the standard ANSYS ports, but they may differ for some license servers; consult your license administrator for the specific values. The ANSYSLI_SERVERS environment variable is generally not necessary if the default port is used, but ANSYSLMD_LICENSE_FILE is always required.
These environment variables must be set in each shell and in every Slurm script that invokes ANSYS.
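For example, the top of a Slurm script that uses a group research license might look like the following sketch. The host name, port numbers, and account name are placeholders; substitute the values from your license administrator.

```shell
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --partition=standard
#SBATCH -A mygroup

# Placeholder license server; replace with your group's host and ports.
export ANSYSLMD_LICENSE_FILE=1055@myhost.mydept.virginia.edu
export ANSYSLI_SERVERS=2325@myhost.mydept.virginia.edu

module load ansys
```

Because these variables are not inherited from your login shell, they must appear in every script that runs ANSYS, not only the first one.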
Using ANSYS Workbench
If you wish to run jobs using the Workbench, you need to edit the ~/.kde/share/config/kwinrc
file and add the following line:
FocusStealingPreventionLevel=0
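One way to add the setting from the command line is sketched below. It creates the configuration directory if it does not yet exist and appends the line only if it is not already present.

```shell
# Add FocusStealingPreventionLevel=0 to the KDE window-manager config,
# creating the directory and file if necessary.
KWINRC=~/.kde/share/config/kwinrc
mkdir -p "$(dirname "$KWINRC")"
grep -q '^FocusStealingPreventionLevel=' "$KWINRC" 2>/dev/null || \
  echo 'FocusStealingPreventionLevel=0' >> "$KWINRC"
```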
The Workbench application, runwb2, should be executed in an interactive Open OnDemand Desktop session.
When you are assigned a node, launch the desktop, start a terminal, load the desired module, and start the Workbench with the runwb2 command:
module load ansys
unset SLURM_GTIDS
runwb2
Be sure to delete your Open OnDemand session if you finish before your requested time expires.
Multi-Core Jobs
It is possible to run multicore jobs through Open OnDemand. In a terminal, load the ansys module and then run the appropriate package frontend: for general ANSYS applications, including CFX, that is the Workbench; for Fluent, run fluent to start its graphical interface. Choose the "Parallel Options" tab to set up a run. Be sure to use only the number of cores you requested when you launched the OOD Desktop.
For longer jobs, and for all multinode jobs, you should run in batch mode using a Slurm script. Please refer to the ANSYS documentation for instructions on running from the command line. These examples use threading to run on multiple cores of a single node.
ANSYS Slurm Script:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=8
#SBATCH --time=12:00:00
#SBATCH --partition=standard
#SBATCH -J myANSYSrun
#SBATCH -A mygroup
#SBATCH --output=myANSYSrun.txt
mkdir -p /scratch/$USER/myANSYSrun
cd /scratch/$USER/myANSYSrun
module load ansys/2023r1
ansys231 -np ${SLURM_CPUS_PER_TASK} -def /scratch/yourpath/yourdef.def -ini-file /scratch/yourpath/yourresfile.res
CFX Slurm Script:
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --cpus-per-task=20
#SBATCH --partition=standard
#SBATCH -J myCFXrun
#SBATCH -A mygroup
#SBATCH --output=myCFXrun.txt
module load ansys/2023r1
cfx5solve -double -def /scratch/yourpath/mydef.def -par-local -partition "$SLURM_CPUS_PER_TASK"
Multi-Node MPI Jobs
For Fluent, specify -mpi=intel along with the -srun flag to dispatch the MPI tasks using Slurm's task launcher. If more than the default memory per core is required, it is generally better with ANSYS and related products to request the total memory over all processes rather than using --mem-per-cpu, because an individual process can exceed the allowed memory per core. Please refer to our documentation for current information about the default memory per core in each partition.
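For example, the request below asks for a total memory per node with --mem rather than a per-core limit. The 300G figure is purely illustrative; size it to your model.

```shell
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH --mem=300G   # total memory per node; preferred over --mem-per-cpu for ANSYS
```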
These examples also show the minimum number of command-line options; you may require more for large jobs.
Fluent Slurm Script:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=40
#SBATCH --time=12:00:00
#SBATCH --partition=parallel
#SBATCH -J myFluentrun
#SBATCH -A mygroup
#SBATCH --output=myFluentrun.txt
NODEFILE="$(pwd)/slurmhosts.$SLURM_JOB_ID.txt"
srun hostname -s >> $NODEFILE
module load ansys/2023r1
fluent 3ddp -g -t${SLURM_NTASKS} -cnf=$NODEFILE -srun -pinfiniband -mpi=intel -i myjournalfile.jou
The syntax for CFX is different and includes a “start-method.” We recommend Intel MPI. Please refer to documentation for other options that may be required.
CFX Slurm script:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20
#SBATCH --partition=parallel
#SBATCH -J myCFXrun
#SBATCH -A mygroup
#SBATCH --output=myCFXrun.txt
module load ansys/2023r1
#Convert the node information into format for CFX host list
nodes=$(srun hostname -s | sort | \
uniq -c | \
awk '{print $2 "*" $1}' | \
paste -sd, -)
cfx5solve -batch -def /scratch/yourpath/mydef.def -par-dist $nodes -start-method "Intel MPI Distributed Parallel"
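To see what the host-list pipeline above produces, you can feed it a sample task-per-line listing by hand. The hostnames below are made up; in a real job the input comes from srun.

```shell
# Simulate `srun hostname -s` output: one line per MPI task.
# Count tasks per host and build the CFX host*count list.
hosts=$(printf 'udc-ba25-5\nudc-ba25-5\nudc-ba26-1\n' | sort | \
        uniq -c | \
        awk '{print $2 "*" $1}' | \
        paste -sd, -)
echo "$hosts"   # udc-ba25-5*2,udc-ba26-1*1
```

The resulting string, e.g. host1*2,host2*1, is exactly the format cfx5solve expects for its -par-dist argument.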