Current as of 20 Jan 2015
This notebook covers running Ansys HFSS (developed for Electromagnetics Suite 15.0.0, Build: 2013-10-03 20:02:45). The work discussed here is the result of almost a year of working with HFSS on the supercomputer and troubleshooting to get everything running. It is entirely possible that something remains configured incorrectly, so please do not hesitate to open a ticket with the staff members.
I would like to thank the following people for helping to get HFSS working on the supercomputer:
Knowledge Resources: Solutions
#2035918
What are the version numbers for the latest release of the ANSYS Electronics/Electromagnetics products (Maxwell, HFSS, Q3D, Simplorer, Designer, SIwave, etc.)?
Product Family: Electronics
Product: ANSYS HFSS
Version: 2014
Area: General (HFSS)
SubArea: Other
Last Updated: Feb 04 2014
Answer:
At R15.0 the ANSYS Electronics/Electromagnetics products have been combined into a single EM Suite installation with a unified versioning number (ANSYS EM Suite 15.0). Attached is a document that outlines the versioning numbers for the last three ANSYS releases and includes links to download each version of the software.
Resolution Documents:
2035918_R15_Electronics_Versions.pdf
This document contains the table reproduced below
| ANSYS Release | R15.0 | R14.5 | R14.0 |
|---|---|---|---|
| Designer | v2014 (EM Suite 15.0) | v8.0 | v7.0 |
| HFSS | v2014 (EM Suite 15.0) | v15.0 | v14.0 |
| Maxwell | v2014 (EM Suite 15.0) | v16.0 | v15.0 |
| Q3D Extractor | v2014 (EM Suite 15.0) | v12.0 | v11.0 |
| Simplorer | v2014 (EM Suite 15.0) | v11.0 | v10.0 |
| SIwave | v2014 (EM Suite 15.0) | v7.0 | v6.0 |
| TPA | v2014 (EM Suite 15.0) | v8.0 | v7.0 |
| PExprt | v7.1 | v7.1 | v7.0 |
| ECAD Translators | v8.0 | v7.0 | - |
Version 16 should be arriving soon... Look for a lot of cool improvements (a lot of work by EM Group alumnus Naveen Nair), including job scheduling from within the GUI.
rdp.hpcc.msu.edu
You can't run the HFSS graphical user interface (GUI) on the RDP gateway; you must first ssh into a dev node. This means that once you have a remote desktop up, you need to launch a terminal, ssh into a dev node, and then launch HFSS.
ssh dev-intel14
hfss
I have not evaluated the performance of the GUI through the RDP gateway. It may be usable.
The following is a typical workflow for using HFSS on the supercomputer.
HFSS is loaded as a module:
module load HFSS
You can load other versions and do additional things with the module command. See the HPCC documentation page.
Powertools: I suggest you use the powertools module as well.
module load powertools
powertools installpowertools
The first time that you launch HFSS on a machine, it runs a first-time configuration. This can take a few minutes and may appear to hang on the first step; give it time. This happens even on the compute nodes, which means it can occur while a job is running. Luckily, it only happens once per machine (hence the name). The best solution: run more simulations! Below is the output:
ANSYS Electromagnetics 15.0 Configuration
=========================================
Hostname: dev-intel14-phi
User: temmeand
> Running first-time configuration...
- Verifying all software dependencies are available: Done
- Retrieving user settings... Done
- Applying user settings... Done
- Configuring OCX files... Done
- Retrieving machine settings... Done
- Applying machine settings... Done
- Configuring binaries... Done
First-time configuration completed successfully.
This is the important part. HFSS must be passed numerous options in order to fully utilize the supercomputer. Below is the most basic script required. It is written to be called from the same directory as your simulation (PBS_O_WORKDIR is a variable holding the directory from which you submitted your job). The script can be given any name; I suggest using a .qsub extension, e.g. scriptName.qsub.
#!/bin/bash -login
#PBS -l walltime=03:59:00,mem=50gb,nodes=19:ppn=1
#PBS -j oe
#PBS -m abe
#PBS -W x=gres:hfss_solve
module load HFSS/15.0
export OptFile=${PBS_O_WORKDIR}/Options.txt
export ANSYSEM_JOB_ID=${PBS_JOBID}
export ANSYSEM_HOST_FILE=$PBS_NODEFILE
export LINUX_SSH_BINARY_PATH=/usr/bin
export ANSYSEM_LINUX_HPC_UTILS=/opt/software/AnsysEM/15.0/AnsysEM15.0/Linux64/schedulers/utils
cd ${PBS_O_WORKDIR}
# mkdir -p ${PBS_O_WORKDIR}/scratch
echo creating batch options list
echo \$begin \'Config\' > ${OptFile}
echo \'HFSS/NumCoresPerDistributedTask\'=${PBS_NUM_PPN} >> ${OptFile}
echo \'HFSS/HPCLicenseType\'=\'Pool\' >> ${OptFile}
echo \'HFSS/SolveAdaptiveOnly\'=0 >> ${OptFile}
echo \'HFSS/MPIVendor\'=\'Intel\' >> ${OptFile}
echo \'HFSS-IE/NumCoresPerDistributedTask\'=${PBS_NUM_PPN} >> ${OptFile}
echo \'HFSS-IE/HPCLicenseType\'=\'Pool\' >> ${OptFile}
echo \'HFSS-IE/SolveAdaptiveOnly\'=0 >> ${OptFile}
echo \'HFSS-IE/MPIVendor\'=\'Intel\' >> ${OptFile}
# echo \'tempdirectory\'=\'${PBS_O_WORKDIR}/scratch\' >> ${OptFile}
echo \$end \'Config\' >> ${OptFile}
chmod 777 ${OptFile}
hfss -ng -monitor -distributed -machinelist num=${PBS_NUM_NODES} -batchoptions ${OptFile} -BatchSolve ${PBS_JOBNAME}.hfss
You can find this script and a few others on Bitbucket.
Permissions: Make sure to change the permissions on the file so that it can run. Do this with the following command:
chmod 744 scriptName.qsub
Format/End of Line: If you copy and paste the script from above, make sure that it is saved with UNIX (\n) line endings. If it is not, you will get the following error:
qsub: script is written in DOS/Windows text format
This can be fixed easily on a Windows machine using Notepad++: go to Edit -> EOL Conversion and select UNIX format.
Over SSH/Putty/Terminal, the line endings can be changed using Vim. To do so, type
vim scriptName.qsub
Press esc a couple of times to make sure you are in command mode, then type:
:update
:e ++ff=dos
:setlocal ff=unix
:w
If you have a Vim problem and don't know how to quit, try typing :q.
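If you'd rather not open an editor at all, GNU sed can strip the carriage returns in place. The demonstration below works on a throwaway file named demo.qsub so you can see the effect; on your real script you would run the same sed command against scriptName.qsub.

```shell
# Create a throwaway file with DOS (CRLF) line endings
printf '#!/bin/bash\r\necho hello\r\n' > demo.qsub

# Strip the trailing carriage return from every line, in place (GNU sed).
# For your real script: sed -i 's/\r$//' scriptName.qsub
sed -i 's/\r$//' demo.qsub
```

After this, qsub accepts the file because only UNIX (\n) line endings remain.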
Race Conditions: You could create a race condition if you submit multiple jobs with different options, because every job writes to the same Options.txt. Consider altering the above script to create a unique options file for each job.
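One way to avoid the clash is to fold the job ID into the options-file name. This sketch shows only the lines of the script above that would change; `${PBS_JOBID:-local}` falls back to "local" so you can test the logic outside the scheduler.

```shell
# Embed the job ID in the options-file name so concurrent jobs never
# clobber each other's settings. Outside the scheduler, PBS_O_WORKDIR
# and PBS_JOBID are unset, so the fallbacks kick in for testing.
export OptFile=${PBS_O_WORKDIR:-$PWD}/Options_${PBS_JOBID:-local}.txt

echo \$begin \'Config\' > ${OptFile}
echo \'HFSS/HPCLicenseType\'=\'Pool\' >> ${OptFile}
echo \$end \'Config\' >> ${OptFile}
```

The remaining `echo ... >> ${OptFile}` lines from the original script stay exactly the same; only the definition of `OptFile` changes.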
A job is submitted by running
qsub -N MySimulation -l walltime=0:05:00,mem=05gb,nodes=4:ppn=1 ./scriptName.qsub
Note that there is no .hfss extension on the file name in the above command; the script appends the extension when it is needed.
I utilize the above script by wrapping it in another command, e.g. submit.sh, in which I alter the resource requirements:
qsub -N ${1%.hfss} -l walltime=0:05:00,mem=05gb,nodes=4:ppn=1 ./scriptName.qsub
Jobs can then be submitted using
./submit.sh MySimulation.hfss
or ./submit.sh MySimulation
The ${1%.hfss} in the command above removes the .hfss extension, if present, before calling the qsub submission script.
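Putting those pieces together, a minimal submit.sh might look like the sketch below. The resource line is just the example from above (adjust it to your simulation), and qsub itself only exists on the cluster, so the script is created here rather than run.

```shell
# Write a minimal submit.sh wrapper; the resource list is an example.
cat > submit.sh <<'EOF'
#!/bin/bash
# Usage: ./submit.sh MySimulation[.hfss]
name=${1%.hfss}   # drop a trailing .hfss if present
qsub -N "${name}" -l walltime=0:05:00,mem=05gb,nodes=4:ppn=1 ./scriptName.qsub
EOF
chmod 744 submit.sh

# The parameter expansion works with or without the extension:
name="MySimulation.hfss"
echo "${name%.hfss}"     # -> MySimulation
```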
Note that the ! in the prompts below just tells the IPython notebook to run the command as a shell command and not as a Python command.
!cd Data; ./submitJob.sh MPI_tst.hfss
20168157.mgr-04.i
!qstat -u temmeand
mgr-04.i:
                                                                  Req'd  Req'd      Elap
Job ID            Username Queue  Jobname          SessID NDS TSK Memory Time     S Time
----------------- -------- ------ ---------------- ------ --- --- ------ -------- - --------
20168154.mgr-04.  temmeand main   MPI_tst           11037   4  12    5gb 00:05:00 C       --
20168156.mgr-04.  temmeand main   MPI_tst              --   4  12    5gb 00:05:00 Q       --
20168157.mgr-04.  temmeand main   MPI_tst              --   4  12    5gb 00:05:00 Q       --
To constantly watch how your simulation is running, use the watch command combined with qstat. Check the wqs command in my alias file near the end of this notebook.
After running a job in the manner above, you will get a few new items in the directory. For a simulation named sample.hfss, you would see the following:
HFSS simulations are simply text files. I suggest using Git or another version-control system to track your simulations; it has saved me multiple times.
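A sketch of that setup follows. The ignore patterns are my guesses based on the artifacts this notebook mentions elsewhere (.hfssresults directories, .hfss.lock files, and PBS .o output files); adjust them to whatever your runs actually produce.

```shell
# Ignore the bulky artifacts HFSS and the scheduler generate,
# so that only the .hfss text files themselves are tracked.
cat > .gitignore <<'EOF'
*.hfssresults/
*.hfss.lock
*.o[0-9]*
EOF

# Then, once per project directory:
#   git init && git add .gitignore *.hfss && git commit -m "initial import"
```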
You can run an IPython notebook server on the HPCC. I don't know how long it will stay running on a dev node, but it has been over a week for me (I've only restarted it for other reasons). Set up a tunnel through the gateway and then to your development node.
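The two hops can be chained from your local machine in one command. This is a connection sketch, not something to run verbatim: the gateway hostname below is a placeholder for whatever HPCC gateway you normally log in to, and 8888 is IPython's default notebook port.

```shell
# Local machine -> gateway -> dev node, forwarding the notebook port
# back to localhost at each hop. "hpcc.msu.edu" is a placeholder
# gateway hostname; -t keeps a tty for the inner ssh.
ssh -t -L 8888:127.0.0.1:8888 hpcc.msu.edu \
    ssh -L 8888:127.0.0.1:8888 dev-intel14
```

With the tunnel up, browse to http://127.0.0.1:8888 on your local machine to reach the notebook server running on the dev node.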
Useful. Lots of resources out there. It makes working from both home and campus a lot easier.
I like to split my terminal and put a watch command for qstat (see wqs and wqst in the alias file below) in a small pane.
I highly suggest looking into Scikit-RF. This Python package provides a vast array of tools for RF design and analysis. Data processing is extremely easy as it can read in numerous different file types.
Example
import skrf as rf
rf.stylely()
data = rf.Network(rf.data.pwd + '/ring slot.s2p')
data.plot_s_db()
The aliases simplify connections to development nodes and create tunnels for IPython notebooks and other services.
test -f /etc/profile.dos && . /etc/profile.dos
# Some applications read the EDITOR variable to determine your favourite text
# editor. So uncomment the line below and enter the editor of your choice :-)
#export EDITOR=/usr/bin/vim
#export EDITOR=/usr/bin/mcedit
# add aliases if there is a .aliases file
test -s ~/.alias && . ~/.alias
module load powertools
if [[ $HOSTNAME != 'gateway-01' ]]
then
module load SciPy/0.11.0 NumPy/1.6.1 matplotlib
module load HFSS/15.0
module load git-python/0.3.1
module load git/1.7.11.5
fi
alias notebook='ipython notebook --profile=vimNote --no-browser --notebook-dir='~/Documents' --pylab=inline 2> ~/Documents/pyNotebook/ipynb.log'
alias pyamd09='ssh dev-amd09 -X -L 8888:127.0.0.1:8888'
#alias pyamd09='ssh dev-amd09 -L 8888:127.0.0.1:8888'
alias pyintel07='ssh dev-intel07 -X -L 8888:127.0.0.1:8888'
alias pyintel10='ssh dev-intel10 -X -L 8888:127.0.0.1:8888'
# alias pyintel14='ssh dev-intel14 -X -L 8888:127.0.0.1:8888'
alias pyintel14='ssh dev-intel14 -X -L 8888:127.0.0.1:8888 -L 8000:127.0.0.1:8000'
alias pyk20='ssh dev-intel14-k20 -X -L 8888:127.0.0.1:8888'
alias go='ssh dev-gfx10 -X -L 8888:127.0.0.1:8888 -L 8000:127.0.0.1:8000'
alias sumpy='source ~/sumpy/bin/activate'
alias topmine='top -u temmeand'
alias startnote='sumpy; cd ~/Documents/pyNotebook; notebook&'
alias ipy2='source ~/ipy2/bin/activate'
alias ipy2startnote='ipy2; cd ~/Documents/pyNotebook; notebook&'
These simplify life as a user of the supercomputer.
nbconvertPDF()
{
ipython nbconvert --to latex "$1" --post PDF
}
qst()
{
qstat -t -u $USER
}
wqs()
{
watch -n 10 "qstat -u temmeand | tail -n +6 | head -n 9"
}
wqst()
{
watch -n 10 "qstat -t -u temmeand | tail -n +6 | head -n 9"
}
lc()
{
licensecheck ansys | grep hfss
}
greperror()
{
succ=`grep -L "execution error" ./*.o* | wc -l`
err=`grep -l "execution error" ./*.o* | wc -l`
tot=`ls -1 *.o* | wc -l`
echo "Successful | Error | Total"
echo "$succ | $err | $tot"
}
errorwatch()
{
watch -n 10 "grep -l 'execution error' ./*.o*"
}
delres()
{
ls ./*.hfssresults
rm -r ./*.hfssresults
}
dellock()
{
ls ./*.hfss.lock
rm ./*.hfss.lock
}
Please contact me if you have questions or corrections (or submit a pull request).