ParaView Imaging¶
For systems without graphics cards, headless server builds (OSMesa/EGL) are required for remote rendering of data sets larger than ~10 MB. 2-D data can often be visualized on the client side. The client/server memory limits can be set in the configuration file:
$HOME/.config/ParaView/ParaView-UserSettings.json
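In particular, the remote render threshold (the data size above which rendering moves to the server) is stored in this file. A minimal sketch, assuming the standard settings layout; the group and key names here are an assumption and may differ between ParaView versions:
{
  "settings" : {
    "RenderViewSettings" : {
      "RemoteRenderThreshold" : 10.0
    }
  }
}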
Some aspects of ParaView and pvpython are not backward compatible, so version mismatches between scripts and the Python library can cause problems.
On the NCAR systems, paraview/5.11.1 is available but not built with headless support. A ParaView 5.11.1 OSMesa build is available at:
/glade/u/home/alund/bin/ParaView-5.11.1-osmesa-MPI-Linux-Python3.9-x86_64/bin/
Remote Rendering¶
For large data sets that would be difficult to transfer, rendering can also be done on the server CPUs/GPUs. ParaView includes two Python interpreters:
| Tool | Description |
| --- | --- |
| pvpython | Allows both interactive and script-based imaging |
| pvbatch | Allows script-based imaging with MPI on supported systems/builds |
Batch processing is the most efficient in time and memory.
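A pvbatch script is an ordinary ParaView Python script. A minimal sketch (the file names are placeholders, not repository files):
# minimal_render.py: load a dataset, render it, and save an image
from paraview.simple import *

data = OpenDataFile('fields.vtk')   # placeholder input file

view = CreateRenderView()
Show(data, view)
ResetCamera(view)
Render(view)

SaveScreenshot('fields.png', view, ImageResolution=[1920, 1080])
The following PBS script runs such a file with pvbatch on Casper: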
#!/bin/bash
#PBS -A ACCOUNT
#PBS -N Spvbatch
#PBS -q casper@casper-pbs
#PBS -l job_priority=regular
#PBS -l select=1:ncpus=1:ngpus=0:mem=10GB
#PBS -l walltime=2:00:00
#PBS -j oe
umask 027
cd $PBS_O_WORKDIR
# Path to the headless pvbatch build; leave empty to use a pvbatch found on $PATH
#path=''
path='/glade/u/home/alund/bin/ParaView-5.11.1-osmesa-MPI-Linux-Python3.9-x86_64/bin/'
code=pvbatch
exec=analyze.py
outfile=$code.out
# Log the start time, pvbatch version, all script output, and the end time
date > $outfile
$path$code --version >> $outfile
$path$code $exec &>> $outfile
date >> $outfile
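Submit the script with qsub and check pvbatch.out for the version banner and any rendering errors. On MPI-enabled builds such as the OSMesa build above, pvbatch can also be launched under an MPI launcher with multiple ranks to parallelize rendering across CPUs.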
Repository pvbatch_lib¶
The GitHub repository MURI_Turbulence/pvbatch_lib is a library of Python scripts for remote rendering. The programs directory holds the main executables, while scripts contains their file dependencies. See examples for sample input files analyze.inp. Executables:
programs/analyze.py See recipe table below (an illustrative sketch follows this listing)
| Recipe | Description |
| --- | --- |
| 1 | Volume |
| 2 | Vortexline |
| 3 | Histogram |
| 4 | Slice |
| 5 | Line |
| 6 | Plane |
| 7 | Vortexline for Volume |
| 8 | (7) w/ separate vtks |
| 101 | Timestamp |
| 102 | Initial Profile |
| 103 | PDF |
| 104 | Line |
| 105 | Plot resolution.out |
| 106 | Plot spectra |
programs/planes.py 2-D plane imaging with automatic data scaling
scripts/run_volViz Script to run volViz with changing data scale
scripts/submit Submits PBS jobs with dependencies
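For reference, the general shape of a slice recipe (recipe 4) with automatic data scaling as in planes.py, written with paraview.simple. This is an illustrative sketch, not the repository code; the input file and array names are placeholders:
# slice_sketch.py: cut a plane through a dataset and save a scaled image
from paraview.simple import *

data = OpenDataFile('fields.vtk')          # placeholder dataset

# Cut a z-normal plane through the middle of the domain
slc = Slice(Input=data)
slc.SliceType = 'Plane'
slc.SliceType.Origin = [0.5, 0.5, 0.5]
slc.SliceType.Normal = [0.0, 0.0, 1.0]

view = CreateRenderView()
display = Show(slc, view)

# Color by a point array and rescale the color map to the current data range
ColorBy(display, ('POINTS', 'velocity'))   # placeholder array name
display.RescaleTransferFunctionToDataRange(True)

Render(view)
SaveScreenshot('slice.png', view, ImageResolution=[1920, 1080])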
Interactive Rendering¶
The ParaView versions on the client and server must match!
Starting pvserver on the server side allows a local ParaView instance to connect via SSH tunneling. To open a port to the server, use the following command:
ssh user@host -L PORT_L:localhost:PORT_S -g -C
where PORT_L and PORT_S are the local and server five-digit port numbers. The two can be the same, but must not be in use for anything else. This example uses local/server ports 11114/11199. Each system should use a different local port PORT_L! The optional -C flag enables data compression.
pvserver --server-port=PORT_S
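If pvserver starts correctly, it waits for a connection and reports the hostname and port; the output looks similar to the following (exact wording varies by version):
Waiting for client...
Connection URL: cs://NODENAME:PORT_S
Accepting connection(s): NODENAME:PORT_S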
FOR HPC SYSTEMS: All Python scripts should be submitted to the queue. If the server is not directly reachable via ssh, or the node name is not known until the job is running, the port can only be opened once the job has started. Note: These ports should NOT be open prior to tunneling!
ssh user@host -L PORT_L:NODENAME:PORT_S -g -C
The following bashrc function can replace this command. Note that the node base name can differ on compute nodes, e.g. casper-login -> crhtc. The output of pvserver will indicate the hostname and port opened.
pvcasper(){
    # Zero-pad the node number, e.g. "pvcasper 6" tunnels to crhtc06
    node=$(printf "%02d" $1)
    if [ "$node" != "00" ]; then
        ssh user@host -L 11114:crhtc$node:11199 -g -C
    else
        echo "NODE Missing"
        echo "ssh user@host -L 11114:crhtc$node:11199 -g -C"
    fi
}
pvcasper 56
Finally, to connect from the client ParaView:
File -> Connect
Add Server
PORT=PORT_L
Leave Server Type as Client/Server and Host as localhost. Name can be any unique identifier.
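Alternatively, a pvpython session on the client can connect through the same tunnel. A minimal sketch, assuming the local port 11114 from the example above:
# Connect a client-side pvpython session to the tunneled pvserver
from paraview.simple import *

Connect('localhost', 11114)

# Everything built from here on executes on the server
sphere = Sphere()
Show(sphere)
Render()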