.. highlight:: none

Paraview Imaging
================

Paraview can be run in several different modes, which allow for local or
remote, interactive or batch image processing. Local interactive mode is the
simplest: paraview running on a graphical workstation reads data from
a local source. This mode is probably the most widely used and is
well documented (see
`Paraview documentation `_).

The remote and batch modes are more difficult to use, but allow both
interactive and batch processing of large data sets that reside on remote
systems. In the remote interactive mode (sometimes referred to as
client/server mode), a non-interactive paraview server process is started on
the remote machine, and an interactive paraview client process on the local
machine then connects to that server over the network. The net result is an
interactive session very similar to the purely local mode, but without the
need to transfer and store large data files on the local system. There is
often a noticeable latency in client/server mode, but it is usually
manageable. Instructions for the client/server mode can be found at
`Remote visualization `_.

The third, and most complicated, option is non-interactive batch processing,
which is most useful for repetitive processing of large datasets. Batch
processing (often referred to as pvbatch) can be run either locally or
remotely. Since the output of a pvbatch run is just a sequence of one or more
image files, there can be a tremendous savings in file transfer time and
storage if the large data files remain on the remote system and only the
relatively small image data is sent back to the local system. Pvbatch can
also run in parallel and thus can leverage the increased computational power
available on the remote system. Instructions for pvbatch are listed at
`Using pvbatch `_.

One complication of both the client/server and pvbatch modes is that custom
builds of the pvserver and pvbatch programs may be required, and that data
file size limitations may be encountered. If the remote system does not
contain a graphics card, then a custom OSMesa or EGL build will most likely be
required. While most HPC centers supply such builds, there are often
compatibility issues between the pvbatch scripts and the pvbatch program
version. Paraview is notorious for breaking backward compatibility, which
often means that custom scripts are required for each paraview release. It
is also often necessary to run the exact same paraview version on the client
and the server when using the client/server mode. Fortunately paraview
provides pre-built binaries (including OSMesa and EGL builds) for several
different paraview versions; these can be found at
`Paraview download `_. With easy access to prebuilt binaries, the most
efficient solution to compatibility issues is often to download and install a
compatible version of the software on either the local or the remote system.

Another potential problem is exceeding either the client or server default
memory limits. Fortunately the limits can be increased by modifying the
file

::

    $HOME/.config/ParaView/ParaView-UserSettings.json

Currently the NCAR systems provide paraview/5.11.1, but it has not been built
for headless (pvserver and pvbatch) support. A custom paraview 5.11.1 OSMesa
build is currently available at

::

    /glade/u/home/alund/bin/ParaView-5.11.1-osmesa-MPI-Linux-Python3.9-x86_64/bin/

Remote Rendering
----------------
For large data sets that would be difficult to transfer, rendering can also be
done on the server CPUs/GPUs. Paraview ships with two Python-based
command-line tools for this:

:*Pvpython*: Allows both interactive and script-based imaging
:*Pvbatch*: Allows script-based imaging with MPI on supported systems/builds

Batch processing is the most efficient in both time and memory. An example
PBS job script that runs pvbatch on the Python script analyze.py follows.

::

    #!/bin/bash
    #PBS -A ACCOUNT
    #PBS -N Spvbatch
    #PBS -q casper@casper-pbs
    #PBS -l job_priority=regular
    #PBS -l select=1:ncpus=1:ngpus=0:mem=10GB
    #PBS -l walltime=2:00:00
    #PBS -j oe

    umask 027
    cd $PBS_O_WORKDIR

    # Path to the headless (OSMesa) paraview build; leave empty if pvbatch is in PATH
    #path=''
    path='/glade/u/home/alund/bin/ParaView-5.11.1-osmesa-MPI-Linux-Python3.9-x86_64/bin/'
    code=pvbatch
    exec=analyze.py
    outfile=$code.out

    # Record the start time and pvbatch version, run the script, then record the end time
    date > $outfile
    $path$code --version >> $outfile
    $path$code $exec &>> $outfile
    date >> $outfile

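The script passed to pvbatch is ordinary Python using the paraview.simple
module; the repository script analyze.py used above is described in the next
section. As a minimal illustration only (not the repository code), a pvbatch
script might look like the following, where the input file data.vtk, the
output file name, and the image resolution are assumptions.

::

    # Minimal pvbatch script sketch (illustrative only, not pvbatch_lib's analyze.py).
    # Assumes an input file data.vtk in the working directory.
    from paraview.simple import (OpenDataFile, Show, Render, ResetCamera,
                                 GetActiveViewOrCreate, SaveScreenshot)

    data = OpenDataFile('data.vtk')             # read the data set
    view = GetActiveViewOrCreate('RenderView')  # off-screen view with an OSMesa build
    Show(data, view)                            # add the data to the view
    ResetCamera(view)                           # frame the data
    Render(view)
    SaveScreenshot('frame0000.png', view, ImageResolution=[1920, 1080])
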
Repository pvbatch_lib
----------------------
The GitHub repository MURI_Turbulence/pvbatch_lib is a library of python
scripts for remote rendering. The *programs* directory contains the main
executables, while *scripts* contains file dependencies. See *examples* for
sample input files such as **analyze.inp**. Executables (an illustrative
paraview.simple sketch follows this list):

**programs/analyze.py** See the recipe list below

::

    Recipe
      1 - Volume
      2 - Vortexline
      3 - Histogram
      4 - Slice
      5 - Line
      6 - Plane
      7 - Vortexline for Volume
      8 - (7) w/ Separate vtks
    101 - Timestamp
    102 - Initial Profile
    103 - PDF
    104 - Line
    105 - Plot resolution.out
    106 - Plot spectra

**programs/planes.py** 2-D plane imaging with automatic data scaling

**scripts/run_volViz** Script to run volViz with changing data scale

**scripts/submit** Submits PBS jobs with dependencies

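For orientation only, the sketch below shows what a slice recipe like recipe 4
typically involves in paraview.simple; it is not the repository code, and the
file name data.vtk and array name velocity are assumptions.

::

    # Hypothetical sketch of a slice recipe (not pvbatch_lib code).
    from paraview.simple import (OpenDataFile, Slice, Show, Render, ResetCamera,
                                 GetActiveViewOrCreate, SaveScreenshot, ColorBy)

    data = OpenDataFile('data.vtk')             # assumed input file
    slc = Slice(Input=data)                     # slice filter through the domain
    slc.SliceType.Origin = [0.0, 0.0, 0.0]      # plane origin
    slc.SliceType.Normal = [0.0, 0.0, 1.0]      # plane normal (z = 0 plane)

    view = GetActiveViewOrCreate('RenderView')
    display = Show(slc, view)
    ColorBy(display, ('POINTS', 'velocity'))    # assumed array name
    ResetCamera(view)
    Render(view)
    SaveScreenshot('slice.png', view, ImageResolution=[1920, 1080])
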
Interactive Rendering
---------------------
The Paraview version on the client and server must match!
Starting *pvserver* on the server side allows a local Paraview instance to connect via ssh tunneling.
To open a port to the server, use the following command:
::

    ssh user@host -L PORT_L:localhost:PORT_S -g -C

Here PORT_L and PORT_S are the local and server port numbers (five-digit
numbers in this example). The two numbers can be the same, but the ports must
not be in use by anything else. For this example the local/server ports
11114/11199 were used. Each system should use a different local port PORT_L!
The optional -C flag enables data compression. Then start the server process
on the remote machine, listening on PORT_S:

::

    pvserver --server-port=PORT_S

*FOR HPC SYSTEMS:* All python scripts should be submitted to the queue. If
the server is not reachable directly via ssh, or the node name is not known
until the job is running, the port can only be opened after the job starts.
Note: these ports should NOT be open prior to tunneling!

::

    ssh user@host -L PORT_L:NODENAME:PORT_S -g -C

The following bashrc function can replace this command. Note that the node
base name on compute nodes can differ from the login name, e.g.
casper-login -> crhtc. The output of pvserver will indicate the hostname and
port that were opened.

::

    # Open a tunnel to compute node crhtc<NN>, e.g. "pvcasper 56" -> crhtc56
    pvcasper(){
        node=$(printf "%02d" $1)
        if [ "$node" != "00" ]; then
            ssh user@host -L 11114:crhtc$node:11199 -g -C
        else
            echo "NODE Missing"
            echo "ssh user@host -L 11114:crhtc$node:11199 -g -C"
        fi
    }

For example, to tunnel to compute node crhtc56:

::

    pvcasper 56

Finally, to connect from the client Paraview:

::

    File -> Connect
    Add Server
    PORT=PORT_L

Leave the Server Type (Client/Server) and Host (localhost) fields at their
defaults; the Name can be any unique identifier.

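The connection can also be made from pvpython instead of the GUI, using
paraview.simple.Connect. The following is a minimal sketch, assuming the
tunnel above is already open on local port 11114; the Sphere test source and
output file name are only illustrative.

::

    # Minimal pvpython connection test through the ssh tunnel (PORT_L = 11114 assumed).
    from paraview.simple import (Connect, Sphere, Show, Render,
                                 GetActiveViewOrCreate, SaveScreenshot)

    Connect('localhost', 11114)                 # attach to the remote pvserver
    view = GetActiveViewOrCreate('RenderView')  # create a render view
    Show(Sphere(), view)                        # simple test source, created server-side
    Render(view)
    SaveScreenshot('connect_test.png', view)
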
Official Documentation
----------------------
`PARAVIEW_DOCS `_