1. Using High Performance Fortran on ACESgrid

The ACESgrid cluster itrda offers two options for HPF programmers.

  • One is PGHPF, a commercial compiler. Due to licensing restrictions we have a limit of 16 concurrent processes for programs compiled with pghpf, so no individual parallel program may request more than 16 processes.

  • The other option is ADAPTOR, an academic project that supports HPF over shared memory or MPI, as well as OpenMP and more (see its homepage for more information). ADAPTOR has been installed to work with MPICH-VMI (so that binaries need no recompilation to work with different underlying network interconnects) and with the Intel Fortran Compiler (to provide a different high-performance back end for the Fortran code). There are no restrictions on the number of processes one may use with ADAPTOR.

1.1. Setup

To set up for using pghpf, do:

$ module add pgi

To set up for using adaptor, do:

$ module add adaptor
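
As a quick sanity check (optional), you can list the loaded modules and verify that the corresponding compiler driver is on your path; the exact module names and versions shown depend on the installation:

$ module list
$ which pghpf
$ which adaptor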

1.2. Compiling

1.2.1. PGHPF

To compile (say sourcefile.hpf) for use over TCP:

$ pghpf -Mrpm  pgi_options -c sourcefile.hpf

To link:

$ pghpf -Mrpm  pgi_options -o  executable_name sourcefile.o any_necessary_libraries

or, in one step:

$ pghpf -Mrpm  pgi_options -o  executable_name sourcefile.hpf any_necessary_libraries
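
For instance, a hypothetical single-file program myprog.hpf that needs no extra libraries could be built in one step as follows (here -O2 stands in for pgi_options):

$ pghpf -Mrpm -O2 -o myprog myprog.hpf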

To compile (say sourcefile.hpf) for use with MPICH-GM:

$ pghpf -Mmpi  pgi_options -c sourcefile.hpf

To link:

$ pghpf -Mmpi  pgi_options -o  executable_name sourcefile.o any_necessary_libraries -L/usr/local/pkg/mpich-gm/mpich-gm-pgi/lib/ -L/usr/local/pkg/gm/gm-2.0.14/lib -lgm

or, in one step:

$ pghpf -Mmpi  pgi_options -o  executable_name sourcefile.hpf any_necessary_libraries -L/usr/local/pkg/mpich-gm/mpich-gm-pgi/lib/ -L/usr/local/pkg/gm/gm-2.0.14/lib -lgm
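
Similarly, a one-step MPICH-GM build of the hypothetical myprog.hpf might look like (again with -O2 standing in for pgi_options):

$ pghpf -Mmpi -O2 -o myprog myprog.hpf -L/usr/local/pkg/mpich-gm/mpich-gm-pgi/lib/ -L/usr/local/pkg/gm/gm-2.0.14/lib -lgm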

Keep in mind that code compiled for use with MPICH-GM will only run on nodes that have Myrinet interfaces. Information about compile-time options can be found in the PGHPF Compiler User's Guide, either in the remote copy or in the local copy under /usr/local/pkg/pgi/pgi-5.2/linux86/5.2/doc/pghpf_ref/.

1.2.2. ADAPTOR

To compile (say sourcefile.hpf) for use over any interconnect:

$ adaptor -hpf -dm  intel_compiler_options -c sourcefile.hpf

To link:

$ adaptor -hpf -dm  intel_compiler_options -o  executable_name sourcefile.o

or, in one step:

$ adaptor -hpf -dm  intel_compiler_options -o  executable_name sourcefile.hpf

For use on the 2-CPU nodes, another option is to also enable shared-memory parallelism within each node by adding -sm:

$ adaptor -hpf -dm -sm  intel_compiler_options -c sourcefile.hpf

To link:

$ adaptor -hpf -dm  intel_compiler_options -o  executable_name sourcefile.o

or, in one step:

$ adaptor -hpf -dm -sm  intel_compiler_options -o  executable_name sourcefile.hpf
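
As a concrete illustration (again using the hypothetical myprog.hpf, and assuming that extra options such as -O2 are passed through to the Intel back end in place of intel_compiler_options), a one-step distributed-memory build might look like:

$ adaptor -hpf -dm -O2 -o myprog myprog.hpf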

A quick overview can be found as part of the ADAPTOR User's Guide, which covers the subject far more completely. Local copies of the documentation can be found in /usr/local/pkg/adaptor/adaptor-10.2/doc/.

1.3. Running

1.3.1. PGHPF

  • Example PBS script for the case of code compiled with pghpf and using PGI's communication library over TCP

#!/bin/csh
# running PGHPF-RPM compiled code on ITRDA Linux cluster
#
# All PBS options start with "#PBS " and can be specified on the command line
# after qsub instead of being embedded in the script file.
 
#----------------------------------------------
# o Queue name
# -q queue
# Queues available on itrda are:
# four (2 hours, 16 nodes), four-twelve (12 hours, 26 nodes), long (168 hours, 64 nodes)
 
#PBS -q four
 
#----------------------------------------------
# o Job name instead of the PBS script filename
# -N Job name (use a distinguishing name)
 
#PBS -N MyNamePGHPF-RPM
 
#----------------------------------------------
# o Resource lists
# -l resource lists, separated by a ","
# To ask for N nodes use "nodes=N"
# To ask for 2 processors per node use ":ppn=2", otherwise ":ppn=1"
# after the nodes=N. Preferably use ppn=2 and ask for fewer nodes.
# To ask for Myrinet use ":myrinet", for Gigabit Ethernet use ":gigabit"
# after the nodes=N:ppn=M
# To specify total wallclock time use "walltime=hh:mm:ss"
 
#PBS -l nodes=16:ppn=2,walltime=00:10:00
 
#----------------------------------------------
# o stderr/out combination
# -j {eo|oe}
# Causes the standard error and standard output to be combined in one file.
# For standard output to be added to standard error use "eo"
# For standard error to be added to standard output  use "oe"
#
# o stderr/out (specify these instead to avoid getting script.[oe]$PBS_JOBID)
# -e standard error file
# -o standard output file
# You can append ${PBS_JOBID} to ensure distinct filenames
 
#PBS -e myrunPGHPF-RPM.stderr
#PBS -o myrunPGHPF-RPM.stdout
 
#----------------------------------------------
# o Starting time
# -a time
# Declares the time after which the job is eligible for execution.
 
#----------------------------------------------
# o User notification
# -m {a|b|e}
# Send mail to the user when:
# job aborts: "a", job begins running: "b", job ends: "e"
 
#PBS -m ae
 
#----------------------------------------------
# o Exporting of environment
# -V exports all my environment variables
 
#PBS -V
 
#----------------------------------------------
                                                                                
# Begin execution
 
#
# Check the environment variables
#
#printenv
 
#
# Get the right PGI module
#
module add pgi

#
# get PBS node info
#
echo $PBS_NODEFILE
cat  $PBS_NODEFILE
 
#----------------------------------------------
# cd to the working directory from which the job was submitted
#
cd $PBS_O_WORKDIR

# How many procs do I have?
setenv NP `wc -l $PBS_NODEFILE | awk '{print $1}'`
 
# Create uniq hostfile for use in hybrid (MPI/OpenMP) codes and for rsh-script use
uniq $PBS_NODEFILE > machinefile.uniq.$PBS_JOBID
 
# How many nodes do I have?
setenv NPU `wc -l machinefile.uniq.$PBS_JOBID | awk '{print $1}'`

#
# Run the PGHPF-RPM code called "executable", provided it is in PBS_O_WORKDIR
# The basic options are described below. Please read them carefully and for
# more information please go to the PGHPF documentation.
#
# "cmdln_options" should be replaced by any program specific command-line
# options and must always precede the "-pghpf" argument.
#
# The "-heapz" option instructs the PGHPF runtime on the size of the shared
# memory heap to be used for communications between processes on the same
# host using shared memory instead of TCP. The format is a number followed
# by "k" for KB and "m" for MB. Setting the environment variable PGHPF_HEAPZ
# to the same number is an equivalent way of handling this.
#
# The "-np $NP" argument instructs the HPF runtime on the number of
# processors to use. This is the same as setting the environment variable 
# PGHPF_NP to $NP.
#
# The "-host -file=machinefile.uniq.$PBS_JOBID" argument tells the HPF
# runtime which hosts to use. The hosts are assigned in round-robin fashion.
# The environment variable PGHPF_HOST can be set instead for the same effect.
#
# The "-stat" command-line argument, if present, will only work for code
# compiled with "-Mstats" and takes options 
# "cpu", "mem", "msg", "all", "cpus", "mems", "msgs", "alls"
# where "cpu" and "mem" provides processor and memory utilization 
# information, respectively, and msg provides message passing statistics.
# The "s" versions provide information for all processors running the 
# program on a per-processor basis. Options without the "s" provide 
# summary information. This option incurs a small performance penalty.
# This behaviour can be replicated by setting the options to the
# environment variable PGHPF_STAT instead.
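#
# As an alternative (a sketch based on the equivalences noted above), some of
# these settings could be made through environment variables instead of
# command-line arguments, e.g.:
#   setenv PGHPF_HEAPZ 10m
#   setenv PGHPF_NP    $NP
#   setenv PGHPF_STAT  alls
# (PGHPF_HOST can likewise replace "-host -file=..."; see the PGHPF docs for
# its exact value format.)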

./executable cmdln_options -pghpf -heapz 10m -np $NP -host -file=machinefile.uniq.$PBS_JOBID -stat alls

# Cleanup
# Remove the unique machinefiles
rm machinefile.uniq.$PBS_JOBID

#
# Exit (not strictly necessary)
#
exit
  • Example PBS script for the case of code compiled with pghpf and using MPICH-GM

#!/bin/csh
# running PGHPF-MPICH-GM compiled code on ITRDA Linux cluster
#
# All PBS options start with "#PBS " and can be specified on the command line
# after qsub instead of being embedded in the script file.
 
#----------------------------------------------
# o Queue name
# -q queue
# Queues available on itrda are:
# four (2 hours, 16 nodes), four-twelve (12 hours, 26 nodes), long (168 hours, 64 nodes)
 
#PBS -q four
 
#----------------------------------------------
# o Job name instead of the PBS script filename
# -N Job name (use a distinguishing name)
 
#PBS -N MyNamePGHPF-MPICH-GM
 
#----------------------------------------------
# o Resource lists
# -l resource lists, separated by a ","
# To ask for N nodes use "nodes=N"
# To ask for 2 processors per node use ":ppn=2", otherwise ":ppn=1"
# after the nodes=N. Preferably use ppn=2 and ask for fewer nodes.
# To specify total wallclock time use "walltime=hh:mm:ss"
 
#PBS -l nodes=16:ppn=2:myrinet,walltime=00:10:00
 
#----------------------------------------------
# o stderr/out combination
# -j {eo|oe}
# Causes the standard error and standard output to be combined in one file.
# For standard output to be added to standard error use "eo"
# For standard error to be added to standard output  use "oe"
#
# o stderr/out (specify these instead to avoid getting script.[oe]$PBS_JOBID)
# -e standard error file
# -o standard output file
# You can append ${PBS_JOBID} to ensure distinct filenames
 
#PBS -e myrunPGHPF-MPICH-GM.stderr
#PBS -o myrunPGHPF-MPICH-GM.stdout
 
#----------------------------------------------
# o Starting time
# -a time
# Declares the time after which the job is eligible for execution.
 
#----------------------------------------------
# o User notification
# -m {a|b|e}
# Send mail to the user when:
# job aborts: "a", job begins running: "b", job ends: "e"
 
#PBS -m ae
 
#----------------------------------------------
# o Exporting of environment
# -V exports all my environment variables
 
#PBS -V
 
#----------------------------------------------
                                                                                
# Begin execution
 
#
# Check the environment variables
#
#printenv
 
#
# Get the right PGI and MPICH module 
#
module add mpich-gm/pgi

#
# get PBS node info
#
echo $PBS_NODEFILE
cat  $PBS_NODEFILE
 
#----------------------------------------------
# cd to the working directory from which the job was submitted
#
cd $PBS_O_WORKDIR

# How many procs do I have?
setenv NP `wc -l $PBS_NODEFILE | awk '{print $1}'`
 
# Create uniq hostfile for use in hybrid (MPI/OpenMP) codes and for rsh-script use
uniq $PBS_NODEFILE > machinefile.uniq.$PBS_JOBID
 
# How many nodes do I have?
setenv NPU `wc -l machinefile.uniq.$PBS_JOBID | awk '{print $1}'`

#
# Run the PGHPF-MPICH-GM code called "executable", provided it is in 
# PBS_O_WORKDIR
# The basic options are described below. Please read them carefully and for
# more information please go to the PGHPF documentation.
#
# "cmdln_options" should be replaced by any program specific command-line
# options and must always precede the "-pghpf" argument.
#
# The "-stat" command-line argument, if present, will only work for code
# compiled with "-Mstats" and takes options 
# "cpu", "mem", "msg", "all", "cpus", "mems", "msgs", "alls"
# where "cpu" and "mem" provides processor and memory utilization 
# information, respectively, and msg provides message passing statistics.
# The "s" versions provide information for all processors running the 
# program on a per-processor basis. Options without the "s" provide 
# summary information. This option incurs a small performance penalty.
# This behaviour can be replicated by setting the options to the
# environment variable PGHPF_STAT instead.
#
# The "-unsafe yes" or "-unsafe" no argument enables or disables certain
# communication optimizations that can be used when using MPI as the
# underlying communication mechanism. The same behaviour can be obtained
# by setting the environment variable PGHPF_UNSAFE to "yes". The default
# is "-unsafe no". 

mpirun -machinefile $PBS_NODEFILE -np $NP ./executable cmdln_options -pghpf -stat alls -unsafe yes

# Cleanup
# Remove the unique machinefiles
rm machinefile.uniq.$PBS_JOBID

#
# Exit (not strictly necessary)
#
exit
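
Either script can then be submitted and monitored with the usual PBS commands; run_pghpf.csh below is a hypothetical file name standing in for whichever of the above scripts you saved:

$ qsub run_pghpf.csh
$ qstat -u your_username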

1.3.2. ADAPTOR

  • Example PBS script for the case of code compiled with adaptor and using MPICH-VMI for execution over any protocol and network combination.

#!/bin/csh
# invoking mpirun for adaptor executables on ITRDA Linux cluster
#
# All PBS options start with "#PBS " and can be specified on the command line
# after qsub instead of being embedded in the script file.
 
#----------------------------------------------
# o Queue name
# -q queue
# Parallel queues available on itrda are:
# four (2 hours, 16 nodes), four-twelve (12 hours, 26 nodes), long (168 hours, 64 nodes)
 
#PBS -q four
 
#----------------------------------------------
# o Job name instead of the PBS script filename
# -N Job name (use a distinguishing name)
 
#PBS -N MyNameADAPTOR
 
#----------------------------------------------
# o Resource lists
# -l resource lists, separated by a ","
# To ask for N nodes use "nodes=N"
# To ask for 2 processors per node use ":ppn=2", otherwise ":ppn=1"
# after the nodes=N. Preferably use ppn=2 and ask for fewer nodes.
# To ask for Myrinet use ":myrinet", for Gigabit Ethernet use ":gigabit"
# after the nodes=N:ppn=M
# To specify total wallclock time use "walltime=hh:mm:ss"
 
#PBS -l nodes=16:ppn=2,walltime=00:10:00
 
#----------------------------------------------
# o stderr/out combination
# -j {eo|oe}
# Causes the standard error and standard output to be combined in one file.
# For standard output to be added to standard error use "eo"
# For standard error to be added to standard output  use "oe"
#
# o stderr/out (specify these instead to avoid getting script.[oe]$PBS_JOBID)
# -e standard error file
# -o standard output file
# You can append ${PBS_JOBID} to ensure distinct filenames
 
#PBS -e myrunADAPTOR.stderr
#PBS -o myrunADAPTOR.stdout
 
#----------------------------------------------
# o Starting time
# -a time
# Declares the time after which the job is eligible for execution.
 
#----------------------------------------------
# o User notification
# -m {a|b|e}
# Send mail to the user when:
# job aborts: "a", job begins running: "b", job ends: "e"
 
#PBS -m ae
 
#----------------------------------------------
# o Exporting of environment
# -V exports all my environment variables
 
#PBS -V
 
#----------------------------------------------
                                                                                
# Begin execution
 
#
# Check the environment variables
#
#printenv
 
#
# Get the right ADAPTOR/MPICH-VMI module 
#
module add adaptor

#
# get PBS node info
#
echo $PBS_NODEFILE
cat  $PBS_NODEFILE
 
#----------------------------------------------
# cd to the working directory from which the job was submitted
#
cd $PBS_O_WORKDIR

# How many procs do I have?
setenv NP `wc -l $PBS_NODEFILE | awk '{print $1}'`
 
# Create uniq hostfile for use in hybrid (MPI/OpenMP) codes and for rsh-script use
uniq $PBS_NODEFILE > machinefile.uniq.$PBS_JOBID
 
# How many nodes do I have?
setenv NPU `wc -l machinefile.uniq.$PBS_JOBID | awk '{print $1}'`

#
# Run the MPI code called "executable", provided it is in PBS_O_WORKDIR
# 

# If the code was compiled with "-dm -sm", set up the OpenMP environment;
# otherwise leave this commented out.
#setenv OMP_NUM_THREADS 2

# Start VMIeyes
setenv VMIEYESBIN `which vmieyes`
mpirun -np $NPU -machinefile machinefile.uniq.$PBS_JOBID $VMIEYESBIN --reaper=localhost
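# NOTE: the mpirun invocations below illustrate the available transport
# choices; in an actual job you would normally keep only the one matching
# the nodes you requested and comment out the rest.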

# Default (will use the TCP transport)
mpirun -np $NP ./executable
# Or if the code was compiled with "-dm -sm"
#mpirun -np $NPU ./executable 

# Using Myrinet (Provided you have asked for Myrinet nodes)
mpirun -specfile myrinet -np $NP ./executable
# Or if the code was compiled with "-dm -sm"
#mpirun -specfile myrinet -np $NPU ./executable

# Using Gigabit/Fast Ethernet
mpirun -specfile tcp -np $NP ./executable
# Or if the code was compiled with "-dm -sm"
#mpirun -specfile tcp -np $NPU ./executable

# Using Gigabit/Fast Ethernet with Myrinet (best of both)
mpirun -specfile xsite-myrinet-tcp -np $NP ./executable
# Or if the code was compiled with "-dm -sm"
#mpirun -specfile xsite-myrinet-tcp -np $NPU ./executable

# Cleanup
# Remove the unique machinefiles
rm machinefile.uniq.$PBS_JOBID
# Kill vmieyes
pbsdsh killall vmieyes
# Finally cleanup the temporary vmieyes database files
rm vmieyes-*.db

#
# Exit (not strictly necessary)
#
exit