Quick Start guide

This may seem an obvious consideration in cluster usage, but in order to use the cluster's resources, you need to connect to the relevant master node via an SSH client:
$ ssh <hades|lucifer|baal>

The first and main thing to do after logging in, and before starting to use the computing nodes, is to load the single required environment module, namely torque. I cannot recommend strongly enough that you add this command to your .bash_profile, located in your home directory.

$ echo "module load torque" >> ~/.bash_profile

You can learn more about Environment-Modules here.

For LBT lab members only: since /home is not mounted on the computing nodes, as already described in Cluster usage limits, you will need to create your workdir directory before submitting any job.
$ mkdir /workdir/<groupname>/<username>

$ mkdir /$(/shared/scripts/|awk -F'/' '{print $2"/"$3"/"$4}')

The easiest way to run jobs on the computing resources is to use the following script. This way, your jobs will be in compliance with the cluster usage policies and rules.

#PBS -S /bin/bash
#PBS -N <job-name>
#PBS -o <job-name>.out
#PBS -e <job-name>.err
#PBS -l nodes=<number of nodes>:ppn=<number of cores per node>
#PBS -l walltime=<walltime in format hh:mm:ss>
#PBS -A <credit account>
#PBS -m abe
#PBS -M <your email>
#PBS -l epilogue=/shared/scripts/ADMIN__epilogue-qsub.example

if [[ $PBS_O_WORKDIR =~ ^/home/.*$ ]]; then
        echo "You cannot run any job from your home directory!"
        exit 1
fi

NUM_NODES=$(cat $PBS_NODEFILE | uniq | wc -l)

if [ ! -n "$PBS_O_HOME" ] || [ ! -n "$PBS_JOBID" ]; then
        echo "At least one required variable is not defined. Please contact your cluster manager about it."
        exit 1
fi

if [ $NUM_NODES -le 1 ]; then
        export WORKDIR+=$(echo $PBS_O_HOME | sed 's#.*/\(home\|workdir\)/\(.*_team\)*.*#\2#g')"/$PBS_JOBID/"
        mkdir $WORKDIR
        rsync -ap $PBS_O_WORKDIR/ $WORKDIR/
        # if you need to check your job output during execution (example: each hour), you can uncomment the following line
        # /shared/scripts/ADMIN__auto-rsync.example 3600 &
else
        export WORKDIR=$PBS_O_WORKDIR
fi

echo "your current dir is: $PBS_O_WORKDIR"
echo "your workdir is: $WORKDIR"
echo "number of nodes: $NUM_NODES"
echo "number of cores: "$(cat $PBS_NODEFILE | wc -l)
echo "your execution environment: "$(cat $PBS_NODEFILE | uniq | while read line; do printf "%s" "$line "; done)

# If you're using only one node, it's counterproductive to use the IB network for your MPI process communications
if [ $NUM_NODES -eq 1 ]; then
        export PSM_DEVICES=self,shm
        export OMPI_MCA_mtl=^psm
        export OMPI_MCA_btl=shm,self
else
        # Since we are using a single IB card per node, which can initiate only up to a maximum of 16 PSM contexts,
        # we have to share PSM contexts between processes.
        # CIN is here the number of cores in the node
        CIN=$(cat /proc/cpuinfo | grep -i processor | wc -l)
        if [ $(($CIN/16)) -ge 2 ]; then
                PPN=$(grep $HOSTNAME $PBS_NODEFILE | wc -l)
                if [ $CIN -eq 40 ]; then
                        export PSM_SHAREDCONTEXTS_MAX=$(($PPN/4))
                elif [ $CIN -eq 32 ]; then
                        export PSM_SHAREDCONTEXTS_MAX=$(($PPN/2))
                else
                        echo "This computing node is not supported by this script"
                fi
                echo "PSM_SHAREDCONTEXTS_MAX defined to $PSM_SHAREDCONTEXTS_MAX"
        else
                echo "no PSM_SHAREDCONTEXTS_MAX to define"
        fi
fi

## USER part
## Environment settings (environment module loadings, etc.)
# example: module load openmpi/gnu/1.6.5
## your app calls
# example: mpirun simulation.x
## To chain your jobs properly, use the afterok directive to make sure the current job completes successfully before the next one runs
# qsub -d `/shared/scripts/` -W depend=afterok:$PBS_JOBID <jobscript>
## END-USER part

# At the end of your job, you need to get back all produced data by synchronizing the workdir folder
# with your starting job folder, then delete the temporary one (workdir).
# A good practice is to reduce the list of files you need to get back with rsync.
if [ $NUM_NODES -le 1 ]; then
        cd $PBS_O_WORKDIR
        rsync -ap $WORKDIR/ $PBS_O_WORKDIR/
        rm -rf $WORKDIR
fi
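If you are curious about what the sed expression in the script derives from your home path, you can test it interactively with GNU sed; the path below is a hypothetical example for a user belonging to a group whose name ends in "_team":

```shell
# Hypothetical home path; the sed expression extracts the "*_team" component
# that the script uses to build $WORKDIR.
PBS_O_HOME=/home/lbt_team/jdoe
echo "$PBS_O_HOME" | sed 's#.*/\(home\|workdir\)/\(.*_team\)*.*#\2#g'
# prints: lbt_team
```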

You just need to customize the headers (job name, walltime, account name, number of nodes and cores per node, etc.) and add your own job calls into the USER part of the script.
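For instance, a customized header might look like the following; the job name, resource values, account, and email address are hypothetical examples only, not real accounts on this cluster:

```shell
#PBS -S /bin/bash
#PBS -N md_run
#PBS -o md_run.out
#PBS -e md_run.err
#PBS -l nodes=2:ppn=16
#PBS -l walltime=24:00:00
#PBS -A project_A
#PBS -m abe
#PBS -M jdoe@example.org
#PBS -l epilogue=/shared/scripts/ADMIN__epilogue-qsub.example
```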

PBS parameter                              Description
-S <interpreter>                           shell environment
-N <jobname>                               job name as displayed in the queue
-o <filename>                              redirect standard output to <filename>
-e <filename>                              redirect standard error to <filename>
-l walltime=<hh:mm:ss>                     walltime
-l nodes=<nodes>:ppn=<cores_per_node>      requested resources
-A <project>                               project account name
-l epilogue=<script>                       name of a script to be run at the end of the job
-m abe                                     send emails when job aborts (a), begins (b), ends (e)
-M <email>                                 email address
-q <queue>                                 queue name, to manually specify the destination of the job

Note that the two email parameters above (-m, -M) are informative only; this feature is disabled for security reasons.

Once done, you just have to submit your job by entering in a terminal:

$ qsub run.pbs

Your job has been submitted and its job-ID (here 12345.torque1.cluster.lbt) is returned by qsub.

Concerning the Baal cluster, if you want to submit your job into the monop or test queue, don't forget to add the "-q <queue>" parameter to your qsub command (or in the PBS section of your script).

It's not a good idea to use the InfiniBand network for communications between processes running on a single node. So, if you are using a binary that has been compiled with OpenMPI, you should disable PSM communications to improve performance and stability, and thereby avoid any network issue.

To do this, you just need to add these following lines before your MPI parallel job launcher command in your submission script:

export PSM_DEVICES="self,shm"
export OMPI_MCA_mtl=^psm
export OMPI_MCA_btl=shm,self
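Placed in a job script, this looks like the sketch below; the mpirun line is a hypothetical launcher call for illustration, with made-up process count and binary name:

```shell
# Restrict MPI to in-node transports before launching the single-node run
export PSM_DEVICES="self,shm"
export OMPI_MCA_mtl=^psm
export OMPI_MCA_btl=shm,self

# hypothetical launcher call:
# mpirun -np 16 simulation.x

echo "$OMPI_MCA_btl"
# prints: shm,self
```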

To go further with performance improvements, you can read my performance tips.

Don't forget that you must only submit your job scripts from your workdir directory, because /home is not available on any computing node.

cluster-lbt/quick_start_guide.txt · Last modified: 2020/03/04 11:43 by admin