https://mediahub.qut.edu.au/media/t/0_d0bsv333
Launching a Batch Job - Constructing a Job Script
It is possible to submit a batch job entirely from the command line, but saving the job parameters and commands in a text file is very handy for documenting your use of the HPC. In a job script, lines beginning with #PBS provide instructions to PBS; they are not run as commands. A small script is:
#!/bin/bash -l
#PBS -l select=1:ncpus=1:mem=2gb
#PBS -l walltime=00:10:00
echo $(hostname)
This job requests one node (select=1), one CPU (ncpus=1), 2 GB of memory (mem=2gb), and will run for a maximum of 10 minutes (walltime=00:10:00).
Notice how the options after #PBS are the same as the qsub command line?
This script is very basic: it runs the hostname command, which outputs the name of the computer the job is running on, and echoes the result to standard output.
While the name of the file is not important, I like to save my PBS job scripts as {name}.pbs to easily identify them in the file list. Use training01.pbs here.
Let’s use nano to create the file; nano is provided by a module:
module load nano
Create the file:
nano training01.pbs
Launching a Batch Job - Submitting a Job Script
Since all the options are contained in the job script, the qsub line is short:
qsub training01.pbs
And you will see a job number printed on the screen. Use qjobs to check on the status of the job.
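The job number that qsub prints (e.g. 5228698.pbs in the example later in this page) can be captured for later qstat queries. A small sketch of extracting the numeric part with standard shell parameter expansion; the job id here is a hypothetical stand-in, and on the cluster you would capture the real one with JOBID=$(qsub training01.pbs):

```shell
# Hypothetical job id of the form qsub prints;
# on the cluster: JOBID=$(qsub training01.pbs)
JOBID="5228698.pbs"

# Strip the server suffix to get the bare number,
# which qstat also accepts:
JOBNUM="${JOBID%%.*}"
echo "$JOBNUM"    # prints 5228698
```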
Checking on the Job Status
To quickly check on your jobs that are queued and running, use the qjobs command:
qjobs
You will get a summary of your queued and running jobs; finished jobs are not displayed.
An alternative way to list your jobs:
qstat -u $USER
Get more details about a particular job:
qstat -f {jobid}
Checking the Output
Since we told the job to print the name of the node it was running on, how do we see that output? By default, PBS saves the output of the commands run in the job into two files, named {job name}.o{job id} and {job name}.e{job id}.
Let's examine these files:
# find the files by listing the contents of the folder sorted by reverse date
ls -ltr
# the 'e' file is empty
cat training01.pbs.o{tab}
cl4n018
PBS Job 5228698.pbs
CPU time  : 00:00:00
Wall time : 00:00:02
Mem usage : 0b
We can see that in this case the job ran on the cl4n018 node, used no measurable CPU or memory, and lasted for 2 seconds. The two files hold the standard output and the error output of the commands. The names of the files can be changed, and the two streams merged, with additional options.
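Because the names follow the {job name}.o{job id} pattern, you can also glob for the output file instead of tab-completing. A quick sketch that can be tried anywhere, with touch standing in for the files PBS would create (the job id 5228698 is just the one from the example above):

```shell
# Stand-ins for the files PBS creates after the job finishes
# (on the cluster these appear in the submission folder for you):
touch training01.pbs.o5228698   # standard output
touch training01.pbs.e5228698   # error output (empty for this job)

# Glob for the output file of this job script:
OUTFILE=$(ls training01.pbs.o*)
echo "$OUTFILE"
```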
More options in job scripts
We have just scratched the surface of what you can specify when you submit and run jobs. A few useful ones are:
Be notified about the job: use the -m option, e.g. to be sent an email if the job is aborted, when it begins, and when it ends: #PBS -m abe
Give the job a name: to find your job in a long list, give it a meaningful name with the -N option: #PBS -N MyJob01
Merge the error file into the standard output file: #PBS -j oe
Override the email address: if you want the job notification emails sent to another address, use the -M option, eg #PBS -M bob@bob.com
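Putting the options above together, a job script for this training exercise might look like the following. This is a sketch only; the name MyJob01 and the address bob@bob.com are just the placeholders used above. Note that bash treats the #PBS lines as comments, so the script also runs harmlessly outside PBS.

```shell
#!/bin/bash -l
#PBS -N MyJob01                     # a meaningful job name
#PBS -l select=1:ncpus=1:mem=2gb   # one node, one CPU, 2 GB of memory
#PBS -l walltime=00:10:00          # run for at most 10 minutes
#PBS -m abe                        # email on abort, begin, and end
#PBS -M bob@bob.com                # send the notifications here
#PBS -j oe                         # merge errors into the output file

echo $(hostname)
```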
Tricks and Tips
When the job starts, PBS logs on to the node as you, and your working directory will be your home folder. If your data is in a subfolder or in a shared folder, you can use this to change to that folder automatically:
cd $PBS_O_WORKDIR
$PBS_O_WORKDIR is a special environment variable created by PBS; it holds the folder where you ran the qsub command.
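For example, a job script that works on data files in the folder it was submitted from might start like this. This is a sketch only, and data.txt is a hypothetical input file; $PBS_O_WORKDIR is only set when the script runs under PBS.

```shell
#!/bin/bash -l
#PBS -l select=1:ncpus=1:mem=2gb
#PBS -l walltime=00:10:00

# Move from the home folder to wherever qsub was run;
# PBS sets this variable for the job.
cd $PBS_O_WORKDIR

# Work with files relative to that folder, e.g. count the lines
# of a hypothetical input file:
wc -l data.txt
```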