2024-2: Submitting PBS Jobs part 1

https://mediahub.qut.edu.au/media/t/0_d0bsv333

Requesting Resources

Before you can run your software analysis on the HPC, suitable resources must be reserved for you. You must determine the resources needed to run your software before you submit your job. If you do not request enough CPUs, your analysis may run slower than intended. If you do not request enough memory, the PBS scheduler will kill your job if it tries to allocate more than the limit. The PBS scheduler will also kill your job if it has not finished before the walltime you requested.

It can be a chicken-and-egg situation, as you may not know what resources you need to request. At the end of your job, PBS will create a summary of the resources you used. You can use this summary to tune your job requests. Asking for more resources than you need helps ensure your job finishes successfully, but the more resources you ask for, the longer it usually takes before your job starts. So if you overestimate, use the results to lower your request for the next job. If you have already run the analysis on your laptop, its specifications are a good starting point, typically 4 CPUs and 16gb.
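As a rough sketch of how to read that summary after the fact, qstat can show the resources a finished job actually consumed (the job id below is a placeholder; the exact fields may vary with your site's PBS version):

```shell
# -x includes finished jobs, -f shows full details.
# The resources_used.* fields report what the job actually consumed,
# which you can compare against what you requested.
qstat -xf 123456 | grep "resources_used"
```
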

 

Types of Resources

PBS tracks the following resources which you can request in your job:

  • ncpus: The number of CPUs you need. Generally, the more CPUs you ask for, the faster your job will run (up to a limit). Specified as a number such as 2

  • mem: The amount of memory your analysis needs. Typically requested in gigabytes (gb). Be generous here; if you do not request enough, your job will be killed! Specified as a number with a unit, e.g. 8gb

  • walltime: How long you need to run your analysis. If your analysis is not finished by this time, it will be killed. Consider using a minimum of 1 hour to reduce the work the scheduler does. Specified in Hours:Minutes:Seconds, eg 4:00:00 for 4 hours, zero minutes and zero seconds

Other resources can be requested but are considered optional:

  • ngpus: Select one or more GPUs for your job. Only select if you know your software can use a GPU

  • cputype: Select the type of CPU you want to use. You can use Intel CPUs or AMD CPUs
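These resources are requested with qsub's -l option, either on the command line or as #PBS directives at the top of a job script. As a sketch (the resource names match the list above, but check your site's documentation for the exact chunk syntax, especially for ngpus and cputype):

```shell
#!/bin/bash
# Hypothetical job script header: 4 CPUs, 16gb of memory, 4 hours.
#PBS -l select=1:ncpus=4:mem=16gb
#PBS -l walltime=4:00:00
# Optional, only if your software can use them:
# #PBS -l select=1:ncpus=4:ngpus=1:mem=16gb
```
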

 

Types of Jobs

When running software on the HPC, we do NOT run the apps on the Login Node. Since the Login node is shared amongst all the connected HPC users, we don’t want you slowing everyone else down, and we don’t want others slowing you down. To run any software, we need to submit a job.

The workhorse of PBS is the Batch Job. With this job type, you request resources and specify software that will run without asking questions. This is important because if your software stops to ask a question like “press Y to continue”, there is no one to press Y. When you submit a Batch Job, PBS may run it at 3am the next morning. You do not have to wait; simply submit and move on.
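As a minimal sketch of a Batch Job (the script name and the analysis command are placeholders, not part of the course material):

```shell
#!/bin/bash
#PBS -N my_analysis
#PBS -l select=1:ncpus=2:mem=8gb
#PBS -l walltime=1:00:00

# Everything below runs unattended on the compute node.
cd "$PBS_O_WORKDIR"        # start in the directory the job was submitted from
echo "Job started on $(hostname)"
# ./run_my_analysis.sh     # placeholder for your real analysis command
```

Saved as, say, my_analysis.pbs, it would be submitted with `qsub my_analysis.pbs`; qsub prints the job id and returns immediately, so you can log off while the job waits to run.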

Sometimes it may be difficult to find the right software that will run on the HPC, or you might need to experiment with command line options. An Interactive job can help here. When you submit an Interactive job, your terminal stops accepting input until the job is allocated to a node and starts. When it starts, you will be transferred to the node. This session is not shared; you can run apps without affecting others.

 

Helpful Tools

The command to submit jobs is qsub. qsub has many options, and you can find out about all of them by accessing qsub’s man page.

To check on the status of your jobs, use the qstat command. Another tool written by QUT HPC staff is qjobs.
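For example (the job id is a placeholder, and qjobs is a QUT-written tool, so its options may differ from standard PBS commands):

```shell
man qsub            # full documentation for qsub's options
qstat -u "$USER"    # status of your queued and running jobs
qstat -xf 123456    # full details of one job, including after it finishes
qjobs               # QUT HPC staff tool summarising your jobs
```
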

 

Launching an Interactive job

To launch an Interactive job, we need to supply the necessary options to qsub. Let's launch a small Interactive job, with 1 CPU and 1gb of memory for 3 hours.

qsub -I -S /bin/bash -l select=1:ncpus=1:mem=1gb -l walltime=3:00:00

You will see:

qsub: waiting for job {job id} to start
qsub: job {job id} ready
{username}@{node}:~>

You are now connected to the node assigned to your job and you can run commands. Notice how the server name (after the @ symbol) has changed. We can now run commands that use all of the resources we requested. These resources are not shared like the Login Node.
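For instance, a few standard Linux commands (not specific to PBS) confirm where you are and what the session has:

```shell
hostname    # prints the compute node's name, matching the new prompt
nproc       # number of CPUs visible to this session
free -h     # memory on the node, in human-readable units
```
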

We shall keep this job running for the next section.