1. Getting started with Nextflow
What is Nextflow?
Analysing data involves a sequence of tasks, referred to as a workflow or a pipeline. These workflows typically require executing multiple software packages, sometimes on different computing environments, such as a desktop or a compute cluster. Traditionally these workflows have been joined together in scripts using general-purpose programming languages such as Bash or Python. However, as workflows become larger and more complex, managing the programming logic and software becomes difficult.
Nextflow is a free, open-source pipeline management tool that enables scalable and reproducible scientific workflows. It allows the adaptation of pipelines written in the most common scripting languages.
Key features of Nextflow that simplify the development, monitoring, execution and sharing of pipelines:
Reproducible → version control and use of containers ensure the reproducibility of Nextflow pipelines
Portable → compute agnostic (e.g., HPC, cloud, desktop)
Time and resource management
Scalable → run from a single sample to thousands of samples
Continuous checkpoints & re-entrancy → allows you to resume execution from the last successfully executed step
Minimal digital literacy → accessible to anyone
Active global community → more and more Nextflow pipelines are available (see Pipelines)
Nextflow is a pipeline engine that can take advantage of the batch nature of the HPC environment to efficiently and quickly run bioinformatic workflows.
For more information about Nextflow, please visit Nextflow - A DSL for parallel and scalable computational pipelines.
Installing Nextflow
Connect to your Lyra account. Nextflow is meant to run from your home folder on a Linux machine like the HPC.
ssh [username]@lyra.qut.edu.au
Before we start using the HPC, let’s start an interactive session:
If you are not familiar with launching interactive jobs and submitting PBS jobs, please review the Submitting PBS Jobs part 1 section of the Intro to HPC.
qsub -I -S /bin/bash -l walltime=10:00:00 -l select=1:ncpus=1:mem=4gb
This might take a few minutes to start. You will first see a message that the job is waiting to start, followed by a message that the job is ready.
You can check that your interactive session is active by running the command:
qstat -u [username]
Nextflow also requires Java 11 or later to be installed. To load java, run the following command:
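On the HPC, Java is made available through the module system. A minimal sketch, assuming the module is simply named java (the exact module name on your system may differ):
module load java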
Not familiar with the module function? Please review the Modules section of the Intro to HPC.
Finally, we will create a folder that will contain all the exercises and code from today:
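For example (a sketch; the folder name nextflow_workshop is a placeholder, so substitute whatever name is used in your session):
mkdir $HOME/nextflow_workshop
cd $HOME/nextflow_workshop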
Installing Nextflow for the first time
Important: If you have already installed Nextflow on the HPC, skip this section and go directly to the next section, Updating Nextflow.
To install Nextflow for the first time, copy and paste the following block of code into your terminal (e.g., a PuTTY session already connected to Lyra) and hit 'enter':
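A sketch of that block, using the standard Nextflow installer and assuming your personal executables live in $HOME/bin (adjust the destination if yours differs):
curl -s https://get.nextflow.io | bash
mv nextflow $HOME/bin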
Line 1: This command downloads and assembles the parts of Nextflow - this step might take some time.
Line 2: When finished, the nextflow binary will be in the current folder, so it should be moved to your "bin" folder so it can be found later.
Updating Nextflow
If you have installed Nextflow before on the HPC then you will have to run:
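The update is done with Nextflow's built-in self-update command (a sketch of the likely command):
nextflow self-update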
Check that your Nextflow installation worked
To verify that Nextflow is installed properly, you can run the following command:
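For example, printing the version information is a simple check (one possible command):
nextflow -version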
We will also run your first Nextflow pipeline locally, which is called hello:
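A sketch of those commands; line 1 (changing to your home directory) is an assumption, while lines 2 and 3 correspond to the descriptions below:
cd $HOME
mkdir nftemp && cd nftemp
nextflow run hello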
Line 2: Make a temporary folder called nftemp for Nextflow to create files when it runs the hello pipeline; change directory to this newly created folder.
Line 3: Verify Nextflow is working.
You should see something like this:
If you got this output, well done! You have run your first Nextflow pipeline successfully.
Troubleshooting:
Please note that if you have run the Hello pipeline before, you might need to update it to the latest version for it to run properly. To do so, you will need to pull the latest code first:
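A sketch of that pull step, using Nextflow's pull command with the hello pipeline:
nextflow pull hello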
If you see the following error message:
It is likely there is a typo in the command (e.g., the pipeline name) you provided, and the error message is telling you it is unable to find a pipeline under the name provided. Check your spelling and resubmit.
Now that you have managed to run the hello pipeline, go back to your home directory and clean the test folder.
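For example (assuming the nftemp folder was created in your home directory as above):
cd $HOME
rm -rf nftemp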
Nextflow’s base configuration
A key Nextflow feature is the ability to decouple the workflow implementation, which describes the flow of data and operations to perform on that data, from the configuration settings required by the underlying execution platform.
This enables the workflow to be portable, allowing it to run on different computational platforms such as an institutional HPC or cloud infrastructure, without needing to modify the workflow implementation.
For instance, a user can configure Nextflow so it runs the pipelines locally (i.e. on the computer where Nextflow is launched), which can be useful for developing and testing a pipeline script on your computer. This is the default setting in Nextflow.
You can also configure Nextflow to run on a cluster managed by a scheduler such as PBS Pro, which is the setting we will use on the HPC:
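For instance, a minimal sketch of the setting that switches the default executor from local to PBS Pro (the full recommended configuration is covered below):
process {
    executor = 'pbspro'
}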
The base configuration that is applied to every Nextflow workflow you run is located in $HOME/.nextflow/config.
Once you have installed Nextflow on Lyra, there are some settings that should be applied to your $HOME/.nextflow/config to take advantage of the HPC environment at QUT.
To create a suitable config file for use on the QUT HPC, copy and paste the following text into your Linux command line and hit ‘enter’. This will make the necessary changes to your local account so that Nextflow can run correctly:
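The block below is a sketch of what that text looks like; the cache locations, the extra process directives, and the genomes path are placeholders and may differ from the exact settings recommended at QUT:
[[ -f $HOME/.nextflow/config ]] || { mkdir -p $HOME/.nextflow; touch $HOME/.nextflow/config; }
cat <<EOF >> $HOME/.nextflow/config
// placeholder cache locations and defaults - adjust to the values recommended for your system
singularity {
    cacheDir = "$HOME/NXF_SINGULARITY_CACHEDIR"
    autoMounts = true
}
conda {
    cacheDir = "$HOME/NXF_CONDA_CACHEDIR"
}
process {
    executor = 'pbspro'
    beforeScript = 'module load java'
    cache = 'lenient'
}
params.igenomes_base = '/path/to/shared/genomes'
EOF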
Line 1: Check if a .nextflow/config file already exists in your home directory. Create it if it does not exist.
Lines 2-15: Using the cat command, paste text into the newly created .nextflow/config file, which specifies the cache locations for your Singularity and Conda.
What are the parameters you are setting?
Lines 4-7 set the directory where remote Singularity images are stored and direct Nextflow to automatically mount host paths in the executed container.
Lines 8-10 set the directory where Conda environments are stored.
Lines 11-15 set default directives for processes in your pipeline. Note that the executor is set to pbspro on line 12.
Line 16 provides the local path to genome files required for pipelines such as nf-core/rnaseq.