Overview

Nextflow is a pipeline engine that can take advantage of the batch nature of the HPC environment to run bioinformatics workflows quickly and efficiently.

For more information about Nextflow, please visit Nextflow - A DSL for parallel and scalable computational pipelines.

  1. Install Nextflow locally

  2. PBS session: qsub -I -S /bin/bash -l walltime=168:00:00 -l select=1:ncpus=4:mem=8gb

  3. tmux

  4. module load java

  5. Test Nextflow: nextflow run nf-core/ampliseq -profile test,singularity --metadata "Metadata.tsv"

  • Failed: the data files listed in Metadata.tsv were not found. Local copies of the files are needed for this parameter.

  • Tried without metadata: nextflow run nf-core/ampliseq -profile test,singularity

  • This initially ran, then failed during trimming with 'cutadapt: error: pigz: abort: read error on 1_S103_L001_R2_001.fastq.gz (No such file or directory) (exit code 2)'. It looks like the pipeline cannot pull down the input files, so they need to be pulled locally.

6. Run Nextflow Tower

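For convenience, the test-run commands from steps 2 to 5 above can be entered as a single sequence once you are on the HPC. This is only a sketch of the session recorded above: it assumes Nextflow is already installed and on your PATH, and it uses the same resource requests and test profile as the steps above.

Code Block
# Request an interactive PBS session (1 week walltime, 4 CPUs, 8 GB RAM)
qsub -I -S /bin/bash -l walltime=168:00:00 -l select=1:ncpus=4:mem=8gb

# Start tmux so the run survives a dropped connection
tmux

# Load Java, which Nextflow requires
module load java

# Run the nf-core/ampliseq test profile using Singularity containers
nextflow run nf-core/ampliseq -profile test,singularity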

Installing Nextflow

Follow the eResearch wiki entry for installing Nextflow:

Nextflow
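
If you just need a quick reminder, the following is a minimal sketch of the standard Nextflow self-install described in the Nextflow documentation; it assumes a recent Java is available (e.g. via module load java) and that ~/bin is on your PATH. Follow the wiki entry above for the QUT-specific details.

Code Block
# Load Java, which Nextflow requires
module load java

# Download the Nextflow launcher into the current directory
curl -s https://get.nextflow.io | bash

# Move it onto your PATH and confirm it runs
mkdir -p ~/bin && mv nextflow ~/bin/
nextflow -version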

Analysis overview

Pipeline overview

This section provides a skeleton overview of the pipeline commands. If you’ve run this pipeline before and understand the following sections, you can simply run these commands on your data. If this is your first time running this analysis, it’s strongly recommended that you read the following sections of this article, as they provide important background information.

Running analysis on QUT's HPC

Nextflow is designed to run on a Linux server. It makes use of server clusters and batch systems such as PBS to split an analysis into multiple jobs, vastly streamlining and speeding up analysis. To get the most out of Nextflow you should run it on QUT's high performance computing (HPC) cluster.

Accessing the HPC

QUT staff and HDR students can run Nextflow on QUT's high performance computing cluster.

If you haven't been set up on the HPC, or haven't used it previously, click on this link for information on how to get access to and use the HPC:

Need a link here for HPC access and usage 
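
As a very rough sketch, logging in to the HPC from a terminal looks like the following. The hostname shown here is an assumption (confirm the correct address in the HPC access documentation or with HPC support), and your_username is a placeholder for your QUT username.

Code Block
# Log in to the HPC head node (hostname is an assumption - confirm with HPC support)
ssh your_username@lyra.qut.edu.au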

Creating a shared workspace on the HPC

If you already have access to the HPC, we strongly recommend that you request a shared directory to store and analyse your data. This is so that you, relevant members of your research team (e.g. your HPC supervisor), and eResearch bioinformaticians can access your data and assist with the analysis.

To request a shared directory, submit a request to HPC support, telling them what you want the directory to be called (e.g. your_name_16S) and who you want to have access to it.

https://eresearchqut.atlassian.net/servicedesk/customer/portals 
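
Once the shared directory has been created, you and the nominated members of your team can work in it directly from the command line. The path below is hypothetical; HPC support will tell you the actual location of your directory.

Code Block
# Move into the shared directory and list its contents (path is hypothetical)
cd /work/your_name_16S
ls -l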

Running your analysis using the Portable Batch System (PBS)

The HPC has multiple ‘nodes’ available, each of which is allocated a certain amount of RAM and a number of CPU cores.

You should run your analysis on one of these nodes, with a suitable amount of RAM and cores for your analysis needs. When you log on to the HPC you are automatically placed on the ‘head node’.

Do not run any of your analysis on the head node. Always request a node through the PBS.

To request a node using PBS, submit a shell script containing your RAM/CPU/analysis time requirements and the code needed to run your analysis. For an overview of submitting a PBS job, see here:

Need a link here for creating PBS jobs
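
As an illustration only, a PBS submission script for a Nextflow run might look like the sketch below. The job name, resource requests and pipeline arguments are all assumptions and should be replaced with values suitable for your own analysis.

Code Block
#!/bin/bash
# Job name (assumption - choose your own)
#PBS -N ampliseq_run
# Resources: 1 node, 4 CPUs, 8 GB RAM, 72 hours maximum walltime
#PBS -l select=1:ncpus=4:mem=8gb
#PBS -l walltime=72:00:00

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# Load Java, which Nextflow requires
module load java

# Launch the pipeline (arguments are illustrative)
nextflow run nf-core/ampliseq -profile test,singularity

You would then submit the script with qsub (e.g. qsub run_ampliseq.pbs, where the file name is a placeholder); Nextflow itself takes care of submitting the individual pipeline steps as further PBS jobs.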

Alternatively, you can start up an ‘interactive’ node, using the following:

Code Block
qsub -I -S /bin/bash -l walltime=72:00:00 -l select=1:ncpus=4:mem=8gb

This requests a node with 4 CPUs and 8 GB of memory that will run for up to 72 hours. This may seem inadequate for a full analysis, but Nextflow will automatically submit PBS jobs and allocate them sufficient memory for each analysis step.
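
For Nextflow to do this, it needs to be told to use the PBS executor rather than running every process on the node you are logged in to. Many nf-core pipelines take this from an institutional or custom configuration profile; purely as a sketch (the queue name and settings below are assumptions, not QUT values), a minimal custom configuration could look like this:

Code Block
// custom.config - sketch only; queue name and settings are assumptions
process {
    executor = 'pbspro'    // submit each pipeline process as a PBS Pro job
    // queue = 'workq'     // uncomment and set if your HPC requires a specific queue
}

singularity {
    enabled    = true      // use Singularity containers, matching -profile singularity
    autoMounts = true      // automatically bind host paths into the container
}

A file like this can be passed to Nextflow with the -c option, e.g. nextflow run nf-core/ampliseq -profile test,singularity -c custom.config.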

Note that when you request a node, you are put into a queue until a node with those specifications becomes available. Depending on how many people are using the HPC and how many resources they have requested, this may take a while, in which case you just need to wait until the job starts (“waiting for job xxxx.pbs to start”).
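
While you are waiting, you can check where your job sits in the queue with qstat, the standard PBS status command:

Code Block
# Show the status of your own jobs (Q = queued, R = running)
qstat -u $USER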

Once the interactive PBS job begins (you get a command prompt) you can run the following analysis: