Exercise 3: Run nf-core/sarek using trio samples

The pipeline requires preparing at least 2 files:

  • Metadata file (samplesheet.csv) that specifies the following information:

Code Block
patient,sample,lane,fastq_1,fastq_2
ID1,S1,L002,/full/path/to/ID1_S1_L002_R1_001.fastq.gz,/full/path/to/ID1_S1_L002_R2_001.fastq.gz
  • PBS Pro script (launch_nf-core_sarek_trio.pbs) with instructions to run the pipeline

Create the metadata file (samplesheet.csv):

Change to the data folder directory:

Code Block
cd $HOME/workshop/sarek/data/trio
pwd

Copy the Python script "create_samplesheet_nf-core_sarek.py" to the working folder:

Code Block
cp /work/training/sarek/scripts/create_samplesheet_nf-core_sarek.py $HOME/workshop/sarek/data/trio
  • Note: you could replace '$HOME/workshop/sarek/data/trio' with '.'. A dot indicates the current directory, so the file will be copied to the directory where you are currently located.
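For example, since you have already changed into the data folder, the following copy is equivalent (the '.' target refers to your current directory):

Code Block
cd $HOME/workshop/sarek/data/trio
cp /work/training/sarek/scripts/create_samplesheet_nf-core_sarek.py .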

Check the script's help option to see how to run it:

Code Block
python create_samplesheet_nf-core_sarek.py --help

usage: create_samplesheet_nf-core_sarek.py [-h] [--dir DIR] [--read1_extension READ1_EXTENSION] [--read2_extension READ2_EXTENSION] [--out OUT]

Extract metadata from fastq files in a directory.

optional arguments:
  -h, --help            show this help message and exit
  --dir DIR             Directory to search for files (default: current directory)
  --read1_extension READ1_EXTENSION
                        Extension for fastq_1 files (default: R1_001.fastq.gz)
  --read2_extension READ2_EXTENSION
                        Extension for fastq_2 files (default: R2_001.fastq.gz)
  --out OUT             Output metadata CSV file
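For reference, the sketch below shows roughly what the script does, written as a plain bash loop. It assumes the filenames follow the <patient>_<sample>_<lane>_R1.fastq.gz / ..._R2.fastq.gz pattern used in this exercise; the Python script remains the supported way to build the samplesheet.

Code Block
# Minimal bash sketch of the samplesheet logic
# (assumption: filenames look like SRR14724455_NA12892a_L001_R1.fastq.gz)
DIR=$HOME/workshop/sarek/data/trio
echo "patient,sample,lane,fastq_1,fastq_2" > samplesheet.csv
for r1 in "$DIR"/*_R1.fastq.gz; do
    r2=${r1%_R1.fastq.gz}_R2.fastq.gz           # matching read 2 file
    base=$(basename "$r1" _R1.fastq.gz)         # e.g. SRR14724455_NA12892a_L001
    patient=$(echo "$base" | cut -d_ -f1)
    sample=$(echo "$base" | cut -d_ -f2)
    lane=$(echo "$base" | cut -d_ -f3)
    echo "$patient,$sample,$lane,$r1,$r2" >> samplesheet.csv
done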

Let’s generate the metadata file by running the following command:

Code Block
python create_samplesheet_nf-core_sarek.py --dir $HOME/workshop/sarek/data/trio \
  --read1_extension R1.fastq.gz \
  --read2_extension R2.fastq.gz \
  --out samplesheet.csv

Check the newly created samplesheet.csv file:

Code Block
ls -l
cat samplesheet.csv

patient,sample,lane,fastq_1,fastq_2
SRR14724455,NA12892a,L001,/sarek/data/WES/trio/SRR14724455_NA12892a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724455_NA12892a_L001_R2.fastq.gz
SRR14724456,NA12891a,L001,/sarek/data/WES/trio/SRR14724456_NA12891a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724456_NA12891a_L001_R2.fastq.gz
SRR14724463,NA12878a,L001,/sarek/data/WES/trio/SRR14724463_NA12878a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724463_NA12878a_L001_R2.fastq.gz
SRR14724474,NA12892b,L001,/sarek/data/WES/trio/SRR14724474_NA12892b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724474_NA12892b_L001_R2.fastq.gz
SRR14724475,NA12891b,L001,/sarek/data/WES/trio/SRR14724475_NA12891b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724475_NA12891b_L001_R2.fastq.gz
SRR14724483,NA12878b,L001,/sarek/data/WES/trio/SRR14724483_NA12878b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724483_NA12878b_L001_R2.fastq.gz
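Before launching the pipeline, it can be worth confirming that every fastq path recorded in the samplesheet actually exists. An optional check, assuming the CSV layout shown above:

Code Block
# Optional check: flag any fastq path in the samplesheet that is missing or empty
tail -n +2 samplesheet.csv | awk -F, '{print $4; print $5}' | while read -r f; do
    [ -s "$f" ] || echo "Missing or empty: $f"
done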

Copy the PBS Pro script for running the nf-core/sarek pipeline (launch_nf-core_sarek_trio.pbs):

Copy and paste the code below to the terminal:

Code Block
cp $HOME/workshop/sarek/data/WES/trio/samplesheet.csv $HOME/workshop/sarek/runs/run2_trio
cp $HOME/workshop/sarek/scripts/launch_nf-core_sarek_trio.pbs $HOME/workshop/sarek/runs/run2_trio
cd $HOME/workshop/sarek/runs/run2_trio
  • Line 1: Copy the samplesheet.csv file generated above to the working directory

  • Line 2: Copy the launch_nf-core_sarek_trio.pbs submission script to the working directory

  • Line 3: Move to the working directory
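You can confirm that both files are in place before going any further (an optional check):

Code Block
# Expect to see samplesheet.csv and launch_nf-core_sarek_trio.pbs listed
ls -l $HOME/workshop/sarek/runs/run2_trio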

View the content of the launch_nf-core_sarek_trio.pbs script:

Code Block
cat launch_nf-core_sarek_trio.pbs

#!/bin/bash -l
#PBS -N nfsarek_run2_trio
#PBS -l walltime=48:00:00
#PBS -l select=1:ncpus=1:mem=5gb

cd $PBS_O_WORKDIR
NXF_OPTS='-Xms1g -Xmx4g'
module load java

#specify the nextflow version to use to run the workflow
export NXF_VER=23.10.1

#run the sarek pipeline
nextflow run nf-core/sarek \
        -r 3.3.2 \
        -profile singularity \
        --genome GATK.GRCh38 \
        --input samplesheet.csv \
        --wes \
        --outdir ./results \
        --step mapping \
        --tools haplotypecaller,snpeff,vep \
        --snpeff_cache /work/training/sarek/NXF_SINGULARITY_CACHEDIR/snpeff_cache \
        --vep_cache /work/training/sarek/NXF_SINGULARITY_CACHEDIR/vep_cache \
        -resume

  • The above script will screen for germline (inherited) variants using GATK's HaplotypeCaller and then annotate the identified variants using SnpEff and VEP.

  • The "-r 3.3.2" option pins the run to a specific nf-core/sarek release, so the same pipeline version is used each time the job is run.

  • The pipeline can start from distinct stages; here we are commencing from the beginning with "--step mapping".
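If you want to see which other parameters and starting steps are available, the pipeline can print its own parameter documentation. The command below should only print help text, not start a run:

Code Block
# Print the parameter documentation for the pinned nf-core/sarek release
module load java
export NXF_VER=23.10.1
nextflow run nf-core/sarek -r 3.3.2 --help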

Submitting the job

Once you have created the folder for the run and copied in the samplesheet.csv file and the launch_nf-core_sarek_trio.pbs script, you are ready to submit the job to the HPC scheduler:

Code Block
qsub launch_nf-core_sarek_trio.pbs

Monitoring the Run

Code Block
qjobs

Run the command above to check on the jobs you are running. Nextflow will launch additional jobs during the run.
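On most PBS Pro systems the standard qstat command gives similar information, should qjobs not be available:

Code Block
# List your own jobs in the PBS queue
qstat -u $USER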

You can also check the .nextflow.log file for details on what is going on.
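For example, to follow the log as the pipeline progresses:

Code Block
# Follow the Nextflow log in real time (Ctrl+C stops watching, not the run)
tail -f .nextflow.log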

Once the pipeline has finished running, assess the results as follows:

NOTE: To proceed, you need to be on QUT's WiFi network or signed in via VPN.

To browse the working folder on the HPC, type the following into your file browser:

Windows PC

Code Block
\\hpc-fs\work\training\sarek

Mac

Code Block
smb://hpc-fs/work/training/sarek

Evaluate the nucleotide distributions at the 5'-end and 3'-end of the sequenced reads (Read1 and Read2). Look in the "MultiQC" folder and open the provided HTML report.
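If you prefer to locate the report from the command line first, it is written under the run's results directory (the exact sub-folder name may vary between sarek versions):

Code Block
# Locate the MultiQC HTML report produced by the run
find $HOME/workshop/sarek/runs/run2_trio/results -name "multiqc_report*.html"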