Exercise 2: Run nf-core/sarek using a family trio data (HapMap; Genome in a Bottle)

Public data

  • Family ID: 1463

  • Family information: a Utah family lineage comprising four grandparents, two parents, and 11 children (17 family members)

  • Genomics consortia: Genome in a Bottle, 1000 Genomes Project, International HapMap Project, Centre d'Etude du Polymorphisme Humain (CEPH)

...

The pipeline requires preparing at least two files:

  • Metadata file (samplesheet.csv) that specifies the following information:

Code Block
patient,sample,lane,fastq_1,fastq_2
ID1,S1,L002,/full/path/to/ID1_S1_L002_R1_001.fastq.gz,/full/path/to/ID1_S1_L002_R2_001.fastq.gz
  • PBS Pro script (launch_nf-core_sarek_trio.pbs) with instructions to run the pipeline (see below)

STEP 1: Create the metadata file (samplesheet.csv):

List the FASTQ files in the data directory of the family trio:

...

Code Block
cp $HOME/workshop/sarek/scripts/create_samplesheet_nf-core_sarek.py $HOME/workshop/sarek/run2_trio
cd $HOME/workshop/sarek/run2_trio
  • Note: you could replace ‘$HOME/workshop/sarek/run2_trio’ with “.” — a dot denotes the current directory, so the file is copied to wherever you are currently located
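A quick sketch of what the dot destination does, using throw-away paths in /tmp (these paths are illustrative only, not the workshop directories):

```shell
# "." as the cp destination means "the directory I am currently in".
mkdir -p /tmp/demo_scripts /tmp/demo_workdir
echo '# placeholder' > /tmp/demo_scripts/create_samplesheet.py   # stand-in for the real script
cd /tmp/demo_workdir
cp /tmp/demo_scripts/create_samplesheet.py .   # same effect as: cp ... /tmp/demo_workdir
ls .
```

After the `cd`, copying to `.` and copying to the full destination path are equivalent.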

Check the script’s help option for usage details:

...

Code Block
python create_samplesheet_nf-core_sarek.py --dir /work/training/sarek/data/WES/trio \
  --read1_extension R1.fastq.gz \
  --read2_extension R2.fastq.gz \
  --out samplesheet.csv

Alternatively copy the samplesheet.csv file:

Code Block
cp /work/training/sarek/data/WES/trio/samplesheet.csv .

Check the newly created samplesheet.csv file:

Code Block
ls -l
cat samplesheet.csv

patient,sample,lane,fastq_1,fastq_2
SRR14724455,NA12892a,L001,/sarek/data/WES/trio/SRR14724455_NA12892a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724455_NA12892a_L001_R2.fastq.gz
SRR14724456,NA12891a,L001,/sarek/data/WES/trio/SRR14724456_NA12891a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724456_NA12891a_L001_R2.fastq.gz
SRR14724463,NA12878a,L001,/sarek/data/WES/trio/SRR14724463_NA12878a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724463_NA12878a_L001_R2.fastq.gz
SRR14724474,NA12892b,L001,/sarek/data/WES/trio/SRR14724474_NA12892b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724474_NA12892b_L001_R2.fastq.gz
SRR14724475,NA12891b,L001,/sarek/data/WES/trio/SRR14724475_NA12891b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724475_NA12891b_L001_R2.fastq.gz
SRR14724483,NA12878b,L001,/sarek/data/WES/trio/SRR14724483_NA12878b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724483_NA12878b_L001_R2.fastq.gz
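A malformed samplesheet is a common cause of pipeline failures, so it is worth a quick sanity check that every row has exactly five comma-separated fields. The snippet below is a sketch that demos the check on a two-line example written to /tmp (a hypothetical file); on the HPC you would point the awk command at your real samplesheet.csv instead:

```shell
# Build a tiny demo samplesheet (stand-in for the real file).
cat > /tmp/demo_samplesheet.csv <<'EOF'
patient,sample,lane,fastq_1,fastq_2
SRR14724455,NA12892a,L001,/sarek/data/WES/trio/SRR14724455_NA12892a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724455_NA12892a_L001_R2.fastq.gz
EOF

# Flag any row that does not have exactly 5 comma-separated fields.
awk -F',' 'NF != 5 {print "Bad row " NR ": " $0; bad=1} END {exit bad}' /tmp/demo_samplesheet.csv \
  && echo "samplesheet OK"
```

If a row is malformed, the check prints its line number and exits non-zero instead of printing "samplesheet OK".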

STEP 2: Run the nf-core/sarek pipeline (launch_nf-core_sarek_trio.pbs)

Copy and paste the code below to the terminal:

Code Block
cp $HOME/workshop/sarek/scripts/launch_nf-core_sarek_trio.pbs $HOME/workshop/sarek/run2_trio
cd $HOME/workshop/sarek/run2_trio
  • Line 1: copy the launch_nf-core_sarek_trio.pbs submission script to the working directory

  • Line 2: move to the working directory

View the content of the launch_nf-core_sarek_trio.pbs script:

...

#!/bin/bash -l
#PBS -N nfsarek_run2_trio
#PBS -l walltime=48:00:00
#PBS -l select=1:ncpus=1:mem=5gb

cd $PBS_O_WORKDIR
NXF_OPTS='-Xms1g -Xmx4g'
module load java

#specify the nextflow version to use to run the workflow
export NXF_VER=23.10.1

#run the sarek pipeline
nextflow run nf-core/sarek \
        -r 3.3.2 \
        -profile singularity \
        --genome GATK.GRCh38 \
        --input samplesheet.csv \
        --wes \
        --outdir ./results \
        --step mapping \
        --tools haplotypecaller,snpeff,vep \
        --snpeff_cache /work/training/sarek/NXF_SINGULARITY_CACHEDIR/snpeff_cache \
        --vep_cache /work/training/sarek/NXF_SINGULARITY_CACHEDIR/vep_cache \
        -resume
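To launch the script, submit it to the PBS Pro scheduler and monitor the queue. This is a sketch: `qsub` and `qstat` only exist on the HPC login nodes, so the snippet includes a guard that reports when they are unavailable.

```shell
# Submit the job and check its status (PBS Pro commands; HPC login node only).
if command -v qsub >/dev/null 2>&1; then
    qsub launch_nf-core_sarek_trio.pbs   # prints the assigned job ID
    qstat -u "$USER"                     # list your queued/running jobs
else
    echo "qsub not found: run this on the HPC login node"
fi
```

`qstat -u "$USER"` restricts the listing to your own jobs; the job's stdout/stderr files appear in the submission directory once it finishes.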

  • The above script will screen for germline (inherited) variants using GATK’s HaplotypeCaller and then annotate the identified variants using SnpEff and VEP.

  • The -r 3.3.2 flag pins the run to a specific nf-core/sarek release, so repeat runs use exactly the same pipeline code and remain reproducible.

  • The pipeline lets us start at distinct stages; here we are commencing from the start with “--step mapping”.

  • Pipeline steps:

...

Once the pipeline has finished running, assess the results as follows:

NOTE: To proceed, you need to be on QUT’s WiFi network or signed in via VPN.

To browse the working folder on the HPC, type the following in the file finder:

...

During execution of the workflow two output folders are generated:

  • work - where all tasks are executed and intermediate files are written

  • results - where the final results of every pipeline stage are copied
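Because the final outputs are copied into results/, the work/ directory can be deleted after a successful run to reclaim disk space. The sketch below demos this on a mock layout under /tmp (hypothetical paths); on the HPC you would do the same inside run2_trio, only after verifying results/ is complete:

```shell
# Mock run directory standing in for $HOME/workshop/sarek/run2_trio.
mkdir -p /tmp/demo_run/work /tmp/demo_run/results
touch /tmp/demo_run/results/multiqc_report.html   # hypothetical final output

# Once results/ has been checked, the intermediate work/ tree is safe to remove.
rm -rf /tmp/demo_run/work
ls /tmp/demo_run
```

Note that deleting work/ discards the cached tasks that `-resume` relies on, so only do this when you are finished with the run.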

...