Exercise 2 - trio case study

Exercise 2: Run nf-core/sarek using family trio data (HapMap; Genome in a Bottle)

Public data

  • Family ID: 1463

  • Family information: a Utah family lineage comprising four grandparents, two parents, and 11 children (17 family members)

  • Genomics consortia: Genome in a Bottle, 1000 Genomes Project, International HapMap Project, Centre d'Etude du Polymorphisme Humain (CEPH)

CEPH is an international genetic research center that provides a resource of immortalized cell cultures used to map genetic markers. The whole pedigree was sequenced to 50x depth on an Illumina HiSeq 2000 system and is considered a platinum standard, where platinum refers to the quality and completeness of the resulting assembly: full chromosome scaffolds with phasing and haplotypes resolved across the entire genome.

This figure depicts the pedigree of the family sequenced for this study. The ID of each sample is formed by adding the prefix NA128 to the numbered individual, so that 77 = NA12877 and 78 = NA12878. The dark orange individuals indicate the sequences used in the analysis methods, whereas the blue individuals represent the founder generation (grandparents), which were also sequenced and used in validation steps. The genomes of the parent-child trio at the top right (91-92-78) were also sequenced during Phase I of the 1000 Genomes Project.

Sample ID: NA12878 (Daughter in this trio; mother of the 11 children in the full pedigree)

Description: Donor subject has a single bp (G-to-A) transition at nucleotide 681 in exon 5 of the CYP2C19 gene (CYP2C19*2), which creates an aberrant splice site. The change altered the reading frame of the mRNA starting with amino acid 215 and produced a premature stop codon 20 amino acids downstream, resulting in a truncated, nonfunctional protein. Because of the aberrant splice site, a 40-bp deletion occurred at the beginning of exon 5 (from bp 643 to bp 682), resulting in deletion of amino acids 215 to 227. The truncated protein had 234 amino acids and would be catalytically inactive because it lacked the heme-binding region.

Biological sample source: https://catalog.coriell.org/0/Sections/Search/Sample_Detail.aspx?Ref=NA12878&Product=DNA

Sample ID: NA12891 (Father in this trio; maternal grandfather in the full pedigree)

Description: Donor subject is homozygous for the same single bp (G-to-A) transition at nucleotide 681 in exon 5 of the CYP2C19 gene (CYP2C19*2) described above, which creates an aberrant splice site and results in a truncated, nonfunctional protein.

Biological sample source: https://catalog.coriell.org/0/Sections/Search/Sample_Detail.aspx?Ref=NA12891&Product=DNA

Sample ID: NA12892 (Mother in this trio; maternal grandmother in the full pedigree)

Description: Maternal grandmother of the full pedigree.

Biological sample source: https://catalog.coriell.org/0/Sections/Search/Sample_Detail.aspx?Ref=NA12892&Product=DNA

Location of raw data:

/work/training/sarek/data/WES/trio
├── samplesheet.csv
├── SRR14724455_NA12892a_L001_R1.fastq.gz
├── SRR14724455_NA12892a_L001_R2.fastq.gz
├── SRR14724456_NA12891a_L001_R1.fastq.gz
├── SRR14724456_NA12891a_L001_R2.fastq.gz
├── SRR14724463_NA12878a_L001_R1.fastq.gz
├── SRR14724463_NA12878a_L001_R2.fastq.gz
├── SRR14724474_NA12892b_L001_R1.fastq.gz
├── SRR14724474_NA12892b_L001_R2.fastq.gz
├── SRR14724475_NA12891b_L001_R1.fastq.gz
├── SRR14724475_NA12891b_L001_R2.fastq.gz
├── SRR14724483_NA12878b_L001_R1.fastq.gz
└── SRR14724483_NA12878b_L001_R2.fastq.gz

Where each FASTQ file name is made up of the SRA run accession (e.g. SRR14724455), the sample ID with a suffix distinguishing the two sequencing runs per individual (e.g. NA12892a and NA12892b), the lane (L001), and the read of the pair (R1 or R2).

Pipeline requirements

The pipeline requires preparing at least 2 files:

  • Metadata file (samplesheet.csv) that specifies the following information:

patient,sample,lane,fastq_1,fastq_2
ID1,S1,L002,/full/path/to/ID1_S1_L002_R1_001.fastq.gz,/full/path/to/ID1_S1_L002_R2_001.fastq.gz
  • PBS Pro script (launch_nf-core_sarek_trio.pbs) with instructions to run the pipeline (see below)

STEP 1: Create the metadata file (samplesheet.csv):

List the FASTQ files in the family trio data directory:
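For example, using the training data location shown above (adjust the path if your copy of the data lives elsewhere):

ls -lh /work/training/sarek/data/WES/trio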

Copy the Python script create_samplesheet_nf-core_sarek.py to the working (run2) folder:

  • Note: you could replace '$HOME/workshop/sarek/data' with ".", where the dot indicates the current directory; the file will then be copied to the directory where you are currently located
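A minimal sketch of the copy step; the source path of the script is a placeholder and the run2 location under $HOME/workshop/sarek is an assumption, so adjust both to your own setup:

# the source path below is hypothetical; replace it with where the script is provided
cp /path/to/create_samplesheet_nf-core_sarek.py $HOME/workshop/sarek/run2
# or, if you are already inside the run2 folder, copy it into the current directory
cp /path/to/create_samplesheet_nf-core_sarek.py .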

Check the help option to see how to run the script:
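For example (invoking the script through python is an assumption; a python module may need to be loaded first):

python create_samplesheet_nf-core_sarek.py --help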

usage: create_samplesheet_nf-core_sarek.py [-h] [--dir DIR] [--read1_extension READ1_EXTENSION]
                                           [--read2_extension READ2_EXTENSION] [--out OUT]

Extract metadata from fastq files in a directory.

optional arguments:
  -h, --help            show this help message and exit
  --dir DIR             Directory to search for files (default: current directory)
  --read1_extension READ1_EXTENSION
                        Extension for fastq_1 files (default: R1_001.fastq.gz)
  --read2_extension READ2_EXTENSION
                        Extension for fastq_2 files (default: R2_001.fastq.gz)
  --out OUT             Output metadata CSV file


Let’s generate the metadata file by running the following command:
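A possible invocation, based on the help text above; the read extensions match the trio file names (which end in _R1.fastq.gz / _R2.fastq.gz rather than the R1_001.fastq.gz default), while the exact data path should be treated as an assumption for your environment:

python create_samplesheet_nf-core_sarek.py \
    --dir /work/training/sarek/data/WES/trio \
    --read1_extension R1.fastq.gz \
    --read2_extension R2.fastq.gz \
    --out samplesheet.csv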

Alternatively copy the samplesheet.csv file:
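The trio data folder already contains a prepared samplesheet.csv (see the directory tree above), so it can simply be copied into the current working directory:

cp /work/training/sarek/data/WES/trio/samplesheet.csv .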

 

Check the newly created samplesheet.csv file:

patient,sample,lane,fastq_1,fastq_2

SRR14724455,NA12892a,L001,/sarek/data/WES/trio/SRR14724455_NA12892a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724455_NA12892a_L001_R2.fastq.gz

SRR14724456,NA12891a,L001,/sarek/data/WES/trio/SRR14724456_NA12891a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724456_NA12891a_L001_R2.fastq.gz

SRR14724463,NA12878a,L001,/sarek/data/WES/trio/SRR14724463_NA12878a_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724463_NA12878a_L001_R2.fastq.gz

SRR14724474,NA12892b,L001,/sarek/data/WES/trio/SRR14724474_NA12892b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724474_NA12892b_L001_R2.fastq.gz

SRR14724475,NA12891b,L001,/sarek/data/WES/trio/SRR14724475_NA12891b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724475_NA12891b_L001_R2.fastq.gz

SRR14724483,NA12878b,L001,/sarek/data/WES/trio/SRR14724483_NA12878b_L001_R1.fastq.gz,/sarek/data/WES/trio/SRR14724483_NA12878b_L001_R2.fastq.gz


STEP 2: Run the nf-core/sarek pipeline (launch_nf-core_sarek_trio.pbs)

Copy and paste the code below to the terminal:
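The snippet below is a sketch of those two lines; the source path of the PBS script and the run2 working directory under $HOME/workshop/sarek are assumptions for this training setup, so adjust them to your own locations:

# the source path of the script is hypothetical; adjust as needed
cp /path/to/launch_nf-core_sarek_trio.pbs $HOME/workshop/sarek/run2
cd $HOME/workshop/sarek/run2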

  • Line 1: copy the launch_nf-core_sarek_trio.pbs submission script to the working directory

  • Line 2: move to the working directory

View the content of the launch_nf-core_sarek_trio.pbs script:

#!/bin/bash -l
#PBS -N nfsarek_run2_trio
#PBS -l walltime=48:00:00
#PBS -l select=1:ncpus=1:mem=5gb

#move to the directory the job was submitted from
cd $PBS_O_WORKDIR

#limit the memory used by the Nextflow head job (exported so the nextflow launcher picks it up)
export NXF_OPTS='-Xms1g -Xmx4g'
module load java

#specify the nextflow version to use to run the workflow
export NXF_VER=23.10.1

#run the sarek pipeline
nextflow run nf-core/sarek \
        -r 3.3.2 \
        -profile singularity \
        --genome GATK.GRCh38 \
        --input samplesheet.csv \
        --wes \
        --outdir ./results \
        --step mapping \
        --tools haplotypecaller,snpeff,vep \
        --snpeff_cache /work/training/sarek/NXF_SINGULARITY_CACHEDIR/snpeff_cache \
        --vep_cache /work/training/sarek/NXF_SINGULARITY_CACHEDIR/vep_cache \
        -resume


Submitting the job

Once you have created the run folder, the samplesheet.csv file, and the launch_nf-core_sarek_trio.pbs script, you are ready to submit the job to the HPC scheduler:
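On a PBS Pro scheduler this is done with qsub:

qsub launch_nf-core_sarek_trio.pbs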

Monitoring the Run

Check the status of the jobs you are running with the scheduler; Nextflow will launch additional jobs during the run.
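For example, on PBS Pro you can list your own jobs with:

qstat -u $USER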

You can also check the .nextflow.log file for details on what is going on.
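For example, from the directory the pipeline was launched from:

tail .nextflow.log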

Once the pipeline has finished running, assess the results as follows:

NOTE: To proceed, you need to be on QUT’s WiFi network or signed in via VPN.

To browse the working folder on the HPC, type the network path into the file finder:

  • Windows PC:

  • Mac:

During execution of the workflow two output folders are generated:

  • work - the working directory, where all tasks are executed and intermediate results are written

  • results - where all final results for all stages of the pipeline are copied

Let’s browse the results of the pipeline: