Overview
Create a metadata “samplesheet.csv” for small RNAseq datasets.
Learn to use a “nextflow.config” file in the working directory to override Nextflow parameters (e.g., specify where to find the pipeline assets).
Learn how to prepare a PBS script to run expression profiling of small RNAs against microRNAs annotated in the miRBase reference database.
Preparing the pipeline inputs
The pipeline requires preparing the following files:
Metadata file (samplesheet.csv) that specifies the name of each sample and the location of its FASTQ file ('Read 1')
PBS Pro script (launch_nf-core_smallRNAseq_human.pbs) with instructions to run the pipeline
nextflow.config - revision 2.3.1 of the nf-core/smrnaseq pipeline may not be able to identify the location of the reference adapter sequences; thus, we will use a local nextflow.config file to tell Nextflow where to find the reference adapters needed to trim the raw small RNA-seq data (see the sketch below)
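A minimal local nextflow.config can be created in the run directory as sketched below; the parameter name (fastp_known_mirna_adapters) and the adapter FASTA path are assumptions for this training setup, so check nextflow run nf-core/smrnaseq --help and your local file locations before using it.

# Hypothetical sketch: write a local nextflow.config pointing the pipeline at the adapter reference
cat > nextflow.config <<'EOF'
params {
    // assumed parameter name and path -- adjust to your pipeline revision and filesystem
    fastp_known_mirna_adapters = '/work/training/2024/smallRNAseq/assets/known_adapters.fa'
}
EOF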
A. Create the metadata file (samplesheet.csv):
Change to the data folder directory:
cd $HOME/workshop/2024-2/session6_smallRNAseq/data/human_disease
Copy the bash script to the working folder
cp /work/training/2024/smallRNAseq/scripts/create_nf-core_smallRNAseq_samplesheet.sh $HOME/workshop/2024-2/session6_smallRNAseq/data/human_disease
Note: you could replace the destination path with “.”. A dot indicates the ‘current directory’ and will copy the file to the directory where you are currently located.
View the content of the script:
cat create_nf-core_smallRNAseq_samplesheet.sh
NOTE: modify ‘read1_extension’ as appropriate for your data, e.g., _1.fastq.gz, _R1_001.fastq.gz, or _R1.fq.gz.
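For instance, if your files end in _R1_001.fastq.gz, the call to fastq_dir_to_samplesheet.py inside the script (shown in full further down this page) would change roughly as sketched here; keep the other options as they appear in your copy of the script:

# Sketch: only the --read1_extension value changes to match your file names
./fastq_dir_to_samplesheet.py $DIR index.csv \
    --strandedness auto \
    --read1_extension _R1_001.fastq.gz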
Let’s generate the metadata file by running the following command:
sh create_nf-core_smallRNAseq_samplesheet.sh $HOME/workshop/2024-2/session6_smallRNAseq/data/human_disease
Check the newly created samplesheet.csv file:
cat samplesheet.csv
sample,fastq_1
ERR409878,/work/training/2024/smallRNAseq/data/human_disease/ERR409878.fastq.gz
ERR409879,/work/training/2024/smallRNAseq/data/human_disease/ERR409879.fastq.gz
ERR409880,/work/training/2024/smallRNAseq/data/human_disease/ERR409880.fastq.gz
ERR409881,/work/training/2024/smallRNAseq/data/human_disease/ERR409881.fastq.gz
ERR409882,/work/training/2024/smallRNAseq/data/human_disease/ERR409882.fastq.gz
ERR409883,/work/training/2024/smallRNAseq/data/human_disease/ERR409883.fastq.gz
ERR409884,/work/training/2024/smallRNAseq/data/human_disease/ERR409884.fastq.gz
ERR409885,/work/training/2024/smallRNAseq/data/human_disease/ERR409885.fastq.gz
ERR409886,/work/training/2024/smallRNAseq/data/human_disease/ERR409886.fastq.gz
ERR409887,/work/training/2024/smallRNAseq/data/human_disease/ERR409887.fastq.gz
ERR409888,/work/training/2024/smallRNAseq/data/human_disease/ERR409888.fastq.gz
ERR409889,/work/training/2024/smallRNAseq/data/human_disease/ERR409889.fastq.gz
ERR409890,/work/training/2024/smallRNAseq/data/human_disease/ERR409890.fastq.gz
ERR409891,/work/training/2024/smallRNAseq/data/human_disease/ERR409891.fastq.gz
ERR409892,/work/training/2024/smallRNAseq/data/human_disease/ERR409892.fastq.gz
ERR409893,/work/training/2024/smallRNAseq/data/human_disease/ERR409893.fastq.gz
ERR409894,/work/training/2024/smallRNAseq/data/human_disease/ERR409894.fastq.gz
ERR409895,/work/training/2024/smallRNAseq/data/human_disease/ERR409895.fastq.gz
ERR409896,/work/training/2024/smallRNAseq/data/human_disease/ERR409896.fastq.gz
ERR409897,/work/training/2024/smallRNAseq/data/human_disease/ERR409897.fastq.gz
ERR409898,/work/training/2024/smallRNAseq/data/human_disease/ERR409898.fastq.gz
ERR409899,/work/training/2024/smallRNAseq/data/human_disease/ERR409899.fastq.gz
ERR409900,/work/training/2024/smallRNAseq/data/human_disease/ERR409900.fastq.gz
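Optionally, a quick check that every FASTQ file listed in the samplesheet exists can prevent a failed run; this is a generic shell sketch rather than one of the workshop scripts:

# Report any FASTQ path in column 2 of samplesheet.csv that does not exist
tail -n +2 samplesheet.csv | cut -d',' -f2 | while read -r fq; do
    [ -f "$fq" ] || echo "Missing: $fq"
done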
B. Prepare PBS Pro script to run the nf-core/smrnaseq pipeline
Copy the PBS Pro script for running the full small RNAseq pipeline (launch_nf-core_smallRNAseq_human.pbs)
Copy and paste the code below to the terminal:
cp $HOME/workshop/small_RNAseq/data/human/samplesheet.csv $HOME/workshop/small_RNAseq/run2_smallRNAseq_human
cp $HOME/workshop/small_RNAseq/scripts/launch_nf-core_smallRNAseq_human.pbs $HOME/workshop/small_RNAseq/run2_smallRNAseq_human
cd $HOME/workshop/small_RNAseq/run2_smallRNAseq_human
Line 1: Copy the samplesheet.csv file to the working directory
Line 2: copy the launch_nf-core_smallRNAseq_human.pbs submission script to the working directory
Line 3: move to the working directory
View the content of the launch_nf-core_smallRNAseq_human.pbs script:
cat launch_nf-core_smallRNAseq_human.pbs
#!/bin/bash -l
#PBS -N nfsmallRNAseq
#PBS -l select=1:ncpus=2:mem=4gb
#PBS -l walltime=24:00:00
#PBS -m abe

#run the tasks in the current working directory
cd $PBS_O_WORKDIR

#load java and assign up to 4GB RAM memory for nextflow to use
module load java
export NXF_OPTS='-Xms1g -Xmx4g'

#run the small RNAseq pipeline
nextflow run nf-core/smrnaseq -r 2.1.0 \
    -profile singularity \
    --outdir results \
    --input samplesheet.csv \
    --genome GRCh38-local \
    --mirtrace_species hsa \
    --three_prime_adapter 'TGGAATTCTCGGGTGCCAAGG' \
    --fastp_min_length 18 \
    --fastp_max_length 30 \
    --hairpin /work/training/smallRNAseq/data/mirbase/hairpin.fa \
    --mature /work/training/smallRNAseq/data/mirbase/mature.fa \
    --mirna_gtf /work/training/smallRNAseq/data/mirbase/hsa.gff3 \
    -resume
Submit the job to the HPC cluster:
qsub launch_nf-core_smallRNAseq_human.pbs
Monitor the progress:
qjobs
The job will take several hours to run, hence we will use precomputed results for the statistical analysis in the next section.
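If qjobs is not available on your system, the standard PBS command qstat and the Nextflow log written in the launch directory provide the same information (generic commands, not workshop-specific scripts):

qstat -u $USER          # list your queued and running PBS jobs
tail -f .nextflow.log   # follow the Nextflow log in the launch directory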
STEP1: copy metadata (samplesheet.csv) into the working folder (run2_RNAseq)
cp $HOME/workshop/2024-2/session4_RNAseq/data/mouse/samplesheet.csv $HOME/workshop/2024-2/session4_RNAseq/runs/run2_RNAseq
cd $HOME/workshop/2024-2/session4_RNAseq/runs/run2_RNAseq
Line 1: Copy the samplesheet.csv file to the working directory
Line 2: move to the working directory
Copy the PBS Pro script to run the nf-core/rnaseq pipeline:
cp $HOME/workshop/2024-2/session4_RNAseq/scripts/launch_nf-core_RNAseq_pipeline.pbs $HOME/workshop/2024-2/session4_RNAseq/runs/run2_RNAseq
NOTE: if you had issues with the lines above, you can alternatively run the following code to copy the samplesheet.csv and launch files:
cp /work/training/2024/rnaseq/data/samplesheet.csv $HOME/workshop/2024-2/session4_RNAseq/runs/run2_RNAseq
cp /work/training/2024/rnaseq/scripts/launch_nf-core_RNAseq_pipeline.pbs $HOME/workshop/2024-2/session4_RNAseq/runs/run2_RNAseq
cd $HOME/workshop/2024-2/session4_RNAseq/runs/run2_RNAseq
Adjusting the Trim Galore (read trimming) options
Print the content of the launch_nf-core_RNAseq_pipeline.pbs script:
cat launch_nf-core_RNAseq_pipeline.pbs
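Read trimming is controlled by Trim Galore-related parameters on the nextflow run command inside this script. The sketch below shows how clipping options could be appended; the parameter names (--clip_r1, --clip_r2, --three_prime_clip_r1, --three_prime_clip_r2) follow nf-core/rnaseq conventions and the values are placeholders only, so confirm them with nextflow run nf-core/rnaseq --help for your pipeline version and keep the rest of the command as it appears in your launch script.

# Sketch: the nextflow run line from launch_nf-core_RNAseq_pipeline.pbs with example clipping options appended
nextflow run nf-core/rnaseq \
    -profile singularity \
    --input samplesheet.csv \
    --outdir results \
    --clip_r1 5 \
    --clip_r2 5 \
    --three_prime_clip_r1 3 \
    --three_prime_clip_r2 3 \
    -resume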
Submitting the job
qsub launch_nf-core_RNAseq_pipeline.pbs
Monitoring the Run
qjobs
Outputs
The pipeline will produce two folders: “work”, where all the processing is done, and “results”, where the pipeline's outputs can be found.
The quantification of the gene and transcript expressions can be found in the ‘star_salmon’ directory.
cd results/star_salmon
The following feature count tables are generated:
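For example, the merged quantification tables can be listed as follows; file names such as salmon.merged.gene_counts.tsv are typical of nf-core/rnaseq output but may vary between pipeline versions:

# List the merged Salmon count tables produced in the star_salmon directory
ls -l results/star_salmon/salmon.merged.*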
Copying data for hands-on exercises
Before we start using the HPC, let’s start an interactive session:
qsub -I -S /bin/bash -l walltime=10:00:00 -l select=1:ncpus=1:mem=4gb
Get a copy of the scripts to be used in this module
Use the terminal to log into the HPC and create a folder structure for running the nf-core/smrnaseq pipeline. For example:
mkdir -p $HOME/workshop/small_RNAseq/scripts
cp /work/training/smallRNAseq/scripts/* $HOME/workshop/small_RNAseq/scripts/
ls -l $HOME/workshop/small_RNAseq/scripts/
Line 1: The -p flag creates parent directories as required; thus, this single command creates $HOME/workshop/small_RNAseq and its /scripts/ subfolder.
Line 2: Copies all files (note the asterisk) from /work/training/smallRNAseq/scripts/ to the newly created folder $HOME/workshop/small_RNAseq/scripts/.
Line 3: Lists the files in the scripts folder.
Copy multiple subdirectories and files using rsync
mkdir -p $HOME/workshop/small_RNAseq/data/
rsync -rv /work/training/smallRNAseq/data/ $HOME/workshop/small_RNAseq/data/
Line 1: Creates the folder $HOME/workshop/small_RNAseq/data/.
Line 2: rsync copies all subfolders and files from the specified source folder to the selected destination folder; -r copies directories and files recursively, and -v prints verbose messages about the transferred files. An optional dry run is shown below.
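If you want to preview the transfer before copying anything, rsync's standard dry-run flag can be added (an optional extra, not part of the workshop scripts):

# -n (dry run) prints what would be transferred without copying any files
rsync -rvn /work/training/smallRNAseq/data/ $HOME/workshop/small_RNAseq/data/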
Create a folder for running the nf-core small RNA-seq pipeline
Let’s create the run folders for the nf-core/smrnaseq exercises:
mkdir -p $HOME/workshop/small_RNAseq
mkdir $HOME/workshop/small_RNAseq/run1_test
mkdir $HOME/workshop/small_RNAseq/run2_smallRNAseq_human
cd $HOME/workshop/small_RNAseq/
Lines 1-3: create the parent folder and a sub-folder for each exercise
Line 4: change directory to the folder “small_RNAseq”
Exercise 1: Running a test with nf-core sample data
First, let’s assess the execution of the nf-core/smrnaseq pipeline by running a test using sample data.
Copy the launch_nf-core_smallRNAseq_test.pbs script to the working directory:
cd $HOME/workshop/small_RNAseq/run1_test
cp $HOME/workshop/small_RNAseq/scripts/launch_nf-core_smallRNAseq_test.pbs .
View the content of the script as follows:
cat launch_nf-core_smallRNAseq_test.pbs
#!/bin/bash -l
#PBS -N nfsmrnaseq
#PBS -l select=1:ncpus=2:mem=4gb
#PBS -l walltime=24:00:00

#work on current directory (folder)
cd $PBS_O_WORKDIR

#load java and set up memory settings to run nextflow
module load java
export NXF_OPTS='-Xms1g -Xmx4g'

# run the test
nextflow run nf-core/smrnaseq -profile test,singularity --outdir results -r 2.1.0
where:
nextflow command: nextflow run
pipeline name: nf-core/smrnaseq
pipeline version: -r 2.1.0
container type and sample data: -profile test,singularity
output directory: --outdir results
Submitting the job
Now we can submit the small RNAseq test job to the HPC scheduler:
qsub launch_nf-core_smallRNAseq_test.pbs
Monitoring the Run
qjobs
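Once the test completes, a quick look at the output folder confirms the run finished; the pipeline_info subfolder with execution reports is produced by all nf-core pipelines, while the other subfolders vary by version.

ls -l results/                 # top-level pipeline outputs
ls -l results/pipeline_info/   # Nextflow execution reports and timelines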
Exercise 2: Running the small RNA pipeline using public human data
The pipeline requires preparing at least 2 files:
Metadata file (samplesheet.csv) that specifies the “sample name” and “location of FASTQ files” ('Read 1').
PBS Pro script (launch_nf-core_smallRNAseq_human.pbs) with instructions to run the pipeline
Create the metadata file (samplesheet.csv):
Change to the data folder directory:
cd $HOME/workshop/small_RNAseq/data/human
pwd
Copy the bash script to the working folder
cp /work/training/smallRNAseq/scripts/create_nf-core_smallRNAseq_samplesheet.sh $HOME/workshop/small_RNAseq/data/human
Note: you could replace the destination path with “.”. A dot indicates the ‘current directory’ and will copy the file to the directory where you are currently located.
View the content of the script:
cat create_nf-core_smallRNAseq_samplesheet.sh
#!/bin/bash -l

#User defined variables.
##########################################################
DIR="$HOME/workshop/small_RNAseq/data/human"
INDEX='samplesheet.csv'
##########################################################

#load python module
module load python/3.10.8-gcccore-12.2.0

#fetch the script to create the sample metadata table
wget -L https://raw.githubusercontent.com/nf-core/rnaseq/master/bin/fastq_dir_to_samplesheet.py
chmod +x fastq_dir_to_samplesheet.py

#generate initial sample metadata file
./fastq_dir_to_samplesheet.py $DIR index.csv \
    --strandedness auto \
    --read1_extension .fastq.gz

#format index file
cat index.csv | awk -F "," '{print $1 "," $2}' > ${INDEX}

#Remove intermediate files:
rm index.csv fastq_dir_to_samplesheet.py
Let’s generate the metadata file by running the following command:
sh create_nf-core_smallRNAseq_samplesheet.sh
Check the newly created samplesheet.csv file:
ls -l
cat samplesheet.csv
sample,fastq_1
SRR20753704,/work/training/smallRNAseq/data/SRR20753704.fastq.gz
SRR20753705,/work/training/smallRNAseq/data/SRR20753705.fastq.gz
SRR20753706,/work/training/smallRNAseq/data/SRR20753706.fastq.gz
SRR20753707,/work/training/smallRNAseq/data/SRR20753707.fastq.gz
SRR20753708,/work/training/smallRNAseq/data/SRR20753708.fastq.gz
SRR20753709,/work/training/smallRNAseq/data/SRR20753709.fastq.gz
SRR20753716,/work/training/smallRNAseq/data/SRR20753716.fastq.gz
SRR20753717,/work/training/smallRNAseq/data/SRR20753717.fastq.gz
SRR20753718,/work/training/smallRNAseq/data/SRR20753718.fastq.gz
SRR20753719,/work/training/smallRNAseq/data/SRR20753719.fastq.gz
SRR20753720,/work/training/smallRNAseq/data/SRR20753720.fastq.gz
SRR20753721,/work/training/smallRNAseq/data/SRR20753721.fastq.gz
Copy the PBS Pro script for running the full small RNAseq pipeline (launch_nf-core_smallRNAseq_human.pbs)
Copy and paste the code below to the terminal:
cp $HOME/workshop/small_RNAseq/data/human/samplesheet.csv $HOME/workshop/small_RNAseq/run2_smallRNAseq_human
cp $HOME/workshop/small_RNAseq/scripts/launch_nf-core_smallRNAseq_human.pbs $HOME/workshop/small_RNAseq/run2_smallRNAseq_human
cd $HOME/workshop/small_RNAseq/run2_smallRNAseq_human
Line 1: Copy the samplesheet.csv file to the working directory
Line 2: copy the launch_nf-core_smallRNAseq_human.pbs submission script to the working directory
Line 3: move to the working directory
View the content of the launch_nf-core_smallRNAseq_human.pbs script:
cat launch_nf-core_smallRNAseq_human.pbs
#!/bin/bash -l
#PBS -N nfsmallRNAseq
#PBS -l select=1:ncpus=2:mem=4gb
#PBS -l walltime=24:00:00
#PBS -m abe

#run the tasks in the current working directory
cd $PBS_O_WORKDIR

#load java and assign up to 4GB RAM memory for nextflow to use
module load java
export NXF_OPTS='-Xms1g -Xmx4g'

#run the small RNAseq pipeline
nextflow run nf-core/smrnaseq -r 2.1.0 \
    -profile singularity \
    --outdir results \
    --input samplesheet.csv \
    --genome GRCh38-local \
    --mirtrace_species hsa \
    --three_prime_adapter 'TGGAATTCTCGGGTGCCAAGG' \
    --fastp_min_length 18 \
    --fastp_max_length 30 \
    --hairpin /work/training/smallRNAseq/data/mirbase/hairpin.fa \
    --mature /work/training/smallRNAseq/data/mirbase/mature.fa \
    --mirna_gtf /work/training/smallRNAseq/data/mirbase/hsa.gff3 \
    -resume
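Before submitting, it can be worth confirming that the miRBase reference files referenced in the script are readable from your account (an optional check using the paths shown above):

ls -l /work/training/smallRNAseq/data/mirbase/hairpin.fa \
      /work/training/smallRNAseq/data/mirbase/mature.fa \
      /work/training/smallRNAseq/data/mirbase/hsa.gff3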
Submit the job to the HPC cluster:
qsub launch_nf-core_smallRNAseq_human.pbs
Monitor the progress:
qjobs
The job will take several hours to run, hence we will use precomputed results for the statistical analysis in the next section.
Precomputed results:
We have already run the small RNA-seq samples; the results can be found at:
/work/training/smallRNAseq/runs/run2_smallRNAseq_human
The results of the miRNA profiling can be found in the folder called “edger”:
results/
├── edger
├── fastp
├── fastqc
├── genome
├── index
├── mirdeep
├── mirdeep2
├── mirtop
├── mirtrace
├── multiqc
├── pipeline_info
├── samtools
└── unmapped
Inside the “edger” folder, find the “mature_counts.csv” file:
hairpin_counts.csv
hairpin_CPM_heatmap.pdf
hairpin_edgeR_MDS_distance_matrix.txt
hairpin_edgeR_MDS_plot_coordinates.txt
hairpin_edgeR_MDS_plot.pdf
hairpin_log2CPM_sample_distances_dendrogram.pdf
hairpin_log2CPM_sample_distances_heatmap.pdf
hairpin_log2CPM_sample_distances.txt
hairpin_logtpm.csv
hairpin_logtpm.txt
hairpin_normalized_CPM.txt
hairpin_unmapped_read_counts.txt
mature_counts.csv   <-- we will use this file for the statistical analysis in the next section
mature_counts.txt
mature_CPM_heatmap.pdf
mature_edgeR_MDS_distance_matrix.txt
mature_edgeR_MDS_plot_coordinates.txt
mature_edgeR_MDS_plot.pdf
mature_log2CPM_sample_distances_dendrogram.pdf
mature_log2CPM_sample_distances_heatmap.pdf
mature_log2CPM_sample_distances.txt
mature_logtpm.csv
mature_logtpm.txt
mature_normalized_CPM.txt
mature_unmapped_read_counts.txt
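To take a quick look at the mature count table before moving on (the path combines the precomputed-results folder above with the edger subfolder):

# Show the first few rows and columns of the mature miRNA count table
head -n 5 /work/training/smallRNAseq/runs/run2_smallRNAseq_human/results/edger/mature_counts.csv | cut -d',' -f1-4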
Note: the “mature_counts.csv” file needs to be transposed prior to running the statistical analysis. This can be done either with the R script or with a script called “transpose_csv.py”.
Let’s initially create a “DESeq2” folder and copy the files needed for the statistical analysis:
mkdir -p $HOME/workshop/small_RNAseq/DESeq2
cp $HOME/workshop/small_RNAseq/scripts/transpose_csv.py $HOME/workshop/small_RNAseq/DESeq2
cp $HOME/workshop/small_RNAseq/data/human/metadata_microRNA.txt $HOME/workshop/small_RNAseq/DESeq2
cp /work/training/smallRNAseq/runs/deprecated/run2_smallRNAseq_human/results/edger/mature_counts.csv $HOME/workshop/small_RNAseq/DESeq2
cd $HOME/workshop/small_RNAseq/DESeq2
To transpose the initial “mature_counts.csv” file, do the following:
python transpose_csv.py --input mature_counts.csv --out mature_counts.txt
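If transpose_csv.py is not at hand, an equivalent transpose can be sketched with standard awk; this is a generic snippet rather than the workshop script, and it assumes a plain comma-separated file with no quoted fields:

# Transpose rows and columns of a simple CSV (no embedded commas or quotes)
awk -F',' '
{
    for (i = 1; i <= NF; i++) cell[i, NR] = $i
    ncols = (NF > ncols) ? NF : ncols
    nrows = NR
}
END {
    for (i = 1; i <= ncols; i++) {
        row = cell[i, 1]
        for (j = 2; j <= nrows; j++) row = row "," cell[i, j]
        print row
    }
}' mature_counts.csv > mature_counts.txt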