“A spirit of openness is gaining traction in the science community.” (Gewin 2016)
“Most scientific disciplines are finding the data deluge to be extremely challenging, and tremendous opportunities can be realized if we can better organize and access the data.” (Science 11 Feb 2011)
Let’s say, hypothetically of course, you have a colleague who has a great script they made to generate figures and results for a paper:
See: Dr. Elias Oziolor’s markdown document
What would you tell your colleague?
In this lesson, you will become familiar with:
We will explore these options and tools by taking the following steps during this lesson:
fastqc and multiqc, and then runs both

Blogs are cool, and can be a way to share results with a wide audience when results are obtained, before publication!
Open data science and transparency are becoming common practice. Frequently, colleagues share pre-prints, code, and data before the peer-reviewed paper is released.
This advances everyone’s understanding of your cool science!
Here are some ways to share data:
The SRA and the ENA, along with several other international repositories, are the main worldwide archives for depositing raw NGS sequencing data. Most journals require an accession ID from one of these archives for publication.
Additional references:
Install:
conda install -y sra-tools
Let’s also create a folder in our home directory, so that we keep things organized:
cd ~
mkdir openScienceTutorial
cd openScienceTutorial
Download an example set of reads and extract the first 1,000 paired reads (SRA calls reads "spots"):
fastq-dump -X 1000 --split-files --defline-seq '@$ac.$si.$sg/$ri' --defline-qual '+' SRR1300523
fastq-dump -X 1000 --split-files --defline-seq '@$ac.$si.$sg/$ri' --defline-qual '+' SRR1300540
fastq-dump -X 1000 --split-files --defline-seq '@$ac.$si.$sg/$ri' --defline-qual '+' SRR1300380
fastq-dump -X 1000 --split-files --defline-seq '@$ac.$si.$sg/$ri' --defline-qual '+' SRR1300313
(The /1 and /2 suffixes that keep track of read pairs are not included by default; see this issue and blog post.)
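The /1 and /2 suffixes matter downstream because many tools match reads between the `_1` and `_2` files by identifier. A minimal sketch of such a pairing check in Python, assuming `fastq-dump`-style headers ending in `/1` and `/2` (the function names here are hypothetical, not part of any tool above):

```python
from itertools import islice

def read_ids(fastq_lines):
    """Yield each record's read ID (the header minus its /1 or /2 suffix).

    FASTQ records are 4 lines each, so headers are every 4th line.
    """
    for header in islice(fastq_lines, 0, None, 4):
        yield header.strip().rsplit("/", 1)[0]

def pairs_match(r1_lines, r2_lines):
    """True if the two files list the same read IDs in the same order.

    Note: zip() stops at the shorter file, so a length check may also be wanted.
    """
    return all(a == b for a, b in zip(read_ids(r1_lines), read_ids(r2_lines)))
```

For example, `pairs_match(open("SRR1300523_1.fastq"), open("SRR1300523_2.fastq"))` should return `True` for the files downloaded above.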
Don’t do this now! If you want the full set of reads (it will take >5 min):
fastq-dump SRR390728
Don’t do this now, either! This is the same as doing it in two steps (will take > 5 min):
wget ftp://ftp-trace.ncbi.nih.gov/sra/sra-instant/reads/ByRun/sra/SRR/SRR390/SRR390728/SRR390728.sra
fastq-dump SRR390728.sra
Advanced challenge for later (requires scripting in bash, Python, or R):
Use the SraRunInfo.csv from SRA for a large dataset, e.g. the 719 BioExperiments in the Marine Microbial Eukaryotic Transcriptome Sequencing Project (MMETSP).

These are some options for sharing data products, such as transcriptomes, genomes, and annotations.
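One way to start on this challenge in Python, as a sketch: read the RunInfo CSV and generate one `fastq-dump` command per accession. This assumes the file has SRA's standard `Run` column; the function name is mine, not from any tool above.

```python
import csv

def fastq_dump_commands(csv_path, max_spots=1000):
    """Return one fastq-dump command per run accession in an SRA RunInfo CSV."""
    commands = []
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):
            acc = row.get("Run", "").strip()
            if acc:  # skip rows without an accession
                commands.append(f"fastq-dump -X {max_spots} --split-files {acc}")
    return commands
```

You could then write the returned list to a file and run it with `bash`, or submit each line as a cluster job.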
The NCBI TSA (Transcriptome Shotgun Assembly Sequence Database) and Assembly are places to deposit data products, but you must be the owner of the original data. There is a submission process; read about it here:
There are several public websites available for sharing your data products; these are just a few options. You might know of other methods for sharing data. What are those?
Exercise (think pair share)
Is data sharing a good idea? What are some of the venues where you prefer to share data with others?
These are some options we have tried, some features of each and some considerations we feel are important.
Location | Funding | Interface, Download/Upload | Versioning | DOI | File size/number limits | Comments
--- | --- | --- | --- | --- | --- | ---
zenodo.org | CERN (very sustainable) | website, manual down/upload | Yes | Yes | 50 GB/repository | Interface with GitHub
figshare.com | Private | website, manual down/upload | Yes | Yes | Yes | Social media metrics
osf.io | Non-profit | website + commandline | No | Yes | 3 GB/file, unlimited files |
Examples of repositories:
Open Science Framework, operated by the non-profit COS (Center for Open Science). Repositories for projects, including files and data products. Like GitHub, but only for file sharing. 5 GB/file size limit. Okay for raw fastq NGS data, although not if files are >5 GB.
Features:
Workflow for OSF client:
Johnson, L., & Psomopoulos, F. E. (2018, July 11). DIBSI2018. Retrieved from osf.io/gweqv
pip install osfclient
export OSF_PASSWORD=
export OSF_USERNAME=
Clone the demo project (its project id is gweqv):
osf -p gweqv clone
mv gweqv/osfstorage/scripts/ .
mv gweqv/osfstorage/Nematostella_annotation_files/ .
rm -rf gweqv
For the following commands, replace gweqv with your own project id. (Upload privileges will only be enabled for those listed as “Contributors”.)

Upload one file:
osf -p gweqv upload SRR1300540_2.fastq reads/SRR1300540_2.fastq
cd ~/openScienceTutorial
mkdir reads
mv *.fastq reads
osf -p gweqv upload -r reads/ reads/
Upload file product (remember to substitute your own project code in the command):
osf -p gweqv upload Nematostella_annotation_files/trinity.nema.fasta.dammit.gff3 annotation_files/trinity.nema.fasta.dammit.gff3
Now that we have uploaded our input data (i.e. the fastq files from SRA and the dammit output), we can add our scripts to a GitHub repository so that the entire process is available and linked.
Specifically, we will be uploading the script that does the QC for the fastq files, as well as the jupyter notebook from dammit. Both files are also available here:
If all has been set up correctly, you should have the following structure in your openScienceTutorial
folder:
dibbears@js-16-204:~/openScienceTutorial$ ls -l
total 1032
drwxrwxr-x 2 dibbears dibbears 4096 Jul 12 11:45 Nematostella_annotation_files
-rw-rw-r-- 1 dibbears dibbears 128893 Jul 12 11:42 SRR1300313_1.fastq
-rw-rw-r-- 1 dibbears dibbears 128893 Jul 12 11:42 SRR1300313_2.fastq
-rw-rw-r-- 1 dibbears dibbears 128893 Jul 12 11:42 SRR1300380_1.fastq
-rw-rw-r-- 1 dibbears dibbears 128893 Jul 12 11:42 SRR1300380_2.fastq
-rw-rw-r-- 1 dibbears dibbears 128893 Jul 12 11:42 SRR1300523_1.fastq
-rw-rw-r-- 1 dibbears dibbears 128893 Jul 12 11:42 SRR1300523_2.fastq
-rw-rw-r-- 1 dibbears dibbears 128893 Jul 12 11:42 SRR1300540_1.fastq
-rw-rw-r-- 1 dibbears dibbears 128893 Jul 12 11:42 SRR1300540_2.fastq
drwxrwxr-x 2 dibbears dibbears 4096 Jul 12 11:45 scripts
Run the runQC.sh
script in the directory where you’ve downloaded the .fastq reads. (First, we’ll have to copy it into our working directory):
cp scripts/runQC.sh .
bash runQC.sh
Independent Challenge!
Use git to push the sh and ipynb files, along with the trinity.nema.fasta.dammit.gff3 file, to a GitHub repository. You can use the same process as listed here.
Copy the sh, ipynb, and trinity.nema.fasta.dammit.gff3 files into the directory:

git clone https://github.com/username/reponame.git
cd <reponame>
cp ~/openScienceTutorial/scripts/filename1 .
cp ~/openScienceTutorial/scripts/filename2 .
cp ~/openScienceTutorial/Nematostella_annotation_files/filename3 .
git add --all
git commit -m "initial commit"
git push origin master
(You can check your location and files with ls -lah or echo $(pwd).) Then, repeat the steps above to version control your code file:

git add --all
git commit -m "changed commit"
git push origin master
Read about how to do this here.
Read about why GitHub + Zenodo makes your life great!
Now, we can link our GitHub repository to Binder. This will create an interactive version of the jupyter notebook in your GitHub repository:
Challenge:
Using the link above, how would you connect your GitHub repository to Binder?
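As a hedged hint: Binder builds its environment from a dependency file at the root of your repository (e.g. `environment.yml` for conda or `requirements.txt` for pip). A minimal sketch of an `environment.yml` — the package names and versions here are assumptions, so adjust them to whatever your notebook actually imports:

```yaml
# environment.yml — hypothetical sketch; edit packages to match your notebook
name: openScienceTutorial
channels:
  - conda-forge
  - bioconda
dependencies:
  - python=3.9
  - jupyter
  - fastqc
  - multiqc
```

Commit this file, push it to GitHub, then paste your repository URL into the Binder launch form.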