{"metadata":{"image":[],"title":"","description":""},"api":{"url":"","auth":"required","settings":"","results":{"codes":[]},"params":[]},"next":{"description":"","pages":[]},"title":"Data Cruncher environments and libraries","type":"basic","slug":"about-libraries-in-a-data-cruncher-analysis","excerpt":"","body":"At the moment, Data Cruncher offers a set of predefined libraries curated by Seven Bridges bioinformaticians, which are automatically available every time an analysis is started. The list of available libraries depends on the _environment_ you are using (**JupyterLab** or **RStudio**) and the selected _environment setup_ (set of preinstalled libraries that is available each time an analysis is started). Both of these settings are selected in the analysis creation wizard and cannot be changed once the analysis has been created.\n\n## JupyterLab\n\nDepending on the purpose and objective of your JupyterLab analysis, you can select an environment setup that you find most suitable for the given analysis. The following table shows the available JupyterLab environment setups and some details about available tools and libraries in each of them:\n\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Environment setup\",\n    \"h-1\": \"Details\",\n    \"1-0\": \"**SB Data Science - Python 3.6, R 3.4** (legacy)\",\n    \"1-1\": \"This environment setup contains **Python version 3.6.3**, **R version 3.4.1** and **Julia 0.6.2**. The setup also includes libraries that are available in [datascience-notebook](https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook), with the addition of the following libraries:\\n\\n*Python2 \\\\ Python3:* **path.py**, **biopython**, **pymongo**, **cytoolz**, **pysam**, **pyvcf**, **ipywidgets**, **beautifulsoup4**, **cigar**, **bioservices**, **intervaltree**, **appdirs**, **cssselect**, **bokeh**, **scikit-allel**, **cairo**, **lxml**, **cairosvg**, **rpy2**\\n\\n*R:* **r-ggfortify**, **r**, **r-stringi**, **r-pheatmap**, **r-gplots**, **bioconductor-ballgown**, **bioconductor-deseq2**, **bioconductor-metagenomeseq**, **bioconductor-biomformat**, **bioconductor-biocinstaller**, **r-xml**\",\n    \"3-0\": \"**SB Machine Learning - TensorFlow 2.0, Python 3.7**\",\n    \"3-1\": \"This environment setup is optimized for machine learning and *execution on GPU instances*. It is based on the **jupyter/tensorflow-notebook** image (**jupyter/scipy-notebook** that includes popular packages from the scientific Python ecosystem, with the addition of popular Python deep learning libraries). Learn more about [available libraries](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html#jupyter-tensorflow-notebook).\",\n    \"0-0\": \"**SB Data Science - Python 3.9, R 4.1** (default)\",\n    \"0-1\": \"This environment setup contains **Python version 3.9**, **R version 4.1** and **Julia 1.6.2**. The setup also includes libraries that are available in [datascience-notebook](https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook), with the addition of the **tabix** library.\",\n    \"2-0\": \"**SB Data Science - Spark 3.1.2, Python 3.9** (beta)\\n\\n[Spark initialization and loading of Parquet/VCF files](#spark-parquet)\",\n    \"2-1\": \"This environment setup contains Python version 3.9, Spark version 3.1.2. 
The setup also includes libraries that are available in [allspark-notebook](https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook), with the addition of **glow** (version 1.1.2), **tabix**, **hail** and **bkzep** libraries.\\n\\n**To initialize Spark and learn how to load Parquet or VCF files, follow the instructions** [below](#spark-parquet).\\n\\nAn analysis using this environment will initialize a six-instance cluster with the following configuration:\\n* A master `m5.xlarge` instance with 1000 GB of storage space\\n* Five `m5.4xlarge` slave instances with 1000 GB of storage space each\\n\\nNote that this cluster of 6 instances counts towards the total parallel instance limit that applies to your account. For an analysis that uses this environment to start without delays, you need to be able to initialize 6 more instances in parallel, before reaching the parallel instance limit for your account. If that is not the case, the analysis environment will be in the initialization state until it is able to start all 6 required instances.\"\n  },\n  \"cols\": 2,\n  \"rows\": 4\n}\n[/block]\nAll available environment setups also contain **sevenbridges-python** and **sevenbridges-r** API libraries, as well as **htop** and **openvpn** as general-purpose tools. The libraries are installed using **conda**, as JupyterLab supports multiple programming languages and **conda** is a language-agnostic package manager. You can also install libraries directly from the notebook and use them during the execution of your analysis. For optimal performance and avoidance of potential conflicts, we recommend using **conda** when installing libraries within your analyses. However, unlike default libraries, libraries installed in that way will not be automatically available next time the analysis is started.\n\n<a name=\"spark-parquet\"></a>\n\n### Spark initialization and loading of Parquet/VCF files for the SB Data Science - Spark 3.1.2, Python 3.9 environment setup\n\nTo initialize Spark in the **Spark 3.1.2, Python 3.9** environment, use the following code:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"from pyspark.sql import SparkSession\\nspark = SparkSession \\\\\\n    .builder \\\\\\n    .appName(\\\"PythonPi\\\") \\\\\\n    .config(\\\"spark.jars.packages\\\", \\\"io.projectglow:glow-spark3_2.12:1.1.2\\\") \\\\\\n    .config(\\\"spark.hadoop.io.compression.codecs\\\", \\\"io.projectglow.sql.util.BGZFCodec\\\") \\\\\\n    .getOrCreate()\\nspark = glow.register(spark)\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]\nWhen loading Parquet or VCF files, use the following pattern:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"df = spark.read.parquet('/path/to/example.parquet')\\ndf_vcf = spark.read.format('vcf').load('/path/to/file.vcf')\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]\n## RStudio\n\nIf you select RStudio as the analysis environment, you can also select one of the available environment setups depending on the purpose of your analysis. This will help you optimize analysis setup and time to getting a fully-functional environment that suits your needs by having the needed libraries preinstalled in the selected environment setup. 
Here are the available options:\n\n[block:parameters]\n{\n  \"data\": {\n    \"h-0\": \"Environment setup\",\n    \"h-1\": \"Details\",\n    \"3-0\": \"**SB Bioinformatics - R 3.6**\",\n    \"3-1\": \"This environment setup is based on the **rstudio/verse** image from [The Rocker Project](https://www.rocker-project.org/) and contains **tidyverse**, **devtools**, tex and publishing-related packages. For more information about the image, please see [its Docker Hub repository](https://hub.docker.com/r/rocker/verse). \\n\\nHere is a list of libraries that are installed by default:\\n\\n*CRAN* - **BiocManager**, **ggfortify**, **pheatmap**, **gplots**\\n\\n*Bioconductor* - **ballgown**, **DESeq2**, **metagenomeSeq**, **biomformat**, **BiocInstaller**\",\n    \"4-0\": \"**SB Machine Learning - TensorFlow 1.13, R 3.6**\",\n    \"4-1\": \"This environment setup is optimized for machine learning and *execution on GPU instances*. It is based on the **rocker/ml-gpu** image that is intended for machine learning and GPU-based computation in R. [Learn more](https://hub.docker.com/r/rocker/ml-gpu).\",\n    \"2-0\": \"**SB Bioinformatics - R 4.0**\",\n    \"2-1\": \"This environment setup is based on the official Bioconductor image **bioconductor_docker:RELEASE_3_11** which is built on top of **rockerdev/rstudio:4.0.0-ubuntu18.04**. For more information about the image, please see [its Docker Hub repository](https://hub.docker.com/r/bioconductor/bioconductor_docker). \\n\\nHere is a list of libraries that are installed by default:\\n\\n*CRAN* -  **BiocManager**, **devtools**, **doSNOW**, **ggfortify**, **gplots**, **pheatmap**, **Seurat**, **tidyverse**\\n\\n*Bioconductor* - **AnnotationDbi**, **arrayQualityMetrics**, **ballgown**, **Biobase**, **BiocParallel**, **biomaRt**, **biomformat**, **Biostrings**, **DelayedArray**, **DESeq2**, **edgeR**, **genefilter**, **GenomeInfoDb**, **GenomicAlignments**, **GenomicFeatures**, **GenomicRanges**, **GEOquery**, **IRanges**, **limma**, **metagenomeSeq**, **oligo**, **Rsamtools**, **rtracklayer**, **SummarizedExperiment**, **XVector**\",\n    \"1-1\": \"This environment setup is based on the official Bioconductor image **bioconductor/bioconductor_docker:RELEASE_3_13** which is built on top of **rocker/rstudio:4.1.0**. For more information about the image, please see [its Docker Hub repository](https://hub.docker.com/layers/bioconductor/bioconductor_docker/RELEASE_3_13/images/sha256-bb7b946225375891fe3da2fb796464ef6a83f2567e9ce33255d99c1dea415479?context=explore).\\n\\nHere is a list of libraries that are installed by default:\\n\\n*CRAN* -  **BiocManager**, **devtools**, **doSNOW**, **ggfortify**, **gplots**, **pheatmap**, **Seurat**, **tidyverse**\\n\\n*Bioconductor* - **AnnotationDbi**, **AnnotationHub**, **arrayQualityMetrics**, **ballgown**, **Biobase**, **BiocParallel**, **biomaRt**, **biomformat**, **Biostrings**, **DelayedArray**, **DESeq2**, **edgeR**, **genefilter**, **GenomeInfoDb**, **GenomicAlignments**, **GenomicFeatures**, **GenomicRanges**, **GEOquery**, **IRanges**, **limma**, **metagenomeSeq**, **oligo**, **Rsamtools**, **rtracklayer**, **sevenbridges**, **SummarizedExperiment**, **XVector**\",\n    \"1-0\": \"**SB Bioinformatics - R 4.1 - BioC 3.13**\",\n    \"0-1\": \"This environment setup is based on the official Bioconductor image **bioconductor/bioconductor_docker:RELEASE_3_14**. 
For more information about the image, please see [its Docker Hub repository](https://hub.docker.com/layers/bioconductor/bioconductor_docker/RELEASE_3_14/images/sha256-f58b7b38d157ef96642f72e3abd555e95c9626d4c74ceecf79a2998bd94f6589?context=explore).\\n\\nHere is a list of libraries that are installed by default:\\n\\n*CRAN* -  **BiocManager**, **devtools**, **doSNOW**, **ggfortify**, **gplots**, **pheatmap**, **Seurat**, **tidyverse**\\n\\n*Bioconductor* - **AnnotationDbi**, **AnnotationHub**, **arrayQualityMetrics**, **ballgown**, **Biobase**, **BiocParallel**, **biomaRt**, **biomformat**, **Biostrings**, **DelayedArray**, **DESeq2**, **edgeR**, **genefilter**, **GenomeInfoDb**, **GenomicAlignments**, **GenomicFeatures**, **GenomicRanges**, **GEOquery**, **IRanges**, **limma**, **metagenomeSeq**, **oligo**, **Rsamtools**, **rtracklayer**, **sevenbridges**, **SummarizedExperiment**, **XVector**\",\n    \"0-0\": \"**SB Bioinformatics - R 4.1 - BioC 3.14** *(default)*\"\n  },\n  \"cols\": 2,\n  \"rows\": 5\n}\n[/block]\nAll available environment setups also contain the [sevenbridges-r](https://github.com/sbg/sevenbridges-r) API library, as well as **htop** and **openvpn** as general-purpose tools.","updates":["610926ccd6bf650016a8abd0","610946aecae20f0041c658c2"],"order":8,"isReference":false,"hidden":false,"sync_unique":"","link_url":"","link_external":false,"_id":"594cee43c804570021d22185","project":"5773dcfc255e820e00e1cd4d","version":{"version":"1.0","version_clean":"1.0.0","codename":"","is_stable":true,"is_beta":false,"is_hidden":false,"is_deprecated":false,"categories":["5773dcfc255e820e00e1cd51","5773df36904b0c0e00ef05ff","577baf92451b1e0e006075ac","577bb183b7ee4a0e007c4e8d","577ce77a1cf3cb0e0048e5ea","577d11865fd4de0e00cc3dab","578e62792c3c790e00937597","578f4fd98335ca0e006d5c84","578f5e5c3d04570e00976ebb","57bc35f7531e000e0075d118","57f801b3760f3a1700219ebb","5804d55d1642890f00803623","581c8d55c0dc651900aa9350","589dcf8ba8c63b3b00c3704f","594cebadd8a2f7001b0b53b2","59a562f46a5d8c00238e309a","5a2aa096e25025003c582b58","5a2e79566c771d003ca0acd4","5a3a5166142db90026f24007","5a3a52b5bcc254001c4bf152","5a3a574a2be213002675c6d2","5a3a66bb2be213002675cb73","5a3a6e4854faf60030b63159","5c8a68278e883901341de571","5cb9971e57bf020024523c7b","5cbf1683e2a36d01d5012ecd","5dc15666a4f788004c5fd7d7","5eaff69e844d67003642a020","5eb00899b36ba5002d35b0c1","5eb0172be179b70073dc936e","5eb01b42b36ba5002d35ebba","5eb01f202654a20136813093","5eb918ef149186021c9a76c8","5f0839d3f4b24e005ebbbc29","5f893e508c9862002d0614a9","6024033e2b2f6f004dfe994c","60a7a12f9a06c70052b7c4db","60a7ab97266a4700161507c4","60b0c84babba720010a8b0b5"],"_id":"5773dcfc255e820e00e1cd50","__v":39,"createdAt":"2016-06-29T14:36:44.812Z","releaseDate":"2016-06-29T14:36:44.812Z","project":"5773dcfc255e820e00e1cd4d"},"category":{"sync":{"isSync":false,"url":""},"pages":[],"title":"Data Cruncher","slug":"data-cruncher","order":32,"from_sync":false,"reference":false,"_id":"594cebadd8a2f7001b0b53b2","project":"5773dcfc255e820e00e1cd4d","version":"5773dcfc255e820e00e1cd50","createdAt":"2017-06-23T10:21:33.309Z","__v":0},"user":"575e85ac41c8ba0e00259a44","createdAt":"2017-06-23T10:32:35.963Z","githubsync":"","__v":2,"parentDoc":null}

# Data Cruncher environments and libraries


Data Cruncher currently offers a set of predefined libraries, curated by Seven Bridges bioinformaticians, that are automatically available every time an analysis is started. The list of available libraries depends on the _environment_ you are using (**JupyterLab** or **RStudio**) and on the selected _environment setup_ (the set of preinstalled libraries that is available each time an analysis is started). Both settings are selected in the analysis creation wizard and cannot be changed once the analysis has been created.

## JupyterLab

Depending on the purpose of your JupyterLab analysis, select the environment setup that best suits it. The following table lists the available JupyterLab environment setups and the tools and libraries included in each:

| Environment setup | Details |
| --- | --- |
| **SB Data Science - Python 3.9, R 4.1** (default) | Contains **Python 3.9**, **R 4.1**, and **Julia 1.6.2**. Also includes the libraries available in [datascience-notebook](https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook), with the addition of the **tabix** library. |
| **SB Data Science - Python 3.6, R 3.4** (legacy) | Contains **Python 3.6.3**, **R 3.4.1**, and **Julia 0.6.2**. Also includes the libraries available in [datascience-notebook](https://github.com/jupyter/docker-stacks/tree/master/datascience-notebook), with the addition of the following libraries:<br><br>*Python 2 / Python 3:* **path.py**, **biopython**, **pymongo**, **cytoolz**, **pysam**, **pyvcf**, **ipywidgets**, **beautifulsoup4**, **cigar**, **bioservices**, **intervaltree**, **appdirs**, **cssselect**, **bokeh**, **scikit-allel**, **cairo**, **lxml**, **cairosvg**, **rpy2**<br><br>*R:* **r-ggfortify**, **r**, **r-stringi**, **r-pheatmap**, **r-gplots**, **bioconductor-ballgown**, **bioconductor-deseq2**, **bioconductor-metagenomeseq**, **bioconductor-biomformat**, **bioconductor-biocinstaller**, **r-xml** |
| **SB Data Science - Spark 3.1.2, Python 3.9** (beta)<br><br>[Spark initialization and loading of Parquet/VCF files](#spark-parquet) | Contains **Python 3.9** and **Spark 3.1.2**. Also includes the libraries available in [allspark-notebook](https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook), with the addition of the **glow** (version 1.1.2), **tabix**, **hail**, and **bkzep** libraries.<br><br>**To initialize Spark and learn how to load Parquet or VCF files, follow the instructions** [below](#spark-parquet).<br><br>An analysis using this environment initializes a six-instance cluster with the following configuration:<br>• a master `m5.xlarge` instance with 1000 GB of storage space<br>• five `m5.4xlarge` worker instances with 1000 GB of storage space each<br><br>Note that this cluster of six instances counts towards the parallel instance limit for your account. For an analysis using this environment to start without delays, you must be able to launch six additional instances before reaching that limit; otherwise, the analysis environment remains in the initialization state until all six required instances can be started. |
| **SB Machine Learning - TensorFlow 2.0, Python 3.7** | Optimized for machine learning and *execution on GPU instances*. Based on the **jupyter/tensorflow-notebook** image (**jupyter/scipy-notebook**, which includes popular packages from the scientific Python ecosystem, with the addition of popular Python deep learning libraries). Learn more about the [available libraries](https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html#jupyter-tensorflow-notebook). |

All available environment setups also contain the **sevenbridges-python** and **sevenbridges-r** API libraries, as well as **htop** and **openvpn** as general-purpose tools. The preinstalled libraries are installed using **conda**, because JupyterLab supports multiple programming languages and **conda** is a language-agnostic package manager. You can also install libraries directly from the notebook and use them during the execution of your analysis; for optimal performance and to avoid potential conflicts, we recommend using **conda** for such installations. However, unlike the default libraries, libraries installed this way will not be automatically available the next time the analysis is started.
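For example, a notebook cell like the following installs an extra library with **conda** for the current session. This is only a sketch: the **bioconda** channel and the **samtools** package are illustrative placeholders, not part of any environment setup, so substitute whatever library your analysis actually needs.

```python
# Install an additional library with conda from a JupyterLab notebook cell.
# The %conda magic targets the kernel's active conda environment.
# The channel and package below are illustrative examples only.
%conda install --yes --channel bioconda samtools

# The same can be done through the shell:
# !conda install --yes --channel bioconda samtools
```

Keep in mind that anything installed this way is available only in the current session and has to be reinstalled after the analysis is restarted.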
<a name="spark-parquet"></a>

### Spark initialization and loading of Parquet/VCF files for the SB Data Science - Spark 3.1.2, Python 3.9 environment setup

To initialize Spark in the **Spark 3.1.2, Python 3.9** environment, use the following code:

```python
import glow
from pyspark.sql import SparkSession

# Create a Spark session with the Glow package and the BGZF compression codec enabled.
spark = SparkSession \
    .builder \
    .appName("PythonPi") \
    .config("spark.jars.packages", "io.projectglow:glow-spark3_2.12:1.1.2") \
    .config("spark.hadoop.io.compression.codecs", "io.projectglow.sql.util.BGZFCodec") \
    .getOrCreate()

# Register Glow so that Spark can read genomic formats such as VCF.
spark = glow.register(spark)
```

When loading Parquet or VCF files, use the following pattern:

```python
# Read a Parquet dataset and a VCF file into Spark DataFrames.
df = spark.read.parquet('/path/to/example.parquet')
df_vcf = spark.read.format('vcf').load('/path/to/file.vcf')
```
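As a quick sanity check before running heavier queries, you can inspect the loaded DataFrames. The snippet below is a minimal sketch that reuses the `df` and `df_vcf` DataFrames from the example above; the selected columns assume the standard schema produced by Glow's VCF reader.

```python
# Inspect the schema and size of the Parquet DataFrame.
df.printSchema()
print(df.count())

# Preview a few variant records from the VCF DataFrame.
# contigName, start, referenceAllele, and alternateAlleles are
# standard columns produced by Glow's VCF reader.
df_vcf.select("contigName", "start", "referenceAllele", "alternateAlleles") \
      .show(5, truncate=False)
```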
## RStudio

If you select RStudio as the analysis environment, you can likewise choose one of the available environment setups, depending on the purpose of your analysis. Because the relevant libraries come preinstalled, this shortens setup time and gets you to a fully functional environment that suits your needs. Here are the available options:

| Environment setup | Details |
| --- | --- |
| **SB Bioinformatics - R 4.1 - BioC 3.14** *(default)* | Based on the official Bioconductor image **bioconductor/bioconductor_docker:RELEASE_3_14**. For more information about the image, see [its Docker Hub repository](https://hub.docker.com/layers/bioconductor/bioconductor_docker/RELEASE_3_14/images/sha256-f58b7b38d157ef96642f72e3abd555e95c9626d4c74ceecf79a2998bd94f6589?context=explore).<br><br>Libraries installed by default:<br><br>*CRAN* - **BiocManager**, **devtools**, **doSNOW**, **ggfortify**, **gplots**, **pheatmap**, **Seurat**, **tidyverse**<br><br>*Bioconductor* - **AnnotationDbi**, **AnnotationHub**, **arrayQualityMetrics**, **ballgown**, **Biobase**, **BiocParallel**, **biomaRt**, **biomformat**, **Biostrings**, **DelayedArray**, **DESeq2**, **edgeR**, **genefilter**, **GenomeInfoDb**, **GenomicAlignments**, **GenomicFeatures**, **GenomicRanges**, **GEOquery**, **IRanges**, **limma**, **metagenomeSeq**, **oligo**, **Rsamtools**, **rtracklayer**, **sevenbridges**, **SummarizedExperiment**, **XVector** |
| **SB Bioinformatics - R 4.1 - BioC 3.13** | Based on the official Bioconductor image **bioconductor/bioconductor_docker:RELEASE_3_13**, which is built on top of **rocker/rstudio:4.1.0**. For more information about the image, see [its Docker Hub repository](https://hub.docker.com/layers/bioconductor/bioconductor_docker/RELEASE_3_13/images/sha256-bb7b946225375891fe3da2fb796464ef6a83f2567e9ce33255d99c1dea415479?context=explore).<br><br>Libraries installed by default:<br><br>*CRAN* - **BiocManager**, **devtools**, **doSNOW**, **ggfortify**, **gplots**, **pheatmap**, **Seurat**, **tidyverse**<br><br>*Bioconductor* - **AnnotationDbi**, **AnnotationHub**, **arrayQualityMetrics**, **ballgown**, **Biobase**, **BiocParallel**, **biomaRt**, **biomformat**, **Biostrings**, **DelayedArray**, **DESeq2**, **edgeR**, **genefilter**, **GenomeInfoDb**, **GenomicAlignments**, **GenomicFeatures**, **GenomicRanges**, **GEOquery**, **IRanges**, **limma**, **metagenomeSeq**, **oligo**, **Rsamtools**, **rtracklayer**, **sevenbridges**, **SummarizedExperiment**, **XVector** |
| **SB Bioinformatics - R 4.0** | Based on the official Bioconductor image **bioconductor_docker:RELEASE_3_11**, which is built on top of **rockerdev/rstudio:4.0.0-ubuntu18.04**. For more information about the image, see [its Docker Hub repository](https://hub.docker.com/r/bioconductor/bioconductor_docker).<br><br>Libraries installed by default:<br><br>*CRAN* - **BiocManager**, **devtools**, **doSNOW**, **ggfortify**, **gplots**, **pheatmap**, **Seurat**, **tidyverse**<br><br>*Bioconductor* - **AnnotationDbi**, **arrayQualityMetrics**, **ballgown**, **Biobase**, **BiocParallel**, **biomaRt**, **biomformat**, **Biostrings**, **DelayedArray**, **DESeq2**, **edgeR**, **genefilter**, **GenomeInfoDb**, **GenomicAlignments**, **GenomicFeatures**, **GenomicRanges**, **GEOquery**, **IRanges**, **limma**, **metagenomeSeq**, **oligo**, **Rsamtools**, **rtracklayer**, **SummarizedExperiment**, **XVector** |
| **SB Bioinformatics - R 3.6** | Based on the **rocker/verse** image from [The Rocker Project](https://www.rocker-project.org/), which contains **tidyverse**, **devtools**, TeX, and publishing-related packages. For more information about the image, see [its Docker Hub repository](https://hub.docker.com/r/rocker/verse).<br><br>Libraries installed by default:<br><br>*CRAN* - **BiocManager**, **ggfortify**, **pheatmap**, **gplots**<br><br>*Bioconductor* - **ballgown**, **DESeq2**, **metagenomeSeq**, **biomformat**, **BiocInstaller** |
| **SB Machine Learning - TensorFlow 1.13, R 3.6** | Optimized for machine learning and *execution on GPU instances*. Based on the **rocker/ml-gpu** image, which is intended for machine learning and GPU-based computation in R. [Learn more](https://hub.docker.com/r/rocker/ml-gpu). |

All available environment setups also contain the [sevenbridges-r](https://github.com/sbg/sevenbridges-r) API library, as well as **htop** and **openvpn** as general-purpose tools.