
List of available Google Cloud Platform instances


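These instances are selected via the `sbg:GoogleInstanceType` hint described below. As a minimal sketch, a hint entry in a CWL app might look like the following (this assumes the usual `class`/`value` hint form; the instance choice is an arbitrary example, not a recommendation):

```yaml
hints:
  - class: sbg:GoogleInstanceType
    value: n1-highmem-8
```
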
The list below shows available Google Cloud Platform instances that you can choose and specify as a value for the `sbg:GoogleInstanceType` hint.

See the [Google Cloud Platform page on instance types](https://cloud.google.com/compute/vm-instance-pricing#n1_predefined) for details on pricing.

Persistent disk storage can be set to anything between 2 GB and 4 TB. Learn more from our [Persistent Disk Customization documentation](doc:set-computation-instances#section-set-attached-storage-size).

| Name | Cores | RAM [GB] |
| --- | --- | --- |
| n1-standard-1 | 1 | 3.75 |
| n1-standard-2 | 2 | 7.5 |
| n1-standard-4 | 4 | 15.0 |
| n1-standard-8 | 8 | 30.0 |
| n1-standard-16 | 16 | 60.0 |
| n1-standard-32 | 32 | 120.0 |
| n1-standard-64 | 64 | 240.0 |
| n1-standard-96 | 96 | 360.0 |
| n1-highcpu-2 | 2 | 1.8 |
| n1-highcpu-4 | 4 | 3.6 |
| n1-highcpu-8 | 8 | 7.2 |
| n1-highcpu-16 | 16 | 14.4 |
| n1-highcpu-32 | 32 | 28.8 |
| n1-highcpu-64 | 64 | 57.6 |
| n1-highcpu-96 | 96 | 86.4 |
| n1-highmem-2 | 2 | 13.0 |
| n1-highmem-4 | 4 | 26.0 |
| n1-highmem-8 | 8 | 52.0 |
| n1-highmem-16 | 16 | 104.0 |
| n1-highmem-32 | 32 | 208.0 |
| n1-highmem-64 | 64 | 416.0 |
| n1-highmem-96 | 96 | 624.0 |

We last updated the list on Fri, 16 Oct 2020 12:27:09 GMT.

The following GPU instances are also available:

| Name | GPUs | vCPUs | RAM (GiB) |
| --- | --- | --- | --- |
| p2.xlarge | 1 | 4 | 61 |
| p2.8xlarge | 8 | 32 | 488 |
| p2.16xlarge | 16 | 64 | 732 |
| p3.2xlarge | 1 Tesla V100 | 8 | 61 |
| p3.8xlarge | 4 Tesla V100 | 32 | 244 |
| p3.16xlarge | 8 Tesla V100 | 64 | 488 |

Creating Docker images containing tools that run on GPU instances is similar to creating Docker images with tools designed for CPU instances.
The only major difference is that GPU tools have additional requirements for interacting with GPUs, through either [OpenCL](https://www.khronos.org/opencl/) or [CUDA](https://developer.nvidia.com/cuda-zone). NVIDIA drivers come preinstalled, optimized for the specific instance family according to the [Amazon best practice](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/optimize_gpu.html), and are accessible from the Docker container. We recommend using one of the Docker images provided by NVIDIA as the base image. For tools that require CUDA, the list of supported images is available at [https://hub.docker.com/r/nvidia/cuda/](https://hub.docker.com/r/nvidia/cuda/), and for tools based on OpenCL at [https://hub.docker.com/r/nvidia/opencl](https://hub.docker.com/r/nvidia/opencl). The rest of the procedure for [creating and uploading](doc:upload-your-docker-image-1) a Docker image is the same as for tools designed to run on CPU instances. If you have any problems with the setup, please contact our [Support Team](mailto:[email protected]).

When creating a Docker image containing GPU tools, keep in mind that older binaries are usually built for older GPU architectures and might not work on newer GPUs. In that case, those binaries can't be used, and new ones should be built from source code.
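As a sketch of the approach described above, a Dockerfile for a CUDA-based tool would typically start from one of the NVIDIA base images (the CUDA version tag and the installed package here are placeholder assumptions, not part of this documentation):

```dockerfile
# Hypothetical example: base image tag and tool package are placeholders.
FROM nvidia/cuda:11.4.3-runtime-ubuntu20.04

# Install the GPU tool and its dependencies (placeholder package name).
RUN apt-get update && \
    apt-get install -y --no-install-recommends my-gpu-tool && \
    rm -rf /var/lib/apt/lists/*
```

The image is then built and pushed with the usual `docker build` / `docker push` commands, as for any CPU tool.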
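To make the architecture point concrete: the GPU instance families listed above carry different NVIDIA GPUs with different CUDA compute capabilities, and a binary compiled only for a newer capability will not load on an older GPU. The following illustrative snippet (the helper function is hypothetical; the GPU models and capability values come from NVIDIA's published specifications) shows the mapping for the p2 and p3 families:

```python
# Illustrative mapping: GPU model and CUDA compute capability for the
# instance families listed above, per NVIDIA's published specifications.
GPU_BY_FAMILY = {
    "p2": ("Tesla K80", "3.7"),   # p2.* instances carry Tesla K80 GPUs
    "p3": ("Tesla V100", "7.0"),  # p3.* instances carry Tesla V100 GPUs
}

def gpu_info(instance_type: str) -> tuple:
    """Return (GPU model, CUDA compute capability) for a p2/p3 instance type."""
    family = instance_type.split(".", 1)[0]
    return GPU_BY_FAMILY[family]

# A binary built only for compute capability 7.0 (e.g. with
# `-gencode arch=compute_70,code=sm_70`) targets the V100 and will not
# run on the older K80 (3.7) - hence the advice to rebuild from source.
print(gpu_info("p3.2xlarge"))
```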