Overview

Use the following commands to manage connected cloud storage via the Seven Bridges Command Line Interface.

volumes create

Create a volume. Volumes authorize the Platform to access and query objects in a specified cloud storage service (Amazon Web Services or Google Cloud Storage) on your behalf.

Set the --service_file flag to supply the service object description in a JSON file. If you do not set the flag, the command expects the service object description on stdin.

Usage:
  sb volumes create --name <name_value> [--access_mode <access_mode_value>] [--description <description_value>] [--service_file <service_file_value>] [flags]

Flags:
      --name string           The name for the volume.
      --access_mode string    Sets volume access to either read-write (RW) or read-only (RO).
      --description string    Description of the volume.
      --service_file string   The file describing the service object. If omitted, input from stdin is expected.
  -h, --help                  help for create
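
Given a service object description saved as service.json, such as the following illustrative sketch (the bucket and credentials are placeholders, and the exact schema may vary by cloud provider and Platform version):

  {
    "type": "s3",
    "bucket": "my-company-bucket",
    "prefix": "",
    "credentials": {
      "access_key_id": "<AWS access key ID>",
      "secret_access_key": "<AWS secret access key>"
    }
  }

you could register the bucket as a read-write volume:

Example:
  sb volumes create --name my_s3_volume --access_mode RW --service_file service.json

To supply the same description on stdin instead, omit --service_file, e.g. sb volumes create --name my_s3_volume < service.json.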

volumes list

List all the volumes you have registered.

Usage:
  sb volumes list [flags]

Flags:
  -h, --help   help for list

volumes get

Get details about the specified volume.

Usage:
  sb volumes get <volume_id> [flags]

Arguments:
      volume_id   ID of the volume.

Flags:
  -h, --help   help for get
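
A hypothetical invocation, assuming volume IDs take the owner/volume-name form (the ID below is a placeholder):

Example:
  sb volumes get rfranklin/my_s3_volume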

volumes update

Update the details of the specified volume.

Usage:
  sb volumes update <volume_id> [--access_mode <access_mode_value>] --url <url_value> --credentials <credentials_value> ... [--description <description_value>] [flags]

Arguments:
      volume_id   ID of the volume.

Flags:
      --access_mode string        Sets volume access to either read-write (RW) or read-only (RO).
      --url string                Cloud provider API endpoint to use when accessing this bucket.
      --credentials stringSlice   Credentials for the underlying cloud provider.
      --description string        Description of the volume.
  -h, --help                      help for update
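
As an illustrative sketch, the following hypothetical call switches a volume to read-only access and refreshes its endpoint and credentials. The volume ID, endpoint, and credential values are placeholders, and the key=value form for the repeatable --credentials flag is an assumption:

Example:
  sb volumes update rfranklin/my_s3_volume --access_mode RO --url https://s3.amazonaws.com --credentials access_key_id=<new_key_id> --credentials secret_access_key=<new_secret_key>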

volumes delete

Delete the specified volume. This call deletes a volume you've created to refer to storage on Amazon Web Services or Google Cloud Storage. A volume must be deactivated before it can be deleted.

Note that any files you've imported from your volume onto the Platform, known as aliases, will no longer be usable. If a new volume is created with the same volume_id as the deleted volume, the aliases will point to files on the newly created volume instead (if such files exist).

Usage:
  sb volumes delete <volume_id> [flags]

Arguments:
      volume_id   ID of the volume.

Flags:
  -h, --help   help for delete
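
A hypothetical invocation with a placeholder volume ID (deactivate the volume before running it):

Example:
  sb volumes delete rfranklin/my_s3_volume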

exports list

List the export jobs initiated by the user.

When you export a file from a project on the Platform into a volume, you write to your cloud storage bucket on Amazon Web Services or Google Cloud Storage.

If an export command is successful, the original project file will become an alias to the newly exported object on the volume. The source file will be deleted from the Platform and, if no more copies of this file exist, it will no longer count towards your total storage costs on the Platform. Once you export a file from the Platform to a volume, it is no longer part of the storage on the Platform and cannot be exported again.

Usage:
  sb exports list [flags]

Flags:
  -h, --help   help for list

exports get

Get information about the specified export job.

Usage:
  sb exports get <export_id> [flags]

Arguments:
      export_id   ID of the export job.

Flags:
  -h, --help   help for get
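
A hypothetical invocation with a placeholder export job ID:

Example:
  sb exports get 0a1b2c3d4e5f67890a1b2c3d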

exports start

Queue a job to export a file from a project on the Platform into a volume, i.e. to your cloud storage bucket on Amazon Web Services or Google Cloud Storage. The file selected for export must be neither a public file nor an alias. Aliases are objects stored in your cloud storage bucket which have been made available on the Platform. The volume you are exporting to must be configured for read-write access. To do this, set the access_mode parameter to RW when creating or modifying a volume.

When you export a file from a project on the Platform into a volume, you write to your cloud storage bucket on Amazon Web Services or Google Cloud Storage.

If an export command is successful, the original project file will become an alias to the newly exported object on the volume. The source file will be deleted from the Platform and, if no more copies of this file exist, it will no longer count towards your total storage costs on the Platform. Once you export a file from the Platform to a volume, it is no longer part of the storage on the Platform and cannot be exported again.

Usage:
  sb exports start --file <file_value> --destination_volume <destination_volume_value> --location <location_value> [--properties <properties_value> ...] [--overwrite] [flags]

Flags:
      --file string                 The ID of the file to export. Must be neither a public file nor an alias.
      --destination_volume string   The ID of the read-write volume to which the file will be exported.
      --location string             Volume-specific location to which the file will be exported.
      --properties stringSlice      Properties of the S3 volume.
      --overwrite                   Overwrite the file with the same name and prefix if it already exists.
  -h, --help                        help for start
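
For illustration, the following hypothetical command exports a project file to a read-write volume, overwriting any object already at the target location. The file ID, volume ID, and location are placeholders, and both the key=value form of the repeatable --properties flag and the sse_algorithm property shown are assumptions:

Example:
  sb exports start --file 568cf5dce4b0307bc0462060 --destination_volume rfranklin/my_s3_volume --location exports/output.vcf --properties sse_algorithm=AES256 --overwrite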

imports list

List the import jobs initiated by the user.

When you import a file from your volume on your cloud storage provider (Amazon Web Services or Google Cloud Storage), you are creating an alias on the Platform which points to the file in your cloud storage bucket. Aliases appear as files on the Platform and can be copied, executed, and modified as such. They refer back to the respective file on the given volume.

Usage:
  sb imports list [--volume <volume_value>] [--project <project_value>] [--state <state_value>] [--offset <offset_value>] [--limit <limit_value>] [flags]

Flags:
      --volume string    Volume ID for which imports are listed.
      --project string   Project ID for which imports are listed.
      --state string     List only imports in the specified state.
      --offset string    The offset at which to start listing results.
      --limit string     The maximum number of results to return.
  -h, --help             help for list
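
A hypothetical listing that keeps only completed imports from one volume and paginates the results (the volume ID is a placeholder, and COMPLETED is assumed to be a valid state value):

Example:
  sb imports list --volume rfranklin/my_s3_volume --state COMPLETED --offset 0 --limit 25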

imports get

Get information about the specified import job.

Usage:
  sb imports get <import_id> [--result] [flags]

Arguments:
      import_id   ID of the import job.

Flags:
      --result   Return only the resulting file of the import.
  -h, --help    help for get
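
A hypothetical invocation with a placeholder import job ID; add --result to return only the resulting file:

Example:
  sb imports get 0a1b2c3d4e5f67890a1b2c3d --result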

imports start

Start an import job from the specified volume.

When you import a file from your volume on your cloud storage provider (Amazon Web Services or Google Cloud Storage), you are creating an alias on the Platform which points to the file in your cloud storage bucket. Aliases appear as files on the Platform and can be copied, executed, and modified as such. They refer back to the respective file on the given volume.

This command queues a job to import a file from a volume into a project on the Platform. Essentially, you are importing a file from your cloud storage provider (Amazon Web Services or Google Cloud Storage) via the volume onto the Platform.

Usage:
  sb imports start --source_volume <source_volume_value> --location <location_value> --destination <destination_value> [--name <name_value>] [--overwrite] [flags]

Flags:
      --source_volume string   The ID of the source volume.
      --location string        Volume-specific location pointing to the file to import.
      --destination string     The ID of the project in which to create the alias.
      --name string            The name of the alias. Should be unique to the project.
      --overwrite              Overwrite the file with the same name if it already exists.
  -h, --help                   help for start
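
For illustration, this hypothetical command imports an object from a volume into a project and names the resulting alias. The volume ID, location, and project ID are placeholders:

Example:
  sb imports start --source_volume rfranklin/my_s3_volume --location inputs/sample1.fastq --destination rfranklin/my-project --name sample1.fastq --overwrite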
