---
title: "Google Cloud Storage"
description: "Rclone docs for Google Cloud Storage"
date: "2017-07-18"
---
<i class="fa fa-google"></i> Google Cloud Storage
-------------------------------------------------
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
The initial setup for Google Cloud Storage involves getting a token from Google Cloud Storage
which you need to do in your browser. `rclone config` walks you
through it.
Here is an example of how to make a remote called `remote`. First run:

    rclone config
This will guide you through an interactive setup process:
```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
1 / Amazon Drive
\ "amazon cloud drive"
2 / Amazon S3 (also Dreamhost, Ceph, Minio)
\ "s3"
3 / Backblaze B2
\ "b2"
4 / Dropbox
\ "dropbox"
5 / Encrypt/Decrypt a remote
\ "crypt"
6 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
7 / Google Drive
\ "drive"
8 / Hubic
\ "hubic"
9 / Local Disk
\ "local"
10 / Microsoft OneDrive
\ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
12 / SSH/SFTP Connection
\ "sftp"
13 / Yandex Disk
\ "yandex"
Storage> 6
Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
Project number optional - needed only for list/create/delete buckets - see your developer console.
project_number> 12345678
Service Account Credentials JSON file path - needed only if you want to use SA instead of interactive login.
service_account_file>
Access Control List for new objects.
Choose a number from below, or type in your own value
1 / Object owner gets OWNER access, and all Authenticated Users get READER access.
\ "authenticatedRead"
2 / Object owner gets OWNER access, and project team owners get OWNER access.
\ "bucketOwnerFullControl"
3 / Object owner gets OWNER access, and project team owners get READER access.
\ "bucketOwnerRead"
4 / Object owner gets OWNER access [default if left blank].
\ "private"
5 / Object owner gets OWNER access, and project team members get access according to their roles.
\ "projectPrivate"
6 / Object owner gets OWNER access, and all Users get READER access.
\ "publicRead"
object_acl> 4
Access Control List for new buckets.
Choose a number from below, or type in your own value
1 / Project team owners get OWNER access, and all Authenticated Users get READER access.
\ "authenticatedRead"
2 / Project team owners get OWNER access [default if left blank].
\ "private"
3 / Project team members get access according to their roles.
\ "projectPrivate"
4 / Project team owners get OWNER access, and all Users get READER access.
\ "publicRead"
5 / Project team owners get OWNER access, and all Users get WRITER access.
\ "publicReadWrite"
bucket_acl> 2
Location for the newly created buckets.
Choose a number from below, or type in your own value
1 / Empty for default location (US).
\ ""
2 / Multi-regional location for Asia.
\ "asia"
3 / Multi-regional location for Europe.
\ "eu"
4 / Multi-regional location for United States.
\ "us"
5 / Taiwan.
\ "asia-east1"
6 / Tokyo.
\ "asia-northeast1"
7 / Singapore.
\ "asia-southeast1"
8 / Sydney.
\ "australia-southeast1"
9 / Belgium.
\ "europe-west1"
10 / London.
\ "europe-west2"
11 / Iowa.
\ "us-central1"
12 / South Carolina.
\ "us-east1"
13 / Northern Virginia.
\ "us-east4"
14 / Oregon.
\ "us-west1"
location> 12
The storage class to use when storing objects in Google Cloud Storage.
Choose a number from below, or type in your own value
1 / Default
\ ""
2 / Multi-regional storage class
\ "MULTI_REGIONAL"
3 / Regional storage class
\ "REGIONAL"
4 / Nearline storage class
\ "NEARLINE"
5 / Coldline storage class
\ "COLDLINE"
6 / Durable reduced availability storage class
\ "DURABLE_REDUCED_AVAILABILITY"
storage_class> 5
Remote config
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> y
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
Got code
--------------------
[remote]
type = google cloud storage
client_id =
client_secret =
token = {"AccessToken":"xxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","RefreshToken":"x/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx_xxxxxxxxx","Expiry":"2014-07-17T20:49:14.929208288+01:00","Extra":null}
project_number = 12345678
object_acl = private
bucket_acl = private
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```
Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if you use auto config mode. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and it
may require you to unblock it temporarily if you are running a host
firewall, or use manual mode.

This remote is called `remote` and can now be used like this

See all the buckets in your project

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync /home/local/directory remote:bucket
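Since `sync` makes the destination match the source, it will delete
files from the bucket that are not in the local directory. If in
doubt, you can preview the changes first with rclone's global
`--dry-run` flag:

    rclone sync --dry-run /home/local/directory remote:bucket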
### Service Account support ###
You can set up rclone with Google Cloud Storage in an unattended mode,
i.e. not tied to a specific end-user Google account. This is useful
when you want to synchronise files onto machines that don't have
actively logged-in users, for example build machines.
To get credentials for Google Cloud Platform
[IAM Service Accounts](https://cloud.google.com/iam/docs/service-accounts),
please head to the
[Service Account](https://console.cloud.google.com/permissions/serviceaccounts)
section of the Google Developer Console. Service Accounts behave just
like normal `User` permissions in
[Google Cloud Storage ACLs](https://cloud.google.com/storage/docs/access-control),
so you can limit their access (e.g. make them read only). After
creating an account, a JSON file containing the Service Account's
credentials will be downloaded onto your machines. These credentials
are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path
to your Service Account credentials at the `service_account_file`
prompt and rclone won't use the browser-based authentication
flow. If you'd rather stuff the contents of the credentials file into
the rclone config file, you can set `service_account_credentials` with
the actual contents of the file instead, or set the equivalent
environment variable.
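For reference, a remote configured with a Service Account might end up
looking roughly like this in your rclone config file (the project
number and file path are placeholders):

```
[remote]
type = google cloud storage
project_number = 12345678
service_account_file = /path/to/credentials.json
object_acl = private
bucket_acl = private
```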
### Application Default Credentials ###
If no other source of credentials is provided, rclone will fall back
to
[Application Default Credentials](https://cloud.google.com/video-intelligence/docs/common/auth#authenticating_with_application_default_credentials).
This is useful both when you have already configured authentication
for your developer account, and in production when running on a Google
Compute host. Note that if running in docker, you may need to run
additional commands on your Google Compute machine -
[see this page](https://cloud.google.com/container-registry/docs/advanced-authentication#gcloud_as_a_docker_credential_helper).

Note that when application default credentials are used, there
is no need to explicitly configure a project number.
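For illustration, one common way to provide Application Default
Credentials on a machine that is not a Google Compute host is the
standard `GOOGLE_APPLICATION_CREDENTIALS` environment variable (the
key path below is a placeholder):

    export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
    rclone lsd remote: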
### --fast-list ###
This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
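For example, to recursively list the contents of a bucket using fewer
transactions:

    rclone ls --fast-list remote:bucket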
### Modified time ###
Google Cloud Storage stores md5sums natively and rclone stores
modification times as metadata on the object, under the "mtime" key in
RFC3339 format accurate to 1ns.
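You can inspect the modification times rclone reads back with `rclone lsl`,
which shows the size, modification time and path of each object:

    rclone lsl remote:bucket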
<!-- - autogenerated options start - DO NOT EDIT, instead edit fs.RegInfo in backend/googlecloudstorage/googlecloudstorage.go then run make backenddocs -->
### Standard Options
Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
#### --gcs-client-id
Google Application Client Id
Leave blank normally.
- Config: client_id
- Env Var: RCLONE_GCS_CLIENT_ID
- Type: string
- Default: ""
#### --gcs-client-secret
Google Application Client Secret
Leave blank normally.
- Config: client_secret
- Env Var: RCLONE_GCS_CLIENT_SECRET
- Type: string
- Default: ""
#### --gcs-project-number
Project number.
Optional - needed only for list/create/delete buckets - see your developer console.
- Config: project_number
- Env Var: RCLONE_GCS_PROJECT_NUMBER
- Type: string
- Default: ""
#### --gcs-service-account-file
Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
- Config: service_account_file
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_FILE
- Type: string
- Default: ""
#### --gcs-service-account-credentials
Service Account Credentials JSON blob
Leave blank normally.
Needed only if you want to use SA instead of interactive login.
- Config: service_account_credentials
- Env Var: RCLONE_GCS_SERVICE_ACCOUNT_CREDENTIALS
- Type: string
- Default: ""
#### --gcs-object-acl
Access Control List for new objects.
- Config: object_acl
- Env Var: RCLONE_GCS_OBJECT_ACL
- Type: string
- Default: ""
- Examples:
- "authenticatedRead"
- Object owner gets OWNER access, and all Authenticated Users get READER access.
- "bucketOwnerFullControl"
- Object owner gets OWNER access, and project team owners get OWNER access.
- "bucketOwnerRead"
- Object owner gets OWNER access, and project team owners get READER access.
- "private"
- Object owner gets OWNER access [default if left blank].
- "projectPrivate"
- Object owner gets OWNER access, and project team members get access according to their roles.
- "publicRead"
- Object owner gets OWNER access, and all Users get READER access.
#### --gcs-bucket-acl
Access Control List for new buckets.
- Config: bucket_acl
- Env Var: RCLONE_GCS_BUCKET_ACL
- Type: string
- Default: ""
- Examples:
- "authenticatedRead"
- Project team owners get OWNER access, and all Authenticated Users get READER access.
- "private"
- Project team owners get OWNER access [default if left blank].
- "projectPrivate"
- Project team members get access according to their roles.
- "publicRead"
- Project team owners get OWNER access, and all Users get READER access.
- "publicReadWrite"
- Project team owners get OWNER access, and all Users get WRITER access.
#### --gcs-bucket-policy-only
Access checks should use bucket-level IAM policies.
If you want to upload objects to a bucket with Bucket Policy Only set
then you will need to set this.
When it is set, rclone:
- ignores ACLs set on buckets
- ignores ACLs set on objects
- creates buckets with Bucket Policy Only set
Docs: https://cloud.google.com/storage/docs/bucket-policy-only
- Config: bucket_policy_only
- Env Var: RCLONE_GCS_BUCKET_POLICY_ONLY
- Type: bool
- Default: false
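For example, a sketch of creating a bucket on a project with Bucket
Policy Only enforced (the bucket name is a placeholder):

    rclone mkdir --gcs-bucket-policy-only remote:bucket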
#### --gcs-location
Location for the newly created buckets.
- Config: location
- Env Var: RCLONE_GCS_LOCATION
- Type: string
- Default: ""
- Examples:
- ""
- Empty for default location (US).
- "asia"
- Multi-regional location for Asia.
- "eu"
- Multi-regional location for Europe.
- "us"
- Multi-regional location for United States.
- "asia-east1"
- Taiwan.
- "asia-east2"
- Hong Kong.
- "asia-northeast1"
- Tokyo.
- "asia-south1"
- Mumbai.
- "asia-southeast1"
- Singapore.
- "australia-southeast1"
- Sydney.
- "europe-north1"
- Finland.
- "europe-west1"
- Belgium.
- "europe-west2"
- London.
- "europe-west3"
- Frankfurt.
- "europe-west4"
- Netherlands.
- "us-central1"
- Iowa.
- "us-east1"
- South Carolina.
- "us-east4"
- Northern Virginia.
- "us-west1"
- Oregon.
- "us-west2"
- California.
#### --gcs-storage-class
The storage class to use when storing objects in Google Cloud Storage.
- Config: storage_class
- Env Var: RCLONE_GCS_STORAGE_CLASS
- Type: string
- Default: ""
- Examples:
- ""
- Default
- "MULTI_REGIONAL"
- Multi-regional storage class
- "REGIONAL"
- Regional storage class
- "NEARLINE"
- Nearline storage class
- "COLDLINE"
- Coldline storage class
- "DURABLE_REDUCED_AVAILABILITY"
- Durable reduced availability storage class
<!-- - autogenerated options stop -->
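All of the options above can also be set per invocation, either as a
flag or via the corresponding environment variable. As a sketch (the
paths and bucket name are placeholders), to upload files with the
Nearline storage class:

    rclone copy --gcs-storage-class NEARLINE /path/to/files remote:bucket

or equivalently:

    RCLONE_GCS_STORAGE_CLASS=NEARLINE rclone copy /path/to/files remote:bucket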