---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
date: "2016-07-11"
---
<i class="fa fa-amazon"></i> Amazon S3 Storage Providers
--------------------------------------------------------

The S3 backend can be used with a number of different providers:

* {{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#amazon-s3" >}}
* {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
* {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
* {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
* {{< provider name="IBM COS S3" home="http://www.ibm.com/cloud/object-storage" config="/s3/#ibm-cos-s3" >}}
* {{< provider name="Minio" home="https://www.minio.io/" config="/s3/#minio" >}}
* {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}

Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, eg `remote:bucket/path/to/dir`.

Once you have made a remote (see the provider specific section above)
you can use it like this:

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync /home/local/directory remote:bucket

## AWS S3 {#amazon-s3}

Here is an example of making an s3 configuration. First run

    rclone config

This will guide you through an interactive setup process.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Alias for a existing remote
   \ "alias"
 2 / Amazon Drive
   \ "amazon cloud drive"
 3 / Amazon S3 Compliant Storage Providers (AWS, Ceph, Dreamhost, IBM COS, Minio)
   \ "s3"
 4 / Backblaze B2
   \ "b2"
[snip]
23 / http Connection
   \ "http"
Storage> s3
Choose your S3 provider.
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
 2 / Ceph Object Storage
   \ "Ceph"
 3 / Digital Ocean Spaces
   \ "DigitalOcean"
 4 / Dreamhost DreamObjects
   \ "Dreamhost"
 5 / IBM COS S3
   \ "IBMCOS"
 6 / Minio Object Storage
   \ "Minio"
 7 / Wasabi Object Storage
   \ "Wasabi"
 8 / Any other S3 compatible provider
   \ "Other"
provider> 1
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> XXX
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YYY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US East (Ohio) Region
 2 | Needs location constraint us-east-2.
   \ "us-east-2"
   / US West (Oregon) Region
 3 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 4 | Needs location constraint us-west-1.
   \ "us-west-1"
   / Canada (Central) Region
 5 | Needs location constraint ca-central-1.
   \ "ca-central-1"
   / EU (Ireland) Region
 6 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (London) Region
 7 | Needs location constraint eu-west-2.
   \ "eu-west-2"
   / EU (Frankfurt) Region
 8 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 9 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
10 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
11 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / Asia Pacific (Seoul)
12 | Needs location constraint ap-northeast-2.
   \ "ap-northeast-2"
   / Asia Pacific (Mumbai)
13 | Needs location constraint ap-south-1.
   \ "ap-south-1"
   / South America (Sao Paulo) Region
14 | Needs location constraint sa-east-1.
   \ "sa-east-1"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US East (Ohio) Region.
   \ "us-east-2"
 3 / US West (Oregon) Region.
   \ "us-west-2"
 4 / US West (Northern California) Region.
   \ "us-west-1"
 5 / Canada (Central) Region.
   \ "ca-central-1"
 6 / EU (Ireland) Region.
   \ "eu-west-1"
 7 / EU (London) Region.
   \ "eu-west-2"
 8 / EU Region.
   \ "EU"
 9 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
10 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
11 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
12 / Asia Pacific (Seoul)
   \ "ap-northeast-2"
13 / Asia Pacific (Mumbai)
   \ "ap-south-1"
14 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> 1
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption> 1
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
storage_class> 1
Remote config
--------------------
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
```

### --fast-list ###

This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
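
For example, a sync of a large bucket might be run as (an illustrative
invocation):

    rclone sync --fast-list /home/local/directory remote:bucket
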
### --update and --use-server-modtime ###

As noted below, the modified time is stored on metadata on the object. It is
used by default for all operations that require checking the time a file was
last updated. It allows rclone to treat the remote more like a true filesystem,
but it is inefficient because it requires an extra API call to retrieve the
metadata.

For many operations, the time the object was last uploaded to the remote is
sufficient to determine if it is "dirty". By using `--update` along with
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.
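
For example, this (illustrative) sync only uploads files whose local
modtime is newer than the time of the last upload, avoiding the extra
metadata reads:

    rclone sync --update --use-server-modtime /home/local/directory remote:bucket
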
### Modified time ###

The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns.
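
You can see the times rclone decodes from this metadata with a listing
command, eg (illustrative):

    rclone lsl remote:bucket
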
### Multipart uploads ###

rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5GB. Note that files uploaded *both* with
multipart upload *and* through crypt remotes do not have MD5 sums.

### Buckets and Regions ###

With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in. If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.

### Authentication ###

There are two ways to supply `rclone` with a set of AWS
credentials. In order of precedence:

 - Directly in the rclone configuration file (as configured by `rclone config`)
   - set `access_key_id` and `secret_access_key`. `session_token` can be
     optionally set when using AWS STS.
 - Runtime configuration:
   - set `env_auth` to `true` in the config file
   - Exporting the following environment variables before running `rclone`
     - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
     - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
     - Session Token: `AWS_SESSION_TOKEN`
   - Running `rclone` in an ECS task with an IAM role (AWS only)
   - Running `rclone` on an EC2 instance with an IAM role (AWS only)

If none of these options actually end up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see below).
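
For example, with `env_auth = true` set in the config file you might
supply credentials like this before running rclone (placeholder values
shown):

```
export AWS_ACCESS_KEY_ID=XXX
export AWS_SECRET_ACCESS_KEY=YYY
rclone lsd remote:
```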
2017-06-02 11:06:06 +00:00
### S3 Permissions ###
2018-04-12 16:05:53 +00:00
When using the `sync` subcommand of `rclone` the following minimum
2017-06-02 11:06:06 +00:00
permissions are required to be available on the bucket being written to:
* `ListBucket`
* `DeleteObject`
2017-06-10 14:22:43 +00:00
* `GetObject`
2017-06-02 11:06:06 +00:00
* `PutObject`
* `PutObjectACL`
Example policy:
```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*",
                "arn:aws:s3:::BUCKET_NAME"
            ]
        }
    ]
}
```

Notes on above:

1. This is a policy that can be used when creating a bucket. It assumes
that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.

For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.

### Key Management System (KMS) ###

If you are using server side encryption with KMS then you will find
you can't transfer small objects. As a work-around you can use the
`--ignore-checksum` flag.

A proper fix is being worked on in [issue #1824](https://github.com/ncw/rclone/issues/1824).
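
For example, an (illustrative) sync using the work-around flag:

    rclone sync --ignore-checksum /path/to/source remote:bucket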

### Glacier ###

You can transition objects to glacier storage using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
tries to access the data you will see an error like below.

    2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before using rclone.
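
One way to do that is with the AWS CLI rather than rclone, eg this
illustrative request which restores a temporary copy for 7 days:

    aws s3api restore-object --bucket BUCKET_NAME --key path/to/file \
        --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'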

### Specific options ###

Here are the command line options specific to this cloud storage
system.

#### --s3-acl=STRING ####

Canned ACL used when creating buckets and/or storing objects in S3.

For more info visit the [canned ACL docs](https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl).
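
For example, to store uploads with a public-read ACL (an illustrative
invocation):

    rclone copy --s3-acl public-read /path/to/files remote:bucket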

#### --s3-storage-class=STRING ####

Storage class to upload new objects with.

Available options include:

 - STANDARD - default storage class
 - STANDARD_IA - for less frequently accessed data (e.g. backups)
 - ONEZONE_IA - for storing data in only one Availability Zone
 - REDUCED_REDUNDANCY (only for noncritical, reproducible data, has lower redundancy)
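
For example, to upload new objects as Standard Infrequent Access (an
illustrative invocation):

    rclone copy --s3-storage-class STANDARD_IA /path/to/files remote:bucket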

#### --s3-chunk-size=SIZE ####

Any files larger than this will be uploaded in chunks of this
size. The default is 5MB. The minimum is 5MB.

Note that 2 chunks of this size are buffered in memory per transfer.

If you are transferring large files over high speed links and you have
enough memory, then increasing this will speed up the transfers.
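
For example, on a fast link with plenty of memory you might raise the
chunk size for big files (an illustrative value):

    rclone copy --s3-chunk-size 64M /path/to/bigfile remote:bucket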

### Anonymous access to public buckets ###

If you want to use rclone to access a public bucket, configure with a
blank `access_key_id` and `secret_access_key`. Your config should end
up looking like this:

```
[anons3]
type = s3
provider = AWS
env_auth = false
access_key_id =
secret_access_key =
region = us-east-1
endpoint =
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

### Ceph ###

[Ceph](https://ceph.com/) is an open source unified, distributed
storage system designed for excellent performance, reliability and
scalability. It has an S3 compatible object storage interface.

To use rclone with Ceph, configure as above but leave the region blank
and set the endpoint. You should end up with something like this in
your config:

```
[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```

Note also that Ceph sometimes puts `/` in the passwords it gives
users. If you read the secret access key using the command line tools
you will get a JSON blob with the `/` escaped as `\/`. Make sure you
only write `/` in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys
removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
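
If you want to extract the key programmatically, a JSON-aware tool
will unescape it for you, eg this sketch (assuming the dump comes from
`radosgw-admin` and that `jq` is installed):

    radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'
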
### Dreamhost ###

Dreamhost [DreamObjects](https://www.dreamhost.com/cloud/storage/) is
an object storage system based on CEPH.

To use rclone with Dreamhost, configure as above but leave the region blank
and set the endpoint. You should end up with something like this in
your config:

```
[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =
```

### DigitalOcean Spaces ###

[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.

To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`.

When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings.

Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below:

```
Storage> s3
env_auth> 1
access_key_id> YOUR_ACCESS_KEY
secret_access_key> YOUR_SECRET_KEY
region>
endpoint> nyc3.digitaloceanspaces.com
location_constraint>
acl>
storage_class>
```

The resulting configuration file should look like:

```
[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```
Once configured, you can create a new Space and begin copying files. For example:
```
rclone mkdir spaces:my-new-space
rclone copy /path/to/files spaces:my-new-space
```

### IBM COS (S3) ###

Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage)
To configure access to IBM COS S3, follow the steps below:

1. Run `rclone config` and select `n` for a new remote.
```
2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```
2. Enter the name for the configuration
```
name> <YOUR NAME>
```
3. Select "s3" storage.
```
Choose a number from below, or type in your own value
 1 / Alias for a existing remote
   \ "alias"
 2 / Amazon Drive
   \ "amazon cloud drive"
 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, Minio, IBM COS)
   \ "s3"
 4 / Backblaze B2
   \ "b2"
[snip]
23 / http Connection
   \ "http"
Storage> 3
```
4. Select IBM COS as the S3 Storage Provider.
```
Choose the S3 provider.
Choose a number from below, or type in your own value
 1 / Choose this option to configure Storage to AWS S3
   \ "AWS"
 2 / Choose this option to configure Storage to Ceph Systems
   \ "Ceph"
 3 / Choose this option to configure Storage to Dreamhost
   \ "Dreamhost"
 4 / Choose this option to the configure Storage to IBM COS S3
   \ "IBMCOS"
 5 / Choose this option to the configure Storage to Minio
   \ "Minio"
Provider>4
```
5. Enter the Access Key and Secret.
```
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> <>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> <>
```
6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the options below. For On Premise IBM COS, enter an endpoint address.
```
Endpoint for IBM COS S3 API.
Specify if using an IBM COS On Premise.
Choose a number from below, or type in your own value
 1 / US Cross Region Endpoint
   \ "s3-api.us-geo.objectstorage.softlayer.net"
 2 / US Cross Region Dallas Endpoint
   \ "s3-api.dal.us-geo.objectstorage.softlayer.net"
 3 / US Cross Region Washington DC Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.softlayer.net"
 4 / US Cross Region San Jose Endpoint
   \ "s3-api.sjc-us-geo.objectstorage.softlayer.net"
 5 / US Cross Region Private Endpoint
   \ "s3-api.us-geo.objectstorage.service.networklayer.com"
 6 / US Cross Region Dallas Private Endpoint
   \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com"
 7 / US Cross Region Washington DC Private Endpoint
   \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com"
 8 / US Cross Region San Jose Private Endpoint
   \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com"
 9 / US Region East Endpoint
   \ "s3.us-east.objectstorage.softlayer.net"
10 / US Region East Private Endpoint
   \ "s3.us-east.objectstorage.service.networklayer.com"
11 / US Region South Endpoint
[snip]
34 / Toronto Single Site Private Endpoint
   \ "s3.tor01.objectstorage.service.networklayer.com"
endpoint>1
```
7. Specify an IBM COS Location Constraint. The location constraint must match the endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter.
```
 1 / US Cross Region Standard
   \ "us-standard"
 2 / US Cross Region Vault
   \ "us-vault"
 3 / US Cross Region Cold
   \ "us-cold"
 4 / US Cross Region Flex
   \ "us-flex"
 5 / US East Region Standard
   \ "us-east-standard"
 6 / US East Region Vault
   \ "us-east-vault"
 7 / US East Region Cold
   \ "us-east-cold"
 8 / US East Region Flex
   \ "us-east-flex"
 9 / US South Region Standard
   \ "us-south-standard"
10 / US South Region Vault
   \ "us-south-vault"
[snip]
32 / Toronto Flex
   \ "tor01-flex"
location_constraint>1
```
8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud (Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs.
```
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS
   \ "public-read"
 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS
   \ "authenticated-read"
acl> 1
```
9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this:
```
[xxx]
type = s3
provider = IBMCOS
access_key_id = xxx
secret_access_key = yyy
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
```
10. Execute rclone commands
```
1) Create a bucket.
    rclone mkdir IBM-COS-XREGION:newbucket
2) List available buckets.
    rclone lsd IBM-COS-XREGION:
    -1 2017-11-08 21:16:22        -1 test
    -1 2018-02-14 20:16:39        -1 newbucket
3) List contents of a bucket.
    rclone ls IBM-COS-XREGION:newbucket
    18685952 test.exe
4) Copy a file from local to remote.
    rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
5) Copy a file from remote to local.
    rclone copy IBM-COS-XREGION:newbucket/file.txt .
6) Delete a file on remote.
    rclone delete IBM-COS-XREGION:newbucket/file.txt
```

### Minio ###

[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.

It is very easy to install and provides an S3 compatible server which can be used by rclone.

To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide).

When it configures itself Minio will print something like this

```
Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
AccessKey: USWUXHGYZQYFYFFIT3RE
SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
Region:    us-east-1
SQS ARNs:  arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis

Browser Access:
   http://192.168.1.106:9000  http://172.23.0.1:9000

Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
   $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03

Object API (Amazon S3 compatible):
   Go:         https://docs.minio.io/docs/golang-client-quickstart-guide
   Java:       https://docs.minio.io/docs/java-client-quickstart-guide
   Python:     https://docs.minio.io/docs/python-client-quickstart-guide
   JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
   .NET:       https://docs.minio.io/docs/dotnet-client-quickstart-guide

Drive Capacity: 26 GiB Free, 165 GiB Total
```

These details need to go into `rclone config` like this. Note that it
is important to put the region in as stated above.

```
env_auth> 1
access_key_id> USWUXHGYZQYFYFFIT3RE
secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
```

Which makes the config file look like this

```
[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
```

So once set up, for example to copy files into a bucket

```
rclone copy /path/to/files minio:bucket
```

### Wasabi ###

[Wasabi](https://wasabi.com) is a cloud-based object storage service for a
broad range of applications and use cases. Wasabi is designed for
individuals and organizations that require a high-performance,
reliable, and secure data storage infrastructure at minimal cost.

Wasabi provides an S3 interface which can be configured for use with
rclone like this.

```
No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> wasabi
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
[snip]
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> YOURACCESSKEY
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> YOURSECRETACCESSKEY
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
[snip]
region> us-east-1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint> s3.wasabisys.com
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
[snip]
location_constraint>
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
[snip]
acl>
The server-side encryption algorithm used when storing this object in S3.
Choose a number from below, or type in your own value
 1 / None
   \ ""
 2 / AES256
   \ "AES256"
server_side_encryption>
The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
storage_class>
Remote config
--------------------
[wasabi]
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = us-east-1
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
```

This will leave the config file looking like this.

```
[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.wasabisys.com
location_constraint =
acl =
server_side_encryption =
storage_class =
```