---
title: "Amazon S3"
description: "Rclone docs for Amazon S3"
date: "2014-04-26"
---
<i class="fa fa-amazon"></i> Amazon S3
---------------------------------------
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command). You may put subdirectories in too, eg `remote:bucket/path/to/dir`.
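For example, to list the contents of that directory:

    rclone ls remote:bucket/path/to/dir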
Here is an example of making an s3 configuration. First run

    rclone config

This will guide you through an interactive setup process.
```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> remote
What type of source is it?
Choose a number from below
1) amazon cloud drive
2) b2
3) drive
4) dropbox
5) google cloud storage
6) swift
7) hubic
8) local
9) onedrive
10) s3
11) yandex
type> 10
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
* Enter AWS credentials in the next step
1) false
* Get AWS credentials from the environment (env vars or IAM)
2) true
env_auth> 2
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key>
Region to connect to.
Choose a number from below, or type in your own value
* The default endpoint - a good choice if you are unsure.
* US Region, Northern Virginia or Pacific Northwest.
* Leave location constraint empty.
1) us-east-1
* US West (Oregon) Region
* Needs location constraint us-west-2.
2) us-west-2
* US West (Northern California) Region
* Needs location constraint us-west-1.
[..snip..]
8) ap-northeast-1
* South America (Sao Paulo) Region
* Needs location constraint sa-east-1.
9) sa-east-1
* If using an S3 clone that only understands v2 signatures - eg Ceph - set this and make sure you set the endpoint.
10) other-v2-signature
* If using an S3 clone that understands v4 signatures set this and make sure you set the endpoint.
11) other-v4-signature
region> 3
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
* Empty for US Region, Northern Virginia or Pacific Northwest.
1)
* US West (Oregon) Region.
2) us-west-2
* US West (Northern California) Region.
[..snip..]
8) ap-northeast-1
* South America (Sao Paulo) Region.
9) sa-east-1
location_constraint> 3
Remote config
--------------------
[remote]
env_auth = true
access_key_id =
secret_access_key =
region = us-west-1
endpoint =
location_constraint = us-west-1
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
remote s3
e) Edit existing remote
n) New remote
d) Delete remote
q) Quit config
e/n/d/q>
```
This remote is called `remote` and can now be used like this

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.

    rclone sync /home/local/directory remote:bucket
### Modified time ###
The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime`, as a floating point number of seconds since the
epoch, accurate to 1 ns.
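You can see the stored times with the `rclone lsl` command, which lists
the size, modification time and path of each object. A sketch, where
the size, date and file name are all illustrative placeholders:

```
rclone lsl remote:bucket
    60295 2016-01-31 10:15:57.034468261 photo.jpg
```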
### Multipart uploads ###
rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5GB. Note that files uploaded with multipart
upload don't have an MD5SUM.
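You can check which objects have hashes with `rclone md5sum`; for
objects uploaded with multipart upload the hash column may be blank. A
sketch, with illustrative hashes and file names:

```
rclone md5sum remote:bucket
0123456789abcdef0123456789abcdef  small-file.txt
                                  big-multipart-file.bin
```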
### Buckets and Regions ###
With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in. If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.
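For example, a bucket created in `us-west-2` must be addressed by a
remote configured with that region. A sketch of the relevant lines in
the config file (`location_constraint` is only consulted when creating
buckets):

```
[remote]
region = us-west-2
location_constraint = us-west-2
```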
### Authentication ###
There are two ways to supply `rclone` with a set of AWS
credentials. In order of precedence:
- Directly in the rclone configuration file (as configured by `rclone config`)
- set `access_key_id` and `secret_access_key`
- Runtime configuration:
- set `env_auth` to `true` in the config file
- Exporting the following environment variables before running `rclone`
- Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
- Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
- Running `rclone` on an EC2 instance with an IAM role
If none of these options provides `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see below).
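For example, with `env_auth = true` in the config file, you can supply
credentials from the environment like this (the key values shown are
placeholders):

```
# placeholder credentials - substitute your own
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
rclone lsd remote:
```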
### Anonymous access to public buckets ###
If you want to use rclone to access a public bucket, configure with a
blank `access_key_id` and `secret_access_key`. Eg
```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> anons3
What type of source is it?
Choose a number from below
1) amazon cloud drive
2) b2
3) drive
4) dropbox
5) google cloud storage
6) swift
7) hubic
8) local
9) onedrive
10) s3
11) yandex
type> 10
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
* Enter AWS credentials in the next step
1) false
* Get AWS credentials from the environment (env vars or IAM)
2) true
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key>
...
```
Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.
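For instance, to copy an object out of the public bucket to a local
directory (the object path here is hypothetical):

    rclone copy anons3:1000genomes/path/to/object /tmp/1000genomes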
### Ceph ###
Ceph is an object storage system which presents an Amazon S3 interface.
To use rclone with Ceph, you need to set the following parameters in
the config.
```
access_key_id = Whatever
secret_access_key = Whatever
endpoint = https://ceph.endpoint.goes.here/
region = other-v2-signature
```
Note also that Ceph sometimes puts `/` in the passwords it gives
users. If you read the secret access key using the command line tools
you will get a JSON blob with the `/` escaped as `\/`. Make sure you
only write `/` in the secret access key.
Eg the dump from Ceph looks something like this (irrelevant keys
removed).
```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```
Because this is a JSON dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
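In other words, the corresponding line in the rclone config file should
contain the plain `/`:

```
secret_access_key = xxxxxx/xxxx
```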