--- title: "Amazon S3" description: "Rclone docs for Amazon S3" date: "2014-04-26" --- Amazon S3 --------------------------------------- Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, eg `remote:bucket/path/to/dir`. Here is an example of making an s3 configuration. First run rclone config This will guide you through an interactive setup process. ``` No remotes found - make a new one n) New remote q) Quit config n/q> n name> remote What type of source is it? Choose a number from below 1) amazon cloud drive 2) b2 3) drive 4) dropbox 5) google cloud storage 6) swift 7) hubic 8) local 9) onedrive 10) s3 11) yandex type> 10 Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value * Enter AWS credentials in the next step 1) false * Get AWS credentials from the environment (env vars or IAM) 2) true env_auth> 2 AWS Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> Region to connect to. Choose a number from below, or type in your own value * The default endpoint - a good choice if you are unsure. * US Region, Northern Virginia or Pacific Northwest. * Leave location constraint empty. 1) us-east-1 * US West (Oregon) Region * Needs location constraint us-west-2. 2) us-west-2 * US West (Northern California) Region * Needs location constraint us-west-1. [..snip..] 8) ap-northeast-1 * South America (Sao Paulo) Region * Needs location constraint sa-east-1. 9) sa-east-1 * If using an S3 clone that only understands v2 signatures - eg Ceph - set this and make sure you set the endpoint. 10) other-v2-signature * If using an S3 clone that understands v4 signatures set this and make sure you set the endpoint. 
11) other-v4-signature
region> 3
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 * Empty for US Region, Northern Virginia or Pacific Northwest.
 1)
 * US West (Oregon) Region.
 2) us-west-2
 * US West (Northern California) Region.
[..snip..]
 8) ap-northeast-1
 * South America (Sao Paulo) Region.
 9) sa-east-1
location_constraint> 3
Remote config
--------------------
[remote]
env_auth = true
access_key_id =
secret_access_key =
region = us-west-1
endpoint =
location_constraint = us-west-1
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:

Name                 Type
====                 ====
remote               s3

e) Edit existing remote
n) New remote
d) Delete remote
q) Quit config
e/n/d/q>
```

This remote is called `remote` and can now be used like this

See all buckets

    rclone lsd remote:

Make a new bucket

    rclone mkdir remote:bucket

List the contents of a bucket

    rclone ls remote:bucket

Sync `/home/local/directory` to the remote bucket, deleting any
excess files in the bucket.

    rclone sync /home/local/directory remote:bucket

### Modified time ###

The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch accurate to 1 ns.

### Multipart uploads ###

rclone supports multipart uploads with S3 which means that it can
upload files bigger than 5GB. Note that files uploaded with multipart
upload don't have an MD5SUM.

### Buckets and Regions ###

With Amazon S3 you can list buckets (`rclone lsd`) using any region,
but you can only access the content of a bucket from the region it was
created in.  If you attempt to access a bucket from the wrong region,
you will get an error, `incorrect region, the bucket is not in 'XXX'
region`.
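The `X-Amz-Meta-Mtime` format described in the Modified time section can be sketched as follows. This is a minimal Python illustration, not rclone's own (Go) code, and the helper name is hypothetical; it only shows how a nanosecond timestamp maps onto the stored string.

```python
import os

def mtime_metadata(path):
    # Hypothetical helper: render a file's modification time the way
    # rclone stores it in X-Amz-Meta-Mtime, i.e. floating point
    # seconds since the Unix epoch, accurate to 1 ns.
    seconds, nanos = divmod(os.stat(path).st_mtime_ns, 10**9)
    return {"X-Amz-Meta-Mtime": "%d.%09d" % (seconds, nanos)}
```

Reading it back is the reverse: parse the string as a decimal number of seconds and apply it to the local file, eg with `os.utime`.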
### Authentication ###

There are two ways to supply `rclone` with a set of AWS
credentials. In order of precedence:

 - Directly in the rclone configuration file (as configured by `rclone config`)
   - set `access_key_id` and `secret_access_key`
 - Runtime configuration:
   - set `env_auth` to `true` in the config file
   - Exporting the following environment variables before running `rclone`
     - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
     - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
   - Running `rclone` on an EC2 instance with an IAM role

If none of these options ends up providing `rclone` with AWS
credentials then S3 interaction will be unauthenticated (see below).

### Anonymous access to public buckets ###

If you want to use rclone to access a public bucket, configure with a
blank `access_key_id` and `secret_access_key`.  Eg

```
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> anons3
What type of source is it?
Choose a number from below
 1) amazon cloud drive
 2) b2
 3) drive
 4) dropbox
 5) google cloud storage
 6) swift
 7) hubic
 8) local
 9) onedrive
10) s3
11) yandex
type> 10
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 * Enter AWS credentials in the next step
 1) false
 * Get AWS credentials from the environment (env vars or IAM)
 2) true
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key>
...
```

Then use it as normal with the name of the public bucket, eg

    rclone lsd anons3:1000genomes

You will be able to list and copy data but not upload it.

### Ceph ###

Ceph is an object storage system which presents an Amazon S3
interface.
To use rclone with Ceph, you need to set the following parameters in
the config.

```
access_key_id = Whatever
secret_access_key = Whatever
endpoint = https://ceph.endpoint.goes.here/
region = other-v2-signature
```

Note also that Ceph sometimes puts `/` in the passwords it gives
users.  If you read the secret access key using the command line tools
you will get a JSON blob with the `/` escaped as `\/`.  Make sure you
only write `/` in the secret access key.

Eg the dump from Ceph looks something like this (irrelevant keys
removed).

```
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
```

Because this is a JSON dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
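As a quick check that the `\/` in the dump really is just JSON string escaping and not part of the key itself, you can run a standard JSON parser over a fragment of the dump. This Python sketch uses the placeholder value from above:

```python
import json

# The secret key exactly as it appears in the Ceph JSON dump,
# with the forward slash escaped as \/ (legal JSON escaping).
raw = r'{"secret_key": "xxxxxx\/xxxx"}'

decoded = json.loads(raw)
# After decoding, the escape is gone; this is the value to write
# into the rclone config as secret_access_key.
print(decoded["secret_key"])  # xxxxxx/xxxx
```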