@@ -12234,10 +12991,10 @@ y/e/d> y
- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
-Limitations
+Limitations
Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries
flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
@@ -12263,10 +13020,12 @@ y/e/d> y
IONOS Cloud
Leviia Object Storage
Liara Object Storage
+Linode Object Storage
Minio
Petabox
Qiniu Cloud Object Storage (Kodo)
RackCorp Object Storage
+Rclone Serve S3
Scaleway
Seagate Lyve Cloud
SeaweedFS
@@ -12485,10 +13244,18 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
-Modified time
+Modification times and hashes
+Modification times
The modified time is stored as metadata on the object as X-Amz-Meta-Mtime
as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification time if the object can be copied in a single part. If the object is larger than 5 GiB or is in Glacier or Glacier Deep Archive storage, the object will be uploaded rather than copied.
Note that reading this from the object takes an additional HEAD
request as the metadata isn't returned in object listings.
+Hashes
+For small objects which weren't uploaded as multipart uploads (objects sized below --s3-upload-cutoff
if uploaded with rclone) rclone uses the ETag:
header as an MD5 checksum.
+However for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C) the ETag
header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata X-Amz-Meta-Md5chksum
which is a base64 encoded MD5 hash (in the same format as is required for Content-MD5
). You can use base64 -d and hexdump to check this value manually:
+echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
+or you can use rclone check
to verify the hashes are OK.
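+As an illustration (paths and bucket name here are placeholders), the stored hashes can be verified against a local copy of the data with:
+rclone check /path/to/local s3:bucket/path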
+For large objects, calculating this hash can take some time so the addition of this hash can be disabled with --s3-disable-checksum
. This will mean that these objects do not have an MD5 checksum.
+Note that reading this from the object takes an additional HEAD
request as the metadata isn't returned in object listings.
Reducing costs
Avoiding HEAD requests to read the modification time
By default, rclone will use the modification time of objects stored in S3 for syncing. This is stored in object metadata which unfortunately takes an extra HEAD request to read which can be expensive (in time and money).
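For example, if a size-only comparison is acceptable for your use case, the modification time (and hence the extra HEAD request) is not needed at all (paths and bucket name are placeholders):
rclone sync --size-only /path/to/source s3:bucket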
@@ -12535,13 +13302,6 @@ y/e/d>
By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.
You can disable this with the --s3-no-head option - see there for more details.
Setting this flag increases the chance for undetected upload failures.
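A minimal sketch of such an upload (paths and bucket name are placeholders):
rclone copy --s3-no-head /path/to/source s3:bucket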
-Hashes
-For small objects which weren't uploaded as multipart uploads (objects sized below --s3-upload-cutoff
if uploaded with rclone) rclone uses the ETag:
header as an MD5 checksum.
-However for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C) the ETag
header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata X-Amz-Meta-Md5chksum
which is a base64 encoded MD5 hash (in the same format as is required for Content-MD5
). You can use base64 -d and hexdump to check this value manually:
-echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
-or you can use rclone check
to verify the hashes are OK.
-For large objects, calculating this hash can take some time so the addition of this hash can be disabled with --s3-disable-checksum
. This will mean that these objects do not have an MD5 checksum.
-Note that reading this from the object takes an additional HEAD
request as the metadata isn't returned in object listings.
Versions
When bucket versioning is enabled (this can be done with rclone with the rclone backend versioning
command), when rclone uploads a new version of a file it creates a new version of it. Likewise, when you delete a file, the old version will be marked hidden and still be available.
Old versions of files, where available, are visible using the --s3-versions
flag.
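For example (bucket name is a placeholder), old versions can be listed alongside current ones with:
rclone --s3-versions ls s3:bucket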
@@ -12722,9 +13482,9 @@ $ rclone -q --s3-versions ls s3:cleanup-test
If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
-As mentioned in the Hashes section, small files that are not uploaded as multipart, use a different tag, causing the upload to fail. A simple solution is to set the --s3-upload-cutoff 0
and force all the files to be uploaded as multipart.
+As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0
and force all the files to be uploaded as multipart.
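For example (bucket name is a placeholder):
rclone copy --s3-upload-cutoff 0 /path/to/source s3:locked-bucket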
Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
--s3-provider
Choose your S3 provider.
Properties:
@@ -12799,6 +13559,10 @@ $ rclone -q --s3-versions ls s3:cleanup-test
+"Linode"
+
"Minio"
- Minio Object Storage
@@ -12815,6 +13579,10 @@ $ rclone -q --s3-versions ls s3:cleanup-test
+"Rclone"
+
"Scaleway"
- Scaleway Object Storage
@@ -13033,374 +13801,6 @@ $ rclone -q --s3-versions ls s3:cleanup-test
---s3-region
-region - the location where your bucket will be created and your data stored.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
-
-- "global"
-
-- Global CDN (All locations) Region
-
-- "au"
-
-- Australia (All states)
-
-- "au-nsw"
-
-- NSW (Australia) Region
-
-- "au-qld"
-
-- QLD (Australia) Region
-
-- "au-vic"
-
-- VIC (Australia) Region
-
-- "au-wa"
-
-- Perth (Australia) Region
-
-- "ph"
-
-- Manila (Philippines) Region
-
-- "th"
-
-- Bangkok (Thailand) Region
-
-- "hk"
-
-- "mn"
-
-- Ulaanbaatar (Mongolia) Region
-
-- "kg"
-
-- Bishkek (Kyrgyzstan) Region
-
-- "id"
-
-- Jakarta (Indonesia) Region
-
-- "jp"
-
-- "sg"
-
-- "de"
-
-- Frankfurt (Germany) Region
-
-- "us"
-
-- "us-east-1"
-
-- "us-west-1"
-
-- "nz"
-
-- Auckland (New Zealand) Region
-
-
-
---s3-region
-Region to connect to.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
-
-- "nl-ams"
-
-- Amsterdam, The Netherlands
-
-- "fr-par"
-
-- "pl-waw"
-
-
-
---s3-region
-Region to connect to. - the location where your bucket will be created and your data stored. Need bo be same with your endpoint.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: HuaweiOBS
-- Type: string
-- Required: false
-- Examples:
-
-- "af-south-1"
-
-- "ap-southeast-2"
-
-- "ap-southeast-3"
-
-- "cn-east-3"
-
-- "cn-east-2"
-
-- "cn-north-1"
-
-- "cn-north-4"
-
-- "cn-south-1"
-
-- "ap-southeast-1"
-
-- "sa-argentina-1"
-
-- "sa-peru-1"
-
-- "na-mexico-1"
-
-- "sa-chile-1"
-
-- "sa-brazil-1"
-
-- "ru-northwest-2"
-
-
-
---s3-region
-Region to connect to.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Cloudflare
-- Type: string
-- Required: false
-- Examples:
-
-- "auto"
-
-- R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
-
-
-
---s3-region
-Region to connect to.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
-
-- "cn-east-1"
-
-- The default endpoint - a good choice if you are unsure.
-- East China Region 1.
-- Needs location constraint cn-east-1.
-
-- "cn-east-2"
-
-- East China Region 2.
-- Needs location constraint cn-east-2.
-
-- "cn-north-1"
-
-- North China Region 1.
-- Needs location constraint cn-north-1.
-
-- "cn-south-1"
-
-- South China Region 1.
-- Needs location constraint cn-south-1.
-
-- "us-north-1"
-
-- North America Region.
-- Needs location constraint us-north-1.
-
-- "ap-southeast-1"
-
-- Southeast Asia Region 1.
-- Needs location constraint ap-southeast-1.
-
-- "ap-northeast-1"
-
-- Northeast Asia Region 1.
-- Needs location constraint ap-northeast-1.
-
-
-
---s3-region
-Region where your bucket will be created and your data stored.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: IONOS
-- Type: string
-- Required: false
-- Examples:
-
-- "de"
-
-- "eu-central-2"
-
-- "eu-south-2"
-
-
-
---s3-region
-Region where your bucket will be created and your data stored.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Petabox
-- Type: string
-- Required: false
-- Examples:
-
-- "us-east-1"
-
-- "eu-central-1"
-
-- "ap-southeast-1"
-
-- Asia Pacific (Singapore)
-
-- "me-south-1"
-
-- "sa-east-1"
-
-- South America (São Paulo)
-
-
-
---s3-region
-Region where your data stored.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Synology
-- Type: string
-- Required: false
-- Examples:
-
-- "eu-001"
-
-- "eu-002"
-
-- "us-001"
-
-- "us-002"
-
-- "tw-001"
-
-
-
---s3-region
-Region to connect to.
-Leave blank if you are using an S3 clone and you don't have a region.
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: !AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,Synology,TencentCOS,HuaweiOBS,IDrive
-- Type: string
-- Required: false
-- Examples:
-
-- ""
-
-- Use this if unsure.
-- Will use v4 signatures and an empty region.
-
-- "other-v2-signature"
-
-- Use this only if v4 signatures don't work.
-- E.g. pre Jewel/v10 CEPH.
-
-
-
--s3-endpoint
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
@@ -13412,1168 +13812,6 @@ $ rclone -q --s3-versions ls s3:cleanup-test
Type: string
Required: false
---s3-endpoint
-Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
-
-- "eos-wuxi-1.cmecloud.cn"
-
-- The default endpoint - a good choice if you are unsure.
-- East China (Suzhou)
-
-- "eos-jinan-1.cmecloud.cn"
-
-- "eos-ningbo-1.cmecloud.cn"
-
-- "eos-shanghai-1.cmecloud.cn"
-
-- East China (Shanghai-1)
-
-- "eos-zhengzhou-1.cmecloud.cn"
-
-- Central China (Zhengzhou)
-
-- "eos-hunan-1.cmecloud.cn"
-
-- Central China (Changsha-1)
-
-- "eos-zhuzhou-1.cmecloud.cn"
-
-- Central China (Changsha-2)
-
-- "eos-guangzhou-1.cmecloud.cn"
-
-- South China (Guangzhou-2)
-
-- "eos-dongguan-1.cmecloud.cn"
-
-- South China (Guangzhou-3)
-
-- "eos-beijing-1.cmecloud.cn"
-
-- North China (Beijing-1)
-
-- "eos-beijing-2.cmecloud.cn"
-
-- North China (Beijing-2)
-
-- "eos-beijing-4.cmecloud.cn"
-
-- North China (Beijing-3)
-
-- "eos-huhehaote-1.cmecloud.cn"
-
-- North China (Huhehaote)
-
-- "eos-chengdu-1.cmecloud.cn"
-
-- Southwest China (Chengdu)
-
-- "eos-chongqing-1.cmecloud.cn"
-
-- Southwest China (Chongqing)
-
-- "eos-guiyang-1.cmecloud.cn"
-
-- Southwest China (Guiyang)
-
-- "eos-xian-1.cmecloud.cn"
-
-- Nouthwest China (Xian)
-
-- "eos-yunnan.cmecloud.cn"
-
-- Yunnan China (Kunming)
-
-- "eos-yunnan-2.cmecloud.cn"
-
-- Yunnan China (Kunming-2)
-
-- "eos-tianjin-1.cmecloud.cn"
-
-- Tianjin China (Tianjin)
-
-- "eos-jilin-1.cmecloud.cn"
-
-- Jilin China (Changchun)
-
-- "eos-hubei-1.cmecloud.cn"
-
-- Hubei China (Xiangyan)
-
-- "eos-jiangxi-1.cmecloud.cn"
-
-- Jiangxi China (Nanchang)
-
-- "eos-gansu-1.cmecloud.cn"
-
-- "eos-shanxi-1.cmecloud.cn"
-
-- Shanxi China (Taiyuan)
-
-- "eos-liaoning-1.cmecloud.cn"
-
-- Liaoning China (Shenyang)
-
-- "eos-hebei-1.cmecloud.cn"
-
-- Hebei China (Shijiazhuang)
-
-- "eos-fujian-1.cmecloud.cn"
-
-- "eos-guangxi-1.cmecloud.cn"
-
-- Guangxi China (Nanning)
-
-- "eos-anhui-1.cmecloud.cn"
-
-
-
---s3-endpoint
-Endpoint for Arvan Cloud Object Storage (AOS) API.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
-
-- "s3.ir-thr-at1.arvanstorage.ir"
-
-- The default endpoint - a good choice if you are unsure.
-- Tehran Iran (Simin)
-
-- "s3.ir-tbz-sh1.arvanstorage.ir"
-
-- Tabriz Iran (Shahriar)
-
-
-
---s3-endpoint
-Endpoint for IBM COS S3 API.
-Specify if using an IBM COS On Premise.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: IBMCOS
-- Type: string
-- Required: false
-- Examples:
-
-- "s3.us.cloud-object-storage.appdomain.cloud"
-
-- US Cross Region Endpoint
-
-- "s3.dal.us.cloud-object-storage.appdomain.cloud"
-
-- US Cross Region Dallas Endpoint
-
-- "s3.wdc.us.cloud-object-storage.appdomain.cloud"
-
-- US Cross Region Washington DC Endpoint
-
-- "s3.sjc.us.cloud-object-storage.appdomain.cloud"
-
-- US Cross Region San Jose Endpoint
-
-- "s3.private.us.cloud-object-storage.appdomain.cloud"
-
-- US Cross Region Private Endpoint
-
-- "s3.private.dal.us.cloud-object-storage.appdomain.cloud"
-
-- US Cross Region Dallas Private Endpoint
-
-- "s3.private.wdc.us.cloud-object-storage.appdomain.cloud"
-
-- US Cross Region Washington DC Private Endpoint
-
-- "s3.private.sjc.us.cloud-object-storage.appdomain.cloud"
-
-- US Cross Region San Jose Private Endpoint
-
-- "s3.us-east.cloud-object-storage.appdomain.cloud"
-
-- US Region East Endpoint
-
-- "s3.private.us-east.cloud-object-storage.appdomain.cloud"
-
-- US Region East Private Endpoint
-
-- "s3.us-south.cloud-object-storage.appdomain.cloud"
-
-- US Region South Endpoint
-
-- "s3.private.us-south.cloud-object-storage.appdomain.cloud"
-
-- US Region South Private Endpoint
-
-- "s3.eu.cloud-object-storage.appdomain.cloud"
-
-- EU Cross Region Endpoint
-
-- "s3.fra.eu.cloud-object-storage.appdomain.cloud"
-
-- EU Cross Region Frankfurt Endpoint
-
-- "s3.mil.eu.cloud-object-storage.appdomain.cloud"
-
-- EU Cross Region Milan Endpoint
-
-- "s3.ams.eu.cloud-object-storage.appdomain.cloud"
-
-- EU Cross Region Amsterdam Endpoint
-
-- "s3.private.eu.cloud-object-storage.appdomain.cloud"
-
-- EU Cross Region Private Endpoint
-
-- "s3.private.fra.eu.cloud-object-storage.appdomain.cloud"
-
-- EU Cross Region Frankfurt Private Endpoint
-
-- "s3.private.mil.eu.cloud-object-storage.appdomain.cloud"
-
-- EU Cross Region Milan Private Endpoint
-
-- "s3.private.ams.eu.cloud-object-storage.appdomain.cloud"
-
-- EU Cross Region Amsterdam Private Endpoint
-
-- "s3.eu-gb.cloud-object-storage.appdomain.cloud"
-
-- Great Britain Endpoint
-
-- "s3.private.eu-gb.cloud-object-storage.appdomain.cloud"
-
-- Great Britain Private Endpoint
-
-- "s3.eu-de.cloud-object-storage.appdomain.cloud"
-
-- "s3.private.eu-de.cloud-object-storage.appdomain.cloud"
-
-- EU Region DE Private Endpoint
-
-- "s3.ap.cloud-object-storage.appdomain.cloud"
-
-- APAC Cross Regional Endpoint
-
-- "s3.tok.ap.cloud-object-storage.appdomain.cloud"
-
-- APAC Cross Regional Tokyo Endpoint
-
-- "s3.hkg.ap.cloud-object-storage.appdomain.cloud"
-
-- APAC Cross Regional HongKong Endpoint
-
-- "s3.seo.ap.cloud-object-storage.appdomain.cloud"
-
-- APAC Cross Regional Seoul Endpoint
-
-- "s3.private.ap.cloud-object-storage.appdomain.cloud"
-
-- APAC Cross Regional Private Endpoint
-
-- "s3.private.tok.ap.cloud-object-storage.appdomain.cloud"
-
-- APAC Cross Regional Tokyo Private Endpoint
-
-- "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud"
-
-- APAC Cross Regional HongKong Private Endpoint
-
-- "s3.private.seo.ap.cloud-object-storage.appdomain.cloud"
-
-- APAC Cross Regional Seoul Private Endpoint
-
-- "s3.jp-tok.cloud-object-storage.appdomain.cloud"
-
-- APAC Region Japan Endpoint
-
-- "s3.private.jp-tok.cloud-object-storage.appdomain.cloud"
-
-- APAC Region Japan Private Endpoint
-
-- "s3.au-syd.cloud-object-storage.appdomain.cloud"
-
-- APAC Region Australia Endpoint
-
-- "s3.private.au-syd.cloud-object-storage.appdomain.cloud"
-
-- APAC Region Australia Private Endpoint
-
-- "s3.ams03.cloud-object-storage.appdomain.cloud"
-
-- Amsterdam Single Site Endpoint
-
-- "s3.private.ams03.cloud-object-storage.appdomain.cloud"
-
-- Amsterdam Single Site Private Endpoint
-
-- "s3.che01.cloud-object-storage.appdomain.cloud"
-
-- Chennai Single Site Endpoint
-
-- "s3.private.che01.cloud-object-storage.appdomain.cloud"
-
-- Chennai Single Site Private Endpoint
-
-- "s3.mel01.cloud-object-storage.appdomain.cloud"
-
-- Melbourne Single Site Endpoint
-
-- "s3.private.mel01.cloud-object-storage.appdomain.cloud"
-
-- Melbourne Single Site Private Endpoint
-
-- "s3.osl01.cloud-object-storage.appdomain.cloud"
-
-- Oslo Single Site Endpoint
-
-- "s3.private.osl01.cloud-object-storage.appdomain.cloud"
-
-- Oslo Single Site Private Endpoint
-
-- "s3.tor01.cloud-object-storage.appdomain.cloud"
-
-- Toronto Single Site Endpoint
-
-- "s3.private.tor01.cloud-object-storage.appdomain.cloud"
-
-- Toronto Single Site Private Endpoint
-
-- "s3.seo01.cloud-object-storage.appdomain.cloud"
-
-- Seoul Single Site Endpoint
-
-- "s3.private.seo01.cloud-object-storage.appdomain.cloud"
-
-- Seoul Single Site Private Endpoint
-
-- "s3.mon01.cloud-object-storage.appdomain.cloud"
-
-- Montreal Single Site Endpoint
-
-- "s3.private.mon01.cloud-object-storage.appdomain.cloud"
-
-- Montreal Single Site Private Endpoint
-
-- "s3.mex01.cloud-object-storage.appdomain.cloud"
-
-- Mexico Single Site Endpoint
-
-- "s3.private.mex01.cloud-object-storage.appdomain.cloud"
-
-- Mexico Single Site Private Endpoint
-
-- "s3.sjc04.cloud-object-storage.appdomain.cloud"
-
-- San Jose Single Site Endpoint
-
-- "s3.private.sjc04.cloud-object-storage.appdomain.cloud"
-
-- San Jose Single Site Private Endpoint
-
-- "s3.mil01.cloud-object-storage.appdomain.cloud"
-
-- Milan Single Site Endpoint
-
-- "s3.private.mil01.cloud-object-storage.appdomain.cloud"
-
-- Milan Single Site Private Endpoint
-
-- "s3.hkg02.cloud-object-storage.appdomain.cloud"
-
-- Hong Kong Single Site Endpoint
-
-- "s3.private.hkg02.cloud-object-storage.appdomain.cloud"
-
-- Hong Kong Single Site Private Endpoint
-
-- "s3.par01.cloud-object-storage.appdomain.cloud"
-
-- Paris Single Site Endpoint
-
-- "s3.private.par01.cloud-object-storage.appdomain.cloud"
-
-- Paris Single Site Private Endpoint
-
-- "s3.sng01.cloud-object-storage.appdomain.cloud"
-
-- Singapore Single Site Endpoint
-
-- "s3.private.sng01.cloud-object-storage.appdomain.cloud"
-
-- Singapore Single Site Private Endpoint
-
-
-
---s3-endpoint
-Endpoint for IONOS S3 Object Storage.
-Specify the endpoint from the same region.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: IONOS
-- Type: string
-- Required: false
-- Examples:
-
-- "s3-eu-central-1.ionoscloud.com"
-
-- "s3-eu-central-2.ionoscloud.com"
-
-- "s3-eu-south-2.ionoscloud.com"
-
-
-
---s3-endpoint
-Endpoint for Petabox S3 Object Storage.
-Specify the endpoint from the same region.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Petabox
-- Type: string
-- Required: true
-- Examples:
-
-- "s3.petabox.io"
-
-- "s3.us-east-1.petabox.io"
-
-- "s3.eu-central-1.petabox.io"
-
-- "s3.ap-southeast-1.petabox.io"
-
-- Asia Pacific (Singapore)
-
-- "s3.me-south-1.petabox.io"
-
-- "s3.sa-east-1.petabox.io"
-
-- South America (São Paulo)
-
-
-
---s3-endpoint
-Endpoint for Leviia Object Storage API.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Leviia
-- Type: string
-- Required: false
-- Examples:
-
-- "s3.leviia.com"
-
-- The default endpoint
-- Leviia
-
-
-
---s3-endpoint
-Endpoint for Liara Object Storage API.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Liara
-- Type: string
-- Required: false
-- Examples:
-
-- "storage.iran.liara.space"
-
-- The default endpoint
-- Iran
-
-
-
---s3-endpoint
-Endpoint for OSS API.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Alibaba
-- Type: string
-- Required: false
-- Examples:
-
-- "oss-accelerate.aliyuncs.com"
-
-- "oss-accelerate-overseas.aliyuncs.com"
-
-- Global Accelerate (outside mainland China)
-
-- "oss-cn-hangzhou.aliyuncs.com"
-
-- East China 1 (Hangzhou)
-
-- "oss-cn-shanghai.aliyuncs.com"
-
-- East China 2 (Shanghai)
-
-- "oss-cn-qingdao.aliyuncs.com"
-
-- North China 1 (Qingdao)
-
-- "oss-cn-beijing.aliyuncs.com"
-
-- North China 2 (Beijing)
-
-- "oss-cn-zhangjiakou.aliyuncs.com"
-
-- North China 3 (Zhangjiakou)
-
-- "oss-cn-huhehaote.aliyuncs.com"
-
-- North China 5 (Hohhot)
-
-- "oss-cn-wulanchabu.aliyuncs.com"
-
-- North China 6 (Ulanqab)
-
-- "oss-cn-shenzhen.aliyuncs.com"
-
-- South China 1 (Shenzhen)
-
-- "oss-cn-heyuan.aliyuncs.com"
-
-- South China 2 (Heyuan)
-
-- "oss-cn-guangzhou.aliyuncs.com"
-
-- South China 3 (Guangzhou)
-
-- "oss-cn-chengdu.aliyuncs.com"
-
-- West China 1 (Chengdu)
-
-- "oss-cn-hongkong.aliyuncs.com"
-
-- "oss-us-west-1.aliyuncs.com"
-
-- US West 1 (Silicon Valley)
-
-- "oss-us-east-1.aliyuncs.com"
-
-- "oss-ap-southeast-1.aliyuncs.com"
-
-- Southeast Asia Southeast 1 (Singapore)
-
-- "oss-ap-southeast-2.aliyuncs.com"
-
-- Asia Pacific Southeast 2 (Sydney)
-
-- "oss-ap-southeast-3.aliyuncs.com"
-
-- Southeast Asia Southeast 3 (Kuala Lumpur)
-
-- "oss-ap-southeast-5.aliyuncs.com"
-
-- Asia Pacific Southeast 5 (Jakarta)
-
-- "oss-ap-northeast-1.aliyuncs.com"
-
-- Asia Pacific Northeast 1 (Japan)
-
-- "oss-ap-south-1.aliyuncs.com"
-
-- Asia Pacific South 1 (Mumbai)
-
-- "oss-eu-central-1.aliyuncs.com"
-
-- Central Europe 1 (Frankfurt)
-
-- "oss-eu-west-1.aliyuncs.com"
-
-- "oss-me-east-1.aliyuncs.com"
-
-
-
---s3-endpoint
-Endpoint for OBS API.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: HuaweiOBS
-- Type: string
-- Required: false
-- Examples:
-
-- "obs.af-south-1.myhuaweicloud.com"
-
-- "obs.ap-southeast-2.myhuaweicloud.com"
-
-- "obs.ap-southeast-3.myhuaweicloud.com"
-
-- "obs.cn-east-3.myhuaweicloud.com"
-
-- "obs.cn-east-2.myhuaweicloud.com"
-
-- "obs.cn-north-1.myhuaweicloud.com"
-
-- "obs.cn-north-4.myhuaweicloud.com"
-
-- "obs.cn-south-1.myhuaweicloud.com"
-
-- "obs.ap-southeast-1.myhuaweicloud.com"
-
-- "obs.sa-argentina-1.myhuaweicloud.com"
-
-- "obs.sa-peru-1.myhuaweicloud.com"
-
-- "obs.na-mexico-1.myhuaweicloud.com"
-
-- "obs.sa-chile-1.myhuaweicloud.com"
-
-- "obs.sa-brazil-1.myhuaweicloud.com"
-
-- "obs.ru-northwest-2.myhuaweicloud.com"
-
-
-
---s3-endpoint
-Endpoint for Scaleway Object Storage.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
-
-- "s3.nl-ams.scw.cloud"
-
-- "s3.fr-par.scw.cloud"
-
-- "s3.pl-waw.scw.cloud"
-
-
-
---s3-endpoint
-Endpoint for StackPath Object Storage.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: StackPath
-- Type: string
-- Required: false
-- Examples:
-
-- "s3.us-east-2.stackpathstorage.com"
-
-- "s3.us-west-1.stackpathstorage.com"
-
-- "s3.eu-central-1.stackpathstorage.com"
-
-
-
---s3-endpoint
-Endpoint for Google Cloud Storage.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: GCS
-- Type: string
-- Required: false
-- Examples:
-
-- "https://storage.googleapis.com"
-
-- Google Cloud Storage endpoint
-
-
-
---s3-endpoint
-Endpoint for Storj Gateway.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Storj
-- Type: string
-- Required: false
-- Examples:
-
-- "gateway.storjshare.io"
-
-
-
---s3-endpoint
-Endpoint for Synology C2 Object Storage API.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Synology
-- Type: string
-- Required: false
-- Examples:
-
-- "eu-001.s3.synologyc2.net"
-
-- "eu-002.s3.synologyc2.net"
-
-- "us-001.s3.synologyc2.net"
-
-- "us-002.s3.synologyc2.net"
-
-- "tw-001.s3.synologyc2.net"
-
-
-
---s3-endpoint
-Endpoint for Tencent COS API.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: TencentCOS
-- Type: string
-- Required: false
-- Examples:
-
-- "cos.ap-beijing.myqcloud.com"
-
-- "cos.ap-nanjing.myqcloud.com"
-
-- "cos.ap-shanghai.myqcloud.com"
-
-- "cos.ap-guangzhou.myqcloud.com"
-
-- "cos.ap-nanjing.myqcloud.com"
-
-- "cos.ap-chengdu.myqcloud.com"
-
-- "cos.ap-chongqing.myqcloud.com"
-
-- "cos.ap-hongkong.myqcloud.com"
-
-- Hong Kong (China) Region
-
-- "cos.ap-singapore.myqcloud.com"
-
-- "cos.ap-mumbai.myqcloud.com"
-
-- "cos.ap-seoul.myqcloud.com"
-
-- "cos.ap-bangkok.myqcloud.com"
-
-- "cos.ap-tokyo.myqcloud.com"
-
-- "cos.na-siliconvalley.myqcloud.com"
-
-- "cos.na-ashburn.myqcloud.com"
-
-- "cos.na-toronto.myqcloud.com"
-
-- "cos.eu-frankfurt.myqcloud.com"
-
-- "cos.eu-moscow.myqcloud.com"
-
-- "cos.accelerate.myqcloud.com"
-
-- Use Tencent COS Accelerate Endpoint
-
-
-
---s3-endpoint
-Endpoint for RackCorp Object Storage.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
-
-- "s3.rackcorp.com"
-
-- Global (AnyCast) Endpoint
-
-- "au.s3.rackcorp.com"
-
-- Australia (Anycast) Endpoint
-
-- "au-nsw.s3.rackcorp.com"
-
-- Sydney (Australia) Endpoint
-
-- "au-qld.s3.rackcorp.com"
-
-- Brisbane (Australia) Endpoint
-
-- "au-vic.s3.rackcorp.com"
-
-- Melbourne (Australia) Endpoint
-
-- "au-wa.s3.rackcorp.com"
-
-- Perth (Australia) Endpoint
-
-- "ph.s3.rackcorp.com"
-
-- Manila (Philippines) Endpoint
-
-- "th.s3.rackcorp.com"
-
-- Bangkok (Thailand) Endpoint
-
-- "hk.s3.rackcorp.com"
-
-- HK (Hong Kong) Endpoint
-
-- "mn.s3.rackcorp.com"
-
-- Ulaanbaatar (Mongolia) Endpoint
-
-- "kg.s3.rackcorp.com"
-
-- Bishkek (Kyrgyzstan) Endpoint
-
-- "id.s3.rackcorp.com"
-
-- Jakarta (Indonesia) Endpoint
-
-- "jp.s3.rackcorp.com"
-
-- Tokyo (Japan) Endpoint
-
-- "sg.s3.rackcorp.com"
-
-- SG (Singapore) Endpoint
-
-- "de.s3.rackcorp.com"
-
-- Frankfurt (Germany) Endpoint
-
-- "us.s3.rackcorp.com"
-
-- USA (AnyCast) Endpoint
-
-- "us-east-1.s3.rackcorp.com"
-
-- New York (USA) Endpoint
-
-- "us-west-1.s3.rackcorp.com"
-
-- Freemont (USA) Endpoint
-
-- "nz.s3.rackcorp.com"
-
-- Auckland (New Zealand) Endpoint
-
-
-
---s3-endpoint
-Endpoint for Qiniu Object Storage.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
-
-- "s3-cn-east-1.qiniucs.com"
-
-- "s3-cn-east-2.qiniucs.com"
-
-- "s3-cn-north-1.qiniucs.com"
-
-- North China Endpoint 1
-
-- "s3-cn-south-1.qiniucs.com"
-
-- South China Endpoint 1
-
-- "s3-us-north-1.qiniucs.com"
-
-- North America Endpoint 1
-
-- "s3-ap-southeast-1.qiniucs.com"
-
-- Southeast Asia Endpoint 1
-
-- "s3-ap-northeast-1.qiniucs.com"
-
-- Northeast Asia Endpoint 1
-
-
-
---s3-endpoint
-Endpoint for S3 API.
-Required when using an S3 clone.
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: !AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox
-- Type: string
-- Required: false
-- Examples:
-
-- "objects-us-east-1.dream.io"
-
-- Dream Objects endpoint
-
-- "syd1.digitaloceanspaces.com"
-
-- DigitalOcean Spaces Sydney 1
-
-- "sfo3.digitaloceanspaces.com"
-
-- DigitalOcean Spaces San Francisco 3
-
-- "fra1.digitaloceanspaces.com"
-
-- DigitalOcean Spaces Frankfurt 1
-
-- "nyc3.digitaloceanspaces.com"
-
-- DigitalOcean Spaces New York 3
-
-- "ams3.digitaloceanspaces.com"
-
-- DigitalOcean Spaces Amsterdam 3
-
-- "sgp1.digitaloceanspaces.com"
-
-- DigitalOcean Spaces Singapore 1
-
-- "localhost:8333"
-
-- SeaweedFS S3 localhost
-
-- "s3.us-east-1.lyvecloud.seagate.com"
-
-- Seagate Lyve Cloud US East 1 (Virginia)
-
-- "s3.us-west-1.lyvecloud.seagate.com"
-
-- Seagate Lyve Cloud US West 1 (California)
-
-- "s3.ap-southeast-1.lyvecloud.seagate.com"
-
-- Seagate Lyve Cloud AP Southeast 1 (Singapore)
-
-- "s3.wasabisys.com"
-
-- Wasabi US East 1 (N. Virginia)
-
-- "s3.us-east-2.wasabisys.com"
-
-- Wasabi US East 2 (N. Virginia)
-
-- "s3.us-central-1.wasabisys.com"
-
-- Wasabi US Central 1 (Texas)
-
-- "s3.us-west-1.wasabisys.com"
-
-- Wasabi US West 1 (Oregon)
-
-- "s3.ca-central-1.wasabisys.com"
-
-- Wasabi CA Central 1 (Toronto)
-
-- "s3.eu-central-1.wasabisys.com"
-
-- Wasabi EU Central 1 (Amsterdam)
-
-- "s3.eu-central-2.wasabisys.com"
-
-- Wasabi EU Central 2 (Frankfurt)
-
-- "s3.eu-west-1.wasabisys.com"
-
-- Wasabi EU West 1 (London)
-
-- "s3.eu-west-2.wasabisys.com"
-
-- Wasabi EU West 2 (Paris)
-
-- "s3.ap-northeast-1.wasabisys.com"
-
-- Wasabi AP Northeast 1 (Tokyo) endpoint
-
-- "s3.ap-northeast-2.wasabisys.com"
-
-- Wasabi AP Northeast 2 (Osaka) endpoint
-
-- "s3.ap-southeast-1.wasabisys.com"
-
-- Wasabi AP Southeast 1 (Singapore)
-
-- "s3.ap-southeast-2.wasabisys.com"
-
-- Wasabi AP Southeast 2 (Sydney)
-
-- "storage.iran.liara.space"
-
-- "s3.ir-thr-at1.arvanstorage.ir"
-
-- ArvanCloud Tehran Iran (Simin) endpoint
-
-- "s3.ir-tbz-sh1.arvanstorage.ir"
-
-- ArvanCloud Tabriz Iran (Shahriar) endpoint
-
-
-
--s3-location-constraint
Location constraint - must be set to match the Region.
Used when creating buckets only.
@@ -14688,446 +13926,6 @@ $ rclone -q --s3-versions ls s3:cleanup-test
---s3-location-constraint
-Location constraint - must match endpoint.
-Used when creating buckets only.
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
-
-- "wuxi1"
-
-- "jinan1"
-
-- "ningbo1"
-
-- "shanghai1"
-
-- East China (Shanghai-1)
-
-- "zhengzhou1"
-
-- Central China (Zhengzhou)
-
-- "hunan1"
-
-- Central China (Changsha-1)
-
-- "zhuzhou1"
-
-- Central China (Changsha-2)
-
-- "guangzhou1"
-
-- South China (Guangzhou-2)
-
-- "dongguan1"
-
-- South China (Guangzhou-3)
-
-- "beijing1"
-
-- North China (Beijing-1)
-
-- "beijing2"
-
-- North China (Beijing-2)
-
-- "beijing4"
-
-- North China (Beijing-3)
-
-- "huhehaote1"
-
-- North China (Huhehaote)
-
-- "chengdu1"
-
-- Southwest China (Chengdu)
-
-- "chongqing1"
-
-- Southwest China (Chongqing)
-
-- "guiyang1"
-
-- Southwest China (Guiyang)
-
-- "xian1"
-
-- Nouthwest China (Xian)
-
-- "yunnan"
-
-- Yunnan China (Kunming)
-
-- "yunnan2"
-
-- Yunnan China (Kunming-2)
-
-- "tianjin1"
-
-- Tianjin China (Tianjin)
-
-- "jilin1"
-
-- Jilin China (Changchun)
-
-- "hubei1"
-
-- Hubei China (Xiangyan)
-
-- "jiangxi1"
-
-- Jiangxi China (Nanchang)
-
-- "gansu1"
-
-- "shanxi1"
-
-- Shanxi China (Taiyuan)
-
-- "liaoning1"
-
-- Liaoning China (Shenyang)
-
-- "hebei1"
-
-- Hebei China (Shijiazhuang)
-
-- "fujian1"
-
-- "guangxi1"
-
-- Guangxi China (Nanning)
-
-- "anhui1"
-
-
-
---s3-location-constraint
-Location constraint - must match endpoint.
-Used when creating buckets only.
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
-
-- "ir-thr-at1"
-
-- "ir-tbz-sh1"
-
-- Tabriz Iran (Shahriar)
-
-
-
---s3-location-constraint
-Location constraint - must match endpoint when using IBM Cloud Public.
-For on-prem COS, do not make a selection from this list, hit enter.
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: IBMCOS
-- Type: string
-- Required: false
-- Examples:
-
-- "us-standard"
-
-- US Cross Region Standard
-
-- "us-vault"
-
-- "us-cold"
-
-- "us-flex"
-
-- "us-east-standard"
-
-- US East Region Standard
-
-- "us-east-vault"
-
-- "us-east-cold"
-
-- "us-east-flex"
-
-- "us-south-standard"
-
-- US South Region Standard
-
-- "us-south-vault"
-
-- "us-south-cold"
-
-- "us-south-flex"
-
-- "eu-standard"
-
-- EU Cross Region Standard
-
-- "eu-vault"
-
-- "eu-cold"
-
-- "eu-flex"
-
-- "eu-gb-standard"
-
-- Great Britain Standard
-
-- "eu-gb-vault"
-
-- "eu-gb-cold"
-
-- "eu-gb-flex"
-
-- "ap-standard"
-
-- "ap-vault"
-
-- "ap-cold"
-
-- "ap-flex"
-
-- "mel01-standard"
-
-- "mel01-vault"
-
-- "mel01-cold"
-
-- "mel01-flex"
-
-- "tor01-standard"
-
-- "tor01-vault"
-
-- "tor01-cold"
-
-- "tor01-flex"
-
-
-
---s3-location-constraint
-Location constraint - the location where your bucket will be located and your data stored.
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
-
-- "global"
-
-- "au"
-
-- Australia (All locations)
-
-- "au-nsw"
-
-- NSW (Australia) Region
-
-- "au-qld"
-
-- QLD (Australia) Region
-
-- "au-vic"
-
-- VIC (Australia) Region
-
-- "au-wa"
-
-- Perth (Australia) Region
-
-- "ph"
-
-- Manila (Philippines) Region
-
-- "th"
-
-- Bangkok (Thailand) Region
-
-- "hk"
-
-- "mn"
-
-- Ulaanbaatar (Mongolia) Region
-
-- "kg"
-
-- Bishkek (Kyrgyzstan) Region
-
-- "id"
-
-- Jakarta (Indonesia) Region
-
-- "jp"
-
-- "sg"
-
-- "de"
-
-- Frankfurt (Germany) Region
-
-- "us"
-
-- "us-east-1"
-
-- "us-west-1"
-
-- "nz"
-
-- Auckland (New Zealand) Region
-
-
-
---s3-location-constraint
-Location constraint - must be set to match the Region.
-Used when creating buckets only.
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
-
-- "cn-east-1"
-
-- "cn-east-2"
-
-- "cn-north-1"
-
-- "cn-south-1"
-
-- "us-north-1"
-
-- North America Region 1
-
-- "ap-southeast-1"
-
-- Southeast Asia Region 1
-
-- "ap-northeast-1"
-
-- Northeast Asia Region 1
-
-
-
---s3-location-constraint
-Location constraint - must be set to match the Region.
-Leave blank if not sure. Used when creating buckets only.
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox
-- Type: string
-- Required: false
-
--s3-acl
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
@@ -15302,193 +14100,8 @@ $ rclone -q --s3-versions ls s3:cleanup-test
---s3-storage-class
-The storage class to use when storing new objects in OSS.
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Alibaba
-- Type: string
-- Required: false
-- Examples:
-
-- ""
-
-- "STANDARD"
-
-- Standard storage class
-
-- "GLACIER"
-
-- "STANDARD_IA"
-
-- Infrequent access storage mode
-
-
-
---s3-storage-class
-The storage class to use when storing new objects in ChinaMobile.
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
-
-- ""
-
-- "STANDARD"
-
-- Standard storage class
-
-- "GLACIER"
-
-- "STANDARD_IA"
-
-- Infrequent access storage mode
-
-
-
---s3-storage-class
-The storage class to use when storing new objects in Liara
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Liara
-- Type: string
-- Required: false
-- Examples:
-
-- "STANDARD"
-
-- Standard storage class
-
-
-
---s3-storage-class
-The storage class to use when storing new objects in ArvanCloud.
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
-
-- "STANDARD"
-
-- Standard storage class
-
-
-
---s3-storage-class
-The storage class to use when storing new objects in Tencent COS.
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: TencentCOS
-- Type: string
-- Required: false
-- Examples:
-
-- ""
-
-- "STANDARD"
-
-- Standard storage class
-
-- "ARCHIVE"
-
-- "STANDARD_IA"
-
-- Infrequent access storage mode
-
-
-
---s3-storage-class
-The storage class to use when storing new objects in S3.
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
-
-- ""
-
-- "STANDARD"
-
-- The Standard class for any upload.
-- Suitable for on-demand content like streaming or CDN.
-- Available in all regions.
-
-- "GLACIER"
-
-- Archived storage.
-- Prices are lower, but it needs to be restored first to be accessed.
-- Available in FR-PAR and NL-AMS regions.
-
-- "ONEZONE_IA"
-
-- One Zone - Infrequent Access.
-- A good choice for storing secondary backup copies or easily re-creatable data.
-- Available in the FR-PAR region only.
-
-
-
---s3-storage-class
-The storage class to use when storing new objects in Qiniu.
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
-
-- "STANDARD"
-
-- Standard storage class
-
-- "LINE"
-
-- Infrequent access storage mode
-
-- "GLACIER"
-
-- "DEEP_ARCHIVE"
-
-- Deep archive storage mode
-
-
-
Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
--s3-bucket-acl
Canned ACL used when creating buckets.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
@@ -15840,7 +14453,7 @@ Windows: "%USERPROFILE%\.aws\credentials"
- Config: encoding
- Env Var: RCLONE_S3_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
--s3-memory-pool-flush-time
@@ -15993,6 +14606,31 @@ Windows: "%USERPROFILE%\.aws\credentials"
Type: string
Required: false
+--s3-use-already-exists
+Set if rclone should report BucketAlreadyExists errors on bucket creation.
+At some point during the evolution of the s3 protocol, AWS started returning an AlreadyOwnedByYou
error when attempting to create a bucket that the user already owned, rather than a BucketAlreadyExists
error.
+Unfortunately, exactly what has been implemented by s3 clones is a little inconsistent: some return AlreadyOwnedByYou
, some return BucketAlreadyExists
and some return no error at all.
+This is important to rclone because it ensures the bucket exists by creating it on quite a lot of operations (unless --s3-no-check-bucket
is used).
+If rclone knows the provider can return AlreadyOwnedByYou
or returns no error then it can report BucketAlreadyExists
errors when the user attempts to create a bucket not owned by them. Otherwise rclone ignores the BucketAlreadyExists
error which can lead to confusion.
+This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.
+Properties:
+
+- Config: use_already_exists
+- Env Var: RCLONE_S3_USE_ALREADY_EXISTS
+- Type: Tristate
+- Default: unset
+
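+A minimal sketch (remote name and provider are placeholders) of forcing this behaviour on for an otherwise unknown provider:
+[mys3]
+type = s3
+provider = Other
+use_already_exists = true
+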
+--s3-use-multipart-uploads
+Set if rclone should use multipart uploads.
+You can change this if you want to disable the use of multipart uploads. This shouldn't be necessary in normal operation.
+This should be automatically set correctly for all providers rclone knows about - please make a bug report if not.
+Properties:
+
+- Config: use_multipart_uploads
+- Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
+- Type: Tristate
+- Default: unset
+
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
Here are the possible system metadata items for the s3 backend.
@@ -16374,6 +15012,9 @@ provider = GCS
access_key_id = your_access_key
secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
+Note that --s3-versions
does not work with GCS when it needs to do directory paging. Rclone will return the error:
+s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker
+This is Google bug #312292516.
DigitalOcean Spaces
Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.
To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by rclone config
for your access_key_id
and secret_access_key
.
@@ -17153,6 +15794,19 @@ secret_access_key = YOURSECRETACCESSKEY
region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
+Rclone Serve S3
+Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.
+For example, to serve remote:path
over s3, run the server like this:
+rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+This will be compatible with an rclone remote which is defined like this:
+[serves3]
+type = s3
+provider = Rclone
+endpoint = http://127.0.0.1:8080/
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+use_multipart_uploads = false
+Note that setting use_multipart_uploads = false
is to work around a bug which will be fixed in due course.
Scaleway
The Scaleway Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through their API and CLI or using any S3-compatible tool.
Scaleway provides an S3 interface which can be configured for use with rclone like this:
@@ -17938,6 +16592,127 @@ location_constraint =
acl =
server_side_encryption =
storage_class =
+Linode
+Here is an example of making a Linode Object Storage configuration. First run:
+rclone config
+This will guide you through an interactive setup process.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> linode
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Linode Object Storage
+ \ (Linode)
+[snip]
+provider> Linode
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option endpoint.
+Endpoint for Linode Object Storage API.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Atlanta, GA (USA), us-southeast-1
+ \ (us-southeast-1.linodeobjects.com)
+ 2 / Chicago, IL (USA), us-ord-1
+ \ (us-ord-1.linodeobjects.com)
+ 3 / Frankfurt (Germany), eu-central-1
+ \ (eu-central-1.linodeobjects.com)
+ 4 / Milan (Italy), it-mil-1
+ \ (it-mil-1.linodeobjects.com)
+ 5 / Newark, NJ (USA), us-east-1
+ \ (us-east-1.linodeobjects.com)
+ 6 / Paris (France), fr-par-1
+ \ (fr-par-1.linodeobjects.com)
+ 7 / Seattle, WA (USA), us-sea-1
+ \ (us-sea-1.linodeobjects.com)
+ 8 / Singapore ap-south-1
+ \ (ap-south-1.linodeobjects.com)
+ 9 / Stockholm (Sweden), se-sto-1
+ \ (se-sto-1.linodeobjects.com)
+10 / Washington, DC, (USA), us-iad-1
+ \ (us-iad-1.linodeobjects.com)
+endpoint> 3
+
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+If the acl is an empty string then no X-Amz-Acl: header is added and
+the default (private) will be used.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+[snip]
+acl>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Linode
+- access_key_id: ACCESS_KEY
+- secret_access_key: SECRET_ACCESS_KEY
+- endpoint: eu-central-1.linodeobjects.com
+Keep this "linode" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This will leave the config file looking like this.
+[linode]
+type = s3
+provider = Linode
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+endpoint = eu-central-1.linodeobjects.com
ArvanCloud
ArvanCloud Object Storage goes beyond the limited traditional file storage. It gives you access to backup and archived files and allows sharing. Files like profile images in the app, images sent by users or scanned documents can be stored securely and easily in their Object Storage service.
ArvanCloud provides an S3 interface which can be configured for use with rclone like this.
@@ -18153,7 +16928,7 @@ cos s3
For Netease NOS configure as per the configurator rclone config
setting the provider Netease
. This will automatically set force_path_style = false
which is necessary for it to run properly.
Petabox
Here is an example of making a Petabox configuration. First run:
-
+
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
@@ -18364,7 +17139,7 @@ y/n> n
Use the native protocol to take advantage of client-side encryption as well as to achieve the best possible download performance. Uploads will be erasure-coded locally, thus a 1GB upload will result in 2.68GB of data being uploaded to storage nodes across the network.
Use this backend and the S3 compatible Hosted Gateway to increase upload performance and reduce the load on your systems and network. Uploads will be encrypted and erasure-coded server-side, thus a 1GB upload will result in only 1GB of data being uploaded to storage nodes across the network.
For more detailed comparison please check the documentation of the storj backend.
-Limitations
+Limitations
rclone about
is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Synology C2 Object Storage
@@ -18550,9 +17325,9 @@ This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
-### Modified time
+### Modification times
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
`X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
in the Backblaze standard. Other tools should be able to use this as
a modified time.
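
As a quick illustration (bucket name is a placeholder), these
modification times show up in normal listings, e.g.:

    rclone lsl b2:bucket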
@@ -18908,7 +17683,7 @@ Properties:
- Config: upload_concurrency
- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
- Type: int
-- Default: 16
+- Default: 4
#### --b2-disable-checksum
@@ -18988,6 +17763,37 @@ Properties:
- Type: bool
- Default: false
+#### --b2-lifecycle
+
+Set the number of days deleted files should be kept when creating a bucket.
+
+On bucket creation, this parameter is used to create a lifecycle rule
+for the entire bucket.
+
+If lifecycle is 0 (the default) it does not create a lifecycle rule so
+the default B2 behaviour applies. This is to create versions of files
+on delete and overwrite and to keep them indefinitely.
+
+If lifecycle is >0 then it creates a single rule setting the number of
+days before a file that is deleted or overwritten is deleted
+permanently. This is known as daysFromHidingToDeleting in the b2 docs.
+
+The minimum value for this parameter is 1 day.
+
+You can also enable hard_delete in the config, which will mean
+deletions won't cause versions but overwrites will still cause
+versions to be made.
+
+See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket creation.
+
+
+Properties:
+
+- Config: lifecycle
+- Env Var: RCLONE_B2_LIFECYCLE
+- Type: int
+- Default: 0
+
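+A hedged example (bucket name is a placeholder), assuming the bucket is
+being newly created so that the lifecycle rule gets applied:
+
+    rclone mkdir --b2-lifecycle 30 b2:bucket
+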
#### --b2-encoding
The encoding for the backend.
@@ -18998,9 +17804,76 @@ Properties:
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+## Backend commands
+
+Here are the commands specific to the b2 backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### lifecycle
+
+Read or set the lifecycle for a bucket
+
+ rclone backend lifecycle remote: [options] [<arguments>+]
+
+This command can be used to read or set the lifecycle for a bucket.
+
+Usage Examples:
+
+To show the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket
+
+This will dump something like this showing the lifecycle rules.
+
+ [
+ {
+ "daysFromHidingToDeleting": 1,
+ "daysFromUploadingToHiding": null,
+ "fileNamePrefix": ""
+ }
+ ]
+
+If there are no lifecycle rules (the default) then it will just return [].
+
+To reset the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
+ rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
+
+This will run and then print the new lifecycle rules as above.
+
+Rclone only lets you set lifecycles for the whole bucket with the
+fileNamePrefix = "".
+
+You can't disable versioning with B2. The best you can do is to set
+the daysFromHidingToDeleting to 1 day. You can also enable hard_delete
+in the config, which will mean deletions won't cause versions but
+overwrites will still cause versions to be made.
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
+
+See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
+
+
+Options:
+
+- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
+- "daysFromUploadingToHiding": This many days after uploading a file is hidden
+
## Limitations
@@ -19119,7 +17992,7 @@ Here is how to do it.
Delete this remote y/e/d> y
-### Modified time and hashes
+### Modification times and hashes
Box allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -19362,7 +18235,7 @@ Properties:
Impersonate this user ID when using a service account.
-Settng this flag allows rclone, when using a JWT service account, to
+Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can found for
@@ -19390,7 +18263,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
@@ -20232,7 +19105,7 @@ revert (sometimes silently) to time/size comparison if compatible hashsums
between source and target are not found.
-### Modified time
+### Modification times
Chunker stores modification times using the wrapped remote so support
depends on that. For a small non-chunked file the chunker overlay simply
@@ -20512,7 +19385,7 @@ To copy a local directory to an ShareFile directory called backup
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-### Modified time and hashes
+### Modification times and hashes
ShareFile allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -20707,7 +19580,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -21006,7 +19879,7 @@ Example:
`1/12/qgm4avr35m5loi1th53ato71v0`
-### Modified time and hashes
+### Modification times and hashes
Crypt stores modification times using the underlying remote so support
depends on that.
@@ -21313,7 +20186,7 @@ has a header and is divided into chunks.
The initial nonce is generated from the operating systems crypto
strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
-The chance of a nonce being re-used is minuscule. If you wrote an
+The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of re-using a nonce.
@@ -21700,7 +20573,7 @@ You can then use team folders like this `remote:/TeamFolder` and
A leading `/` for a Dropbox personal account will do nothing, but it
will take an extra HTTP transaction so it should be avoided.
-### Modified time and Hashes
+### Modification times and hashes
Dropbox supports modified times, but the only way to set a
modification time is to re-upload the file.
@@ -21946,6 +20819,30 @@ Properties:
- Type: bool
- Default: false
+#### --dropbox-pacer-min-sleep
+
+Minimum time to sleep between API calls.
+
+Properties:
+
+- Config: pacer_min_sleep
+- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
+- Type: Duration
+- Default: 10ms
+
+#### --dropbox-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_DROPBOX_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
+
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@@ -22032,30 +20929,6 @@ Properties:
- Type: Duration
- Default: 10m0s
-#### --dropbox-pacer-min-sleep
-
-Minimum time to sleep between API calls.
-
-Properties:
-
-- Config: pacer_min_sleep
-- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
-- Type: Duration
-- Default: 10ms
-
-#### --dropbox-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_DROPBOX_ENCODING
-- Type: MultiEncoder
-- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
-
## Limitations
@@ -22151,7 +21024,7 @@ To copy a local directory to an Enterprise File Fabric directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
The Enterprise File Fabric allows modification times to be set on
files accurate to 1 second. These will be used to detect whether
@@ -22313,7 +21186,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FILEFABRIC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
@@ -22702,7 +21575,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FTP_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,RightSpace,Dot
- Examples:
- "Asterisk,Ctl,Dot,Slash"
@@ -22745,7 +21618,7 @@ at present.
The `ftp_proxy` environment variable is not currently supported.
-#### Modified time
+### Modification times
File modification time (timestamps) is supported to 1 second resolution
for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
@@ -22902,7 +21775,7 @@ Eg `--header-upload "Content-Type text/potato"`
Note that the last of these is for setting custom metadata in the form
`--header-upload "x-goog-meta-key: value"`
-### Modification time
+### Modification times
Google Cloud Storage stores md5sum natively.
Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
@@ -23351,7 +22224,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_GCS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
@@ -23417,6 +22290,8 @@ use. This changes what type of token is granted to rclone. [The
scopes are defined
here](https://developers.google.com/drive/v3/web/about-auth).
+A comma-separated list is allowed e.g. `drive.readonly,drive.file`.
+
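+One way to set such a scope when creating a remote from the command
+line might look like this (the remote name `gdrive` is illustrative):
+
+    rclone config create gdrive drive scope "drive.readonly,drive.file"
+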
The scopes are
#### drive
@@ -23607,10 +22482,14 @@ large folder (10600 directories, 39000 files):
- without `--fast-list`: 22:05 min
- with `--fast-list`: 58s
-### Modified time
+### Modification times and hashes
Google drive stores modification times accurate to 1 ms.
+Hash algorithms MD5, SHA1 and SHA256 are supported. Note, however,
+that a small fraction of files uploaded may not have SHA1 or SHA256
+hashes, especially if they were uploaded before 2018.
+
### Restricted filename characters
Only Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8),
@@ -23830,7 +22709,7 @@ Properties:
#### --drive-scope
-Scope that rclone should use when requesting access from drive.
+Comma separated list of scopes that rclone should use when requesting access from drive.
Properties:
@@ -24018,15 +22897,40 @@ Properties:
- Type: bool
- Default: false
+#### --drive-show-all-gdocs
+
+Show all Google Docs including non-exportable ones in listings.
+
+If you try a server side copy on a Google Form without this flag, you
+will get this error:
+
+ No export formats found for "application/vnd.google-apps.form"
+
+However adding this flag will allow the form to be server side copied.
+
+Note that rclone doesn't add extensions to the Google Docs file names
+in this mode.
+
+Do **not** use this flag when trying to download Google Docs - rclone
+will fail to download them.
+
+
+Properties:
+
+- Config: show_all_gdocs
+- Env Var: RCLONE_DRIVE_SHOW_ALL_GDOCS
+- Type: bool
+- Default: false
+
#### --drive-skip-checksum-gphotos
-Skip MD5 checksum on Google photos and videos only.
+Skip checksums on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
-blank MD5 checksum.
+blank checksum.
Google photos are identified by being in the "photos" space.
@@ -24480,6 +23384,98 @@ Properties:
- Type: bool
- Default: true
+#### --drive-metadata-owner
+
+Control whether owner should be read or written in metadata.
+
+Owner is a standard part of the file metadata so is easy to read. But it
+isn't always desirable to set the owner from the metadata.
+
+Note that you can't set the owner on Shared Drives, and that setting
+ownership will generate an email to the new owner (this can't be
+disabled), and you can't transfer ownership to someone outside your
+organization.
+
+
+Properties:
+
+- Config: metadata_owner
+- Env Var: RCLONE_DRIVE_METADATA_OWNER
+- Type: Bits
+- Default: read
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
+#### --drive-metadata-permissions
+
+Control whether permissions should be read or written in metadata.
+
+Reading permissions metadata from files can be done quickly, but it
+isn't always desirable to set the permissions from the metadata.
+
+Note that rclone drops any inherited permissions on Shared Drives and
+any owner permission on My Drives as these are duplicated in the owner
+metadata.
+
+
+Properties:
+
+- Config: metadata_permissions
+- Env Var: RCLONE_DRIVE_METADATA_PERMISSIONS
+- Type: Bits
+- Default: off
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
+#### --drive-metadata-labels
+
+Control whether labels should be read or written in metadata.
+
+Reading labels metadata from files takes an extra API transaction and
+will slow down listings. It isn't always desirable to set the labels
+from the metadata.
+
+The format of labels is documented in the drive API documentation at
+https://developers.google.com/drive/api/reference/rest/v3/Label -
+rclone just provides a JSON dump of this format.
+
+When setting labels, the label and fields must already exist - rclone
+will not create them. This means that if you are transferring labels
+from two different accounts you will have to create the labels in
+advance and use the metadata mapper to translate the IDs between the
+two accounts.
+
+
+Properties:
+
+- Config: metadata_labels
+- Env Var: RCLONE_DRIVE_METADATA_LABELS
+- Type: Bits
+- Default: off
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
#### --drive-encoding
The encoding for the backend.
@@ -24490,7 +23486,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: InvalidUtf8
#### --drive-env-auth
@@ -24511,6 +23507,29 @@ Properties:
- "true"
- Get GCP IAM credentials from the environment (env vars or IAM).
+### Metadata
+
+User metadata is stored in the properties field of the drive object.
+
+Here are the possible system metadata items for the drive backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation) with ms accuracy. Note that this is only writable on fresh uploads - it can't be written for updates. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+| content-type | The MIME type of the file. | string | text/plain | N |
+| copy-requires-writer-permission | Whether the options to copy, print, or download this file, should be disabled for readers and commenters. | boolean | true | N |
+| description | A short description of the file. | string | Contract for signing | N |
+| folder-color-rgb | The color for a folder or a shortcut to a folder as an RGB hex string. | string | 881133 | N |
+| labels | Labels attached to this file in a JSON dump of Google drive format. Enable with --drive-metadata-labels. | JSON | [] | N |
+| mtime | Time of last modification with ms accuracy. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+| owner | The owner of the file. Usually an email address. Enable with --drive-metadata-owner. | string | user@example.com | N |
+| permissions | Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions. | JSON | {} | N |
+| starred | Whether the user has starred the file. | boolean | false | N |
+| viewed-by-me | Whether the file has been viewed by this user. | boolean | true | **Y** |
+| writers-can-share | Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives. | boolean | false | N |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
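+As an illustration (the remote name and path are placeholders), the
+metadata for a single file can be inspected with:
+
+    rclone lsjson --metadata --drive-metadata-labels read drive:path/to/file
+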
## Backend commands
Here are the commands specific to the drive backend.
@@ -24774,6 +23793,11 @@ Waiting a moderate period of time between attempts (estimated to be
approximately 1 hour) and/or not using --fast-list both seem to be
effective in preventing the problem.
+### SHA1 or SHA256 hashes may be missing
+
+All files have MD5 hashes, but a small fraction of files uploaded may
+not have SHA1 or SHA256 hashes, especially if they were uploaded before 2018.
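+
+If you want to check which hashes are stored for your files, listing
+them is one way to do it (the path is illustrative):
+
+    rclone hashsum SHA1 drive:path/to/folder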
+
## Making your own client_id
When you use rclone with Google drive in its default configuration you
@@ -25136,7 +24160,91 @@ This will guide you through an interactive setup process:
Properties:

- Config: encoding
- Env Var: RCLONE_GPHOTOS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
+
+#### --gphotos-batch-mode
+
+Upload file batching sync|async|off.
+
+This sets the batch mode used by rclone.
+
+This has 3 possible values
+
+- off - no batching
+- sync - batch uploads and check completion (default)
+- async - batch upload and don't check completion
+
+Rclone will close any outstanding batches when it exits which may make
+a delay on quit.
+
+Properties:
+
+- Config: batch_mode
+- Env Var: RCLONE_GPHOTOS_BATCH_MODE
+- Type: string
+- Default: "sync"
+
+#### --gphotos-batch-size
+
+Max number of files in upload batch.
+
+This sets the batch size of files to upload. It has to be less than 50.
+
+By default this is 0 which means rclone will calculate the batch size
+depending on the setting of batch_mode.
+
+- batch_mode: async - default batch_size is 50
+- batch_mode: sync - default batch_size is the same as --transfers
+- batch_mode: off - not in use
+
+Rclone will close any outstanding batches when it exits which may make
+a delay on quit.
+
+Setting this is a great idea if you are uploading lots of small files
+as it will make them a lot quicker (see the example below). You can
+use --transfers 32 to maximise throughput.
+
+Properties:
+
+- Config: batch_size
+- Env Var: RCLONE_GPHOTOS_BATCH_SIZE
+- Type: int
+- Default: 0
+
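+For example (paths and the album name are placeholders), uploading a
+large number of small photos with batching left at its default `sync`
+mode and more parallel transfers might look like:
+
+    rclone copy --transfers 32 /path/to/photos remote:album/my_album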
+
+#### --gphotos-batch-timeout
+
+Max time to allow an idle upload batch before uploading.
+
+If an upload batch is idle for more than this long then it will be uploaded.
+
+The default for this is 0 which means rclone will choose a sensible
+default based on the batch_mode in use.
+
+- batch_mode: async - default batch_timeout is 10s
+- batch_mode: sync - default batch_timeout is 1s
+- batch_mode: off - not in use
+
+Properties:
+
+- Config: batch_timeout
+- Env Var: RCLONE_GPHOTOS_BATCH_TIMEOUT
+- Type: Duration
+- Default: 0s
+
+#### --gphotos-batch-commit-timeout
+
+Max time to wait for a batch to finish committing
+
+Properties:
+
+- Config: batch_commit_timeout
+- Env Var: RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT
+- Type: Duration
+- Default: 10m0s
+
## Limitations
@@ -25178,7 +24286,7 @@ This will guide you through an interactive setup process:
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to `upload` then uploaded the same image to `album/my_album`, the filename of the image in `album/my_album` will be what it was uploaded with initially, not what you uploaded it with to `album`. In practice this shouldn't cause too many problems.
-### Modified time
+### Modification times

The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.
@@ -25566,7 +24674,7 @@ For this docker image the remote needs to be configured like this:
You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data
uploaded will be lost.)
-### Modified time
+### Modification times
Time accurate to 1 second is stored.
@@ -25596,16 +24704,16 @@ Here are the Standard options specific to hdfs (Hadoop distributed file system).
#### --hdfs-namenode
-Hadoop name node and port.
+Hadoop name nodes and ports.
-E.g. "namenode:8020" to connect to host namenode at port 8020.
+E.g. "namenode-1:8020,namenode-2:8020,..." to connect to host namenodes at port 8020.
Properties:
- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
-- Type: string
-- Required: true
+- Type: CommaSepList
+- Default:
#### --hdfs-username
@@ -25669,7 +24777,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
@@ -25757,7 +24865,7 @@ Using
the process is very similar to the process of initial setup exemplified before.
-### Modified time and hashes
+### Modification times and hashes
HiDrive allows modification times to be set on objects accurate to 1 second.
@@ -26049,7 +25157,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HIDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Dot
@@ -26148,7 +25256,7 @@ Sync the remote `directory` to `/home/local/directory`, deleting any excess file
This remote is read only - you can't upload files to an HTTP server.
-### Modified time
+### Modification times
Most HTTP servers store time accurate to 1 second.
@@ -26255,6 +25363,46 @@ Properties:
- Type: bool
- Default: false
+## Backend commands
+
+Here are the commands specific to the http backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### set
+
+Set command for updating the config parameters.
+
+ rclone backend set remote: [options] [<arguments>+]
+
+This set command can be used to update the config parameters
+for a running http backend.
+
+Usage Examples:
+
+ rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: -o url=https://example.com
+
+The option keys are named as they are in the config file.
+
+This rebuilds the connection to the http backend when it is called with
+the new parameters. Only new parameters need be passed as the values
+will default to those currently in use.
+
+It doesn't return anything.
+
+
## Limitations
@@ -26266,6 +25414,166 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+# ImageKit
+This is a backend for the [ImageKit.io](https://imagekit.io/) storage service.
+
+#### About ImageKit
+[ImageKit.io](https://imagekit.io/) provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.
+
+
+#### Accounts & Pricing
+
+To use this backend, you need to [create an account](https://imagekit.io/registration/) on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans).
+
+## Configuration
+
+Here is an example of making an imagekit configuration.
+
+Firstly create an [ImageKit.io](https://imagekit.io/) account and choose a plan.
+
+You will need to log in and get the `publicKey` and `privateKey` for your account from the developer section.
+
+Now run
+
+    rclone config
+
+This will guide you through an interactive setup process:
+
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+Enter the name for the new remote.
+name> imagekit-media-library
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / ImageKit.io (imagekit)
+[snip]
+Storage> imagekit
+Option endpoint.
+You can find your ImageKit.io URL endpoint in your dashboard
+Enter a value.
+endpoint> https://ik.imagekit.io/imagekit_id
+Option public_key.
+You can find your ImageKit.io public key in your dashboard
+Enter a value.
+public_key> public_****************************
+Option private_key.
+You can find your ImageKit.io private key in your dashboard
+Enter a value.
+private_key> private_****************************
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Configuration complete.
+Options:
+- type: imagekit
+- endpoint: https://ik.imagekit.io/imagekit_id
+- public_key: public_****************************
+- private_key: private_****************************
+Keep this "imagekit-media-library" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+List directories in the top level of your Media Library
+
+    rclone lsd imagekit-media-library:
+
+Make a new directory.
+
+    rclone mkdir imagekit-media-library:directory
+
+List the contents of a directory.
+
+    rclone ls imagekit-media-library:directory
+
+### Modified time and hashes
+
+ImageKit does not support modification times or hashes yet.
+
+### Checksums
+
+No checksums are supported.
+
+
+### Standard options
+
+Here are the Standard options specific to imagekit (ImageKit.io).
+
+#### --imagekit-endpoint
+
+You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_IMAGEKIT_ENDPOINT
+- Type: string
+- Required: true
+
+#### --imagekit-public-key
+
+You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: public_key
+- Env Var: RCLONE_IMAGEKIT_PUBLIC_KEY
+- Type: string
+- Required: true
+
+#### --imagekit-private-key
+
+You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: private_key
+- Env Var: RCLONE_IMAGEKIT_PRIVATE_KEY
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to imagekit (ImageKit.io).
+
+#### --imagekit-only-signed
+
+If you have configured `Restrict unsigned image URLs` in your dashboard settings, set this to true.
+
+Properties:
+
+- Config: only_signed
+- Env Var: RCLONE_IMAGEKIT_ONLY_SIGNED
+- Type: bool
+- Default: false
+
+#### --imagekit-versions
+
+Include old versions in directory listings.
+
+Properties:
+
+- Config: versions
+- Env Var: RCLONE_IMAGEKIT_VERSIONS
+- Type: bool
+- Default: false
+
+#### --imagekit-upload-tags
+
+Tags to add to the uploaded files, e.g. "tag1,tag2".
+
+Properties:
+
+- Config: upload_tags
+- Env Var: RCLONE_IMAGEKIT_UPLOAD_TAGS
+- Type: string
+- Required: false
+
+#### --imagekit-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_IMAGEKIT_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket
+
+### Metadata
+
+Any metadata supported by the underlying remote is read and written.
+
+Here are the possible system metadata items for the imagekit backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| aws-tags | AI generated tags by AWS Rekognition associated with the image | string | tag1,tag2 | **Y** |
+| btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+| custom-coordinates | Custom coordinates of the file | string | 0,0,100,100 | **Y** |
+| file-type | Type of the file | string | image | **Y** |
+| google-tags | AI generated tags by Google Cloud Vision associated with the image | string | tag1,tag2 | **Y** |
+| has-alpha | Whether the image has alpha channel or not | bool | | **Y** |
+| height | Height of the image or video in pixels | int | | **Y** |
+| is-private-file | Whether the file is private or not | bool | | **Y** |
+| size | Size of the object in bytes | int64 | | **Y** |
+| tags | Tags associated with the file | string | tag1,tag2 | **Y** |
+| width | Width of the image or video in pixels | int | | **Y** |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
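+
+For example (remote and path names follow the configuration above), the
+stored metadata for uploaded files can be listed with:
+
+    rclone lsjson --metadata imagekit-media-library:directory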
+
+
+
# Internet Archive
The Internet Archive backend utilizes Items on [archive.org](https://archive.org/)
@@ -26457,7 +25765,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_INTERNETARCHIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
### Metadata
@@ -26635,7 +25943,7 @@ them. Generally you should avoid these, unless you know what you are doing.
### --fast-list
-This remote supports `--fast-list` which allows you to use fewer
+This backend supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
@@ -26643,10 +25951,11 @@ Note that the implementation in Jottacloud always uses only a single
API request to get the entire list, so for large folders this could
lead to long wait time before the first results are shown.
-Note also that with rclone version 1.58 and newer information about
-[MIME types](https://rclone.org/overview/#mime-type) are not available when using `--fast-list`.
+Note also that with rclone version 1.58 and newer, information about
+[MIME types](https://rclone.org/overview/#mime-type) and metadata item [utime](#metadata)
+are not available when using `--fast-list`.
-### Modified time and hashes
+### Modification times and hashes
Jottacloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -26845,9 +26154,24 @@ Properties:
- Config: encoding
- Env Var: RCLONE_JOTTACLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
+### Metadata
+
+Jottacloud has limited support for metadata, currently an extended set of timestamps.
+
+Here are the possible system metadata items for the jottacloud backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation), read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| content-type | MIME type, also known as media type | string | text/plain | **Y** |
+| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| utime | Time of last upload, when current revision was created, generated by backend | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
## Limitations
@@ -26977,34 +26301,6 @@ Properties:
- Type: string
- Required: true
-#### --koofr-password
-
-Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: digistorage
-- Type: string
-- Required: true
-
-#### --koofr-password
-
-Your password for rclone (generate one at your service's settings page).
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: other
-- Type: string
-- Required: true
-
### Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
@@ -27045,7 +26341,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_KOOFR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -27086,6 +26382,49 @@ This will guide you through an interactive setup process:
No remotes found, make a new one? n) New remote s) Set configuration password q) Quit config n/s/q> n name> other Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] 22 / Koofr, Digi Storage and other Koofr-compatible storage providers (koofr) [snip] Storage> koofr Option provider. Choose your storage provider. Choose a number from below, or type in your own value. Press Enter to leave empty. 1 / Koofr, https://app.koofr.net/ (koofr) 2 / Digi Storage, https://storage.rcs-rds.ro/ (digistorage) 3 / Any other Koofr API compatible storage service (other) provider> 3 Option endpoint. The Koofr API endpoint to use. Enter a value. endpoint> https://koofr.other.org Option user. Your user name. Enter a value. user> USERNAME Option password. Your password for rclone (generate one at your service's settings page). Choose an alternative below. y) Yes, type in my own password g) Generate random password y/g> y Enter the password: password: Confirm the password: password: Edit advanced config? y) Yes n) No (default) y/n> n -------------------- [other] type = koofr provider = other endpoint = https://koofr.other.org user = USERNAME password = *** ENCRYPTED *** -------------------- y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y
+# Linkbox
+
+Linkbox is [a private cloud drive](https://linkbox.to/).
+
+## Configuration
+
+Here is an example of making a remote for Linkbox.
+
+First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+Enter name for new remote.
+name> remote
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / Linkbox (linkbox)
+Storage> XX
+Option token.
+Token from https://www.linkbox.to/admin/account
+Enter a value.
+token> testFromCLToken
+Configuration complete.
+Options:
+- type: linkbox
+- token: XXXXXXXXXXX
+Keep this "linkbox" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+
+
+### Standard options
+
+Here are the Standard options specific to linkbox (Linkbox).
+
+#### --linkbox-token
+
+Token from https://www.linkbox.to/admin/account
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_LINKBOX_TOKEN
+- Type: string
+- Required: true
+
+
+
+## Limitations
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can't be used in JSON strings.
+
# Mail.ru Cloud
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
@@ -27148,17 +26487,15 @@ excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
-### Modified time
+### Modification times and hashes
Files support a modification time attribute with up to 1 second precision.
Directories do not have a modification time, which is shown as "Jan 1 1970".
-### Hash checksums
-
-Hash sums use a custom Mail.ru algorithm based on SHA1.
+File hashes are supported, with a custom Mail.ru algorithm based on SHA1.
If file size is less than or equal to the SHA1 block size (20 bytes),
its hash is simply its data right-padded with zero bytes.
-Hash sum of a larger file is computed as a SHA1 sum of the file data
+The hash of a larger file is computed as the SHA1 of the file data
bytes concatenated with a decimal representation of the data length.
### Emptying Trash
@@ -27436,7 +26773,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_MAILRU_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -27492,7 +26829,7 @@ To copy a local directory to an Mega directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
Mega does not support modification times or hashes yet.
@@ -27679,7 +27016,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_MEGA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
@@ -27736,7 +27073,7 @@ testing or with an rclone server or rclone mount, e.g.
rclone serve webdav :memory:
rclone serve sftp :memory:
-### Modified time and hashes
+### Modification times and hashes
The memory backend supports MD5 hashes and modification times accurate to 1 ns.
@@ -28021,10 +27358,10 @@ This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
-### Modified time
+### Modification times and hashes
-The modified time is stored as metadata on the object with the `mtime`
-key. It is stored using RFC3339 Format time with nanosecond
+The modification time is stored as metadata on the object with the
+`mtime` key. It is stored using RFC3339 Format time with nanosecond
precision. The metadata is supplied during directory listings so
there is no performance overhead to using it.
@@ -28034,6 +27371,10 @@ flag. Note that rclone can't set `LastModified`, so using the
`--update` flag when syncing is recommended if using
`--use-server-modtime`.
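
For example (the container name is a placeholder), such a sync might look like:

    rclone sync --interactive --update --use-server-modtime /path/to/dir remote:container
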
+MD5 hashes are stored with blobs. However blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5
+hashes, e.g. the local disk.
+
### Performance
When uploading large files, increasing the value of
@@ -28062,12 +27403,6 @@ These only get replaced if they are the last character in the name:
Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.
-### Hashes
-
-MD5 hashes are stored with blobs. However blobs that were uploaded in
-chunks only have an MD5 if the source remote was capable of MD5
-hashes, e.g. the local disk.
-
### Authentication {#authentication}
There are a number of ways of supplying credentials for Azure Blob
@@ -28621,10 +27956,10 @@ Properties:
#### --azureblob-access-tier
-Access tier of blob: hot, cool or archive.
+Access tier of blob: hot, cool, cold or archive.
-Archived blobs can be restored by setting access tier to hot or
-cool. Leave blank if you intend to use default access tier, which is
+Archived blobs can be restored by setting access tier to hot, cool or
+cold. Leave blank if you intend to use default access tier, which is
set at account level
If there is no "access tier" specified, rclone doesn't apply any tier.
@@ -28632,7 +27967,7 @@ rclone performs "Set Tier" operation on blobs while uploading, if obje
are not modified, specifying "access tier" to new one will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
-tiering blob to "Hot" or "Cool".
+tiering blob to "Hot", "Cool" or "Cold".
Properties:
@@ -28713,7 +28048,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
#### --azureblob-public-access
@@ -28822,6 +28157,651 @@ advanced settings, setting it to
`http(s)://<host>:<port>/devstoreaccount1`
(e.g. `http://10.254.2.5:10000/devstoreaccount1`).
+# Microsoft Azure Files Storage
+
+Paths are specified as `remote:` You may put subdirectories in too,
+e.g. `remote:path/to/dir`.
+
+## Configuration
+
+Here is an example of making a Microsoft Azure Files Storage
+configuration. For a remote called `remote`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Microsoft Azure Files Storage "azurefiles"
+[snip]
+Option account.
+Azure Storage Account Name.
+Set this to the Azure Storage Account Name in use.
+Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+If this is blank and if env_auth is set it will be read from the
+environment variable AZURE_STORAGE_ACCOUNT_NAME if possible.
+Enter a value. Press Enter to leave empty.
+account> account_name
+Option share_name.
+Azure Files Share Name.
+This is required and is the name of the share to access.
+Enter a value. Press Enter to leave empty.
+share_name> share_name
+Option env_auth.
+Read credentials from runtime (environment variables, CLI or MSI).
+See the authentication docs for full info.
+Enter a boolean value (true or false). Press Enter for the default (false).
+env_auth>
+Option key.
+Storage Account Shared Key.
+Leave blank to use SAS URL or connection string.
+Enter a value. Press Enter to leave empty.
+key> base64encodedkey==
+Option sas_url.
+SAS URL.
+Leave blank if using account/key or connection string.
+Enter a value. Press Enter to leave empty.
+sas_url>
+Option connection_string.
+Azure Files Connection String.
+Enter a value. Press Enter to leave empty.
+connection_string>
+[snip]
+Configuration complete.
+Options:
+- type: azurefiles
+- account: account_name
+- share_name: share_name
+- key: base64encodedkey==
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+
+Once configured you can use rclone.
+
+See all files in the top level:
+
+ rclone lsf remote:
+
+Make a new directory in the root:
+
+ rclone mkdir remote:dir
+
+Recursively List the contents:
+
+ rclone ls remote:
+
+Sync `/home/local/directory` to the remote directory, deleting any
+excess files in the directory.
+
+ rclone sync --interactive /home/local/directory remote:dir
+
+### Modified time
+
+The modified time is stored as Azure standard `LastModified` time on
+files.
+
+### Performance
+
+When uploading large files, increasing the value of
+`--azurefiles-upload-concurrency` will increase performance at the cost
+of using more memory. The default of 16 is set quite conservatively to
+use less memory. It may be necessary to raise it to 64 or higher to
+fully utilize a 1 GBit/s link with a single file transfer.
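+
+For example (paths are placeholders), a single large file copy might use:
+
+    rclone copy --azurefiles-upload-concurrency 64 /path/to/big.file remote:dir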
+
+### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| " | 0x22 | " |
+| * | 0x2A | * |
+| : | 0x3A | : |
+| < | 0x3C | < |
+| > | 0x3E | > |
+| ? | 0x3F | ? |
+| \ | 0x5C | \ |
+| \| | 0x7C | | |
+
+File names can also not end with the following characters.
+These only get replaced if they are the last character in the name:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| . | 0x2E | . |
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can't be used in JSON strings.
+
+### Hashes
+
+MD5 hashes are stored with files. Not all files will have MD5 hashes
+as these have to be uploaded with the file.
+
+### Authentication {#authentication}
+
+There are a number of ways of supplying credentials for Azure Files
+Storage. Rclone tries them in the order of the sections below.
+
+#### Env Auth
+
+If the `env_auth` config parameter is `true` then rclone will pull
+credentials from the environment or runtime.
+
+It tries these authentication methods in this order:
+
+1. Environment Variables
+2. Managed Service Identity Credentials
+3. Azure CLI credentials (as used by the az tool)
+
+These are described in the following sections
+
+##### Env Auth: 1. Environment Variables
+
+If `env_auth` is set and environment variables are present rclone
+authenticates a service principal with a secret or certificate, or a
+user with a password, depending on which environment variable are set.
+It reads configuration from these variables, in the following order:
+
+1. Service principal with client secret
+ - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `AZURE_CLIENT_ID`: the service principal's client ID
+ - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets
+2. Service principal with certificate
+ - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `AZURE_CLIENT_ID`: the service principal's client ID
+ - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key.
+ - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file.
+ - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
+3. User with username and password
+ - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations".
+ - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to
+ - `AZURE_USERNAME`: a username (usually an email address)
+ - `AZURE_PASSWORD`: the user's password
+4. Workload Identity
+ - `AZURE_TENANT_ID`: Tenant to authenticate in.
+ - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to.
+ - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file.
+ - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
+
+
+##### Env Auth: 2. Managed Service Identity Credentials
+
+When using Managed Service Identity, if the VM(SS) on which this
+program is running has a system-assigned identity, it will be used by
+default. If the resource has no system-assigned but exactly one
+user-assigned identity, the user-assigned identity will be used by
+default.
+
+If the resource has multiple user-assigned identities you will need to
+unset `env_auth` and set `use_msi` instead. See the [`use_msi`
+section](#use_msi).
+
+##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
+
+Credentials created with the `az` tool can be picked up using `env_auth`.
+
+For example if you were to login with a service principal like this:
+
+ az login --service-principal -u XXX -p XXX --tenant XXX
+
+Then you could access rclone resources like this:
+
+ rclone lsf :azurefiles,env_auth,account=ACCOUNT:
+
+Or
+
+ rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles:
+
+#### Account and Shared Key
+
+This is the most straightforward and least flexible way. Just fill
+in the `account` and `key` lines and leave the rest blank.
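+
+A minimal sketch of such a config section, reusing the placeholder
+values from the walkthrough above:
+
+    [remote]
+    type = azurefiles
+    account = account_name
+    share_name = share_name
+    key = base64encodedkey==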
+
+#### SAS URL
+
+To use it leave `account`, `key` and `connection_string` blank and fill in `sas_url`.
+
+#### Connection String
+
+To use it leave `account`, `key` and "sas_url" blank and fill in `connection_string`.
+
+#### Service principal with client secret
+
+If these variables are set, rclone will authenticate with a service principal with a client secret.
+
+- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
+- `client_id`: the service principal's client ID
+- `client_secret`: one of the service principal's client secrets
+
+The credentials can also be placed in a file using the
+`service_principal_file` configuration option.
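+
+A sketch of the equivalent config section (all values are placeholders):
+
+    [remote]
+    type = azurefiles
+    account = account_name
+    share_name = share_name
+    tenant = 00000000-0000-0000-0000-000000000000
+    client_id = 00000000-0000-0000-0000-000000000000
+    client_secret = CLIENT_SECRET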
+
+#### Service principal with certificate
+
+If these variables are set, rclone will authenticate with a service principal with certificate.
+
+- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
+- `client_id`: the service principal's client ID
+- `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key.
+- `client_certificate_password`: (optional) password for the certificate file.
+- `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
+
+**NB** `client_certificate_password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+#### User with username and password
+
+If these variables are set, rclone will authenticate with username and password.
+
+- `tenant`: (optional) tenant to authenticate in. Defaults to "organizations".
+- `client_id`: client ID of the application the user will authenticate to
+- `username`: a username (usually an email address)
+- `password`: the user's password
+
+Microsoft doesn't recommend this kind of authentication, because it's
+less secure than other authentication flows. This method is not
+interactive, so it isn't compatible with any form of multi-factor
+authentication, and the application must already have user or admin
+consent. This credential can only authenticate work and school
+accounts; it can't authenticate Microsoft accounts.
+
+**NB** `password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+#### Managed Service Identity Credentials {#use_msi}
+
+If `use_msi` is set then managed service identity credentials are
+used. This authentication only works when running in an Azure service.
+`env_auth` needs to be unset to use this.
+
+However if you have multiple user identities to choose from these must
+be explicitly specified using exactly one of the `msi_object_id`,
+`msi_client_id`, or `msi_mi_res_id` parameters.
+
+If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
+set, this is equivalent to using `env_auth`.
+
+
+### Standard options
+
+Here are the Standard options specific to azurefiles (Microsoft Azure Files).
+
+#### --azurefiles-account
+
+Azure Storage Account Name.
+
+Set this to the Azure Storage Account Name in use.
+
+Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+
+If this is blank and if env_auth is set it will be read from the
+environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible.
+
+
+Properties:
+
+- Config: account
+- Env Var: RCLONE_AZUREFILES_ACCOUNT
+- Type: string
+- Required: false
+
+#### --azurefiles-share-name
+
+Azure Files Share Name.
+
+This is required and is the name of the share to access.
+
+
+Properties:
+
+- Config: share_name
+- Env Var: RCLONE_AZUREFILES_SHARE_NAME
+- Type: string
+- Required: false
+
+#### --azurefiles-env-auth
+
+Read credentials from runtime (environment variables, CLI or MSI).
+
+See the [authentication docs](/azurefiles#authentication) for full info.
+
+Properties:
+
+- Config: env_auth
+- Env Var: RCLONE_AZUREFILES_ENV_AUTH
+- Type: bool
+- Default: false
+
+#### --azurefiles-key
+
+Storage Account Shared Key.
+
+Leave blank to use SAS URL or connection string.
+
+Properties:
+
+- Config: key
+- Env Var: RCLONE_AZUREFILES_KEY
+- Type: string
+- Required: false
+
+#### --azurefiles-sas-url
+
+SAS URL.
+
+Leave blank if using account/key or connection string.
+
+Properties:
+
+- Config: sas_url
+- Env Var: RCLONE_AZUREFILES_SAS_URL
+- Type: string
+- Required: false
+
+#### --azurefiles-connection-string
+
+Azure Files Connection String.
+
+Properties:
+
+- Config: connection_string
+- Env Var: RCLONE_AZUREFILES_CONNECTION_STRING
+- Type: string
+- Required: false
+
+#### --azurefiles-tenant
+
+ID of the service principal's tenant. Also called its directory ID.
+
+Set this if using
+- Service principal with client secret
+- Service principal with certificate
+- User with username and password
+
+
+Properties:
+
+- Config: tenant
+- Env Var: RCLONE_AZUREFILES_TENANT
+- Type: string
+- Required: false
+
+#### --azurefiles-client-id
+
+The ID of the client in use.
+
+Set this if using
+- Service principal with client secret
+- Service principal with certificate
+- User with username and password
+
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_AZUREFILES_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-client-secret
+
+One of the service principal's client secrets
+
+Set this if using
+- Service principal with client secret
+
+
+Properties:
+
+- Config: client_secret
+- Env Var: RCLONE_AZUREFILES_CLIENT_SECRET
+- Type: string
+- Required: false
+
+#### --azurefiles-client-certificate-path
+
+Path to a PEM or PKCS12 certificate file including the private key.
+
+Set this if using
+- Service principal with certificate
+
+
+Properties:
+
+- Config: client_certificate_path
+- Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PATH
+- Type: string
+- Required: false
+
+#### --azurefiles-client-certificate-password
+
+Password for the certificate file (optional).
+
+Optionally set this if using
+- Service principal with certificate
+
+And the certificate has a password.
+
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: client_certificate_password
+- Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PASSWORD
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
+
+#### --azurefiles-client-send-certificate-chain
+
+Send the certificate chain when using certificate auth.
+
+Specifies whether an authentication request will include an x5c header
+to support subject name / issuer based authentication. When set to
+true, authentication requests include the x5c header.
+
+Optionally set this if using
+- Service principal with certificate
+
+
+Properties:
+
+- Config: client_send_certificate_chain
+- Env Var: RCLONE_AZUREFILES_CLIENT_SEND_CERTIFICATE_CHAIN
+- Type: bool
+- Default: false
+
+#### --azurefiles-username
+
+User name (usually an email address)
+
+Set this if using
+- User with username and password
+
+
+Properties:
+
+- Config: username
+- Env Var: RCLONE_AZUREFILES_USERNAME
+- Type: string
+- Required: false
+
+#### --azurefiles-password
+
+The user's password
+
+Set this if using
+- User with username and password
+
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: password
+- Env Var: RCLONE_AZUREFILES_PASSWORD
+- Type: string
+- Required: false
+
+#### --azurefiles-service-principal-file
+
+Path to file containing credentials for use with a service principal.
+
+Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
+
+ $ az ad sp create-for-rbac --name "<name>" \
+ --role "Storage Files Data Owner" \
+ --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
+ > azure-principal.json
+
+See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to files data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
+
+**NB** this section needs updating for Azure Files - pull requests appreciated!
+
+It may be more convenient to put the credentials directly into the
+rclone config file under the `client_id`, `tenant` and `client_secret`
+keys instead of setting `service_principal_file`.
+
+
+Properties:
+
+- Config: service_principal_file
+- Env Var: RCLONE_AZUREFILES_SERVICE_PRINCIPAL_FILE
+- Type: string
+- Required: false
+
+#### --azurefiles-use-msi
+
+Use a managed service identity to authenticate (only works in Azure).
+
+When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
+to authenticate to Azure Storage instead of a SAS token or account key.
+
+If the VM(SS) on which this program is running has a system-assigned identity, it will
+be used by default. If the resource has no system-assigned but exactly one user-assigned identity,
+the user-assigned identity will be used by default. If the resource has multiple user-assigned
+identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
+msi_client_id, or msi_mi_res_id parameters.
+
+Properties:
+
+- Config: use_msi
+- Env Var: RCLONE_AZUREFILES_USE_MSI
+- Type: bool
+- Default: false
+
+#### --azurefiles-msi-object-id
+
+Object ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_client_id or msi_mi_res_id specified.
+
+Properties:
+
+- Config: msi_object_id
+- Env Var: RCLONE_AZUREFILES_MSI_OBJECT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-msi-client-id
+
+Object ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_object_id or msi_mi_res_id specified.
+
+Properties:
+
+- Config: msi_client_id
+- Env Var: RCLONE_AZUREFILES_MSI_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-msi-mi-res-id
+
+Azure resource ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_client_id or msi_object_id specified.
+
+Properties:
+
+- Config: msi_mi_res_id
+- Env Var: RCLONE_AZUREFILES_MSI_MI_RES_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_AZUREFILES_ENDPOINT
+- Type: string
+- Required: false
+
+#### --azurefiles-chunk-size
+
+Upload chunk size.
+
+Note that this is stored in memory and there may be up to
+"--transfers" * "--azurefile-upload-concurrency" chunks stored at once
+in memory.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_AZUREFILES_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 4Mi
+
+#### --azurefiles-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+If you are uploading small numbers of large files over high-speed
+links and these uploads do not fully utilize your bandwidth, then
+increasing this may help to speed up the transfers.
+
+Note that chunks are stored in memory and there may be up to
+"--transfers" * "--azurefile-upload-concurrency" chunks stored at once
+in memory.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_AZUREFILES_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
+#### --azurefiles-max-stream-size
+
+Max size for streamed files.
+
+Azure files needs to know in advance how big the file will be. When
+rclone doesn't know it uses this value instead.
+
+This will be used when rclone is streaming data, the most common uses are:
+
+- Uploading files with `--vfs-cache-mode off` with `rclone mount`
+- Using `rclone rcat`
+- Copying files with unknown length
+
+You will need this much free space in the share as the file will be this size temporarily.
+
+
+Properties:
+
+- Config: max_stream_size
+- Env Var: RCLONE_AZUREFILES_MAX_STREAM_SIZE
+- Type: SizeSuffix
+- Default: 10Gi
+
+#### --azurefiles-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_AZUREFILES_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot
+
+
+
+### Custom upload headers
+
+You can set custom upload headers with the `--header-upload` flag.
+
+- Cache-Control
+- Content-Disposition
+- Content-Encoding
+- Content-Language
+- Content-Type
+
+Eg `--header-upload "Content-Type: text/potato"`
+
+## Limitations
+
+MD5 sums are only uploaded with chunked files if the source has an MD5
+sum. This will always be the case for a local to azure copy.
+
# Microsoft OneDrive
Paths are specified as `remote:path`
@@ -28930,7 +28910,7 @@ You may try to [verify you account](https://docs.microsoft.com/en-us/azure/activ
Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
-### Modification time and hashes
+### Modification times and hashes
OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -28951,6 +28931,32 @@ your workflow.
For all types of OneDrive you can use the `--checksum` flag.
+### --fast-list
+
+This remote supports `--fast-list` which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](https://rclone.org/docs/#fast-list) for more details.
+
+This must be enabled with the `--onedrive-delta` flag (or `delta =
+true` in the config file) as it can cause performance degradation.
+
+It does this by using the delta listing facilities of OneDrive which
+returns all the files in the remote very efficiently. This is much
+more efficient than listing directories recursively and is Microsoft's
+recommended way of reading all the file information from a drive.
+
+This can be useful with `rclone mount` and [rclone rc vfs/refresh
+recursive=true](https://rclone.org/rc/#vfs-refresh) to very quickly fill the mount with
+information about all the files.
+
+The API used for the recursive listing (`ListR`) only supports listing
+from the root of the drive. This will become increasingly inefficient
+the further away you get from the root as rclone will have to discard
+files outside of the directory you are using.
+
+Some commands (like `rclone lsf -R`) will use `ListR` by default - you
+can turn this off with `--disable ListR` if you need to.
+
### Restricted filename characters
In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
@@ -29362,6 +29368,43 @@ Properties:
- Type: bool
- Default: false
+#### --onedrive-delta
+
+If set rclone will use delta listing to implement recursive listings.
+
+If this flag is set then the onedrive backend will advertise `ListR`
+support for recursive listings.
+
+Setting this flag speeds up these things greatly:
+
+ rclone lsf -R onedrive:
+ rclone size onedrive:
+ rclone rc vfs/refresh recursive=true
+
+**However** the delta listing API **only** works at the root of the
+drive. If you use it not at the root then it recurses from the root
+and discards all the data that is not under the directory you asked
+for. So it will be correct but may not be very efficient.
+
+This is why this flag is not set as the default.
+
+As a rule of thumb if nearly all of your data is under rclone's root
+directory (the `root/directory` in `onedrive:root/directory`) then
+using this flag will be a big performance win. If your data is
+mostly not under the root then using this flag will be a big
+performance loss.
+
+It is recommended if you are mounting your onedrive at the root
+(or near the root when using crypt) and using rclone `rc vfs/refresh`.
+
+
+Properties:
+
+- Config: delta
+- Env Var: RCLONE_ONEDRIVE_DELTA
+- Type: bool
+- Default: false
+
#### --onedrive-encoding
The encoding for the backend.
@@ -29372,7 +29415,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ONEDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -29631,12 +29674,14 @@ To copy a local directory to an OpenDrive directory called backup
rclone copy /home/source remote:backup
-### Modified time and MD5SUMs
+### Modification times and hashes
OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
+The MD5 hash algorithm is supported.
+
### Restricted filename characters
| Character | Value | Replacement |
@@ -29710,7 +29755,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_OPENDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
#### --opendrive-chunk-size
@@ -29818,6 +29863,7 @@ Rclone supports the following OCI authentication provider.
No authentication
### User Principal
+
Sample rclone config file for Authentication Provider User Principal:
[oos]
@@ -29838,6 +29884,7 @@ Considerations:
- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
### Instance Principal
+
An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal.
With this approach no credentials have to be stored and managed.
@@ -29867,6 +29914,7 @@ Considerations:
- It is applicable for oci compute instances only. It cannot be used on external instance or resources.
### Resource Principal
+
Resource principal auth is very similar to instance principal auth but used for resources that are not
compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
To use resource principal, ensure the Rclone process is started with these environment variables set in its environment.
@@ -29886,6 +29934,7 @@ Sample rclone configuration file for Authentication Provider Resource Principal:
provider = resource_principal_auth
### No authentication
+
Public buckets do not require any authentication mechanism to read objects.
Sample rclone configuration file for No authentication:
@@ -29896,10 +29945,9 @@ Sample rclone configuration file for No authentication:
region = us-ashburn-1
provider = no_auth
-## Options
-### Modified time
+### Modification times and hashes
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
`opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server
@@ -29909,6 +29957,8 @@ In the case the object is larger than 5Gb, the object will be uploaded rather th
Note that reading this from the object takes an additional `HEAD` request as the metadata
isn't returned in object listings.
+The MD5 hash algorithm is supported.
+
### Multipart uploads
rclone supports multipart uploads with OOS which means that it can
@@ -30211,7 +30261,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_OOS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
#### --oos-leave-parts-on-error
@@ -30682,7 +30732,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_QINGSTOR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Ctl,InvalidUtf8
@@ -30784,7 +30834,7 @@ This will guide you through an interactive setup process:
y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ```
-### Modified time and hashes
+### Modification times and hashes
Quatrix allows modification times to be set on objects accurate to 1 microsecond. These will be used to detect whether objects need syncing or not.
@@ -30859,7 +30909,7 @@ This will guide you through an interactive setup process:
Properties:
-- Config: encoding - Env Var: RCLONE_QUATRIX_ENCODING - Type: MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+- Config: encoding - Env Var: RCLONE_QUATRIX_ENCODING - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --quatrix-effective-upload-time
@@ -31036,7 +31086,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SIA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
@@ -31162,7 +31212,7 @@ sufficient to determine if it is "dirty". By using `--update` along wi
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.
-### Modified time
+### Modification times and hashes
The modified time is stored as metadata on the object as
`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
@@ -31171,6 +31221,8 @@ ns.
This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time for an object.
+The MD5 hash algorithm is supported.
+
### Restricted filename characters
| Character | Value | Replacement |
@@ -31517,7 +31569,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SWIFT_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8
@@ -31605,7 +31657,7 @@ To copy a local directory to a pCloud directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes ###
+### Modification times and hashes
pCloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -31744,7 +31796,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PCLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --pcloud-root-folder-id
@@ -31833,6 +31885,13 @@ This will guide you through an interactive setup process:
Edit advanced config? y) Yes n) No (default) y/n>
Configuration complete. Options: - type: pikpak - user: USERNAME - pass: *** ENCRYPTED *** - token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"} Keep this "remote" remote? y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y
+### Modification times and hashes
+
+PikPak keeps modification times on objects, and updates them when uploading objects,
+but it does not support changing only the modification time.
+
+The MD5 hash algorithm is supported.
+
### Standard options
@@ -31992,7 +32051,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PIKPAK_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
## Backend commands
@@ -32056,15 +32115,16 @@ Result:
-## Limitations ##
+## Limitations
-### Hashes ###
+### Hashes may be empty
PikPak supports MD5 hash, but it is sometimes returned empty, especially for user-uploaded files.
-### Deleted files ###
+### Deleted files still visible with trashed-only
-Deleted files will still be visible with `--pikpak-trashed-only` even after the trash emptied. This goes away after few days.
+Deleted files will still be visible with `--pikpak-trashed-only` even after the
+trash has been emptied. This goes away after a few days.
# premiumize.me
To copy a local directory to a premiumize.me directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
premiumize.me does not support modification times or hashes, therefore
syncing will default to `--size-only` checking. Note that using
@@ -32224,7 +32284,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PREMIUMIZEME_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
To copy a local directory to a Proton Drive directory called backup
rclone copy /home/source remote:backup
-### Modified time
+### Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
+The SHA1 hash algorithm is supported.
+
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
@@ -32439,7 +32501,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
@@ -32702,7 +32764,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
To copy a local directory to a Proton Drive directory called backup
rclone copy /home/source remote:backup
-### Modified time
+### Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
+The SHA1 hash algorithm is supported.
+
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
@@ -32915,7 +32979,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
@@ -33272,7 +33336,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SEAFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
@@ -33457,7 +33521,7 @@ known_hosts_file = ~/.ssh/known_hosts
The options md5sum_command
and sha1_command
can be used to customize the command to be executed for calculation of checksums. You can for example set a specific path to where md5sum and sha1sum executables are located, or use them to specify some other tools that print checksums in compatible format. The value can include command-line arguments, or even shell script blocks as with PowerShell. Rclone has subcommands md5sum and sha1sum that use compatible format, which means if you have an rclone executable on the server it can be used. As mentioned above, they will be automatically picked up if found in PATH, but if not you can set something like /path/to/rclone md5sum
as the value of option md5sum_command
to make sure a specific executable is used.
Remote checksumming is recommended and enabled by default. The first time rclone uses an SFTP remote, if the options md5sum_command
or sha1_command
are not set, it will check if any of the default commands for each of them, as described above, can be used. The result will be saved in the remote configuration, so next time it will use the same. Value none
will be set if none of the default commands could be used for a specific algorithm, and this algorithm will not be supported by the remote.
Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote shell commands is prohibited. Set the configuration option disable_hashcheck
to true
to disable checksumming entirely, or set shell_type
to none
to disable all functionality based on remote shell command execution.
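A rough sketch of the relevant remote configuration, assuming rclone is installed at /usr/local/bin/rclone on the server (the remote name, host, user and path are examples only):
[mysftp]
type = sftp
host = sftp.example.com
user = someuser
md5sum_command = /usr/local/bin/rclone md5sum
sha1_command = /usr/local/bin/rclone sha1sum
Setting disable_hashcheck = true instead would turn remote checksumming off entirely.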
-Modified time
+Modification times and hashes
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false
in your RClone backend configuration to disable this behaviour.
@@ -33903,7 +33967,21 @@ server_command = sudo /usr/libexec/openssh/sftp-server
Type: string
Required: false
-Limitations
+--sftp-copy-is-hardlink
+Set to enable server side copies using hardlinks.
+The SFTP protocol does not define a copy command so normally server side copies are not allowed with the sftp backend.
+However the SFTP protocol does support hardlinking, and if you enable this flag then the sftp backend will support server side copies. These will be implemented by doing a hardlink from the source to the destination.
+Not all sftp servers support this.
+Note that hardlinking two files together will use no additional space as the source and the destination will be the same file.
+This feature may be useful for backups made with --copy-dest.
+Properties:
+
+- Config: copy_is_hardlink
+- Env Var: RCLONE_SFTP_COPY_IS_HARDLINK
+- Type: bool
+- Default: false
+
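+As a sketch (the remote name sftp: and the paths are examples only), a server side copy implemented as a hardlink, and a backup that reuses unchanged files via --copy-dest, might look like:
+rclone copyto --sftp-copy-is-hardlink sftp:backup/file.txt sftp:backup/file-copy.txt
+rclone copy --sftp-copy-is-hardlink --copy-dest sftp:backup-old /home/source sftp:backup-new
+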
+Limitations
On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck
is a good idea.
The only ssh agent supported under Windows is Putty's pageant.
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher
setting in the configuration file to true
. Further details on the insecurity of this cipher can be found in this paper.
@@ -33920,7 +33998,7 @@ server_command = sudo /usr/libexec/openssh/sftp-server
SMB is a communication protocol to share files over network.
This relies on go-smb2 library for communication with SMB protocol.
Paths are specified as remote:sharename
(or remote:
for the lsd
command.) You may put subdirectories in too, e.g. remote:item/path/to/dir
.
-Notes
+Notes
The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in smb.conf
(usually in /etc/samba/
) file. You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:
).
You can't access shared printers from rclone, obviously.
You can't use Anonymous access for logging in. You have to use the guest
user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, e.g. \\server\share
. This doesn't apply to non-Windows OSes, such as Linux and macOS.
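For example, assuming a configured SMB remote named smbremote: and a share named data (both assumptions), you can list the available shares and then browse into one:
rclone lsd smbremote:
rclone ls smbremote:data/path/to/dir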
@@ -34099,7 +34177,7 @@ y/e/d> d
- Config: encoding
- Env Var: RCLONE_SMB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
Storj
@@ -34391,7 +34469,7 @@ y/e/d> y
rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
-Limitations
+Limitations
rclone about
is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Known issues
@@ -34464,7 +34542,7 @@ y/e/d> y
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory
.
NB you can't create files in the top level folder; you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync.
-Modified time and hashes
+Modification times and hashes
SugarSync does not support modification times or hashes, therefore syncing will default to --size-only
checking. Note that using --update
will work as rclone can read the time files were uploaded.
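For example (the remote name and paths are illustrative only), a copy that relies on upload times rather than sizes alone might look like:
rclone copy --update /home/local/directory remote:Folder/directory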
Restricted filename characters
SugarSync replaces the default restricted characters set except for DEL.
@@ -34582,10 +34660,10 @@ y/e/d> y
- Config: encoding
- Env Var: RCLONE_SUGARSYNC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
rclone about
is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs
(most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
Tardigrade
@@ -34648,7 +34726,7 @@ y/e/d>
rclone ls remote:
To copy a local directory to an Uptobox directory called backup
rclone copy /home/source remote:backup
-Modified time and hashes
+Modification times and hashes
Uptobox supports neither modified times nor checksums. All timestamps will read as the time set by --default-time
.
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:
@@ -34704,16 +34782,16 @@ y/e/d>
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
-Limitations
+Limitations
Uptobox will delete inactive files that have not been accessed in 60 days.
rclone about
is not supported by this backend. An overview of used space can however be seen in the uptobox web interface.
Union
The union
backend joins several remotes together to make a single unified view of them.
During the initial setup with rclone config
you will specify the upstream remotes as a space separated list. The upstream remotes can either be local paths or other remotes.
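A minimal sketch of the resulting config file entry (the upstream names and paths are examples only):
[union]
type = union
upstreams = remote1:dir1 remote2:dir2 /local/path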
-The attributes :ro
, :nc
and :nc
can be attached to the end of the remote to tag the remote as read only, no create or writeback, e.g. remote:directory/subdirectory:ro
or remote:directory/subdirectory:nc
.
+The attributes :ro
, :nc
and :writeback
can be attached to the end of the remote to tag the remote as read only, no create or writeback, e.g. remote:directory/subdirectory:ro
or remote:directory/subdirectory:nc
.
:ro
means files will only be read from here and never written
:nc
means new files or directories won't be created here
@@ -35056,7 +35134,9 @@ Choose a number from below, or type in your own value
\ (sharepoint)
5 / Sharepoint with NTLM authentication, usually self-hosted or on-premises
\ (sharepoint-ntlm)
- 6 / Other site/service or software
+ 6 / rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
+ \ (rclone)
+ 7 / Other site/service or software
\ (other)
vendor> 2
User name
@@ -35093,7 +35173,7 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an WebDAV directory called backup
rclone copy /home/source remote:backup
-Modified time and hashes
+Modification times and hashes
Plain WebDAV does not support modified times. However when used with Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
Standard options
@@ -35138,6 +35218,10 @@ y/e/d> y
- Sharepoint with NTLM authentication, usually self-hosted or on-premises
+"rclone"
+
+- rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
+
"other"
- Other site/service or software
@@ -35273,6 +35357,9 @@ pass = encryptedpassword
As SharePoint does some special things with uploaded documents, you won't be able to use the documents size or the documents hash to compare if a file has been changed since the upload / which file is newer.
For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents:
--ignore-size --ignore-checksum --update
+Rclone
+Use this option if you are hosting remotes over WebDAV provided by rclone. Read rclone serve webdav for more details.
+rclone serve supports modified times using the X-OC-Mtime
header.
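As a rough sketch (host, port and names are assumptions): on the machine serving the files run rclone serve webdav /path/to/files --addr :8080, then on the client configure a remote such as:
[rclonedav]
type = webdav
url = http://server.example.com:8080/
vendor = rclone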
dCache
dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons and OpenID-Connect access tokens.
Configure as normal using the other
type. Don't enter a username or password, instead enter your Macaroon as the bearer_token
.
@@ -35357,10 +35444,9 @@ y/e/d> y
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g. remote:directory/subdirectory
.
-Modified time
+Modification times and hashes
Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified
in RFC3339 with nanoseconds format.
-MD5 checksums
-MD5 checksums are natively supported by Yandex Disk.
+The MD5 hash algorithm is natively supported by Yandex Disk.
Emptying Trash
If you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This command does not take any path arguments.
@@ -35437,10 +35523,10 @@ y/e/d> y
- Config: encoding
- Env Var: RCLONE_YANDEX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
-Limitations
+Limitations
When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout
parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers
errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m
, that is --timeout 60m
.
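For example (the file and remote names are illustrative), uploading a single 30 GiB file with the longer timeout might look like:
rclone copy --timeout 60m /path/to/30GiB-file remote:backup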
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
@@ -35519,10 +35605,9 @@ y/e/d>
Sync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
Zoho paths may be as deep as required, eg remote:directory/subdirectory
.
-Modified time
+Modification times and hashes
Modified times are currently not supported for Zoho Workdrive.
-Checksums
-No checksums are supported.
+No hash algorithms are supported.
To view your current quota you can use the rclone about remote:
command which will display your current usage.
Restricted filename characters
@@ -35624,7 +35709,7 @@ y/e/d>
- Config: encoding
- Env Var: RCLONE_ZOHO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Del,Ctl,InvalidUtf8
Setting up your own client_id
@@ -35641,8 +35726,8 @@ y/e/d>
Will sync /home/source
to /tmp/destination
.
Configuration
For consistency's sake one can also configure a remote of type local
in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever
, but it is probably easier not to.
-Modified time
-Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second on OS X.
+Modification times
+Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
Filenames
Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file names. If you are using an old Linux filesystem with non UTF-8 file names (e.g. latin1) then you can use the convmv
tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers.
@@ -36097,6 +36182,7 @@ $ tree /tmp/b
Only checksum the size that stat gave
Don't update the stat info for the file
+NB do not use this flag on a Windows Volume Shadow (VSS). For some unknown reason, files in a VSS sometimes show different sizes in the directory listing (where the initial stat value comes from on Windows) than when stat is called on them directly. Other copy tools always use the direct stat value, and setting this flag will disable that.
Properties:
- Config: no_check_updated
@@ -36170,7 +36256,7 @@ $ tree /tmp/b
- Config: encoding
- Env Var: RCLONE_LOCAL_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Dot
@@ -36264,6 +36350,228 @@ $ tree /tmp/b
- "error": return an error based on option value
Changelog
+v1.65.0 - 2023-11-26
+See commits
+
+- New backends
+
+- Azure Files (karan, moongdal, Nick Craig-Wood)
+- ImageKit (Abhinav Dhiman)
+- Linkbox (viktor, Nick Craig-Wood)
+
+- New commands
+
+serve s3
: Let rclone act as an S3 compatible server (Mikubill, Artur Neumann, Saw-jan, Nick Craig-Wood)
+nfsmount
: mount command to provide mount mechanism on macOS without FUSE (Saleh Dindar)
+serve nfs
: to serve a remote for use by nfsmount
(Saleh Dindar)
+
+- New Features
+
+- install.sh: Clean up temp files in install script (Jacob Hands)
+- build
+
+- Update all dependencies (Nick Craig-Wood)
+- Refactor version info and icon resource handling on windows (albertony)
+
+- doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick Craig-Wood)
+- Implement
--metadata-mapper
to transform metadata with a user supplied program (Nick Craig-Wood)
+- Add
ChunkWriterDoesntSeek
feature flag and set it for b2 (Nick Craig-Wood)
+- lib/http: Export basic go string functions for use in
--template
(Gabriel Espinoza)
+- makefile: Use POSIX compatible install arguments (Mina Galić)
+- operations
+
+- Use less memory when doing multithread uploads (Nick Craig-Wood)
+- Implement
--partial-suffix
to control extension of temporary file names (Volodymyr)
+
+- rc
+
+- Add
operations/check
to the rc API (Nick Craig-Wood)
+- Always report an error as JSON (Nick Craig-Wood)
+- Set
Last-Modified
header for files served by --rc-serve
(Nikita Shoshin)
+
+- size: Don't show duplicate object count when less than 1k (albertony)
+
+- Bug Fixes
+
+- fshttp: Fix
--contimeout
being ignored (你知道未来吗)
+- march: Fix excessive parallelism when using
--no-traverse
(Nick Craig-Wood)
+- ncdu: Fix crash when re-entering changed directory after rescan (Nick Craig-Wood)
+- operations
+
+- Fix overwrite of destination when multi-thread transfer fails (Nick Craig-Wood)
+- Fix invalid UTF-8 when truncating file names when not using
--inplace
(Nick Craig-Wood)
+
+- serve dlna: Fix crash on graceful exit (wuxingzhong)
+
+- Mount
+
+- Disable mount for freebsd and alias cmount as mount on that platform (Nick Craig-Wood)
+
+- VFS
+
+- Add
--vfs-refresh
flag to read all the directories on start (Beyond Meat)
+- Implement Name() method in WriteFileHandle and ReadFileHandle (Saleh Dindar)
+- Add go-billy dependency and make sure vfs.Handle implements billy.File (Saleh Dindar)
+- Error out early if can't upload 0 length file (Nick Craig-Wood)
+
+- Local
+
+- Fix copying from Windows Volume Shadows (Nick Craig-Wood)
+
+- Azure Blob
+
+- Add support for cold tier (Ivan Yanitra)
+
+- B2
+
+- Implement "rclone backend lifecycle" to read and set bucket lifecycles (Nick Craig-Wood)
+- Implement
--b2-lifecycle
to control lifecycle when creating buckets (Nick Craig-Wood)
+- Fix listing all buckets when not needed (Nick Craig-Wood)
+- Fix multi-thread upload with copyto going to wrong name (Nick Craig-Wood)
+- Fix server side chunked copy when file size was exactly
--b2-copy-cutoff
(Nick Craig-Wood)
+- Fix streaming chunked files an exact multiple of chunk size (Nick Craig-Wood)
+
+- Box
+
+- Filter more EventIDs when polling (David Sze)
+- Add more logging for polling (David Sze)
+- Fix performance problem reading metadata for single files (Nick Craig-Wood)
+
+- Drive
+
+- Add read/write metadata support (Nick Craig-Wood)
+- Add support for SHA-1 and SHA-256 checksums (rinsuki)
+- Add
--drive-show-all-gdocs
to allow unexportable gdocs to be server side copied (Nick Craig-Wood)
+- Add a note that
--drive-scope
accepts comma-separated list of scopes (Keigo Imai)
+- Fix error updating created time metadata on existing object (Nick Craig-Wood)
+- Fix integration tests by enabling metadata support from the context (Nick Craig-Wood)
+
+- Dropbox
+
+- Factor batcher into lib/batcher (Nick Craig-Wood)
+- Fix missing encoding for rclone purge (Nick Craig-Wood)
+
+- Google Cloud Storage
+
+- Fix 400 Bad request errors when using multi-thread copy (Nick Craig-Wood)
+
+- Googlephotos
+
+- Implement batcher for uploads (Nick Craig-Wood)
+
+- Hdfs
+
+- Added support for list of namenodes in hdfs remote config (Tayo-pasedaRJ)
+
+- HTTP
+
+- Implement set backend command to update running backend (Nick Craig-Wood)
+- Enable methods used with WebDAV (Alen Šiljak)
+
+- Jottacloud
+
+- Add support for reading and writing metadata (albertony)
+
+- Onedrive
+
+- Implement ListR method which gives
--fast-list
support (Nick Craig-Wood)
+
+- This must be enabled with the
--onedrive-delta
flag
+
+
+- Quatrix
+
+- Add partial upload support (Oksana Zhykina)
+- Overwrite files on conflict during server-side move (Oksana Zhykina)
+
+- S3
+
+- Add Linode provider (Nick Craig-Wood)
+- Add docs on how to add a new provider (Nick Craig-Wood)
+- Fix no error being returned when creating a bucket we don't own (Nick Craig-Wood)
+- Emit a debug message if anonymous credentials are in use (Nick Craig-Wood)
+- Add
--s3-disable-multipart-uploads
flag (Nick Craig-Wood)
+- Detect looping when using gcs and versions (Nick Craig-Wood)
+
+- SFTP
+
+- Implement
--sftp-copy-is-hardlink
to server side copy as hardlink (Nick Craig-Wood)
+
+- Smb
+
+- Fix incorrect
about
size by switching to github.com/cloudsoda/go-smb2
fork (Nick Craig-Wood)
+- Fix modtime of multithread uploads by setting PartialUploads (Nick Craig-Wood)
+
+- WebDAV
+
+- Added an rclone vendor to work with
rclone serve webdav
(Adithya Kumar)
+
+
+v1.64.2 - 2023-10-19
+See commits
+
+- Bug Fixes
+
+- selfupdate: Fix "invalid hashsum signature" error (Nick Craig-Wood)
+- build: Fix docker build running out of space (Nick Craig-Wood)
+
+
+v1.64.1 - 2023-10-17
+See commits
+
+- Bug Fixes
+
+- cmd: Make
--progress
output logs in the same format as without (Nick Craig-Wood)
+- docs fixes (Dimitri Papadopoulos Orfanos, Herby Gillot, Manoj Ghosh, Nick Craig-Wood)
+- lsjson: Make sure we set the global metadata flag too (Nick Craig-Wood)
+- operations
+
+- Ensure concurrency is no greater than the number of chunks (Pat Patterson)
+- Fix OpenOptions ignored in copy if operation was a multiThreadCopy (Vitor Gomes)
+- Fix error message on delete to have file name (Nick Craig-Wood)
+
+- serve sftp: Return not supported error for not supported commands (Nick Craig-Wood)
+- build: Upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid reset (Nick Craig-Wood)
+- pacer: Fix b2 deadlock by defaulting max connections to unlimited (Nick Craig-Wood)
+
+- Mount
+
+- Fix automount not detecting drive is ready (Nick Craig-Wood)
+
+- VFS
+
+- Fix update dir modification time (Saleh Dindar)
+
+- Azure Blob
+
+- Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+
+- B2
+
+- Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick Craig-Wood)
+- Fix locking window when getting multipart upload URL (Nick Craig-Wood)
+- Fix server side copies greater than 4GB (Nick Craig-Wood)
+- Fix chunked streaming uploads (Nick Craig-Wood)
+- Reduce default
--b2-upload-concurrency
to 4 to reduce memory usage (Nick Craig-Wood)
+
+- Onedrive
+
+- Fix the configurator to allow
/teams/ID
in the config (Nick Craig-Wood)
+
+- Oracleobjectstorage
+
+- Fix OpenOptions being ignored in uploadMultipart with chunkWriter (Nick Craig-Wood)
+
+- S3
+
+- Fix slice bounds out of range error when listing (Nick Craig-Wood)
+- Fix OpenOptions being ignored in uploadMultipart with chunkWriter (Vitor Gomes)
+
+- Storj
+
+- Update storj.io/uplink to v1.12.0 (Kaloyan Raev)
+
+
v1.64.0 - 2023-09-11
See commits
@@ -36414,7 +36722,7 @@ $ tree /tmp/b
- Hdfs
- Retry "replication in progress" errors when uploading (Nick Craig-Wood)
-- Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
+- Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
- HTTP
@@ -36427,7 +36735,7 @@ $ tree /tmp/b
- Oracleobjectstorage
-- Use rclone's rate limiter in mutipart transfers (Manoj Ghosh)
+- Use rclone's rate limiter in multipart transfers (Manoj Ghosh)
- Implement
OpenChunkWriter
and multi-thread uploads (Manoj Ghosh)
- S3
@@ -36690,7 +36998,7 @@ $ tree /tmp/b
Putio
-- Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
+- Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
- Fix modification times not being preserved for server side copy and move (Nick Craig-Wood)
- Fix server side copy failures (400 errors) (Nick Craig-Wood)
@@ -36699,7 +37007,7 @@ $ tree /tmp/b
Empty directory markers (Jānis Bebrītis, Nick Craig-Wood)
Update Scaleway storage classes (Brian Starkey)
Fix --s3-versions
on individual objects (Nick Craig-Wood)
-Fix hang on aborting multpart upload with iDrive e2 (Nick Craig-Wood)
+Fix hang on aborting multipart upload with iDrive e2 (Nick Craig-Wood)
Fix missing "tier" metadata (Nick Craig-Wood)
Fix V3sign: add missing subresource delete (cc)
Fix Arvancloud Domain and region changes and alphabetise the provider (Ehsan Tadayon)
@@ -36724,7 +37032,7 @@ $ tree /tmp/b
Storj
- Fix "uplink: too many requests" errors when uploading to the same file (Nick Craig-Wood)
-- Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
+- Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
Swift
@@ -42141,7 +42449,7 @@ $ tree /tmp/b
Mount
-- Re-use
rcat
internals to support uploads from all remotes
+- Reuse
rcat
internals to support uploads from all remotes
Dropbox
@@ -43316,7 +43624,7 @@ $ tree /tmp/b
- Project started
Bugs and Limitations
-Limitations
+Limitations
Directory timestamps aren't preserved
Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
Rclone struggles with millions of files in a directory/bucket
@@ -43325,7 +43633,7 @@ $ tree /tmp/b
Bucket-based remotes and folders
Bucket-based remotes (e.g. S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket-based remote will tend to disappear.
Some software creates empty keys ending in /
as directory markers. Rclone doesn't do this as it potentially creates more objects and costs more. This ability may be added in the future (probably via a flag/option).
-Bugs
+Bugs
Bugs are stored in rclone's GitHub project:
Forum
diff --git a/MANUAL.md b/MANUAL.md
index 9b4c2e21d..13bee7b75 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Sep 11, 2023
+% Nov 26, 2023
# Rclone syncs your files to cloud storage
@@ -135,11 +135,14 @@ WebDAV or S3, that work out of the box.)
- Koofr
- Leviia Object Storage
- Liara Object Storage
+- Linkbox
+- Linode Object Storage
- Mail.ru Cloud
- Memset Memstore
- Mega
- Memory
- Microsoft Azure Blob Storage
+- Microsoft Azure Files Storage
- Microsoft OneDrive
- Minio
- Nextcloud
@@ -279,6 +282,19 @@ developers so it may be out of date. Its current version is as below.
[![Homebrew package](https://repology.org/badge/version-for-repo/homebrew/rclone.svg)](https://repology.org/project/rclone/versions)
+### Installation with MacPorts {#macos-macports}
+
+On macOS, rclone can also be installed via [MacPorts](https://www.macports.org):
+
+ sudo port install rclone
+
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date. Its current version is as below.
+
+[![MacPorts port](https://repology.org/badge/version-for-repo/macports/rclone.svg)](https://repology.org/project/rclone/versions)
+
+More information [here](https://ports.macports.org/port/rclone/).
+
### Precompiled binary, using curl {#macos-precompiled}
To avoid problems with macOS gatekeeper enforcing the binary to be signed and
@@ -501,7 +517,7 @@ Make sure you have [Snapd installed](https://snapcraft.io/docs/installing-snapd)
```bash
$ sudo snap install rclone
```
-Due to the strict confinement of Snap, rclone snap cannot acess real /home/$USER/.config/rclone directory, default config path is as below.
+Due to the strict confinement of Snap, the rclone snap cannot access the real /home/$USER/.config/rclone directory; the default config path is as below.
- Default config directory:
- /home/$USER/snap/rclone/current/.config/rclone
@@ -518,7 +534,7 @@ Note that this is controlled by [community maintainer](https://github.com/bouken
## Source installation {#source}
Make sure you have git and [Go](https://golang.org/) installed.
-Go version 1.17 or newer is required, latest release is recommended.
+Go version 1.18 or newer is required, the latest release is recommended.
You can get it from your package manager, or download it from
[golang.org/dl](https://golang.org/dl/). Then you can run the following:
@@ -552,26 +568,59 @@ port of GCC, e.g. by installing it in a [MSYS2](https://www.msys2.org)
distribution (make sure you install it in the classic mingw64 subsystem, the
ucrt64 version is not compatible).
-Additionally, on Windows, you must install the third party utility
-[WinFsp](https://winfsp.dev/), with the "Developer" feature selected.
+Additionally, to build with mount on Windows, you must install the third party
+utility [WinFsp](https://winfsp.dev/), with the "Developer" feature selected.
If building with cgo, you must also set environment variable CPATH pointing to
the fuse include directory within the WinFsp installation
(normally `C:\Program Files (x86)\WinFsp\inc\fuse`).
-You may also add arguments `-ldflags -s` (with or without `-tags cmount`),
-to omit symbol table and debug information, making the executable file smaller,
-and `-trimpath` to remove references to local file system paths. This is how
-the official rclone releases are built.
+You may add arguments `-ldflags -s` to omit symbol table and debug information,
+making the executable file smaller, and `-trimpath` to remove references to
+local file system paths. The official rclone releases are built with both of these.
```
go build -trimpath -ldflags -s -tags cmount
```
+If you want to customize the version string, as reported by
+the `rclone version` command, you can set one of the variables `fs.Version`,
+`fs.VersionTag` (to keep default suffix but customize the number),
+or `fs.VersionSuffix` (to keep default number but customize the suffix).
+This can be done from the build command, by adding to the `-ldflags`
+argument value as shown below.
+
+```
+go build -trimpath -ldflags "-s -X github.com/rclone/rclone/fs.Version=v9.9.9-test" -tags cmount
+```
+
+On Windows, the official executables also have the version information,
+as well as a file icon, embedded as binary resources. To get that with your
+own build you need to run the following command **before** the build command.
+It generates a Windows resource system object file, with extension .syso, e.g.
+`resource_windows_amd64.syso`, that will be automatically picked up by
+future build commands.
+
+```
+go run bin/resource_windows.go
+```
+
+The above command will generate a resource file containing version information
+based on the fs.Version variable in source at the time you run the command,
+which means if the value of this variable changes you need to re-run the
+command for it to be reflected in the version information. Also, if you
+override this version variable in the build command as described above, you
+need to do that also when generating the resource file, or else it will still
+use the value from the source.
+
+```
+go run bin/resource_windows.go -version v9.9.9-test
+```
+
Instead of executing the `go build` command directly, you can run it via the
-Makefile. It changes the version number suffix from "-DEV" to "-beta" and
-appends commit details. It also copies the resulting rclone executable into
-your GOPATH bin folder (`$(go env GOPATH)/bin`, which corresponds to
-`~/go/bin/rclone` by default).
+Makefile. The default target changes the version suffix from "-DEV" to "-beta"
+followed by additional commit details, embeds version information binary resources
+on Windows, and copies the resulting rclone executable into your GOPATH bin folder
+(`$(go env GOPATH)/bin`, which corresponds to `~/go/bin/rclone` by default).
```
make
@@ -584,32 +633,22 @@ make GOTAGS=cmount
```
There are other make targets that can be used for more advanced builds,
-such as cross-compiling for all supported os/architectures, embedding
-icon and version info resources into windows executable, and packaging
+such as cross-compiling for all supported os/architectures, and packaging
results into release artifacts.
See [Makefile](https://github.com/rclone/rclone/blob/master/Makefile)
and [cross-compile.go](https://github.com/rclone/rclone/blob/master/bin/cross-compile.go)
for details.
-Another alternative is to download the source, build and install rclone in one
-operation, as a regular Go package. The source will be stored it in the Go
-module cache, and the resulting executable will be in your GOPATH bin folder
-(`$(go env GOPATH)/bin`, which corresponds to `~/go/bin/rclone` by default).
-
-With Go version 1.17 or newer:
+Another alternative method for source installation is to download the source,
+build and install rclone - all in one operation, as a regular Go package.
+The source will be stored in the Go module cache, and the resulting
+executable will be in your GOPATH bin folder (`$(go env GOPATH)/bin`,
+which corresponds to `~/go/bin/rclone` by default).
```
go install github.com/rclone/rclone@latest
```
-With Go versions older than 1.17 (do **not** use the `-u` flag, it causes Go to
-try to update the dependencies that rclone uses and sometimes these don't work
-with the current version):
-
-```
-go get github.com/rclone/rclone
-```
-
## Ansible installation {#ansible}
This can be done with [Stefan Weichinger's ansible
@@ -771,7 +810,7 @@ It requires .NET Framework, but it is preinstalled on newer versions of Windows,
also provides alternative standalone distributions which includes necessary runtime (.NET 5).
WinSW is a command-line only utility, where you have to manually create an XML file with
service configuration. This may be a drawback for some, but it can also be an advantage
-as it is easy to back up and re-use the configuration
+as it is easy to back up and reuse the configuration
settings, without having to go through manual steps in a GUI. One thing to note is that
by default it does not restart the service on error, one has to explicitly enable this
in the configuration file (via the "onfailure" parameter).
@@ -841,10 +880,12 @@ See the following for detailed instructions for
* [Internet Archive](https://rclone.org/internetarchive/)
* [Jottacloud](https://rclone.org/jottacloud/)
* [Koofr](https://rclone.org/koofr/)
+ * [Linkbox](https://rclone.org/linkbox/)
* [Mail.ru Cloud](https://rclone.org/mailru/)
* [Mega](https://rclone.org/mega/)
* [Memory](https://rclone.org/memory/)
* [Microsoft Azure Blob Storage](https://rclone.org/azureblob/)
+ * [Microsoft Azure Files Storage](https://rclone.org/azurefiles/)
* [Microsoft OneDrive](https://rclone.org/onedrive/)
* [OpenStack Swift / Rackspace Cloudfiles / Blomp Cloud Storage / Memset Memstore](https://rclone.org/swift/)
* [OpenDrive](https://rclone.org/opendrive/)
@@ -1024,11 +1065,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -1043,11 +1084,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@@ -1170,11 +1212,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -1189,11 +1231,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@@ -1330,11 +1373,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -1349,11 +1392,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@@ -2775,11 +2819,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -2794,11 +2838,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@@ -2950,17 +2995,20 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
# rclone checksum
-Checks the files in the source against a SUM file.
+Checks the files in the destination against a SUM file.
## Synopsis
-Checks that hashsums of source files match the SUM file.
+Checks that hashsums of destination files match the SUM file.
It compares hashes (MD5, SHA1, etc) and logs a report of files which
don't match. It doesn't alter the file system.
-If you supply the `--download` flag, it will download the data from remote
-and calculate the contents hash on the fly. This can be useful for remotes
+The sumfile is treated as the source and the dst:path is treated as
+the destination for the purposes of the output.
+
+If you supply the `--download` flag, it will download the data from the remote
+and calculate the content hash on the fly. This can be useful for remotes
that don't support hashes or if you really want to check all the data.
Note that hash values in the SUM file are treated as case insensitive.
@@ -2991,7 +3039,7 @@ option for more information.
```
-rclone checksum sumfile src:path [flags]
+rclone checksum sumfile dst:path [flags]
```
## Options
@@ -3903,11 +3951,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -3922,11 +3970,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@@ -4454,10 +4503,6 @@ Run without a hash to see the list of all supported hashes, e.g.
* whirlpool
* crc32
* sha256
- * dropbox
- * hidrive
- * mailru
- * quickxor
Then
@@ -4467,7 +4512,7 @@ Note that hash names are case insensitive and values are output in lower case.
```
-rclone hashsum remote:path [flags]
+rclone hashsum [ remote:path] [flags]
```
## Options
@@ -4967,7 +5012,6 @@ Mount the remote as file system on a mountpoint.
## Synopsis
-
rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
@@ -5222,11 +5266,17 @@ does not suffer from the same limitations.
## Mounting on macOS
-Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
+Mounting on macOS can be done either via [built-in NFS server](https://rclone.org/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/)
(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional
FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
which "mounts" via an NFSv4 local server.
+# NFS mount
+
+This method spins up an NFS server using the [serve nfs](https://rclone.org/commands/rclone_serve_nfs/) command and mounts
+it to the specified mountpoint. If you run this in background mode using `--daemon`, you will need to
+send a SIGTERM signal to the rclone process using the `kill` command to stop the mount.
+
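+As a rough sketch (the remote name and mountpoint are examples, and this
+assumes the separate `nfsmount` command introduced alongside `serve nfs`),
+an NFS based mount might look like:
+
+    rclone nfsmount remote: /path/to/mountpoint --vfs-cache-mode writes
+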
### macFUSE Notes
If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
@@ -5276,6 +5326,8 @@ sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [VFS File Caching](#vfs-file-caching) section for more info.
+When using the NFS mount on macOS, if you don't specify `--vfs-cache-mode`
+the mount point will be read-only.
The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2)
do not support the concept of empty directories, so empty
@@ -5422,7 +5474,6 @@ Mount option syntax includes a few extra options treated specially:
- `vv...` will be transformed into appropriate `--verbose=N`
- standard mount options like `x-systemd.automount`, `_netdev`, `nosuid` and alike
are intended only for Automountd and ignored by rclone.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -5804,6 +5855,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -5906,11 +5958,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -5925,11 +5977,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@@ -6395,6 +6448,17 @@ to be used within the template to server pages:
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
+The server also makes the following functions available so that they can be used within the
+template. These functions help extend the options for dynamic rendering of HTML. They can
+be used to render HTML based on specific conditions.
+
+| Function | Description |
+| :---------- | :---------- |
+| afterEpoch | Returns the time since the epoch for the given time. |
+| contains | Checks whether a given substring is present or not in a given string. |
+| hasPrefix | Checks whether the given string begins with the specified prefix. |
+| hasSuffix | Checks whether the given string ends with the specified suffix. |
+
### Authentication
By default this will serve files without needing a login.
@@ -6536,7 +6600,6 @@ Update the rclone binary.
## Synopsis
-
This command downloads the latest release of rclone and replaces the
currently running binary. The download is verified with a hashsum and
cryptographically signed signature; see [the release signing
@@ -6643,7 +6706,9 @@ See the [global flags page](https://rclone.org/flags/) for global options not li
* [rclone serve docker](https://rclone.org/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API.
* [rclone serve ftp](https://rclone.org/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](https://rclone.org/commands/rclone_serve_http/) - Serve the remote over HTTP.
+* [rclone serve nfs](https://rclone.org/commands/rclone_serve_nfs/) - Serve the remote as an NFS mount
* [rclone serve restic](https://rclone.org/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
+* [rclone serve s3](https://rclone.org/commands/rclone_serve_s3/) - Serve remote:path over s3.
* [rclone serve sftp](https://rclone.org/commands/rclone_serve_sftp/) - Serve the remote over SFTP.
* [rclone serve webdav](https://rclone.org/commands/rclone_serve_webdav/) - Serve remote:path over WebDAV.
@@ -6676,7 +6741,6 @@ default "rclone (hostname)".
Use `--log-trace` in conjunction with `-vv` to enable additional debug
logging of all UPNP traffic.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -7045,6 +7109,7 @@ rclone serve dlna remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -7092,7 +7157,6 @@ Serve any remote on docker's volume plugin API.
## Synopsis
-
This command implements the Docker volume plugin API allowing docker to use
rclone as a data storage mechanism for various cloud providers.
rclone provides [docker volume plugin](/docker) based on it.
@@ -7131,7 +7195,6 @@ directory with book-keeping records of created and mounted volumes.
All mount and VFS options are submitted by the docker daemon via API, but
you can also provide defaults on the command line as well as set path to the
config file and cache directory or adjust logging verbosity.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -7518,6 +7581,7 @@ rclone serve docker [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -7587,7 +7651,6 @@ then using Authentication is advised - see the next section for info.
By default this will serve files without needing a login.
You can set a single username and password with the --user and --pass flags.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -8040,6 +8103,7 @@ rclone serve ftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -8171,6 +8235,17 @@ to be used within the template to server pages:
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
+The server also makes the following functions available so that they can be used within the
+template. These functions help extend the options for dynamic rendering of HTML. They can
+be used to render HTML based on specific conditions.
+
+| Function | Description |
+| :---------- | :---------- |
+| afterEpoch | Returns the time since the epoch for the given time. |
+| contains | Checks whether a given substring is present or not in a given string. |
+| hasPrefix | Checks whether the given string begins with the specified prefix. |
+| hasSuffix | Checks whether the given string ends with the specified suffix. |
+
### Authentication
By default this will serve files without needing a login.
@@ -8197,7 +8272,6 @@ The password file can be updated while rclone is running.
Use `--realm` to set the authentication realm.
Use `--salt` to change the password hashing salt from the default.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -8659,6 +8733,448 @@ rclone serve http remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+```
+
+
+## Filter Options
+
+Flags for filtering directory listings.
+
+```
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+```
+
+See the [global flags page](https://rclone.org/flags/) for global options not listed here.
+
+# SEE ALSO
+
+* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.
+
+# rclone serve nfs
+
+Serve the remote as an NFS mount
+
+## Synopsis
+
+Create an NFS server that serves the given remote over the network.
+
+The primary purpose for this command is to enable [mount command](https://rclone.org/commands/rclone_mount/) on recent macOS versions where
+installing FUSE is very cumbersome.
+
+Since this is running on NFSv3, no authentication method is available. Any client
+will be able to access the data. To limit access, you can serve NFS on a loopback address
+and rely on secure tunnels (such as SSH). For this reason, by default, a random TCP port is chosen and the loopback interface is used for the listening address,
+meaning that it is only available to the local machine. If you want other machines to access the
+NFS mount over the local network, you need to specify the listening address and port using the `--addr` flag.
+
+Modifying files through NFS protocol requires VFS caching. Usually you will need to specify `--vfs-cache-mode`
+in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode,
+the mount will be read-only.
+
+To serve NFS over the network, use the following command:
+
+    rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
+
+We specify a fixed port here so that the same port can be used in the mount command.
+
+To mount the server under Linux/macOS, use the following command:
+
+    mount -oport=$PORT,mountport=$PORT $HOSTNAME: path/to/mountpoint
+
+where `$PORT` is the same port number used in the serve nfs command.
+
+This feature is only available on Unix platforms.
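+
+As a minimal sketch of the loopback-plus-tunnel approach described
+above (the port number 2049, `user@server` and the paths are
+placeholders, not taken from this documentation):
+
+    # on the server: listen on the loopback interface only
+    rclone serve nfs remote: --addr 127.0.0.1:2049 --vfs-cache-mode full
+
+    # on the client: forward the port over SSH, then mount via localhost
+    ssh -N -L 2049:127.0.0.1:2049 user@server &
+    mount -oport=2049,mountport=2049 localhost: path/to/mountpoint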
+
+## VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk
+filing system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the
+VFS layer has to deal with that. Because there is no one right way of
+doing this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info
+about files and directories (but not the data) in memory.
+
+## VFS Directory Cache
+
+Using the `--dir-cache-time` flag, you can control how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made through the VFS will appear immediately or
+invalidate the cache.
+
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+
+However, changes made directly on the cloud storage by the web
+interface or a different copy of rclone will only be picked up once
+the directory cache expires if the backend configured does not support
+polling for changes. If the backend supports polling, changes will be
+picked up within the polling interval.
+
+You can send a `SIGHUP` signal to rclone for it to flush all
+directory caches, regardless of how old they are. Assuming only one
+rclone instance is running, you can reset the cache like this:
+
+    kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+    rclone rc vfs/forget
+
+Or individual files or directories:
+
+    rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+## VFS File Buffering
+
+The `--buffer-size` flag determines the amount of memory
+that will be used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the memory used per open file. The
+buffer will only use memory for data that is downloaded but not
+yet read. If the buffer is empty, only a small amount of memory will
+be used.
+
+The maximum memory used by rclone for buffering can be up to
+`--buffer-size * open files`.
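+
+As a rough worked example: with `--buffer-size 16M` and 10 files open
+at once, up to about 160M of memory may be used for read buffering. If
+that is too much, the buffer can be reduced, e.g. (a sketch, the value
+is only an illustration):
+
+    rclone serve nfs remote: --buffer-size 8M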
+
+## VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed and if they haven't been accessed for `--vfs-write-back`
+seconds. If rclone is quit or dies with files that haven't been
+uploaded, these will be uploaded next time rclone is run with the same
+flags.
+
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, e.g. s, m, h, d, w.
+
+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
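+
+For example, a sketch giving two rclone instances their own cache
+hierarchies (the directory paths are only illustrative):
+
+    rclone serve nfs remote: --vfs-cache-mode full --cache-dir /var/cache/rclone-nfs
+    rclone mount remote: /mnt/remote --vfs-cache-mode full --cache-dir /var/cache/rclone-mount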
+
+### --vfs-cache-mode off
+
+In this mode (the default) the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, but uses the minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from
+the remote, write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+
+### --vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+data is read from the remote this is buffered to disk as well.
+
+In this mode the files in the cache will be sparse files and rclone
+will keep track of which bits of the files it has downloaded.
+
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file. These files will appear to be
+their full size in the cache, but they will be sparse files with only
+the data that has been downloaded present in them.
+
+This mode should support all normal file system operations and is
+otherwise identical to `--vfs-cache-mode writes`.
+
+When reading a file rclone will read `--buffer-size` plus
+`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
+whereas the `--vfs-read-ahead` is buffered on disk.
+
+When using this mode it is recommended that `--buffer-size` is not set
+too large and `--vfs-read-ahead` is set large if required.
+
+**IMPORTANT** not all file systems support sparse files. In particular
+FAT/exFAT do not. Rclone will perform very badly if the cache
+directory is on a filesystem which doesn't support sparse files and it
+will log an ERROR message if one is detected.
+
+### Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file
+copy has changed relative to a remote file. Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint. This makes the
+fingerprinting less accurate but much faster and will improve the
+opening time of cached files.
+
+If you are running a vfs cache over `local`, `s3` or `swift` backends
+then using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of
+the files in the cache may be invalidated and the files will need to
+be downloaded again.
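+
+For example, a possible invocation for a VFS cache over an `s3` remote,
+where fast fingerprinting avoids the slow modtime lookups (the remote
+name is only an illustration):
+
+    rclone serve nfs s3remote: --vfs-cache-mode full --vfs-fast-fingerprint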
+
+## VFS Chunked Reading
+
+When rclone reads files from a remote it reads them in chunks. This
+means that rather than requesting the whole file rclone reads the
+chunk specified. This can reduce the used download quota for some
+remotes by requesting only chunks from the remote that are actually
+read, at the cost of an increased number of requests.
+
+These flags control the chunking:
+
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+ --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+
+Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
+and then double the size for each read. When `--vfs-read-chunk-size-limit` is
+specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
+open file will get doubled only until the specified value is reached. If the
+value is "off", which is the default, the limit is disabled and the chunk size
+will grow indefinitely.
+
+With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
+the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
+When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
+0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+
+Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
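+
+For example, a sketch that starts reading in 64M chunks and caps the
+doubling at 1G:
+
+    rclone serve nfs remote: --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G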
+
+## VFS Performance
+
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
+feature.
+
+In particular S3 and Swift benefit hugely from the `--no-modtime` flag
+(or use `--use-server-modtime` for a slightly different effect) as each
+read of the modification time takes a transaction.
+
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --read-only Only allow read-only access.
+
+Sometimes rclone is delivered reads or writes out of order. Rather
+than seeking rclone will wait a short time for the in sequence read or
+write to come in. These flags only come into effect when not using an
+on disk cache file.
+
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+
+When using VFS write caching (`--vfs-cache-mode` with value writes or full),
+the global flag `--transfers` can be set to adjust the number of parallel uploads of
+modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
+
+ --transfers int Number of file transfers to run in parallel (default 4)
+
+## VFS Case Sensitivity
+
+Linux file systems are case-sensitive: two files can differ only
+by case, and the exact case must be used when opening a file.
+
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case used
+to create the file is preserved and available for programs to query.
+It is not allowed for two files in the same directory to differ only by case.
+
+Usually file systems on macOS are case-insensitive. It is possible to make macOS
+file systems case-sensitive but that is not the default.
+
+The `--vfs-case-insensitive` VFS flag controls how rclone handles these
+two cases. If its value is "false", rclone passes file names to the remote
+as-is. If the flag is "true" (or appears without a value on the
+command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote. If an argument refers
+to an existing file with exactly the same name, then the case of the existing
+file on the disk will be used. However, if a file name with exactly the same
+name is not found but a name differing only by case exists, rclone will
+transparently fixup the name. This fixup happens only when an existing file
+is requested. Case sensitivity of file names created anew by rclone is
+controlled by the underlying remote.
+
+Note that case sensitivity of the operating system running rclone (the target)
+may differ from case sensitivity of a file system presented by rclone (the source).
+The flag controls whether "fixup" is performed to satisfy the target.
+
+If the flag is not provided on the command line, then its default value depends
+on the operating system where rclone runs: "true" on Windows and macOS, "false"
+otherwise. If the flag is provided without a value, then it is "true".
+
+## VFS Disk Options
+
+This flag allows you to manually set the statistics about the filing system.
+It can be useful when those statistics cannot be read correctly automatically.
+
+ --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+
+## Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running `df` on the
+filesystem, then pass the flag `--vfs-used-is-size` to rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to `rclone size`
+and compute the total used space itself.
+
+_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
+result is accurate. However, this is very inefficient and may cost lots of API
+calls resulting in extra charges. Use it as a last resort and only with caching.
+
+
+```
+rclone serve nfs remote:path [flags]
+```
+
+## Options
+
+```
+ --addr string IPaddress:Port or :Port to bind server to
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for nfs
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -8893,6 +9409,593 @@ rclone serve restic remote:path [flags]
```
+See the [global flags page](https://rclone.org/flags/) for global options not listed here.
+
+# SEE ALSO
+
+* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.
+
+# rclone serve s3
+
+Serve remote:path over s3.
+
+## Synopsis
+
+`serve s3` implements a basic s3 server that serves a remote via s3.
+This can be viewed with an s3 client, or you can make an [s3 type
+remote](https://rclone.org/s3/) to read and write to it with rclone.
+
+`serve s3` is considered **Experimental** so use with care.
+
+The S3 server supports Signature Version 4 authentication. Just use
+`--auth-key accessKey,secretKey` and set the `Authorization`
+header correctly in the request. (See the [AWS
+docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
+
+`--auth-key` can be repeated for multiple auth pairs. If
+`--auth-key` is not provided then `serve s3` will allow anonymous
+access.
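+
+For example, a sketch serving with two independent key pairs (the key
+values are placeholders):
+
+```
+rclone serve s3 remote:path \
+  --auth-key ACCESS_KEY_1,SECRET_KEY_1 \
+  --auth-key ACCESS_KEY_2,SECRET_KEY_2
+```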
+
+Please note that some clients may require HTTPS endpoints. See [the
+SSL docs](#ssl-tls) for more information.
+
+This command uses the [VFS directory cache](#vfs-virtual-file-system).
+All the functionality will work with `--vfs-cache-mode off`.
+`--vfs-cache-mode full` (or `writes`) can be used to cache objects
+locally to improve performance.
+
+Use `--force-path-style=false` if you want to use the bucket name as a
+part of the hostname (such as mybucket.local).
+
+Use `--etag-hash` if you want to change the hash used for the `ETag`.
+Note that using anything other than `MD5` (the default) is likely to
+cause problems for S3 clients which rely on the `ETag` being the MD5.
+
+## Quickstart
+
+For a simple set up, to serve `remote:path` over s3, run the server
+like this:
+
+```
+rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+```
+
+This will be compatible with an rclone remote which is defined like this:
+
+```
+[serves3]
+type = s3
+provider = Rclone
+endpoint = http://127.0.0.1:8080/
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+use_multipart_uploads = false
+```
+
+Note that setting `use_multipart_uploads = false` is to work around
+[a bug](#bugs) which will be fixed in due course.
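+
+Once the server is running, the `serves3` remote defined above can be
+used like any other S3-type remote, for example (the bucket name is
+only an illustration):
+
+```
+rclone mkdir serves3:mybucket
+rclone copy /path/to/files serves3:mybucket
+rclone lsd serves3:
+```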
+
+## Bugs
+
+When uploading multipart files `serve s3` holds all the parts in
+memory (see [#7453](https://github.com/rclone/rclone/issues/7453)).
+This is a limitation of the library rclone uses for serving S3 and will
+hopefully be fixed at some point.
+
+Multipart server side copies do not work (see
+[#7454](https://github.com/rclone/rclone/issues/7454)). These take a
+very long time and eventually fail. The default threshold for
+multipart server side copies is 5G which is the maximum it can be, so
+files above this size will fail to be server side copied.
+
+For a current list of `serve s3` bugs see the [serve
+s3](https://github.com/rclone/rclone/labels/serve%20s3) bug category
+on GitHub.
+
+## Limitations
+
+`serve s3` will treat all directories in the root as buckets and
+ignore all files in the root. You can use `CreateBucket` to create
+folders under the root, but you can't create empty folders under other
+folders not in the root.
+
+When using `PutObject` or `DeleteObject`, rclone will automatically
+create or clean up empty folders. If you don't want to clean up empty
+folders automatically, use `--no-cleanup`.
+
+When using `ListObjects`, rclone will use `/` when the delimiter is
+empty. This reduces backend requests with no effect on most
+operations, but if the delimiter is something other than `/` (and not
+empty), rclone will do a full recursive search of the backend, which
+can take some time.
+
+Versioning is not currently supported.
+
+Metadata will only be saved in memory, apart from the rclone `mtime`
+metadata, which will be set as the modification time of the file.
+
+## Supported operations
+
+`serve s3` currently supports the following operations.
+
+- Bucket
+ - `ListBuckets`
+ - `CreateBucket`
+ - `DeleteBucket`
+- Object
+ - `HeadObject`
+ - `ListObjects`
+ - `GetObject`
+ - `PutObject`
+ - `DeleteObject`
+ - `DeleteObjects`
+ - `CreateMultipartUpload`
+ - `CompleteMultipartUpload`
+ - `AbortMultipartUpload`
+ - `CopyObject`
+ - `UploadPart`
+
+Other operations will return error `Unimplemented`.
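+
+As an illustration, any standard S3 client pointed at the endpoint can
+exercise these operations. A sketch using the AWS CLI (the AWS CLI
+itself, the bucket name and the credential values are assumptions, not
+part of this documentation):
+
+```
+export AWS_ACCESS_KEY_ID=ACCESS_KEY_ID
+export AWS_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY
+export AWS_DEFAULT_REGION=us-east-1   # any region value will do for a custom endpoint
+aws --endpoint-url http://127.0.0.1:8080 s3 mb s3://mybucket
+aws --endpoint-url http://127.0.0.1:8080 s3 cp file.txt s3://mybucket/
+aws --endpoint-url http://127.0.0.1:8080 s3 ls s3://mybucket
+```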
+
+## Server options
+
+Use `--addr` to specify which IP address and port the server should
+listen on, eg `--addr 1.2.3.4:8000` or `--addr :8080` to listen to all
+IPs. By default it only listens on localhost. You can use port
+:0 to let the OS choose an available port.
+
+If you set `--addr` to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+You can use a unix socket by setting the url to `unix:///path/to/socket`
+or just by using an absolute path name. Note that unix sockets bypass the
+authentication - this is expected to be done with file system permissions.
+
+`--addr` may be repeated to listen on multiple IPs/ports/sockets.
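+
+For example, a sketch listening on both a loopback TCP port and a unix
+socket (the socket path is only an illustration):
+
+```
+rclone serve s3 remote:path \
+  --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY \
+  --addr 127.0.0.1:8080 \
+  --addr unix:///tmp/rclone-s3.sock
+```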
+
+`--server-read-timeout` and `--server-write-timeout` can be used to
+control the timeouts on the server. Note that this is the total time
+for a transfer.
+
+`--max-header-bytes` controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+`--baseurl` controls the URL prefix that rclone serves from. By default
+rclone will serve from the root. If you used `--baseurl "/rclone"` then
+rclone would serve from a URL starting with "/rclone/". This is
+useful if you wish to proxy rclone serve. Rclone automatically
+inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
+`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
+identically.
+
+### TLS (SSL)
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the `--cert` and `--key` flags.
+If you wish to do client side certificate validation then you will need to
+supply `--client-ca` also.
+
+`--cert` should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. `--key` should be the PEM encoded
+private key and `--client-ca` should be the PEM encoded client
+certificate authority certificate.
+
+`--min-tls-version` is the minimum TLS version that is acceptable. Valid
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default
+"tls1.0").
+
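+For example, a sketch serving over HTTPS (the certificate and key file
+names are only illustrations):
+
+```
+rclone serve s3 remote:path \
+  --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY \
+  --addr :8443 --cert server.crt --key server.key --min-tls-version tls1.2
+```
+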
+## VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk
+filing system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the
+VFS layer has to deal with that. Because there is no one right way of
+doing this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info
+about files and directories (but not the data) in memory.
+
+## VFS Directory Cache
+
+Using the `--dir-cache-time` flag, you can control how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made through the VFS will appear immediately or
+invalidate the cache.
+
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+
+However, changes made directly on the cloud storage by the web
+interface or a different copy of rclone will only be picked up once
+the directory cache expires if the backend configured does not support
+polling for changes. If the backend supports polling, changes will be
+picked up within the polling interval.
+
+You can send a `SIGHUP` signal to rclone for it to flush all
+directory caches, regardless of how old they are. Assuming only one
+rclone instance is running, you can reset the cache like this:
+
+    kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+    rclone rc vfs/forget
+
+Or individual files or directories:
+
+    rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+## VFS File Buffering
+
+The `--buffer-size` flag determines the amount of memory
+that will be used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the memory used per open file. The
+buffer will only use memory for data that is downloaded but not
+yet read. If the buffer is empty, only a small amount of memory will
+be used.
+
+The maximum memory used by rclone for buffering can be up to
+`--buffer-size * open files`.
+
+## VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed and if they haven't been accessed for `--vfs-write-back`
+seconds. If rclone is quit or dies with files that haven't been
+uploaded, these will be uploaded next time rclone is run with the same
+flags.
+
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, e.g. s, m, h, d, w.
+
+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
+
+### --vfs-cache-mode off
+
+In this mode (the default) the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, but uses the minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from
+the remote, write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+
+### --vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+data is read from the remote this is buffered to disk as well.
+
+In this mode the files in the cache will be sparse files and rclone
+will keep track of which bits of the files it has downloaded.
+
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file. These files will appear to be
+their full size in the cache, but they will be sparse files with only
+the data that has been downloaded present in them.
+
+This mode should support all normal file system operations and is
+otherwise identical to `--vfs-cache-mode writes`.
+
+When reading a file rclone will read `--buffer-size` plus
+`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
+whereas the `--vfs-read-ahead` is buffered on disk.
+
+When using this mode it is recommended that `--buffer-size` is not set
+too large and `--vfs-read-ahead` is set large if required.
+
+**IMPORTANT** not all file systems support sparse files. In particular
+FAT/exFAT do not. Rclone will perform very badly if the cache
+directory is on a filesystem which doesn't support sparse files and it
+will log an ERROR message if one is detected.
+
+### Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file
+copy has changed relative to a remote file. Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint. This makes the
+fingerprinting less accurate but much faster and will improve the
+opening time of cached files.
+
+If you are running a vfs cache over `local`, `s3` or `swift` backends
+then using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of
+the files in the cache may be invalidated and the files will need to
+be downloaded again.
+
+## VFS Chunked Reading
+
+When rclone reads files from a remote it reads them in chunks. This
+means that rather than requesting the whole file rclone reads the
+chunk specified. This can reduce the used download quota for some
+remotes by requesting only chunks from the remote that are actually
+read, at the cost of an increased number of requests.
+
+These flags control the chunking:
+
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+ --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+
+Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
+and then double the size for each read. When `--vfs-read-chunk-size-limit` is
+specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
+open file will get doubled only until the specified value is reached. If the
+value is "off", which is the default, the limit is disabled and the chunk size
+will grow indefinitely.
+
+With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
+the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
+When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
+0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+
+Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
+
+## VFS Performance
+
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
+feature.
+
+In particular S3 and Swift benefit hugely from the `--no-modtime` flag
+(or use `--use-server-modtime` for a slightly different effect) as each
+read of the modification time takes a transaction.
+
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --read-only Only allow read-only access.
+
+Sometimes rclone is delivered reads or writes out of order. Rather
+than seeking rclone will wait a short time for the in sequence read or
+write to come in. These flags only come into effect when not using an
+on disk cache file.
+
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+
+When using VFS write caching (`--vfs-cache-mode` with value writes or full),
+the global flag `--transfers` can be set to adjust the number of parallel uploads of
+modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
+
+ --transfers int Number of file transfers to run in parallel (default 4)
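+
+For example, a sketch raising the upload parallelism for a write cache
+(the value is only an illustration):
+
+    rclone serve s3 remote:path --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY --vfs-cache-mode writes --transfers 8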
+
+## VFS Case Sensitivity
+
+Linux file systems are case-sensitive: two files can differ only
+by case, and the exact case must be used when opening a file.
+
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case used
+to create the file is preserved and available for programs to query.
+It is not allowed for two files in the same directory to differ only by case.
+
+Usually file systems on macOS are case-insensitive. It is possible to make macOS
+file systems case-sensitive but that is not the default.
+
+The `--vfs-case-insensitive` VFS flag controls how rclone handles these
+two cases. If its value is "false", rclone passes file names to the remote
+as-is. If the flag is "true" (or appears without a value on the
+command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote. If an argument refers
+to an existing file with exactly the same name, then the case of the existing
+file on the disk will be used. However, if a file name with exactly the same
+name is not found but a name differing only by case exists, rclone will
+transparently fixup the name. This fixup happens only when an existing file
+is requested. Case sensitivity of file names created anew by rclone is
+controlled by the underlying remote.
+
+Note that case sensitivity of the operating system running rclone (the target)
+may differ from case sensitivity of a file system presented by rclone (the source).
+The flag controls whether "fixup" is performed to satisfy the target.
+
+If the flag is not provided on the command line, then its default value depends
+on the operating system where rclone runs: "true" on Windows and macOS, "false"
+otherwise. If the flag is provided without a value, then it is "true".
+
+## VFS Disk Options
+
+This flag allows you to manually set the statistics about the filing system.
+It can be useful when those statistics cannot be read correctly automatically.
+
+ --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+
+## Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running `df` on the
+filesystem, then pass the flag `--vfs-used-is-size` to rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to `rclone size`
+and compute the total used space itself.
+
+_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
+result is accurate. However, this is very inefficient and may cost lots of API
+calls resulting in extra charges. Use it as a last resort and only with caching.
+
+
+```
+rclone serve s3 remote:path [flags]
+```
+
+## Options
+
+```
+ --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --allow-origin string Origin which cross-domain request (CORS) can be executed from
+ --auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
+ --baseurl string Prefix for URLs - leave blank for root
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
+ --file-perms FileMode File permissions (default 0666)
+      --force-path-style                       If true use path style access if false use virtual hosted style (default true)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for s3
+ --key string TLS PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
+ --no-checksum Don't compare checksums on up/download
+ --no-cleanup Not to cleanup empty folder after object is deleted
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+```
+
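+As a rough sketch of how some of these options fit together (the remote name
+and the key pair below are illustrative assumptions), the server could be
+started with a v4 key pair:
+
+    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+
+and an S3 client pointed at the default `--addr` of `127.0.0.1:8080`, for
+example with boto3, using path-style addressing to match the
+`--force-path-style` default:
+
+```python
+import boto3
+from botocore.config import Config
+
+# Connect to the rclone serve s3 endpoint started above. The endpoint and
+# keys mirror the flag defaults and the illustrative --auth-key value;
+# adjust them if you changed --addr or --auth-key.
+s3 = boto3.client(
+    "s3",
+    endpoint_url="http://127.0.0.1:8080",
+    aws_access_key_id="ACCESS_KEY_ID",
+    aws_secret_access_key="SECRET_ACCESS_KEY",
+    region_name="us-east-1",  # region is only needed for request signing
+    config=Config(s3={"addressing_style": "path"}),
+)
+
+# List the buckets the server exposes.
+print(s3.list_buckets()["Buckets"])
+```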
+
+## Filter Options
+
+Flags for filtering directory listings.
+
+```
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+```
+
See the [global flags page](https://rclone.org/flags/) for global options not listed here.
# SEE ALSO
@@ -8957,7 +10060,6 @@ used. Omitting "restrict" and using `--sftp-path-override` to enable
checksumming is possible but less secure and you could use the SFTP server
provided by OpenSSH in this case.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -9410,6 +10512,7 @@ rclone serve sftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -9570,6 +10673,17 @@ to be used within the template to server pages:
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
+The server also makes the following functions available so that they can be used within the
+template. These functions help extend the options for dynamic rendering of HTML. They can
+be used to render HTML based on specific conditions.
+
+| Function | Description |
+| :---------- | :---------- |
+| afterEpoch | Returns the time since the epoch for the given time. |
+| contains | Checks whether a given substring is present in a given string. |
+| hasPrefix | Checks whether the given string begins with the specified prefix. |
+| hasSuffix | Checks whether the given string ends with the specified suffix. |
+
### Authentication
By default this will serve files without needing a login.
@@ -9596,7 +10710,6 @@ The password file can be updated while rclone is running.
Use `--realm` to set the authentication realm.
Use `--salt` to change the password hashing salt from the default.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -10060,6 +11173,7 @@ rclone serve webdav remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -10892,6 +12006,10 @@ Note that arbitrary metadata may be added to objects using the
`--metadata-set key=value` flag when the object is first uploaded.
This flag can be repeated as many times as necessary.
+The [--metadata-mapper](#metadata-mapper) flag can be used to pass the
+name of a program which can transform metadata when it is being
+copied from source to destination.
+
### Types of metadata
Metadata is divided into two types: system metadata and user metadata.
@@ -10969,6 +12087,7 @@ backend may implement.
| atime | Time of last access: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| mtime | Time of last modification: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| btime | Time of file creation (birth): RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
+| utime | Time of file upload: RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 |
| cache-control | Cache-Control header | no-cache |
| content-disposition | Content-Disposition header | inline |
| content-encoding | Content-Encoding header | gzip |
@@ -11696,8 +12815,9 @@ flag set) such as:
- sftp
Without `--inplace` (the default) rclone will first upload to a
-temporary file with an extension like this where `XXXXXX` represents a
-random string.
+temporary file with an extension like this, where `XXXXXX` represents a
+random string and `.partial` is the [--partial-suffix](#partial-suffix)
+value (`.partial` by default).
original-file-name.XXXXXX.partial
@@ -11919,12 +13039,123 @@ from reaching the limit. Only applicable for `--max-transfer`
Setting this flag enables rclone to copy the metadata from the source
to the destination. For local backends this is ownership, permissions,
-xattr etc. See the [#metadata](metadata section) for more info.
+xattr etc. See the [metadata section](#metadata) for more info.
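+
+For example (paths are illustrative), a copy that preserves metadata might be
+invoked as:
+
+    rclone copy -M source:path dest:path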
+
+### --metadata-mapper SpaceSepList {#metadata-mapper}
+
+If you supply the parameter `--metadata-mapper /path/to/program` then
+rclone will use that program to map metadata from source object to
+destination object.
+
+The argument to this flag should be a command with an optional space separated
+list of arguments. If one of the arguments has a space in it then enclose
+it in `"`; if you want a literal `"` in an argument then enclose the
+argument in `"` and double the `"`. See [CSV encoding](https://godoc.org/encoding/csv)
+for more info.
+
+ --metadata-mapper "python bin/test_metadata_mapper.py"
+ --metadata-mapper 'python bin/test_metadata_mapper.py "argument with a space"'
+ --metadata-mapper 'python bin/test_metadata_mapper.py "argument with ""two"" quotes"'
+
+This uses a simple JSON based protocol with input on STDIN and output
+on STDOUT. This will be called for every file and directory copied and
+may be called concurrently.
+
+The program's job is to take a metadata blob on the input and turn it
+into a metadata blob on the output suitable for the destination
+backend.
+
+Input to the program (via STDIN) might look like this. This provides
+some context for the `Metadata` which may be important.
+
+- `SrcFs` is the config string for the remote that the object is currently on.
+- `SrcFsType` is the name of the source backend.
+- `DstFs` is the config string for the remote that the object is being copied to
+- `DstFsType` is the name of the destination backend.
+- `Remote` is the path of the file relative to the root.
+- `Size`, `MimeType`, `ModTime` are attributes of the file.
+- `IsDir` is `true` if this is a directory (not yet implemented).
+- `ID` is the source `ID` of the file if known.
+- `Metadata` is the backend specific metadata as described in the backend docs.
+
+```json
+{
+ "SrcFs": "gdrive:",
+ "SrcFsType": "drive",
+ "DstFs": "newdrive:user",
+ "DstFsType": "onedrive",
+ "Remote": "test.txt",
+ "Size": 6,
+ "MimeType": "text/plain; charset=utf-8",
+ "ModTime": "2022-10-11T17:53:10.286745272+01:00",
+ "IsDir": false,
+ "ID": "xyz",
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain1.com",
+ "permissions": "...",
+ "description": "my nice file",
+ "starred": "false"
+ }
+}
+```
+
+The program should then modify the input as desired and send it to
+STDOUT. The returned `Metadata` field will be used in its entirety for
+the destination object. Any other fields will be ignored. Note in this
+example we translate user names and permissions and add something to
+the description:
+
+```json
+{
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain2.com",
+ "permissions": "...",
+ "description": "my nice file [migrated from domain1]",
+ "starred": "false"
+ }
+}
+```
+
+Metadata can be removed here too.
+
+An example python program might look something like this to implement
+the above transformations.
+
+```python
+import sys, json
+
+i = json.load(sys.stdin)
+metadata = i["Metadata"]
+# Add tag to description
+if "description" in metadata:
+ metadata["description"] += " [migrated from domain1]"
+else:
+ metadata["description"] = "[migrated from domain1]"
+# Modify owner
+if "owner" in metadata:
+ metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
+o = { "Metadata": metadata }
+json.dump(o, sys.stdout, indent="\t")
+```
+
+You can find this example (slightly expanded) in the rclone source code at
+[bin/test_metadata_mapper.py](https://github.com/rclone/rclone/blob/master/bin/test_metadata_mapper.py).
+
+If you want to see the input to the metadata mapper and the output
+returned from it in the log you can use `-vv --dump mapper`.
+
+See the [metadata section](#metadata) for more info.
### --metadata-set key=value
Add metadata `key` = `value` when uploading. This can be repeated as
-many times as required. See the [#metadata](metadata section) for more
+many times as required. See the [metadata section](#metadata) for more
info.
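+
+For example (paths and values are illustrative; `-M` is included to enable
+metadata support), a couple of items could be set while uploading:
+
+    rclone copy -M --metadata-set "description=my nice file" --metadata-set owner=user1@domain1.com source:path dest:path
+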
### --modify-window=TIME ###
@@ -12144,6 +13375,15 @@ If you want perfect ordering then you will need to specify
[--check-first](#check-first) which will find all the files which need
transferring first before transferring any.
+### --partial-suffix {#partial-suffix}
+
+When [--inplace](#inplace) is not used, rclone uses the value of
+`--partial-suffix` as the suffix for temporary files.
+
+The suffix is limited to 16 characters.
+
+The default is `.partial`.
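+
+For example (paths are illustrative), with a hypothetical shorter suffix
+
+    rclone copy --partial-suffix .tmp source:path dest:path
+
+temporary files would be named like `original-file-name.XXXXXX.tmp`.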
+
### --password-command SpaceSepList ###
This flag supplies a program which should supply the config password
@@ -12158,9 +13398,9 @@ for more info.
Eg
- --password-command echo hello
- --password-command echo "hello with space"
- --password-command echo "hello with ""quotes"" and space"
+ --password-command "echo hello"
+ --password-command 'echo "hello with space"'
+ --password-command 'echo "hello with ""quotes"" and space"'
See the [Configuration Encryption](#configuration-encryption) for more info.
@@ -12529,34 +13769,50 @@ there were IO errors`.
### --fast-list ###
When doing anything which involves a directory listing (e.g. `sync`,
-`copy`, `ls` - in fact nearly every command), rclone normally lists a
-directory and processes it before using more directory lists to
-process any subdirectories. This can be parallelised and works very
-quickly using the least amount of memory.
+`copy`, `ls` - in fact nearly every command), rclone has different
+strategies to choose from.
-However, some remotes have a way of listing all files beneath a
-directory in one (or a small number) of transactions. These tend to
-be the bucket-based remotes (e.g. S3, B2, GCS, Swift).
+The basic strategy is to list one directory and process it before using
+more directory lists to process any subdirectories. This is a mandatory
+backend feature, called `List`, which means it is supported by all backends.
+This strategy uses a small amount of memory, and because it can be parallelized
+it is fast for operations involving processing of the list results.
-If you use the `--fast-list` flag then rclone will use this method for
-listing directories. This will have the following consequences for
-the listing:
+Some backends provide support for an alternative strategy, where all
+files beneath a directory can be listed in one (or a small number of)
+transactions. Rclone supports this alternative strategy through an optional
+backend feature called [`ListR`](https://rclone.org/overview/#listr). You can see in the storage
+system overview documentation's [optional features](https://rclone.org/overview/#optional-features)
+section which backends it is enabled for (these tend to be the bucket-based
+ones, e.g. S3, B2, GCS, Swift). This strategy requires fewer transactions
+for highly recursive operations, which is important on backends where
+transactions are charged for or heavily rate limited. It may be faster (due to
+fewer transactions) or slower (because it can't be parallelized) depending on
+various parameters, and may require more memory if rclone has to keep the
+whole listing in memory.
- * It **will** use fewer transactions (important if you pay for them)
- * It **will** use more memory. Rclone has to load the whole listing into memory.
- * It *may* be faster because it uses fewer transactions
- * It *may* be slower because it can't be parallelized
+Which listing strategy rclone picks for a given operation is complicated, but
+in general it tries to choose the best possible. It will prefer `ListR` in
+situations where it doesn't need to store the listed files in memory, e.g.
+for unlimited recursive `ls` command variants. In other situations it will
+prefer `List`, e.g. for `sync` and `copy`, where it needs to keep the listed
+files in memory, and is performing operations on them where parallelization
+may be a huge advantage.
-rclone should always give identical results with and without
-`--fast-list`.
+Rclone is not able to take all relevant parameters into account for deciding
+the best strategy, and therefore allows you to influence the choice in two ways:
+you can stop rclone from using `ListR` by disabling the feature, using the
+[--disable](#disable-feature-feature) option (`--disable ListR`), or you can
+allow rclone to use `ListR` where it would normally choose not to do so due to
+higher memory usage, using the `--fast-list` option. Rclone should always
+produce identical results either way. Using `--disable ListR` or `--fast-list`
+on a remote which doesn't support `ListR` does nothing; rclone will just
+ignore it.
-If you pay for transactions and can fit your entire sync listing into
-memory then `--fast-list` is recommended. If you have a very big sync
-to do then don't use `--fast-list` otherwise you will run out of
-memory.
-
-If you use `--fast-list` on a remote which doesn't support it, then
-rclone will just ignore it.
+A rule of thumb is that if you pay for transactions and can fit your entire
+sync listing into memory, then `--fast-list` is recommended. If you have a
+very big sync to do, then don't use `--fast-list`, otherwise you will run out
+of memory. Run some tests and compare before you decide, and if in doubt then
+just leave the default and let rclone decide, i.e. do not use `--fast-list`.
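+
+For example (bucket and paths are illustrative), forcing the recursive listing
+strategy for a sync, or disabling it entirely, would look like:
+
+    rclone sync --fast-list s3:bucket/path /local/path
+    rclone sync --disable ListR s3:bucket/path /local/path
+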
### --timeout=TIME ###
@@ -12893,6 +14149,12 @@ This dumps a list of the open files at the end of the command. It
uses the `lsof` command to do that so you'll need that installed to
use it.
+#### --dump mapper ####
+
+This shows the JSON blobs being sent to the program supplied with
+`--metadata-mapper` and received from it. It can be useful for
+debugging the metadata mapper interface.
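+
+For example (paths are illustrative, reusing the mapper script from above):
+
+    rclone copy -vv --dump mapper --metadata-mapper "python bin/test_metadata_mapper.py" source:path dest:path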
+
### --memprofile=FILE ###
Write memory profile to file. This can be analysed with `go tool pprof`.
@@ -15305,6 +16567,56 @@ See the [about](https://rclone.org/commands/rclone_about/) command for more info
**Authentication is required for this call.**
+### operations/check: check the source and destination are the same {#operations-check}
+
+Checks the files in the source and destination match. It compares
+sizes and hashes and logs a report of files that don't
+match. It doesn't alter the source or destination.
+
+This takes the following parameters:
+
+- srcFs - a remote name string e.g. "drive:" for the source, "/" for local filesystem
+- dstFs - a remote name string e.g. "drive2:" for the destination, "/" for local filesystem
+- download - check by downloading rather than with hash
+- checkFileHash - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- checkFileFs - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- checkFileRemote - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- oneWay - check one way only, source files must exist on remote
+- combined - make a combined report of changes (default false)
+- missingOnSrc - report all files missing from the source (default true)
+- missingOnDst - report all files missing from the destination (default true)
+- match - report all matching files (default false)
+- differ - report all non-matching files (default true)
+- error - report all files with errors (hashing or reading) (default true)
+
+If you supply the download flag, it will download the data from
+both remotes and check them against each other on the fly. This can
+be useful for remotes that don't support hashes or if you really want
+to check all the data.
+
+If you supply the size-only global flag, it will only compare the sizes, not
+the hashes. Use this for a quick check.
+
+If you supply the checkFileHash option with a valid hash name, the
+checkFileFs:checkFileRemote must point to a text file in the SUM
+format. This treats the checksum file as the source and dstFs as the
+destination. Note that srcFs is not used and should not be supplied in
+this case.
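+
+For example (remote names are illustrative), a one-way check could be invoked
+via the remote control like this:
+
+    rclone rc operations/check srcFs=drive: dstFs=drive2: oneWay=true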
+
+Returns:
+
+- success - true if no error, false otherwise
+- status - textual summary of check, OK or text string
+- hashType - hash used in check, may be missing
+- combined - array of strings of combined report of changes
+- missingOnSrc - array of strings of all files missing from the source
+- missingOnDst - array of strings of all files missing from the destination
+- match - array of strings of all matching files
+- differ - array of strings of all non-matching files
+- error - array of strings of all files with errors (hashing or reading)
+
+**Authentication is required for this call.**
+
### operations/cleanup: Remove trashed files in the remote or path {#operations-cleanup}
This takes the following parameters:
@@ -16236,55 +17548,55 @@ show through.
Here is an overview of the major features of each cloud storage system.
-| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | Metadata |
-| ---------------------------- |:----------------:|:-------:|:----------------:|:---------------:|:---------:|:--------:|
-| 1Fichier | Whirlpool | - | No | Yes | R | - |
-| Akamai Netstorage | MD5, SHA256 | R/W | No | No | R | - |
-| Amazon Drive | MD5 | - | Yes | No | R | - |
-| Amazon S3 (or S3 compatible) | MD5 | R/W | No | No | R/W | RWU |
-| Backblaze B2 | SHA1 | R/W | No | No | R/W | - |
-| Box | SHA1 | R/W | Yes | No | - | - |
-| Citrix ShareFile | MD5 | R/W | Yes | No | - | - |
-| Dropbox | DBHASH ¹ | R | Yes | No | - | - |
-| Enterprise File Fabric | - | R/W | Yes | No | R/W | - |
-| FTP | - | R/W ¹⁰ | No | No | - | - |
-| Google Cloud Storage | MD5 | R/W | No | No | R/W | - |
-| Google Drive | MD5 | R/W | No | Yes | R/W | - |
-| Google Photos | - | - | No | Yes | R | - |
-| HDFS | - | R/W | No | No | - | - |
-| HiDrive | HiDrive ¹² | R/W | No | No | - | - |
-| HTTP | - | R | No | No | R | - |
-| Internet Archive | MD5, SHA1, CRC32 | R/W ¹¹ | No | No | - | RWU |
-| Jottacloud | MD5 | R/W | Yes | No | R | - |
-| Koofr | MD5 | - | Yes | No | - | - |
-| Mail.ru Cloud | Mailru ⁶ | R/W | Yes | No | - | - |
-| Mega | - | - | No | Yes | - | - |
-| Memory | MD5 | R/W | No | No | - | - |
-| Microsoft Azure Blob Storage | MD5 | R/W | No | No | R/W | - |
-| Microsoft OneDrive | QuickXorHash ⁵ | R/W | Yes | No | R | - |
-| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
-| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
-| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
-| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
-| PikPak | MD5 | R | No | No | R | - |
-| premiumize.me | - | - | Yes | No | R | - |
-| put.io | CRC-32 | R/W | No | Yes | R | - |
-| Proton Drive | SHA1 | R/W | No | No | R | - |
-| QingStor | MD5 | - ⁹ | No | No | R/W | - |
-| Quatrix by Maytech | - | R/W | No | No | - | - |
-| Seafile | - | - | No | No | - | - |
-| SFTP | MD5, SHA1 ² | R/W | Depends | No | - | - |
-| Sia | - | - | No | No | - | - |
-| SMB | - | - | Yes | No | - | - |
-| SugarSync | - | - | No | No | - | - |
-| Storj | - | R | No | No | - | - |
-| Uptobox | - | - | No | Yes | - | - |
-| WebDAV | MD5, SHA1 ³ | R ⁴ | Depends | No | - | - |
-| Yandex Disk | MD5 | R/W | No | No | R | - |
-| Zoho WorkDrive | - | - | No | No | - | - |
-| The local filesystem | All | R/W | Depends | No | - | RWU |
-
-### Notes
+| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | Metadata |
+| ---------------------------- |:-----------------:|:-------:|:----------------:|:---------------:|:---------:|:--------:|
+| 1Fichier | Whirlpool | - | No | Yes | R | - |
+| Akamai Netstorage | MD5, SHA256 | R/W | No | No | R | - |
+| Amazon Drive | MD5 | - | Yes | No | R | - |
+| Amazon S3 (or S3 compatible) | MD5 | R/W | No | No | R/W | RWU |
+| Backblaze B2 | SHA1 | R/W | No | No | R/W | - |
+| Box | SHA1 | R/W | Yes | No | - | - |
+| Citrix ShareFile | MD5 | R/W | Yes | No | - | - |
+| Dropbox | DBHASH ¹ | R | Yes | No | - | - |
+| Enterprise File Fabric | - | R/W | Yes | No | R/W | - |
+| FTP | - | R/W ¹⁰ | No | No | - | - |
+| Google Cloud Storage | MD5 | R/W | No | No | R/W | - |
+| Google Drive | MD5, SHA1, SHA256 | R/W | No | Yes | R/W | - |
+| Google Photos | - | - | No | Yes | R | - |
+| HDFS | - | R/W | No | No | - | - |
+| HiDrive | HiDrive ¹² | R/W | No | No | - | - |
+| HTTP | - | R | No | No | R | - |
+| Internet Archive | MD5, SHA1, CRC32 | R/W ¹¹ | No | No | - | RWU |
+| Jottacloud | MD5 | R/W | Yes | No | R | RW |
+| Koofr | MD5 | - | Yes | No | - | - |
+| Linkbox | - | R | No | No | - | - |
+| Mail.ru Cloud | Mailru ⁶ | R/W | Yes | No | - | - |
+| Mega | - | - | No | Yes | - | - |
+| Memory | MD5 | R/W | No | No | - | - |
+| Microsoft Azure Blob Storage | MD5 | R/W | No | No | R/W | - |
+| Microsoft Azure Files Storage | MD5 | R/W | Yes | No | R/W | - |
+| Microsoft OneDrive | QuickXorHash ⁵ | R/W | Yes | No | R | - |
+| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
+| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
+| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
+| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
+| PikPak | MD5 | R | No | No | R | - |
+| premiumize.me | - | - | Yes | No | R | - |
+| put.io | CRC-32 | R/W | No | Yes | R | - |
+| Proton Drive | SHA1 | R/W | No | No | R | - |
+| QingStor | MD5 | - ⁹ | No | No | R/W | - |
+| Quatrix by Maytech | - | R/W | No | No | - | - |
+| Seafile | - | - | No | No | - | - |
+| SFTP | MD5, SHA1 ² | R/W | Depends | No | - | - |
+| Sia | - | - | No | No | - | - |
+| SMB | - | R/W | Yes | No | - | - |
+| SugarSync | - | - | No | No | - | - |
+| Storj | - | R | No | No | - | - |
+| Uptobox | - | - | No | Yes | - | - |
+| WebDAV | MD5, SHA1 ³ | R ⁴ | Depends | No | - | - |
+| Yandex Disk | MD5 | R/W | No | No | R | - |
+| Zoho WorkDrive | - | - | No | No | - | - |
+| The local filesystem | All | R/W | Depends | No | - | RWU |
¹ Dropbox supports [its own custom
hash](https://www.dropbox.com/developers/reference/content-hash).
@@ -16312,7 +17624,7 @@ mistake or an unsupported feature.
⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
¹⁰ FTP supports modtimes for the major FTP servers, and also others
-if they advertised required protocol extensions. See [this](https://rclone.org/ftp/#modified-time)
+if they advertised required protocol extensions. See [this](https://rclone.org/ftp/#modification-times)
for more details.
¹¹ Internet Archive requires option `wait_archive` to be set to a non-zero value
@@ -16691,66 +18003,71 @@ upon backend-specific capabilities.
| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | MultithreadUpload | LinkSharing | About | EmptyDir |
| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------------|:------------:|:-----:|:--------:|
-| 1Fichier | No | Yes | Yes | No | No | No | No | No | Yes | No | Yes |
-| Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | No | Yes |
-| Amazon Drive | Yes | No | Yes | Yes | No | No | No | No | No | No | Yes |
-| Amazon S3 (or S3 compatible) | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
-| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
-| Box | Yes | Yes | Yes | Yes | Yes ‡‡ | No | Yes | No | Yes | Yes | Yes |
-| Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
-| Dropbox | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
-| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
-| FTP | No | No | Yes | Yes | No | No | Yes | No | No | No | Yes |
-| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No | No |
-| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
-| Google Photos | No | No | No | No | No | No | No | No | No | No | No |
-| HDFS | Yes | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes |
-| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | No | Yes |
-| HTTP | No | No | No | No | No | No | No | No | No | No | Yes |
-| Internet Archive | No | Yes | No | No | Yes | Yes | No | No | Yes | Yes | No |
-| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
-| Koofr | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
-| Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| Mega | Yes | No | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | No |
-| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | Yes | No | No | No |
-| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
-| OpenStack Swift | Yes † | Yes | No | No | No | Yes | Yes | No | No | Yes | No |
-| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | No | No | No | No |
-| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| premiumize.me | Yes | No | Yes | Yes | No | No | No | No | Yes | Yes | Yes |
-| put.io | Yes | No | Yes | Yes | Yes | No | Yes | No | No | Yes | Yes |
-| Proton Drive | Yes | No | Yes | Yes | Yes | No | No | No | No | Yes | Yes |
-| QingStor | No | Yes | No | No | Yes | Yes | No | No | No | No | No |
-| Quatrix by Maytech | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
-| Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
-| SFTP | No | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes |
-| Sia | No | No | No | No | No | No | Yes | No | No | No | Yes |
-| SMB | No | No | Yes | Yes | No | No | Yes | Yes | No | No | Yes |
-| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
-| Storj | Yes ☨ | Yes | Yes | No | No | Yes | Yes | No | Yes | No | No |
-| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No | No |
-| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ | No | No | Yes | Yes |
-| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
-| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
-| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
+| 1Fichier | No | Yes | Yes | No | No | No | No | No | Yes | No | Yes |
+| Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | No | Yes |
+| Amazon Drive | Yes | No | Yes | Yes | No | No | No | No | No | No | Yes |
+| Amazon S3 (or S3 compatible) | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
+| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
+| Box | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
+| Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
+| Dropbox | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
+| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
+| FTP | No | No | Yes | Yes | No | No | Yes | No | No | No | Yes |
+| Google Cloud Storage | Yes | Yes | No | No | No | Yes | Yes | No | No | No | No |
+| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
+| Google Photos | No | No | No | No | No | No | No | No | No | No | No |
+| HDFS | Yes | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes |
+| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | No | Yes |
+| HTTP | No | No | No | No | No | No | No | No | No | No | Yes |
+| Internet Archive | No | Yes | No | No | Yes | Yes | No | No | Yes | Yes | No |
+| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
+| Koofr | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
+| Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
+| Mega | Yes | No | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
+| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | No |
+| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | Yes | No | No | No |
+| Microsoft Azure Files Storage | No | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
+| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | Yes ⁵ | No | No | Yes | Yes | Yes |
+| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
+| OpenStack Swift | Yes ¹ | Yes | No | No | No | Yes | Yes | No | No | Yes | No |
+| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | No |
+| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
+| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
+| premiumize.me | Yes | No | Yes | Yes | No | No | No | No | Yes | Yes | Yes |
+| put.io | Yes | No | Yes | Yes | Yes | No | Yes | No | No | Yes | Yes |
+| Proton Drive | Yes | No | Yes | Yes | Yes | No | No | No | No | Yes | Yes |
+| QingStor | No | Yes | No | No | Yes | Yes | No | No | No | No | No |
+| Quatrix by Maytech | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
+| Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
+| SFTP | No | Yes ⁴| Yes | Yes | No | No | Yes | No | No | Yes | Yes |
+| Sia | No | No | No | No | No | No | Yes | No | No | No | Yes |
+| SMB | No | No | Yes | Yes | No | No | Yes | Yes | No | No | Yes |
+| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
+| Storj | Yes ² | Yes | Yes | No | No | Yes | Yes | No | Yes | No | No |
+| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No | No |
+| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ³ | No | No | Yes | Yes |
+| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
+| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
+| The local filesystem | Yes | No | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
+
+¹ Note Swift implements this in order to delete directory markers but
+it doesn't actually have a quicker way of deleting files other than
+deleting them individually.
+
+² Storj implements this efficiently only for entire buckets. If
+purging a directory inside a bucket, files are deleted individually.
+
+³ StreamUpload is not supported with Nextcloud.
+
+⁴ Use the `--sftp-copy-is-hardlink` flag to enable.
+
+⁵ Use the `--onedrive-delta` flag to enable.
### Purge ###
This deletes a directory quicker than just deleting all the files in
the directory.
-† Note Swift implements this in order to delete directory markers but
-they don't actually have a quicker way of deleting files other than
-deleting them individually.
-
-☨ Storj implements this efficiently only for entire buckets. If
-purging a directory inside a bucket, files are deleted individually.
-
-‡ StreamUpload is not supported with Nextcloud
-
### Copy ###
Used when copying an object to and from the same remote. This known
@@ -16845,11 +18162,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -16864,11 +18181,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@@ -16938,7 +18256,7 @@ General networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0")
```
@@ -16961,7 +18279,7 @@ General configuration of rclone.
--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
- --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO")
+ --color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
--config string Config file (default "$HOME/.config/rclone/rclone.conf")
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--disable string Disable a comma separated list of features (use --disable help to see a list)
@@ -16990,7 +18308,7 @@ Flags for developers.
```
--cpuprofile string Write cpu profile to file
- --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--memprofile string Write memory profile to file
@@ -17044,7 +18362,7 @@ Logging and statistics.
```
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
-P, --progress Show progress during transfer
@@ -17052,7 +18370,7 @@ Logging and statistics.
-q, --quiet Print as little stuff as possible
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-log-level LogLevel Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default INFO)
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
--stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
@@ -17076,6 +18394,7 @@ Flags to control metadata.
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --metadata-mapper SpaceSepList Program to run to transform metadata before upload
--metadata-set stringArray Add metadata key=value when uploading
```
@@ -17124,13 +18443,13 @@ Backend only flags. These can be set in the config file also.
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
- --acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
--acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
--alias-remote string Remote or path to alias
- --azureblob-access-tier string Access tier of blob: hot, cool or archive
+ --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
--azureblob-account string Azure Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
@@ -17141,7 +18460,7 @@ Backend only flags. These can be set in the config file also.
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
- --azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+ --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
--azureblob-key string Storage Account Shared Key
@@ -17161,18 +18480,43 @@ Backend only flags. These can be set in the config file also.
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
+ --azurefiles-account string Azure Storage Account Name
+ --azurefiles-chunk-size SizeSuffix Upload chunk size (default 4Mi)
+ --azurefiles-client-certificate-password string Password for the certificate file (optional) (obscured)
+ --azurefiles-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key
+ --azurefiles-client-id string The ID of the client in use
+ --azurefiles-client-secret string One of the service principal's client secrets
+ --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
+ --azurefiles-endpoint string Endpoint for the service
+ --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
+ --azurefiles-key string Storage Account Shared Key
+ --azurefiles-max-stream-size SizeSuffix Max size for streamed files (default 10Gi)
+ --azurefiles-msi-client-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-password string The user's password (obscured)
+ --azurefiles-sas-url string SAS URL
+ --azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
+ --azurefiles-share-name string Azure Files Share Name
+ --azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
+ --azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
+ --azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
- --b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
+ --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
- --b2-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
@@ -17183,7 +18527,7 @@ Backend only flags. These can be set in the config file also.
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
- --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-impersonate string Impersonate this user ID when using a service account
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
@@ -17241,7 +18585,7 @@ Backend only flags. These can be set in the config file also.
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
--drive-disable-http2 Disable drive using http2 (default true)
- --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
+ --drive-encoding Encoding The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true)
@@ -17250,17 +18594,21 @@ Backend only flags. These can be set in the config file also.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
+ --drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
+ --drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-resource-key string Resource key for accessing a link-shared file
--drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive
+ --drive-scope string Comma separated list of scopes that rclone should use when requesting access from drive
--drive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
+ --drive-show-all-gdocs Show all Google Docs including non-exportable ones in listings
--drive-size-as-quota Show sizes as storage quota usage, not actual size
- --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
+ --drive-skip-checksum-gphotos Skip checksums on Google photos and videos only
--drive-skip-dangling-shortcuts If set skip dangling shortcut files
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
@@ -17284,7 +18632,7 @@ Backend only flags. These can be set in the config file also.
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
- --dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-shared-files Instructs rclone to work on individual shared files
@@ -17293,11 +18641,11 @@ Backend only flags. These can be set in the config file also.
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-cdn Set if you wish to use CDN download links
- --fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
- --filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
@@ -17311,7 +18659,7 @@ Backend only flags. These can be set in the config file also.
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
- --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+ --ftp-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
@@ -17333,7 +18681,7 @@ Backend only flags. These can be set in the config file also.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
- --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
@@ -17346,9 +18694,13 @@ Backend only flags. These can be set in the config file also.
--gcs-token-url string Token server url
--gcs-user-project string User project
--gphotos-auth-url string Auth server URL
+ --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
+ --gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
+ --gphotos-batch-size int Max number of files in upload batch
+ --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
- --gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
@@ -17360,8 +18712,8 @@ Backend only flags. These can be set in the config file also.
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
- --hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
- --hdfs-namenode string Hadoop name node and port
+ --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
+ --hdfs-namenode CommaSepList Hadoop name nodes and ports
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
@@ -17369,7 +18721,7 @@ Backend only flags. These can be set in the config file also.
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
- --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
--hidrive-root-prefix string The root/parent folder for all paths (default "/")
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
@@ -17382,9 +18734,16 @@ Backend only flags. These can be set in the config file also.
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
+ --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true
+ --imagekit-private-key string You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-public-key string You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2"
+ --imagekit-versions Include old versions in directory listings
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
- --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
+ --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-secret-access-key string IAS3 Secret Key (password)
@@ -17392,7 +18751,7 @@ Backend only flags. These can be set in the config file also.
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
- --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
+ --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -17400,17 +18759,18 @@ Backend only flags. These can be set in the config file also.
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
- --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
+ --linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
- --local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --local-encoding Encoding The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
@@ -17422,7 +18782,7 @@ Backend only flags. These can be set in the config file also.
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
- --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
@@ -17432,7 +18792,7 @@ Backend only flags. These can be set in the config file also.
--mailru-token-url string Token server url
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
- --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
@@ -17448,9 +18808,10 @@ Backend only flags. These can be set in the config file also.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
+ --onedrive-delta If set rclone will use delta listing to implement recursive listings
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
- --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
@@ -17471,7 +18832,7 @@ Backend only flags. These can be set in the config file also.
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s)
--oos-disable-checksum Don't store MD5 checksum with object metadata
- --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
@@ -17488,13 +18849,13 @@ Backend only flags. These can be set in the config file also.
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
- --opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
+ --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
- --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
@@ -17504,7 +18865,7 @@ Backend only flags. These can be set in the config file also.
--pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
- --pikpak-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
@@ -17516,13 +18877,13 @@ Backend only flags. These can be set in the config file also.
--premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
- --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
- --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton account (obscured)
@@ -17531,13 +18892,13 @@ Backend only flags. These can be set in the config file also.
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
- --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
- --qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
+ --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
@@ -17546,7 +18907,7 @@ Backend only flags. These can be set in the config file also.
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
- --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
@@ -17561,7 +18922,7 @@ Backend only flags. These can be set in the config file also.
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
- --s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --s3-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
@@ -17595,14 +18956,16 @@ Backend only flags. These can be set in the config file also.
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
+ --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
+ --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
- --seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
+ --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
@@ -17612,6 +18975,7 @@ Backend only flags. These can be set in the config file also.
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
+ --sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@@ -17646,7 +19010,7 @@ Backend only flags. These can be set in the config file also.
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
- --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-token string OAuth Access Token as a JSON blob
@@ -17654,12 +19018,12 @@ Backend only flags. These can be set in the config file also.
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
- --sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
+ --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
- --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
--smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -17677,7 +19041,7 @@ Backend only flags. These can be set in the config file also.
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
- --sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
+ --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
@@ -17691,7 +19055,7 @@ Backend only flags. These can be set in the config file also.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
+ --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
@@ -17713,7 +19077,7 @@ Backend only flags. These can be set in the config file also.
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
- --uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
+ --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
@@ -17728,14 +19092,14 @@ Backend only flags. These can be set in the config file also.
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
- --yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
- --zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
+ --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
@@ -18591,7 +19955,7 @@ while `--ignore-checksum` controls whether checksums are considered during the c
if there ARE diffs.
* Unless `--ignore-listing-checksum` is passed, bisync currently computes hashes for one path
*even when there's no common hash with the other path*
-(for example, a [crypt](https://rclone.org/crypt/#modified-time-and-hashes) remote.)
+(for example, a [crypt](https://rclone.org/crypt/#modification-times-and-hashes) remote.)
* If both paths support checksums and have a common hash,
AND `--ignore-listing-checksum` was not specified when creating the listings,
`--check-sync=only` can be used to compare Path1 vs. Path2 checksums (as of the time the previous listings were created.)
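
As a rough sketch (the remote and path names here are placeholders), such a
checksum-only comparison of the existing listings might look like:

    rclone bisync remote1:path1 remote2:path2 --check-sync=only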
@@ -18695,7 +20059,7 @@ Alternately, a `--resync` may be used (Path1 versions will be pushed
to Path2). Consider the situation carefully and perhaps use `--dry-run`
before you commit to the changes.
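
For instance, a cautious sketch of that approach (the paths are placeholders)
would preview the resync before committing to it:

    rclone bisync /path1 remote:path2 --resync --dry-run
    rclone bisync /path1 remote:path2 --resync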
-### Modification time
+### Modification times
Bisync relies on file timestamps to identify changed files and will
_refuse_ to operate if backend lacks the modification time support.
@@ -19799,11 +21163,11 @@ To copy a local directory to a 1Fichier directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes ###
+### Modification times and hashes
1Fichier does not support modification times. It supports the Whirlpool hash algorithm.
-### Duplicated files ###
+### Duplicated files
1Fichier can have two files with exactly the same name and path (unlike a
normal file system).
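
If such duplicates cause problems when syncing, one possible cleanup (a
sketch only, with a placeholder path) is rclone's dedupe command:

    rclone dedupe --dedupe-mode newest remote:backup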
@@ -19915,7 +21279,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FICHIER_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
@@ -20162,13 +21526,13 @@ To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
-### Modified time and MD5SUMs
+### Modification times and hashes
Amazon Drive doesn't allow modification times to be changed via
the API so these won't be accurate or used for syncing.
-It does store MD5SUMs so for a more accurate sync, you can use the
-`--checksum` flag.
+It does support the MD5 hash algorithm, so for a more accurate sync,
+you can use the `--checksum` flag.
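+
+For example (reusing the backup path from above as a placeholder), a
+checksum-based sync might look like:
+
+    rclone sync --checksum /home/source remote:backup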
### Restricted filename characters
@@ -20338,7 +21702,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
@@ -20391,12 +21755,14 @@ The S3 backend can be used with a number of different providers:
- IBM COS S3
- IDrive e2
- IONOS Cloud
- - Leviia Object Storage
+- Leviia Object Storage
- Liara Object Storage
+- Linode Object Storage
- Minio
- Petabox
- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
+- Rclone Serve S3
- Scaleway
- Seagate Lyve Cloud
- SeaweedFS
@@ -20638,7 +22004,9 @@ d) Delete this remote
y/e/d>
```
-### Modified time
+### Modification times and hashes
+
+#### Modification times
The modified time is stored as metadata on the object as
`X-Amz-Meta-Mtime` as floating point since the epoch, accurate to 1 ns.
@@ -20651,6 +22019,29 @@ storage the object will be uploaded rather than copied.
Note that reading this from the object takes an additional `HEAD`
request as the metadata isn't returned in object listings.
+#### Hashes
+
+For small objects which weren't uploaded as multipart uploads (objects
+sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
+the `ETag:` header as an MD5 checksum.
+
+However for objects which were uploaded as multipart uploads or with
+server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
+longer the MD5 sum of the data, so rclone adds an additional piece of
+metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
+the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
+
+ echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
+
+or you can use `rclone check` to verify the hashes are OK.
+
+For large objects, calculating this hash can take some time so the
+addition of this hash can be disabled with `--s3-disable-checksum`.
+This will mean that these objects do not have an MD5 checksum.
+
+Note that reading this from the object takes an additional `HEAD`
+request as the metadata isn't returned in object listings.
+
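+As a brief example (the bucket and path names are placeholders), you can
+compare a local directory against the checksums stored in S3 with:
+
+    rclone check /home/source remote:bucket/backup
+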
### Reducing costs
#### Avoiding HEAD requests to read the modification time
@@ -20742,29 +22133,6 @@ there for more details.
Setting this flag increases the chance for undetected upload failures.
-### Hashes
-
-For small objects which weren't uploaded as multipart uploads (objects
-sized below `--s3-upload-cutoff` if uploaded with rclone) rclone uses
-the `ETag:` header as an MD5 checksum.
-
-However for objects which were uploaded as multipart uploads or with
-server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
-longer the MD5 sum of the data, so rclone adds an additional piece of
-metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
-the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
-
- echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
-
-or you can use `rclone check` to verify the hashes are OK.
-
-For large objects, calculating this hash can take some time so the
-addition of this hash can be disabled with `--s3-disable-checksum`.
-This will mean that these objects do not have an MD5 checksum.
-
-Note that reading this from the object takes an additional `HEAD`
-request as the metadata isn't returned in object listings.
-
### Versions
When bucket versioning is enabled (this can be done with rclone with
@@ -21027,13 +22395,14 @@ According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com
> If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
-As mentioned in the [Hashes](#hashes) section, small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
+As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section,
+small files that are not uploaded as multipart use a different tag, causing the upload to fail.
A simple solution is to set the `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.
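
As a sketch of that workaround (the bucket name is a placeholder), forcing
multipart uploads for all files might look like:

    rclone copy --s3-upload-cutoff 0 /home/source remote:locked-bucket
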
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-provider
@@ -21078,6 +22447,8 @@ Properties:
- Leviia Object Storage
- "Liara"
- Liara Object Storage
+ - "Linode"
+ - Linode Object Storage
- "Minio"
- Minio Object Storage
- "Netease"
@@ -21086,6 +22457,8 @@ Properties:
- Petabox Object Storage
- "RackCorp"
- RackCorp Object Storage
+ - "Rclone"
+ - Rclone S3 Server
- "Scaleway"
- Scaleway Object Storage
- "SeaweedFS"
@@ -21238,260 +22611,6 @@ Properties:
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
-#### --s3-region
-
-region - the location where your bucket will be created and your data stored.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "global"
- - Global CDN (All locations) Region
- - "au"
- - Australia (All states)
- - "au-nsw"
- - NSW (Australia) Region
- - "au-qld"
- - QLD (Australia) Region
- - "au-vic"
- - VIC (Australia) Region
- - "au-wa"
- - Perth (Australia) Region
- - "ph"
- - Manila (Philippines) Region
- - "th"
- - Bangkok (Thailand) Region
- - "hk"
- - HK (Hong Kong) Region
- - "mn"
- - Ulaanbaatar (Mongolia) Region
- - "kg"
- - Bishkek (Kyrgyzstan) Region
- - "id"
- - Jakarta (Indonesia) Region
- - "jp"
- - Tokyo (Japan) Region
- - "sg"
- - SG (Singapore) Region
- - "de"
- - Frankfurt (Germany) Region
- - "us"
- - USA (AnyCast) Region
- - "us-east-1"
- - New York (USA) Region
- - "us-west-1"
- - Freemont (USA) Region
- - "nz"
- - Auckland (New Zealand) Region
-
-#### --s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - "nl-ams"
- - Amsterdam, The Netherlands
- - "fr-par"
- - Paris, France
- - "pl-waw"
- - Warsaw, Poland
-
-#### --s3-region
-
-Region to connect to. - the location where your bucket will be created and your data stored. Need bo be same with your endpoint.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: HuaweiOBS
-- Type: string
-- Required: false
-- Examples:
- - "af-south-1"
- - AF-Johannesburg
- - "ap-southeast-2"
- - AP-Bangkok
- - "ap-southeast-3"
- - AP-Singapore
- - "cn-east-3"
- - CN East-Shanghai1
- - "cn-east-2"
- - CN East-Shanghai2
- - "cn-north-1"
- - CN North-Beijing1
- - "cn-north-4"
- - CN North-Beijing4
- - "cn-south-1"
- - CN South-Guangzhou
- - "ap-southeast-1"
- - CN-Hong Kong
- - "sa-argentina-1"
- - LA-Buenos Aires1
- - "sa-peru-1"
- - LA-Lima1
- - "na-mexico-1"
- - LA-Mexico City1
- - "sa-chile-1"
- - LA-Santiago2
- - "sa-brazil-1"
- - LA-Sao Paulo1
- - "ru-northwest-2"
- - RU-Moscow2
-
-#### --s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Cloudflare
-- Type: string
-- Required: false
-- Examples:
- - "auto"
- - R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
-
-#### --s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "cn-east-1"
- - The default endpoint - a good choice if you are unsure.
- - East China Region 1.
- - Needs location constraint cn-east-1.
- - "cn-east-2"
- - East China Region 2.
- - Needs location constraint cn-east-2.
- - "cn-north-1"
- - North China Region 1.
- - Needs location constraint cn-north-1.
- - "cn-south-1"
- - South China Region 1.
- - Needs location constraint cn-south-1.
- - "us-north-1"
- - North America Region.
- - Needs location constraint us-north-1.
- - "ap-southeast-1"
- - Southeast Asia Region 1.
- - Needs location constraint ap-southeast-1.
- - "ap-northeast-1"
- - Northeast Asia Region 1.
- - Needs location constraint ap-northeast-1.
-
-#### --s3-region
-
-Region where your bucket will be created and your data stored.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: IONOS
-- Type: string
-- Required: false
-- Examples:
- - "de"
- - Frankfurt, Germany
- - "eu-central-2"
- - Berlin, Germany
- - "eu-south-2"
- - Logrono, Spain
-
-#### --s3-region
-
-Region where your bucket will be created and your data stored.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Petabox
-- Type: string
-- Required: false
-- Examples:
- - "us-east-1"
- - US East (N. Virginia)
- - "eu-central-1"
- - Europe (Frankfurt)
- - "ap-southeast-1"
- - Asia Pacific (Singapore)
- - "me-south-1"
- - Middle East (Bahrain)
- - "sa-east-1"
- - South America (São Paulo)
-
-#### --s3-region
-
-Region where your data stored.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Synology
-- Type: string
-- Required: false
-- Examples:
- - "eu-001"
- - Europe Region 1
- - "eu-002"
- - Europe Region 2
- - "us-001"
- - US Region 1
- - "us-002"
- - US Region 2
- - "tw-001"
- - Asia (Taiwan)
-
-#### --s3-region
-
-Region to connect to.
-
-Leave blank if you are using an S3 clone and you don't have a region.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: !AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,Synology,TencentCOS,HuaweiOBS,IDrive
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Use this if unsure.
- - Will use v4 signatures and an empty region.
- - "other-v2-signature"
- - Use this only if v4 signatures don't work.
- - E.g. pre Jewel/v10 CEPH.
-
#### --s3-endpoint
Endpoint for S3 API.
@@ -21506,712 +22625,6 @@ Properties:
- Type: string
- Required: false
-#### --s3-endpoint
-
-Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - "eos-wuxi-1.cmecloud.cn"
- - The default endpoint - a good choice if you are unsure.
- - East China (Suzhou)
- - "eos-jinan-1.cmecloud.cn"
- - East China (Jinan)
- - "eos-ningbo-1.cmecloud.cn"
- - East China (Hangzhou)
- - "eos-shanghai-1.cmecloud.cn"
- - East China (Shanghai-1)
- - "eos-zhengzhou-1.cmecloud.cn"
- - Central China (Zhengzhou)
- - "eos-hunan-1.cmecloud.cn"
- - Central China (Changsha-1)
- - "eos-zhuzhou-1.cmecloud.cn"
- - Central China (Changsha-2)
- - "eos-guangzhou-1.cmecloud.cn"
- - South China (Guangzhou-2)
- - "eos-dongguan-1.cmecloud.cn"
- - South China (Guangzhou-3)
- - "eos-beijing-1.cmecloud.cn"
- - North China (Beijing-1)
- - "eos-beijing-2.cmecloud.cn"
- - North China (Beijing-2)
- - "eos-beijing-4.cmecloud.cn"
- - North China (Beijing-3)
- - "eos-huhehaote-1.cmecloud.cn"
- - North China (Huhehaote)
- - "eos-chengdu-1.cmecloud.cn"
- - Southwest China (Chengdu)
- - "eos-chongqing-1.cmecloud.cn"
- - Southwest China (Chongqing)
- - "eos-guiyang-1.cmecloud.cn"
- - Southwest China (Guiyang)
- - "eos-xian-1.cmecloud.cn"
- - Nouthwest China (Xian)
- - "eos-yunnan.cmecloud.cn"
- - Yunnan China (Kunming)
- - "eos-yunnan-2.cmecloud.cn"
- - Yunnan China (Kunming-2)
- - "eos-tianjin-1.cmecloud.cn"
- - Tianjin China (Tianjin)
- - "eos-jilin-1.cmecloud.cn"
- - Jilin China (Changchun)
- - "eos-hubei-1.cmecloud.cn"
- - Hubei China (Xiangyan)
- - "eos-jiangxi-1.cmecloud.cn"
- - Jiangxi China (Nanchang)
- - "eos-gansu-1.cmecloud.cn"
- - Gansu China (Lanzhou)
- - "eos-shanxi-1.cmecloud.cn"
- - Shanxi China (Taiyuan)
- - "eos-liaoning-1.cmecloud.cn"
- - Liaoning China (Shenyang)
- - "eos-hebei-1.cmecloud.cn"
- - Hebei China (Shijiazhuang)
- - "eos-fujian-1.cmecloud.cn"
- - Fujian China (Xiamen)
- - "eos-guangxi-1.cmecloud.cn"
- - Guangxi China (Nanning)
- - "eos-anhui-1.cmecloud.cn"
- - Anhui China (Huainan)
-
-#### --s3-endpoint
-
-Endpoint for Arvan Cloud Object Storage (AOS) API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "s3.ir-thr-at1.arvanstorage.ir"
- - The default endpoint - a good choice if you are unsure.
- - Tehran Iran (Simin)
- - "s3.ir-tbz-sh1.arvanstorage.ir"
- - Tabriz Iran (Shahriar)
-
-#### --s3-endpoint
-
-Endpoint for IBM COS S3 API.
-
-Specify if using an IBM COS On Premise.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: IBMCOS
-- Type: string
-- Required: false
-- Examples:
- - "s3.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Endpoint
- - "s3.dal.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Dallas Endpoint
- - "s3.wdc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Washington DC Endpoint
- - "s3.sjc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region San Jose Endpoint
- - "s3.private.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Private Endpoint
- - "s3.private.dal.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Dallas Private Endpoint
- - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Washington DC Private Endpoint
- - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region San Jose Private Endpoint
- - "s3.us-east.cloud-object-storage.appdomain.cloud"
- - US Region East Endpoint
- - "s3.private.us-east.cloud-object-storage.appdomain.cloud"
- - US Region East Private Endpoint
- - "s3.us-south.cloud-object-storage.appdomain.cloud"
- - US Region South Endpoint
- - "s3.private.us-south.cloud-object-storage.appdomain.cloud"
- - US Region South Private Endpoint
- - "s3.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Endpoint
- - "s3.fra.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Frankfurt Endpoint
- - "s3.mil.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Milan Endpoint
- - "s3.ams.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Amsterdam Endpoint
- - "s3.private.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Private Endpoint
- - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Frankfurt Private Endpoint
- - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Milan Private Endpoint
- - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Amsterdam Private Endpoint
- - "s3.eu-gb.cloud-object-storage.appdomain.cloud"
- - Great Britain Endpoint
- - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud"
- - Great Britain Private Endpoint
- - "s3.eu-de.cloud-object-storage.appdomain.cloud"
- - EU Region DE Endpoint
- - "s3.private.eu-de.cloud-object-storage.appdomain.cloud"
- - EU Region DE Private Endpoint
- - "s3.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Endpoint
- - "s3.tok.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Tokyo Endpoint
- - "s3.hkg.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional HongKong Endpoint
- - "s3.seo.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Seoul Endpoint
- - "s3.private.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Private Endpoint
- - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Tokyo Private Endpoint
- - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional HongKong Private Endpoint
- - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Seoul Private Endpoint
- - "s3.jp-tok.cloud-object-storage.appdomain.cloud"
- - APAC Region Japan Endpoint
- - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud"
- - APAC Region Japan Private Endpoint
- - "s3.au-syd.cloud-object-storage.appdomain.cloud"
- - APAC Region Australia Endpoint
- - "s3.private.au-syd.cloud-object-storage.appdomain.cloud"
- - APAC Region Australia Private Endpoint
- - "s3.ams03.cloud-object-storage.appdomain.cloud"
- - Amsterdam Single Site Endpoint
- - "s3.private.ams03.cloud-object-storage.appdomain.cloud"
- - Amsterdam Single Site Private Endpoint
- - "s3.che01.cloud-object-storage.appdomain.cloud"
- - Chennai Single Site Endpoint
- - "s3.private.che01.cloud-object-storage.appdomain.cloud"
- - Chennai Single Site Private Endpoint
- - "s3.mel01.cloud-object-storage.appdomain.cloud"
- - Melbourne Single Site Endpoint
- - "s3.private.mel01.cloud-object-storage.appdomain.cloud"
- - Melbourne Single Site Private Endpoint
- - "s3.osl01.cloud-object-storage.appdomain.cloud"
- - Oslo Single Site Endpoint
- - "s3.private.osl01.cloud-object-storage.appdomain.cloud"
- - Oslo Single Site Private Endpoint
- - "s3.tor01.cloud-object-storage.appdomain.cloud"
- - Toronto Single Site Endpoint
- - "s3.private.tor01.cloud-object-storage.appdomain.cloud"
- - Toronto Single Site Private Endpoint
- - "s3.seo01.cloud-object-storage.appdomain.cloud"
- - Seoul Single Site Endpoint
- - "s3.private.seo01.cloud-object-storage.appdomain.cloud"
- - Seoul Single Site Private Endpoint
- - "s3.mon01.cloud-object-storage.appdomain.cloud"
- - Montreal Single Site Endpoint
- - "s3.private.mon01.cloud-object-storage.appdomain.cloud"
- - Montreal Single Site Private Endpoint
- - "s3.mex01.cloud-object-storage.appdomain.cloud"
- - Mexico Single Site Endpoint
- - "s3.private.mex01.cloud-object-storage.appdomain.cloud"
- - Mexico Single Site Private Endpoint
- - "s3.sjc04.cloud-object-storage.appdomain.cloud"
- - San Jose Single Site Endpoint
- - "s3.private.sjc04.cloud-object-storage.appdomain.cloud"
- - San Jose Single Site Private Endpoint
- - "s3.mil01.cloud-object-storage.appdomain.cloud"
- - Milan Single Site Endpoint
- - "s3.private.mil01.cloud-object-storage.appdomain.cloud"
- - Milan Single Site Private Endpoint
- - "s3.hkg02.cloud-object-storage.appdomain.cloud"
- - Hong Kong Single Site Endpoint
- - "s3.private.hkg02.cloud-object-storage.appdomain.cloud"
- - Hong Kong Single Site Private Endpoint
- - "s3.par01.cloud-object-storage.appdomain.cloud"
- - Paris Single Site Endpoint
- - "s3.private.par01.cloud-object-storage.appdomain.cloud"
- - Paris Single Site Private Endpoint
- - "s3.sng01.cloud-object-storage.appdomain.cloud"
- - Singapore Single Site Endpoint
- - "s3.private.sng01.cloud-object-storage.appdomain.cloud"
- - Singapore Single Site Private Endpoint
-
-#### --s3-endpoint
-
-Endpoint for IONOS S3 Object Storage.
-
-Specify the endpoint from the same region.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: IONOS
-- Type: string
-- Required: false
-- Examples:
- - "s3-eu-central-1.ionoscloud.com"
- - Frankfurt, Germany
- - "s3-eu-central-2.ionoscloud.com"
- - Berlin, Germany
- - "s3-eu-south-2.ionoscloud.com"
- - Logrono, Spain
-
-#### --s3-endpoint
-
-Endpoint for Petabox S3 Object Storage.
-
-Specify the endpoint from the same region.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Petabox
-- Type: string
-- Required: true
-- Examples:
- - "s3.petabox.io"
- - US East (N. Virginia)
- - "s3.us-east-1.petabox.io"
- - US East (N. Virginia)
- - "s3.eu-central-1.petabox.io"
- - Europe (Frankfurt)
- - "s3.ap-southeast-1.petabox.io"
- - Asia Pacific (Singapore)
- - "s3.me-south-1.petabox.io"
- - Middle East (Bahrain)
- - "s3.sa-east-1.petabox.io"
- - South America (São Paulo)
-
-#### --s3-endpoint
-
-Endpoint for Leviia Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Leviia
-- Type: string
-- Required: false
-- Examples:
- - "s3.leviia.com"
- - The default endpoint
- - Leviia
-
-#### --s3-endpoint
-
-Endpoint for Liara Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Liara
-- Type: string
-- Required: false
-- Examples:
- - "storage.iran.liara.space"
- - The default endpoint
- - Iran
-
-#### --s3-endpoint
-
-Endpoint for OSS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Alibaba
-- Type: string
-- Required: false
-- Examples:
- - "oss-accelerate.aliyuncs.com"
- - Global Accelerate
- - "oss-accelerate-overseas.aliyuncs.com"
- - Global Accelerate (outside mainland China)
- - "oss-cn-hangzhou.aliyuncs.com"
- - East China 1 (Hangzhou)
- - "oss-cn-shanghai.aliyuncs.com"
- - East China 2 (Shanghai)
- - "oss-cn-qingdao.aliyuncs.com"
- - North China 1 (Qingdao)
- - "oss-cn-beijing.aliyuncs.com"
- - North China 2 (Beijing)
- - "oss-cn-zhangjiakou.aliyuncs.com"
- - North China 3 (Zhangjiakou)
- - "oss-cn-huhehaote.aliyuncs.com"
- - North China 5 (Hohhot)
- - "oss-cn-wulanchabu.aliyuncs.com"
- - North China 6 (Ulanqab)
- - "oss-cn-shenzhen.aliyuncs.com"
- - South China 1 (Shenzhen)
- - "oss-cn-heyuan.aliyuncs.com"
- - South China 2 (Heyuan)
- - "oss-cn-guangzhou.aliyuncs.com"
- - South China 3 (Guangzhou)
- - "oss-cn-chengdu.aliyuncs.com"
- - West China 1 (Chengdu)
- - "oss-cn-hongkong.aliyuncs.com"
- - Hong Kong (Hong Kong)
- - "oss-us-west-1.aliyuncs.com"
- - US West 1 (Silicon Valley)
- - "oss-us-east-1.aliyuncs.com"
- - US East 1 (Virginia)
- - "oss-ap-southeast-1.aliyuncs.com"
- - Southeast Asia Southeast 1 (Singapore)
- - "oss-ap-southeast-2.aliyuncs.com"
- - Asia Pacific Southeast 2 (Sydney)
- - "oss-ap-southeast-3.aliyuncs.com"
- - Southeast Asia Southeast 3 (Kuala Lumpur)
- - "oss-ap-southeast-5.aliyuncs.com"
- - Asia Pacific Southeast 5 (Jakarta)
- - "oss-ap-northeast-1.aliyuncs.com"
- - Asia Pacific Northeast 1 (Japan)
- - "oss-ap-south-1.aliyuncs.com"
- - Asia Pacific South 1 (Mumbai)
- - "oss-eu-central-1.aliyuncs.com"
- - Central Europe 1 (Frankfurt)
- - "oss-eu-west-1.aliyuncs.com"
- - West Europe (London)
- - "oss-me-east-1.aliyuncs.com"
- - Middle East 1 (Dubai)
-
-#### --s3-endpoint
-
-Endpoint for OBS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: HuaweiOBS
-- Type: string
-- Required: false
-- Examples:
- - "obs.af-south-1.myhuaweicloud.com"
- - AF-Johannesburg
- - "obs.ap-southeast-2.myhuaweicloud.com"
- - AP-Bangkok
- - "obs.ap-southeast-3.myhuaweicloud.com"
- - AP-Singapore
- - "obs.cn-east-3.myhuaweicloud.com"
- - CN East-Shanghai1
- - "obs.cn-east-2.myhuaweicloud.com"
- - CN East-Shanghai2
- - "obs.cn-north-1.myhuaweicloud.com"
- - CN North-Beijing1
- - "obs.cn-north-4.myhuaweicloud.com"
- - CN North-Beijing4
- - "obs.cn-south-1.myhuaweicloud.com"
- - CN South-Guangzhou
- - "obs.ap-southeast-1.myhuaweicloud.com"
- - CN-Hong Kong
- - "obs.sa-argentina-1.myhuaweicloud.com"
- - LA-Buenos Aires1
- - "obs.sa-peru-1.myhuaweicloud.com"
- - LA-Lima1
- - "obs.na-mexico-1.myhuaweicloud.com"
- - LA-Mexico City1
- - "obs.sa-chile-1.myhuaweicloud.com"
- - LA-Santiago2
- - "obs.sa-brazil-1.myhuaweicloud.com"
- - LA-Sao Paulo1
- - "obs.ru-northwest-2.myhuaweicloud.com"
- - RU-Moscow2
-
-#### --s3-endpoint
-
-Endpoint for Scaleway Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - "s3.nl-ams.scw.cloud"
- - Amsterdam Endpoint
- - "s3.fr-par.scw.cloud"
- - Paris Endpoint
- - "s3.pl-waw.scw.cloud"
- - Warsaw Endpoint
-
-#### --s3-endpoint
-
-Endpoint for StackPath Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: StackPath
-- Type: string
-- Required: false
-- Examples:
- - "s3.us-east-2.stackpathstorage.com"
- - US East Endpoint
- - "s3.us-west-1.stackpathstorage.com"
- - US West Endpoint
- - "s3.eu-central-1.stackpathstorage.com"
- - EU Endpoint
-
-#### --s3-endpoint
-
-Endpoint for Google Cloud Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: GCS
-- Type: string
-- Required: false
-- Examples:
- - "https://storage.googleapis.com"
- - Google Cloud Storage endpoint
-
-#### --s3-endpoint
-
-Endpoint for Storj Gateway.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Storj
-- Type: string
-- Required: false
-- Examples:
- - "gateway.storjshare.io"
- - Global Hosted Gateway
-
-#### --s3-endpoint
-
-Endpoint for Synology C2 Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Synology
-- Type: string
-- Required: false
-- Examples:
- - "eu-001.s3.synologyc2.net"
- - EU Endpoint 1
- - "eu-002.s3.synologyc2.net"
- - EU Endpoint 2
- - "us-001.s3.synologyc2.net"
- - US Endpoint 1
- - "us-002.s3.synologyc2.net"
- - US Endpoint 2
- - "tw-001.s3.synologyc2.net"
- - TW Endpoint 1
-
-#### --s3-endpoint
-
-Endpoint for Tencent COS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: TencentCOS
-- Type: string
-- Required: false
-- Examples:
- - "cos.ap-beijing.myqcloud.com"
- - Beijing Region
- - "cos.ap-nanjing.myqcloud.com"
- - Nanjing Region
- - "cos.ap-shanghai.myqcloud.com"
- - Shanghai Region
- - "cos.ap-guangzhou.myqcloud.com"
- - Guangzhou Region
- - "cos.ap-nanjing.myqcloud.com"
- - Nanjing Region
- - "cos.ap-chengdu.myqcloud.com"
- - Chengdu Region
- - "cos.ap-chongqing.myqcloud.com"
- - Chongqing Region
- - "cos.ap-hongkong.myqcloud.com"
- - Hong Kong (China) Region
- - "cos.ap-singapore.myqcloud.com"
- - Singapore Region
- - "cos.ap-mumbai.myqcloud.com"
- - Mumbai Region
- - "cos.ap-seoul.myqcloud.com"
- - Seoul Region
- - "cos.ap-bangkok.myqcloud.com"
- - Bangkok Region
- - "cos.ap-tokyo.myqcloud.com"
- - Tokyo Region
- - "cos.na-siliconvalley.myqcloud.com"
- - Silicon Valley Region
- - "cos.na-ashburn.myqcloud.com"
- - Virginia Region
- - "cos.na-toronto.myqcloud.com"
- - Toronto Region
- - "cos.eu-frankfurt.myqcloud.com"
- - Frankfurt Region
- - "cos.eu-moscow.myqcloud.com"
- - Moscow Region
- - "cos.accelerate.myqcloud.com"
- - Use Tencent COS Accelerate Endpoint
-
-#### --s3-endpoint
-
-Endpoint for RackCorp Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "s3.rackcorp.com"
- - Global (AnyCast) Endpoint
- - "au.s3.rackcorp.com"
- - Australia (Anycast) Endpoint
- - "au-nsw.s3.rackcorp.com"
- - Sydney (Australia) Endpoint
- - "au-qld.s3.rackcorp.com"
- - Brisbane (Australia) Endpoint
- - "au-vic.s3.rackcorp.com"
- - Melbourne (Australia) Endpoint
- - "au-wa.s3.rackcorp.com"
- - Perth (Australia) Endpoint
- - "ph.s3.rackcorp.com"
- - Manila (Philippines) Endpoint
- - "th.s3.rackcorp.com"
- - Bangkok (Thailand) Endpoint
- - "hk.s3.rackcorp.com"
- - HK (Hong Kong) Endpoint
- - "mn.s3.rackcorp.com"
- - Ulaanbaatar (Mongolia) Endpoint
- - "kg.s3.rackcorp.com"
- - Bishkek (Kyrgyzstan) Endpoint
- - "id.s3.rackcorp.com"
- - Jakarta (Indonesia) Endpoint
- - "jp.s3.rackcorp.com"
- - Tokyo (Japan) Endpoint
- - "sg.s3.rackcorp.com"
- - SG (Singapore) Endpoint
- - "de.s3.rackcorp.com"
- - Frankfurt (Germany) Endpoint
- - "us.s3.rackcorp.com"
- - USA (AnyCast) Endpoint
- - "us-east-1.s3.rackcorp.com"
- - New York (USA) Endpoint
- - "us-west-1.s3.rackcorp.com"
- - Freemont (USA) Endpoint
- - "nz.s3.rackcorp.com"
- - Auckland (New Zealand) Endpoint
-
-#### --s3-endpoint
-
-Endpoint for Qiniu Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "s3-cn-east-1.qiniucs.com"
- - East China Endpoint 1
- - "s3-cn-east-2.qiniucs.com"
- - East China Endpoint 2
- - "s3-cn-north-1.qiniucs.com"
- - North China Endpoint 1
- - "s3-cn-south-1.qiniucs.com"
- - South China Endpoint 1
- - "s3-us-north-1.qiniucs.com"
- - North America Endpoint 1
- - "s3-ap-southeast-1.qiniucs.com"
- - Southeast Asia Endpoint 1
- - "s3-ap-northeast-1.qiniucs.com"
- - Northeast Asia Endpoint 1
-
-#### --s3-endpoint
-
-Endpoint for S3 API.
-
-Required when using an S3 clone.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: !AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox
-- Type: string
-- Required: false
-- Examples:
- - "objects-us-east-1.dream.io"
- - Dream Objects endpoint
- - "syd1.digitaloceanspaces.com"
- - DigitalOcean Spaces Sydney 1
- - "sfo3.digitaloceanspaces.com"
- - DigitalOcean Spaces San Francisco 3
- - "fra1.digitaloceanspaces.com"
- - DigitalOcean Spaces Frankfurt 1
- - "nyc3.digitaloceanspaces.com"
- - DigitalOcean Spaces New York 3
- - "ams3.digitaloceanspaces.com"
- - DigitalOcean Spaces Amsterdam 3
- - "sgp1.digitaloceanspaces.com"
- - DigitalOcean Spaces Singapore 1
- - "localhost:8333"
- - SeaweedFS S3 localhost
- - "s3.us-east-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud US East 1 (Virginia)
- - "s3.us-west-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud US West 1 (California)
- - "s3.ap-southeast-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud AP Southeast 1 (Singapore)
- - "s3.wasabisys.com"
- - Wasabi US East 1 (N. Virginia)
- - "s3.us-east-2.wasabisys.com"
- - Wasabi US East 2 (N. Virginia)
- - "s3.us-central-1.wasabisys.com"
- - Wasabi US Central 1 (Texas)
- - "s3.us-west-1.wasabisys.com"
- - Wasabi US West 1 (Oregon)
- - "s3.ca-central-1.wasabisys.com"
- - Wasabi CA Central 1 (Toronto)
- - "s3.eu-central-1.wasabisys.com"
- - Wasabi EU Central 1 (Amsterdam)
- - "s3.eu-central-2.wasabisys.com"
- - Wasabi EU Central 2 (Frankfurt)
- - "s3.eu-west-1.wasabisys.com"
- - Wasabi EU West 1 (London)
- - "s3.eu-west-2.wasabisys.com"
- - Wasabi EU West 2 (Paris)
- - "s3.ap-northeast-1.wasabisys.com"
- - Wasabi AP Northeast 1 (Tokyo) endpoint
- - "s3.ap-northeast-2.wasabisys.com"
- - Wasabi AP Northeast 2 (Osaka) endpoint
- - "s3.ap-southeast-1.wasabisys.com"
- - Wasabi AP Southeast 1 (Singapore)
- - "s3.ap-southeast-2.wasabisys.com"
- - Wasabi AP Southeast 2 (Sydney)
- - "storage.iran.liara.space"
- - Liara Iran endpoint
- - "s3.ir-thr-at1.arvanstorage.ir"
- - ArvanCloud Tehran Iran (Simin) endpoint
- - "s3.ir-tbz-sh1.arvanstorage.ir"
- - ArvanCloud Tabriz Iran (Shahriar) endpoint
-
#### --s3-location-constraint
Location constraint - must be set to match the Region.
@@ -22277,274 +22690,6 @@ Properties:
- "us-gov-west-1"
- AWS GovCloud (US) Region
-#### --s3-location-constraint
-
-Location constraint - must match endpoint.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - "wuxi1"
- - East China (Suzhou)
- - "jinan1"
- - East China (Jinan)
- - "ningbo1"
- - East China (Hangzhou)
- - "shanghai1"
- - East China (Shanghai-1)
- - "zhengzhou1"
- - Central China (Zhengzhou)
- - "hunan1"
- - Central China (Changsha-1)
- - "zhuzhou1"
- - Central China (Changsha-2)
- - "guangzhou1"
- - South China (Guangzhou-2)
- - "dongguan1"
- - South China (Guangzhou-3)
- - "beijing1"
- - North China (Beijing-1)
- - "beijing2"
- - North China (Beijing-2)
- - "beijing4"
- - North China (Beijing-3)
- - "huhehaote1"
- - North China (Huhehaote)
- - "chengdu1"
- - Southwest China (Chengdu)
- - "chongqing1"
- - Southwest China (Chongqing)
- - "guiyang1"
- - Southwest China (Guiyang)
- - "xian1"
- - Nouthwest China (Xian)
- - "yunnan"
- - Yunnan China (Kunming)
- - "yunnan2"
- - Yunnan China (Kunming-2)
- - "tianjin1"
- - Tianjin China (Tianjin)
- - "jilin1"
- - Jilin China (Changchun)
- - "hubei1"
- - Hubei China (Xiangyan)
- - "jiangxi1"
- - Jiangxi China (Nanchang)
- - "gansu1"
- - Gansu China (Lanzhou)
- - "shanxi1"
- - Shanxi China (Taiyuan)
- - "liaoning1"
- - Liaoning China (Shenyang)
- - "hebei1"
- - Hebei China (Shijiazhuang)
- - "fujian1"
- - Fujian China (Xiamen)
- - "guangxi1"
- - Guangxi China (Nanning)
- - "anhui1"
- - Anhui China (Huainan)
-
-#### --s3-location-constraint
-
-Location constraint - must match endpoint.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "ir-thr-at1"
- - Tehran Iran (Simin)
- - "ir-tbz-sh1"
- - Tabriz Iran (Shahriar)
-
-#### --s3-location-constraint
-
-Location constraint - must match endpoint when using IBM Cloud Public.
-
-For on-prem COS, do not make a selection from this list, hit enter.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: IBMCOS
-- Type: string
-- Required: false
-- Examples:
- - "us-standard"
- - US Cross Region Standard
- - "us-vault"
- - US Cross Region Vault
- - "us-cold"
- - US Cross Region Cold
- - "us-flex"
- - US Cross Region Flex
- - "us-east-standard"
- - US East Region Standard
- - "us-east-vault"
- - US East Region Vault
- - "us-east-cold"
- - US East Region Cold
- - "us-east-flex"
- - US East Region Flex
- - "us-south-standard"
- - US South Region Standard
- - "us-south-vault"
- - US South Region Vault
- - "us-south-cold"
- - US South Region Cold
- - "us-south-flex"
- - US South Region Flex
- - "eu-standard"
- - EU Cross Region Standard
- - "eu-vault"
- - EU Cross Region Vault
- - "eu-cold"
- - EU Cross Region Cold
- - "eu-flex"
- - EU Cross Region Flex
- - "eu-gb-standard"
- - Great Britain Standard
- - "eu-gb-vault"
- - Great Britain Vault
- - "eu-gb-cold"
- - Great Britain Cold
- - "eu-gb-flex"
- - Great Britain Flex
- - "ap-standard"
- - APAC Standard
- - "ap-vault"
- - APAC Vault
- - "ap-cold"
- - APAC Cold
- - "ap-flex"
- - APAC Flex
- - "mel01-standard"
- - Melbourne Standard
- - "mel01-vault"
- - Melbourne Vault
- - "mel01-cold"
- - Melbourne Cold
- - "mel01-flex"
- - Melbourne Flex
- - "tor01-standard"
- - Toronto Standard
- - "tor01-vault"
- - Toronto Vault
- - "tor01-cold"
- - Toronto Cold
- - "tor01-flex"
- - Toronto Flex
-
-#### --s3-location-constraint
-
-Location constraint - the location where your bucket will be located and your data stored.
-
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "global"
- - Global CDN Region
- - "au"
- - Australia (All locations)
- - "au-nsw"
- - NSW (Australia) Region
- - "au-qld"
- - QLD (Australia) Region
- - "au-vic"
- - VIC (Australia) Region
- - "au-wa"
- - Perth (Australia) Region
- - "ph"
- - Manila (Philippines) Region
- - "th"
- - Bangkok (Thailand) Region
- - "hk"
- - HK (Hong Kong) Region
- - "mn"
- - Ulaanbaatar (Mongolia) Region
- - "kg"
- - Bishkek (Kyrgyzstan) Region
- - "id"
- - Jakarta (Indonesia) Region
- - "jp"
- - Tokyo (Japan) Region
- - "sg"
- - SG (Singapore) Region
- - "de"
- - Frankfurt (Germany) Region
- - "us"
- - USA (AnyCast) Region
- - "us-east-1"
- - New York (USA) Region
- - "us-west-1"
- - Freemont (USA) Region
- - "nz"
- - Auckland (New Zealand) Region
-
-#### --s3-location-constraint
-
-Location constraint - must be set to match the Region.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "cn-east-1"
- - East China Region 1
- - "cn-east-2"
- - East China Region 2
- - "cn-north-1"
- - North China Region 1
- - "cn-south-1"
- - South China Region 1
- - "us-north-1"
- - North America Region 1
- - "ap-southeast-1"
- - Southeast Asia Region 1
- - "ap-northeast-1"
- - Northeast Asia Region 1
-
-#### --s3-location-constraint
-
-Location constraint - must be set to match the Region.
-
-Leave blank if not sure. Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox
-- Type: string
-- Required: false
-
#### --s3-acl
Canned ACL used when creating buckets and storing or copying objects.
@@ -22676,150 +22821,9 @@ Properties:
- "GLACIER_IR"
- Glacier Instant Retrieval storage class
-#### --s3-storage-class
-
-The storage class to use when storing new objects in OSS.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Alibaba
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "GLACIER"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in ChinaMobile.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "GLACIER"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in Liara
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Liara
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in ArvanCloud.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in Tencent COS.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: TencentCOS
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "ARCHIVE"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in S3.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default.
- - "STANDARD"
- - The Standard class for any upload.
- - Suitable for on-demand content like streaming or CDN.
- - Available in all regions.
- - "GLACIER"
- - Archived storage.
- - Prices are lower, but it needs to be restored first to be accessed.
- - Available in FR-PAR and NL-AMS regions.
- - "ONEZONE_IA"
- - One Zone - Infrequent Access.
- - A good choice for storing secondary backup copies or easily re-creatable data.
- - Available in the FR-PAR region only.
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in Qiniu.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
- - "LINE"
- - Infrequent access storage mode
- - "GLACIER"
- - Archive storage mode
- - "DEEP_ARCHIVE"
- - Deep archive storage mode
-
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-bucket-acl
@@ -23312,7 +23316,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_S3_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
#### --s3-memory-pool-flush-time
@@ -23548,6 +23552,57 @@ Properties:
- Type: string
- Required: false
+#### --s3-use-already-exists
+
+Set if rclone should report BucketAlreadyExists errors on bucket creation.
+
+At some point during the evolution of the s3 protocol, AWS started
+returning an `AlreadyOwnedByYou` error when attempting to create a
+bucket that the user already owned, rather than a
+`BucketAlreadyExists` error.
+
+Unfortunately exactly what has been implemented by s3 clones is a
+little inconsistent, some return `AlreadyOwnedByYou`, some return
+`BucketAlreadyExists` and some return no error at all.
+
+This is important to rclone because it ensures the bucket exists by
+creating it on quite a lot of operations (unless
+`--s3-no-check-bucket` is used).
+
+If rclone knows the provider can return `AlreadyOwnedByYou` or returns
+no error then it can report `BucketAlreadyExists` errors when the user
+attempts to create a bucket not owned by them. Otherwise rclone
+ignores the `BucketAlreadyExists` error which can lead to confusion.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+
+Properties:
+
+- Config: use_already_exists
+- Env Var: RCLONE_S3_USE_ALREADY_EXISTS
+- Type: Tristate
+- Default: unset
+
+#### --s3-use-multipart-uploads
+
+Set if rclone should use multipart uploads.
+
+You can change this if you want to disable the use of multipart uploads.
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+
+Properties:
+
+- Config: use_multipart_uploads
+- Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
+- Type: Tristate
+- Default: unset
+
### Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
@@ -24049,6 +24104,12 @@ secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
```
+**Note** that `--s3-versions` does not work with GCS when it needs to do directory paging. Rclone will return the error:
+
+ s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker
+
+This is Google bug [#312292516](https://issuetracker.google.com/u/0/issues/312292516).
+
### DigitalOcean Spaces
[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
@@ -24958,6 +25019,31 @@ endpoint = s3.rackcorp.com
location_constraint = au-nsw
```
+### Rclone Serve S3 {#rclone}
+
+Rclone can serve any remote over the S3 protocol. For details see the
+[rclone serve s3](https://rclone.org/commands/rclone_serve_s3/) documentation.
+
+For example, to serve `remote:path` over s3, run the server like this:
+
+```
+rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+```
+
+This will be compatible with an rclone remote which is defined like this:
+
+```
+[serves3]
+type = s3
+provider = Rclone
+endpoint = http://127.0.0.1:8080/
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+use_multipart_uploads = false
+```
+
+Note that setting `use_multipart_uploads = false` is to work around
+[a bug](https://rclone.org/commands/rclone_serve_http/#bugs) which will be fixed in due course.
### Scaleway
@@ -25772,6 +25858,7 @@ Name Type
==== ====
leviia s3
```
+
### Liara {#liara-cloud}
Here is an example of making a [Liara Object Storage](https://liara.ir/landing/object-storage)
@@ -25873,6 +25960,139 @@ server_side_encryption =
storage_class =
```
+### Linode {#linode}
+
+Here is an example of making a [Linode Object Storage](https://www.linode.com/products/object-storage/)
+configuration. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process.
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> linode
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Linode Object Storage
+ \ (Linode)
+[snip]
+provider> Linode
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option endpoint.
+Endpoint for Linode Object Storage API.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Atlanta, GA (USA), us-southeast-1
+ \ (us-southeast-1.linodeobjects.com)
+ 2 / Chicago, IL (USA), us-ord-1
+ \ (us-ord-1.linodeobjects.com)
+ 3 / Frankfurt (Germany), eu-central-1
+ \ (eu-central-1.linodeobjects.com)
+ 4 / Milan (Italy), it-mil-1
+ \ (it-mil-1.linodeobjects.com)
+ 5 / Newark, NJ (USA), us-east-1
+ \ (us-east-1.linodeobjects.com)
+ 6 / Paris (France), fr-par-1
+ \ (fr-par-1.linodeobjects.com)
+ 7 / Seattle, WA (USA), us-sea-1
+ \ (us-sea-1.linodeobjects.com)
+ 8 / Singapore ap-south-1
+ \ (ap-south-1.linodeobjects.com)
+ 9 / Stockholm (Sweden), se-sto-1
+ \ (se-sto-1.linodeobjects.com)
+10 / Washington, DC, (USA), us-iad-1
+ \ (us-iad-1.linodeobjects.com)
+endpoint> 3
+
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+If the acl is an empty string then no X-Amz-Acl: header is added and
+the default (private) will be used.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+[snip]
+acl>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Linode
+- access_key_id: ACCESS_KEY
+- secret_access_key: SECRET_ACCESS_KEY
+- endpoint: eu-central-1.linodeobjects.com
+Keep this "linode" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+This will leave the config file looking like this.
+
+```
+[linode]
+type = s3
+provider = Linode
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+endpoint = eu-central-1.linodeobjects.com
+```
+
### ArvanCloud {#arvan-cloud}
[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud Object Storage goes beyond the limited traditional file storage.
@@ -26615,9 +26835,9 @@ This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
-### Modified time
+### Modification times
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
`X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
in the Backblaze standard. Other tools should be able to use this as
a modified time.
@@ -27027,7 +27247,7 @@ Properties:
- Config: upload_concurrency
- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
- Type: int
-- Default: 16
+- Default: 4
#### --b2-disable-checksum
@@ -27107,6 +27327,37 @@ Properties:
- Type: bool
- Default: false
+#### --b2-lifecycle
+
+Set the number of days deleted files should be kept when creating a bucket.
+
+On bucket creation, this parameter is used to create a lifecycle rule
+for the entire bucket.
+
+If lifecycle is 0 (the default) it does not create a lifecycle rule so
+the default B2 behaviour applies. This is to create versions of files
+on delete and overwrite and to keep them indefinitely.
+
+If lifecycle is >0 then it creates a single rule setting the number of
+days before a file that is deleted or overwritten is deleted
+permanently. This is known as daysFromHidingToDeleting in the b2 docs.
+
+The minimum value for this parameter is 1 day.
+
+You can also enable hard_delete in the config, which will mean
+deletions won't cause versions but overwrites will still cause
+versions to be made.
+
+See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket creation.
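+
+For example, to create a new bucket with a 30 day rule (the bucket name
+here is just an example):
+
+    rclone mkdir --b2-lifecycle 30 b2:bucket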
+
+
+Properties:
+
+- Config: lifecycle
+- Env Var: RCLONE_B2_LIFECYCLE
+- Type: int
+- Default: 0
+
#### --b2-encoding
The encoding for the backend.
@@ -27117,9 +27368,76 @@ Properties:
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+## Backend commands
+
+Here are the commands specific to the b2 backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### lifecycle
+
+Read or set the lifecycle for a bucket
+
+ rclone backend lifecycle remote: [options] [+]
+
+This command can be used to read or set the lifecycle for a bucket.
+
+Usage Examples:
+
+To show the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket
+
+This will dump something like this showing the lifecycle rules.
+
+ [
+ {
+ "daysFromHidingToDeleting": 1,
+ "daysFromUploadingToHiding": null,
+ "fileNamePrefix": ""
+ }
+ ]
+
+If there are no lifecycle rules (the default) then it will just return [].
+
+To reset the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
+ rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
+
+This will run and then print the new lifecycle rules as above.
+
+Rclone only lets you set lifecycles for the whole bucket with the
+fileNamePrefix = "".
+
+You can't disable versioning with B2. The best you can do is to set
+the daysFromHidingToDeleting to 1 day. You can enable hard_delete in
+the config, which will mean deletions won't cause versions but
+overwrites will still cause versions to be made.
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
+
+See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
+
+
+Options:
+
+- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
+- "daysFromUploadingToHiding": This many days after uploading a file is hidden
+
## Limitations
@@ -27326,7 +27644,7 @@ d) Delete this remote
y/e/d> y
```
-### Modified time and hashes
+### Modification times and hashes
Box allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -27569,7 +27887,7 @@ Properties:
Impersonate this user ID when using a service account.
-Settng this flag allows rclone, when using a JWT service account, to
+Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can found for
@@ -27597,7 +27915,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
@@ -28569,7 +28887,7 @@ revert (sometimes silently) to time/size comparison if compatible hashsums
between source and target are not found.
-### Modified time
+### Modification times
Chunker stores modification times using the wrapped remote so support
depends on that. For a small non-chunked file the chunker overlay simply
@@ -28905,7 +29223,7 @@ To copy a local directory to an ShareFile directory called backup
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-### Modified time and hashes
+### Modification times and hashes
ShareFile allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -29100,7 +29418,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -29519,7 +29837,7 @@ Example:
`1/12/qgm4avr35m5loi1th53ato71v0`
-### Modified time and hashes
+### Modification times and hashes
Crypt stores modification times using the underlying remote so support
depends on that.
@@ -29826,7 +30144,7 @@ has a header and is divided into chunks.
The initial nonce is generated from the operating systems crypto
strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
-The chance of a nonce being re-used is minuscule. If you wrote an
+The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of re-using a nonce.
@@ -30327,7 +30645,7 @@ You can then use team folders like this `remote:/TeamFolder` and
A leading `/` for a Dropbox personal account will do nothing, but it
will take an extra HTTP transaction so it should be avoided.
-### Modified time and Hashes
+### Modification times and hashes
Dropbox supports modified times, but the only way to set a
modification time is to re-upload the file.
@@ -30573,6 +30891,30 @@ Properties:
- Type: bool
- Default: false
+#### --dropbox-pacer-min-sleep
+
+Minimum time to sleep between API calls.
+
+Properties:
+
+- Config: pacer_min_sleep
+- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
+- Type: Duration
+- Default: 10ms
+
+#### --dropbox-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_DROPBOX_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
+
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@@ -30659,30 +31001,6 @@ Properties:
- Type: Duration
- Default: 10m0s
-#### --dropbox-pacer-min-sleep
-
-Minimum time to sleep between API calls.
-
-Properties:
-
-- Config: pacer_min_sleep
-- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
-- Type: Duration
-- Default: 10ms
-
-#### --dropbox-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_DROPBOX_ENCODING
-- Type: MultiEncoder
-- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
-
## Limitations
@@ -30833,7 +31151,7 @@ To copy a local directory to an Enterprise File Fabric directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
The Enterprise File Fabric allows modification times to be set on
files accurate to 1 second. These will be used to detect whether
@@ -31003,7 +31321,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FILEFABRIC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
@@ -31447,7 +31765,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FTP_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,RightSpace,Dot
- Examples:
- "Asterisk,Ctl,Dot,Slash"
@@ -31490,7 +31808,7 @@ at present.
The `ftp_proxy` environment variable is not currently supported.
-#### Modified time
+### Modification times
File modification time (timestamps) is supported to 1 second resolution
for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
@@ -31753,7 +32071,7 @@ Eg `--header-upload "Content-Type text/potato"`
Note that the last of these is for setting custom metadata in the form
`--header-upload "x-goog-meta-key: value"`
-### Modification time
+### Modification times
Google Cloud Storage stores md5sum natively.
Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
@@ -32202,7 +32520,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_GCS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
@@ -32336,6 +32654,8 @@ use. This changes what type of token is granted to rclone. [The
scopes are defined
here](https://developers.google.com/drive/v3/web/about-auth).
+A comma-separated list is allowed e.g. `drive.readonly,drive.file`.
+
The scope are
#### drive
@@ -32571,10 +32891,14 @@ large folder (10600 directories, 39000 files):
- without `--fast-list`: 22:05 min
- with `--fast-list`: 58s
-### Modified time
+### Modification times and hashes
Google drive stores modification times accurate to 1 ms.
+Hash algorithms MD5, SHA1 and SHA256 are supported. Note, however,
+that a small fraction of files uploaded may not have SHA1 or SHA256
+hashes especially if they were uploaded before 2018.
+
### Restricted filename characters
Only Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8),
@@ -32794,7 +33118,7 @@ Properties:
#### --drive-scope
-Scope that rclone should use when requesting access from drive.
+Comma separated list of scopes that rclone should use when requesting access from drive.
Properties:
@@ -32982,15 +33306,40 @@ Properties:
- Type: bool
- Default: false
+#### --drive-show-all-gdocs
+
+Show all Google Docs including non-exportable ones in listings.
+
+If you try a server side copy on a Google Form without this flag, you
+will get this error:
+
+ No export formats found for "application/vnd.google-apps.form"
+
+However adding this flag will allow the form to be server side copied.
+
+Note that rclone doesn't add extensions to the Google Docs file names
+in this mode.
+
+Do **not** use this flag when trying to download Google Docs - rclone
+will fail to download them.
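+
+For example, a sketch of a server side copy of a folder that contains
+Forms (the paths are illustrative):
+
+    rclone copy --drive-show-all-gdocs drive:Forms drive:Forms-backup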
+
+
+Properties:
+
+- Config: show_all_gdocs
+- Env Var: RCLONE_DRIVE_SHOW_ALL_GDOCS
+- Type: bool
+- Default: false
+
#### --drive-skip-checksum-gphotos
-Skip MD5 checksum on Google photos and videos only.
+Skip checksums on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
-blank MD5 checksum.
+blank checksum.
Google photos are identified by being in the "photos" space.
@@ -33444,6 +33793,98 @@ Properties:
- Type: bool
- Default: true
+#### --drive-metadata-owner
+
+Control whether owner should be read or written in metadata.
+
+Owner is a standard part of the file metadata so is easy to read. But it
+isn't always desirable to set the owner from the metadata.
+
+Note that you can't set the owner on Shared Drives, and that setting
+ownership will generate an email to the new owner (this can't be
+disabled), and you can't transfer ownership to someone outside your
+organization.
+
+
+Properties:
+
+- Config: metadata_owner
+- Env Var: RCLONE_DRIVE_METADATA_OWNER
+- Type: Bits
+- Default: read
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
+#### --drive-metadata-permissions
+
+Control whether permissions should be read or written in metadata.
+
+Reading permissions metadata from files can be done quickly, but it
+isn't always desirable to set the permissions from the metadata.
+
+Note that rclone drops any inherited permissions on Shared Drives and
+any owner permission on My Drives as these are duplicated in the owner
+metadata.
+
+
+Properties:
+
+- Config: metadata_permissions
+- Env Var: RCLONE_DRIVE_METADATA_PERMISSIONS
+- Type: Bits
+- Default: off
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
+#### --drive-metadata-labels
+
+Control whether labels should be read or written in metadata.
+
+Reading labels metadata from files takes an extra API transaction and
+will slow down listings. It isn't always desirable to set the labels
+from the metadata.
+
+The format of labels is documented in the drive API documentation at
+https://developers.google.com/drive/api/reference/rest/v3/Label -
+rclone just provides a JSON dump of this format.
+
+When setting labels, the label and fields must already exist - rclone
+will not create them. This means that if you are transferring labels
+from two different accounts you will have to create the labels in
+advance and use the metadata mapper to translate the IDs between the
+two accounts.
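+
+For example, a sketch of reading labels as part of the metadata (the
+path is illustrative):
+
+    rclone lsjson --metadata --drive-metadata-labels read drive:path/to/file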
+
+
+Properties:
+
+- Config: metadata_labels
+- Env Var: RCLONE_DRIVE_METADATA_LABELS
+- Type: Bits
+- Default: off
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
#### --drive-encoding
The encoding for the backend.
@@ -33454,7 +33895,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: InvalidUtf8
#### --drive-env-auth
@@ -33475,6 +33916,29 @@ Properties:
- "true"
- Get GCP IAM credentials from the environment (env vars or IAM).
+### Metadata
+
+User metadata is stored in the properties field of the drive object.
+
+Here are the possible system metadata items for the drive backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation) with mS accuracy. Note that this is only writable on fresh uploads - it can't be written for updates. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+| content-type | The MIME type of the file. | string | text/plain | N |
+| copy-requires-writer-permission | Whether the options to copy, print, or download this file, should be disabled for readers and commenters. | boolean | true | N |
+| description | A short description of the file. | string | Contract for signing | N |
+| folder-color-rgb | The color for a folder or a shortcut to a folder as an RGB hex string. | string | 881133 | N |
+| labels | Labels attached to this file in a JSON dump of Google drive format. Enable with --drive-metadata-labels. | JSON | [] | N |
+| mtime | Time of last modification with mS accuracy. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+| owner | The owner of the file. Usually an email address. Enable with --drive-metadata-owner. | string | user@example.com | N |
+| permissions | Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions. | JSON | {} | N |
+| starred | Whether the user has starred the file. | boolean | false | N |
+| viewed-by-me | Whether the file has been viewed by this user. | boolean | true | **Y** |
+| writers-can-share | Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives. | boolean | false | N |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
## Backend commands
Here are the commands specific to the drive backend.
@@ -33738,6 +34202,11 @@ Waiting a moderate period of time between attempts (estimated to be
approximately 1 hour) and/or not using --fast-list both seem to be
effective in preventing the problem.
+### SHA1 or SHA256 hashes may be missing
+
+All files have MD5 hashes, but a small fraction of files uploaded may
+not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.
+
## Making your own client_id
When you use rclone with Google drive in its default configuration you
@@ -34190,9 +34659,93 @@ Properties:
- Config: encoding
- Env Var: RCLONE_GPHOTOS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
+#### --gphotos-batch-mode
+
+Upload file batching sync|async|off.
+
+This sets the batch mode used by rclone.
+
+This has 3 possible values
+
+- off - no batching
+- sync - batch uploads and check completion (default)
+- async - batch upload and don't check completion
+
+Rclone will close any outstanding batches when it exits which may make
+a delay on quit.
+
+
+Properties:
+
+- Config: batch_mode
+- Env Var: RCLONE_GPHOTOS_BATCH_MODE
+- Type: string
+- Default: "sync"
+
+#### --gphotos-batch-size
+
+Max number of files in upload batch.
+
+This sets the batch size of files to upload. It has to be less than 50.
+
+By default this is 0 which means rclone will calculate the batch size
+depending on the setting of batch_mode.
+
+- batch_mode: async - default batch_size is 50
+- batch_mode: sync - default batch_size is the same as --transfers
+- batch_mode: off - not in use
+
+Rclone will close any outstanding batches when it exits which may make
+a delay on quit.
+
+Setting this is a great idea if you are uploading lots of small files
+as it will make them upload a lot quicker. You can use --transfers 32 to
+maximise throughput.
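+
+For example, a sketch of an upload tuned this way (the album name is
+illustrative):
+
+    rclone copy --transfers 32 /path/to/photos "gphotos:album/My Album"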
+
+
+Properties:
+
+- Config: batch_size
+- Env Var: RCLONE_GPHOTOS_BATCH_SIZE
+- Type: int
+- Default: 0
+
+#### --gphotos-batch-timeout
+
+Max time to allow an idle upload batch before uploading.
+
+If an upload batch is idle for more than this long then it will be
+uploaded.
+
+The default for this is 0 which means rclone will choose a sensible
+default based on the batch_mode in use.
+
+- batch_mode: async - default batch_timeout is 10s
+- batch_mode: sync - default batch_timeout is 1s
+- batch_mode: off - not in use
+
+
+Properties:
+
+- Config: batch_timeout
+- Env Var: RCLONE_GPHOTOS_BATCH_TIMEOUT
+- Type: Duration
+- Default: 0s
+
+#### --gphotos-batch-commit-timeout
+
+Max time to wait for a batch to finish committing.
+
+Properties:
+
+- Config: batch_commit_timeout
+- Env Var: RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT
+- Type: Duration
+- Default: 10m0s
+
## Limitations
@@ -34244,7 +34797,7 @@ if you uploaded an image to `upload` then uploaded the same image to
what it was uploaded with initially, not what you uploaded it with to
`album`. In practise this shouldn't cause too many problems.
-### Modified time
+### Modification times
The date shown of media in Google Photos is the creation date as
determined by the EXIF information, or the upload date if that is not
@@ -34751,7 +35304,7 @@ username = root
You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data
uploaded will be lost.)
-### Modified time
+### Modification times
Time accurate to 1 second is stored.
@@ -34781,16 +35334,16 @@ Here are the Standard options specific to hdfs (Hadoop distributed file system).
#### --hdfs-namenode
-Hadoop name node and port.
+Hadoop name nodes and ports.
-E.g. "namenode:8020" to connect to host namenode at port 8020.
+E.g. "namenode-1:8020,namenode-2:8020,..." to connect to host namenodes at port 8020.
Properties:
- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
-- Type: string
-- Required: true
+- Type: CommaSepList
+- Default:
#### --hdfs-username
@@ -34854,7 +35407,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
@@ -34983,7 +35536,7 @@ Using
the process is very similar to the process of initial setup exemplified before.
-### Modified time and hashes
+### Modification times and hashes
HiDrive allows modification times to be set on objects accurate to 1 second.
@@ -35275,7 +35828,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HIDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Dot
@@ -35406,7 +35959,7 @@ Sync the remote `directory` to `/home/local/directory`, deleting any excess file
This remote is read only - you can't upload files to an HTTP server.
-### Modified time
+### Modification times
Most HTTP servers store time accurate to 1 second.
@@ -35513,6 +36066,46 @@ Properties:
- Type: bool
- Default: false
+## Backend commands
+
+Here are the commands specific to the http backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### set
+
+Set command for updating the config parameters.
+
+ rclone backend set remote: [options] [+]
+
+This set command can be used to update the config parameters
+for a running http backend.
+
+Usage Examples:
+
+ rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: -o url=https://example.com
+
+The option keys are named as they are in the config file.
+
+This rebuilds the connection to the http backend when it is called with
+the new parameters. Only new parameters need be passed as the values
+will default to those currently in use.
+
+It doesn't return anything.
+
+
## Limitations
@@ -35524,6 +36117,217 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+# ImageKit
+This is a backend for the [ImageKit.io](https://imagekit.io/) storage service.
+
+#### About ImageKit
+[ImageKit.io](https://imagekit.io/) provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.
+
+
+#### Accounts & Pricing
+
+To use this backend, you need to [create an account](https://imagekit.io/registration/) on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans).
+
+## Configuration
+
+Here is an example of making an imagekit configuration.
+
+Firstly create a [ImageKit.io](https://imagekit.io/) account and choose a plan.
+
+You will need to log in and get the `publicKey` and `privateKey` for your account from the developer section.
+
+Now run
+```
+rclone config
+```
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter the name for the new remote.
+name> imagekit-media-library
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / ImageKit.io
+\ (imagekit)
+[snip]
+Storage> imagekit
+
+Option endpoint.
+You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+Enter a value.
+endpoint> https://ik.imagekit.io/imagekit_id
+
+Option public_key.
+You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+Enter a value.
+public_key> public_****************************
+
+Option private_key.
+You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+Enter a value.
+private_key> private_****************************
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: imagekit
+- endpoint: https://ik.imagekit.io/imagekit_id
+- public_key: public_****************************
+- private_key: private_****************************
+
+Keep this "imagekit-media-library" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+List directories in the top level of your Media Library
+```
+rclone lsd imagekit-media-library:
+```
+Make a new directory.
+```
+rclone mkdir imagekit-media-library:directory
+```
+List the contents of a directory.
+```
+rclone ls imagekit-media-library:directory
+```
+
+### Modification times and hashes
+
+ImageKit does not support modification times or hashes yet.
+
+### Checksums
+
+No checksums are supported.
+
+
+### Standard options
+
+Here are the Standard options specific to imagekit (ImageKit.io).
+
+#### --imagekit-endpoint
+
+You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_IMAGEKIT_ENDPOINT
+- Type: string
+- Required: true
+
+#### --imagekit-public-key
+
+You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: public_key
+- Env Var: RCLONE_IMAGEKIT_PUBLIC_KEY
+- Type: string
+- Required: true
+
+#### --imagekit-private-key
+
+You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: private_key
+- Env Var: RCLONE_IMAGEKIT_PRIVATE_KEY
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to imagekit (ImageKit.io).
+
+#### --imagekit-only-signed
+
+If you have configured `Restrict unsigned image URLs` in your dashboard settings, set this to true.
+
+Properties:
+
+- Config: only_signed
+- Env Var: RCLONE_IMAGEKIT_ONLY_SIGNED
+- Type: bool
+- Default: false
+
+#### --imagekit-versions
+
+Include old versions in directory listings.
+
+Properties:
+
+- Config: versions
+- Env Var: RCLONE_IMAGEKIT_VERSIONS
+- Type: bool
+- Default: false
+
+#### --imagekit-upload-tags
+
+Tags to add to the uploaded files, e.g. "tag1,tag2".
+
+Properties:
+
+- Config: upload_tags
+- Env Var: RCLONE_IMAGEKIT_UPLOAD_TAGS
+- Type: string
+- Required: false
+
+#### --imagekit-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_IMAGEKIT_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket
+
+### Metadata
+
+Any metadata supported by the underlying remote is read and written.
+
+Here are the possible system metadata items for the imagekit backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| aws-tags | AI generated tags by AWS Rekognition associated with the image | string | tag1,tag2 | **Y** |
+| btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+| custom-coordinates | Custom coordinates of the file | string | 0,0,100,100 | **Y** |
+| file-type | Type of the file | string | image | **Y** |
+| google-tags | AI generated tags by Google Cloud Vision associated with the image | string | tag1,tag2 | **Y** |
+| has-alpha | Whether the image has alpha channel or not | bool | | **Y** |
+| height | Height of the image or video in pixels | int | | **Y** |
+| is-private-file | Whether the file is private or not | bool | | **Y** |
+| size | Size of the object in bytes | int64 | | **Y** |
+| tags | Tags associated with the file | string | tag1,tag2 | **Y** |
+| width | Width of the image or video in pixels | int | | **Y** |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
+
+
# Internet Archive
The Internet Archive backend utilizes Items on [archive.org](https://archive.org/)
@@ -35780,7 +36584,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_INTERNETARCHIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
### Metadata
@@ -36043,7 +36847,7 @@ them. Generally you should avoid these, unless you know what you are doing.
### --fast-list
-This remote supports `--fast-list` which allows you to use fewer
+This backend supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
@@ -36051,10 +36855,11 @@ Note that the implementation in Jottacloud always uses only a single
API request to get the entire list, so for large folders this could
lead to long wait time before the first results are shown.
-Note also that with rclone version 1.58 and newer information about
-[MIME types](https://rclone.org/overview/#mime-type) are not available when using `--fast-list`.
+Note also that with rclone version 1.58 and newer, information about
+[MIME types](https://rclone.org/overview/#mime-type) and metadata item [utime](#metadata)
+are not available when using `--fast-list`.
-### Modified time and hashes
+### Modification times and hashes
Jottacloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -36253,9 +37058,24 @@ Properties:
- Config: encoding
- Env Var: RCLONE_JOTTACLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
+### Metadata
+
+Jottacloud has limited support for metadata, currently an extended set of timestamps.
+
+Here are the possible system metadata items for the jottacloud backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation), read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| content-type | MIME type, also known as media type | string | text/plain | **Y** |
+| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| utime | Time of last upload, when current revision was created, generated by backend | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
## Limitations
@@ -36440,34 +37260,6 @@ Properties:
- Type: string
- Required: true
-#### --koofr-password
-
-Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: digistorage
-- Type: string
-- Required: true
-
-#### --koofr-password
-
-Your password for rclone (generate one at your service's settings page).
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: other
-- Type: string
-- Required: true
-
### Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
@@ -36508,7 +37300,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_KOOFR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -36664,6 +37456,77 @@ d) Delete this remote
y/e/d> y
```
+# Linkbox
+
+Linkbox is [a private cloud drive](https://linkbox.to/).
+
+## Configuration
+
+Here is an example of making a remote for Linkbox.
+
+First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / Linkbox
+ \ (linkbox)
+Storage> XX
+
+Option token.
+Token from https://www.linkbox.to/admin/account
+Enter a value.
+token> testFromCLToken
+
+Configuration complete.
+Options:
+- type: linkbox
+- token: XXXXXXXXXXX
+Keep this "linkbox" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+
+```
+
+
+### Standard options
+
+Here are the Standard options specific to linkbox (Linkbox).
+
+#### --linkbox-token
+
+Token from https://www.linkbox.to/admin/account
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_LINKBOX_TOKEN
+- Type: string
+- Required: true
+
+
+
+## Limitations
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can't be used in JSON strings.
+
# Mail.ru Cloud
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
@@ -36783,17 +37646,15 @@ excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
-### Modified time
+### Modification times and hashes
Files support a modification time attribute with up to 1 second precision.
Directories do not have a modification time, which is shown as "Jan 1 1970".
-### Hash checksums
-
-Hash sums use a custom Mail.ru algorithm based on SHA1.
+File hashes are supported, with a custom Mail.ru algorithm based on SHA1.
If file size is less than or equal to the SHA1 block size (20 bytes),
its hash is simply its data right-padded with zero bytes.
-Hash sum of a larger file is computed as a SHA1 sum of the file data
+The hash of a larger file is computed as the SHA1 of the file data
bytes concatenated with a decimal representation of the data length.
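
Below is a short sketch of the hashing scheme described above, written
in Go for illustration (it is not rclone's actual implementation and
assumes the whole file fits in memory):

```
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"strconv"
)

// mailruHash follows the description above: payloads of 20 bytes or
// fewer are returned right-padded with zero bytes; larger payloads are
// hashed as SHA1(data || decimal length of data).
func mailruHash(data []byte) []byte {
	if len(data) <= sha1.Size { // sha1.Size is 20 bytes
		h := make([]byte, sha1.Size)
		copy(h, data)
		return h
	}
	h := sha1.New()
	h.Write(data)
	h.Write([]byte(strconv.Itoa(len(data))))
	return h.Sum(nil)
}

func main() {
	fmt.Println(hex.EncodeToString(mailruHash([]byte("hello"))))    // small file case
	fmt.Println(hex.EncodeToString(mailruHash(make([]byte, 4096)))) // large file case
}
```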
### Emptying Trash
@@ -37071,7 +37932,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_MAILRU_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -37163,7 +38024,7 @@ To copy a local directory to an Mega directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
Mega does not support modification times or hashes yet.
@@ -37360,7 +38221,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_MEGA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
@@ -37428,7 +38289,7 @@ testing or with an rclone server or rclone mount, e.g.
rclone serve webdav :memory:
rclone serve sftp :memory:
-### Modified time and hashes
+### Modification times and hashes
The memory backend supports MD5 hashes and modification times accurate to 1 nS.
@@ -37787,10 +38648,10 @@ This remote supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
-### Modified time
+### Modification times and hashes
-The modified time is stored as metadata on the object with the `mtime`
-key. It is stored using RFC3339 Format time with nanosecond
+The modification time is stored as metadata on the object with the
+`mtime` key. It is stored using RFC3339 Format time with nanosecond
precision. The metadata is supplied during directory listings so
there is no performance overhead to using it.
@@ -37800,6 +38661,10 @@ flag. Note that rclone can't set `LastModified`, so using the
`--update` flag when syncing is recommended if using
`--use-server-modtime`.
+MD5 hashes are stored with blobs. However blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5
+hashes, e.g. the local disk.
+
### Performance
When uploading large files, increasing the value of
@@ -37828,12 +38693,6 @@ These only get replaced if they are the last character in the name:
Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.
-### Hashes
-
-MD5 hashes are stored with blobs. However blobs that were uploaded in
-chunks only have an MD5 if the source remote was capable of MD5
-hashes, e.g. the local disk.
-
### Authentication {#authentication}
There are a number of ways of supplying credentials for Azure Blob
@@ -38387,10 +39246,10 @@ Properties:
#### --azureblob-access-tier
-Access tier of blob: hot, cool or archive.
+Access tier of blob: hot, cool, cold or archive.
-Archived blobs can be restored by setting access tier to hot or
-cool. Leave blank if you intend to use default access tier, which is
+Archived blobs can be restored by setting access tier to hot, cool or
+cold. Leave blank if you intend to use default access tier, which is
set at account level
If there is no "access tier" specified, rclone doesn't apply any tier.
@@ -38398,7 +39257,7 @@ rclone performs "Set Tier" operation on blobs while uploading, if objects
are not modified, specifying "access tier" to new one will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
-tiering blob to "Hot" or "Cool".
+tiering blob to "Hot", "Cool" or "Cold".
Properties:
@@ -38479,7 +39338,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
#### --azureblob-public-access
@@ -38588,6 +39447,708 @@ advanced settings, setting it to
`http(s)://:/devstoreaccount1`
(e.g. `http://10.254.2.5:10000/devstoreaccount1`).
+# Microsoft Azure Files Storage
+
+Paths are specified as `remote:` You may put subdirectories in too,
+e.g. `remote:path/to/dir`.
+
+## Configuration
+
+Here is an example of making a Microsoft Azure Files Storage
+configuration. For a remote called `remote`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Microsoft Azure Files Storage
+ \ "azurefiles"
+[snip]
+
+Option account.
+Azure Storage Account Name.
+Set this to the Azure Storage Account Name in use.
+Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+If this is blank and if env_auth is set it will be read from the
+environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible.
+Enter a value. Press Enter to leave empty.
+account> account_name
+
+Option share_name.
+Azure Files Share Name.
+This is required and is the name of the share to access.
+Enter a value. Press Enter to leave empty.
+share_name> share_name
+
+Option env_auth.
+Read credentials from runtime (environment variables, CLI or MSI).
+See the [authentication docs](/azurefiles#authentication) for full info.
+Enter a boolean value (true or false). Press Enter for the default (false).
+env_auth>
+
+Option key.
+Storage Account Shared Key.
+Leave blank to use SAS URL or connection string.
+Enter a value. Press Enter to leave empty.
+key> base64encodedkey==
+
+Option sas_url.
+SAS URL.
+Leave blank if using account/key or connection string.
+Enter a value. Press Enter to leave empty.
+sas_url>
+
+Option connection_string.
+Azure Files Connection String.
+Enter a value. Press Enter to leave empty.
+connection_string>
+[snip]
+
+Configuration complete.
+Options:
+- type: azurefiles
+- account: account_name
+- share_name: share_name
+- key: base64encodedkey==
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+```
+
+Once configured you can use rclone.
+
+See all files in the top level:
+
+ rclone lsf remote:
+
+Make a new directory in the root:
+
+ rclone mkdir remote:dir
+
+Recursively List the contents:
+
+ rclone ls remote:
+
+Sync `/home/local/directory` to the remote directory, deleting any
+excess files in the directory.
+
+ rclone sync --interactive /home/local/directory remote:dir
+
+### Modification times
+
+The modified time is stored as Azure standard `LastModified` time on
+files.
+
+### Performance
+
+When uploading large files, increasing the value of
+`--azurefiles-upload-concurrency` will increase performance at the cost
+of using more memory. The default of 16 is set quite conservatively to
+use less memory. It may be necessary to raise it to 64 or higher to
+fully utilize a 1 GBit/s link with a single file transfer.
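+
+For example, a sketch (the paths are illustrative):
+
+    rclone copy --azurefiles-upload-concurrency 64 /path/to/dir remote:dir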
+
+### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| " | 0x22 | " |
+| * | 0x2A | * |
+| : | 0x3A | : |
+| < | 0x3C | < |
+| > | 0x3E | > |
+| ? | 0x3F | ? |
+| \ | 0x5C | \ |
+| \| | 0x7C | | |
+
+File names can also not end with the following characters.
+These only get replaced if they are the last character in the name:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| . | 0x2E | . |
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can't be used in JSON strings.
+
+### Hashes
+
+MD5 hashes are stored with files. Not all files will have MD5 hashes
+as these have to be uploaded with the file.
+
+### Authentication {#authentication}
+
+There are a number of ways of supplying credentials for Azure Files
+Storage. Rclone tries them in the order of the sections below.
+
+#### Env Auth
+
+If the `env_auth` config parameter is `true` then rclone will pull
+credentials from the environment or runtime.
+
+It tries these authentication methods in this order:
+
+1. Environment Variables
+2. Managed Service Identity Credentials
+3. Azure CLI credentials (as used by the az tool)
+
+These are described in the following sections
+
+##### Env Auth: 1. Environment Variables
+
+If `env_auth` is set and environment variables are present rclone
+authenticates a service principal with a secret or certificate, or a
+user with a password, depending on which environment variable are set.
+It reads configuration from these variables, in the following order:
+
+1. Service principal with client secret
+ - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `AZURE_CLIENT_ID`: the service principal's client ID
+ - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets
+2. Service principal with certificate
+ - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `AZURE_CLIENT_ID`: the service principal's client ID
+ - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key.
+ - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file.
+ - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
+3. User with username and password
+ - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations".
+ - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to
+ - `AZURE_USERNAME`: a username (usually an email address)
+ - `AZURE_PASSWORD`: the user's password
+4. Workload Identity
+ - `AZURE_TENANT_ID`: Tenant to authenticate in.
+ - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to.
+ - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file.
+ - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
+
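+For example, a service principal with a client secret could be supplied
+entirely from the environment like this (a sketch - the IDs and secret
+are placeholders):
+
+    export AZURE_TENANT_ID=00000000-0000-0000-0000-000000000000
+    export AZURE_CLIENT_ID=11111111-1111-1111-1111-111111111111
+    export AZURE_CLIENT_SECRET=xxxx
+    rclone lsf :azurefiles,env_auth,account=ACCOUNT: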
+
+##### Env Auth: 2. Managed Service Identity Credentials
+
+When using Managed Service Identity, if the VM(SS) on which this
+program is running has a system-assigned identity, it will be used by
+default. If the resource has no system-assigned but exactly one
+user-assigned identity, the user-assigned identity will be used by
+default.
+
+If the resource has multiple user-assigned identities you will need to
+unset `env_auth` and set `use_msi` instead. See the [`use_msi`
+section](#use_msi).
+
+##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
+
+Credentials created with the `az` tool can be picked up using `env_auth`.
+
+For example if you were to login with a service principal like this:
+
+ az login --service-principal -u XXX -p XXX --tenant XXX
+
+Then you could access rclone resources like this:
+
+ rclone lsf :azurefiles,env_auth,account=ACCOUNT:
+
+Or
+
+ rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles:
+
+#### Account and Shared Key
+
+This is the most straightforward and least flexible way. Just fill
+in the `account` and `key` lines and leave the rest blank.
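+
+A minimal config section using this method might look like this (the
+account, share and key values are placeholders):
+
+    [remote]
+    type = azurefiles
+    account = account_name
+    share_name = share_name
+    key = base64encodedkey==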
+
+#### SAS URL
+
+To use it leave `account`, `key` and `connection_string` blank and fill in `sas_url`.
+
+#### Connection String
+
+To use it leave `account`, `key` and `sas_url` blank and fill in `connection_string`.
+
+#### Service principal with client secret
+
+If these variables are set, rclone will authenticate with a service principal with a client secret.
+
+- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
+- `client_id`: the service principal's client ID
+- `client_secret`: one of the service principal's client secrets
+
+The credentials can also be placed in a file using the
+`service_principal_file` configuration option.
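+
+For example, a config section pointing at a credentials file might look
+like this (a sketch - the file name is illustrative and the file is
+assumed to contain the JSON output of `az ad sp create-for-rbac`):
+
+    [remote]
+    type = azurefiles
+    account = account_name
+    share_name = share_name
+    service_principal_file = azure-principal.json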
+
+#### Service principal with certificate
+
+If these variables are set, rclone will authenticate with a service principal with certificate.
+
+- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
+- `client_id`: the service principal's client ID
+- `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key.
+- `client_certificate_password`: (optional) password for the certificate file.
+- `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
+
+**NB** `client_certificate_password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+#### User with username and password
+
+If these variables are set, rclone will authenticate with username and password.
+
+- `tenant`: (optional) tenant to authenticate in. Defaults to "organizations".
+- `client_id`: client ID of the application the user will authenticate to
+- `username`: a username (usually an email address)
+- `password`: the user's password
+
+Microsoft doesn't recommend this kind of authentication, because it's
+less secure than other authentication flows. This method is not
+interactive, so it isn't compatible with any form of multi-factor
+authentication, and the application must already have user or admin
+consent. This credential can only authenticate work and school
+accounts; it can't authenticate Microsoft accounts.
+
+**NB** `password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
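+
+For example, to generate an obscured value to paste into the config as
+the `password` (the plaintext shown is a placeholder):
+
+    rclone obscure 'mypassword'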
+
+#### Managed Service Identity Credentials {#use_msi}
+
+If `use_msi` is set then managed service identity credentials are
+used. This authentication only works when running in an Azure service.
+`env_auth` needs to be unset to use this.
+
+However if you have multiple user identities to choose from these must
+be explicitly specified using exactly one of the `msi_object_id`,
+`msi_client_id`, or `msi_mi_res_id` parameters.
+
+If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
+set, this is equivalent to using `env_auth`.
+
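+A minimal sketch of a config section using a user-assigned identity
+(the client ID is a placeholder):
+
+    [remote]
+    type = azurefiles
+    account = account_name
+    share_name = share_name
+    use_msi = true
+    msi_client_id = 11111111-1111-1111-1111-111111111111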
+
+### Standard options
+
+Here are the Standard options specific to azurefiles (Microsoft Azure Files).
+
+#### --azurefiles-account
+
+Azure Storage Account Name.
+
+Set this to the Azure Storage Account Name in use.
+
+Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+
+If this is blank and if env_auth is set it will be read from the
+environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible.
+
+
+Properties:
+
+- Config: account
+- Env Var: RCLONE_AZUREFILES_ACCOUNT
+- Type: string
+- Required: false
+
+#### --azurefiles-share-name
+
+Azure Files Share Name.
+
+This is required and is the name of the share to access.
+
+
+Properties:
+
+- Config: share_name
+- Env Var: RCLONE_AZUREFILES_SHARE_NAME
+- Type: string
+- Required: false
+
+#### --azurefiles-env-auth
+
+Read credentials from runtime (environment variables, CLI or MSI).
+
+See the [authentication docs](/azurefiles#authentication) for full info.
+
+Properties:
+
+- Config: env_auth
+- Env Var: RCLONE_AZUREFILES_ENV_AUTH
+- Type: bool
+- Default: false
+
+#### --azurefiles-key
+
+Storage Account Shared Key.
+
+Leave blank to use SAS URL or connection string.
+
+Properties:
+
+- Config: key
+- Env Var: RCLONE_AZUREFILES_KEY
+- Type: string
+- Required: false
+
+#### --azurefiles-sas-url
+
+SAS URL.
+
+Leave blank if using account/key or connection string.
+
+Properties:
+
+- Config: sas_url
+- Env Var: RCLONE_AZUREFILES_SAS_URL
+- Type: string
+- Required: false
+
+#### --azurefiles-connection-string
+
+Azure Files Connection String.
+
+Properties:
+
+- Config: connection_string
+- Env Var: RCLONE_AZUREFILES_CONNECTION_STRING
+- Type: string
+- Required: false
+
+#### --azurefiles-tenant
+
+ID of the service principal's tenant. Also called its directory ID.
+
+Set this if using
+- Service principal with client secret
+- Service principal with certificate
+- User with username and password
+
+
+Properties:
+
+- Config: tenant
+- Env Var: RCLONE_AZUREFILES_TENANT
+- Type: string
+- Required: false
+
+#### --azurefiles-client-id
+
+The ID of the client in use.
+
+Set this if using
+- Service principal with client secret
+- Service principal with certificate
+- User with username and password
+
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_AZUREFILES_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-client-secret
+
+One of the service principal's client secrets
+
+Set this if using
+- Service principal with client secret
+
+
+Properties:
+
+- Config: client_secret
+- Env Var: RCLONE_AZUREFILES_CLIENT_SECRET
+- Type: string
+- Required: false
+
+#### --azurefiles-client-certificate-path
+
+Path to a PEM or PKCS12 certificate file including the private key.
+
+Set this if using
+- Service principal with certificate
+
+
+Properties:
+
+- Config: client_certificate_path
+- Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PATH
+- Type: string
+- Required: false
+
+#### --azurefiles-client-certificate-password
+
+Password for the certificate file (optional).
+
+Optionally set this if using
+- Service principal with certificate
+
+And the certificate has a password.
+
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: client_certificate_password
+- Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PASSWORD
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
+
+#### --azurefiles-client-send-certificate-chain
+
+Send the certificate chain when using certificate auth.
+
+Specifies whether an authentication request will include an x5c header
+to support subject name / issuer based authentication. When set to
+true, authentication requests include the x5c header.
+
+Optionally set this if using
+- Service principal with certificate
+
+
+Properties:
+
+- Config: client_send_certificate_chain
+- Env Var: RCLONE_AZUREFILES_CLIENT_SEND_CERTIFICATE_CHAIN
+- Type: bool
+- Default: false
+
+#### --azurefiles-username
+
+User name (usually an email address)
+
+Set this if using
+- User with username and password
+
+
+Properties:
+
+- Config: username
+- Env Var: RCLONE_AZUREFILES_USERNAME
+- Type: string
+- Required: false
+
+#### --azurefiles-password
+
+The user's password
+
+Set this if using
+- User with username and password
+
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: password
+- Env Var: RCLONE_AZUREFILES_PASSWORD
+- Type: string
+- Required: false
+
+#### --azurefiles-service-principal-file
+
+Path to file containing credentials for use with a service principal.
+
+Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
+
+    $ az ad sp create-for-rbac --name "<name>" \
+      --role "Storage Files Data Owner" \
+      --scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
+      > azure-principal.json
+
+See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to files data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
+
+**NB** this section needs updating for Azure Files - pull requests appreciated!
+
+It may be more convenient to put the credentials directly into the
+rclone config file under the `client_id`, `tenant` and `client_secret`
+keys instead of setting `service_principal_file`.
+
+
+Properties:
+
+- Config: service_principal_file
+- Env Var: RCLONE_AZUREFILES_SERVICE_PRINCIPAL_FILE
+- Type: string
+- Required: false
+
+#### --azurefiles-use-msi
+
+Use a managed service identity to authenticate (only works in Azure).
+
+When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
+to authenticate to Azure Storage instead of a SAS token or account key.
+
+If the VM(SS) on which this program is running has a system-assigned identity, it will
+be used by default. If the resource has no system-assigned but exactly one user-assigned identity,
+the user-assigned identity will be used by default. If the resource has multiple user-assigned
+identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
+msi_client_id, or msi_mi_res_id parameters.
+
+Properties:
+
+- Config: use_msi
+- Env Var: RCLONE_AZUREFILES_USE_MSI
+- Type: bool
+- Default: false
+
+#### --azurefiles-msi-object-id
+
+Object ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_client_id or msi_mi_res_id specified.
+
+Properties:
+
+- Config: msi_object_id
+- Env Var: RCLONE_AZUREFILES_MSI_OBJECT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-msi-client-id
+
+Object ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_object_id or msi_mi_res_id specified.
+
+Properties:
+
+- Config: msi_client_id
+- Env Var: RCLONE_AZUREFILES_MSI_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-msi-mi-res-id
+
+Azure resource ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_client_id or msi_object_id specified.
+
+Properties:
+
+- Config: msi_mi_res_id
+- Env Var: RCLONE_AZUREFILES_MSI_MI_RES_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_AZUREFILES_ENDPOINT
+- Type: string
+- Required: false
+
+#### --azurefiles-chunk-size
+
+Upload chunk size.
+
+Note that this is stored in memory and there may be up to
+"--transfers" * "--azurefiles-upload-concurrency" chunks stored at once
+in memory.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_AZUREFILES_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 4Mi
+
+#### --azurefiles-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+If you are uploading small numbers of large files over high-speed
+links and these uploads do not fully utilize your bandwidth, then
+increasing this may help to speed up the transfers.
+
+Note that chunks are stored in memory and there may be up to
+"--transfers" * "--azurefiles-upload-concurrency" chunks stored at once
+in memory.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_AZUREFILES_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
+#### --azurefiles-max-stream-size
+
+Max size for streamed files.
+
+Azure files needs to know in advance how big the file will be. When
+rclone doesn't know it uses this value instead.
+
+This will be used when rclone is streaming data, the most common uses are:
+
+- Uploading files with `--vfs-cache-mode off` with `rclone mount`
+- Using `rclone rcat`
+- Copying files with unknown length
+
+You will need this much free space in the share as the file will be this size temporarily.
+
+
+Properties:
+
+- Config: max_stream_size
+- Env Var: RCLONE_AZUREFILES_MAX_STREAM_SIZE
+- Type: SizeSuffix
+- Default: 10Gi
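+
+For example, when streaming a file of unknown size that may exceed the
+default limit (a sketch - the path and size are illustrative):
+
+    cat big-backup.tar | rclone rcat --azurefiles-max-stream-size 20Gi remote:dir/big-backup.tar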
+
+#### --azurefiles-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_AZUREFILES_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot
+
+
+
+### Custom upload headers
+
+You can set custom upload headers with the `--header-upload` flag.
+
+- Cache-Control
+- Content-Disposition
+- Content-Encoding
+- Content-Language
+- Content-Type
+
+Eg `--header-upload "Content-Type: text/potato"`
+
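+For example (the header value and paths are illustrative):
+
+    rclone copy --header-upload "Cache-Control: max-age=3600" /path/to/files remote:dir
+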
+## Limitations
+
+MD5 sums are only uploaded with chunked files if the source has an MD5
+sum. This will always be the case for a local to azure copy.
+
# Microsoft OneDrive
Paths are specified as `remote:path`
@@ -38746,7 +40307,7 @@ You may try to [verify you account](https://docs.microsoft.com/en-us/azure/activ
Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
-### Modification time and hashes
+### Modification times and hashes
OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -38767,6 +40328,32 @@ your workflow.
For all types of OneDrive you can use the `--checksum` flag.
+### --fast-list
+
+This remote supports `--fast-list` which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](https://rclone.org/docs/#fast-list) for more details.
+
+This must be enabled with the `--onedrive-delta` flag (or `delta =
+true` in the config file) as it can cause performance degradation.
+
+It does this by using the delta listing facilities of OneDrive which
+returns all the files in the remote very efficiently. This is much
+more efficient than listing directories recursively and is Microsoft's
+recommended way of reading all the file information from a drive.
+
+This can be useful with `rclone mount` and [rclone rc vfs/refresh
+recursive=true](https://rclone.org/rc/#vfs-refresh) to very quickly fill the mount with
+information about all the files.
+
+The API used for the recursive listing (`ListR`) only supports listing
+from the root of the drive. This will become increasingly inefficient
+the further away you get from the root as rclone will have to discard
+files outside of the directory you are using.
+
+Some commands (like `rclone lsf -R`) will use `ListR` by default - you
+can turn this off with `--disable ListR` if you need to.
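+
+For example, a fast recursive listing might look like this (assuming a
+remote named `onedrive:` with the delta flag enabled):
+
+    rclone lsf -R --onedrive-delta onedrive: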
+
### Restricted filename characters
In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
@@ -39178,6 +40765,43 @@ Properties:
- Type: bool
- Default: false
+#### --onedrive-delta
+
+If set rclone will use delta listing to implement recursive listings.
+
+If this flag is set then the onedrive backend will advertise `ListR`
+support for recursive listings.
+
+Setting this flag speeds up these things greatly:
+
+ rclone lsf -R onedrive:
+ rclone size onedrive:
+ rclone rc vfs/refresh recursive=true
+
+**However** the delta listing API **only** works at the root of the
+drive. If you use it anywhere other than the root then it recurses from the root
+and discards all the data that is not under the directory you asked
+for. So it will be correct but may not be very efficient.
+
+This is why this flag is not set as the default.
+
+As a rule of thumb if nearly all of your data is under rclone's root
+directory (the `root/directory` in `onedrive:root/directory`) then
+using this flag will be a big performance win. If your data is
+mostly not under the root then using this flag will be a big
+performance loss.
+
+It is recommended if you are mounting your onedrive at the root
+(or near the root when using crypt) and using rclone `rc vfs/refresh`.
+
+
+Properties:
+
+- Config: delta
+- Env Var: RCLONE_ONEDRIVE_DELTA
+- Type: bool
+- Default: false
+
#### --onedrive-encoding
The encoding for the backend.
@@ -39188,7 +40812,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ONEDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -39482,12 +41106,14 @@ To copy a local directory to an OpenDrive directory called backup
rclone copy /home/source remote:backup
-### Modified time and MD5SUMs
+### Modification times and hashes
OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
+The MD5 hash algorithm is supported.
+
### Restricted filename characters
| Character | Value | Replacement |
@@ -39561,7 +41187,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_OPENDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
#### --opendrive-chunk-size
@@ -39747,6 +41373,7 @@ Rclone supports the following OCI authentication provider.
No authentication
### User Principal
+
Sample rclone config file for Authentication Provider User Principal:
[oos]
@@ -39767,6 +41394,7 @@ Considerations:
- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
### Instance Principal
+
An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal.
With this approach no credentials have to be stored and managed.
@@ -39796,6 +41424,7 @@ Considerations:
- It is applicable for oci compute instances only. It cannot be used on external instance or resources.
### Resource Principal
+
Resource principal auth is very similar to instance principal auth but used for resources that are not
compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
To use resource principal ensure Rclone process is started with these environment variables set in its process.
@@ -39815,6 +41444,7 @@ Sample rclone configuration file for Authentication Provider Resource Principal:
provider = resource_principal_auth
### No authentication
+
Public buckets do not require any authentication mechanism to read objects.
Sample rclone configuration file for No authentication:
@@ -39825,10 +41455,9 @@ Sample rclone configuration file for No authentication:
region = us-ashburn-1
provider = no_auth
-## Options
-### Modified time
+### Modification times and hashes
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
`opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server
@@ -39838,6 +41467,8 @@ In the case the object is larger than 5Gb, the object will be uploaded rather th
Note that reading this from the object takes an additional `HEAD` request as the metadata
isn't returned in object listings.
+The MD5 hash algorithm is supported.
+
### Multipart uploads
rclone supports multipart uploads with OOS which means that it can
@@ -40140,7 +41771,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_OOS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
#### --oos-leave-parts-on-error
@@ -40667,7 +42298,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_QINGSTOR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Ctl,InvalidUtf8
@@ -40798,7 +42429,7 @@ d) Delete this remote
y/e/d> y
```
-### Modified time and hashes
+### Modification times and hashes
Quatrix allows modification times to be set on objects accurate to 1 microsecond.
These will be used to detect whether objects need syncing or not.
@@ -40866,7 +42497,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_QUATRIX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --quatrix-effective-upload-time
@@ -41111,7 +42742,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SIA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
@@ -41350,7 +42981,7 @@ sufficient to determine if it is "dirty". By using `--update` along with
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.
-### Modified time
+### Modification times and hashes
The modified time is stored as metadata on the object as
`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
@@ -41359,6 +42990,8 @@ ns.
This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time for an object.
+The MD5 hash algorithm is supported.
+
### Restricted filename characters
| Character | Value | Replacement |
@@ -41705,7 +43338,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SWIFT_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8
@@ -41833,7 +43466,7 @@ To copy a local directory to a pCloud directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes ###
+### Modification times and hashes
pCloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -41972,7 +43605,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PCLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --pcloud-root-folder-id
@@ -42104,6 +43737,13 @@ d) Delete this remote
y/e/d> y
```
+### Modification times and hashes
+
+PikPak keeps modification times on objects, and updates them when uploading objects,
+but it does not support changing only the modification time.
+
+The MD5 hash algorithm is supported.
+
### Standard options
@@ -42263,7 +43903,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PIKPAK_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
## Backend commands
@@ -42327,15 +43967,16 @@ Result:
-## Limitations ##
+## Limitations
-### Hashes ###
+### Hashes may be empty
PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files.
-### Deleted files ###
+### Deleted files still visible with trashed-only
-Deleted files will still be visible with `--pikpak-trashed-only` even after the trash emptied. This goes away after few days.
+Deleted files will still be visible with `--pikpak-trashed-only` even after the
+trash is emptied. This goes away after a few days.
# premiumize.me
@@ -42417,7 +44058,7 @@ To copy a local directory to an premiumize.me directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
premiumize.me does not support modification times or hashes, therefore
syncing will default to `--size-only` checking. Note that using
@@ -42532,7 +44173,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PREMIUMIZEME_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -42638,10 +44279,12 @@ To copy a local directory to an Proton Drive directory called backup
rclone copy /home/source remote:backup
-### Modified time
+### Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
+The SHA1 hash algorithm is supported.
+
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
@@ -42787,7 +44430,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
@@ -43091,7 +44734,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -43195,10 +44838,12 @@ To copy a local directory to an Proton Drive directory called backup
rclone copy /home/source remote:backup
-### Modified time
+### Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
+The SHA1 hash algorithm is supported.
+
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
@@ -43344,7 +44989,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
@@ -43838,7 +45483,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SEAFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
@@ -44198,7 +45843,7 @@ commands is prohibited. Set the configuration option `disable_hashcheck`
to `true` to disable checksumming entirely, or set `shell_type` to `none`
to disable all functionality based on remote shell command execution.
-### Modified time
+### Modification times and hashes
Modified times are stored on the server to 1 second precision.
@@ -44855,6 +46500,32 @@ Properties:
- Type: string
- Required: false
+#### --sftp-copy-is-hardlink
+
+Set to enable server side copies using hardlinks.
+
+The SFTP protocol does not define a copy command so normally server
+side copies are not allowed with the sftp backend.
+
+However the SFTP protocol does support hardlinking, and if you enable
+this flag then the sftp backend will support server side copies. These
+will be implemented by doing a hardlink from the source to the
+destination.
+
+Not all sftp servers support this.
+
+Note that hardlinking two files together will use no additional space
+as the source and the destination will be the same file.
+
+This feature may be useful for backups made with `--copy-dest`.
+
+Properties:
+
+- Config: copy_is_hardlink
+- Env Var: RCLONE_SFTP_COPY_IS_HARDLINK
+- Type: bool
+- Default: false
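+
+For example, a server side copy implemented as a hardlink (a sketch -
+the remote name and paths are illustrative):
+
+    rclone copyto --sftp-copy-is-hardlink mysftp:backups/file.bin mysftp:backups/file-copy.bin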
+
## Limitations
@@ -45133,7 +46804,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SMB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -45652,7 +47323,7 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
create a folder, which rclone will create as a "Sync Folder" with
SugarSync.
-### Modified time and hashes
+### Modification times and hashes
SugarSync does not support modification times or hashes, therefore
syncing will default to `--size-only` checking. Note that using
@@ -45823,7 +47494,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SUGARSYNC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Ctl,InvalidUtf8,Dot
@@ -45920,7 +47591,7 @@ To copy a local directory to an Uptobox directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
Uptobox supports neither modified times nor checksums. All timestamps
will read as that set by `--default-time`.
@@ -45981,7 +47652,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
@@ -46001,7 +47672,7 @@ During the initial setup with `rclone config` you will specify the upstream
remotes as a space separated list. The upstream remotes can either be a local
paths or other remotes.
-The attributes `:ro`, `:nc` and `:nc` can be attached to the end of the remote
+The attributes `:ro`, `:nc` and `:writeback` can be attached to the end of the remote
to tag the remote as **read only**, **no create** or **writeback**, e.g.
`remote:directory/subdirectory:ro` or `remote:directory/subdirectory:nc`.
@@ -46333,7 +48004,9 @@ Choose a number from below, or type in your own value
\ (sharepoint)
5 / Sharepoint with NTLM authentication, usually self-hosted or on-premises
\ (sharepoint-ntlm)
- 6 / Other site/service or software
+ 6 / rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
+ \ (rclone)
+ 7 / Other site/service or software
\ (other)
vendor> 2
User name
@@ -46379,7 +48052,7 @@ To copy a local directory to an WebDAV directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes ###
+### Modification times and hashes
Plain WebDAV does not support modified times. However when used with
Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
@@ -46429,6 +48102,8 @@ Properties:
- Sharepoint Online, authenticated by Microsoft account
- "sharepoint-ntlm"
- Sharepoint with NTLM authentication, usually self-hosted or on-premises
+ - "rclone"
+ - rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
- "other"
- Other site/service or software
@@ -46659,6 +48334,14 @@ For Rclone calls copying files (especially Office files such as .docx, .xlsx, et
--ignore-size --ignore-checksum --update
```
+## Rclone
+
+Use this option if you are hosting remotes over WebDAV provided by rclone.
+Read [rclone serve webdav](https://rclone.org/commands/rclone_serve_webdav/) for more details.
+
+rclone serve supports modified times using the `X-OC-Mtime` header.
+
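+A minimal sketch, assuming default settings: on the serving machine run
+
+    rclone serve webdav --user demo --pass demo remote:path
+
+and on the client side configure a webdav remote pointing at it (the
+URL, user and password here are illustrative):
+
+    [rcloneweb]
+    type = webdav
+    url = http://localhost:8080
+    vendor = rclone
+    user = demo
+    pass = <obscured password>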
+
### dCache
dCache is a storage system that supports many protocols and
@@ -46819,14 +48502,12 @@ excess files in the path.
Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-### Modified time
+### Modification times and hashes
Modified times are supported and are stored accurate to 1 ns in custom
metadata called `rclone_modified` in RFC3339 with nanoseconds format.
-### MD5 checksums
-
-MD5 checksums are natively supported by Yandex Disk.
+The MD5 hash algorithm is natively supported by Yandex Disk.
### Emptying Trash
@@ -46940,7 +48621,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_YANDEX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
@@ -47067,13 +48748,11 @@ excess files in the path.
Zoho paths may be as deep as required, eg `remote:directory/subdirectory`.
-### Modified time
+### Modification times and hashes
Modified times are currently not supported for Zoho Workdrive
-### Checksums
-
-No checksums are supported.
+No hash algorithms are supported.
### Usage information
@@ -47196,7 +48875,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ZOHO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Del,Ctl,InvalidUtf8
@@ -47228,10 +48907,10 @@ For consistencies sake one can also configure a remote of type
rclone remote paths, e.g. `remote:path/to/wherever`, but it is probably
easier not to.
-### Modified time ###
+### Modification times
-Rclone reads and writes the modified time using an accuracy determined by
-the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second
+Rclone reads and writes the modification times using an accuracy determined
+by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second
on OS X.
### Filenames ###
@@ -47660,6 +49339,11 @@ time we:
- Only checksum the size that stat gave
- Don't update the stat info for the file
+**NB** do not use this flag on a Windows Volume Shadow (VSS). For some
+unknown reason, files in a VSS sometimes show different sizes from the
+directory listing (where the initial stat value comes from on Windows)
+and when stat is called on them directly. Other copy tools always use
+the direct stat value and setting this flag will disable that.
Properties:
@@ -47770,7 +49454,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_LOCAL_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Dot
### Metadata
@@ -47832,6 +49516,153 @@ Options:
# Changelog
+## v1.65.0 - 2023-11-26
+
+[See commits](https://github.com/rclone/rclone/compare/v1.64.0...v1.65.0)
+
+* New backends
+ * Azure Files (karan, moongdal, Nick Craig-Wood)
+ * ImageKit (Abhinav Dhiman)
+ * Linkbox (viktor, Nick Craig-Wood)
+* New commands
+ * `serve s3`: Let rclone act as an S3 compatible server (Mikubill, Artur Neumann, Saw-jan, Nick Craig-Wood)
+ * `nfsmount`: mount command to provide mount mechanism on macOS without FUSE (Saleh Dindar)
+ * `serve nfs`: to serve a remote for use by `nfsmount` (Saleh Dindar)
+* New Features
+ * install.sh: Clean up temp files in install script (Jacob Hands)
+ * build
+ * Update all dependencies (Nick Craig-Wood)
+ * Refactor version info and icon resource handling on windows (albertony)
+ * doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick Craig-Wood)
+    * Implement `--metadata-mapper` to transform metadata with a user supplied program (Nick Craig-Wood)
+ * Add `ChunkWriterDoesntSeek` feature flag and set it for b2 (Nick Craig-Wood)
+ * lib/http: Export basic go string functions for use in `--template` (Gabriel Espinoza)
+ * makefile: Use POSIX compatible install arguments (Mina Galić)
+ * operations
+ * Use less memory when doing multithread uploads (Nick Craig-Wood)
+ * Implement `--partial-suffix` to control extension of temporary file names (Volodymyr)
+ * rc
+ * Add `operations/check` to the rc API (Nick Craig-Wood)
+ * Always report an error as JSON (Nick Craig-Wood)
+ * Set `Last-Modified` header for files served by `--rc-serve` (Nikita Shoshin)
+    * size: Don't show duplicate object count when less than 1k (albertony)
+* Bug Fixes
+ * fshttp: Fix `--contimeout` being ignored (你知道未来吗)
+ * march: Fix excessive parallelism when using `--no-traverse` (Nick Craig-Wood)
+ * ncdu: Fix crash when re-entering changed directory after rescan (Nick Craig-Wood)
+ * operations
+ * Fix overwrite of destination when multi-thread transfer fails (Nick Craig-Wood)
+ * Fix invalid UTF-8 when truncating file names when not using `--inplace` (Nick Craig-Wood)
+    * serve dlna: Fix crash on graceful exit (wuxingzhong)
+* Mount
+ * Disable mount for freebsd and alias cmount as mount on that platform (Nick Craig-Wood)
+* VFS
+ * Add `--vfs-refresh` flag to read all the directories on start (Beyond Meat)
+ * Implement Name() method in WriteFileHandle and ReadFileHandle (Saleh Dindar)
+ * Add go-billy dependency and make sure vfs.Handle implements billy.File (Saleh Dindar)
+ * Error out early if can't upload 0 length file (Nick Craig-Wood)
+* Local
+ * Fix copying from Windows Volume Shadows (Nick Craig-Wood)
+* Azure Blob
+ * Add support for cold tier (Ivan Yanitra)
+* B2
+ * Implement "rclone backend lifecycle" to read and set bucket lifecycles (Nick Craig-Wood)
+ * Implement `--b2-lifecycle` to control lifecycle when creating buckets (Nick Craig-Wood)
+ * Fix listing all buckets when not needed (Nick Craig-Wood)
+ * Fix multi-thread upload with copyto going to wrong name (Nick Craig-Wood)
+ * Fix server side chunked copy when file size was exactly `--b2-copy-cutoff` (Nick Craig-Wood)
+ * Fix streaming chunked files an exact multiple of chunk size (Nick Craig-Wood)
+* Box
+ * Filter more EventIDs when polling (David Sze)
+ * Add more logging for polling (David Sze)
+ * Fix performance problem reading metadata for single files (Nick Craig-Wood)
+* Drive
+ * Add read/write metadata support (Nick Craig-Wood)
+ * Add support for SHA-1 and SHA-256 checksums (rinsuki)
+ * Add `--drive-show-all-gdocs` to allow unexportable gdocs to be server side copied (Nick Craig-Wood)
+ * Add a note that `--drive-scope` accepts comma-separated list of scopes (Keigo Imai)
+ * Fix error updating created time metadata on existing object (Nick Craig-Wood)
+ * Fix integration tests by enabling metadata support from the context (Nick Craig-Wood)
+* Dropbox
+ * Factor batcher into lib/batcher (Nick Craig-Wood)
+ * Fix missing encoding for rclone purge (Nick Craig-Wood)
+* Google Cloud Storage
+ * Fix 400 Bad request errors when using multi-thread copy (Nick Craig-Wood)
+* Googlephotos
+ * Implement batcher for uploads (Nick Craig-Wood)
+* Hdfs
+ * Added support for list of namenodes in hdfs remote config (Tayo-pasedaRJ)
+* HTTP
+ * Implement set backend command to update running backend (Nick Craig-Wood)
+ * Enable methods used with WebDAV (Alen Šiljak)
+* Jottacloud
+ * Add support for reading and writing metadata (albertony)
+* Onedrive
+ * Implement ListR method which gives `--fast-list` support (Nick Craig-Wood)
+ * This must be enabled with the `--onedrive-delta` flag
+* Quatrix
+ * Add partial upload support (Oksana Zhykina)
+ * Overwrite files on conflict during server-side move (Oksana Zhykina)
+* S3
+ * Add Linode provider (Nick Craig-Wood)
+ * Add docs on how to add a new provider (Nick Craig-Wood)
+ * Fix no error being returned when creating a bucket we don't own (Nick Craig-Wood)
+ * Emit a debug message if anonymous credentials are in use (Nick Craig-Wood)
+ * Add `--s3-disable-multipart-uploads` flag (Nick Craig-Wood)
+ * Detect looping when using gcs and versions (Nick Craig-Wood)
+* SFTP
+ * Implement `--sftp-copy-is-hardlink` to server side copy as hardlink (Nick Craig-Wood)
+* Smb
+ * Fix incorrect `about` size by switching to `github.com/cloudsoda/go-smb2` fork (Nick Craig-Wood)
+ * Fix modtime of multithread uploads by setting PartialUploads (Nick Craig-Wood)
+* WebDAV
+ * Added an rclone vendor to work with `rclone serve webdav` (Adithya Kumar)
+
+## v1.64.2 - 2023-10-19
+
+[See commits](https://github.com/rclone/rclone/compare/v1.64.1...v1.64.2)
+
+* Bug Fixes
+ * selfupdate: Fix "invalid hashsum signature" error (Nick Craig-Wood)
+ * build: Fix docker build running out of space (Nick Craig-Wood)
+
+## v1.64.1 - 2023-10-17
+
+[See commits](https://github.com/rclone/rclone/compare/v1.64.0...v1.64.1)
+
+* Bug Fixes
+ * cmd: Make `--progress` output logs in the same format as without (Nick Craig-Wood)
+ * docs fixes (Dimitri Papadopoulos Orfanos, Herby Gillot, Manoj Ghosh, Nick Craig-Wood)
+ * lsjson: Make sure we set the global metadata flag too (Nick Craig-Wood)
+ * operations
+ * Ensure concurrency is no greater than the number of chunks (Pat Patterson)
+ * Fix OpenOptions ignored in copy if operation was a multiThreadCopy (Vitor Gomes)
+ * Fix error message on delete to have file name (Nick Craig-Wood)
+ * serve sftp: Return not supported error for not supported commands (Nick Craig-Wood)
+ * build: Upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid reset (Nick Craig-Wood)
+ * pacer: Fix b2 deadlock by defaulting max connections to unlimited (Nick Craig-Wood)
+* Mount
+ * Fix automount not detecting drive is ready (Nick Craig-Wood)
+* VFS
+ * Fix update dir modification time (Saleh Dindar)
+* Azure Blob
+ * Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+* B2
+ * Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick Craig-Wood)
+    * Fix locking window when getting multipart upload URL (Nick Craig-Wood)
+ * Fix server side copies greater than 4GB (Nick Craig-Wood)
+ * Fix chunked streaming uploads (Nick Craig-Wood)
+ * Reduce default `--b2-upload-concurrency` to 4 to reduce memory usage (Nick Craig-Wood)
+* Onedrive
+ * Fix the configurator to allow `/teams/ID` in the config (Nick Craig-Wood)
+* Oracleobjectstorage
+ * Fix OpenOptions being ignored in uploadMultipart with chunkWriter (Nick Craig-Wood)
+* S3
+ * Fix slice bounds out of range error when listing (Nick Craig-Wood)
+ * Fix OpenOptions being ignored in uploadMultipart with chunkWriter (Vitor Gomes)
+* Storj
+ * Update storj.io/uplink to v1.12.0 (Kaloyan Raev)
+
## v1.64.0 - 2023-09-11
[See commits](https://github.com/rclone/rclone/compare/v1.63.0...v1.64.0)
@@ -47932,14 +49763,14 @@ Options:
* Fix 425 "TLS session of data connection not resumed" errors (Nick Craig-Wood)
* Hdfs
* Retry "replication in progress" errors when uploading (Nick Craig-Wood)
- * Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
+ * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* HTTP
* CORS should not be sent if not set (yuudi)
* Fix webdav OPTIONS response (yuudi)
* Opendrive
* Fix List on a just deleted and remade directory (Nick Craig-Wood)
* Oracleobjectstorage
- * Use rclone's rate limiter in mutipart transfers (Manoj Ghosh)
+ * Use rclone's rate limiter in multipart transfers (Manoj Ghosh)
* Implement `OpenChunkWriter` and multi-thread uploads (Manoj Ghosh)
* S3
* Refactor multipart upload to use `OpenChunkWriter` and `ChunkWriter` (Vitor Gomes)
@@ -48112,14 +49943,14 @@ Options:
* Fix quickxorhash on 32 bit architectures (Nick Craig-Wood)
* Report any list errors during `rclone cleanup` (albertony)
* Putio
- * Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
+ * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* Fix modification times not being preserved for server side copy and move (Nick Craig-Wood)
* Fix server side copy failures (400 errors) (Nick Craig-Wood)
* S3
* Empty directory markers (Jānis Bebrītis, Nick Craig-Wood)
* Update Scaleway storage classes (Brian Starkey)
* Fix `--s3-versions` on individual objects (Nick Craig-Wood)
- * Fix hang on aborting multpart upload with iDrive e2 (Nick Craig-Wood)
+ * Fix hang on aborting multipart upload with iDrive e2 (Nick Craig-Wood)
* Fix missing "tier" metadata (Nick Craig-Wood)
* Fix V3sign: add missing subresource delete (cc)
* Fix Arvancloud Domain and region changes and alphabetise the provider (Ehsan Tadayon)
@@ -48136,7 +49967,7 @@ Options:
* Code cleanup to avoid overwriting ctx before first use (fixes issue reported by the staticcheck linter) (albertony)
* Storj
* Fix "uplink: too many requests" errors when uploading to the same file (Nick Craig-Wood)
- * Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
+ * Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* Swift
* Ignore 404 error when deleting an object (Nick Craig-Wood)
* Union
@@ -51765,7 +53596,7 @@ Point release to fix hubic and azureblob backends.
* Revert to copy when moving file across file system boundaries
* `--skip-links` to suppress symlink warnings (thanks Zhiming Wang)
* Mount
- * Re-use `rcat` internals to support uploads from all remotes
+ * Reuse `rcat` internals to support uploads from all remotes
* Dropbox
* Fix "entry doesn't belong in directory" error
* Stop using deprecated API methods
@@ -53437,7 +55268,7 @@ put them back in again.` >}}
* HNGamingUK
* Jonta <359397+Jonta@users.noreply.github.com>
* YenForYang
- * Joda Stößer
+ * SimJoSt / Joda Stößer
* Logeshwaran
* Rajat Goel
* r0kk3rz
@@ -53684,6 +55515,38 @@ put them back in again.` >}}
* Volodymyr Kit
* David Pedersen
* Drew Stinnett
+ * Pat Patterson
+ * Herby Gillot
+ * Nikita Shoshin
+ * rinsuki <428rinsuki+git@gmail.com>
+ * Beyond Meat <51850644+beyondmeat@users.noreply.github.com>
+ * Saleh Dindar
+ * Volodymyr <142890760+vkit-maytech@users.noreply.github.com>
+ * Gabriel Espinoza <31670639+gspinoza@users.noreply.github.com>
+ * Keigo Imai
+ * Ivan Yanitra
+ * alfish2000
+ * wuxingzhong
+ * Adithya Kumar
+ * Tayo-pasedaRJ <138471223+Tayo-pasedaRJ@users.noreply.github.com>
+ * Peter Kreuser
+ * Piyush
+ * fotile96
+ * Luc Ritchie
+ * cynful
+ * wjielai
+ * Jack Deng
+ * Mikubill <31246794+Mikubill@users.noreply.github.com>
+ * Artur Neumann
+ * Saw-jan
+ * Oksana Zhykina
+ * karan
+ * viktor
+ * moongdal
+ * Mina Galić
+ * Alen Šiljak
+ * 你知道未来吗
+ * Abhinav Dhiman <8640877+ahnv@users.noreply.github.com>
# Contact the rclone project
diff --git a/MANUAL.txt b/MANUAL.txt
index 2d0f92187..11a38590b 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Sep 11, 2023
+Nov 26, 2023
Rclone syncs your files to cloud storage
@@ -124,11 +124,14 @@ S3, that work out of the box.)
- Koofr
- Leviia Object Storage
- Liara Object Storage
+- Linkbox
+- Linode Object Storage
- Mail.ru Cloud
- Memset Memstore
- Mega
- Memory
- Microsoft Azure Blob Storage
+- Microsoft Azure Files Storage
- Microsoft OneDrive
- Minio
- Nextcloud
@@ -265,6 +268,19 @@ developers so it may be out of date. Its current version is as below.
[Homebrew package]
+Installation with MacPorts (#macos-macports)
+
+On macOS, rclone can also be installed via MacPorts:
+
+ sudo port install rclone
+
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date. Its current version is as below.
+
+[MacPorts port]
+
+More information here.
+
Precompiled binary, using curl
To avoid problems with macOS gatekeeper enforcing the binary to be
@@ -483,7 +499,7 @@ Make sure you have Snapd installed
$ sudo snap install rclone
-Due to the strict confinement of Snap, rclone snap cannot acess real
+Due to the strict confinement of Snap, rclone snap cannot access real
/home/$USER/.config/rclone directory, default config path is as below.
- Default config directory:
@@ -502,8 +518,8 @@ developers so it may be out of date. Its current version is as below.
Source installation
-Make sure you have git and Go installed. Go version 1.17 or newer is
-required, latest release is recommended. You can get it from your
+Make sure you have git and Go installed. Go version 1.18 or newer is
+required, the latest release is recommended. You can get it from your
package manager, or download it from golang.org/dl. Then you can run the
following:
@@ -531,22 +547,51 @@ cgo on Windows as well, by using the MinGW port of GCC, e.g. by
installing it in a MSYS2 distribution (make sure you install it in the
classic mingw64 subsystem, the ucrt64 version is not compatible).
-Additionally, on Windows, you must install the third party utility
-WinFsp, with the "Developer" feature selected. If building with cgo, you
-must also set environment variable CPATH pointing to the fuse include
-directory within the WinFsp installation (normally
+Additionally, to build with mount on Windows, you must install the third
+party utility WinFsp, with the "Developer" feature selected. If building
+with cgo, you must also set environment variable CPATH pointing to the
+fuse include directory within the WinFsp installation (normally
C:\Program Files (x86)\WinFsp\inc\fuse).
-You may also add arguments -ldflags -s (with or without -tags cmount),
-to omit symbol table and debug information, making the executable file
-smaller, and -trimpath to remove references to local file system paths.
-This is how the official rclone releases are built.
+You may add arguments -ldflags -s to omit symbol table and debug
+information, making the executable file smaller, and -trimpath to remove
+references to local file system paths. The official rclone releases are
+built with both of these.
go build -trimpath -ldflags -s -tags cmount
+If you want to customize the version string, as reported by the
+rclone version command, you can set one of the variables fs.Version,
+fs.VersionTag (to keep default suffix but customize the number), or
+fs.VersionSuffix (to keep default number but customize the suffix). This
+can be done from the build command, by adding to the -ldflags argument
+value as shown below.
+
+ go build -trimpath -ldflags "-s -X github.com/rclone/rclone/fs.Version=v9.9.9-test" -tags cmount
+
+On Windows, the official executables also have the version information,
+as well as a file icon, embedded as binary resources. To get that with
+your own build you need to run the following command before the build
+command. It generates a Windows resource system object file, with
+extension .syso, e.g. resource_windows_amd64.syso, that will be
+automatically picked up by future build commands.
+
+ go run bin/resource_windows.go
+
+The above command will generate a resource file containing version
+information based on the fs.Version variable in source at the time you
+run the command, which means if the value of this variable changes you
+need to re-run the command for it to be reflected in the version
+information. Also, if you override this version variable in the build
+command as described above, you need to do that also when generating the
+resource file, or else it will still use the value from the source.
+
+ go run bin/resource_windows.go -version v9.9.9-test
+
Instead of executing the go build command directly, you can run it via
-the Makefile. It changes the version number suffix from "-DEV" to
-"-beta" and appends commit details. It also copies the resulting rclone
+the Makefile. The default target changes the version suffix from "-DEV"
+to "-beta" followed by additional commit details, embeds version
+information binary resources on Windows, and copies the resulting rclone
executable into your GOPATH bin folder ($(go env GOPATH)/bin, which
corresponds to ~/go/bin/rclone by default).
@@ -557,27 +602,18 @@ To include mount command on macOS and Windows with Makefile build:
make GOTAGS=cmount
There are other make targets that can be used for more advanced builds,
-such as cross-compiling for all supported os/architectures, embedding
-icon and version info resources into windows executable, and packaging
-results into release artifacts. See Makefile and cross-compile.go for
-details.
+such as cross-compiling for all supported os/architectures, and
+packaging results into release artifacts. See Makefile and
+cross-compile.go for details.
-Another alternative is to download the source, build and install rclone
-in one operation, as a regular Go package. The source will be stored it
-in the Go module cache, and the resulting executable will be in your
-GOPATH bin folder ($(go env GOPATH)/bin, which corresponds to
-~/go/bin/rclone by default).
-
-With Go version 1.17 or newer:
+Another alternative method for source installation is to download the
+source, build and install rclone - all in one operation, as a regular Go
+package. The source will be stored in the Go module cache, and the
+resulting executable will be in your GOPATH bin folder
+($(go env GOPATH)/bin, which corresponds to ~/go/bin/rclone by default).
go install github.com/rclone/rclone@latest
-With Go versions older than 1.17 (do not use the -u flag, it causes Go
-to try to update the dependencies that rclone uses and sometimes these
-don't work with the current version):
-
- go get github.com/rclone/rclone
-
Ansible installation
This can be done with Stefan Weichinger's ansible role.
@@ -739,7 +775,7 @@ also provides alternative standalone distributions which includes
necessary runtime (.NET 5). WinSW is a command-line only utility, where
you have to manually create an XML file with service configuration. This
may be a drawback for some, but it can also be an advantage as it is
+easy to back up and reuse the configuration settings, without having to go
+easy to back up and reuse the configuration settings, without having go
through manual steps in a GUI. One thing to note is that by default it
does not restart the service on error, one have to explicit enable this
in the configuration file (via the "onfailure" parameter).
@@ -808,10 +844,12 @@ See the following for detailed instructions for
- Internet Archive
- Jottacloud
- Koofr
+- Linkbox
- Mail.ru Cloud
- Mega
- Memory
- Microsoft Azure Blob Storage
+- Microsoft Azure Files Storage
- Microsoft OneDrive
- OpenStack Swift / Rackspace Cloudfiles / Blomp Cloud Storage /
Memset Memstore
@@ -980,11 +1018,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -999,11 +1037,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
@@ -1111,11 +1150,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -1130,11 +1169,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
@@ -1250,11 +1290,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -1269,11 +1309,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
@@ -2525,11 +2566,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -2544,11 +2585,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
@@ -2684,16 +2726,19 @@ SEE ALSO
rclone checksum
-Checks the files in the source against a SUM file.
+Checks the files in the destination against a SUM file.
Synopsis
-Checks that hashsums of source files match the SUM file. It compares
-hashes (MD5, SHA1, etc) and logs a report of files which don't match. It
-doesn't alter the file system.
+Checks that hashsums of destination files match the SUM file. It
+compares hashes (MD5, SHA1, etc) and logs a report of files which don't
+match. It doesn't alter the file system.
-If you supply the --download flag, it will download the data from remote
-and calculate the contents hash on the fly. This can be useful for
+The sumfile is treated as the source and the dst:path is treated as the
+destination for the purposes of the output.
+
+If you supply the --download flag, it will download the data from the
+remote and calculate the content hash on the fly. This can be useful for
remotes that don't support hashes or if you really want to check all the
data.
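
For example, to check a destination against a SHA1 SUM file, downloading
the data to compute the hashes on the fly (the SUM file name here is just
an illustration):

    rclone checksum sha1 SHA1SUMS remote:path --download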
@@ -2728,7 +2773,7 @@ what happened to it. These are reminiscent of diff files.
The default number of parallel checks is 8. See the --checkers=N option
for more information.
-    rclone checksum <hash> sumfile src:path [flags]
+    rclone checksum <hash> sumfile dst:path [flags]
Options
@@ -3500,11 +3545,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -3519,11 +3564,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
@@ -3984,10 +4030,6 @@ Run without a hash to see the list of all supported hashes, e.g.
* whirlpool
* crc32
* sha256
- * dropbox
- * hidrive
- * mailru
- * quickxor
Then
@@ -3996,7 +4038,7 @@ Then
Note that hash names are case insensitive and values are output in lower
case.
-    rclone hashsum <hash> remote:path [flags]
+    rclone hashsum [<hash> remote:path] [flags]
Options
@@ -4712,10 +4754,17 @@ not suffer from the same limitations.
Mounting on macOS
-Mounting on macOS can be done either via macFUSE (also known as osxfuse)
-or FUSE-T. macFUSE is a traditional FUSE driver utilizing a macOS kernel
-extension (kext). FUSE-T is an alternative FUSE system which "mounts"
-via an NFSv4 local server.
+Mounting on macOS can be done either via the built-in NFS server,
+macFUSE (also known as osxfuse) or FUSE-T. macFUSE is a traditional
+FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an
+alternative FUSE system which "mounts" via an NFSv4 local server.
+
+NFS mount
+
+This method spins up an NFS server using the serve nfs command and
+mounts it to the specified mountpoint. If you run this in background
+mode using --daemon, you will need to send a SIGTERM signal to the
+rclone process using the kill command to stop the mount.
macFUSE Notes
@@ -4764,7 +4813,8 @@ Without the use of --vfs-cache-mode this can only write files
sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
--vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File
-Caching section for more info.
+Caching section for more info. When using an NFS mount on macOS, if you
+don't specify --vfs-cache-mode the mount point will be read-only.
The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2) do
not support the concept of empty directories, so empty directories will
@@ -4904,9 +4954,8 @@ Mount option syntax includes a few extra options treated specially:
pgrep.
- vv... will be transformed into appropriate --verbose=N
- standard mount options like x-systemd.automount, _netdev, nosuid and
- alike are intended only for Automountd and ignored by rclone.
-
-VFS - Virtual File System
+  alike are intended only for Automountd and ignored by rclone.
+
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk filing
@@ -5283,6 +5332,7 @@ Options
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -5373,11 +5423,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -5392,11 +5442,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
@@ -5844,6 +5895,27 @@ be used within the template to server pages:
-- .ModTime The UTC timestamp of an entry.
-----------------------------------------------------------------------
+The server also makes the following functions available so that they can
+be used within the template. These functions help extend the options for
+dynamic rendering of HTML. They can be used to render HTML based on
+specific conditions.
+
+ -----------------------------------------------------------------------
+ Function Description
+ ----------------------------------- -----------------------------------
+ afterEpoch Returns the time since the epoch
+ for the given time.
+
+ contains Checks whether a given substring is
+ present or not in a given string.
+
+ hasPrefix Checks whether the given string
+ begins with the specified prefix.
+
+  hasSuffix                           Checks whether the given string
+                                      ends with the specified suffix.
+ -----------------------------------------------------------------------
+
Authentication
By default this will serve files without needing a login.
@@ -6063,7 +6135,9 @@ SEE ALSO
API.
- rclone serve ftp - Serve remote:path over FTP.
- rclone serve http - Serve the remote over HTTP.
+- rclone serve nfs - Serve the remote as an NFS mount
- rclone serve restic - Serve the remote for restic's REST API.
+- rclone serve s3 - Serve remote:path over s3.
- rclone serve sftp - Serve the remote over SFTP.
- rclone serve webdav - Serve remote:path over WebDAV.
@@ -6093,9 +6167,7 @@ Use --name to choose the friendly server name, which is by default
"rclone (hostname)".
Use --log-trace in conjunction with -vv to enable additional debug
-logging of all UPNP traffic.
-
-VFS - Virtual File System
+logging of all UPNP traffic.
+
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk filing
@@ -6459,6 +6531,7 @@ Options
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -6539,8 +6612,7 @@ directory with book-keeping records of created and mounted volumes.
All mount and VFS options are submitted by the docker daemon via API,
but you can also provide defaults on the command line as well as set
path to the config file and cache directory or adjust logging verbosity.
-
-VFS - Virtual File System
+
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk filing
@@ -6922,6 +6994,7 @@ Options
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -6986,9 +7059,7 @@ Authentication
By default this will serve files without needing a login.
You can set a single username and password with the --user and --pass
-flags.
-
-VFS - Virtual File System
+flags.
+
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk filing
@@ -7426,6 +7497,7 @@ Options
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -7579,6 +7651,27 @@ be used within the template to server pages:
-- .ModTime The UTC timestamp of an entry.
-----------------------------------------------------------------------
+The server also makes the following functions available so that they can
+be used within the template. These functions help extend the options for
+dynamic rendering of HTML. They can be used to render HTML based on
+specific conditions.
+
+ -----------------------------------------------------------------------
+ Function Description
+ ----------------------------------- -----------------------------------
+ afterEpoch Returns the time since the epoch
+ for the given time.
+
+ contains Checks whether a given substring is
+ present or not in a given string.
+
+ hasPrefix Checks whether the given string
+ begins with the specified prefix.
+
+  hasSuffix                           Checks whether the given string
+                                      ends with the specified suffix.
+ -----------------------------------------------------------------------
+
Authentication
By default this will serve files without needing a login.
@@ -7605,9 +7698,8 @@ The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
-Use --salt to change the password hashing salt from the default.
-
-VFS - Virtual File System
+Use --salt to change the password hashing salt from the default.
+
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk filing
@@ -8054,6 +8146,444 @@ Options
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+
+Filter Options
+
+Flags for filtering directory listings.
+
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+
+See the global flags page for global options not listed here.
+
+SEE ALSO
+
+- rclone serve - Serve a remote over a protocol.
+
+rclone serve nfs
+
+Serve the remote as an NFS mount
+
+Synopsis
+
+Create an NFS server that serves the given remote over the network.
+
+The primary purpose of this command is to enable the mount command on
+recent macOS versions where installing FUSE is very cumbersome.
+
+Since this is running on NFSv3, no authentication method is available.
+Any client will be able to access the data. To limit access, you can
+use serve nfs on the loopback address and rely on secure tunnels (such
+as SSH). For this reason, by default, a random TCP port is chosen and
+the loopback interface is used for the listening address, meaning that
+it is only available to the local machine. If you want other machines
+to access the NFS mount over the local network, you need to specify the
+listening address and port using the --addr flag.
+
+Modifying files through the NFS protocol requires VFS caching. Usually you
+will need to specify --vfs-cache-mode in order to be able to write to
+the mountpoint (full is recommended). If you don't specify VFS cache
+mode, the mount will be read-only.
+
+To serve NFS over the network, use the following command:
+
+ rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
+
+We specify a specific port that we can then use in the mount command below.
+
+To mount the server under Linux/macOS, use the following command:
+
+ mount -oport=$PORT,mountport=$PORT $HOSTNAME: path/to/mountpoint
+
+Where $PORT is the same port number we used in the serve nfs command.
+
+This feature is only available on Unix platforms.
+
+VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk filing
+system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the VFS
+layer has to deal with that. Because there is no one right way of doing
+this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info about
+files and directories (but not the data) in memory.
+
+VFS Directory Cache
+
+Using the --dir-cache-time flag, you can control how long a directory
+should be considered up to date and not refreshed from the backend.
+Changes made through the VFS will appear immediately or invalidate the
+cache.
+
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+
+However, changes made directly on the cloud storage by the web interface
+or a different copy of rclone will only be picked up once the directory
+cache expires if the backend configured does not support polling for
+changes. If the backend supports polling, changes will be picked up
+within the polling interval.
+
+You can send a SIGHUP signal to rclone for it to flush all directory
+caches, regardless of how old they are. Assuming only one rclone
+instance is running, you can reset the cache like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+VFS File Buffering
+
+The --buffer-size flag determines the amount of memory that will be
+used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the used memory per open file. The
+buffer will only use memory for data that is downloaded but not yet
+read. If the buffer is empty, only a small amount of memory will be
+used.
+
+The maximum memory used by rclone for buffering can be up to
+--buffer-size * open files.
+
+VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+
+If run with -vv rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with --cache-dir or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by --vfs-cache-mode. The higher
+the cache mode the more compatible rclone becomes at the cost of using
+disk space.
+
+Note that files are written back to the remote only when they are closed
+and if they haven't been accessed for --vfs-write-back seconds. If
+rclone is quit or dies with files that haven't been uploaded, these will
+be uploaded next time rclone is run with the same flags.
+
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.
+
+The --vfs-cache-max-age will evict files from the cache after the set
+time since last access has passed. The default value of 1 hour will
+start evicting files from cache that haven't been accessed for 1 hour.
+When a cached file is accessed the 1 hour timer is reset to 0 and will
+wait for 1 more hour before evicting. Specify the time with standard
+notation: s, m, h, d, w.
+
+You should not run two copies of rclone using the same VFS cache with
+the same or overlapping remotes if using --vfs-cache-mode > off. This
+can potentially cause data corruption if you do. You can work around
+this by giving each rclone its own cache hierarchy with --cache-dir. You
+don't need to worry about this if the remotes in use don't overlap.
+
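+For example, to serve a remote writably over NFS with full caching,
+giving this rclone instance its own cache hierarchy (the cache directory
+path is just an illustration):
+
+    # /var/cache/rclone-nfs is an example path - use any directory you like
+    rclone serve nfs remote: --vfs-cache-mode full --cache-dir /var/cache/rclone-nfs
+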
+--vfs-cache-mode off
+
+In this mode (the default) the cache will read directly from the remote
+and write directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for write
+will be a lot more compatible, but uses the minimal disk space.
+
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from the
+remote; write only and read/write files are buffered to disk first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+
+--vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+data is read from the remote this is buffered to disk as well.
+
+In this mode the files in the cache will be sparse files and rclone will
+keep track of which bits of the files it has downloaded.
+
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file. These files will appear to be
+their full size in the cache, but they will be sparse files with only
+the data that has been downloaded present in them.
+
+This mode should support all normal file system operations and is
+otherwise identical to --vfs-cache-mode writes.
+
+When reading a file rclone will read --buffer-size plus --vfs-read-ahead
+bytes ahead. The --buffer-size is buffered in memory whereas the
+--vfs-read-ahead is buffered on disk.
+
+When using this mode it is recommended that --buffer-size is not set too
+large and --vfs-read-ahead is set large if required.
+
+IMPORTANT not all file systems support sparse files. In particular
+FAT/exFAT do not. Rclone will perform very badly if the cache directory
+is on a filesystem which doesn't support sparse files and it will log an
+ERROR message if one is detected.
+
+Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file. Fingerprints are made from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+
+For example hash is slow with the local and sftp backends as they have
+to read the entire file and hash it, and modtime is slow with the s3,
+swift, ftp and qingstor backends because they need to do an extra API
+call to fetch it.
+
+If you use the --vfs-fast-fingerprint flag then rclone will not include
+the slow operations in the fingerprint. This makes the fingerprinting
+less accurate but much faster and will improve the opening time of
+cached files.
+
+If you are running a vfs cache over local, s3 or swift backends then
+using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
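+
+For example, when the VFS cache sits on top of an S3 or Swift backed
+remote as described above, you might enable fast fingerprints like this:
+
+    rclone serve nfs remote: --vfs-cache-mode full --vfs-fast-fingerprint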
+
+VFS Chunked Reading
+
+When rclone reads files from a remote it reads them in chunks. This
+means that rather than requesting the whole file rclone reads the chunk
+specified. This can reduce the used download quota for some remotes by
+requesting only chunks from the remote that are actually read, at the
+cost of an increased number of requests.
+
+These flags control the chunking:
+
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+ --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+
+Rclone will start reading a chunk of size --vfs-read-chunk-size, and
+then double the size for each read. When --vfs-read-chunk-size-limit is
+specified, and greater than --vfs-read-chunk-size, the chunk size for
+each open file will get doubled only until the specified value is
+reached. If the value is "off", which is the default, the limit is
+disabled and the chunk size will grow indefinitely.
+
+With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the
+following parts will be downloaded: 0-100M, 100M-200M, 200M-300M,
+300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified,
+the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M,
+1200M-1700M and so on.
+
+Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
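+
+For example, to start reads with 64M chunks and cap the doubling at
+512M (the sizes here are illustrative only):
+
+    # 64M and 512M are example values - tune them for your remote
+    rclone serve nfs remote: --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 512M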
+
+VFS Performance
+
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons. See also the chunked reading feature.
+
+In particular S3 and Swift benefit hugely from the --no-modtime flag (or
+use --use-server-modtime for a slightly different effect) as each read
+of the modification time takes a transaction.
+
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --read-only Only allow read-only access.
+
+Sometimes rclone is delivered reads or writes out of order. Rather than
+seeking rclone will wait a short time for the in sequence read or write
+to come in. These flags only come into effect when not using an on disk
+cache file.
+
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+
+When using VFS write caching (--vfs-cache-mode with value writes or
+full), the global flag --transfers can be set to adjust the number of
+parallel uploads of modified files from the cache (the related global
+flag --checkers has no effect on the VFS).
+
+ --transfers int Number of file transfers to run in parallel (default 4)
+
+VFS Case Sensitivity
+
+Linux file systems are case-sensitive: two files can differ only by
+case, and the exact case must be used when opening a file.
+
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case
+used to create the file is preserved and available for programs to
+query. It is not allowed for two files in the same directory to differ
+only by case.
+
+Usually file systems on macOS are case-insensitive. It is possible to
+make macOS file systems case-sensitive but that is not the default.
+
+The --vfs-case-insensitive VFS flag controls how rclone handles these
+two cases. If its value is "false", rclone passes file names to the
+remote as-is. If the flag is "true" (or appears without a value on the
+command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote. If an argument refers to an
+existing file with exactly the same name, then the case of the existing
+file on the disk will be used. However, if a file name with exactly the
+same name is not found but a name differing only by case exists, rclone
+will transparently fixup the name. This fixup happens only when an
+existing file is requested. Case sensitivity of file names created anew
+by rclone is controlled by the underlying remote.
+
+Note that case sensitivity of the operating system running rclone (the
+target) may differ from case sensitivity of a file system presented by
+rclone (the source). The flag controls whether "fixup" is performed to
+satisfy the target.
+
+If the flag is not provided on the command line, then its default value
+depends on the operating system where rclone runs: "true" on Windows and
+macOS, "false" otherwise. If the flag is provided without a value, then
+it is "true".
+
+VFS Disk Options
+
+This flag allows you to manually set the statistics about the filing
+system. It can be useful when those statistics cannot be read correctly
+automatically.
+
+ --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+
+Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running df on the
+filesystem, then pass the flag --vfs-used-is-size to rclone. With this
+flag set, instead of relying on the backend to report this information,
+rclone will scan the whole remote similar to rclone size and compute the
+total used space itself.
+
+WARNING. Contrary to rclone size, this flag ignores filters so that the
+result is accurate. However, this is very inefficient and may cost lots
+of API calls resulting in extra charges. Use it as a last resort and
+only with caching.
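+
+For example, to have df on the resulting mount report the scanned usage
+(at the API cost described above):
+
+    rclone serve nfs remote: --vfs-used-is-size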
+
+ rclone serve nfs remote:path [flags]
+
+Options
+
+ --addr string IPaddress:Port or :Port to bind server to
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for nfs
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -8282,6 +8812,575 @@ SEE ALSO
- rclone serve - Serve a remote over a protocol.
+rclone serve s3
+
+Serve remote:path over s3.
+
+Synopsis
+
+serve s3 implements a basic s3 server that serves a remote via s3. This
+can be viewed with an s3 client, or you can make an s3 type remote to
+read and write to it with rclone.
+
+serve s3 is considered Experimental so use with care.
+
+The S3 server supports Signature Version 4 authentication. Just use
+--auth-key accessKey,secretKey and set the Authorization header
+correctly in the request. (See the AWS docs).
+
+--auth-key can be repeated for multiple auth pairs. If --auth-key is not
+provided then serve s3 will allow anonymous access.
+
+Please note that some clients may require HTTPS endpoints. See the SSL
+docs for more information.
+
+This command uses the VFS directory cache. All the functionality will
+work with --vfs-cache-mode off. Using --vfs-cache-mode full (or writes)
+can be used to cache objects locally to improve performance.
+
+Use --force-path-style=false if you want to use the bucket name as part
+of the hostname (such as mybucket.local).
+
+Use --etag-hash if you want to change the hash used for the ETag. Note
+that using anything other than MD5 (the default) is likely to cause
+problems for S3 clients which rely on the ETag being the MD5.
+
+Quickstart
+
+For a simple set up, to serve remote:path over s3, run the server like
+this:
+
+ rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+
+This will be compatible with an rclone remote which is defined like
+this:
+
+ [serves3]
+ type = s3
+ provider = Rclone
+ endpoint = http://127.0.0.1:8080/
+ access_key_id = ACCESS_KEY_ID
+ secret_access_key = SECRET_ACCESS_KEY
+ use_multipart_uploads = false
+
+Note that setting use_multipart_uploads = false (as in the example
+above) is to work around a bug which will be fixed in due course.
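+
+Once the server is running and the serves3 remote above has been
+configured, you could, for example, list its buckets or copy files into
+one (the bucket name here is just an example):
+
+    rclone lsd serves3:
+    # "mybucket" is a hypothetical bucket (top level directory) name
+    rclone copy /path/to/files serves3:mybucket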
+
+Bugs
+
+When uploading multipart files serve s3 holds all the parts in memory
+(see #7453). This is a limitation of the library rclone uses for serving
+S3 and will hopefully be fixed at some point.
+
+Multipart server side copies do not work (see #7454). These take a very
+long time and eventually fail. The default threshold for multipart
+server side copies is 5G which is the maximum it can be, so files above
+this side will fail to be server side copied.
+
+For a current list of serve s3 bugs see the serve s3 bug category on
+GitHub.
+
+Limitations
+
+serve s3 will treat all directories in the root as buckets and ignore
+all files in the root. You can use CreateBucket to create folders under
+the root, but you can't create empty folders under other folders not in
+the root.
+
+When using PutObject or DeleteObject, rclone will automatically create
+or clean up empty folders. If you don't want to clean up empty folders
+automatically, use --no-cleanup.
+
+When using ListObjects, rclone will use / when the delimiter is empty.
+This reduces backend requests with no effect on most operations, but if
+the delimiter is something other than / or empty, rclone will do a full
+recursive search of the backend, which can take some time.
+
+Versioning is not currently supported.
+
+Metadata will only be saved in memory, except for the rclone mtime
+metadata, which will be set as the modification time of the file.
+
+Supported operations
+
+serve s3 currently supports the following operations.
+
+- Bucket
+ - ListBuckets
+ - CreateBucket
+ - DeleteBucket
+- Object
+ - HeadObject
+ - ListObjects
+ - GetObject
+ - PutObject
+ - DeleteObject
+ - DeleteObjects
+ - CreateMultipartUpload
+ - CompleteMultipartUpload
+ - AbortMultipartUpload
+ - CopyObject
+ - UploadPart
+
+Other operations will return error Unimplemented.
+
+Server options
+
+Use --addr to specify which IP address and port the server should listen
+on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
+default it only listens on localhost. You can use port :0 to let the OS
+choose an available port.
+
+If you set --addr to listen on a public or LAN accessible IP address
+then using Authentication is advised - see the next section for info.
+
+You can use a unix socket by setting the url to unix:///path/to/socket
+or just by using an absolute path name. Note that unix sockets bypass
+the authentication - this is expected to be done with file system
+permissions.
+
+--addr may be repeated to listen on multiple IPs/ports/sockets.
+
+--server-read-timeout and --server-write-timeout can be used to control
+the timeouts on the server. Note that this is the total time for a
+transfer.
+
+--max-header-bytes controls the maximum number of bytes the server will
+accept in the HTTP header.
+
+--baseurl controls the URL prefix that rclone serves from. By default
+rclone will serve from the root. If you used --baseurl "/rclone" then
+rclone would serve from a URL starting with "/rclone/". This is useful
+if you wish to proxy rclone serve. Rclone automatically inserts leading
+and trailing "/" on --baseurl, so --baseurl "rclone",
+--baseurl "/rclone" and --baseurl "/rclone/" are all treated
+identically.
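+
+For example, with the following command the S3 endpoint would be served
+from http://127.0.0.1:8080/rclone/ rather than from the root:
+
+    rclone serve s3 --addr 127.0.0.1:8080 --baseurl /rclone --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path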
+
+TLS (SSL)
+
+By default this will serve over http. If you want you can serve over
+https. You will need to supply the --cert and --key flags. If you wish
+to do client side certificate validation then you will need to supply
+--client-ca also.
+
+--cert should be either a PEM encoded certificate or a concatenation
+of that with the CA certificate. --key should be the PEM encoded private
+key and --client-ca should be the PEM encoded client certificate
+authority certificate.
+
+--min-tls-version is the minimum TLS version that is acceptable. Valid
+values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
+
+VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk filing
+system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the VFS
+layer has to deal with that. Because there is no one right way of doing
+this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info about
+files and directories (but not the data) in memory.
+
+VFS Directory Cache
+
+Using the --dir-cache-time flag, you can control how long a directory
+should be considered up to date and not refreshed from the backend.
+Changes made through the VFS will appear immediately or invalidate the
+cache.
+
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+
+However, changes made directly on the cloud storage by the web interface
+or a different copy of rclone will only be picked up once the directory
+cache expires if the backend configured does not support polling for
+changes. If the backend supports polling, changes will be picked up
+within the polling interval.
+
+You can send a SIGHUP signal to rclone for it to flush all directory
+caches, regardless of how old they are. Assuming only one rclone
+instance is running, you can reset the cache like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
+
+VFS File Buffering
+
+The --buffer-size flag determines the amount of memory that will be
+used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the used memory per open file. The
+buffer will only use memory for data that is downloaded but not yet
+read. If the buffer is empty, only a small amount of memory will be
+used.
+
+The maximum memory used by rclone for buffering can be up to
+--buffer-size * open files.
+
+VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+
+If run with -vv rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with --cache-dir or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by --vfs-cache-mode. The higher
+the cache mode the more compatible rclone becomes at the cost of using
+disk space.
+
+Note that files are written back to the remote only when they are closed
+and if they haven't been accessed for --vfs-write-back seconds. If
+rclone is quit or dies with files that haven't been uploaded, these will
+be uploaded next time rclone is run with the same flags.
+
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that
+the cache may exceed these quotas for two reasons. Firstly because it is
+only checked every --vfs-cache-poll-interval. Secondly because open
+files cannot be evicted from the cache. When --vfs-cache-max-size or
+--vfs-cache-min-free-space is exceeded, rclone will attempt to evict the
+least accessed files from the cache first. rclone will start with files
+that haven't been accessed for the longest. This cache flushing strategy
+is efficient and more relevant files are likely to remain cached.
+
+The --vfs-cache-max-age will evict files from the cache after the set
+time since last access has passed. The default value of 1 hour will
+start evicting files from cache that haven't been accessed for 1 hour.
+When a cached file is accessed the 1 hour timer is reset to 0 and will
+wait for 1 more hour before evicting. Specify the time with standard
+notation: s, m, h, d, w.
+
+You should not run two copies of rclone using the same VFS cache with
+the same or overlapping remotes if using --vfs-cache-mode > off. This
+can potentially cause data corruption if you do. You can work around
+this by giving each rclone its own cache hierarchy with --cache-dir. You
+don't need to worry about this if the remotes in use don't overlap.
+
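+As mentioned in the synopsis above, serving with write caching enabled
+can improve behaviour for clients that upload files; for example (the
+cache directory path is just an illustration):
+
+    # /var/cache/rclone-serve-s3 is an example path
+    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY --vfs-cache-mode writes --cache-dir /var/cache/rclone-serve-s3 remote:path
+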
+--vfs-cache-mode off
+
+In this mode (the default) the cache will read directly from the remote
+and write directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for write
+will be a lot more compatible, but uses the minimal disk space.
+
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from the
+remote; write only and read/write files are buffered to disk first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+
+--vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+data is read from the remote this is buffered to disk as well.
+
+In this mode the files in the cache will be sparse files and rclone will
+keep track of which bits of the files it has downloaded.
+
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file. These files will appear to be
+their full size in the cache, but they will be sparse files with only
+the data that has been downloaded present in them.
+
+This mode should support all normal file system operations and is
+otherwise identical to --vfs-cache-mode writes.
+
+When reading a file rclone will read --buffer-size plus --vfs-read-ahead
+bytes ahead. The --buffer-size is buffered in memory whereas the
+--vfs-read-ahead is buffered on disk.
+
+When using this mode it is recommended that --buffer-size is not set too
+large and --vfs-read-ahead is set large if required.
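+
+For example, a modest in-memory buffer with a larger on-disk read ahead
+might look like this (the sizes are illustrative only):
+
+ rclone serve s3 remote:path --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M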
+
+IMPORTANT not all file systems support sparse files. In particular
+FAT/exFAT do not. Rclone will perform very badly if the cache directory
+is on a filesystem which doesn't support sparse files and it will log an
+ERROR message if one is detected.
+
+Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file. Fingerprints are made from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+
+For example hash is slow with the local and sftp backends as they have
+to read the entire file and hash it, and modtime is slow with the s3,
+swift, ftp and qingstor backends because they need to do an extra API
+call to fetch it.
+
+If you use the --vfs-fast-fingerprint flag then rclone will not include
+the slow operations in the fingerprint. This makes the fingerprinting
+less accurate but much faster and will improve the opening time of
+cached files.
+
+If you are running a vfs cache over local, s3 or swift backends then
+using this flag is recommended.
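+
+For example (the remote name is illustrative):
+
+ rclone serve s3 s3remote:bucket --vfs-cache-mode full --vfs-fast-fingerprint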
+
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+
+VFS Chunked Reading
+
+When rclone reads files from a remote it reads them in chunks. This
+means that rather than requesting the whole file rclone reads the chunk
+specified. This can reduce the used download quota for some remotes by
+requesting only chunks from the remote that are actually read, at the
+cost of an increased number of requests.
+
+These flags control the chunking:
+
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+ --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+
+Rclone will start reading a chunk of size --vfs-read-chunk-size, and
+then double the size for each read. When --vfs-read-chunk-size-limit is
+specified, and greater than --vfs-read-chunk-size, the chunk size for
+each open file will get doubled only until the specified value is
+reached. If the value is "off", which is the default, the limit is
+disabled and the chunk size will grow indefinitely.
+
+With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the
+following parts will be downloaded: 0-100M, 100M-200M, 200M-300M,
+300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified,
+the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M,
+1200M-1700M and so on.
+
+Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
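+
+For example, to start reading in 64M chunks and stop the doubling at
+512M (the values are illustrative only):
+
+ rclone serve s3 remote:path --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 512M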
+
+VFS Performance
+
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons. See also the chunked reading feature.
+
+In particular S3 and Swift benefit hugely from the --no-modtime flag (or
+use --use-server-modtime for a slightly different effect) as each read
+of the modification time takes a transaction.
+
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --read-only Only allow read-only access.
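+
+For example, serving an S3 remote read-only without modification times
+(the remote name is illustrative):
+
+ rclone serve s3 remote:bucket --no-modtime --read-only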
+
+Sometimes rclone is delivered reads or writes out of order. Rather than
+seeking rclone will wait a short time for the in sequence read or write
+to come in. These flags only come into effect when not using an on disk
+cache file.
+
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+
+When using VFS write caching (--vfs-cache-mode with value writes or
+full), the global flag --transfers can be set to adjust the number of
+parallel uploads of modified files from the cache (the related global
+flag --checkers has no effect on the VFS).
+
+ --transfers int Number of file transfers to run in parallel (default 4)
+
+VFS Case Sensitivity
+
+Linux file systems are case-sensitive: two files can differ only by
+case, and the exact case must be used when opening a file.
+
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case
+used to create the file is preserved and available for programs to
+query. It is not allowed for two files in the same directory to differ
+only by case.
+
+Usually file systems on macOS are case-insensitive. It is possible to
+make macOS file systems case-sensitive but that is not the default.
+
+The --vfs-case-insensitive VFS flag controls how rclone handles these
+two cases. If its value is "false", rclone passes file names to the
+remote as-is. If the flag is "true" (or appears without a value on the
+command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote. If an argument refers to an
+existing file with exactly the same name, then the case of the existing
+file on the disk will be used. However, if a file name with exactly the
+same name is not found but a name differing only by case exists, rclone
+will transparently fixup the name. This fixup happens only when an
+existing file is requested. Case sensitivity of file names created anew
+by rclone is controlled by the underlying remote.
+
+Note that case sensitivity of the operating system running rclone (the
+target) may differ from case sensitivity of a file system presented by
+rclone (the source). The flag controls whether "fixup" is performed to
+satisfy the target.
+
+If the flag is not provided on the command line, then its default value
+depends on the operating system where rclone runs: "true" on Windows and
+macOS, "false" otherwise. If the flag is provided without a value, then
+it is "true".
+
+VFS Disk Options
+
+This flag allows you to manually set the statistics about the filing
+system. It can be useful when those statistics cannot be read correctly
+automatically.
+
+ --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+
+Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running df on the
+filesystem, then pass the flag --vfs-used-is-size to rclone. With this
+flag set, instead of relying on the backend to report this information,
+rclone will scan the whole remote similar to rclone size and compute the
+total used space itself.
+
+WARNING. Contrary to rclone size, this flag ignores filters so that the
+result is accurate. However, this is very inefficient and may cost lots
+of API calls resulting in extra charges. Use it as a last resort and
+only with caching.
+
+ rclone serve s3 remote:path [flags]
+
+Options
+
+ --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --allow-origin string Origin which cross-domain request (CORS) can be executed from
+ --auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
+ --baseurl string Prefix for URLs - leave blank for root
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
+ --file-perms FileMode File permissions (default 0666)
+ --force-path-style If true use path style access if false use virtual hosted style (default true)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for s3
+ --key string TLS PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
+ --no-checksum Don't compare checksums on up/download
+ --no-cleanup Don't clean up empty folders after an object is deleted
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+
+Filter Options
+
+Flags for filtering directory listings.
+
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+
+See the global flags page for global options not listed here.
+
+SEE ALSO
+
+- rclone serve - Serve a remote over a protocol.
+
rclone serve sftp
Serve the remote over SFTP.
@@ -8778,6 +9877,7 @@ Options
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -8961,6 +10061,27 @@ be used within the template to server pages:
-- .ModTime The UTC timestamp of an entry.
-----------------------------------------------------------------------
+The server also makes the following functions available so that they can
+be used within the template. These functions help extend the options for
+dynamic rendering of HTML. They can be used to render HTML based on
+specific conditions.
+
+ -----------------------------------------------------------------------
+ Function Description
+ ----------------------------------- -----------------------------------
+ afterEpoch Returns the time since the epoch
+ for the given time.
+
+ contains Checks whether a given substring is
+ present or not in a given string.
+
+ hasPrefix Checks whether the given string
+ begins with the specified prefix.
+
+ hasSuffix Checks whether the given string ends
+ with the specified suffix.
+ -----------------------------------------------------------------------
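+
+As an illustration only, a fragment of a custom template could combine
+these functions with the entry fields (the .Entries, .URL and .Leaf
+names used below are assumed; .ModTime is shown in the table above):
+
+ {{ range .Entries }}
+ {{ if hasSuffix .URL "/" }}<b>{{ .Leaf }}</b>{{ else }}{{ .Leaf }}{{ end }}
+ modified {{ .ModTime }} ({{ afterEpoch .ModTime }} seconds since the epoch)
+ {{ end }}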
+
Authentication
By default this will serve files without needing a login.
@@ -8987,9 +10108,8 @@ The password file can be updated while rclone is running.
Use --realm to set the authentication realm.
-Use --salt to change the password hashing salt from the default.
-
-VFS - Virtual File System
+Use --salt to change the password hashing salt from the default.
+
+VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
that rclone uses into something which looks much more like a disk filing
@@ -9438,6 +10558,7 @@ Options
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -10195,6 +11316,10 @@ Note that arbitrary metadata may be added to objects using the
--metadata-set key=value flag when the object is first uploaded. This
flag can be repeated as many times as necessary.
+The --metadata-mapper flag can be used to pass the name of a program
+which can transform metadata when it is being copied from source to
+destination.
+
Types of metadata
Metadata is divided into two type. System metadata and User metadata.
@@ -10287,6 +11412,9 @@ backend may implement.
btime Time of file creation 2006-01-02T15:04:05.999999999Z07:00
(birth): RFC 3339
+ utime Time of file upload: 2006-01-02T15:04:05.999999999Z07:00
+ RFC 3339
+
cache-control Cache-Control header no-cache
content-disposition Content-Disposition inline
@@ -11018,8 +12146,8 @@ such as:
- sftp
Without --inplace (the default) rclone will first upload to a temporary
-file with an extension like this where XXXXXX represents a random
-string.
+file with an extension like this, where XXXXXX represents a random
+string and .partial is the --partial-suffix value (.partial by default).
original-file-name.XXXXXX.partial
@@ -11235,12 +12363,118 @@ reaching the limit. Only applicable for --max-transfer
Setting this flag enables rclone to copy the metadata from the source to
the destination. For local backends this is ownership, permissions,
-xattr etc. See the #metadata for more info.
+xattr etc. See the metadata section for more info.
+
+--metadata-mapper SpaceSepList
+
+If you supply the parameter --metadata-mapper /path/to/program then
+rclone will use that program to map metadata from source object to
+destination object.
+
+The argument to this flag should be a command with an optional space
+separated list of arguments. If one of the arguments has a space in it
+then enclose it in ". If you want a literal " in an argument then
+enclose the argument in " and double the ". See CSV encoding for more info.
+
+ --metadata-mapper "python bin/test_metadata_mapper.py"
+ --metadata-mapper 'python bin/test_metadata_mapper.py "argument with a space"'
+ --metadata-mapper 'python bin/test_metadata_mapper.py "argument with ""two"" quotes"'
+
+This uses a simple JSON based protocol with input on STDIN and output on
+STDOUT. This will be called for every file and directory copied and may
+be called concurrently.
+
+The program's job is to take a metadata blob on the input and turn it
+into a metadata blob on the output suitable for the destination backend.
+
+Input to the program (via STDIN) might look like this. This provides
+some context for the Metadata which may be important.
+
+- SrcFs is the config string for the remote that the object is
+ currently on.
+- SrcFsType is the name of the source backend.
+- DstFs is the config string for the remote that the object is being
+ copied to.
+- DstFsType is the name of the destination backend.
+- Remote is the path of the file relative to the root.
+- Size, MimeType, ModTime are attributes of the file.
+- IsDir is true if this is a directory (not yet implemented).
+- ID is the source ID of the file if known.
+- Metadata is the backend specific metadata as described in the
+ backend docs.
+
+ {
+ "SrcFs": "gdrive:",
+ "SrcFsType": "drive",
+ "DstFs": "newdrive:user",
+ "DstFsType": "onedrive",
+ "Remote": "test.txt",
+ "Size": 6,
+ "MimeType": "text/plain; charset=utf-8",
+ "ModTime": "2022-10-11T17:53:10.286745272+01:00",
+ "IsDir": false,
+ "ID": "xyz",
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain1.com",
+ "permissions": "...",
+ "description": "my nice file",
+ "starred": "false"
+ }
+ }
+
+The program should then modify the input as desired and send it to
+STDOUT. The returned Metadata field will be used in its entirety for the
+destination object. Any other fields will be ignored. Note in this
+example we translate user names and permissions and add something to the
+description:
+
+ {
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain2.com",
+ "permissions": "...",
+ "description": "my nice file [migrated from domain1]",
+ "starred": "false"
+ }
+ }
+
+Metadata can be removed here too.
+
+An example python program might look something like this to implement
+the above transformations.
+
+ import sys, json
+
+ i = json.load(sys.stdin)
+ metadata = i["Metadata"]
+ # Add tag to description
+ if "description" in metadata:
+ metadata["description"] += " [migrated from domain1]"
+ else:
+ metadata["description"] = "[migrated from domain1]"
+ # Modify owner
+ if "owner" in metadata:
+ metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
+ o = { "Metadata": metadata }
+ json.dump(o, sys.stdout, indent="\t")
+
+You can find this example (slightly expanded) in the rclone source code
+at bin/test_metadata_mapper.py.
+
+If you want to see the input to the metadata mapper and the output
+returned from it in the log you can use -vv --dump mapper.
+
+See the metadata section for more info.
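+
+For example, a copy which preserves metadata, passes it through the
+mapper and logs the mapper traffic might look like this (the remote
+names are illustrative):
+
+ rclone copy gdrive: newdrive:user --metadata --metadata-mapper "python bin/test_metadata_mapper.py" -vv --dump mapper
+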
--metadata-set key=value
Add metadata key = value when uploading. This can be repeated as many
-times as required. See the #metadata for more info.
+times as required. See the metadata section for more info.
--modify-window=TIME
@@ -11457,6 +12691,15 @@ If you want perfect ordering then you will need to specify --check-first
which will find all the files which need transferring first before
transferring any.
+--partial-suffix
+
+When --inplace is not used, rclone uses the value of --partial-suffix as
+the suffix for temporary files.
+
+The suffix is limited to 16 characters.
+
+The default is .partial.
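+
+For example, to use a different suffix (the value shown is just an
+example):
+
+ rclone copy /path/to/src remote:dst --partial-suffix .tmp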
+
--password-command SpaceSepList
This flag supplies a program which should supply the config password
@@ -11470,9 +12713,9 @@ and double the ". See CSV encoding for more info.
Eg
- --password-command echo hello
- --password-command echo "hello with space"
- --password-command echo "hello with ""quotes"" and space"
+ --password-command "echo hello"
+ --password-command 'echo "hello with space"'
+ --password-command 'echo "hello with ""quotes"" and space"'
See the Configuration Encryption for more info.
@@ -11837,35 +13080,55 @@ not deleting files as there were IO errors.
--fast-list
When doing anything which involves a directory listing (e.g. sync, copy,
-ls - in fact nearly every command), rclone normally lists a directory
-and processes it before using more directory lists to process any
-subdirectories. This can be parallelised and works very quickly using
-the least amount of memory.
+ls - in fact nearly every command), rclone has different strategies to
+choose from.
-However, some remotes have a way of listing all files beneath a
-directory in one (or a small number) of transactions. These tend to be
-the bucket-based remotes (e.g. S3, B2, GCS, Swift).
+The basic strategy is to list one directory and process it before using
+more directory lists to process any subdirectories. This is a mandatory
+backend feature, called List, which means it is supported by all
+backends. This strategy uses a small amount of memory, and because it
+can be parallelised it is fast for operations involving processing of
+the list results.
-If you use the --fast-list flag then rclone will use this method for
-listing directories. This will have the following consequences for the
-listing:
+Some backends provide support for an alternative strategy, where all
+files beneath a directory can be listed in one (or a small number of)
+transactions. Rclone supports this alternative strategy through an
+optional backend feature called ListR. You can see in the storage system
+overview documentation's optional features section which backends it is
+enabled for (these tend to be the bucket-based ones, e.g. S3, B2, GCS,
+Swift). This strategy requires fewer transactions for highly recursive
+operations, which is important on backends where this is charged or
+heavily rate limited. It may be faster (due to fewer transactions) or
+slower (because it can't be parallelized) depending on different
+parameters, and may require more memory if rclone has to keep the whole
+listing in memory.
-- It will use fewer transactions (important if you pay for them)
-- It will use more memory. Rclone has to load the whole listing into
- memory.
-- It may be faster because it uses fewer transactions
-- It may be slower because it can't be parallelized
+Which listing strategy rclone picks for a given operation is
+complicated, but in general it tries to choose the best possible. It
+will prefer ListR in situations where it doesn't need to store the
+listed files in memory, e.g. for unlimited recursive ls command
+variants. In other situations it will prefer List, e.g. for sync and
+copy, where it needs to keep the listed files in memory, and is
+performing operations on them where parallelization may be a huge
+advantage.
-rclone should always give identical results with and without
---fast-list.
-
-If you pay for transactions and can fit your entire sync listing into
-memory then --fast-list is recommended. If you have a very big sync to
-do then don't use --fast-list otherwise you will run out of memory.
-
-If you use --fast-list on a remote which doesn't support it, then rclone
+Rclone is not able to take all relevant parameters into account for
+deciding the best strategy, and therefore allows you to influence the
+choice in two ways: You can stop rclone from using ListR by disabling
+the feature, using the --disable option (--disable ListR), or you can
+allow rclone to use ListR where it would normally choose not to do so
+due to higher memory usage, using the --fast-list option. Rclone should
+always produce identical results either way. Using --disable ListR or
+--fast-list on a remote which doesn't support ListR does nothing, rclone
will just ignore it.
+A rule of thumb is that if you pay for transactions and can fit your
+entire sync listing into memory, then --fast-list is recommended. If you
+have a very big sync to do, then don't use --fast-list, otherwise you
+will run out of memory. Run some tests and compare before you decide,
+and if in doubt then just leave the default, let rclone decide, i.e. not
+use --fast-list.
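+
+For example, to force the ListR based strategy for a sync, or to stop
+rclone from ever using it (the remote and paths are illustrative):
+
+ rclone sync --fast-list s3:bucket/path /local/path
+ rclone sync --disable ListR s3:bucket/path /local/path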
+
--timeout=TIME
This sets the IO idle timeout. If a transfer has started but then
@@ -12187,6 +13450,12 @@ to standard output.
This dumps a list of the open files at the end of the command. It uses
the lsof command to do that so you'll need that installed to use it.
+--dump mapper
+
+This shows the JSON blobs being sent to the program supplied with
+--metadata-mapper and received from it. It can be useful for debugging
+the metadata mapper interface.
+
--memprofile=FILE
Write memory profile to file. This can be analysed with go tool pprof.
@@ -14581,6 +15850,66 @@ See the about command for more information on the above.
Authentication is required for this call.
+operations/check: check the source and destination are the same
+
+Checks the files in the source and destination match. It compares sizes
+and hashes and logs a report of files that don't match. It doesn't alter
+the source or destination.
+
+This takes the following parameters:
+
+- srcFs - a remote name string e.g. "drive:" for the source, "/" for
+ local filesystem
+- dstFs - a remote name string e.g. "drive2:" for the destination, "/"
+ for local filesystem
+- download - check by downloading rather than with hash
+- checkFileHash - treat checkFileFs:checkFileRemote as a SUM file with
+ hashes of given type
+- checkFileFs - treat checkFileFs:checkFileRemote as a SUM file with
+ hashes of given type
+- checkFileRemote - treat checkFileFs:checkFileRemote as a SUM file
+ with hashes of given type
+- oneWay - check one way only, source files must exist on remote
+- combined - make a combined report of changes (default false)
+- missingOnSrc - report all files missing from the source (default
+ true)
+- missingOnDst - report all files missing from the destination
+ (default true)
+- match - report all matching files (default false)
+- differ - report all non-matching files (default true)
+- error - report all files with errors (hashing or reading) (default
+ true)
+
+If you supply the download flag, it will download the data from both
+remotes and check them against each other on the fly. This can be useful
+for remotes that don't support hashes or if you really want to check all
+the data.
+
+If you supply the size-only global flag, it will only compare the sizes
+not the hashes as well. Use this for a quick check.
+
+If you supply the checkFileHash option with a valid hash name, the
+checkFileFs:checkFileRemote must point to a text file in the SUM format.
+This treats the checksum file as the source and dstFs as the
+destination. Note that srcFs is not used and should not be supplied in
+this case.
+
+Returns:
+
+- success - true if no error, false otherwise
+- status - textual summary of check, OK or text string
+- hashType - hash used in check, may be missing
+- combined - array of strings of combined report of changes
+- missingOnSrc - array of strings of all files missing from the source
+- missingOnDst - array of strings of all files missing from the
+ destination
+- match - array of strings of all matching files
+- differ - array of strings of all non-matching files
+- error - array of strings of all files with errors (hashing or
+ reading)
+
+Authentication is required for this call.
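+
+For example, a one-way check between a remote and a local path over the
+remote control API might look like this (the remote and path are
+illustrative):
+
+ rclone rc operations/check srcFs=drive: dstFs=/mnt/backup oneWay=true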
+
operations/cleanup: Remove trashed files in the remote or path
This takes the following parameters:
@@ -15496,55 +16825,55 @@ Features
Here is an overview of the major features of each cloud storage system.
- Name Hash ModTime Case Insensitive Duplicate Files MIME Type Metadata
- ------------------------------ ------------------ --------- ------------------ ----------------- ----------- ----------
- 1Fichier Whirlpool - No Yes R -
- Akamai Netstorage MD5, SHA256 R/W No No R -
- Amazon Drive MD5 - Yes No R -
- Amazon S3 (or S3 compatible) MD5 R/W No No R/W RWU
- Backblaze B2 SHA1 R/W No No R/W -
- Box SHA1 R/W Yes No - -
- Citrix ShareFile MD5 R/W Yes No - -
- Dropbox DBHASH ¹ R Yes No - -
- Enterprise File Fabric - R/W Yes No R/W -
- FTP - R/W ¹⁰ No No - -
- Google Cloud Storage MD5 R/W No No R/W -
- Google Drive MD5 R/W No Yes R/W -
- Google Photos - - No Yes R -
- HDFS - R/W No No - -
- HiDrive HiDrive ¹² R/W No No - -
- HTTP - R No No R -
- Internet Archive MD5, SHA1, CRC32 R/W ¹¹ No No - RWU
- Jottacloud MD5 R/W Yes No R -
- Koofr MD5 - Yes No - -
- Mail.ru Cloud Mailru ⁶ R/W Yes No - -
- Mega - - No Yes - -
- Memory MD5 R/W No No - -
- Microsoft Azure Blob Storage MD5 R/W No No R/W -
- Microsoft OneDrive QuickXorHash ⁵ R/W Yes No R -
- OpenDrive MD5 R/W Yes Partial ⁸ - -
- OpenStack Swift MD5 R/W No No R/W -
- Oracle Object Storage MD5 R/W No No R/W -
- pCloud MD5, SHA1 ⁷ R No No W -
- PikPak MD5 R No No R -
- premiumize.me - - Yes No R -
- put.io CRC-32 R/W No Yes R -
- Proton Drive SHA1 R/W No No R -
- QingStor MD5 - ⁹ No No R/W -
- Quatrix by Maytech - R/W No No - -
- Seafile - - No No - -
- SFTP MD5, SHA1 ² R/W Depends No - -
- Sia - - No No - -
- SMB - - Yes No - -
- SugarSync - - No No - -
- Storj - R No No - -
- Uptobox - - No Yes - -
- WebDAV MD5, SHA1 ³ R ⁴ Depends No - -
- Yandex Disk MD5 R/W No No R -
- Zoho WorkDrive - - No No - -
- The local filesystem All R/W Depends No - RWU
-
-Notes
+ Name Hash ModTime Case Insensitive Duplicate Files MIME Type Metadata
+ ------------------------------- ------------------- --------- ------------------ ----------------- ----------- ----------
+ 1Fichier Whirlpool - No Yes R -
+ Akamai Netstorage MD5, SHA256 R/W No No R -
+ Amazon Drive MD5 - Yes No R -
+ Amazon S3 (or S3 compatible) MD5 R/W No No R/W RWU
+ Backblaze B2 SHA1 R/W No No R/W -
+ Box SHA1 R/W Yes No - -
+ Citrix ShareFile MD5 R/W Yes No - -
+ Dropbox DBHASH ¹ R Yes No - -
+ Enterprise File Fabric - R/W Yes No R/W -
+ FTP - R/W ¹⁰ No No - -
+ Google Cloud Storage MD5 R/W No No R/W -
+ Google Drive MD5, SHA1, SHA256 R/W No Yes R/W -
+ Google Photos - - No Yes R -
+ HDFS - R/W No No - -
+ HiDrive HiDrive ¹² R/W No No - -
+ HTTP - R No No R -
+ Internet Archive MD5, SHA1, CRC32 R/W ¹¹ No No - RWU
+ Jottacloud MD5 R/W Yes No R RW
+ Koofr MD5 - Yes No - -
+ Linkbox - R No No - -
+ Mail.ru Cloud Mailru ⁶ R/W Yes No - -
+ Mega - - No Yes - -
+ Memory MD5 R/W No No - -
+ Microsoft Azure Blob Storage MD5 R/W No No R/W -
+ Microsoft Azure Files Storage MD5 R/W Yes No R/W -
+ Microsoft OneDrive QuickXorHash ⁵ R/W Yes No R -
+ OpenDrive MD5 R/W Yes Partial ⁸ - -
+ OpenStack Swift MD5 R/W No No R/W -
+ Oracle Object Storage MD5 R/W No No R/W -
+ pCloud MD5, SHA1 ⁷ R No No W -
+ PikPak MD5 R No No R -
+ premiumize.me - - Yes No R -
+ put.io CRC-32 R/W No Yes R -
+ Proton Drive SHA1 R/W No No R -
+ QingStor MD5 - ⁹ No No R/W -
+ Quatrix by Maytech - R/W No No - -
+ Seafile - - No No - -
+ SFTP MD5, SHA1 ² R/W Depends No - -
+ Sia - - No No - -
+ SMB - R/W Yes No - -
+ SugarSync - - No No - -
+ Storj - R No No - -
+ Uptobox - - No Yes - -
+ WebDAV MD5, SHA1 ³ R ⁴ Depends No - -
+ Yandex Disk MD5 R/W No No R -
+ Zoho WorkDrive - - No No - -
+ The local filesystem All R/W Depends No - RWU
¹ Dropbox supports its own custom hash. This is an SHA256 sum of all the
4 MiB block SHA256s.
@@ -15997,7 +17326,7 @@ upon backend-specific capabilities.
Backblaze B2 No Yes No No Yes Yes Yes Yes Yes No No
- Box Yes Yes Yes Yes Yes ‡‡ No Yes No Yes Yes Yes
+ Box Yes Yes Yes Yes Yes No Yes No Yes Yes Yes
Citrix Yes Yes Yes Yes No No No No No No Yes
ShareFile
@@ -16038,14 +17367,17 @@ upon backend-specific capabilities.
Microsoft Azure Yes Yes No No No Yes Yes Yes No No No
Blob Storage
- Microsoft Yes Yes Yes Yes Yes No No No Yes Yes Yes
+ Microsoft Azure No Yes Yes Yes No No Yes Yes No Yes Yes
+ Files Storage
+
+ Microsoft Yes Yes Yes Yes Yes Yes ⁵ No No Yes Yes Yes
OneDrive
OpenDrive Yes Yes Yes Yes No No No No No No Yes
- OpenStack Swift Yes † Yes No No No Yes Yes No No Yes No
+ OpenStack Swift Yes ¹ Yes No No No Yes Yes No No Yes No
- Oracle Object No Yes No No Yes Yes Yes No No No No
+ Oracle Object No Yes No No Yes Yes Yes Yes No No No
Storage
pCloud Yes Yes Yes Yes Yes No No No Yes Yes Yes
@@ -16065,7 +17397,7 @@ upon backend-specific capabilities.
Seafile Yes Yes Yes Yes Yes Yes Yes No Yes Yes Yes
- SFTP No No Yes Yes No No Yes No No Yes Yes
+ SFTP No Yes ⁴ Yes Yes No No Yes No No Yes Yes
Sia No No No No No No Yes No No No Yes
@@ -16073,11 +17405,11 @@ upon backend-specific capabilities.
SugarSync Yes Yes Yes Yes No No Yes No Yes No Yes
- Storj Yes ☨ Yes Yes No No Yes Yes No Yes No No
+ Storj Yes ² Yes Yes No No Yes Yes No Yes No No
Uptobox No Yes Yes Yes No No No No No No No
- WebDAV Yes Yes Yes Yes No No Yes ‡ No No Yes Yes
+ WebDAV Yes Yes Yes Yes No No Yes ³ No No Yes Yes
Yandex Disk Yes Yes Yes Yes Yes No Yes No Yes Yes Yes
@@ -16087,20 +17419,24 @@ upon backend-specific capabilities.
filesystem
-------------------------------------------------------------------------------------------------------------------------------------
+¹ Note Swift implements this in order to delete directory markers but it
+doesn't actually have a quicker way of deleting files other than
+deleting them individually.
+
+² Storj implements this efficiently only for entire buckets. If purging
+a directory inside a bucket, files are deleted individually.
+
+³ StreamUpload is not supported with Nextcloud
+
+⁴ Use the --sftp-copy-is-hardlink flag to enable.
+
+⁵ Use the --onedrive-delta flag to enable.
+
Purge
This deletes a directory quicker than just deleting all the files in the
directory.
-† Note Swift implements this in order to delete directory markers but
-they don't actually have a quicker way of deleting files other than
-deleting them individually.
-
-☨ Storj implements this efficiently only for entire buckets. If purging
-a directory inside a bucket, files are deleted individually.
-
-‡ StreamUpload is not supported with Nextcloud
-
Copy
Used when copying an object to and from the same remote. This known as a
@@ -16191,11 +17527,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -16210,11 +17546,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
@@ -16272,7 +17609,7 @@ General networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0")
Performance
@@ -16289,7 +17626,7 @@ General configuration of rclone.
--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
- --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO")
+ --color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
--config string Config file (default "$HOME/.config/rclone/rclone.conf")
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--disable string Disable a comma separated list of features (use --disable help to see a list)
@@ -16315,7 +17652,7 @@ Debugging
Flags for developers.
--cpuprofile string Write cpu profile to file
- --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--memprofile string Write memory profile to file
@@ -16360,7 +17697,7 @@ Logging and statistics.
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
-P, --progress Show progress during transfer
@@ -16368,7 +17705,7 @@ Logging and statistics.
-q, --quiet Print as little stuff as possible
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-log-level LogLevel Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default INFO)
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
--stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
@@ -16389,6 +17726,7 @@ Flags to control metadata.
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --metadata-mapper SpaceSepList Program to run to transform metadata before upload
--metadata-set stringArray Add metadata key=value when uploading
RC
@@ -16431,13 +17769,13 @@ Backend only flags. These can be set in the config file also.
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
- --acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
--acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
--alias-remote string Remote or path to alias
- --azureblob-access-tier string Access tier of blob: hot, cool or archive
+ --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
--azureblob-account string Azure Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
@@ -16448,7 +17786,7 @@ Backend only flags. These can be set in the config file also.
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
- --azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+ --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
--azureblob-key string Storage Account Shared Key
@@ -16468,18 +17806,43 @@ Backend only flags. These can be set in the config file also.
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
+ --azurefiles-account string Azure Storage Account Name
+ --azurefiles-chunk-size SizeSuffix Upload chunk size (default 4Mi)
+ --azurefiles-client-certificate-password string Password for the certificate file (optional) (obscured)
+ --azurefiles-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key
+ --azurefiles-client-id string The ID of the client in use
+ --azurefiles-client-secret string One of the service principal's client secrets
+ --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
+ --azurefiles-endpoint string Endpoint for the service
+ --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
+ --azurefiles-key string Storage Account Shared Key
+ --azurefiles-max-stream-size SizeSuffix Max size for streamed files (default 10Gi)
+ --azurefiles-msi-client-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-password string The user's password (obscured)
+ --azurefiles-sas-url string SAS URL
+ --azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
+ --azurefiles-share-name string Azure Files Share Name
+ --azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
+ --azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
+ --azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
- --b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
+ --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
- --b2-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
@@ -16490,7 +17853,7 @@ Backend only flags. These can be set in the config file also.
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
- --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-impersonate string Impersonate this user ID when using a service account
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
@@ -16548,7 +17911,7 @@ Backend only flags. These can be set in the config file also.
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
--drive-disable-http2 Disable drive using http2 (default true)
- --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
+ --drive-encoding Encoding The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true)
@@ -16557,17 +17920,21 @@ Backend only flags. These can be set in the config file also.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
+ --drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
+ --drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-resource-key string Resource key for accessing a link-shared file
--drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive
+ --drive-scope string Comma separated list of scopes that rclone should use when requesting access from drive
--drive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
+ --drive-show-all-gdocs Show all Google Docs including non-exportable ones in listings
--drive-size-as-quota Show sizes as storage quota usage, not actual size
- --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
+ --drive-skip-checksum-gphotos Skip checksums on Google photos and videos only
--drive-skip-dangling-shortcuts If set skip dangling shortcut files
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
@@ -16591,7 +17958,7 @@ Backend only flags. These can be set in the config file also.
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
- --dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-shared-files Instructs rclone to work on individual shared files
@@ -16600,11 +17967,11 @@ Backend only flags. These can be set in the config file also.
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-cdn Set if you wish to use CDN download links
- --fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
- --filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
@@ -16618,7 +17985,7 @@ Backend only flags. These can be set in the config file also.
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
- --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+ --ftp-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
@@ -16640,7 +18007,7 @@ Backend only flags. These can be set in the config file also.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
- --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
@@ -16653,9 +18020,13 @@ Backend only flags. These can be set in the config file also.
--gcs-token-url string Token server url
--gcs-user-project string User project
--gphotos-auth-url string Auth server URL
+ --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
+ --gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
+ --gphotos-batch-size int Max number of files in upload batch
+ --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
- --gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
@@ -16667,8 +18038,8 @@ Backend only flags. These can be set in the config file also.
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
- --hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
- --hdfs-namenode string Hadoop name node and port
+ --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
+ --hdfs-namenode CommaSepList Hadoop name nodes and ports
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
@@ -16676,7 +18047,7 @@ Backend only flags. These can be set in the config file also.
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
- --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
--hidrive-root-prefix string The root/parent folder for all paths (default "/")
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
@@ -16689,9 +18060,16 @@ Backend only flags. These can be set in the config file also.
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
+ --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true
+ --imagekit-private-key string You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-public-key string You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2"
+ --imagekit-versions Include old versions in directory listings
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
- --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
+ --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-secret-access-key string IAS3 Secret Key (password)
@@ -16699,7 +18077,7 @@ Backend only flags. These can be set in the config file also.
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
- --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
+ --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -16707,17 +18085,18 @@ Backend only flags. These can be set in the config file also.
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's (default 10Mi)
- --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
+ --linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
- --local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --local-encoding Encoding The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
@@ -16729,7 +18108,7 @@ Backend only flags. These can be set in the config file also.
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
- --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
@@ -16739,7 +18118,7 @@ Backend only flags. These can be set in the config file also.
--mailru-token-url string Token server url
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
- --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
@@ -16755,9 +18134,10 @@ Backend only flags. These can be set in the config file also.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
+ --onedrive-delta If set rclone will use delta listing to implement recursive listings
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
- --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
@@ -16778,7 +18158,7 @@ Backend only flags. These can be set in the config file also.
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s)
--oos-disable-checksum Don't store MD5 checksum with object metadata
- --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
@@ -16795,13 +18175,13 @@ Backend only flags. These can be set in the config file also.
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
- --opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
+ --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
- --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
@@ -16811,7 +18191,7 @@ Backend only flags. These can be set in the config file also.
--pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
- --pikpak-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
@@ -16823,13 +18203,13 @@ Backend only flags. These can be set in the config file also.
--premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
- --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
- --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton account (obscured)
@@ -16838,13 +18218,13 @@ Backend only flags. These can be set in the config file also.
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
- --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
- --qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
+ --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connection QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
@@ -16853,7 +18233,7 @@ Backend only flags. These can be set in the config file also.
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
- --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
@@ -16868,7 +18248,7 @@ Backend only flags. These can be set in the config file also.
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
- --s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --s3-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
@@ -16902,14 +18282,16 @@ Backend only flags. These can be set in the config file also.
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
+ --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
+ --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
- --seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
+ --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
@@ -16919,6 +18301,7 @@ Backend only flags. These can be set in the config file also.
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
+ --sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@@ -16953,7 +18336,7 @@ Backend only flags. These can be set in the config file also.
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
- --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-token string OAuth Access Token as a JSON blob
@@ -16961,12 +18344,12 @@ Backend only flags. These can be set in the config file also.
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
- --sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
+ --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
- --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
--smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -16984,7 +18367,7 @@ Backend only flags. These can be set in the config file also.
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
- --sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
+ --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
@@ -16998,7 +18381,7 @@ Backend only flags. These can be set in the config file also.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
+ --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
@@ -17020,7 +18403,7 @@ Backend only flags. These can be set in the config file also.
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
- --uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
+ --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
@@ -17035,14 +18418,14 @@ Backend only flags. These can be set in the config file also.
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
- --yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
- --zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
+ --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
@@ -17996,7 +19379,7 @@ considered for this check. You could use --force to force the sync
the situation carefully and perhaps use --dry-run before you commit to
the changes.
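
For example, a cautious preview of such a run (the local path and remote
name here are placeholders) could look like:

    rclone bisync /path/to/local remote:path --dry-run
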
-Modification time
+Modification times
Bisync relies on file timestamps to identify changed files and will
refuse to operate if the backend lacks modification time support.
@@ -19077,7 +20460,7 @@ To copy a local directory to a 1Fichier directory called backup
rclone copy /home/source remote:backup
-Modified time and hashes
+Modification times and hashes
1Fichier does not support modification times. It supports the Whirlpool
hash algorithm.
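
As a small illustration (the remote name and path are placeholders), the
Whirlpool checksums of a directory can be listed with:

    rclone hashsum whirlpool remote:backup
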
@@ -19195,7 +20578,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FICHIER_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default:
Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
@@ -19431,13 +20814,13 @@ To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
-Modified time and MD5SUMs
+Modification times and hashes
Amazon Drive doesn't allow modification times to be changed via the API
so these won't be accurate or used for syncing.
-It does store MD5SUMs so for a more accurate sync, you can use the
---checksum flag.
+It does support the MD5 hash algorithm, so for a more accurate sync, you
+can use the --checksum flag.
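
For example, assuming the remote configured above, a checksum-based sync
might look like:

    rclone sync --checksum /home/source remote:backup
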
Restricted filename characters
@@ -19609,7 +20992,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
Limitations
@@ -19661,10 +21044,12 @@ The S3 backend can be used with a number of different providers:
- IONOS Cloud
- Leviia Object Storage
- Liara Object Storage
+- Linode Object Storage
- Minio
- Petabox
- Qiniu Cloud Object Storage (Kodo)
- RackCorp Object Storage
+- Rclone Serve S3
- Scaleway
- Seagate Lyve Cloud
- SeaweedFS
@@ -19904,7 +21289,9 @@ This will guide you through an interactive setup process.
d) Delete this remote
y/e/d>
-Modified time
+Modification times and hashes
+
+Modification times
The modified time is stored as metadata on the object as
X-Amz-Meta-Mtime as floating point since the epoch, accurate to 1 ns.
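
For illustration only, the stored header looks roughly like this (the
timestamp value below is made up):

    X-Amz-Meta-Mtime: 1696262538.123456789
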
@@ -19918,6 +21305,30 @@ uploaded rather than copied.
Note that reading this from the object takes an additional HEAD request
as the metadata isn't returned in object listings.
+Hashes
+
+For small objects which weren't uploaded as multipart uploads (objects
+sized below --s3-upload-cutoff if uploaded with rclone) rclone uses the
+ETag: header as an MD5 checksum.
+
+However for objects which were uploaded as multipart uploads or with
+server side encryption (SSE-AWS or SSE-C) the ETag header is no longer
+the MD5 sum of the data, so rclone adds an additional piece of metadata
+X-Amz-Meta-Md5chksum which is a base64 encoded MD5 hash (in the same
+format as is required for Content-MD5). You can use base64 -d and
+hexdump to check this value manually:
+
+ echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
+
+or you can use rclone check to verify the hashes are OK.
+
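For example, to verify a local copy against a bucket (the remote name and
paths below are placeholders):

    rclone check /path/to/local s3remote:bucket/path
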
+For large objects, calculating this hash can take some time so the
+addition of this hash can be disabled with --s3-disable-checksum. This
+will mean that these objects do not have an MD5 checksum.
+
+Note that reading this from the object takes an additional HEAD request
+as the metadata isn't returned in object listings.
+
Reducing costs
Avoiding HEAD requests to read the modification time
@@ -20007,30 +21418,6 @@ details.
Setting this flag increases the chance for undetected upload failures.
-Hashes
-
-For small objects which weren't uploaded as multipart uploads (objects
-sized below --s3-upload-cutoff if uploaded with rclone) rclone uses the
-ETag: header as an MD5 checksum.
-
-However for objects which were uploaded as multipart uploads or with
-server side encryption (SSE-AWS or SSE-C) the ETag header is no longer
-the MD5 sum of the data, so rclone adds an additional piece of metadata
-X-Amz-Meta-Md5chksum which is a base64 encoded MD5 hash (in the same
-format as is required for Content-MD5). You can use base64 -d and
-hexdump to check this value manually:
-
- echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
-
-or you can use rclone check to verify the hashes are OK.
-
-For large objects, calculating this hash can take some time so the
-addition of this hash can be disabled with --s3-disable-checksum. This
-will mean that these objects do not have an MD5 checksum.
-
-Note that reading this from the object takes an additional HEAD request
-as the metadata isn't returned in object listings.
-
Versions
When bucket versioning is enabled (this can be done with rclone with the
@@ -20288,19 +21675,19 @@ According to AWS's documentation on S3 Object Lock:
If you configure a default retention period on a bucket, requests to
upload objects in such a bucket must include the Content-MD5 header.
-As mentioned in the Hashes section, small files that are not uploaded as
-multipart, use a different tag, causing the upload to fail. A simple
-solution is to set the --s3-upload-cutoff 0 and force all the files to
-be uploaded as multipart.
+As mentioned in the Modification times and hashes section, small files
+that are not uploaded as multipart use a different tag, causing the
+upload to fail. A simple solution is to set --s3-upload-cutoff 0 and
+force all the files to be uploaded as multipart.
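
For example (the local path, remote name and bucket below are
placeholders):

    rclone copy --s3-upload-cutoff 0 /home/source s3remote:locked-bucket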
Standard options
Here are the Standard options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China
-Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS,
-IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease,
-Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology,
-Tencent COS, Qiniu and Wasabi).
+Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
+Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
+IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox,
+RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology,
+TencentCOS, Wasabi, Qiniu and others).
--s3-provider
@@ -20345,6 +21732,8 @@ Properties:
- Leviia Object Storage
- "Liara"
- Liara Object Storage
+ - "Linode"
+ - Linode Object Storage
- "Minio"
- Minio Object Storage
- "Netease"
@@ -20353,6 +21742,8 @@ Properties:
- Petabox Object Storage
- "RackCorp"
- RackCorp Object Storage
+ - "Rclone"
+ - Rclone S3 Server
- "Scaleway"
- Scaleway Object Storage
- "SeaweedFS"
@@ -20506,259 +21897,6 @@ Properties:
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
---s3-region
-
-region - the location where your bucket will be created and your data
-stored.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "global"
- - Global CDN (All locations) Region
- - "au"
- - Australia (All states)
- - "au-nsw"
- - NSW (Australia) Region
- - "au-qld"
- - QLD (Australia) Region
- - "au-vic"
- - VIC (Australia) Region
- - "au-wa"
- - Perth (Australia) Region
- - "ph"
- - Manila (Philippines) Region
- - "th"
- - Bangkok (Thailand) Region
- - "hk"
- - HK (Hong Kong) Region
- - "mn"
- - Ulaanbaatar (Mongolia) Region
- - "kg"
- - Bishkek (Kyrgyzstan) Region
- - "id"
- - Jakarta (Indonesia) Region
- - "jp"
- - Tokyo (Japan) Region
- - "sg"
- - SG (Singapore) Region
- - "de"
- - Frankfurt (Germany) Region
- - "us"
- - USA (AnyCast) Region
- - "us-east-1"
- - New York (USA) Region
- - "us-west-1"
- - Freemont (USA) Region
- - "nz"
- - Auckland (New Zealand) Region
-
---s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - "nl-ams"
- - Amsterdam, The Netherlands
- - "fr-par"
- - Paris, France
- - "pl-waw"
- - Warsaw, Poland
-
---s3-region
-
-Region to connect to. - the location where your bucket will be created
-and your data stored. Need bo be same with your endpoint.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: HuaweiOBS
-- Type: string
-- Required: false
-- Examples:
- - "af-south-1"
- - AF-Johannesburg
- - "ap-southeast-2"
- - AP-Bangkok
- - "ap-southeast-3"
- - AP-Singapore
- - "cn-east-3"
- - CN East-Shanghai1
- - "cn-east-2"
- - CN East-Shanghai2
- - "cn-north-1"
- - CN North-Beijing1
- - "cn-north-4"
- - CN North-Beijing4
- - "cn-south-1"
- - CN South-Guangzhou
- - "ap-southeast-1"
- - CN-Hong Kong
- - "sa-argentina-1"
- - LA-Buenos Aires1
- - "sa-peru-1"
- - LA-Lima1
- - "na-mexico-1"
- - LA-Mexico City1
- - "sa-chile-1"
- - LA-Santiago2
- - "sa-brazil-1"
- - LA-Sao Paulo1
- - "ru-northwest-2"
- - RU-Moscow2
-
---s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Cloudflare
-- Type: string
-- Required: false
-- Examples:
- - "auto"
- - R2 buckets are automatically distributed across Cloudflare's
- data centers for low latency.
-
---s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "cn-east-1"
- - The default endpoint - a good choice if you are unsure.
- - East China Region 1.
- - Needs location constraint cn-east-1.
- - "cn-east-2"
- - East China Region 2.
- - Needs location constraint cn-east-2.
- - "cn-north-1"
- - North China Region 1.
- - Needs location constraint cn-north-1.
- - "cn-south-1"
- - South China Region 1.
- - Needs location constraint cn-south-1.
- - "us-north-1"
- - North America Region.
- - Needs location constraint us-north-1.
- - "ap-southeast-1"
- - Southeast Asia Region 1.
- - Needs location constraint ap-southeast-1.
- - "ap-northeast-1"
- - Northeast Asia Region 1.
- - Needs location constraint ap-northeast-1.
-
---s3-region
-
-Region where your bucket will be created and your data stored.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: IONOS
-- Type: string
-- Required: false
-- Examples:
- - "de"
- - Frankfurt, Germany
- - "eu-central-2"
- - Berlin, Germany
- - "eu-south-2"
- - Logrono, Spain
-
---s3-region
-
-Region where your bucket will be created and your data stored.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Petabox
-- Type: string
-- Required: false
-- Examples:
- - "us-east-1"
- - US East (N. Virginia)
- - "eu-central-1"
- - Europe (Frankfurt)
- - "ap-southeast-1"
- - Asia Pacific (Singapore)
- - "me-south-1"
- - Middle East (Bahrain)
- - "sa-east-1"
- - South America (São Paulo)
-
---s3-region
-
-Region where your data stored.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Synology
-- Type: string
-- Required: false
-- Examples:
- - "eu-001"
- - Europe Region 1
- - "eu-002"
- - Europe Region 2
- - "us-001"
- - US Region 1
- - "us-002"
- - US Region 2
- - "tw-001"
- - Asia (Taiwan)
-
---s3-region
-
-Region to connect to.
-
-Leave blank if you are using an S3 clone and you don't have a region.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider:
- !AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,Synology,TencentCOS,HuaweiOBS,IDrive
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Use this if unsure.
- - Will use v4 signatures and an empty region.
- - "other-v2-signature"
- - Use this only if v4 signatures don't work.
- - E.g. pre Jewel/v10 CEPH.
-
--s3-endpoint
Endpoint for S3 API.
@@ -20773,713 +21911,6 @@ Properties:
- Type: string
- Required: false
---s3-endpoint
-
-Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - "eos-wuxi-1.cmecloud.cn"
- - The default endpoint - a good choice if you are unsure.
- - East China (Suzhou)
- - "eos-jinan-1.cmecloud.cn"
- - East China (Jinan)
- - "eos-ningbo-1.cmecloud.cn"
- - East China (Hangzhou)
- - "eos-shanghai-1.cmecloud.cn"
- - East China (Shanghai-1)
- - "eos-zhengzhou-1.cmecloud.cn"
- - Central China (Zhengzhou)
- - "eos-hunan-1.cmecloud.cn"
- - Central China (Changsha-1)
- - "eos-zhuzhou-1.cmecloud.cn"
- - Central China (Changsha-2)
- - "eos-guangzhou-1.cmecloud.cn"
- - South China (Guangzhou-2)
- - "eos-dongguan-1.cmecloud.cn"
- - South China (Guangzhou-3)
- - "eos-beijing-1.cmecloud.cn"
- - North China (Beijing-1)
- - "eos-beijing-2.cmecloud.cn"
- - North China (Beijing-2)
- - "eos-beijing-4.cmecloud.cn"
- - North China (Beijing-3)
- - "eos-huhehaote-1.cmecloud.cn"
- - North China (Huhehaote)
- - "eos-chengdu-1.cmecloud.cn"
- - Southwest China (Chengdu)
- - "eos-chongqing-1.cmecloud.cn"
- - Southwest China (Chongqing)
- - "eos-guiyang-1.cmecloud.cn"
- - Southwest China (Guiyang)
- - "eos-xian-1.cmecloud.cn"
- - Nouthwest China (Xian)
- - "eos-yunnan.cmecloud.cn"
- - Yunnan China (Kunming)
- - "eos-yunnan-2.cmecloud.cn"
- - Yunnan China (Kunming-2)
- - "eos-tianjin-1.cmecloud.cn"
- - Tianjin China (Tianjin)
- - "eos-jilin-1.cmecloud.cn"
- - Jilin China (Changchun)
- - "eos-hubei-1.cmecloud.cn"
- - Hubei China (Xiangyan)
- - "eos-jiangxi-1.cmecloud.cn"
- - Jiangxi China (Nanchang)
- - "eos-gansu-1.cmecloud.cn"
- - Gansu China (Lanzhou)
- - "eos-shanxi-1.cmecloud.cn"
- - Shanxi China (Taiyuan)
- - "eos-liaoning-1.cmecloud.cn"
- - Liaoning China (Shenyang)
- - "eos-hebei-1.cmecloud.cn"
- - Hebei China (Shijiazhuang)
- - "eos-fujian-1.cmecloud.cn"
- - Fujian China (Xiamen)
- - "eos-guangxi-1.cmecloud.cn"
- - Guangxi China (Nanning)
- - "eos-anhui-1.cmecloud.cn"
- - Anhui China (Huainan)
-
---s3-endpoint
-
-Endpoint for Arvan Cloud Object Storage (AOS) API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "s3.ir-thr-at1.arvanstorage.ir"
- - The default endpoint - a good choice if you are unsure.
- - Tehran Iran (Simin)
- - "s3.ir-tbz-sh1.arvanstorage.ir"
- - Tabriz Iran (Shahriar)
-
---s3-endpoint
-
-Endpoint for IBM COS S3 API.
-
-Specify if using an IBM COS On Premise.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: IBMCOS
-- Type: string
-- Required: false
-- Examples:
- - "s3.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Endpoint
- - "s3.dal.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Dallas Endpoint
- - "s3.wdc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Washington DC Endpoint
- - "s3.sjc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region San Jose Endpoint
- - "s3.private.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Private Endpoint
- - "s3.private.dal.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Dallas Private Endpoint
- - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Washington DC Private Endpoint
- - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region San Jose Private Endpoint
- - "s3.us-east.cloud-object-storage.appdomain.cloud"
- - US Region East Endpoint
- - "s3.private.us-east.cloud-object-storage.appdomain.cloud"
- - US Region East Private Endpoint
- - "s3.us-south.cloud-object-storage.appdomain.cloud"
- - US Region South Endpoint
- - "s3.private.us-south.cloud-object-storage.appdomain.cloud"
- - US Region South Private Endpoint
- - "s3.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Endpoint
- - "s3.fra.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Frankfurt Endpoint
- - "s3.mil.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Milan Endpoint
- - "s3.ams.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Amsterdam Endpoint
- - "s3.private.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Private Endpoint
- - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Frankfurt Private Endpoint
- - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Milan Private Endpoint
- - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Amsterdam Private Endpoint
- - "s3.eu-gb.cloud-object-storage.appdomain.cloud"
- - Great Britain Endpoint
- - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud"
- - Great Britain Private Endpoint
- - "s3.eu-de.cloud-object-storage.appdomain.cloud"
- - EU Region DE Endpoint
- - "s3.private.eu-de.cloud-object-storage.appdomain.cloud"
- - EU Region DE Private Endpoint
- - "s3.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Endpoint
- - "s3.tok.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Tokyo Endpoint
- - "s3.hkg.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional HongKong Endpoint
- - "s3.seo.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Seoul Endpoint
- - "s3.private.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Private Endpoint
- - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Tokyo Private Endpoint
- - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional HongKong Private Endpoint
- - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Seoul Private Endpoint
- - "s3.jp-tok.cloud-object-storage.appdomain.cloud"
- - APAC Region Japan Endpoint
- - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud"
- - APAC Region Japan Private Endpoint
- - "s3.au-syd.cloud-object-storage.appdomain.cloud"
- - APAC Region Australia Endpoint
- - "s3.private.au-syd.cloud-object-storage.appdomain.cloud"
- - APAC Region Australia Private Endpoint
- - "s3.ams03.cloud-object-storage.appdomain.cloud"
- - Amsterdam Single Site Endpoint
- - "s3.private.ams03.cloud-object-storage.appdomain.cloud"
- - Amsterdam Single Site Private Endpoint
- - "s3.che01.cloud-object-storage.appdomain.cloud"
- - Chennai Single Site Endpoint
- - "s3.private.che01.cloud-object-storage.appdomain.cloud"
- - Chennai Single Site Private Endpoint
- - "s3.mel01.cloud-object-storage.appdomain.cloud"
- - Melbourne Single Site Endpoint
- - "s3.private.mel01.cloud-object-storage.appdomain.cloud"
- - Melbourne Single Site Private Endpoint
- - "s3.osl01.cloud-object-storage.appdomain.cloud"
- - Oslo Single Site Endpoint
- - "s3.private.osl01.cloud-object-storage.appdomain.cloud"
- - Oslo Single Site Private Endpoint
- - "s3.tor01.cloud-object-storage.appdomain.cloud"
- - Toronto Single Site Endpoint
- - "s3.private.tor01.cloud-object-storage.appdomain.cloud"
- - Toronto Single Site Private Endpoint
- - "s3.seo01.cloud-object-storage.appdomain.cloud"
- - Seoul Single Site Endpoint
- - "s3.private.seo01.cloud-object-storage.appdomain.cloud"
- - Seoul Single Site Private Endpoint
- - "s3.mon01.cloud-object-storage.appdomain.cloud"
- - Montreal Single Site Endpoint
- - "s3.private.mon01.cloud-object-storage.appdomain.cloud"
- - Montreal Single Site Private Endpoint
- - "s3.mex01.cloud-object-storage.appdomain.cloud"
- - Mexico Single Site Endpoint
- - "s3.private.mex01.cloud-object-storage.appdomain.cloud"
- - Mexico Single Site Private Endpoint
- - "s3.sjc04.cloud-object-storage.appdomain.cloud"
- - San Jose Single Site Endpoint
- - "s3.private.sjc04.cloud-object-storage.appdomain.cloud"
- - San Jose Single Site Private Endpoint
- - "s3.mil01.cloud-object-storage.appdomain.cloud"
- - Milan Single Site Endpoint
- - "s3.private.mil01.cloud-object-storage.appdomain.cloud"
- - Milan Single Site Private Endpoint
- - "s3.hkg02.cloud-object-storage.appdomain.cloud"
- - Hong Kong Single Site Endpoint
- - "s3.private.hkg02.cloud-object-storage.appdomain.cloud"
- - Hong Kong Single Site Private Endpoint
- - "s3.par01.cloud-object-storage.appdomain.cloud"
- - Paris Single Site Endpoint
- - "s3.private.par01.cloud-object-storage.appdomain.cloud"
- - Paris Single Site Private Endpoint
- - "s3.sng01.cloud-object-storage.appdomain.cloud"
- - Singapore Single Site Endpoint
- - "s3.private.sng01.cloud-object-storage.appdomain.cloud"
- - Singapore Single Site Private Endpoint
-
---s3-endpoint
-
-Endpoint for IONOS S3 Object Storage.
-
-Specify the endpoint from the same region.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: IONOS
-- Type: string
-- Required: false
-- Examples:
- - "s3-eu-central-1.ionoscloud.com"
- - Frankfurt, Germany
- - "s3-eu-central-2.ionoscloud.com"
- - Berlin, Germany
- - "s3-eu-south-2.ionoscloud.com"
- - Logrono, Spain
-
---s3-endpoint
-
-Endpoint for Petabox S3 Object Storage.
-
-Specify the endpoint from the same region.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Petabox
-- Type: string
-- Required: true
-- Examples:
- - "s3.petabox.io"
- - US East (N. Virginia)
- - "s3.us-east-1.petabox.io"
- - US East (N. Virginia)
- - "s3.eu-central-1.petabox.io"
- - Europe (Frankfurt)
- - "s3.ap-southeast-1.petabox.io"
- - Asia Pacific (Singapore)
- - "s3.me-south-1.petabox.io"
- - Middle East (Bahrain)
- - "s3.sa-east-1.petabox.io"
- - South America (São Paulo)
-
---s3-endpoint
-
-Endpoint for Leviia Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Leviia
-- Type: string
-- Required: false
-- Examples:
- - "s3.leviia.com"
- - The default endpoint
- - Leviia
-
---s3-endpoint
-
-Endpoint for Liara Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Liara
-- Type: string
-- Required: false
-- Examples:
- - "storage.iran.liara.space"
- - The default endpoint
- - Iran
-
---s3-endpoint
-
-Endpoint for OSS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Alibaba
-- Type: string
-- Required: false
-- Examples:
- - "oss-accelerate.aliyuncs.com"
- - Global Accelerate
- - "oss-accelerate-overseas.aliyuncs.com"
- - Global Accelerate (outside mainland China)
- - "oss-cn-hangzhou.aliyuncs.com"
- - East China 1 (Hangzhou)
- - "oss-cn-shanghai.aliyuncs.com"
- - East China 2 (Shanghai)
- - "oss-cn-qingdao.aliyuncs.com"
- - North China 1 (Qingdao)
- - "oss-cn-beijing.aliyuncs.com"
- - North China 2 (Beijing)
- - "oss-cn-zhangjiakou.aliyuncs.com"
- - North China 3 (Zhangjiakou)
- - "oss-cn-huhehaote.aliyuncs.com"
- - North China 5 (Hohhot)
- - "oss-cn-wulanchabu.aliyuncs.com"
- - North China 6 (Ulanqab)
- - "oss-cn-shenzhen.aliyuncs.com"
- - South China 1 (Shenzhen)
- - "oss-cn-heyuan.aliyuncs.com"
- - South China 2 (Heyuan)
- - "oss-cn-guangzhou.aliyuncs.com"
- - South China 3 (Guangzhou)
- - "oss-cn-chengdu.aliyuncs.com"
- - West China 1 (Chengdu)
- - "oss-cn-hongkong.aliyuncs.com"
- - Hong Kong (Hong Kong)
- - "oss-us-west-1.aliyuncs.com"
- - US West 1 (Silicon Valley)
- - "oss-us-east-1.aliyuncs.com"
- - US East 1 (Virginia)
- - "oss-ap-southeast-1.aliyuncs.com"
- - Southeast Asia Southeast 1 (Singapore)
- - "oss-ap-southeast-2.aliyuncs.com"
- - Asia Pacific Southeast 2 (Sydney)
- - "oss-ap-southeast-3.aliyuncs.com"
- - Southeast Asia Southeast 3 (Kuala Lumpur)
- - "oss-ap-southeast-5.aliyuncs.com"
- - Asia Pacific Southeast 5 (Jakarta)
- - "oss-ap-northeast-1.aliyuncs.com"
- - Asia Pacific Northeast 1 (Japan)
- - "oss-ap-south-1.aliyuncs.com"
- - Asia Pacific South 1 (Mumbai)
- - "oss-eu-central-1.aliyuncs.com"
- - Central Europe 1 (Frankfurt)
- - "oss-eu-west-1.aliyuncs.com"
- - West Europe (London)
- - "oss-me-east-1.aliyuncs.com"
- - Middle East 1 (Dubai)
-
---s3-endpoint
-
-Endpoint for OBS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: HuaweiOBS
-- Type: string
-- Required: false
-- Examples:
- - "obs.af-south-1.myhuaweicloud.com"
- - AF-Johannesburg
- - "obs.ap-southeast-2.myhuaweicloud.com"
- - AP-Bangkok
- - "obs.ap-southeast-3.myhuaweicloud.com"
- - AP-Singapore
- - "obs.cn-east-3.myhuaweicloud.com"
- - CN East-Shanghai1
- - "obs.cn-east-2.myhuaweicloud.com"
- - CN East-Shanghai2
- - "obs.cn-north-1.myhuaweicloud.com"
- - CN North-Beijing1
- - "obs.cn-north-4.myhuaweicloud.com"
- - CN North-Beijing4
- - "obs.cn-south-1.myhuaweicloud.com"
- - CN South-Guangzhou
- - "obs.ap-southeast-1.myhuaweicloud.com"
- - CN-Hong Kong
- - "obs.sa-argentina-1.myhuaweicloud.com"
- - LA-Buenos Aires1
- - "obs.sa-peru-1.myhuaweicloud.com"
- - LA-Lima1
- - "obs.na-mexico-1.myhuaweicloud.com"
- - LA-Mexico City1
- - "obs.sa-chile-1.myhuaweicloud.com"
- - LA-Santiago2
- - "obs.sa-brazil-1.myhuaweicloud.com"
- - LA-Sao Paulo1
- - "obs.ru-northwest-2.myhuaweicloud.com"
- - RU-Moscow2
-
---s3-endpoint
-
-Endpoint for Scaleway Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - "s3.nl-ams.scw.cloud"
- - Amsterdam Endpoint
- - "s3.fr-par.scw.cloud"
- - Paris Endpoint
- - "s3.pl-waw.scw.cloud"
- - Warsaw Endpoint
-
---s3-endpoint
-
-Endpoint for StackPath Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: StackPath
-- Type: string
-- Required: false
-- Examples:
- - "s3.us-east-2.stackpathstorage.com"
- - US East Endpoint
- - "s3.us-west-1.stackpathstorage.com"
- - US West Endpoint
- - "s3.eu-central-1.stackpathstorage.com"
- - EU Endpoint
-
---s3-endpoint
-
-Endpoint for Google Cloud Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: GCS
-- Type: string
-- Required: false
-- Examples:
- - "https://storage.googleapis.com"
- - Google Cloud Storage endpoint
-
---s3-endpoint
-
-Endpoint for Storj Gateway.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Storj
-- Type: string
-- Required: false
-- Examples:
- - "gateway.storjshare.io"
- - Global Hosted Gateway
-
---s3-endpoint
-
-Endpoint for Synology C2 Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Synology
-- Type: string
-- Required: false
-- Examples:
- - "eu-001.s3.synologyc2.net"
- - EU Endpoint 1
- - "eu-002.s3.synologyc2.net"
- - EU Endpoint 2
- - "us-001.s3.synologyc2.net"
- - US Endpoint 1
- - "us-002.s3.synologyc2.net"
- - US Endpoint 2
- - "tw-001.s3.synologyc2.net"
- - TW Endpoint 1
-
---s3-endpoint
-
-Endpoint for Tencent COS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: TencentCOS
-- Type: string
-- Required: false
-- Examples:
- - "cos.ap-beijing.myqcloud.com"
- - Beijing Region
- - "cos.ap-nanjing.myqcloud.com"
- - Nanjing Region
- - "cos.ap-shanghai.myqcloud.com"
- - Shanghai Region
- - "cos.ap-guangzhou.myqcloud.com"
- - Guangzhou Region
- - "cos.ap-nanjing.myqcloud.com"
- - Nanjing Region
- - "cos.ap-chengdu.myqcloud.com"
- - Chengdu Region
- - "cos.ap-chongqing.myqcloud.com"
- - Chongqing Region
- - "cos.ap-hongkong.myqcloud.com"
- - Hong Kong (China) Region
- - "cos.ap-singapore.myqcloud.com"
- - Singapore Region
- - "cos.ap-mumbai.myqcloud.com"
- - Mumbai Region
- - "cos.ap-seoul.myqcloud.com"
- - Seoul Region
- - "cos.ap-bangkok.myqcloud.com"
- - Bangkok Region
- - "cos.ap-tokyo.myqcloud.com"
- - Tokyo Region
- - "cos.na-siliconvalley.myqcloud.com"
- - Silicon Valley Region
- - "cos.na-ashburn.myqcloud.com"
- - Virginia Region
- - "cos.na-toronto.myqcloud.com"
- - Toronto Region
- - "cos.eu-frankfurt.myqcloud.com"
- - Frankfurt Region
- - "cos.eu-moscow.myqcloud.com"
- - Moscow Region
- - "cos.accelerate.myqcloud.com"
- - Use Tencent COS Accelerate Endpoint
-
---s3-endpoint
-
-Endpoint for RackCorp Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "s3.rackcorp.com"
- - Global (AnyCast) Endpoint
- - "au.s3.rackcorp.com"
- - Australia (Anycast) Endpoint
- - "au-nsw.s3.rackcorp.com"
- - Sydney (Australia) Endpoint
- - "au-qld.s3.rackcorp.com"
- - Brisbane (Australia) Endpoint
- - "au-vic.s3.rackcorp.com"
- - Melbourne (Australia) Endpoint
- - "au-wa.s3.rackcorp.com"
- - Perth (Australia) Endpoint
- - "ph.s3.rackcorp.com"
- - Manila (Philippines) Endpoint
- - "th.s3.rackcorp.com"
- - Bangkok (Thailand) Endpoint
- - "hk.s3.rackcorp.com"
- - HK (Hong Kong) Endpoint
- - "mn.s3.rackcorp.com"
- - Ulaanbaatar (Mongolia) Endpoint
- - "kg.s3.rackcorp.com"
- - Bishkek (Kyrgyzstan) Endpoint
- - "id.s3.rackcorp.com"
- - Jakarta (Indonesia) Endpoint
- - "jp.s3.rackcorp.com"
- - Tokyo (Japan) Endpoint
- - "sg.s3.rackcorp.com"
- - SG (Singapore) Endpoint
- - "de.s3.rackcorp.com"
- - Frankfurt (Germany) Endpoint
- - "us.s3.rackcorp.com"
- - USA (AnyCast) Endpoint
- - "us-east-1.s3.rackcorp.com"
- - New York (USA) Endpoint
- - "us-west-1.s3.rackcorp.com"
- - Freemont (USA) Endpoint
- - "nz.s3.rackcorp.com"
- - Auckland (New Zealand) Endpoint
-
---s3-endpoint
-
-Endpoint for Qiniu Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "s3-cn-east-1.qiniucs.com"
- - East China Endpoint 1
- - "s3-cn-east-2.qiniucs.com"
- - East China Endpoint 2
- - "s3-cn-north-1.qiniucs.com"
- - North China Endpoint 1
- - "s3-cn-south-1.qiniucs.com"
- - South China Endpoint 1
- - "s3-us-north-1.qiniucs.com"
- - North America Endpoint 1
- - "s3-ap-southeast-1.qiniucs.com"
- - Southeast Asia Endpoint 1
- - "s3-ap-northeast-1.qiniucs.com"
- - Northeast Asia Endpoint 1
-
---s3-endpoint
-
-Endpoint for S3 API.
-
-Required when using an S3 clone.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider:
- !AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox
-- Type: string
-- Required: false
-- Examples:
- - "objects-us-east-1.dream.io"
- - Dream Objects endpoint
- - "syd1.digitaloceanspaces.com"
- - DigitalOcean Spaces Sydney 1
- - "sfo3.digitaloceanspaces.com"
- - DigitalOcean Spaces San Francisco 3
- - "fra1.digitaloceanspaces.com"
- - DigitalOcean Spaces Frankfurt 1
- - "nyc3.digitaloceanspaces.com"
- - DigitalOcean Spaces New York 3
- - "ams3.digitaloceanspaces.com"
- - DigitalOcean Spaces Amsterdam 3
- - "sgp1.digitaloceanspaces.com"
- - DigitalOcean Spaces Singapore 1
- - "localhost:8333"
- - SeaweedFS S3 localhost
- - "s3.us-east-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud US East 1 (Virginia)
- - "s3.us-west-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud US West 1 (California)
- - "s3.ap-southeast-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud AP Southeast 1 (Singapore)
- - "s3.wasabisys.com"
- - Wasabi US East 1 (N. Virginia)
- - "s3.us-east-2.wasabisys.com"
- - Wasabi US East 2 (N. Virginia)
- - "s3.us-central-1.wasabisys.com"
- - Wasabi US Central 1 (Texas)
- - "s3.us-west-1.wasabisys.com"
- - Wasabi US West 1 (Oregon)
- - "s3.ca-central-1.wasabisys.com"
- - Wasabi CA Central 1 (Toronto)
- - "s3.eu-central-1.wasabisys.com"
- - Wasabi EU Central 1 (Amsterdam)
- - "s3.eu-central-2.wasabisys.com"
- - Wasabi EU Central 2 (Frankfurt)
- - "s3.eu-west-1.wasabisys.com"
- - Wasabi EU West 1 (London)
- - "s3.eu-west-2.wasabisys.com"
- - Wasabi EU West 2 (Paris)
- - "s3.ap-northeast-1.wasabisys.com"
- - Wasabi AP Northeast 1 (Tokyo) endpoint
- - "s3.ap-northeast-2.wasabisys.com"
- - Wasabi AP Northeast 2 (Osaka) endpoint
- - "s3.ap-southeast-1.wasabisys.com"
- - Wasabi AP Southeast 1 (Singapore)
- - "s3.ap-southeast-2.wasabisys.com"
- - Wasabi AP Southeast 2 (Sydney)
- - "storage.iran.liara.space"
- - Liara Iran endpoint
- - "s3.ir-thr-at1.arvanstorage.ir"
- - ArvanCloud Tehran Iran (Simin) endpoint
- - "s3.ir-tbz-sh1.arvanstorage.ir"
- - ArvanCloud Tabriz Iran (Shahriar) endpoint
-
--s3-location-constraint
Location constraint - must be set to match the Region.
@@ -21545,275 +21976,6 @@ Properties:
- "us-gov-west-1"
- AWS GovCloud (US) Region
---s3-location-constraint
-
-Location constraint - must match endpoint.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - "wuxi1"
- - East China (Suzhou)
- - "jinan1"
- - East China (Jinan)
- - "ningbo1"
- - East China (Hangzhou)
- - "shanghai1"
- - East China (Shanghai-1)
- - "zhengzhou1"
- - Central China (Zhengzhou)
- - "hunan1"
- - Central China (Changsha-1)
- - "zhuzhou1"
- - Central China (Changsha-2)
- - "guangzhou1"
- - South China (Guangzhou-2)
- - "dongguan1"
- - South China (Guangzhou-3)
- - "beijing1"
- - North China (Beijing-1)
- - "beijing2"
- - North China (Beijing-2)
- - "beijing4"
- - North China (Beijing-3)
- - "huhehaote1"
- - North China (Huhehaote)
- - "chengdu1"
- - Southwest China (Chengdu)
- - "chongqing1"
- - Southwest China (Chongqing)
- - "guiyang1"
- - Southwest China (Guiyang)
- - "xian1"
- - Nouthwest China (Xian)
- - "yunnan"
- - Yunnan China (Kunming)
- - "yunnan2"
- - Yunnan China (Kunming-2)
- - "tianjin1"
- - Tianjin China (Tianjin)
- - "jilin1"
- - Jilin China (Changchun)
- - "hubei1"
- - Hubei China (Xiangyan)
- - "jiangxi1"
- - Jiangxi China (Nanchang)
- - "gansu1"
- - Gansu China (Lanzhou)
- - "shanxi1"
- - Shanxi China (Taiyuan)
- - "liaoning1"
- - Liaoning China (Shenyang)
- - "hebei1"
- - Hebei China (Shijiazhuang)
- - "fujian1"
- - Fujian China (Xiamen)
- - "guangxi1"
- - Guangxi China (Nanning)
- - "anhui1"
- - Anhui China (Huainan)
-
---s3-location-constraint
-
-Location constraint - must match endpoint.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "ir-thr-at1"
- - Tehran Iran (Simin)
- - "ir-tbz-sh1"
- - Tabriz Iran (Shahriar)
-
---s3-location-constraint
-
-Location constraint - must match endpoint when using IBM Cloud Public.
-
-For on-prem COS, do not make a selection from this list, hit enter.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: IBMCOS
-- Type: string
-- Required: false
-- Examples:
- - "us-standard"
- - US Cross Region Standard
- - "us-vault"
- - US Cross Region Vault
- - "us-cold"
- - US Cross Region Cold
- - "us-flex"
- - US Cross Region Flex
- - "us-east-standard"
- - US East Region Standard
- - "us-east-vault"
- - US East Region Vault
- - "us-east-cold"
- - US East Region Cold
- - "us-east-flex"
- - US East Region Flex
- - "us-south-standard"
- - US South Region Standard
- - "us-south-vault"
- - US South Region Vault
- - "us-south-cold"
- - US South Region Cold
- - "us-south-flex"
- - US South Region Flex
- - "eu-standard"
- - EU Cross Region Standard
- - "eu-vault"
- - EU Cross Region Vault
- - "eu-cold"
- - EU Cross Region Cold
- - "eu-flex"
- - EU Cross Region Flex
- - "eu-gb-standard"
- - Great Britain Standard
- - "eu-gb-vault"
- - Great Britain Vault
- - "eu-gb-cold"
- - Great Britain Cold
- - "eu-gb-flex"
- - Great Britain Flex
- - "ap-standard"
- - APAC Standard
- - "ap-vault"
- - APAC Vault
- - "ap-cold"
- - APAC Cold
- - "ap-flex"
- - APAC Flex
- - "mel01-standard"
- - Melbourne Standard
- - "mel01-vault"
- - Melbourne Vault
- - "mel01-cold"
- - Melbourne Cold
- - "mel01-flex"
- - Melbourne Flex
- - "tor01-standard"
- - Toronto Standard
- - "tor01-vault"
- - Toronto Vault
- - "tor01-cold"
- - Toronto Cold
- - "tor01-flex"
- - Toronto Flex
-
---s3-location-constraint
-
-Location constraint - the location where your bucket will be located and
-your data stored.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "global"
- - Global CDN Region
- - "au"
- - Australia (All locations)
- - "au-nsw"
- - NSW (Australia) Region
- - "au-qld"
- - QLD (Australia) Region
- - "au-vic"
- - VIC (Australia) Region
- - "au-wa"
- - Perth (Australia) Region
- - "ph"
- - Manila (Philippines) Region
- - "th"
- - Bangkok (Thailand) Region
- - "hk"
- - HK (Hong Kong) Region
- - "mn"
- - Ulaanbaatar (Mongolia) Region
- - "kg"
- - Bishkek (Kyrgyzstan) Region
- - "id"
- - Jakarta (Indonesia) Region
- - "jp"
- - Tokyo (Japan) Region
- - "sg"
- - SG (Singapore) Region
- - "de"
- - Frankfurt (Germany) Region
- - "us"
- - USA (AnyCast) Region
- - "us-east-1"
- - New York (USA) Region
- - "us-west-1"
- - Freemont (USA) Region
- - "nz"
- - Auckland (New Zealand) Region
-
---s3-location-constraint
-
-Location constraint - must be set to match the Region.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "cn-east-1"
- - East China Region 1
- - "cn-east-2"
- - East China Region 2
- - "cn-north-1"
- - North China Region 1
- - "cn-south-1"
- - South China Region 1
- - "us-north-1"
- - North America Region 1
- - "ap-southeast-1"
- - Southeast Asia Region 1
- - "ap-northeast-1"
- - Northeast Asia Region 1
-
---s3-location-constraint
-
-Location constraint - must be set to match the Region.
-
-Leave blank if not sure. Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider:
- !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox
-- Type: string
-- Required: false
-
--s3-acl
Canned ACL used when creating buckets and storing or copying objects.
@@ -21954,157 +22116,14 @@ Properties:
- "GLACIER_IR"
- Glacier Instant Retrieval storage class
---s3-storage-class
-
-The storage class to use when storing new objects in OSS.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Alibaba
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "GLACIER"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
---s3-storage-class
-
-The storage class to use when storing new objects in ChinaMobile.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "GLACIER"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
---s3-storage-class
-
-The storage class to use when storing new objects in Liara
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Liara
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
-
---s3-storage-class
-
-The storage class to use when storing new objects in ArvanCloud.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
-
---s3-storage-class
-
-The storage class to use when storing new objects in Tencent COS.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: TencentCOS
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "ARCHIVE"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
---s3-storage-class
-
-The storage class to use when storing new objects in S3.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default.
- - "STANDARD"
- - The Standard class for any upload.
- - Suitable for on-demand content like streaming or CDN.
- - Available in all regions.
- - "GLACIER"
- - Archived storage.
- - Prices are lower, but it needs to be restored first to be
- accessed.
- - Available in FR-PAR and NL-AMS regions.
- - "ONEZONE_IA"
- - One Zone - Infrequent Access.
- - A good choice for storing secondary backup copies or easily
- re-creatable data.
- - Available in the FR-PAR region only.
-
---s3-storage-class
-
-The storage class to use when storing new objects in Qiniu.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
- - "LINE"
- - Infrequent access storage mode
- - "GLACIER"
- - Archive storage mode
- - "DEEP_ARCHIVE"
- - Deep archive storage mode
-
Advanced options
Here are the Advanced options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China
-Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS,
-IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease,
-Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology,
-Tencent COS, Qiniu and Wasabi).
+Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
+Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
+IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox,
+RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology,
+TencentCOS, Wasabi, Qiniu and others).
--s3-bucket-acl
@@ -22594,7 +22613,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_S3_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
--s3-memory-pool-flush-time
@@ -22826,6 +22845,55 @@ Properties:
- Type: string
- Required: false
+--s3-use-already-exists
+
+Set if rclone should report BucketAlreadyExists errors on bucket
+creation.
+
+At some point during the evolution of the s3 protocol, AWS started
+returning an AlreadyOwnedByYou error when attempting to create a bucket
+that the user already owned, rather than a BucketAlreadyExists error.
+
+Unfortunately exactly what has been implemented by s3 clones is a little
+inconsistent: some return AlreadyOwnedByYou, some return
+BucketAlreadyExists and some return no error at all.
+
+This is important to rclone because it ensures the bucket exists by
+creating it on quite a lot of operations (unless --s3-no-check-bucket is
+used).
+
+If rclone knows the provider can return AlreadyOwnedByYou or returns no
+error then it can report BucketAlreadyExists errors when the user
+attempts to create a bucket not owned by them. Otherwise rclone ignores
+the BucketAlreadyExists error which can lead to confusion.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+Properties:
+
+- Config: use_already_exists
+- Env Var: RCLONE_S3_USE_ALREADY_EXISTS
+- Type: Tristate
+- Default: unset
+
+--s3-use-multipart-uploads
+
+Set if rclone should use multipart uploads.
+
+You can change this if you want to disable the use of multipart uploads.
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+Properties:
+
+- Config: use_multipart_uploads
+- Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
+- Type: Tristate
+- Default: unset
+
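+For example, to force rclone to upload without multipart uploads on a
+provider it doesn't recognise (an illustrative command - the remote
+name and paths are placeholders):
+
+    rclone copy --s3-use-multipart-uploads=false /path/to/files remote:bucket
+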
Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case
@@ -23319,6 +23387,13 @@ secret key. These can be retrieved by creating an HMAC key.
secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
+Note that --s3-versions does not work with GCS when it needs to do
+directory paging. Rclone will return the error:
+
+ s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker
+
+This is Google bug #312292516.
+
DigitalOcean Spaces
Spaces is an S3-interoperable object storage service from cloud provider
@@ -24213,6 +24288,29 @@ Your config should end up looking a bit like this:
endpoint = s3.rackcorp.com
location_constraint = au-nsw
+Rclone Serve S3
+
+Rclone can serve any remote over the S3 protocol. For details see the
+rclone serve s3 documentation.
+
+For example, to serve remote:path over s3, run the server like this:
+
+ rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+
+This will be compatible with an rclone remote which is defined like
+this:
+
+ [serves3]
+ type = s3
+ provider = Rclone
+ endpoint = http://127.0.0.1:8080/
+ access_key_id = ACCESS_KEY_ID
+ secret_access_key = SECRET_ACCESS_KEY
+ use_multipart_uploads = false
+
+Note that setting use_multipart_uploads = false (as in the example
+above) is to work around a bug which will be fixed in due course.
+
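+Once the server is running, the serves3 remote above can be used like
+any other remote, for example (an illustrative command):
+
+    rclone lsf serves3:
+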
Scaleway
Scaleway The Object Storage platform allows you to store anything from
@@ -25087,6 +25185,135 @@ This will leave the config file looking like this.
server_side_encryption =
storage_class =
+Linode
+
+Here is an example of making a Linode Object Storage configuration.
+First run:
+
+ rclone config
+
+This will guide you through an interactive setup process.
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter name for new remote.
+ name> linode
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
+ \ (s3)
+ [snip]
+ Storage> s3
+
+ Option provider.
+ Choose your S3 provider.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ [snip]
+ XX / Linode Object Storage
+ \ (Linode)
+ [snip]
+ provider> Linode
+
+ Option env_auth.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ Only applies if access_key_id and secret_access_key is blank.
+ Choose a number from below, or type in your own boolean value (true or false).
+ Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+ env_auth>
+
+ Option access_key_id.
+ AWS Access Key ID.
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ access_key_id> ACCESS_KEY
+
+ Option secret_access_key.
+ AWS Secret Access Key (password).
+ Leave blank for anonymous access or runtime credentials.
+ Enter a value. Press Enter to leave empty.
+ secret_access_key> SECRET_ACCESS_KEY
+
+ Option endpoint.
+ Endpoint for Linode Object Storage API.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ 1 / Atlanta, GA (USA), us-southeast-1
+ \ (us-southeast-1.linodeobjects.com)
+ 2 / Chicago, IL (USA), us-ord-1
+ \ (us-ord-1.linodeobjects.com)
+ 3 / Frankfurt (Germany), eu-central-1
+ \ (eu-central-1.linodeobjects.com)
+ 4 / Milan (Italy), it-mil-1
+ \ (it-mil-1.linodeobjects.com)
+ 5 / Newark, NJ (USA), us-east-1
+ \ (us-east-1.linodeobjects.com)
+ 6 / Paris (France), fr-par-1
+ \ (fr-par-1.linodeobjects.com)
+ 7 / Seattle, WA (USA), us-sea-1
+ \ (us-sea-1.linodeobjects.com)
+ 8 / Singapore ap-south-1
+ \ (ap-south-1.linodeobjects.com)
+ 9 / Stockholm (Sweden), se-sto-1
+ \ (se-sto-1.linodeobjects.com)
+ 10 / Washington, DC, (USA), us-iad-1
+ \ (us-iad-1.linodeobjects.com)
+ endpoint> 3
+
+ Option acl.
+ Canned ACL used when creating buckets and storing or copying objects.
+ This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+ For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+ Note that this ACL is applied when server-side copying objects as S3
+ doesn't copy the ACL from the source but rather writes a fresh one.
+ If the acl is an empty string then no X-Amz-Acl: header is added and
+ the default (private) will be used.
+ Choose a number from below, or type in your own value.
+ Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+ [snip]
+ acl>
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: s3
+ - provider: Linode
+ - access_key_id: ACCESS_KEY
+ - secret_access_key: SECRET_ACCESS_KEY
+ - endpoint: eu-central-1.linodeobjects.com
+ Keep this "linode" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+This will leave the config file looking like this.
+
+ [linode]
+ type = s3
+ provider = Linode
+ access_key_id = ACCESS_KEY
+ secret_access_key = SECRET_ACCESS_KEY
+ endpoint = eu-central-1.linodeobjects.com
+
ArvanCloud
ArvanCloud ArvanCloud Object Storage goes beyond the limited traditional
@@ -25785,9 +26012,9 @@ this remote y/e/d> y
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
- ### Modified time
+ ### Modification times
- The modified time is stored as metadata on the object as
+ The modification time is stored as metadata on the object as
`X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01
in the Backblaze standard. Other tools should be able to use this as
a modified time.
@@ -26172,7 +26399,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
- Config: upload_concurrency
- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
- Type: int
- - Default: 16
+ - Default: 4
#### --b2-disable-checksum
@@ -26252,6 +26479,37 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
- Type: bool
- Default: false
+ #### --b2-lifecycle
+
+ Set the number of days deleted files should be kept when creating a bucket.
+
+ On bucket creation, this parameter is used to create a lifecycle rule
+ for the entire bucket.
+
+ If lifecycle is 0 (the default) it does not create a lifecycle rule so
+ the default B2 behaviour applies. This is to create versions of files
+ on delete and overwrite and to keep them indefinitely.
+
+ If lifecycle is >0 then it creates a single rule setting the number of
+ days before a file that is deleted or overwritten is deleted
+ permanently. This is known as daysFromHidingToDeleting in the b2 docs.
+
+ The minimum value for this parameter is 1 day.
+
+ You can also enable hard_delete in the config, which will mean
+ deletions won't cause versions but overwrites will still cause
+ versions to be made.
+
+ See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket creation.
+
+
+ Properties:
+
+ - Config: lifecycle
+ - Env Var: RCLONE_B2_LIFECYCLE
+ - Type: int
+ - Default: 0
+
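+ For example, to create a bucket where deleted and overwritten files
+ are removed permanently after 30 days (an illustrative command - the
+ bucket name is a placeholder):
+
+     rclone mkdir --b2-lifecycle 30 b2:bucket
+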
#### --b2-encoding
The encoding for the backend.
@@ -26262,9 +26520,76 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+ ## Backend commands
+
+ Here are the commands specific to the b2 backend.
+
+ Run them with
+
+ rclone backend COMMAND remote:
+
+ The help below will explain what arguments each command takes.
+
+ See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+ info on how to pass options and arguments.
+
+ These can be run on a running backend using the rc command
+ [backend/command](https://rclone.org/rc/#backend-command).
+
+ ### lifecycle
+
+ Read or set the lifecycle for a bucket
+
+ rclone backend lifecycle remote: [options] [<arguments>+]
+
+ This command can be used to read or set the lifecycle for a bucket.
+
+ Usage Examples:
+
+ To show the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket
+
+ This will dump something like this showing the lifecycle rules.
+
+ [
+ {
+ "daysFromHidingToDeleting": 1,
+ "daysFromUploadingToHiding": null,
+ "fileNamePrefix": ""
+ }
+ ]
+
+ If there are no lifecycle rules (the default) then it will just return [].
+
+ To reset the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
+ rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
+
+ This will run and then print the new lifecycle rules as above.
+
+ Rclone only lets you set lifecycles for the whole bucket with the
+ fileNamePrefix = "".
+
+ You can't disable versioning with B2. The best you can do is to set
+ the daysFromHidingToDeleting to 1 day. You can enable hard_delete in
+ the config, which will mean deletions won't cause versions but
+ overwrites will still cause versions to be made.
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
+
+ See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
+
+
+ Options:
+
+ - "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
+ - "daysFromUploadingToHiding": This many days after uploading a file is hidden
+
## Limitations
@@ -26416,7 +26741,7 @@ b) Edit this remote
c) Delete this remote y/e/d> y
- ### Modified time and hashes
+ ### Modification times and hashes
Box allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -26659,7 +26984,7 @@ c) Delete this remote y/e/d> y
Impersonate this user ID when using a service account.
- Settng this flag allows rclone, when using a JWT service account, to
+ Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can found for
@@ -26687,7 +27012,7 @@ c) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
@@ -27579,7 +27904,7 @@ this remote y/e/d> y
between source and target are not found.
- ### Modified time
+ ### Modification times
Chunker stores modification times using the wrapped remote so support
depends on that. For a small non-chunked file the chunker overlay simply
@@ -27886,7 +28211,7 @@ this remote y/e/d> y
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
- ### Modified time and hashes
+ ### Modification times and hashes
ShareFile allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -28081,7 +28406,7 @@ this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -28430,7 +28755,7 @@ subdir/file2.txt.bin 58 subdir/subsubdir/file4.txt.bin 55 file1.txt.bin
`1/12/qgm4avr35m5loi1th53ato71v0`
- ### Modified time and hashes
+ ### Modification times and hashes
Crypt stores modification times using the underlying remote so support
depends on that.
@@ -28737,7 +29062,7 @@ subdir/file2.txt.bin 58 subdir/subsubdir/file4.txt.bin 55 file1.txt.bin
The initial nonce is generated from the operating systems crypto
strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
- The chance of a nonce being re-used is minuscule. If you wrote an
+ The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of re-using a nonce.
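As a rough back-of-the-envelope check (assuming the crypt format's
64 KiB chunks and 24 byte / 192 bit nonces): an exabyte is about
1.5×10¹³ chunks, and the birthday approximation gives a collision
probability of roughly

    (1.5×10¹³)² / 2¹⁹³ ≈ 2×10⁻³²

which agrees with the figure above.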
@@ -29162,7 +29487,7 @@ s) Delete this remote y/e/d> y
A leading `/` for a Dropbox personal account will do nothing, but it
will take an extra HTTP transaction so it should be avoided.
- ### Modified time and Hashes
+ ### Modification times and hashes
Dropbox supports modified times, but the only way to set a
modification time is to re-upload the file.
@@ -29408,6 +29733,30 @@ s) Delete this remote y/e/d> y
- Type: bool
- Default: false
+ #### --dropbox-pacer-min-sleep
+
+ Minimum time to sleep between API calls.
+
+ Properties:
+
+ - Config: pacer_min_sleep
+ - Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
+ - Type: Duration
+ - Default: 10ms
+
+ #### --dropbox-encoding
+
+ The encoding for the backend.
+
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+ Properties:
+
+ - Config: encoding
+ - Env Var: RCLONE_DROPBOX_ENCODING
+ - Type: Encoding
+ - Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
+
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@@ -29494,30 +29843,6 @@ s) Delete this remote y/e/d> y
- Type: Duration
- Default: 10m0s
- #### --dropbox-pacer-min-sleep
-
- Minimum time to sleep between API calls.
-
- Properties:
-
- - Config: pacer_min_sleep
- - Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
- - Type: Duration
- - Default: 10ms
-
- #### --dropbox-encoding
-
- The encoding for the backend.
-
- See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
- Properties:
-
- - Config: encoding
- - Env Var: RCLONE_DROPBOX_ENCODING
- - Type: MultiEncoder
- - Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
-
## Limitations
@@ -29642,7 +29967,7 @@ xxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxx -------------------- y) Yes this is OK
rclone copy /home/source remote:backup
- ### Modified time and hashes
+ ### Modification times and hashes
The Enterprise File Fabric allows modification times to be set on
files accurate to 1 second. These will be used to detect whether
@@ -29807,7 +30132,7 @@ $ rclone lsf --dirs-only -Fip --csv filefabric: 120673758,Burnt PDFs/
- Config: encoding
- Env Var: RCLONE_FILEFABRIC_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
@@ -30216,7 +30541,7 @@ this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_FTP_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,Del,Ctl,RightSpace,Dot
- Examples:
- "Asterisk,Ctl,Dot,Slash"
@@ -30259,7 +30584,7 @@ this remote y/e/d> y
The `ftp_proxy` environment variable is not currently supported.
- #### Modified time
+ ### Modification times
File modification time (timestamps) is supported to 1 second resolution
for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
@@ -30466,7 +30791,7 @@ c) Delete this remote y/e/d> y
Note that the last of these is for setting custom metadata in the form
`--header-upload "x-goog-meta-key: value"`
- ### Modification time
+ ### Modification times
Google Cloud Storage stores md5sum natively.
Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
@@ -30915,7 +31240,7 @@ c) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_GCS_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
@@ -31012,6 +31337,8 @@ this remote y/e/d> y
scopes are defined
here](https://developers.google.com/drive/v3/web/about-auth).
+ A comma-separated list is allowed e.g. `drive.readonly,drive.file`.
+
The scopes are
#### drive
@@ -31225,10 +31552,14 @@ rclone lsjson -vv -R --checkers=6 gdrive:folder
- without `--fast-list`: 22:05 min
- with `--fast-list`: 58s
- ### Modified time
+ ### Modification times and hashes
Google drive stores modification times accurate to 1 ms.
+ Hash algorithms MD5, SHA1 and SHA256 are supported. Note, however,
+ that a small fraction of files uploaded may not have SHA1 or SHA256
+ hashes especially if they were uploaded before 2018.
+
### Restricted filename characters
Only Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8),
@@ -31448,7 +31779,7 @@ rclone lsjson -vv -R --checkers=6 gdrive:folder
#### --drive-scope
- Scope that rclone should use when requesting access from drive.
+ Comma separated list of scopes that rclone should use when requesting access from drive.
Properties:
@@ -31636,15 +31967,40 @@ rclone lsjson -vv -R --checkers=6 gdrive:folder
- Type: bool
- Default: false
+ #### --drive-show-all-gdocs
+
+ Show all Google Docs including non-exportable ones in listings.
+
+ If you try a server side copy on a Google Form without this flag, you
+ will get this error:
+
+ No export formats found for "application/vnd.google-apps.form"
+
+ However adding this flag will allow the form to be server side copied.
+
+ Note that rclone doesn't add extensions to the Google Docs file names
+ in this mode.
+
+ Do **not** use this flag when trying to download Google Docs - rclone
+ will fail to download them.
+
+
+ Properties:
+
+ - Config: show_all_gdocs
+ - Env Var: RCLONE_DRIVE_SHOW_ALL_GDOCS
+ - Type: bool
+ - Default: false
+
#### --drive-skip-checksum-gphotos
- Skip MD5 checksum on Google photos and videos only.
+ Skip checksums on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
- blank MD5 checksum.
+ blank checksum.
Google photos are identified by being in the "photos" space.
@@ -32098,6 +32454,98 @@ rclone lsjson -vv -R --checkers=6 gdrive:folder
- Type: bool
- Default: true
+ #### --drive-metadata-owner
+
+ Control whether owner should be read or written in metadata.
+
+ Owner is a standard part of the file metadata so is easy to read. But it
+ isn't always desirable to set the owner from the metadata.
+
+ Note that you can't set the owner on Shared Drives, and that setting
+ ownership will generate an email to the new owner (this can't be
+ disabled), and you can't transfer ownership to someone outside your
+ organization.
+
+
+ Properties:
+
+ - Config: metadata_owner
+ - Env Var: RCLONE_DRIVE_METADATA_OWNER
+ - Type: Bits
+ - Default: read
+ - Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
+ #### --drive-metadata-permissions
+
+ Control whether permissions should be read or written in metadata.
+
+ Reading permissions metadata from files can be done quickly, but it
+ isn't always desirable to set the permissions from the metadata.
+
+ Note that rclone drops any inherited permissions on Shared Drives and
+ any owner permission on My Drives as these are duplicated in the owner
+ metadata.
+
+
+ Properties:
+
+ - Config: metadata_permissions
+ - Env Var: RCLONE_DRIVE_METADATA_PERMISSIONS
+ - Type: Bits
+ - Default: off
+ - Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
+ #### --drive-metadata-labels
+
+ Control whether labels should be read or written in metadata.
+
+ Reading labels metadata from files takes an extra API transaction and
+ will slow down listings. It isn't always desirable to set the labels
+ from the metadata.
+
+ The format of labels is documented in the drive API documentation at
+ https://developers.google.com/drive/api/reference/rest/v3/Label -
+ rclone just provides a JSON dump of this format.
+
+ When setting labels, the label and fields must already exist - rclone
+ will not create them. This means that if you are transferring labels
+ from two different accounts you will have to create the labels in
+ advance and use the metadata mapper to translate the IDs between the
+ two accounts.
+
+
+ Properties:
+
+ - Config: metadata_labels
+ - Env Var: RCLONE_DRIVE_METADATA_LABELS
+ - Type: Bits
+ - Default: off
+ - Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
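+ For example, to include labels in the output when examining a file's
+ metadata (an illustrative command - the path is a placeholder):
+
+     rclone lsjson --metadata --drive-metadata-labels read drive:path/to/file
+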
#### --drive-encoding
The encoding for the backend.
@@ -32108,7 +32556,7 @@ rclone lsjson -vv -R --checkers=6 gdrive:folder
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: InvalidUtf8
#### --drive-env-auth
@@ -32129,6 +32577,29 @@ rclone lsjson -vv -R --checkers=6 gdrive:folder
- "true"
- Get GCP IAM credentials from the environment (env vars or IAM).
+ ### Metadata
+
+ User metadata is stored in the properties field of the drive object.
+
+ Here are the possible system metadata items for the drive backend.
+
+ | Name | Help | Type | Example | Read Only |
+ |------|------|------|---------|-----------|
+ | btime | Time of file birth (creation) with mS accuracy. Note that this is only writable on fresh uploads - it can't be written for updates. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+ | content-type | The MIME type of the file. | string | text/plain | N |
+ | copy-requires-writer-permission | Whether the options to copy, print, or download this file, should be disabled for readers and commenters. | boolean | true | N |
+ | description | A short description of the file. | string | Contract for signing | N |
+ | folder-color-rgb | The color for a folder or a shortcut to a folder as an RGB hex string. | string | 881133 | N |
+ | labels | Labels attached to this file in a JSON dump of Google drive format. Enable with --drive-metadata-labels. | JSON | [] | N |
+ | mtime | Time of last modification with mS accuracy. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+ | owner | The owner of the file. Usually an email address. Enable with --drive-metadata-owner. | string | user@example.com | N |
+ | permissions | Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions. | JSON | {} | N |
+ | starred | Whether the user has starred the file. | boolean | false | N |
+ | viewed-by-me | Whether the file has been viewed by this user. | boolean | true | **Y** |
+ | writers-can-share | Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives. | boolean | false | N |
+
+ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
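+ For example, to set the description of a file as it is copied in (an
+ illustrative command - the paths and text are placeholders):
+
+     rclone copyto -M --metadata-set description="Contract for signing" /tmp/contract.pdf drive:contracts/contract.pdf
+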
## Backend commands
Here are the commands specific to the drive backend.
@@ -32392,6 +32863,11 @@ rclone lsjson -vv -R --checkers=6 gdrive:folder
approximately 1 hour) and/or not using --fast-list both seem to be
effective in preventing the problem.
+ ### SHA1 or SHA256 hashes may be missing
+
+ All files have MD5 hashes, but a small fraction of files uploaded may
+ not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.
+
## Making your own client_id
When you use rclone with Google drive in its default configuration you
@@ -32690,7 +33166,63 @@ will count towards storage in your Google Account.
Properties:
- - Config: encoding - Env Var: RCLONE_GPHOTOS_ENCODING - Type: MultiEncoder - Default: Slash,CrLf,InvalidUtf8,Dot
+ - Config: encoding - Env Var: RCLONE_GPHOTOS_ENCODING - Type: Encoding - Default: Slash,CrLf,InvalidUtf8,Dot
+
+ #### --gphotos-batch-mode
+
+ Upload file batching sync|async|off.
+
+ This sets the batch mode used by rclone.
+
+ This has 3 possible values
+
+ - off - no batching
+ - sync - batch uploads and check completion (default)
+ - async - batch upload and don't check completion
+
+ Rclone will close any outstanding batches when it exits which may cause a delay on quit.
+
+ Properties:
+
+ - Config: batch_mode
+ - Env Var: RCLONE_GPHOTOS_BATCH_MODE
+ - Type: string
+ - Default: "sync"
+
+ #### --gphotos-batch-size
+
+ Max number of files in upload batch.
+
+ This sets the batch size of files to upload. It has to be less than 50.
+
+ By default this is 0 which means rclone will calculate the batch size depending on the setting of batch_mode.
+
+ - batch_mode: async - default batch_size is 50
+ - batch_mode: sync - default batch_size is the same as --transfers
+ - batch_mode: off - not in use
+
+ Rclone will close any outstanding batches when it exits which may cause a delay on quit.
+
+ Setting this is a great idea if you are uploading lots of small files as it will make them a lot quicker. You can use --transfers 32 to maximise throughput.
+
+ Properties:
+
+ - Config: batch_size
+ - Env Var: RCLONE_GPHOTOS_BATCH_SIZE
+ - Type: int
+ - Default: 0
+
+ #### --gphotos-batch-timeout
+
+ Max time to allow an idle upload batch before uploading.
+
+ If an upload batch is idle for more than this long then it will be uploaded.
+
+ The default for this is 0 which means rclone will choose a sensible default based on the batch_mode in use.
+
+ - batch_mode: async - default batch_timeout is 10s
+ - batch_mode: sync - default batch_timeout is 1s
+ - batch_mode: off - not in use
+
+ Properties:
+
+ - Config: batch_timeout
+ - Env Var: RCLONE_GPHOTOS_BATCH_TIMEOUT
+ - Type: Duration
+ - Default: 0s
+
+ #### --gphotos-batch-commit-timeout
+
+ Max time to wait for a batch to finish committing
+
+ Properties:
+
+ - Config: batch_commit_timeout
+ - Env Var: RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT
+ - Type: Duration
+ - Default: 10m0s
## Limitations
@@ -32719,7 +33251,7 @@ will count towards storage in your Google Account.
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practise this shouldn't cause
too many problems.
- ### Modified time
+ ### Modification times
The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.
@@ -33128,7 +33660,7 @@ docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p
You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data
uploaded will be lost.)
- ### Modified time
+ ### Modification times
Time accurate to 1 second is stored.
@@ -33158,16 +33690,16 @@ docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p
#### --hdfs-namenode
- Hadoop name node and port.
+ Hadoop name nodes and ports.
- E.g. "namenode:8020" to connect to host namenode at port 8020.
+ E.g. "namenode-1:8020,namenode-2:8020,..." to connect to host namenodes at port 8020.
Properties:
- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
- - Type: string
- - Required: true
+ - Type: CommaSepList
+ - Default:
#### --hdfs-username
@@ -33231,7 +33763,7 @@ docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p
- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
@@ -33337,7 +33869,7 @@ Delete this remote y/e/d> y
the process is very similar to the process of initial setup exemplified before.
- ### Modified time and hashes
+ ### Modification times and hashes
HiDrive allows modification times to be set on objects accurate to 1 second.
@@ -33629,7 +34161,7 @@ Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_HIDRIVE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,Dot
@@ -33737,7 +34269,7 @@ k) Quit config e/n/d/r/c/s/q> q
This remote is read only - you can't upload files to an HTTP server.
- ### Modified time
+ ### Modification times
Most HTTP servers store time accurate to 1 second.
@@ -33844,6 +34376,46 @@ k) Quit config e/n/d/r/c/s/q> q
- Type: bool
- Default: false
+ ## Backend commands
+
+ Here are the commands specific to the http backend.
+
+ Run them with
+
+ rclone backend COMMAND remote:
+
+ The help below will explain what arguments each command takes.
+
+ See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+ info on how to pass options and arguments.
+
+ These can be run on a running backend using the rc command
+ [backend/command](https://rclone.org/rc/#backend-command).
+
+ ### set
+
+ Set command for updating the config parameters.
+
+ rclone backend set remote: [options] [<arguments>+]
+
+ This set command can be used to update the config parameters
+ for a running http backend.
+
+ Usage Examples:
+
+ rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: -o url=https://example.com
+
+ The option keys are named as they are in the config file.
+
+ This rebuilds the connection to the http backend when it is called with
+ the new parameters. Only new parameters need be passed as the values
+ will default to those currently in use.
+
+ It doesn't return anything.
+
+
## Limitations
@@ -33855,6 +34427,194 @@ k) Quit config e/n/d/r/c/s/q> q
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+ # ImageKit
+ This is a backend for the [ImageKit.io](https://imagekit.io/) storage service.
+
+ #### About ImageKit
+ [ImageKit.io](https://imagekit.io/) provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.
+
+
+ #### Accounts & Pricing
+
+ To use this backend, you need to [create an account](https://imagekit.io/registration/) on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans).
+
+ ## Configuration
+
+ Here is an example of making an imagekit configuration.
+
+ Firstly create an [ImageKit.io](https://imagekit.io/) account and choose a plan.
+
+ You will need to log in and get the `publicKey` and `privateKey` for your account from the developer section.
+
+ Now run
+
+     rclone config
+
+
+ This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter the name for the new remote.
+ name> imagekit-media-library
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ XX / ImageKit.io
+    \ (imagekit)
+ [snip]
+ Storage> imagekit
+
+ Option endpoint.
+ You can find your ImageKit.io URL endpoint in your dashboard
+ Enter a value.
+ endpoint> https://ik.imagekit.io/imagekit_id
+
+ Option public_key.
+ You can find your ImageKit.io public key in your dashboard
+ Enter a value.
+ public_key> public_****************************
+
+ Option private_key.
+ You can find your ImageKit.io private key in your dashboard
+ Enter a value.
+ private_key> private_****************************
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: imagekit
+ - endpoint: https://ik.imagekit.io/imagekit_id
+ - public_key: public_****************************
+ - private_key: private_****************************
+
+ Keep this "imagekit-media-library" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+ List directories in the top level of your Media Library
+
+     rclone lsd imagekit-media-library:
+
+ Make a new directory.
+
+     rclone mkdir imagekit-media-library:directory
+
+ List the contents of a directory.
+
+     rclone ls imagekit-media-library:directory
+
+
+ ### Modification times and hashes
+
+ ImageKit does not support modification times or hashes yet.
+
+ ### Checksums
+
+ No checksums are supported.
+
+
+ ### Standard options
+
+ Here are the Standard options specific to imagekit (ImageKit.io).
+
+ #### --imagekit-endpoint
+
+ You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+ Properties:
+
+ - Config: endpoint
+ - Env Var: RCLONE_IMAGEKIT_ENDPOINT
+ - Type: string
+ - Required: true
+
+ #### --imagekit-public-key
+
+ You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+ Properties:
+
+ - Config: public_key
+ - Env Var: RCLONE_IMAGEKIT_PUBLIC_KEY
+ - Type: string
+ - Required: true
+
+ #### --imagekit-private-key
+
+ You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+ Properties:
+
+ - Config: private_key
+ - Env Var: RCLONE_IMAGEKIT_PRIVATE_KEY
+ - Type: string
+ - Required: true
+
+ ### Advanced options
+
+ Here are the Advanced options specific to imagekit (ImageKit.io).
+
+ #### --imagekit-only-signed
+
+ If you have configured `Restrict unsigned image URLs` in your dashboard settings, set this to true.
+
+ Properties:
+
+ - Config: only_signed
+ - Env Var: RCLONE_IMAGEKIT_ONLY_SIGNED
+ - Type: bool
+ - Default: false
+
+ #### --imagekit-versions
+
+ Include old versions in directory listings.
+
+ Properties:
+
+ - Config: versions
+ - Env Var: RCLONE_IMAGEKIT_VERSIONS
+ - Type: bool
+ - Default: false
+
+ #### --imagekit-upload-tags
+
+ Tags to add to the uploaded files, e.g. "tag1,tag2".
+
+ Properties:
+
+ - Config: upload_tags
+ - Env Var: RCLONE_IMAGEKIT_UPLOAD_TAGS
+ - Type: string
+ - Required: false
+
+ #### --imagekit-encoding
+
+ The encoding for the backend.
+
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+ Properties:
+
+ - Config: encoding
+ - Env Var: RCLONE_IMAGEKIT_ENCODING
+ - Type: Encoding
+ - Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket
+
+ ### Metadata
+
+ Any metadata supported by the underlying remote is read and written.
+
+ Here are the possible system metadata items for the imagekit backend.
+
+ | Name | Help | Type | Example | Read Only |
+ |------|------|------|---------|-----------|
+ | aws-tags | AI generated tags by AWS Rekognition associated with the image | string | tag1,tag2 | **Y** |
+ | btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+ | custom-coordinates | Custom coordinates of the file | string | 0,0,100,100 | **Y** |
+ | file-type | Type of the file | string | image | **Y** |
+ | google-tags | AI generated tags by Google Cloud Vision associated with the image | string | tag1,tag2 | **Y** |
+ | has-alpha | Whether the image has alpha channel or not | bool | | **Y** |
+ | height | Height of the image or video in pixels | int | | **Y** |
+ | is-private-file | Whether the file is private or not | bool | | **Y** |
+ | size | Size of the object in bytes | int64 | | **Y** |
+ | tags | Tags associated with the file | string | tag1,tag2 | **Y** |
+ | width | Width of the image or video in pixels | int | | **Y** |
+
+ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
+
+
# Internet Archive
The Internet Archive backend utilizes Items on [archive.org](https://archive.org/)
@@ -34075,7 +34835,7 @@ remote d) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_INTERNETARCHIVE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
### Metadata
@@ -34291,7 +35051,7 @@ Edit this remote d) Delete this remote y/e/d> y
### --fast-list
- This remote supports `--fast-list` which allows you to use fewer
+ This backend supports `--fast-list` which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
@@ -34299,10 +35059,11 @@ Edit this remote d) Delete this remote y/e/d> y
API request to get the entire list, so for large folders this could
lead to long wait time before the first results are shown.
- Note also that with rclone version 1.58 and newer information about
- [MIME types](https://rclone.org/overview/#mime-type) are not available when using `--fast-list`.
+ Note also that with rclone version 1.58 and newer, information about
+ [MIME types](https://rclone.org/overview/#mime-type) and metadata item [utime](#metadata)
+ are not available when using `--fast-list`.
- ### Modified time and hashes
+ ### Modification times and hashes
Jottacloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -34501,9 +35262,24 @@ Edit this remote d) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_JOTTACLOUD_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
+ ### Metadata
+
+ Jottacloud has limited support for metadata, currently an extended set of timestamps.
+
+ Here are the possible system metadata items for the jottacloud backend.
+
+ | Name | Help | Type | Example | Read Only |
+ |------|------|------|---------|-----------|
+ | btime | Time of file birth (creation), read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+ | content-type | MIME type, also known as media type | string | text/plain | **Y** |
+ | mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+ | utime | Time of last upload, when current revision was created, generated by backend | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+
+ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
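+ For example, to see these items for a file (an illustrative command -
+ the path is a placeholder):
+
+     rclone lsjson --metadata jottacloud:path/to/file
+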
## Limitations
@@ -34650,34 +35426,6 @@ is OK (default) e) Edit this remote d) Delete this remote y/e/d> y
- Type: string
- Required: true
- #### --koofr-password
-
- Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
-
- **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
- Properties:
-
- - Config: password
- - Env Var: RCLONE_KOOFR_PASSWORD
- - Provider: digistorage
- - Type: string
- - Required: true
-
- #### --koofr-password
-
- Your password for rclone (generate one at your service's settings page).
-
- **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
- Properties:
-
- - Config: password
- - Env Var: RCLONE_KOOFR_PASSWORD
- - Provider: other
- - Type: string
- - Required: true
-
### Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
@@ -34718,7 +35466,7 @@ is OK (default) e) Edit this remote d) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_KOOFR_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -34796,6 +35544,59 @@ USERNAME password = *** ENCRYPTED *** -------------------- y) Yes this
is OK (default) e) Edit this remote d) Delete this remote y/e/d> y
+ # Linkbox
+
+ Linkbox is [a private cloud drive](https://linkbox.to/).
+
+ ## Configuration
+
+ Here is an example of making a remote for Linkbox.
+
+ First run:
+
+ rclone config
+
+ This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter name for new remote.
+ name> remote
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ XX / Linkbox
+    \ (linkbox)
+ Storage> XX
+
+ Option token.
+ Token from https://www.linkbox.to/admin/account
+ Enter a value.
+ token> testFromCLToken
+
+ Configuration complete.
+ Options:
+ - type: linkbox
+ - token: XXXXXXXXXXX
+ Keep this "linkbox" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
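+ Once configured you can use it like any other remote, for example (an
+ illustrative pair of commands):
+
+     rclone lsf remote:
+     rclone copy /home/source remote:backup
+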
+
+
+ ### Standard options
+
+ Here are the Standard options specific to linkbox (Linkbox).
+
+ #### --linkbox-token
+
+ Token from https://www.linkbox.to/admin/account
+
+ Properties:
+
+ - Config: token
+ - Env Var: RCLONE_LINKBOX_TOKEN
+ - Type: string
+ - Required: true
+
+
+
+ ## Limitations
+
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.
+
# Mail.ru Cloud
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
@@ -34879,17 +35680,15 @@ this remote d) Delete this remote y/e/d> y
rclone sync --interactive /home/local/directory remote:directory
- ### Modified time
+ ### Modification times and hashes
Files support a modification time attribute with up to 1 second precision.
Directories do not have a modification time, which is shown as "Jan 1 1970".
- ### Hash checksums
-
- Hash sums use a custom Mail.ru algorithm based on SHA1.
+ File hashes are supported, with a custom Mail.ru algorithm based on SHA1.
If file size is less than or equal to the SHA1 block size (20 bytes),
its hash is simply its data right-padded with zero bytes.
- Hash sum of a larger file is computed as a SHA1 sum of the file data
+ The hash of a larger file is computed as the SHA1 of the file data
bytes concatenated with a decimal representation of the data length.
### Emptying Trash
@@ -35167,7 +35966,7 @@ this remote d) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_MAILRU_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -35233,7 +36032,7 @@ remote d) Delete this remote y/e/d> y
rclone copy /home/source remote:backup
- ### Modified time and hashes
+ ### Modification times and hashes
Mega does not support modification times or hashes yet.
@@ -35425,7 +36224,7 @@ me@example.com:/$
- Config: encoding
- Env Var: RCLONE_MEGA_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,InvalidUtf8,Dot
@@ -35483,7 +36282,7 @@ a) Delete this remote y/e/d> y
rclone serve webdav :memory:
rclone serve sftp :memory:
- ### Modified time and hashes
+ ### Modification times and hashes
The memory backend supports MD5 hashes and modification times accurate to 1 nS.
@@ -35792,10 +36591,10 @@ Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
- ### Modified time
+ ### Modification times and hashes
- The modified time is stored as metadata on the object with the `mtime`
- key. It is stored using RFC3339 Format time with nanosecond
+ The modification time is stored as metadata on the object with the
+ `mtime` key. It is stored using RFC3339 Format time with nanosecond
precision. The metadata is supplied during directory listings so
there is no performance overhead to using it.
@@ -35805,6 +36604,10 @@ Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
`--update` flag when syncing is recommended if using
`--use-server-modtime`.
+ MD5 hashes are stored with blobs. However blobs that were uploaded in
+ chunks only have an MD5 if the source remote was capable of MD5
+ hashes, e.g. the local disk.
+
### Performance
When uploading large files, increasing the value of
@@ -35833,12 +36636,6 @@ Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.
- ### Hashes
-
- MD5 hashes are stored with blobs. However blobs that were uploaded in
- chunks only have an MD5 if the source remote was capable of MD5
- hashes, e.g. the local disk.
-
### Authentication {#authentication}
There are a number of ways of supplying credentials for Azure Blob
@@ -36392,10 +37189,10 @@ Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
#### --azureblob-access-tier
- Access tier of blob: hot, cool or archive.
+ Access tier of blob: hot, cool, cold or archive.
- Archived blobs can be restored by setting access tier to hot or
- cool. Leave blank if you intend to use default access tier, which is
+ Archived blobs can be restored by setting access tier to hot, cool or
+ cold. Leave blank if you intend to use default access tier, which is
set at account level
If there is no "access tier" specified, rclone doesn't apply any tier.
@@ -36403,7 +37200,7 @@ Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
are not modified, specifying "access tier" to new one will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
- tiering blob to "Hot" or "Cool".
+ tiering blob to "Hot", "Cool" or "Cold".
Properties:
@@ -36484,7 +37281,7 @@ Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
#### --azureblob-public-access
@@ -36593,6 +37390,678 @@ Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
`http(s)://:/devstoreaccount1`
(e.g. `http://10.254.2.5:10000/devstoreaccount1`).
+ # Microsoft Azure Files Storage
+
+ Paths are specified as `remote:` You may put subdirectories in too,
+ e.g. `remote:path/to/dir`.
+
+ ## Configuration
+
+ Here is an example of making a Microsoft Azure Files Storage
+ configuration. For a remote called `remote`. First run:
+
+ rclone config
+
+ This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ name> remote
+
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ [snip]
+ XX / Microsoft Azure Files Storage
+    \ "azurefiles"
+ [snip]
+
+ Option account.
+ Azure Storage Account Name.
+ Set this to the Azure Storage Account Name in use.
+ Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+ If this is blank and if env_auth is set it will be read from the
+ environment variable AZURE_STORAGE_ACCOUNT_NAME if possible.
+ Enter a value. Press Enter to leave empty.
+ account> account_name
+
+ Option share_name.
+ Azure Files Share Name.
+ This is required and is the name of the share to access.
+ Enter a value. Press Enter to leave empty.
+ share_name> share_name
+
+ Option env_auth.
+ Read credentials from runtime (environment variables, CLI or MSI).
+ See the authentication docs for full info.
+ Enter a boolean value (true or false). Press Enter for the default (false).
+ env_auth>
+
+ Option key.
+ Storage Account Shared Key.
+ Leave blank to use SAS URL or connection string.
+ Enter a value. Press Enter to leave empty.
+ key> base64encodedkey==
+
+ Option sas_url.
+ SAS URL.
+ Leave blank if using account/key or connection string.
+ Enter a value. Press Enter to leave empty.
+ sas_url>
+
+ Option connection_string.
+ Azure Files Connection String.
+ Enter a value. Press Enter to leave empty.
+ connection_string> [snip]
+
+ Configuration complete.
+ Options:
+ - type: azurefiles
+ - account: account_name
+ - share_name: share_name
+ - key: base64encodedkey==
+ Keep this "remote" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d>
+
+
+ Once configured you can use rclone.
+
+ See all files in the top level:
+
+ rclone lsf remote:
+
+ Make a new directory in the root:
+
+ rclone mkdir remote:dir
+
+ Recursively List the contents:
+
+ rclone ls remote:
+
+ Sync `/home/local/directory` to the remote directory, deleting any
+ excess files in the directory.
+
+ rclone sync --interactive /home/local/directory remote:dir
+
+ ### Modification times
+
+ The modified time is stored as Azure standard `LastModified` time on
+ files.
+
+ ### Performance
+
+ When uploading large files, increasing the value of
+ `--azurefiles-upload-concurrency` will increase performance at the cost
+ of using more memory. The default of 16 is set quite conservatively to
+ use less memory. It may be necessary to raise it to 64 or higher to
+ fully utilize a 1 GBit/s link with a single file transfer.
+
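+ For example (an illustrative command - adjust the value to suit your
+ link speed and memory budget):
+
+     rclone copy --azurefiles-upload-concurrency 64 /path/to/bigfiles remote:dir
+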
+ ### Restricted filename characters
+
+ In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+ the following characters are also replaced:
+
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | " | 0x22 | ＂ |
+ | * | 0x2A | ＊ |
+ | : | 0x3A | ： |
+ | < | 0x3C | ＜ |
+ | > | 0x3E | ＞ |
+ | ? | 0x3F | ？ |
+ | \ | 0x5C | ＼ |
+ | \| | 0x7C | ｜ |
+
+ File names can also not end with the following characters.
+ These only get replaced if they are the last character in the name:
+
+ | Character | Value | Replacement |
+ | --------- |:-----:|:-----------:|
+ | . | 0x2E | ． |
+
+ Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+ as they can't be used in JSON strings.
+
+ ### Hashes
+
+ MD5 hashes are stored with files. Not all files will have MD5 hashes
+ as these have to be uploaded with the file.
+
+ ### Authentication {#authentication}
+
+ There are a number of ways of supplying credentials for Azure Files
+ Storage. Rclone tries them in the order of the sections below.
+
+ #### Env Auth
+
+ If the `env_auth` config parameter is `true` then rclone will pull
+ credentials from the environment or runtime.
+
+ It tries these authentication methods in this order:
+
+ 1. Environment Variables
+ 2. Managed Service Identity Credentials
+ 3. Azure CLI credentials (as used by the az tool)
+
+ These are described in the following sections.
+
+ ##### Env Auth: 1. Environment Variables
+
+ If `env_auth` is set and environment variables are present rclone
+ authenticates a service principal with a secret or certificate, or a
+ user with a password, depending on which environment variables are set.
+ It reads configuration from these variables, in the following order:
+
+ 1. Service principal with client secret
+ - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `AZURE_CLIENT_ID`: the service principal's client ID
+ - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets
+ 2. Service principal with certificate
+ - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `AZURE_CLIENT_ID`: the service principal's client ID
+ - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key.
+ - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file.
+ - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
+ 3. User with username and password
+ - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations".
+ - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to
+ - `AZURE_USERNAME`: a username (usually an email address)
+ - `AZURE_PASSWORD`: the user's password
+ 4. Workload Identity
+ - `AZURE_TENANT_ID`: Tenant to authenticate in.
+ - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to.
+ - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file.
+ - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
+
+
+ ##### Env Auth: 2. Managed Service Identity Credentials
+
+ When using Managed Service Identity, if the VM(SS) on which this
+ program is running has a system-assigned identity, it will be used by
+ default. If the resource has no system-assigned but exactly one
+ user-assigned identity, the user-assigned identity will be used by
+ default.
+
+ If the resource has multiple user-assigned identities you will need to
+ unset `env_auth` and set `use_msi` instead. See the [`use_msi`
+ section](#use_msi).
+
+ ##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
+
+ Credentials created with the `az` tool can be picked up using `env_auth`.
+
+ For example, if you were to log in with a service principal like this:
+
+ az login --service-principal -u XXX -p XXX --tenant XXX
+
+ Then you could access rclone resources like this:
+
+ rclone lsf :azurefiles,env_auth,account=ACCOUNT:
+
+ Or
+
+ rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles:
+
+ #### Account and Shared Key
+
+ This is the most straightforward and least flexible way. Just fill
+ in the `account` and `key` lines and leave the rest blank.
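+
+ For illustration only, a config using this method might look like the
+ following (the account, share and key values are placeholders matching
+ the example session above):
+
+     [remote]
+     type = azurefiles
+     account = account_name
+     share_name = share_name
+     key = base64encodedkey==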
+
+ #### SAS URL
+
+ To use it leave `account`, `key` and `connection_string` blank and fill in `sas_url`.
+
+ #### Connection String
+
+ To use it leave `account`, `key` and `sas_url` blank and fill in `connection_string`.
+
+ #### Service principal with client secret
+
+ If these variables are set, rclone will authenticate with a service principal with a client secret.
+
+ - `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `client_id`: the service principal's client ID
+ - `client_secret`: one of the service principal's client secrets
+
+ The credentials can also be placed in a file using the
+ `service_principal_file` configuration option.
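+
+ As a sketch only (the IDs and secret below are placeholders), a config
+ using a client secret might look like:
+
+     [remote]
+     type = azurefiles
+     account = account_name
+     share_name = share_name
+     tenant = 00000000-0000-0000-0000-000000000000
+     client_id = 11111111-1111-1111-1111-111111111111
+     client_secret = your_client_secret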
+
+ #### Service principal with certificate
+
+ If these variables are set, rclone will authenticate with a service principal with certificate.
+
+ - `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
+ - `client_id`: the service principal's client ID
+ - `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key.
+ - `client_certificate_password`: (optional) password for the certificate file.
+ - `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
+
+ **NB** `client_certificate_password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+ #### User with username and password
+
+ If these variables are set, rclone will authenticate with username and password.
+
+ - `tenant`: (optional) tenant to authenticate in. Defaults to "organizations".
+ - `client_id`: client ID of the application the user will authenticate to
+ - `username`: a username (usually an email address)
+ - `password`: the user's password
+
+ Microsoft doesn't recommend this kind of authentication, because it's
+ less secure than other authentication flows. This method is not
+ interactive, so it isn't compatible with any form of multi-factor
+ authentication, and the application must already have user or admin
+ consent. This credential can only authenticate work and school
+ accounts; it can't authenticate Microsoft accounts.
+
+ **NB** `password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
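+
+ As a sketch only (all values are placeholders), the password can be
+ obscured with:
+
+     rclone obscure 'YourPassword'
+
+ and the output placed in the config along with the other values:
+
+     [remote]
+     type = azurefiles
+     account = account_name
+     share_name = share_name
+     username = user@example.com
+     password = output_of_rclone_obscure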
+
+ #### Managed Service Identity Credentials {#use_msi}
+
+ If `use_msi` is set then managed service identity credentials are
+ used. This authentication only works when running in an Azure service.
+ `env_auth` needs to be unset to use this.
+
+ However if you have multiple user identities to choose from these must
+ be explicitly specified using exactly one of the `msi_object_id`,
+ `msi_client_id`, or `msi_mi_res_id` parameters.
+
+ If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
+ set, this is equivalent to using `env_auth`.
+
+
+ ### Standard options
+
+ Here are the Standard options specific to azurefiles (Microsoft Azure Files).
+
+ #### --azurefiles-account
+
+ Azure Storage Account Name.
+
+ Set this to the Azure Storage Account Name in use.
+
+ Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+
+ If this is blank and if env_auth is set it will be read from the
+ environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible.
+
+
+ Properties:
+
+ - Config: account
+ - Env Var: RCLONE_AZUREFILES_ACCOUNT
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-share-name
+
+ Azure Files Share Name.
+
+ This is required and is the name of the share to access.
+
+
+ Properties:
+
+ - Config: share_name
+ - Env Var: RCLONE_AZUREFILES_SHARE_NAME
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-env-auth
+
+ Read credentials from runtime (environment variables, CLI or MSI).
+
+ See the [authentication docs](/azurefiles#authentication) for full info.
+
+ Properties:
+
+ - Config: env_auth
+ - Env Var: RCLONE_AZUREFILES_ENV_AUTH
+ - Type: bool
+ - Default: false
+
+ #### --azurefiles-key
+
+ Storage Account Shared Key.
+
+ Leave blank to use SAS URL or connection string.
+
+ Properties:
+
+ - Config: key
+ - Env Var: RCLONE_AZUREFILES_KEY
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-sas-url
+
+ SAS URL.
+
+ Leave blank if using account/key or connection string.
+
+ Properties:
+
+ - Config: sas_url
+ - Env Var: RCLONE_AZUREFILES_SAS_URL
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-connection-string
+
+ Azure Files Connection String.
+
+ Properties:
+
+ - Config: connection_string
+ - Env Var: RCLONE_AZUREFILES_CONNECTION_STRING
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-tenant
+
+ ID of the service principal's tenant. Also called its directory ID.
+
+ Set this if using
+ - Service principal with client secret
+ - Service principal with certificate
+ - User with username and password
+
+
+ Properties:
+
+ - Config: tenant
+ - Env Var: RCLONE_AZUREFILES_TENANT
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-client-id
+
+ The ID of the client in use.
+
+ Set this if using
+ - Service principal with client secret
+ - Service principal with certificate
+ - User with username and password
+
+
+ Properties:
+
+ - Config: client_id
+ - Env Var: RCLONE_AZUREFILES_CLIENT_ID
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-client-secret
+
+ One of the service principal's client secrets
+
+ Set this if using
+ - Service principal with client secret
+
+
+ Properties:
+
+ - Config: client_secret
+ - Env Var: RCLONE_AZUREFILES_CLIENT_SECRET
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-client-certificate-path
+
+ Path to a PEM or PKCS12 certificate file including the private key.
+
+ Set this if using
+ - Service principal with certificate
+
+
+ Properties:
+
+ - Config: client_certificate_path
+ - Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PATH
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-client-certificate-password
+
+ Password for the certificate file (optional).
+
+ Optionally set this if using
+ - Service principal with certificate
+
+ And the certificate has a password.
+
+
+ **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+ Properties:
+
+ - Config: client_certificate_password
+ - Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PASSWORD
+ - Type: string
+ - Required: false
+
+ ### Advanced options
+
+ Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
+
+ #### --azurefiles-client-send-certificate-chain
+
+ Send the certificate chain when using certificate auth.
+
+ Specifies whether an authentication request will include an x5c header
+ to support subject name / issuer based authentication. When set to
+ true, authentication requests include the x5c header.
+
+ Optionally set this if using
+ - Service principal with certificate
+
+
+ Properties:
+
+ - Config: client_send_certificate_chain
+ - Env Var: RCLONE_AZUREFILES_CLIENT_SEND_CERTIFICATE_CHAIN
+ - Type: bool
+ - Default: false
+
+ #### --azurefiles-username
+
+ User name (usually an email address)
+
+ Set this if using
+ - User with username and password
+
+
+ Properties:
+
+ - Config: username
+ - Env Var: RCLONE_AZUREFILES_USERNAME
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-password
+
+ The user's password
+
+ Set this if using
+ - User with username and password
+
+
+ **NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+ Properties:
+
+ - Config: password
+ - Env Var: RCLONE_AZUREFILES_PASSWORD
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-service-principal-file
+
+ Path to file containing credentials for use with a service principal.
+
+ Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
+
+ $ az ad sp create-for-rbac --name "" \
+ --role "Storage Files Data Owner" \
+ --scopes "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/" \
+ > azure-principal.json
+
+ See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to files data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
+
+ **NB** this section needs updating for Azure Files - pull requests appreciated!
+
+ It may be more convenient to put the credentials directly into the
+ rclone config file under the `client_id`, `tenant` and `client_secret`
+ keys instead of setting `service_principal_file`.
+
+
+ Properties:
+
+ - Config: service_principal_file
+ - Env Var: RCLONE_AZUREFILES_SERVICE_PRINCIPAL_FILE
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-use-msi
+
+ Use a managed service identity to authenticate (only works in Azure).
+
+ When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
+ to authenticate to Azure Storage instead of a SAS token or account key.
+
+ If the VM(SS) on which this program is running has a system-assigned identity, it will
+ be used by default. If the resource has no system-assigned but exactly one user-assigned identity,
+ the user-assigned identity will be used by default. If the resource has multiple user-assigned
+ identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
+ msi_client_id, or msi_mi_res_id parameters.
+
+ Properties:
+
+ - Config: use_msi
+ - Env Var: RCLONE_AZUREFILES_USE_MSI
+ - Type: bool
+ - Default: false
+
+ #### --azurefiles-msi-object-id
+
+ Object ID of the user-assigned MSI to use, if any.
+
+ Leave blank if msi_client_id or msi_mi_res_id specified.
+
+ Properties:
+
+ - Config: msi_object_id
+ - Env Var: RCLONE_AZUREFILES_MSI_OBJECT_ID
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-msi-client-id
+
+ Client ID of the user-assigned MSI to use, if any.
+
+ Leave blank if msi_object_id or msi_mi_res_id specified.
+
+ Properties:
+
+ - Config: msi_client_id
+ - Env Var: RCLONE_AZUREFILES_MSI_CLIENT_ID
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-msi-mi-res-id
+
+ Azure resource ID of the user-assigned MSI to use, if any.
+
+ Leave blank if msi_client_id or msi_object_id specified.
+
+ Properties:
+
+ - Config: msi_mi_res_id
+ - Env Var: RCLONE_AZUREFILES_MSI_MI_RES_ID
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-endpoint
+
+ Endpoint for the service.
+
+ Leave blank normally.
+
+ Properties:
+
+ - Config: endpoint
+ - Env Var: RCLONE_AZUREFILES_ENDPOINT
+ - Type: string
+ - Required: false
+
+ #### --azurefiles-chunk-size
+
+ Upload chunk size.
+
+ Note that this is stored in memory and there may be up to
+ "--transfers" * "--azurefiles-upload-concurrency" chunks stored at once
+ in memory.
+
+ Properties:
+
+ - Config: chunk_size
+ - Env Var: RCLONE_AZUREFILES_CHUNK_SIZE
+ - Type: SizeSuffix
+ - Default: 4Mi
+
+ #### --azurefiles-upload-concurrency
+
+ Concurrency for multipart uploads.
+
+ This is the number of chunks of the same file that are uploaded
+ concurrently.
+
+ If you are uploading small numbers of large files over high-speed
+ links and these uploads do not fully utilize your bandwidth, then
+ increasing this may help to speed up the transfers.
+
+ Note that chunks are stored in memory and there may be up to
+ "--transfers" * "--azurefiles-upload-concurrency" chunks stored at once
+ in memory.
+
+ Properties:
+
+ - Config: upload_concurrency
+ - Env Var: RCLONE_AZUREFILES_UPLOAD_CONCURRENCY
+ - Type: int
+ - Default: 16
+
+ #### --azurefiles-max-stream-size
+
+ Max size for streamed files.
+
+ Azure files needs to know in advance how big the file will be. When
+ rclone doesn't know it uses this value instead.
+
+ This will be used when rclone is streaming data, the most common uses are:
+
+ - Uploading files with `--vfs-cache-mode off` with `rclone mount`
+ - Using `rclone rcat`
+ - Copying files with unknown length
+
+ You will need this much free space in the share as the file will be this size temporarily.
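+
+ As an illustrative sketch (where `generate_data` stands for any command
+ producing output of unknown length), the limit can be raised for a
+ single transfer like this:
+
+     generate_data | rclone rcat --azurefiles-max-stream-size 20Gi remote:path/to/file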
+
+
+ Properties:
+
+ - Config: max_stream_size
+ - Env Var: RCLONE_AZUREFILES_MAX_STREAM_SIZE
+ - Type: SizeSuffix
+ - Default: 10Gi
+
+ #### --azurefiles-encoding
+
+ The encoding for the backend.
+
+ See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+ Properties:
+
+ - Config: encoding
+ - Env Var: RCLONE_AZUREFILES_ENCODING
+ - Type: Encoding
+ - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot
+
+
+
+ ### Custom upload headers
+
+ You can set custom upload headers with the `--header-upload` flag.
+
+ - Cache-Control
+ - Content-Disposition
+ - Content-Encoding
+ - Content-Language
+ - Content-Type
+
+ Eg `--header-upload "Content-Type: text/potato"`
+
+ ## Limitations
+
+ MD5 sums are only uploaded with chunked files if the source has an MD5
+ sum. This will always be the case for a local to Azure copy.
+
# Microsoft OneDrive
Paths are specified as `remote:path`
@@ -36721,7 +38190,7 @@ e) Delete this remote y/e/d> y
Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
- ### Modification time and hashes
+ ### Modification times and hashes
OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -36742,6 +38211,32 @@ e) Delete this remote y/e/d> y
For all types of OneDrive you can use the `--checksum` flag.
+ ### --fast-list
+
+ This remote supports `--fast-list` which allows you to use fewer
+ transactions in exchange for more memory. See the [rclone
+ docs](https://rclone.org/docs/#fast-list) for more details.
+
+ This must be enabled with the `--onedrive-delta` flag (or `delta =
+ true` in the config file) as it can cause performance degradation.
+
+ It does this by using the delta listing facilities of OneDrive which
+ returns all the files in the remote very efficiently. This is much
+ more efficient than listing directories recursively and is Microsoft's
+ recommended way of reading all the file information from a drive.
+
+ This can be useful with `rclone mount` and [rclone rc vfs/refresh
+ recursive=true](https://rclone.org/rc/#vfs-refresh) to very quickly fill the mount with
+ information about all the files.
+
+ The API used for the recursive listing (`ListR`) only supports listing
+ from the root of the drive. This will become increasingly inefficient
+ the further away you get from the root as rclone will have to discard
+ files outside of the directory you are using.
+
+ Some commands (like `rclone lsf -R`) will use `ListR` by default - you
+ can turn this off with `--disable ListR` if you need to.
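+
+ For example (the remote name is illustrative), the flag can also be
+ supplied on the command line rather than set in the config file:
+
+     rclone lsf -R --onedrive-delta onedrive:
+     rclone size --fast-list --onedrive-delta onedrive: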
+
### Restricted filename characters
In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
@@ -37153,6 +38648,43 @@ e) Delete this remote y/e/d> y
- Type: bool
- Default: false
+ #### --onedrive-delta
+
+ If set rclone will use delta listing to implement recursive listings.
+
+ If this flag is set then the onedrive backend will advertise `ListR`
+ support for recursive listings.
+
+ Setting this flag speeds up these things greatly:
+
+ rclone lsf -R onedrive:
+ rclone size onedrive:
+ rclone rc vfs/refresh recursive=true
+
+ **However** the delta listing API **only** works at the root of the
+ drive. If you use it anywhere other than the root then it recurses
+ from the root
+ and discards all the data that is not under the directory you asked
+ for. So it will be correct but may not be very efficient.
+
+ This is why this flag is not set as the default.
+
+ As a rule of thumb if nearly all of your data is under rclone's root
+ directory (the `root/directory` in `onedrive:root/directory`) then
+ using this flag will be a big performance win. If your data is
+ mostly not under the root then using this flag will be a big
+ performance loss.
+
+ It is recommended if you are mounting your onedrive at the root
+ (or near the root when using crypt) and using rclone `rc vfs/refresh`.
+
+
+ Properties:
+
+ - Config: delta
+ - Env Var: RCLONE_ONEDRIVE_DELTA
+ - Type: bool
+ - Default: false
+
#### --onedrive-encoding
The encoding for the backend.
@@ -37163,7 +38695,7 @@ e) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_ONEDRIVE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -37437,12 +38969,14 @@ u) Delete this remote y/e/d> y
rclone copy /home/source remote:backup
- ### Modified time and MD5SUMs
+ ### Modification times and hashes
OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
+ The MD5 hash algorithm is supported.
+
### Restricted filename characters
| Character | Value | Replacement |
@@ -37516,7 +39050,7 @@ u) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_OPENDRIVE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
#### --opendrive-chunk-size
@@ -37667,6 +39201,7 @@ y/e/d> y
No authentication
### User Principal
+
Sample rclone config file for Authentication Provider User Principal:
[oos]
@@ -37687,6 +39222,7 @@ y/e/d> y
- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
### Instance Principal
+
An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal.
With this approach no credentials have to be stored and managed.
@@ -37716,6 +39252,7 @@ y/e/d> y
- It is applicable for OCI compute instances only. It cannot be used on external instances or resources.
### Resource Principal
+
Resource principal auth is very similar to instance principal auth but used for resources that are not
compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
To use resource principal auth, ensure the rclone process is started with these environment variables set in its process environment.
@@ -37735,6 +39272,7 @@ y/e/d> y
provider = resource_principal_auth
### No authentication
+
Public buckets do not require any authentication mechanism to read objects.
Sample rclone configuration file for No authentication:
@@ -37745,10 +39283,9 @@ y/e/d> y
region = us-ashburn-1
provider = no_auth
- ## Options
- ### Modified time
+ ### Modification times and hashes
- The modified time is stored as metadata on the object as
+ The modification time is stored as metadata on the object as
`opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server
@@ -37758,6 +39295,8 @@ y/e/d> y
Note that reading this from the object takes an additional `HEAD` request as the metadata
isn't returned in object listings.
+ The MD5 hash algorithm is supported.
+
### Multipart uploads
rclone supports multipart uploads with OOS which means that it can
@@ -38060,7 +39599,7 @@ y/e/d> y
- Config: encoding
- Env Var: RCLONE_OOS_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,InvalidUtf8,Dot
#### --oos-leave-parts-on-error
@@ -38554,7 +40093,7 @@ remote d) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_QINGSTOR_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,Ctl,InvalidUtf8
@@ -38686,7 +40225,7 @@ account Enter a string value. Press Enter for the default
y) Yes this is OK e) Edit this remote d) Delete
this remote y/e/d> y ```
- ### Modified time and hashes
+ ### Modification times and hashes
Quatrix allows modification times to be set on
objects accurate to 1 microsecond. These will be
@@ -38776,7 +40315,7 @@ account Enter a string value. Press Enter for the default
Properties:
- Config: encoding - Env Var:
- RCLONE_QUATRIX_ENCODING - Type: MultiEncoder -
+ RCLONE_QUATRIX_ENCODING - Type: Encoding -
Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --quatrix-effective-upload-time
@@ -39018,7 +40557,7 @@ rclone copy /home/source mySia:backup
- Config: encoding
- Env Var: RCLONE_SIA_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
@@ -39193,7 +40732,7 @@ RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote:
`--use-server-modtime`, you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.
- ### Modified time
+ ### Modification times and hashes
The modified time is stored as metadata on the object as
`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
@@ -39202,6 +40741,8 @@ RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote:
This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time for an object.
+ The MD5 hash algorithm is supported.
+
### Restricted filename characters
| Character | Value | Replacement |
@@ -39548,7 +41089,7 @@ RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote:
- Config: encoding
- Env Var: RCLONE_SWIFT_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,InvalidUtf8
@@ -39652,7 +41193,7 @@ this remote y/e/d> y
rclone copy /home/source remote:backup
- ### Modified time and hashes ###
+ ### Modification times and hashes
pCloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -39791,7 +41332,7 @@ this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_PCLOUD_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --pcloud-root-folder-id
@@ -39895,6 +41436,13 @@ Keep this "remote" remote? y) Yes this is OK (default) e) Edit this
remote d) Delete this remote y/e/d> y
+ ### Modification times and hashes
+
+ PikPak keeps modification times on objects, and updates them when uploading objects,
+ but it does not support changing only the modification time
+
+ The MD5 hash algorithm is supported.
+
### Standard options
@@ -40054,7 +41602,7 @@ remote d) Delete this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_PIKPAK_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
## Backend commands
@@ -40118,15 +41666,16 @@ remote d) Delete this remote y/e/d> y
- ## Limitations ##
+ ## Limitations
- ### Hashes ###
+ ### Hashes may be empty
PikPak supports MD5 hashes, but they are sometimes empty, especially for user-uploaded files.
- ### Deleted files ###
+ ### Deleted files still visible with trashed-only
- Deleted files will still be visible with `--pikpak-trashed-only` even after the trash emptied. This goes away after few days.
+ Deleted files will still be visible with `--pikpak-trashed-only` even after the
+ trash is emptied. This goes away after a few days.
# premiumize.me
@@ -40188,7 +41737,7 @@ this remote y/e/d>
rclone copy /home/source remote:backup
- ### Modified time and hashes
+ ### Modification times and hashes
premiumize.me does not support modification times or hashes, therefore
syncing will default to `--size-only` checking. Note that using
@@ -40303,7 +41852,7 @@ this remote y/e/d>
- Config: encoding
- Env Var: RCLONE_PREMIUMIZEME_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -40381,10 +41930,12 @@ this remote y/e/d> y
rclone copy /home/source remote:backup
- ### Modified time
+ ### Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
+ The SHA1 hash algorithm is supported.
+
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
@@ -40530,7 +42081,7 @@ this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
@@ -40809,7 +42360,7 @@ k) Quit config e/n/d/r/c/s/q> q
- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -40885,10 +42436,12 @@ this remote y/e/d> y
rclone copy /home/source remote:backup
- ### Modified time
+ ### Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
+ The SHA1 hash algorithm is supported.
+
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
@@ -41034,7 +42587,7 @@ this remote y/e/d> y
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
@@ -41449,7 +43002,7 @@ rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/
- Config: encoding
- Env Var: RCLONE_SEAFILE_ENCODING
- - Type: MultiEncoder
+ - Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
@@ -41769,7 +43322,7 @@ disable_hashcheck to true to disable checksumming entirely, or set
shell_type to none to disable all functionality based on remote shell
command execution.
-Modified time
+Modification times and hashes
Modified times are stored on the server to 1 second precision.
@@ -42431,6 +43984,32 @@ Properties:
- Type: string
- Required: false
+--sftp-copy-is-hardlink
+
+Set to enable server side copies using hardlinks.
+
+The SFTP protocol does not define a copy command so normally server side
+copies are not allowed with the sftp backend.
+
+However the SFTP protocol does support hardlinking, and if you enable
+this flag then the sftp backend will support server side copies. These
+will be implemented by doing a hardlink from the source to the
+destination.
+
+Not all sftp servers support this.
+
+Note that hardlinking two files together will use no additional space as
+the source and the destination will be the same file.
+
+This feature may be useful for backups made with --copy-dest.
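+
+As a sketch (the remote name and paths are illustrative), a server side
+copy done this way looks like any other copy:
+
+    rclone copyto --sftp-copy-is-hardlink remote:source.txt remote:dest.txt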
+
+Properties:
+
+- Config: copy_is_hardlink
+- Env Var: RCLONE_SFTP_COPY_IS_HARDLINK
+- Type: bool
+- Default: false
+
Limitations
On some SFTP servers (e.g. Synology) the paths are different for SSH and
@@ -42707,7 +44286,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SMB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default:
Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -43232,7 +44811,7 @@ Paths may be as deep as required, e.g. remote:directory/subdirectory.
NB you can't create files in the top level folder, you have to create a
folder, which rclone will create as a "Sync Folder" with SugarSync.
-Modified time and hashes
+Modification times and hashes
SugarSync does not support modification times or hashes, therefore
syncing will default to --size-only checking. Note that using --update
@@ -43400,7 +44979,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SUGARSYNC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Ctl,InvalidUtf8,Dot
Limitations
@@ -43495,7 +45074,7 @@ To copy a local directory to an Uptobox directory called backup
rclone copy /home/source remote:backup
-Modified time and hashes
+Modification times and hashes
Uptobox supports neither modified times nor checksums. All timestamps
will read as that set by --default-time.
@@ -43555,7 +45134,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default:
Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
@@ -43576,8 +45155,8 @@ During the initial setup with rclone config you will specify the
upstream remotes as a space separated list. The upstream remotes can
either be a local paths or other remotes.
-The attributes :ro, :nc and :nc can be attached to the end of the remote
-to tag the remote as read only, no create or writeback, e.g.
+The attributes :ro, :nc and :writeback can be attached to the end of the
+remote to tag the remote as read only, no create or writeback, e.g.
remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.
- :ro means files will only be read from here and never written
@@ -43979,7 +45558,9 @@ This will guide you through an interactive setup process:
\ (sharepoint)
5 / Sharepoint with NTLM authentication, usually self-hosted or on-premises
\ (sharepoint-ntlm)
- 6 / Other site/service or software
+ 6 / rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
+ \ (rclone)
+ 7 / Other site/service or software
\ (other)
vendor> 2
User name
@@ -44024,7 +45605,7 @@ To copy a local directory to an WebDAV directory called backup
rclone copy /home/source remote:backup
-Modified time and hashes
+Modification times and hashes
Plain WebDAV does not support modified times. However when used with
Fastmail Files, Owncloud or Nextcloud rclone will support modified
@@ -44075,6 +45656,9 @@ Properties:
- "sharepoint-ntlm"
- Sharepoint with NTLM authentication, usually self-hosted or
on-premises
+ - "rclone"
+ - rclone WebDAV server to serve a remote over HTTP via the
+ WebDAV protocol
- "other"
- Other site/service or software
@@ -44305,6 +45889,13 @@ property to compare your documents:
--ignore-size --ignore-checksum --update
+Rclone
+
+Use this option if you are hosting remotes over WebDAV provided by
+rclone. Read rclone serve webdav for more details.
+
+rclone serve supports modified times using the X-OC-Mtime header.
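+
+For example, one machine might serve a directory and another access it
+like this (the host, port and paths are illustrative):
+
+    rclone serve webdav /path/to/files --addr :8080
+    rclone lsf --webdav-url http://host:8080 --webdav-vendor rclone :webdav: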
+
dCache
dCache is a storage system that supports many protocols and
@@ -44454,14 +46045,12 @@ in the path.
Yandex paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Modified time
+Modification times and hashes
Modified times are supported and are stored accurate to 1 ns in custom
metadata called rclone_modified in RFC3339 with nanoseconds format.
-MD5 checksums
-
-MD5 checksums are natively supported by Yandex Disk.
+The MD5 hash algorithm is natively supported by Yandex Disk.
Emptying Trash
@@ -44573,7 +46162,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_YANDEX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
Limitations
@@ -44695,13 +46284,11 @@ in the path.
Zoho paths may be as deep as required, eg remote:directory/subdirectory.
-Modified time
+Modification times and hashes
Modified times are currently not supported for Zoho Workdrive
-Checksums
-
-No checksums are supported.
+No hash algorithms are supported.
Usage information
@@ -44822,7 +46409,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ZOHO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Del,Ctl,InvalidUtf8
Setting up your own client_id
@@ -44856,11 +46443,11 @@ For consistencies sake one can also configure a remote of type local in
the config file, and access the local filesystem using rclone remote
paths, e.g. remote:path/to/wherever, but it is probably easier not to.
-Modified time
+Modification times
-Rclone reads and writes the modified time using an accuracy determined
-by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second
-on OS X.
+Rclone reads and writes the modification times using an accuracy
+determined by the OS. Typically this is 1ns on Linux, 10 ns on Windows
+and 1 second on OS X.
Filenames
@@ -45262,6 +46849,12 @@ we:
- Only checksum the size that stat gave
- Don't update the stat info for the file
+NB do not use this flag on a Windows Volume Shadow (VSS). For some
+unknown reason, files in a VSS sometimes show different sizes from the
+directory listing (where the initial stat value comes from on Windows)
+and when stat is called on them directly. Other copy tools always use
+the direct stat value and setting this flag will disable that.
+
Properties:
- Config: no_check_updated
@@ -45370,7 +46963,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_LOCAL_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Dot
Metadata
@@ -45446,6 +47039,211 @@ Options:
Changelog
+v1.65.0 - 2023-11-26
+
+See commits
+
+- New backends
+ - Azure Files (karan, moongdal, Nick Craig-Wood)
+ - ImageKit (Abhinav Dhiman)
+ - Linkbox (viktor, Nick Craig-Wood)
+- New commands
+ - serve s3: Let rclone act as an S3 compatible server (Mikubill,
+ Artur Neumann, Saw-jan, Nick Craig-Wood)
+ - nfsmount: mount command to provide mount mechanism on macOS
+ without FUSE (Saleh Dindar)
+ - serve nfs: to serve a remote for use by nfsmount (Saleh Dindar)
+- New Features
+ - install.sh: Clean up temp files in install script (Jacob Hands)
+ - build
+ - Update all dependencies (Nick Craig-Wood)
+ - Refactor version info and icon resource handling on windows
+ (albertony)
+ - doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri
+ Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick
+ Craig-Wood)
+ - Implement --metadata-mapper to transform metadata with a user
+ supplied program (Nick Craig-Wood)
+ - Add ChunkWriterDoesntSeek feature flag and set it for b2 (Nick
+ Craig-Wood)
+ - lib/http: Export basic go string functions for use in --template
+ (Gabriel Espinoza)
+ - makefile: Use POSIX compatible install arguments (Mina Galić)
+ - operations
+ - Use less memory when doing multithread uploads (Nick
+ Craig-Wood)
+ - Implement --partial-suffix to control extension of temporary
+ file names (Volodymyr)
+ - rc
+ - Add operations/check to the rc API (Nick Craig-Wood)
+ - Always report an error as JSON (Nick Craig-Wood)
+ - Set Last-Modified header for files served by --rc-serve
+ (Nikita Shoshin)
+ - size: Don't show duplicate object count when less than 1k
+ (albertony)
+- Bug Fixes
+ - fshttp: Fix --contimeout being ignored (你知道未来吗)
+ - march: Fix excessive parallelism when using --no-traverse (Nick
+ Craig-Wood)
+ - ncdu: Fix crash when re-entering changed directory after rescan
+ (Nick Craig-Wood)
+ - operations
+ - Fix overwrite of destination when multi-thread transfer
+ fails (Nick Craig-Wood)
+ - Fix invalid UTF-8 when truncating file names when not using
+ --inplace (Nick Craig-Wood)
+ - serve dlna: Fix crash on graceful exit (wuxingzhong)
+- Mount
+ - Disable mount for freebsd and alias cmount as mount on that
+ platform (Nick Craig-Wood)
+- VFS
+ - Add --vfs-refresh flag to read all the directories on start
+ (Beyond Meat)
+ - Implement Name() method in WriteFileHandle and ReadFileHandle
+ (Saleh Dindar)
+ - Add go-billy dependency and make sure vfs.Handle implements
+ billy.File (Saleh Dindar)
+ - Error out early if can't upload 0 length file (Nick Craig-Wood)
+- Local
+ - Fix copying from Windows Volume Shadows (Nick Craig-Wood)
+- Azure Blob
+ - Add support for cold tier (Ivan Yanitra)
+- B2
+ - Implement "rclone backend lifecycle" to read and set bucket
+ lifecycles (Nick Craig-Wood)
+ - Implement --b2-lifecycle to control lifecycle when creating
+ buckets (Nick Craig-Wood)
+ - Fix listing all buckets when not needed (Nick Craig-Wood)
+ - Fix multi-thread upload with copyto going to wrong name (Nick
+ Craig-Wood)
+ - Fix server side chunked copy when file size was exactly
+ --b2-copy-cutoff (Nick Craig-Wood)
+ - Fix streaming chunked files an exact multiple of chunk size
+ (Nick Craig-Wood)
+- Box
+ - Filter more EventIDs when polling (David Sze)
+ - Add more logging for polling (David Sze)
+ - Fix performance problem reading metadata for single files (Nick
+ Craig-Wood)
+- Drive
+ - Add read/write metadata support (Nick Craig-Wood)
+ - Add support for SHA-1 and SHA-256 checksums (rinsuki)
+ - Add --drive-show-all-gdocs to allow unexportable gdocs to be
+ server side copied (Nick Craig-Wood)
+ - Add a note that --drive-scope accepts comma-separated list of
+ scopes (Keigo Imai)
+ - Fix error updating created time metadata on existing object
+ (Nick Craig-Wood)
+ - Fix integration tests by enabling metadata support from the
+ context (Nick Craig-Wood)
+- Dropbox
+ - Factor batcher into lib/batcher (Nick Craig-Wood)
+ - Fix missing encoding for rclone purge (Nick Craig-Wood)
+- Google Cloud Storage
+ - Fix 400 Bad request errors when using multi-thread copy (Nick
+ Craig-Wood)
+- Googlephotos
+ - Implement batcher for uploads (Nick Craig-Wood)
+- Hdfs
+ - Added support for list of namenodes in hdfs remote config
+ (Tayo-pasedaRJ)
+- HTTP
+ - Implement set backend command to update running backend (Nick
+ Craig-Wood)
+ - Enable methods used with WebDAV (Alen Šiljak)
+- Jottacloud
+ - Add support for reading and writing metadata (albertony)
+- Onedrive
+ - Implement ListR method which gives --fast-list support (Nick
+ Craig-Wood)
+ - This must be enabled with the --onedrive-delta flag
+- Quatrix
+ - Add partial upload support (Oksana Zhykina)
+ - Overwrite files on conflict during server-side move (Oksana
+ Zhykina)
+- S3
+ - Add Linode provider (Nick Craig-Wood)
+ - Add docs on how to add a new provider (Nick Craig-Wood)
+ - Fix no error being returned when creating a bucket we don't own
+ (Nick Craig-Wood)
+ - Emit a debug message if anonymous credentials are in use (Nick
+ Craig-Wood)
+ - Add --s3-disable-multipart-uploads flag (Nick Craig-Wood)
+ - Detect looping when using gcs and versions (Nick Craig-Wood)
+- SFTP
+ - Implement --sftp-copy-is-hardlink to server side copy as
+ hardlink (Nick Craig-Wood)
+- Smb
+ - Fix incorrect about size by switching to
+ github.com/cloudsoda/go-smb2 fork (Nick Craig-Wood)
+ - Fix modtime of multithread uploads by setting PartialUploads
+ (Nick Craig-Wood)
+- WebDAV
+ - Added an rclone vendor to work with rclone serve webdav (Adithya
+ Kumar)
+
+v1.64.2 - 2023-10-19
+
+See commits
+
+- Bug Fixes
+ - selfupdate: Fix "invalid hashsum signature" error (Nick
+ Craig-Wood)
+ - build: Fix docker build running out of space (Nick Craig-Wood)
+
+v1.64.1 - 2023-10-17
+
+See commits
+
+- Bug Fixes
+ - cmd: Make --progress output logs in the same format as without
+ (Nick Craig-Wood)
+ - docs fixes (Dimitri Papadopoulos Orfanos, Herby Gillot, Manoj
+ Ghosh, Nick Craig-Wood)
+ - lsjson: Make sure we set the global metadata flag too (Nick
+ Craig-Wood)
+ - operations
+ - Ensure concurrency is no greater than the number of chunks
+ (Pat Patterson)
+ - Fix OpenOptions ignored in copy if operation was a
+ multiThreadCopy (Vitor Gomes)
+ - Fix error message on delete to have file name (Nick
+ Craig-Wood)
+ - serve sftp: Return not supported error for not supported
+ commands (Nick Craig-Wood)
+ - build: Upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid
+ reset (Nick Craig-Wood)
+ - pacer: Fix b2 deadlock by defaulting max connections to
+ unlimited (Nick Craig-Wood)
+- Mount
+ - Fix automount not detecting drive is ready (Nick Craig-Wood)
+- VFS
+ - Fix update dir modification time (Saleh Dindar)
+- Azure Blob
+ - Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+- B2
+ - Fix multipart upload: corrupted on transfer: sizes differ XXX vs
+ 0 (Nick Craig-Wood)
+ - Fix locking window when getting multipart upload URL (Nick
+ Craig-Wood)
+ - Fix server side copies greater than 4GB (Nick Craig-Wood)
+ - Fix chunked streaming uploads (Nick Craig-Wood)
+ - Reduce default --b2-upload-concurrency to 4 to reduce memory
+ usage (Nick Craig-Wood)
+- Onedrive
+ - Fix the configurator to allow /teams/ID in the config (Nick
+ Craig-Wood)
+- Oracleobjectstorage
+ - Fix OpenOptions being ignored in uploadMultipart with
+ chunkWriter (Nick Craig-Wood)
+- S3
+ - Fix slice bounds out of range error when listing (Nick
+ Craig-Wood)
+ - Fix OpenOptions being ignored in uploadMultipart with
+ chunkWriter (Vitor Gomes)
+- Storj
+ - Update storj.io/uplink to v1.12.0 (Kaloyan Raev)
+
v1.64.0 - 2023-09-11
See commits
@@ -45585,7 +47383,7 @@ See commits
- Hdfs
- Retry "replication in progress" errors when uploading (Nick
Craig-Wood)
- - Fix uploading to the wrong object on Update with overriden
+ - Fix uploading to the wrong object on Update with overridden
remote name (Nick Craig-Wood)
- HTTP
- CORS should not be sent if not set (yuudi)
@@ -45594,7 +47392,7 @@ See commits
- Fix List on a just deleted and remade directory (Nick
Craig-Wood)
- Oracleobjectstorage
- - Use rclone's rate limiter in mutipart transfers (Manoj Ghosh)
+ - Use rclone's rate limiter in multipart transfers (Manoj Ghosh)
- Implement OpenChunkWriter and multi-thread uploads (Manoj Ghosh)
- S3
- Refactor multipart upload to use OpenChunkWriter and ChunkWriter
@@ -45840,7 +47638,7 @@ See commits
- Fix quickxorhash on 32 bit architectures (Nick Craig-Wood)
- Report any list errors during rclone cleanup (albertony)
- Putio
- - Fix uploading to the wrong object on Update with overriden
+ - Fix uploading to the wrong object on Update with overridden
remote name (Nick Craig-Wood)
- Fix modification times not being preserved for server side copy
and move (Nick Craig-Wood)
@@ -45849,7 +47647,7 @@ See commits
- Empty directory markers (Jānis Bebrītis, Nick Craig-Wood)
- Update Scaleway storage classes (Brian Starkey)
- Fix --s3-versions on individual objects (Nick Craig-Wood)
- - Fix hang on aborting multpart upload with iDrive e2 (Nick
+ - Fix hang on aborting multipart upload with iDrive e2 (Nick
Craig-Wood)
- Fix missing "tier" metadata (Nick Craig-Wood)
- Fix V3sign: add missing subresource delete (cc)
@@ -45874,7 +47672,7 @@ See commits
- Storj
- Fix "uplink: too many requests" errors when uploading to the
same file (Nick Craig-Wood)
- - Fix uploading to the wrong object on Update with overriden
+ - Fix uploading to the wrong object on Update with overridden
remote name (Nick Craig-Wood)
- Swift
- Ignore 404 error when deleting an object (Nick Craig-Wood)
@@ -50756,7 +52554,7 @@ v1.38 - 2017-09-30
- Revert to copy when moving file across file system boundaries
- --skip-links to suppress symlink warnings (thanks Zhiming Wang)
- Mount
- - Re-use rcat internals to support uploads from all remotes
+ - Reuse rcat internals to support uploads from all remotes
- Dropbox
- Fix "entry doesn't belong in directory" error
- Stop using deprecated API methods
@@ -52528,7 +54326,7 @@ email addresses removed from here need to be added to bin/.ignore-emails to make
- HNGamingUK connor@earnshawhome.co.uk
- Jonta 359397+Jonta@users.noreply.github.com
- YenForYang YenForYang@users.noreply.github.com
-- Joda Stößer stoesser@yay-digital.de services+github@simjo.st
+- SimJoSt / Joda Stößer git@simjo.st
- Logeshwaran waranlogesh@gmail.com
- Rajat Goel rajat@dropbox.com
- r0kk3rz r0kk3rz@gmail.com
@@ -52777,6 +54575,38 @@ email addresses removed from here need to be added to bin/.ignore-emails to make
- Volodymyr Kit v.kit@maytech.net
- David Pedersen limero@me.com
- Drew Stinnett drew@drewlink.com
+- Pat Patterson pat@backblaze.com
+- Herby Gillot herby.gillot@gmail.com
+- Nikita Shoshin shoshin_nikita@fastmail.com
+- rinsuki 428rinsuki+git@gmail.com
+- Beyond Meat 51850644+beyondmeat@users.noreply.github.com
+- Saleh Dindar salh@fb.com
+- Volodymyr 142890760+vkit-maytech@users.noreply.github.com
+- Gabriel Espinoza 31670639+gspinoza@users.noreply.github.com
+- Keigo Imai keigo.imai@gmail.com
+- Ivan Yanitra iyanitra@tesla-consulting.com
+- alfish2000 alfish2000@gmail.com
+- wuxingzhong qq330332812@gmail.com
+- Adithya Kumar akumar42@protonmail.com
+- Tayo-pasedaRJ 138471223+Tayo-pasedaRJ@users.noreply.github.com
+- Peter Kreuser logo@kreuser.name
+- Piyush
+- fotile96 fotile96@users.noreply.github.com
+- Luc Ritchie luc.ritchie@gmail.com
+- cynful cynful@users.noreply.github.com
+- wjielai wjielai@tencent.com
+- Jack Deng jackdeng@gmail.com
+- Mikubill 31246794+Mikubill@users.noreply.github.com
+- Artur Neumann artur@jankaritech.com
+- Saw-jan saw.jan.grg3e@gmail.com
+- Oksana Zhykina o.zhykina@maytech.net
+- karan karan.gupta92@gmail.com
+- viktor viktor@yakovchuk.net
+- moongdal moongdal@tutanota.com
+- Mina Galić freebsd@igalic.co
+- Alen Šiljak dev@alensiljak.eu.org
+- 你知道未来吗 rkonfj@gmail.com
+- Abhinav Dhiman 8640877+ahnv@users.noreply.github.com
Contact the rclone project
diff --git a/bin/make_manual.py b/bin/make_manual.py
index 9c5866325..94d3da215 100755
--- a/bin/make_manual.py
+++ b/bin/make_manual.py
@@ -54,7 +54,7 @@ docs = [
"internetarchive.md",
"jottacloud.md",
"koofr.md",
- "linkbox.md"
+ "linkbox.md",
"mailru.md",
"mega.md",
"memory.md",
diff --git a/docs/content/amazonclouddrive.md b/docs/content/amazonclouddrive.md
index d7ddec191..cffc75e1f 100644
--- a/docs/content/amazonclouddrive.md
+++ b/docs/content/amazonclouddrive.md
@@ -303,7 +303,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ACD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md
index 6436c5d23..0e4bfd7f2 100644
--- a/docs/content/azureblob.md
+++ b/docs/content/azureblob.md
@@ -765,7 +765,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
#### --azureblob-public-access
diff --git a/docs/content/b2.md b/docs/content/b2.md
index 4c49ac45c..7f822b42f 100644
--- a/docs/content/b2.md
+++ b/docs/content/b2.md
@@ -508,7 +508,7 @@ Properties:
- Config: upload_concurrency
- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
- Type: int
-- Default: 16
+- Default: 4
#### --b2-disable-checksum
@@ -588,6 +588,37 @@ Properties:
- Type: bool
- Default: false
+#### --b2-lifecycle
+
+Set the number of days deleted files should be kept when creating a bucket.
+
+On bucket creation, this parameter is used to create a lifecycle rule
+for the entire bucket.
+
+If lifecycle is 0 (the default) it does not create a lifecycle rule so
+the default B2 behaviour applies. This is to create versions of files
+on delete and overwrite and to keep them indefinitely.
+
+If lifecycle is >0 then it creates a single rule setting the number of
+days before a file that is deleted or overwritten is deleted
+permanently. This is known as daysFromHidingToDeleting in the b2 docs.
+
+The minimum value for this parameter is 1 day.
+
+You can also enable hard_delete in the config, which will mean
+deletions won't cause versions but overwrites will still cause
+versions to be made.
+
+See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket creation.
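+
+For example, creating a bucket where deleted files are permanently
+removed after 30 days might look like this (the bucket name is
+illustrative):
+
+    rclone mkdir --b2-lifecycle 30 b2:my-bucket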
+
+
+Properties:
+
+- Config: lifecycle
+- Env Var: RCLONE_B2_LIFECYCLE
+- Type: int
+- Default: 0
+
#### --b2-encoding
The encoding for the backend.
@@ -598,9 +629,76 @@ Properties:
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+## Backend commands
+
+Here are the commands specific to the b2 backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](/rc/#backend-command).
+
+### lifecycle
+
+Read or set the lifecycle for a bucket
+
+ rclone backend lifecycle remote: [options] [+]
+
+This command can be used to read or set the lifecycle for a bucket.
+
+Usage Examples:
+
+To show the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket
+
+This will dump something like this showing the lifecycle rules.
+
+ [
+ {
+ "daysFromHidingToDeleting": 1,
+ "daysFromUploadingToHiding": null,
+ "fileNamePrefix": ""
+ }
+ ]
+
+If there are no lifecycle rules (the default) then it will just return [].
+
+To reset the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
+ rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
+
+This will run and then print the new lifecycle rules as above.
+
+Rclone only lets you set lifecycles for the whole bucket with the
+fileNamePrefix = "".
+
+You can't disable versioning with B2. The best you can do is to set
+the daysFromHidingToDeleting to 1 day. You can also enable hard_delete in
+the config, which will mean deletions won't cause versions but
+overwrites will still cause versions to be made.
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
+
+See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
+
+
+Options:
+
+- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
+- "daysFromUploadingToHiding": This many days after uploading a file is hidden
+
{{< rem autogenerated options stop >}}
## Limitations
diff --git a/docs/content/box.md b/docs/content/box.md
index 576db2b03..9e35c5c4f 100644
--- a/docs/content/box.md
+++ b/docs/content/box.md
@@ -470,7 +470,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index ecc5bc5af..17387e5b0 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -5,6 +5,108 @@ description: "Rclone Changelog"
# Changelog
+## v1.65.0 - 2023-11-26
+
+[See commits](https://github.com/rclone/rclone/compare/v1.64.0...v1.65.0)
+
+* New backends
+ * Azure Files (karan, moongdal, Nick Craig-Wood)
+ * ImageKit (Abhinav Dhiman)
+ * Linkbox (viktor, Nick Craig-Wood)
+* New commands
+ * `serve s3`: Let rclone act as an S3 compatible server (Mikubill, Artur Neumann, Saw-jan, Nick Craig-Wood)
+ * `nfsmount`: mount command to provide mount mechanism on macOS without FUSE (Saleh Dindar)
+ * `serve nfs`: to serve a remote for use by `nfsmount` (Saleh Dindar)
+* New Features
+ * install.sh: Clean up temp files in install script (Jacob Hands)
+ * build
+ * Update all dependencies (Nick Craig-Wood)
+ * Refactor version info and icon resource handling on windows (albertony)
+ * doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos, Herby Gillot, Joda Stößer, Manoj Ghosh, Nick Craig-Wood)
+ * Implement `--metadata-mapper` to transform metadata with a user supplied program (Nick Craig-Wood)
+ * Add `ChunkWriterDoesntSeek` feature flag and set it for b2 (Nick Craig-Wood)
+ * lib/http: Export basic go string functions for use in `--template` (Gabriel Espinoza)
+ * makefile: Use POSIX compatible install arguments (Mina Galić)
+ * operations
+ * Use less memory when doing multithread uploads (Nick Craig-Wood)
+ * Implement `--partial-suffix` to control extension of temporary file names (Volodymyr)
+ * rc
+ * Add `operations/check` to the rc API (Nick Craig-Wood)
+ * Always report an error as JSON (Nick Craig-Wood)
+ * Set `Last-Modified` header for files served by `--rc-serve` (Nikita Shoshin)
+ * size: Don't show duplicate object count when less than 1k (albertony)
+* Bug Fixes
+ * fshttp: Fix `--contimeout` being ignored (你知道未来吗)
+ * march: Fix excessive parallelism when using `--no-traverse` (Nick Craig-Wood)
+ * ncdu: Fix crash when re-entering changed directory after rescan (Nick Craig-Wood)
+ * operations
+ * Fix overwrite of destination when multi-thread transfer fails (Nick Craig-Wood)
+ * Fix invalid UTF-8 when truncating file names when not using `--inplace` (Nick Craig-Wood)
+ * serve dlna: Fix crash on graceful exit (wuxingzhong)
+* Mount
+ * Disable mount for freebsd and alias cmount as mount on that platform (Nick Craig-Wood)
+* VFS
+ * Add `--vfs-refresh` flag to read all the directories on start (Beyond Meat)
+ * Implement Name() method in WriteFileHandle and ReadFileHandle (Saleh Dindar)
+ * Add go-billy dependency and make sure vfs.Handle implements billy.File (Saleh Dindar)
+ * Error out early if can't upload 0 length file (Nick Craig-Wood)
+* Local
+ * Fix copying from Windows Volume Shadows (Nick Craig-Wood)
+* Azure Blob
+ * Add support for cold tier (Ivan Yanitra)
+* B2
+ * Implement "rclone backend lifecycle" to read and set bucket lifecycles (Nick Craig-Wood)
+ * Implement `--b2-lifecycle` to control lifecycle when creating buckets (Nick Craig-Wood)
+ * Fix listing all buckets when not needed (Nick Craig-Wood)
+ * Fix multi-thread upload with copyto going to wrong name (Nick Craig-Wood)
+ * Fix server side chunked copy when file size was exactly `--b2-copy-cutoff` (Nick Craig-Wood)
+    * Fix streaming chunked files that are an exact multiple of chunk size (Nick Craig-Wood)
+* Box
+ * Filter more EventIDs when polling (David Sze)
+ * Add more logging for polling (David Sze)
+ * Fix performance problem reading metadata for single files (Nick Craig-Wood)
+* Drive
+ * Add read/write metadata support (Nick Craig-Wood)
+ * Add support for SHA-1 and SHA-256 checksums (rinsuki)
+ * Add `--drive-show-all-gdocs` to allow unexportable gdocs to be server side copied (Nick Craig-Wood)
+ * Add a note that `--drive-scope` accepts comma-separated list of scopes (Keigo Imai)
+ * Fix error updating created time metadata on existing object (Nick Craig-Wood)
+ * Fix integration tests by enabling metadata support from the context (Nick Craig-Wood)
+* Dropbox
+ * Factor batcher into lib/batcher (Nick Craig-Wood)
+ * Fix missing encoding for rclone purge (Nick Craig-Wood)
+* Google Cloud Storage
+ * Fix 400 Bad request errors when using multi-thread copy (Nick Craig-Wood)
+* Googlephotos
+ * Implement batcher for uploads (Nick Craig-Wood)
+* Hdfs
+ * Added support for list of namenodes in hdfs remote config (Tayo-pasedaRJ)
+* HTTP
+ * Implement set backend command to update running backend (Nick Craig-Wood)
+ * Enable methods used with WebDAV (Alen Šiljak)
+* Jottacloud
+ * Add support for reading and writing metadata (albertony)
+* Onedrive
+ * Implement ListR method which gives `--fast-list` support (Nick Craig-Wood)
+ * This must be enabled with the `--onedrive-delta` flag
+* Quatrix
+ * Add partial upload support (Oksana Zhykina)
+ * Overwrite files on conflict during server-side move (Oksana Zhykina)
+* S3
+ * Add Linode provider (Nick Craig-Wood)
+ * Add docs on how to add a new provider (Nick Craig-Wood)
+ * Fix no error being returned when creating a bucket we don't own (Nick Craig-Wood)
+ * Emit a debug message if anonymous credentials are in use (Nick Craig-Wood)
+ * Add `--s3-disable-multipart-uploads` flag (Nick Craig-Wood)
+ * Detect looping when using gcs and versions (Nick Craig-Wood)
+* SFTP
+ * Implement `--sftp-copy-is-hardlink` to server side copy as hardlink (Nick Craig-Wood)
+* Smb
+ * Fix incorrect `about` size by switching to `github.com/cloudsoda/go-smb2` fork (Nick Craig-Wood)
+ * Fix modtime of multithread uploads by setting PartialUploads (Nick Craig-Wood)
+* WebDAV
+ * Added an rclone vendor to work with `rclone serve webdav` (Adithya Kumar)
+
## v1.64.2 - 2023-10-19
[See commits](https://github.com/rclone/rclone/compare/v1.64.1...v1.64.2)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index b9f0f7be4..e1489389d 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -30,7 +30,7 @@ rclone [flags]
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
- --acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
@@ -38,7 +38,7 @@ rclone [flags]
--alias-remote string Remote or path to alias
--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
- --azureblob-access-tier string Access tier of blob: hot, cool or archive
+ --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
--azureblob-account string Azure Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
@@ -49,7 +49,7 @@ rclone [flags]
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
- --azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+ --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
--azureblob-key string Storage Account Shared Key
@@ -69,18 +69,43 @@ rclone [flags]
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
+ --azurefiles-account string Azure Storage Account Name
+ --azurefiles-chunk-size SizeSuffix Upload chunk size (default 4Mi)
+ --azurefiles-client-certificate-password string Password for the certificate file (optional) (obscured)
+ --azurefiles-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key
+ --azurefiles-client-id string The ID of the client in use
+ --azurefiles-client-secret string One of the service principal's client secrets
+ --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
+ --azurefiles-endpoint string Endpoint for the service
+ --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
+ --azurefiles-key string Storage Account Shared Key
+ --azurefiles-max-stream-size SizeSuffix Max size for streamed files (default 10Gi)
+ --azurefiles-msi-client-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-password string The user's password (obscured)
+ --azurefiles-sas-url string SAS URL
+ --azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
+ --azurefiles-share-name string Azure Files Share Name
+ --azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
+ --azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
+ --azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
- --b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
+ --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
- --b2-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
@@ -93,7 +118,7 @@ rclone [flags]
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
- --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-impersonate string Impersonate this user ID when using a service account
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
@@ -135,7 +160,7 @@ rclone [flags]
--chunker-remote string Remote to chunk/unchunk
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
- --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO")
+ --color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
--combine-upstreams SpaceSepList Upstreams for combining
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--compress-level int GZIP compression level (-2 to 9) (default -1)
@@ -158,7 +183,7 @@ rclone [flags]
--crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead
--crypt-show-mapping For all files listed show how the names encrypt
--crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin")
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
@@ -176,7 +201,7 @@ rclone [flags]
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
--drive-disable-http2 Disable drive using http2 (default true)
- --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
+ --drive-encoding Encoding The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true)
@@ -185,17 +210,21 @@ rclone [flags]
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
+ --drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
+ --drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-resource-key string Resource key for accessing a link-shared file
--drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive
+ --drive-scope string Comma separated list of scopes that rclone should use when requesting access from drive
--drive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
+ --drive-show-all-gdocs Show all Google Docs including non-exportable ones in listings
--drive-size-as-quota Show sizes as storage quota usage, not actual size
- --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
+ --drive-skip-checksum-gphotos Skip checksums on Google photos and videos only
--drive-skip-dangling-shortcuts If set skip dangling shortcut files
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
@@ -219,7 +248,7 @@ rclone [flags]
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
- --dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-shared-files Instructs rclone to work on individual shared files
@@ -228,7 +257,7 @@ rclone [flags]
--dropbox-token-url string Token server url
-n, --dry-run Do a trial run with no permanent changes
--dscp string Set DSCP value to connections, value or name, e.g. CS1, LE, DF, AF21
- --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--error-on-no-transfer Sets exit code 9 if no files are transferred, useful in scripts
@@ -239,11 +268,11 @@ rclone [flags]
--fast-list Use recursive list if available; uses more memory but fewer transactions
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-cdn Set if you wish to use CDN download links
- --fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
- --filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
@@ -263,7 +292,7 @@ rclone [flags]
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
- --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+ --ftp-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
@@ -285,7 +314,7 @@ rclone [flags]
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
- --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
@@ -298,9 +327,13 @@ rclone [flags]
--gcs-token-url string Token server url
--gcs-user-project string User project
--gphotos-auth-url string Auth server URL
+ --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
+ --gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
+ --gphotos-batch-size int Max number of files in upload batch
+ --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
- --gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
@@ -312,8 +345,8 @@ rclone [flags]
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
- --hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
- --hdfs-namenode string Hadoop name node and port
+ --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
+ --hdfs-namenode CommaSepList Hadoop name nodes and ports
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--header stringArray Set HTTP header for all transactions
@@ -325,7 +358,7 @@ rclone [flags]
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
- --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
--hidrive-root-prefix string The root/parent folder for all paths (default "/")
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
@@ -344,8 +377,15 @@ rclone [flags]
--ignore-checksum Skip post copy check of checksums
--ignore-errors Delete even if there are I/O errors
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
+ --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
+ --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+      --imagekit-only-signed Restrict unsigned image URLs: if you have configured "Restrict unsigned image URLs" in your dashboard settings, set this to true
+ --imagekit-private-key string You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-public-key string You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2"
+ --imagekit-versions Include old versions in directory listings
--immutable Do not modify files, fail if existing files have been modified
--include stringArray Include files matching pattern
--include-from stringArray Read file include patterns from file (use - to read from stdin)
@@ -353,7 +393,7 @@ rclone [flags]
-i, --interactive Enable interactive mode
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
- --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
+ --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-secret-access-key string IAS3 Secret Key (password)
@@ -361,7 +401,7 @@ rclone [flags]
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
- --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
+ --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -369,7 +409,7 @@ rclone [flags]
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
       --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
- --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
@@ -377,10 +417,11 @@ rclone [flags]
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
--kv-lock-time Duration Maximum time to keep key-value database locked by process (default 1s)
+ --linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
- --local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --local-encoding Encoding The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
@@ -390,14 +431,14 @@ rclone [flags]
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--low-level-retries int Number of low level retries to do (default 10)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
- --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
@@ -416,7 +457,7 @@ rclone [flags]
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
--mega-debug Output more debug from Mega
- --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
@@ -429,6 +470,7 @@ rclone [flags]
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+      --metadata-mapper SpaceSepList Program to run to transform metadata before upload
--metadata-set stringArray Add metadata key=value when uploading
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
@@ -447,7 +489,7 @@ rclone [flags]
--no-gzip-encoding Don't set Accept-Encoding: gzip
--no-traverse Don't traverse destination file system on copy
--no-unicode-normalization Don't normalize unicode characters in filenames
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
-x, --one-file-system Don't cross filesystem boundaries (unix/macOS only)
--onedrive-access-scopes SpaceSepList Set scopes to be requested by rclone (default Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access)
--onedrive-auth-url string Auth server URL
@@ -455,9 +497,10 @@ rclone [flags]
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
+ --onedrive-delta If set rclone will use delta listing to implement recursive listings
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
- --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
@@ -478,7 +521,7 @@ rclone [flags]
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s)
--oos-disable-checksum Don't store MD5 checksum with object metadata
- --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
@@ -495,15 +538,16 @@ rclone [flags]
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
- --opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
+ --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--password-command SpaceSepList Command for supplying password for encrypted configuration
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
- --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
@@ -513,7 +557,7 @@ rclone [flags]
--pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
- --pikpak-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
@@ -525,7 +569,7 @@ rclone [flags]
--premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
- --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url
-P, --progress Show progress during transfer
@@ -533,7 +577,7 @@ rclone [flags]
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
- --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton account (obscured)
@@ -542,13 +586,13 @@ rclone [flags]
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
- --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
- --qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
+ --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
       --qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
@@ -557,7 +601,7 @@ rclone [flags]
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
- --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
@@ -604,7 +648,7 @@ rclone [flags]
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
- --s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --s3-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
@@ -638,14 +682,16 @@ rclone [flags]
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
+ --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
+ --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
- --seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
+ --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
@@ -656,6 +702,7 @@ rclone [flags]
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
+ --sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@@ -690,7 +737,7 @@ rclone [flags]
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
- --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-token string OAuth Access Token as a JSON blob
@@ -698,13 +745,13 @@ rclone [flags]
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
- --sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
+ --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
- --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
--smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -714,7 +761,7 @@ rclone [flags]
--smb-user string SMB username (default "$USER")
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-log-level LogLevel Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default INFO)
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
--stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
@@ -732,7 +779,7 @@ rclone [flags]
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
- --sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
+ --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
@@ -746,7 +793,7 @@ rclone [flags]
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
+ --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
@@ -778,13 +825,13 @@ rclone [flags]
--union-upstreams string List of space separated upstreams
-u, --update Skip files that are newer on the destination
--uptobox-access-token string Your access token
- --uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
+ --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--use-cookies Enable session cookiejar
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
@@ -800,14 +847,14 @@ rclone [flags]
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
- --yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
- --zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
+ --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
@@ -821,7 +868,7 @@ rclone [flags]
* [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
-* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the source against a SUM file.
+* [rclone checksum](/commands/rclone_checksum/) - Checks the files in the destination against a SUM file.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible.
* [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell.
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
diff --git a/docs/content/commands/rclone_bisync.md b/docs/content/commands/rclone_bisync.md
index 7bde4cce5..057e31848 100644
--- a/docs/content/commands/rclone_bisync.md
+++ b/docs/content/commands/rclone_bisync.md
@@ -59,11 +59,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -78,11 +78,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
diff --git a/docs/content/commands/rclone_checksum.md b/docs/content/commands/rclone_checksum.md
index 6da448b32..1e78aeb65 100644
--- a/docs/content/commands/rclone_checksum.md
+++ b/docs/content/commands/rclone_checksum.md
@@ -1,6 +1,6 @@
---
title: "rclone checksum"
-description: "Checks the files in the source against a SUM file."
+description: "Checks the files in the destination against a SUM file."
slug: rclone_checksum
url: /commands/rclone_checksum/
groups: Filter,Listing
@@ -9,17 +9,20 @@ versionIntroduced: v1.56
---
# rclone checksum
-Checks the files in the source against a SUM file.
+Checks the files in the destination against a SUM file.
## Synopsis
-Checks that hashsums of source files match the SUM file.
+Checks that hashsums of destination files match the SUM file.
It compares hashes (MD5, SHA1, etc) and logs a report of files which
don't match. It doesn't alter the file system.
-If you supply the `--download` flag, it will download the data from remote
-and calculate the contents hash on the fly. This can be useful for remotes
+The sumfile is treated as the source and the dst:path is treated as
+the destination for the purposes of the output.
+
+If you supply the `--download` flag, it will download the data from the remote
+and calculate the content hash on the fly. This can be useful for remotes
that don't support hashes or if you really want to check all the data.
Note that hash values in the SUM file are treated as case insensitive.
@@ -50,7 +53,7 @@ option for more information.
```
-rclone checksum sumfile src:path [flags]
+rclone checksum sumfile dst:path [flags]
```
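As a hedged illustration of the synopsis above (the SUM file name and remote are placeholders, and the leading hash type argument `sha1` is an assumption; check `rclone checksum --help` for the exact argument order):

```
# Check destination files against a SHA1 SUM file
rclone checksum sha1 SHA1SUMS remote:backup

# Download the data and hash it on the fly instead of using stored hashes
rclone checksum sha1 SHA1SUMS remote:backup --download
```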
## Options
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index 061c2b2f4..315dc4b8a 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -91,11 +91,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -110,11 +110,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
index 8aa99aed1..30a42e53f 100644
--- a/docs/content/commands/rclone_copyto.md
+++ b/docs/content/commands/rclone_copyto.md
@@ -63,11 +63,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -82,11 +82,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md
index b3b39ab84..b3392c01b 100644
--- a/docs/content/commands/rclone_hashsum.md
+++ b/docs/content/commands/rclone_hashsum.md
@@ -40,10 +40,6 @@ Run without a hash to see the list of all supported hashes, e.g.
* whirlpool
* crc32
* sha256
- * dropbox
- * hidrive
- * mailru
- * quickxor
Then
@@ -53,7 +49,7 @@ Note that hash names are case insensitive and values are output in lower case.
```
-rclone hashsum remote:path [flags]
+rclone hashsum [<hash> remote:path] [flags]
```
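For example (a short sketch; `remote:path` is a placeholder):

```
# Run without arguments to list the supported hash types
rclone hashsum

# Print MD5 sums for all files under the given path
rclone hashsum MD5 remote:path
```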
## Options
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index ec85e1f9c..c22fb2e7e 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -13,7 +13,6 @@ Mount the remote as file system on a mountpoint.
## Synopsis
-
rclone mount allows Linux, FreeBSD, macOS and Windows to
mount any of Rclone's cloud storage systems as a file system with
FUSE.
@@ -268,11 +267,17 @@ does not suffer from the same limitations.
## Mounting on macOS
-Mounting on macOS can be done either via [macFUSE](https://osxfuse.github.io/)
+Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/)
(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional
FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system
which "mounts" via an NFSv4 local server.
+### NFS mount
+
+This method spins up an NFS server using the [serve nfs](/commands/rclone_serve_nfs/) command and mounts
+it to the specified mountpoint. If you run this in background mode using `--daemon`, you will need to
+send a SIGTERM signal to the rclone process using the `kill` command to stop the mount.
+
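A minimal sketch of this workflow, assuming the new `nfsmount` command and a placeholder remote and mountpoint (exact flags may differ, see the command help):

```
# Mount a remote on macOS via the built-in NFS mechanism, in the background
rclone nfsmount remote: /path/to/mountpoint --daemon

# Stop the mount by sending SIGTERM to the rclone process
# (pgrep is just one way to find the process ID)
kill $(pgrep -f "rclone nfsmount")
```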
### macFUSE Notes
If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from
@@ -322,6 +327,8 @@ sequentially, it can only seek when reading. This means that many
applications won't work with their files on an rclone mount without
`--vfs-cache-mode writes` or `--vfs-cache-mode full`.
See the [VFS File Caching](#vfs-file-caching) section for more info.
+When using NFS mount on macOS, if you don't specify `--vfs-cache-mode`
+the mount point will be read-only.
The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2)
do not support the concept of empty directories, so empty
@@ -468,7 +475,6 @@ Mount option syntax includes a few extra options treated specially:
- `vv...` will be transformed into appropriate `--verbose=N`
- standard mount options like `x-systemd.automount`, `_netdev`, `nosuid` and alike
are intended only for Automountd and ignored by rclone.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -850,6 +856,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index 4ce4fd55c..bc6efddde 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -67,11 +67,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -86,11 +86,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index 074332ea7..79f3e3420 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -66,11 +66,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -85,11 +85,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
diff --git a/docs/content/commands/rclone_rcd.md b/docs/content/commands/rclone_rcd.md
index 749ed959b..73a40c42b 100644
--- a/docs/content/commands/rclone_rcd.md
+++ b/docs/content/commands/rclone_rcd.md
@@ -96,6 +96,17 @@ to be used within the template to server pages:
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
+The server also makes the following functions available so that they can be used within the
+template. These functions help extend the options for dynamic rendering of HTML. They can
+be used to render HTML based on specific conditions.
+
+| Function | Description |
+| :---------- | :---------- |
+| afterEpoch | Returns the time since the epoch for the given time. |
+| contains | Checks whether a given substring is present or not in a given string. |
+| hasPrefix | Checks whether the given string begins with the specified prefix. |
+| hasSuffix | Checks whether the given string ends with the specified suffix. |
+
### Authentication
By default this will serve files without needing a login.
diff --git a/docs/content/commands/rclone_selfupdate.md b/docs/content/commands/rclone_selfupdate.md
index 2942215c5..82994370e 100644
--- a/docs/content/commands/rclone_selfupdate.md
+++ b/docs/content/commands/rclone_selfupdate.md
@@ -12,7 +12,6 @@ Update the rclone binary.
## Synopsis
-
This command downloads the latest release of rclone and replaces the
currently running binary. The download is verified with a hashsum and
cryptographically signed signature; see [the release signing
diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md
index 39854a7fc..211472b89 100644
--- a/docs/content/commands/rclone_serve.md
+++ b/docs/content/commands/rclone_serve.md
@@ -40,7 +40,9 @@ See the [global flags page](/flags/) for global options not listed here.
* [rclone serve docker](/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API.
* [rclone serve ftp](/commands/rclone_serve_ftp/) - Serve remote:path over FTP.
* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
+* [rclone serve nfs](/commands/rclone_serve_nfs/) - Serve the remote as an NFS mount
* [rclone serve restic](/commands/rclone_serve_restic/) - Serve the remote for restic's REST API.
+* [rclone serve s3](/commands/rclone_serve_s3/) - Serve remote:path over s3.
* [rclone serve sftp](/commands/rclone_serve_sftp/) - Serve the remote over SFTP.
* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over WebDAV.
diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md
index b6f29a55e..abe864473 100644
--- a/docs/content/commands/rclone_serve_dlna.md
+++ b/docs/content/commands/rclone_serve_dlna.md
@@ -36,7 +36,6 @@ default "rclone (hostname)".
Use `--log-trace` in conjunction with `-vv` to enable additional debug
logging of all UPNP traffic.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -405,6 +404,7 @@ rclone serve dlna remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_docker.md b/docs/content/commands/rclone_serve_docker.md
index 001fc3bbb..0366cf6c9 100644
--- a/docs/content/commands/rclone_serve_docker.md
+++ b/docs/content/commands/rclone_serve_docker.md
@@ -13,7 +13,6 @@ Serve any remote on docker's volume plugin API.
## Synopsis
-
This command implements the Docker volume plugin API allowing docker to use
rclone as a data storage mechanism for various cloud providers.
rclone provides [docker volume plugin](/docker) based on it.
@@ -52,7 +51,6 @@ directory with book-keeping records of created and mounted volumes.
All mount and VFS options are submitted by the docker daemon via API, but
you can also provide defaults on the command line as well as set path to the
config file and cache directory or adjust logging verbosity.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -439,6 +437,7 @@ rclone serve docker [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md
index 9d18cb384..6ba8142e9 100644
--- a/docs/content/commands/rclone_serve_ftp.md
+++ b/docs/content/commands/rclone_serve_ftp.md
@@ -33,7 +33,6 @@ then using Authentication is advised - see the next section for info.
By default this will serve files without needing a login.
You can set a single username and password with the --user and --pass flags.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -486,6 +485,7 @@ rclone serve ftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index 4df56e77c..f599e5284 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -97,6 +97,17 @@ to be used within the template to server pages:
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
+The server also makes the following functions available so that they can be used within the
+template. These functions help extend the options for dynamic rendering of HTML. They can
+be used to render HTML based on specific conditions.
+
+| Function | Description |
+| :---------- | :---------- |
+| afterEpoch | Returns the time since the epoch for the given time. |
+| contains | Checks whether a given substring is present or not in a given string. |
+| hasPrefix | Checks whether the given string begins with the specified prefix. |
+| hasSuffix | Checks whether the given string ends with the specified suffix. |
+
### Authentication
By default this will serve files without needing a login.
@@ -123,7 +134,6 @@ The password file can be updated while rclone is running.
Use `--realm` to set the authentication realm.
Use `--salt` to change the password hashing salt from the default.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -585,6 +595,7 @@ rclone serve http remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_nfs.md b/docs/content/commands/rclone_serve_nfs.md
new file mode 100644
index 000000000..bbd28ccf8
--- /dev/null
+++ b/docs/content/commands/rclone_serve_nfs.md
@@ -0,0 +1,450 @@
+---
+title: "rclone serve nfs"
+description: "Serve the remote as an NFS mount"
+slug: rclone_serve_nfs
+url: /commands/rclone_serve_nfs/
+groups: Filter
+versionIntroduced: v1.65
+# autogenerated - DO NOT EDIT, instead edit the source code in cmd/serve/nfs/ and as part of making a release run "make commanddocs"
+---
+# rclone serve nfs
+
+Serve the remote as an NFS mount
+
+## Synopsis
+
+Create an NFS server that serves the given remote over the network.
+
+The primary purpose for this command is to enable the [mount command](/commands/rclone_mount/) on recent macOS versions where
+installing FUSE is very cumbersome.
+
+Since this is running on NFSv3, no authentication method is available. Any client
+will be able to access the data. To limit access, you can serve NFS on a loopback address
+and rely on secure tunnels (such as SSH). For this reason, by default, a random TCP port is chosen and the loopback interface is used for the listening address,
+meaning that it is only available to the local machine. If you want other machines to access the
+NFS mount over the local network, you need to specify the listening address and port using the `--addr` flag.
+
+Modifying files through NFS protocol requires VFS caching. Usually you will need to specify `--vfs-cache-mode`
+in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode,
+the mount will be read-only.
+
+To serve NFS over the network use the following command:
+
+ rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
+
+We specify a specific port so that the same port can be used in the mount command.
+
+To mount the server under Linux/macOS, use the following command:
+
+ mount -oport=$PORT,mountport=$PORT $HOSTNAME: path/to/mountpoint
+
+Where `$PORT` is the same port number we used in the serve nfs command.
+
+This feature is only available on Unix platforms.
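+
+As an illustrative example (the port number and mountpoint are
+placeholders, not defaults), serving on a fixed loopback port and
+mounting it locally could look like this:
+
+    rclone serve nfs remote: --addr 127.0.0.1:12345 --vfs-cache-mode=full
+    mount -oport=12345,mountport=12345 127.0.0.1: /path/to/mountpoint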
+
+## VFS - Virtual File System
+
+This command uses the VFS layer. This adapts the cloud storage objects
+that rclone uses into something which looks much more like a disk
+filing system.
+
+Cloud storage objects have lots of properties which aren't like disk
+files - you can't extend them or write to the middle of them, so the
+VFS layer has to deal with that. Because there is no one right way of
+doing this there are various options explained below.
+
+The VFS layer also implements a directory cache - this caches info
+about files and directories (but not the data) in memory.
+
+## VFS Directory Cache
+
+Using the `--dir-cache-time` flag, you can control how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made through the VFS will appear immediately or
+invalidate the cache.
+
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+
+However, changes made directly on the cloud storage by the web
+interface or a different copy of rclone will only be picked up once
+the directory cache expires if the backend configured does not support
+polling for changes. If the backend supports polling, changes will be
+picked up within the polling interval.
+
+You can send a `SIGHUP` signal to rclone for it to flush all
+directory caches, regardless of how old they are. Assuming only one
+rclone instance is running, you can reset the cache like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+If you configure rclone with a [remote control](/rc) then you can use
+rclone rc to flush the whole directory cache:
+
+ rclone rc vfs/forget
+
+Or individual files or directories:
+
+ rclone rc vfs/forget file=path/to/file dir=path/to/dir
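+
+As an illustrative example (the values are placeholders, not
+recommendations), both timers could be shortened for a remote that
+changes frequently:
+
+    rclone serve nfs remote: --dir-cache-time 30s --poll-interval 10s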
+
+## VFS File Buffering
+
+The `--buffer-size` flag determines the amount of memory
+that will be used to buffer data in advance.
+
+Each open file will try to keep the specified amount of data in memory
+at all times. The buffered data is bound to one open file and won't be
+shared.
+
+This flag is an upper limit for the used memory per open file. The
+buffer will only use memory for data that is downloaded but not
+yet read. If the buffer is empty, only a small amount of memory will
+be used.
+
+The maximum memory used by rclone for buffering can be up to
+`--buffer-size * open files`.
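+
+As a rough worked example (the numbers are only illustrative): with
+`--buffer-size 16M` and 10 files open at once, buffering alone could
+use up to 16M * 10 = 160 MiB of memory.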
+
+## VFS File Caching
+
+These flags control the VFS file caching options. File caching is
+necessary to make the VFS layer appear compatible with a normal file
+system. It can be disabled at the cost of some compatibility.
+
+For example you'll need to enable VFS caching if you want to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed and if they haven't been accessed for `--vfs-write-back`
+seconds. If rclone is quit or dies with files that haven't been
+uploaded, these will be uploaded next time rclone is run with the same
+flags.
+
+If using `--vfs-cache-max-size` or `--vfs-cache-min-free-space` note
+that the cache may exceed these quotas for two reasons. Firstly
+because it is only checked every `--vfs-cache-poll-interval`. Secondly
+because open files cannot be evicted from the cache. When
+`--vfs-cache-max-size` or `--vfs-cache-min-free-space` is exceeded,
+rclone will attempt to evict the least accessed files from the cache
+first. rclone will start with files that haven't been accessed for the
+longest. This cache flushing strategy is efficient and more relevant
+files are likely to remain cached.
+
+The `--vfs-cache-max-age` will evict files from the cache
+after the set time since last access has passed. The default value of
+1 hour will start evicting files from cache that haven't been accessed
+for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0
+and will wait for 1 more hour before evicting. Specify the time with
+standard notation, s, m, h, d, w.
+
+You **should not** run two copies of rclone using the same VFS cache
+with the same or overlapping remotes if using `--vfs-cache-mode > off`.
+This can potentially cause data corruption if you do. You can work
+around this by giving each rclone its own cache hierarchy with
+`--cache-dir`. You don't need to worry about this if the remotes in
+use don't overlap.
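+
+For example, two rclone instances sharing a machine could each be
+given their own cache like this (the remote and directory names are
+only illustrative):
+
+    rclone serve nfs remote:projects --vfs-cache-mode writes --cache-dir ~/.cache/rclone-projects
+    rclone serve nfs remote:photos --vfs-cache-mode writes --cache-dir ~/.cache/rclone-photos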
+
+### --vfs-cache-mode off
+
+In this mode (the default) the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, but uses the minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+### --vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from
+the remote, write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+
+### --vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When
+data is read from the remote this is buffered to disk as well.
+
+In this mode the files in the cache will be sparse files and rclone
+will keep track of which bits of the files it has downloaded.
+
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file. These files will appear to be
+their full size in the cache, but they will be sparse files with only
+the data that has been downloaded present in them.
+
+This mode should support all normal file system operations and is
+otherwise identical to `--vfs-cache-mode` writes.
+
+When reading a file rclone will read `--buffer-size` plus
+`--vfs-read-ahead` bytes ahead. The `--buffer-size` is buffered in memory
+whereas the `--vfs-read-ahead` is buffered on disk.
+
+When using this mode it is recommended that `--buffer-size` is not set
+too large and `--vfs-read-ahead` is set large if required.
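+
+As an illustrative example (the sizes are placeholders, not
+recommendations), a full-mode serve with a small memory buffer and a
+larger on-disk read-ahead might look like:
+
+    rclone serve nfs remote: --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M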
+
+**IMPORTANT** not all file systems support sparse files. In particular
+FAT/exFAT do not. Rclone will perform very badly if the cache
+directory is on a filesystem which doesn't support sparse files and it
+will log an ERROR message if one is detected.
+
+### Fingerprinting
+
+Various parts of the VFS use fingerprinting to see if a local file
+copy has changed relative to a remote file. Fingerprints are made
+from:
+
+- size
+- modification time
+- hash
+
+where available on an object.
+
+On some backends some of these attributes are slow to read (they take
+an extra API call per object, or extra work per object).
+
+For example `hash` is slow with the `local` and `sftp` backends as
+they have to read the entire file and hash it, and `modtime` is slow
+with the `s3`, `swift`, `ftp` and `qingstor` backends because they
+need to do an extra API call to fetch it.
+
+If you use the `--vfs-fast-fingerprint` flag then rclone will not
+include the slow operations in the fingerprint. This makes the
+fingerprinting less accurate but much faster and will improve the
+opening time of cached files.
+
+If you are running a vfs cache over `local`, `s3` or `swift` backends
+then using this flag is recommended.
+
+Note that if you change the value of this flag, the fingerprints of
+the files in the cache may be invalidated and the files will need to
+be downloaded again.
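+
+For example, when caching an S3 backed remote (the remote name is a
+placeholder), fast fingerprinting could be enabled like this:
+
+    rclone serve nfs s3remote:bucket --vfs-cache-mode full --vfs-fast-fingerprint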
+
+## VFS Chunked Reading
+
+When rclone reads files from a remote it reads them in chunks. This
+means that rather than requesting the whole file rclone reads the
+chunk specified. This can reduce the used download quota for some
+remotes by requesting only chunks from the remote that are actually
+read, at the cost of an increased number of requests.
+
+These flags control the chunking:
+
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+ --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+
+Rclone will start reading a chunk of size `--vfs-read-chunk-size`,
+and then double the size for each read. When `--vfs-read-chunk-size-limit` is
+specified, and greater than `--vfs-read-chunk-size`, the chunk size for each
+open file will get doubled only until the specified value is reached. If the
+value is "off", which is the default, the limit is disabled and the chunk size
+will grow indefinitely.
+
+With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0`
+the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
+When `--vfs-read-chunk-size-limit 500M` is specified, the result would be
+0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+
+Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading.
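+
+As an illustrative example (the sizes are placeholders), reading could
+start with 64M chunks and stop doubling at 512M like this:
+
+    rclone serve nfs remote: --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 512M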
+
+## VFS Performance
+
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons. See also the [chunked reading](#vfs-chunked-reading)
+feature.
+
+In particular S3 and Swift benefit hugely from the `--no-modtime` flag
+(or use `--use-server-modtime` for a slightly different effect) as each
+read of the modification time takes a transaction.
+
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --read-only Only allow read-only access.
+
+Sometimes rclone is delivered reads or writes out of order. Rather
+than seeking rclone will wait a short time for the in sequence read or
+write to come in. These flags only come into effect when not using an
+on disk cache file.
+
+ --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+
+When using VFS write caching (`--vfs-cache-mode` with value writes or full),
+the global flag `--transfers` can be set to adjust the number of parallel uploads of
+modified files from the cache (the related global flag `--checkers` has no effect on the VFS).
+
+ --transfers int Number of file transfers to run in parallel (default 4)
+
+## VFS Case Sensitivity
+
+Linux file systems are case-sensitive: two files can differ only
+by case, and the exact case must be used when opening a file.
+
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case used
+to create the file is preserved and available for programs to query.
+It is not allowed for two files in the same directory to differ only by case.
+
+Usually file systems on macOS are case-insensitive. It is possible to make macOS
+file systems case-sensitive but that is not the default.
+
+The `--vfs-case-insensitive` VFS flag controls how rclone handles these
+two cases. If its value is "false", rclone passes file names to the remote
+as-is. If the flag is "true" (or appears without a value on the
+command line), rclone may perform a "fixup" as explained below.
+
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote. If an argument refers
+to an existing file with exactly the same name, then the case of the existing
+file on the disk will be used. However, if a file name with exactly the same
+name is not found but a name differing only by case exists, rclone will
+transparently fixup the name. This fixup happens only when an existing file
+is requested. Case sensitivity of file names created anew by rclone is
+controlled by the underlying remote.
+
+Note that case sensitivity of the operating system running rclone (the target)
+may differ from case sensitivity of a file system presented by rclone (the source).
+The flag controls whether "fixup" is performed to satisfy the target.
+
+If the flag is not provided on the command line, then its default value depends
+on the operating system where rclone runs: "true" on Windows and macOS, "false"
+otherwise. If the flag is provided without a value, then it is "true".
+
+## VFS Disk Options
+
+This flag allows you to manually set the statistics about the filing system.
+It can be useful when those statistics cannot be read correctly automatically.
+
+ --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+
+## Alternate report of used bytes
+
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running `df` on the
+filesystem, then pass the flag `--vfs-used-is-size` to rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to `rclone size`
+and compute the total used space itself.
+
+_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the
+result is accurate. However, this is very inefficient and may cost lots of API
+calls resulting in extra charges. Use it as a last resort and only with caching.
+
+
+```
+rclone serve nfs remote:path [flags]
+```
+
+## Options
+
+```
+ --addr string IPaddress:Port or :Port to bind server to
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for nfs
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+```
+
+
+## Filter Options
+
+Flags for filtering directory listings.
+
+```
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+```
+
+See the [global flags page](/flags/) for global options not listed here.
+
+# SEE ALSO
+
+* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+
diff --git a/docs/content/commands/rclone_serve_s3.md b/docs/content/commands/rclone_serve_s3.md
index be36e2750..986d31119 100644
--- a/docs/content/commands/rclone_serve_s3.md
+++ b/docs/content/commands/rclone_serve_s3.md
@@ -71,8 +71,19 @@ Note that setting `disable_multipart_uploads = true` is to work around
## Bugs
When uploading multipart files `serve s3` holds all the parts in
-memory. This is a limitaton of the library rclone uses for serving S3
-and will hopefully be fixed at some point.
+memory (see [#7453](https://github.com/rclone/rclone/issues/7453)).
+This is a limitation of the library rclone uses for serving S3 and will
+hopefully be fixed at some point.
+
+Multipart server side copies do not work (see
+[#7454](https://github.com/rclone/rclone/issues/7454)). These take a
+very long time and eventually fail. The default threshold for
+multipart server side copies is 5G which is the maximum it can be, so
+files above this size will fail to be server side copied.
+
+For a current list of `serve s3` bugs see the [serve
+s3](https://github.com/rclone/rclone/labels/serve%20s3) bug category
+on GitHub.
## Limitations
diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md
index f1e60c360..9181e7be4 100644
--- a/docs/content/commands/rclone_serve_sftp.md
+++ b/docs/content/commands/rclone_serve_sftp.md
@@ -65,7 +65,6 @@ used. Omitting "restrict" and using `--sftp-path-override` to enable
checksumming is possible but less secure and you could use the SFTP server
provided by OpenSSH in this case.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -518,6 +517,7 @@ rclone serve sftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index eb13e6bf4..a2f6554b1 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -126,6 +126,17 @@ to be used within the template to server pages:
|-- .Size | Size in Bytes of the entry. |
|-- .ModTime | The UTC timestamp of an entry. |
+The server also makes the following functions available so that they can be used within the
+template. These functions help extend the options for dynamic rendering of HTML. They can
+be used to render HTML based on specific conditions.
+
+| Function | Description |
+| :---------- | :---------- |
+| afterEpoch | Returns the time since the epoch for the given time. |
+| contains | Checks whether a given substring is present or not in a given string. |
+| hasPrefix | Checks whether the given string begins with the specified prefix. |
+| hasSuffix | Checks whether the given string ends with the specified suffix. |
+
### Authentication
By default this will serve files without needing a login.
@@ -152,7 +163,6 @@ The password file can be updated while rclone is running.
Use `--realm` to set the authentication realm.
Use `--salt` to change the password hashing salt from the default.
-
## VFS - Virtual File System
This command uses the VFS layer. This adapts the cloud storage objects
@@ -616,6 +626,7 @@ rclone serve webdav remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index 25c12a08d..8096e01c3 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -70,11 +70,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -89,11 +89,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
diff --git a/docs/content/drive.md b/docs/content/drive.md
index 246430b98..854b185eb 100644
--- a/docs/content/drive.md
+++ b/docs/content/drive.md
@@ -776,6 +776,31 @@ Properties:
- Type: bool
- Default: false
+#### --drive-show-all-gdocs
+
+Show all Google Docs including non-exportable ones in listings.
+
+If you try a server side copy on a Google Form without this flag, you
+will get this error:
+
+ No export formats found for "application/vnd.google-apps.form"
+
+However adding this flag will allow the form to be server side copied.
+
+Note that rclone doesn't add extensions to the Google Docs file names
+in this mode.
+
+Do **not** use this flag when trying to download Google Docs - rclone
+will fail to download them.
+
+
+Properties:
+
+- Config: show_all_gdocs
+- Env Var: RCLONE_DRIVE_SHOW_ALL_GDOCS
+- Type: bool
+- Default: false
+
#### --drive-skip-checksum-gphotos
Skip checksums on Google photos and videos only.
@@ -1238,6 +1263,98 @@ Properties:
- Type: bool
- Default: true
+#### --drive-metadata-owner
+
+Control whether owner should be read or written in metadata.
+
+Owner is a standard part of the file metadata so is easy to read. But it
+isn't always desirable to set the owner from the metadata.
+
+Note that you can't set the owner on Shared Drives, and that setting
+ownership will generate an email to the new owner (this can't be
+disabled), and you can't transfer ownership to someone outside your
+organization.
+
+
+Properties:
+
+- Config: metadata_owner
+- Env Var: RCLONE_DRIVE_METADATA_OWNER
+- Type: Bits
+- Default: read
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
+#### --drive-metadata-permissions
+
+Control whether permissions should be read or written in metadata.
+
+Reading permissions metadata from files can be done quickly, but it
+isn't always desirable to set the permissions from the metadata.
+
+Note that rclone drops any inherited permissions on Shared Drives and
+any owner permission on My Drives as these are duplicated in the owner
+metadata.
+
+
+Properties:
+
+- Config: metadata_permissions
+- Env Var: RCLONE_DRIVE_METADATA_PERMISSIONS
+- Type: Bits
+- Default: off
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
+#### --drive-metadata-labels
+
+Control whether labels should be read or written in metadata.
+
+Reading labels metadata from files takes an extra API transaction and
+will slow down listings. It isn't always desirable to set the labels
+from the metadata.
+
+The format of labels is documented in the drive API documentation at
+https://developers.google.com/drive/api/reference/rest/v3/Label -
+rclone just provides a JSON dump of this format.
+
+When setting labels, the label and fields must already exist - rclone
+will not create them. This means that if you are transferring labels
+between two different accounts you will have to create the labels in
+advance and use the metadata mapper to translate the IDs between the
+two accounts.
+
+
+Properties:
+
+- Config: metadata_labels
+- Env Var: RCLONE_DRIVE_METADATA_LABELS
+- Type: Bits
+- Default: off
+- Examples:
+ - "off"
+ - Do not read or write the value
+ - "read"
+ - Read the value only
+ - "write"
+ - Write the value only
+ - "read,write"
+ - Read and Write the value.
+
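+As a hedged example (the remote name and file are placeholders, and
+this assumes a one-off inspection of a single file's labels), label
+metadata could be listed with something like:
+
+    rclone lsjson --stat -M --drive-metadata-labels read drive:file.txt
+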
#### --drive-encoding
The encoding for the backend.
@@ -1248,7 +1365,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: InvalidUtf8
#### --drive-env-auth
@@ -1269,6 +1386,29 @@ Properties:
- "true"
- Get GCP IAM credentials from the environment (env vars or IAM).
+### Metadata
+
+User metadata is stored in the properties field of the drive object.
+
+Here are the possible system metadata items for the drive backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation) with mS accuracy. Note that this is only writable on fresh uploads - it can't be written for updates. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+| content-type | The MIME type of the file. | string | text/plain | N |
+| copy-requires-writer-permission | Whether the options to copy, print, or download this file, should be disabled for readers and commenters. | boolean | true | N |
+| description | A short description of the file. | string | Contract for signing | N |
+| folder-color-rgb | The color for a folder or a shortcut to a folder as an RGB hex string. | string | 881133 | N |
+| labels | Labels attached to this file in a JSON dump of Google drive format. Enable with --drive-metadata-labels. | JSON | [] | N |
+| mtime | Time of last modification with mS accuracy. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+| owner | The owner of the file. Usually an email address. Enable with --drive-metadata-owner. | string | user@example.com | N |
+| permissions | Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions. | JSON | {} | N |
+| starred | Whether the user has starred the file. | boolean | false | N |
+| viewed-by-me | Whether the file has been viewed by this user. | boolean | true | **Y** |
+| writers-can-share | Whether users with only writer permission can modify the file's permissions. Not populated for items in shared drives. | boolean | false | N |
+
+See the [metadata](/docs/#metadata) docs for more info.
+
## Backend commands
Here are the commands specific to the drive backend.
diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md
index fa20cfe19..91b15688e 100644
--- a/docs/content/dropbox.md
+++ b/docs/content/dropbox.md
@@ -343,6 +343,30 @@ Properties:
- Type: bool
- Default: false
+#### --dropbox-pacer-min-sleep
+
+Minimum time to sleep between API calls.
+
+Properties:
+
+- Config: pacer_min_sleep
+- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
+- Type: Duration
+- Default: 10ms
+
+#### --dropbox-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_DROPBOX_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
+
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@@ -429,30 +453,6 @@ Properties:
- Type: Duration
- Default: 10m0s
-#### --dropbox-pacer-min-sleep
-
-Minimum time to sleep between API calls.
-
-Properties:
-
-- Config: pacer_min_sleep
-- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
-- Type: Duration
-- Default: 10ms
-
-#### --dropbox-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_DROPBOX_ENCODING
-- Type: MultiEncoder
-- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
-
{{< rem autogenerated options stop >}}
## Limitations
diff --git a/docs/content/fichier.md b/docs/content/fichier.md
index 8576cb9aa..b5a824505 100644
--- a/docs/content/fichier.md
+++ b/docs/content/fichier.md
@@ -192,7 +192,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FICHIER_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/filefabric.md b/docs/content/filefabric.md
index 61f2ae815..1666cd6bc 100644
--- a/docs/content/filefabric.md
+++ b/docs/content/filefabric.md
@@ -271,7 +271,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FILEFABRIC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/flags.md b/docs/content/flags.md
index 833980633..4ce4c079f 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -18,11 +18,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default "HARD")
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -37,11 +37,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
- --no-update-modtime Don't update destination mod-time if files identical
+ --no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
```
@@ -111,7 +112,7 @@ General networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.65.0")
```
@@ -134,7 +135,7 @@ General configuration of rclone.
--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
--cache-dir string Directory rclone will use for caching (default "$HOME/.cache/rclone")
- --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default "AUTO")
+ --color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
--config string Config file (default "$HOME/.config/rclone/rclone.conf")
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--disable string Disable a comma separated list of features (use --disable help to see a list)
@@ -163,7 +164,7 @@ Flags for developers.
```
--cpuprofile string Write cpu profile to file
- --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--memprofile string Write memory profile to file
@@ -217,7 +218,7 @@ Logging and statistics.
```
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default "date,time")
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
-P, --progress Show progress during transfer
@@ -225,7 +226,7 @@ Logging and statistics.
-q, --quiet Print as little stuff as possible
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-log-level LogLevel Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default INFO)
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
--stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes ("), see https://golang.org/pkg/time/#Time.Format
@@ -249,6 +250,7 @@ Flags to control metadata.
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --metadata-mapper SpaceSepList Program to run to transform metadata before upload
--metadata-set stringArray Add metadata key=value when uploading
```
@@ -297,13 +299,13 @@ Backend only flags. These can be set in the config file also.
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
- --acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
--acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
--alias-remote string Remote or path to alias
- --azureblob-access-tier string Access tier of blob: hot, cool or archive
+ --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
--azureblob-account string Azure Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
@@ -314,7 +316,7 @@ Backend only flags. These can be set in the config file also.
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
- --azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+ --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
--azureblob-key string Storage Account Shared Key
@@ -334,18 +336,43 @@ Backend only flags. These can be set in the config file also.
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
+ --azurefiles-account string Azure Storage Account Name
+ --azurefiles-chunk-size SizeSuffix Upload chunk size (default 4Mi)
+ --azurefiles-client-certificate-password string Password for the certificate file (optional) (obscured)
+ --azurefiles-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key
+ --azurefiles-client-id string The ID of the client in use
+ --azurefiles-client-secret string One of the service principal's client secrets
+ --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
+ --azurefiles-endpoint string Endpoint for the service
+ --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
+ --azurefiles-key string Storage Account Shared Key
+ --azurefiles-max-stream-size SizeSuffix Max size for streamed files (default 10Gi)
+ --azurefiles-msi-client-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-password string The user's password (obscured)
+ --azurefiles-sas-url string SAS URL
+ --azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
+ --azurefiles-share-name string Azure Files Share Name
+ --azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
+ --azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
+ --azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
- --b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
+ --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
- --b2-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
@@ -356,7 +383,7 @@ Backend only flags. These can be set in the config file also.
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
- --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-impersonate string Impersonate this user ID when using a service account
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
@@ -414,7 +441,7 @@ Backend only flags. These can be set in the config file also.
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
--drive-disable-http2 Disable drive using http2 (default true)
- --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
+ --drive-encoding Encoding The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default "docx,xlsx,pptx,svg")
--drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true)
@@ -423,17 +450,21 @@ Backend only flags. These can be set in the config file also.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
+ --drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
+ --drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-resource-key string Resource key for accessing a link-shared file
--drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive
+ --drive-scope string Comma separated list of scopes that rclone should use when requesting access from drive
--drive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
+ --drive-show-all-gdocs Show all Google Docs including non-exportable ones in listings
--drive-size-as-quota Show sizes as storage quota usage, not actual size
- --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
+ --drive-skip-checksum-gphotos Skip checksums on Google photos and videos only
--drive-skip-dangling-shortcuts If set skip dangling shortcut files
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
@@ -457,7 +488,7 @@ Backend only flags. These can be set in the config file also.
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
- --dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-shared-files Instructs rclone to work on individual shared files
@@ -466,11 +497,11 @@ Backend only flags. These can be set in the config file also.
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-cdn Set if you wish to use CDN download links
- --fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
- --filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
@@ -484,7 +515,7 @@ Backend only flags. These can be set in the config file also.
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
- --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+ --ftp-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
@@ -506,7 +537,7 @@ Backend only flags. These can be set in the config file also.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
- --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
@@ -519,9 +550,13 @@ Backend only flags. These can be set in the config file also.
--gcs-token-url string Token server url
--gcs-user-project string User project
--gphotos-auth-url string Auth server URL
+ --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
+ --gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
+ --gphotos-batch-size int Max number of files in upload batch
+ --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
- --gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
@@ -533,8 +568,8 @@ Backend only flags. These can be set in the config file also.
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
- --hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
- --hdfs-namenode string Hadoop name node and port
+ --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
+ --hdfs-namenode CommaSepList Hadoop name nodes and ports
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
@@ -542,7 +577,7 @@ Backend only flags. These can be set in the config file also.
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
- --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
--hidrive-root-prefix string The root/parent folder for all paths (default "/")
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default "rw")
@@ -555,9 +590,16 @@ Backend only flags. These can be set in the config file also.
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
+ --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-only-signed Restrict unsigned image URLs. If you have configured "Restrict unsigned image URLs" in your dashboard settings, set this to true
+ --imagekit-private-key string You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-public-key string You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2"
+ --imagekit-versions Include old versions in directory listings
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
- --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
+ --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default "https://archive.org")
--internetarchive-secret-access-key string IAS3 Secret Key (password)
@@ -565,7 +607,7 @@ Backend only flags. These can be set in the config file also.
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
- --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
+ --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -573,17 +615,18 @@ Backend only flags. These can be set in the config file also.
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
- --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
+ --linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
- --local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --local-encoding Encoding The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
@@ -595,7 +638,7 @@ Backend only flags. These can be set in the config file also.
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
- --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf")
@@ -605,7 +648,7 @@ Backend only flags. These can be set in the config file also.
--mailru-token-url string Token server url
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
- --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
@@ -621,9 +664,10 @@ Backend only flags. These can be set in the config file also.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
+ --onedrive-delta If set rclone will use delta listing to implement recursive listings
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
- --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default "auto")
--onedrive-link-password string Set the password for links created by the link command
@@ -644,7 +688,7 @@ Backend only flags. These can be set in the config file also.
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s)
--oos-disable-checksum Don't store MD5 checksum with object metadata
- --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
@@ -661,13 +705,13 @@ Backend only flags. These can be set in the config file also.
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
- --opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
+ --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
- --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default "d0")
@@ -677,7 +721,7 @@ Backend only flags. These can be set in the config file also.
--pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
- --pikpak-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
@@ -689,13 +733,13 @@ Backend only flags. These can be set in the config file also.
--premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
- --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
- --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton account (obscured)
@@ -704,13 +748,13 @@ Backend only flags. These can be set in the config file also.
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
- --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
- --qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
+ --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connection QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
@@ -719,7 +763,7 @@ Backend only flags. These can be set in the config file also.
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
- --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
@@ -734,7 +778,7 @@ Backend only flags. These can be set in the config file also.
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
- --s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --s3-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
@@ -768,14 +812,16 @@ Backend only flags. These can be set in the config file also.
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
+ --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
+ --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
- --seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
+ --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
@@ -785,6 +831,7 @@ Backend only flags. These can be set in the config file also.
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
+ --sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@@ -819,7 +866,7 @@ Backend only flags. These can be set in the config file also.
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
- --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-token string OAuth Access Token as a JSON blob
@@ -827,12 +874,12 @@ Backend only flags. These can be set in the config file also.
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
- --sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
+ --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
- --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
--smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -850,7 +897,7 @@ Backend only flags. These can be set in the config file also.
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
- --sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
+ --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
@@ -864,7 +911,7 @@ Backend only flags. These can be set in the config file also.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
+ --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
@@ -886,7 +933,7 @@ Backend only flags. These can be set in the config file also.
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
- --uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
+ --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
@@ -901,14 +948,14 @@ Backend only flags. These can be set in the config file also.
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
- --yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
- --zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
+ --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
diff --git a/docs/content/ftp.md b/docs/content/ftp.md
index 7d01d9f8e..b00bb8acc 100644
--- a/docs/content/ftp.md
+++ b/docs/content/ftp.md
@@ -443,7 +443,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FTP_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,RightSpace,Dot
- Examples:
- "Asterisk,Ctl,Dot,Slash"
diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md
index 0c7512964..561d536c5 100644
--- a/docs/content/googlecloudstorage.md
+++ b/docs/content/googlecloudstorage.md
@@ -696,7 +696,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_GCS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md
index bce5a26f7..76a9af6d6 100644
--- a/docs/content/googlephotos.md
+++ b/docs/content/googlephotos.md
@@ -374,9 +374,93 @@ Properties:
- Config: encoding
- Env Var: RCLONE_GPHOTOS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
+#### --gphotos-batch-mode
+
+Upload file batching sync|async|off.
+
+This sets the batch mode used by rclone.
+
+This has 3 possible values
+
+- off - no batching
+- sync - batch uploads and check completion (default)
+- async - batch upload and don't check completion
+
+Rclone will close any outstanding batches when it exits which may cause
+a delay on quit.
+
+
+Properties:
+
+- Config: batch_mode
+- Env Var: RCLONE_GPHOTOS_BATCH_MODE
+- Type: string
+- Default: "sync"
+
+#### --gphotos-batch-size
+
+Max number of files in upload batch.
+
+This sets the batch size of files to upload. It has to be less than 50.
+
+By default this is 0 which means rclone will calculate the batch size
+depending on the setting of batch_mode.
+
+- batch_mode: async - default batch_size is 50
+- batch_mode: sync - default batch_size is the same as --transfers
+- batch_mode: off - not in use
+
+Rclone will close any outstanding batches when it exits which may cause
+a delay on quit.
+
+Setting this is a great idea if you are uploading lots of small files
+as it will make the uploads a lot quicker. You can use --transfers 32 to
+maximise throughput (see the illustrative example after the properties
+below).
+
+
+Properties:
+
+- Config: batch_size
+- Env Var: RCLONE_GPHOTOS_BATCH_SIZE
+- Type: int
+- Default: 0
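
For illustration only (the source path and album name below are made up, not taken from the docs above), a batched upload of many small photos might combine the batch settings above with a higher --transfers value:

    rclone copy /home/user/photos gphotos:album/holiday --gphotos-batch-mode sync --transfers 32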
+
+#### --gphotos-batch-timeout
+
+Max time to allow an idle upload batch before uploading.
+
+If an upload batch is idle for more than this long then it will be
+uploaded.
+
+The default for this is 0 which means rclone will choose a sensible
+default based on the batch_mode in use.
+
+- batch_mode: async - default batch_timeout is 10s
+- batch_mode: sync - default batch_timeout is 1s
+- batch_mode: off - not in use
+
+
+Properties:
+
+- Config: batch_timeout
+- Env Var: RCLONE_GPHOTOS_BATCH_TIMEOUT
+- Type: Duration
+- Default: 0s
+
+#### --gphotos-batch-commit-timeout
+
+Max time to wait for a batch to finish committing.
+
+Properties:
+
+- Config: batch_commit_timeout
+- Env Var: RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT
+- Type: Duration
+- Default: 10m0s
+
{{< rem autogenerated options stop >}}
## Limitations
diff --git a/docs/content/hdfs.md b/docs/content/hdfs.md
index 58889c65f..6e85414c3 100644
--- a/docs/content/hdfs.md
+++ b/docs/content/hdfs.md
@@ -156,16 +156,16 @@ Here are the Standard options specific to hdfs (Hadoop distributed file system).
#### --hdfs-namenode
-Hadoop name node and port.
+Hadoop name nodes and ports.
-E.g. "namenode:8020" to connect to host namenode at port 8020.
+E.g. "namenode-1:8020,namenode-2:8020,..." to connect to host namenodes at port 8020.
Properties:
- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
-- Type: string
-- Required: true
+- Type: CommaSepList
+- Default:
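
As an illustrative sketch only (the remote name and host names are placeholders), a config entry pointing at several namenodes might look like:

    [hdfs-ha]
    type = hdfs
    namenode = namenode-1:8020,namenode-2:8020
    username = root
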
#### --hdfs-username
@@ -229,7 +229,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/hidrive.md b/docs/content/hidrive.md
index 830778119..f48dcb8e9 100644
--- a/docs/content/hidrive.md
+++ b/docs/content/hidrive.md
@@ -415,7 +415,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HIDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/http.md b/docs/content/http.md
index 19667dda4..0fa8ca0a1 100644
--- a/docs/content/http.md
+++ b/docs/content/http.md
@@ -212,6 +212,46 @@ Properties:
- Type: bool
- Default: false
+## Backend commands
+
+Here are the commands specific to the http backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](/rc/#backend-command).
+
+### set
+
+Set command for updating the config parameters.
+
+ rclone backend set remote: [options] [+]
+
+This set command can be used to update the config parameters
+for a running http backend.
+
+Usage Examples:
+
+ rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: -o url=https://example.com
+
+The option keys are named as they are in the config file.
+
+This rebuilds the connection to the http backend when it is called with
+the new parameters. Only new parameters need be passed as the values
+will default to those currently in use.
+
+It doesn't return anything.
+
+
{{< rem autogenerated options stop >}}
## Limitations
diff --git a/docs/content/imagekit.md b/docs/content/imagekit.md
index 1f85db552..c0ae147d2 100644
--- a/docs/content/imagekit.md
+++ b/docs/content/imagekit.md
@@ -167,6 +167,17 @@ Properties:
- Type: bool
- Default: false
+#### --imagekit-upload-tags
+
+Tags to add to the uploaded files, e.g. "tag1,tag2".
+
+Properties:
+
+- Config: upload_tags
+- Env Var: RCLONE_IMAGEKIT_UPLOAD_TAGS
+- Type: string
+- Required: false
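
As a hypothetical usage example (the file name and tags are invented), tags can be supplied at upload time with this option:

    rclone copy ./photo.jpg imagekit:media --imagekit-upload-tags "tag1,tag2"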
+
#### --imagekit-encoding
The encoding for the backend.
@@ -188,11 +199,11 @@ Here are the possible system metadata items for the imagekit backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
-| aws-tags | AI generated tags by AWS Rekognition associated with the file | string | tag1,tag2 | **Y** |
+| aws-tags | AI generated tags by AWS Rekognition associated with the image | string | tag1,tag2 | **Y** |
| btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
| custom-coordinates | Custom coordinates of the file | string | 0,0,100,100 | **Y** |
| file-type | Type of the file | string | image | **Y** |
-| google-tags | AI generated tags by Google Cloud Vision associated with the file | string | tag1,tag2 | **Y** |
+| google-tags | AI generated tags by Google Cloud Vision associated with the image | string | tag1,tag2 | **Y** |
| has-alpha | Whether the image has alpha channel or not | bool | | **Y** |
| height | Height of the image or video in pixels | int | | **Y** |
| is-private-file | Whether the file is private or not | bool | | **Y** |
diff --git a/docs/content/internetarchive.md b/docs/content/internetarchive.md
index a2d5a4771..980c534a6 100644
--- a/docs/content/internetarchive.md
+++ b/docs/content/internetarchive.md
@@ -260,7 +260,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_INTERNETARCHIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
### Metadata
diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md
index ef570c394..3ac1fc370 100644
--- a/docs/content/jottacloud.md
+++ b/docs/content/jottacloud.md
@@ -444,9 +444,24 @@ Properties:
- Config: encoding
- Env Var: RCLONE_JOTTACLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
+### Metadata
+
+Jottacloud has limited support for metadata, currently an extended set of timestamps.
+
+Here are the possible system metadata items for the jottacloud backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation), read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| content-type | MIME type, also known as media type | string | text/plain | **Y** |
+| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| utime | Time of last upload, when current revision was created, generated by backend | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+
+See the [metadata](/docs/#metadata) docs for more info.
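
For example (the path is illustrative), the metadata above can be inspected with the global --metadata/-M flag:

    rclone lsjson --metadata jottacloud:path/to/file.txt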
+
{{< rem autogenerated options stop >}}
## Limitations
diff --git a/docs/content/koofr.md b/docs/content/koofr.md
index 6fbbbcabf..3d161297f 100644
--- a/docs/content/koofr.md
+++ b/docs/content/koofr.md
@@ -171,34 +171,6 @@ Properties:
- Type: string
- Required: true
-#### --koofr-password
-
-Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
-
-**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: digistorage
-- Type: string
-- Required: true
-
-#### --koofr-password
-
-Your password for rclone (generate one at your service's settings page).
-
-**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: other
-- Type: string
-- Required: true
-
### Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
@@ -239,7 +211,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_KOOFR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/local.md b/docs/content/local.md
index d7881eef6..fec259cd7 100644
--- a/docs/content/local.md
+++ b/docs/content/local.md
@@ -451,6 +451,11 @@ time we:
- Only checksum the size that stat gave
- Don't update the stat info for the file
+**NB** do not use this flag on a Windows Volume Shadow (VSS). For some
+unknown reason, files in a VSS sometimes show different sizes in the
+directory listing (which is where the initial stat value comes from on
+Windows) than when stat is called on them directly. Other copy tools
+always use the direct stat value, and setting this flag disables that.
Properties:
@@ -561,7 +566,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_LOCAL_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Dot
### Metadata
diff --git a/docs/content/mailru.md b/docs/content/mailru.md
index 9aae4013d..01de432f2 100644
--- a/docs/content/mailru.md
+++ b/docs/content/mailru.md
@@ -409,7 +409,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_MAILRU_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/mega.md b/docs/content/mega.md
index 53a275868..e553842c0 100644
--- a/docs/content/mega.md
+++ b/docs/content/mega.md
@@ -279,7 +279,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_MEGA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md
index a57db0764..e2ec8b0c2 100644
--- a/docs/content/onedrive.md
+++ b/docs/content/onedrive.md
@@ -620,6 +620,43 @@ Properties:
- Type: bool
- Default: false
+#### --onedrive-delta
+
+If set rclone will use delta listing to implement recursive listings.
+
+If this flag is set then the onedrive backend will advertise `ListR`
+support for recursive listings.
+
+Setting this flag speeds up these things greatly:
+
+ rclone lsf -R onedrive:
+ rclone size onedrive:
+ rclone rc vfs/refresh recursive=true
+
+**However** the delta listing API **only** works at the root of the
+drive. If you use it on a directory that is not the root then it
+recurses from the root and discards all the data that is not under the
+directory you asked for. So it will be correct but may not be very
+efficient.
+
+This is why this flag is not set as the default.
+
+As a rule of thumb if nearly all of your data is under rclone's root
+directory (the `root/directory` in `onedrive:root/directory`) then
+using this flag will be a big performance win. If your data is
+mostly not under the root then using this flag will be a big
+performance loss.
+
+It is recommended if you are mounting your onedrive at the root
+(or near the root when using crypt) and using `rclone rc vfs/refresh`.
+
+
+Properties:
+
+- Config: delta
+- Env Var: RCLONE_ONEDRIVE_DELTA
+- Type: bool
+- Default: false
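
An illustrative invocation (the remote name is assumed) enabling delta listing for a recursive listing from the root:

    rclone lsf -R --onedrive-delta onedrive: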
+
#### --onedrive-encoding
The encoding for the backend.
@@ -630,7 +667,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ONEDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md
index 90d5f4cf4..4cf82d773 100644
--- a/docs/content/opendrive.md
+++ b/docs/content/opendrive.md
@@ -145,7 +145,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_OPENDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
#### --opendrive-chunk-size
diff --git a/docs/content/oracleobjectstorage.md b/docs/content/oracleobjectstorage.md
index 7ed7a0870..a418ad8b3 100644
--- a/docs/content/oracleobjectstorage.md
+++ b/docs/content/oracleobjectstorage.md
@@ -552,7 +552,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_OOS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
#### --oos-leave-parts-on-error
diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md
index 3a28668f5..cb1b0ba36 100644
--- a/docs/content/pcloud.md
+++ b/docs/content/pcloud.md
@@ -225,7 +225,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PCLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --pcloud-root-folder-id
diff --git a/docs/content/pikpak.md b/docs/content/pikpak.md
index 502219238..472e14fd1 100644
--- a/docs/content/pikpak.md
+++ b/docs/content/pikpak.md
@@ -237,7 +237,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PIKPAK_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
## Backend commands
diff --git a/docs/content/premiumizeme.md b/docs/content/premiumizeme.md
index 6b8ccace4..2d462c60f 100644
--- a/docs/content/premiumizeme.md
+++ b/docs/content/premiumizeme.md
@@ -199,7 +199,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PREMIUMIZEME_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/protondrive.md b/docs/content/protondrive.md
index 5e46803c4..3b23d39fe 100644
--- a/docs/content/protondrive.md
+++ b/docs/content/protondrive.md
@@ -246,7 +246,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
diff --git a/docs/content/putio.md b/docs/content/putio.md
index e0db46850..8c8722b9d 100644
--- a/docs/content/putio.md
+++ b/docs/content/putio.md
@@ -196,7 +196,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/qingstor.md b/docs/content/qingstor.md
index 219e61a9b..69fd51f81 100644
--- a/docs/content/qingstor.md
+++ b/docs/content/qingstor.md
@@ -307,7 +307,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_QINGSTOR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Ctl,InvalidUtf8
{{< rem autogenerated options stop >}}
diff --git a/docs/content/quatrix.md b/docs/content/quatrix.md
index 02ea14cea..bc8d715fd 100644
--- a/docs/content/quatrix.md
+++ b/docs/content/quatrix.md
@@ -189,7 +189,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_QUATRIX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --quatrix-effective-upload-time
diff --git a/docs/content/rc.md b/docs/content/rc.md
index 86a2c6074..7e4ed11d0 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -1173,6 +1173,56 @@ See the [about](/commands/rclone_about/) command for more information on the abo
**Authentication is required for this call.**
+### operations/check: check the source and destination are the same {#operations-check}
+
+Checks that the files in the source and destination match. It compares
+sizes and hashes and logs a report of files that don't match. It
+doesn't alter the source or destination.
+
+This takes the following parameters:
+
+- srcFs - a remote name string e.g. "drive:" for the source, "/" for local filesystem
+- dstFs - a remote name string e.g. "drive2:" for the destination, "/" for local filesystem
+- download - check by downloading rather than with hash
+- checkFileHash - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- checkFileFs - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- checkFileRemote - treat checkFileFs:checkFileRemote as a SUM file with hashes of given type
+- oneWay - check one way only, source files must exist on remote
+- combined - make a combined report of changes (default false)
+- missingOnSrc - report all files missing from the source (default true)
+- missingOnDst - report all files missing from the destination (default true)
+- match - report all matching files (default false)
+- differ - report all non-matching files (default true)
+- error - report all files with errors (hashing or reading) (default true)
+
+If you supply the download flag, it will download the data from
+both remotes and check them against each other on the fly. This can
+be useful for remotes that don't support hashes or if you really want
+to check all the data.
+
+If you supply the size-only global flag, it will only compare the sizes,
+not the hashes. Use this for a quick check.
+
+If you supply the checkFileHash option with a valid hash name, the
+checkFileFs:checkFileRemote must point to a text file in the SUM
+format. This treats the checksum file as the source and dstFs as the
+destination. Note that srcFs is not used and should not be supplied in
+this case.
+
+Returns:
+
+- success - true if no error, false otherwise
+- status - textual summary of check, OK or text string
+- hashType - hash used in check, may be missing
+- combined - array of strings of combined report of changes
+- missingOnSrc - array of strings of all files missing from the source
+- missingOnDst - array of strings of all files missing from the destination
+- match - array of strings of all matching files
+- differ - array of strings of all non-matching files
+- error - array of strings of all files with errors (hashing or reading)
+
+**Authentication is required for this call.**
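+
+For example, a check between two remotes might be run via the remote
+control like this (a minimal sketch - the remote names are assumptions,
+substitute your own):
+
+    rclone rc operations/check srcFs=drive: dstFs=drive2: combined=true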
+
### operations/cleanup: Remove trashed files in the remote or path {#operations-cleanup}
This takes the following parameters:
diff --git a/docs/content/s3.md b/docs/content/s3.md
index b4bdf10f8..aa16509ac 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -669,7 +669,7 @@ A simple solution is to set the `--s3-upload-cutoff 0` and force all the files t
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-provider
@@ -714,6 +714,8 @@ Properties:
- Leviia Object Storage
- "Liara"
- Liara Object Storage
+ - "Linode"
+ - Linode Object Storage
- "Minio"
- Minio Object Storage
- "Netease"
@@ -722,6 +724,8 @@ Properties:
- Petabox Object Storage
- "RackCorp"
- RackCorp Object Storage
+ - "Rclone"
+ - Rclone S3 Server
- "Scaleway"
- Scaleway Object Storage
- "SeaweedFS"
@@ -874,260 +878,6 @@ Properties:
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
-#### --s3-region
-
-region - the location where your bucket will be created and your data stored.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "global"
- - Global CDN (All locations) Region
- - "au"
- - Australia (All states)
- - "au-nsw"
- - NSW (Australia) Region
- - "au-qld"
- - QLD (Australia) Region
- - "au-vic"
- - VIC (Australia) Region
- - "au-wa"
- - Perth (Australia) Region
- - "ph"
- - Manila (Philippines) Region
- - "th"
- - Bangkok (Thailand) Region
- - "hk"
- - HK (Hong Kong) Region
- - "mn"
- - Ulaanbaatar (Mongolia) Region
- - "kg"
- - Bishkek (Kyrgyzstan) Region
- - "id"
- - Jakarta (Indonesia) Region
- - "jp"
- - Tokyo (Japan) Region
- - "sg"
- - SG (Singapore) Region
- - "de"
- - Frankfurt (Germany) Region
- - "us"
- - USA (AnyCast) Region
- - "us-east-1"
- - New York (USA) Region
- - "us-west-1"
- - Freemont (USA) Region
- - "nz"
- - Auckland (New Zealand) Region
-
-#### --s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - "nl-ams"
- - Amsterdam, The Netherlands
- - "fr-par"
- - Paris, France
- - "pl-waw"
- - Warsaw, Poland
-
-#### --s3-region
-
-Region to connect to. - the location where your bucket will be created and your data stored. Need bo be same with your endpoint.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: HuaweiOBS
-- Type: string
-- Required: false
-- Examples:
- - "af-south-1"
- - AF-Johannesburg
- - "ap-southeast-2"
- - AP-Bangkok
- - "ap-southeast-3"
- - AP-Singapore
- - "cn-east-3"
- - CN East-Shanghai1
- - "cn-east-2"
- - CN East-Shanghai2
- - "cn-north-1"
- - CN North-Beijing1
- - "cn-north-4"
- - CN North-Beijing4
- - "cn-south-1"
- - CN South-Guangzhou
- - "ap-southeast-1"
- - CN-Hong Kong
- - "sa-argentina-1"
- - LA-Buenos Aires1
- - "sa-peru-1"
- - LA-Lima1
- - "na-mexico-1"
- - LA-Mexico City1
- - "sa-chile-1"
- - LA-Santiago2
- - "sa-brazil-1"
- - LA-Sao Paulo1
- - "ru-northwest-2"
- - RU-Moscow2
-
-#### --s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Cloudflare
-- Type: string
-- Required: false
-- Examples:
- - "auto"
- - R2 buckets are automatically distributed across Cloudflare's data centers for low latency.
-
-#### --s3-region
-
-Region to connect to.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "cn-east-1"
- - The default endpoint - a good choice if you are unsure.
- - East China Region 1.
- - Needs location constraint cn-east-1.
- - "cn-east-2"
- - East China Region 2.
- - Needs location constraint cn-east-2.
- - "cn-north-1"
- - North China Region 1.
- - Needs location constraint cn-north-1.
- - "cn-south-1"
- - South China Region 1.
- - Needs location constraint cn-south-1.
- - "us-north-1"
- - North America Region.
- - Needs location constraint us-north-1.
- - "ap-southeast-1"
- - Southeast Asia Region 1.
- - Needs location constraint ap-southeast-1.
- - "ap-northeast-1"
- - Northeast Asia Region 1.
- - Needs location constraint ap-northeast-1.
-
-#### --s3-region
-
-Region where your bucket will be created and your data stored.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: IONOS
-- Type: string
-- Required: false
-- Examples:
- - "de"
- - Frankfurt, Germany
- - "eu-central-2"
- - Berlin, Germany
- - "eu-south-2"
- - Logrono, Spain
-
-#### --s3-region
-
-Region where your bucket will be created and your data stored.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Petabox
-- Type: string
-- Required: false
-- Examples:
- - "us-east-1"
- - US East (N. Virginia)
- - "eu-central-1"
- - Europe (Frankfurt)
- - "ap-southeast-1"
- - Asia Pacific (Singapore)
- - "me-south-1"
- - Middle East (Bahrain)
- - "sa-east-1"
- - South America (São Paulo)
-
-#### --s3-region
-
-Region where your data stored.
-
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: Synology
-- Type: string
-- Required: false
-- Examples:
- - "eu-001"
- - Europe Region 1
- - "eu-002"
- - Europe Region 2
- - "us-001"
- - US Region 1
- - "us-002"
- - US Region 2
- - "tw-001"
- - Asia (Taiwan)
-
-#### --s3-region
-
-Region to connect to.
-
-Leave blank if you are using an S3 clone and you don't have a region.
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_S3_REGION
-- Provider: !AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,Synology,TencentCOS,HuaweiOBS,IDrive
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Use this if unsure.
- - Will use v4 signatures and an empty region.
- - "other-v2-signature"
- - Use this only if v4 signatures don't work.
- - E.g. pre Jewel/v10 CEPH.
-
#### --s3-endpoint
Endpoint for S3 API.
@@ -1142,712 +892,6 @@ Properties:
- Type: string
- Required: false
-#### --s3-endpoint
-
-Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - "eos-wuxi-1.cmecloud.cn"
- - The default endpoint - a good choice if you are unsure.
- - East China (Suzhou)
- - "eos-jinan-1.cmecloud.cn"
- - East China (Jinan)
- - "eos-ningbo-1.cmecloud.cn"
- - East China (Hangzhou)
- - "eos-shanghai-1.cmecloud.cn"
- - East China (Shanghai-1)
- - "eos-zhengzhou-1.cmecloud.cn"
- - Central China (Zhengzhou)
- - "eos-hunan-1.cmecloud.cn"
- - Central China (Changsha-1)
- - "eos-zhuzhou-1.cmecloud.cn"
- - Central China (Changsha-2)
- - "eos-guangzhou-1.cmecloud.cn"
- - South China (Guangzhou-2)
- - "eos-dongguan-1.cmecloud.cn"
- - South China (Guangzhou-3)
- - "eos-beijing-1.cmecloud.cn"
- - North China (Beijing-1)
- - "eos-beijing-2.cmecloud.cn"
- - North China (Beijing-2)
- - "eos-beijing-4.cmecloud.cn"
- - North China (Beijing-3)
- - "eos-huhehaote-1.cmecloud.cn"
- - North China (Huhehaote)
- - "eos-chengdu-1.cmecloud.cn"
- - Southwest China (Chengdu)
- - "eos-chongqing-1.cmecloud.cn"
- - Southwest China (Chongqing)
- - "eos-guiyang-1.cmecloud.cn"
- - Southwest China (Guiyang)
- - "eos-xian-1.cmecloud.cn"
- - Nouthwest China (Xian)
- - "eos-yunnan.cmecloud.cn"
- - Yunnan China (Kunming)
- - "eos-yunnan-2.cmecloud.cn"
- - Yunnan China (Kunming-2)
- - "eos-tianjin-1.cmecloud.cn"
- - Tianjin China (Tianjin)
- - "eos-jilin-1.cmecloud.cn"
- - Jilin China (Changchun)
- - "eos-hubei-1.cmecloud.cn"
- - Hubei China (Xiangyan)
- - "eos-jiangxi-1.cmecloud.cn"
- - Jiangxi China (Nanchang)
- - "eos-gansu-1.cmecloud.cn"
- - Gansu China (Lanzhou)
- - "eos-shanxi-1.cmecloud.cn"
- - Shanxi China (Taiyuan)
- - "eos-liaoning-1.cmecloud.cn"
- - Liaoning China (Shenyang)
- - "eos-hebei-1.cmecloud.cn"
- - Hebei China (Shijiazhuang)
- - "eos-fujian-1.cmecloud.cn"
- - Fujian China (Xiamen)
- - "eos-guangxi-1.cmecloud.cn"
- - Guangxi China (Nanning)
- - "eos-anhui-1.cmecloud.cn"
- - Anhui China (Huainan)
-
-#### --s3-endpoint
-
-Endpoint for Arvan Cloud Object Storage (AOS) API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "s3.ir-thr-at1.arvanstorage.ir"
- - The default endpoint - a good choice if you are unsure.
- - Tehran Iran (Simin)
- - "s3.ir-tbz-sh1.arvanstorage.ir"
- - Tabriz Iran (Shahriar)
-
-#### --s3-endpoint
-
-Endpoint for IBM COS S3 API.
-
-Specify if using an IBM COS On Premise.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: IBMCOS
-- Type: string
-- Required: false
-- Examples:
- - "s3.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Endpoint
- - "s3.dal.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Dallas Endpoint
- - "s3.wdc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Washington DC Endpoint
- - "s3.sjc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region San Jose Endpoint
- - "s3.private.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Private Endpoint
- - "s3.private.dal.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Dallas Private Endpoint
- - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region Washington DC Private Endpoint
- - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud"
- - US Cross Region San Jose Private Endpoint
- - "s3.us-east.cloud-object-storage.appdomain.cloud"
- - US Region East Endpoint
- - "s3.private.us-east.cloud-object-storage.appdomain.cloud"
- - US Region East Private Endpoint
- - "s3.us-south.cloud-object-storage.appdomain.cloud"
- - US Region South Endpoint
- - "s3.private.us-south.cloud-object-storage.appdomain.cloud"
- - US Region South Private Endpoint
- - "s3.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Endpoint
- - "s3.fra.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Frankfurt Endpoint
- - "s3.mil.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Milan Endpoint
- - "s3.ams.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Amsterdam Endpoint
- - "s3.private.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Private Endpoint
- - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Frankfurt Private Endpoint
- - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Milan Private Endpoint
- - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud"
- - EU Cross Region Amsterdam Private Endpoint
- - "s3.eu-gb.cloud-object-storage.appdomain.cloud"
- - Great Britain Endpoint
- - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud"
- - Great Britain Private Endpoint
- - "s3.eu-de.cloud-object-storage.appdomain.cloud"
- - EU Region DE Endpoint
- - "s3.private.eu-de.cloud-object-storage.appdomain.cloud"
- - EU Region DE Private Endpoint
- - "s3.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Endpoint
- - "s3.tok.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Tokyo Endpoint
- - "s3.hkg.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional HongKong Endpoint
- - "s3.seo.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Seoul Endpoint
- - "s3.private.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Private Endpoint
- - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Tokyo Private Endpoint
- - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional HongKong Private Endpoint
- - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud"
- - APAC Cross Regional Seoul Private Endpoint
- - "s3.jp-tok.cloud-object-storage.appdomain.cloud"
- - APAC Region Japan Endpoint
- - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud"
- - APAC Region Japan Private Endpoint
- - "s3.au-syd.cloud-object-storage.appdomain.cloud"
- - APAC Region Australia Endpoint
- - "s3.private.au-syd.cloud-object-storage.appdomain.cloud"
- - APAC Region Australia Private Endpoint
- - "s3.ams03.cloud-object-storage.appdomain.cloud"
- - Amsterdam Single Site Endpoint
- - "s3.private.ams03.cloud-object-storage.appdomain.cloud"
- - Amsterdam Single Site Private Endpoint
- - "s3.che01.cloud-object-storage.appdomain.cloud"
- - Chennai Single Site Endpoint
- - "s3.private.che01.cloud-object-storage.appdomain.cloud"
- - Chennai Single Site Private Endpoint
- - "s3.mel01.cloud-object-storage.appdomain.cloud"
- - Melbourne Single Site Endpoint
- - "s3.private.mel01.cloud-object-storage.appdomain.cloud"
- - Melbourne Single Site Private Endpoint
- - "s3.osl01.cloud-object-storage.appdomain.cloud"
- - Oslo Single Site Endpoint
- - "s3.private.osl01.cloud-object-storage.appdomain.cloud"
- - Oslo Single Site Private Endpoint
- - "s3.tor01.cloud-object-storage.appdomain.cloud"
- - Toronto Single Site Endpoint
- - "s3.private.tor01.cloud-object-storage.appdomain.cloud"
- - Toronto Single Site Private Endpoint
- - "s3.seo01.cloud-object-storage.appdomain.cloud"
- - Seoul Single Site Endpoint
- - "s3.private.seo01.cloud-object-storage.appdomain.cloud"
- - Seoul Single Site Private Endpoint
- - "s3.mon01.cloud-object-storage.appdomain.cloud"
- - Montreal Single Site Endpoint
- - "s3.private.mon01.cloud-object-storage.appdomain.cloud"
- - Montreal Single Site Private Endpoint
- - "s3.mex01.cloud-object-storage.appdomain.cloud"
- - Mexico Single Site Endpoint
- - "s3.private.mex01.cloud-object-storage.appdomain.cloud"
- - Mexico Single Site Private Endpoint
- - "s3.sjc04.cloud-object-storage.appdomain.cloud"
- - San Jose Single Site Endpoint
- - "s3.private.sjc04.cloud-object-storage.appdomain.cloud"
- - San Jose Single Site Private Endpoint
- - "s3.mil01.cloud-object-storage.appdomain.cloud"
- - Milan Single Site Endpoint
- - "s3.private.mil01.cloud-object-storage.appdomain.cloud"
- - Milan Single Site Private Endpoint
- - "s3.hkg02.cloud-object-storage.appdomain.cloud"
- - Hong Kong Single Site Endpoint
- - "s3.private.hkg02.cloud-object-storage.appdomain.cloud"
- - Hong Kong Single Site Private Endpoint
- - "s3.par01.cloud-object-storage.appdomain.cloud"
- - Paris Single Site Endpoint
- - "s3.private.par01.cloud-object-storage.appdomain.cloud"
- - Paris Single Site Private Endpoint
- - "s3.sng01.cloud-object-storage.appdomain.cloud"
- - Singapore Single Site Endpoint
- - "s3.private.sng01.cloud-object-storage.appdomain.cloud"
- - Singapore Single Site Private Endpoint
-
-#### --s3-endpoint
-
-Endpoint for IONOS S3 Object Storage.
-
-Specify the endpoint from the same region.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: IONOS
-- Type: string
-- Required: false
-- Examples:
- - "s3-eu-central-1.ionoscloud.com"
- - Frankfurt, Germany
- - "s3-eu-central-2.ionoscloud.com"
- - Berlin, Germany
- - "s3-eu-south-2.ionoscloud.com"
- - Logrono, Spain
-
-#### --s3-endpoint
-
-Endpoint for Petabox S3 Object Storage.
-
-Specify the endpoint from the same region.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Petabox
-- Type: string
-- Required: true
-- Examples:
- - "s3.petabox.io"
- - US East (N. Virginia)
- - "s3.us-east-1.petabox.io"
- - US East (N. Virginia)
- - "s3.eu-central-1.petabox.io"
- - Europe (Frankfurt)
- - "s3.ap-southeast-1.petabox.io"
- - Asia Pacific (Singapore)
- - "s3.me-south-1.petabox.io"
- - Middle East (Bahrain)
- - "s3.sa-east-1.petabox.io"
- - South America (São Paulo)
-
-#### --s3-endpoint
-
-Endpoint for Leviia Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Leviia
-- Type: string
-- Required: false
-- Examples:
- - "s3.leviia.com"
- - The default endpoint
- - Leviia
-
-#### --s3-endpoint
-
-Endpoint for Liara Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Liara
-- Type: string
-- Required: false
-- Examples:
- - "storage.iran.liara.space"
- - The default endpoint
- - Iran
-
-#### --s3-endpoint
-
-Endpoint for OSS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Alibaba
-- Type: string
-- Required: false
-- Examples:
- - "oss-accelerate.aliyuncs.com"
- - Global Accelerate
- - "oss-accelerate-overseas.aliyuncs.com"
- - Global Accelerate (outside mainland China)
- - "oss-cn-hangzhou.aliyuncs.com"
- - East China 1 (Hangzhou)
- - "oss-cn-shanghai.aliyuncs.com"
- - East China 2 (Shanghai)
- - "oss-cn-qingdao.aliyuncs.com"
- - North China 1 (Qingdao)
- - "oss-cn-beijing.aliyuncs.com"
- - North China 2 (Beijing)
- - "oss-cn-zhangjiakou.aliyuncs.com"
- - North China 3 (Zhangjiakou)
- - "oss-cn-huhehaote.aliyuncs.com"
- - North China 5 (Hohhot)
- - "oss-cn-wulanchabu.aliyuncs.com"
- - North China 6 (Ulanqab)
- - "oss-cn-shenzhen.aliyuncs.com"
- - South China 1 (Shenzhen)
- - "oss-cn-heyuan.aliyuncs.com"
- - South China 2 (Heyuan)
- - "oss-cn-guangzhou.aliyuncs.com"
- - South China 3 (Guangzhou)
- - "oss-cn-chengdu.aliyuncs.com"
- - West China 1 (Chengdu)
- - "oss-cn-hongkong.aliyuncs.com"
- - Hong Kong (Hong Kong)
- - "oss-us-west-1.aliyuncs.com"
- - US West 1 (Silicon Valley)
- - "oss-us-east-1.aliyuncs.com"
- - US East 1 (Virginia)
- - "oss-ap-southeast-1.aliyuncs.com"
- - Southeast Asia Southeast 1 (Singapore)
- - "oss-ap-southeast-2.aliyuncs.com"
- - Asia Pacific Southeast 2 (Sydney)
- - "oss-ap-southeast-3.aliyuncs.com"
- - Southeast Asia Southeast 3 (Kuala Lumpur)
- - "oss-ap-southeast-5.aliyuncs.com"
- - Asia Pacific Southeast 5 (Jakarta)
- - "oss-ap-northeast-1.aliyuncs.com"
- - Asia Pacific Northeast 1 (Japan)
- - "oss-ap-south-1.aliyuncs.com"
- - Asia Pacific South 1 (Mumbai)
- - "oss-eu-central-1.aliyuncs.com"
- - Central Europe 1 (Frankfurt)
- - "oss-eu-west-1.aliyuncs.com"
- - West Europe (London)
- - "oss-me-east-1.aliyuncs.com"
- - Middle East 1 (Dubai)
-
-#### --s3-endpoint
-
-Endpoint for OBS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: HuaweiOBS
-- Type: string
-- Required: false
-- Examples:
- - "obs.af-south-1.myhuaweicloud.com"
- - AF-Johannesburg
- - "obs.ap-southeast-2.myhuaweicloud.com"
- - AP-Bangkok
- - "obs.ap-southeast-3.myhuaweicloud.com"
- - AP-Singapore
- - "obs.cn-east-3.myhuaweicloud.com"
- - CN East-Shanghai1
- - "obs.cn-east-2.myhuaweicloud.com"
- - CN East-Shanghai2
- - "obs.cn-north-1.myhuaweicloud.com"
- - CN North-Beijing1
- - "obs.cn-north-4.myhuaweicloud.com"
- - CN North-Beijing4
- - "obs.cn-south-1.myhuaweicloud.com"
- - CN South-Guangzhou
- - "obs.ap-southeast-1.myhuaweicloud.com"
- - CN-Hong Kong
- - "obs.sa-argentina-1.myhuaweicloud.com"
- - LA-Buenos Aires1
- - "obs.sa-peru-1.myhuaweicloud.com"
- - LA-Lima1
- - "obs.na-mexico-1.myhuaweicloud.com"
- - LA-Mexico City1
- - "obs.sa-chile-1.myhuaweicloud.com"
- - LA-Santiago2
- - "obs.sa-brazil-1.myhuaweicloud.com"
- - LA-Sao Paulo1
- - "obs.ru-northwest-2.myhuaweicloud.com"
- - RU-Moscow2
-
-#### --s3-endpoint
-
-Endpoint for Scaleway Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - "s3.nl-ams.scw.cloud"
- - Amsterdam Endpoint
- - "s3.fr-par.scw.cloud"
- - Paris Endpoint
- - "s3.pl-waw.scw.cloud"
- - Warsaw Endpoint
-
-#### --s3-endpoint
-
-Endpoint for StackPath Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: StackPath
-- Type: string
-- Required: false
-- Examples:
- - "s3.us-east-2.stackpathstorage.com"
- - US East Endpoint
- - "s3.us-west-1.stackpathstorage.com"
- - US West Endpoint
- - "s3.eu-central-1.stackpathstorage.com"
- - EU Endpoint
-
-#### --s3-endpoint
-
-Endpoint for Google Cloud Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: GCS
-- Type: string
-- Required: false
-- Examples:
- - "https://storage.googleapis.com"
- - Google Cloud Storage endpoint
-
-#### --s3-endpoint
-
-Endpoint for Storj Gateway.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Storj
-- Type: string
-- Required: false
-- Examples:
- - "gateway.storjshare.io"
- - Global Hosted Gateway
-
-#### --s3-endpoint
-
-Endpoint for Synology C2 Object Storage API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Synology
-- Type: string
-- Required: false
-- Examples:
- - "eu-001.s3.synologyc2.net"
- - EU Endpoint 1
- - "eu-002.s3.synologyc2.net"
- - EU Endpoint 2
- - "us-001.s3.synologyc2.net"
- - US Endpoint 1
- - "us-002.s3.synologyc2.net"
- - US Endpoint 2
- - "tw-001.s3.synologyc2.net"
- - TW Endpoint 1
-
-#### --s3-endpoint
-
-Endpoint for Tencent COS API.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: TencentCOS
-- Type: string
-- Required: false
-- Examples:
- - "cos.ap-beijing.myqcloud.com"
- - Beijing Region
- - "cos.ap-nanjing.myqcloud.com"
- - Nanjing Region
- - "cos.ap-shanghai.myqcloud.com"
- - Shanghai Region
- - "cos.ap-guangzhou.myqcloud.com"
- - Guangzhou Region
- - "cos.ap-nanjing.myqcloud.com"
- - Nanjing Region
- - "cos.ap-chengdu.myqcloud.com"
- - Chengdu Region
- - "cos.ap-chongqing.myqcloud.com"
- - Chongqing Region
- - "cos.ap-hongkong.myqcloud.com"
- - Hong Kong (China) Region
- - "cos.ap-singapore.myqcloud.com"
- - Singapore Region
- - "cos.ap-mumbai.myqcloud.com"
- - Mumbai Region
- - "cos.ap-seoul.myqcloud.com"
- - Seoul Region
- - "cos.ap-bangkok.myqcloud.com"
- - Bangkok Region
- - "cos.ap-tokyo.myqcloud.com"
- - Tokyo Region
- - "cos.na-siliconvalley.myqcloud.com"
- - Silicon Valley Region
- - "cos.na-ashburn.myqcloud.com"
- - Virginia Region
- - "cos.na-toronto.myqcloud.com"
- - Toronto Region
- - "cos.eu-frankfurt.myqcloud.com"
- - Frankfurt Region
- - "cos.eu-moscow.myqcloud.com"
- - Moscow Region
- - "cos.accelerate.myqcloud.com"
- - Use Tencent COS Accelerate Endpoint
-
-#### --s3-endpoint
-
-Endpoint for RackCorp Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "s3.rackcorp.com"
- - Global (AnyCast) Endpoint
- - "au.s3.rackcorp.com"
- - Australia (Anycast) Endpoint
- - "au-nsw.s3.rackcorp.com"
- - Sydney (Australia) Endpoint
- - "au-qld.s3.rackcorp.com"
- - Brisbane (Australia) Endpoint
- - "au-vic.s3.rackcorp.com"
- - Melbourne (Australia) Endpoint
- - "au-wa.s3.rackcorp.com"
- - Perth (Australia) Endpoint
- - "ph.s3.rackcorp.com"
- - Manila (Philippines) Endpoint
- - "th.s3.rackcorp.com"
- - Bangkok (Thailand) Endpoint
- - "hk.s3.rackcorp.com"
- - HK (Hong Kong) Endpoint
- - "mn.s3.rackcorp.com"
- - Ulaanbaatar (Mongolia) Endpoint
- - "kg.s3.rackcorp.com"
- - Bishkek (Kyrgyzstan) Endpoint
- - "id.s3.rackcorp.com"
- - Jakarta (Indonesia) Endpoint
- - "jp.s3.rackcorp.com"
- - Tokyo (Japan) Endpoint
- - "sg.s3.rackcorp.com"
- - SG (Singapore) Endpoint
- - "de.s3.rackcorp.com"
- - Frankfurt (Germany) Endpoint
- - "us.s3.rackcorp.com"
- - USA (AnyCast) Endpoint
- - "us-east-1.s3.rackcorp.com"
- - New York (USA) Endpoint
- - "us-west-1.s3.rackcorp.com"
- - Freemont (USA) Endpoint
- - "nz.s3.rackcorp.com"
- - Auckland (New Zealand) Endpoint
-
-#### --s3-endpoint
-
-Endpoint for Qiniu Object Storage.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "s3-cn-east-1.qiniucs.com"
- - East China Endpoint 1
- - "s3-cn-east-2.qiniucs.com"
- - East China Endpoint 2
- - "s3-cn-north-1.qiniucs.com"
- - North China Endpoint 1
- - "s3-cn-south-1.qiniucs.com"
- - South China Endpoint 1
- - "s3-us-north-1.qiniucs.com"
- - North America Endpoint 1
- - "s3-ap-southeast-1.qiniucs.com"
- - Southeast Asia Endpoint 1
- - "s3-ap-northeast-1.qiniucs.com"
- - Northeast Asia Endpoint 1
-
-#### --s3-endpoint
-
-Endpoint for S3 API.
-
-Required when using an S3 clone.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_S3_ENDPOINT
-- Provider: !AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox
-- Type: string
-- Required: false
-- Examples:
- - "objects-us-east-1.dream.io"
- - Dream Objects endpoint
- - "syd1.digitaloceanspaces.com"
- - DigitalOcean Spaces Sydney 1
- - "sfo3.digitaloceanspaces.com"
- - DigitalOcean Spaces San Francisco 3
- - "fra1.digitaloceanspaces.com"
- - DigitalOcean Spaces Frankfurt 1
- - "nyc3.digitaloceanspaces.com"
- - DigitalOcean Spaces New York 3
- - "ams3.digitaloceanspaces.com"
- - DigitalOcean Spaces Amsterdam 3
- - "sgp1.digitaloceanspaces.com"
- - DigitalOcean Spaces Singapore 1
- - "localhost:8333"
- - SeaweedFS S3 localhost
- - "s3.us-east-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud US East 1 (Virginia)
- - "s3.us-west-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud US West 1 (California)
- - "s3.ap-southeast-1.lyvecloud.seagate.com"
- - Seagate Lyve Cloud AP Southeast 1 (Singapore)
- - "s3.wasabisys.com"
- - Wasabi US East 1 (N. Virginia)
- - "s3.us-east-2.wasabisys.com"
- - Wasabi US East 2 (N. Virginia)
- - "s3.us-central-1.wasabisys.com"
- - Wasabi US Central 1 (Texas)
- - "s3.us-west-1.wasabisys.com"
- - Wasabi US West 1 (Oregon)
- - "s3.ca-central-1.wasabisys.com"
- - Wasabi CA Central 1 (Toronto)
- - "s3.eu-central-1.wasabisys.com"
- - Wasabi EU Central 1 (Amsterdam)
- - "s3.eu-central-2.wasabisys.com"
- - Wasabi EU Central 2 (Frankfurt)
- - "s3.eu-west-1.wasabisys.com"
- - Wasabi EU West 1 (London)
- - "s3.eu-west-2.wasabisys.com"
- - Wasabi EU West 2 (Paris)
- - "s3.ap-northeast-1.wasabisys.com"
- - Wasabi AP Northeast 1 (Tokyo) endpoint
- - "s3.ap-northeast-2.wasabisys.com"
- - Wasabi AP Northeast 2 (Osaka) endpoint
- - "s3.ap-southeast-1.wasabisys.com"
- - Wasabi AP Southeast 1 (Singapore)
- - "s3.ap-southeast-2.wasabisys.com"
- - Wasabi AP Southeast 2 (Sydney)
- - "storage.iran.liara.space"
- - Liara Iran endpoint
- - "s3.ir-thr-at1.arvanstorage.ir"
- - ArvanCloud Tehran Iran (Simin) endpoint
- - "s3.ir-tbz-sh1.arvanstorage.ir"
- - ArvanCloud Tabriz Iran (Shahriar) endpoint
-
#### --s3-location-constraint
Location constraint - must be set to match the Region.
@@ -1913,274 +957,6 @@ Properties:
- "us-gov-west-1"
- AWS GovCloud (US) Region
-#### --s3-location-constraint
-
-Location constraint - must match endpoint.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - "wuxi1"
- - East China (Suzhou)
- - "jinan1"
- - East China (Jinan)
- - "ningbo1"
- - East China (Hangzhou)
- - "shanghai1"
- - East China (Shanghai-1)
- - "zhengzhou1"
- - Central China (Zhengzhou)
- - "hunan1"
- - Central China (Changsha-1)
- - "zhuzhou1"
- - Central China (Changsha-2)
- - "guangzhou1"
- - South China (Guangzhou-2)
- - "dongguan1"
- - South China (Guangzhou-3)
- - "beijing1"
- - North China (Beijing-1)
- - "beijing2"
- - North China (Beijing-2)
- - "beijing4"
- - North China (Beijing-3)
- - "huhehaote1"
- - North China (Huhehaote)
- - "chengdu1"
- - Southwest China (Chengdu)
- - "chongqing1"
- - Southwest China (Chongqing)
- - "guiyang1"
- - Southwest China (Guiyang)
- - "xian1"
- - Nouthwest China (Xian)
- - "yunnan"
- - Yunnan China (Kunming)
- - "yunnan2"
- - Yunnan China (Kunming-2)
- - "tianjin1"
- - Tianjin China (Tianjin)
- - "jilin1"
- - Jilin China (Changchun)
- - "hubei1"
- - Hubei China (Xiangyan)
- - "jiangxi1"
- - Jiangxi China (Nanchang)
- - "gansu1"
- - Gansu China (Lanzhou)
- - "shanxi1"
- - Shanxi China (Taiyuan)
- - "liaoning1"
- - Liaoning China (Shenyang)
- - "hebei1"
- - Hebei China (Shijiazhuang)
- - "fujian1"
- - Fujian China (Xiamen)
- - "guangxi1"
- - Guangxi China (Nanning)
- - "anhui1"
- - Anhui China (Huainan)
-
-#### --s3-location-constraint
-
-Location constraint - must match endpoint.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "ir-thr-at1"
- - Tehran Iran (Simin)
- - "ir-tbz-sh1"
- - Tabriz Iran (Shahriar)
-
-#### --s3-location-constraint
-
-Location constraint - must match endpoint when using IBM Cloud Public.
-
-For on-prem COS, do not make a selection from this list, hit enter.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: IBMCOS
-- Type: string
-- Required: false
-- Examples:
- - "us-standard"
- - US Cross Region Standard
- - "us-vault"
- - US Cross Region Vault
- - "us-cold"
- - US Cross Region Cold
- - "us-flex"
- - US Cross Region Flex
- - "us-east-standard"
- - US East Region Standard
- - "us-east-vault"
- - US East Region Vault
- - "us-east-cold"
- - US East Region Cold
- - "us-east-flex"
- - US East Region Flex
- - "us-south-standard"
- - US South Region Standard
- - "us-south-vault"
- - US South Region Vault
- - "us-south-cold"
- - US South Region Cold
- - "us-south-flex"
- - US South Region Flex
- - "eu-standard"
- - EU Cross Region Standard
- - "eu-vault"
- - EU Cross Region Vault
- - "eu-cold"
- - EU Cross Region Cold
- - "eu-flex"
- - EU Cross Region Flex
- - "eu-gb-standard"
- - Great Britain Standard
- - "eu-gb-vault"
- - Great Britain Vault
- - "eu-gb-cold"
- - Great Britain Cold
- - "eu-gb-flex"
- - Great Britain Flex
- - "ap-standard"
- - APAC Standard
- - "ap-vault"
- - APAC Vault
- - "ap-cold"
- - APAC Cold
- - "ap-flex"
- - APAC Flex
- - "mel01-standard"
- - Melbourne Standard
- - "mel01-vault"
- - Melbourne Vault
- - "mel01-cold"
- - Melbourne Cold
- - "mel01-flex"
- - Melbourne Flex
- - "tor01-standard"
- - Toronto Standard
- - "tor01-vault"
- - Toronto Vault
- - "tor01-cold"
- - Toronto Cold
- - "tor01-flex"
- - Toronto Flex
-
-#### --s3-location-constraint
-
-Location constraint - the location where your bucket will be located and your data stored.
-
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: RackCorp
-- Type: string
-- Required: false
-- Examples:
- - "global"
- - Global CDN Region
- - "au"
- - Australia (All locations)
- - "au-nsw"
- - NSW (Australia) Region
- - "au-qld"
- - QLD (Australia) Region
- - "au-vic"
- - VIC (Australia) Region
- - "au-wa"
- - Perth (Australia) Region
- - "ph"
- - Manila (Philippines) Region
- - "th"
- - Bangkok (Thailand) Region
- - "hk"
- - HK (Hong Kong) Region
- - "mn"
- - Ulaanbaatar (Mongolia) Region
- - "kg"
- - Bishkek (Kyrgyzstan) Region
- - "id"
- - Jakarta (Indonesia) Region
- - "jp"
- - Tokyo (Japan) Region
- - "sg"
- - SG (Singapore) Region
- - "de"
- - Frankfurt (Germany) Region
- - "us"
- - USA (AnyCast) Region
- - "us-east-1"
- - New York (USA) Region
- - "us-west-1"
- - Freemont (USA) Region
- - "nz"
- - Auckland (New Zealand) Region
-
-#### --s3-location-constraint
-
-Location constraint - must be set to match the Region.
-
-Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "cn-east-1"
- - East China Region 1
- - "cn-east-2"
- - East China Region 2
- - "cn-north-1"
- - North China Region 1
- - "cn-south-1"
- - South China Region 1
- - "us-north-1"
- - North America Region 1
- - "ap-southeast-1"
- - Southeast Asia Region 1
- - "ap-northeast-1"
- - Northeast Asia Region 1
-
-#### --s3-location-constraint
-
-Location constraint - must be set to match the Region.
-
-Leave blank if not sure. Used when creating buckets only.
-
-Properties:
-
-- Config: location_constraint
-- Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-- Provider: !AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox
-- Type: string
-- Required: false
-
#### --s3-acl
Canned ACL used when creating buckets and storing or copying objects.
@@ -2312,150 +1088,9 @@ Properties:
- "GLACIER_IR"
- Glacier Instant Retrieval storage class
-#### --s3-storage-class
-
-The storage class to use when storing new objects in OSS.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Alibaba
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "GLACIER"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in ChinaMobile.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: ChinaMobile
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "GLACIER"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in Liara
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Liara
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in ArvanCloud.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: ArvanCloud
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in Tencent COS.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: TencentCOS
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "STANDARD"
- - Standard storage class
- - "ARCHIVE"
- - Archive storage mode
- - "STANDARD_IA"
- - Infrequent access storage mode
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in S3.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Scaleway
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default.
- - "STANDARD"
- - The Standard class for any upload.
- - Suitable for on-demand content like streaming or CDN.
- - Available in all regions.
- - "GLACIER"
- - Archived storage.
- - Prices are lower, but it needs to be restored first to be accessed.
- - Available in FR-PAR and NL-AMS regions.
- - "ONEZONE_IA"
- - One Zone - Infrequent Access.
- - A good choice for storing secondary backup copies or easily re-creatable data.
- - Available in the FR-PAR region only.
-
-#### --s3-storage-class
-
-The storage class to use when storing new objects in Qiniu.
-
-Properties:
-
-- Config: storage_class
-- Env Var: RCLONE_S3_STORAGE_CLASS
-- Provider: Qiniu
-- Type: string
-- Required: false
-- Examples:
- - "STANDARD"
- - Standard storage class
- - "LINE"
- - Infrequent access storage mode
- - "GLACIER"
- - Archive storage mode
- - "DEEP_ARCHIVE"
- - Deep archive storage mode
-
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
#### --s3-bucket-acl
@@ -2948,7 +1583,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_S3_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
#### --s3-memory-pool-flush-time
@@ -3184,6 +1819,57 @@ Properties:
- Type: string
- Required: false
+#### --s3-use-already-exists
+
+Set if rclone should report BucketAlreadyExists errors on bucket creation.
+
+At some point during the evolution of the s3 protocol, AWS started
+returning an `AlreadyOwnedByYou` error when attempting to create a
+bucket that the user already owned, rather than a
+`BucketAlreadyExists` error.
+
+Unfortunately exactly what has been implemented by s3 clones is a
+little inconsistent: some return `AlreadyOwnedByYou`, some return
+`BucketAlreadyExists` and some return no error at all.
+
+This is important to rclone because it ensures the bucket exists by
+creating it on quite a lot of operations (unless
+`--s3-no-check-bucket` is used).
+
+If rclone knows the provider can return `AlreadyOwnedByYou` or returns
+no error then it can report `BucketAlreadyExists` errors when the user
+attempts to create a bucket not owned by them. Otherwise rclone
+ignores the `BucketAlreadyExists` error which can lead to confusion.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+
+Properties:
+
+- Config: use_already_exists
+- Env Var: RCLONE_S3_USE_ALREADY_EXISTS
+- Type: Tristate
+- Default: unset
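+
+If a provider mis-reports this, the detection can be overridden manually,
+for example (a sketch - the remote name is a placeholder):
+
+    rclone mkdir remote:bucket --s3-use-already-exists=true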
+
+#### --s3-use-multipart-uploads
+
+Set if rclone should use multipart uploads.
+
+You can change this if you want to disable the use of multipart uploads.
+This shouldn't be necessary in normal operation.
+
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+
+
+Properties:
+
+- Config: use_multipart_uploads
+- Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
+- Type: Tristate
+- Default: unset
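+
+For example, to force single part uploads for a transfer you might run
+(a sketch - the paths and remote name are placeholders):
+
+    rclone copy /path/to/file remote:bucket --s3-use-multipart-uploads=false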
+
### Metadata
User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
diff --git a/docs/content/seafile.md b/docs/content/seafile.md
index 5afeadbfe..e3e1aa109 100644
--- a/docs/content/seafile.md
+++ b/docs/content/seafile.md
@@ -386,7 +386,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SEAFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
{{< rem autogenerated options stop >}}
diff --git a/docs/content/sftp.md b/docs/content/sftp.md
index e2cd91b83..a32564c76 100644
--- a/docs/content/sftp.md
+++ b/docs/content/sftp.md
@@ -1016,6 +1016,32 @@ Properties:
- Type: string
- Required: false
+#### --sftp-copy-is-hardlink
+
+Set to enable server side copies using hardlinks.
+
+The SFTP protocol does not define a copy command so normally server
+side copies are not allowed with the sftp backend.
+
+However the SFTP protocol does support hardlinking, and if you enable
+this flag then the sftp backend will support server side copies. These
+will be implemented by doing a hardlink from the source to the
+destination.
+
+Not all sftp servers support this.
+
+Note that hardlinking two files together will use no additional space
+as the source and the destination will be the same file.
+
+This feature may be useful for backups made with --copy-dest.
+
+Properties:
+
+- Config: copy_is_hardlink
+- Env Var: RCLONE_SFTP_COPY_IS_HARDLINK
+- Type: bool
+- Default: false
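+
+For example, a server side copy using a hardlink might look like this
+(a sketch - the remote name and paths are placeholders):
+
+    rclone copyto --sftp-copy-is-hardlink remote:path/file.txt remote:path/file-copy.txt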
+
{{< rem autogenerated options stop >}}
## Limitations
diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md
index 60d518687..3dd1027f9 100644
--- a/docs/content/sharefile.md
+++ b/docs/content/sharefile.md
@@ -300,7 +300,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/sia.md b/docs/content/sia.md
index ae9f74a2a..0ee8cba94 100644
--- a/docs/content/sia.md
+++ b/docs/content/sia.md
@@ -191,7 +191,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SIA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/smb.md b/docs/content/smb.md
index 3ee6efe10..10eaec518 100644
--- a/docs/content/smb.md
+++ b/docs/content/smb.md
@@ -245,7 +245,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SMB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/sugarsync.md b/docs/content/sugarsync.md
index 30633051c..0b19927d7 100644
--- a/docs/content/sugarsync.md
+++ b/docs/content/sugarsync.md
@@ -269,7 +269,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SUGARSYNC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/swift.md b/docs/content/swift.md
index ddcfa45f7..a627e3684 100644
--- a/docs/content/swift.md
+++ b/docs/content/swift.md
@@ -584,7 +584,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SWIFT_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8
{{< rem autogenerated options stop >}}
diff --git a/docs/content/uptobox.md b/docs/content/uptobox.md
index 9a08f3f53..816337330 100644
--- a/docs/content/uptobox.md
+++ b/docs/content/uptobox.md
@@ -143,7 +143,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/webdav.md b/docs/content/webdav.md
index 6f246a017..ea7b2669d 100644
--- a/docs/content/webdav.md
+++ b/docs/content/webdav.md
@@ -151,8 +151,8 @@ Properties:
- Sharepoint Online, authenticated by Microsoft account
- "sharepoint-ntlm"
- Sharepoint with NTLM authentication, usually self-hosted or on-premises
- - "rclone",
- - rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol,
+ - "rclone"
+ - rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
- "other"
- Other site/service or software
diff --git a/docs/content/yandex.md b/docs/content/yandex.md
index d62b33e2f..d8be56006 100644
--- a/docs/content/yandex.md
+++ b/docs/content/yandex.md
@@ -206,7 +206,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_YANDEX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
{{< rem autogenerated options stop >}}
diff --git a/docs/content/zoho.md b/docs/content/zoho.md
index c185a2271..b9ecdd8cd 100644
--- a/docs/content/zoho.md
+++ b/docs/content/zoho.md
@@ -234,7 +234,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ZOHO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Del,Ctl,InvalidUtf8
{{< rem autogenerated options stop >}}
diff --git a/rclone.1 b/rclone.1
index bdf35b4d7..347ad1846 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
-.TH "rclone" "1" "Sep 11, 2023" "User Manual" ""
+.TH "rclone" "1" "Nov 26, 2023" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@@ -209,6 +209,10 @@ Leviia Object Storage
.IP \[bu] 2
Liara Object Storage
.IP \[bu] 2
+Linkbox
+.IP \[bu] 2
+Linode Object Storage
+.IP \[bu] 2
Mail.ru Cloud
.IP \[bu] 2
Memset Memstore
@@ -219,6 +223,8 @@ Memory
.IP \[bu] 2
Microsoft Azure Blob Storage
.IP \[bu] 2
+Microsoft Azure Files Storage
+.IP \[bu] 2
Microsoft OneDrive
.IP \[bu] 2
Minio
@@ -432,6 +438,25 @@ Its current version is as below.
.PP
[IMAGE: Homebrew
package (https://repology.org/badge/version-for-repo/homebrew/rclone.svg)] (https://repology.org/project/rclone/versions)
+.SS Installation with MacPorts (#macos-macports)
+.PP
+On macOS, rclone can also be installed via
+MacPorts (https://www.macports.org):
+.IP
+.nf
+\f[C]
+sudo port install rclone
+\f[R]
+.fi
+.PP
+Note that this is a third party installer not controlled by the rclone
+developers so it may be out of date.
+Its current version is as below.
+.PP
+[IMAGE: MacPorts
+port (https://repology.org/badge/version-for-repo/macports/rclone.svg)] (https://repology.org/project/rclone/versions)
+.PP
+More information here (https://ports.macports.org/port/rclone/).
.SS Precompiled binary, using curl
.PP
To avoid problems with macOS gatekeeper enforcing the binary to be
@@ -738,7 +763,7 @@ $ sudo snap install rclone
\f[R]
.fi
.PP
-Due to the strict confinement of Snap, rclone snap cannot acess real
+Due to the strict confinement of Snap, rclone snap cannot access real
/home/$USER/.config/rclone directory, default config path is as below.
.IP \[bu] 2
Default config directory:
@@ -762,7 +787,7 @@ Its current version is as below.
.SS Source installation
.PP
Make sure you have git and Go (https://golang.org/) installed.
-Go version 1.17 or newer is required, latest release is recommended.
+Go version 1.18 or newer is required, the latest release is recommended.
You can get it from your package manager, or download it from
golang.org/dl (https://golang.org/dl/).
Then you can run the following:
@@ -805,19 +830,18 @@ by installing it in a MSYS2 (https://www.msys2.org) distribution (make
sure you install it in the classic mingw64 subsystem, the ucrt64 version
is not compatible).
.PP
-Additionally, on Windows, you must install the third party utility
-WinFsp (https://winfsp.dev/), with the \[dq]Developer\[dq] feature
-selected.
+Additionally, to build with mount on Windows, you must install the third
+party utility WinFsp (https://winfsp.dev/), with the \[dq]Developer\[dq]
+feature selected.
If building with cgo, you must also set environment variable CPATH
pointing to the fuse include directory within the WinFsp installation
(normally
\f[C]C:\[rs]Program Files (x86)\[rs]WinFsp\[rs]inc\[rs]fuse\f[R]).
.PP
-You may also add arguments \f[C]-ldflags -s\f[R] (with or without
-\f[C]-tags cmount\f[R]), to omit symbol table and debug information,
-making the executable file smaller, and \f[C]-trimpath\f[R] to remove
-references to local file system paths.
-This is how the official rclone releases are built.
+You may add arguments \f[C]-ldflags -s\f[R] to omit symbol table and
+debug information, making the executable file smaller, and
+\f[C]-trimpath\f[R] to remove references to local file system paths.
+The official rclone releases are built with both of these.
.IP
.nf
\f[C]
@@ -825,13 +849,57 @@ go build -trimpath -ldflags -s -tags cmount
\f[R]
.fi
.PP
+If you want to customize the version string, as reported by the
+\f[C]rclone version\f[R] command, you can set one of the variables
+\f[C]fs.Version\f[R], \f[C]fs.VersionTag\f[R] (to keep default suffix
+but customize the number), or \f[C]fs.VersionSuffix\f[R] (to keep
+default number but customize the suffix).
+This can be done from the build command, by adding to the
+\f[C]-ldflags\f[R] argument value as shown below.
+.IP
+.nf
+\f[C]
+go build -trimpath -ldflags \[dq]-s -X github.com/rclone/rclone/fs.Version=v9.9.9-test\[dq] -tags cmount
+\f[R]
+.fi
+.PP
+On Windows, the official executables also have the version information,
+as well as a file icon, embedded as binary resources.
+To get that with your own build you need to run the following command
+\f[B]before\f[R] the build command.
+It generates a Windows resource system object file, with extension
+\&.syso, e.g.
+\f[C]resource_windows_amd64.syso\f[R], that will be automatically picked
+up by future build commands.
+.IP
+.nf
+\f[C]
+go run bin/resource_windows.go
+\f[R]
+.fi
+.PP
+The above command will generate a resource file containing version
+information based on the fs.Version variable in source at the time you
+run the command, which means if the value of this variable changes you
+need to re-run the command for it to be reflected in the version
+information.
+Also, if you override this version variable in the build command as
+described above, you need to do that also when generating the resource
+file, or else it will still use the value from the source.
+.IP
+.nf
+\f[C]
+go run bin/resource_windows.go -version v9.9.9-test
+\f[R]
+.fi
+.PP
Instead of executing the \f[C]go build\f[R] command directly, you can
run it via the Makefile.
-It changes the version number suffix from \[dq]-DEV\[dq] to
-\[dq]-beta\[dq] and appends commit details.
-It also copies the resulting rclone executable into your GOPATH bin
-folder (\f[C]$(go env GOPATH)/bin\f[R], which corresponds to
-\f[C]\[ti]/go/bin/rclone\f[R] by default).
+The default target changes the version suffix from \[dq]-DEV\[dq] to
+\[dq]-beta\[dq] followed by additional commit details, embeds version
+information binary resources on Windows, and copies the resulting rclone
+executable into your GOPATH bin folder (\f[C]$(go env GOPATH)/bin\f[R],
+which corresponds to \f[C]\[ti]/go/bin/rclone\f[R] by default).
.IP
.nf
\f[C]
@@ -848,37 +916,25 @@ make GOTAGS=cmount
.fi
.PP
There are other make targets that can be used for more advanced builds,
-such as cross-compiling for all supported os/architectures, embedding
-icon and version info resources into windows executable, and packaging
-results into release artifacts.
+such as cross-compiling for all supported os/architectures, and
+packaging results into release artifacts.
See Makefile (https://github.com/rclone/rclone/blob/master/Makefile) and
cross-compile.go (https://github.com/rclone/rclone/blob/master/bin/cross-compile.go)
for details.
.PP
-Another alternative is to download the source, build and install rclone
-in one operation, as a regular Go package.
+Another alternative method for source installation is to download the
+source, build and install rclone - all in one operation, as a regular Go
+package.
The source will be stored in the Go module cache, and the resulting
executable will be in your GOPATH bin folder
(\f[C]$(go env GOPATH)/bin\f[R], which corresponds to
\f[C]\[ti]/go/bin/rclone\f[R] by default).
-.PP
-With Go version 1.17 or newer:
.IP
.nf
\f[C]
go install github.com/rclone/rclone\[at]latest
\f[R]
.fi
-.PP
-With Go versions older than 1.17 (do \f[B]not\f[R] use the \f[C]-u\f[R]
-flag, it causes Go to try to update the dependencies that rclone uses
-and sometimes these don\[aq]t work with the current version):
-.IP
-.nf
-\f[C]
-go get github.com/rclone/rclone
-\f[R]
-.fi
.SS Ansible installation
.PP
This can be done with Stefan Weichinger\[aq]s ansible
@@ -1078,7 +1134,7 @@ includes necessary runtime (.NET 5).
WinSW is a command-line only utility, where you have to manually create
an XML file with service configuration.
This may be a drawback for some, but it can also be an advantage as it
-is easy to back up and re-use the configuration settings, without having
+is easy to back up and reuse the configuration settings, without having to
go through manual steps in a GUI.
One thing to note is that by default it does not restart the service on
error, one have to explicit enable this in the configuration file (via
@@ -1178,6 +1234,8 @@ Jottacloud (https://rclone.org/jottacloud/)
.IP \[bu] 2
Koofr (https://rclone.org/koofr/)
.IP \[bu] 2
+Linkbox (https://rclone.org/linkbox/)
+.IP \[bu] 2
Mail.ru Cloud (https://rclone.org/mailru/)
.IP \[bu] 2
Mega (https://rclone.org/mega/)
@@ -1186,6 +1244,8 @@ Memory (https://rclone.org/memory/)
.IP \[bu] 2
Microsoft Azure Blob Storage (https://rclone.org/azureblob/)
.IP \[bu] 2
+Microsoft Azure Files Storage (https://rclone.org/azurefiles/)
+.IP \[bu] 2
Microsoft OneDrive (https://rclone.org/onedrive/)
.IP \[bu] 2
OpenStack Swift / Rackspace Cloudfiles / Blomp Cloud Storage / Memset
@@ -1454,11 +1514,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq])
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -1473,11 +1533,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
- --no-update-modtime Don\[aq]t update destination mod-time if files identical
+ --no-update-modtime Don\[aq]t update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq]
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq])
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
\f[R]
@@ -1615,11 +1676,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq])
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -1634,11 +1695,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
- --no-update-modtime Don\[aq]t update destination mod-time if files identical
+ --no-update-modtime Don\[aq]t update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq]
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq])
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
\f[R]
@@ -1780,11 +1842,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq])
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -1799,11 +1861,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
- --no-update-modtime Don\[aq]t update destination mod-time if files identical
+ --no-update-modtime Don\[aq]t update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq]
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq])
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
\f[R]
@@ -3416,11 +3479,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq])
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -3435,11 +3498,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
- --no-update-modtime Don\[aq]t update destination mod-time if files identical
+ --no-update-modtime Don\[aq]t update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq]
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq])
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
\f[R]
@@ -3621,16 +3685,19 @@ rclone (https://rclone.org/commands/rclone/) - Show help for rclone
commands, flags and backends.
.SH rclone checksum
.PP
-Checks the files in the source against a SUM file.
+Checks the files in the destination against a SUM file.
.SS Synopsis
.PP
-Checks that hashsums of source files match the SUM file.
+Checks that hashsums of destination files match the SUM file.
It compares hashes (MD5, SHA1, etc) and logs a report of files which
don\[aq]t match.
It doesn\[aq]t alter the file system.
.PP
+The sumfile is treated as the source and the dst:path is treated as the
+destination for the purposes of the output.
+.PP
If you supply the \f[C]--download\f[R] flag, it will download the data
-from remote and calculate the contents hash on the fly.
+from the remote and calculate the content hash on the fly.
This can be useful for remotes that don\[aq]t support hashes or if you
really want to check all the data.
.PP
@@ -3676,7 +3743,7 @@ more information.
.IP
.nf
\f[C]
-rclone checksum <hash> sumfile src:path [flags]
+rclone checksum <hash> sumfile dst:path [flags]
\f[R]
.fi
.SS Options
@@ -4714,11 +4781,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq])
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -4733,11 +4800,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
- --no-update-modtime Don\[aq]t update destination mod-time if files identical
+ --no-update-modtime Don\[aq]t update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq]
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq])
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
\f[R]
@@ -5342,10 +5410,6 @@ Supported hashes are:
* whirlpool
* crc32
* sha256
- * dropbox
- * hidrive
- * mailru
- * quickxor
\f[R]
.fi
.PP
@@ -5362,7 +5426,7 @@ case.
.IP
.nf
\f[C]
-rclone hashsum remote:path [flags]
+rclone hashsum [<hash> remote:path] [flags]
\f[R]
.fi
.SS Options
@@ -6267,13 +6331,22 @@ Note that mapping to a directory path, instead of a drive letter, does
not suffer from the same limitations.
.SS Mounting on macOS
.PP
-Mounting on macOS can be done either via
+Mounting on macOS can be done via the built-in NFS
+server (https://rclone.org/commands/rclone_serve_nfs/),
macFUSE (https://osxfuse.github.io/) (also known as osxfuse) or
FUSE-T (https://www.fuse-t.org/).
macFUSE is a traditional FUSE driver utilizing a macOS kernel extension
(kext).
FUSE-T is an alternative FUSE system which \[dq]mounts\[dq] via an NFSv4
local server.
+.SS NFS mount
+.PP
+This method spins up an NFS server using the serve
+nfs (https://rclone.org/commands/rclone_serve_nfs/) command and mounts
+it to the specified mountpoint.
+If you run this in background mode using \f[C]--daemon\f[R], you will
+need to send a SIGTERM signal to the rclone process using the
+\f[C]kill\f[R] command to stop the mount.
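+.PP
+A minimal sketch of this workflow (the mountpoint path is an arbitrary
+example, and \f[C]pgrep\f[R] is assumed to be available for finding the
+process):
+.IP
+.nf
+\f[C]
+# start the NFS-based mount in the background
+rclone mount remote: /path/to/mountpoint --daemon --vfs-cache-mode full
+# later, stop it by sending SIGTERM to the rclone process
+kill $(pgrep rclone)
+\f[R]
+.fi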
.SS macFUSE Notes
.PP
If installing macFUSE using dmg
@@ -6338,6 +6411,8 @@ This means that many applications won\[aq]t work with their files on an
rclone mount without \f[C]--vfs-cache-mode writes\f[R] or
\f[C]--vfs-cache-mode full\f[R].
See the VFS File Caching section for more info.
+When using the NFS mount on macOS, if you don\[aq]t specify
+\f[C]--vfs-cache-mode\f[R] the mount point will be read-only.
.PP
The bucket-based remotes (e.g.
Swift, S3, Google Compute Storage, B2) do not support the concept of
@@ -6519,7 +6594,7 @@ This allows to hide secrets from such commands as \f[C]ps\f[R] or
standard mount options like \f[C]x-systemd.automount\f[R],
\f[C]_netdev\f[R], \f[C]nosuid\f[R] and alike are intended only for
Automountd and ignored by rclone.
-.SS VFS - Virtual File System
+.SS VFS - Virtual File System
.PP
This command uses the VFS layer.
This adapts the cloud storage objects that rclone uses into something
@@ -6980,6 +7055,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -7092,11 +7168,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq])
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -7111,11 +7187,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
- --no-update-modtime Don\[aq]t update destination mod-time if files identical
+ --no-update-modtime Don\[aq]t update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq]
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq])
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
\f[R]
@@ -7718,6 +7795,42 @@ T}@T{
The UTC timestamp of an entry.
T}
.TE
+.PP
+The server also makes the following functions available so that they can
+be used within the template.
+These functions help extend the options for dynamic rendering of HTML.
+They can be used to render HTML based on specific conditions.
+.PP
+.TS
+tab(@);
+lw(35.0n) lw(35.0n).
+T{
+Function
+T}@T{
+Description
+T}
+_
+T{
+afterEpoch
+T}@T{
+Returns the time since the epoch for the given time.
+T}
+T{
+contains
+T}@T{
+Checks whether a given substring is present in a given string.
+T}
+T{
+hasPrefix
+T}@T{
+Checks whether the given string begins with the specified prefix.
+T}
+T{
+hasSuffix
+T}@T{
+Checks whether the given string ends with the specified suffix.
+T}
+.TE
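+.PP
+For illustration, a rough sketch of how these functions might be called
+from a custom \f[C]--template\f[R] file.
+The argument order shown here (string first, then the substring, prefix
+or suffix) is an assumption based on the Go \f[C]strings\f[R] package
+rather than something documented above:
+.IP
+.nf
+\f[C]
+{{ if contains \[dq]rclone serve\[dq] \[dq]serve\[dq] }}contains matched{{ end }}
+{{ if hasPrefix \[dq]backup-2024\[dq] \[dq]backup-\[dq] }}prefix matched{{ end }}
+{{ if hasSuffix \[dq]notes.txt\[dq] \[dq].txt\[dq] }}suffix matched{{ end }}
+\f[R]
+.fi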
.SS Authentication
.PP
By default this will serve files without needing a login.
@@ -8008,9 +8121,15 @@ remote:path over FTP.
rclone serve http (https://rclone.org/commands/rclone_serve_http/) -
Serve the remote over HTTP.
.IP \[bu] 2
+rclone serve nfs (https://rclone.org/commands/rclone_serve_nfs/) - Serve
+the remote as an NFS mount
+.IP \[bu] 2
rclone serve restic (https://rclone.org/commands/rclone_serve_restic/) -
Serve the remote for restic\[aq]s REST API.
.IP \[bu] 2
+rclone serve s3 (https://rclone.org/commands/rclone_serve_s3/) - Serve
+remote:path over s3.
+.IP \[bu] 2
rclone serve sftp (https://rclone.org/commands/rclone_serve_sftp/) -
Serve the remote over SFTP.
.IP \[bu] 2
@@ -8045,7 +8164,7 @@ default \[dq]rclone (hostname)\[dq].
.PP
Use \f[C]--log-trace\f[R] in conjunction with \f[C]-vv\f[R] to enable
additional debug logging of all UPNP traffic.
-.SS VFS - Virtual File System
+.SS VFS - Virtual File System
.PP
This command uses the VFS layer.
This adapts the cloud storage objects that rclone uses into something
@@ -8493,6 +8612,7 @@ rclone serve dlna remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -8589,7 +8709,7 @@ volumes.
All mount and VFS options are submitted by the docker daemon via API,
but you can also provide defaults on the command line as well as set
path to the config file and cache directory or adjust logging verbosity.
-.SS VFS - Virtual File System
+.SS VFS - Virtual File System
.PP
This command uses the VFS layer.
This adapts the cloud storage objects that rclone uses into something
@@ -9055,6 +9175,7 @@ rclone serve docker [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -9123,7 +9244,7 @@ By default this will serve files without needing a login.
.PP
You can set a single username and password with the --user and --pass
flags.
-.SS VFS - Virtual File System
+.SS VFS - Virtual File System
.PP
This command uses the VFS layer.
This adapts the cloud storage objects that rclone uses into something
@@ -9667,6 +9788,7 @@ rclone serve ftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -9877,6 +9999,42 @@ T}@T{
The UTC timestamp of an entry.
T}
.TE
+.PP
+The server also makes the following functions available so that they can
+be used within the template.
+These functions help extend the options for dynamic rendering of HTML.
+They can be used to render HTML based on specific conditions.
+.PP
+.TS
+tab(@);
+lw(35.0n) lw(35.0n).
+T{
+Function
+T}@T{
+Description
+T}
+_
+T{
+afterEpoch
+T}@T{
+Returns the time since the epoch for the given time.
+T}
+T{
+contains
+T}@T{
+Checks whether a given substring is present in a given string.
+T}
+T{
+hasPrefix
+T}@T{
+Checks whether the given string begins with the specified prefix.
+T}
+T{
+hasSuffix
+T}@T{
+Checks whether the given string ends with the specified suffix.
+T}
+.TE
.SS Authentication
.PP
By default this will serve files without needing a login.
@@ -9911,7 +10069,7 @@ Use \f[C]--realm\f[R] to set the authentication realm.
.PP
Use \f[C]--salt\f[R] to change the password hashing salt from the
default.
-.SS VFS - Virtual File System
+.SS VFS - Virtual File System
.PP
This command uses the VFS layer.
This adapts the cloud storage objects that rclone uses into something
@@ -10464,6 +10622,543 @@ rclone serve http remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+\f[R]
+.fi
+.SS Filter Options
+.PP
+Flags for filtering directory listings.
+.IP
+.nf
+\f[C]
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+\f[R]
+.fi
+.PP
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
+.SH SEE ALSO
+.IP \[bu] 2
+rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a
+remote over a protocol.
+.SH rclone serve nfs
+.PP
+Serve the remote as an NFS mount
+.SS Synopsis
+.PP
+Create an NFS server that serves the given remote over the network.
+.PP
+The primary purpose of this command is to enable the mount
+command (https://rclone.org/commands/rclone_mount/) on recent macOS
+versions where installing FUSE is very cumbersome.
+.PP
+Since this runs on NFSv3, no authentication method is available.
+Any client will be able to access the data.
+To limit access, you can serve NFS on a loopback address and rely on
+secure tunnels (such as SSH).
+For this reason, by default, a random TCP port is chosen and the
+loopback interface is used for the listening address, meaning that it
+is only available to the local machine.
+If you want other machines to access the NFS mount over the local
+network, you need to specify the listening address and port using the
+\f[C]--addr\f[R] flag.
+.PP
+Modifying files through the NFS protocol requires VFS caching.
+Usually you will need to specify \f[C]--vfs-cache-mode\f[R] in order to
+be able to write to the mountpoint (\f[C]full\f[R] is recommended).
+If you don\[aq]t specify a VFS cache mode, the mount will be read-only.
+.PP
+To serve NFS over the network, use the following command:
+.IP
+.nf
+\f[C]
+rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
+\f[R]
+.fi
+.PP
+Here we specify a fixed port so that the same port can be used in the mount command below.
+.PP
+To mount the server under Linux/macOS, use the following command:
+.IP
+.nf
+\f[C]
+mount -oport=$PORT,mountport=$PORT $HOSTNAME: path/to/mountpoint
+\f[R]
+.fi
+.PP
+Where \f[C]$PORT\f[R] is the same port number we used in the serve nfs
+command.
+.PP
+This feature is only available on Unix platforms.
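+.PP
+As a concrete sketch of the two steps above (12049 is an arbitrary
+example port and the paths are placeholders):
+.IP
+.nf
+\f[C]
+rclone serve nfs remote: --addr 127.0.0.1:12049 --vfs-cache-mode full
+# then, in another terminal (mount may require sudo):
+mkdir -p /path/to/mountpoint
+mount -oport=12049,mountport=12049 localhost: /path/to/mountpoint
+\f[R]
+.fi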
+.SS VFS - Virtual File System
+.PP
+This command uses the VFS layer.
+This adapts the cloud storage objects that rclone uses into something
+which looks much more like a disk filing system.
+.PP
+Cloud storage objects have lots of properties which aren\[aq]t like disk
+files - you can\[aq]t extend them or write to the middle of them, so the
+VFS layer has to deal with that.
+Because there is no one right way of doing this there are various
+options explained below.
+.PP
+The VFS layer also implements a directory cache - this caches info about
+files and directories (but not the data) in memory.
+.SS VFS Directory Cache
+.PP
+Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a
+directory should be considered up to date and not refreshed from the
+backend.
+Changes made through the VFS will appear immediately or invalidate the
+cache.
+.IP
+.nf
+\f[C]
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+\f[R]
+.fi
+.PP
+However, changes made directly on the cloud storage by the web interface
+or a different copy of rclone will only be picked up once the directory
+cache expires if the backend configured does not support polling for
+changes.
+If the backend supports polling, changes will be picked up within the
+polling interval.
+.PP
+You can send a \f[C]SIGHUP\f[R] signal to rclone for it to flush all
+directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+.IP
+.nf
+\f[C]
+kill -SIGHUP $(pidof rclone)
+\f[R]
+.fi
+.PP
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+.IP
+.nf
+\f[C]
+rclone rc vfs/forget
+\f[R]
+.fi
+.PP
+Or individual files or directories:
+.IP
+.nf
+\f[C]
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+\f[R]
+.fi
+.SS VFS File Buffering
+.PP
+The \f[C]--buffer-size\f[R] flag determines the amount of memory that
+will be used to buffer data in advance.
+.PP
+Each open file will try to keep the specified amount of data in memory
+at all times.
+The buffered data is bound to one open file and won\[aq]t be shared.
+.PP
+This flag is an upper limit for the memory used per open file.
+The buffer will only use memory for data that is downloaded but not yet
+read.
+If the buffer is empty, only a small amount of memory will be used.
+.PP
+The maximum memory used by rclone for buffering can be up to
+\f[C]--buffer-size * open files\f[R].
+.SS VFS File Caching
+.PP
+These flags control the VFS file caching options.
+File caching is necessary to make the VFS layer appear compatible with a
+normal file system.
+It can be disabled at the cost of some compatibility.
+.PP
+For example you\[aq]ll need to enable VFS caching if you want to read
+and write simultaneously to a file.
+See below for more details.
+.PP
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+.IP
+.nf
+\f[C]
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+\f[R]
+.fi
+.PP
+If run with \f[C]-vv\f[R] rclone will print the location of the file
+cache.
+The files are stored in the user cache file area which is OS dependent
+but can be controlled with \f[C]--cache-dir\f[R] or setting the
+appropriate environment variable.
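+.PP
+For example, a hedged sketch of pointing the cache at a dedicated
+directory and capping its size (the path and size are arbitrary
+examples):
+.IP
+.nf
+\f[C]
+rclone serve nfs remote: --vfs-cache-mode full --cache-dir /var/cache/rclone-vfs --vfs-cache-max-size 10G -vv
+\f[R]
+.fi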
+.PP
+The cache has 4 different modes selected by \f[C]--vfs-cache-mode\f[R].
+The higher the cache mode the more compatible rclone becomes at the cost
+of using disk space.
+.PP
+Note that files are written back to the remote only when they are closed
+and if they haven\[aq]t been accessed for \f[C]--vfs-write-back\f[R]
+seconds.
+If rclone is quit or dies with files that haven\[aq]t been uploaded,
+these will be uploaded next time rclone is run with the same flags.
+.PP
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
+Firstly because it is only checked every
+\f[C]--vfs-cache-poll-interval\f[R].
+Secondly because open files cannot be evicted from the cache.
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
+evict the least accessed files from the cache first.
+rclone will start with files that haven\[aq]t been accessed for the
+longest.
+This cache flushing strategy is efficient and more relevant files are
+likely to remain cached.
+.PP
+The \f[C]--vfs-cache-max-age\f[R] will evict files from the cache after
+the set time since last access has passed.
+The default value of 1 hour will start evicting files from cache that
+haven\[aq]t been accessed for 1 hour.
+When a cached file is accessed the 1 hour timer is reset to 0 and will
+wait for 1 more hour before evicting.
+Specify the time with standard notation: s, m, h, d, w.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]--vfs-cache-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]--cache-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
+.SS --vfs-cache-mode off
+.PP
+In this mode (the default) the cache will read directly from the remote
+and write directly to the remote without caching anything on disk.
+.PP
+This will mean some operations are not possible
+.IP \[bu] 2
+Files can\[aq]t be opened for both read AND write
+.IP \[bu] 2
+Files opened for write can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files open for read with O_TRUNC will be opened write only
+.IP \[bu] 2
+Files open for write only will behave as if O_TRUNC was supplied
+.IP \[bu] 2
+Open modes O_APPEND, O_TRUNC are ignored
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS --vfs-cache-mode minimal
+.PP
+This is very similar to \[dq]off\[dq] except that files opened for read
+AND write will be buffered to disk.
+This means that files opened for write will be a lot more compatible,
+but uses the minimal disk space.
+.PP
+These operations are not possible
+.IP \[bu] 2
+Files opened for write only can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files opened for write only will ignore O_APPEND, O_TRUNC
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS --vfs-cache-mode writes
+.PP
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+.SS --vfs-cache-mode full
+.PP
+In this mode all reads and writes are buffered to and from disk.
+When data is read from the remote this is buffered to disk as well.
+.PP
+In this mode the files in the cache will be sparse files and rclone will
+keep track of which bits of the files it has downloaded.
+.PP
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file.
+These files will appear to be their full size in the cache, but they
+will be sparse files with only the data that has been downloaded present
+in them.
+.PP
+This mode should support all normal file system operations and is
+otherwise identical to \f[C]--vfs-cache-mode writes\f[R].
+.PP
+When reading a file rclone will read \f[C]--buffer-size\f[R] plus
+\f[C]--vfs-read-ahead\f[R] bytes ahead.
+The \f[C]--buffer-size\f[R] is buffered in memory whereas the
+\f[C]--vfs-read-ahead\f[R] is buffered on disk.
+.PP
+When using this mode it is recommended that \f[C]--buffer-size\f[R] is
+not set too large and \f[C]--vfs-read-ahead\f[R] is set large if
+required.
+.PP
+\f[B]IMPORTANT\f[R] not all file systems support sparse files.
+In particular FAT/exFAT do not.
+Rclone will perform very badly if the cache directory is on a filesystem
+which doesn\[aq]t support sparse files and it will log an ERROR message
+if one is detected.
+.SS Fingerprinting
+.PP
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file.
+Fingerprints are made from:
+.IP \[bu] 2
+size
+.IP \[bu] 2
+modification time
+.IP \[bu] 2
+hash
+.PP
+where available on an object.
+.PP
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+.PP
+For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and
+\f[C]sftp\f[R] backends as they have to read the entire file and hash
+it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R],
+\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because
+they need to do an extra API call to fetch it.
+.PP
+If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will
+not include the slow operations in the fingerprint.
+This makes the fingerprinting less accurate but much faster and will
+improve the opening time of cached files.
+.PP
+If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or
+\f[C]swift\f[R] backends then using this flag is recommended.
+.PP
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+.SS VFS Chunked Reading
+.PP
+When rclone reads files from a remote it reads them in chunks.
+This means that rather than requesting the whole file rclone reads the
+chunk specified.
+This can reduce the used download quota for some remotes by requesting
+only chunks from the remote that are actually read, at the cost of an
+increased number of requests.
+.PP
+These flags control the chunking:
+.IP
+.nf
+\f[C]
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+\f[R]
+.fi
+.PP
+Rclone will start reading a chunk of size
+\f[C]--vfs-read-chunk-size\f[R], and then double the size for each read.
+When \f[C]--vfs-read-chunk-size-limit\f[R] is specified, and greater
+than \f[C]--vfs-read-chunk-size\f[R], the chunk size for each open file
+will get doubled only until the specified value is reached.
+If the value is \[dq]off\[dq], which is the default, the limit is
+disabled and the chunk size will grow indefinitely.
+.PP
+With \f[C]--vfs-read-chunk-size 100M\f[R] and
+\f[C]--vfs-read-chunk-size-limit 0\f[R] the following parts will be
+downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
+When \f[C]--vfs-read-chunk-size-limit 500M\f[R] is specified, the result
+would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so
+on.
+.PP
+Setting \f[C]--vfs-read-chunk-size\f[R] to \f[C]0\f[R] or \[dq]off\[dq]
+disables chunked reading.
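+.PP
+For example, to start reading in 64M chunks and stop doubling at 1G
+(these values are illustrative, not recommendations):
+.IP
+.nf
+\f[C]
+rclone serve nfs remote: --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G
+\f[R]
+.fi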
+.SS VFS Performance
+.PP
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons.
+See also the chunked reading feature.
+.PP
+In particular S3 and Swift benefit hugely from the
+\f[C]--no-modtime\f[R] flag (or use \f[C]--use-server-modtime\f[R] for a
+slightly different effect) as each read of the modification time takes a
+transaction.
+.IP
+.nf
+\f[C]
+--no-checksum Don\[aq]t compare checksums on up/download.
+--no-modtime Don\[aq]t read/write the modification time (can speed things up).
+--no-seek Don\[aq]t allow seeking in files.
+--read-only Only allow read-only access.
+\f[R]
+.fi
+.PP
+Sometimes rclone is delivered reads or writes out of order.
+Rather than seeking rclone will wait a short time for the in sequence
+read or write to come in.
+These flags only come into effect when not using an on disk cache file.
+.IP
+.nf
+\f[C]
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+\f[R]
+.fi
+.PP
+When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value
+writes or full), the global flag \f[C]--transfers\f[R] can be set to
+adjust the number of parallel uploads of modified files from the cache
+(the related global flag \f[C]--checkers\f[R] has no effect on the VFS).
+.IP
+.nf
+\f[C]
+--transfers int Number of file transfers to run in parallel (default 4)
+\f[R]
+.fi
+.SS VFS Case Sensitivity
+.PP
+Linux file systems are case-sensitive: two files can differ only by
+case, and the exact case must be used when opening a file.
+.PP
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case
+used to create the file is preserved and available for programs to
+query.
+It is not allowed for two files in the same directory to differ only by
+case.
+.PP
+Usually file systems on macOS are case-insensitive.
+It is possible to make macOS file systems case-sensitive but that is not
+the default.
+.PP
+The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone
+handles these two cases.
+If its value is \[dq]false\[dq], rclone passes file names to the remote
+as-is.
+If the flag is \[dq]true\[dq] (or appears without a value on the command
+line), rclone may perform a \[dq]fixup\[dq] as explained below.
+.PP
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote.
+If an argument refers to an existing file with exactly the same name,
+then the case of the existing file on the disk will be used.
+However, if a file name with exactly the same name is not found but a
+name differing only by case exists, rclone will transparently fixup the
+name.
+This fixup happens only when an existing file is requested.
+Case sensitivity of file names created anew by rclone is controlled by
+the underlying remote.
+.PP
+Note that case sensitivity of the operating system running rclone (the
+target) may differ from case sensitivity of a file system presented by
+rclone (the source).
+The flag controls whether \[dq]fixup\[dq] is performed to satisfy the
+target.
+.PP
+If the flag is not provided on the command line, then its default value
+depends on the operating system where rclone runs: \[dq]true\[dq] on
+Windows and macOS, \[dq]false\[dq] otherwise.
+If the flag is provided without a value, then it is \[dq]true\[dq].
+.SS VFS Disk Options
+.PP
+This flag allows you to manually set the statistics about the filing
+system.
+It can be useful when those statistics cannot be read correctly
+automatically.
+.IP
+.nf
+\f[C]
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+\f[R]
+.fi
+.SS Alternate report of used bytes
+.PP
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running \f[C]df\f[R]
+on the filesystem, then pass the flag \f[C]--vfs-used-is-size\f[R] to
+rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to
+\f[C]rclone size\f[R] and compute the total used space itself.
+.PP
+\f[I]WARNING.\f[R] Contrary to \f[C]rclone size\f[R], this flag ignores
+filters so that the result is accurate.
+However, this is very inefficient and may cost lots of API calls
+resulting in extra charges.
+Use it as a last resort and only with caching.
+.IP
+.nf
+\f[C]
+rclone serve nfs remote:path [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+ --addr string IPaddress:Port or :Port to bind server to
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for nfs
+ --no-checksum Don\[aq]t compare checksums on up/download
+ --no-modtime Don\[aq]t read/write the modification time (can speed things up)
+ --no-seek Don\[aq]t allow seeking in files
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -10731,6 +11426,707 @@ not listed here.
.IP \[bu] 2
rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a
remote over a protocol.
+.SH rclone serve s3
+.PP
+Serve remote:path over s3.
+.SS Synopsis
+.PP
+\f[C]serve s3\f[R] implements a basic s3 server that serves a remote via
+s3.
+This can be viewed with an s3 client, or you can make an s3 type
+remote (https://rclone.org/s3/) to read and write to it with rclone.
+.PP
+\f[C]serve s3\f[R] is considered \f[B]Experimental\f[R] so use with
+care.
+.PP
+The S3 server supports Signature Version 4 authentication.
+Just use \f[C]--auth-key accessKey,secretKey\f[R] and set the
+\f[C]Authorization\f[R] header correctly in the request.
+(See the AWS
+docs (https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
+.PP
+\f[C]--auth-key\f[R] can be repeated for multiple auth pairs.
+If \f[C]--auth-key\f[R] is not provided then \f[C]serve s3\f[R] will
+allow anonymous access.
+.PP
+Please note that some clients may require HTTPS endpoints.
+See the SSL docs for more information.
+.PP
+This command uses the VFS directory cache.
+All the functionality will work with \f[C]--vfs-cache-mode off\f[R].
+\f[C]--vfs-cache-mode full\f[R] (or \f[C]writes\f[R]) can be used to
+cache objects locally to improve performance.
+.PP
+Use \f[C]--force-path-style=false\f[R] if you want to use the bucket
+name as a part of the hostname (such as mybucket.local).
+.PP
+Use \f[C]--etag-hash\f[R] if you want to change the hash used for the
+\f[C]ETag\f[R].
+Note that using anything other than \f[C]MD5\f[R] (the default) is
+likely to cause problems for S3 clients which rely on the ETag being
+the MD5.
+.SS Quickstart
+.PP
+For a simple setup, to serve \f[C]remote:path\f[R] over s3, run the
+server like this:
+.IP
+.nf
+\f[C]
+rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+\f[R]
+.fi
+.PP
+This will be compatible with an rclone remote which is defined like
+this:
+.IP
+.nf
+\f[C]
+[serves3]
+type = s3
+provider = Rclone
+endpoint = http://127.0.0.1:8080/
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+use_multipart_uploads = false
+\f[R]
+.fi
+.PP
+Note that setting \f[C]use_multipart_uploads = false\f[R] in the config
+above is to work around a bug which will be fixed in due course.
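+.PP
+As a rough sketch of using that remote once the server is running (the
+bucket and file names are placeholders):
+.IP
+.nf
+\f[C]
+rclone mkdir serves3:mybucket
+rclone copy /tmp/report.pdf serves3:mybucket
+rclone ls serves3:mybucket
+\f[R]
+.fi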
+.SS Bugs
+.PP
+When uploading multipart files, \f[C]serve s3\f[R] holds all the parts
+in memory (see #7453 (https://github.com/rclone/rclone/issues/7453)).
+This is a limitation of the library rclone uses for serving S3 and will
+hopefully be fixed at some point.
+.PP
+Multipart server side copies do not work (see
+#7454 (https://github.com/rclone/rclone/issues/7454)).
+These take a very long time and eventually fail.
+The default threshold for multipart server side copies is 5G, which is
+the maximum it can be, so files above this size will fail to be server
+side copied.
+.PP
+For a current list of \f[C]serve s3\f[R] bugs see the serve
+s3 (https://github.com/rclone/rclone/labels/serve%20s3) bug category on
+GitHub.
+.SS Limitations
+.PP
+\f[C]serve s3\f[R] will treat all directories in the root as buckets and
+ignore all files in the root.
+You can use \f[C]CreateBucket\f[R] to create folders under the root, but
+you can\[aq]t create empty folders under other folders not in the root.
+.PP
+When using \f[C]PutObject\f[R] or \f[C]DeleteObject\f[R], rclone will
+automatically create or clean up empty folders.
+If you don\[aq]t want to clean up empty folders automatically, use
+\f[C]--no-cleanup\f[R].
+.PP
+When using \f[C]ListObjects\f[R], rclone will use \f[C]/\f[R] when the
+delimiter is empty.
+This reduces backend requests with no effect on most operations, but if
+the delimiter is something other than \f[C]/\f[R] or empty, rclone will
+do a full recursive search of the backend, which can take some time.
+.PP
+Versioning is not currently supported.
+.PP
+Metadata will only be kept in memory, with the exception of the rclone
+\f[C]mtime\f[R] metadata, which will be set as the modification time of
+the file.
+.SS Supported operations
+.PP
+\f[C]serve s3\f[R] currently supports the following operations.
+.IP \[bu] 2
+Bucket
+.RS 2
+.IP \[bu] 2
+\f[C]ListBuckets\f[R]
+.IP \[bu] 2
+\f[C]CreateBucket\f[R]
+.IP \[bu] 2
+\f[C]DeleteBucket\f[R]
+.RE
+.IP \[bu] 2
+Object
+.RS 2
+.IP \[bu] 2
+\f[C]HeadObject\f[R]
+.IP \[bu] 2
+\f[C]ListObjects\f[R]
+.IP \[bu] 2
+\f[C]GetObject\f[R]
+.IP \[bu] 2
+\f[C]PutObject\f[R]
+.IP \[bu] 2
+\f[C]DeleteObject\f[R]
+.IP \[bu] 2
+\f[C]DeleteObjects\f[R]
+.IP \[bu] 2
+\f[C]CreateMultipartUpload\f[R]
+.IP \[bu] 2
+\f[C]CompleteMultipartUpload\f[R]
+.IP \[bu] 2
+\f[C]AbortMultipartUpload\f[R]
+.IP \[bu] 2
+\f[C]CopyObject\f[R]
+.IP \[bu] 2
+\f[C]UploadPart\f[R]
+.RE
+.PP
+Other operations will return error \f[C]Unimplemented\f[R].
+.SS Server options
+.PP
+Use \f[C]--addr\f[R] to specify which IP address and port the server
+should listen on, eg \f[C]--addr 1.2.3.4:8000\f[R] or
+\f[C]--addr :8080\f[R] to listen to all IPs.
+By default it only listens on localhost.
+You can use port :0 to let the OS choose an available port.
+.PP
+If you set \f[C]--addr\f[R] to listen on a public or LAN accessible IP
+address then using Authentication is advised - see the next section for
+info.
+.PP
+You can use a unix socket by setting the url to
+\f[C]unix:///path/to/socket\f[R] or just by using an absolute path name.
+Note that unix sockets bypass the authentication - this is expected to
+be done with file system permissions.
+.PP
+\f[C]--addr\f[R] may be repeated to listen on multiple
+IPs/ports/sockets.
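+.PP
+For example, listening on a local TCP port and a unix socket at the
+same time (the socket path and credentials are placeholders):
+.IP
+.nf
+\f[C]
+rclone serve s3 remote:path --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY --addr 127.0.0.1:8080 --addr unix:///tmp/rclone-s3.sock
+\f[R]
+.fi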
+.PP
+\f[C]--server-read-timeout\f[R] and \f[C]--server-write-timeout\f[R] can
+be used to control the timeouts on the server.
+Note that this is the total time for a transfer.
+.PP
+\f[C]--max-header-bytes\f[R] controls the maximum number of bytes the
+server will accept in the HTTP header.
+.PP
+\f[C]--baseurl\f[R] controls the URL prefix that rclone serves from.
+By default rclone will serve from the root.
+If you used \f[C]--baseurl \[dq]/rclone\[dq]\f[R] then rclone would
+serve from a URL starting with \[dq]/rclone/\[dq].
+This is useful if you wish to proxy rclone serve.
+Rclone automatically inserts leading and trailing \[dq]/\[dq] on
+\f[C]--baseurl\f[R], so \f[C]--baseurl \[dq]rclone\[dq]\f[R],
+\f[C]--baseurl \[dq]/rclone\[dq]\f[R] and
+\f[C]--baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically.
+.SS TLS (SSL)
+.PP
+By default this will serve over http.
+If you want you can serve over https.
+You will need to supply the \f[C]--cert\f[R] and \f[C]--key\f[R] flags.
+If you wish to do client side certificate validation then you will need
+to supply \f[C]--client-ca\f[R] also.
+.PP
+\f[C]--cert\f[R] should be either a PEM encoded certificate or a
+concatenation of that with the CA certificate.
+\f[C]--key\f[R] should be the PEM encoded private key and
+\f[C]--client-ca\f[R] should be the PEM encoded client certificate
+authority certificate.
+.PP
+\f[C]--min-tls-version\f[R] is the minimum TLS version that is acceptable.
+Valid values are \[dq]tls1.0\[dq], \[dq]tls1.1\[dq], \[dq]tls1.2\[dq]
+and \[dq]tls1.3\[dq] (default \[dq]tls1.0\[dq]).
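+.PP
+A minimal sketch of serving over HTTPS (the certificate, key and
+credentials are placeholders):
+.IP
+.nf
+\f[C]
+rclone serve s3 remote:path --addr :8443 --cert server.crt --key server.key --min-tls-version tls1.2 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY
+\f[R]
+.fi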
+.SS VFS - Virtual File System
+.PP
+This command uses the VFS layer.
+This adapts the cloud storage objects that rclone uses into something
+which looks much more like a disk filing system.
+.PP
+Cloud storage objects have lots of properties which aren\[aq]t like disk
+files - you can\[aq]t extend them or write to the middle of them, so the
+VFS layer has to deal with that.
+Because there is no one right way of doing this there are various
+options explained below.
+.PP
+The VFS layer also implements a directory cache - this caches info about
+files and directories (but not the data) in memory.
+.SS VFS Directory Cache
+.PP
+Using the \f[C]--dir-cache-time\f[R] flag, you can control how long a
+directory should be considered up to date and not refreshed from the
+backend.
+Changes made through the VFS will appear immediately or invalidate the
+cache.
+.IP
+.nf
+\f[C]
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+\f[R]
+.fi
+.PP
+However, changes made directly on the cloud storage by the web interface
+or a different copy of rclone will only be picked up once the directory
+cache expires if the backend configured does not support polling for
+changes.
+If the backend supports polling, changes will be picked up within the
+polling interval.
+.PP
+You can send a \f[C]SIGHUP\f[R] signal to rclone for it to flush all
+directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+.IP
+.nf
+\f[C]
+kill -SIGHUP $(pidof rclone)
+\f[R]
+.fi
+.PP
+If you configure rclone with a remote control then you can use rclone rc
+to flush the whole directory cache:
+.IP
+.nf
+\f[C]
+rclone rc vfs/forget
+\f[R]
+.fi
+.PP
+Or individual files or directories:
+.IP
+.nf
+\f[C]
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+\f[R]
+.fi
+.SS VFS File Buffering
+.PP
+The \f[C]--buffer-size\f[R] flag determines the amount of memory that
+will be used to buffer data in advance.
+.PP
+Each open file will try to keep the specified amount of data in memory
+at all times.
+The buffered data is bound to one open file and won\[aq]t be shared.
+.PP
+This flag is an upper limit for the memory used per open file.
+The buffer will only use memory for data that is downloaded but not yet
+read.
+If the buffer is empty, only a small amount of memory will be used.
+.PP
+The maximum memory used by rclone for buffering can be up to
+\f[C]--buffer-size * open files\f[R].
+.SS VFS File Caching
+.PP
+These flags control the VFS file caching options.
+File caching is necessary to make the VFS layer appear compatible with a
+normal file system.
+It can be disabled at the cost of some compatibility.
+.PP
+For example you\[aq]ll need to enable VFS caching if you want to read
+and write simultaneously to a file.
+See below for more details.
+.PP
+Note that the VFS cache is separate from the cache backend and you may
+find that you need one or the other or both.
+.IP
+.nf
+\f[C]
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+\f[R]
+.fi
+.PP
+If run with \f[C]-vv\f[R] rclone will print the location of the file
+cache.
+The files are stored in the user cache file area which is OS dependent
+but can be controlled with \f[C]--cache-dir\f[R] or setting the
+appropriate environment variable.
+.PP
+The cache has 4 different modes selected by \f[C]--vfs-cache-mode\f[R].
+The higher the cache mode the more compatible rclone becomes at the cost
+of using disk space.
+.PP
+Note that files are written back to the remote only when they are closed
+and if they haven\[aq]t been accessed for \f[C]--vfs-write-back\f[R]
+seconds.
+If rclone is quit or dies with files that haven\[aq]t been uploaded,
+these will be uploaded next time rclone is run with the same flags.
+.PP
+If using \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] note that the cache may exceed these
+quotas for two reasons.
+Firstly because it is only checked every
+\f[C]--vfs-cache-poll-interval\f[R].
+Secondly because open files cannot be evicted from the cache.
+When \f[C]--vfs-cache-max-size\f[R] or
+\f[C]--vfs-cache-min-free-space\f[R] is exceeded, rclone will attempt to
+evict the least accessed files from the cache first.
+rclone will start with files that haven\[aq]t been accessed for the
+longest.
+This cache flushing strategy is efficient and more relevant files are
+likely to remain cached.
+.PP
+The \f[C]--vfs-cache-max-age\f[R] will evict files from the cache after
+the set time since last access has passed.
+The default value of 1 hour will start evicting files from cache that
+haven\[aq]t been accessed for 1 hour.
+When a cached file is accessed the 1 hour timer is reset to 0 and will
+wait for 1 more hour before evicting.
+Specify the time with standard notation: s, m, h, d, w.
+.PP
+You \f[B]should not\f[R] run two copies of rclone using the same VFS
+cache with the same or overlapping remotes if using
+\f[C]--vfs-cache-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]--cache-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t
+overlap.
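+.PP
+For example, two instances serving the same remote could each be given
+their own cache hierarchy (the paths and the second port below are
+illustrative only):
+.IP
+.nf
+\f[C]
+rclone serve s3 remote:path --vfs-cache-mode writes --cache-dir /tmp/rclone-cache-1
+rclone serve s3 remote:path --vfs-cache-mode writes --cache-dir /tmp/rclone-cache-2 --addr 127.0.0.1:8081
+\f[R]
+.fi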
+.SS --vfs-cache-mode off
+.PP
+In this mode (the default) the cache will read directly from the remote
+and write directly to the remote without caching anything on disk.
+.PP
+This will mean some operations are not possible
+.IP \[bu] 2
+Files can\[aq]t be opened for both read AND write
+.IP \[bu] 2
+Files opened for write can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files open for read with O_TRUNC will be opened write only
+.IP \[bu] 2
+Files open for write only will behave as if O_TRUNC was supplied
+.IP \[bu] 2
+Open modes O_APPEND, O_TRUNC are ignored
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS --vfs-cache-mode minimal
+.PP
+This is very similar to \[dq]off\[dq] except that files opened for read
+AND write will be buffered to disk.
+This means that files opened for write will be a lot more compatible,
+while using minimal disk space.
+.PP
+These operations are not possible
+.IP \[bu] 2
+Files opened for write only can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files opened for write only will ignore O_APPEND, O_TRUNC
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS --vfs-cache-mode writes
+.PP
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload fails it will be retried at exponentially increasing
+intervals up to 1 minute.
+.SS --vfs-cache-mode full
+.PP
+In this mode all reads and writes are buffered to and from disk.
+When data is read from the remote this is buffered to disk as well.
+.PP
+In this mode the files in the cache will be sparse files and rclone will
+keep track of which bits of the files it has downloaded.
+.PP
+So if an application only reads the start of each file, then rclone
+will only buffer the start of the file.
+These files will appear to be their full size in the cache, but they
+will be sparse files with only the data that has been downloaded present
+in them.
+.PP
+This mode should support all normal file system operations and is
+otherwise identical to \f[C]--vfs-cache-mode writes\f[R].
+.PP
+When reading a file rclone will read \f[C]--buffer-size\f[R] plus
+\f[C]--vfs-read-ahead\f[R] bytes ahead.
+The \f[C]--buffer-size\f[R] is buffered in memory whereas the
+\f[C]--vfs-read-ahead\f[R] is buffered on disk.
+.PP
+When using this mode it is recommended that \f[C]--buffer-size\f[R] is
+not set too large and \f[C]--vfs-read-ahead\f[R] is set large if
+required.
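+.PP
+As a sketch of that recommendation (the remote name and sizes are
+illustrative only):
+.IP
+.nf
+\f[C]
+rclone serve s3 remote:path --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M
+\f[R]
+.fi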
+.PP
+\f[B]IMPORTANT\f[R] not all file systems support sparse files.
+In particular FAT/exFAT do not.
+Rclone will perform very badly if the cache directory is on a filesystem
+which doesn\[aq]t support sparse files and it will log an ERROR message
+if one is detected.
+.SS Fingerprinting
+.PP
+Various parts of the VFS use fingerprinting to see if a local file copy
+has changed relative to a remote file.
+Fingerprints are made from:
+.IP \[bu] 2
+size
+.IP \[bu] 2
+modification time
+.IP \[bu] 2
+hash
+.PP
+where available on an object.
+.PP
+On some backends some of these attributes are slow to read (they take an
+extra API call per object, or extra work per object).
+.PP
+For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and
+\f[C]sftp\f[R] backends as they have to read the entire file and hash
+it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R],
+\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because
+they need to do an extra API call to fetch it.
+.PP
+If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will
+not include the slow operations in the fingerprint.
+This makes the fingerprinting less accurate but much faster and will
+improve the opening time of cached files.
+.PP
+If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or
+\f[C]swift\f[R] backends then using this flag is recommended.
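+.PP
+For example, a cached S3 remote might be served like this (the remote
+name is a placeholder):
+.IP
+.nf
+\f[C]
+rclone serve s3 mys3:bucket --vfs-cache-mode full --vfs-fast-fingerprint
+\f[R]
+.fi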
+.PP
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+.SS VFS Chunked Reading
+.PP
+When rclone reads files from a remote it reads them in chunks.
+This means that rather than requesting the whole file rclone reads the
+chunk specified.
+This can reduce the used download quota for some remotes by requesting
+only chunks from the remote that are actually read, at the cost of an
+increased number of requests.
+.PP
+These flags control the chunking:
+.IP
+.nf
+\f[C]
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+\f[R]
+.fi
+.PP
+Rclone will start reading a chunk of size
+\f[C]--vfs-read-chunk-size\f[R], and then double the size for each read.
+When \f[C]--vfs-read-chunk-size-limit\f[R] is specified, and greater
+than \f[C]--vfs-read-chunk-size\f[R], the chunk size for each open file
+will get doubled only until the specified value is reached.
+If the value is \[dq]off\[dq], which is the default, the limit is
+disabled and the chunk size will grow indefinitely.
+.PP
+With \f[C]--vfs-read-chunk-size 100M\f[R] and
+\f[C]--vfs-read-chunk-size-limit 0\f[R] the following parts will be
+downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
+When \f[C]--vfs-read-chunk-size-limit 500M\f[R] is specified, the result
+would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so
+on.
+.PP
+Setting \f[C]--vfs-read-chunk-size\f[R] to \f[C]0\f[R] or \[dq]off\[dq]
+disables chunked reading.
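+.PP
+As an illustration (the remote name and sizes are placeholders), the
+following starts with 64M chunks and stops doubling once requests reach
+512M:
+.IP
+.nf
+\f[C]
+rclone serve s3 remote:path --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 512M
+\f[R]
+.fi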
+.SS VFS Performance
+.PP
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons.
+See also the chunked reading feature.
+.PP
+In particular S3 and Swift benefit hugely from the
+\f[C]--no-modtime\f[R] flag (or use \f[C]--use-server-modtime\f[R] for a
+slightly different effect) as each read of the modification time takes a
+transaction.
+.IP
+.nf
+\f[C]
+--no-checksum Don\[aq]t compare checksums on up/download.
+--no-modtime Don\[aq]t read/write the modification time (can speed things up).
+--no-seek Don\[aq]t allow seeking in files.
+--read-only Only allow read-only access.
+\f[R]
+.fi
+.PP
+Sometimes rclone is delivered reads or writes out of order.
+Rather than seeking rclone will wait a short time for the in sequence
+read or write to come in.
+These flags only come into effect when not using an on disk cache file.
+.IP
+.nf
+\f[C]
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+\f[R]
+.fi
+.PP
+When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value
+writes or full), the global flag \f[C]--transfers\f[R] can be set to
+adjust the number of parallel uploads of modified files from the cache
+(the related global flag \f[C]--checkers\f[R] has no effect on the VFS).
+.IP
+.nf
+\f[C]
+--transfers int Number of file transfers to run in parallel (default 4)
+\f[R]
+.fi
+.SS VFS Case Sensitivity
+.PP
+Linux file systems are case-sensitive: two files can differ only by
+case, and the exact case must be used when opening a file.
+.PP
+File systems in modern Windows are case-insensitive but case-preserving:
+although existing files can be opened using any case, the exact case
+used to create the file is preserved and available for programs to
+query.
+It is not allowed for two files in the same directory to differ only by
+case.
+.PP
+Usually file systems on macOS are case-insensitive.
+It is possible to make macOS file systems case-sensitive but that is not
+the default.
+.PP
+The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone
+handles these two cases.
+If its value is \[dq]false\[dq], rclone passes file names to the remote
+as-is.
+If the flag is \[dq]true\[dq] (or appears without a value on the command
+line), rclone may perform a \[dq]fixup\[dq] as explained below.
+.PP
+The user may specify a file name to open/delete/rename/etc with a case
+different than what is stored on the remote.
+If an argument refers to an existing file with exactly the same name,
+then the case of the existing file on the disk will be used.
+However, if a file name with exactly the same name is not found but a
+name differing only by case exists, rclone will transparently fixup the
+name.
+This fixup happens only when an existing file is requested.
+Case sensitivity of file names created anew by rclone is controlled by
+the underlying remote.
+.PP
+Note that case sensitivity of the operating system running rclone (the
+target) may differ from case sensitivity of a file system presented by
+rclone (the source).
+The flag controls whether \[dq]fixup\[dq] is performed to satisfy the
+target.
+.PP
+If the flag is not provided on the command line, then its default value
+depends on the operating system where rclone runs: \[dq]true\[dq] on
+Windows and macOS, \[dq]false\[dq] otherwise.
+If the flag is provided without a value, then it is \[dq]true\[dq].
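+.PP
+For example, to enable the fixup behaviour on a host where the default
+would be \[dq]false\[dq] (such as Linux), you might run:
+.IP
+.nf
+\f[C]
+rclone serve s3 remote:path --vfs-case-insensitive
+\f[R]
+.fi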
+.SS VFS Disk Options
+.PP
+This flag allows you to manually set the statistics about the filing
+system.
+It can be useful when those statistics cannot be read correctly
+automatically.
+.IP
+.nf
+\f[C]
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+\f[R]
+.fi
+.SS Alternate report of used bytes
+.PP
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running \f[C]df\f[R]
+on the filesystem, then pass the flag \f[C]--vfs-used-is-size\f[R] to
+rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to
+\f[C]rclone size\f[R] and compute the total used space itself.
+.PP
+\f[I]WARNING.\f[R] Contrary to \f[C]rclone size\f[R], this flag ignores
+filters so that the result is accurate.
+However, this is very inefficient and may cost lots of API calls
+resulting in extra charges.
+Use it as a last resort and only with caching.
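+.PP
+A minimal sketch of enabling this (the remote name is a placeholder);
+computing the used space then requires scanning the whole remote:
+.IP
+.nf
+\f[C]
+rclone serve s3 remote:path --vfs-used-is-size
+\f[R]
+.fi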
+.IP
+.nf
+\f[C]
+rclone serve s3 remote:path [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+ --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --allow-origin string Origin which cross-domain request (CORS) can be executed from
+ --auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
+ --baseurl string Prefix for URLs - leave blank for root
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --etag-hash string Which hash to use for the ETag, or auto or blank for off (default \[dq]MD5\[dq])
+ --file-perms FileMode File permissions (default 0666)
+      --force-path-style                       If true use path style access if false use virtual hosted style (default true)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for s3
+ --key string TLS PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default \[dq]tls1.0\[dq])
+ --no-checksum Don\[aq]t compare checksums on up/download
+ --no-cleanup Not to cleanup empty folder after object is deleted
+ --no-modtime Don\[aq]t read/write the modification time (can speed things up)
+ --no-seek Don\[aq]t allow seeking in files
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+\f[R]
+.fi
+.SS Filter Options
+.PP
+Flags for filtering directory listings.
+.IP
+.nf
+\f[C]
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+\f[R]
+.fi
+.PP
+See the global flags page (https://rclone.org/flags/) for global options
+not listed here.
+.SH SEE ALSO
+.IP \[bu] 2
+rclone serve (https://rclone.org/commands/rclone_serve/) - Serve a
+remote over a protocol.
.SH rclone serve sftp
.PP
Serve the remote over SFTP.
@@ -11341,6 +12737,7 @@ rclone serve sftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -11584,6 +12981,42 @@ T}@T{
The UTC timestamp of an entry.
T}
.TE
+.PP
+The server also makes the following functions available so that they can
+be used within the template.
+These functions help extend the options for dynamic rendering of HTML.
+They can be used to render HTML based on specific conditions.
+.PP
+.TS
+tab(@);
+lw(35.0n) lw(35.0n).
+T{
+Function
+T}@T{
+Description
+T}
+_
+T{
+afterEpoch
+T}@T{
+Returns the time since the epoch for the given time.
+T}
+T{
+contains
+T}@T{
+Checks whether a given substring is present or not in a given string.
+T}
+T{
+hasPrefix
+T}@T{
+Checks whether the given string begins with the specified prefix.
+T}
+T{
+hasSuffix
+T}@T{
+Checks whether the given string ends with the specified suffix.
+T}
+.TE
.SS Authentication
.PP
By default this will serve files without needing a login.
@@ -11618,7 +13051,7 @@ Use \f[C]--realm\f[R] to set the authentication realm.
.PP
Use \f[C]--salt\f[R] to change the password hashing salt from the
default.
-.SS VFS - Virtual File System
+.SS VFS - Virtual File System
.PP
This command uses the VFS layer.
This adapts the cloud storage objects that rclone uses into something
@@ -12173,6 +13606,7 @@ rclone serve webdav remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -13198,6 +14632,10 @@ Note that arbitrary metadata may be added to objects using the
\f[C]--metadata-set key=value\f[R] flag when the object is first
uploaded.
This flag can be repeated as many times as necessary.
+.PP
+The \f[C]--metadata-mapper\f[R] flag can be used to pass the name of a
+program which can transform metadata when it is being copied from
+source to destination.
.SS Types of metadata
.PP
Metadata is divided into two types.
@@ -13332,6 +14770,13 @@ T}@T{
2006-01-02T15:04:05.999999999Z07:00
T}
T{
+utime
+T}@T{
+Time of file upload: RFC 3339
+T}@T{
+2006-01-02T15:04:05.999999999Z07:00
+T}
+T{
cache-control
T}@T{
Cache-Control header
@@ -14236,8 +15681,9 @@ ftp
sftp
.PP
Without \f[C]--inplace\f[R] (the default) rclone will first upload to a
-temporary file with an extension like this where \f[C]XXXXXX\f[R]
-represents a random string.
+temporary file with an extension like this, where \f[C]XXXXXX\f[R]
+represents a random string and \f[C].partial\f[R] is the
+\f[C]--partial-suffix\f[R] value (\f[C].partial\f[R] by default).
.IP
.nf
\f[C]
@@ -14480,12 +15926,152 @@ Only applicable for \f[C]--max-transfer\f[R]
Setting this flag enables rclone to copy the metadata from the source to
the destination.
For local backends this is ownership, permissions, xattr etc.
-See the #metadata for more info.
+See the metadata section for more info.
+.SS --metadata-mapper SpaceSepList
+.PP
+If you supply the parameter \f[C]--metadata-mapper /path/to/program\f[R]
+then rclone will use that program to map metadata from source object to
+destination object.
+.PP
+The argument to this flag should be a command with an optional space
+separated list of arguments.
+If one of the arguments has a space in then enclose it in
+\f[C]\[dq]\f[R], if you want a literal \f[C]\[dq]\f[R] in an argument
+then enclose the argument in \f[C]\[dq]\f[R] and double the
+\f[C]\[dq]\f[R].
+See CSV encoding (https://godoc.org/encoding/csv) for more info.
+.IP
+.nf
+\f[C]
+--metadata-mapper \[dq]python bin/test_metadata_mapper.py\[dq]
+--metadata-mapper \[aq]python bin/test_metadata_mapper.py \[dq]argument with a space\[dq]\[aq]
+--metadata-mapper \[aq]python bin/test_metadata_mapper.py \[dq]argument with \[dq]\[dq]two\[dq]\[dq] quotes\[dq]\[aq]
+\f[R]
+.fi
+.PP
+This uses a simple JSON based protocol with input on STDIN and output on
+STDOUT.
+This will be called for every file and directory copied and may be
+called concurrently.
+.PP
+The program\[aq]s job is to take a metadata blob on the input and turn
+it into a metadata blob on the output suitable for the destination
+backend.
+.PP
+Input to the program (via STDIN) might look like this.
+This provides some context for the \f[C]Metadata\f[R] which may be
+important.
+.IP \[bu] 2
+\f[C]SrcFs\f[R] is the config string for the remote that the object is
+currently on.
+.IP \[bu] 2
+\f[C]SrcFsType\f[R] is the name of the source backend.
+.IP \[bu] 2
+\f[C]DstFs\f[R] is the config string for the remote that the object is
+being copied to
+.IP \[bu] 2
+\f[C]DstFsType\f[R] is the name of the destination backend.
+.IP \[bu] 2
+\f[C]Remote\f[R] is the path of the file relative to the root.
+.IP \[bu] 2
+\f[C]Size\f[R], \f[C]MimeType\f[R], \f[C]ModTime\f[R] are attributes of
+the file.
+.IP \[bu] 2
+\f[C]IsDir\f[R] is \f[C]true\f[R] if this is a directory (not yet
+implemented).
+.IP \[bu] 2
+\f[C]ID\f[R] is the source \f[C]ID\f[R] of the file if known.
+.IP \[bu] 2
+\f[C]Metadata\f[R] is the backend specific metadata as described in the
+backend docs.
+.IP
+.nf
+\f[C]
+{
+ \[dq]SrcFs\[dq]: \[dq]gdrive:\[dq],
+ \[dq]SrcFsType\[dq]: \[dq]drive\[dq],
+ \[dq]DstFs\[dq]: \[dq]newdrive:user\[dq],
+ \[dq]DstFsType\[dq]: \[dq]onedrive\[dq],
+ \[dq]Remote\[dq]: \[dq]test.txt\[dq],
+ \[dq]Size\[dq]: 6,
+ \[dq]MimeType\[dq]: \[dq]text/plain; charset=utf-8\[dq],
+ \[dq]ModTime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq],
+ \[dq]IsDir\[dq]: false,
+ \[dq]ID\[dq]: \[dq]xyz\[dq],
+ \[dq]Metadata\[dq]: {
+ \[dq]btime\[dq]: \[dq]2022-10-11T16:53:11Z\[dq],
+ \[dq]content-type\[dq]: \[dq]text/plain; charset=utf-8\[dq],
+ \[dq]mtime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq],
+ \[dq]owner\[dq]: \[dq]user1\[at]domain1.com\[dq],
+ \[dq]permissions\[dq]: \[dq]...\[dq],
+ \[dq]description\[dq]: \[dq]my nice file\[dq],
+ \[dq]starred\[dq]: \[dq]false\[dq]
+ }
+}
+\f[R]
+.fi
+.PP
+The program should then modify the input as desired and send it to
+STDOUT.
+The returned \f[C]Metadata\f[R] field will be used in its entirety for
+the destination object.
+Any other fields will be ignored.
+Note in this example we translate user names and permissions and add
+something to the description:
+.IP
+.nf
+\f[C]
+{
+ \[dq]Metadata\[dq]: {
+ \[dq]btime\[dq]: \[dq]2022-10-11T16:53:11Z\[dq],
+ \[dq]content-type\[dq]: \[dq]text/plain; charset=utf-8\[dq],
+ \[dq]mtime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq],
+ \[dq]owner\[dq]: \[dq]user1\[at]domain2.com\[dq],
+ \[dq]permissions\[dq]: \[dq]...\[dq],
+ \[dq]description\[dq]: \[dq]my nice file [migrated from domain1]\[dq],
+ \[dq]starred\[dq]: \[dq]false\[dq]
+ }
+}
+\f[R]
+.fi
+.PP
+Metadata can be removed here too.
+.PP
+An example python program might look something like this to implement
+the above transformations.
+.IP
+.nf
+\f[C]
+import sys, json
+
+i = json.load(sys.stdin)
+metadata = i[\[dq]Metadata\[dq]]
+# Add tag to description
+if \[dq]description\[dq] in metadata:
+ metadata[\[dq]description\[dq]] += \[dq] [migrated from domain1]\[dq]
+else:
+ metadata[\[dq]description\[dq]] = \[dq][migrated from domain1]\[dq]
+# Modify owner
+if \[dq]owner\[dq] in metadata:
+ metadata[\[dq]owner\[dq]] = metadata[\[dq]owner\[dq]].replace(\[dq]domain1.com\[dq], \[dq]domain2.com\[dq])
+o = { \[dq]Metadata\[dq]: metadata }
+json.dump(o, sys.stdout, indent=\[dq]\[rs]t\[dq])
+\f[R]
+.fi
+.PP
+You can find this example (slightly expanded) in the rclone source code
+at
+bin/test_metadata_mapper.py (https://github.com/rclone/rclone/blob/master/bin/test_metadata_mapper.py).
+.PP
+If you want to see the input to the metadata mapper and the output
+returned from it in the log you can use \f[C]-vv --dump mapper\f[R].
+.PP
+See the metadata section for more info.
.SS --metadata-set key=value
.PP
Add metadata \f[C]key\f[R] = \f[C]value\f[R] when uploading.
This can be repeated as many times as required.
-See the #metadata for more info.
+See the metadata section for more info.
.SS --modify-window=TIME
.PP
When checking whether a file has been modified, this is the maximum
@@ -14730,6 +16316,14 @@ rather than a perfect ordering.
If you want perfect ordering then you will need to specify --check-first
which will find all the files which need transferring first before
transferring any.
+.SS --partial-suffix
+.PP
+When \f[C]--inplace\f[R] is not used, rclone uses the
+\f[C]--partial-suffix\f[R] value as the suffix for temporary files.
+.PP
+The suffix length is limited to 16 characters.
+.PP
+The default is \f[C].partial\f[R].
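+.PP
+For example, with a hypothetical custom suffix (the paths are
+placeholders) temporary files would end in \f[C].tmp\f[R] rather than
+\f[C].partial\f[R]:
+.IP
+.nf
+\f[C]
+rclone copy source:path dest:path --partial-suffix .tmp
+\f[R]
+.fi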
.SS --password-command SpaceSepList
.PP
This flag supplies a program which should supply the config password
@@ -14749,9 +16343,9 @@ Eg
.IP
.nf
\f[C]
---password-command echo hello
---password-command echo \[dq]hello with space\[dq]
---password-command echo \[dq]hello with \[dq]\[dq]quotes\[dq]\[dq] and space\[dq]
+--password-command \[dq]echo hello\[dq]
+--password-command \[aq]echo \[dq]hello with space\[dq]\[aq]
+--password-command \[aq]echo \[dq]hello with \[dq]\[dq]quotes\[dq]\[dq] and space\[dq]\[aq]
\f[R]
.fi
.PP
@@ -15147,39 +16741,62 @@ the message \f[C]not deleting files as there were IO errors\f[R].
.PP
When doing anything which involves a directory listing (e.g.
\f[C]sync\f[R], \f[C]copy\f[R], \f[C]ls\f[R] - in fact nearly every
-command), rclone normally lists a directory and processes it before
+command), rclone has different strategies to choose from.
+.PP
+The basic strategy is to list one directory and process it before
using more directory lists to process any subdirectories.
-This can be parallelised and works very quickly using the least amount
-of memory.
+This is a mandatory backend feature, called \f[C]List\f[R], which means
+it is supported by all backends.
+This strategy uses a small amount of memory, and because it can be
+parallelised it is fast for operations involving processing of the list
+results.
.PP
-However, some remotes have a way of listing all files beneath a
-directory in one (or a small number) of transactions.
-These tend to be the bucket-based remotes (e.g.
+Some backends provide support for an alternative strategy, where all
+files beneath a directory can be listed in one (or a small number) of
+transactions.
+Rclone supports this alternative strategy through an optional backend
+feature called \f[C]ListR\f[R] (https://rclone.org/overview/#listr).
+You can see in the storage system overview documentation\[aq]s optional
+features (https://rclone.org/overview/#optional-features) section which
+backends it is enabled for (these tend to be the bucket-based ones, e.g.
S3, B2, GCS, Swift).
+This strategy requires fewer transactions for highly recursive
+operations, which is important on backends where this is charged or
+heavily rate limited.
+It may be faster (due to fewer transactions) or slower (because it
+can\[aq]t be parallelized) depending on different parameters, and may
+require more memory if rclone has to keep the whole listing in memory.
.PP
-If you use the \f[C]--fast-list\f[R] flag then rclone will use this
-method for listing directories.
-This will have the following consequences for the listing:
-.IP \[bu] 2
-It \f[B]will\f[R] use fewer transactions (important if you pay for them)
-.IP \[bu] 2
-It \f[B]will\f[R] use more memory.
-Rclone has to load the whole listing into memory.
-.IP \[bu] 2
-It \f[I]may\f[R] be faster because it uses fewer transactions
-.IP \[bu] 2
-It \f[I]may\f[R] be slower because it can\[aq]t be parallelized
+Which listing strategy rclone picks for a given operation is
+complicated, but in general it tries to choose the best possible.
+It will prefer \f[C]ListR\f[R] in situations where it doesn\[aq]t need
+to store the listed files in memory, e.g.
+for unlimited recursive \f[C]ls\f[R] command variants.
+In other situations it will prefer \f[C]List\f[R], e.g.
+for \f[C]sync\f[R] and \f[C]copy\f[R], where it needs to keep the listed
+files in memory, and is performing operations on them where
+parallelization may be a huge advantage.
.PP
-rclone should always give identical results with and without
-\f[C]--fast-list\f[R].
+Rclone is not able to take all relevant parameters into account for
+deciding the best strategy, and therefore allows you to influence the
+choice in two ways: You can stop rclone from using \f[C]ListR\f[R] by
+disabling the feature, using the --disable option
+(\f[C]--disable ListR\f[R]), or you can allow rclone to use
+\f[C]ListR\f[R] where it would normally choose not to do so due to
+higher memory usage, using the \f[C]--fast-list\f[R] option.
+Rclone should always produce identical results either way.
+Using \f[C]--disable ListR\f[R] or \f[C]--fast-list\f[R] on a remote
+which doesn\[aq]t support \f[C]ListR\f[R] does nothing, rclone will just
+ignore it.
.PP
-If you pay for transactions and can fit your entire sync listing into
-memory then \f[C]--fast-list\f[R] is recommended.
-If you have a very big sync to do then don\[aq]t use
-\f[C]--fast-list\f[R] otherwise you will run out of memory.
-.PP
-If you use \f[C]--fast-list\f[R] on a remote which doesn\[aq]t support
-it, then rclone will just ignore it.
+A rule of thumb is that if you pay for transactions and can fit your
+entire sync listing into memory, then \f[C]--fast-list\f[R] is
+recommended.
+If you have a very big sync to do, then don\[aq]t use
+\f[C]--fast-list\f[R], otherwise you will run out of memory.
+Run some tests and compare before you decide, and if in doubt then just
+leave the default, let rclone decide, i.e.
+do not use \f[C]--fast-list\f[R].
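+.PP
+For example, a sync from a bucket-based remote where transactions are
+charged might opt in to \f[C]ListR\f[R] explicitly, while the second
+command forces the per-directory \f[C]List\f[R] strategy instead (the
+remote names and paths are placeholders):
+.IP
+.nf
+\f[C]
+rclone sync s3:bucket /local/path --fast-list
+rclone sync s3:bucket /local/path --disable ListR
+\f[R]
+.fi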
.SS --timeout=TIME
.PP
This sets the IO idle timeout.
@@ -15536,6 +17153,11 @@ to standard output.
This dumps a list of the open files at the end of the command.
It uses the \f[C]lsof\f[R] command to do that so you\[aq]ll need that
installed to use it.
+.SS --dump mapper
+.PP
+This shows the JSON blobs being sent to the program supplied with
+\f[C]--metadata-mapper\f[R] and received from it.
+It can be useful for debugging the metadata mapper interface.
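+.PP
+For example, a copy run with a hypothetical mapper script, dumping the
+mapper traffic to the debug log (the script path and remotes are
+placeholders):
+.IP
+.nf
+\f[C]
+rclone copy source:path dest:path --metadata --metadata-mapper ./mapper.py -vv --dump mapper
+\f[R]
+.fi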
.SS --memprofile=FILE
.PP
Write memory profile to file.
@@ -18680,6 +20302,84 @@ See the about (https://rclone.org/commands/rclone_about/) command for
more information on the above.
.PP
\f[B]Authentication is required for this call.\f[R]
+.SS operations/check: check the source and destination are the same
+.PP
+Checks the files in the source and destination match.
+It compares sizes and hashes and logs a report of files that don\[aq]t
+match.
+It doesn\[aq]t alter the source or destination.
+.PP
+This takes the following parameters:
+.IP \[bu] 2
+srcFs - a remote name string e.g.
+\[dq]drive:\[dq] for the source, \[dq]/\[dq] for local filesystem
+.IP \[bu] 2
+dstFs - a remote name string e.g.
+\[dq]drive2:\[dq] for the destination, \[dq]/\[dq] for local filesystem
+.IP \[bu] 2
+download - check by downloading rather than with hash
+.IP \[bu] 2
+checkFileHash - treat checkFileFs:checkFileRemote as a SUM file with
+hashes of given type
+.IP \[bu] 2
+checkFileFs - treat checkFileFs:checkFileRemote as a SUM file with
+hashes of given type
+.IP \[bu] 2
+checkFileRemote - treat checkFileFs:checkFileRemote as a SUM file with
+hashes of given type
+.IP \[bu] 2
+oneWay - check one way only, source files must exist on remote
+.IP \[bu] 2
+combined - make a combined report of changes (default false)
+.IP \[bu] 2
+missingOnSrc - report all files missing from the source (default true)
+.IP \[bu] 2
+missingOnDst - report all files missing from the destination (default
+true)
+.IP \[bu] 2
+match - report all matching files (default false)
+.IP \[bu] 2
+differ - report all non-matching files (default true)
+.IP \[bu] 2
+error - report all files with errors (hashing or reading) (default true)
+.PP
+If you supply the download flag, it will download the data from both
+remotes and check them against each other on the fly.
+This can be useful for remotes that don\[aq]t support hashes or if you
+really want to check all the data.
+.PP
+If you supply the size-only global flag, it will only compare the sizes,
+not the hashes.
+Use this for a quick check.
+.PP
+If you supply the checkFileHash option with a valid hash name, the
+checkFileFs:checkFileRemote must point to a text file in the SUM format.
+This treats the checksum file as the source and dstFs as the
+destination.
+Note that srcFs is not used and should not be supplied in this case.
+.PP
+Returns:
+.IP \[bu] 2
+success - true if no error, false otherwise
+.IP \[bu] 2
+status - textual summary of check, OK or text string
+.IP \[bu] 2
+hashType - hash used in check, may be missing
+.IP \[bu] 2
+combined - array of strings of combined report of changes
+.IP \[bu] 2
+missingOnSrc - array of strings of all files missing from the source
+.IP \[bu] 2
+missingOnDst - array of strings of all files missing from the
+destination
+.IP \[bu] 2
+match - array of strings of all matching files
+.IP \[bu] 2
+differ - array of strings of all non-matching files
+.IP \[bu] 2
+error - array of strings of all files with errors (hashing or reading)
+.PP
+\f[B]Authentication is required for this call.\f[R]
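+.PP
+A hypothetical invocation via the remote control command line (the
+remote names are placeholders):
+.IP
+.nf
+\f[C]
+rclone rc operations/check srcFs=drive: dstFs=drive2:
+\f[R]
+.fi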
.SS operations/cleanup: Remove trashed files in the remote or path
.PP
This takes the following parameters:
@@ -20004,7 +21704,7 @@ T}
T{
Google Drive
T}@T{
-MD5
+MD5, SHA1, SHA256
T}@T{
R/W
T}@T{
@@ -20104,7 +21804,7 @@ No
T}@T{
R
T}@T{
--
+RW
T}
T{
Koofr
@@ -20122,6 +21822,21 @@ T}@T{
-
T}
T{
+Linkbox
+T}@T{
+-
+T}@T{
+R
+T}@T{
+No
+T}@T{
+No
+T}@T{
+-
+T}@T{
+-
+T}
+T{
Mail.ru Cloud
T}@T{
Mailru \[u2076]
@@ -20182,6 +21897,21 @@ T}@T{
-
T}
T{
+Microsoft Azure Files Storage
+T}@T{
+MD5
+T}@T{
+R/W
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+R/W
+T}@T{
+-
+T}
+T{
Microsoft OneDrive
T}@T{
QuickXorHash \[u2075]
@@ -20396,7 +22126,7 @@ SMB
T}@T{
-
T}@T{
--
+R/W
T}@T{
Yes
T}@T{
@@ -20512,7 +22242,6 @@ T}@T{
RWU
T}
.TE
-.SS Notes
.PP
\[S1] Dropbox supports its own custom
hash (https://www.dropbox.com/developers/reference/content-hash).
@@ -20547,7 +22276,7 @@ GiB.
.PP
\[S1]\[u2070] FTP supports modtimes for the major FTP servers, and also
others if they advertised required protocol extensions.
-See this (https://rclone.org/ftp/#modified-time) for more details.
+See this (https://rclone.org/ftp/#modification-times) for more details.
.PP
\[S1]\[S1] Internet Archive requires option \f[C]wait_archive\f[R] to be
set to a non-zero value for full modtime support.
@@ -21573,7 +23302,7 @@ Yes
T}@T{
Yes
T}@T{
-Yes \[dd]\[dd]
+Yes
T}@T{
No
T}@T{
@@ -22013,6 +23742,31 @@ T}@T{
No
T}
T{
+Microsoft Azure Files Storage
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+Yes
+T}
+T{
Microsoft OneDrive
T}@T{
Yes
@@ -22025,7 +23779,7 @@ Yes
T}@T{
Yes
T}@T{
-No
+Yes \[u2075]
T}@T{
No
T}@T{
@@ -22065,7 +23819,7 @@ T}
T{
OpenStack Swift
T}@T{
-Yes \[dg]
+Yes \[S1]
T}@T{
Yes
T}@T{
@@ -22104,7 +23858,7 @@ Yes
T}@T{
Yes
T}@T{
-No
+Yes
T}@T{
No
T}@T{
@@ -22317,7 +24071,7 @@ SFTP
T}@T{
No
T}@T{
-No
+Yes \[u2074]
T}@T{
Yes
T}@T{
@@ -22415,7 +24169,7 @@ T}
T{
Storj
T}@T{
-Yes \[u2628]
+Yes \[S2]
T}@T{
Yes
T}@T{
@@ -22477,7 +24231,7 @@ No
T}@T{
No
T}@T{
-Yes \[dd]
+Yes \[S3]
T}@T{
No
T}@T{
@@ -22563,19 +24317,23 @@ T}@T{
Yes
T}
.TE
+.PP
+\[S1] Note Swift implements this in order to delete directory markers
+but it doesn\[aq]t actually have a quicker way of deleting files other
+than deleting them individually.
+.PP
+\[S2] Storj implements this efficiently only for entire buckets.
+If purging a directory inside a bucket, files are deleted individually.
+.PP
+\[S3] StreamUpload is not supported with Nextcloud
+.PP
+\[u2074] Use the \f[C]--sftp-copy-is-hardlink\f[R] flag to enable.
+.PP
+\[u2075] Use the \f[C]--onedrive-delta\f[R] flag to enable.
.SS Purge
.PP
This deletes a directory quicker than just deleting all the files in the
directory.
-.PP
-\[dg] Note Swift implements this in order to delete directory markers
-but they don\[aq]t actually have a quicker way of deleting files other
-than deleting them individually.
-.PP
-\[u2628] Storj implements this efficiently only for entire buckets.
-If purging a directory inside a bucket, files are deleted individually.
-.PP
-\[dd] StreamUpload is not supported with Nextcloud
.SS Copy
.PP
Used when copying an object to and from the same remote.
@@ -22672,11 +24430,11 @@ Flags for anything which can Copy a file.
-c, --checksum Check for changes with size & checksum (if available, or fallback to size only).
--compare-dest stringArray Include additional comma separated server-side paths during comparison
--copy-dest stringArray Implies --compare-dest but also copies files from paths into destination
- --cutoff-mode string Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default \[dq]HARD\[dq])
+ --cutoff-mode HARD|SOFT|CAUTIOUS Mode to stop transfers when reaching the max transfer limit HARD|SOFT|CAUTIOUS (default HARD)
--ignore-case-sync Ignore case when synchronizing
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum
+ --ignore-size Ignore size when skipping use modtime or checksum
-I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
@@ -22691,11 +24449,12 @@ Flags for anything which can Copy a file.
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don\[aq]t check the destination, copy regardless
--no-traverse Don\[aq]t traverse destination file system on copy
- --no-update-modtime Don\[aq]t update destination mod-time if files identical
+ --no-update-modtime Don\[aq]t update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq]
+ --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq])
--refresh-times Refresh the modtime of remote files
--server-side-across-configs Allow server-side operations (e.g. copy) to work across different configs
- --size-only Skip based on size only, not mod-time or checksum
+ --size-only Skip based on size only, not modtime or checksum
--streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown, upload starts after reaching cutoff or when file ends (default 100Ki)
-u, --update Skip files that are newer on the destination
\f[R]
@@ -22765,7 +24524,7 @@ General networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.64.0\[dq])
+ --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.65.0\[dq])
\f[R]
.fi
.SS Performance
@@ -22788,7 +24547,7 @@ General configuration of rclone.
--ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation
--cache-dir string Directory rclone will use for caching (default \[dq]$HOME/.cache/rclone\[dq])
- --color string When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default \[dq]AUTO\[dq])
+ --color AUTO|NEVER|ALWAYS When to show colors (and other ANSI codes) AUTO|NEVER|ALWAYS (default AUTO)
--config string Config file (default \[dq]$HOME/.config/rclone/rclone.conf\[dq])
--default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--disable string Disable a comma separated list of features (use --disable help to see a list)
@@ -22817,7 +24576,7 @@ Flags for developers.
.nf
\f[C]
--cpuprofile string Write cpu profile to file
- --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump DumpFlags List of items to dump from: headers, bodies, requests, responses, auth, filters, goroutines, openfiles, mapper
--dump-bodies Dump HTTP headers and bodies - may contain sensitive info
--dump-headers Dump HTTP headers - may contain sensitive info
--memprofile string Write memory profile to file
@@ -22871,7 +24630,7 @@ Logging and statistics.
\f[C]
--log-file string Log everything to this file
--log-format string Comma separated list of log format options (default \[dq]date,time\[dq])
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default \[dq]NOTICE\[dq])
+ --log-level LogLevel Log level DEBUG|INFO|NOTICE|ERROR (default NOTICE)
--log-systemd Activate systemd integration for the logger
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
-P, --progress Show progress during transfer
@@ -22879,7 +24638,7 @@ Logging and statistics.
-q, --quiet Print as little stuff as possible
--stats Duration Interval between printing stats, e.g. 500ms, 60s, 5m (0 to disable) (default 1m0s)
--stats-file-name-length int Max file name length in stats (0 for no limit) (default 45)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default \[dq]INFO\[dq])
+ --stats-log-level LogLevel Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default INFO)
--stats-one-line Make the stats fit on one line
--stats-one-line-date Enable --stats-one-line and add current date/time prefix
--stats-one-line-date-format string Enable --stats-one-line-date and use custom formatted date: Enclose date string in double quotes (\[dq]), see https://golang.org/pkg/time/#Time.Format
@@ -22903,6 +24662,7 @@ Flags to control metadata.
--metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
--metadata-include stringArray Include metadatas matching pattern
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+      --metadata-mapper SpaceSepList           Program to run to transform metadata before upload
--metadata-set stringArray Add metadata key=value when uploading
\f[R]
.fi
@@ -22952,13 +24712,13 @@ These can be set in the config file also.
--acd-auth-url string Auth server URL
--acd-client-id string OAuth Client Id
--acd-client-secret string OAuth Client Secret
- --acd-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
--acd-token string OAuth Access Token as a JSON blob
--acd-token-url string Token server url
--acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
--alias-remote string Remote or path to alias
- --azureblob-access-tier string Access tier of blob: hot, cool or archive
+ --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
--azureblob-account string Azure Storage Account Name
--azureblob-archive-tier-delete Delete archive tier blobs before overwriting
--azureblob-chunk-size SizeSuffix Upload chunk size (default 4Mi)
@@ -22969,7 +24729,7 @@ These can be set in the config file also.
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don\[aq]t store MD5 checksum with object metadata
- --azureblob-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
+ --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
--azureblob-key string Storage Account Shared Key
@@ -22989,18 +24749,43 @@ These can be set in the config file also.
--azureblob-use-emulator Uses local storage emulator if provided as \[aq]true\[aq]
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
+ --azurefiles-account string Azure Storage Account Name
+ --azurefiles-chunk-size SizeSuffix Upload chunk size (default 4Mi)
+ --azurefiles-client-certificate-password string Password for the certificate file (optional) (obscured)
+ --azurefiles-client-certificate-path string Path to a PEM or PKCS12 certificate file including the private key
+ --azurefiles-client-id string The ID of the client in use
+ --azurefiles-client-secret string One of the service principal\[aq]s client secrets
+ --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
+ --azurefiles-endpoint string Endpoint for the service
+ --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
+ --azurefiles-key string Storage Account Shared Key
+ --azurefiles-max-stream-size SizeSuffix Max size for streamed files (default 10Gi)
+ --azurefiles-msi-client-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
+ --azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
+ --azurefiles-password string The user\[aq]s password (obscured)
+ --azurefiles-sas-url string SAS URL
+ --azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
+ --azurefiles-share-name string Azure Files Share Name
+ --azurefiles-tenant string ID of the service principal\[aq]s tenant. Also called its directory ID
+ --azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
+ --azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
--b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
- --b2-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
+ --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
- --b2-upload-concurrency int Concurrency for multipart uploads (default 16)
+ --b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
@@ -23011,7 +24796,7 @@ These can be set in the config file also.
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
- --box-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-impersonate string Impersonate this user ID when using a service account
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
--box-owned-by string Only show items owned by the login (email address) passed in
@@ -23069,7 +24854,7 @@ These can be set in the config file also.
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
--drive-disable-http2 Disable drive using http2 (default true)
- --drive-encoding MultiEncoder The encoding for the backend (default InvalidUtf8)
+ --drive-encoding Encoding The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
--drive-export-formats string Comma separated list of preferred formats for downloading Google docs (default \[dq]docx,xlsx,pptx,svg\[dq])
--drive-fast-list-bug-fix Work around a bug in Google Drive listing (default true)
@@ -23078,17 +24863,21 @@ These can be set in the config file also.
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
+ --drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
+ --drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
--drive-pacer-burst int Number of API calls to allow without sleeping (default 100)
--drive-pacer-min-sleep Duration Minimum time to sleep between API calls (default 100ms)
--drive-resource-key string Resource key for accessing a link-shared file
--drive-root-folder-id string ID of the root folder
- --drive-scope string Scope that rclone should use when requesting access from drive
+ --drive-scope string Comma separated list of scopes that rclone should use when requesting access from drive
--drive-server-side-across-configs Deprecated: use --server-side-across-configs instead
--drive-service-account-credentials string Service Account Credentials JSON blob
--drive-service-account-file string Service Account Credentials JSON file path
--drive-shared-with-me Only show files that are shared with me
+ --drive-show-all-gdocs Show all Google Docs including non-exportable ones in listings
--drive-size-as-quota Show sizes as storage quota usage, not actual size
- --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only
+ --drive-skip-checksum-gphotos Skip checksums on Google photos and videos only
--drive-skip-dangling-shortcuts If set skip dangling shortcut files
--drive-skip-gdocs Skip google documents in all listings
--drive-skip-shortcuts If set skip shortcut files
@@ -23112,7 +24901,7 @@ These can be set in the config file also.
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
- --dropbox-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
+ --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--dropbox-shared-files Instructs rclone to work on individual shared files
@@ -23121,11 +24910,11 @@ These can be set in the config file also.
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-cdn Set if you wish to use CDN download links
- --fichier-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
- --filefabric-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
--filefabric-token string Session Token
@@ -23139,7 +24928,7 @@ These can be set in the config file also.
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-disable-utf8 Disable using UTF-8 even if server advertises support
- --ftp-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
+ --ftp-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,RightSpace,Dot)
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
@@ -23161,7 +24950,7 @@ These can be set in the config file also.
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
- --gcs-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
@@ -23174,9 +24963,13 @@ These can be set in the config file also.
--gcs-token-url string Token server url
--gcs-user-project string User project
--gphotos-auth-url string Auth server URL
+ --gphotos-batch-commit-timeout Duration Max time to wait for a batch to finish committing (default 10m0s)
+ --gphotos-batch-mode string Upload file batching sync|async|off (default \[dq]sync\[dq])
+ --gphotos-batch-size int Max number of files in upload batch
+ --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
- --gphotos-encoding MultiEncoder The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
+ --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
@@ -23188,8 +24981,8 @@ These can be set in the config file also.
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
- --hdfs-encoding MultiEncoder The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
- --hdfs-namenode string Hadoop name node and port
+ --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
+ --hdfs-namenode CommaSepList Hadoop name nodes and ports
--hdfs-service-principal-name string Kerberos service principal name for the namenode
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
@@ -23197,7 +24990,7 @@ These can be set in the config file also.
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
- --hidrive-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default \[dq]https://api.hidrive.strato.com/2.1\[dq])
--hidrive-root-prefix string The root/parent folder for all paths (default \[dq]/\[dq])
--hidrive-scope-access string Access permissions that rclone should use when requesting access from HiDrive (default \[dq]rw\[dq])
@@ -23210,9 +25003,16 @@ These can be set in the config file also.
--http-no-head Don\[aq]t use HEAD requests
--http-no-slash Set this if the site doesn\[aq]t end directories with /
--http-url string URL of HTTP host to connect to
+ --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
+ --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true
+ --imagekit-private-key string You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-public-key string You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+ --imagekit-upload-tags string Tags to add to the uploaded files, e.g. \[dq]tag1,tag2\[dq]
+ --imagekit-versions Include old versions in directory listings
--internetarchive-access-key-id string IAS3 Access Key
--internetarchive-disable-checksum Don\[aq]t ask the server to test against MD5 checksum calculated by rclone (default true)
- --internetarchive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
+ --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default \[dq]https://s3.us.archive.org\[dq])
--internetarchive-front-endpoint string Host of InternetArchive Frontend (default \[dq]https://archive.org\[dq])
--internetarchive-secret-access-key string IAS3 Secret Key (password)
@@ -23220,7 +25020,7 @@ These can be set in the config file also.
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
- --jottacloud-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
+ --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
--jottacloud-no-versions Avoid server side versioning by deleting files and recreating files instead of overwriting them
@@ -23228,17 +25028,18 @@ These can be set in the config file also.
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
      --jottacloud-upload-resume-limit SizeSuffix             Files bigger than this can be resumed if the upload fails (default 10Mi)
- --koofr-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
--koofr-password string Your password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) (obscured)
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
+ --linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
- --local-encoding MultiEncoder The encoding for the backend (default Slash,Dot)
+ --local-encoding Encoding The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don\[aq]t check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
--local-no-set-modtime Disable setting modtime
@@ -23250,7 +25051,7 @@ These can be set in the config file also.
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
- --mailru-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
--mailru-speedup-file-patterns string Comma separated list of file name patterns eligible for speedup (put by hash) (default \[dq]*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf\[dq])
@@ -23260,7 +25061,7 @@ These can be set in the config file also.
--mailru-token-url string Token server url
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
- --mega-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
@@ -23276,9 +25077,10 @@ These can be set in the config file also.
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
+ --onedrive-delta If set rclone will use delta listing to implement recursive listings
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
- --onedrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
--onedrive-expose-onenote-files Set to make OneNote files show up in directory listings
--onedrive-hash-type string Specify the hash in use for the backend (default \[dq]auto\[dq])
--onedrive-link-password string Set the password for links created by the link command
@@ -23299,7 +25101,7 @@ These can be set in the config file also.
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s)
--oos-disable-checksum Don\[aq]t store MD5 checksum with object metadata
- --oos-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
--oos-leave-parts-on-error If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery
--oos-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
@@ -23316,13 +25118,13 @@ These can be set in the config file also.
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
- --opendrive-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
+ --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
- --pcloud-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default \[dq]api.pcloud.com\[dq])
--pcloud-password string Your pcloud password (obscured)
--pcloud-root-folder-id string Fill in for rclone to use a non root folder as its starting point (default \[dq]d0\[dq])
@@ -23332,7 +25134,7 @@ These can be set in the config file also.
--pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
- --pikpak-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
@@ -23344,13 +25146,13 @@ These can be set in the config file also.
--premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
- --premiumizeme-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq])
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
- --protondrive-encoding MultiEncoder The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
--protondrive-original-file-size Return the file size before encryption (default true)
--protondrive-password string The password of your proton account (obscured)
@@ -23359,13 +25161,13 @@ These can be set in the config file also.
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
- --putio-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
- --qingstor-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8)
+ --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
      --qingstor-endpoint string                               Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
--qingstor-secret-access-key string QingStor Secret Access Key (password)
@@ -23374,7 +25176,7 @@ These can be set in the config file also.
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
--quatrix-effective-upload-time string Wanted upload time for one chunk (default \[dq]4s\[dq])
- --quatrix-encoding MultiEncoder The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than \[aq]transfers\[aq]*\[aq]minimal_chunk_size\[aq] (default 95.367Mi)
@@ -23389,7 +25191,7 @@ These can be set in the config file also.
--s3-disable-checksum Don\[aq]t store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
--s3-download-url string Custom endpoint for downloads
- --s3-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8,Dot)
+ --s3-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--s3-endpoint string Endpoint for S3 API
--s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars)
--s3-force-path-style If true use path style access if false use virtual hosted style (default true)
@@ -23423,14 +25225,16 @@ These can be set in the config file also.
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
+ --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
+ --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn\[aq]t exist
- --seafile-encoding MultiEncoder The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
+ --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
--seafile-pass string Password (obscured)
@@ -23440,6 +25244,7 @@ These can be set in the config file also.
--sftp-chunk-size SizeSuffix Upload and download chunk size (default 32Ki)
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
+ --sftp-copy-is-hardlink Set to enable server side copies using hardlinks
--sftp-disable-concurrent-reads If set don\[aq]t use concurrent reads
--sftp-disable-concurrent-writes If set don\[aq]t use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@@ -23474,7 +25279,7 @@ These can be set in the config file also.
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
- --sharefile-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
--sharefile-token string OAuth Access Token as a JSON blob
@@ -23482,12 +25287,12 @@ These can be set in the config file also.
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default \[dq]http://127.0.0.1:9980\[dq])
- --sia-encoding MultiEncoder The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
+ --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default \[dq]Sia-Agent\[dq])
--skip-links Don\[aq]t warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
--smb-domain string Domain name for NTLM authentication (default \[dq]WORKGROUP\[dq])
- --smb-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
+ --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren\[aq]t supposed to access (default true)
--smb-host string SMB server hostname to connect to
--smb-idle-timeout Duration Max time before closing idle connections (default 1m0s)
@@ -23505,7 +25310,7 @@ These can be set in the config file also.
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
- --sugarsync-encoding MultiEncoder The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
+ --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
--sugarsync-refresh-token string Sugarsync refresh token
@@ -23519,7 +25324,7 @@ These can be set in the config file also.
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
- --swift-encoding MultiEncoder The encoding for the backend (default Slash,InvalidUtf8)
+ --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default \[dq]public\[dq])
--swift-env-auth Get swift credentials from environment variables in standard OpenStack form
--swift-key string API key or password (OS_PASSWORD)
@@ -23541,7 +25346,7 @@ These can be set in the config file also.
--union-search-policy string Policy to choose upstream on SEARCH category (default \[dq]ff\[dq])
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
- --uptobox-encoding MultiEncoder The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
+ --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
@@ -23556,14 +25361,14 @@ These can be set in the config file also.
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
- --yandex-encoding MultiEncoder The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
- --zoho-encoding MultiEncoder The encoding for the backend (default Del,Ctl,InvalidUtf8)
+ --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
--zoho-token-url string Token server url
@@ -24625,7 +26430,7 @@ during the copy/sync operations that follow, if there ARE diffs.
Unless \f[C]--ignore-listing-checksum\f[R] is passed, bisync currently
computes hashes for one path \f[I]even when there\[aq]s no common hash
with the other path\f[R] (for example, a
-crypt (https://rclone.org/crypt/#modified-time-and-hashes) remote.)
+crypt (https://rclone.org/crypt/#modification-times-and-hashes) remote.)
.IP \[bu] 2
If both paths support checksums and have a common hash, AND
\f[C]--ignore-listing-checksum\f[R] was not specified when creating the
@@ -24892,7 +26697,7 @@ Alternately, a \f[C]--resync\f[R] may be used (Path1 versions will be
pushed to Path2).
Consider the situation carefully and perhaps use \f[C]--dry-run\f[R]
before you commit to the changes.
-.SS Modification time
+.SS Modification times
.PP
Bisync relies on file timestamps to identify changed files and will
\f[I]refuse\f[R] to operate if backend lacks the modification time
@@ -26292,7 +28097,7 @@ To copy a local directory to a 1Fichier directory called backup
rclone copy /home/source remote:backup
\f[R]
.fi
-.SS Modified time and hashes
+.SS Modification times and hashes
.PP
1Fichier does not support modification times.
It supports the Whirlpool hash algorithm.
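+.PP
+As a minimal sketch (assuming the remote is named \f[C]remote:\f[R] as
+in the copy example above, and that your rclone version accepts the
+whirlpool hash type), the stored hashes can be listed with:
+.IP
+.nf
+\f[C]
+# illustrative only - remote name and hash type are assumptions
+rclone hashsum whirlpool remote:backup
+\f[R]
+.fi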
@@ -26490,7 +28295,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_FICHIER_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default:
Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
@@ -26782,13 +28587,13 @@ To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
\f[R]
.fi
-.SS Modified time and MD5SUMs
+.SS Modification times and hashes
.PP
Amazon Drive doesn\[aq]t allow modification times to be changed via the
API so these won\[aq]t be accurate or used for syncing.
.PP
-It does store MD5SUMs so for a more accurate sync, you can use the
-\f[C]--checksum\f[R] flag.
+It does support the MD5 hash algorithm, so for a more accurate sync, you
+can use the \f[C]--checksum\f[R] flag.
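+.PP
+For example, a sync that compares MD5 checksums rather than sizes and
+modification times (an illustrative command reusing the
+\f[C]remote:backup\f[R] path from the copy example above) could be:
+.IP
+.nf
+\f[C]
+# --checksum compares hashes instead of modification times
+rclone sync --checksum /home/source remote:backup
+\f[R]
+.fi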
.SS Restricted filename characters
.PP
.TS
@@ -26999,7 +28804,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_ACD_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default: Slash,InvalidUtf8,Dot
.SS Limitations
@@ -27071,6 +28876,8 @@ Leviia Object Storage
.IP \[bu] 2
Liara Object Storage
.IP \[bu] 2
+Linode Object Storage
+.IP \[bu] 2
Minio
.IP \[bu] 2
Petabox
@@ -27079,6 +28886,8 @@ Qiniu Cloud Object Storage (Kodo)
.IP \[bu] 2
RackCorp Object Storage
.IP \[bu] 2
+Rclone Serve S3
+.IP \[bu] 2
Scaleway
.IP \[bu] 2
Seagate Lyve Cloud
@@ -27349,7 +29158,8 @@ d) Delete this remote
y/e/d>
\f[R]
.fi
-.SS Modified time
+.SS Modification times and hashes
+.SS Modification times
.PP
The modified time is stored as metadata on the object as
\f[C]X-Amz-Meta-Mtime\f[R] as floating point since the epoch, accurate
@@ -27364,6 +29174,35 @@ Deep Archive storage the object will be uploaded rather than copied.
Note that reading this from the object takes an additional
\f[C]HEAD\f[R] request as the metadata isn\[aq]t returned in object
listings.
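+.PP
+For instance, a plain listing that includes modification times will
+typically trigger one such \f[C]HEAD\f[R] per object (the bucket name
+below is a placeholder):
+.IP
+.nf
+\f[C]
+# reading X-Amz-Meta-Mtime for each object costs an extra HEAD request
+rclone lsl remote:bucket
+\f[R]
+.fi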
+.SS Hashes
+.PP
+For small objects which weren\[aq]t uploaded as multipart uploads
+(objects sized below \f[C]--s3-upload-cutoff\f[R] if uploaded with
+rclone) rclone uses the \f[C]ETag:\f[R] header as an MD5 checksum.
+.PP
+However for objects which were uploaded as multipart uploads or with
+server side encryption (SSE-AWS or SSE-C) the \f[C]ETag\f[R] header is
+no longer the MD5 sum of the data, so rclone adds an additional piece of
+metadata \f[C]X-Amz-Meta-Md5chksum\f[R] which is a base64 encoded MD5
+hash (in the same format as is required for \f[C]Content-MD5\f[R]).
+You can use base64 -d and hexdump to check this value manually:
+.IP
+.nf
+\f[C]
+echo \[aq]VWTGdNx3LyXQDfA0e2Edxw==\[aq] | base64 -d | hexdump
+\f[R]
+.fi
+.PP
+or you can use \f[C]rclone check\f[R] to verify the hashes are OK.
+.PP
+For large objects, calculating this hash can take some time so the
+addition of this hash can be disabled with
+\f[C]--s3-disable-checksum\f[R].
+This will mean that these objects do not have an MD5 checksum.
+.PP
+Note that reading this from the object takes an additional
+\f[C]HEAD\f[R] request as the metadata isn\[aq]t returned in object
+listings.
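+.PP
+For example, to verify that a local tree and a bucket agree on sizes
+and hashes (the paths here are placeholders):
+.IP
+.nf
+\f[C]
+# compares sizes and hashes between source and destination
+rclone check /home/source remote:bucket/backup
+\f[R]
+.fi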
.SS Reducing costs
.SS Avoiding HEAD requests to read the modification time
.PP
@@ -27488,35 +29327,6 @@ You can disable this with the --s3-no-head option - see there for more
details.
.PP
Setting this flag increases the chance for undetected upload failures.
-.SS Hashes
-.PP
-For small objects which weren\[aq]t uploaded as multipart uploads
-(objects sized below \f[C]--s3-upload-cutoff\f[R] if uploaded with
-rclone) rclone uses the \f[C]ETag:\f[R] header as an MD5 checksum.
-.PP
-However for objects which were uploaded as multipart uploads or with
-server side encryption (SSE-AWS or SSE-C) the \f[C]ETag\f[R] header is
-no longer the MD5 sum of the data, so rclone adds an additional piece of
-metadata \f[C]X-Amz-Meta-Md5chksum\f[R] which is a base64 encoded MD5
-hash (in the same format as is required for \f[C]Content-MD5\f[R]).
-You can use base64 -d and hexdump to check this value manually:
-.IP
-.nf
-\f[C]
-echo \[aq]VWTGdNx3LyXQDfA0e2Edxw==\[aq] | base64 -d | hexdump
-\f[R]
-.fi
-.PP
-or you can use \f[C]rclone check\f[R] to verify the hashes are OK.
-.PP
-For large objects, calculating this hash can take some time so the
-addition of this hash can be disabled with
-\f[C]--s3-disable-checksum\f[R].
-This will mean that these objects do not have an MD5 checksum.
-.PP
-Note that reading this from the object takes an additional
-\f[C]HEAD\f[R] request as the metadata isn\[aq]t returned in object
-listings.
.SS Versions
.PP
When bucket versioning is enabled (this can be done with rclone with the
@@ -27882,18 +29692,19 @@ If you configure a default retention period on a bucket, requests to
upload objects in such a bucket must include the Content-MD5 header.
.RE
.PP
-As mentioned in the Hashes section, small files that are not uploaded as
-multipart, use a different tag, causing the upload to fail.
+As mentioned in the Modification times and hashes section, small files
+that are not uploaded as multipart use a different tag, causing the
+upload to fail.
A simple solution is to set the \f[C]--s3-upload-cutoff 0\f[R] and force
all the files to be uploaded as multipart.
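+.PP
+For example (an illustrative command - the bucket and paths are
+placeholders):
+.IP
+.nf
+\f[C]
+# force every object to be uploaded as multipart, as suggested above
+rclone copy --s3-upload-cutoff 0 /home/source remote:bucket
+\f[R]
+.fi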
.SS Standard options
.PP
Here are the Standard options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China
-Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS,
-IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease,
-Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology,
-Tencent COS, Qiniu and Wasabi).
+Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
+Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
+IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox,
+RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology,
+TencentCOS, Wasabi, Qiniu and others).
.SS --s3-provider
.PP
Choose your S3 provider.
@@ -28007,6 +29818,12 @@ Leviia Object Storage
Liara Object Storage
.RE
.IP \[bu] 2
+\[dq]Linode\[dq]
+.RS 2
+.IP \[bu] 2
+Linode Object Storage
+.RE
+.IP \[bu] 2
\[dq]Minio\[dq]
.RS 2
.IP \[bu] 2
@@ -28031,6 +29848,12 @@ Petabox Object Storage
RackCorp Object Storage
.RE
.IP \[bu] 2
+\[dq]Rclone\[dq]
+.RS 2
+.IP \[bu] 2
+Rclone S3 Server
+.RE
+.IP \[bu] 2
\[dq]Scaleway\[dq]
.RS 2
.IP \[bu] 2
@@ -28368,567 +30191,6 @@ AWS GovCloud (US) Region.
Needs location constraint us-gov-west-1.
.RE
.RE
-.SS --s3-region
-.PP
-region - the location where your bucket will be created and your data
-stored.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider: RackCorp
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]global\[dq]
-.RS 2
-.IP \[bu] 2
-Global CDN (All locations) Region
-.RE
-.IP \[bu] 2
-\[dq]au\[dq]
-.RS 2
-.IP \[bu] 2
-Australia (All states)
-.RE
-.IP \[bu] 2
-\[dq]au-nsw\[dq]
-.RS 2
-.IP \[bu] 2
-NSW (Australia) Region
-.RE
-.IP \[bu] 2
-\[dq]au-qld\[dq]
-.RS 2
-.IP \[bu] 2
-QLD (Australia) Region
-.RE
-.IP \[bu] 2
-\[dq]au-vic\[dq]
-.RS 2
-.IP \[bu] 2
-VIC (Australia) Region
-.RE
-.IP \[bu] 2
-\[dq]au-wa\[dq]
-.RS 2
-.IP \[bu] 2
-Perth (Australia) Region
-.RE
-.IP \[bu] 2
-\[dq]ph\[dq]
-.RS 2
-.IP \[bu] 2
-Manila (Philippines) Region
-.RE
-.IP \[bu] 2
-\[dq]th\[dq]
-.RS 2
-.IP \[bu] 2
-Bangkok (Thailand) Region
-.RE
-.IP \[bu] 2
-\[dq]hk\[dq]
-.RS 2
-.IP \[bu] 2
-HK (Hong Kong) Region
-.RE
-.IP \[bu] 2
-\[dq]mn\[dq]
-.RS 2
-.IP \[bu] 2
-Ulaanbaatar (Mongolia) Region
-.RE
-.IP \[bu] 2
-\[dq]kg\[dq]
-.RS 2
-.IP \[bu] 2
-Bishkek (Kyrgyzstan) Region
-.RE
-.IP \[bu] 2
-\[dq]id\[dq]
-.RS 2
-.IP \[bu] 2
-Jakarta (Indonesia) Region
-.RE
-.IP \[bu] 2
-\[dq]jp\[dq]
-.RS 2
-.IP \[bu] 2
-Tokyo (Japan) Region
-.RE
-.IP \[bu] 2
-\[dq]sg\[dq]
-.RS 2
-.IP \[bu] 2
-SG (Singapore) Region
-.RE
-.IP \[bu] 2
-\[dq]de\[dq]
-.RS 2
-.IP \[bu] 2
-Frankfurt (Germany) Region
-.RE
-.IP \[bu] 2
-\[dq]us\[dq]
-.RS 2
-.IP \[bu] 2
-USA (AnyCast) Region
-.RE
-.IP \[bu] 2
-\[dq]us-east-1\[dq]
-.RS 2
-.IP \[bu] 2
-New York (USA) Region
-.RE
-.IP \[bu] 2
-\[dq]us-west-1\[dq]
-.RS 2
-.IP \[bu] 2
-Freemont (USA) Region
-.RE
-.IP \[bu] 2
-\[dq]nz\[dq]
-.RS 2
-.IP \[bu] 2
-Auckland (New Zealand) Region
-.RE
-.RE
-.SS --s3-region
-.PP
-Region to connect to.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider: Scaleway
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]nl-ams\[dq]
-.RS 2
-.IP \[bu] 2
-Amsterdam, The Netherlands
-.RE
-.IP \[bu] 2
-\[dq]fr-par\[dq]
-.RS 2
-.IP \[bu] 2
-Paris, France
-.RE
-.IP \[bu] 2
-\[dq]pl-waw\[dq]
-.RS 2
-.IP \[bu] 2
-Warsaw, Poland
-.RE
-.RE
-.SS --s3-region
-.PP
-Region to connect to.
-- the location where your bucket will be created and your data stored.
-Need bo be same with your endpoint.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider: HuaweiOBS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]af-south-1\[dq]
-.RS 2
-.IP \[bu] 2
-AF-Johannesburg
-.RE
-.IP \[bu] 2
-\[dq]ap-southeast-2\[dq]
-.RS 2
-.IP \[bu] 2
-AP-Bangkok
-.RE
-.IP \[bu] 2
-\[dq]ap-southeast-3\[dq]
-.RS 2
-.IP \[bu] 2
-AP-Singapore
-.RE
-.IP \[bu] 2
-\[dq]cn-east-3\[dq]
-.RS 2
-.IP \[bu] 2
-CN East-Shanghai1
-.RE
-.IP \[bu] 2
-\[dq]cn-east-2\[dq]
-.RS 2
-.IP \[bu] 2
-CN East-Shanghai2
-.RE
-.IP \[bu] 2
-\[dq]cn-north-1\[dq]
-.RS 2
-.IP \[bu] 2
-CN North-Beijing1
-.RE
-.IP \[bu] 2
-\[dq]cn-north-4\[dq]
-.RS 2
-.IP \[bu] 2
-CN North-Beijing4
-.RE
-.IP \[bu] 2
-\[dq]cn-south-1\[dq]
-.RS 2
-.IP \[bu] 2
-CN South-Guangzhou
-.RE
-.IP \[bu] 2
-\[dq]ap-southeast-1\[dq]
-.RS 2
-.IP \[bu] 2
-CN-Hong Kong
-.RE
-.IP \[bu] 2
-\[dq]sa-argentina-1\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Buenos Aires1
-.RE
-.IP \[bu] 2
-\[dq]sa-peru-1\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Lima1
-.RE
-.IP \[bu] 2
-\[dq]na-mexico-1\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Mexico City1
-.RE
-.IP \[bu] 2
-\[dq]sa-chile-1\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Santiago2
-.RE
-.IP \[bu] 2
-\[dq]sa-brazil-1\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Sao Paulo1
-.RE
-.IP \[bu] 2
-\[dq]ru-northwest-2\[dq]
-.RS 2
-.IP \[bu] 2
-RU-Moscow2
-.RE
-.RE
-.SS --s3-region
-.PP
-Region to connect to.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider: Cloudflare
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]auto\[dq]
-.RS 2
-.IP \[bu] 2
-R2 buckets are automatically distributed across Cloudflare\[aq]s data
-centers for low latency.
-.RE
-.RE
-.SS --s3-region
-.PP
-Region to connect to.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider: Qiniu
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]cn-east-1\[dq]
-.RS 2
-.IP \[bu] 2
-The default endpoint - a good choice if you are unsure.
-.IP \[bu] 2
-East China Region 1.
-.IP \[bu] 2
-Needs location constraint cn-east-1.
-.RE
-.IP \[bu] 2
-\[dq]cn-east-2\[dq]
-.RS 2
-.IP \[bu] 2
-East China Region 2.
-.IP \[bu] 2
-Needs location constraint cn-east-2.
-.RE
-.IP \[bu] 2
-\[dq]cn-north-1\[dq]
-.RS 2
-.IP \[bu] 2
-North China Region 1.
-.IP \[bu] 2
-Needs location constraint cn-north-1.
-.RE
-.IP \[bu] 2
-\[dq]cn-south-1\[dq]
-.RS 2
-.IP \[bu] 2
-South China Region 1.
-.IP \[bu] 2
-Needs location constraint cn-south-1.
-.RE
-.IP \[bu] 2
-\[dq]us-north-1\[dq]
-.RS 2
-.IP \[bu] 2
-North America Region.
-.IP \[bu] 2
-Needs location constraint us-north-1.
-.RE
-.IP \[bu] 2
-\[dq]ap-southeast-1\[dq]
-.RS 2
-.IP \[bu] 2
-Southeast Asia Region 1.
-.IP \[bu] 2
-Needs location constraint ap-southeast-1.
-.RE
-.IP \[bu] 2
-\[dq]ap-northeast-1\[dq]
-.RS 2
-.IP \[bu] 2
-Northeast Asia Region 1.
-.IP \[bu] 2
-Needs location constraint ap-northeast-1.
-.RE
-.RE
-.SS --s3-region
-.PP
-Region where your bucket will be created and your data stored.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider: IONOS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]de\[dq]
-.RS 2
-.IP \[bu] 2
-Frankfurt, Germany
-.RE
-.IP \[bu] 2
-\[dq]eu-central-2\[dq]
-.RS 2
-.IP \[bu] 2
-Berlin, Germany
-.RE
-.IP \[bu] 2
-\[dq]eu-south-2\[dq]
-.RS 2
-.IP \[bu] 2
-Logrono, Spain
-.RE
-.RE
-.SS --s3-region
-.PP
-Region where your bucket will be created and your data stored.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider: Petabox
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]us-east-1\[dq]
-.RS 2
-.IP \[bu] 2
-US East (N.
-Virginia)
-.RE
-.IP \[bu] 2
-\[dq]eu-central-1\[dq]
-.RS 2
-.IP \[bu] 2
-Europe (Frankfurt)
-.RE
-.IP \[bu] 2
-\[dq]ap-southeast-1\[dq]
-.RS 2
-.IP \[bu] 2
-Asia Pacific (Singapore)
-.RE
-.IP \[bu] 2
-\[dq]me-south-1\[dq]
-.RS 2
-.IP \[bu] 2
-Middle East (Bahrain)
-.RE
-.IP \[bu] 2
-\[dq]sa-east-1\[dq]
-.RS 2
-.IP \[bu] 2
-South America (S\[~a]o Paulo)
-.RE
-.RE
-.SS --s3-region
-.PP
-Region where your data stored.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider: Synology
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]eu-001\[dq]
-.RS 2
-.IP \[bu] 2
-Europe Region 1
-.RE
-.IP \[bu] 2
-\[dq]eu-002\[dq]
-.RS 2
-.IP \[bu] 2
-Europe Region 2
-.RE
-.IP \[bu] 2
-\[dq]us-001\[dq]
-.RS 2
-.IP \[bu] 2
-US Region 1
-.RE
-.IP \[bu] 2
-\[dq]us-002\[dq]
-.RS 2
-.IP \[bu] 2
-US Region 2
-.RE
-.IP \[bu] 2
-\[dq]tw-001\[dq]
-.RS 2
-.IP \[bu] 2
-Asia (Taiwan)
-.RE
-.RE
-.SS --s3-region
-.PP
-Region to connect to.
-.PP
-Leave blank if you are using an S3 clone and you don\[aq]t have a
-region.
-.PP
-Properties:
-.IP \[bu] 2
-Config: region
-.IP \[bu] 2
-Env Var: RCLONE_S3_REGION
-.IP \[bu] 2
-Provider:
-!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,IONOS,Petabox,Liara,Qiniu,RackCorp,Scaleway,Storj,Synology,TencentCOS,HuaweiOBS,IDrive
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]\[dq]
-.RS 2
-.IP \[bu] 2
-Use this if unsure.
-.IP \[bu] 2
-Will use v4 signatures and an empty region.
-.RE
-.IP \[bu] 2
-\[dq]other-v2-signature\[dq]
-.RS 2
-.IP \[bu] 2
-Use this only if v4 signatures don\[aq]t work.
-.IP \[bu] 2
-E.g.
-pre Jewel/v10 CEPH.
-.RE
-.RE
.SS --s3-endpoint
.PP
Endpoint for S3 API.
@@ -28946,1749 +30208,6 @@ Provider: AWS
Type: string
.IP \[bu] 2
Required: false
-.SS --s3-endpoint
-.PP
-Endpoint for China Mobile Ecloud Elastic Object Storage (EOS) API.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: ChinaMobile
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]eos-wuxi-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-The default endpoint - a good choice if you are unsure.
-.IP \[bu] 2
-East China (Suzhou)
-.RE
-.IP \[bu] 2
-\[dq]eos-jinan-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-East China (Jinan)
-.RE
-.IP \[bu] 2
-\[dq]eos-ningbo-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-East China (Hangzhou)
-.RE
-.IP \[bu] 2
-\[dq]eos-shanghai-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-East China (Shanghai-1)
-.RE
-.IP \[bu] 2
-\[dq]eos-zhengzhou-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Central China (Zhengzhou)
-.RE
-.IP \[bu] 2
-\[dq]eos-hunan-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Central China (Changsha-1)
-.RE
-.IP \[bu] 2
-\[dq]eos-zhuzhou-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Central China (Changsha-2)
-.RE
-.IP \[bu] 2
-\[dq]eos-guangzhou-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-South China (Guangzhou-2)
-.RE
-.IP \[bu] 2
-\[dq]eos-dongguan-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-South China (Guangzhou-3)
-.RE
-.IP \[bu] 2
-\[dq]eos-beijing-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-North China (Beijing-1)
-.RE
-.IP \[bu] 2
-\[dq]eos-beijing-2.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-North China (Beijing-2)
-.RE
-.IP \[bu] 2
-\[dq]eos-beijing-4.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-North China (Beijing-3)
-.RE
-.IP \[bu] 2
-\[dq]eos-huhehaote-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-North China (Huhehaote)
-.RE
-.IP \[bu] 2
-\[dq]eos-chengdu-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Southwest China (Chengdu)
-.RE
-.IP \[bu] 2
-\[dq]eos-chongqing-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Southwest China (Chongqing)
-.RE
-.IP \[bu] 2
-\[dq]eos-guiyang-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Southwest China (Guiyang)
-.RE
-.IP \[bu] 2
-\[dq]eos-xian-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Nouthwest China (Xian)
-.RE
-.IP \[bu] 2
-\[dq]eos-yunnan.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Yunnan China (Kunming)
-.RE
-.IP \[bu] 2
-\[dq]eos-yunnan-2.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Yunnan China (Kunming-2)
-.RE
-.IP \[bu] 2
-\[dq]eos-tianjin-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Tianjin China (Tianjin)
-.RE
-.IP \[bu] 2
-\[dq]eos-jilin-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Jilin China (Changchun)
-.RE
-.IP \[bu] 2
-\[dq]eos-hubei-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Hubei China (Xiangyan)
-.RE
-.IP \[bu] 2
-\[dq]eos-jiangxi-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Jiangxi China (Nanchang)
-.RE
-.IP \[bu] 2
-\[dq]eos-gansu-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Gansu China (Lanzhou)
-.RE
-.IP \[bu] 2
-\[dq]eos-shanxi-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Shanxi China (Taiyuan)
-.RE
-.IP \[bu] 2
-\[dq]eos-liaoning-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Liaoning China (Shenyang)
-.RE
-.IP \[bu] 2
-\[dq]eos-hebei-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Hebei China (Shijiazhuang)
-.RE
-.IP \[bu] 2
-\[dq]eos-fujian-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Fujian China (Xiamen)
-.RE
-.IP \[bu] 2
-\[dq]eos-guangxi-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Guangxi China (Nanning)
-.RE
-.IP \[bu] 2
-\[dq]eos-anhui-1.cmecloud.cn\[dq]
-.RS 2
-.IP \[bu] 2
-Anhui China (Huainan)
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Arvan Cloud Object Storage (AOS) API.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: ArvanCloud
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3.ir-thr-at1.arvanstorage.ir\[dq]
-.RS 2
-.IP \[bu] 2
-The default endpoint - a good choice if you are unsure.
-.IP \[bu] 2
-Tehran Iran (Simin)
-.RE
-.IP \[bu] 2
-\[dq]s3.ir-tbz-sh1.arvanstorage.ir\[dq]
-.RS 2
-.IP \[bu] 2
-Tabriz Iran (Shahriar)
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for IBM COS S3 API.
-.PP
-Specify if using an IBM COS On Premise.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: IBMCOS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3.us.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.dal.us.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Dallas Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.wdc.us.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Washington DC Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.sjc.us.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region San Jose Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.us.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.dal.us.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Dallas Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.wdc.us.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Washington DC Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.sjc.us.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region San Jose Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.us-east.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Region East Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.us-east.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Region East Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.us-south.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Region South Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.us-south.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-US Region South Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.eu.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.fra.eu.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Frankfurt Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.mil.eu.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Milan Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.ams.eu.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Amsterdam Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.eu.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.fra.eu.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Frankfurt Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.mil.eu.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Milan Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.ams.eu.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Amsterdam Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.eu-gb.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Great Britain Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.eu-gb.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Great Britain Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.eu-de.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Region DE Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.eu-de.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-EU Region DE Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.ap.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cross Regional Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.tok.ap.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cross Regional Tokyo Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.hkg.ap.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cross Regional HongKong Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.seo.ap.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cross Regional Seoul Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.ap.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cross Regional Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.tok.ap.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cross Regional Tokyo Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.hkg.ap.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cross Regional HongKong Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.seo.ap.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cross Regional Seoul Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.jp-tok.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Region Japan Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.jp-tok.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Region Japan Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.au-syd.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Region Australia Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.au-syd.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Region Australia Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.ams03.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Amsterdam Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.ams03.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Amsterdam Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.che01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Chennai Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.che01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Chennai Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.mel01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Melbourne Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.mel01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Melbourne Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.osl01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Oslo Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.osl01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Oslo Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.tor01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Toronto Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.tor01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Toronto Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.seo01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Seoul Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.seo01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Seoul Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.mon01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Montreal Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.mon01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Montreal Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.mex01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Mexico Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.mex01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Mexico Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.sjc04.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-San Jose Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.sjc04.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-San Jose Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.mil01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Milan Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.mil01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Milan Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.hkg02.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Hong Kong Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.hkg02.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Hong Kong Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.par01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Paris Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.par01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Paris Single Site Private Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.sng01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Singapore Single Site Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.private.sng01.cloud-object-storage.appdomain.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Singapore Single Site Private Endpoint
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for IONOS S3 Object Storage.
-.PP
-Specify the endpoint from the same region.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: IONOS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3-eu-central-1.ionoscloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Frankfurt, Germany
-.RE
-.IP \[bu] 2
-\[dq]s3-eu-central-2.ionoscloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Berlin, Germany
-.RE
-.IP \[bu] 2
-\[dq]s3-eu-south-2.ionoscloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Logrono, Spain
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Petabox S3 Object Storage.
-.PP
-Specify the endpoint from the same region.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: Petabox
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: true
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3.petabox.io\[dq]
-.RS 2
-.IP \[bu] 2
-US East (N.
-Virginia)
-.RE
-.IP \[bu] 2
-\[dq]s3.us-east-1.petabox.io\[dq]
-.RS 2
-.IP \[bu] 2
-US East (N.
-Virginia)
-.RE
-.IP \[bu] 2
-\[dq]s3.eu-central-1.petabox.io\[dq]
-.RS 2
-.IP \[bu] 2
-Europe (Frankfurt)
-.RE
-.IP \[bu] 2
-\[dq]s3.ap-southeast-1.petabox.io\[dq]
-.RS 2
-.IP \[bu] 2
-Asia Pacific (Singapore)
-.RE
-.IP \[bu] 2
-\[dq]s3.me-south-1.petabox.io\[dq]
-.RS 2
-.IP \[bu] 2
-Middle East (Bahrain)
-.RE
-.IP \[bu] 2
-\[dq]s3.sa-east-1.petabox.io\[dq]
-.RS 2
-.IP \[bu] 2
-South America (S\[~a]o Paulo)
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Leviia Object Storage API.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: Leviia
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3.leviia.com\[dq]
-.RS 2
-.IP \[bu] 2
-The default endpoint
-.IP \[bu] 2
-Leviia
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Liara Object Storage API.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: Liara
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]storage.iran.liara.space\[dq]
-.RS 2
-.IP \[bu] 2
-The default endpoint
-.IP \[bu] 2
-Iran
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for OSS API.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: Alibaba
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]oss-accelerate.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Global Accelerate
-.RE
-.IP \[bu] 2
-\[dq]oss-accelerate-overseas.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Global Accelerate (outside mainland China)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-hangzhou.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-East China 1 (Hangzhou)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-shanghai.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-East China 2 (Shanghai)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-qingdao.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-North China 1 (Qingdao)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-beijing.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-North China 2 (Beijing)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-zhangjiakou.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-North China 3 (Zhangjiakou)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-huhehaote.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-North China 5 (Hohhot)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-wulanchabu.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-North China 6 (Ulanqab)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-shenzhen.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-South China 1 (Shenzhen)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-heyuan.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-South China 2 (Heyuan)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-guangzhou.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-South China 3 (Guangzhou)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-chengdu.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-West China 1 (Chengdu)
-.RE
-.IP \[bu] 2
-\[dq]oss-cn-hongkong.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Hong Kong (Hong Kong)
-.RE
-.IP \[bu] 2
-\[dq]oss-us-west-1.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-US West 1 (Silicon Valley)
-.RE
-.IP \[bu] 2
-\[dq]oss-us-east-1.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-US East 1 (Virginia)
-.RE
-.IP \[bu] 2
-\[dq]oss-ap-southeast-1.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Southeast Asia Southeast 1 (Singapore)
-.RE
-.IP \[bu] 2
-\[dq]oss-ap-southeast-2.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Asia Pacific Southeast 2 (Sydney)
-.RE
-.IP \[bu] 2
-\[dq]oss-ap-southeast-3.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Southeast Asia Southeast 3 (Kuala Lumpur)
-.RE
-.IP \[bu] 2
-\[dq]oss-ap-southeast-5.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Asia Pacific Southeast 5 (Jakarta)
-.RE
-.IP \[bu] 2
-\[dq]oss-ap-northeast-1.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Asia Pacific Northeast 1 (Japan)
-.RE
-.IP \[bu] 2
-\[dq]oss-ap-south-1.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Asia Pacific South 1 (Mumbai)
-.RE
-.IP \[bu] 2
-\[dq]oss-eu-central-1.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Central Europe 1 (Frankfurt)
-.RE
-.IP \[bu] 2
-\[dq]oss-eu-west-1.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-West Europe (London)
-.RE
-.IP \[bu] 2
-\[dq]oss-me-east-1.aliyuncs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Middle East 1 (Dubai)
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for OBS API.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: HuaweiOBS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]obs.af-south-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-AF-Johannesburg
-.RE
-.IP \[bu] 2
-\[dq]obs.ap-southeast-2.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-AP-Bangkok
-.RE
-.IP \[bu] 2
-\[dq]obs.ap-southeast-3.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-AP-Singapore
-.RE
-.IP \[bu] 2
-\[dq]obs.cn-east-3.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-CN East-Shanghai1
-.RE
-.IP \[bu] 2
-\[dq]obs.cn-east-2.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-CN East-Shanghai2
-.RE
-.IP \[bu] 2
-\[dq]obs.cn-north-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-CN North-Beijing1
-.RE
-.IP \[bu] 2
-\[dq]obs.cn-north-4.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-CN North-Beijing4
-.RE
-.IP \[bu] 2
-\[dq]obs.cn-south-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-CN South-Guangzhou
-.RE
-.IP \[bu] 2
-\[dq]obs.ap-southeast-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-CN-Hong Kong
-.RE
-.IP \[bu] 2
-\[dq]obs.sa-argentina-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Buenos Aires1
-.RE
-.IP \[bu] 2
-\[dq]obs.sa-peru-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Lima1
-.RE
-.IP \[bu] 2
-\[dq]obs.na-mexico-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Mexico City1
-.RE
-.IP \[bu] 2
-\[dq]obs.sa-chile-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Santiago2
-.RE
-.IP \[bu] 2
-\[dq]obs.sa-brazil-1.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-LA-Sao Paulo1
-.RE
-.IP \[bu] 2
-\[dq]obs.ru-northwest-2.myhuaweicloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-RU-Moscow2
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Scaleway Object Storage.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: Scaleway
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3.nl-ams.scw.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Amsterdam Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.fr-par.scw.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Paris Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.pl-waw.scw.cloud\[dq]
-.RS 2
-.IP \[bu] 2
-Warsaw Endpoint
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for StackPath Object Storage.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: StackPath
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3.us-east-2.stackpathstorage.com\[dq]
-.RS 2
-.IP \[bu] 2
-US East Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.us-west-1.stackpathstorage.com\[dq]
-.RS 2
-.IP \[bu] 2
-US West Endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.eu-central-1.stackpathstorage.com\[dq]
-.RS 2
-.IP \[bu] 2
-EU Endpoint
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Google Cloud Storage.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: GCS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]https://storage.googleapis.com\[dq]
-.RS 2
-.IP \[bu] 2
-Google Cloud Storage endpoint
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Storj Gateway.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: Storj
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]gateway.storjshare.io\[dq]
-.RS 2
-.IP \[bu] 2
-Global Hosted Gateway
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Synology C2 Object Storage API.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: Synology
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]eu-001.s3.synologyc2.net\[dq]
-.RS 2
-.IP \[bu] 2
-EU Endpoint 1
-.RE
-.IP \[bu] 2
-\[dq]eu-002.s3.synologyc2.net\[dq]
-.RS 2
-.IP \[bu] 2
-EU Endpoint 2
-.RE
-.IP \[bu] 2
-\[dq]us-001.s3.synologyc2.net\[dq]
-.RS 2
-.IP \[bu] 2
-US Endpoint 1
-.RE
-.IP \[bu] 2
-\[dq]us-002.s3.synologyc2.net\[dq]
-.RS 2
-.IP \[bu] 2
-US Endpoint 2
-.RE
-.IP \[bu] 2
-\[dq]tw-001.s3.synologyc2.net\[dq]
-.RS 2
-.IP \[bu] 2
-TW Endpoint 1
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Tencent COS API.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: TencentCOS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]cos.ap-beijing.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Beijing Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-nanjing.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Nanjing Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-shanghai.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Shanghai Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-guangzhou.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Guangzhou Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-nanjing.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Nanjing Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-chengdu.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Chengdu Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-chongqing.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Chongqing Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-hongkong.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Hong Kong (China) Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-singapore.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Singapore Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-mumbai.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Mumbai Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-seoul.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Seoul Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-bangkok.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Bangkok Region
-.RE
-.IP \[bu] 2
-\[dq]cos.ap-tokyo.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Tokyo Region
-.RE
-.IP \[bu] 2
-\[dq]cos.na-siliconvalley.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Silicon Valley Region
-.RE
-.IP \[bu] 2
-\[dq]cos.na-ashburn.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Virginia Region
-.RE
-.IP \[bu] 2
-\[dq]cos.na-toronto.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Toronto Region
-.RE
-.IP \[bu] 2
-\[dq]cos.eu-frankfurt.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Frankfurt Region
-.RE
-.IP \[bu] 2
-\[dq]cos.eu-moscow.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Moscow Region
-.RE
-.IP \[bu] 2
-\[dq]cos.accelerate.myqcloud.com\[dq]
-.RS 2
-.IP \[bu] 2
-Use Tencent COS Accelerate Endpoint
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for RackCorp Object Storage.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: RackCorp
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Global (AnyCast) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]au.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Australia (Anycast) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]au-nsw.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Sydney (Australia) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]au-qld.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Brisbane (Australia) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]au-vic.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Melbourne (Australia) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]au-wa.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Perth (Australia) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]ph.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Manila (Philippines) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]th.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Bangkok (Thailand) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]hk.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-HK (Hong Kong) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]mn.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Ulaanbaatar (Mongolia) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]kg.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Bishkek (Kyrgyzstan) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]id.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Jakarta (Indonesia) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]jp.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Tokyo (Japan) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]sg.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-SG (Singapore) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]de.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Frankfurt (Germany) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]us.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-USA (AnyCast) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]us-east-1.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-New York (USA) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]us-west-1.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Freemont (USA) Endpoint
-.RE
-.IP \[bu] 2
-\[dq]nz.s3.rackcorp.com\[dq]
-.RS 2
-.IP \[bu] 2
-Auckland (New Zealand) Endpoint
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for Qiniu Object Storage.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider: Qiniu
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]s3-cn-east-1.qiniucs.com\[dq]
-.RS 2
-.IP \[bu] 2
-East China Endpoint 1
-.RE
-.IP \[bu] 2
-\[dq]s3-cn-east-2.qiniucs.com\[dq]
-.RS 2
-.IP \[bu] 2
-East China Endpoint 2
-.RE
-.IP \[bu] 2
-\[dq]s3-cn-north-1.qiniucs.com\[dq]
-.RS 2
-.IP \[bu] 2
-North China Endpoint 1
-.RE
-.IP \[bu] 2
-\[dq]s3-cn-south-1.qiniucs.com\[dq]
-.RS 2
-.IP \[bu] 2
-South China Endpoint 1
-.RE
-.IP \[bu] 2
-\[dq]s3-us-north-1.qiniucs.com\[dq]
-.RS 2
-.IP \[bu] 2
-North America Endpoint 1
-.RE
-.IP \[bu] 2
-\[dq]s3-ap-southeast-1.qiniucs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Southeast Asia Endpoint 1
-.RE
-.IP \[bu] 2
-\[dq]s3-ap-northeast-1.qiniucs.com\[dq]
-.RS 2
-.IP \[bu] 2
-Northeast Asia Endpoint 1
-.RE
-.RE
-.SS --s3-endpoint
-.PP
-Endpoint for S3 API.
-.PP
-Required when using an S3 clone.
-.PP
-Properties:
-.IP \[bu] 2
-Config: endpoint
-.IP \[bu] 2
-Env Var: RCLONE_S3_ENDPOINT
-.IP \[bu] 2
-Provider:
-!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Scaleway,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]objects-us-east-1.dream.io\[dq]
-.RS 2
-.IP \[bu] 2
-Dream Objects endpoint
-.RE
-.IP \[bu] 2
-\[dq]syd1.digitaloceanspaces.com\[dq]
-.RS 2
-.IP \[bu] 2
-DigitalOcean Spaces Sydney 1
-.RE
-.IP \[bu] 2
-\[dq]sfo3.digitaloceanspaces.com\[dq]
-.RS 2
-.IP \[bu] 2
-DigitalOcean Spaces San Francisco 3
-.RE
-.IP \[bu] 2
-\[dq]fra1.digitaloceanspaces.com\[dq]
-.RS 2
-.IP \[bu] 2
-DigitalOcean Spaces Frankfurt 1
-.RE
-.IP \[bu] 2
-\[dq]nyc3.digitaloceanspaces.com\[dq]
-.RS 2
-.IP \[bu] 2
-DigitalOcean Spaces New York 3
-.RE
-.IP \[bu] 2
-\[dq]ams3.digitaloceanspaces.com\[dq]
-.RS 2
-.IP \[bu] 2
-DigitalOcean Spaces Amsterdam 3
-.RE
-.IP \[bu] 2
-\[dq]sgp1.digitaloceanspaces.com\[dq]
-.RS 2
-.IP \[bu] 2
-DigitalOcean Spaces Singapore 1
-.RE
-.IP \[bu] 2
-\[dq]localhost:8333\[dq]
-.RS 2
-.IP \[bu] 2
-SeaweedFS S3 localhost
-.RE
-.IP \[bu] 2
-\[dq]s3.us-east-1.lyvecloud.seagate.com\[dq]
-.RS 2
-.IP \[bu] 2
-Seagate Lyve Cloud US East 1 (Virginia)
-.RE
-.IP \[bu] 2
-\[dq]s3.us-west-1.lyvecloud.seagate.com\[dq]
-.RS 2
-.IP \[bu] 2
-Seagate Lyve Cloud US West 1 (California)
-.RE
-.IP \[bu] 2
-\[dq]s3.ap-southeast-1.lyvecloud.seagate.com\[dq]
-.RS 2
-.IP \[bu] 2
-Seagate Lyve Cloud AP Southeast 1 (Singapore)
-.RE
-.IP \[bu] 2
-\[dq]s3.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi US East 1 (N.
-Virginia)
-.RE
-.IP \[bu] 2
-\[dq]s3.us-east-2.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi US East 2 (N.
-Virginia)
-.RE
-.IP \[bu] 2
-\[dq]s3.us-central-1.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi US Central 1 (Texas)
-.RE
-.IP \[bu] 2
-\[dq]s3.us-west-1.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi US West 1 (Oregon)
-.RE
-.IP \[bu] 2
-\[dq]s3.ca-central-1.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi CA Central 1 (Toronto)
-.RE
-.IP \[bu] 2
-\[dq]s3.eu-central-1.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi EU Central 1 (Amsterdam)
-.RE
-.IP \[bu] 2
-\[dq]s3.eu-central-2.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi EU Central 2 (Frankfurt)
-.RE
-.IP \[bu] 2
-\[dq]s3.eu-west-1.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi EU West 1 (London)
-.RE
-.IP \[bu] 2
-\[dq]s3.eu-west-2.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi EU West 2 (Paris)
-.RE
-.IP \[bu] 2
-\[dq]s3.ap-northeast-1.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi AP Northeast 1 (Tokyo) endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.ap-northeast-2.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi AP Northeast 2 (Osaka) endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.ap-southeast-1.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi AP Southeast 1 (Singapore)
-.RE
-.IP \[bu] 2
-\[dq]s3.ap-southeast-2.wasabisys.com\[dq]
-.RS 2
-.IP \[bu] 2
-Wasabi AP Southeast 2 (Sydney)
-.RE
-.IP \[bu] 2
-\[dq]storage.iran.liara.space\[dq]
-.RS 2
-.IP \[bu] 2
-Liara Iran endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.ir-thr-at1.arvanstorage.ir\[dq]
-.RS 2
-.IP \[bu] 2
-ArvanCloud Tehran Iran (Simin) endpoint
-.RE
-.IP \[bu] 2
-\[dq]s3.ir-tbz-sh1.arvanstorage.ir\[dq]
-.RS 2
-.IP \[bu] 2
-ArvanCloud Tabriz Iran (Shahriar) endpoint
-.RE
-.RE
.SS --s3-location-constraint
.PP
Location constraint - must be set to match the Region.
@@ -30860,669 +30379,6 @@ AWS GovCloud (US-East) Region
AWS GovCloud (US) Region
.RE
.RE
-.SS --s3-location-constraint
-.PP
-Location constraint - must match endpoint.
-.PP
-Used when creating buckets only.
-.PP
-Properties:
-.IP \[bu] 2
-Config: location_constraint
-.IP \[bu] 2
-Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-.IP \[bu] 2
-Provider: ChinaMobile
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]wuxi1\[dq]
-.RS 2
-.IP \[bu] 2
-East China (Suzhou)
-.RE
-.IP \[bu] 2
-\[dq]jinan1\[dq]
-.RS 2
-.IP \[bu] 2
-East China (Jinan)
-.RE
-.IP \[bu] 2
-\[dq]ningbo1\[dq]
-.RS 2
-.IP \[bu] 2
-East China (Hangzhou)
-.RE
-.IP \[bu] 2
-\[dq]shanghai1\[dq]
-.RS 2
-.IP \[bu] 2
-East China (Shanghai-1)
-.RE
-.IP \[bu] 2
-\[dq]zhengzhou1\[dq]
-.RS 2
-.IP \[bu] 2
-Central China (Zhengzhou)
-.RE
-.IP \[bu] 2
-\[dq]hunan1\[dq]
-.RS 2
-.IP \[bu] 2
-Central China (Changsha-1)
-.RE
-.IP \[bu] 2
-\[dq]zhuzhou1\[dq]
-.RS 2
-.IP \[bu] 2
-Central China (Changsha-2)
-.RE
-.IP \[bu] 2
-\[dq]guangzhou1\[dq]
-.RS 2
-.IP \[bu] 2
-South China (Guangzhou-2)
-.RE
-.IP \[bu] 2
-\[dq]dongguan1\[dq]
-.RS 2
-.IP \[bu] 2
-South China (Guangzhou-3)
-.RE
-.IP \[bu] 2
-\[dq]beijing1\[dq]
-.RS 2
-.IP \[bu] 2
-North China (Beijing-1)
-.RE
-.IP \[bu] 2
-\[dq]beijing2\[dq]
-.RS 2
-.IP \[bu] 2
-North China (Beijing-2)
-.RE
-.IP \[bu] 2
-\[dq]beijing4\[dq]
-.RS 2
-.IP \[bu] 2
-North China (Beijing-3)
-.RE
-.IP \[bu] 2
-\[dq]huhehaote1\[dq]
-.RS 2
-.IP \[bu] 2
-North China (Huhehaote)
-.RE
-.IP \[bu] 2
-\[dq]chengdu1\[dq]
-.RS 2
-.IP \[bu] 2
-Southwest China (Chengdu)
-.RE
-.IP \[bu] 2
-\[dq]chongqing1\[dq]
-.RS 2
-.IP \[bu] 2
-Southwest China (Chongqing)
-.RE
-.IP \[bu] 2
-\[dq]guiyang1\[dq]
-.RS 2
-.IP \[bu] 2
-Southwest China (Guiyang)
-.RE
-.IP \[bu] 2
-\[dq]xian1\[dq]
-.RS 2
-.IP \[bu] 2
-Nouthwest China (Xian)
-.RE
-.IP \[bu] 2
-\[dq]yunnan\[dq]
-.RS 2
-.IP \[bu] 2
-Yunnan China (Kunming)
-.RE
-.IP \[bu] 2
-\[dq]yunnan2\[dq]
-.RS 2
-.IP \[bu] 2
-Yunnan China (Kunming-2)
-.RE
-.IP \[bu] 2
-\[dq]tianjin1\[dq]
-.RS 2
-.IP \[bu] 2
-Tianjin China (Tianjin)
-.RE
-.IP \[bu] 2
-\[dq]jilin1\[dq]
-.RS 2
-.IP \[bu] 2
-Jilin China (Changchun)
-.RE
-.IP \[bu] 2
-\[dq]hubei1\[dq]
-.RS 2
-.IP \[bu] 2
-Hubei China (Xiangyan)
-.RE
-.IP \[bu] 2
-\[dq]jiangxi1\[dq]
-.RS 2
-.IP \[bu] 2
-Jiangxi China (Nanchang)
-.RE
-.IP \[bu] 2
-\[dq]gansu1\[dq]
-.RS 2
-.IP \[bu] 2
-Gansu China (Lanzhou)
-.RE
-.IP \[bu] 2
-\[dq]shanxi1\[dq]
-.RS 2
-.IP \[bu] 2
-Shanxi China (Taiyuan)
-.RE
-.IP \[bu] 2
-\[dq]liaoning1\[dq]
-.RS 2
-.IP \[bu] 2
-Liaoning China (Shenyang)
-.RE
-.IP \[bu] 2
-\[dq]hebei1\[dq]
-.RS 2
-.IP \[bu] 2
-Hebei China (Shijiazhuang)
-.RE
-.IP \[bu] 2
-\[dq]fujian1\[dq]
-.RS 2
-.IP \[bu] 2
-Fujian China (Xiamen)
-.RE
-.IP \[bu] 2
-\[dq]guangxi1\[dq]
-.RS 2
-.IP \[bu] 2
-Guangxi China (Nanning)
-.RE
-.IP \[bu] 2
-\[dq]anhui1\[dq]
-.RS 2
-.IP \[bu] 2
-Anhui China (Huainan)
-.RE
-.RE
-.SS --s3-location-constraint
-.PP
-Location constraint - must match endpoint.
-.PP
-Used when creating buckets only.
-.PP
-Properties:
-.IP \[bu] 2
-Config: location_constraint
-.IP \[bu] 2
-Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-.IP \[bu] 2
-Provider: ArvanCloud
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]ir-thr-at1\[dq]
-.RS 2
-.IP \[bu] 2
-Tehran Iran (Simin)
-.RE
-.IP \[bu] 2
-\[dq]ir-tbz-sh1\[dq]
-.RS 2
-.IP \[bu] 2
-Tabriz Iran (Shahriar)
-.RE
-.RE
-.SS --s3-location-constraint
-.PP
-Location constraint - must match endpoint when using IBM Cloud Public.
-.PP
-For on-prem COS, do not make a selection from this list, hit enter.
-.PP
-Properties:
-.IP \[bu] 2
-Config: location_constraint
-.IP \[bu] 2
-Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-.IP \[bu] 2
-Provider: IBMCOS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]us-standard\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Standard
-.RE
-.IP \[bu] 2
-\[dq]us-vault\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Vault
-.RE
-.IP \[bu] 2
-\[dq]us-cold\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Cold
-.RE
-.IP \[bu] 2
-\[dq]us-flex\[dq]
-.RS 2
-.IP \[bu] 2
-US Cross Region Flex
-.RE
-.IP \[bu] 2
-\[dq]us-east-standard\[dq]
-.RS 2
-.IP \[bu] 2
-US East Region Standard
-.RE
-.IP \[bu] 2
-\[dq]us-east-vault\[dq]
-.RS 2
-.IP \[bu] 2
-US East Region Vault
-.RE
-.IP \[bu] 2
-\[dq]us-east-cold\[dq]
-.RS 2
-.IP \[bu] 2
-US East Region Cold
-.RE
-.IP \[bu] 2
-\[dq]us-east-flex\[dq]
-.RS 2
-.IP \[bu] 2
-US East Region Flex
-.RE
-.IP \[bu] 2
-\[dq]us-south-standard\[dq]
-.RS 2
-.IP \[bu] 2
-US South Region Standard
-.RE
-.IP \[bu] 2
-\[dq]us-south-vault\[dq]
-.RS 2
-.IP \[bu] 2
-US South Region Vault
-.RE
-.IP \[bu] 2
-\[dq]us-south-cold\[dq]
-.RS 2
-.IP \[bu] 2
-US South Region Cold
-.RE
-.IP \[bu] 2
-\[dq]us-south-flex\[dq]
-.RS 2
-.IP \[bu] 2
-US South Region Flex
-.RE
-.IP \[bu] 2
-\[dq]eu-standard\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Standard
-.RE
-.IP \[bu] 2
-\[dq]eu-vault\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Vault
-.RE
-.IP \[bu] 2
-\[dq]eu-cold\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Cold
-.RE
-.IP \[bu] 2
-\[dq]eu-flex\[dq]
-.RS 2
-.IP \[bu] 2
-EU Cross Region Flex
-.RE
-.IP \[bu] 2
-\[dq]eu-gb-standard\[dq]
-.RS 2
-.IP \[bu] 2
-Great Britain Standard
-.RE
-.IP \[bu] 2
-\[dq]eu-gb-vault\[dq]
-.RS 2
-.IP \[bu] 2
-Great Britain Vault
-.RE
-.IP \[bu] 2
-\[dq]eu-gb-cold\[dq]
-.RS 2
-.IP \[bu] 2
-Great Britain Cold
-.RE
-.IP \[bu] 2
-\[dq]eu-gb-flex\[dq]
-.RS 2
-.IP \[bu] 2
-Great Britain Flex
-.RE
-.IP \[bu] 2
-\[dq]ap-standard\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Standard
-.RE
-.IP \[bu] 2
-\[dq]ap-vault\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Vault
-.RE
-.IP \[bu] 2
-\[dq]ap-cold\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Cold
-.RE
-.IP \[bu] 2
-\[dq]ap-flex\[dq]
-.RS 2
-.IP \[bu] 2
-APAC Flex
-.RE
-.IP \[bu] 2
-\[dq]mel01-standard\[dq]
-.RS 2
-.IP \[bu] 2
-Melbourne Standard
-.RE
-.IP \[bu] 2
-\[dq]mel01-vault\[dq]
-.RS 2
-.IP \[bu] 2
-Melbourne Vault
-.RE
-.IP \[bu] 2
-\[dq]mel01-cold\[dq]
-.RS 2
-.IP \[bu] 2
-Melbourne Cold
-.RE
-.IP \[bu] 2
-\[dq]mel01-flex\[dq]
-.RS 2
-.IP \[bu] 2
-Melbourne Flex
-.RE
-.IP \[bu] 2
-\[dq]tor01-standard\[dq]
-.RS 2
-.IP \[bu] 2
-Toronto Standard
-.RE
-.IP \[bu] 2
-\[dq]tor01-vault\[dq]
-.RS 2
-.IP \[bu] 2
-Toronto Vault
-.RE
-.IP \[bu] 2
-\[dq]tor01-cold\[dq]
-.RS 2
-.IP \[bu] 2
-Toronto Cold
-.RE
-.IP \[bu] 2
-\[dq]tor01-flex\[dq]
-.RS 2
-.IP \[bu] 2
-Toronto Flex
-.RE
-.RE
-.SS --s3-location-constraint
-.PP
-Location constraint - the location where your bucket will be located and
-your data stored.
-.PP
-Properties:
-.IP \[bu] 2
-Config: location_constraint
-.IP \[bu] 2
-Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-.IP \[bu] 2
-Provider: RackCorp
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]global\[dq]
-.RS 2
-.IP \[bu] 2
-Global CDN Region
-.RE
-.IP \[bu] 2
-\[dq]au\[dq]
-.RS 2
-.IP \[bu] 2
-Australia (All locations)
-.RE
-.IP \[bu] 2
-\[dq]au-nsw\[dq]
-.RS 2
-.IP \[bu] 2
-NSW (Australia) Region
-.RE
-.IP \[bu] 2
-\[dq]au-qld\[dq]
-.RS 2
-.IP \[bu] 2
-QLD (Australia) Region
-.RE
-.IP \[bu] 2
-\[dq]au-vic\[dq]
-.RS 2
-.IP \[bu] 2
-VIC (Australia) Region
-.RE
-.IP \[bu] 2
-\[dq]au-wa\[dq]
-.RS 2
-.IP \[bu] 2
-Perth (Australia) Region
-.RE
-.IP \[bu] 2
-\[dq]ph\[dq]
-.RS 2
-.IP \[bu] 2
-Manila (Philippines) Region
-.RE
-.IP \[bu] 2
-\[dq]th\[dq]
-.RS 2
-.IP \[bu] 2
-Bangkok (Thailand) Region
-.RE
-.IP \[bu] 2
-\[dq]hk\[dq]
-.RS 2
-.IP \[bu] 2
-HK (Hong Kong) Region
-.RE
-.IP \[bu] 2
-\[dq]mn\[dq]
-.RS 2
-.IP \[bu] 2
-Ulaanbaatar (Mongolia) Region
-.RE
-.IP \[bu] 2
-\[dq]kg\[dq]
-.RS 2
-.IP \[bu] 2
-Bishkek (Kyrgyzstan) Region
-.RE
-.IP \[bu] 2
-\[dq]id\[dq]
-.RS 2
-.IP \[bu] 2
-Jakarta (Indonesia) Region
-.RE
-.IP \[bu] 2
-\[dq]jp\[dq]
-.RS 2
-.IP \[bu] 2
-Tokyo (Japan) Region
-.RE
-.IP \[bu] 2
-\[dq]sg\[dq]
-.RS 2
-.IP \[bu] 2
-SG (Singapore) Region
-.RE
-.IP \[bu] 2
-\[dq]de\[dq]
-.RS 2
-.IP \[bu] 2
-Frankfurt (Germany) Region
-.RE
-.IP \[bu] 2
-\[dq]us\[dq]
-.RS 2
-.IP \[bu] 2
-USA (AnyCast) Region
-.RE
-.IP \[bu] 2
-\[dq]us-east-1\[dq]
-.RS 2
-.IP \[bu] 2
-New York (USA) Region
-.RE
-.IP \[bu] 2
-\[dq]us-west-1\[dq]
-.RS 2
-.IP \[bu] 2
-Freemont (USA) Region
-.RE
-.IP \[bu] 2
-\[dq]nz\[dq]
-.RS 2
-.IP \[bu] 2
-Auckland (New Zealand) Region
-.RE
-.RE
-.SS --s3-location-constraint
-.PP
-Location constraint - must be set to match the Region.
-.PP
-Used when creating buckets only.
-.PP
-Properties:
-.IP \[bu] 2
-Config: location_constraint
-.IP \[bu] 2
-Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-.IP \[bu] 2
-Provider: Qiniu
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]cn-east-1\[dq]
-.RS 2
-.IP \[bu] 2
-East China Region 1
-.RE
-.IP \[bu] 2
-\[dq]cn-east-2\[dq]
-.RS 2
-.IP \[bu] 2
-East China Region 2
-.RE
-.IP \[bu] 2
-\[dq]cn-north-1\[dq]
-.RS 2
-.IP \[bu] 2
-North China Region 1
-.RE
-.IP \[bu] 2
-\[dq]cn-south-1\[dq]
-.RS 2
-.IP \[bu] 2
-South China Region 1
-.RE
-.IP \[bu] 2
-\[dq]us-north-1\[dq]
-.RS 2
-.IP \[bu] 2
-North America Region 1
-.RE
-.IP \[bu] 2
-\[dq]ap-southeast-1\[dq]
-.RS 2
-.IP \[bu] 2
-Southeast Asia Region 1
-.RE
-.IP \[bu] 2
-\[dq]ap-northeast-1\[dq]
-.RS 2
-.IP \[bu] 2
-Northeast Asia Region 1
-.RE
-.RE
-.SS --s3-location-constraint
-.PP
-Location constraint - must be set to match the Region.
-.PP
-Leave blank if not sure.
-Used when creating buckets only.
-.PP
-Properties:
-.IP \[bu] 2
-Config: location_constraint
-.IP \[bu] 2
-Env Var: RCLONE_S3_LOCATION_CONSTRAINT
-.IP \[bu] 2
-Provider:
-!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,IBMCOS,IDrive,IONOS,Leviia,Liara,Qiniu,RackCorp,Scaleway,StackPath,Storj,TencentCOS,Petabox
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
.SS --s3-acl
.PP
Canned ACL used when creating buckets and storing or copying objects.
@@ -31803,292 +30659,14 @@ Intelligent-Tiering storage class
Glacier Instant Retrieval storage class
.RE
.RE
-.SS --s3-storage-class
-.PP
-The storage class to use when storing new objects in OSS.
-.PP
-Properties:
-.IP \[bu] 2
-Config: storage_class
-.IP \[bu] 2
-Env Var: RCLONE_S3_STORAGE_CLASS
-.IP \[bu] 2
-Provider: Alibaba
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]\[dq]
-.RS 2
-.IP \[bu] 2
-Default
-.RE
-.IP \[bu] 2
-\[dq]STANDARD\[dq]
-.RS 2
-.IP \[bu] 2
-Standard storage class
-.RE
-.IP \[bu] 2
-\[dq]GLACIER\[dq]
-.RS 2
-.IP \[bu] 2
-Archive storage mode
-.RE
-.IP \[bu] 2
-\[dq]STANDARD_IA\[dq]
-.RS 2
-.IP \[bu] 2
-Infrequent access storage mode
-.RE
-.RE
-.SS --s3-storage-class
-.PP
-The storage class to use when storing new objects in ChinaMobile.
-.PP
-Properties:
-.IP \[bu] 2
-Config: storage_class
-.IP \[bu] 2
-Env Var: RCLONE_S3_STORAGE_CLASS
-.IP \[bu] 2
-Provider: ChinaMobile
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]\[dq]
-.RS 2
-.IP \[bu] 2
-Default
-.RE
-.IP \[bu] 2
-\[dq]STANDARD\[dq]
-.RS 2
-.IP \[bu] 2
-Standard storage class
-.RE
-.IP \[bu] 2
-\[dq]GLACIER\[dq]
-.RS 2
-.IP \[bu] 2
-Archive storage mode
-.RE
-.IP \[bu] 2
-\[dq]STANDARD_IA\[dq]
-.RS 2
-.IP \[bu] 2
-Infrequent access storage mode
-.RE
-.RE
-.SS --s3-storage-class
-.PP
-The storage class to use when storing new objects in Liara
-.PP
-Properties:
-.IP \[bu] 2
-Config: storage_class
-.IP \[bu] 2
-Env Var: RCLONE_S3_STORAGE_CLASS
-.IP \[bu] 2
-Provider: Liara
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]STANDARD\[dq]
-.RS 2
-.IP \[bu] 2
-Standard storage class
-.RE
-.RE
-.SS --s3-storage-class
-.PP
-The storage class to use when storing new objects in ArvanCloud.
-.PP
-Properties:
-.IP \[bu] 2
-Config: storage_class
-.IP \[bu] 2
-Env Var: RCLONE_S3_STORAGE_CLASS
-.IP \[bu] 2
-Provider: ArvanCloud
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]STANDARD\[dq]
-.RS 2
-.IP \[bu] 2
-Standard storage class
-.RE
-.RE
-.SS --s3-storage-class
-.PP
-The storage class to use when storing new objects in Tencent COS.
-.PP
-Properties:
-.IP \[bu] 2
-Config: storage_class
-.IP \[bu] 2
-Env Var: RCLONE_S3_STORAGE_CLASS
-.IP \[bu] 2
-Provider: TencentCOS
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]\[dq]
-.RS 2
-.IP \[bu] 2
-Default
-.RE
-.IP \[bu] 2
-\[dq]STANDARD\[dq]
-.RS 2
-.IP \[bu] 2
-Standard storage class
-.RE
-.IP \[bu] 2
-\[dq]ARCHIVE\[dq]
-.RS 2
-.IP \[bu] 2
-Archive storage mode
-.RE
-.IP \[bu] 2
-\[dq]STANDARD_IA\[dq]
-.RS 2
-.IP \[bu] 2
-Infrequent access storage mode
-.RE
-.RE
-.SS --s3-storage-class
-.PP
-The storage class to use when storing new objects in S3.
-.PP
-Properties:
-.IP \[bu] 2
-Config: storage_class
-.IP \[bu] 2
-Env Var: RCLONE_S3_STORAGE_CLASS
-.IP \[bu] 2
-Provider: Scaleway
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]\[dq]
-.RS 2
-.IP \[bu] 2
-Default.
-.RE
-.IP \[bu] 2
-\[dq]STANDARD\[dq]
-.RS 2
-.IP \[bu] 2
-The Standard class for any upload.
-.IP \[bu] 2
-Suitable for on-demand content like streaming or CDN.
-.IP \[bu] 2
-Available in all regions.
-.RE
-.IP \[bu] 2
-\[dq]GLACIER\[dq]
-.RS 2
-.IP \[bu] 2
-Archived storage.
-.IP \[bu] 2
-Prices are lower, but it needs to be restored first to be accessed.
-.IP \[bu] 2
-Available in FR-PAR and NL-AMS regions.
-.RE
-.IP \[bu] 2
-\[dq]ONEZONE_IA\[dq]
-.RS 2
-.IP \[bu] 2
-One Zone - Infrequent Access.
-.IP \[bu] 2
-A good choice for storing secondary backup copies or easily re-creatable
-data.
-.IP \[bu] 2
-Available in the FR-PAR region only.
-.RE
-.RE
-.SS --s3-storage-class
-.PP
-The storage class to use when storing new objects in Qiniu.
-.PP
-Properties:
-.IP \[bu] 2
-Config: storage_class
-.IP \[bu] 2
-Env Var: RCLONE_S3_STORAGE_CLASS
-.IP \[bu] 2
-Provider: Qiniu
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]STANDARD\[dq]
-.RS 2
-.IP \[bu] 2
-Standard storage class
-.RE
-.IP \[bu] 2
-\[dq]LINE\[dq]
-.RS 2
-.IP \[bu] 2
-Infrequent access storage mode
-.RE
-.IP \[bu] 2
-\[dq]GLACIER\[dq]
-.RS 2
-.IP \[bu] 2
-Archive storage mode
-.RE
-.IP \[bu] 2
-\[dq]DEEP_ARCHIVE\[dq]
-.RS 2
-.IP \[bu] 2
-Deep archive storage mode
-.RE
-.RE
.SS Advanced options
.PP
Here are the Advanced options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, China
-Mobile, Cloudflare, GCS, DigitalOcean, Dreamhost, Huawei OBS, IBM COS,
-IDrive e2, IONOS Cloud, Leviia, Liara, Lyve Cloud, Minio, Netease,
-Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology,
-Tencent COS, Qiniu and Wasabi).
+Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
+Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive,
+IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox,
+RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology,
+TencentCOS, Wasabi, Qiniu and others).
.SS --s3-bucket-acl
.PP
Canned ACL used when creating buckets.
@@ -32712,7 +31290,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_S3_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default: Slash,InvalidUtf8,Dot
.SS --s3-memory-pool-flush-time
@@ -32987,6 +31565,61 @@ Provider: AWS
Type: string
.IP \[bu] 2
Required: false
+.SS --s3-use-already-exists
+.PP
+Set if rclone should report BucketAlreadyExists errors on bucket
+creation.
+.PP
+At some point during the evolution of the s3 protocol, AWS started
+returning an \f[C]AlreadyOwnedByYou\f[R] error when attempting to create
+a bucket that the user already owned, rather than a
+\f[C]BucketAlreadyExists\f[R] error.
+.PP
+Unfortunately exactly what has been implemented by s3 clones is a little
+inconsistent: some return \f[C]AlreadyOwnedByYou\f[R], some return
+\f[C]BucketAlreadyExists\f[R] and some return no error at all.
+.PP
+This is important to rclone because, for quite a lot of operations, it
+makes sure the bucket exists by creating it if necessary (unless
+\f[C]--s3-no-check-bucket\f[R] is used).
+.PP
+If rclone knows the provider can return \f[C]AlreadyOwnedByYou\f[R] or
+returns no error then it can report \f[C]BucketAlreadyExists\f[R] errors
+when the user attempts to create a bucket not owned by them.
+Otherwise rclone ignores the \f[C]BucketAlreadyExists\f[R] error which
+can lead to confusion.
+.PP
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_already_exists
+.IP \[bu] 2
+Env Var: RCLONE_S3_USE_ALREADY_EXISTS
+.IP \[bu] 2
+Type: Tristate
+.IP \[bu] 2
+Default: unset
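+.PP
+As a hedged illustration (the remote and bucket names are made up), the
+detection could be overridden for a provider that misbehaves by putting
+\f[C]use_already_exists = false\f[R] in the remote\[aq]s config section,
+or on the command line:
+.IP
+.nf
+\f[C]
+rclone mkdir --s3-use-already-exists=false remote:bucket
+\f[R]
+.fi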
+.SS --s3-use-multipart-uploads
+.PP
+Set if rclone should use multipart uploads.
+.PP
+You can change this if you want to disable the use of multipart uploads.
+This shouldn\[aq]t be necessary in normal operation.
+.PP
+This should be automatically set correctly for all providers rclone
+knows about - please make a bug report if not.
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_multipart_uploads
+.IP \[bu] 2
+Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
+.IP \[bu] 2
+Type: Tristate
+.IP \[bu] 2
+Default: unset
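+.PP
+As a hedged sketch (the paths and remote name are illustrative),
+multipart uploads could be turned off for a single transfer like this:
+.IP
+.nf
+\f[C]
+rclone copy --s3-use-multipart-uploads=false /path/to/files remote:bucket
+\f[R]
+.fi
+.PP
+or persistently by putting \f[C]use_multipart_uploads = false\f[R] in
+the remote\[aq]s config section.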
.SS Metadata
.PP
User metadata is stored as x-amz-meta- keys.
@@ -33677,6 +32310,19 @@ secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
\f[R]
.fi
+.PP
+\f[B]Note\f[R] that \f[C]--s3-versions\f[R] does not work with GCS when
+it needs to do directory paging.
+Rclone will return the error:
+.IP
+.nf
+\f[C]
+s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker
+\f[R]
+.fi
+.PP
+This is Google bug
+#312292516 (https://issuetracker.google.com/u/0/issues/312292516).
.SS DigitalOcean Spaces
.PP
Spaces (https://www.digitalocean.com/products/object-storage/) is an
@@ -34764,6 +33410,39 @@ endpoint = s3.rackcorp.com
location_constraint = au-nsw
\f[R]
.fi
+.SS Rclone Serve S3
+.PP
+Rclone can serve any remote over the S3 protocol.
+For details see the rclone serve
+s3 (https://rclone.org/commands/rclone_serve_s3/) documentation.
+.PP
+For example, to serve \f[C]remote:path\f[R] over s3, run the server like
+this:
+.IP
+.nf
+\f[C]
+rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+\f[R]
+.fi
+.PP
+This will be compatible with an rclone remote which is defined like
+this:
+.IP
+.nf
+\f[C]
+[serves3]
+type = s3
+provider = Rclone
+endpoint = http://127.0.0.1:8080/
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+use_multipart_uploads = false
+\f[R]
+.fi
+.PP
+Note that setting \f[C]use_multipart_uploads = false\f[R] (as in the
+example above) is to work around a
+bug (https://rclone.org/commands/rclone_serve_s3/#bugs) which will be
+fixed in due course.
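+.PP
+With the server running, the \f[C]serves3\f[R] remote above can be used
+like any other remote, for example (the bucket name is illustrative):
+.IP
+.nf
+\f[C]
+rclone lsd serves3:
+rclone copy /path/to/files serves3:bucket
+\f[R]
+.fi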
.SS Scaleway
.PP
Scaleway (https://www.scaleway.com/object-storage/) The Object Storage
@@ -35767,6 +34446,147 @@ server_side_encryption =
storage_class =
\f[R]
.fi
+.SS Linode
+.PP
+Here is an example of making a Linode Object
+Storage (https://www.linode.com/products/object-storage/) configuration.
+First run:
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process.
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> linode
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
+ \[rs] (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Linode Object Storage
+ \[rs] (Linode)
+[snip]
+provider> Linode
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \[rs] (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \[rs] (true)
+env_auth>
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option endpoint.
+Endpoint for Linode Object Storage API.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Atlanta, GA (USA), us-southeast-1
+ \[rs] (us-southeast-1.linodeobjects.com)
+ 2 / Chicago, IL (USA), us-ord-1
+ \[rs] (us-ord-1.linodeobjects.com)
+ 3 / Frankfurt (Germany), eu-central-1
+ \[rs] (eu-central-1.linodeobjects.com)
+ 4 / Milan (Italy), it-mil-1
+ \[rs] (it-mil-1.linodeobjects.com)
+ 5 / Newark, NJ (USA), us-east-1
+ \[rs] (us-east-1.linodeobjects.com)
+ 6 / Paris (France), fr-par-1
+ \[rs] (fr-par-1.linodeobjects.com)
+ 7 / Seattle, WA (USA), us-sea-1
+ \[rs] (us-sea-1.linodeobjects.com)
+ 8 / Singapore ap-south-1
+ \[rs] (ap-south-1.linodeobjects.com)
+ 9 / Stockholm (Sweden), se-sto-1
+ \[rs] (se-sto-1.linodeobjects.com)
+10 / Washington, DC, (USA), us-iad-1
+ \[rs] (us-iad-1.linodeobjects.com)
+endpoint> 3
+
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn\[aq]t set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn\[aq]t copy the ACL from the source but rather writes a fresh one.
+If the acl is an empty string then no X-Amz-Acl: header is added and
+the default (private) will be used.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \[rs] (private)
+[snip]
+acl>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Linode
+- access_key_id: ACCESS_KEY
+- secret_access_key: SECRET_ACCESS_KEY
+- endpoint: eu-central-1.linodeobjects.com
+Keep this \[dq]linode\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+This will leave the config file looking like this.
+.IP
+.nf
+\f[C]
+[linode]
+type = s3
+provider = Linode
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+endpoint = eu-central-1.linodeobjects.com
+\f[R]
+.fi
.SS ArvanCloud
.PP
ArvanCloud (https://www.arvancloud.com/en/products/cloud-storage)
@@ -36550,9 +35370,9 @@ This remote supports \[ga]--fast-list\[ga] which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
-### Modified time
+### Modification times
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
\[ga]X-Bz-Info-src_last_modified_millis\[ga] as milliseconds since 1970-01-01
in the Backblaze standard. Other tools should be able to use this as
a modified time.
@@ -36978,7 +35798,7 @@ Properties:
- Config: upload_concurrency
- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
- Type: int
-- Default: 16
+- Default: 4
#### --b2-disable-checksum
@@ -37058,6 +35878,37 @@ Properties:
- Type: bool
- Default: false
+#### --b2-lifecycle
+
+Set the number of days deleted files should be kept when creating a bucket.
+
+On bucket creation, this parameter is used to create a lifecycle rule
+for the entire bucket.
+
+If lifecycle is 0 (the default) it does not create a lifecycle rule so
+the default B2 behaviour applies. This is to create versions of files
+on delete and overwrite and to keep them indefinitely.
+
+If lifecycle is >0 then it creates a single rule setting the number of
+days before a file that is deleted or overwritten is deleted
+permanently. This is known as daysFromHidingToDeleting in the b2 docs.
+
+The minimum value for this parameter is 1 day.
+
+You can also enable hard_delete in the config, which means deletions
+won\[aq]t cause versions but overwrites will still cause versions to be
+made.
+
+See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket creation.
+
+
+Properties:
+
+- Config: lifecycle
+- Env Var: RCLONE_B2_LIFECYCLE
+- Type: int
+- Default: 0
+
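+As a hedged example (the bucket name is illustrative), a bucket could be
+created with deleted files kept for 30 days like this:
+
+    rclone mkdir --b2-lifecycle 30 b2:bucket
+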
#### --b2-encoding
The encoding for the backend.
@@ -37068,9 +35919,76 @@ Properties:
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+## Backend commands
+
+Here are the commands specific to the b2 backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### lifecycle
+
+Read or set the lifecycle for a bucket
+
+ rclone backend lifecycle remote: [options] [+]
+
+This command can be used to read or set the lifecycle for a bucket.
+
+Usage Examples:
+
+To show the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket
+
+This will dump something like this showing the lifecycle rules.
+
+ [
+ {
+ \[dq]daysFromHidingToDeleting\[dq]: 1,
+ \[dq]daysFromUploadingToHiding\[dq]: null,
+ \[dq]fileNamePrefix\[dq]: \[dq]\[dq]
+ }
+ ]
+
+If there are no lifecycle rules (the default) then it will just return [].
+
+To reset the current lifecycle rules:
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
+ rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
+
+This will run and then print the new lifecycle rules as above.
+
+Rclone only lets you set lifecycles for the whole bucket with the
+fileNamePrefix = \[dq]\[dq].
+
+You can\[aq]t disable versioning with B2. The best you can do is to set
+daysFromHidingToDeleting to 1 day. You can also enable hard_delete in
+the config, which means deletions won\[aq]t cause versions but
+overwrites will still cause versions to be made.
+
+ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
+
+See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules
+
+
+Options:
+
+- \[dq]daysFromHidingToDeleting\[dq]: After a file has been hidden for this many days it is deleted. 0 is off.
+- \[dq]daysFromUploadingToHiding\[dq]: A file is hidden this many days after it is uploaded
+
## Limitations
@@ -37258,7 +36176,7 @@ Delete this remote y/e/d> y
.IP
.nf
\f[C]
-### Modified time and hashes
+### Modification times and hashes
Box allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -37501,7 +36419,7 @@ Properties:
Impersonate this user ID when using a service account.
-Settng this flag allows rclone, when using a JWT service account, to
+Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can found for
@@ -37529,7 +36447,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_BOX_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
@@ -38443,7 +37361,7 @@ revert (sometimes silently) to time/size comparison if compatible hashsums
between source and target are not found.
-### Modified time
+### Modification times
Chunker stores modification times using the wrapped remote so support
depends on that. For a small non-chunked file the chunker overlay simply
@@ -38762,7 +37680,7 @@ To copy a local directory to an ShareFile directory called backup
Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga].
-### Modified time and hashes
+### Modification times and hashes
ShareFile allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -38957,7 +37875,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SHAREFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -39344,7 +38262,7 @@ Example:
\[ga]1/12/qgm4avr35m5loi1th53ato71v0\[ga]
-### Modified time and hashes
+### Modification times and hashes
Crypt stores modification times using the underlying remote so support
depends on that.
@@ -39651,7 +38569,7 @@ has a header and is divided into chunks.
The initial nonce is generated from the operating systems crypto
strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
-The chance of a nonce being re-used is minuscule. If you wrote an
+The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10\[S1]\[u2078] bytes) you would have a probability of
approximately 2\[tmu]10\[u207B]\[S3]\[S2] of re-using a nonce.
@@ -40120,7 +39038,7 @@ You can then use team folders like this \[ga]remote:/TeamFolder\[ga] and
A leading \[ga]/\[ga] for a Dropbox personal account will do nothing, but it
will take an extra HTTP transaction so it should be avoided.
-### Modified time and Hashes
+### Modification times and hashes
Dropbox supports modified times, but the only way to set a
modification time is to re-upload the file.
@@ -40366,6 +39284,30 @@ Properties:
- Type: bool
- Default: false
+#### --dropbox-pacer-min-sleep
+
+Minimum time to sleep between API calls.
+
+Properties:
+
+- Config: pacer_min_sleep
+- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
+- Type: Duration
+- Default: 10ms
+
+#### --dropbox-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_DROPBOX_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
+
#### --dropbox-batch-mode
Upload file batching sync|async|off.
@@ -40452,30 +39394,6 @@ Properties:
- Type: Duration
- Default: 10m0s
-#### --dropbox-pacer-min-sleep
-
-Minimum time to sleep between API calls.
-
-Properties:
-
-- Config: pacer_min_sleep
-- Env Var: RCLONE_DROPBOX_PACER_MIN_SLEEP
-- Type: Duration
-- Default: 10ms
-
-#### --dropbox-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_DROPBOX_ENCODING
-- Type: MultiEncoder
-- Default: Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot
-
## Limitations
@@ -40609,7 +39527,7 @@ To copy a local directory to an Enterprise File Fabric directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
The Enterprise File Fabric allows modification times to be set on
files accurate to 1 second. These will be used to detect whether
@@ -40777,7 +39695,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FILEFABRIC_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
@@ -41196,7 +40114,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_FTP_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Del,Ctl,RightSpace,Dot
- Examples:
- \[dq]Asterisk,Ctl,Dot,Slash\[dq]
@@ -41239,7 +40157,7 @@ at present.
The \[ga]ftp_proxy\[ga] environment variable is not currently supported.
-#### Modified time
+### Modification times
File modification time (timestamps) is supported to 1 second resolution
for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server.
@@ -41472,7 +40390,7 @@ Eg \[ga]--header-upload \[dq]Content-Type text/potato\[dq]\[ga]
Note that the last of these is for setting custom metadata in the form
\[ga]--header-upload \[dq]x-goog-meta-key: value\[dq]\[ga]
-### Modification time
+### Modification times
Google Cloud Storage stores md5sum natively.
Google\[aq]s [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
@@ -41921,7 +40839,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_GCS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
@@ -42031,6 +40949,8 @@ use. This changes what type of token is granted to rclone. [The
scopes are defined
here](https://developers.google.com/drive/v3/web/about-auth).
+A comma-separated list is allowed e.g. \[ga]drive.readonly,drive.file\[ga].
+
The scope are
#### drive
@@ -42262,10 +41182,14 @@ large folder (10600 directories, 39000 files):
- without \[ga]--fast-list\[ga]: 22:05 min
- with \[ga]--fast-list\[ga]: 58s
-### Modified time
+### Modification times and hashes
Google drive stores modification times accurate to 1 ms.
+Hash algorithms MD5, SHA1 and SHA256 are supported. Note, however,
+that a small fraction of files uploaded may not have SHA1 or SHA256
+hashes especially if they were uploaded before 2018.
+
### Restricted filename characters
Only Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8),
@@ -42485,7 +41409,7 @@ Properties:
#### --drive-scope
-Scope that rclone should use when requesting access from drive.
+Comma separated list of scopes that rclone should use when requesting access from drive.
Properties:
@@ -42673,15 +41597,40 @@ Properties:
- Type: bool
- Default: false
+#### --drive-show-all-gdocs
+
+Show all Google Docs including non-exportable ones in listings.
+
+If you try a server side copy on a Google Form without this flag, you
+will get this error:
+
+ No export formats found for \[dq]application/vnd.google-apps.form\[dq]
+
+However adding this flag will allow the form to be server side copied.
+
+Note that rclone doesn\[aq]t add extensions to the Google Docs file names
+in this mode.
+
+Do **not** use this flag when trying to download Google Docs - rclone
+will fail to download them.
+
+
+Properties:
+
+- Config: show_all_gdocs
+- Env Var: RCLONE_DRIVE_SHOW_ALL_GDOCS
+- Type: bool
+- Default: false
+
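+As a hedged example (the paths are illustrative), a Google Form could
+then be included in a server side copy like this:
+
+    rclone copy --drive-show-all-gdocs drive:Forms drive:Backup
+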
#### --drive-skip-checksum-gphotos
-Skip MD5 checksum on Google photos and videos only.
+Skip checksums on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
-blank MD5 checksum.
+blank checksum.
Google photos are identified by being in the \[dq]photos\[dq] space.
@@ -43135,6 +42084,98 @@ Properties:
- Type: bool
- Default: true
+#### --drive-metadata-owner
+
+Control whether owner should be read or written in metadata.
+
+Owner is a standard part of the file metadata so is easy to read. But it
+isn\[aq]t always desirable to set the owner from the metadata.
+
+Note that you can\[aq]t set the owner on Shared Drives, and that setting
+ownership will generate an email to the new owner (this can\[aq]t be
+disabled), and you can\[aq]t transfer ownership to someone outside your
+organization.
+
+
+Properties:
+
+- Config: metadata_owner
+- Env Var: RCLONE_DRIVE_METADATA_OWNER
+- Type: Bits
+- Default: read
+- Examples:
+ - \[dq]off\[dq]
+ - Do not read or write the value
+ - \[dq]read\[dq]
+ - Read the value only
+ - \[dq]write\[dq]
+ - Write the value only
+ - \[dq]read,write\[dq]
+ - Read and Write the value.
+
+#### --drive-metadata-permissions
+
+Control whether permissions should be read or written in metadata.
+
+Reading permissions metadata from files can be done quickly, but it
+isn\[aq]t always desirable to set the permissions from the metadata.
+
+Note that rclone drops any inherited permissions on Shared Drives and
+any owner permission on My Drives as these are duplicated in the owner
+metadata.
+
+
+Properties:
+
+- Config: metadata_permissions
+- Env Var: RCLONE_DRIVE_METADATA_PERMISSIONS
+- Type: Bits
+- Default: off
+- Examples:
+ - \[dq]off\[dq]
+ - Do not read or write the value
+ - \[dq]read\[dq]
+ - Read the value only
+ - \[dq]write\[dq]
+ - Write the value only
+ - \[dq]read,write\[dq]
+ - Read and Write the value.
+
+#### --drive-metadata-labels
+
+Control whether labels should be read or written in metadata.
+
+Reading labels metadata from files takes an extra API transaction and
+will slow down listings. It isn\[aq]t always desirable to set the labels
+from the metadata.
+
+The format of labels is documented in the drive API documentation at
+https://developers.google.com/drive/api/reference/rest/v3/Label -
+rclone just provides a JSON dump of this format.
+
+When setting labels, the label and fields must already exist - rclone
+will not create them. This means that if you are transferring labels
+from two different accounts you will have to create the labels in
+advance and use the metadata mapper to translate the IDs between the
+two accounts.
+
+
+Properties:
+
+- Config: metadata_labels
+- Env Var: RCLONE_DRIVE_METADATA_LABELS
+- Type: Bits
+- Default: off
+- Examples:
+ - \[dq]off\[dq]
+ - Do not read or write the value
+ - \[dq]read\[dq]
+ - Read the value only
+ - \[dq]write\[dq]
+ - Write the value only
+ - \[dq]read,write\[dq]
+ - Read and Write the value.
+
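+As a hedged example (the file name is illustrative), the labels on a
+single file could be inspected like this:
+
+    rclone lsjson --stat -M --drive-metadata-labels read drive:file.txt
+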
#### --drive-encoding
The encoding for the backend.
@@ -43145,7 +42186,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_DRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: InvalidUtf8
#### --drive-env-auth
@@ -43166,6 +42207,29 @@ Properties:
- \[dq]true\[dq]
- Get GCP IAM credentials from the environment (env vars or IAM).
+### Metadata
+
+User metadata is stored in the properties field of the drive object.
+
+Here are the possible system metadata items for the drive backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation) with ms accuracy. Note that this is only writable on fresh uploads - it can\[aq]t be written for updates. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+| content-type | The MIME type of the file. | string | text/plain | N |
+| copy-requires-writer-permission | Whether the options to copy, print, or download this file, should be disabled for readers and commenters. | boolean | true | N |
+| description | A short description of the file. | string | Contract for signing | N |
+| folder-color-rgb | The color for a folder or a shortcut to a folder as an RGB hex string. | string | 881133 | N |
+| labels | Labels attached to this file in a JSON dump of Google drive format. Enable with --drive-metadata-labels. | JSON | [] | N |
+| mtime | Time of last modification with ms accuracy. | RFC 3339 | 2006-01-02T15:04:05.999Z07:00 | N |
+| owner | The owner of the file. Usually an email address. Enable with --drive-metadata-owner. | string | user\[at]example.com | N |
+| permissions | Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren\[aq]t inherited. Enable with --drive-metadata-permissions. | JSON | {} | N |
+| starred | Whether the user has starred the file. | boolean | false | N |
+| viewed-by-me | Whether the file has been viewed by this user. | boolean | true | **Y** |
+| writers-can-share | Whether users with only writer permission can modify the file\[aq]s permissions. Not populated for items in shared drives. | boolean | false | N |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
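+For example, this metadata is only read and written when the
+\[ga]--metadata\[ga]/\[ga]-M\[ga] flag is supplied - a hedged sketch
+with illustrative paths:
+
+    rclone copy -M /path/to/files drive:backup
+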
## Backend commands
Here are the commands specific to the drive backend.
@@ -43429,6 +42493,11 @@ Waiting a moderate period of time between attempts (estimated to be
approximately 1 hour) and/or not using --fast-list both seem to be
effective in preventing the problem.
+### SHA1 or SHA256 hashes may be missing
+
+All files have MD5 hashes, but a small fraction of files uploaded may
+not have SHA1 or SHA256 hashes especially if they were uploaded before 2018.
+
## Making your own client_id
When you use rclone with Google drive in its default configuration you
@@ -43881,8 +42950,108 @@ T{
Properties:
T}
T{
-- Config: encoding - Env Var: RCLONE_GPHOTOS_ENCODING - Type:
-MultiEncoder - Default: Slash,CrLf,InvalidUtf8,Dot
+- Config: encoding - Env Var: RCLONE_GPHOTOS_ENCODING - Type: Encoding -
+Default: Slash,CrLf,InvalidUtf8,Dot
+T}
+T{
+#### --gphotos-batch-mode
+T}
+T{
+Upload file batching sync|async|off.
+T}
+T{
+This sets the batch mode used by rclone.
+T}
+T{
+This has 3 possible values
+T}
+T{
+- off - no batching - sync - batch uploads and check completion
+(default) - async - batch upload and don\[aq]t check completion
+T}
+T{
+Rclone will close any outstanding batches when it exits which may cause
+a delay on quit.
+T}
+T{
+Properties:
+T}
+T{
+- Config: batch_mode - Env Var: RCLONE_GPHOTOS_BATCH_MODE - Type: string
+- Default: \[dq]sync\[dq]
+T}
+T{
+#### --gphotos-batch-size
+T}
+T{
+Max number of files in upload batch.
+T}
+T{
+This sets the batch size of files to upload.
+It has to be less than 50.
+T}
+T{
+By default this is 0 which means rclone will calculate the batch size
+depending on the setting of batch_mode.
+T}
+T{
+- batch_mode: async - default batch_size is 50 - batch_mode: sync -
+default batch_size is the same as --transfers - batch_mode: off - not in
+use
+T}
+T{
+Rclone will close any outstanding batches when it exits which may cause
+a delay on quit.
+T}
+T{
+Setting this is a great idea if you are uploading lots of small files as
+it will make them a lot quicker.
+You can use --transfers 32 to maximise throughput.
+T}
+T{
+Properties:
+T}
+T{
+- Config: batch_size - Env Var: RCLONE_GPHOTOS_BATCH_SIZE - Type: int -
+Default: 0
+T}
+T{
+#### --gphotos-batch-timeout
+T}
+T{
+Max time to allow an idle upload batch before uploading.
+T}
+T{
+If an upload batch is idle for more than this long then it will be
+uploaded.
+T}
+T{
+The default for this is 0 which means rclone will choose a sensible
+default based on the batch_mode in use.
+T}
+T{
+- batch_mode: async - default batch_timeout is 10s - batch_mode: sync -
+default batch_timeout is 1s - batch_mode: off - not in use
+T}
+T{
+Properties:
+T}
+T{
+- Config: batch_timeout - Env Var: RCLONE_GPHOTOS_BATCH_TIMEOUT - Type:
+Duration - Default: 0s
+T}
+T{
+#### --gphotos-batch-commit-timeout
+T}
+T{
+Max time to wait for a batch to finish committing
+T}
+T{
+Properties:
+T}
+T{
+- Config: batch_commit_timeout - Env Var:
+RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT - Type: Duration - Default: 10m0s
T}
T{
## Limitations
@@ -43960,7 +43129,7 @@ not what you uploaded it with to \f[C]album\f[R].
In practise this shouldn\[aq]t cause too many problems.
T}
T{
-### Modified time
+### Modification times
T}
T{
The date shown of media in Google Photos is the creation date as
@@ -44478,7 +43647,7 @@ For this docker image the remote needs to be configured like this:
You can stop this image with \[ga]docker kill rclone-hdfs\[ga] (**NB** it does not use volumes, so all data
uploaded will be lost.)
-### Modified time
+### Modification times
Time accurate to 1 second is stored.
@@ -44508,16 +43677,16 @@ Here are the Standard options specific to hdfs (Hadoop distributed file system).
#### --hdfs-namenode
-Hadoop name node and port.
+Hadoop name nodes and ports.
-E.g. \[dq]namenode:8020\[dq] to connect to host namenode at port 8020.
+E.g. \[dq]namenode-1:8020,namenode-2:8020,...\[dq] to connect to host namenodes at port 8020.
Properties:
- Config: namenode
- Env Var: RCLONE_HDFS_NAMENODE
-- Type: string
-- Required: true
+- Type: CommaSepList
+- Default:
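+
+For example, a config entry for a cluster with two namenodes might look
+like this (a sketch; the hostnames are placeholders):
+
+    [hdfs]
+    type = hdfs
+    namenode = namenode-1:8020,namenode-2:8020
+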
#### --hdfs-username
@@ -44581,7 +43750,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HDFS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot
@@ -44695,7 +43864,7 @@ Using
the process is very similar to the process of initial setup exemplified before.
-### Modified time and hashes
+### Modification times and hashes
HiDrive allows modification times to be set on objects accurate to 1 second.
@@ -44987,7 +44156,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_HIDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Dot
@@ -45105,7 +44274,7 @@ Sync the remote \[ga]directory\[ga] to \[ga]/home/local/directory\[ga], deleting
This remote is read only - you can\[aq]t upload files to an HTTP server.
-### Modified time
+### Modification times
Most HTTP servers store time accurate to 1 second.
@@ -45212,6 +44381,46 @@ Properties:
- Type: bool
- Default: false
+## Backend commands
+
+Here are the commands specific to the http backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
+### set
+
+Set command for updating the config parameters.
+
+ rclone backend set remote: [options] [+]
+
+This set command can be used to update the config parameters
+for a running http backend.
+
+Usage Examples:
+
+ rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
+ rclone rc backend/command command=set fs=remote: -o url=https://example.com
+
+The option keys are named as they are in the config file.
+
+This rebuilds the connection to the http backend when it is called with
+the new parameters. Only new parameters need be passed as the values
+will default to those currently in use.
+
+It doesn\[aq]t return anything.
+
+
## Limitations
@@ -45223,6 +44432,224 @@ remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+# ImageKit
+This is a backend for the [ImageKit.io](https://imagekit.io/) storage service.
+
+#### About ImageKit
+[ImageKit.io](https://imagekit.io/) provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.
+
+
+#### Accounts & Pricing
+
+To use this backend, you need to [create an account](https://imagekit.io/registration/) on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans).
+
+## Configuration
+
+Here is an example of making an imagekit configuration.
+
+Firstly create a [ImageKit.io](https://imagekit.io/) account and choose a plan.
+
+You will need to log in and get the \[ga]publicKey\[ga] and \[ga]privateKey\[ga] for your account from the developer section.
+
+Now run
+\f[R]
+.fi
+.PP
+rclone config
+.IP
+.nf
+\f[C]
+This will guide you through an interactive setup process:
+\f[R]
+.fi
+.PP
+No remotes found, make a new one?
+n) New remote s) Set configuration password q) Quit config n/s/q> n
+.PP
+Enter the name for the new remote.
+name> imagekit-media-library
+.PP
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip] XX / ImageKit.io \ (imagekit) [snip] Storage> imagekit
+.PP
+Option endpoint.
+You can find your ImageKit.io URL endpoint in your
+dashboard (https://imagekit.io/dashboard/developer/api-keys) Enter a
+value.
+endpoint> https://ik.imagekit.io/imagekit_id
+.PP
+Option public_key.
+You can find your ImageKit.io public key in your
+dashboard (https://imagekit.io/dashboard/developer/api-keys) Enter a
+value.
+public_key> public_****************************
+.PP
+Option private_key.
+You can find your ImageKit.io private key in your
+dashboard (https://imagekit.io/dashboard/developer/api-keys) Enter a
+value.
+private_key> private_****************************
+.PP
+Edit advanced config?
+y) Yes n) No (default) y/n> n
+.PP
+Configuration complete.
+Options: - type: imagekit - endpoint: https://ik.imagekit.io/imagekit_id
+- public_key: public_**************************** - private_key:
+private_****************************
+.PP
+Keep this \[dq]imagekit-media-library\[dq] remote?
+y) Yes this is OK (default) e) Edit this remote d) Delete this remote
+y/e/d> y
+.IP
+.nf
+\f[C]
+List directories in the top level of your Media Library
+\f[R]
+.fi
+.PP
+rclone lsd imagekit-media-library:
+.IP
+.nf
+\f[C]
+Make a new directory.
+\f[R]
+.fi
+.PP
+rclone mkdir imagekit-media-library:directory
+.IP
+.nf
+\f[C]
+List the contents of a directory.
+\f[R]
+.fi
+.PP
+rclone ls imagekit-media-library:directory
+.IP
+.nf
+\f[C]
+### Modified time and hashes
+
+ImageKit does not support modification times or hashes yet.
+
+### Checksums
+
+No checksums are supported.
+
+
+### Standard options
+
+Here are the Standard options specific to imagekit (ImageKit.io).
+
+#### --imagekit-endpoint
+
+You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_IMAGEKIT_ENDPOINT
+- Type: string
+- Required: true
+
+#### --imagekit-public-key
+
+You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: public_key
+- Env Var: RCLONE_IMAGEKIT_PUBLIC_KEY
+- Type: string
+- Required: true
+
+#### --imagekit-private-key
+
+You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
+
+Properties:
+
+- Config: private_key
+- Env Var: RCLONE_IMAGEKIT_PRIVATE_KEY
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to imagekit (ImageKit.io).
+
+#### --imagekit-only-signed
+
+If you have configured \[ga]Restrict unsigned image URLs\[ga] in your dashboard settings, set this to true.
+
+Properties:
+
+- Config: only_signed
+- Env Var: RCLONE_IMAGEKIT_ONLY_SIGNED
+- Type: bool
+- Default: false
+
+#### --imagekit-versions
+
+Include old versions in directory listings.
+
+Properties:
+
+- Config: versions
+- Env Var: RCLONE_IMAGEKIT_VERSIONS
+- Type: bool
+- Default: false
+
+#### --imagekit-upload-tags
+
+Tags to add to the uploaded files, e.g. \[dq]tag1,tag2\[dq].
+
+Properties:
+
+- Config: upload_tags
+- Env Var: RCLONE_IMAGEKIT_UPLOAD_TAGS
+- Type: string
+- Required: false
+
+#### --imagekit-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_IMAGEKIT_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket
+
+### Metadata
+
+Any metadata supported by the underlying remote is read and written.
+
+Here are the possible system metadata items for the imagekit backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| aws-tags | AI generated tags by AWS Rekognition associated with the image | string | tag1,tag2 | **Y** |
+| btime | Time of file birth (creation) read from Last-Modified header | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+| custom-coordinates | Custom coordinates of the file | string | 0,0,100,100 | **Y** |
+| file-type | Type of the file | string | image | **Y** |
+| google-tags | AI generated tags by Google Cloud Vision associated with the image | string | tag1,tag2 | **Y** |
+| has-alpha | Whether the image has alpha channel or not | bool | | **Y** |
+| height | Height of the image or video in pixels | int | | **Y** |
+| is-private-file | Whether the file is private or not | bool | | **Y** |
+| size | Size of the object in bytes | int64 | | **Y** |
+| tags | Tags associated with the file | string | tag1,tag2 | **Y** |
+| width | Width of the image or video in pixels | int | | **Y** |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
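+To see this metadata in a listing you could run, for example (a sketch
+using the remote and directory names from the walkthrough above):
+
+    rclone lsjson --metadata imagekit-media-library:directory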
+
+
# Internet Archive
The Internet Archive backend utilizes Items on [archive.org](https://archive.org/)
@@ -45465,7 +44892,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_INTERNETARCHIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot
### Metadata
@@ -45698,7 +45125,7 @@ them. Generally you should avoid these, unless you know what you are doing.
### --fast-list
-This remote supports \[ga]--fast-list\[ga] which allows you to use fewer
+This backend supports \[ga]--fast-list\[ga] which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
@@ -45706,10 +45133,11 @@ Note that the implementation in Jottacloud always uses only a single
API request to get the entire list, so for large folders this could
lead to long wait time before the first results are shown.
-Note also that with rclone version 1.58 and newer information about
-[MIME types](https://rclone.org/overview/#mime-type) are not available when using \[ga]--fast-list\[ga].
+Note also that with rclone version 1.58 and newer, information about
+[MIME types](https://rclone.org/overview/#mime-type) and metadata item [utime](#metadata)
+are not available when using \[ga]--fast-list\[ga].
-### Modified time and hashes
+### Modification times and hashes
Jottacloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -45908,9 +45336,24 @@ Properties:
- Config: encoding
- Env Var: RCLONE_JOTTACLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
+### Metadata
+
+Jottacloud has limited support for metadata, currently an extended set of timestamps.
+
+Here are the possible system metadata items for the jottacloud backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation), read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| content-type | MIME type, also known as media type | string | text/plain | **Y** |
+| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| utime | Time of last upload, when current revision was created, generated by backend | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | **Y** |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
## Limitations
@@ -46072,34 +45515,6 @@ Properties:
- Type: string
- Required: true
-#### --koofr-password
-
-Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: digistorage
-- Type: string
-- Required: true
-
-#### --koofr-password
-
-Your password for rclone (generate one at your service\[aq]s settings page).
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_KOOFR_PASSWORD
-- Provider: other
-- Type: string
-- Required: true
-
### Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
@@ -46140,7 +45555,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_KOOFR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -46242,6 +45657,68 @@ password = *** ENCRYPTED *** -------------------- y) Yes this is OK
.IP
.nf
\f[C]
+# Linkbox
+
+Linkbox is [a private cloud drive](https://linkbox.to/).
+
+## Configuration
+
+Here is an example of making a remote for Linkbox.
+
+First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+\f[R]
+.fi
+.PP
+No remotes found, make a new one?
+n) New remote s) Set configuration password q) Quit config n/s/q> n
+.PP
+Enter name for new remote.
+name> remote
+.PP
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / Linkbox \ (linkbox) Storage> XX
+.PP
+Option token.
+Token from https://www.linkbox.to/admin/account Enter a value.
+token> testFromCLToken
+.PP
+Configuration complete.
+Options: - type: linkbox - token: XXXXXXXXXXX Keep this
+\[dq]linkbox\[dq] remote?
+y) Yes this is OK (default) e) Edit this remote d) Delete this remote
+y/e/d> y
+.IP
+.nf
+\f[C]
+
+### Standard options
+
+Here are the Standard options specific to linkbox (Linkbox).
+
+#### --linkbox-token
+
+Token from https://www.linkbox.to/admin/account
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_LINKBOX_TOKEN
+- Type: string
+- Required: true
+
+
+
+## Limitations
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can\[aq]t be used in JSON strings.
+
# Mail.ru Cloud
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
@@ -46337,17 +45814,15 @@ excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
-### Modified time
+### Modification times and hashes
Files support a modification time attribute with up to 1 second precision.
Directories do not have a modification time, which is shown as \[dq]Jan 1 1970\[dq].
-### Hash checksums
-
-Hash sums use a custom Mail.ru algorithm based on SHA1.
+File hashes are supported, with a custom Mail.ru algorithm based on SHA1.
If file size is less than or equal to the SHA1 block size (20 bytes),
its hash is simply its data right-padded with zero bytes.
-Hash sum of a larger file is computed as a SHA1 sum of the file data
+The hash of a larger file is computed as a SHA1 of the file data
bytes concatenated with a decimal representation of the data length.
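
As a rough illustration of the large-file case described above (a
sketch only, assuming a Unix shell with GNU coreutils; file.bin is a
placeholder):

    { cat file.bin; printf %s $(stat -c%s file.bin); } | sha1sum
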
### Emptying Trash
@@ -46625,7 +46100,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_MAILRU_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -46696,7 +46171,7 @@ To copy a local directory to an Mega directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
Mega does not support modification times or hashes yet.
@@ -46894,7 +46369,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_MEGA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
@@ -46965,7 +46440,7 @@ testing or with an rclone server or rclone mount, e.g.
rclone serve webdav :memory:
rclone serve sftp :memory:
-### Modified time and hashes
+### Modification times and hashes
The memory backend supports MD5 hashes and modification times accurate to 1 nS.
@@ -47314,10 +46789,10 @@ This remote supports \[ga]--fast-list\[ga] which allows you to use fewer
transactions in exchange for more memory. See the [rclone
docs](https://rclone.org/docs/#fast-list) for more details.
-### Modified time
+### Modification times and hashes
-The modified time is stored as metadata on the object with the \[ga]mtime\[ga]
-key. It is stored using RFC3339 Format time with nanosecond
+The modification time is stored as metadata on the object with the
+\[ga]mtime\[ga] key. It is stored using RFC3339 Format time with nanosecond
precision. The metadata is supplied during directory listings so
there is no performance overhead to using it.
@@ -47327,6 +46802,10 @@ flag. Note that rclone can\[aq]t set \[ga]LastModified\[ga], so using the
\[ga]--update\[ga] flag when syncing is recommended if using
\[ga]--use-server-modtime\[ga].
+MD5 hashes are stored with blobs. However blobs that were uploaded in
+chunks only have an MD5 if the source remote was capable of MD5
+hashes, e.g. the local disk.
+
### Performance
When uploading large files, increasing the value of
@@ -47355,12 +46834,6 @@ These only get replaced if they are the last character in the name:
Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can\[aq]t be used in JSON strings.
-### Hashes
-
-MD5 hashes are stored with blobs. However blobs that were uploaded in
-chunks only have an MD5 if the source remote was capable of MD5
-hashes, e.g. the local disk.
-
### Authentication {#authentication}
There are a number of ways of supplying credentials for Azure Blob
@@ -47914,10 +47387,10 @@ Properties:
#### --azureblob-access-tier
-Access tier of blob: hot, cool or archive.
+Access tier of blob: hot, cool, cold or archive.
-Archived blobs can be restored by setting access tier to hot or
-cool. Leave blank if you intend to use default access tier, which is
+Archived blobs can be restored by setting access tier to hot, cool or
+cold. Leave blank if you intend to use default access tier, which is
set at account level
If there is no \[dq]access tier\[dq] specified, rclone doesn\[aq]t apply any tier.
@@ -47925,7 +47398,7 @@ rclone performs \[dq]Set Tier\[dq] operation on blobs while uploading, if object
are not modified, specifying \[dq]access tier\[dq] to new one will have no effect.
If blobs are in \[dq]archive tier\[dq] at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
-tiering blob to \[dq]Hot\[dq] or \[dq]Cool\[dq].
+tiering blob to \[dq]Hot\[dq], \[dq]Cool\[dq] or \[dq]Cold\[dq].
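+
+For example, an archived blob could be restored by setting its tier
+with the settier command (a sketch; the path is a placeholder):
+
+    rclone settier Hot remote:container/path/to/blob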
Properties:
@@ -48006,7 +47479,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_AZUREBLOB_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8
#### --azureblob-public-access
@@ -48115,6 +47588,703 @@ advanced settings, setting it to
\[ga]http(s)://:/devstoreaccount1\[ga]
(e.g. \[ga]http://10.254.2.5:10000/devstoreaccount1\[ga]).
+# Microsoft Azure Files Storage
+
+Paths are specified as \[ga]remote:\[ga] You may put subdirectories in too,
+e.g. \[ga]remote:path/to/dir\[ga].
+
+## Configuration
+
+Here is an example of making a Microsoft Azure Files Storage
+configuration. For a remote called \[ga]remote\[ga]. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+\f[R]
+.fi
+.PP
+No remotes found, make a new one?
+n) New remote s) Set configuration password q) Quit config n/s/q> n
+name> remote Type of storage to configure.
+Choose a number from below, or type in your own value [snip] XX /
+Microsoft Azure Files Storage \ \[dq]azurefiles\[dq] [snip]
+.PP
+Option account.
+Azure Storage Account Name.
+Set this to the Azure Storage Account Name in use.
+Leave blank to use SAS URL or connection string, otherwise it needs to
+be set.
+If this is blank and if env_auth is set it will be read from the
+environment variable \f[C]AZURE_STORAGE_ACCOUNT_NAME\f[R] if possible.
+Enter a value.
+Press Enter to leave empty.
+account> account_name
+.PP
+Option share_name.
+Azure Files Share Name.
+This is required and is the name of the share to access.
+Enter a value.
+Press Enter to leave empty.
+share_name> share_name
+.PP
+Option env_auth.
+Read credentials from runtime (environment variables, CLI or MSI).
+See the authentication docs for full info.
+Enter a boolean value (true or false).
+Press Enter for the default (false).
+env_auth>
+.PP
+Option key.
+Storage Account Shared Key.
+Leave blank to use SAS URL or connection string.
+Enter a value.
+Press Enter to leave empty.
+key> base64encodedkey==
+.PP
+Option sas_url.
+SAS URL.
+Leave blank if using account/key or connection string.
+Enter a value.
+Press Enter to leave empty.
+sas_url>
+.PP
+Option connection_string.
+Azure Files Connection String.
+Enter a value.
+Press Enter to leave empty.
+connection_string> [snip]
+.PP
+Configuration complete.
+Options: - type: azurefiles - account: account_name - share_name:
+share_name - key: base64encodedkey== Keep this \[dq]remote\[dq] remote?
+y) Yes this is OK (default) e) Edit this remote d) Delete this remote
+y/e/d>
+.IP
+.nf
+\f[C]
+Once configured you can use rclone.
+
+See all files in the top level:
+
+ rclone lsf remote:
+
+Make a new directory in the root:
+
+ rclone mkdir remote:dir
+
+Recursively List the contents:
+
+ rclone ls remote:
+
+Sync \[ga]/home/local/directory\[ga] to the remote directory, deleting any
+excess files in the directory.
+
+ rclone sync --interactive /home/local/directory remote:dir
+
+### Modified time
+
+The modified time is stored as Azure standard \[ga]LastModified\[ga] time on
+files.
+
+### Performance
+
+When uploading large files, increasing the value of
+\[ga]--azurefiles-upload-concurrency\[ga] will increase performance at the cost
+of using more memory. The default of 16 is set quite conservatively to
+use less memory. It maybe be necessary raise it to 64 or higher to
+fully utilize a 1 GBit/s link with a single file transfer.
+
+### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| \[dq] | 0x22 | \[uFF02] |
+| * | 0x2A | \[uFF0A] |
+| : | 0x3A | \[uFF1A] |
+| < | 0x3C | \[uFF1C] |
+| > | 0x3E | \[uFF1E] |
+| ? | 0x3F | \[uFF1F] |
+| \[rs] | 0x5C | \[uFF3C] |
+| \[rs]| | 0x7C | \[uFF5C] |
+
+File names can also not end with the following characters.
+These only get replaced if they are the last character in the name:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| . | 0x2E | \[uFF0E] |
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can\[aq]t be used in JSON strings.
+
+### Hashes
+
+MD5 hashes are stored with files. Not all files will have MD5 hashes
+as these have to be uploaded with the file.
+
+### Authentication {#authentication}
+
+There are a number of ways of supplying credentials for Azure Files
+Storage. Rclone tries them in the order of the sections below.
+
+#### Env Auth
+
+If the \[ga]env_auth\[ga] config parameter is \[ga]true\[ga] then rclone will pull
+credentials from the environment or runtime.
+
+It tries these authentication methods in this order:
+
+1. Environment Variables
+2. Managed Service Identity Credentials
+3. Azure CLI credentials (as used by the az tool)
+
+These are described in the following sections
+
+##### Env Auth: 1. Environment Variables
+
+If \[ga]env_auth\[ga] is set and environment variables are present rclone
+authenticates a service principal with a secret or certificate, or a
+user with a password, depending on which environment variable are set.
+It reads configuration from these variables, in the following order:
+
+1. Service principal with client secret
+ - \[ga]AZURE_TENANT_ID\[ga]: ID of the service principal\[aq]s tenant. Also called its \[dq]directory\[dq] ID.
+ - \[ga]AZURE_CLIENT_ID\[ga]: the service principal\[aq]s client ID
+ - \[ga]AZURE_CLIENT_SECRET\[ga]: one of the service principal\[aq]s client secrets
+2. Service principal with certificate
+ - \[ga]AZURE_TENANT_ID\[ga]: ID of the service principal\[aq]s tenant. Also called its \[dq]directory\[dq] ID.
+ - \[ga]AZURE_CLIENT_ID\[ga]: the service principal\[aq]s client ID
+ - \[ga]AZURE_CLIENT_CERTIFICATE_PATH\[ga]: path to a PEM or PKCS12 certificate file including the private key.
+ - \[ga]AZURE_CLIENT_CERTIFICATE_PASSWORD\[ga]: (optional) password for the certificate file.
+ - \[ga]AZURE_CLIENT_SEND_CERTIFICATE_CHAIN\[ga]: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to \[dq]true\[dq] or \[dq]1\[dq], authentication requests include the x5c header.
+3. User with username and password
+ - \[ga]AZURE_TENANT_ID\[ga]: (optional) tenant to authenticate in. Defaults to \[dq]organizations\[dq].
+ - \[ga]AZURE_CLIENT_ID\[ga]: client ID of the application the user will authenticate to
+ - \[ga]AZURE_USERNAME\[ga]: a username (usually an email address)
+ - \[ga]AZURE_PASSWORD\[ga]: the user\[aq]s password
+4. Workload Identity
+ - \[ga]AZURE_TENANT_ID\[ga]: Tenant to authenticate in.
+ - \[ga]AZURE_CLIENT_ID\[ga]: Client ID of the application the user will authenticate to.
+ - \[ga]AZURE_FEDERATED_TOKEN_FILE\[ga]: Path to projected service account token file.
+ - \[ga]AZURE_AUTHORITY_HOST\[ga]: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
+
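+For example, to authenticate a service principal with a client secret
+you could export the variables and use an on-the-fly remote (a sketch;
+all values are placeholders):
+
+    export AZURE_TENANT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+    export AZURE_CLIENT_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+    export AZURE_CLIENT_SECRET=xxxx
+    rclone lsf :azurefiles,env_auth,account=ACCOUNT:
+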
+
+##### Env Auth: 2. Managed Service Identity Credentials
+
+When using Managed Service Identity if the VM(SS) on which this
+program is running has a system-assigned identity, it will be used by
+default. If the resource has no system-assigned but exactly one
+user-assigned identity, the user-assigned identity will be used by
+default.
+
+If the resource has multiple user-assigned identities you will need to
+unset \[ga]env_auth\[ga] and set \[ga]use_msi\[ga] instead. See the [\[ga]use_msi\[ga]
+section](#use_msi).
+
+##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
+
+Credentials created with the \[ga]az\[ga] tool can be picked up using \[ga]env_auth\[ga].
+
+For example if you were to login with a service principal like this:
+
+ az login --service-principal -u XXX -p XXX --tenant XXX
+
+Then you could access rclone resources like this:
+
+ rclone lsf :azurefiles,env_auth,account=ACCOUNT:
+
+Or
+
+ rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles:
+
+#### Account and Shared Key
+
+This is the most straightforward and least flexible way. Just fill
+in the \[ga]account\[ga] and \[ga]key\[ga] lines and leave the rest blank.
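+
+A minimal config entry might look like this (a sketch; the values are
+the placeholders from the example configuration above):
+
+    [remote]
+    type = azurefiles
+    account = account_name
+    share_name = share_name
+    key = base64encodedkey==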
+
+#### SAS URL
+
+To use it leave \[ga]account\[ga], \[ga]key\[ga] and \[ga]connection_string\[ga] blank and fill in \[ga]sas_url\[ga].
+
+#### Connection String
+
+To use it leave \[ga]account\[ga], \[ga]key\[ga] and \[ga]sas_url\[ga] blank and fill in \[ga]connection_string\[ga].
+
+#### Service principal with client secret
+
+If these variables are set, rclone will authenticate with a service principal with a client secret.
+
+- \[ga]tenant\[ga]: ID of the service principal\[aq]s tenant. Also called its \[dq]directory\[dq] ID.
+- \[ga]client_id\[ga]: the service principal\[aq]s client ID
+- \[ga]client_secret\[ga]: one of the service principal\[aq]s client secrets
+
+The credentials can also be placed in a file using the
+\[ga]service_principal_file\[ga] configuration option.
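+
+A hedged sketch of the equivalent config entry (all values are
+placeholders):
+
+    [remote]
+    type = azurefiles
+    share_name = share_name
+    tenant = 00000000-0000-0000-0000-000000000000
+    client_id = 00000000-0000-0000-0000-000000000000
+    client_secret = secret_value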
+
+#### Service principal with certificate
+
+If these variables are set, rclone will authenticate with a service principal with certificate.
+
+- \[ga]tenant\[ga]: ID of the service principal\[aq]s tenant. Also called its \[dq]directory\[dq] ID.
+- \[ga]client_id\[ga]: the service principal\[aq]s client ID
+- \[ga]client_certificate_path\[ga]: path to a PEM or PKCS12 certificate file including the private key.
+- \[ga]client_certificate_password\[ga]: (optional) password for the certificate file.
+- \[ga]client_send_certificate_chain\[ga]: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to \[dq]true\[dq] or \[dq]1\[dq], authentication requests include the x5c header.
+
+**NB** \[ga]client_certificate_password\[ga] must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+#### User with username and password
+
+If these variables are set, rclone will authenticate with username and password.
+
+- \[ga]tenant\[ga]: (optional) tenant to authenticate in. Defaults to \[dq]organizations\[dq].
+- \[ga]client_id\[ga]: client ID of the application the user will authenticate to
+- \[ga]username\[ga]: a username (usually an email address)
+- \[ga]password\[ga]: the user\[aq]s password
+
+Microsoft doesn\[aq]t recommend this kind of authentication, because it\[aq]s
+less secure than other authentication flows. This method is not
+interactive, so it isn\[aq]t compatible with any form of multi-factor
+authentication, and the application must already have user or admin
+consent. This credential can only authenticate work and school
+accounts; it can\[aq]t authenticate Microsoft accounts.
+
+**NB** \[ga]password\[ga] must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+#### Managed Service Identity Credentials {#use_msi}
+
+If \[ga]use_msi\[ga] is set then managed service identity credentials are
+used. This authentication only works when running in an Azure service.
+\[ga]env_auth\[ga] needs to be unset to use this.
+
+However if you have multiple user identities to choose from these must
+be explicitly specified using exactly one of the \[ga]msi_object_id\[ga],
+\[ga]msi_client_id\[ga], or \[ga]msi_mi_res_id\[ga] parameters.
+
+If none of \[ga]msi_object_id\[ga], \[ga]msi_client_id\[ga], or \[ga]msi_mi_res_id\[ga] is
+set, this is equivalent to using \[ga]env_auth\[ga].
+
+
+### Standard options
+
+Here are the Standard options specific to azurefiles (Microsoft Azure Files).
+
+#### --azurefiles-account
+
+Azure Storage Account Name.
+
+Set this to the Azure Storage Account Name in use.
+
+Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+
+If this is blank and if env_auth is set it will be read from the
+environment variable \[ga]AZURE_STORAGE_ACCOUNT_NAME\[ga] if possible.
+
+
+Properties:
+
+- Config: account
+- Env Var: RCLONE_AZUREFILES_ACCOUNT
+- Type: string
+- Required: false
+
+#### --azurefiles-share-name
+
+Azure Files Share Name.
+
+This is required and is the name of the share to access.
+
+
+Properties:
+
+- Config: share_name
+- Env Var: RCLONE_AZUREFILES_SHARE_NAME
+- Type: string
+- Required: false
+
+#### --azurefiles-env-auth
+
+Read credentials from runtime (environment variables, CLI or MSI).
+
+See the [authentication docs](/azurefiles#authentication) for full info.
+
+Properties:
+
+- Config: env_auth
+- Env Var: RCLONE_AZUREFILES_ENV_AUTH
+- Type: bool
+- Default: false
+
+#### --azurefiles-key
+
+Storage Account Shared Key.
+
+Leave blank to use SAS URL or connection string.
+
+Properties:
+
+- Config: key
+- Env Var: RCLONE_AZUREFILES_KEY
+- Type: string
+- Required: false
+
+#### --azurefiles-sas-url
+
+SAS URL.
+
+Leave blank if using account/key or connection string.
+
+Properties:
+
+- Config: sas_url
+- Env Var: RCLONE_AZUREFILES_SAS_URL
+- Type: string
+- Required: false
+
+#### --azurefiles-connection-string
+
+Azure Files Connection String.
+
+Properties:
+
+- Config: connection_string
+- Env Var: RCLONE_AZUREFILES_CONNECTION_STRING
+- Type: string
+- Required: false
+
+#### --azurefiles-tenant
+
+ID of the service principal\[aq]s tenant. Also called its directory ID.
+
+Set this if using
+- Service principal with client secret
+- Service principal with certificate
+- User with username and password
+
+
+Properties:
+
+- Config: tenant
+- Env Var: RCLONE_AZUREFILES_TENANT
+- Type: string
+- Required: false
+
+#### --azurefiles-client-id
+
+The ID of the client in use.
+
+Set this if using
+- Service principal with client secret
+- Service principal with certificate
+- User with username and password
+
+
+Properties:
+
+- Config: client_id
+- Env Var: RCLONE_AZUREFILES_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-client-secret
+
+One of the service principal\[aq]s client secrets
+
+Set this if using
+- Service principal with client secret
+
+
+Properties:
+
+- Config: client_secret
+- Env Var: RCLONE_AZUREFILES_CLIENT_SECRET
+- Type: string
+- Required: false
+
+#### --azurefiles-client-certificate-path
+
+Path to a PEM or PKCS12 certificate file including the private key.
+
+Set this if using
+- Service principal with certificate
+
+
+Properties:
+
+- Config: client_certificate_path
+- Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PATH
+- Type: string
+- Required: false
+
+#### --azurefiles-client-certificate-password
+
+Password for the certificate file (optional).
+
+Optionally set this if using
+- Service principal with certificate
+
+And the certificate has a password.
+
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: client_certificate_password
+- Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PASSWORD
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
+
+#### --azurefiles-client-send-certificate-chain
+
+Send the certificate chain when using certificate auth.
+
+Specifies whether an authentication request will include an x5c header
+to support subject name / issuer based authentication. When set to
+true, authentication requests include the x5c header.
+
+Optionally set this if using
+- Service principal with certificate
+
+
+Properties:
+
+- Config: client_send_certificate_chain
+- Env Var: RCLONE_AZUREFILES_CLIENT_SEND_CERTIFICATE_CHAIN
+- Type: bool
+- Default: false
+
+#### --azurefiles-username
+
+User name (usually an email address)
+
+Set this if using
+- User with username and password
+
+
+Properties:
+
+- Config: username
+- Env Var: RCLONE_AZUREFILES_USERNAME
+- Type: string
+- Required: false
+
+#### --azurefiles-password
+
+The user\[aq]s password
+
+Set this if using
+- User with username and password
+
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: password
+- Env Var: RCLONE_AZUREFILES_PASSWORD
+- Type: string
+- Required: false
+
+#### --azurefiles-service-principal-file
+
+Path to file containing credentials for use with a service principal.
+
+Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
+
+ $ az ad sp create-for-rbac --name \[dq]\[dq] \[rs]
+ --role \[dq]Storage Files Data Owner\[dq] \[rs]
+ --scopes \[dq]/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/\[dq] \[rs]
+ > azure-principal.json
+
+See [\[dq]Create an Azure service principal\[dq]](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and [\[dq]Assign an Azure role for access to files data\[dq]](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
+
+**NB** this section needs updating for Azure Files - pull requests appreciated!
+
+It may be more convenient to put the credentials directly into the
+rclone config file under the \[ga]client_id\[ga], \[ga]tenant\[ga] and \[ga]client_secret\[ga]
+keys instead of setting \[ga]service_principal_file\[ga].
+
+
+Properties:
+
+- Config: service_principal_file
+- Env Var: RCLONE_AZUREFILES_SERVICE_PRINCIPAL_FILE
+- Type: string
+- Required: false
+
+#### --azurefiles-use-msi
+
+Use a managed service identity to authenticate (only works in Azure).
+
+When true, use a [managed service identity](https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/)
+to authenticate to Azure Storage instead of a SAS token or account key.
+
+If the VM(SS) on which this program is running has a system-assigned identity, it will
+be used by default. If the resource has no system-assigned but exactly one user-assigned identity,
+the user-assigned identity will be used by default. If the resource has multiple user-assigned
+identities, the identity to use must be explicitly specified using exactly one of the msi_object_id,
+msi_client_id, or msi_mi_res_id parameters.
+
+Properties:
+
+- Config: use_msi
+- Env Var: RCLONE_AZUREFILES_USE_MSI
+- Type: bool
+- Default: false
+
+#### --azurefiles-msi-object-id
+
+Object ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_client_id or msi_mi_res_id specified.
+
+Properties:
+
+- Config: msi_object_id
+- Env Var: RCLONE_AZUREFILES_MSI_OBJECT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-msi-client-id
+
+Client ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_object_id or msi_mi_res_id specified.
+
+Properties:
+
+- Config: msi_client_id
+- Env Var: RCLONE_AZUREFILES_MSI_CLIENT_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-msi-mi-res-id
+
+Azure resource ID of the user-assigned MSI to use, if any.
+
+Leave blank if msi_client_id or msi_object_id specified.
+
+Properties:
+
+- Config: msi_mi_res_id
+- Env Var: RCLONE_AZUREFILES_MSI_MI_RES_ID
+- Type: string
+- Required: false
+
+#### --azurefiles-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_AZUREFILES_ENDPOINT
+- Type: string
+- Required: false
+
+#### --azurefiles-chunk-size
+
+Upload chunk size.
+
+Note that this is stored in memory and there may be up to
+\[dq]--transfers\[dq] * \[dq]--azurefiles-upload-concurrency\[dq] chunks stored at once
+in memory.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_AZUREFILES_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 4Mi
+
+#### --azurefiles-upload-concurrency
+
+Concurrency for multipart uploads.
+
+This is the number of chunks of the same file that are uploaded
+concurrently.
+
+If you are uploading small numbers of large files over high-speed
+links and these uploads do not fully utilize your bandwidth, then
+increasing this may help to speed up the transfers.
+
+Note that chunks are stored in memory and there may be up to
+\[dq]--transfers\[dq] * \[dq]--azurefiles-upload-concurrency\[dq] chunks stored at once
+in memory.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_AZUREFILES_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
+#### --azurefiles-max-stream-size
+
+Max size for streamed files.
+
+Azure files needs to know in advance how big the file will be. When
+rclone doesn\[aq]t know, it uses this value instead.
+
+This will be used when rclone is streaming data; the most common uses are:
+
+- Uploading files with \[ga]--vfs-cache-mode off\[ga] with \[ga]rclone mount\[ga]
+- Using \[ga]rclone rcat\[ga]
+- Copying files with unknown length
+
+You will need this much free space in the share as the file will be this size temporarily.
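+
+For example, when streaming a file that might be larger than the 10Gi
+default you could raise the limit like this (a sketch; the paths are
+placeholders):
+
+    rclone rcat --azurefiles-max-stream-size 20Gi remote:path/to/file < local.log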
+
+
+Properties:
+
+- Config: max_stream_size
+- Env Var: RCLONE_AZUREFILES_MAX_STREAM_SIZE
+- Type: SizeSuffix
+- Default: 10Gi
+
+#### --azurefiles-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_AZUREFILES_ENCODING
+- Type: Encoding
+- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot
+
+
+
+### Custom upload headers
+
+You can set custom upload headers with the \[ga]--header-upload\[ga] flag.
+
+- Cache-Control
+- Content-Disposition
+- Content-Encoding
+- Content-Language
+- Content-Type
+
+Eg \[ga]--header-upload \[dq]Content-Type: text/potato\[dq]\[ga]
+
+## Limitations
+
+MD5 sums are only uploaded with chunked files if the source has an MD5
+sum. This will always be the case for a local to azure copy.
+
# Microsoft OneDrive
Paths are specified as \[ga]remote:path\[ga]
@@ -48267,7 +48437,7 @@ You may try to [verify you account](https://docs.microsoft.com/en-us/azure/activ
Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
-### Modification time and hashes
+### Modification times and hashes
OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -48288,6 +48458,32 @@ your workflow.
For all types of OneDrive you can use the \[ga]--checksum\[ga] flag.
+### --fast-list
+
+This remote supports \[ga]--fast-list\[ga] which allows you to use fewer
+transactions in exchange for more memory. See the [rclone
+docs](https://rclone.org/docs/#fast-list) for more details.
+
+This must be enabled with the \[ga]--onedrive-delta\[ga] flag (or \[ga]delta =
+true\[ga] in the config file) as it can cause performance degradation.
+
+It does this by using the delta listing facilities of OneDrive which
+returns all the files in the remote very efficiently. This is much
+more efficient than listing directories recursively and is Microsoft\[aq]s
+recommended way of reading all the file information from a drive.
+
+This can be useful with \[ga]rclone mount\[ga] and [rclone rc vfs/refresh
+recursive=true](https://rclone.org/rc/#vfs-refresh)) to very quickly fill the mount with
+information about all the files.
+
+The API used for the recursive listing (\[ga]ListR\[ga]) only supports listing
+from the root of the drive. This will become increasingly inefficient
+the further away you get from the root as rclone will have to discard
+files outside of the directory you are using.
+
+Some commands (like \[ga]rclone lsf -R\[ga]) will use \[ga]ListR\[ga] by default - you
+can turn this off with \[ga]--disable ListR\[ga] if you need to.
+
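+For example (a sketch; the remote name and mount point are
+placeholders), run the mount with the delta flag and the remote
+control enabled:
+
+    rclone mount --rc --onedrive-delta onedrive: /mnt/onedrive
+
+then, from another terminal, pre-fill the directory cache with:
+
+    rclone rc vfs/refresh recursive=true
+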
### Restricted filename characters
In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
@@ -48699,6 +48895,43 @@ Properties:
- Type: bool
- Default: false
+#### --onedrive-delta
+
+If set rclone will use delta listing to implement recursive listings.
+
+If this flag is set then the onedrive backend will advertise \[ga]ListR\[ga]
+support for recursive listings.
+
+Setting this flag speeds up these things greatly:
+
+ rclone lsf -R onedrive:
+ rclone size onedrive:
+ rclone rc vfs/refresh recursive=true
+
+**However** the delta listing API **only** works at the root of the
+drive. If you use it anywhere other than the root then it recurses
+from the root and discards all the data that is not under the
+directory you asked for. So it will be correct but may not be very
+efficient.
+
+This is why this flag is not set as the default.
+
+As a rule of thumb if nearly all of your data is under rclone\[aq]s root
+directory (the \[ga]root/directory\[ga] in \[ga]onedrive:root/directory\[ga]) then
+using this flag will be a big performance win. If your data is
+mostly not under the root then using this flag will be a big
+performance loss.
+
+It is recommended if you are mounting your onedrive at the root
+(or near the root when using crypt) and using rclone \[ga]rc vfs/refresh\[ga].
+
+
+Properties:
+
+- Config: delta
+- Env Var: RCLONE_ONEDRIVE_DELTA
+- Type: bool
+- Default: false
+
#### --onedrive-encoding
The encoding for the backend.
@@ -48709,7 +48942,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_ONEDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -49006,12 +49239,14 @@ To copy a local directory to an OpenDrive directory called backup
rclone copy /home/source remote:backup
-### Modified time and MD5SUMs
+### Modification times and hashes
OpenDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
not.
+The MD5 hash algorithm is supported.
+
### Restricted filename characters
| Character | Value | Replacement |
@@ -49085,7 +49320,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_OPENDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
#### --opendrive-chunk-size
@@ -49264,6 +49499,7 @@ Rclone supports the following OCI authentication provider.
No authentication
### User Principal
+
Sample rclone config file for Authentication Provider User Principal:
[oos]
@@ -49284,6 +49520,7 @@ Considerations:
- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user\[aq]s credentials.
### Instance Principal
+
An OCI compute instance can be authorized to use rclone by using it\[aq]s identity and certificates as an instance principal.
With this approach no credentials have to be stored and managed.
@@ -49313,6 +49550,7 @@ Considerations:
- It is applicable for oci compute instances only. It cannot be used on external instance or resources.
### Resource Principal
+
Resource principal auth is very similar to instance principal auth but used for resources that are not
compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
To use resource principal ensure Rclone process is started with these environment variables set in its process.
@@ -49332,6 +49570,7 @@ Sample rclone configuration file for Authentication Provider Resource Principal:
provider = resource_principal_auth
### No authentication
+
Public buckets do not require any authentication mechanism to read objects.
Sample rclone configuration file for No authentication:
@@ -49342,10 +49581,9 @@ Sample rclone configuration file for No authentication:
region = us-ashburn-1
provider = no_auth
-## Options
-### Modified time
+### Modification times and hashes
-The modified time is stored as metadata on the object as
+The modification time is stored as metadata on the object as
\[ga]opc-meta-mtime\[ga] as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server
@@ -49355,6 +49593,8 @@ In the case the object is larger than 5Gb, the object will be uploaded rather th
Note that reading this from the object takes an additional \[ga]HEAD\[ga] request as the metadata
isn\[aq]t returned in object listings.
+The MD5 hash algorithm is supported.
+
### Multipart uploads
rclone supports multipart uploads with OOS which means that it can
@@ -49657,7 +49897,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_OOS_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8,Dot
#### --oos-leave-parts-on-error
@@ -50161,7 +50401,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_QINGSTOR_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Ctl,InvalidUtf8
@@ -50284,7 +50524,7 @@ y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
\[ga]\[ga]\[ga]
T}
T{
-### Modified time and hashes
+### Modification times and hashes
T}
T{
Quatrix allows modification times to be set on objects accurate to 1
@@ -50390,8 +50630,8 @@ T{
Properties:
T}
T{
-- Config: encoding - Env Var: RCLONE_QUATRIX_ENCODING - Type:
-MultiEncoder - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+- Config: encoding - Env Var: RCLONE_QUATRIX_ENCODING - Type: Encoding -
+Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
T}
T{
#### --quatrix-effective-upload-time
@@ -50672,7 +50912,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SIA_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
@@ -50864,7 +51104,7 @@ sufficient to determine if it is \[dq]dirty\[dq]. By using \[ga]--update\[ga] al
\[ga]--use-server-modtime\[ga], you can avoid the extra API call and simply upload
files whose local modtime is newer than the time it was last uploaded.
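
For example (a sketch; the paths and container name are placeholders):

    rclone sync --update --use-server-modtime /path/to/dir remote:container
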
-### Modified time
+### Modification times and hashes
The modified time is stored as metadata on the object as
\[ga]X-Object-Meta-Mtime\[ga] as floating point since the epoch accurate to 1
@@ -50873,6 +51113,8 @@ ns.
This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time for an object.
+The MD5 hash algorithm is supported.
+
### Restricted filename characters
| Character | Value | Replacement |
@@ -51219,7 +51461,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SWIFT_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,InvalidUtf8
@@ -51331,7 +51573,7 @@ To copy a local directory to a pCloud directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes ###
+### Modification times and hashes
pCloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -51470,7 +51712,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PCLOUD_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --pcloud-root-folder-id
@@ -51588,6 +51830,13 @@ y/e/d> y
.IP
.nf
\f[C]
+### Modification times and hashes
+
+PikPak keeps modification times on objects, and updates them when uploading objects,
+but it does not support changing only the modification time.
+
+The MD5 hash algorithm is supported.
+
### Standard options
@@ -51747,7 +51996,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PIKPAK_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
## Backend commands
@@ -51811,15 +52060,16 @@ Result:
-## Limitations ##
+## Limitations
-### Hashes ###
+### Hashes may be empty
PikPak supports MD5 hashes, but they are sometimes empty, especially for user-uploaded files.
-### Deleted files ###
+### Deleted files still visible with trashed-only
-Deleted files will still be visible with \[ga]--pikpak-trashed-only\[ga] even after the trash emptied. This goes away after few days.
+Deleted files will still be visible with \[ga]--pikpak-trashed-only\[ga] even after the
+trash has been emptied. This goes away after a few days.
# premiumize.me
@@ -51889,7 +52139,7 @@ To copy a local directory to an premiumize.me directory called backup
rclone copy /home/source remote:backup
-### Modified time and hashes
+### Modification times and hashes
premiumize.me does not support modification times or hashes, therefore
syncing will default to \[ga]--size-only\[ga] checking. Note that using
@@ -52004,7 +52254,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PREMIUMIZEME_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -52088,10 +52338,12 @@ To copy a local directory to an Proton Drive directory called backup
rclone copy /home/source remote:backup
-### Modified time
+### Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
+The SHA1 hash algorithm is supported.
+
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
@@ -52237,7 +52489,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
@@ -52530,7 +52782,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PUTIO_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
@@ -52612,10 +52864,12 @@ To copy a local directory to an Proton Drive directory called backup
rclone copy /home/source remote:backup
-### Modified time
+### Modification times and hashes
Proton Drive Bridge does not support updating modification times yet.
+The SHA1 hash algorithm is supported.
+
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
@@ -52761,7 +53015,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --protondrive-original-file-size
@@ -53204,7 +53458,7 @@ Properties:
- Config: encoding
- Env Var: RCLONE_SEAFILE_ENCODING
-- Type: MultiEncoder
+- Type: Encoding
- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
@@ -53586,7 +53840,7 @@ Set the configuration option \f[C]disable_hashcheck\f[R] to
\f[C]true\f[R] to disable checksumming entirely, or set
\f[C]shell_type\f[R] to \f[C]none\f[R] to disable all functionality
based on remote shell command execution.
-.SS Modified time
+.SS Modification times and hashes
.PP
Modified times are stored on the server to 1 second precision.
.PP
@@ -54434,6 +54688,34 @@ Env Var: RCLONE_SFTP_SOCKS_PROXY
Type: string
.IP \[bu] 2
Required: false
+.SS --sftp-copy-is-hardlink
+.PP
+Set to enable server side copies using hardlinks.
+.PP
+The SFTP protocol does not define a copy command so normally server side
+copies are not allowed with the sftp backend.
+.PP
+However the SFTP protocol does support hardlinking, and if you enable
+this flag then the sftp backend will support server side copies.
+These will be implemented by doing a hardlink from the source to the
+destination.
+.PP
+Not all sftp servers support this.
+.PP
+Note that hardlinking two files together will use no additional space as
+the source and the destination will be the same file.
+.PP
+This feature may be useful for backups made with --copy-dest.
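+.PP
+For example, a sketch of an incremental backup where unchanged files
+are hardlinked from the previous backup (\f[C]remote:\f[R] is assumed
+to be an sftp remote and the paths are placeholders):
+.IP
+.nf
+\f[C]
+rclone copy /home/data remote:backup/current --copy-dest remote:backup/previous --sftp-copy-is-hardlink
+\f[R]
+.fi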
+.PP
+Properties:
+.IP \[bu] 2
+Config: copy_is_hardlink
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_COPY_IS_HARDLINK
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS Limitations
.PP
On some SFTP servers (e.g.
@@ -54755,7 +55037,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_SMB_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default:
Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
@@ -55483,7 +55765,7 @@ Paths may be as deep as required, e.g.
\f[B]NB\f[R] you can\[aq]t create files in the top level folder you have
to create a folder, which rclone will create as a \[dq]Sync Folder\[dq]
with SugarSync.
-.SS Modified time and hashes
+.SS Modification times and hashes
.PP
SugarSync does not support modification times or hashes, therefore
syncing will default to \f[C]--size-only\f[R] checking.
@@ -55673,7 +55955,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_SUGARSYNC_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default: Slash,Ctl,InvalidUtf8,Dot
.SS Limitations
@@ -55791,7 +56073,7 @@ To copy a local directory to an Uptobox directory called backup
rclone copy /home/source remote:backup
\f[R]
.fi
-.SS Modified time and hashes
+.SS Modification times and hashes
.PP
Uptobox supports neither modified times nor checksums.
All timestamps will read as that set by \f[C]--default-time\f[R].
@@ -55878,7 +56160,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_UPTOBOX_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default:
Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
@@ -55898,8 +56180,8 @@ During the initial setup with \f[C]rclone config\f[R] you will specify
the upstream remotes as a space separated list.
The upstream remotes can either be local paths or other remotes.
.PP
-The attributes \f[C]:ro\f[R], \f[C]:nc\f[R] and \f[C]:nc\f[R] can be
-attached to the end of the remote to tag the remote as \f[B]read
+The attributes \f[C]:ro\f[R], \f[C]:nc\f[R] and \f[C]:writeback\f[R] can
+be attached to the end of the remote to tag the remote as \f[B]read
only\f[R], \f[B]no create\f[R] or \f[B]writeback\f[R], e.g.
\f[C]remote:directory/subdirectory:ro\f[R] or
\f[C]remote:directory/subdirectory:nc\f[R].
@@ -56452,7 +56734,9 @@ Choose a number from below, or type in your own value
\[rs] (sharepoint)
5 / Sharepoint with NTLM authentication, usually self-hosted or on-premises
\[rs] (sharepoint-ntlm)
- 6 / Other site/service or software
+ 6 / rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
+ \[rs] (rclone)
+ 7 / Other site/service or software
\[rs] (other)
vendor> 2
User name
@@ -56510,7 +56794,7 @@ To copy a local directory to an WebDAV directory called backup
rclone copy /home/source remote:backup
\f[R]
.fi
-.SS Modified time and hashes
+.SS Modification times and hashes
.PP
Plain WebDAV does not support modified times.
However when used with Fastmail Files, Owncloud or Nextcloud rclone will
@@ -56588,6 +56872,12 @@ Sharepoint Online, authenticated by Microsoft account
Sharepoint with NTLM authentication, usually self-hosted or on-premises
.RE
.IP \[bu] 2
+\[dq]rclone\[dq]
+.RS 2
+.IP \[bu] 2
+rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol
+.RE
+.IP \[bu] 2
\[dq]other\[dq]
.RS 2
.IP \[bu] 2
@@ -56857,6 +57147,14 @@ datetime property to compare your documents:
--ignore-size --ignore-checksum --update
\f[R]
.fi
+.SS Rclone
+.PP
+Use this option if you are hosting remotes over WebDAV provided by
+rclone.
+Read rclone serve webdav for more details.
+.PP
+rclone serve supports modified times using the \f[C]X-OC-Mtime\f[R]
+header.
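+.PP
+A minimal sketch, assuming you serve a directory with rclone on one
+machine and point a \f[C]webdav\f[R] remote at it (the address, path
+and remote name are illustrative only):
+.IP
+.nf
+\f[C]
+rclone serve webdav /path/to/files --addr :8080
+
+[served]
+type = webdav
+url = http://localhost:8080
+vendor = rclone
+\f[R]
+.fi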
.SS dCache
.PP
dCache is a storage system that supports many protocols and
@@ -57052,14 +57350,13 @@ rclone sync --interactive /home/local/directory remote:directory
.PP
Yandex paths may be as deep as required, e.g.
\f[C]remote:directory/subdirectory\f[R].
-.SS Modified time
+.SS Modification times and hashes
.PP
Modified times are supported and are stored accurate to 1 ns in custom
metadata called \f[C]rclone_modified\f[R] in RFC3339 with nanoseconds
format.
-.SS MD5 checksums
.PP
-MD5 checksums are natively supported by Yandex Disk.
+The MD5 hash algorithm is natively supported by Yandex Disk.
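+.PP
+For example (a sketch only, assuming a remote named \f[C]yandex\f[R])
+the stored MD5 hashes can be listed with:
+.IP
+.nf
+\f[C]
+rclone md5sum yandex:directory
+\f[R]
+.fi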
.SS Emptying Trash
.PP
If you wish to empty your trash you can use the
@@ -57184,7 +57481,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_YANDEX_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default: Slash,Del,Ctl,InvalidUtf8,Dot
.SS Limitations
@@ -57339,12 +57636,11 @@ rclone sync --interactive /home/local/directory remote:directory
.PP
Zoho paths may be as deep as required, eg
\f[C]remote:directory/subdirectory\f[R].
-.SS Modified time
+.SS Modification times and hashes
.PP
Modified times are currently not supported for Zoho Workdrive.
-.SS Checksums
.PP
-No checksums are supported.
+No hash algorithms are supported.
.SS Usage information
.PP
To view your current quota you can use the
@@ -57504,7 +57800,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_ZOHO_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default: Del,Ctl,InvalidUtf8
.SS Setting up your own client_id
@@ -57540,10 +57836,10 @@ For consistencies sake one can also configure a remote of type
\f[C]local\f[R] in the config file, and access the local filesystem
using rclone remote paths, e.g.
\f[C]remote:path/to/wherever\f[R], but it is probably easier not to.
-.SS Modified time
+.SS Modification times
.PP
-Rclone reads and writes the modified time using an accuracy determined
-by the OS.
+Rclone reads and writes the modification times using an accuracy
+determined by the OS.
Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
.SS Filenames
.PP
@@ -58350,6 +58646,13 @@ Only checksum the size that stat gave
.IP \[bu] 2
Don\[aq]t update the stat info for the file
.PP
+\f[B]NB\f[R] do not use this flag on a Windows Volume Shadow (VSS).
+For some unknown reason, files in a VSS sometimes show different sizes
+from the directory listing (where the initial stat value comes from on
+Windows) and when stat is called on them directly.
+Other copy tools always use the direct stat value and setting this flag
+will disable that.
+.PP
Properties:
.IP \[bu] 2
Config: no_check_updated
@@ -58478,7 +58781,7 @@ Config: encoding
.IP \[bu] 2
Env Var: RCLONE_LOCAL_ENCODING
.IP \[bu] 2
-Type: MultiEncoder
+Type: Encoding
.IP \[bu] 2
Default: Slash,Dot
.SS Metadata
@@ -58628,6 +58931,408 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
+.SS v1.65.0 - 2023-11-26
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.64.0...v1.65.0)
+.IP \[bu] 2
+New backends
+.RS 2
+.IP \[bu] 2
+Azure Files (karan, moongdal, Nick Craig-Wood)
+.IP \[bu] 2
+ImageKit (Abhinav Dhiman)
+.IP \[bu] 2
+Linkbox (viktor, Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+New commands
+.RS 2
+.IP \[bu] 2
+\f[C]serve s3\f[R]: Let rclone act as an S3 compatible server (Mikubill,
+Artur Neumann, Saw-jan, Nick Craig-Wood)
+.IP \[bu] 2
+\f[C]nfsmount\f[R]: mount command to provide mount mechanism on macOS
+without FUSE (Saleh Dindar)
+.IP \[bu] 2
+\f[C]serve nfs\f[R]: to serve a remote for use by \f[C]nfsmount\f[R]
+(Saleh Dindar)
+.RE
+.IP \[bu] 2
+New Features
+.RS 2
+.IP \[bu] 2
+install.sh: Clean up temp files in install script (Jacob Hands)
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Update all dependencies (Nick Craig-Wood)
+.IP \[bu] 2
+Refactor version info and icon resource handling on windows (albertony)
+.RE
+.IP \[bu] 2
+doc updates (albertony, alfish2000, asdffdsazqqq, Dimitri Papadopoulos,
+Herby Gillot, Joda St\[:o]\[ss]er, Manoj Ghosh, Nick Craig-Wood)
+.IP \[bu] 2
+Implement \f[C]--metadata-mapper\f[R] to transform metadata with a
+user supplied program (Nick Craig-Wood)
+.IP \[bu] 2
+Add \f[C]ChunkWriterDoesntSeek\f[R] feature flag and set it for b2 (Nick
+Craig-Wood)
+.IP \[bu] 2
+lib/http: Export basic go string functions for use in
+\f[C]--template\f[R] (Gabriel Espinoza)
+.IP \[bu] 2
+makefile: Use POSIX compatible install arguments (Mina Gali\['c])
+.IP \[bu] 2
+operations
+.RS 2
+.IP \[bu] 2
+Use less memory when doing multithread uploads (Nick Craig-Wood)
+.IP \[bu] 2
+Implement \f[C]--partial-suffix\f[R] to control extension of temporary
+file names (Volodymyr)
+.RE
+.IP \[bu] 2
+rc
+.RS 2
+.IP \[bu] 2
+Add \f[C]operations/check\f[R] to the rc API (Nick Craig-Wood)
+.IP \[bu] 2
+Always report an error as JSON (Nick Craig-Wood)
+.IP \[bu] 2
+Set \f[C]Last-Modified\f[R] header for files served by
+\f[C]--rc-serve\f[R] (Nikita Shoshin)
+.RE
+.IP \[bu] 2
+size: Don\[aq]t show duplicate object count when less than 1k (albertony)
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+fshttp: Fix \f[C]--contimeout\f[R] being ignored
+(\[u4F60]\[u77E5]\[u9053]\[u672A]\[u6765]\[u5417])
+.IP \[bu] 2
+march: Fix excessive parallelism when using \f[C]--no-traverse\f[R]
+(Nick Craig-Wood)
+.IP \[bu] 2
+ncdu: Fix crash when re-entering changed directory after rescan (Nick
+Craig-Wood)
+.IP \[bu] 2
+operations
+.RS 2
+.IP \[bu] 2
+Fix overwrite of destination when multi-thread transfer fails (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix invalid UTF-8 when truncating file names when not using
+\f[C]--inplace\f[R] (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+serve dlna: Fix crash on graceful exit (wuxingzhong)
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Disable mount for freebsd and alias cmount as mount on that platform
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Add \f[C]--vfs-refresh\f[R] flag to read all the directories on start
+(Beyond Meat)
+.IP \[bu] 2
+Implement Name() method in WriteFileHandle and ReadFileHandle (Saleh
+Dindar)
+.IP \[bu] 2
+Add go-billy dependency and make sure vfs.Handle implements billy.File
+(Saleh Dindar)
+.IP \[bu] 2
+Error out early if can\[aq]t upload 0 length file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Fix copying from Windows Volume Shadows (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Add support for cold tier (Ivan Yanitra)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Implement \[dq]rclone backend lifecycle\[dq] to read and set bucket
+lifecycles (Nick Craig-Wood)
+.IP \[bu] 2
+Implement \f[C]--b2-lifecycle\f[R] to control lifecycle when creating
+buckets (Nick Craig-Wood)
+.IP \[bu] 2
+Fix listing all buckets when not needed (Nick Craig-Wood)
+.IP \[bu] 2
+Fix multi-thread upload with copyto going to wrong name (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix server side chunked copy when file size was exactly
+\f[C]--b2-copy-cutoff\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix streaming chunked files an exact multiple of chunk size (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Box
+.RS 2
+.IP \[bu] 2
+Filter more EventIDs when polling (David Sze)
+.IP \[bu] 2
+Add more logging for polling (David Sze)
+.IP \[bu] 2
+Fix performance problem reading metadata for single files (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Add read/write metadata support (Nick Craig-Wood)
+.IP \[bu] 2
+Add support for SHA-1 and SHA-256 checksums (rinsuki)
+.IP \[bu] 2
+Add \f[C]--drive-show-all-gdocs\f[R] to allow unexportable gdocs to be
+server side copied (Nick Craig-Wood)
+.IP \[bu] 2
+Add a note that \f[C]--drive-scope\f[R] accepts a comma-separated list of
+scopes (Keigo Imai)
+.IP \[bu] 2
+Fix error updating created time metadata on existing object (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix integration tests by enabling metadata support from the context
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Factor batcher into lib/batcher (Nick Craig-Wood)
+.IP \[bu] 2
+Fix missing encoding for rclone purge (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Google Cloud Storage
+.RS 2
+.IP \[bu] 2
+Fix 400 Bad request errors when using multi-thread copy (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Googlephotos
+.RS 2
+.IP \[bu] 2
+Implement batcher for uploads (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Hdfs
+.RS 2
+.IP \[bu] 2
+Added support for list of namenodes in hdfs remote config
+(Tayo-pasedaRJ)
+.RE
+.IP \[bu] 2
+HTTP
+.RS 2
+.IP \[bu] 2
+Implement set backend command to update running backend (Nick
+Craig-Wood)
+.IP \[bu] 2
+Enable methods used with WebDAV (Alen \[vS]iljak)
+.RE
+.IP \[bu] 2
+Jottacloud
+.RS 2
+.IP \[bu] 2
+Add support for reading and writing metadata (albertony)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Implement ListR method which gives \f[C]--fast-list\f[R] support (Nick
+Craig-Wood)
+.RS 2
+.IP \[bu] 2
+This must be enabled with the \f[C]--onedrive-delta\f[R] flag
+.RE
+.RE
+.IP \[bu] 2
+Quatrix
+.RS 2
+.IP \[bu] 2
+Add partial upload support (Oksana Zhykina)
+.IP \[bu] 2
+Overwrite files on conflict during server-side move (Oksana Zhykina)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Add Linode provider (Nick Craig-Wood)
+.IP \[bu] 2
+Add docs on how to add a new provider (Nick Craig-Wood)
+.IP \[bu] 2
+Fix no error being returned when creating a bucket we don\[aq]t own
+(Nick Craig-Wood)
+.IP \[bu] 2
+Emit a debug message if anonymous credentials are in use (Nick
+Craig-Wood)
+.IP \[bu] 2
+Add \f[C]--s3-disable-multipart-uploads\f[R] flag (Nick Craig-Wood)
+.IP \[bu] 2
+Detect looping when using gcs and versions (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Implement \f[C]--sftp-copy-is-hardlink\f[R] to server side copy as
+hardlink (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Smb
+.RS 2
+.IP \[bu] 2
+Fix incorrect \f[C]about\f[R] size by switching to
+\f[C]github.com/cloudsoda/go-smb2\f[R] fork (Nick Craig-Wood)
+.IP \[bu] 2
+Fix modtime of multithread uploads by setting PartialUploads (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Added an rclone vendor to work with \f[C]rclone serve webdav\f[R]
+(Adithya Kumar)
+.RE
+.SS v1.64.2 - 2023-10-19
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.64.1...v1.64.2)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+selfupdate: Fix \[dq]invalid hashsum signature\[dq] error (Nick
+Craig-Wood)
+.IP \[bu] 2
+build: Fix docker build running out of space (Nick Craig-Wood)
+.RE
+.SS v1.64.1 - 2023-10-17
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.64.0...v1.64.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+cmd: Make \f[C]--progress\f[R] output logs in the same format as without
+(Nick Craig-Wood)
+.IP \[bu] 2
+docs fixes (Dimitri Papadopoulos Orfanos, Herby Gillot, Manoj Ghosh,
+Nick Craig-Wood)
+.IP \[bu] 2
+lsjson: Make sure we set the global metadata flag too (Nick Craig-Wood)
+.IP \[bu] 2
+operations
+.RS 2
+.IP \[bu] 2
+Ensure concurrency is no greater than the number of chunks (Pat
+Patterson)
+.IP \[bu] 2
+Fix OpenOptions ignored in copy if operation was a multiThreadCopy
+(Vitor Gomes)
+.IP \[bu] 2
+Fix error message on delete to have file name (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+serve sftp: Return not supported error for not supported commands (Nick
+Craig-Wood)
+.IP \[bu] 2
+build: Upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid reset
+(Nick Craig-Wood)
+.IP \[bu] 2
+pacer: Fix b2 deadlock by defaulting max connections to unlimited (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Fix automount not detecting drive is ready (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix update dir modification time (Saleh Dindar)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Fix \[dq]fatal error: concurrent map writes\[dq] (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix locking window when getting multipart upload URL (Nick Craig-Wood)
+.IP \[bu] 2
+Fix server side copies greater than 4GB (Nick Craig-Wood)
+.IP \[bu] 2
+Fix chunked streaming uploads (Nick Craig-Wood)
+.IP \[bu] 2
+Reduce default \f[C]--b2-upload-concurrency\f[R] to 4 to reduce memory
+usage (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix the configurator to allow \f[C]/teams/ID\f[R] in the config (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Oracleobjectstorage
+.RS 2
+.IP \[bu] 2
+Fix OpenOptions being ignored in uploadMultipart with chunkWriter (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix slice bounds out of range error when listing (Nick Craig-Wood)
+.IP \[bu] 2
+Fix OpenOptions being ignored in uploadMultipart with chunkWriter (Vitor
+Gomes)
+.RE
+.IP \[bu] 2
+Storj
+.RS 2
+.IP \[bu] 2
+Update storj.io/uplink to v1.12.0 (Kaloyan Raev)
+.RE
.SS v1.64.0 - 2023-09-11
.PP
See commits (https://github.com/rclone/rclone/compare/v1.63.0...v1.64.0)
@@ -58913,7 +59618,7 @@ Hdfs
Retry \[dq]replication in progress\[dq] errors when uploading (Nick
Craig-Wood)
.IP \[bu] 2
-Fix uploading to the wrong object on Update with overriden remote name
+Fix uploading to the wrong object on Update with overridden remote name
(Nick Craig-Wood)
.RE
.IP \[bu] 2
@@ -58934,7 +59639,7 @@ Fix List on a just deleted and remade directory (Nick Craig-Wood)
Oracleobjectstorage
.RS 2
.IP \[bu] 2
-Use rclone\[aq]s rate limiter in mutipart transfers (Manoj Ghosh)
+Use rclone\[aq]s rate limiter in multipart transfers (Manoj Ghosh)
.IP \[bu] 2
Implement \f[C]OpenChunkWriter\f[R] and multi-thread uploads (Manoj
Ghosh)
@@ -59427,7 +60132,7 @@ Report any list errors during \f[C]rclone cleanup\f[R] (albertony)
Putio
.RS 2
.IP \[bu] 2
-Fix uploading to the wrong object on Update with overriden remote name
+Fix uploading to the wrong object on Update with overridden remote name
(Nick Craig-Wood)
.IP \[bu] 2
Fix modification times not being preserved for server side copy and move
@@ -59445,7 +60150,7 @@ Update Scaleway storage classes (Brian Starkey)
.IP \[bu] 2
Fix \f[C]--s3-versions\f[R] on individual objects (Nick Craig-Wood)
.IP \[bu] 2
-Fix hang on aborting multpart upload with iDrive e2 (Nick Craig-Wood)
+Fix hang on aborting multipart upload with iDrive e2 (Nick Craig-Wood)
.IP \[bu] 2
Fix missing \[dq]tier\[dq] metadata (Nick Craig-Wood)
.IP \[bu] 2
@@ -59493,7 +60198,7 @@ Storj
Fix \[dq]uplink: too many requests\[dq] errors when uploading to the
same file (Nick Craig-Wood)
.IP \[bu] 2
-Fix uploading to the wrong object on Update with overriden remote name
+Fix uploading to the wrong object on Update with overridden remote name
(Nick Craig-Wood)
.RE
.IP \[bu] 2
@@ -69309,7 +70014,7 @@ Wang)
Mount
.RS 2
.IP \[bu] 2
-Re-use \f[C]rcat\f[R] internals to support uploads from all remotes
+Reuse \f[C]rcat\f[R] internals to support uploads from all remotes
.RE
.IP \[bu] 2
Dropbox
@@ -72569,7 +73274,7 @@ Jonta <359397+Jonta@users.noreply.github.com>
.IP \[bu] 2
YenForYang
.IP \[bu] 2
-Joda St\[:o]\[ss]er
+SimJoSt / Joda St\[:o]\[ss]er
.IP \[bu] 2
Logeshwaran
.IP \[bu] 2
@@ -73066,6 +73771,70 @@ Volodymyr Kit
David Pedersen
.IP \[bu] 2
Drew Stinnett
+.IP \[bu] 2
+Pat Patterson
+.IP \[bu] 2
+Herby Gillot
+.IP \[bu] 2
+Nikita Shoshin
+.IP \[bu] 2
+rinsuki <428rinsuki+git@gmail.com>
+.IP \[bu] 2
+Beyond Meat <51850644+beyondmeat@users.noreply.github.com>
+.IP \[bu] 2
+Saleh Dindar
+.IP \[bu] 2
+Volodymyr <142890760+vkit-maytech@users.noreply.github.com>
+.IP \[bu] 2
+Gabriel Espinoza <31670639+gspinoza@users.noreply.github.com>
+.IP \[bu] 2
+Keigo Imai
+.IP \[bu] 2
+Ivan Yanitra
+.IP \[bu] 2
+alfish2000
+.IP \[bu] 2
+wuxingzhong
+.IP \[bu] 2
+Adithya Kumar
+.IP \[bu] 2
+Tayo-pasedaRJ <138471223+Tayo-pasedaRJ@users.noreply.github.com>
+.IP \[bu] 2
+Peter Kreuser
+.IP \[bu] 2
+Piyush
+.IP \[bu] 2
+fotile96
+.IP \[bu] 2
+Luc Ritchie
+.IP \[bu] 2
+cynful
+.IP \[bu] 2
+wjielai
+.IP \[bu] 2
+Jack Deng
+.IP \[bu] 2
+Mikubill <31246794+Mikubill@users.noreply.github.com>
+.IP \[bu] 2
+Artur Neumann
+.IP \[bu] 2
+Saw-jan
+.IP \[bu] 2
+Oksana Zhykina
+.IP \[bu] 2
+karan
+.IP \[bu] 2
+viktor
+.IP \[bu] 2
+moongdal
+.IP \[bu] 2
+Mina Gali\['c]
+.IP \[bu] 2
+Alen \[vS]iljak
+.IP \[bu] 2
+\[u4F60]\[u77E5]\[u9053]\[u672A]\[u6765]\[u5417]
+.IP \[bu] 2
+Abhinav Dhiman <8640877+ahnv@users.noreply.github.com>
.SH Contact the rclone project
.SS Forum
.PP