Compare commits

v1.68.1...v1.68.2

34 commits

Author SHA1 Message Date
Nick Craig-Wood
f2d16ab4c5 Version v1.68.2 2024-11-15 12:20:50 +00:00
Nick Craig-Wood
c0fc4fe0ca s3: fix multitenant multipart uploads with CEPH
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3 as documented here:

https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3

However when doing multipart uploads, in the reply from
`CreateMultipart` the `tenant:` was missing from the `Bucket` response
rclone was using to build the `UploadPart` request. This caused the
upload to fail with a 404. This may be a CEPH bug, but it is easy to work around.

This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` in `UploadPart` rather than the ones returned from
`CreateMultipart`, which fixes the problem.
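
A minimal sketch of the idea, assuming the AWS SDK for Go v2
(`ctx`, `client`, `bucket`, `key` and `partReader` are illustrative,
not rclone's actual code):

    mreq := &s3.CreateMultipartUploadInput{Bucket: &bucket, Key: &key}
    mresp, err := client.CreateMultipartUpload(ctx, mreq)
    if err != nil {
        return err
    }
    // Reuse the Bucket and Key we sent - CEPH may return mresp.Bucket
    // without the "tenant:" prefix, which makes UploadPart return 404.
    _, err = client.UploadPart(ctx, &s3.UploadPartInput{
        Bucket:     mreq.Bucket, // not mresp.Bucket
        Key:        mreq.Key,
        UploadId:   mresp.UploadId,
        PartNumber: aws.Int32(1),
        Body:       partReader,
    })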

See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
2024-11-14 16:50:19 +00:00
Nick Craig-Wood
669b2f2669 local: fix permission and ownership on symlinks with --links and --metadata
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.
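
A minimal sketch of the link-safe variants, assuming
golang.org/x/sys/unix (`linkPath`, `uid`, `gid` and `mode` are
illustrative):

    // Lchown sets ownership of the symlink itself, not its target.
    if err := os.Lchown(linkPath, uid, gid); err != nil {
        return err
    }
    // Fchmodat with AT_SYMLINK_NOFOLLOW avoids following the link.
    // Linux rejects this flag for symlinks, hence the debug message.
    if err := unix.Fchmodat(unix.AT_FDCWD, linkPath, mode, unix.AT_SYMLINK_NOFOLLOW); err != nil {
        fs.Debugf(nil, "failed to set permissions on symlink: %v", err)
    }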

This also fixes setting times on symlinks on Windows, which wasn't
implemented for atime and mtime and was incorrectly setting the target
of the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:36:22 +00:00
Dimitrios Slamaris
e1ba10a86e bisync: fix output capture restoring the wrong output for logrus
Before this change, if rclone is used as a library and logrus is used
after a call to rc `sync/bisync`, logging does not work anymore and
leads to writing to a closed pipe.

This change restores the output correctly.

Fixes #8158
2024-11-14 16:36:22 +00:00
Nick Craig-Wood
022442cf58 build: fix comments after golangci-lint upgrade 2024-11-14 16:36:22 +00:00
dependabot[bot]
5cc4488294 build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-14 16:36:22 +00:00
Nick Craig-Wood
ec9566c5c3 pikpak: fix fatal crash on startup with token that can't be refreshed 2024-11-14 16:36:22 +00:00
Nick Craig-Wood
f6976eb4c4 serve s3: fix excess locking which was making serve s3 single threaded
The fix for this was in the upstream library to narrow the locking
window.

See: https://forum.rclone.org/t/can-rclone-serve-s3-handle-more-than-one-client/48329/
2024-11-14 16:36:22 +00:00
Nick Craig-Wood
c242c00799 onedrive: fix Retry-After handling to look at 503 errors also
According to the Microsoft docs a Retry-After header can be returned
on 429 errors and 503 errors, but before this change we were only
checking for it on 429 errors.
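
A minimal sketch of the broadened check (illustrative, not the actual
rclone retry code; only the integer-seconds form of Retry-After is
handled here):

    if resp.StatusCode == http.StatusTooManyRequests ||
        resp.StatusCode == http.StatusServiceUnavailable {
        if ra := resp.Header.Get("Retry-After"); ra != "" {
            if secs, err := strconv.Atoi(ra); err == nil {
                retryAfter = time.Duration(secs) * time.Second
            }
        }
    }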

See: https://forum.rclone.org/t/onedrive-503-response-retry-after-not-used/48045
2024-11-14 16:36:22 +00:00
Kaloyan Raev
bf954b74ff s3: Storj provider: fix server-side copy of files bigger than 5GB
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server side copies.

This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.
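
A sketch of the kind of provider quirk this applies (field and
provider names are illustrative):

    // Storj has no UploadPartCopy, so force every server-side copy to
    // be single-part by raising the cutoff to its maximum value.
    if opt.Provider == "Storj" {
        opt.CopyCutoff = math.MaxInt64
    }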

See https://github.com/storj/roadmap/issues/40
2024-11-14 16:36:22 +00:00
tgfisher
88f0770d0a docs: mention that inline comments are not supported in a filter-file 2024-11-14 16:36:22 +00:00
Randy Bush
41d905c9b0 docs: fix forward refs in step 9 of using your own client id 2024-11-14 16:36:22 +00:00
Alexandre Hamez
300a063b5e docs: fix Scaleway Glacier website URL 2024-11-14 16:36:21 +00:00
Simon Bos
61bf29ed5e dlna: fix loggingResponseWriter disregarding log level 2024-11-14 16:36:21 +00:00
Nick Craig-Wood
3191717572 s3: fix crash when using --s3-download-url after migration to SDKv2
Before this change rclone was crashing when the download URL did not
supply an X-Amz-Storage-Class header.

This change allows the header to be missing.
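
A minimal sketch of the nil-safe handling (illustrative):

    // With --s3-download-url the response may carry no storage class,
    // so only use the header when it is present.
    if sc := resp.Header.Get("X-Amz-Storage-Class"); sc != "" {
        o.storageClass = &sc
    }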

See: https://forum.rclone.org/t/sigsegv-on-ubuntu-24-04/48047
2024-11-14 16:36:21 +00:00
Nick Craig-Wood
961dfe97b5 docs: update overview to show pcloud can set modtime
See 258092f9c6 and #7896
2024-11-14 16:36:21 +00:00
Nick Craig-Wood
22612b4b38 Add RcloneView as a sponsor 2024-11-14 16:36:21 +00:00
Nick Craig-Wood
b9927461c3 accounting: fix wrong message on SIGUSR2 to enable/disable bwlimit
This was caused by the message code only looking at one of the
bandwidth filters, not all of them.

Fixes #8104
2024-11-14 16:36:21 +00:00
wiserain
6d04be99f2 pikpak: fix cid/gcid calculations for fs.OverrideRemote
Previously, cid/gcid (custom hash for pikpak) calculations failed when
attempting to unwrap object info from `fs.OverrideRemote`.

This commit introduces a new function that can correctly unwrap
object info from both regular objects and `fs.OverrideRemote` types,
ensuring accurate cid/gcid calculations for uploads in all scenarios.
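
A sketch of the unwrapping idea, assuming rclone's `fs` package and
that `fs.OverrideRemote` exposes an `UnWrap` method (the helper name
is illustrative):

    // unWrapObjectInfo returns the underlying Object whether oi is a
    // plain Object or an fs.OverrideRemote wrapping one.
    func unWrapObjectInfo(oi fs.ObjectInfo) fs.Object {
        if o, ok := oi.(fs.Object); ok {
            return fs.UnWrapObject(o)
        }
        if do, ok := oi.(*fs.OverrideRemote); ok {
            return do.UnWrap()
        }
        return nil
    }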
2024-11-14 16:36:21 +00:00
nielash
06ae0dfa54 local: fix --copy-links on macOS when cloning
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.

After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.

Cloning is now also supported in --links mode for regular files (which benefit
most from cloning). Symlinks in --links mode continue to be tossed back to be
handled by rclone's special translation logic.
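
A sketch of the macOS cloning call, assuming golang.org/x/sys/unix on
darwin (`src` and `dst` are illustrative):

    // Clonefile follows symlinks by default, which is what
    // --copy-links wants: the link target is cloned, not the link.
    // In --links mode regular files are still cloned, while symlinks
    // are handed back to rclone's translation logic instead.
    if err := unix.Clonefile(src, dst, 0); err != nil {
        return err
    }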

See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
2024-11-14 16:36:21 +00:00
Nick Craig-Wood
912f29b5b8 Start v1.68.2-DEV development 2024-09-24 17:25:53 +01:00
Nick Craig-Wood
8d78768aaa Version v1.68.1 2024-09-24 15:47:01 +01:00
Nick Craig-Wood
6aa924f28d docs: document that fusermount3 may be needed when mounting/unmounting
See: https://forum.rclone.org/t/documentation-fusermount-vs-fusermount3/47816/
2024-09-23 17:33:09 +01:00
wiserain
48f2c2db70 pikpak: fix login issue where token retrieval fails
This addresses the login issue caused by pikpak's recent cancellation
of existing login methods and requirement for additional verification.

To resolve this, we've made the following changes:

1. Similar to lib/oauthutil, we've integrated a mechanism to handle 
captcha tokens.

2. A new pikpakClient has been introduced to wrap the existing
rest.Client and incorporate the necessary headers, including
x-captcha-token, for each request (see the sketch after this list).

3. Several options have been added/removed to support persistent 
user/client identification.

* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent 
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid. 
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed 
by rclone, similar to the OAuth token.
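
A minimal sketch of the wrapper described in point 2, assuming
rclone's lib/rest package (names are illustrative, not the actual
implementation):

    type pikpakClient struct {
        *rest.Client
        captchaToken string
    }

    // CallJSON injects the x-captcha-token header into every request
    // before delegating to the wrapped rest.Client.
    func (c *pikpakClient) CallJSON(ctx context.Context, opts *rest.Opts, request, response interface{}) (*http.Response, error) {
        if opts.ExtraHeaders == nil {
            opts.ExtraHeaders = map[string]string{}
        }
        opts.ExtraHeaders["x-captcha-token"] = c.captchaToken
        return c.Client.CallJSON(ctx, opts, request, response)
    }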

Fixes #7950 #8005
2024-09-23 17:33:09 +01:00
Nick Craig-Wood
a88066aff3 s3: fix rclone ignoring static credentials when env_auth=true
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.

However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.

This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users' expectations.
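
A sketch of the restored precedence, assuming the AWS SDK for Go v2
config and credentials packages (the `Options` fields are
illustrative):

    func makeConfig(ctx context.Context, opt Options) (aws.Config, error) {
        if opt.AccessKeyID != "" && opt.SecretAccessKey != "" {
            // Static keys win even when env_auth = true.
            return config.LoadDefaultConfig(ctx, config.WithCredentialsProvider(
                credentials.NewStaticCredentialsProvider(
                    opt.AccessKeyID, opt.SecretAccessKey, opt.SessionToken)))
        }
        // env_auth: use the default chain - env vars, shared config, IAM.
        return config.LoadDefaultConfig(ctx)
    }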

Fixes #8067
2024-09-23 17:33:09 +01:00
Nick Craig-Wood
75f5b06ff7 fs: fix setting stringArray config values from environment variables
After the config re-organisation, the setting of stringArray config
values (eg `--exclude` set with `RCLONE_EXCLUDE`) was broken and gave
a message like this for `RCLONE_EXCLUDE=*.jpg`:

    Failed to load "filter" default values: failed to initialise "filter" options:
    couldn't parse config item "exclude" = "*.jpg" as []string: parsing "*.jpg" as []string failed:
    invalid character '/' looking for beginning of value

This was caused by the parser trying to parse the input string as a
JSON value.

When the config was re-organised it was thought that the internal
representation of stringArray values was not important as it was never
visible externally; however, this turned out not to be true.

A defined representation was chosen - a comma-separated string - and
this was documented and tests were introduced in this patch.

This potentially introduces a very small backwards incompatibility. In
rclone v1.67.0

    RCLONE_EXCLUDE=a,b

Would be interpreted as

    --exclude "a,b"

Whereas this new code will interpret it as

    --exclude "a" --exclude "b"

The benefit of being able to set multiple values with an environment
variable was deemed to outweigh the very small backwards compatibility
risk.

If a value with a `,` is needed, then use CSV escaping, eg

    RCLONE_EXCLUDE="a,b"

Note this needs to include the quotes, so at the unix shell that would be

    RCLONE_EXCLUDE='"a,b"'
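
Since the value is treated as CSV, the splitting can be illustrated
with Go's encoding/csv (a minimal sketch, assuming the encoding/csv,
os and strings imports):

    r := csv.NewReader(strings.NewReader(os.Getenv("RCLONE_EXCLUDE")))
    fields, err := r.Read()
    // RCLONE_EXCLUDE=a,b    -> fields = ["a", "b"] (two --exclude flags)
    // RCLONE_EXCLUDE="a,b"  -> fields = ["a,b"]    (one --exclude flag)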

Fixes #8063
2024-09-23 17:33:09 +01:00
Nick Craig-Wood
daeeb7c145 rc: fix default value of --metrics-addr
Before this fix it was the empty string, which isn't a good default for a
stringArray.
2024-09-23 17:33:09 +01:00
Nick Craig-Wood
d6a5fc6ffa fs: fix --dump filters not always appearing
Before this fix, we initialised the options blocks in a random order.
This meant that there was a 50/50 chance whether --dump filters would
show the filters or not, as it depended on the "main" block having
been read first to set the Dump flags.

This initialises the options blocks in a defined order - alphabetical,
but with "main" first - which fixes the problem.
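
A minimal sketch of the ordering (illustrative):

    // Alphabetical, but with "main" first, so the Dump flags are set
    // before any other options block is initialised.
    sort.Slice(blocks, func(i, j int) bool {
        a, b := blocks[i].Name, blocks[j].Name
        if a == "main" {
            return b != "main"
        }
        if b == "main" {
            return false
        }
        return a < b
    })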
2024-09-23 17:33:09 +01:00
Nick Craig-Wood
c0bfedf99c docs: correct notes on docker manual build 2024-09-23 17:33:09 +01:00
ttionya
76b76c30bf build: fix docker release build - fixes #8062
This updates the action to use `docker/build-push-action` instead of
`ilteoood/docker_buildx`, which fixes the build problem in testing.
2024-09-23 17:33:09 +01:00
Pawel Palucha
737fcc804f docs: add section for improving performance for s3 2024-09-23 17:33:09 +01:00
Nick Craig-Wood
70f3965354 onedrive: fix spurious "Couldn't decode error response: EOF" DEBUG
This DEBUG was being generated on redirects, which don't have a JSON
body, and was irrelevant.
2024-09-23 17:33:09 +01:00
Divyam
d5c100edaf serve docker: add missing vfs-read-chunk-streams option in docker volume driver 2024-09-23 17:33:09 +01:00
Nick Craig-Wood
dc7458cea0 Start v1.68.1-DEV development 2024-09-23 17:29:48 +01:00
77 changed files with 1324 additions and 1590 deletions

@@ -490,7 +490,7 @@ alphabetical order of full name of remote (e.g. `drive` is ordered as
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features table and the Optional Features table.
- `docs/content/overview.md` - overview docs
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation

MANUAL.html (generated)

@@ -81,7 +81,7 @@
<header id="title-block-header">
<h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p>
<p class="date">Sep 08, 2024</p>
<p class="date">Nov 15, 2024</p>
</header>
<h1 id="rclone-syncs-your-files-to-cloud-storage">Rclone syncs your files to cloud storage</h1>
<p><img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" ></p>
@@ -2964,7 +2964,9 @@ rclone mount remote:path/to/files \\cloud\remote</code></pre>
<p>When running in background mode the user will have to stop the mount manually:</p>
<pre><code># Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount</code></pre>
<p>The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.</p>
<p>The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the <a href="https://rclone.org/commands/rclone_about/">rclone about</a> command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not <a href="https://rclone.org/overview/#optional-features">support</a> the about feature at all, then 1 PiB is set as both the total and the free size.</p>
@@ -3048,7 +3050,7 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib</code></pre>
<p>Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.</p>
<h2 id="systemd">systemd</h2>
<p>When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.</p>
<p>Note that systemd runs mount units without any environment variables including <code>PATH</code> or <code>HOME</code>. This means that tilde (<code>~</code>) expansion will not work and you should provide <code>--config</code> and <code>--cache-dir</code> explicitly as absolute paths via rclone arguments. Since mounting requires the <code>fusermount</code> program, rclone will use the fallback PATH of <code>/bin:/usr/bin</code> in this scenario. Please ensure that <code>fusermount</code> is present on this PATH.</p>
<p>Note that systemd runs mount units without any environment variables including <code>PATH</code> or <code>HOME</code>. This means that tilde (<code>~</code>) expansion will not work and you should provide <code>--config</code> and <code>--cache-dir</code> explicitly as absolute paths via rclone arguments. Since mounting requires the <code>fusermount</code> or <code>fusermount3</code> program, rclone will use the fallback PATH of <code>/bin:/usr/bin</code> in this scenario. Please ensure that <code>fusermount</code>/<code>fusermount3</code> is present on this PATH.</p>
<h2 id="rclone-as-unix-mount-helper">Rclone as Unix mount helper</h2>
<p>The core Unix program <code>/bin/mount</code> normally takes the <code>-t FSTYPE</code> argument then runs the <code>/sbin/mount.FSTYPE</code> helper program passing it mount options as <code>-o key=val,...</code> or <code>--opt=...</code>. Automount (classic or systemd) behaves in a similar way.</p>
<p>rclone by default expects GNU-style flags <code>--key val</code>. To run it as a mount helper you should symlink rclone binary to <code>/sbin/mount.rclone</code> and optionally <code>/usr/bin/rclonefs</code>, e.g. <code>ln -s /usr/bin/rclone /sbin/mount.rclone</code>. rclone will detect it and translate command-line arguments appropriately.</p>
@@ -3482,7 +3484,9 @@ rclone nfsmount remote:path/to/files \\cloud\remote</code></pre>
<p>When running in background mode the user will have to stop the mount manually:</p>
<pre><code># Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount</code></pre>
<p>The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.</p>
<p>The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the <a href="https://rclone.org/commands/rclone_about/">rclone about</a> command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not <a href="https://rclone.org/overview/#optional-features">support</a> the about feature at all, then 1 PiB is set as both the total and the free size.</p>
@@ -3566,7 +3570,7 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib</code></pre>
<p>Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.</p>
<h2 id="systemd-1">systemd</h2>
<p>When running rclone nfsmount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone nfsmount service specified as a requirement will see all files and folders immediately in this mode.</p>
<p>Note that systemd runs mount units without any environment variables including <code>PATH</code> or <code>HOME</code>. This means that tilde (<code>~</code>) expansion will not work and you should provide <code>--config</code> and <code>--cache-dir</code> explicitly as absolute paths via rclone arguments. Since mounting requires the <code>fusermount</code> program, rclone will use the fallback PATH of <code>/bin:/usr/bin</code> in this scenario. Please ensure that <code>fusermount</code> is present on this PATH.</p>
<p>Note that systemd runs mount units without any environment variables including <code>PATH</code> or <code>HOME</code>. This means that tilde (<code>~</code>) expansion will not work and you should provide <code>--config</code> and <code>--cache-dir</code> explicitly as absolute paths via rclone arguments. Since mounting requires the <code>fusermount</code> or <code>fusermount3</code> program, rclone will use the fallback PATH of <code>/bin:/usr/bin</code> in this scenario. Please ensure that <code>fusermount</code>/<code>fusermount3</code> is present on this PATH.</p>
<h2 id="rclone-as-unix-mount-helper-1">Rclone as Unix mount helper</h2>
<p>The core Unix program <code>/bin/mount</code> normally takes the <code>-t FSTYPE</code> argument then runs the <code>/sbin/mount.FSTYPE</code> helper program passing it mount options as <code>-o key=val,...</code> or <code>--opt=...</code>. Automount (classic or systemd) behaves in a similar way.</p>
<p>rclone by default expects GNU-style flags <code>--key val</code>. To run it as a mount helper you should symlink rclone binary to <code>/sbin/mount.rclone</code> and optionally <code>/usr/bin/rclonefs</code>, e.g. <code>ln -s /usr/bin/rclone /sbin/mount.rclone</code>. rclone will detect it and translate command-line arguments appropriately.</p>
@@ -4056,7 +4060,7 @@ htpasswd -B htpasswd anotherUser</code></pre>
<h3 id="rc-options">RC Options</h3>
<p>Flags to control the Remote Control API</p>
<pre><code> --rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default [&quot;localhost:5572&quot;])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -7849,6 +7853,38 @@ export RCLONE_CONFIG_PASS</code></pre>
<p>Verbosity is slightly different, the environment variable equivalent of <code>--verbose</code> or <code>-v</code> is <code>RCLONE_VERBOSE=1</code>, or for <code>-vv</code>, <code>RCLONE_VERBOSE=2</code>.</p>
<p>The same parser is used for the options and the environment variables so they take exactly the same form.</p>
<p>The options set by environment variables can be seen with the <code>-vv</code> flag, e.g. <code>rclone version -vv</code>.</p>
<p>Options that can appear multiple times (type <code>stringArray</code>) are treated slightly differently, as environment variables can only be defined once. In order to allow a simple mechanism for adding one or many items, the input is treated as a <a href="https://godoc.org/encoding/csv">CSV encoded</a> string. For example</p>
<table>
<colgroup>
<col style="width: 52%" />
<col style="width: 47%" />
</colgroup>
<thead>
<tr class="header">
<th>Environment Variable</th>
<th>Equivalent options</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td><code>RCLONE_EXCLUDE="*.jpg"</code></td>
<td><code>--exclude "*.jpg"</code></td>
</tr>
<tr class="even">
<td><code>RCLONE_EXCLUDE="*.jpg,*.png"</code></td>
<td><code>--exclude "*.jpg"</code> <code>--exclude "*.png"</code></td>
</tr>
<tr class="odd">
<td><code>RCLONE_EXCLUDE='"*.jpg","*.png"'</code></td>
<td><code>--exclude "*.jpg"</code> <code>--exclude "*.png"</code></td>
</tr>
<tr class="even">
<td><code>RCLONE_EXCLUDE='"/directory with comma , in it /**"'</code></td>
<td><code>--exclude "/directory with comma , in it /**"</code></td>
</tr>
</tbody>
</table>
<p>If <code>stringArray</code> options are defined as environment variables <strong>and</strong> options on the command line then all the values will be used.</p>
<h3 id="config-file">Config file</h3>
<p>You can set defaults for values in the config file on an individual remote basis. The names of the config items are documented in the page for each backend.</p>
<p>To find the name of the environment variable you need to set, take <code>RCLONE_CONFIG_</code> + name of remote + <code>_</code> + name of config file option and make it all uppercase. Note one implication here is the remote's name must be convertible into a valid environment variable name, so it can only contain letters, digits, or the <code>_</code> (underscore) character.</p>
@@ -8287,12 +8323,14 @@ file2.avi</code></pre>
<p>Adds path/file names to an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules. Include rules start with <code>+</code> and exclude rules with <code>-</code>. <code>!</code> clears existing rules. Rules are processed in the order they are defined.</p>
<p>This flag can be repeated. See above for the order filter flags are processed in.</p>
<p>Arrange the order of filter rules with the most restrictive first and work down.</p>
<p>Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. <em>Use <code>-vv --dump filters</code> to see how they appear in the final regexp.</em></p>
<p>E.g. for <code>filter-file.txt</code>:</p>
<pre><code># a sample filter rule file
- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@@ -10382,7 +10420,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
<tr class="odd">
<td>pCloud</td>
<td style="text-align: center;">MD5, SHA1 ⁷</td>
<td style="text-align: center;">R</td>
<td style="text-align: center;">R/W</td>
<td style="text-align: center;">No</td>
<td style="text-align: center;">No</td>
<td style="text-align: center;">W</td>
@@ -11932,7 +11970,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.68.0&quot;)</code></pre>
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.68.2&quot;)</code></pre>
<h2 id="performance">Performance</h2>
<p>Flags helpful for increasing performance.</p>
<pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -12033,7 +12071,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
<h2 id="rc-1">RC</h2>
<p>Flags to control the Remote Control API.</p>
<pre><code> --rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default [&quot;localhost:5572&quot;])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -12063,7 +12101,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--rc-web-gui-update Check and update to latest version of web gui</code></pre>
<h2 id="metrics-1">Metrics</h2>
<p>Flags to control the Metrics HTTP endpoint.</p>
<pre><code> --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [&quot;&quot;])
<pre><code> --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -12551,21 +12589,18 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
--pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--pikpak-user-agent string HTTP user agent for pikpak (default &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0&quot;)
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it&#39;s fine to leave (default &quot;https://pixeldrain.com/api&quot;)
--pixeldrain-description string Description of the remote
@@ -14529,6 +14564,18 @@ y/e/d&gt;</code></pre>
<p>By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.</p>
<p>You can disable this with the <a href="#s3-no-head">--s3-no-head</a> option - see there for more details.</p>
<p>Setting this flag increases the chance for undetected upload failures.</p>
<h3 id="increasing-performance">Increasing performance</h3>
<h4 id="using-server-side-copy">Using server-side copy</h4>
<p>If you are copying objects between S3 buckets in the same region, you should use server-side copy. This is much faster than downloading and re-uploading the objects, as no data is transferred.</p>
<p>For rclone to use server-side copy, you must use the same remote for the source and destination.</p>
<pre><code>rclone copy s3:source-bucket s3:destination-bucket</code></pre>
<p>When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.</p>
<h4 id="increasing-the-rate-of-api-requests">Increasing the rate of API requests</h4>
<p>You can increase the rate of API requests to S3 by increasing the parallelism using the <code>--transfers</code> and <code>--checkers</code> options.</p>
<p>Rclone uses very conservative defaults for these settings, as not all providers support high request rates. Depending on your provider, you can significantly increase the number of transfers and checkers.</p>
<p>For example, with AWS S3 you can increase the number of checkers to values like 200. If you are doing a server-side copy, you can also increase the number of transfers to 200.</p>
<pre><code>rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket</code></pre>
<p>You will need to experiment with these values to find the optimal settings for your setup.</p>
<h3 id="versions">Versions</h3>
<p>When bucket versioning is enabled (this can be done with rclone with the <a href="#versioning"><code>rclone backend versioning</code></a> command), when rclone uploads a new version of a file it creates a <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html">new version of it</a>. Likewise when you delete a file, the old version will be marked hidden and still be available.</p>
<p>Old versions of files, where available, are visible using the <a href="#s3-versions"><code>--s3-versions</code></a> flag.</p>
@@ -17123,7 +17170,7 @@ acl = private
upload_cutoff = 5M
chunk_size = 5M
copy_cutoff = 5M</code></pre>
<p><a href="https://www.online.net/en/storage/c14-cold-storage">C14 Cold Storage</a> is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" <code>storage_class</code>. So you can configure your remote with the <code>storage_class = GLACIER</code> option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)</p>
<p><a href="https://www.scaleway.com/en/glacier-cold-storage/">Scaleway Glacier</a> is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" <code>storage_class</code>. So you can configure your remote with the <code>storage_class = GLACIER</code> option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)</p>
<h3 id="lyve">Seagate Lyve Cloud</h3>
<p><a href="https://www.seagate.com/gb/en/services/cloud/storage/">Seagate Lyve Cloud</a> is an S3 compatible object storage platform from <a href="https://seagate.com/">Seagate</a> intended for enterprise use.</p>
<p>Here is a config run through for a remote called <code>remote</code> - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.</p>
@@ -18322,7 +18369,7 @@ cos s3</code></pre>
<p>For Netease NOS configure as per the configurator <code>rclone config</code> setting the provider <code>Netease</code>. This will automatically set <code>force_path_style = false</code> which is necessary for it to run properly.</p>
<h3 id="petabox">Petabox</h3>
<p>Here is an example of making a <a href="https://petabox.io/">Petabox</a> configuration. First run:</p>
<div class="sourceCode" id="cb967"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb967-1"><a href="#cb967-1" aria-hidden="true"></a><span class="ex">rclone</span> config</span></code></pre></div>
<div class="sourceCode" id="cb969"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb969-1"><a href="#cb969-1" aria-hidden="true"></a><span class="ex">rclone</span> config</span></code></pre></div>
<p>This will guide you through an interactive setup process.</p>
<pre><code>No remotes found, make a new one?
n) New remote
@@ -24568,7 +24615,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2</code></pre>
<li><p>Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".</p></li>
<li><p>Choose an application type of "Desktop app" and click "Create". (the default name is fine)</p></li>
<li><p>It will show you a client ID and client secret. Make a note of these.</p>
<p>(If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step 10 but your destination drive must be part of the same Google Workspace.)</p></li>
<p>(If you selected "External" at Step 5 continue to Step 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11 but your destination drive must be part of the same Google Workspace.)</p></li>
<li><p>Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. You will also want to add yourself as a test user.</p></li>
<li><p>Provide the noted client ID and client secret to rclone.</p></li>
</ol>
@@ -29660,75 +29707,75 @@ rclone rc vfs/refresh recursive=true</code></pre>
<p>Permissions are also supported, if <code>--onedrive-metadata-permissions</code> is set. The accepted values for <code>--onedrive-metadata-permissions</code> are "<code>read</code>", "<code>write</code>", "<code>read,write</code>", and "<code>off</code>" (the default). "<code>write</code>" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "<code>read,write</code>" instead of "<code>write</code>" if you wish to update/remove permissions.</p>
<p>Permissions are read/written in JSON format using the same schema as the <a href="https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online">OneDrive API</a>, which differs slightly between OneDrive Personal and Business.</p>
<p>Example for OneDrive Personal:</p>
<div class="sourceCode" id="cb1258"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1258-1"><a href="#cb1258-1" aria-hidden="true"></a><span class="ot">[</span></span>
<span id="cb1258-2"><a href="#cb1258-2" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1258-3"><a href="#cb1258-3" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;1234567890ABC!123&quot;</span><span class="fu">,</span></span>
<span id="cb1258-4"><a href="#cb1258-4" aria-hidden="true"></a> <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1258-5"><a href="#cb1258-5" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1258-6"><a href="#cb1258-6" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1258-7"><a href="#cb1258-7" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1258-8"><a href="#cb1258-8" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1258-9"><a href="#cb1258-9" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1258-10"><a href="#cb1258-10" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1258-11"><a href="#cb1258-11" aria-hidden="true"></a> <span class="dt">&quot;invitation&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1258-12"><a href="#cb1258-12" aria-hidden="true"></a> <span class="dt">&quot;email&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1258-13"><a href="#cb1258-13" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1258-14"><a href="#cb1258-14" aria-hidden="true"></a> <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1258-15"><a href="#cb1258-15" aria-hidden="true"></a> <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://1drv.ms/t/s!1234567890ABC&quot;</span></span>
<span id="cb1258-16"><a href="#cb1258-16" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1258-17"><a href="#cb1258-17" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1258-18"><a href="#cb1258-18" aria-hidden="true"></a> <span class="st">&quot;read&quot;</span></span>
<span id="cb1258-19"><a href="#cb1258-19" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1258-20"><a href="#cb1258-20" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;s!1234567890ABC&quot;</span></span>
<span id="cb1258-21"><a href="#cb1258-21" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1258-22"><a href="#cb1258-22" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<div class="sourceCode" id="cb1260"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1260-1"><a href="#cb1260-1" aria-hidden="true"></a><span class="ot">[</span></span>
<span id="cb1260-2"><a href="#cb1260-2" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1260-3"><a href="#cb1260-3" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;1234567890ABC!123&quot;</span><span class="fu">,</span></span>
<span id="cb1260-4"><a href="#cb1260-4" aria-hidden="true"></a> <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1260-5"><a href="#cb1260-5" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1260-6"><a href="#cb1260-6" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1260-7"><a href="#cb1260-7" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1260-8"><a href="#cb1260-8" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1260-9"><a href="#cb1260-9" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1260-10"><a href="#cb1260-10" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1260-11"><a href="#cb1260-11" aria-hidden="true"></a> <span class="dt">&quot;invitation&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1260-12"><a href="#cb1260-12" aria-hidden="true"></a> <span class="dt">&quot;email&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1260-13"><a href="#cb1260-13" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1260-14"><a href="#cb1260-14" aria-hidden="true"></a> <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1260-15"><a href="#cb1260-15" aria-hidden="true"></a> <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://1drv.ms/t/s!1234567890ABC&quot;</span></span>
<span id="cb1260-16"><a href="#cb1260-16" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1260-17"><a href="#cb1260-17" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1260-18"><a href="#cb1260-18" aria-hidden="true"></a> <span class="st">&quot;read&quot;</span></span>
<span id="cb1260-19"><a href="#cb1260-19" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1260-20"><a href="#cb1260-20" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;s!1234567890ABC&quot;</span></span>
<span id="cb1260-21"><a href="#cb1260-21" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1260-22"><a href="#cb1260-22" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<p>Example for OneDrive Business:</p>
<div class="sourceCode" id="cb1259"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1259-1"><a href="#cb1259-1" aria-hidden="true"></a><span class="ot">[</span></span>
<span id="cb1259-2"><a href="#cb1259-2" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1259-3"><a href="#cb1259-3" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;48d31887-5fad-4d73-a9f5-3c356e68a038&quot;</span><span class="fu">,</span></span>
<span id="cb1259-4"><a href="#cb1259-4" aria-hidden="true"></a> <span class="dt">&quot;grantedToIdentities&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1259-5"><a href="#cb1259-5" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1259-6"><a href="#cb1259-6" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1259-7"><a href="#cb1259-7" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1259-8"><a href="#cb1259-8" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1259-9"><a href="#cb1259-9" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1259-10"><a href="#cb1259-10" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1259-11"><a href="#cb1259-11" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1259-12"><a href="#cb1259-12" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1259-13"><a href="#cb1259-13" aria-hidden="true"></a> <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1259-14"><a href="#cb1259-14" aria-hidden="true"></a> <span class="dt">&quot;type&quot;</span><span class="fu">:</span> <span class="st">&quot;view&quot;</span><span class="fu">,</span></span>
<span id="cb1259-15"><a href="#cb1259-15" aria-hidden="true"></a> <span class="dt">&quot;scope&quot;</span><span class="fu">:</span> <span class="st">&quot;users&quot;</span><span class="fu">,</span></span>
<span id="cb1259-16"><a href="#cb1259-16" aria-hidden="true"></a> <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s&quot;</span></span>
<span id="cb1259-17"><a href="#cb1259-17" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1259-18"><a href="#cb1259-18" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1259-19"><a href="#cb1259-19" aria-hidden="true"></a> <span class="st">&quot;read&quot;</span></span>
<span id="cb1259-20"><a href="#cb1259-20" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1259-21"><a href="#cb1259-21" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;u!LKj1lkdlals90j1nlkascl&quot;</span></span>
<span id="cb1259-22"><a href="#cb1259-22" aria-hidden="true"></a> <span class="fu">}</span><span class="ot">,</span></span>
<span id="cb1259-23"><a href="#cb1259-23" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1259-24"><a href="#cb1259-24" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;5D33DD65C6932946&quot;</span><span class="fu">,</span></span>
<span id="cb1259-25"><a href="#cb1259-25" aria-hidden="true"></a> <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1259-26"><a href="#cb1259-26" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1259-27"><a href="#cb1259-27" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;John Doe&quot;</span><span class="fu">,</span></span>
<span id="cb1259-28"><a href="#cb1259-28" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;efee1b77-fb3b-4f65-99d6-274c11914d12&quot;</span></span>
<span id="cb1259-29"><a href="#cb1259-29" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1259-30"><a href="#cb1259-30" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1259-31"><a href="#cb1259-31" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1259-32"><a href="#cb1259-32" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1259-33"><a href="#cb1259-33" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1259-34"><a href="#cb1259-34" aria-hidden="true"></a> <span class="st">&quot;owner&quot;</span></span>
<span id="cb1259-35"><a href="#cb1259-35" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1259-36"><a href="#cb1259-36" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U&quot;</span></span>
<span id="cb1259-37"><a href="#cb1259-37" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1259-38"><a href="#cb1259-38" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<div class="sourceCode" id="cb1261"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1261-1"><a href="#cb1261-1" aria-hidden="true"></a><span class="ot">[</span></span>
<span id="cb1261-2"><a href="#cb1261-2" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1261-3"><a href="#cb1261-3" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;48d31887-5fad-4d73-a9f5-3c356e68a038&quot;</span><span class="fu">,</span></span>
<span id="cb1261-4"><a href="#cb1261-4" aria-hidden="true"></a> <span class="dt">&quot;grantedToIdentities&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1261-5"><a href="#cb1261-5" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1261-6"><a href="#cb1261-6" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1261-7"><a href="#cb1261-7" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1261-8"><a href="#cb1261-8" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1261-9"><a href="#cb1261-9" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1261-10"><a href="#cb1261-10" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1261-11"><a href="#cb1261-11" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1261-12"><a href="#cb1261-12" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1261-13"><a href="#cb1261-13" aria-hidden="true"></a> <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1261-14"><a href="#cb1261-14" aria-hidden="true"></a> <span class="dt">&quot;type&quot;</span><span class="fu">:</span> <span class="st">&quot;view&quot;</span><span class="fu">,</span></span>
<span id="cb1261-15"><a href="#cb1261-15" aria-hidden="true"></a> <span class="dt">&quot;scope&quot;</span><span class="fu">:</span> <span class="st">&quot;users&quot;</span><span class="fu">,</span></span>
<span id="cb1261-16"><a href="#cb1261-16" aria-hidden="true"></a> <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s&quot;</span></span>
<span id="cb1261-17"><a href="#cb1261-17" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1261-18"><a href="#cb1261-18" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1261-19"><a href="#cb1261-19" aria-hidden="true"></a> <span class="st">&quot;read&quot;</span></span>
<span id="cb1261-20"><a href="#cb1261-20" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1261-21"><a href="#cb1261-21" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;u!LKj1lkdlals90j1nlkascl&quot;</span></span>
<span id="cb1261-22"><a href="#cb1261-22" aria-hidden="true"></a> <span class="fu">}</span><span class="ot">,</span></span>
<span id="cb1261-23"><a href="#cb1261-23" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1261-24"><a href="#cb1261-24" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;5D33DD65C6932946&quot;</span><span class="fu">,</span></span>
<span id="cb1261-25"><a href="#cb1261-25" aria-hidden="true"></a> <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1261-26"><a href="#cb1261-26" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1261-27"><a href="#cb1261-27" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;John Doe&quot;</span><span class="fu">,</span></span>
<span id="cb1261-28"><a href="#cb1261-28" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;efee1b77-fb3b-4f65-99d6-274c11914d12&quot;</span></span>
<span id="cb1261-29"><a href="#cb1261-29" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1261-30"><a href="#cb1261-30" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1261-31"><a href="#cb1261-31" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1261-32"><a href="#cb1261-32" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1261-33"><a href="#cb1261-33" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1261-34"><a href="#cb1261-34" aria-hidden="true"></a> <span class="st">&quot;owner&quot;</span></span>
<span id="cb1261-35"><a href="#cb1261-35" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1261-36"><a href="#cb1261-36" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U&quot;</span></span>
<span id="cb1261-37"><a href="#cb1261-37" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1261-38"><a href="#cb1261-38" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<p>To write permissions, pass in a "permissions" metadata key using this same format. The <a href="https://rclone.org/docs/#metadata-mapper"><code>--metadata-mapper</code></a> tool can be very helpful for this.</p>
<p>When adding permissions, an email address can be provided in the <code>User.ID</code> or <code>DisplayName</code> properties of <code>grantedTo</code> or <code>grantedToIdentities</code>. Alternatively, an ObjectID can be provided in <code>User.ID</code>. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if <code>Link.Scope</code> is set to <code>"anonymous"</code>.</p>
<p>Example request to add a "read" permission with <code>--metadata-mapper</code>:</p>
<div class="sourceCode" id="cb1260"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1260-1"><a href="#cb1260-1" aria-hidden="true"></a><span class="fu">{</span></span>
<span id="cb1260-2"><a href="#cb1260-2" aria-hidden="true"></a> <span class="dt">&quot;Metadata&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1260-3"><a href="#cb1260-3" aria-hidden="true"></a> <span class="dt">&quot;permissions&quot;</span><span class="fu">:</span> <span class="st">&quot;[{</span><span class="ch">\&quot;</span><span class="st">grantedToIdentities</span><span class="ch">\&quot;</span><span class="st">:[{</span><span class="ch">\&quot;</span><span class="st">user</span><span class="ch">\&quot;</span><span class="st">:{</span><span class="ch">\&quot;</span><span class="st">id</span><span class="ch">\&quot;</span><span class="st">:</span><span class="ch">\&quot;</span><span class="st">ryan@contoso.com</span><span class="ch">\&quot;</span><span class="st">}}],</span><span class="ch">\&quot;</span><span class="st">roles</span><span class="ch">\&quot;</span><span class="st">:[</span><span class="ch">\&quot;</span><span class="st">read</span><span class="ch">\&quot;</span><span class="st">]}]&quot;</span></span>
<span id="cb1260-4"><a href="#cb1260-4" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1260-5"><a href="#cb1260-5" aria-hidden="true"></a><span class="fu">}</span></span></code></pre></div>
<div class="sourceCode" id="cb1262"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1262-1"><a href="#cb1262-1" aria-hidden="true"></a><span class="fu">{</span></span>
<span id="cb1262-2"><a href="#cb1262-2" aria-hidden="true"></a> <span class="dt">&quot;Metadata&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1262-3"><a href="#cb1262-3" aria-hidden="true"></a> <span class="dt">&quot;permissions&quot;</span><span class="fu">:</span> <span class="st">&quot;[{</span><span class="ch">\&quot;</span><span class="st">grantedToIdentities</span><span class="ch">\&quot;</span><span class="st">:[{</span><span class="ch">\&quot;</span><span class="st">user</span><span class="ch">\&quot;</span><span class="st">:{</span><span class="ch">\&quot;</span><span class="st">id</span><span class="ch">\&quot;</span><span class="st">:</span><span class="ch">\&quot;</span><span class="st">ryan@contoso.com</span><span class="ch">\&quot;</span><span class="st">}}],</span><span class="ch">\&quot;</span><span class="st">roles</span><span class="ch">\&quot;</span><span class="st">:[</span><span class="ch">\&quot;</span><span class="st">read</span><span class="ch">\&quot;</span><span class="st">]}]&quot;</span></span>
<span id="cb1262-4"><a href="#cb1262-4" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1262-5"><a href="#cb1262-5" aria-hidden="true"></a><span class="fu">}</span></span></code></pre></div>
<p>Note that adding a permission can fail if a conflicting permission already exists for the file/folder.</p>
<p>To update an existing permission, include both the Permission ID and the new <code>roles</code> to be assigned. <code>roles</code> is the only property that can be changed.</p>
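<p>For example, a sketch of an update blob, assuming <code>"123ABC"</code> is a hypothetical Permission ID previously read back from the item's <code>permissions</code> metadata:</p>
<pre><code>{
  "Metadata": {
    "permissions": "[{\"id\":\"123ABC\",\"roles\":[\"write\"]}]"
  }
}</code></pre>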
<p>To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all). Note that the <code>owner</code> role will be ignored, as it cannot be removed.</p>
@ -32200,54 +32247,24 @@ y/e/d&gt; y</code></pre>
</ul>
<h3 id="advanced-options-42">Advanced options</h3>
<p>Here are the Advanced options specific to pikpak (PikPak).</p>
<h4 id="pikpak-client-id">--pikpak-client-id</h4>
<p>OAuth Client Id.</p>
<p>Leave blank normally.</p>
<h4 id="pikpak-device-id">--pikpak-device-id</h4>
<p>Device ID used for authorization.</p>
<p>Properties:</p>
<ul>
<li>Config: client_id</li>
<li>Env Var: RCLONE_PIKPAK_CLIENT_ID</li>
<li>Config: device_id</li>
<li>Env Var: RCLONE_PIKPAK_DEVICE_ID</li>
<li>Type: string</li>
<li>Required: false</li>
</ul>
<h4 id="pikpak-client-secret">--pikpak-client-secret</h4>
<p>OAuth Client Secret.</p>
<p>Leave blank normally.</p>
<h4 id="pikpak-user-agent">--pikpak-user-agent</h4>
<p>HTTP user agent for pikpak.</p>
<p>Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.</p>
<p>Properties:</p>
<ul>
<li>Config: client_secret</li>
<li>Env Var: RCLONE_PIKPAK_CLIENT_SECRET</li>
<li>Config: user_agent</li>
<li>Env Var: RCLONE_PIKPAK_USER_AGENT</li>
<li>Type: string</li>
<li>Required: false</li>
</ul>
<h4 id="pikpak-token">--pikpak-token</h4>
<p>OAuth Access Token as a JSON blob.</p>
<p>Properties:</p>
<ul>
<li>Config: token</li>
<li>Env Var: RCLONE_PIKPAK_TOKEN</li>
<li>Type: string</li>
<li>Required: false</li>
</ul>
<h4 id="pikpak-auth-url">--pikpak-auth-url</h4>
<p>Auth server URL.</p>
<p>Leave blank to use the provider defaults.</p>
<p>Properties:</p>
<ul>
<li>Config: auth_url</li>
<li>Env Var: RCLONE_PIKPAK_AUTH_URL</li>
<li>Type: string</li>
<li>Required: false</li>
</ul>
<h4 id="pikpak-token-url">--pikpak-token-url</h4>
<p>Token server url.</p>
<p>Leave blank to use the provider defaults.</p>
<p>Properties:</p>
<ul>
<li>Config: token_url</li>
<li>Env Var: RCLONE_PIKPAK_TOKEN_URL</li>
<li>Type: string</li>
<li>Required: false</li>
<li>Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"</li>
</ul>
<h4 id="pikpak-root-folder-id">--pikpak-root-folder-id</h4>
<p>ID of the root folder. Leave blank normally.</p>
@ -36949,6 +36966,79 @@ $ tree /tmp/c
<li>"error": return an error based on option value</li>
</ul>
<h1 id="changelog-1">Changelog</h1>
<h2 id="v1.68.2---2024-11-15">v1.68.2 - 2024-11-15</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2">See commits</a></p>
<ul>
<li>Security fixes
<ul>
<li>local backend: CVE-2024-52522: fix permission and ownership on symlinks with <code>--links</code> and <code>--metadata</code> (Nick Craig-Wood)
<ul>
<li>Only affects users using <code>--metadata</code> and <code>--links</code> and copying files to the local backend</li>
<li>See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv</li>
</ul></li>
<li>build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
<ul>
<li>This is an issue in a dependency which is used for JWT certificates</li>
<li>See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r</li>
</ul></li>
</ul></li>
<li>Bug Fixes
<ul>
<li>accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)</li>
<li>bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)</li>
<li>dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)</li>
<li>serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)</li>
<li>doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)</li>
</ul></li>
<li>Local
<ul>
<li>Fix permission and ownership on symlinks with <code>--links</code> and <code>--metadata</code> (Nick Craig-Wood)</li>
<li>Fix <code>--copy-links</code> on macOS when cloning (nielash)</li>
</ul></li>
<li>Onedrive
<ul>
<li>Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)</li>
</ul></li>
<li>Pikpak
<ul>
<li>Fix cid/gcid calculations for fs.OverrideRemote (wiserain)</li>
<li>Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)</li>
</ul></li>
<li>S3
<ul>
<li>Fix crash when using <code>--s3-download-url</code> after migration to SDKv2 (Nick Craig-Wood)</li>
<li>Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)</li>
<li>Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.68.1---2024-09-24">v1.68.1 - 2024-09-24</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>build: Fix docker release build (ttionya)</li>
<li>doc fixes (Nick Craig-Wood, Pawel Palucha)</li>
<li>fs
<ul>
<li>Fix <code>--dump filters</code> not always appearing (Nick Craig-Wood)</li>
<li>Fix setting <code>stringArray</code> config values from environment variables (Nick Craig-Wood)</li>
</ul></li>
<li>rc: Fix default value of <code>--metrics-addr</code> (Nick Craig-Wood)</li>
<li>serve docker: Add missing <code>vfs-read-chunk-streams</code> option in docker volume driver (Divyam)</li>
</ul></li>
<li>Onedrive
<ul>
<li>Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick Craig-Wood)</li>
</ul></li>
<li>Pikpak
<ul>
<li>Fix login issue where token retrieval fails (wiserain)</li>
</ul></li>
<li>S3
<ul>
<li>Fix rclone ignoring static credentials when <code>env_auth=true</code> (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.68.0---2024-09-08">v1.68.0 - 2024-09-08</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0">See commits</a></p>
<ul>

MANUAL.md generated

@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Sep 08, 2024
% Nov 15, 2024
# Rclone syncs your files to cloud storage
@ -5259,7 +5259,9 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@ -5603,9 +5605,9 @@ Note that systemd runs mount units without any environment variables including
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
Since mounting requires the `fusermount` program, rclone will use the fallback
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
is present on this PATH.
Since mounting requires the `fusermount` or `fusermount3` program,
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
## Rclone as Unix mount helper
@ -6472,7 +6474,9 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@ -6816,9 +6820,9 @@ Note that systemd runs mount units without any environment variables including
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
Since mounting requires the `fusermount` program, rclone will use the fallback
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
is present on this PATH.
Since mounting requires the `fusermount` or `fusermount3` program,
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
## Rclone as Unix mount helper
@ -7734,7 +7738,7 @@ Flags to control the Remote Control API
```
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -16254,6 +16258,22 @@ so they take exactly the same form.
The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
Options that can appear multiple times (type `stringArray`) are
treated slightly differently as environment variables can only be
defined once. In order to allow a simple mechanism for adding one or
many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/csv)
string. For example
| Environment Variable | Equivalent options |
|----------------------|--------------------|
| `RCLONE_EXCLUDE="*.jpg"` | `--exclude "*.jpg"` |
| `RCLONE_EXCLUDE="*.jpg,*.png"` | `--exclude "*.jpg"` `--exclude "*.png"` |
| `RCLONE_EXCLUDE='"*.jpg","*.png"'` | `--exclude "*.jpg"` `--exclude "*.png"` |
| `RCLONE_EXCLUDE='"/directory with comma , in it /**"'` | `--exclude "/directory with comma , in it /**" |
If `stringArray` options are defined as environment variables **and**
options on the command line then all the values will be used.
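For example, this hypothetical invocation excludes both `*.jpg` (from
the environment) and `*.png` (from the command line):

```
RCLONE_EXCLUDE="*.jpg" rclone ls remote:path --exclude "*.png"
```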
### Config file ###
You can set defaults for values in the config file on an individual
@ -16950,6 +16970,8 @@ processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. _Use `-vv --dump filters` to see how they appear in the final regexp._
E.g. for `filter-file.txt`:
# a sample filter rule file
@ -16957,6 +16979,7 @@ E.g. for `filter-file.txt`:
+ *.jpg
+ *.png
+ file2.avi
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@ -19767,7 +19790,7 @@ Here is an overview of the major features of each cloud storage system.
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
| PikPak | MD5 | R | No | No | R | - |
| Pixeldrain | SHA256 | R/W | No | No | R | RW |
| premiumize.me | - | - | Yes | No | R | - |
@ -20474,7 +20497,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
```
@ -20623,7 +20646,7 @@ Flags to control the Remote Control API.
```
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -20659,7 +20682,7 @@ Flags to control the Remote Control API.
Flags to control the Metrics HTTP endpoint.
```
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -21153,21 +21176,18 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
--pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
@ -24741,6 +24761,38 @@ there for more details.
Setting this flag increases the chance for undetected upload failures.
### Increasing performance
#### Using server-side copy
If you are copying objects between S3 buckets in the same region, you should
use server-side copy.
This is much faster than downloading and re-uploading the objects, as no data is transferred.
For rclone to use server-side copy, you must use the same remote for the source and destination.
rclone copy s3:source-bucket s3:destination-bucket
When using server-side copy, the performance is limited by the rate at which rclone issues
API requests to S3.
See below for how to increase the number of API requests rclone makes.
#### Increasing the rate of API requests
You can increase the rate of API requests to S3 by increasing the parallelism using the `--transfers` and `--checkers`
options.
Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests.
Depending on your provider, you can significantly increase the number of transfers and checkers.
For example, with AWS S3, you can increase the number of checkers to values like 200.
If you are doing a server-side copy, you can also increase the number of transfers to 200.
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
You will need to experiment with these values to find the optimal settings for your setup.
### Versions
When bucket versioning is enabled (this can be done with rclone with
@ -27784,8 +27836,8 @@ chunk_size = 5M
copy_cutoff = 5M
```
[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back until you restore them to the "STANDARD" storage_class (see the "restore" section above).
[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back until you restore them to the "STANDARD" storage_class (see the "restore" section above).
### Seagate Lyve Cloud {#lyve}
@ -37849,9 +37901,9 @@ then select "OAuth client ID".
9. It will show you a client ID and client secret. Make a note of these.
(If you selected "External" at Step 5 continue to Step 9.
(If you selected "External" at Step 5 continue to Step 10.
If you chose "Internal" you don't need to publish and can skip straight to
Step 10 but your destination drive must be part of the same Google Workspace.)
Step 11 but your destination drive must be part of the same Google Workspace.)
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
You will also want to add yourself as a test user.
@ -48038,68 +48090,29 @@ Properties:
Here are the Advanced options specific to pikpak (PikPak).
#### --pikpak-client-id
#### --pikpak-device-id
OAuth Client Id.
Leave blank normally.
Device ID used for authorization.
Properties:
- Config: client_id
- Env Var: RCLONE_PIKPAK_CLIENT_ID
- Config: device_id
- Env Var: RCLONE_PIKPAK_DEVICE_ID
- Type: string
- Required: false
#### --pikpak-client-secret
#### --pikpak-user-agent
OAuth Client Secret.
HTTP user agent for pikpak.
Leave blank normally.
Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.
Properties:
- Config: client_secret
- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
- Config: user_agent
- Env Var: RCLONE_PIKPAK_USER_AGENT
- Type: string
- Required: false
#### --pikpak-token
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_PIKPAK_TOKEN
- Type: string
- Required: false
#### --pikpak-auth-url
Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_PIKPAK_AUTH_URL
- Type: string
- Required: false
#### --pikpak-token-url
Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_PIKPAK_TOKEN_URL
- Type: string
- Required: false
- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
#### --pikpak-root-folder-id
@ -54602,6 +54615,55 @@ Options:
# Changelog
## v1.68.2 - 2024-11-15
[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
* Security fixes
* local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
* Only affects users using `--metadata` and `--links` and copying files to the local backend
* See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
* build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
* This is an issue in a dependency which is used for JWT certificates
* See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
* Bug Fixes
* accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
* bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
* dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
* serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
* doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
* Local
* Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
* Fix `--copy-links` on macOS when cloning (nielash)
* Onedrive
* Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
* Pikpak
* Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
* Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
* S3
* Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
* Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
* Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
## v1.68.1 - 2024-09-24
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
* Bug Fixes
* build: Fix docker release build (ttionya)
* doc fixes (Nick Craig-Wood, Pawel Palucha)
* fs
* Fix `--dump filters` not always appearing (Nick Craig-Wood)
* Fix setting `stringArray` config values from environment variables (Nick Craig-Wood)
* rc: Fix default value of `--metrics-addr` (Nick Craig-Wood)
* serve docker: Add missing `vfs-read-chunk-streams` option in docker volume driver (Divyam)
* Onedrive
* Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick Craig-Wood)
* Pikpak
* Fix login issue where token retrieval fails (wiserain)
* S3
* Fix rclone ignoring static credentials when `env_auth=true` (Nick Craig-Wood)
## v1.68.0 - 2024-09-08
[See commits](https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0)

MANUAL.txt generated

@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Sep 08, 2024
Nov 15, 2024
Rclone syncs your files to cloud storage
@ -4843,7 +4843,9 @@ manually:
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@ -5188,8 +5190,9 @@ Note that systemd runs mount units without any environment variables
including PATH or HOME. This means that tilde (~) expansion will not
work and you should provide --config and --cache-dir explicitly as
absolute paths via rclone arguments. Since mounting requires the
fusermount program, rclone will use the fallback PATH of /bin:/usr/bin
in this scenario. Please ensure that fusermount is present on this PATH.
fusermount or fusermount3 program, rclone will use the fallback PATH of
/bin:/usr/bin in this scenario. Please ensure that
fusermount/fusermount3 is present on this PATH.
Rclone as Unix mount helper
@ -6027,7 +6030,9 @@ manually:
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@ -6372,8 +6377,9 @@ Note that systemd runs mount units without any environment variables
including PATH or HOME. This means that tilde (~) expansion will not
work and you should provide --config and --cache-dir explicitly as
absolute paths via rclone arguments. Since mounting requires the
fusermount program, rclone will use the fallback PATH of /bin:/usr/bin
in this scenario. Please ensure that fusermount is present on this PATH.
fusermount or fusermount3 program, rclone will use the fallback PATH of
/bin:/usr/bin in this scenario. Please ensure that
fusermount/fusermount3 is present on this PATH.
Rclone as Unix mount helper
@ -7298,7 +7304,7 @@ RC Options
Flags to control the Remote Control API
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -15704,6 +15710,29 @@ they take exactly the same form.
The options set by environment variables can be seen with the -vv flag,
e.g. rclone version -vv.
Options that can appear multiple times (type stringArray) are treated
slightly differently as environment variables can only be defined once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded string. For example
----------------------------------------------------------------------------------------
Environment Variable Equivalent options
------------------------------------------------------ ---------------------------------
RCLONE_EXCLUDE="*.jpg" --exclude "*.jpg"
RCLONE_EXCLUDE="*.jpg,*.png" --exclude "*.jpg"
--exclude "*.png"
RCLONE_EXCLUDE='"*.jpg","*.png"' --exclude "*.jpg"
--exclude "*.png"
RCLONE_EXCLUDE='"/directory with comma , in it /**"' `--exclude "/directory with comma
, in it /**"
----------------------------------------------------------------------------------------
If stringArray options are defined as environment variables and options
on the command line then all the values will be used.
Config file
You can set defaults for values in the config file on an individual
@ -16399,6 +16428,10 @@ processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
Lines starting with # or ; are ignored, and can be used to write
comments. Inline comments are not supported. Use -vv --dump filters to
see how they appear in the final regexp.
E.g. for filter-file.txt:
# a sample filter rule file
@ -16406,6 +16439,7 @@ E.g. for filter-file.txt:
+ *.jpg
+ *.png
+ file2.avi
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@ -19240,7 +19274,7 @@ Here is an overview of the major features of each cloud storage system.
OpenDrive MD5 R/W Yes Partial ⁸ - -
OpenStack Swift MD5 R/W No No R/W -
Oracle Object Storage MD5 R/W No No R/W -
pCloud MD5, SHA1 ⁷ R No No W -
pCloud MD5, SHA1 ⁷ R/W No No W -
PikPak MD5 R No No R -
Pixeldrain SHA256 R/W No No R RW
premiumize.me - - Yes No R -
@ -20047,7 +20081,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
Performance
@ -20172,7 +20206,7 @@ RC
Flags to control the Remote Control API.
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -20205,7 +20239,7 @@ Metrics
Flags to control the Metrics HTTP endpoint.
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -20696,21 +20730,18 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
--pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
@ -24275,6 +24306,41 @@ details.
Setting this flag increases the chance for undetected upload failures.
Increasing performance
Using server-side copy
If you are copying objects between S3 buckets in the same region, you
should use server-side copy. This is much faster than downloading and
re-uploading the objects, as no data is transferred.
For rclone to use server-side copy, you must use the same remote for the
source and destination.
rclone copy s3:source-bucket s3:destination-bucket
When using server-side copy, the performance is limited by the rate at
which rclone issues API requests to S3. See below for how to increase
the number of API requests rclone makes.
Increasing the rate of API requests
You can increase the rate of API requests to S3 by increasing the
parallelism using the --transfers and --checkers options.
Rclone uses very conservative defaults for these settings, as not all
providers support high rates of requests. Depending on your provider,
you can significantly increase the number of transfers and checkers.
For example, with AWS S3, you can increase the number of checkers to
values like 200. If you are doing a server-side copy, you can also
increase the number of transfers to 200.
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
You will need to experiment with these values to find the optimal
settings for your setup.
Versions
When bucket versioning is enabled (this can be done with rclone with the
@ -27303,13 +27369,13 @@ rclone like this:
chunk_size = 5M
copy_cutoff = 5M
C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway
Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway
and it works the same way as on S3 by accepting the "GLACIER"
storage_class. So you can configure your remote with the
storage_class = GLACIER option to upload directly to C14. Don't forget
that in this state you can't read files back until you restore them to
the "STANDARD" storage_class (see the "restore" section above).
storage_class = GLACIER option to upload directly to Scaleway Glacier.
Don't forget that in this state you can't read files back until you
restore them to the "STANDARD" storage_class (see the "restore" section
above).
Seagate Lyve Cloud
@ -37263,9 +37329,9 @@ Here is how to create your own Google Drive client ID for rclone:
9. It will show you a client ID and client secret. Make a note of
these.
(If you selected "External" at Step 5 continue to Step 9. If you
(If you selected "External" at Step 5 continue to Step 10. If you
chose "Internal" you don't need to publish and can skip straight to
Step 10 but your destination drive must be part of the same Google
Step 11 but your destination drive must be part of the same Google
Workspace.)
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and
@ -47695,68 +47761,32 @@ Advanced options
Here are the Advanced options specific to pikpak (PikPak).
--pikpak-client-id
--pikpak-device-id
OAuth Client Id.
Leave blank normally.
Device ID used for authorization.
Properties:
- Config: client_id
- Env Var: RCLONE_PIKPAK_CLIENT_ID
- Config: device_id
- Env Var: RCLONE_PIKPAK_DEVICE_ID
- Type: string
- Required: false
--pikpak-client-secret
--pikpak-user-agent
OAuth Client Secret.
HTTP user agent for pikpak.
Leave blank normally.
Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on
command line.
Properties:
- Config: client_secret
- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
- Config: user_agent
- Env Var: RCLONE_PIKPAK_USER_AGENT
- Type: string
- Required: false
--pikpak-token
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_PIKPAK_TOKEN
- Type: string
- Required: false
--pikpak-auth-url
Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_PIKPAK_AUTH_URL
- Type: string
- Required: false
--pikpak-token-url
Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_PIKPAK_TOKEN_URL
- Type: string
- Required: false
- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
Gecko/20100101 Firefox/129.0"
--pikpak-root-folder-id
@ -54265,6 +54295,75 @@ Options:
Changelog
v1.68.2 - 2024-11-15
See commits
- Security fixes
- local backend: CVE-2024-52522: fix permission and ownership on
symlinks with --links and --metadata (Nick Craig-Wood)
- Only affects users using --metadata and --links and copying
files to the local backend
- See
https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
- build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
(dependabot)
- This is an issue in a dependency which is used for JWT
certificates
- See
https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
- Bug Fixes
- accounting: Fix wrong message on SIGUSR2 to enable/disable
bwlimit (Nick Craig-Wood)
- bisync: Fix output capture restoring the wrong output for logrus
(Dimitrios Slamaris)
- dlna: Fix loggingResponseWriter disregarding log level (Simon
Bos)
- serve s3: Fix excess locking which was making serve s3 single
threaded (Nick Craig-Wood)
- doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy
Bush)
- Local
- Fix permission and ownership on symlinks with --links and
--metadata (Nick Craig-Wood)
- Fix --copy-links on macOS when cloning (nielash)
- Onedrive
- Fix Retry-After handling to look at 503 errors also (Nick
Craig-Wood)
- Pikpak
- Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
- Fix fatal crash on startup with token that can't be refreshed
(Nick Craig-Wood)
- S3
- Fix crash when using --s3-download-url after migration to SDKv2
(Nick Craig-Wood)
- Storj provider: fix server-side copy of files bigger than 5GB
(Kaloyan Raev)
- Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
v1.68.1 - 2024-09-24
See commits
- Bug Fixes
- build: Fix docker release build (ttionya)
- doc fixes (Nick Craig-Wood, Pawel Palucha)
- fs
- Fix --dump filters not always appearing (Nick Craig-Wood)
- Fix setting stringArray config values from environment
variables (Nick Craig-Wood)
- rc: Fix default value of --metrics-addr (Nick Craig-Wood)
- serve docker: Add missing vfs-read-chunk-streams option in
docker volume driver (Divyam)
- Onedrive
- Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick
Craig-Wood)
- Pikpak
- Fix login issue where token retrieval fails (wiserain)
- S3
- Fix rclone ignoring static credentials when env_auth=true (Nick
Craig-Wood)
v1.68.0 - 2024-09-08
See commits


@ -144,14 +144,10 @@ MANUAL.txt: MANUAL.md
pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt
commanddocs: rclone
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs docs/content/
backenddocs: rclone bin/make_backend_docs.py
-@rmdir -p '$$HOME/.config/rclone'
XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" ./bin/make_backend_docs.py
@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
rcdocs: rclone
bin/make_rc_docs.sh


@ -1 +1 @@
v1.69.0
v1.68.2


@ -209,22 +209,6 @@ rclone config file under the ` + "`client_id`, `tenant` and `client_secret`" + `
keys instead of setting ` + "`service_principal_file`" + `.
`,
Advanced: true,
}, {
Name: "disable_instance_discovery",
Help: `Skip requesting Microsoft Entra instance metadata
This should be set true only by applications authenticating in
disconnected clouds, or private clouds such as Azure Stack.
It determines whether rclone requests Microsoft Entra instance
metadata from ` + "`https://login.microsoft.com/`" + ` before
authenticating.
Setting this to true will skip this request, making you responsible
for ensuring the configured authority is valid and trustworthy.
`,
Default: false,
Advanced: true,
}, {
Name: "use_msi",
Help: `Use a managed service identity to authenticate (only works in Azure).
@ -259,20 +243,6 @@ msi_client_id, or msi_mi_res_id parameters.`,
Help: "Uses local storage emulator if provided as 'true'.\n\nLeave blank if using real azure storage endpoint.",
Default: false,
Advanced: true,
}, {
Name: "use_az",
Help: `Use Azure CLI tool az for authentication
Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.
Don't set env_auth at the same time.
`,
Default: false,
Advanced: true,
}, {
Name: "endpoint",
Help: "Endpoint for the service.\n\nLeave blank normally.",
@ -468,12 +438,10 @@ type Options struct {
Username string `config:"username"`
Password string `config:"password"`
ServicePrincipalFile string `config:"service_principal_file"`
DisableInstanceDiscovery bool `config:"disable_instance_discovery"`
UseMSI bool `config:"use_msi"`
MSIObjectID string `config:"msi_object_id"`
MSIClientID string `config:"msi_client_id"`
MSIResourceID string `config:"msi_mi_res_id"`
UseAZ bool `config:"use_az"`
Endpoint string `config:"endpoint"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
UploadConcurrency int `config:"upload_concurrency"`
@ -757,8 +725,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
// Read credentials from the environment
options := azidentity.DefaultAzureCredentialOptions{
ClientOptions: policyClientOptions,
DisableInstanceDiscovery: opt.DisableInstanceDiscovery,
ClientOptions: policyClientOptions,
}
cred, err = azidentity.NewDefaultAzureCredential(&options)
if err != nil {
@ -908,12 +875,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.UseAZ:
var options = azidentity.AzureCLICredentialOptions{}
cred, err = azidentity.NewAzureCLICredential(&options)
if err != nil {
return nil, fmt.Errorf("failed to create Azure CLI credentials: %w", err)
}
case opt.Account != "":
// Anonymous access
anonymous = true


@ -43,7 +43,6 @@ import (
"github.com/rclone/rclone/lib/jwtutil"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/rest"
"github.com/youmark/pkcs8"
"golang.org/x/oauth2"
@ -257,6 +256,7 @@ func getQueryParams(boxConfig *api.ConfigJSON) map[string]string {
}
func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err error) {
block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey))
if len(rest) > 0 {
return nil, fmt.Errorf("box: extra data included in private key: %w", err)
@ -619,7 +619,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
return shouldRetry(ctx, resp, err)
})
if err != nil {
// fmt.Printf("...Error %v\n", err)
//fmt.Printf("...Error %v\n", err)
return "", err
}
// fmt.Printf("...Id %q\n", *info.Id)
@ -966,26 +966,6 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// check if dest already exists
item, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if item != nil { // dest already exists, need to copy to temp name and then move
tempSuffix := "-rclone-copy-" + random.String(8)
fs.Debugf(remote, "dst already exists, copying to temp name %v", remote+tempSuffix)
tempObj, err := f.Copy(ctx, src, remote+tempSuffix)
if err != nil {
return nil, err
}
fs.Debugf(remote+tempSuffix, "moving to real name %v", remote)
err = f.deleteObject(ctx, item.ID)
if err != nil {
return nil, err
}
return f.Move(ctx, tempObj, remote)
}
// Copy the object
opts := rest.Opts{
Method: "POST",


@ -180,28 +180,12 @@ If this is set and no password is supplied then rclone will ask for a password
Default: "",
Help: `Socks 5 proxy host.
Supports the format user:pass@host:port, user@host:port, host:port.
Supports the format user:pass@host:port, user@host:port, host:port.
Example:
Example:
myUser:myPass@localhost:9005
`,
Advanced: true,
}, {
Name: "no_check_upload",
Default: false,
Help: `Don't check the upload is OK
Normally rclone will try to check the upload exists after it has
uploaded a file to make sure the size and modification time are as
expected.
This flag stops rclone doing these checks. This enables uploading to
folders which are write only.
You will likely need to use the --inplace flag also if uploading to
a write only folder.
`,
myUser:myPass@localhost:9005
`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
@ -248,7 +232,6 @@ type Options struct {
AskPassword bool `config:"ask_password"`
Enc encoder.MultiEncoder `config:"encoding"`
SocksProxy string `config:"socks_proxy"`
NoCheckUpload bool `config:"no_check_upload"`
}
// Fs represents a remote FTP server
@ -1320,16 +1303,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return fmt.Errorf("update stor: %w", err)
}
o.fs.putFtpConnection(&c, nil)
if o.fs.opt.NoCheckUpload {
o.info = &FileInfo{
Name: o.remote,
Size: uint64(src.Size()),
ModTime: src.ModTime(ctx),
precise: true,
IsDir: false,
}
return nil
}
if err = o.SetModTime(ctx, src.ModTime(ctx)); err != nil {
return fmt.Errorf("SetModTime: %w", err)
}


@ -60,14 +60,16 @@ const (
minSleep = 10 * time.Millisecond
)
// Description of how to auth for this app
var storageConfig = &oauth2.Config{
Scopes: []string{storage.DevstorageReadWriteScope},
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
var (
// Description of how to auth for this app
storageConfig = &oauth2.Config{
Scopes: []string{storage.DevstorageReadWriteScope},
Endpoint: google.Endpoint,
ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
)
// Register with Fs
func init() {
@ -104,12 +106,6 @@ func init() {
Help: "Service Account Credentials JSON blob.\n\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
Hide: fs.OptionHideBoth,
Sensitive: true,
}, {
Name: "access_token",
Help: "Short-lived access token.\n\nLeave blank normally.\nNeeded only if you want use short-lived access token instead of interactive login.",
Hide: fs.OptionHideConfigurator,
Sensitive: true,
Advanced: true,
}, {
Name: "anonymous",
Help: "Access public buckets and objects without credentials.\n\nSet to 'true' if you just want to download files and don't configure credentials.",
@ -383,7 +379,6 @@ type Options struct {
Enc encoder.MultiEncoder `config:"encoding"`
EnvAuth bool `config:"env_auth"`
DirectoryMarkers bool `config:"directory_markers"`
AccessToken string `config:"access_token"`
}
// Fs represents a remote storage server
@ -540,9 +535,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to configure Google Cloud Storage: %w", err)
}
} else if opt.AccessToken != "" {
ts := oauth2.Token{AccessToken: opt.AccessToken}
oAuthClient = oauth2.NewClient(ctx, oauth2.StaticTokenSource(&ts))
} else {
oAuthClient, _, err = oauthutil.NewClient(ctx, name, m, storageConfig)
if err != nil {
@ -952,6 +944,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
return e
}
return f.createDirectoryMarker(ctx, bucket, dir)
}
// mkdirParent creates the parent bucket/directory if it doesn't exist


@ -28,6 +28,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/batcher"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
@ -159,34 +160,6 @@ listings and transferred.
Without this flag, archived media will not be visible in directory
listings and won't be transferred.`,
Advanced: true,
}, {
Name: "proxy",
Default: "",
Help: strings.ReplaceAll(`Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren't full
resolution, and/or have EXIF data missing.
However if you use the gphotosdl proxy then you can download original,
unchanged images.
This runs a headless browser in the background.
Download the software from [gphotosdl](https://github.com/rclone/gphotosdl)
First run with
gphotosdl -login
Then once you have logged into google photos close the browser window
and run
gphotosdl
Then supply the parameter |--gphotos-proxy "http://localhost:8282"| to make
rclone use the proxy.
`, "|", "`"),
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@ -208,7 +181,6 @@ type Options struct {
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
Proxy string `config:"proxy"`
}
// Fs represents a remote storage server
@ -482,7 +454,7 @@ func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *api.Med
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// defer log.Trace(f, "remote=%q", remote)("")
defer log.Trace(f, "remote=%q", remote)("")
return f.newObjectWithInfo(ctx, remote, nil)
}
@ -695,7 +667,7 @@ func (f *Fs) listUploads(ctx context.Context, dir string) (entries fs.DirEntries
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
match, prefix, pattern := patterns.match(f.root, dir, false)
if pattern == nil || pattern.isFile {
return nil, fs.ErrorDirNotFound
@ -712,7 +684,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// defer log.Trace(f, "src=%+v", src)("")
defer log.Trace(f, "src=%+v", src)("")
// Temporary Object under construction
o := &Object{
fs: f,
@ -765,7 +737,7 @@ func (f *Fs) getOrCreateAlbum(ctx context.Context, albumTitle string) (album *ap
// Mkdir creates the album if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
// defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
defer log.Trace(f, "dir=%q", dir)("err=%v", &err)
match, prefix, pattern := patterns.match(f.root, dir, false)
if pattern == nil {
return fs.ErrorDirNotFound
@ -789,7 +761,7 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
// defer log.Trace(f, "dir=%q")("err=%v", &err)
defer log.Trace(f, "dir=%q")("err=%v", &err)
match, _, pattern := patterns.match(f.root, dir, false)
if pattern == nil {
return fs.ErrorDirNotFound
@ -862,7 +834,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
// defer log.Trace(o, "")("")
defer log.Trace(o, "")("")
if !o.fs.opt.ReadSize || o.bytes >= 0 {
return o.bytes
}
@ -963,7 +935,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
// It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
// defer log.Trace(o, "")("")
defer log.Trace(o, "")("")
err := o.readMetaData(ctx)
if err != nil {
fs.Debugf(o, "ModTime: Failed to read metadata: %v", err)
@ -993,20 +965,16 @@ func (o *Object) downloadURL() string {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// defer log.Trace(o, "")("")
defer log.Trace(o, "")("")
err = o.readMetaData(ctx)
if err != nil {
fs.Debugf(o, "Open: Failed to read metadata: %v", err)
return nil, err
}
url := o.downloadURL()
if o.fs.opt.Proxy != "" {
url = strings.TrimRight(o.fs.opt.Proxy, "/") + "/id/" + o.id
}
var resp *http.Response
opts := rest.Opts{
Method: "GET",
RootURL: url,
RootURL: o.downloadURL(),
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
@ -1099,7 +1067,7 @@ func (f *Fs) commitBatch(ctx context.Context, items []uploadedItem, results []*a
//
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
// defer log.Trace(o, "src=%+v", src)("err=%v", &err)
defer log.Trace(o, "src=%+v", src)("err=%v", &err)
match, _, pattern := patterns.match(o.fs.root, o.remote, true)
if pattern == nil || !pattern.isFile || !pattern.canUpload {
return errCantUpload

backend/local/lchmod.go Normal file

@ -0,0 +1,16 @@
//go:build windows || plan9 || js || linux
package local
import "os"
const haveLChmod = false
// lChmod changes the mode of the named file to mode. If the file is a symbolic
// link, it changes the link, not the target. If there is an error,
// it will be of type *PathError.
func lChmod(name string, mode os.FileMode) error {
// Can't do this safely on this OS - chmoding a symlink always
// changes the destination.
return nil
}


@ -0,0 +1,41 @@
//go:build !windows && !plan9 && !js && !linux
package local
import (
"os"
"syscall"
"golang.org/x/sys/unix"
)
const haveLChmod = true
// syscallMode returns the syscall-specific mode bits from Go's portable mode bits.
//
// Borrowed from the syscall source since it isn't public.
func syscallMode(i os.FileMode) (o uint32) {
o |= uint32(i.Perm())
if i&os.ModeSetuid != 0 {
o |= syscall.S_ISUID
}
if i&os.ModeSetgid != 0 {
o |= syscall.S_ISGID
}
if i&os.ModeSticky != 0 {
o |= syscall.S_ISVTX
}
return o
}
// lChmod changes the mode of the named file to mode. If the file is a symbolic
// link, it changes the link, not the target. If there is an error,
// it will be of type *PathError.
func lChmod(name string, mode os.FileMode) error {
// NB linux does not support AT_SYMLINK_NOFOLLOW as a parameter to fchmodat
// and returns ENOTSUP if you try, so we don't support this on linux
if e := unix.Fchmodat(unix.AT_FDCWD, name, syscallMode(mode), unix.AT_SYMLINK_NOFOLLOW); e != nil {
return &os.PathError{Op: "lChmod", Path: name, Err: e}
}
return nil
}


@ -1,4 +1,4 @@
//go:build windows || plan9 || js
//go:build plan9 || js
package local


@ -0,0 +1,19 @@
//go:build windows
package local
import (
"time"
)
const haveLChtimes = true
// lChtimes changes the access and modification times of the named
// link, similar to the Unix utime() or utimes() functions.
//
// The underlying filesystem may truncate or round the values to a
// less precise time unit.
// If there is an error, it will be of type *PathError.
func lChtimes(name string, atime time.Time, mtime time.Time) error {
return setTimes(name, atime, mtime, time.Time{}, true)
}


@ -268,22 +268,66 @@ func TestMetadata(t *testing.T) {
r := fstest.NewRun(t)
const filePath = "metafile.txt"
when := time.Now()
const dayLength = len("2001-01-01")
whenRFC := when.Format(time.RFC3339Nano)
r.WriteFile(filePath, "metadata file contents", when)
f := r.Flocal.(*Fs)
// Set fs into "-l" / "--links" mode
f.opt.TranslateSymlinks = true
// Write a symlink to the file
symlinkPath := "metafile-link.txt"
osSymlinkPath := filepath.Join(f.root, symlinkPath)
symlinkPath += linkSuffix
require.NoError(t, os.Symlink(filePath, osSymlinkPath))
symlinkModTime := fstest.Time("2002-02-03T04:05:10.123123123Z")
require.NoError(t, lChtimes(osSymlinkPath, symlinkModTime, symlinkModTime))
// Get the object
obj, err := f.NewObject(ctx, filePath)
require.NoError(t, err)
o := obj.(*Object)
// Get the symlink object
symlinkObj, err := f.NewObject(ctx, symlinkPath)
require.NoError(t, err)
symlinkO := symlinkObj.(*Object)
// Record metadata for o
oMeta, err := o.Metadata(ctx)
require.NoError(t, err)
// Test symlink first to check it doesn't mess up file
t.Run("Symlink", func(t *testing.T) {
testMetadata(t, r, symlinkO, symlinkModTime)
})
// Read it again
oMetaNew, err := o.Metadata(ctx)
require.NoError(t, err)
// Check that operating on the symlink didn't change the file it was pointing to
// See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
assert.Equal(t, oMeta, oMetaNew, "metadata setting on symlink messed up file")
// Now run the same tests on the file
t.Run("File", func(t *testing.T) {
testMetadata(t, r, o, when)
})
}
func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
ctx := context.Background()
whenRFC := when.Format(time.RFC3339Nano)
const dayLength = len("2001-01-01")
f := r.Flocal.(*Fs)
features := f.Features()
var hasXID, hasAtime, hasBtime bool
var hasXID, hasAtime, hasBtime, canSetXattrOnLinks bool
switch runtime.GOOS {
case "darwin", "freebsd", "netbsd", "linux":
hasXID, hasAtime, hasBtime = true, true, true
canSetXattrOnLinks = runtime.GOOS != "linux"
case "openbsd", "solaris":
hasXID, hasAtime = true, true
case "windows":
@ -306,6 +350,10 @@ func TestMetadata(t *testing.T) {
require.NoError(t, err)
assert.Nil(t, m)
if !canSetXattrOnLinks && o.translatedLink {
t.Skip("Skip remainder of test as can't set xattr on symlinks on this OS")
}
inM := fs.Metadata{
"potato": "chips",
"cabbage": "soup",
@ -320,18 +368,21 @@ func TestMetadata(t *testing.T) {
})
checkTime := func(m fs.Metadata, key string, when time.Time) {
t.Helper()
mt, ok := o.parseMetadataTime(m, key)
assert.True(t, ok)
dt := mt.Sub(when)
precision := time.Second
assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v", key, dt, precision))
assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v want %v got %v", key, dt, precision, mt, when))
}
checkInt := func(m fs.Metadata, key string, base int) int {
t.Helper()
value, ok := o.parseMetadataInt(m, key, base)
assert.True(t, ok)
return value
}
t.Run("Read", func(t *testing.T) {
m, err := o.Metadata(ctx)
require.NoError(t, err)
@ -341,13 +392,12 @@ func TestMetadata(t *testing.T) {
checkInt(m, "mode", 8)
checkTime(m, "mtime", when)
assert.Equal(t, len(whenRFC), len(m["mtime"]))
assert.Equal(t, whenRFC[:dayLength], m["mtime"][:dayLength])
if hasAtime {
if hasAtime && !o.translatedLink { // symlinks generally don't record atime
checkTime(m, "atime", when)
}
if hasBtime {
if hasBtime && !o.translatedLink { // symlinks generally don't record btime
checkTime(m, "btime", when)
}
if hasXID {
@ -371,6 +421,10 @@ func TestMetadata(t *testing.T) {
"mode": "0767",
"potato": "wedges",
}
if !canSetXattrOnLinks && o.translatedLink {
// Don't change xattr if not supported on symlinks
delete(newM, "potato")
}
err := o.writeMetadata(newM)
require.NoError(t, err)
@ -380,7 +434,11 @@ func TestMetadata(t *testing.T) {
mode := checkInt(m, "mode", 8)
if runtime.GOOS != "windows" {
assert.Equal(t, 0767, mode&0777, fmt.Sprintf("mode wrong - expecting 0767 got 0%o", mode&0777))
expectedMode := 0767
if o.translatedLink && runtime.GOOS == "linux" {
expectedMode = 0777 // perms of symlinks always read as 0777 on linux
}
assert.Equal(t, expectedMode, mode&0777, fmt.Sprintf("mode wrong - expecting 0%o got 0%o", expectedMode, mode&0777))
}
checkTime(m, "mtime", newMtime)
@ -390,7 +448,7 @@ func TestMetadata(t *testing.T) {
if haveSetBTime {
checkTime(m, "btime", newBtime)
}
if xattrSupported {
if xattrSupported && (canSetXattrOnLinks || !o.translatedLink) {
assert.Equal(t, "wedges", m["potato"])
}
})


@ -105,7 +105,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
}
if haveSetBTime {
if btimeOK {
err = setBTime(o.path, btime)
if o.translatedLink {
err = lsetBTime(o.path, btime)
} else {
err = setBTime(o.path, btime)
}
if err != nil {
outErr = fmt.Errorf("failed to set birth (creation) time: %w", err)
}
@ -121,7 +125,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
if runtime.GOOS == "windows" || runtime.GOOS == "plan9" {
fs.Debugf(o, "Ignoring request to set ownership %o.%o on this OS", gid, uid)
} else {
err = os.Chown(o.path, uid, gid)
if o.translatedLink {
err = os.Lchown(o.path, uid, gid)
} else {
err = os.Chown(o.path, uid, gid)
}
if err != nil {
outErr = fmt.Errorf("failed to change ownership: %w", err)
}
@ -132,7 +140,16 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
if mode >= 0 {
umode := uint(mode)
if umode <= math.MaxUint32 {
err = os.Chmod(o.path, os.FileMode(umode))
if o.translatedLink {
if haveLChmod {
err = lChmod(o.path, os.FileMode(umode))
} else {
fs.Debugf(o, "Unable to set mode %v on a symlink on this OS", os.FileMode(umode))
err = nil
}
} else {
err = os.Chmod(o.path, os.FileMode(umode))
}
if err != nil {
outErr = fmt.Errorf("failed to change permissions: %w", err)
}
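The lChmod/haveLChmod pair above is a per-OS helper. As a rough illustration of what such a helper typically wraps (an assumption, not rclone's implementation), fchmodat(2) with AT_SYMLINK_NOFOLLOW expresses a link-safe chmod:

```go
// Hedged sketch of a link-safe chmod. On Linux the x/sys/unix wrapper rejects
// AT_SYMLINK_NOFOLLOW, which is why the code above falls back to a debug
// message there instead.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func lChmod(path string, mode os.FileMode) error {
	return unix.Fchmodat(unix.AT_FDCWD, path, uint32(mode.Perm()), unix.AT_SYMLINK_NOFOLLOW)
}

func main() {
	if err := lChmod("file.txt.rclonelink", 0o767); err != nil {
		fmt.Println("lchmod:", err) // EOPNOTSUPP on Linux
	}
}
```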


@ -13,3 +13,9 @@ func setBTime(name string, btime time.Time) error {
// Does nothing
return nil
}
// lsetBTime changes the birth time of the link passed in
func lsetBTime(name string, btime time.Time) error {
// Does nothing
return nil
}


@ -9,15 +9,20 @@ import (
const haveSetBTime = true
// setBTime sets the birth time of the file passed in
func setBTime(name string, btime time.Time) (err error) {
// setTimes sets any of atime, mtime or btime
// if link is set it sets the times on the link rather than the target
func setTimes(name string, atime, mtime, btime time.Time, link bool) (err error) {
pathp, err := syscall.UTF16PtrFromString(name)
if err != nil {
return err
}
fileFlag := uint32(syscall.FILE_FLAG_BACKUP_SEMANTICS)
if link {
fileFlag |= syscall.FILE_FLAG_OPEN_REPARSE_POINT
}
h, err := syscall.CreateFile(pathp,
syscall.FILE_WRITE_ATTRIBUTES, syscall.FILE_SHARE_WRITE, nil,
syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
syscall.OPEN_EXISTING, fileFlag, 0)
if err != nil {
return err
}
@ -27,6 +32,28 @@ func setBTime(name string, btime time.Time) (err error) {
err = closeErr
}
}()
bFileTime := syscall.NsecToFiletime(btime.UnixNano())
return syscall.SetFileTime(h, &bFileTime, nil, nil)
var patime, pmtime, pbtime *syscall.Filetime
if !atime.IsZero() {
t := syscall.NsecToFiletime(atime.UnixNano())
patime = &t
}
if !mtime.IsZero() {
t := syscall.NsecToFiletime(mtime.UnixNano())
pmtime = &t
}
if !btime.IsZero() {
t := syscall.NsecToFiletime(btime.UnixNano())
pbtime = &t
}
return syscall.SetFileTime(h, pbtime, patime, pmtime)
}
// setBTime sets the birth time of the file passed in
func setBTime(name string, btime time.Time) (err error) {
return setTimes(name, time.Time{}, time.Time{}, btime, false)
}
// lsetBTime changes the birth time of the link passed in
func lsetBTime(name string, btime time.Time) error {
return setTimes(name, time.Time{}, time.Time{}, btime, true)
}
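A small usage sketch of the convention setTimes relies on: the zero time.Time is the "leave this timestamp alone" sentinel, so setBTime and lsetBTime pass zero values for atime and mtime. Windows-only, since syscall.Filetime and NsecToFiletime exist only there:

```go
package main

import (
	"fmt"
	"syscall"
	"time"
)

func main() {
	var atime, mtime time.Time // zero values: leave these timestamps alone
	btime := time.Date(2002, 2, 3, 4, 5, 10, 0, time.UTC)

	// Mirror setTimes above: only non-zero times become *syscall.Filetime.
	var pbtime *syscall.Filetime
	if !btime.IsZero() {
		ft := syscall.NsecToFiletime(btime.UnixNano())
		pbtime = &ft
	}
	fmt.Println("skip atime:", atime.IsZero(), "skip mtime:", mtime.IsZero(), "set btime:", pbtime != nil)
}
```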


@ -202,9 +202,14 @@ type SharingLinkType struct {
type LinkType string
const (
ViewLinkType LinkType = "view" // ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
EditLinkType LinkType = "edit" // EditLinkType (role: write) An edit sharing link, allowing read-write access.
EmbedLinkType LinkType = "embed" // EmbedLinkType (role: read) A view-only sharing link that can be used to embed content into a host webpage. Embed links are not available for OneDrive for Business or SharePoint.
// ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
ViewLinkType LinkType = "view"
// EditLinkType (role: write) An edit sharing link, allowing read-write access.
EditLinkType LinkType = "edit"
// EmbedLinkType (role: read) A view-only sharing link that can be used to embed
// content into a host webpage. Embed links are not available for OneDrive for
// Business or SharePoint.
EmbedLinkType LinkType = "embed"
)
// LinkScope represents the scope of the link represented by this permission.
@ -212,9 +217,12 @@ const (
type LinkScope string
const (
AnonymousScope LinkScope = "anonymous" // AnonymousScope = Anyone with the link has access, without needing to sign in. This may include people outside of your organization.
OrganizationScope LinkScope = "organization" // OrganizationScope = Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
// AnonymousScope = Anyone with the link has access, without needing to sign in.
// This may include people outside of your organization.
AnonymousScope LinkScope = "anonymous"
// OrganizationScope = Anyone signed into your organization (tenant) can use the
// link to get access. Only available in OneDrive for Business and SharePoint.
OrganizationScope LinkScope = "organization"
)
// PermissionsType provides information about a sharing permission granted for a DriveItem resource.
@ -236,10 +244,14 @@ type PermissionsType struct {
type Role string
const (
ReadRole Role = "read" // ReadRole provides the ability to read the metadata and contents of the item.
WriteRole Role = "write" // WriteRole provides the ability to read and modify the metadata and contents of the item.
OwnerRole Role = "owner" // OwnerRole represents the owner role for SharePoint and OneDrive for Business.
MemberRole Role = "member" // MemberRole represents the member role for SharePoint and OneDrive for Business.
// ReadRole provides the ability to read the metadata and contents of the item.
ReadRole Role = "read"
// WriteRole provides the ability to read and modify the metadata and contents of the item.
WriteRole Role = "write"
// OwnerRole represents the owner role for SharePoint and OneDrive for Business.
OwnerRole Role = "owner"
// MemberRole represents the member role for SharePoint and OneDrive for Business.
MemberRole Role = "member"
)
// PermissionsResponse is the response to the list permissions method


@ -6,7 +6,6 @@ import (
"errors"
"fmt"
"net/http"
"slices"
"strings"
"time"
@ -15,6 +14,7 @@ import (
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/exp/slices" // replace with slices after go1.21 is the minimum version
)
const (


@ -827,7 +827,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
retry = true
fs.Debugf(nil, "HTTP 401: Unable to initialize RPS. Trying again.")
}
case 429: // Too Many Requests.
case 429, 503: // Too Many Requests, Server Too Busy
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
if values := resp.Header["Retry-After"]; len(values) == 1 && values[0] != "" {
retryAfter, parseErr := strconv.Atoi(values[0])
@ -1545,12 +1545,9 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
// While this is true for some OneDrive personal accounts, it
// isn't true for all of them. See #8101 for details
//
// if f.driveType == driveTypePersonal {
// return time.Millisecond
// }
if f.driveType == driveTypePersonal {
return time.Millisecond
}
return time.Second
}
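To make the 429/503 handling above concrete, here is a self-contained sketch of the header parsing. It is simplified: rclone feeds the value into its pacer rather than returning a duration directly, and Retry-After may also carry an HTTP date, which this sketch ignores:

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// retryAfter returns how long to wait before retrying, honouring an integer
// Retry-After header on 429 and 503 responses, else a default.
func retryAfter(resp *http.Response, def time.Duration) time.Duration {
	if resp == nil {
		return def
	}
	switch resp.StatusCode {
	case 429, 503: // Too Many Requests, Server Too Busy
		if v := resp.Header.Get("Retry-After"); v != "" {
			if secs, err := strconv.Atoi(v); err == nil {
				return time.Duration(secs) * time.Second
			}
		}
	}
	return def
}

func main() {
	resp := &http.Response{StatusCode: 503, Header: http.Header{"Retry-After": {"5"}}}
	fmt.Println(retryAfter(resp, time.Second)) // 5s
}
```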


@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"fmt"
"slices"
"testing"
"time"
@ -17,6 +16,7 @@ import (
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/exp/slices" // replace with slices after go1.21 is the minimum version
)
// go test -timeout 30m -run ^TestIntegration/FsMkdir/FsPutFiles/Internal$ github.com/rclone/rclone/backend/onedrive -remote TestOneDrive:meta -v


@ -404,32 +404,6 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return dstObj, nil
}
// About gets quota information
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var uInfo usersInfoResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
opts := rest.Opts{
Method: "GET",
Path: "/users/info.json/" + f.session.SessionID,
}
resp, err = f.srv.CallJSON(ctx, &opts, nil, &uInfo)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
usage = &fs.Usage{
Used: fs.NewUsageValue(uInfo.StorageUsed),
Total: fs.NewUsageValue(uInfo.MaxStorage * 1024 * 1024), // MaxStorage appears to be in MB
Free: fs.NewUsageValue(uInfo.MaxStorage*1024*1024 - uInfo.StorageUsed),
}
return usage, nil
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
@ -1173,7 +1147,6 @@ var (
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.ParentIDer = (*Object)(nil)


@ -231,10 +231,3 @@ type permissions struct {
type uploadFileChunkReply struct {
TotalWritten int64 `json:"TotalWritten"`
}
// usersInfoResponse describes OpenDrive users/info.json response
type usersInfoResponse struct {
// This response contains many other values but these are the only ones currently in use
StorageUsed int64 `json:"StorageUsed,string"`
MaxStorage int64 `json:"MaxStorage,string"`
}


@ -561,6 +561,7 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
if strings.Contains(err.Error(), "invalid_grant") {
return f, f.reAuthorize(ctx)
}
return nil, err
}
return f, nil
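The added `return nil, err` matters because the invalid_grant branch was previously the only exit from the error path. A toy version of the corrected flow, with hypothetical names and simulated errors:

```go
// Toy version of the corrected error path: only the re-authorizable error
// falls through; everything else is returned to the caller.
package main

import (
	"errors"
	"fmt"
	"strings"
)

var errAuth = errors.New("invalid_grant: token expired")

func newFs(simulate error) (string, error) {
	if err := simulate; err != nil {
		if strings.Contains(err.Error(), "invalid_grant") {
			return "fs (needs re-auth)", nil // stand-in for f.reAuthorize(ctx)
		}
		return "", err // the previously missing return
	}
	return "fs", nil
}

func main() {
	fmt.Println(newFs(nil))
	fmt.Println(newFs(errAuth))
	fmt.Println(newFs(errors.New("network down")))
}
```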


@ -449,7 +449,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// No root so return old f
return f, nil
}
_, err := tempF.newObject(ctx, remote)
_, err := tempF.newObjectWithLink(ctx, remote, nil)
if err != nil {
if err == fs.ErrorObjectNotFound {
// File doesn't exist so return old f
@ -487,7 +487,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
// ErrorIsDir if possible without doing any extra work,
// otherwise ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObject(ctx, remote)
return f.newObjectWithLink(ctx, remote, nil)
}
func (f *Fs) getObjectLink(ctx context.Context, remote string) (*proton.Link, error) {
@ -516,27 +516,35 @@ func (f *Fs) getObjectLink(ctx context.Context, remote string) (*proton.Link, er
return link, nil
}
// readMetaDataForLink reads the metadata from the remote
func (f *Fs) readMetaDataForLink(ctx context.Context, link *proton.Link) (*protonDriveAPI.FileSystemAttrs, error) {
// readMetaDataForRemote reads the metadata from the remote
func (f *Fs) readMetaDataForRemote(ctx context.Context, remote string, _link *proton.Link) (*proton.Link, *protonDriveAPI.FileSystemAttrs, error) {
link, err := f.getObjectLink(ctx, remote)
if err != nil {
return nil, nil, err
}
var fileSystemAttrs *protonDriveAPI.FileSystemAttrs
var err error
if err = f.pacer.Call(func() (bool, error) {
fileSystemAttrs, err = f.protonDrive.GetActiveRevisionAttrs(ctx, link)
return shouldRetry(ctx, err)
}); err != nil {
return nil, err
return nil, nil, err
}
return fileSystemAttrs, nil
return link, fileSystemAttrs, nil
}
// Return an Object from a path and link
// readMetaData gets the metadata if it hasn't already been fetched
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithLink(ctx context.Context, remote string, link *proton.Link) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
// it also sets the info
func (o *Object) readMetaData(ctx context.Context, link *proton.Link) (err error) {
if o.link != nil {
return nil
}
link, fileSystemAttrs, err := o.fs.readMetaDataForRemote(ctx, o.remote, link)
if err != nil {
return err
}
o.id = link.LinkID
@ -546,10 +554,6 @@ func (f *Fs) newObjectWithLink(ctx context.Context, remote string, link *proton.
o.mimetype = link.MIMEType
o.link = link
fileSystemAttrs, err := o.fs.readMetaDataForLink(ctx, link)
if err != nil {
return nil, err
}
if fileSystemAttrs != nil {
o.modTime = fileSystemAttrs.ModificationTime
o.originalSize = &fileSystemAttrs.Size
@ -557,18 +561,23 @@ func (f *Fs) newObjectWithLink(ctx context.Context, remote string, link *proton.
o.digests = &fileSystemAttrs.Digests
}
return o, nil
return nil
}
// Return an Object from a path only
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObject(ctx context.Context, remote string) (fs.Object, error) {
link, err := f.getObjectLink(ctx, remote)
func (f *Fs) newObjectWithLink(ctx context.Context, remote string, link *proton.Link) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
err := o.readMetaData(ctx, link)
if err != nil {
return nil, err
}
return f.newObjectWithLink(ctx, remote, link)
return o, nil
}
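The refactor above moves the metadata fetch into a memoized readMetaData so that NewObject and the listing path share one code path. A stripped-down sketch of the memoization, with a plain string standing in for *proton.Link:

```go
package main

import "fmt"

type object struct {
	remote string
	link   *string // stands in for *proton.Link
}

// readMetaData is a no-op once the link is known, mirroring the
// o.link != nil guard in the hunk above.
func (o *object) readMetaData(fetch func(string) (*string, error)) error {
	if o.link != nil {
		return nil
	}
	link, err := fetch(o.remote)
	if err != nil {
		return err
	}
	o.link = link
	return nil
}

func main() {
	calls := 0
	fetch := func(remote string) (*string, error) {
		calls++
		s := "link-for-" + remote
		return &s, nil
	}
	o := &object{remote: "a.txt"}
	_ = o.readMetaData(fetch)
	_ = o.readMetaData(fetch) // cached: fetch is not called again
	fmt.Println("fetch calls:", calls) // 1
}
```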
// List the objects and directories in dir into entries. The


@ -2606,35 +2606,6 @@ knows about - please make a bug report if not.
`,
Default: fs.Tristate{},
Advanced: true,
}, {
Name: "directory_bucket",
Help: strings.ReplaceAll(`Set to use AWS Directory Buckets
If you are using an AWS Directory Bucket then set this flag.
This will ensure no |Content-Md5| headers are sent and ensure |ETag|
headers are not interpreted as MD5 sums. |X-Amz-Meta-Md5chksum| will
be set on all objects whether single or multipart uploaded.
This also sets |no_check_bucket = true|.
Note that Directory Buckets do not support:
- Versioning
- |Content-Encoding: gzip|
Rclone limitations with Directory Buckets:
- rclone does not support creating Directory Buckets with |rclone mkdir|
- ... or removing them with |rclone rmdir| yet
- Directory Buckets do not appear when doing |rclone lsf| at the top level.
- Rclone can't remove auto created directories yet. In theory this should
work with |directory_markers = true| but it doesn't.
- Directories don't seem to appear in recursive (ListR) listings.
`, "|", "`"),
Default: false,
Advanced: true,
Provider: "AWS",
}, {
Name: "sdk_log_mode",
Help: strings.ReplaceAll(`Set to debug the SDK
@ -2809,7 +2780,6 @@ type Options struct {
UseMultipartUploads fs.Tristate `config:"use_multipart_uploads"`
UseUnsignedPayload fs.Tristate `config:"use_unsigned_payload"`
SDKLogMode sdkLogMode `config:"sdk_log_mode"`
DirectoryBucket bool `config:"directory_bucket"`
}
// Fs represents a remote s3 server
@ -3398,6 +3368,10 @@ func setQuirks(opt *Options) {
opt.ChunkSize = 64 * fs.Mebi
}
useAlreadyExists = false // returns BucketAlreadyExists
// Storj doesn't support multi-part server side copy:
// https://github.com/storj/roadmap/issues/40
// So make the cutoff very large, which it does support
opt.CopyCutoff = math.MaxInt64
case "Synology":
useMultipartEtag = false
useAlreadyExists = false // untested
@ -3578,14 +3552,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// MD5 digest of their object data.
f.etagIsNotMD5 = true
}
if opt.DirectoryBucket {
// Objects uploaded to directory buckets appear to have random ETags
//
// This doesn't appear to be documented
f.etagIsNotMD5 = true
// The normal API doesn't work for creating directory buckets, so don't try
f.opt.NoCheckBucket = true
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
@ -5784,7 +5750,7 @@ func (o *Object) downloadFromURL(ctx context.Context, bucketPath string, options
ContentEncoding: header("Content-Encoding"),
ContentLanguage: header("Content-Language"),
ContentType: header("Content-Type"),
StorageClass: types.StorageClass(*header("X-Amz-Storage-Class")),
StorageClass: types.StorageClass(deref(header("X-Amz-Storage-Class"))),
}
o.setMetaData(&head)
return resp.Body, err
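The deref call above guards against a nil header pointer before converting to a StorageClass. A generic nil-safe dereference could be sketched like this (rclone's actual helper may differ in name and shape):

```go
package main

import "fmt"

// deref returns the pointed-to value, or the zero value for a nil pointer,
// avoiding the nil-pointer panic the hunk above fixes.
func deref[T any](p *T) T {
	if p == nil {
		var zero T
		return zero
	}
	return *p
}

func main() {
	var s *string
	fmt.Printf("%q\n", deref(s)) // "" instead of a panic
}
```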
@ -5978,8 +5944,8 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
chunkSize: int64(chunkSize),
size: size,
f: f,
bucket: mOut.Bucket,
key: mOut.Key,
bucket: ui.req.Bucket,
key: ui.req.Key,
uploadID: mOut.UploadId,
multiPartUploadInput: &mReq,
completedParts: make([]types.CompletedPart, 0),
@ -6067,10 +6033,6 @@ func (w *s3ChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
SSECustomerKey: w.multiPartUploadInput.SSECustomerKey,
SSECustomerKeyMD5: w.multiPartUploadInput.SSECustomerKeyMD5,
}
if w.f.opt.DirectoryBucket {
// Directory buckets do not support "Content-Md5" header
uploadPartReq.ContentMD5 = nil
}
var uout *s3.UploadPartOutput
err = w.f.pacer.Call(func() (bool, error) {
// rewind the reader on retry and after reading md5
@ -6347,7 +6309,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
if (multipart || o.fs.etagIsNotMD5) && !o.fs.opt.DisableChecksum {
// Set the md5sum as metadata on the object if
// - a multipart upload
// - the Etag is not an MD5, eg when using SSE/SSE-C or directory buckets
// - the Etag is not an MD5, eg when using SSE/SSE-C
// provided checksums aren't disabled
ui.req.Metadata[metaMD5Hash] = md5sumBase64
}
@ -6362,7 +6324,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
if size >= 0 {
ui.req.ContentLength = &size
}
if md5sumBase64 != "" && !o.fs.opt.DirectoryBucket {
if md5sumBase64 != "" {
ui.req.ContentMD5 = &md5sumBase64
}
if o.fs.opt.RequesterPays {


@ -14,30 +14,21 @@ import (
"io"
"net/http"
"path"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest"
)
func (f *Fs) shouldRetryChunkMerge(ctx context.Context, resp *http.Response, err error, sleepTime *time.Duration, wasLocked *bool) (bool, error) {
func (f *Fs) shouldRetryChunkMerge(ctx context.Context, resp *http.Response, err error) (bool, error) {
// Not found. Can be returned by NextCloud when merging chunks of an upload.
if resp != nil && resp.StatusCode == 404 {
if *wasLocked {
// Assume a 404 error after we've received a 423 error is actually a success
return false, nil
}
return true, err
}
// 423 LOCKED
if resp != nil && resp.StatusCode == 423 {
*wasLocked = true
fs.Logf(f, "Sleeping for %v to wait for chunks to be merged after 423 error", *sleepTime)
time.Sleep(*sleepTime)
*sleepTime *= 2
return true, fmt.Errorf("merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: %w", err)
return false, fmt.Errorf("merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: %w", err)
}
return f.shouldRetry(ctx, resp, err)
@ -189,11 +180,9 @@ func (o *Object) mergeChunks(ctx context.Context, uploadDir string, options []fs
}
opts.ExtraHeaders = o.extraHeaders(ctx, src)
opts.ExtraHeaders["Destination"] = destinationURL.String()
sleepTime := 5 * time.Second
wasLocked := false
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetryChunkMerge(ctx, resp, err, &sleepTime, &wasLocked)
return o.fs.shouldRetryChunkMerge(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("finalize chunked upload failed, destinationURL: \"%s\": %w", destinationURL, err)


@ -27,8 +27,8 @@ func (t *Time) UnmarshalJSON(data []byte) error {
return nil
}
// OAuthUser is a Zoho user; we are only interested in the ZUID here
type OAuthUser struct {
// User is a Zoho user; we are only interested in the ZUID here
type User struct {
FirstName string `json:"First_Name"`
Email string `json:"Email"`
LastName string `json:"Last_Name"`
@ -36,41 +36,12 @@ type OAuthUser struct {
ZUID int64 `json:"ZUID"`
}
// UserInfoResponse is returned by the user info API.
type UserInfoResponse struct {
Data struct {
ID string `json:"id"`
Type string `json:"users"`
Attributes struct {
EmailID string `json:"email_id"`
Edition string `json:"edition"`
} `json:"attributes"`
} `json:"data"`
}
// PrivateSpaceInfo gives basic information about a users private folder.
type PrivateSpaceInfo struct {
Data struct {
ID string `json:"id"`
Type string `json:"string"`
} `json:"data"`
}
// CurrentTeamInfo gives information about the current user in a team.
type CurrentTeamInfo struct {
Data struct {
ID string `json:"id"`
Type string `json:"string"`
}
}
// TeamWorkspace represents a Zoho Team, Workspace or Private Space
// TeamWorkspace represents a Zoho Team or workspace
// It's actually a VERY large JSON object that differs between
// Team, Workspace and Private Space, but we are only interested in some fields
// that all of them have, so we can use the same struct.
// Team and Workspace, but we are only interested in some fields
// that both of them have, so we can use the same struct for both
type TeamWorkspace struct {
ID string `json:"id"`
Type string `json:"type"`
Attributes struct {
Name string `json:"name"`
Created Time `json:"created_time_in_millisecond"`
@ -78,8 +49,7 @@ type TeamWorkspace struct {
} `json:"attributes"`
}
// TeamWorkspaceResponse is the response to the list teams API, list workspace API
// or list team private spaces API.
// TeamWorkspaceResponse is the response to the list teams API
type TeamWorkspaceResponse struct {
TeamWorkspace []TeamWorkspace `json:"data"`
}
@ -210,38 +180,11 @@ func (ui *UploadInfo) GetUploadFileInfo() (*UploadFileInfo, error) {
return &ufi, nil
}
// LargeUploadInfo is once again a slightly different version of UploadInfo
// returned as part of a LargeUploadResponse by the large file upload API.
type LargeUploadInfo struct {
Attributes struct {
ParentID string `json:"parent_id"`
FileName string `json:"file_name"`
RessourceID string `json:"resource_id"`
FileInfo string `json:"file_info"`
} `json:"attributes"`
}
// GetUploadFileInfo decodes the embedded FileInfo
func (ui *LargeUploadInfo) GetUploadFileInfo() (*UploadFileInfo, error) {
var ufi UploadFileInfo
err := json.Unmarshal([]byte(ui.Attributes.FileInfo), &ufi)
if err != nil {
return nil, fmt.Errorf("failed to decode FileInfo: %w", err)
}
return &ufi, nil
}
// UploadResponse is the response to a file Upload
type UploadResponse struct {
Uploads []UploadInfo `json:"data"`
}
// LargeUploadResponse is the response returned by large file upload API.
type LargeUploadResponse struct {
Uploads []LargeUploadInfo `json:"data"`
Status string `json:"status"`
}
// WriteMetadataRequest is used to write metadata for a
// single item
type WriteMetadataRequest struct {


@ -14,7 +14,6 @@ import (
"strings"
"time"
"github.com/google/uuid"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/random"
@ -37,11 +36,9 @@ const (
rcloneClientID = "1000.46MXF275FM2XV7QCHX5A7K3LGME66B"
rcloneEncryptedClientSecret = "U-2gxclZQBcOG9NPhjiXAhj-f0uQ137D0zar8YyNHXHkQZlTeSpIOQfmCb4oSpvosJp_SJLXmLLeUA"
minSleep = 10 * time.Millisecond
maxSleep = 60 * time.Second
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
configRootID = "root_folder_id"
defaultUploadCutoff = 10 * 1024 * 1024 // 10 MiB
)
// Globals
@ -53,7 +50,6 @@ var (
"WorkDrive.team.READ",
"WorkDrive.workspace.READ",
"WorkDrive.files.ALL",
"ZohoFiles.files.ALL",
},
Endpoint: oauth2.Endpoint{
AuthURL: "https://accounts.zoho.eu/oauth/v2/auth",
@ -65,8 +61,6 @@ var (
RedirectURL: oauthutil.RedirectLocalhostURL,
}
rootURL = "https://workdrive.zoho.eu/api/v1"
downloadURL = "https://download.zoho.eu/v1/workdrive"
uploadURL = "http://upload.zoho.eu/workdrive-api/v1/"
accountsURL = "https://accounts.zoho.eu"
)
@ -85,7 +79,7 @@ func init() {
getSrvs := func() (authSrv, apiSrv *rest.Client, err error) {
oAuthClient, _, err := oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
return nil, nil, fmt.Errorf("failed to load OAuth client: %w", err)
return nil, nil, fmt.Errorf("failed to load oAuthClient: %w", err)
}
authSrv = rest.NewClient(oAuthClient).SetRoot(accountsURL)
apiSrv = rest.NewClient(oAuthClient).SetRoot(rootURL)
@ -94,12 +88,12 @@ func init() {
switch config.State {
case "":
return oauthutil.ConfigOut("type", &oauthutil.Options{
return oauthutil.ConfigOut("teams", &oauthutil.Options{
OAuth2Config: oauthConfig,
// No refresh token unless ApprovalForce is set
OAuth2Opts: []oauth2.AuthCodeOption{oauth2.ApprovalForce},
})
case "type":
case "teams":
// We need to rewrite the token type to "Zoho-oauthtoken" because Zoho wants
// its own custom type
token, err := oauthutil.GetToken(name, m)
@ -114,43 +108,24 @@ func init() {
}
}
_, apiSrv, err := getSrvs()
authSrv, apiSrv, err := getSrvs()
if err != nil {
return nil, err
}
userInfo, err := getUserInfo(ctx, apiSrv)
if err != nil {
return nil, err
// Get the user Info
opts := rest.Opts{
Method: "GET",
Path: "/oauth/user/info",
}
// With the personal edition only one private space is available, so configure it directly.
if userInfo.Data.Attributes.Edition == "PERSONAL" {
return fs.ConfigResult("private_space", userInfo.Data.ID)
}
// Otherwise go to team selection
return fs.ConfigResult("team", userInfo.Data.ID)
case "private_space":
_, apiSrv, err := getSrvs()
if err != nil {
return nil, err
}
workspaces, err := getPrivateSpaces(ctx, config.Result, apiSrv)
if err != nil {
return nil, err
}
return fs.ConfigChoose("workspace_end", "config_workspace", "Workspace ID", len(workspaces), func(i int) (string, string) {
workspace := workspaces[i]
return workspace.ID, workspace.Name
})
case "team":
_, apiSrv, err := getSrvs()
var user api.User
_, err = authSrv.CallJSON(ctx, &opts, nil, &user)
if err != nil {
return nil, err
}
// Get the teams
teams, err := listTeams(ctx, config.Result, apiSrv)
teams, err := listTeams(ctx, user.ZUID, apiSrv)
if err != nil {
return nil, err
}
@ -168,19 +143,9 @@ func init() {
if err != nil {
return nil, err
}
currentTeamInfo, err := getCurrentTeamInfo(ctx, teamID, apiSrv)
if err != nil {
return nil, err
}
privateSpaces, err := getPrivateSpaces(ctx, currentTeamInfo.Data.ID, apiSrv)
if err != nil {
return nil, err
}
workspaces = append(workspaces, privateSpaces...)
return fs.ConfigChoose("workspace_end", "config_workspace", "Workspace ID", len(workspaces), func(i int) (string, string) {
workspace := workspaces[i]
return workspace.ID, workspace.Name
return workspace.ID, workspace.Attributes.Name
})
case "workspace_end":
workspaceID := config.Result
@ -214,13 +179,7 @@ browser.`,
}, {
Value: "com.au",
Help: "Australia",
}},
}, {
Name: "upload_cutoff",
Help: "Cutoff for switching to large file upload api (>= 10 MiB).",
Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true,
}, {
}}}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
@ -234,7 +193,6 @@ browser.`,
// Options defines the configuration for this backend
type Options struct {
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
RootFolderID string `config:"root_folder_id"`
Region string `config:"region"`
Enc encoder.MultiEncoder `config:"encoding"`
@ -242,15 +200,13 @@ type Options struct {
// Fs represents a remote workdrive
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
downloadsrv *rest.Client // the connection to the download server
uploadsrv *rest.Client // the connection to the upload server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
srv *rest.Client // the connection to the server
dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls
}
// Object describes a Zoho WorkDrive object
@ -273,8 +229,6 @@ func setupRegion(m configmap.Mapper) error {
return errors.New("no region set")
}
rootURL = fmt.Sprintf("https://workdrive.zoho.%s/api/v1", region)
downloadURL = fmt.Sprintf("https://download.zoho.%s/v1/workdrive", region)
uploadURL = fmt.Sprintf("https://upload.zoho.%s/workdrive-api/v1", region)
accountsURL = fmt.Sprintf("https://accounts.zoho.%s", region)
oauthConfig.Endpoint.AuthURL = fmt.Sprintf("https://accounts.zoho.%s/oauth/v2/auth", region)
oauthConfig.Endpoint.TokenURL = fmt.Sprintf("https://accounts.zoho.%s/oauth/v2/token", region)
@ -283,63 +237,11 @@ func setupRegion(m configmap.Mapper) error {
// ------------------------------------------------------------
type workspaceInfo struct {
ID string
Name string
}
func getUserInfo(ctx context.Context, srv *rest.Client) (*api.UserInfoResponse, error) {
var userInfo api.UserInfoResponse
opts := rest.Opts{
Method: "GET",
Path: "/users/me",
ExtraHeaders: map[string]string{"Accept": "application/vnd.api+json"},
}
_, err := srv.CallJSON(ctx, &opts, nil, &userInfo)
if err != nil {
return nil, err
}
return &userInfo, nil
}
func getCurrentTeamInfo(ctx context.Context, teamID string, srv *rest.Client) (*api.CurrentTeamInfo, error) {
var currentTeamInfo api.CurrentTeamInfo
opts := rest.Opts{
Method: "GET",
Path: "/teams/" + teamID + "/currentuser",
ExtraHeaders: map[string]string{"Accept": "application/vnd.api+json"},
}
_, err := srv.CallJSON(ctx, &opts, nil, &currentTeamInfo)
if err != nil {
return nil, err
}
return &currentTeamInfo, err
}
func getPrivateSpaces(ctx context.Context, teamUserID string, srv *rest.Client) ([]workspaceInfo, error) {
var privateSpaceListResponse api.TeamWorkspaceResponse
opts := rest.Opts{
Method: "GET",
Path: "/users/" + teamUserID + "/privatespace",
ExtraHeaders: map[string]string{"Accept": "application/vnd.api+json"},
}
_, err := srv.CallJSON(ctx, &opts, nil, &privateSpaceListResponse)
if err != nil {
return nil, err
}
workspaceList := make([]workspaceInfo, 0, len(privateSpaceListResponse.TeamWorkspace))
for _, workspace := range privateSpaceListResponse.TeamWorkspace {
workspaceList = append(workspaceList, workspaceInfo{ID: workspace.ID, Name: "My Space"})
}
return workspaceList, err
}
func listTeams(ctx context.Context, zuid string, srv *rest.Client) ([]api.TeamWorkspace, error) {
func listTeams(ctx context.Context, uid int64, srv *rest.Client) ([]api.TeamWorkspace, error) {
var teamList api.TeamWorkspaceResponse
opts := rest.Opts{
Method: "GET",
Path: "/users/" + zuid + "/teams",
Path: "/users/" + strconv.FormatInt(uid, 10) + "/teams",
ExtraHeaders: map[string]string{"Accept": "application/vnd.api+json"},
}
_, err := srv.CallJSON(ctx, &opts, nil, &teamList)
@ -349,24 +251,18 @@ func listTeams(ctx context.Context, zuid string, srv *rest.Client) ([]api.TeamWo
return teamList.TeamWorkspace, nil
}
func listWorkspaces(ctx context.Context, teamID string, srv *rest.Client) ([]workspaceInfo, error) {
var workspaceListResponse api.TeamWorkspaceResponse
func listWorkspaces(ctx context.Context, teamID string, srv *rest.Client) ([]api.TeamWorkspace, error) {
var workspaceList api.TeamWorkspaceResponse
opts := rest.Opts{
Method: "GET",
Path: "/teams/" + teamID + "/workspaces",
ExtraHeaders: map[string]string{"Accept": "application/vnd.api+json"},
}
_, err := srv.CallJSON(ctx, &opts, nil, &workspaceListResponse)
_, err := srv.CallJSON(ctx, &opts, nil, &workspaceList)
if err != nil {
return nil, err
}
workspaceList := make([]workspaceInfo, 0, len(workspaceListResponse.TeamWorkspace))
for _, workspace := range workspaceListResponse.TeamWorkspace {
workspaceList = append(workspaceList, workspaceInfo{ID: workspace.ID, Name: workspace.Attributes.Name})
}
return workspaceList, nil
return workspaceList.TeamWorkspace, nil
}
// --------------------------------------------------------------
@ -389,20 +285,13 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
}
authRetry := false
// Bail out early if we are missing OAuth Scopes.
if resp != nil && resp.StatusCode == 401 && strings.Contains(resp.Status, "INVALID_OAUTHSCOPE") {
fs.Errorf(nil, "zoho: missing OAuth Scope. Run rclone config reconnect to fix this issue.")
return false, err
}
if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Contains(resp.Header["Www-Authenticate"][0], "expired_token") {
authRetry = true
fs.Debugf(nil, "Should retry: %v", err)
}
if resp != nil && resp.StatusCode == 429 {
err = pacer.RetryAfterError(err, 60*time.Second)
fs.Debugf(nil, "Too many requests. Trying again in %d seconds.", 60)
return true, err
fs.Errorf(nil, "zoho: rate limit error received, sleeping for 60s: %v", err)
time.Sleep(60 * time.Second)
}
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
@ -500,11 +389,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err := configstruct.Set(m, opt); err != nil {
return nil, err
}
if opt.UploadCutoff < defaultUploadCutoff {
return nil, fmt.Errorf("zoho: upload cutoff (%v) must be greater than equal to %v", opt.UploadCutoff, fs.SizeSuffix(defaultUploadCutoff))
}
err := setupRegion(m)
if err != nil {
return nil, err
@ -517,13 +401,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
f := &Fs{
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
downloadsrv: rest.NewClient(oAuthClient).SetRoot(downloadURL),
uploadsrv: rest.NewClient(oAuthClient).SetRoot(uploadURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
name: name,
root: root,
opt: *opt,
srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
@ -761,61 +643,9 @@ func (f *Fs) createObject(ctx context.Context, remote string, size int64, modTim
return
}
func (f *Fs) uploadLargeFile(ctx context.Context, name string, parent string, size int64, in io.Reader, options ...fs.OpenOption) (*api.Item, error) {
opts := rest.Opts{
Method: "POST",
Path: "/stream/upload",
Body: in,
ContentLength: &size,
ContentType: "application/octet-stream",
Options: options,
ExtraHeaders: map[string]string{
"x-filename": url.QueryEscape(name),
"x-parent_id": parent,
"override-name-exist": "true",
"upload-id": uuid.New().String(),
"x-streammode": "1",
},
}
var err error
var resp *http.Response
var uploadResponse *api.LargeUploadResponse
err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err = f.uploadsrv.CallJSON(ctx, &opts, nil, &uploadResponse)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("upload large error: %v", err)
}
if len(uploadResponse.Uploads) != 1 {
return nil, errors.New("upload: invalid response")
}
upload := uploadResponse.Uploads[0]
uploadInfo, err := upload.GetUploadFileInfo()
if err != nil {
return nil, fmt.Errorf("upload error: %w", err)
}
// Fill in the api.Item from the api.UploadFileInfo
var info api.Item
info.ID = upload.Attributes.RessourceID
info.Attributes.Name = upload.Attributes.FileName
// info.Attributes.Type = not used
info.Attributes.IsFolder = false
// info.Attributes.CreatedTime = not used
info.Attributes.ModifiedTime = uploadInfo.GetModTime()
// info.Attributes.UploadedTime = 0 not used
info.Attributes.StorageInfo.Size = uploadInfo.Size
info.Attributes.StorageInfo.FileCount = 0
info.Attributes.StorageInfo.FolderCount = 0
return &info, nil
}
func (f *Fs) upload(ctx context.Context, name string, parent string, size int64, in io.Reader, options ...fs.OpenOption) (*api.Item, error) {
params := url.Values{}
params.Set("filename", url.QueryEscape(name))
params.Set("filename", name)
params.Set("parent_id", parent)
params.Set("override-name-exist", strconv.FormatBool(true))
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, nil, "content", name)
@ -875,40 +705,21 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return existingObj, existingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
size := src.Size()
remote := src.Remote()
size := src.Size()
remote := src.Remote()
// Create the directory for the object if it doesn't exist
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return nil, err
}
// use normal upload API for small sizes (<10MiB)
if size < int64(f.opt.UploadCutoff) {
info, err := f.upload(ctx, f.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
// large file API otherwise
info, err := f.uploadLargeFile(ctx, f.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
default:
// Create the directory for the object if it doesn't exist
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return nil, err
}
// Upload the file
info, err := f.upload(ctx, f.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
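An illustrative reduction of the size-based dispatch in Put above: uploads below the cutoff use the normal upload API, larger ones the large-file API. The 10 MiB figure comes from defaultUploadCutoff in the surrounding diff; everything else here is made up:

```go
package main

import "fmt"

const uploadCutoff = 10 * 1024 * 1024 // 10 MiB, per defaultUploadCutoff above

// upload picks an API by size, echoing the branch in Put.
func upload(size int64) string {
	if size < uploadCutoff {
		return "normal upload API"
	}
	return "large file upload API"
}

func main() {
	fmt.Println(upload(4 << 20))  // normal upload API
	fmt.Println(upload(64 << 20)) // large file upload API
}
```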
// Mkdir creates the container if it doesn't exist
@ -1348,7 +1159,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.downloadsrv.Call(ctx, &opts)
resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err)
})
if err != nil {
@ -1372,22 +1183,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
// use normal upload API for small sizes (<10MiB)
if size < int64(o.fs.opt.UploadCutoff) {
info, err := o.fs.upload(ctx, o.fs.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return err
}
return o.setMetaData(info)
}
// large file API otherwise
info, err := o.fs.uploadLargeFile(ctx, o.fs.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
// Overwrite the old file
info, err := o.fs.upload(ctx, o.fs.opt.Enc.FromStandardName(leaf), directoryID, size, in, options...)
if err != nil {
return err
}
return o.setMetaData(info)
}


@ -11,8 +11,7 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestZoho:",
SkipInvalidUTF8: true,
NilObject: (*zoho.Object)(nil),
RemoteName: "TestZoho:",
NilObject: (*zoho.Object)(nil),
})
}


@ -21,12 +21,12 @@ def find_backends():
def output_docs(backend, out, cwd):
"""Output documentation for backend options to out"""
out.flush()
subprocess.check_call(["./rclone", "--config=/notfound", "help", "backend", backend], stdout=out)
subprocess.check_call(["./rclone", "help", "backend", backend], stdout=out)
def output_backend_tool_docs(backend, out, cwd):
"""Output documentation for backend tool to out"""
out.flush()
subprocess.call(["./rclone", "--config=/notfound", "backend", "help", backend], stdout=out, stderr=subprocess.DEVNULL)
subprocess.call(["./rclone", "backend", "help", backend], stdout=out, stderr=subprocess.DEVNULL)
def alter_doc(backend):
"""Alter the documentation for backend"""


@ -5,20 +5,13 @@ import (
"bytes"
"log"
"github.com/rclone/rclone/fs"
"github.com/sirupsen/logrus"
)
// CaptureOutput runs a function capturing its output.
func CaptureOutput(fun func()) []byte {
logSave := log.Writer()
logrusSave := logrus.StandardLogger().Writer()
defer func() {
err := logrusSave.Close()
if err != nil {
fs.Errorf(nil, "error closing logrusSave: %v", err)
}
}()
logrusSave := logrus.StandardLogger().Out
buf := &bytes.Buffer{}
log.SetOutput(buf)
logrus.SetOutput(buf)


@ -15,7 +15,6 @@ import (
"path/filepath"
"regexp"
"runtime"
"slices"
"sort"
"strconv"
"strings"
@ -208,16 +207,15 @@ type bisyncTest struct {
parent1 fs.Fs
parent2 fs.Fs
// global flags
argRemote1 string
argRemote2 string
noCompare bool
noCleanup bool
golden bool
debug bool
stopAt int
TestFn bisync.TestFunc
ignoreModtime bool // ignore modtimes when comparing final listings, for backends without support
ignoreBlankHash bool // ignore blank hashes for backends where we allow them to be blank
argRemote1 string
argRemote2 string
noCompare bool
noCleanup bool
golden bool
debug bool
stopAt int
TestFn bisync.TestFunc
ignoreModtime bool // ignore modtimes when comparing final listings, for backends without support
}
var color = bisync.Color
@ -948,10 +946,6 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
if (!b.fs1.Features().CanHaveEmptyDirectories || !b.fs2.Features().CanHaveEmptyDirectories) && (b.testCase == "createemptysrcdirs" || b.testCase == "rmdirs") {
b.t.Skip("skipping test as remote does not support empty dirs")
}
ignoreHashBackends := []string{"TestWebdavNextcloud", "TestWebdavOwncloud", "TestAzureFiles"} // backends that support hashes but allow them to be blank
if slices.ContainsFunc(ignoreHashBackends, func(prefix string) bool { return strings.HasPrefix(b.fs1.Name(), prefix) }) || slices.ContainsFunc(ignoreHashBackends, func(prefix string) bool { return strings.HasPrefix(b.fs2.Name(), prefix) }) {
b.ignoreBlankHash = true
}
if b.fs1.Precision() == fs.ModTimeNotSupported || b.fs2.Precision() == fs.ModTimeNotSupported {
if b.testCase != "nomodtime" {
b.t.Skip("skipping test as at least one remote does not support setting modtime")
@ -1557,12 +1551,6 @@ func (b *bisyncTest) mangleResult(dir, file string, golden bool) string {
if b.fs1.Hashes() == hash.Set(hash.None) || b.fs2.Hashes() == hash.Set(hash.None) {
logReplacements = append(logReplacements, `^.*{hashtype} differ.*$`, dropMe)
}
if b.ignoreBlankHash {
logReplacements = append(logReplacements,
`^.*hash is missing.*$`, dropMe,
`^.*not equal on recheck.*$`, dropMe,
)
}
rep := logReplacements
if b.testCase == "dry_run" {
rep = append(rep, dryrunReplacements...)


@ -20,7 +20,6 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/flags"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/spf13/cobra"
@ -194,7 +193,7 @@ var commandDefinition = &cobra.Command{
cmd.Run(false, true, command, func() error {
err := Bisync(ctx, fs1, fs2, &opt)
if err == ErrBisyncAborted {
return fserrors.FatalError(err)
os.Exit(2)
}
return err
})


@ -10,7 +10,6 @@ import (
"io"
"os"
"regexp"
"slices"
"sort"
"strconv"
"strings"
@ -22,6 +21,7 @@ import (
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"golang.org/x/exp/slices"
)
// ListingHeader defines first line of a listing


@ -66,7 +66,8 @@ func quotePath(path string) string {
return escapePath(path, true)
}
var Colors bool // Colors controls whether terminal colors are enabled
// Colors controls whether terminal colors are enabled
var Colors bool
// Color handles terminal colors for bisync
func Color(style string, s string) string {


@ -23,7 +23,7 @@ import (
"github.com/rclone/rclone/lib/terminal"
)
// ErrBisyncAborted signals that bisync is aborted and forces non-zero exit code
// ErrBisyncAborted signals that bisync is aborted and forces exit code 2
var ErrBisyncAborted = errors.New("bisync aborted")
// bisyncRun keeps bisync runtime state


@ -50,6 +50,7 @@ var (
version bool
// Errors
errorCommandNotFound = errors.New("command not found")
errorUncategorized = errors.New("uncategorized error")
errorNotEnoughArguments = errors.New("not enough arguments")
errorTooManyArguments = errors.New("too many arguments")
)
@ -494,6 +495,8 @@ func resolveExitCode(err error) {
os.Exit(exitcode.DirNotFound)
case errors.Is(err, fs.ErrorObjectNotFound):
os.Exit(exitcode.FileNotFound)
case errors.Is(err, errorUncategorized):
os.Exit(exitcode.UncategorizedError)
case errors.Is(err, accounting.ErrorMaxTransferLimitReached):
os.Exit(exitcode.TransferExceeded)
case errors.Is(err, fssync.ErrorMaxDurationReached):
@ -504,10 +507,8 @@ func resolveExitCode(err error) {
os.Exit(exitcode.NoRetryError)
case fserrors.IsFatalError(err):
os.Exit(exitcode.FatalError)
case errors.Is(err, errorCommandNotFound), errors.Is(err, errorNotEnoughArguments), errors.Is(err, errorTooManyArguments):
os.Exit(exitcode.UsageError)
default:
os.Exit(exitcode.UncategorizedError)
os.Exit(exitcode.UsageError)
}
}
@ -535,7 +536,6 @@ func Main() {
if strings.HasPrefix(err.Error(), "unknown command") && selfupdateEnabled {
Root.PrintErrf("You could use '%s selfupdate' to get latest features.\n\n", Root.CommandPath())
}
fs.Logf(nil, "Fatal error: %v", err)
os.Exit(exitcode.UsageError)
fs.Fatalf(nil, "Fatal error: %v", err)
}
}


@ -107,7 +107,7 @@ func (lrw *loggingResponseWriter) logRequest(code int, err interface{}) {
err = ""
}
fs.LogPrintf(level, lrw.request.URL, "%s %s %d %s %s",
fs.LogLevelPrintf(level, lrw.request.URL, "%s %s %d %s %s",
lrw.request.RemoteAddr, lrw.request.Method, code,
lrw.request.Header.Get("SOAPACTION"), err)
}


@ -174,7 +174,7 @@ func TestCmdTest(t *testing.T) {
// Test error and error output
out, err = rclone("version", "--provoke-an-error")
if assert.Error(t, err) {
assert.Contains(t, err.Error(), "exit status 2")
assert.Contains(t, err.Error(), "exit status 1")
assert.Contains(t, out, "Error: unknown flag")
}


@ -889,9 +889,3 @@ put them back in again.` >}}
* Mathieu Moreau <mrx23dot@users.noreply.github.com>
* fsantagostinobietti <6057026+fsantagostinobietti@users.noreply.github.com>
* Oleg Kunitsyn <114359669+hiddenmarten@users.noreply.github.com>
* Divyam <47589864+divyam234@users.noreply.github.com>
* ttionya <ttionya@users.noreply.github.com>
* quiescens <quiescens@gmail.com>
* rishi.sridhar <rishi.sridhar@zohocorp.com>
* Lawrence Murray <lawrence@indii.org>
* Leandro Piccilli <leandro.piccilli@thalesgroup.com>


@ -180,13 +180,6 @@ If the resource has multiple user-assigned identities you will need to
unset `env_auth` and set `use_msi` instead. See the [`use_msi`
section](#use_msi).
If you are operating in disconnected clouds, or private clouds such as
Azure Stack you may want to set `disable_instance_discovery = true`.
This determines whether rclone requests Microsoft Entra instance
metadata from `https://login.microsoft.com/` before authenticating.
Setting this to `true` will skip this request, making you responsible
for ensuring the configured authority is valid and trustworthy.
##### Env Auth: 3. Azure CLI credentials (as used by the az tool)
Credentials created with the `az` tool can be picked up using `env_auth`.
@ -297,16 +290,6 @@ be explicitly specified using exactly one of the `msi_object_id`,
If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
set, this is is equivalent to using `env_auth`.
#### Azure CLI tool `az` {#use_az}
Set to use the [Azure CLI tool `az`](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
Setting this can be useful if you wish to use the `az` CLI on a host with
a System Managed Identity that you do not want to use.
Don't set `env_auth` at the same time.
#### Anonymous {#anonymous}
If you want to access resources with public anonymous access then set


@ -968,15 +968,12 @@ that while concurrent bisync runs are allowed, _be very cautious_
that there is no overlap in the trees being synched between concurrent runs,
lest there be replicated files, deleted files and general mayhem.
### Exit codes
### Return codes
`rclone bisync` returns the following codes to calling program:
- `0` on a successful run,
- `1` for a non-critical failing run (a rerun may be successful),
- `2` on syntax or usage error,
- `7` for a critically aborted run (requires a `--resync` to recover).
See also the section about [exit codes](/docs/#exit-code) in main docs.
- `2` for a critically aborted run (requires a `--resync` to recover).
### Graceful Shutdown


@ -5,6 +5,36 @@ description: "Rclone Changelog"
# Changelog
## v1.68.2 - 2024-11-15
[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
* Security fixes
* local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
* Only affects users using `--metadata` and `--links` and copying files to the local backend
* See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
* build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
* This is an issue in a dependency which is used for JWT certificates
* See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
* Bug Fixes
* accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
* bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
* dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
* serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
* doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
* Local
* Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
* Fix `--copy-links` on macOS when cloning (nielash)
* Onedrive
* Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
* Pikpak
* Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
* Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
* S3
* Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
* Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
* Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
## v1.68.1 - 2024-09-24
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)


@ -510,7 +510,7 @@ rclone [flags]
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
--metadata-mapper SpaceSepList Program to run to transforming metadata before upload
--metadata-set stringArray Add metadata key=value when uploading
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -616,21 +616,18 @@ rclone [flags]
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
--pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
@ -683,7 +680,7 @@ rclone [flags]
--quatrix-skip-project-folders Skip project folders in operations
-q, --quiet Print as little stuff as possible
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -932,7 +929,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)


@ -54,7 +54,9 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@ -398,9 +400,9 @@ Note that systemd runs mount units without any environment variables including
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
Since mounting requires the `fusermount` program, rclone will use the fallback
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
is present on this PATH.
Since mounting requires the `fusermount` or `fusermount3` program,
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
## Rclone as Unix mount helper


@ -55,7 +55,9 @@ When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
The umount operation can fail, for example when the mountpoint is busy.
@ -399,9 +401,9 @@ Note that systemd runs mount units without any environment variables including
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
and you should provide `--config` and `--cache-dir` explicitly as absolute
paths via rclone arguments.
Since mounting requires the `fusermount` program, rclone will use the fallback
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
is present on this PATH.
Since mounting requires the `fusermount` or `fusermount3` program,
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
## Rclone as Unix mount helper


@ -166,7 +166,7 @@ Flags to control the Remote Control API
```
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)


@ -2868,9 +2868,9 @@ messages may not be valid after the retry. If rclone has done a retry
it will log a high priority message if the retry was successful.
### List of exit codes ###
* `0` - Success
* `1` - Error not otherwise categorised
* `2` - Syntax or usage error
* `0` - success
* `1` - Syntax or usage error
* `2` - Error not otherwise categorised
* `3` - Directory not found
* `4` - File not found
* `5` - Temporary error (one that more retries might fix) (Retry errors)
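In scripts the exit code can be checked directly against this table. A minimal example, assuming a configured remote named `remote:` (a placeholder) and a path that does not exist:

```
rclone lsf remote:no/such/dir
echo $?   # 3 - directory not found
```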


@ -1809,9 +1809,9 @@ then select "OAuth client ID".
9. It will show you a client ID and client secret. Make a note of these.
(If you selected "External" at Step 5 continue to Step 9.
(If you selected "External" at Step 5 continue to Step 10.
If you chose "Internal" you don't need to publish and can skip straight to
Step 10 but your destination drive must be part of the same Google Workspace.)
Step 11 but your destination drive must be part of the same Google Workspace.)
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
You will also want to add yourself as a test user.


@ -505,6 +505,8 @@ processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. _Use `-vv --dump filters` to see how they appear in the final regexp._
E.g. for `filter-file.txt`:
# a sample filter rule file
@ -512,6 +514,7 @@ E.g. for `filter-file.txt`:
+ *.jpg
+ *.png
+ file2.avi
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
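Because a trailing `# comment` is treated as part of the pattern, it is worth checking how the rules were actually parsed. Reusing the file above (`remote:` is a placeholder for your own remote):

```
# print the compiled regular expression for every rule
rclone ls remote: --filter-from filter-file.txt -vv --dump filters
```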


@ -115,7 +115,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
```
@ -264,7 +264,7 @@ Flags to control the Remote Control API.
```
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -300,7 +300,7 @@ Flags to control the Remote Control API.
Flags to control the Metrics HTTP endpoint.
```
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -794,21 +794,18 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
--pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote


@ -363,20 +363,6 @@ Properties:
- Type: string
- Required: false
#### --gcs-access-token
Short-lived access token.
Leave blank normally.
Needed only if you want use short-lived access tokens instead of interactive login.
Properties:
- Config: access_token
- Env Var: RCLONE_GCS_ACCESS_TOKEN
- Type: string
- Required: false
#### --gcs-anonymous
Access public buckets and objects without credentials.


@ -502,18 +502,12 @@ is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115)
**The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort**
**NB** you **can** use the [--gphotos-proxy](#gphotos-proxy) flag to use a
headless browser to download images in full resolution.
### Downloading Videos
When videos are downloaded they are downloaded in a really compressed
version of the video compared to downloading it via the Google Photos
web interface. This is covered by [bug #113672044](https://issuetracker.google.com/issues/113672044).
**NB** you **can** use the [--gphotos-proxy](#gphotos-proxy) flag to use a
headless browser to download images in full resolution.
### Duplicates
If a file name is duplicated in a directory then rclone will add the


@ -46,7 +46,7 @@ Here is an overview of the major features of each cloud storage system.
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
| PikPak | MD5 | R | No | No | R | - |
| Pixeldrain | SHA256 | R/W | No | No | R | RW |
| premiumize.me | - | - | Yes | No | R | - |
@ -521,7 +521,7 @@ upon backend-specific capabilities.
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | Yes | No | No | No |
| Microsoft Azure Files Storage | No | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | Yes ⁵ | No | No | Yes | Yes | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
| OpenStack Swift | Yes ¹ | Yes | No | No | No | Yes | Yes | No | No | Yes | No |
| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | No |
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |


@ -111,68 +111,29 @@ Properties:
Here are the Advanced options specific to pikpak (PikPak).
#### --pikpak-client-id
#### --pikpak-device-id
OAuth Client Id.
Leave blank normally.
Device ID used for authorization.
Properties:
- Config: client_id
- Env Var: RCLONE_PIKPAK_CLIENT_ID
- Config: device_id
- Env Var: RCLONE_PIKPAK_DEVICE_ID
- Type: string
- Required: false
#### --pikpak-client-secret
#### --pikpak-user-agent
OAuth Client Secret.
HTTP user agent for pikpak.
Leave blank normally.
Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.
Properties:
- Config: client_secret
- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
- Config: user_agent
- Env Var: RCLONE_PIKPAK_USER_AGENT
- Type: string
- Required: false
#### --pikpak-token
OAuth Access Token as a JSON blob.
Properties:
- Config: token
- Env Var: RCLONE_PIKPAK_TOKEN
- Type: string
- Required: false
#### --pikpak-auth-url
Auth server URL.
Leave blank to use the provider defaults.
Properties:
- Config: auth_url
- Env Var: RCLONE_PIKPAK_AUTH_URL
- Type: string
- Required: false
#### --pikpak-token-url
Token server url.
Leave blank to use the provider defaults.
Properties:
- Config: token_url
- Env Var: RCLONE_PIKPAK_TOKEN_URL
- Type: string
- Required: false
- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
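Either the flag or its environment variable can override this default. A minimal usage sketch, assuming a configured remote named `pikpak:` (the name is a placeholder):

```
# override the user agent for a single invocation
RCLONE_PIKPAK_USER_AGENT="my-agent/1.0" rclone lsf pikpak:
```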
#### --pikpak-root-folder-id


@ -2294,21 +2294,6 @@ You can also do this entirely on the command line
This is the provider used as main example and described in the [configuration](#configuration) section above.
### AWS Directory Buckets
From rclone v1.69 [Directory Buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-overview.html)
are supported.
You will need to set the `directory_buckets = true` config parameter
or use `--s3-directory-buckets`.
Note that rclone cannot yet:
- Create directory buckets
- List directory buckets
See [the --s3-directory-buckets flag](#s3-directory-buckets) for more info
### AWS Snowball Edge
[AWS Snowball](https://aws.amazon.com/snowball/) is a hardware
@ -3491,8 +3476,8 @@ chunk_size = 5M
copy_cutoff = 5M
```
[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
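To read the files back you first restore them, as noted above. A sketch using the S3 backend `restore` command, assuming a remote named `scaleway:` and a one-day restore window (both placeholders; check which options your provider supports):

```
# request a temporary STANDARD copy so the objects become readable
rclone backend restore scaleway:my-bucket/path -o priority=Standard -o lifetime=1
```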
### Seagate Lyve Cloud {#lyve}


@ -224,17 +224,6 @@ Properties:
- Type: string
- Required: false
#### --zoho-upload-cutoff
Cutoff for switching to large file upload api (>= 10 MiB).
Properties:
- Config: upload_cutoff
- Env Var: RCLONE_ZOHO_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 10Mi
#### --zoho-encoding
The encoding for the backend.


@ -1 +1 @@
v1.69.0
v1.68.2


@ -35,13 +35,12 @@ func (tb *tokenBucket) startSignalHandler() {
tb.toggledOff = !tb.toggledOff
tb.curr, tb.prev = tb.prev, tb.curr
s, limit := "disabled", "off"
s := "disabled"
if !tb.curr._isOff() {
s = "enabled"
limit = tb.currLimit.Bandwidth.String()
}
fs.Logf(nil, "Bandwidth limit %s by user (now %s)", s, limit)
fs.Logf(nil, "Bandwidth limit %s by user", s)
}()
}
}()
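For context, this handler runs when rclone receives SIGUSR2, which toggles the bandwidth limiter at runtime; the change above only affects the message that is logged. A way to exercise it, assuming a Linux system and an rclone started with `--bwlimit`:

```
# toggle the bandwidth limit of a running rclone on or off
kill -USR2 "$(pidof rclone)"
```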

fs/cache/cache.go vendored

@ -4,7 +4,6 @@ package cache
import (
"context"
"runtime"
"strings"
"sync"
"github.com/rclone/rclone/fs"
@ -13,11 +12,10 @@ import (
)
var (
once sync.Once // creation
c *cache.Cache
mu sync.Mutex // mutex to protect remap
remap = map[string]string{} // map user supplied names to canonical names - [fsString]canonicalName
childParentMap = map[string]string{} // tracks a one-to-many relationship between parent dirs and their direct children files - [child]parent
once sync.Once // creation
c *cache.Cache
mu sync.Mutex // mutex to protect remap
remap = map[string]string{} // map user supplied names to canonical names
)
// Create the cache just once
@ -59,39 +57,6 @@ func addMapping(fsString, canonicalName string) {
mu.Unlock()
}
// addChild tracks known file (child) to directory (parent) relationships.
// Note that the canonicalName of a child will always equal that of its parent,
// but not everything with an equal canonicalName is a child.
// It could be an alias or overridden version of a directory.
func addChild(child, parent string) {
if child == parent {
return
}
mu.Lock()
childParentMap[child] = parent
mu.Unlock()
}
// returns true if name is definitely known to be a child (i.e. a file, not a dir).
// returns false if name is a dir or if we don't know.
func isChild(child string) bool {
mu.Lock()
_, found := childParentMap[child]
mu.Unlock()
return found
}
// ensures that we return fs.ErrorIsFile when necessary
func getError(fsString string, err error) error {
if err != nil && err != fs.ErrorIsFile {
return err
}
if isChild(fsString) {
return fs.ErrorIsFile
}
return nil
}
// GetFn gets an fs.Fs named fsString either from the cache or creates
// it afresh with the create function
func GetFn(ctx context.Context, fsString string, create func(ctx context.Context, fsString string) (fs.Fs, error)) (f fs.Fs, err error) {
@ -104,39 +69,31 @@ func GetFn(ctx context.Context, fsString string, create func(ctx context.Context
created = ok
return f, ok, err
})
f, ok := value.(fs.Fs)
if err != nil && err != fs.ErrorIsFile {
if ok {
return f, err // for possible future uses of PutErr
}
return nil, err
}
f = value.(fs.Fs)
// Check we stored the Fs at the canonical name
if created {
canonicalName := fs.ConfigString(f)
if canonicalName != canonicalFsString {
if err == nil { // it's a dir
// Note that if err == fs.ErrorIsFile at this moment
// then we can't rename the remote as it will have the
// wrong error status, we need to add a new one.
if err == nil {
fs.Debugf(nil, "fs cache: renaming cache item %q to be canonical %q", canonicalFsString, canonicalName)
value, found := c.Rename(canonicalFsString, canonicalName)
if found {
f = value.(fs.Fs)
}
addMapping(canonicalFsString, canonicalName)
} else { // it's a file
// the fs we cache is always the file's parent, never the file,
// but we use the childParentMap to return the correct error status based on the fsString passed in.
fs.Debugf(nil, "fs cache: renaming child cache item %q to be canonical for parent %q", canonicalFsString, canonicalName)
value, found := c.Rename(canonicalFsString, canonicalName) // rename the file entry to parent
if found {
f = value.(fs.Fs) // if parent already exists, use it
}
Put(canonicalName, f) // force err == nil for the cache
addMapping(canonicalFsString, canonicalName) // note the fsString-canonicalName connection for future lookups
addChild(fsString, canonicalName) // note the file-directory connection for future lookups
} else {
fs.Debugf(nil, "fs cache: adding new entry for parent of %q, %q", canonicalFsString, canonicalName)
Put(canonicalName, f)
}
}
}
return f, getError(fsString, err) // ensure fs.ErrorIsFile is returned when necessary
return f, err
}
// Pin f into the cache until Unpin is called
@ -154,6 +111,7 @@ func PinUntilFinalized(f fs.Fs, x interface{}) {
runtime.SetFinalizer(x, func(_ interface{}) {
Unpin(f)
})
}
// Unpin f from the cache
@ -216,9 +174,6 @@ func PutErr(fsString string, f fs.Fs, err error) {
canonicalName := fs.ConfigString(f)
c.PutErr(canonicalName, f, err)
addMapping(fsString, canonicalName)
if err == fs.ErrorIsFile {
addChild(fsString, canonicalName)
}
}
// Put puts an fs.Fs named fsString into the cache
@ -231,7 +186,6 @@ func Put(fsString string, f fs.Fs) {
// Returns number of entries deleted
func ClearConfig(name string) (deleted int) {
createOnFirstUse()
ClearMappingsPrefix(name)
return c.DeletePrefix(name + ":")
}
@ -239,7 +193,6 @@ func ClearConfig(name string) (deleted int) {
func Clear() {
createOnFirstUse()
c.Clear()
ClearMappings()
}
// Entries returns the number of entries in the cache
@ -247,39 +200,3 @@ func Entries() int {
createOnFirstUse()
return c.Entries()
}
// ClearMappings removes everything from remap and childParentMap
func ClearMappings() {
mu.Lock()
defer mu.Unlock()
remap = map[string]string{}
childParentMap = map[string]string{}
}
// ClearMappingsPrefix deletes all mappings to parents with given prefix
//
// Returns number of entries deleted
func ClearMappingsPrefix(prefix string) (deleted int) {
mu.Lock()
do := func(mapping map[string]string) {
for key, val := range mapping {
if !strings.HasPrefix(val, prefix) {
continue
}
delete(mapping, key)
deleted++
}
}
do(remap)
do(childParentMap)
mu.Unlock()
return deleted
}
// EntriesWithPinCount returns the number of pinned and unpinned entries in the cache
//
// Each entry is counted only once, regardless of entry.pinCount
func EntriesWithPinCount() (pinned, unpinned int) {
createOnFirstUse()
return c.EntriesWithPinCount()
}


@ -24,7 +24,7 @@ func mockNewFs(t *testing.T) func(ctx context.Context, path string) (fs.Fs, erro
switch path {
case "mock:/":
return mockfs.NewFs(ctx, "mock", "/", nil)
case "mock:/file.txt", "mock:file.txt", "mock:/file2.txt", "mock:file2.txt":
case "mock:/file.txt", "mock:file.txt":
fMock, err := mockfs.NewFs(ctx, "mock", "/", nil)
require.NoError(t, err)
return fMock, fs.ErrorIsFile
@ -55,7 +55,6 @@ func TestGet(t *testing.T) {
}
func TestGetFile(t *testing.T) {
defer ClearMappings()
create := mockNewFs(t)
assert.Equal(t, 0, Entries())
@ -64,7 +63,7 @@ func TestGetFile(t *testing.T) {
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, f)
assert.Equal(t, 1, Entries())
assert.Equal(t, 2, Entries())
f2, err := GetFn(context.Background(), "mock:/file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
@ -72,7 +71,7 @@ func TestGetFile(t *testing.T) {
assert.Equal(t, f, f2)
// check it is also found when referred to by parent name
// check parent is there too
f2, err = GetFn(context.Background(), "mock:/", create)
require.Nil(t, err)
require.NotNil(t, f2)
@ -81,7 +80,6 @@ func TestGetFile(t *testing.T) {
}
func TestGetFile2(t *testing.T) {
defer ClearMappings()
create := mockNewFs(t)
assert.Equal(t, 0, Entries())
@ -90,7 +88,7 @@ func TestGetFile2(t *testing.T) {
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, f)
assert.Equal(t, 1, Entries())
assert.Equal(t, 2, Entries())
f2, err := GetFn(context.Background(), "mock:file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
@ -98,7 +96,7 @@ func TestGetFile2(t *testing.T) {
assert.Equal(t, f, f2)
// check it is also found when referred to by parent name
// check parent is there too
f2, err = GetFn(context.Background(), "mock:/", create)
require.Nil(t, err)
require.NotNil(t, f2)
@ -126,22 +124,22 @@ func TestPutErr(t *testing.T) {
assert.Equal(t, 0, Entries())
PutErr("mock:/", f, fs.ErrorNotFoundInConfigFile)
PutErr("mock:file.txt", f, fs.ErrorIsFile)
assert.Equal(t, 1, Entries())
fNew, err := GetFn(context.Background(), "mock:/", create)
require.Equal(t, fs.ErrorNotFoundInConfigFile, err)
fNew, err := GetFn(context.Background(), "mock:file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
require.Equal(t, f, fNew)
assert.Equal(t, 1, Entries())
// Check canonicalisation
PutErr("mock:/file.txt", f, fs.ErrorNotFoundInConfigFile)
PutErr("mock:/file.txt", f, fs.ErrorIsFile)
fNew, err = GetFn(context.Background(), "mock:/file.txt", create)
require.Equal(t, fs.ErrorNotFoundInConfigFile, err)
require.Equal(t, fs.ErrorIsFile, err)
require.Equal(t, f, fNew)
assert.Equal(t, 1, Entries())
@ -192,75 +190,6 @@ func TestPin(t *testing.T) {
Unpin(f2)
}
func TestPinFile(t *testing.T) {
defer ClearMappings()
create := mockNewFs(t)
// Test pinning and unpinning nonexistent
f, err := mockfs.NewFs(context.Background(), "mock", "/file.txt", nil)
require.NoError(t, err)
Pin(f)
Unpin(f)
// Now test pinning an existing
f2, err := GetFn(context.Background(), "mock:/file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
assert.Equal(t, 1, len(childParentMap))
Pin(f2)
assert.Equal(t, 1, Entries())
pinned, unpinned := EntriesWithPinCount()
assert.Equal(t, 1, pinned)
assert.Equal(t, 0, unpinned)
Unpin(f2)
assert.Equal(t, 1, Entries())
pinned, unpinned = EntriesWithPinCount()
assert.Equal(t, 0, pinned)
assert.Equal(t, 1, unpinned)
// try a different child of the same parent, and parent
// should not add additional cache items
called = 0 // this one does create() because we haven't seen it before and don't yet know it's a file
f3, err := GetFn(context.Background(), "mock:/file2.txt", create)
assert.Equal(t, fs.ErrorIsFile, err)
assert.Equal(t, 1, Entries())
assert.Equal(t, 2, len(childParentMap))
parent, err := GetFn(context.Background(), "mock:/", create)
assert.NoError(t, err)
assert.Equal(t, 1, Entries())
assert.Equal(t, 2, len(childParentMap))
Pin(f3)
assert.Equal(t, 1, Entries())
pinned, unpinned = EntriesWithPinCount()
assert.Equal(t, 1, pinned)
assert.Equal(t, 0, unpinned)
Unpin(f3)
assert.Equal(t, 1, Entries())
pinned, unpinned = EntriesWithPinCount()
assert.Equal(t, 0, pinned)
assert.Equal(t, 1, unpinned)
Pin(parent)
assert.Equal(t, 1, Entries())
pinned, unpinned = EntriesWithPinCount()
assert.Equal(t, 1, pinned)
assert.Equal(t, 0, unpinned)
Unpin(parent)
assert.Equal(t, 1, Entries())
pinned, unpinned = EntriesWithPinCount()
assert.Equal(t, 0, pinned)
assert.Equal(t, 1, unpinned)
// all 3 should have equal configstrings
assert.Equal(t, fs.ConfigString(f2), fs.ConfigString(f3))
assert.Equal(t, fs.ConfigString(f2), fs.ConfigString(parent))
}
func TestClearConfig(t *testing.T) {
create := mockNewFs(t)
@ -269,9 +198,9 @@ func TestClearConfig(t *testing.T) {
_, err := GetFn(context.Background(), "mock:/file.txt", create)
require.Equal(t, fs.ErrorIsFile, err)
assert.Equal(t, 1, Entries())
assert.Equal(t, 2, Entries()) // file + parent
assert.Equal(t, 1, ClearConfig("mock"))
assert.Equal(t, 2, ClearConfig("mock"))
assert.Equal(t, 0, Entries())
}


@ -22,7 +22,7 @@ func mockNewFs(t *testing.T) func() {
require.NoError(t, err)
cache.Put("mock:/", f)
cache.Put(":mock:/", f)
f, err = mockfs.NewFs(ctx, "mock", "dir/", nil)
f, err = mockfs.NewFs(ctx, "mock", "dir/file.txt", nil)
require.NoError(t, err)
cache.PutErr("mock:dir/file.txt", f, fs.ErrorIsFile)
return func() {


@ -597,108 +597,6 @@ func TestServerSideCopy(t *testing.T) {
fstest.CheckItems(t, FremoteCopy, file1)
}
// Test copying a file over itself
func TestCopyOverSelf(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
file1 := r.WriteObject(ctx, "sub dir/hello world", "hello world", t1)
r.CheckRemoteItems(t, file1)
file2 := r.WriteFile("sub dir/hello world", "hello world again", t2)
r.CheckLocalItems(t, file2)
ctx = predictDstFromLogger(ctx)
err := CopyDir(ctx, r.Fremote, r.Flocal, false)
require.NoError(t, err)
testLoggerVsLsf(ctx, r.Fremote, operations.GetLoggerOpt(ctx).JSON, t)
r.CheckRemoteItems(t, file2)
}
// Test server-side copying a file over itself
func TestServerSideCopyOverSelf(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
file1 := r.WriteObject(ctx, "sub dir/hello world", "hello world", t1)
r.CheckRemoteItems(t, file1)
FremoteCopy, _, finaliseCopy, err := fstest.RandomRemote()
require.NoError(t, err)
defer finaliseCopy()
t.Logf("Server side copy (if possible) %v -> %v", r.Fremote, FremoteCopy)
ctx = predictDstFromLogger(ctx)
err = CopyDir(ctx, FremoteCopy, r.Fremote, false)
require.NoError(t, err)
testLoggerVsLsf(ctx, r.Fremote, operations.GetLoggerOpt(ctx).JSON, t)
fstest.CheckItems(t, FremoteCopy, file1)
file2 := r.WriteObject(ctx, "sub dir/hello world", "hello world again", t2)
r.CheckRemoteItems(t, file2)
ctx = predictDstFromLogger(ctx)
err = CopyDir(ctx, FremoteCopy, r.Fremote, false)
require.NoError(t, err)
testLoggerVsLsf(ctx, r.Fremote, operations.GetLoggerOpt(ctx).JSON, t)
fstest.CheckItems(t, FremoteCopy, file2)
}
// Test moving a file over itself
func TestMoveOverSelf(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
file1 := r.WriteObject(ctx, "sub dir/hello world", "hello world", t1)
r.CheckRemoteItems(t, file1)
file2 := r.WriteFile("sub dir/hello world", "hello world again", t2)
r.CheckLocalItems(t, file2)
ctx = predictDstFromLogger(ctx)
err := MoveDir(ctx, r.Fremote, r.Flocal, false, false)
require.NoError(t, err)
testLoggerVsLsf(ctx, r.Fremote, operations.GetLoggerOpt(ctx).JSON, t)
r.CheckLocalItems(t)
r.CheckRemoteItems(t, file2)
}
// Test server-side moving a file over itself
func TestServerSideMoveOverSelf(t *testing.T) {
ctx := context.Background()
r := fstest.NewRun(t)
file1 := r.WriteObject(ctx, "sub dir/hello world", "hello world", t1)
r.CheckRemoteItems(t, file1)
FremoteCopy, _, finaliseCopy, err := fstest.RandomRemote()
require.NoError(t, err)
defer finaliseCopy()
t.Logf("Server side copy (if possible) %v -> %v", r.Fremote, FremoteCopy)
ctx = predictDstFromLogger(ctx)
err = CopyDir(ctx, FremoteCopy, r.Fremote, false)
require.NoError(t, err)
testLoggerVsLsf(ctx, r.Fremote, operations.GetLoggerOpt(ctx).JSON, t)
fstest.CheckItems(t, FremoteCopy, file1)
file2 := r.WriteObject(ctx, "sub dir/hello world", "hello world again", t2)
r.CheckRemoteItems(t, file2)
// ctx = predictDstFromLogger(ctx)
err = MoveDir(ctx, FremoteCopy, r.Fremote, false, false)
require.NoError(t, err)
// testLoggerVsLsf(ctx, r.Fremote, operations.GetLoggerOpt(ctx).JSON, t) // not currently supported
r.CheckRemoteItems(t)
fstest.CheckItems(t, FremoteCopy, file2)
// check that individual file moves also work without MoveDir
file3 := r.WriteObject(ctx, "sub dir/hello world", "hello world a third time", t3)
r.CheckRemoteItems(t, file3)
ctx = predictDstFromLogger(ctx)
fs.Debugf(nil, "testing file moves")
err = moveDir(ctx, FremoteCopy, r.Fremote, false, false)
require.NoError(t, err)
testLoggerVsLsf(ctx, FremoteCopy, operations.GetLoggerOpt(ctx).JSON, t)
r.CheckRemoteItems(t)
fstest.CheckItems(t, FremoteCopy, file3)
}
// Check that if the local file doesn't exist when we copy it up,
// nothing happens to the remote file
func TestCopyAfterDelete(t *testing.T) {
@ -2422,19 +2320,15 @@ func testSyncBackupDir(t *testing.T, backupDir string, suffix string, suffixKeep
r.CheckRemoteItems(t, file1b, file2, file3a, file1a)
}
func TestSyncBackupDir(t *testing.T) {
testSyncBackupDir(t, "backup", "", false)
}
func TestSyncBackupDirWithSuffix(t *testing.T) {
testSyncBackupDir(t, "backup", ".bak", false)
}
func TestSyncBackupDirWithSuffixKeepExtension(t *testing.T) {
testSyncBackupDir(t, "backup", "-2019-01-01", true)
}
func TestSyncBackupDirSuffixOnly(t *testing.T) {
testSyncBackupDir(t, "", ".bak", false)
}
@ -2912,7 +2806,7 @@ func predictDstFromLogger(ctx context.Context) context.Context {
}
func DstLsf(ctx context.Context, Fremote fs.Fs) *bytes.Buffer {
opt := operations.ListJSONOpt{
var opt = operations.ListJSONOpt{
NoModTime: false,
NoMimeType: true,
DirsOnly: false,


@ -1,4 +1,4 @@
package fs
// VersionTag of rclone
var VersionTag = "v1.69.0"
var VersionTag = "v1.68.2"


@ -6,7 +6,6 @@ import (
"fmt"
"os"
"path"
"slices"
"github.com/rclone/rclone/fs"
yaml "gopkg.in/yaml.v2"
@ -36,7 +35,6 @@ type Backend struct {
CleanUp bool // when running clean, run cleanup first
Ignore []string // test names to ignore the failure of
Tests []string // paths of tests to run, blank for all
IgnoreTests []string // paths of tests not to run, blank for none
ListRetries int // -list-retries if > 0
ExtraTime float64 // factor to multiply the timeout by
}
@ -44,15 +42,15 @@ type Backend struct {
// includeTest returns true if this backend should be included in this
// test
func (b *Backend) includeTest(t *Test) bool {
// Is this test ignored
if slices.Contains(b.IgnoreTests, t.Path) {
return false
}
// Empty b.Tests implies do all of them except the ignored
if len(b.Tests) == 0 {
return true
}
return slices.Contains(b.Tests, t.Path)
for _, testPath := range b.Tests {
if testPath == t.Path {
return true
}
}
return false
}
// MakeRuns creates Run objects the Backend and Test


@ -395,10 +395,6 @@ backends:
- backend: "cache"
remote: "TestCache:"
fastlist: false
ignoretests:
- TestBisyncLocalRemote
- TestBisyncRemoteLocal
- TestBisyncRemoteRemote
- backend: "mega"
remote: "TestMega:"
fastlist: false

go.mod

@ -59,7 +59,7 @@ require (
github.com/prometheus/client_golang v1.19.1
github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8
github.com/quasilyte/go-ruleguard/dsl v0.3.22
github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87
github.com/rclone/gofakes3 v0.0.3
github.com/rfjakob/eme v1.1.2
github.com/rivo/uniseg v0.4.7
github.com/rogpeppe/go-internal v1.12.0
@ -79,6 +79,7 @@ require (
go.etcd.io/bbolt v1.3.10
goftp.io/server/v2 v2.0.1
golang.org/x/crypto v0.25.0
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56
golang.org/x/net v0.27.0
golang.org/x/oauth2 v0.21.0
golang.org/x/sync v0.8.0
@ -204,7 +205,6 @@ require (
go.opentelemetry.io/otel v1.24.0 // indirect
go.opentelemetry.io/otel/metric v1.24.0 // indirect
go.opentelemetry.io/otel/trace v1.24.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
golang.org/x/mod v0.19.0 // indirect
golang.org/x/tools v0.23.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240701130421-f6361c86f094 // indirect
@ -223,7 +223,7 @@ require (
require (
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/ProtonMail/go-crypto v1.0.0
github.com/golang-jwt/jwt/v4 v4.5.0
github.com/golang-jwt/jwt/v4 v4.5.1
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/pkg/xattr v0.4.9
golang.org/x/mobile v0.0.0-20240716161057-1ad2df20a8b6

go.sum

@ -275,8 +275,8 @@ github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo=
github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@ -519,8 +519,8 @@ github.com/quic-go/quic-go v0.40.1 h1:X3AGzUNFs0jVuO3esAGnTfvdgvL4fq655WaOi1snv1
github.com/quic-go/quic-go v0.40.1/go.mod h1:PeN7kuVJ4xZbxSv/4OX6S1USOX8MJvydwpTx31vx60c=
github.com/rasky/go-xdr v0.0.0-20170124162913-1a41d1a06c93 h1:UVArwN/wkKjMVhh2EQGC0tEc1+FqiLlvYXY5mQ2f8Wg=
github.com/rasky/go-xdr v0.0.0-20170124162913-1a41d1a06c93/go.mod h1:Nfe4efndBz4TibWycNE+lqyJZiMX4ycx+QKV8Ta0f/o=
github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87 h1:0YRo2aYhE+SCZsjWYMFe8zLD18xieXy7wQ8M9Ywcr/g=
github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87/go.mod h1:z7+o2VUwitO0WuVHReQlOW9jZ03LpeJ0PUFSULyTIds=
github.com/rclone/gofakes3 v0.0.3 h1:0sKCxJ8TUUAG5KXGuc/fcDKGnzB/j6IjNQui9ntIZPo=
github.com/rclone/gofakes3 v0.0.3/go.mod h1:z7+o2VUwitO0WuVHReQlOW9jZ03LpeJ0PUFSULyTIds=
github.com/relvacode/iso8601 v1.3.0 h1:HguUjsGpIMh/zsTczGN3DVJFxTU/GX+MMmzcKoMO7ko=
github.com/relvacode/iso8601 v1.3.0/go.mod h1:FlNp+jz+TXpyRqgmM7tnzHHzBnz776kmAH2h3sZCn0I=
github.com/rfjakob/eme v1.1.2 h1:SxziR8msSOElPayZNFfQw4Tjx/Sbaeeh3eRvrHVMUs4=

lib/cache/cache.go vendored

@ -260,19 +260,3 @@ func (c *Cache) SetFinalizer(finalize func(interface{})) {
c.finalize = finalize
c.mu.Unlock()
}
// EntriesWithPinCount returns the number of pinned and unpinned entries in the cache
//
// Each entry is counted only once, regardless of entry.pinCount
func (c *Cache) EntriesWithPinCount() (pinned, unpinned int) {
c.mu.Lock()
for _, entry := range c.cache {
if entry.pinCount <= 0 {
unpinned++
} else {
pinned++
}
}
c.mu.Unlock()
return pinned, unpinned
}


@ -4,10 +4,10 @@ package exitcode
const (
// Success is returned when rclone finished without error.
Success = iota
// UncategorizedError is returned for any error not categorised otherwise.
UncategorizedError
// UsageError is returned when there was a syntax or usage error in the arguments.
UsageError
// UncategorizedError is returned for any error not categorised otherwise.
UncategorizedError
// DirNotFound is returned when a source or destination directory is not found.
DirNotFound
// FileNotFound is returned when a source or destination file is not found.


@ -17,7 +17,8 @@ import (
)
const (
BufferSize = 1024 * 1024 // BufferSize is the default size of the pages used in the reader
// BufferSize is the default size of the pages used in the reader
BufferSize = 1024 * 1024
bufferCacheSize = 64 // max number of buffers to keep in cache
bufferCacheFlushTime = 5 * time.Second // flush the cached buffers after this long
)

rclone.1 generated

@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
.TH "rclone" "1" "Sep 08, 2024" "User Manual" ""
.TH "rclone" "1" "Nov 15, 2024" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@ -6429,7 +6429,9 @@ manually:
\f[C]
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
\f[R]
.fi
@ -6859,9 +6861,11 @@ including \f[C]PATH\f[R] or \f[C]HOME\f[R].
This means that tilde (\f[C]\[ti]\f[R]) expansion will not work and you
should provide \f[C]--config\f[R] and \f[C]--cache-dir\f[R] explicitly
as absolute paths via rclone arguments.
Since mounting requires the \f[C]fusermount\f[R] program, rclone will
use the fallback PATH of \f[C]/bin:/usr/bin\f[R] in this scenario.
Please ensure that \f[C]fusermount\f[R] is present on this PATH.
Since mounting requires the \f[C]fusermount\f[R] or
\f[C]fusermount3\f[R] program, rclone will use the fallback PATH of
\f[C]/bin:/usr/bin\f[R] in this scenario.
Please ensure that \f[C]fusermount\f[R]/\f[C]fusermount3\f[R] is present
on this PATH.
.SS Rclone as Unix mount helper
.PP
The core Unix program \f[C]/bin/mount\f[R] normally takes the
@ -7886,7 +7890,9 @@ manually:
\f[C]
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
\f[R]
.fi
@ -8317,9 +8323,11 @@ including \f[C]PATH\f[R] or \f[C]HOME\f[R].
This means that tilde (\f[C]\[ti]\f[R]) expansion will not work and you
should provide \f[C]--config\f[R] and \f[C]--cache-dir\f[R] explicitly
as absolute paths via rclone arguments.
Since mounting requires the \f[C]fusermount\f[R] program, rclone will
use the fallback PATH of \f[C]/bin:/usr/bin\f[R] in this scenario.
Please ensure that \f[C]fusermount\f[R] is present on this PATH.
Since mounting requires the \f[C]fusermount\f[R] or
\f[C]fusermount3\f[R] program, rclone will use the fallback PATH of
\f[C]/bin:/usr/bin\f[R] in this scenario.
Please ensure that \f[C]fusermount\f[R]/\f[C]fusermount3\f[R] is present
on this PATH.
.SS Rclone as Unix mount helper
.PP
The core Unix program \f[C]/bin/mount\f[R] normally takes the
@ -9527,7 +9535,7 @@ Flags to control the Remote Control API
.nf
\f[C]
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default [\[dq]localhost:5572\[dq]])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -19836,6 +19844,49 @@ they take exactly the same form.
The options set by environment variables can be seen with the
\f[C]-vv\f[R] flag, e.g.
\f[C]rclone version -vv\f[R].
.PP
Options that can appear multiple times (type \f[C]stringArray\f[R]) are
treated slightly differently as environment variables can only be defined
once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded (https://godoc.org/encoding/csv)
string.
For example
.PP
.TS
tab(@);
lw(36.7n) lw(33.3n).
T{
Environment Variable
T}@T{
Equivalent options
T}
_
T{
\f[C]RCLONE_EXCLUDE=\[dq]*.jpg\[dq]\f[R]
T}@T{
\f[C]--exclude \[dq]*.jpg\[dq]\f[R]
T}
T{
\f[C]RCLONE_EXCLUDE=\[dq]*.jpg,*.png\[dq]\f[R]
T}@T{
\f[C]--exclude \[dq]*.jpg\[dq]\f[R] \f[C]--exclude \[dq]*.png\[dq]\f[R]
T}
T{
\f[C]RCLONE_EXCLUDE=\[aq]\[dq]*.jpg\[dq],\[dq]*.png\[dq]\[aq]\f[R]
T}@T{
\f[C]--exclude \[dq]*.jpg\[dq]\f[R] \f[C]--exclude \[dq]*.png\[dq]\f[R]
T}
T{
\f[C]RCLONE_EXCLUDE=\[aq]\[dq]/directory with comma , in it /**\[dq]\[aq]\f[R]
T}@T{
\f[C]--exclude \[dq]/directory with comma , in it /**\[dq]\f[R]
T}
.TE
.PP
If \f[C]stringArray\f[R] options are defined as environment variables
\f[B]and\f[R] options on the command line then all the values will be
used.
.SS Config file
.PP
You can set defaults for values in the config file on an individual
@ -20904,6 +20955,12 @@ See above for the order filter flags are processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
.PP
Lines starting with # or ; are ignored, and can be used to write
comments.
Inline comments are not supported.
\f[I]Use \f[CI]-vv --dump filters\f[I] to see how they appear in the
final regexp.\f[R]
.PP
E.g.
for \f[C]filter-file.txt\f[R]:
.IP
@ -20914,6 +20971,7 @@ for \f[C]filter-file.txt\f[R]:
+ *.jpg
+ *.png
+ file2.avi
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
@ -24914,7 +24972,7 @@ pCloud
T}@T{
MD5, SHA1 \[u2077]
T}@T{
R
R/W
T}@T{
No
T}@T{
@ -27668,7 +27726,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.68.0\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.68.2\[dq])
\f[R]
.fi
.SS Performance
@ -27817,7 +27875,7 @@ Flags to control the Remote Control API.
.nf
\f[C]
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default [\[dq]localhost:5572\[dq]])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -27853,7 +27911,7 @@ Flags to control the Metrics HTTP endpoint.
.IP
.nf
\f[C]
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [\[dq]\[dq]])
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -28347,21 +28405,18 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
--pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--pikpak-user-agent string HTTP user agent for pikpak (default \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0\[dq])
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it\[aq]s fine to leave (default \[dq]https://pixeldrain.com/api\[dq])
--pixeldrain-description string Description of the remote
@ -33001,6 +33056,50 @@ You can disable this with the --s3-no-head option - see there for more
details.
.PP
Setting this flag increases the chance for undetected upload failures.
.SS Increasing performance
.SS Using server-side copy
.PP
If you are copying objects between S3 buckets in the same region, you
should use server-side copy.
This is much faster than downloading and re-uploading the objects, as no
data is transferred.
.PP
For rclone to use server-side copy, you must use the same remote for the
source and destination.
.IP
.nf
\f[C]
rclone copy s3:source-bucket s3:destination-bucket
\f[R]
.fi
.PP
When using server-side copy, the performance is limited by the rate at
which rclone issues API requests to S3.
See below for how to increase the number of API requests rclone makes.
.SS Increasing the rate of API requests
.PP
You can increase the rate of API requests to S3 by increasing the
parallelism using \f[C]--transfers\f[R] and \f[C]--checkers\f[R]
options.
.PP
Rclone uses very conservative defaults for these settings, as not all
providers support high rates of requests.
Depending on your provider, you can significantly increase the number of
transfers and checkers.
.PP
For example, with AWS S3, you can increase the number of checkers to
values like 200.
If you are doing a server-side copy, you can also increase the number of
transfers to 200.
.IP
.nf
\f[C]
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
\f[R]
.fi
.PP
You will need to experiment with these values to find the optimal
settings for your setup.
.SS Versions
.PP
When bucket versioning is enabled (this can be done with rclone with the
@ -37301,11 +37400,12 @@ copy_cutoff = 5M
\f[R]
.fi
.PP
C14 Cold Storage (https://www.online.net/en/storage/c14-cold-storage) is
Scaleway Glacier (https://www.scaleway.com/en/glacier-cold-storage/) is
the low-cost S3 Glacier alternative from Scaleway and it works the same
way as on S3 by accepting the \[dq]GLACIER\[dq] \f[C]storage_class\f[R].
So you can configure your remote with the
\f[C]storage_class = GLACIER\f[R] option to upload directly to C14.
\f[C]storage_class = GLACIER\f[R] option to upload directly to Scaleway
Glacier.
Don\[aq]t forget that in this state you can\[aq]t read files back after,
you will need to restore them to \[dq]STANDARD\[dq] storage_class first
before being able to read them (see \[dq]restore\[dq] section above)
@ -50124,9 +50224,9 @@ It will show you a client ID and client secret.
Make a note of these.
.RS 4
.PP
(If you selected \[dq]External\[dq] at Step 5 continue to Step 9.
(If you selected \[dq]External\[dq] at Step 5 continue to Step 10.
If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can
skip straight to Step 10 but your destination drive must be part of the
skip straight to Step 11 but your destination drive must be part of the
same Google Workspace.)
.RE
.IP "10." 4
@ -63841,79 +63941,37 @@ Required: true
.SS Advanced options
.PP
Here are the Advanced options specific to pikpak (PikPak).
.SS --pikpak-client-id
.SS --pikpak-device-id
.PP
OAuth Client Id.
.PP
Leave blank normally.
Device ID used for authorization.
.PP
Properties:
.IP \[bu] 2
Config: client_id
Config: device_id
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_CLIENT_ID
Env Var: RCLONE_PIKPAK_DEVICE_ID
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --pikpak-client-secret
.SS --pikpak-user-agent
.PP
OAuth Client Secret.
HTTP user agent for pikpak.
.PP
Leave blank normally.
Defaults to \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
Gecko/20100101 Firefox/129.0\[dq] unless overridden with
\[dq]--pikpak-user-agent\[dq] on the command line.
.PP
Properties:
.IP \[bu] 2
Config: client_secret
Config: user_agent
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_CLIENT_SECRET
Env Var: RCLONE_PIKPAK_USER_AGENT
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --pikpak-token
.PP
OAuth Access Token as a JSON blob.
.PP
Properties:
.IP \[bu] 2
Config: token
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_TOKEN
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --pikpak-auth-url
.PP
Auth server URL.
.PP
Leave blank to use the provider defaults.
.PP
Properties:
.IP \[bu] 2
Config: auth_url
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_AUTH_URL
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --pikpak-token-url
.PP
Token server url.
.PP
Leave blank to use the provider defaults.
.PP
Properties:
.IP \[bu] 2
Config: token_url
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_TOKEN_URL
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
Default: \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
Gecko/20100101 Firefox/129.0\[dq]
.SS --pikpak-root-folder-id
.PP
ID of the root folder.
@ -72472,6 +72530,132 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
.SS v1.68.2 - 2024-11-15
.PP
See commits (https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
.IP \[bu] 2
Security fixes
.RS 2
.IP \[bu] 2
local backend: CVE-2024-52522: fix permission and ownership on symlinks
with \f[C]--links\f[R] and \f[C]--metadata\f[R] (Nick Craig-Wood)
.RS 2
.IP \[bu] 2
Only affects users using \f[C]--metadata\f[R] and \f[C]--links\f[R] and
copying files to the local backend
.IP \[bu] 2
See
https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
.RE
.IP \[bu] 2
build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
(dependabot)
.RS 2
.IP \[bu] 2
This is an issue in a dependency which is used for JWT certificates
.IP \[bu] 2
See
https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
.RE
.RE
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick
Craig-Wood)
.IP \[bu] 2
bisync: Fix output capture restoring the wrong output for logrus
(Dimitrios Slamaris)
.IP \[bu] 2
dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
.IP \[bu] 2
serve s3: Fix excess locking which was making serve s3 single threaded
(Nick Craig-Wood)
.IP \[bu] 2
doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
.RE
.IP \[bu] 2
Local
.RS 2
.IP \[bu] 2
Fix permission and ownership on symlinks with \f[C]--links\f[R] and
\f[C]--metadata\f[R] (Nick Craig-Wood)
.IP \[bu] 2
Fix \f[C]--copy-links\f[R] on macOS when cloning (nielash)
.RE
.IP \[bu] 2
Onedrive
.RS 2
.IP \[bu] 2
Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
.RE
.IP \[bu] 2
Pikpak
.RS 2
.IP \[bu] 2
Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
.IP \[bu] 2
Fix fatal crash on startup with token that can\[aq]t be refreshed (Nick
Craig-Wood)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Fix crash when using \f[C]--s3-download-url\f[R] after migration to
SDKv2 (Nick Craig-Wood)
.IP \[bu] 2
Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan
Raev)
.IP \[bu] 2
Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
.RE
.SS v1.68.1 - 2024-09-24
.PP
See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
build: Fix docker release build (ttionya)
.IP \[bu] 2
doc fixes (Nick Craig-Wood, Pawel Palucha)
.IP \[bu] 2
fs
.RS 2
.IP \[bu] 2
Fix \f[C]--dump filters\f[R] not always appearing (Nick Craig-Wood)
.IP \[bu] 2
Fix setting \f[C]stringArray\f[R] config values from environment
variables (Nick Craig-Wood)
.RE
.IP \[bu] 2
rc: Fix default value of \f[C]--metrics-addr\f[R] (Nick Craig-Wood)
.IP \[bu] 2
serve docker: Add missing \f[C]vfs-read-chunk-streams\f[R] option in
docker volume driver (Divyam)
.RE
.IP \[bu] 2
Onedrive
.RS 2
.IP \[bu] 2
Fix spurious \[dq]Couldn\[aq]t decode error response: EOF\[dq] DEBUG
(Nick Craig-Wood)
.RE
.IP \[bu] 2
Pikpak
.RS 2
.IP \[bu] 2
Fix login issue where token retrieval fails (wiserain)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Fix rclone ignoring static credentials when \f[C]env_auth=true\f[R]
(Nick Craig-Wood)
.RE
.SS v1.68.0 - 2024-09-08
.PP
See commits (https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0)


@ -40,7 +40,7 @@ type Dir struct {
modTimeMu sync.Mutex // protects the following
modTime time.Time
_virtuals atomic.Int32 // number of virtual directory entries in this directory and children
_hasVirtual atomic.Bool // shows if the directory has virtual entries
}
//go:generate stringer -type=vState
@ -67,6 +67,7 @@ func newDir(vfs *VFS, f fs.Fs, parent *Dir, fsDir fs.Directory) *Dir {
items: make(map[string]Node),
}
d.cleanupTimer = time.AfterFunc(time.Duration(vfs.Opt.DirCacheTime*2), d.cacheCleanup)
d.setHasVirtual(false)
return d
}
@ -197,60 +198,58 @@ func (d *Dir) Node() Node {
return d
}
// hasVirtual returns whether the directory or children has virtual entries
// hasVirtual returns whether the directory has virtual entries
func (d *Dir) hasVirtual() bool {
return d._virtuals.Load() != 0
return d._hasVirtual.Load()
}
// addVirtual adds n virtual items to this directory and all of its parents
func (d *Dir) addVirtual(n int32) {
for d != nil {
d._virtuals.Add(n)
d = d.parent
}
// setHasVirtual sets the hasVirtual flag for the directory
func (d *Dir) setHasVirtual(hasVirtual bool) {
d._hasVirtual.Store(hasVirtual)
}
// ForgetAll forgets directory entries for this directory and any children.
//
// It does not invalidate or clear the cache of the parent directory.
//
// It returns true if the directory or any of its children had virtual
// entries so could not be forgotten. Children which didn't have
// virtual entries will be forgotten even if true is returned.
// It returns true if the directory or any of its children had virtual entries
// so could not be forgotten. Children which didn't have virtual entries and
// children with virtual entries will be forgotten even if true is returned.
func (d *Dir) ForgetAll() (hasVirtual bool) {
// We run this part with RLock only to avoid deadlocks in the recursion
d.mu.RLock()
fs.Debugf(d.path, "forgetting directory cache")
for _, node := range d.items {
if dir, ok := node.(*Dir); ok {
dir.ForgetAll()
if dir.ForgetAll() {
d.setHasVirtual(true)
}
}
}
d.mu.RUnlock()
// We run this part with Lock so we can modify the Dir
d.mu.Lock()
defer d.mu.Unlock()
// Purge any unnecessary virtual entries
d._purgeVirtual()
// Don't clear directory entries if there are virtual entries in this
// directory or any children
hasVirtual = d.hasVirtual()
if !hasVirtual {
d.read = time.Time{}
d.items = make(map[string]Node)
d.cleanupTimer.Stop()
} else {
d.cleanupTimer.Reset(time.Duration(d.vfs.Opt.DirCacheTime * 2))
d.read = time.Time{}
// Check if this dir has virtual entries
if len(d.virtual) != 0 {
d.setHasVirtual(true)
}
return hasVirtual
// Don't clear directory entries if there are virtual entries in this
// directory or any children
if !d.hasVirtual() {
d.items = make(map[string]Node)
d.cleanupTimer.Stop()
}
return d.hasVirtual()
}
// forgetDirPath clears the cache for itself and all subdirectories if
@ -449,10 +448,8 @@ func (d *Dir) addObject(node Node) {
if node.IsDir() {
vAdd = vAddDir
}
if _, found := d.virtual[leaf]; !found {
d.addVirtual(1)
}
d.virtual[leaf] = vAdd
d.setHasVirtual(true)
fs.Debugf(d.path, "Added virtual directory entry %v: %q", vAdd, leaf)
d.mu.Unlock()
}
@ -495,10 +492,8 @@ func (d *Dir) delObject(leaf string) {
if d.virtual == nil {
d.virtual = make(map[string]vState)
}
if _, found := d.virtual[leaf]; !found {
d.addVirtual(1)
}
d.virtual[leaf] = vDel
d.setHasVirtual(true)
fs.Debugf(d.path, "Added virtual directory entry %v: %q", vDel, leaf)
d.mu.Unlock()
}
@ -585,9 +580,9 @@ func (d *Dir) _deleteVirtual(name string) {
return
}
delete(d.virtual, name)
d.addVirtual(-1)
if len(d.virtual) == 0 {
d.virtual = nil
d.setHasVirtual(false)
}
fs.Debugf(d.path, "Removed virtual directory entry %v: %q", virtualState, name)
}


@ -607,51 +607,3 @@ func TestDirRename(t *testing.T) {
func TestDirStructSize(t *testing.T) {
t.Logf("Dir struct has size %d bytes", unsafe.Sizeof(Dir{}))
}
// Check that open files appear in the directory listing properly after a forget
func TestDirFileOpen(t *testing.T) {
_, vfs, dir, _ := dirCreate(t)
assert.False(t, dir.hasVirtual())
assert.False(t, dir.parent.hasVirtual())
_, err := dir.Mkdir("sub")
require.NoError(t, err)
assert.True(t, dir.hasVirtual())
assert.True(t, dir.parent.hasVirtual())
fd0, err := vfs.Create("dir/sub/file0")
require.NoError(t, err)
_, err = fd0.Write([]byte("hello"))
require.NoError(t, err)
defer func() {
require.NoError(t, fd0.Close())
}()
fd2, err := vfs.Create("dir/sub/file2")
require.NoError(t, err)
_, err = fd2.Write([]byte("hello world!"))
require.NoError(t, err)
require.NoError(t, fd2.Close())
assert.True(t, dir.hasVirtual())
assert.True(t, dir.hasVirtual())
assert.True(t, dir.parent.hasVirtual())
// Now forget the directory
hasVirtual := dir.parent.ForgetAll()
assert.True(t, hasVirtual)
assert.True(t, dir.hasVirtual())
assert.True(t, dir.parent.hasVirtual())
// Check the files can still be found
fi, err := vfs.Stat("dir/sub/file0")
require.NoError(t, err)
assert.Equal(t, int64(5), fi.Size())
fi, err = vfs.Stat("dir/sub/file2")
require.NoError(t, err)
assert.Equal(t, int64(12), fi.Size())
}