Compare commits

953 commits

Author SHA1 Message Date
Aleksey Kravchenko
12b572e62a [#5] Update frostfs backend docs
Signed-off-by: Aleksey Kravchenko <al.kravchenko@yadro.com>
2025-01-27 16:33:41 +03:00
Aleksey Kravchenko
be95531bfd [#5] Add container zone names support.
Signed-off-by: Aleksey Kravchenko <al.kravchenko@yadro.com>
2025-01-27 16:33:41 +03:00
Aleksey Kravchenko
b93e134b5b [#1] Add forgejo actions
Signed-off-by: Aleksey Kravchenko <al.kravchenko@yadro.com>
2025-01-16 11:11:33 +03:00
Aleksey Kravchenko
0255757831 [#1] Add frostfs backend docs
Signed-off-by: Aleksey Kravchenko <al.kravchenko@yadro.com>
2025-01-16 11:11:33 +03:00
Aleksey Kravchenko
5385ce33a5 [#1] Add frostfs backend
Signed-off-by: Aleksey Kravchenko <al.kravchenko@yadro.com>
2025-01-16 10:56:12 +03:00
Nick Craig-Wood
f2d16ab4c5 Version v1.68.2 2024-11-15 12:20:50 +00:00
Nick Craig-Wood
c0fc4fe0ca s3: fix multitenant multipart uploads with CEPH
CEPH uses a special bucket form `tenant:bucket` for multitenant
access using S3, as documented here:

https://docs.ceph.com/en/reef/radosgw/multitenancy/#s3

However, when doing multipart uploads, the `Bucket` returned in the
`CreateMultipart` reply was missing the `tenant:` prefix, and rclone
was using it to build the `UploadPart` request. This caused a 404
failure return. This may be a CEPH bug, but it is easy to work around.

This changes the code to use the `Bucket` and `Key` that we used in
`CreateMultipart` in `UploadPart` rather than the one returned from
`CreateMultipart` which fixes the problem.

See: https://forum.rclone.org/t/rclone-zcat-does-not-work-with-a-multitenant-ceph-backend/48618
2024-11-14 16:50:19 +00:00
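By way of illustration, a minimal sketch of that workaround assuming a recent aws-sdk-go-v2 (names and wiring are illustrative, not rclone's actual code):

```
package main

import (
	"bytes"
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func uploadFirstPart(ctx context.Context, client *s3.Client, bucket, key string, data []byte) error {
	mpu, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{
		Bucket: aws.String(bucket), // may be "tenant:bucket" on CEPH
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	// Use the bucket/key we sent, NOT mpu.Bucket/mpu.Key - CEPH may
	// strip the "tenant:" prefix in the reply, leading to a 404.
	_, err = client.UploadPart(ctx, &s3.UploadPartInput{
		Bucket:     aws.String(bucket),
		Key:        aws.String(key),
		UploadId:   mpu.UploadId,
		PartNumber: aws.Int32(1),
		Body:       bytes.NewReader(data),
	})
	return err
}
```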
Nick Craig-Wood
669b2f2669 local: fix permission and ownership on symlinks with --links and --metadata
Before this change, if writing to a local backend with --metadata and
--links, if the incoming metadata contained mode or ownership
information then rclone would apply the mode/ownership to the
destination of the link not the link itself.

This fixes the problem by using the link-safe syscall variants
lchown/fchmodat when --links and --metadata are in use. Note that Linux
does not support setting permissions on symlinks, so rclone emits a
debug message in this case.

This also fixes setting times on symlinks on Windows, which wasn't
implemented for atime and mtime, and which was incorrectly setting the
target of the symlink for btime.

See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
2024-11-14 16:36:22 +00:00
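A sketch of the link-safe pattern described above, using golang.org/x/sys/unix (simplified; rclone's actual code differs):

```
//go:build linux

package main

import (
	"errors"

	"golang.org/x/sys/unix"
)

// applySymlinkMetadata sets ownership and mode on the link itself
// rather than on its target.
func applySymlinkMetadata(path string, uid, gid int, mode uint32) error {
	// Lchown changes the owner of the link, not the target.
	if err := unix.Lchown(path, uid, gid); err != nil {
		return err
	}
	// AT_SYMLINK_NOFOLLOW asks fchmodat not to dereference the link.
	err := unix.Fchmodat(unix.AT_FDCWD, path, mode, unix.AT_SYMLINK_NOFOLLOW)
	if errors.Is(err, unix.EOPNOTSUPP) {
		// Linux can't set permissions on symlinks - rclone just logs
		// a debug message in this case.
		return nil
	}
	return err
}
```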
Dimitrios Slamaris
e1ba10a86e bisync: fix output capture restoring the wrong output for logrus
Before this change, if rclone is used as a library and logrus is used
after a call to rc `sync/bisync`, logging does not work anymore and
leads to writing to a closed pipe.

This change restores the output correctly.

Fixes #8158
2024-11-14 16:36:22 +00:00
Nick Craig-Wood
022442cf58 build: fix comments after golangci-lint upgrade 2024-11-14 16:36:22 +00:00
dependabot[bot]
5cc4488294 build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-11-14 16:36:22 +00:00
Nick Craig-Wood
ec9566c5c3 pikpak: fix fatal crash on startup with token that can't be refreshed 2024-11-14 16:36:22 +00:00
Nick Craig-Wood
f6976eb4c4 serve s3: fix excess locking which was making serve s3 single threaded
The fix for this was in the upstream library, which narrowed the
locking window.

See: https://forum.rclone.org/t/can-rclone-serve-s3-handle-more-than-one-client/48329/
2024-11-14 16:36:22 +00:00
Nick Craig-Wood
c242c00799 onedrive: fix Retry-After handling to look at 503 errors also
According to the Microsoft docs a Retry-After header can be returned
on 429 errors and 503 errors, but before this change we were only
checking for it on 429 errors.

See: https://forum.rclone.org/t/onedrive-503-response-retry-after-not-used/48045
2024-11-14 16:36:22 +00:00
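A minimal sketch of checking Retry-After on both status codes (illustrative helper, not rclone's actual retry code):

```
package main

import (
	"net/http"
	"strconv"
	"time"
)

// retryAfter honours a Retry-After header on 503 as well as 429.
// It returns the wait time and whether a usable header was found.
func retryAfter(resp *http.Response) (time.Duration, bool) {
	if resp.StatusCode != http.StatusTooManyRequests &&
		resp.StatusCode != http.StatusServiceUnavailable {
		return 0, false
	}
	value := resp.Header.Get("Retry-After")
	if value == "" {
		return 0, false
	}
	if secs, err := strconv.Atoi(value); err == nil {
		return time.Duration(secs) * time.Second, true
	}
	if t, err := http.ParseTime(value); err == nil { // may be an HTTP date
		return time.Until(t), true
	}
	return 0, false
}
```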
Kaloyan Raev
bf954b74ff s3: Storj provider: fix server-side copy of files bigger than 5GB
Like some other S3-compatible providers, Storj does not currently
implement UploadPartCopy and returns NotImplemented errors for
multi-part server-side copies.

This patch works around the problem by raising --s3-copy-cutoff for
Storj to the maximum. This means that rclone will never use
multi-part copies for files in Storj. This includes files larger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. This works fine for Storj.

See https://github.com/storj/roadmap/issues/40
2024-11-14 16:36:22 +00:00
tgfisher
88f0770d0a docs: mention that inline comments are not supported in a filter-file 2024-11-14 16:36:22 +00:00
Randy Bush
41d905c9b0 docs: fix forward refs in step 9 of using your own client id 2024-11-14 16:36:22 +00:00
Alexandre Hamez
300a063b5e docs: fix Scaleway Glacier website URL 2024-11-14 16:36:21 +00:00
Simon Bos
61bf29ed5e dlna: fix loggingResponseWriter disregarding log level 2024-11-14 16:36:21 +00:00
Nick Craig-Wood
3191717572 s3: fix crash when using --s3-download-url after migration to SDKv2
Before this change rclone was crashing when the download URL did not
supply an X-Amz-Storage-Class header.

This change allows the header to be missing.

See: https://forum.rclone.org/t/sigsegv-on-ubuntu-24-04/48047
2024-11-14 16:36:21 +00:00
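The shape of that fix, as a sketch (simplified stand-in for the backend code):

```
package main

import "net/http"

// storageClassOf reads X-Amz-Storage-Class defensively, so a download
// URL that omits the header yields nil rather than a crash.
func storageClassOf(resp *http.Response) *string {
	if v := resp.Header.Get("X-Amz-Storage-Class"); v != "" {
		return &v
	}
	return nil // header absent - storage class unknown
}
```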
Nick Craig-Wood
961dfe97b5 docs: update overview to show pcloud can set modtime
See 258092f9c6 and #7896
2024-11-14 16:36:21 +00:00
Nick Craig-Wood
22612b4b38 Add RcloneView as a sponsor 2024-11-14 16:36:21 +00:00
Nick Craig-Wood
b9927461c3 accounting: fix wrong message on SIGUSR2 to enable/disable bwlimit
This was caused by the message code only looking at one of the
bandwidth filters, not all of them.

Fixes #8104
2024-11-14 16:36:21 +00:00
wiserain
6d04be99f2 pikpak: fix cid/gcid calculations for fs.OverrideRemote
Previously, cid/gcid (custom hash for pikpak) calculations failed when
attempting to unwrap object info from `fs.OverrideRemote`.

This commit introduces a new function that correctly unwraps object
info from both regular objects and `fs.OverrideRemote` types, ensuring
accurate cid/gcid calculations for uploads in all scenarios.
2024-11-14 16:36:21 +00:00
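A minimal sketch of such an unwrapping helper, with simplified stand-in types (rclone's real fs.ObjectInfo and fs.OverrideRemote differ):

```
package main

// ObjectInfo stands in for rclone's fs.ObjectInfo interface.
type ObjectInfo interface{ Remote() string }

// unwrapper is implemented by wrapping types such as fs.OverrideRemote.
type unwrapper interface{ UnWrap() ObjectInfo }

// unwrapObjectInfo follows any chain of wrappers down to the innermost
// object, so hash calculation always sees the real object.
func unwrapObjectInfo(oi ObjectInfo) ObjectInfo {
	for {
		u, ok := oi.(unwrapper)
		if !ok {
			return oi
		}
		oi = u.UnWrap()
	}
}
```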
nielash
06ae0dfa54 local: fix --copy-links on macOS when cloning
Before this change, --copy-links erroneously behaved like --links when using cloning
on macOS, and cloning was not supported at all when using --links.

After this change, --copy-links does what it's supposed to, and takes advantage of
cloning when possible, by copying the file being linked to instead of the link
itself.

Cloning is now also supported in --links mode for regular files (which benefit
most from cloning). symlinks in --links mode continue to be tossed back to be
handled by rclone's special translation logic.

See https://forum.rclone.org/t/macos-local-to-local-copy-with-copy-links-causes-error/47671/5?u=nielash
2024-11-14 16:36:21 +00:00
Nick Craig-Wood
912f29b5b8 Start v1.68.2-DEV development 2024-09-24 17:25:53 +01:00
Nick Craig-Wood
8d78768aaa Version v1.68.1 2024-09-24 15:47:01 +01:00
Nick Craig-Wood
6aa924f28d docs: document that fusermount3 may be needed when mounting/unmounting
See: https://forum.rclone.org/t/documentation-fusermount-vs-fusermount3/47816/
2024-09-23 17:33:09 +01:00
wiserain
48f2c2db70 pikpak: fix login issue where token retrieval fails
This addresses the login issue caused by pikpak's recent cancellation 
of existing login methods and requirement for additional verifications. 

To resolve this, we've made the following changes:

1. Similar to lib/oauthutil, we've integrated a mechanism to handle 
captcha tokens.

2. A new pikpakClient has been introduced to wrap the existing 
rest.Client and incorporate the necessary headers including 
x-captcha-token for each request.

3. Several options have been added/removed to support persistent 
user/client identification.

* client_id: No longer configurable.
* client_secret: Deprecated as it's no longer used.
* user_agent: A new option that defaults to PC/Firefox's user agent 
but can be overridden using the --pikpak-user-agent flag.
* device_id: A new option that is randomly generated if invalid. 
It is recommended not to delete or change it frequently.
* captcha_token: A new option that is automatically managed 
by rclone, similar to the OAuth token.

Fixes #7950 #8005
2024-09-23 17:33:09 +01:00
Nick Craig-Wood
a88066aff3 s3: fix rclone ignoring static credentials when env_auth=true
The SDKv2 conversion introduced a regression to do with setting
credentials with env_auth=true. The rclone documentation explicitly
states that env_auth only applies if secret_access_key and
access_key_id are blank and users had been relying on that.

However after the SDKv2 conversion we were ignoring static credentials
if env_auth=true.

This fixes the problem by ignoring env_auth=true if secret_access_key
and access_key_id are both provided. This brings rclone back into line
with the documentation and users' expectations.

Fixes #8067
2024-09-23 17:33:09 +01:00
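The documented precedence, sketched with a simplified stand-in for the backend options (not rclone's actual code):

```
package main

// Options is a simplified stand-in for the s3 backend options.
type Options struct {
	EnvAuth         bool
	AccessKeyID     string
	SecretAccessKey string
}

// resolveAuth picks the credential source: static keys win, and
// env_auth only applies when both keys are blank.
func resolveAuth(opt Options) string {
	switch {
	case opt.AccessKeyID != "" && opt.SecretAccessKey != "":
		return "static" // even if EnvAuth is true
	case opt.EnvAuth:
		return "environment"
	default:
		return "anonymous or other"
	}
}
```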
Nick Craig-Wood
75f5b06ff7 fs: fix setting stringArray config values from environment variables
After the config re-organisation, the setting of stringArray config
values (eg `--exclude` set with `RCLONE_EXCLUDE`) was broken and gave
a message like this for `RCLONE_EXCLUDE=*.jpg`:

    Failed to load "filter" default values: failed to initialise "filter" options:
    couldn't parse config item "exclude" = "*.jpg" as []string: parsing "*.jpg" as []string failed:
    invalid character '/' looking for beginning of value

This was caused by the parser trying to parse the input string as a
JSON value.

When the config was re-organised it was thought that the internal
representation of stringArray values was not important as it was never
visible externally; however, this turned out not to be true.

A defined representation was chosen (a comma separated string), and
this was documented and tests were introduced in this patch.

This potentially introduces a very small backwards incompatibility. In
rclone v1.67.0

    RCLONE_EXCLUDE=a,b

Would be interpreted as

    --exclude "a,b"

Whereas this new code will interpret it as

    --exclude "a" --exclude "b"

The benefit of being able to set multiple values with an environment
variable was deemed to outweigh the very small backwards compatibility
risk.

If a value with a `,` is needed, then use CSV escaping, eg

    RCLONE_EXCLUDE="a,b"

Note this needs to include the quotes, so at the unix shell that would be

    RCLONE_EXCLUDE='"a,b"'

Fixes #8063
2024-09-23 17:33:09 +01:00
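The CSV escaping described above falls out of Go's encoding/csv; a small self-contained illustration (not the actual parser):

```
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseStringArray parses the defined representation: a comma
// separated string with CSV escaping for values containing commas.
func parseStringArray(s string) ([]string, error) {
	return csv.NewReader(strings.NewReader(s)).Read()
}

func main() {
	a, _ := parseStringArray(`a,b`)   // [a b]   - two --exclude values
	b, _ := parseStringArray(`"a,b"`) // [a,b]   - one value with a comma
	fmt.Println(a, b)
}
```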
Nick Craig-Wood
daeeb7c145 rc: fix default value of --metrics-addr
Before this fix it was empty string, which isn't a good default for a
stringArray.
2024-09-23 17:33:09 +01:00
Nick Craig-Wood
d6a5fc6ffa fs: fix --dump filters not always appearing
Before this fix, we initialised the options blocks in a random order.
This meant that there was a 50/50 chance whether --dump filters would
show the filters or not, as it depended on the "main" block having
been read first to set the Dump flags.

This initialises the options blocks in a defined order (alphabetical,
but with "main" first), which fixes the problem.
2024-09-23 17:33:09 +01:00
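One way to express that ordering, as a sketch (illustrative, not the actual fix):

```
package main

import (
	"fmt"
	"sort"
)

// sortWithMainFirst orders option block names alphabetically, but
// always puts "main" first so its flags (such as --dump) are set
// before the other blocks are initialised.
func sortWithMainFirst(names []string) {
	sort.Slice(names, func(i, j int) bool {
		if names[i] == "main" {
			return names[j] != "main"
		}
		if names[j] == "main" {
			return false
		}
		return names[i] < names[j]
	})
}

func main() {
	names := []string{"rc", "filter", "main", "log"}
	sortWithMainFirst(names)
	fmt.Println(names) // [main filter log rc]
}
```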
Nick Craig-Wood
c0bfedf99c docs: correct notes on docker manual build 2024-09-23 17:33:09 +01:00
ttionya
76b76c30bf build: fix docker release build - fixes #8062
This updates the action to use `docker/build-push-action` instead of `ilteoood/docker_buildx`
which fixes the build problem in testing.
2024-09-23 17:33:09 +01:00
Pawel Palucha
737fcc804f docs: add section for improving performance for s3 2024-09-23 17:33:09 +01:00
Nick Craig-Wood
70f3965354 onedrive: fix spurious "Couldn't decode error response: EOF" DEBUG
This DEBUG was being generated on redirects which don't have a JSON
body and is irrelevant.
2024-09-23 17:33:09 +01:00
Divyam
d5c100edaf serve docker: add missing vfs-read-chunk-streams option in docker volume driver 2024-09-23 17:33:09 +01:00
Nick Craig-Wood
dc7458cea0 Start v1.68.1-DEV development 2024-09-23 17:29:48 +01:00
Nick Craig-Wood
49f69196c2 Version v1.68.0 2024-09-08 16:21:56 +01:00
Nick Craig-Wood
3f7651291b gofile: fix failed downloads on newly uploaded objects
The upload routine no longer returns a url to download the object.

This fixes the problem by fetching it if necessary when we attempt to
Open the object.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
796013dd06 gofile: fix Move a file
For some reason the parent ID got out of date in the Object (exact
reason not known - but the fact that this was OK before suggests a
change in the provider).

However we know the parent ID as it is in the directory cache, so use
that instead.
2024-09-08 14:58:29 +01:00
Nick Craig-Wood
e0da406ca7 test_all: mark linkbox fs/sync test TestSyncOverlapWithFilter as ignore
This gives the error

> Update second step failed: Linkbox error 500: The file name needs to include a suffix, such as xxx.mp4

This is because linkbox can't have files starting with "." and we are trying to save a file called ".ignore".
2024-09-08 14:58:14 +01:00
albertony
9a02c04028 jottacloud: fix setting of metadata on server side move - fixes #7900 2024-09-07 16:00:03 +01:00
albertony
918185273f docs: group the different options affecting lsjson output 2024-09-07 09:41:08 +01:00
Nick Craig-Wood
3f2074901a fichier: fix server side move - fixes #7856
The server side move had a combination of bugs
- Fichier changed the API disallowing a move to the same name
- Rclone was using the wrong object for some operations
2024-09-06 18:20:10 +01:00
Nick Craig-Wood
648afc7df4 fichier: Fix detection of Flood Detected error 2024-09-06 18:20:10 +01:00
Nick Craig-Wood
16e0245a8e rc: add vfs/queue-set-expiry to adjust expiry of items in the VFS queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
59acb9dfa9 rc: add vfs/queue to show the status of the upload queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
bfec159504 vfs: keep a record of the file size in the writeback queue 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
842396c8a0 build: fix gocritic change missed in merge
The original problem was introduced here

bcdfad3c83 build: update logging statements to make json log work #6038

And this was fixed non-optimally here

f1a84d171e build: fix build after update
2024-09-06 17:33:35 +01:00
Nick Craig-Wood
5f9a201b45 Add Oleg Kunitsyn to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
22583d0a5f Add fsantagostinobietti to contributors 2024-09-06 17:33:35 +01:00
Nick Craig-Wood
f1466a429c Add Mathieu Moreau to contributors 2024-09-06 17:33:35 +01:00
Florian Klink
e3b09211b8 lib/sd-activation: wrap coreos/go-systemd
It fails to build on plan9, which is part of the rclone CI matrix, and
the PR fixing it upstream doesn't seem to be getting traction.

Stub it on our side; we can still remove this once it gets merged.
2024-09-06 17:21:56 +01:00
Florian Klink
156feff9f2 sftp: support listening on passed FDs 2024-09-06 17:21:56 +01:00
Florian Klink
b29a22095f http: fix addr CLI arg help text
This was missing the fact rclone also supports listening on Unix Domain
Sockets.
2024-09-06 17:21:56 +01:00
Florian Klink
861c01caf5 http: support listening on passed FDs
Instead of the listening addresses specified above, rclone will listen to all
FDs passed by the service manager, if any (and ignore any arguments passed by
`--{{ .Prefix }}addr`).

This allows rclone to be a socket-activated service. It can be configured as described in
https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

It's possible to test this interactively through `systemd-socket-activate`,
firing off a request in a second terminal:

```
❯ systemd-socket-activate -l 8088 -l 8089 --fdname=foo:bar -- ./rclone serve webdav :local:test/
Listening on [::]:8088 as 3.
Listening on [::]:8089 as 4.
Communication attempt on fd 3.
Execing ./rclone (./rclone serve webdav :local:test/)
2024/04/24 18:14:42 NOTICE: Local file system at /home/flokli/dev/flokli/rclone/test: WebDav Server started on [sd-listen:bar-0/ sd-listen:foo-0/]
```
2024-09-06 17:21:56 +01:00
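On the Go side, picking up passed FDs is roughly what coreos/go-systemd's activation package provides (the package the lib/sd-activation commit above wraps); a sketch under that assumption:

```
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/coreos/go-systemd/v22/activation"
)

func main() {
	// Prefer listeners handed over by systemd socket activation,
	// falling back to --addr style listening otherwise.
	listeners, err := activation.Listeners()
	if err != nil {
		log.Fatal(err)
	}
	if len(listeners) == 0 {
		l, err := net.Listen("tcp", "127.0.0.1:8088")
		if err != nil {
			log.Fatal(err)
		}
		listeners = []net.Listener{l}
	}
	for _, l := range listeners {
		go http.Serve(l, http.DefaultServeMux) //nolint:errcheck
	}
	select {} // serve forever
}
```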
Nick Craig-Wood
f1a84d171e build: fix build after update
This adds an import missed in

bcdfad3c83 build: update logging statements to make json log work #6038
2024-09-06 17:18:46 +01:00
albertony
bcdfad3c83 build: update logging statements to make json log work - fixes #6038
This changes log statements from log to fs package, which is required for --use-json-log
to properly make log output in JSON format. The recently added custom linting rule,
handled by ruleguard via gocritic via golangci-lint, warns about these and suggests
the alternative. Fixing was therefore basically running "golangci-lint run --fix",
although some manual fixup, mainly of imports, was necessary following that.
2024-09-06 17:04:18 +01:00
albertony
88b0757288 build: update custom linting rule for log to suggest new non-format functions 2024-09-06 17:04:18 +01:00
albertony
33d6c3f92f fs: add non-format variants of log functions to avoid non-constant format string warnings 2024-09-06 17:04:18 +01:00
albertony
752809309d fs: add log Printf, Fatalf and Panicf 2024-09-06 17:04:18 +01:00
albertony
4a54cc134f fs: refactor base log method name for improved consistency 2024-09-06 17:04:18 +01:00
albertony
dfc2c98bbf fs: refactor log statements to use common helper 2024-09-06 17:04:18 +01:00
albertony
604d6bcb9c build: enable custom linting rules with ruleguard via gocritic 2024-09-06 17:04:18 +01:00
Oleg Kunitsyn
d15704ef9f rcserver: implement prometheus metrics on a dedicated port - fixes #7940 2024-09-06 15:00:36 +01:00
fsantagostinobietti
26bc9826e5 swift: add total/free space info in about command.
With the enhancement in version v2.0.3 of the ncw/swift library, we can now get Total and Free space info from remotes that support this feature (e.g. Blomp storage)
2024-09-06 12:46:51 +01:00
Mathieu Moreau
2a28b0eaf0 docs: filtering: added Byte unit for min/max-size parameters. 2024-09-06 12:28:29 +01:00
Nick Craig-Wood
2d1c2b1f76 config encryption: set, remove and check to manage config file encryption #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
ffb2e2a6de config: use --password-command to set config file password if supplied
Before this change, rclone ignored the --password-command on the
rclone config setting except when decrypting an existing config file.

This change allows for offloading the password storage/generation into
external hardware key or other protected password storage.

Fixes #7859
2024-09-06 10:34:29 +01:00
Nick Craig-Wood
c9c283533c config: factor --password-command code into its own function #7859 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
71799d7efd Add yuval-cloudinary to contributors 2024-09-06 10:34:29 +01:00
Nick Craig-Wood
8f4fdf6cc8 Add nipil to contributors 2024-09-06 10:34:29 +01:00
yuval-cloudinary
91b11f9eac documentation: add cheatsheet for configuration encryption 2024-09-05 17:39:25 +01:00
nipil
b49927fbd0 docs: more secure two-step signature and hash validation 2024-09-05 16:54:26 +01:00
Nick Craig-Wood
1a8b7662e7 serve nfs: unify the nfs library logging with rclone's logging better
Before this we ignored the logging levels and logged everything as
debug. This will obey the rclone logging flags and log at the correct
level.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
6ba3e24853 serve nfs: fix incorrect user id and group id exported to NFS #7973
Before this change all exports were exported as root and the --uid and
--gid flags of the VFS were ignored.

This fixes the issue by exporting the UID and GID correctly which
default to the current user and group unless set explicitly.
2024-09-04 10:50:21 +01:00
Nick Craig-Wood
802a938bd1 zoho: fix inefficiencies uploading with new API to avoid throttling
Before this fix, rclone queried the uploaded object to find its size
and modtime after upload as the API did not return these items.

Zoho have subsequently modified the API to return these items so
rclone uses them to avoid an API call.

This should help with rclone being throttled by Zoho.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/20
2024-09-04 10:45:47 +01:00
Nick Craig-Wood
9deb3e8adf Add crystalstall to contributors 2024-09-04 10:45:12 +01:00
crystalstall
296281a6eb docs: fix some function names in comments
Signed-off-by: crystalstall <crystalruby@qq.com>
2024-09-02 18:20:08 +02:00
albertony
711478554e lib/file: use builtin MkdirAll with go1.22 instead of our own custom version for windows
Starting with go1.22 the standard os.MkdirAll has improved its handling of volume names,
and as part of that it now stops recursing into the parent directory if it is a volume name
(see: cd589c8a73).
This is similar to our main change and the reason for creating a custom version. When
building with go1.22 or newer we can therefore stop using our custom version, with the
advantage that we automatically get current and future relevant improvements from golang.
To support building with go1.21 the existing custom version is still kept, and therefore
also our wrapper function file.MkdirAll - but on Windows it now just calls os.MkdirAll
when built with go1.22 or newer.

See #5401, #6420 and acf1e2df84 for details about the
creation of our custom version of MkdirAll.
2024-09-02 18:16:38 +02:00
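The version-gated wrapper described above can be expressed with build constraints; a sketch of that arrangement (file layout assumed, not copied from rclone):

```
//go:build windows && go1.22

// A sibling file carrying `//go:build windows && !go1.22` would keep
// the old custom implementation for go1.21 builds.
package file

import "os"

// MkdirAll is kept so callers don't change; with go1.22 os.MkdirAll
// already handles volume names correctly.
func MkdirAll(path string, perm os.FileMode) error {
	return os.MkdirAll(path, perm)
}
```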
albertony
906aef91fa docs: document that paths using volume guids are supported 2024-09-02 18:01:47 +02:00
Nick Craig-Wood
6b58cd0870 s3: fix accounting for mulpart transfers after migration to SDKv2 #4989 2024-08-31 20:11:53 +01:00
Sebastian Bünger
af9f8ced80 yandex: implement custom user agent to help with upload speeds 2024-08-29 18:25:08 +01:00
Georg Welzel
c63f1865f3 operations: copy: generate stable partial suffix 2024-08-28 08:45:38 +02:00
Nick Craig-Wood
1bb89bc818 docs: add missing sftp providers to README and main docs page - fixes #8038 2024-08-28 07:25:49 +01:00
Nick Craig-Wood
a365503750 nfsmount: fix stale handle problem after converting options to new style
This problem was caused by the defaults not being set for the options
after the conversion to the new config system in

28ba4b832d serve nfs: convert options to new style

This makes the nfs serve options globally available so nfsmount can
use them directly.

Fixes #8029
2024-08-28 07:08:23 +01:00
Nick Craig-Wood
3bb6d0a42b docs: mark flags.md as auto generated so contributors don't edit it 2024-08-28 07:08:23 +01:00
Nick Craig-Wood
f65755b3a3 Add Pawel Palucha to contributors 2024-08-28 07:08:21 +01:00
Nick Craig-Wood
33c5f35935 Add John Oxley to contributors 2024-08-28 07:07:34 +01:00
Nick Craig-Wood
4367b999c9 Add Georg Welzel to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
b57e6213aa Add Péter Bozsó to contributors 2024-08-28 07:04:02 +01:00
Nick Craig-Wood
cd90ba4337 Add Sam Harrison to contributors 2024-08-28 07:04:02 +01:00
Pawel Palucha
0e5eb7a9bb s3: allow restoring from intelligent-tiering storage class 2024-08-25 15:08:06 +02:00
nielash
956c2963fd bisync: don't convert modtime precision in listings - fixes #8025
Before this change, bisync proactively converted modtime precision when greater
than what the destination backend supported.

This dates back to a time before bisync considered the modifyWindow for same-side
comparisons. Back then, it was problematic to save a listing with 12:54:49.7 for
a backend that can't handle that precision, as on the next run the backend would
report the time as 12:54:50 and bisync would think the file had changed. So the
truncation was a workaround to anticipate this and proactively record the time
with the precision we expect to receive next time.

However, this caused problems for backends (such as dropbox) that round instead
of truncating as bisync expected.

After this change, bisync preserves the original precision in the listing
(without conversion), even when greater than what the backend supports, to avoid
rounding error. On the next run, bisync will compare it to the rounded time
reported by the backend, and if it's within the modifyWindow, it will treat them
as equivalent.
2024-08-24 22:32:48 -04:00
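The modifyWindow comparison at the heart of this is simple; a sketch (illustrative helper, not bisync's actual code):

```
package main

import "time"

// equalWithinWindow treats two modtimes as equivalent when they differ
// by no more than the backend's modify window, so 12:54:49.7 recorded
// in the listing matches 12:54:50 reported by a second-precision
// backend.
func equalWithinWindow(a, b time.Time, window time.Duration) bool {
	d := a.Sub(b)
	if d < 0 {
		d = -d
	}
	return d <= window
}
```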
John Oxley
146562975b build: rename Unknwon/goconfig to unknwon/goconfig
Before this change we used the repo with an initial uppercase `U`. However it is now canonically spelled with a lower case `u`.

This package is too old to have a go.mod but the README clearly states the desired capitalization.

In 4b0d4b818a the
recommended capitalization was changed to lower case.

Co-authored-by: John Oxley <joxley@meta.com>
2024-08-23 11:03:27 +01:00
Georg Welzel
4c1cb0622e backend: pcloud: Implement OpenWriterAt feature 2024-08-22 23:42:32 +02:00
Georg Welzel
258092f9c6 backend: pcloud: implement SetModTime - Fixes #7896 2024-08-22 23:42:32 +02:00
Sam Harrison
be448c9e13 filescom: don't make an extra fetch call on each item in a list response 2024-08-19 22:31:57 +02:00
albertony
4e708e59f2 local: fix incorrect conversion between integer types 2024-08-18 10:29:36 +02:00
albertony
c8366dfef3 local: fix incorrect conversion between integer types 2024-08-17 17:07:17 +02:00
albertony
1e14523b82 docs: make tardigrade page auto redirect to storj page 2024-08-17 16:00:42 +02:00
albertony
da25305ba0 docs: update backend config samples 2024-08-17 16:00:18 +02:00
albertony
e439121ab2 config: fix size computation for allocation may overflow 2024-08-17 15:03:39 +02:00
albertony
37c12732f9 lib: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
4c488e7517 serve docker: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
7261f47bd2 local: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
1db8b20fbc s3: fix incorrect conversion between integer types 2024-08-17 15:03:39 +02:00
albertony
a87d8967fc s3: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
albertony
4804f1f1e9 dropbox: fix potentially unsafe quoting issue 2024-08-17 15:03:39 +02:00
Eng Zer Jun
d1c84f9115 refactor: replace min/max helpers with built-in min/max
We upgraded our minimum Go version in commit ca24447090. We can now use
the built-in `min` and `max` functions directly.

Reference: https://go.dev/ref/spec#Min_and_max
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
2024-08-17 13:09:44 +02:00
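For illustration, the built-ins in question (available since go1.21, working for any ordered type):

```
package main

import "fmt"

func main() {
	// Before: hand-rolled helpers such as `func minInt(a, b int) int`.
	// After go1.21: min and max are built-in.
	fmt.Println(min(3, 7), max(3, 7))         // 3 7
	fmt.Println(min(2.5, 1.5), max("a", "b")) // 1.5 b
}
```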
JT Olio
e0b08883cb go.mod: update storj.io/uplink to latest release
this has a couple of bug fixes and small enhancements.

we are working on reducing the size of this library, but this
version bump does not yet have those improvements.
2024-08-16 23:06:45 +02:00
Péter Bozsó
a0af72c27a docs: update ssh tunnel example 2024-08-16 20:27:23 +02:00
Péter Bozsó
28d6985764 docs: update rclone authorize section 2024-08-16 20:27:23 +02:00
Péter Bozsó
f2ce9a9557 docs: fix command highlight 2024-08-16 20:27:23 +02:00
albertony
95151eac82 docs: fix alignment of some of the icons in the storage system dropdown 2024-08-16 11:07:54 +02:00
Sam Harrison
bd9bf4eb1c docs: mark filescom as supporting link sharing 2024-08-15 22:55:45 +01:00
albertony
705c72d293 build: enable gocritic linter
Running with default set of checks, except disabling the following
- appendAssign: append result not assigned to the same slice (diagnostics check, many false positives)
- captLocal: using capitalized names for local variables (style check, too opinionated)
- commentFormatting: not having a space between `//` and comment text (style check, too opinionated)
- exitAfterDefer: log.Fatalln will exit, and `defer func(){...}(...)` will not run (diagnostics check, to be revisited)
- ifElseChain: rewrite if-else to switch statement (style check, many occurrences and a bit opinionated, to be revisited)
- singleCaseSwitch: should rewrite switch statement to if statement (style check, many occurrences and a bit opinionated, to be revisited)
2024-08-15 22:08:34 +01:00
albertony
330c6702eb build: ignore remaining gocritic lint issues 2024-08-15 22:08:34 +01:00
albertony
4d787ae87f build: fix gocritic lint issue unlambda 2024-08-15 22:08:34 +01:00
albertony
86e9a56d73 build: fix gocritic lint issue dupbranchbody 2024-08-15 22:08:34 +01:00
albertony
64e8013c1b build: fix gocritic lint issue sloppylen 2024-08-15 22:08:34 +01:00
albertony
33bff6fe71 build: fix gocritic lint issue wrapperfunc 2024-08-15 22:08:34 +01:00
albertony
e82b5b11af build: fix gocritic lint issue elseif 2024-08-15 22:08:34 +01:00
albertony
4454ed9d3b build: fix gocritic lint issue underef 2024-08-15 22:08:34 +01:00
albertony
bad8207378 build: fix gocritic lint issue valswap 2024-08-15 22:08:34 +01:00
albertony
c6d3714e73 build: fix gocritic lint issue assignop 2024-08-15 22:08:34 +01:00
albertony
59501fcdb6 build: fix gocritic lint issue unslice 2024-08-15 22:08:34 +01:00
Florian Klink
afd199d756 dlna: document external subtitle feature 2024-08-15 22:01:52 +01:00
Florian Klink
00e073df1e dlna: set more correct mime type
The code currently hardcodes `text/srt` for all subtitles.

`text/srt` is wrong; it seems `application/x-subrip` is the official
type, coming from the official mime database, at least (and it still
works with the Samsung TV I tested with). Also add that one to
`fs/mimetype.go`.

Compared to previous iterations of this PR, I dropped tests ensuring
certain mime types are present - as detection still seems to be fairly
platform-specific.
2024-08-15 22:01:52 +01:00
Florian Klink
2e007f89c7 dlna: don't swallow video.{idx,sub}
.idx and .sub subtitle files only work if both are present, but the code
was overwriting the first-inserted element to subtitlesByName, as it was
keyed by the basename without extension.

Make subtitlesByName point to a slice of nodes instead.
2024-08-15 22:01:52 +01:00
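A sketch of the map-to-slice change, with a simplified node type (rclone's dlna node type differs):

```
package main

import (
	"path"
	"strings"
)

type node struct{ name string }

// groupSubtitles keys subtitles by basename without extension, but
// keeps *all* entries per key so video.idx and video.sub no longer
// overwrite each other.
func groupSubtitles(files []node) map[string][]node {
	byName := make(map[string][]node)
	for _, f := range files {
		base := strings.TrimSuffix(f.name, path.Ext(f.name))
		byName[base] = append(byName[base], f)
	}
	return byName
}
```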
Florian Klink
edd9347694 dlna: add cds_test.go
This tests the mediaWithResources function in various scenarios.
2024-08-15 22:01:52 +01:00
Florian Klink
1fad49ee35 dlna: also look at "Subs" subdirectory
It seems pretty common for subtitles to be put in a subdirectory
called "Subs", rather than in the same directory as the media file
itself.

This covers that use case by checking the returned listing for a
directory called "Subs".

If it exists, its child nodes are added to the list before they're
passed to mediaWithResources, allowing these subtitles to be discovered
automatically.
2024-08-15 22:01:52 +01:00
Sam Harrison
182b2a6417 chore: add childish-sambino as filescom maintainer 2024-08-15 22:00:26 +01:00
albertony
da9faf1ffe Make filtering rules for help and listremotes more lenient 2024-08-15 18:45:12 +02:00
albertony
303358eeda help: cleanup template syntax (consistent whitespace) 2024-08-15 18:45:12 +02:00
albertony
62233b4993 help: avoid empty additional help topics header 2024-08-15 18:45:12 +02:00
albertony
498abcc062 help: make help command output less distracting 2024-08-15 18:45:12 +02:00
albertony
482bfae8fa docs: consistent newline of first line in command output 2024-08-15 18:26:34 +02:00
Sam Harrison
ae9960a4ed filescom: add Files.com backend 2024-08-15 17:00:39 +01:00
Nick Craig-Wood
089c168fb9 fstests: attempt to fix flaky serve s3 test
Sometimes (particularly on macOS amd64) the serve s3 test fails with
TestIntegration/FsMkdir/FsPutError where it wasn't expecting to get an
object but it did.

This is likely caused by a race between the serve s3 goroutine
deleting the half uploaded file and the fstests code looking for it to
not exist.

This fix treats it like any other eventual consistency problem and
retries the check using the test framework.
2024-08-15 16:30:29 +01:00
albertony
6f515ded8f docs: move the link to global flags page to the main options header 2024-08-15 16:41:45 +02:00
albertony
91c6faff71 docs: make command group options subsections of main options 2024-08-15 16:41:45 +02:00
albertony
874616a73e docs: stop shouting the SEE ALSO header 2024-08-15 16:41:45 +02:00
albertony
458d93ea7e docs: fix the rclone root command header levels 2024-08-15 16:41:45 +02:00
albertony
513653910c docs: make the see also section header consistent and listed in toc of command pages 2024-08-15 16:41:45 +02:00
nielash
bd5199910b local: --local-no-clone flag to disable cloning for server-side copies
This flag allows users to disable the reflink cloning feature and instead force
"deep" copies, for certain use cases where data redundancy is preferable. It is
functionally equivalent to using `--disable Copy` on local.
2024-08-15 15:36:38 +01:00
nielash
f6d836eefd local: support setting custom --metadata during server-side Copy 2024-08-15 15:36:38 +01:00
nielash
87ec26001f local: add server-side copy with xattrs on macOS (part-fix #1710)
Before this change, macOS-specific metadata was not preserved by rclone, even for
local-to-local transfers (it does not use the "user." prefix, nor is Mac metadata
limited to xattrs.) Additionally, rclone did not take advantage of APFS's native
"cloning" functionality for fast and deduplicated transfers.

After this change, local (on macOS only) supports "server-side copy" similarly to
other remotes, and achieves this by using (when possible) macOS's native APFS
"cloning", which is the same underlying mechanism deployed when a user
duplicates a file via the Finder UI. This has several advantages over the
previous behavior:

- It is extremely fast (even large files can be cloned instantly)
- It is very efficient in terms of storage, as it automatically deduplicates when
possible (i.e. so that having two identical files does not consume more storage
than having just one.) (The concept is similar to a "hard link", but subsequent
modifications will not affect the original file.)
- It preserves Mac-specific metadata to the maximum degree, including not only
xattrs but also metadata not easily settable by other methods, including Finder
and Spotlight params.

When server-side "clone" is not available (for example, on non-APFS volumes), it
falls back to server-side "copy" (still preserving metadata but using more disk
storage.) It is only used when both remotes are local (and not wrapped by other
remotes, such as crypt.) The behavior of local on non-mac systems is unchanged.
2024-08-15 15:36:38 +01:00
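The clone-with-fallback shape described above, sketched with golang.org/x/sys/unix (darwin-only; simplified relative to rclone's actual code):

```
//go:build darwin

package main

import (
	"errors"

	"golang.org/x/sys/unix"
)

// cloneOrCopy tries an APFS clone first (instant, deduplicated, and
// metadata-preserving), then falls back to a normal copy on
// filesystems that can't clone.
func cloneOrCopy(src, dst string, fallback func(src, dst string) error) error {
	err := unix.Clonefile(src, dst, unix.CLONE_NOFOLLOW)
	if err == nil {
		return nil
	}
	// Non-APFS volumes and cross-device clones report ENOTSUP/EXDEV.
	if errors.Is(err, unix.ENOTSUP) || errors.Is(err, unix.EXDEV) {
		return fallback(src, dst)
	}
	return err
}
```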
albertony
3e12612aae docs: add automatic alias redirects for command pages 2024-08-15 16:18:38 +02:00
Florian Klink
aee2480fc4 cmd/rc: add --unix-socket option
This adds an additional flag --unix-socket, and if supplied connects
to the unix socket given.

    rclone rcd --rc-addr unix:///tmp/my.socket
    rclone rc --unix-socket /tmp/my.socket core/stats
2024-08-15 15:14:51 +01:00
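Under the hood, talking HTTP over a unix socket comes down to a custom dialer; a self-contained sketch of the idea (the socket path is just an example):

```
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// An HTTP client that dials a unix socket - roughly what a
	// --unix-socket style flag has to arrange. The URL host is
	// ignored by the dialer but keeps net/http happy.
	client := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", "/tmp/my.socket")
		},
	}}
	resp, err := client.Post("http://localhost/core/stats", "application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```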
Florian Klink
3ffa47ea16 webdav: add --webdav-unix-socket-path to connect to a unix socket
This adds a new optional parameter to the backend, to specify a path
to a unix domain socket to connect to, instead of the specified URL.

The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.

This allows using rclone with the webdav backend to connect to a WebDAV
server provided at a Unix Domain socket:

    rclone serve webdav --addr unix:///tmp/my.socket remote:path
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
2024-08-15 15:14:51 +01:00
Nick Craig-Wood
70e8ad456f serve nfs: implement on disk cache for file handles 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
55b9b3e33a serve nfs: factor caching to its own file 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
ce7dfa075c serve nfs: update github.com/willscott/go-nfs to latest
This fixes various cache invalidation bugs
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
a697d27455 serve nfs: store billy FS in the Handler 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
cae22a7562 serve nfs: mask unimplemented error from chmod 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
877321c2fb serve nfs: add tracing to filesystem calls 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
574378e871 serve nfs: rename types and methods which should be internal 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
50d42babd8 nfsmount: require --vfs-cache-mode writes or above in tests
These tests fail for --vfs-cache-mode minimal on Linux for the same
reason they don't work properly with --vfs-cache-mode off
2024-08-14 21:55:26 +01:00
Nick Craig-Wood
13ea77dd71 nfsmount: allow tests to run on any unix where sudo mount/umount works 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
62b76b631c nfsmount: make the --sudo flag work for umount as well as mount 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
96f92b7364 nfsmount: add tcp option to NFS mount options to fix mounting under Linux 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
7c02a63884 build: install NFS client libraries to allow nfsmount tests to run 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
67d4394a37 vfstest: fix crash if open failed 2024-08-14 21:55:26 +01:00
Nick Craig-Wood
c1a98768bc Implement Gofile backend - fixes #4632 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
bac9abebfb lib/encoder: add Exclamation mark encoding 2024-08-14 21:15:37 +01:00
Nick Craig-Wood
27b281ef69 chunkedreader: add --vfs-read-chunk-streams to parallel read chunks
This converts the ChunkedReader into an interface and provides two
implementations one sequential and one parallel.

This can be used to improve the performance of the VFS on high
bandwidth or high latency links.

Fixes #4760
2024-08-14 21:13:09 +01:00
Nick Craig-Wood
10270a4354 accounting: fix race detected by the race detector 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
d08b49d723 pool: Add ability to wait for a write to RW 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
cb2d2d72a0 pool: Make RW thread safe so can read and write at the same time 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
e686e34f89 multipart: make pool buffer size public 2024-08-14 21:13:09 +01:00
Nick Craig-Wood
5f66350331 Add Fornax to contributors 2024-08-14 21:12:56 +01:00
Nick Craig-Wood
e1d935b854 build: use go1.23 for the linter
This reverts commit 485aa90d13.

As the upstream problem is now fixed by golangci-lint v1.60.1
2024-08-14 18:27:13 +01:00
Nick Craig-Wood
61b27cda80 build: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Which were fixed by re-arranging the arguments and adding "%s".

There were quite a few genuine bugs which were found too.
2024-08-14 18:25:40 +01:00
Nick Craig-Wood
83613634f9 build: bisync: fix govet lint errors with golangci-lint v1.60.1
There were a lot of instances of this lint error

    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)

Most of these could not easily be fixed so had nolint lines added.

This should probably be done in a neater way perhaps by making
LogColorf/ErrorColorf functions.
2024-08-14 18:21:31 +01:00
Nick Craig-Wood
1c80cbd13a build: fix staticcheck lint errors with golangci-lint v1.60.1 2024-08-14 17:48:24 +01:00
Nick Craig-Wood
9d5315a944 build: fix gosimple lint errors with golangci-lint v1.60.1 2024-08-14 17:46:12 +01:00
Nick Craig-Wood
8d1d096c11 drive: fix copying Google Docs to a backend which only supports SHA1
When copying Google Docs to Backblaze B2 errors like this would happen

    ERROR : test.docx: Failed to calculate src hash: hash type not supported
    ERROR : test.docx: corrupted on transfer: sha1 hashes differ src

This was due to an oversight in

8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum

Which omitted to change the base object (which includes Google Docs) so
that it supported SHA-1 and SHA-256.
2024-08-12 20:27:12 +01:00
Nick Craig-Wood
4b922d86d7 drive: update docs on creating admin service accounts 2024-08-12 20:27:12 +01:00
Fornax
3b3625037c Add pixeldrain backend
This commit adds support for pixeldrain's experimental filesystem API.
2024-08-12 13:35:44 +01:00
kapitainsky
bfa3278f30 docs: add comment how to reduce rclone binary size (#8000)
See #7998
2024-08-10 17:52:32 +01:00
albertony
e334366345 Make listremotes long output backwards compatible - fixes #7995
The format was changed to include the source attribute in #7404, but that is now
reverted and the source information is only shown in json output.
2024-08-09 17:39:00 +01:00
Nick Craig-Wood
642d4082ac test_backend_sizes.py calculates space in the binary each backend uses #7998 2024-08-09 12:13:24 +01:00
albertony
024ff6ed15 listremotes: added options for filtering, ordering and json output 2024-08-08 13:41:31 +01:00
albertony
d6b0743cf4 config: make getting config values more consistent 2024-08-08 13:41:31 +01:00
albertony
e4749cf0d0 config: make listing of remotes more consistent 2024-08-08 13:41:31 +01:00
albertony
8d2907d8f5 config: avoid remote with empty name from environment 2024-08-08 13:41:31 +01:00
albertony
1720d3e11c help: global flags help command extended filtering 2024-08-08 13:41:31 +01:00
albertony
c6352231e4 help: global flags help command now takes glob filter 2024-08-08 13:41:31 +01:00
albertony
731947f3ca filter: add options for glob to regexp without anchors and special path rules 2024-08-08 13:41:31 +01:00
albertony
16d642825d docs: remove old genautocomplete command docs and add as alias from the newer completion command 2024-08-08 13:34:10 +01:00
albertony
50aebcf403 docs: replace references to genautocomplete with the new name completion 2024-08-08 13:34:10 +01:00
Nick Craig-Wood
c8555d1b16 serve s3: update to AWS SDKv2 by updating github.com/rclone/gofakes3
This is the last dependency for the SDKv1 and this commit removes it
from go.mod also.
2024-08-07 16:35:39 +01:00
Nick Craig-Wood
3ec0ff5d8f s3: fix SSE-C after SDKv2 change
The new SDK apparently needs the customer key to be base64 encoded,
where the old one did that for you automatically.

See: https://github.com/aws/aws-sdk-go-v2/issues/2736
See: https://forum.rclone.org/t/new-s3-backend-help-testing-needed/47139/3
2024-08-07 12:13:13 +01:00
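For illustration, the encoding the new SDK expects (field names as in the AWS SDK; the key value is a made-up example):

```
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

func main() {
	// SDKv2 wants the SSE-C customer key (and its MD5) already
	// base64 encoded, whereas SDKv1 encoded them for you.
	key := []byte("32-byte-long-sse-c-customer-key!")
	sum := md5.Sum(key)
	fmt.Println(base64.StdEncoding.EncodeToString(key))    // SSECustomerKey
	fmt.Println(base64.StdEncoding.EncodeToString(sum[:])) // SSECustomerKeyMD5
}
```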
wiserain
746516511d pikpak: update to using AWS SDK v2 #4989 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
8aef1de695 s3: fix Cloudflare R2 integration tests after SDKv2 update #4989
Cloudflare will normally automatically decompress files with
`Content-Encoding: gzip` when downloaded. This is not what AWS S3 does
and it breaks the integration tests.

This fudges the integration tests to upload the test file with
`Cache-Control: no-transform` on Cloudflare R2 and puts a note in the
docs about this problem.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
cb611b8330 s3: add --s3-sdk-log-mode to control SDK debugging 2024-08-07 12:13:13 +01:00
Nick Craig-Wood
66ae050a8b s3: fix GCS provider after SDKv2 update #4989
This also adds GCS via S3 to the integration tester.
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
fd9049c83d s3: update to using AWS SDK v2 - fixes #4989
SDK v2 conversion

Changes

  - `--s3-sts-endpoint` is no longer supported
  - `--s3-use-unsigned-payload` to control use of trailer checksums (needed for non AWS)
2024-08-07 12:13:13 +01:00
Nick Craig-Wood
a1f52bcf50 fstest: implement method to skip ChunkedCopy tests 2024-08-06 12:45:07 +01:00
Nick Craig-Wood
0470450583 build: disable wasm/js build due to go bug
Rclone is too big for js/wasm until
https://github.com/golang/go/issues/64856 is fixed
2024-08-04 12:18:34 +01:00
Nick Craig-Wood
1901bae4eb Add @dmcardle as gitannex maintainer 2024-08-01 17:48:39 +01:00
Nick Craig-Wood
9866d1c636 docs: s3: add section on using too much memory #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
c5c7bcdd45 docs: link the workaround for big directory syncs in the FAQ #7974 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
d5c7b55ba5 Add David Seifert to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
feafbfca52 Add Will Miles to contributors 2024-08-01 16:33:09 +01:00
Nick Craig-Wood
abe01179ae Add Ernie Hershey to contributors 2024-08-01 16:33:09 +01:00
David Seifert
612c717ea0 docs: rc: correct _path to _root in on the fly backend docs 2024-07-30 10:19:47 +01:00
Saleh Dindar
f26d2c6ba8 fs/http: reload client certificates on expiry
In corporate environments, client certificates have short life times
for added security, and they get renewed automatically. This means
that a client certificate can expire in the middle of a long running
command such as `mount`.

This commit attempts to reload the client certificates 30s before they
expire.

This will be active for all backends which use HTTP.
2024-07-24 15:02:32 +01:00
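A sketch of that behaviour using crypto/tls's GetClientCertificate hook (the 30s margin is from the commit message; file paths and wiring are illustrative):

```
package main

import (
	"crypto/tls"
	"crypto/x509"
	"sync"
	"time"
)

type certReloader struct {
	mu                sync.Mutex
	certFile, keyFile string
	cert              *tls.Certificate
	expiry            time.Time
}

// get reloads the client certificate if it is within 30s of expiry.
func (r *certReloader) get(*tls.CertificateRequestInfo) (*tls.Certificate, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.cert == nil || time.Until(r.expiry) < 30*time.Second {
		cert, err := tls.LoadX509KeyPair(r.certFile, r.keyFile)
		if err != nil {
			return nil, err
		}
		if leaf, err := x509.ParseCertificate(cert.Certificate[0]); err == nil {
			r.expiry = leaf.NotAfter
		}
		r.cert = &cert
	}
	return r.cert, nil
}

// tlsConfig plugs the reloader into an HTTP transport's TLS config.
func tlsConfig(r *certReloader) *tls.Config {
	return &tls.Config{GetClientCertificate: r.get}
}
```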
Will Miles
dcecb0ede4 docs: clarify hasher operation
Add a line to the "other operations" block to indicate that the hasher overlay will apply auto-size and other checks for all commands.
2024-07-24 11:07:52 +01:00
Ernie Hershey
47588a7fd0 docs: fix typo in batcher docs for dropbox and googlephotos 2024-07-24 10:58:22 +01:00
Nick Craig-Wood
ba381f8721 b2: update versions documentation - fixes #7878 2024-07-24 10:52:05 +01:00
Nick Craig-Wood
8f0ddcca4e s3: document need to set force_path_style for buckets with invalid DNS names
Fixes #6110
2024-07-23 11:34:08 +01:00
Nick Craig-Wood
404ef80025 ncdu: document that excludes are not shown - fixes #6087 2024-07-23 11:29:07 +01:00
Nick Craig-Wood
13fa583368 sftp: clarify the docs for key_pem - fixes #7921 2024-07-23 10:07:44 +01:00
Nick Craig-Wood
e111ffba9e serve ftp: fix failed startup due to config changes
See: https://forum.rclone.org/t/failed-to-ftp-failed-to-parse-host-port/46959
2024-07-22 14:54:32 +01:00
Nick Craig-Wood
30ba7542ff docs: add Route4Me as a sponsor 2024-07-22 14:48:41 +01:00
wiserain
31fabb3402 pikpak: correct file transfer progress for uploads by hash
Pikpak can accelerate file uploads by leveraging existing content 
in its storage (identified by a custom hash called gcid). 
Previously, file transfer statistics were incorrect for uploads 
without outbound traffic as the input stream remained unchanged.

This commit addresses the issue by:

* Removing unnecessary unwrapping/wrapping of accountings
before/after gcid calculation, which was leading to immediate AccountRead() on buffering.
* Correctly tracking file transfer statistics for uploads 
with no incoming/outgoing traffic by marking them as Server Side Copies.

This change ensures correct statistics tracking and improves overall user experience.
2024-07-20 21:50:08 +09:00
Nick Craig-Wood
b3edc9d360 fs: fix --use-json-log and -vv after config reorganization 2024-07-20 12:49:08 +01:00
Nick Craig-Wood
04f35fc3ac Add Tobias Markus to contributors 2024-07-20 12:49:08 +01:00
Tobias Markus
8e5dd79e4d ulozto: fix upload of > 2GB files on 32 bit platforms - fixes #7960 2024-07-20 11:29:34 +01:00
Nick Craig-Wood
b809e71d6f lib/mmap: fix lint error on deprecated reflect.SliceHeader
reflect.SliceHeader is deprecated; however, the replacement gives a go
vet warning, so this disables the lint warning in one use of
reflect.SliceHeader and replaces it in the other.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d149d1ec3e lib/http: fix tests after go1.23 update
go1.22 output the Content-Length on a bad Range request on a file but
go1.23 doesn't - adapt the tests accordingly.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
3b51ad24b2 rc: fix tests after go1.23 upgrade
go1.23 adds a doctype to the HTML output when serving file listings.
This adapts the tests for that.
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
485aa90d13 build: use go1.22 for the linter to fix excess memory usage
golangci-lint seems to have a bug which uses excess memory under go1.23

See: https://github.com/golangci/golangci-lint/issues/4874
2024-07-20 10:54:47 +01:00
Nick Craig-Wood
8958d06456 build: update all dependencies 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
ca24447090 build: update to go1.23rc1 and make go1.21 the minimum required version 2024-07-20 10:54:47 +01:00
Nick Craig-Wood
d008381e59 Add AThePeanut4 to contributors 2024-07-20 10:54:47 +01:00
AThePeanut4
14629c66f9 systemd: prevent unmount rc command from sending a STOPPING=1 sd-notify message
This prevents an `rclone rcd` server from prematurely going into the
'deactivating' state, which was causing systemd to kill it with a
SIGABRT after the stop timeout.

Fixes #7540
2024-07-19 10:32:34 +01:00
Nick Craig-Wood
4824837eed azureblob: allow anonymous access for public resources
See: https://forum.rclone.org/t/azure-blob-public-resources/46882
2024-07-18 11:13:29 +01:00
Nick Craig-Wood
5287a9b5fa Add Ke Wang to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f2ce1767f0 Add itsHenry to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
7f048ac901 Add Tomasz Melcer to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
b0d0e0b267 Add Paul Collins to contributors 2024-07-18 11:13:29 +01:00
Nick Craig-Wood
f5eef420a4 Add Russ Bubley to contributors 2024-07-18 11:13:29 +01:00
Sawjan Gurung
9de485f949 serve s3: implement --auth-proxy
This implements --auth-proxy for serve s3. In addition it:

* add listbuckets tests with and without authProxy
* use auth proxy test framework
* servetest: implement workaround for #7454
* update github.com/rclone/gofakes3 to fix race condition
2024-07-17 15:14:08 +01:00
Kyle Reynolds
d4b29fef92 fs: Allow semicolons as well as spaces in --bwlimit timetable parsing - fixes #7595 2024-07-17 11:04:01 +01:00
wiserain
471531eb6a pikpak: optimize upload by pre-fetching gcid from API
This commit optimizes the PikPak upload process by pre-fetching the Global 
Content Identifier (gcid) from the API server before calculating it locally.

Previously, a gcid required for uploads was calculated locally. This process was 
resource-intensive and time-consuming. By first checking for a cached gcid 
on the server, we can potentially avoid the local calculation entirely. 
This significantly improves upload speed especially for large files.
2024-07-17 12:20:09 +09:00
Nick Craig-Wood
afd2663057 rc: add option blocks parameter to options/get and options/info 2024-07-16 15:02:50 +01:00
Ke Wang
97d6a00483 chore(deps): update github.com/rclone/gofakes3 2024-07-16 10:58:02 +01:00
Nick Craig-Wood
5ddedae431 fstest: fix compile after merge
After merging this commit

56caab2033 b2: Include custom upload headers in large file info

The compile failed as a change had been missed. Should have rebased
before merging!
2024-07-15 12:18:14 +01:00
URenko
e1b7bf7701 local: fix encoding of root path
fix #7824
Statements like `rclone copy <somewhere> .` will spontaneously fail
if `.` expands to a path with a Full Width replacement character.
This was due to the incorrect order in which relative paths and
decoding were handled in the original implementation.
2024-07-15 12:10:04 +01:00
URenko
2a615f4681 vfs: fix cache encoding with special characters - #7760
The VFS used the hardcoded OS encoding when creating a temp file, but
decoded it with the encoding for the local filesystem (--local-encoding)
when copying it to the remote.
This caused failures when filenames contained special characters.
The hardcoded OS encoding is now used uniformly.
2024-07-15 12:10:04 +01:00
URenko
e041796bfe docs: correct description of encoding None and add Raw. 2024-07-15 12:10:04 +01:00
URenko
1b9217bc78 lib/encoder: add EncodeRaw 2024-07-15 12:10:04 +01:00
wiserain
846c1aeed0 pikpak: non-buffered hash calculation for local source files 2024-07-15 11:53:01 +01:00
Pat Patterson
56caab2033 b2: Include custom upload headers in large file info - fixes #7744 2024-07-15 11:51:37 +01:00
itsHenry
495a5759d3 chore(deps): update github.com/rclone/gofakes3 2024-07-15 11:34:28 +01:00
Nick Craig-Wood
d9bd6f35f2 fs/test: fix erratic test 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
532a0818f7 fs: make sure we load the options defaults to start with 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
91558ce6aa fs: fix the defaults overriding the actual config
After re-organising the config it became apparent that there was a bug
in the config system which hadn't manifested until now.

This was the default config overriding the main config and was fixed
by noting when the defaults had actually changed.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8fbb259091 rc: add options/info call to enumerate options
This also makes some fields in the Options block optional - these are
documented in rc.md
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
4d2bc190cc fs: convert main options to new config system
There are some flags which haven't been converted which could be
converted in the future.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c2bf300dd8 accounting: fix creating of global stats ignoring the config
Before this change the global stats were created before the global
config which meant they ignored the global config completely.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c954c397d9 filter: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
25c6379688 filter: rename Opt to Options for consistency 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
ce1859cd82 rc: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
cf25ae69ad lib/http: convert options to new style
There are still users of the old style options which haven't been
converted yet.
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
dce8317042 log: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eff2497633 serve sftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
28ba4b832d serve nfs: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
58da1a165c serve ftp: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
eec95a164d serve dlna: convert options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
44cd2e07ca cmd/mountlib: convert mount options to new style 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
a28287e96d vfs: convert vfs options to new style
This also
- move in use options (Opt) from vfsflags to vfscommon
- change os.FileMode to vfscommon.FileMode in parameters
- rework vfscommon.FileMode and add tests
2024-07-15 11:09:54 +01:00
Nick Craig-Wood
fc1d8dafd5 vfs: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2c57fe9826 cmd/mountlib: convert time.Duration option to fs.Duration 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
7c51b10d15 configstruct: skip items with config:"-" 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
3280b6b83c configstruct: allow parsing of []string encoded as JSON 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
1a77a2f92b configstruct: make nested config structs work 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c156716d01 configstruct: fix parsing of invalid booleans in the config
Apparently fmt.Sscanln doesn't parse bools properly and this isn't
likely to be fixed by the Go team who regard sscanf as a mistake.

This only uses sscan for integers and uses the correct routine for
everything else.

This also implements parsing time.Duration

See: https://github.com/golang/go/issues/43306
2024-07-15 11:09:54 +01:00
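A small illustration of the per-type parsing approach (illustrative, not the configstruct code itself):

```
package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// fmt.Sscan is only trusted for integers; bools and durations
	// get their proper parsers.
	var n int
	if _, err := fmt.Sscan("42", &n); err != nil {
		panic(err)
	}
	b, _ := strconv.ParseBool("true")   // handles 1/t/TRUE/true etc.
	d, _ := time.ParseDuration("1h30m") // durations parsed separately
	fmt.Println(n, b, d)                // 42 true 1h30m0s
}
```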
Nick Craig-Wood
0d9d0eef4c fs: check the names and types of the options blocks are correct 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
2e653f8128 fs: make Flagger and FlaggerNP interfaces public so we can test flags elsewhere 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
e79273f9c9 fs: add Options registry and rework rc to use it 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
8e10fe71f7 fs: allow []string to work in Options 2024-07-15 11:09:54 +01:00
Nick Craig-Wood
c6ab37a59f flags: factor AddFlagsFromOptions from cmd
This is in preparation for generalising the backend config
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
671a15f65f fs: add Groups and FieldName to Option 2024-07-15 11:09:53 +01:00
Nick Craig-Wood
8d72698d5a fs: refactor fs.ConfigMap to take a prefix and Options rather than an fs.RegInfo
This is in preparation for generalising the backend config system
2024-07-15 11:09:53 +01:00
Nick Craig-Wood
6e853c82d8 sftp: ignore errors when closing the connection pool
There is no need to report errors when draining the connection pool -
they are useless at this point.

See: https://forum.rclone.org/t/rclone-fails-to-close-unused-tcp-connections-due-to-use-of-closed-network-connection/46735
2024-07-15 10:48:45 +01:00
Tomasz Melcer
27267547b9 sftp: use uint32 for mtime
The SFTP protocol (and the golang sftp package) internally uses uint32 unix
time for expressing mtime. Hence it is a waste of memory to store it as a
24-byte time.Time in long-lived data structures. So although the golang
sftp package uses time.Time as its external interface, we can re-encode the
value back to the original format and save memory.

Co-authored-by: Tomasz Melcer <tomasz@melcer.pl>
2024-07-09 10:23:11 +01:00
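A sketch of the re-encoding idea with a simplified entry type (the backend's real types differ):

```
package main

import "time"

// dirEntry stores mtime in the wire format: seconds since the unix
// epoch as a uint32, exactly as SFTP sends it.
type dirEntry struct {
	name  string
	mtime uint32
}

// ModTime re-encodes to time.Time only at the external interface.
func (e *dirEntry) ModTime() time.Time {
	return time.Unix(int64(e.mtime), 0)
}

func (e *dirEntry) setModTime(t time.Time) {
	e.mtime = uint32(t.Unix())
}
```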
wiserain
cdcf0e5cb8
pikpak: optimize file move by removing unnecessary readMetaData() call
Previously, the code relied on calling `readMetaData()` after every file move operation.
This introduced an unnecessary API call and potentially impacted performance.

This change removes the redundant `readMetaData()` call, improving efficiency.
2024-07-08 18:16:00 +09:00
wiserain
6507770014
pikpak: fix error with copyto command
Fixes an issue where copied files could not be renamed when using the
`copyto` command. This occurred because the object ID was empty
before calling `readMetaData`. The fix preemptively calls `readMetaData`
to ensure an object ID is available before attempting the rename operation.
2024-07-08 10:37:42 +09:00
Paul Collins
bd5799c079 swift: add workarounds for bad listings in Ceph RGW
Ceph's Swift API emulation does not fully conform to the API spec.
As a result, it sometimes returns fewer items in a container than
the requested limit, which according to the spec should mean
that there are no more objects left in the container.  (Note that
python-swiftclient always fetches unless the current page is empty.)

This commit adds a pair of new Swift backend settings to handle this.

Set `fetch_until_empty_page` to true to always fetch another
page of the container listing unless there are no items left.

Alternatively, set `partial_page_fetch_threshold` to an integer
percentage.  In this case rclone will fetch a new page only when
the current page is within this percentage of the limit.
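
For example, a config sketch under the assumption that these settings go in
the swift remote's section (the remote name `myswift` is illustrative):

    [myswift]
    type = swift
    fetch_until_empty_page = true

or alternatively `partial_page_fetch_threshold = 80`.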

Swift API reference: https://docs.openstack.org/swift/latest/api/pagination.html

PR against ncw/swift with research and discussion: https://github.com/ncw/swift/pull/167

Fixes #7924
2024-06-28 11:14:26 +01:00
Russ Bubley
c834eb7dcb
sftp: fix docs on connections not to refer to concurrency 2024-06-28 10:42:52 +01:00
Nick Craig-Wood
754e53dbcc docs: remove warp as silver sponsor 2024-06-24 10:33:18 +01:00
Nick Craig-Wood
5511fa441a onedrive: fix nil pointer error when uploading small files
Before this fix when uploading a single part file, if the
o.fetchAndUpdateMetadata() call failed rclone would call
o.setMetaData() with a nil info which caused a crash.

This fixes the problem by returning the error from
o.fetchAndUpdateMetadata() explicitly.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
4ed4483bbc vfs: fix fatal error: sync: unlock of unlocked mutex in panics
Before this change a panic could be overwritten with the message

    fatal error: sync: unlock of unlocked mutex

This was because we temporarily unlocked the mutex, but failed to lock
it again if there was a panic.

This code is never the cause of an error but it masks the
underlying error by overwriting the panic cause.

See: https://forum.rclone.org/t/serve-webdav-is-crashing-fatal-error-sync-unlock-of-unlocked-mutex/46300
2024-06-24 09:30:59 +01:00
Nick Craig-Wood
0e85ba5080 Add Filipe Herculano to contributors 2024-06-24 09:30:59 +01:00
Nick Craig-Wood
e5095a7d7b Add Thearas to contributors 2024-06-24 09:30:59 +01:00
wiserain
300851e8bf
pikpak: implement custom hash to replace wrong sha1
This improves PikPak's file integrity verification by implementing a custom 
hash function named gcid and replacing the previously used SHA-1 hash.
2024-06-20 00:57:21 +09:00
wiserain
cbccad9491
pikpak: improves data consistency by ensuring async tasks complete
Similar to uploads implemented in commit ce5024bf33, 
this change ensures most asynchronous file operations (copy, move, delete, 
purge, and cleanup) complete before proceeding with subsequent actions. 
This reduces the risk of data inconsistencies and improves overall reliability.
2024-06-20 00:07:05 +09:00
dependabot[bot]
9f1a7cfa67 build(deps): bump docker/build-push-action from 5 to 6
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 5 to 6.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v5...v6)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-18 14:48:30 +01:00
Filipe Herculano
d84a4c9ac1
s3: fix incorrect region for Magalu provider 2024-06-15 17:40:28 +01:00
Thearas
1c9da8c96a docs: recommend no_check_bucket = true for Alibaba - fixes #7889
Change-Id: Ib6246e416ce67dddc3cb69350de69129a8826ce3
2024-06-15 17:39:05 +01:00
Nick Craig-Wood
af9c5fef93 docs: tidy .gitignore for docs 2024-06-15 13:08:20 +01:00
Nick Craig-Wood
7060777d1d docs: fix hugo warning: found no layout file for "html" for kind "term"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "term": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

This turned out to be the addition of the `groups:` keyword to the
command frontmatter. Hugo is doing something with this keyword, though
this isn't documented in the frontmatter documentation.

The fix was removing the `groups:` keyword from the frontmatter since
it was never used by hugo.
2024-06-15 12:59:49 +01:00
Nick Craig-Wood
0197e7f4e5 docs: remove slug and url from command pages since they are no longer needed 2024-06-15 12:37:43 +01:00
Nick Craig-Wood
c1c9e209f3 docs: fix hugo warning: found no layout file for "html" for kind "section"
Hugo has been making this warning for a while

WARN found no layout file for "html" for kind "section": You should
create a template file which matches Hugo Layouts Lookup Rules for
this combination.

It turned out to be
- the arrangement of the oracle object storage docs and sub page
- the fact that a section template was missing
2024-06-15 12:29:37 +01:00
Nick Craig-Wood
fd182af866 serve dlna: fix panic: invalid argument to Int63n
This updates the upstream github.com/anacrolix/dms to master to fix
the problem.

Fixes #7911
2024-06-15 10:58:57 +01:00
Nick Craig-Wood
4ea629446f Start v1.68.0-DEV development 2024-06-14 17:54:27 +01:00
Nick Craig-Wood
93e8a976ef Version v1.67.0 2024-06-14 16:04:51 +01:00
nielash
8470bdf810 s3: fix 405 error on HEAD for delete marker with versionId
When getting an object by specifying a versionId in the request, if the
specified version is a delete marker, it returns 405 (Method Not Allowed),
instead of 404 (Not Found) which would be returned without a versionId. See
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html

Before this change, we were only looking for 404 (and not 405) to determine
whether the object exists. This meant that in some circumstances (ex. when
Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id), we
deemed the object to exist when we should not have.

After this change, 405 (Method Not Allowed) is treated the same as 404 (Not
Found) for the purposes of headObject.

See https://forum.rclone.org/t/bisync-rename-failed-method-not-allowed/45723/13
2024-06-13 18:09:29 +01:00
Nick Craig-Wood
1aa3a37a28 gitannex: make tests run more quietly - use go test -v for more info
These tests were generating 1000s of lines of logs and making it
difficult to figure out what was failing in other tests.
2024-06-13 17:33:56 +01:00
albertony
ae887ad042 jottacloud: set metadata on server side copy and move - fixes #7900 2024-06-13 16:19:36 +01:00
Nick Craig-Wood
d279fea44a qingstor: disable integration tests as test account suspended
QingStor support have disabled the integration test account with this message

尊敬的用户您好:依据监管部门相关内容安全合规要求,QingStor即日起限制对
个人客户提供对象存储服务,您的对象存储服务将被系统置于禁用状态,如需继
续使用QingsStor对象存储服务,您可以通过工单或者拨打400热线申请开通,未
解封期间您的数据将不受影响,感谢您的谅解和支持。

Which google translate renders as

> Dear user: In accordance with the relevant content security
> compliance requirements of the regulatory authorities, QingStor will
> limit the provision of object storage services to individual
> customers from now on. Your object storage service will be disabled
> by the system. If you need to continue to use the QingsStor object
> storage service, you can apply for activation through a work order
> or by calling the 400 hotline. Your data will not be affected during
> the period of unblocking. Thank you for your understanding and
> support.
2024-06-13 12:50:35 +01:00
Nick Craig-Wood
282e34f2d5 operations: add operations.ReadFile to read the contents of a file into memory 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
021f25a748 fs: make ConfigFs take an fs.Info which makes it more useful 2024-06-13 12:48:46 +01:00
Nick Craig-Wood
18e9d039ad touch: fix using -R on certain backends
On backends which return a valid object for "" with NewObject,
touch went wrong as it thought it was passed an object.

This should not happen normally, but s3 can be configured with
--s3-no-head where it is happy to believe that all objects exist.
2024-06-12 17:57:28 +01:00
Nick Craig-Wood
cbcfb90d9a serve s3: fix XML of error message
This updates the s3 library to fix the XML of the error response

Fixes #7749
2024-06-12 17:53:57 +01:00
Nick Craig-Wood
caba22a585 fs/logger: make the tests deterministic
Previously this used `rclone test makefiles --seed 0` which sets a
random seed and every now and again we get this error

    Failed to open file "$WORK\\src\\moru": open $WORK\src\moru: is a directory

Because a file with the same name was created as a file in the src and
a dir in the dst.

This fixes it by using deterministic seeds each time.
2024-06-12 16:39:30 +01:00
Nick Craig-Wood
3fef8016b5 zoho: sleep for 60 seconds if rate limit error received 2024-06-12 16:34:30 +01:00
Nick Craig-Wood
edf6537c61 zoho: remove simple file names complication which is no longer needed 2024-06-12 16:34:27 +01:00
Nick Craig-Wood
00f0e9df9d zoho: retry reading info if size wasn't returned 2024-06-12 16:34:24 +01:00
Nick Craig-Wood
e6ab644350 zoho: fix throttling problem when uploading files
Before this change rclone checked to see if a file existed before
uploading it. It did this to avoid making duplicate files. This
involved listing the destination directory to see if the file existed
which was rate limited by Zoho.

However Zoho can't have duplicate files anyway so this fix just
removes that check and the PutUnchecked method which isn't needed.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697
See: https://forum.rclone.org/t/followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/44794
2024-06-12 16:34:18 +01:00
Nick Craig-Wood
61c18e3b60 zoho: use cursor listing for improved performance
Cursor listing enables us to list up to 1,000 items per call
(previously it was 10) and uses one less transaction per call.

See: https://forum.rclone.org/t/second-followup-on-the-older-topic-rclone-invokes-more-number-of-workdrive-s-files-listing-api-calls-which-exceeds-the-throttling-limit/45697/4
2024-06-12 16:34:11 +01:00
Nick Craig-Wood
d068e0b1a9 operations: fix hashing problem in integration tests
Before this change backends which supported more than one hash (eg
pcloud) or backends which wrapped backends supporting more than one
hash (combine) would fail the TestMultithreadCopy and
TestMultithreadCopyAbort with an error like

    Failed to make new multi hasher: requested set 000001ff contains unknown hash types

This was caused by the tests limiting the globally available hashes to
the first hash supplied by the backend.

This was added in this commit

d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails

to overcome the tests taking >100s on the local backend because they
made every single hash that the local backend supported. It brought this
time down to 20s.

This commit fixes the problem and retains the CPU speedup by only
applying the fix from the original commit if the destination backend
is the local backend. This fixes the common case (testing on the local
backend). This does not fix the problem for a backend which wraps the
local backend (eg combine) but this is run only on the integration
test machine and not on all the CI.
2024-06-12 11:11:54 +01:00
Nick Craig-Wood
a341065b8d Add Bill Fraser to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
0c29a1fe31 Add Florian Klink to contributors 2024-06-12 11:11:54 +01:00
Nick Craig-Wood
1a40300b5f Add Michał Dzienisiewicz to contributors 2024-06-12 11:11:54 +01:00
dependabot[bot]
44be27729a build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.5.2 to 1.6.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/internal/v1.5.2...sdk/azcore/v1.6.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-06-12 09:33:04 +01:00
wiserain
b7624287ac
pikpak: implement configurable chunk size for multipart upload
Previously, the fixed 10MB chunk size could lead to exceeding the maximum 
allowed number of parts for very large files. Similar to other backends, options for 
chunk size and upload concurrency are now user-configurable. Additionally, 
the internal library is used to automatically adjust chunk size to prevent exceeding 
upload part limitations.

Fixes #7850
2024-06-12 13:19:25 +09:00
Evan Harris
6db9f7180f docs: added info about --progress terminal width 2024-06-11 21:37:40 +01:00
wiserain
b601961e54 pikpak: remove PublicLink from integration tests
This commit removes the test for PublicLink as it is not currently supported in the test environment.

This removes it from the integration tests to avoid meaningless retries.
2024-06-12 04:43:49 +09:00
Nick Craig-Wood
51aca9cf9d onedrive: add --onedrive-hard-delete to permanently delete files
Fixes #7812
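
For example (path illustrative):

    rclone delete --onedrive-hard-delete onedrive:old-files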
2024-06-11 17:48:15 +01:00
Bill Fraser
7c0645dda9 dropbox: add option to override root namespace
This lets you, for example, use shared folders without mounting them
into your home namespace first, as long as you know their namespace ID.

(The --dropbox-shared-folders flag could thus be changed to not need to
mount the shared folder first, but I'm not doing that here as it's a
behavior change, who knows, maybe somebody relies on it.)
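
Assuming the new option is exposed as `--dropbox-root-namespace` (the
namespace ID below is a placeholder):

    rclone lsf --dropbox-root-namespace 123456789 dropbox: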
2024-06-11 12:49:01 +01:00
Florian Klink
aed77a8fb2 tree-wide: replace /bin/bash with /usr/bin/env bash
The latter is more portable, while the former only works on systems
where /bin/bash exists (or is symlinked appropriately).
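
That is, scripts now start with:

    #!/usr/bin/env bash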
2024-06-11 12:47:47 +01:00
Michał Dzienisiewicz
4250dd98f3 protondrive: don't auth with an empty access token 2024-06-11 12:46:00 +01:00
nielash
c13118246c serve s3: fix in-memory metadata storing wrong modtime
Before this change, serve s3 did not consistently save the correct modtime value
in memory after putting or copying an object, which could sometimes cause an
incorrect modtime to be returned. This change fixes the issue by ensuring that
both "mtime" and "X-Amz-Meta-Mtime" are updated in b.meta when we have fresh data.

The issue was discovered on the TestBisyncRemoteRemote/ext_paths test.
2024-06-11 12:10:43 +01:00
nielash
a56cd52025 vfs: fix renaming a directory
Before this change, renaming a directory d failed to rename its key in
d.parent.items, which caused trouble later when doing Dir.Stat on a
subdirectory. This change fixes the issue.
2024-06-11 12:09:19 +01:00
nielash
3ae4534ce6 fstest: make RandomRemoteName shorter
Use 12 random characters instead of 24, to avoid trouble on the bisync tests
2024-06-11 12:02:48 +01:00
nielash
9c287c72d6 googlephotos: remove unnecessary nil check
golangci-lint was complaining about this. `entry` can never be nil because
itemToDirEntry never returns a nil interface value
2024-06-11 11:55:42 +01:00
nielash
862d5d6086 s3, googlecloudstorage, azureblob: fix encoding issue with dir path comparison
`remote` has been converted ToStandardPath a few lines above, so `directory`
needs to be converted the same way in order to be compared properly. This was
spotted on `TestBisyncRemoteRemote/extended_filenames` for
`TestS3,directory_markers:` and `TestGoogleCloudStorage,directory_markers:`
which tripped over a directory name containing a Line Feed symbol.
2024-06-11 11:54:54 +01:00
Nick Craig-Wood
003f4531fe sync: don't test reading metadata if we can't write it 2024-06-11 11:31:35 +01:00
Nick Craig-Wood
a52e887ddd linkbox: ignore TestListDirSorted test until encoding is implemented 2024-06-11 11:31:35 +01:00
Nick Craig-Wood
b7681e72bf Add Tomasz Melcer to contributors 2024-06-11 11:31:35 +01:00
wiserain
ce5024bf33
pikpak: improve upload reliability and resolve potential file conflicts
This attempts to resolve upload conflicts by implementing cancel/cleanup on
failed uploads:

* fix panic error on defer cancel upload
* increase pacer min sleep from 10 to 100 ms
* stop using uploadByForm()
* introduce force sleep before and after async tasks
* use pacer's retry scheme instead of manual implementation

Fixes #7787
2024-06-10 18:08:07 +01:00
Tomasz Melcer
d2af114139 sftp: --sftp-connections to limit maximum number of connections
Done based on a similar feature in the ftp remote. However, the switch
name is different, as `concurrency` is already taken by a different
feature.
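
For example, to cap the pool at 4 connections (paths illustrative):

    rclone copy --sftp-connections 4 sftp:src /tmp/dst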
2024-06-09 18:36:30 +01:00
Nick Craig-Wood
c8d6b02dd6 ulozto: fix panic in various integration tests
Before this change some of the integration tests were producing this error

    panic: runtime error: invalid memory address or nil pointer dereference

This was caused by an `fs.Object` of which the type (`*Object`) was
not `nil`, but the value within was `nil`. These do not compare as
`nil` leading to the panic.

This is a classic Go gotcha: https://go.dev/doc/faq#nil_error

This was easily fixed by changing the type of one function to return
fs.Object instead of *Object.
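
A minimal sketch of the gotcha (types simplified, not rclone's actual code):

```
package main

import "fmt"

type Object struct{}

func (o *Object) String() string { return "object" }

// lookup returns a typed nil pointer
func lookup() *Object { return nil }

func main() {
	var entry fmt.Stringer = lookup() // interface holds (type=*Object, value=nil)
	fmt.Println(entry == nil)         // false -- the classic gotcha
}
```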
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
55cac4c34d swift: fix integration tester with use_segments_container=false 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
7ce60a47e8 drive: fix tests for backend query command
The tests assumed that there would be only one match, but on the
integration test server there are multiple matches due to failed test
runs.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
27496fb26d mailru: attempt to fix throttling by increasing min sleep to 100ms
Before this change we waited a minimum of 10ms between API calls for
mailru.

The tests no longer pass at this rate, so this increases the time to
100ms.

See #7768
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
39f8d039fe sync: fix expecting SFTP to have MkdirMetadata method: optional feature not implemented
Before this fix we attempted to copy metadata to SFTP backends despite
them not being capable of it.

This fixes the problem by making the need to copy metadata explicit
rather than implicit in a value being present or not.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
57f5ad188b operations: fix incorrect modtime on some multipart transfers
In this commit

6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk

We broke the setting of modification times when doing multipart
transfers from a backend which didn't support metadata to a backend
which did support metadata.

This was fixed by setting the "mtime" in the metadata if it was
missing.
2024-06-08 17:44:11 +01:00
Nick Craig-Wood
76798d5bb1 sync: fix tests on backends which can't have empty directories 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
5921bb0efd cache: fix tests when testing for Object.SetMetadata 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
ce0d8a70a3 Add Charles Hamilton to contributors 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
c23a40cb2a Add Thomas Schneider to contributors 2024-06-08 17:44:11 +01:00
Nick Craig-Wood
86a1951a56 Add Bruno Fernandes to contributors 2024-06-08 17:44:11 +01:00
Charles Hamilton
b778ec0142
windows: make rclone work with SeBackupPrivilege and/or SeRestorePrivilege
On Windows, this change includes the `FILE_FLAG_BACKUP_SEMANTICS` in
all calls to `CreateFile`.

Adding this flag is useful when rclone is running within a
security context that has `SeBackupPrivilege` and/or `SeRestorePrivilege`
token privileges enabled.

Without this flag, rclone cannot properly leverage special security
groups such as Backup Operators, who possess these privileges.

See: https://forum.rclone.org/t/rclone-sebackupprivilege-file-flag-backup-semantics/45339
See: https://github.com/rclone/rclone/pull/7877.
2024-06-07 13:26:30 +01:00
Dan McArdle
dac7f76b14 cmd/gitannex: Update command docs
Mentioned the possibility of skipping the symlink for new versions of
git-annex. (Probably deserves a test once the new git-annex trickles
down to CI platforms.)

I stopped trying to explain each config parameter here. Rather, the doc
now shows the user how to ask git-annex to describe config parameters
with `--whatelse`.
2024-06-06 17:42:27 +01:00
Dan McArdle
446d6b28b8 cmd/gitannex: Support synonyms of config values 2024-06-06 17:42:27 +01:00
Thomas Schneider
7e04ff9528 S3: Ceph Backend use already exist changed to true (now tested) - fixes #7871 2024-06-06 11:27:07 +01:00
Bruno Fernandes
4568feb5f9
s3: Add Magalu S3 Object Storage as provider 2024-06-06 11:25:45 +01:00
Nick Craig-Wood
b9a2d3b6b9 config: fix default value for description 2024-06-06 09:25:17 +01:00
Nick Craig-Wood
775e567a7b b2: update URLs to new home 2024-06-06 09:25:17 +01:00
Nick Craig-Wood
59fc7ac193 Add yumeiyin to contributors 2024-06-06 09:25:17 +01:00
albertony
fea61cac9e serve dlna: make BrowseMetadata more compliant - fixes #7883 2024-06-02 14:07:45 +02:00
albertony
3f3e4b055e Fix new lint issues reported by golangci-lint v1.59.0
Error return value of `fmt.Fprintf` is not checked (errcheck)
2024-05-31 09:48:32 +02:00
yumeiyin
2257c03391
docs: fix some comments 2024-05-24 21:39:40 +02:00
Nick Craig-Wood
8f1c309c81 build: update all dependencies 2024-05-22 15:50:31 +01:00
Nick Craig-Wood
8e2f596fd0 drive: debug when we are ignoring permissions #7853 2024-05-21 15:32:26 +01:00
Nick Craig-Wood
de742ffc67 Add Dominik Joe Pantůček to contributors 2024-05-21 15:32:26 +01:00
Dominik Joe Pantůček
181ed55662
docs: crypt: fix incorrect terminology
This fixes the misuse of the key-derivation term (salt) used in place
of symmetric cipher nonce (IV) in the crypt remote documentation.
2024-05-20 23:21:21 +01:00
Nick Craig-Wood
a5700a4a53 operations: rework rcat so that it doesn't call the --metadata-mapper twice
The --metadata-mapper was being called twice for files that rclone
needed to stream to disk.

This happened only for:
- files bigger than --streaming-upload-cutoff
- on backends which didn't support PutStream

This also meant that these were being logged as two transfers which
was a little strange.

This fixes the problem by not using operations.Copy to upload the file
once it has been streamed to disk, instead using the Put method on the
backend.

This should have no effect on reliability of the transfers as we retry
Put if possible.

This also tidies up the Rcat function to make the different ways of
uploading the data clearer and make it easy to see that it gets
verified on all those paths.

See #7848
2024-05-20 18:16:54 +01:00
Nick Craig-Wood
faa58315c5 operations: ensure SrcFsType is set correctly when using --metadata-mapper
Before this change on files which have unknown length (like Google
Documents) the SrcFsType would be set to "memoryFs".

This change fixes the problem by getting the Copy function to pass the
src Fs into a variant of Rcat.

Fixes #7848
2024-05-20 18:16:54 +01:00
Nick Craig-Wood
7b89735ae7 onedrive: allow setting permissions to fail if failok flag is set
For example using

    --onedrive-metadata-permissions read,write,failok

Will allow permissions to be read and written but if the writing
fails, then only an ERROR will be written in the log and the transfer
won't fail.
2024-05-17 11:03:46 +01:00
Nick Craig-Wood
91192c2c5e Add Evan McBeth to contributors 2024-05-17 11:03:46 +01:00
Evan McBeth
96e39ea486
docs: improve readability in faq 2024-05-16 15:35:42 +02:00
Nick Craig-Wood
488ed28635 fs: fix panic when using --metadata-mapper on large google doc files
Before this change, attempting to copy a large google doc while using
the metadata mapper caused a panic. Google doc files use Rcat to
download as they have an unknown size, and when the size of the doc
file got above --streaming-upload-cutoff it used an
object.NewStaticObjectInfo with a `nil` Fs to upload the file, which
caused the crash in the metadata mapper code.

This change makes sure that the Fs in object.NewStaticObjectInfo is
never nil, and it returns MemoryFs which is consistent with the Rcat
code when the source is sized below the --streaming-upload-cutoff
threshold.

Fixes #7845
2024-05-16 10:05:08 +01:00
Nick Craig-Wood
b059c96322 Add JT Olio to contributors 2024-05-16 10:05:08 +01:00
Nick Craig-Wood
6d22168a8c Add overallteach to contributors 2024-05-16 10:05:08 +01:00
JT Olio
e34e2df600 go.mod: update storj.io/uplink to latest release
significant performance and stability improvements
2024-05-16 08:43:57 +01:00
overallteach
6607102034 chore: fix function name in comment
Signed-off-by: overallteach <cricis@foxmail.com>
2024-05-15 19:30:17 +01:00
Nick Craig-Wood
c6c327e4e7 build: update issue label notification machinery 2024-05-15 15:07:45 +01:00
Nick Craig-Wood
6a0a54ab97 operations: fix missing metadata for multipart transfers to local disk
Before this change multipart downloads to the local disk with
--metadata failed to have their metadata set properly.

This was because the OpenWriterAt interface doesn't receive metadata
when creating the object.

This patch fixes the problem by using the recently introduced
Object.SetMetadata method to set the metadata on the object after the
download has completed (when using --metadata). If the backend we are
copying to is using OpenWriterAt but the Object doesn't support
SetMetadata then it will write an ERROR level log but complete
successfully. This should not happen at the moment as only the local
backend supports metadata and OpenWriterAt but it may in the future.

It also adds a test to check metadata is preserved when doing
multipart transfers.

Fixes #7424
2024-05-14 12:51:03 +01:00
Nick Craig-Wood
629e895da8 local: implement Object.SetMetadata 2024-05-14 12:51:03 +01:00
Nick Craig-Wood
cc634213a5 fs: define the optional interface SetMetadata and implement it in wrapping backends
This also implements backend integration tests for the feature
2024-05-14 12:51:03 +01:00
Nick Craig-Wood
e9e9feb21e drive: allow setting metadata to fail if failok flag is set
For example using

    --drive-metadata-permissions read,write,failok

Will allow metadata to be read and written but if the writing fails,
then only an ERROR will be written in the log and the transfer won't
fail.
2024-05-13 19:44:03 +01:00
Dan McArdle
f26fc8f07c cmd/gitannex: When tags do not match, run e2e tests anyway
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
96703bb31e build: Inject rclone version tag when testing
This enables gitannex end-to-end tests to run on CI. Otherwise, the
version would not match and tests that check the rclone version would
fail like so:

```
=== RUN   TestEndToEnd
    e2e_test.go:199: Skipping due to rclone version: expected version "v1.67.0-DEV", but got "v1.67.0-beta.7905.220bbe24d.merge"
--- SKIP: TestEndToEnd (0.07s)
```

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
96d3adc771 cmd/gitannex: Remove assumption in e2e test version check 2024-05-13 18:44:31 +01:00
Dan McArdle
f82822baca .github/workflows: Install git-annex-remote-rclone on Linux and macOS
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
af33a4f822 cmd/gitannex: Add TestEndToEndMigration tests
For each layout mode, these tests start with a git-annex-remote-rclone
remote, migrate it to a git-annex-remote-rclone-builtin remote. They
verify that a file copied pre-migration is still present and that `git
annex testremote` passes.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
a675cc6677 cmd/gitannex: Describe new rclonelayout config in help
Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
ad605ee356 cmd/gitannex: Drop chdir from e2e tests
Now that e2e tests are running in parallel, undoing the chdir to the
temp dir was causing flaky failures on cleanup. We don't need it anyway
because the worrisome subcommands have their working directory
controlled by `runInRepo()`.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
4ab235c06c cmd/gitannex: Repeat TestEndToEnd for all layout modes
I'm hopeful that running these in parallel will not impact CI runtime
very much, but that likely depends on the number of CPU cores and
whether the tmp filesystem is backed by memory vs a physical disk.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
9a2b85d71c cmd/gitannex: Refactor e2e tests, add layout compat tests
TestEndToEndRepoLayoutCompat exercises git-annex-remote-rclone-builtin
and git-annex-remote-rclone on the same rclone remote to ensure they are
compatible. It repeats the same test for all known layout modes.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
29b58dd4c5 cmd/gitannex: Add support for different layouts
This commit adds support for the same repo layouts supported by
git-annex-remote-rclone. This should enable git-annex users with remotes
of type "rclone" to switch to a "rclone-builtin" without needing to
retransfer content.

Issue #7625
2024-05-13 18:44:31 +01:00
Dan McArdle
36ad4eb145 cmd/gitannex: Simplify messageParser's finalParameter() func
Issue #7625
2024-05-13 18:44:31 +01:00
nielash
61ab519791 chunker: fix finalizer already set error
Before this change, cache.PinUntilFinalized was called twice if the root pointed
to a composite multi-chunk file without metadata, resulting in a fatal "finalizer
already set" error. This change fixes the issue.
2024-05-13 18:33:54 +01:00
nielash
678941afc1 mailru: use --tpslimit 10 on bisync tests
see https://github.com/rclone/rclone/issues/7768#issuecomment-2060888980
2024-05-13 18:33:11 +01:00
nielash
b153254b3a bisync: ignore "Implicitly create directory" messages on tests 2024-05-13 18:32:55 +01:00
nielash
17cd7a9496 quatrix: fix f.String() not including subpath 2024-05-13 18:32:41 +01:00
Nick Craig-Wood
0735f44f91 operations: fix lsjson --encrypted when using --crypt-XXX parameters
Before this change an `rclone lsjson --encrypted` command where
additional `--crypt-` parameters were supplied on the command line:

    rclone lsjson --crypt-description XXX --encrypted secret:

Produced an error like this:

    Failed to lsjson: ListJSON failed to load config for crypt remote: config name contains invalid characters...

This was due to an incorrect lookup of the crypt config to create the
encrypted mapping.

Fixes #7833
2024-05-13 17:59:58 +01:00
Nick Craig-Wood
04c69959b8 Add Sunny to contributors 2024-05-13 17:59:58 +01:00
Nick Craig-Wood
25cc8c927a Add Michael Terry to contributors 2024-05-13 17:59:58 +01:00
Sunny
6356b51b33
serve http: added content-length header when html directory is served 2024-05-13 17:24:54 +01:00
albertony
1890608f55 docs: minor formatting improvement 2024-05-13 12:50:22 +02:00
Michael Terry
cd76fd9219 oauthutil: clear client secret if client ID is set
When an external OAuth flow is being used (i.e. a client ID and an
OAuth token are set in the config), a client secret should not be set.
If one is, the server may reject a token refresh attempt.

But there's no way to clear out a backend's default client secret via
configuration, since empty-string config values are ignored.

So instead, when a client ID is set, we should clear out any default
client secret, since it wouldn't apply anyway.
2024-05-11 16:03:32 +01:00
Nick Craig-Wood
5b8cdaff39 drive: fix description being overwritten on server side moves
Before this the description for files got overwritten on server side
moves.

This change stops rclone setting the description to the leaf name
completely.

See: https://forum.rclone.org/t/file-descriptions-in-google-drive-not-shown-after-dedupe/45510
Fixes: #7770
2024-05-11 15:59:40 +01:00
albertony
f2f559230c bump golangci/golangci-lint-action from 4 to 6
Version 5 removed go cache management, and therefore also the options
skip-pkg-cache and skip-build-cache, because the cache related to go itself is
already handled by actions/setup-go, and now it only caches golangci-lint
analysis. Since we run multiple golangci-lint-action steps for different goos
values, we want to cache the package cache, build cache and golangci-lint
results from all of them, so this commit changes the approach by disabling
all built-in caching and introducing a separate cache step to handle it
properly.
2024-05-11 14:43:29 +02:00
nielash
e0b38cc9ac onedrive: add support for group permissions
This change adds support for "group" identities, and SharePoint variants
"siteUser" and "siteGroup". It also adds support for using any identity type
(including "application" and "device") as a recipient source when adding
permissions.
2024-05-10 16:25:08 +01:00
nielash
68dc79eddd onedrive: fix references to deprecated permissions properties
Before this change, metadata permissions used the `grantedTo` and
`grantedToIdentities` properties, which are deprecated on OneDrive Business in
favor of `grantedToV2` and `grantedToIdentitiesV2`. After this change, OneDrive
Business uses the new V2 versions, while OneDrive Personal still uses the
originals, as the V2 versions are not available for OneDrive Personal. (see
https://learn.microsoft.com/en-us/answers/questions/1079737/inconsistency-between-grantedtov2-and-grantedto-re)
2024-05-10 16:25:08 +01:00
nielash
76cea0c704 onedrive: skip writing permissions with 'owner' role
The 'owner' role is an implicit role that can't be removed, so don't try to.
2024-05-10 16:25:08 +01:00
Nick Craig-Wood
41d5d8b88a build: add issue label notification machinery 2024-05-10 16:15:28 +01:00
Nick Craig-Wood
aa2746d0de union: fix deleting dirs when all remotes can't have empty dirs 2024-05-10 11:15:12 +01:00
wiserain
b2f6aac754
pikpak: improve getFile() usage
Previously, `getFile()` was called indiscriminately during uploads, moves,
and download link generation. For users with a download limit, this could
cause subsequent operations like uploads and moves to fail.
This PR optimizes the use of getFile() by only calling it
when it's strictly necessary.
2024-05-08 09:09:56 +09:00
Eric Wolf
a0dacf4930
docs: exit code 9 requires --error-on-no-transfer
Updated exit code 9 definition to include that it requires the use of the "--error-on-no-transfer" flag with a link to that section.
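
For example (remotes illustrative):

    rclone copy --error-on-no-transfer src: dst:; echo $?  # 9 if nothing was transferred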
2024-05-07 09:18:05 +02:00
IoT Maestro
c5ff5afc21 ulozto: Fix handling of root paths with leading / trailing slashes.
This fixes #7796
2024-05-04 20:04:30 +01:00
Nick Craig-Wood
bd8523f208 fstest: reduce precision of directory time checks on CI
For unknown reasons the precision of modification times of directories
on the CI is > 15ms compared to files which are 100ns. The tests
work fine when run in VirtualBox though, so I conjecture this is
something to do with the file system used there.
2024-05-03 12:29:18 +01:00
nielash
0bfd70c405 sync: remove now superfluous copyEmptyDirectories function 2024-05-03 12:29:18 +01:00
Nick Craig-Wood
47735d8fe1 sync: fix failed to update directory timestamp or metadata: directory not found
See: https://forum.rclone.org/t/empty-dirs-not-wanted/45059/14
Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
617534112b sync: fix directory modification times not being set
Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
271ec43189 sync: don't need to sync directories if they haven't been modified
Before this change we synced directories regardless of whether the source
directory existed. It is irrelevant whether the source directory
exists or not; what we need to know is whether the directory has been
modified.

Co-authored-by: nielash <nielronash@gmail.com>
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
10eb4742dd sync: fix creation of empty directories when --create-empty-src-dirs=false
In v1.66.0 the changes to enable metadata preservation on directories
introduced a regression, namely that empty directories were created
regardless of the state of the --create-empty-src-dirs flag.

This patch fixes the problem by letting the normal rclone directory
creation create the directories and fixing up their timestamps and
metadata afterwards if --create-empty-src-dirs=false.

Fixes #7689
See: https://forum.rclone.org/t/empty-dirs-not-wanted/45059/
See: https://forum.rclone.org/t/how-to-ignore-empty-directories-when-uploading-from-windows/45057/
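
For example (paths illustrative), empty source directories are now only
replicated when the flag is given:

    rclone sync --create-empty-src-dirs src: dst: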
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
2a2ec06ec1 sync: fix management of empty directories to make it more accurate
Before this change we used the same datastructure for managing empty
directories for both --create-empty-src-dirs in sync/copy/move and for
the --delete-empty-src-dirs flag in move.

These two uses are subtly incompatible and this change uses a separate
datastructure for both uses. This makes it more accurate and easier to
understand.
2024-05-03 12:29:18 +01:00
Nick Craig-Wood
7237b142fa drive: be more explicit in debug when setting permissions fail 2024-05-02 18:10:16 +01:00
Nick Craig-Wood
254e514330 onedrive,drive: make errors setting permissions into no retry errors 2024-04-30 09:34:33 +01:00
Nick Craig-Wood
9fa610088f docs: add Backblaze as a sponsor 2024-04-29 12:43:13 +01:00
Nick Craig-Wood
d2fa45acf3 storj: update bio on request 2024-04-29 11:49:13 +01:00
albertony
a86eb7ad50 docs: note that newer linux kernel version is required for ARMv5 2024-04-27 22:21:40 +02:00
Nick Craig-Wood
1fef8e667c build: migrate bucket storage for the project to new provider
This changes

- beta.rclone.org
- www.rclone.org
- pub.rclone.org
- downloads.rclone.org
2024-04-25 17:04:18 +01:00
Nick Craig-Wood
a5daef3892 Add hidewrong to contributors 2024-04-25 17:04:18 +01:00
Nick Craig-Wood
5bf70c68f1 swift: implement --swift-use-segments-container to allow >5G files on Blomp
This switches between storing chunks in a separate container suffixed
with `_segments` (the default) and a directory in the root
(`.file-segments`).

By default the `.file-segments` mode will be auto selected if
`auth_url`s that require it are detected.

If the `.file-segments` mode is in use then rclone will omit that
directory from listings.

See: https://forum.rclone.org/t/blomp-unable-to-upload-5gb-files/42498/
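
For example, assuming the option is exposed as
`--swift-use-segments-container` (remote and path illustrative), the
`.file-segments` mode can be forced explicitly:

    rclone copy --swift-use-segments-container=false bigfile swift:container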
2024-04-25 11:14:14 +01:00
Nick Craig-Wood
8a18c29835 random: update Password docs 2024-04-25 11:14:14 +01:00
albertony
29ed17d19c build: add linting for different values of GOOS 2024-04-22 19:29:12 +02:00
albertony
7ee22fcdf9 build: fix linting issues reported by running golangci-lint with different GOOS 2024-04-22 19:29:12 +02:00
albertony
159e274921 build: fix linting issues reported by golangci-lint on windows 2024-04-22 19:29:12 +02:00
albertony
fdc56b21c1 log: fix lint issue SA1019: syscall.Syscall has been deprecated since Go 1.18: Use SyscallN instead. 2024-04-22 19:29:12 +02:00
albertony
1ca825b6f0 build: run go mod tidy 2024-04-22 19:29:12 +02:00
Kyle Reynolds
d36bc8833c backend http: Adding no-escape flag for option to not escape URL metacharacters in path names - fixes issue #7637 2024-04-22 17:57:09 +01:00
nielash
8977655869 bisync: avoid starting tests we don't have time to finish
To prevent all-or-nothing retries, for tests that take longer (in total) than the
-timeout but less than the -timeout * -maxtries

https://github.com/rclone/rclone/pull/7743#issuecomment-2057250848
2024-04-19 22:29:41 +01:00
nielash
58e09e1cd4 bisync: skip test if config string contains a space 2024-04-19 22:29:41 +01:00
Kyle Reynolds
64734dfe41 fs accounting: Add deleted files total size to status summary line - fixes issue #7190 2024-04-18 22:09:23 +01:00
albertony
68bf6aa584 build: remove build constraint syntax for go 1.16 and older 2024-04-18 16:53:55 +02:00
albertony
db17aaf7cd build: remove separate go module cache step as its done by setup-go 2024-04-18 12:43:26 +02:00
albertony
9531cd2c46 Convert source files with crlf to lf 2024-04-18 11:32:45 +02:00
hidewrong
c09426bcfe
fix spelling 2024-04-17 18:02:44 +02:00
nielash
30517698aa bisync: make session path even shorter on tests
The .lck file filename length needs to be less than 255 bytes (not characters) on
linux, and it was still too long on this test, because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
on remotes with long names, such as TestChunkerChunk3bNoRenameLocal:
2024-04-16 14:45:54 -04:00
Nick Craig-Wood
8dc4c01209 build: make integration tests run better on macOS and Windows
This changes as many of the integration tests as possible so that they
use port forwarding rather than the docker IP directly.

Using the docker IP directly does not work on macOS and Windows as the
docker images are running in a VM rather than a container.

This adds the PORTS.md document to document which port numbers we are
using for which service as they need to be unique.
2024-04-16 10:48:48 +01:00
Nick Craig-Wood
807a7dabaa docs: fix heading anchor 2024-04-16 10:48:48 +01:00
Nick Craig-Wood
416324c047 Add pawsey-kbuckley to contributors 2024-04-16 10:48:48 +01:00
Nick Craig-Wood
524137f78a Add Katia Esposito to contributors 2024-04-16 10:48:48 +01:00
Evan Harris
f4c033a6a6 lsjson: small docs change to clarify options 2024-04-15 17:11:35 +01:00
pawsey-kbuckley
d459fb0cb8
genautocomplete: remove Ubuntu-ism from docs and clarify non-root use 2024-04-15 17:00:43 +01:00
Dave Nicolson
205745313d
docs: fix macOS install from source link 2024-04-15 16:33:41 +01:00
Katia Esposito
79c00879ff ncdu: Do not quit on Esc 2024-04-15 16:18:27 +01:00
Nick Craig-Wood
2cff5514aa fix: test_all re-running too much stuff
This re-works the code which works out which tests need re-running to
be more accurate.
2024-04-15 16:08:27 +01:00
Nick Craig-Wood
88322f3eb2 Add Dave Nicolson to contributors 2024-04-15 16:08:27 +01:00
Nick Craig-Wood
036690c060 Add Butanediol to contributors 2024-04-15 16:08:27 +01:00
Nick Craig-Wood
805584a8dd Add yudrywet to contributors 2024-04-15 16:08:27 +01:00
Dave Nicolson
cc3ae931db
docs: Add left and right padding to prevent icon truncation 2024-04-14 17:51:10 +01:00
Butanediol
0c0d64c316 serve s3: fix Last-Modified header format 2024-04-14 17:49:51 +01:00
yudrywet
50aa677934 chore: fix function names in comment
Signed-off-by: yudrywet <yudeyao@yeah.net>
2024-04-14 14:38:01 +01:00
nielash
51582e36e8 onedrive: set all metadata permissions and return error summary
Before this change when setting permissions from the metadata rclone
would stop on the first error.

This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
2024-04-13 19:57:30 +01:00
Kyle Reynolds
47cbddbd27 fs rc: fixes incorrect Content-Type in HTTP API - fixes #7726 2024-04-13 19:56:34 +01:00
nielash
5323a21898 operations: fix move when dst is nil and fdst is case-insensitive
Before this change, the MoveCaseInsensitive logic in operations.move made the
assumption that dst != nil && remote != "". After this change, it should work
correctly when either one is present without the other.
2024-04-13 19:28:09 +01:00
Nick Craig-Wood
f2e693f722 sync: fix case normalisation on s3
Before this change when the sync routine attempted to normalise a
case, say from "FiLe.txt" to "file.txt" this caused a 400 Bad Request
error:

> This copy request is illegal because it is trying to copy an object
> to itself without changing the object's metadata, storage class,
> website redirect location or encryption attributes.

This was caused by passing the same object as the source and
destination to the move routine, whereas the destination object had a
different case and didn't exist, so should have been passed as nil.

See: https://github.com/rclone/rclone/pull/7743#discussion_r1557345906
2024-04-13 19:28:09 +01:00
Nick Craig-Wood
93955b755f operations: fix retries downloading too much data with certain backends
Before this fix if more than one retry happened on a file that rclone
had opened for read with a backend that uses fs.FixRangeOption then
rclone would read too much data and the transfer would fail.

Backends affected:

- azureblob, azurefiles, b2, box, dropbox, fichier, filefabric
- googlecloudstorage, hidrive, imagekit, jottacloud, koofr, netstorage
- onedrive, opendrive, oracleobjectstorage, pikpak, premiumizeme
- protondrive, qingstor, quatrix, s3, sharefile, sugarsync, swift
- uptobox, webdav, zoho

This was because rclone was emitting Range requests for the wrong data
range on the second and subsequent retries.

This was caused by fs.FixRangeOption modifying the options and the
reopen code relying on them not being modified.

This fix makes a copy of the fs.FixRangeOption in the reopen code to
fix the problem.

In future it might be best to change fs.FixRangeOption so it returns a
new options slice.

Fixes #7759
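
The defensive-copy pattern, sketched generically (simplified types, not
rclone's actual code):

```
package main

import "fmt"

func main() {
	opts := []string{"Range: bytes=0-"}

	// Copy the options before adjusting them for this attempt, so a
	// subsequent retry still sees the caller's original values.
	attempt := make([]string, len(opts))
	copy(attempt, opts)
	attempt[0] = "Range: bytes=1024-"

	fmt.Println(opts[0])    // unchanged: Range: bytes=0-
	fmt.Println(attempt[0]) // adjusted: Range: bytes=1024-
}
```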
2024-04-13 19:25:15 +01:00
Nick Craig-Wood
a4fc5edc5e operations: add more assertions to ReOpen tests to check seek positions 2024-04-13 19:25:15 +01:00
Nick Craig-Wood
8b73dcb95d Add static-moonlight to contributors 2024-04-13 19:25:15 +01:00
static-moonlight
3ba57cabce doc: add example how to run serve s3 2024-04-13 19:22:26 +01:00
Nick Craig-Wood
c87097109b serve s3: adjust to move of Mikubill/gofakes3 to rclone/gofakes3
This also updates the interface which has gained a ctx parameter in
the mean time.
2024-04-13 18:25:41 +01:00
Nick Craig-Wood
ae76498a38 Add guangwu to contributors 2024-04-13 18:25:41 +01:00
Nick Craig-Wood
10f730c49f Add jakzoe to contributors 2024-04-13 18:25:41 +01:00
albertony
88d96d133b Add go mod and sum to gitattributes for consistent line endings 2024-04-13 11:16:42 +02:00
nielash
2c7680050b bisync: rename extended_char_paths test
The .lck file filename length needs to be less than 255 bytes (not characters) on
linux, and it was still too long on this test, because of the
subdir=測試_Русский_{spc}_{spc}_ě_áñ
2024-04-11 16:27:20 +01:00
nielash
fe6c9aa4da chunker: fix case-insensitive comparison on local without metadata
Before this change, chunker would erroneously consider two different paths to be
equal if, due to special characters, they normalized to equal-folding strings in
Standard Encoding, but not otherwise. This caused base objects to get moved when
they should not have been. This change fixes the issue, which was discovered on
the bisync integration tests.

Ideally it should also be fixed when the base Fs is non-local, but there's not an
easy way at the moment to reference the wrapped Fs's encoding, at least without
breaking encapsulation.
2024-04-11 16:27:20 +01:00
nielash
8524afa9ce chunker: fix NewFs when root points to composite multi-chunk file without metadata
Before this change, calling NewFs on a composite multi-chunk file with
--chunker-meta-format "none"
would fail due to f.base pointing to the wrong Fs. This change fixes the issue,
which was discovered on the bisync integration tests.
2024-04-11 16:27:20 +01:00
nielash
21f3ba13f6 bisync: more fixes for integration tests
-use fs.ConfigStringFull instead of bilib.StripHexString to properly reverse
connection string remotes
2024-04-11 16:27:20 +01:00
nielash
04128f97ee bisync: fix endless loop if lockfile decoder errors
Before this change, the decoder looked only for `io.EOF`, and if any other error
was returned, it could cause an infinite loop. This change fixes the issue by
breaking for any non-nil error.
2024-04-10 16:33:05 +01:00
nielash
bef9fd0bc3 bisync: make tempDir path shorter
to avoid exceeding linux filename length limits
2024-04-10 16:33:05 +01:00
guangwu
2ab2ec29f9 fix: close cpu profile
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-04-09 11:23:55 +01:00
jakzoe
8817ee25ae
docs: fix typo in filtering.md
Fix typo: moved misplaced double quotation mark.
2024-04-09 11:16:59 +01:00
Nick Craig-Wood
46b3854330 drive: set all metadata permissions and return error summary
Before this change when setting permissions from the metadata rclone
would stop on the first error.

This change causes rclone to attempt to set all the permissions and
return an error summary at the end.
2024-04-08 17:23:22 +01:00
Nick Craig-Wood
efbaca3a95 crypt: fix max suggested length of filenames 2024-04-08 17:23:22 +01:00
nielash
f995ece64d bisync: fix io.PipeWriter not getting closed on tests 2024-04-07 21:55:26 -04:00
jumbi77
68c2ba74dd pikpak: fix a typo in a comment
Last still open fix from PR #6970
2024-04-06 11:27:24 +01:00
albertony
e739ee2c27 docs: ensure empty line between text and a following heading 2024-04-05 21:39:44 +02:00
Dan McArdle
05e5712bc4 .github/workflows: Upgrade deprecated macos-11 to macos-latest
See https://github.com/actions/runner-images
2024-04-05 18:01:39 +01:00
Dan McArdle
a2e38e9883 cmd/gitannex: Downgrade to protocol version 1
This enables compatibility with versions of git-annex currently
available on GitHub's "ubuntu-latest" image, aka Ubuntu 22.04 Jammy.
Currently, Jammy is shipping git-annex 8.20210223-2ubuntu2.
https://packages.ubuntu.com/jammy/git-annex

Issue #7625
2024-04-05 18:01:39 +01:00
Dan McArdle
ef42c32cc6 cmd/gitannex: Replace e2e test script with Go test
This commit implements milestone 2.1 for the gitannex subcommand:
https://github.com/rclone/rclone/issues/7625#issuecomment-1951403856

This rewrite makes a few improvements over the old shell script:

(1) It no longer uses the system's rclone.conf. Now, it writes the
    rclone.conf file in an ephemeral directory.

(2) It no longer makes any assumptions about the contents of /tmp.

However, it now assumes that an rclone built from the HEAD commit is on
the PATH. It makes a best-effort attempt to verify this assumption, but
I'm not sure it's bulletproof.

I'm hoping that writing this in Go will enable more cross-platform
support in the future, but for now we're still restricted to Unixy
systems due to reliance on the HOME environment variable.

Issue #7625
2024-04-05 18:01:39 +01:00
Nick Craig-Wood
6a5c0065ef docs: clarify option syntax
See: https://forum.rclone.org/t/seeming-documentation-problem-rclones-syntax-a-problem-with-the-categories-on-this-forum/45395/
2024-04-05 15:59:32 +01:00
Nick Craig-Wood
6da27db844 build: fix CVE-2023-45288 by upgrading golang.org/x/net
See: https://pkg.go.dev/vuln/GO-2024-2687
2024-04-05 15:59:32 +01:00
Nick Craig-Wood
c0497d46d5 ulozto: remove use of github.com/pkg/errors 2024-04-05 15:59:32 +01:00
Nick Craig-Wood
df3df06d2e Add Pieter van Oostrum to contributors 2024-04-05 15:59:32 +01:00
Pieter van Oostrum
1e3ab7acfd docs: fix MANUAL formatting problems
1) Missing closing code backticks (```) in s3.md causes formatting problems
2) Pandoc requires blank lines before ATX headings
2024-04-05 10:41:47 +02:00
Kyle Reynolds
339bc1d1a3 backend koofr: remove trailing bracket - fixes #7600 2024-04-04 20:03:26 +01:00
nielash
71069ed5c1 webdav: fix SetModTime erasing checksums on owncloud and nextcloud
Before this change, calling SetModTime on owncloud and nextcloud would
inadvertently erase the object's stored hashes. This change fixes the issue,
which was discovered by the bisync integration tests.
2024-04-03 16:43:11 -04:00
nielash
75df38f6ee bisync: use fstest.RandomRemote on tests
- use fstest.RandomRemote to create the root level directory
- fix parsing of canonical name for connection string remotes
2024-04-03 16:43:11 -04:00
nielash
ce4064aabf hdfs: fix f.String() not including subpath 2024-04-03 16:43:11 -04:00
Nick Craig-Wood
7c9f1b8917 local: disable unreliable test
In this commit we merged an unreliable test

e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst

It is a good idea but very hard to implement so that it always works.

Hence this disables it for the moment.
2024-04-02 18:48:34 +01:00
Nick Craig-Wood
e71c95a554 docs: update warp sponsorship 2024-04-02 16:32:24 +01:00
nielash
e053c8a1c0 copy: fix nil pointer dereference when corrupted on transfer with nil dst 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
c2d96113ac Add Erisa A to contributors 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
5ff961d2ea Add yoelvini to contributors 2024-04-02 15:34:58 +01:00
Nick Craig-Wood
eedeaf7cbb Add Alexandre Lavigne to contributors 2024-04-02 15:34:58 +01:00
Kyle Reynolds
f6e716543a
test info: improve cleanup of temp files - fixes #7209
Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
2024-04-02 15:03:33 +01:00
nielash
998df26ceb onedrive: fix --metadata-mapper called twice if writing permissions
Before this change, the --metadata-mapper was called twice if an object was
uploaded via multipart upload with --metadata and --onedrive-metadata-permissions
"write" or "read,write". This change fixes the issue.
2024-04-02 14:57:43 +01:00
Pat Patterson
93c960df59 b2: Add tests for new cleanup and cleanup-hidden backend commands. 2024-04-02 12:36:43 +01:00
Nikita Shoshin
92368f6d2b rcserver: set ModTime for dirs and files served by --rc-serve 2024-04-02 12:10:45 +01:00
Erisa A
08bf5228a7 docs: Add R2 note about no_check_bucket 2024-04-02 12:08:56 +01:00
yoelvini
76f3eb3ed2
s3: add new AWS region il-central-1 Tel Aviv 2024-04-01 18:17:16 +01:00
nielash
0d43da7655 bisync: more fixes for integration tests
- fix parsing of connection string remotes (comma in name)
- skip remotes that can't upload empty files
- Mkdir the test case subdir before cache.Get-ing it
	(only storj seems to need this... bug?)
2024-03-31 17:56:28 -04:00
Alexandre Lavigne
f9429de807
s3: update Scaleway's configuration options - fixes #7507
In order to handle special characters, the rclone configuration must
specify `list_url_encode`.
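
A minimal config sketch (remote name is illustrative):

    [scaleway]
    type = s3
    provider = Scaleway
    list_url_encode = true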
2024-03-31 17:42:20 +01:00
nielash
bce80be2f8 bisync: several fixes for integration tests
Several fixes for the bisync integration tests:

- use unique initdir and datadir for each subtest so concurrent tests don't interfere with each other
- remove dots from dir names for bucket backends
- ignore messages specific to cache backend
- skip fix-case tests on backends that can't fix-case
- don't expect "{hashtype} differ" messages on backends with no hash types
- print timestamps in UTC local

More fixes will still be needed, but this should hopefully fix a good portion of them.
2024-03-30 13:39:44 -04:00
Nick Craig-Wood
679f4fdfa9 ulozto: make password config item be obscured 2024-03-30 17:32:32 +00:00
Nick Craig-Wood
7c828ffe09 operations: fix very long file names when using copy with --partial
Before this change we were using the wrong variable to read the
filename length from. This meant that very long filenames were not
being truncated as intended.

This problem was spotted by Wang Zhiwei on the forum in a code review.

See: https://forum.rclone.org/t/why-use-c-remoteforcopy-instead-of-c-remote-to-check-length-in-copy-operation/45099
2024-03-30 09:06:58 +00:00
Nick Craig-Wood
1cf1f4fab2 Add Warrentheo to contributors 2024-03-30 09:06:58 +00:00
Nick Craig-Wood
853e802d8d Add Alex Garel to contributors 2024-03-30 09:06:30 +00:00
Warrentheo
3052d026ce
onedrive: fix typo 2024-03-30 08:22:32 +00:00
albertony
9c9487365f config: show more user friendly names of custom types in ui 2024-03-29 21:20:19 +01:00
albertony
0f6c10ca02 config: add ending period on description option help text 2024-03-29 17:11:21 +01:00
Alex Garel
a075654f20
docs: add an indication in case of recursive shortcuts in drive
Help people handle an issue which might be difficult to understand
otherwise.

If you have recursive shortcuts (pointing to a parent folder) in a
Google Drive, rclone recurses infinitely, never ending and filling the
disk, even if you ask not to fetch shortcut contents.
2024-03-29 12:35:47 +01:00
IoT Maestro
571d20d126 ulozto: implement Mover and DirMover interfaces. 2024-03-29 09:09:12 +00:00
IoT Maestro
c9ce384ec7 ulozto: revert the temporary file size limitations 2024-03-29 09:07:44 +00:00
IoT Maestro
748c43d525 ulozto: set Content-Length header if the file size is known. 2024-03-29 09:07:44 +00:00
Nick Craig-Wood
7c20ec3772 local: fix and update -l docs
See: https://forum.rclone.org/t/rclone-l-cloning-symlink-file-problem/45286/
2024-03-28 11:56:22 +00:00
Nick Craig-Wood
42914bc0b0 serve webdav: fix webdav with --baseurl under Windows
Windows webdav does an OPTIONS request on the root even when given a
path and if we return 404 here then Windows refuses to use the path.

This patch allows OPTIONS requests only on the root to fix this.

This affects all the HTTP servers.
2024-03-28 10:06:04 +00:00
nielash
f62e7b5b30 memory: fix incorrect list entries when rooted at subdirectory
Before this change, List would return incorrect directory paths (relative to the
wrong root) if the Fs root pointed to a subdirectory. For example, listing dir
"a/b/c/d" of remote :memory: would work correctly, but listing dir "c/d" of
remote :memory:a/b would not, and would result in "Entry doesn't belong in
directory %q (contains subdir)" errors.

This change fixes the issue and adds a test to detect any other backends that
might have the same issue.
2024-03-27 11:43:26 -04:00
nielash
2b0a25a64d memory: fix deadlock in operations.Purge
Before this change, the Memory backend had the potential to deadlock under
certain conditions, if the ListR callback required locking the b.mu mutex. This
was the case with operations.Purge, because Memory has no Purge method, and the
fallback option does:

	err = DeleteFiles(ctx, listToChan(ctx, f, dir))

which potentially starts removing objects before the listing has completed.

This change fixes the issue by batching all the entries before calling the
callback on them.
2024-03-27 11:42:49 -04:00
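The batching approach can be sketched roughly as follows (the `Fs` struct and `entries` map here are hypothetical stand-ins, not the real memory backend internals):

```
package memory

import (
	"context"
	"sync"

	"github.com/rclone/rclone/fs"
)

// hypothetical minimal backend state, for illustration only
type Fs struct {
	mu      sync.Mutex
	entries map[string]fs.DirEntries
}

// ListR gathers all entries while holding the lock, then invokes the
// callback only after the lock has been released. A callback that
// itself needs f.mu (as the delete path used by operations.Purge does)
// can therefore no longer deadlock.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) error {
	f.mu.Lock()
	batch := append(fs.DirEntries(nil), f.entries[dir]...)
	f.mu.Unlock()
	return callback(batch)
}
```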
nielash
2bebbfaded bisync: add to integration tests - fixes #7665
This change officially adds bisync to the nightly integration tests for all
backends.

This will be part of giving us the confidence to take bisync out of beta.

A number of fixes have been added to account for features which can differ on
different backends -- for example, hash types / modtime support, empty
directories, unicode normalization, and unimportant differences in log output.
We will likely find that more of these are needed once we start running these
with the full set of remotes.

Additionally, bisync's extremely sensitive tests revealed a few bugs in other
backends that weren't previously covered by other tests. Fixes for those issues
have been submitted on the following separate PRs (and bisync test failures will
be expected until they are merged):

- #7670 memory: fix deadlock in operations.Purge
- #7688 memory: fix incorrect list entries when rooted at subdirectory
- #7690 memory: fix dst mutating src after server-side copy
- #7692 dropbox: fix chunked uploads when size <= chunkSize

Relatedly, workarounds have been put in place for the following backend
limitations that are unsolvable for the time being:

- #3262 drive is sometimes aware of trashed files/folders when it shouldn't be
- #6199 dropbox can't handle emojis and certain other characters
- #4590 onedrive API has longstanding bug for conflictBehavior=replace in
	server-side copy/move
2024-03-27 10:50:14 -04:00
nielash
fecce67ac6 memory: fix dst mutating src after server-side copy
Before this change, the Memory backend's Copy method created a dst object that
referenced the src's objectData by pointer instead of making a copy. While this
minimized memory usage, an unintended consequence was that subsequently mutating
the src (such as changing the modtime) would inadvertently also mutate the dst,
and vice versa.

This change fixes the issue and adds a test.
2024-03-26 20:40:06 -04:00
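A minimal sketch of the fix, with `objectData` as a hypothetical stand-in for the backend's internal payload type:

```
import "time"

// hypothetical object payload, for illustration only
type objectData struct {
	data    []byte
	modTime time.Time
}

// deepCopy copies both the struct and its byte slice, so that dst and
// src no longer share mutable state after a server-side Copy
func deepCopy(src *objectData) *objectData {
	dst := *src                                 // copies scalar fields such as modTime
	dst.data = append([]byte(nil), src.data...) // copies the contents too
	return &dst
}
```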
Nick Craig-Wood
a67688dcc7 mount,cmount,mount2: add --direct-io flag to force uncached access
This change adds the --direct-io flag to the mount. This means the
page cache is completely bypassed for reads and writes. No read-ahead
takes place. Shared mmap is disabled.

This is useful to accurately read files which may change length
frequently on the source.
2024-03-26 17:32:11 +00:00
Nick Craig-Wood
f3f743c3f9 vfs: fix download loop when file size shrunk
Before this change, if a file shrunk in size on the remote then rclone
could get into a loop trying to download the file forever.

The symptom was repeating errors like this:

    vfs cache: restart download failed: failed to start downloader: failed to open downloader: vfs reader: failed to open source file: invalid seek position

The fix checks the file size in various places and makes sure that
we aren't trying to download too much data.

This was a problem with backends (like s3) which update the size of
the object on Open to the actual size of the object.
2024-03-26 17:32:10 +00:00
Nick Craig-Wood
ac6ba11d22 local: add --local-time-type to use mtime/atime/btime/ctime as the time
Fixes #7484
2024-03-26 11:58:28 +00:00
Nick Craig-Wood
854a36c4ab Add psychopatt to contributors 2024-03-26 11:58:28 +00:00
psychopatt
522ab1de6d
docs: remove email from authors 2024-03-26 11:45:22 +00:00
Nick Craig-Wood
215ae17272 rc: fix stats groups being ignored in operations/check
Before this change operations/check was using a background context for
the checking which was causing the stats group to be ignored.

This fixes the problem and also a similar problem in backend/command

See: https://forum.rclone.org/t/operations-check-only-reports-to-global-stats-not-per-job-group/45254
2024-03-26 11:23:40 +00:00
Nick Craig-Wood
efed6b01d2 drive: fix server side copy with metadata from my drive to shared drive
Before this change trying to server side copy an object from a My
Drive to a shared drive using --metadata caused this error:

    Sharing restrictions cannot be set on a shared drive item., teamDrivesSharingRestrictionNotAllowed

This was because we were setting the "writers-can-share" metadata
which isn't allowed on shared drives
2024-03-26 11:16:22 +00:00
Nick Craig-Wood
d11fe9779e drive: stop sending notification emails when setting permissions 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
f167846fb9 Add iotmaestro to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
1f4b433ace Add Vitaly to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
4d09320b2b Add hoyho to contributors 2024-03-26 11:11:18 +00:00
Nick Craig-Wood
af313d66d5 Add Lewis Hook to contributors 2024-03-26 11:11:18 +00:00
iotmaestro
4b5c10f72e
Add a new backend for uloz.to
Note that this temporarily skips uploads of files over 2.5 GB.

See https://github.com/rclone/rclone/pull/7552#issuecomment-1956316492
for details.
2024-03-26 09:46:47 +00:00
Dan McArdle
dfc329c036 cmd/gitannex: Add the gitannex subcommand
This commit adds a new subcommand named "gitannex", aka
"git-annex-remote-rclone-builtin" when invoked via a symlink.

This accomplishes milestone 1 from issue #7625: "minimal support for the
external special remote protocol".

Issue #7625
2024-03-26 09:43:43 +00:00
gvitali
d9601c78b1 linkbox: fix list paging and optimize synchronization.
1. The maximum number of objects on a page should be no more than
1000. Currently it is 1024; because of this the listing always ends on
the first page with an "object not found" error, rclone tries to
upload the file again, Linkbox stores it under the name "filename(N)",
and so the storage fills up indefinitely.

2. A hyphen is added to the list of allowed characters, which makes
queries more efficient (no need to load all files in a directory for
an entity with a hyphen).
2024-03-24 12:05:58 +00:00
Vitaly
4258ad705e linkbox: fix working with names longer than 8-25 Unicode chars.
The LinkBox API does not allow searching by more than 25 Unicode
characters in the name; for this reason it is currently impossible to
work with files and folders whose names are longer than 8 Unicode
chars (when encoded in base32).

This fix queries all files in a directory for long names and checks
their names one by one, thus solving the issue.

Fixes #7542
2024-03-24 12:05:58 +00:00
Pat Patterson
070cff8a65 b2: Add new cleanup and cleanup-hidden backend commands. 2024-03-23 18:07:02 +00:00
hoyho
a24aeba495 s3: validate CopyCutoff size before copy
Signed-off-by: hoyho <luohaihao@gmail.com>
2024-03-23 15:09:38 +00:00
Lewis Hook
bf494d48d6 Improve error messages when objects have been corrupted on transfer - fixes #5268 2024-03-23 12:35:35 +00:00
Nick Craig-Wood
aee8d909b3 onedrive: fix "unauthenticated: Unauthenticated" errors when downloading
Before this change we would pass the Authorization header on to the
download server. This is allowed according to the docs, but on some
onedrive servers this sometimes causes an error with the text
"unauthenticated: Unauthenticated".

This is a similar fix to

dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading

See: https://forum.rclone.org/t/cryptcheck-on-encrypted-onedrive-personal-failed-with-unauthenticated-error/44581/
2024-03-23 12:08:35 +00:00
Nick Craig-Wood
48262849df lib/rest: Add Client.Do function to call http.Client.Do 2024-03-23 12:08:23 +00:00
Nick Craig-Wood
09cc8179cc lib/rest: add CheckRedirect function for redirect management 2024-03-23 12:08:23 +00:00
Nick Craig-Wood
ff855fe1fb operations: Fix "optional feature not implemented" error with a crypted sftp
Before this change operations.SetDirModTime could return the error
"optional feature not implemented" when attempting to set modification
times on crypted sftp backends.

This was because crypt wraps the directories using fs.DirWrapper but
these return fs.ErrorNotImplemented for the SetModTime method.

The fix is to recognise that error and fall back to using the
DirSetModTime method on the backend which does work.

Fixes #7673
2024-03-22 17:36:04 +00:00
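The fallback can be sketched like this (`setModTimeViaDirectory` is a hypothetical name for the directory-level call; the `Features().DirSetModTime` signature assumes the optional feature introduced in this release):

```
// try the Directory's own SetModTime first; crypt's DirWrapper
// returns fs.ErrorNotImplemented here
err := setModTimeViaDirectory(ctx, dir, modTime)
if errors.Is(err, fs.ErrorNotImplemented) {
	// fall back to the backend-level method, which does work
	if do := f.Features().DirSetModTime; do != nil {
		err = do(ctx, dir.Remote(), modTime)
	}
}
```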
Nick Craig-Wood
5ee89bdcf8 Add Kyle Reynolds to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
f7bf28806c Add YukiUnHappy to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
df6c573c99 Add Gachoud Philippe to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
b7c06e5eb9 Add racerole to contributors 2024-03-22 17:36:04 +00:00
Nick Craig-Wood
e0e9ac50d3 Add John-Paul Smith to contributors 2024-03-22 17:36:04 +00:00
YukiUnHappy
f68d962c86 onedrive: make server-side copy work in more scenarios 2024-03-22 17:29:38 +00:00
kapitainsky
6232cc123f docs: Proton Drive, correct typo
2024-03-22 16:36:21 +00:00
Gachoud Philippe
a33576af7d
docs: drive: corrected relative path of scopes to absolute
and added some links to the reference
2024-03-22 12:26:34 +00:00
kapitainsky
2591703494 docs: clarify shell_type = none and ssh = behaviour
Discussed on the forum:

https://forum.rclone.org/t/can-rclone-be-made-to-work-with-an-sftp-server-confining-users-to-an-sftp-jail-and-no-login/44931
2024-03-21 15:08:56 +01:00
Kyle Reynolds
7803b4ed6c
fs: improve JSON Unmarshalling for Duration
Enhanced the UnmarshalJSON method for the Duration type to correctly
handle the special string 'off' and ensure large integers are parsed
accurately without floating-point rounding errors. This resolves
issues with setting and removing the MinAge filter through the rclone
rc command.

Fixes #3783

Co-authored-by: Kyle Reynolds <kyle.reynolds@bridgerphotonics.com>
2024-03-13 18:08:59 +00:00
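A sketch of the approach (not the exact rclone implementation; `ParseDuration` and `DurationOff` are the existing helpers in the fs package, and `strconv`/`encoding/json` are assumed imported):

```
func (d *Duration) UnmarshalJSON(in []byte) error {
	// handle the special string "off"
	if string(in) == `"off"` {
		*d = DurationOff
		return nil
	}
	// parse bare integers exactly as int64, avoiding the float64
	// round-trip that loses precision for very large values
	if i, err := strconv.ParseInt(string(in), 10, 64); err == nil {
		*d = Duration(i)
		return nil
	}
	// otherwise fall back to normal duration string parsing
	var s string
	if err := json.Unmarshal(in, &s); err != nil {
		return err
	}
	parsed, err := ParseDuration(s)
	if err != nil {
		return err
	}
	*d = Duration(parsed)
	return nil
}
```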
racerole
00fb847662
docs: remove repeated words 2024-03-13 17:12:39 +00:00
Thomas Müller
c7bfadd10a owncloud: add config owncloud_exclude_mounts which allows excluding mounted folders when listing remote resources 2024-03-13 17:09:10 +00:00
John-Paul Smith
ca903b9872
drive: backend query command
This command executes a list query in Google Drive’s native query
language and returns a JSON dump of matches. It’s useful for locating
files quickly in folders with a large number of files, where rclone’s
normal list command is slow due to client-side filtering.
2024-03-11 20:16:13 +00:00
Nick Craig-Wood
b7783f75a4 Start v1.67.0-DEV development 2024-03-10 12:14:00 +00:00
Nick Craig-Wood
b6013a5e68 Version v1.66.0 2024-03-10 11:22:43 +00:00
Nick Craig-Wood
b7422a4fc8 docs: update metadata docs with Move and Copy support 2024-03-09 14:13:18 +00:00
nielash
9b650d3517 hasher: look for cached hash if passed hash unexpectedly blank
Before this change, Hasher did not check whether a "passed hash" (hashtype
natively supported by the wrapped backend) returned from a backend was blank,
and would sometimes return a blank hash to the caller even when a non-blank hash
was already stored in the db. This caused issues with, for example, Google
Drive, which has SHA1 / SHA256 hashes for some files but not others
(https://rclone.org/drive/#sha1-or-sha256-hashes-may-be-missing) and sometimes also
does not have hashes for very recently modified files.

After this change, Hasher will check if the received "passed hash" is
unexpectedly blank, and if so, it will continue to try other enabled methods,
such as retrieving a value from the database, or possibly regenerating it.

https://forum.rclone.org/t/hasher-with-gdrive-backend-does-not-return-sha1-sha256-for-old-files/44680/9?u=nielash
2024-03-09 11:58:02 +00:00
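A rough sketch of the added check (`getHashFromDB` is a hypothetical helper name):

```
// "passed hash" from the wrapped backend
sum, err := o.Object.Hash(ctx, ht)
if err == nil && sum == "" {
	// unexpectedly blank: fall through to the other enabled methods,
	// e.g. a value cached in the db
	sum, err = f.getHashFromDB(ctx, o, ht)
}
```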
nielash
ff0acfb568 hasher: fix error from trying to stop an already-stopped db
Before this change, Hasher would sometimes try to stop a bolt db that was
already stopped, resulting in an error. This change fixes the issue by checking
first whether the db is already stopped.

https://forum.rclone.org/t/hasher-with-gdrive-backend-does-not-return-sha1-sha256-for-old-files/44680/11?u=nielash
2024-03-09 11:58:02 +00:00
Nick Craig-Wood
ac830ddd42 sync: don't sync directory modtimes from backends which don't have directories
Some backends (like s3, swift, gcs, azureblob) don't have directories
(this can be overridden on some using the directory markers feature).

It therefore makes no sense to sync directory times from them as they
will all be a value made up by rclone (--default-time)

We use the feature flag CanHaveEmptyDirectories to mark backends
without real directory support and disable the directory modification
time syncing on those.
2024-03-09 11:28:15 +00:00
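A sketch of the check, assuming the sync layer consults the source's feature flags:

```
// only sync directory modtimes when the source backend has real
// directories, as signalled by CanHaveEmptyDirectories
if !srcFs.Features().CanHaveEmptyDirectories {
	// the times would just be rclone's --default-time, so skip them
	copyDirModTimes = false
}
```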
Nick Craig-Wood
f491efc85d sync: fix integration tests on chunker
The tests added in this commit needed a tweak for chunker

8c69455c37 sync: don't set dir modtimes if already set
2024-03-08 15:04:35 +00:00
Nick Craig-Wood
fcb182efce docs: add current sponsor logos in 2024-03-08 15:04:35 +00:00
nielash
1473de3f04 onedrive: add metadata support
This change adds support for metadata on OneDrive. Metadata (including
permissions) is supported for both files and directories.

OneDrive supports System Metadata (not User Metadata, as of this writing.) Much
of the metadata is read-only, and there are some differences between OneDrive
Personal and Business (see table in OneDrive backend docs for details).

Permissions are also supported, if --onedrive-metadata-permissions is set. The
accepted values for --onedrive-metadata-permissions are read, write, read,write, and
off (the default). write supports adding new permissions, updating the "role" of
existing permissions, and removing permissions. Updating and removing require
the Permission ID to be known, so it is recommended to use read,write instead of
write if you wish to update/remove permissions.

Permissions are read/written in JSON format using the same schema as the
OneDrive API, which differs slightly between OneDrive Personal and Business.
(See OneDrive backend docs for examples.)

To write permissions, pass in a "permissions" metadata key using this same
format. The --metadata-mapper tool can be very helpful for this.

When adding permissions, an email address can be provided in the User.ID or
DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an
ObjectID can be provided in User.ID. At least one valid recipient must be
provided in order to add a permission for a user. Creating a Public Link is also
supported, if Link.Scope is set to "anonymous".

Note that adding a permission can fail if a conflicting permission already
exists for the file/folder.

To update an existing permission, include both the Permission ID and the new
roles to be assigned. roles is the only property that can be changed.

To remove permissions, pass in a blob containing only the permissions you wish
to keep (which can be empty, to remove all.)

Note that both reading and writing permissions requires extra API calls, so if
you don't need to read or write permissions it is recommended to omit
--onedrive-metadata-permissions.

Metadata and permissions are supported for Folders (directories) as well as
Files. Note that setting the mtime or btime on a Folder requires one extra API
call on OneDrive Business only.

OneDrive does not currently support User Metadata. When writing metadata, only
writeable system properties will be written -- any read-only or unrecognized keys
passed in will be ignored.

TIP: to see the metadata and permissions for any file or folder, run:

rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read

See the OneDrive backend docs for a table of all the supported metadata
properties.
2024-03-08 14:48:54 +00:00
Nick Craig-Wood
4e07a72dc7 fs: Implement --no-update-dir-modtime to disable setting modification times on dirs 2024-03-07 17:20:24 +00:00
Nick Craig-Wood
99acee7ba0 operations: remove stray debug 2024-03-07 17:15:43 +00:00
Nick Craig-Wood
bda4f25baa s3: support metadata setting and mapping on server side Copy
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side copies.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
9f2ce2c7fc drive: support metadata setting and mapping on server side Move,Copy
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side moves or copies.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
6e85a39e99 local: support metadata setting and mapping on server side Move
Before this change the backend would not run the metadata mapper and
it would ignore metadata set when doing server side moves.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
24b4148b5e fs: add MetadataAsOpenOptions 2024-03-07 14:44:45 +00:00
Nick Craig-Wood
41b1250eaf fstests: add tests for Metadata on server side Move and Copy 2024-03-07 14:44:45 +00:00
Nick Craig-Wood
339d3e8ee6 netstorage,quatrix,seafile: fix Root to return correct directory when pointing to a file
This fixes the TestIntegration/FsMkdir/FsPutFiles/FsIsFile/FsRoot
integration test.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
5750795324 protondrive: fix encoding of Root method
This fixes the TestIntegration/FsMkdir/FsPutFiles/FsIsFile/FsRoot
integration test.
2024-03-07 14:44:45 +00:00
Nick Craig-Wood
cdcb8b2a0a Add huajin tong to contributors 2024-03-07 14:44:45 +00:00
huajin tong
b1ae7df556
docs: fix some comments
Signed-off-by: thirdkeyword <fliterdashen@gmail.com>
2024-03-07 12:57:15 +00:00
nielash
431524445e combine: fix operations.DirMove across upstreams - fixes #7661
Before this change, operations.DirMove would fail when moving a directory, if
the src and dest were on different upstreams of a combine remote.

The issue only affected operations.DirMove, and not sync.MoveDir, because they
checked for server-side-move support in different ways.

MoveDir checks by just trying it and seeing what error comes back. This works
fine for combine because combine returns fs.ErrorCantDirMove, which MoveDir
knows how to handle.

DirMove, however, only checked whether the function pointer is nil. This is an
unreliable way to check for combine, because combine does advertise support for
DirMove, despite not always being able to do it.

This change fixes the issue by checking the returned error in a manner similar
to sync.MoveDir and falling back to individual file moves (copy + delete)
depending on which error was returned.
2024-03-07 11:11:46 +00:00
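A sketch of the error-based check, mirroring what sync.MoveDir already did (`moveDirFallback` is a hypothetical helper for the copy + delete path):

```
err := doDirMove(ctx, srcFs, srcRemote, dstRemote)
if errors.Is(err, fs.ErrorCantDirMove) {
	// server-side move not possible between these upstreams:
	// fall back to moving the files individually (copy + delete)
	return moveDirFallback(ctx, f, srcFs, srcRemote, dstRemote)
}
return err
```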
nielash
252562d00a combine: fix CopyDirMetadata error on upstream root
Before this change, operations.CopyDirMetadata would fail with: `internal error:
expecting directory string from combine root '' to have SetMetadata method:
optional feature not implemented` if the dst was the root directory of a combine
upstream. This is because combine was returning a *fs.Dir, which does not
satisfy the fs.SetMetadataer interface.

While it is true that combine cannot set metadata on the root of an upstream
(see also #7652), this should not be considered an error that causes sync to do
high-level retries, abort without doing deletes, etc.

This change addresses the issue by creating a new type of DirWrapper that is
allowed to fail silently, for exceptional cases such as this where certain
special directories have more limited abilities than what the Fs usually
supports.

It is possible that other similar wrapping backends (Union?) may need this same
fix.
2024-03-07 11:09:07 +00:00
nielash
6a72cfd6e1 operations: fix typo in log messages
I assume this must be a typo as %T of dir would only ever print "string"
2024-03-07 11:09:07 +00:00
nielash
354ea6fff3 docs: update to reflect dir modtime/metadata support 2024-03-07 11:09:07 +00:00
nielash
8c69455c37 sync: don't set dir modtimes if already set
Before this change, directory modtimes (and metadata) were always synced from
src to dst, even if already in sync (i.e. their modtimes already matched.) This
potentially required excessive API calls, made logs noisy, and was potentially
problematic for backends that create "versions" or otherwise log activity
updates when modtime/metadata is updated.

After this change, a new DirsEqual function is added to check whether dirs are
equal based on a number of factors such as ModifyWindow and sync flags in use.
If the dirs are equal, the modtime/metadata update is skipped.

For backends that require setDirModTimeAfter, the "after" sync is performed only
for dirs that could have been changed by the sync (i.e. dirs containing files
that were created/updated.)

Note that dir metadata (other than modtime) is not currently considered by
DirsEqual, consistent with how object metadata is synced (only when objects are
unequal for reasons other than metadata).

To sync dir modtimes and metadata unconditionally (the previous behavior), use
--ignore-times.
2024-03-07 09:57:11 +00:00
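A minimal sketch of a DirsEqual-style modtime comparison (the real function also considers sync flags and metadata):

```
func dirsEqual(ctx context.Context, src, dst fs.Directory, window time.Duration) bool {
	if dst == nil {
		return false
	}
	dt := src.ModTime(ctx).Sub(dst.ModTime(ctx))
	// equal if the modtimes match within the modify window
	return dt <= window && dt >= -window
}
```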
nielash
fd8faeb0e6 vfs: fix unicode normalization on macOS - fixes #7072
Before this change, the VFS layer did not properly handle unicode normalization,
which caused problems particularly for users of macOS. While attempts were made
to handle it with various `-o modules=iconv` combinations, this was an imperfect
solution, as no one combination allowed both NFC and NFD content to
simultaneously be both visible and editable via Finder.

After this change, the VFS supports `--no-unicode-normalization` (default `false`)
via the existing `--vfs-case-insensitive` logic, which is extended to apply to both
case insensitivity and unicode normalization form.

This change also adds an additional flag, `--vfs-block-norm-dupes`, to address a
probably rare but possible scenario where a directory contains
multiple duplicate filenames after applying case and unicode normalization
settings. In such a scenario, this flag (disabled by default) hides the
duplicates. This comes with a performance tradeoff, as rclone will have to scan
the entire directory for duplicates when listing a directory. For this reason,
it is recommended to leave this disabled if not needed. However, macOS users may
wish to consider using it, as otherwise, if a remote directory contains both NFC
and NFD versions of the same filename, an odd situation will occur: both
versions of the file will be visible in the mount, and both will appear to be
editable, however, editing either version will actually result in only the NFD
version getting edited under the hood. `--vfs-block-norm-dupes` prevents this
confusion by detecting this scenario, hiding the duplicates, and logging an
error, similar to how this is handled in `rclone sync`.
2024-03-06 16:12:13 +00:00
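The core idea can be sketched as a canonical lookup key, assuming both case insensitivity and normalization are in effect:

```
import (
	"strings"

	"golang.org/x/text/unicode/norm"
)

// canonicalName returns the key used for case- and unicode-
// normalization-insensitive name lookups
func canonicalName(name string) string {
	return strings.ToLower(norm.NFC.String(name))
}
```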
Kyle Reynolds
dcdbad3554
bisync: clarify file operation directions in dry-run logs - fixes #7029
Before this change, NOTICE log messages during bisync dry runs were unclear as
to the direction of the skipped operation (Path1 to 2 vs. 2 to 1.) This change
adjusts the cmd/bisync/log.go indent function to be more expressive about
direction.
2024-03-06 09:26:53 -05:00
Nick Craig-Wood
effad3fe4b build: fix CVE-2024-24786 by upgrading google.golang.org/protobuf
See: https://pkg.go.dev/vuln/GO-2024-2611
2024-03-06 12:42:38 +00:00
Nick Craig-Wood
692af42858 operations: fix TestSetDirModTime for backends with SetDirModTime but not Metadata 2024-03-01 11:39:21 +00:00
Nick Craig-Wood
1693d7ad0f sftp: set DirModTimeUpdatesOnWrite to fix integration tests 2024-03-01 11:29:08 +00:00
Nick Craig-Wood
3bb9394ae5 operations: fix TestMkdirModTime test
This was failing on backends that didn't support metadata but did
support setting directory modtimes.
2024-03-01 11:18:24 +00:00
Nick Craig-Wood
be39e99918 sync: fix TestMoveEmptyDirectories so they work on backends which don't support DirModTimes 2024-03-01 10:56:48 +00:00
Nick Craig-Wood
6e28edeb9a cache: fix crash in tests which assumed local could Purge 2024-02-29 17:55:36 +00:00
Nick Craig-Wood
d50572b108 operations: add operations/hashsum to the rc as rclone hashsum equivalent
Fixes #7569
2024-02-29 16:21:42 +00:00
Nick Craig-Wood
0b8689dc28 rc: Add GetFsNamedFileOK to get an fs which could also be a file 2024-02-29 16:21:42 +00:00
Nick Craig-Wood
5994fcfed8 fs/cache: add PutErr to add an fs.Fs with an fs.ErrorIsFile error to the cache 2024-02-29 16:21:41 +00:00
Nick Craig-Wood
e3f6f68885 lib/cache: add PutErr to put a value with an error into the cache 2024-02-29 16:21:41 +00:00
Nick Craig-Wood
6ff1b6c505 local: delete backend implementation of Purge to speed up and make stats work
In this commit (2014 for v1.02) Purge was implemented for the local
backend:

1527e64ee7 local: Implement Purger interface

This appears to have been implemented just to provide a Purge method
and doesn't do anything useful.

It is in fact significantly worse than the rclone fallback purge, since
it doesn't operate in parallel or update stats.

This patch removes the Purge routine, giving a consequent speed-up and
proper display of stats.

See: https://forum.rclone.org/t/progress-flag-for-rclone-purge/44416
2024-02-29 15:04:51 +00:00
Nick Craig-Wood
4a049c12fe copyurl: add troubleshooting section to the docs
See: https://forum.rclone.org/t/copyurl-fails-with-stream-error-wget-and-curl-works/44382/2
2024-02-29 14:58:12 +00:00
Nick Craig-Wood
15890b7ce7 cmd: make auto completion work for all shells and reduce the size
This updates the bash completion to work with GenBashCompletionV2
which cuts down the size of the completion file dramatically.

See: https://forum.rclone.org/t/request-make-remote-path-completion-work-for-fish-and-zsh/42982/
See: #7000
2024-02-29 14:46:50 +00:00
Nick Craig-Wood
186bb85c44 crypt: add missing error check spotted by linter 2024-02-29 14:46:50 +00:00
nielash
4c6d2c5410 crypt: improve handling of undecryptable file names - fixes #5787 fixes #6439 fixes #6437
Before this change, undecryptable file names would be skipped very quietly
(there was a log warning, but only at DEBUG level),
failing to alert users of a potentially serious issue that needs attention.

After this change, the log level is raised to NOTICE by default and a new
--crypt-strict-names flag allows raising an error, for users who may prefer not
to proceed if such an issue is detected.

See https://forum.rclone.org/t/skipping-undecryptable-file-name-should-be-an-error/27115
https://github.com/rclone/rclone/issues/5787
2024-02-29 12:11:02 +00:00
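A sketch of the new behaviour inside a listing loop (`f.opt.StrictNames` stands for the --crypt-strict-names setting; fs.Logf logs at NOTICE level):

```
decrypted, err := f.cipher.DecryptFileName(entry.Remote())
if err != nil {
	if f.opt.StrictNames {
		return nil, fmt.Errorf("undecryptable file name %q: %w", entry.Remote(), err)
	}
	// NOTICE by default, instead of the old DEBUG-level warning
	fs.Logf(f, "Skipping undecryptable file name %q: %v", entry.Remote(), err)
	continue
}
// ... continue building the listing entry using decrypted ...
```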
Nick Craig-Wood
f5f86786b2 sync: implement directory sync for mod times and metadata
Directory mod times are synced by default if the backend is capable
and directory metadata is synced if the --metadata flag is provided
and the backend is capable.

This updates the bisync golden tests also which were affected by
--dry-run setting of directory modtimes.

Fixes #6685
2024-02-28 16:26:14 +00:00
Nick Craig-Wood
15579c2195 fstests: factor out fstest.NewObject function 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
e8fe0b0553 operations: Implement CopyDirMetadata, CopyDirModTime and SetDirModTime 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
09953d77b5 lsjson,lsf: make sure metadata appears for directories 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
e4d0055b3e drive: implement modtime and metadata setting for directories 2024-02-28 16:26:14 +00:00
Nick Craig-Wood
a60da2ef38 local: fix setting of btime on directories on Windows
Before this change this would give errors like this

    failed to set metadata on directory: failed to set birth (creation) time: Access is denied.

This was caused by opening the directory in the wrong mode.
2024-02-28 16:25:59 +00:00
Nick Craig-Wood
7b01564f83 local: implement modtime and metadata for directories
A consequence of this is that fs.Directory returned by the local
backend will now have a correct size (rather than -1). Some tests
depended on this and have been fixed by this commit too.
2024-02-28 16:09:04 +00:00
Nick Craig-Wood
39db8caff1 cache,chunker,combine,compress,crypt,hasher,union: implement MkdirMetadata and related Features 2024-02-28 16:09:04 +00:00
nielash
0297542f6b cache,chunker,combine,compress,crypt,hasher,union: implement DirSetModTime (if supported by wrapped remote) 2024-02-28 16:09:04 +00:00
nielash
17c0ecc72c sftp: implement DirSetModTime 2024-02-28 16:09:04 +00:00
nielash
cbcb295185 drive: implement DirSetModTime 2024-02-27 19:59:13 +00:00
nielash
67e3725205 local: implement DirSetModTime 2024-02-27 19:59:13 +00:00
Nick Craig-Wood
61d76ae47d fstests: add integration tests for Directory Metadata and ModTime 2024-02-27 19:59:13 +00:00
Nick Craig-Wood
fd1ca2dfe8 fs: allow Metadata calls to be called with Directory or Object
This involved adding the Fs() method to DirEntry as it is needed in
the metadata mapper.

Unspecialised fs.Dir objects will return a new fs.Unknown from their
Fs() methods as they are not specific to any given Fs.
2024-02-27 10:56:19 +00:00
Nick Craig-Wood
e1032f693f fs: add DirWrapper for wrapping Directory-s with optional methods 2024-02-27 10:56:19 +00:00
Nick Craig-Wood
a4cadd1128 fs: add Directory Metadata flags for backends and interfaces
Add backend flags
- ReadDirMetadata
- WriteDirMetadata
- WriteDirSetModTime
- UserDirMetadata
- DirModTimeUpdatesOnWrite

Add Metadata/SetMetadata for directories.

Add MkdirMetadata optional feature
2024-02-27 10:56:19 +00:00
nielash
6da52d76a7 fs: implement DirSetModTime optional feature 2024-02-22 11:13:54 +00:00
Nick Craig-Wood
71a1bbb2be errcount: factor errcount abstraction from operations 2024-02-22 11:13:54 +00:00
Nick Craig-Wood
8f0e9f9f6b mega: fix panic with go1.22
Before this fix rclone would crash with

    panic: encoding alphabet includes duplicate symbols

When compiled with go1.22. This was fixed upstream in

https://github.com/t3rm1n4l/go-mega/issues/48

And this just pulls in the fix.

Fixes #7639
2024-02-21 18:41:44 +00:00
Nick Craig-Wood
072d1f10ab serve webdav: fix --baseurl without leading /
The webdav server needs the prefix passed to it with a leading /
otherwise it does not remove it properly.

The docs state that a leading slash is optional so this patch adds one
if not present.

See: https://forum.rclone.org/t/cant-rename-files-in-rclone-serve-webdav-with-baseurl-maybe-wrong-handling-of-move-request-method/44637
2024-02-21 18:08:44 +00:00
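The normalisation amounts to a couple of lines (sketch, assuming `strings` is imported):

```
// the webdav handler needs the prefix with a leading "/", and the
// docs say the leading slash in --baseurl is optional, so add it
if prefix != "" && !strings.HasPrefix(prefix, "/") {
	prefix = "/" + prefix
}
```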
Nick Craig-Wood
5014348229 Add Anders Swanson to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
ed78ac7c92 Add Joe Cai to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
53d873d60d Add Dan McArdle to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
f2c35fdec6 Add Gabriel Ramos to contributors 2024-02-21 18:08:44 +00:00
Nick Craig-Wood
1c69b20ed7 Add Jack Provance to contributors 2024-02-21 18:08:44 +00:00
nielash
547c635552 mailru: add override for TestApplyTransforms - #7591
mailru is unable to handle filenames with certain combining characters (for
example: йěáñ), and is therefore incapable of testing ApplyTransforms. (It is
also therefore incapable of fully supporting --no-unicode-normalization.)

The same override is applied to chunker when wrapping mailru.
2024-02-21 18:02:19 +00:00
nielash
f0d9117ff3 linkbox: add override for TestFixCase - #7591
linkbox already has an override for TestCaseInsensitiveMoveFile, and being able
to handle case-insensitive moves is a prerequisite for TestFixCase.
2024-02-21 18:02:19 +00:00
nielash
9d2bd163c7 opendrive: fix moving file/folder within the same parent dir - #7591
Before this change, moving (renaming) a file or folder to a different name
within the same parent directory would fail, due to using the wrong API
operation ("/file/move_copy.json" and "/folder/move_copy.json", instead of the
separate "/file/rename.json" and "/folder/rename.json" that opendrive has for
this purpose.)

After this change, Move and DirMove check whether the move is within the same
parent dir. If so, "rename" is used. If not, "move_copy" is used, like before.
2024-02-21 18:02:19 +00:00
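A sketch of the dispatch, assuming the parent directories are compared with path.Split:

```
// choose the API endpoint based on whether the parent dir changes
srcDir, _ := path.Split(srcRemote)
dstDir, _ := path.Split(dstRemote)
endpoint := "/file/move_copy.json"
if srcDir == dstDir {
	endpoint = "/file/rename.json" // same parent: a pure rename
}
```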
Anders Swanson
db8fb5ceda oracleobjectstorage: supports workload identity authentication for OKE
Signed-off-by: Anders Swanson <anders.swanson@oracle.com>
2024-02-20 16:25:59 +00:00
Joe Cai
a1e66cc5e8 swift: Avoid unnecessary container versioning check
Container versioning check is only needed for non-empty large objects.
2024-02-20 15:52:25 +00:00
nielash
7b8bbe531e nfsmount: fix --volname being ignored #7503
Before this change, nfsmount ignored the --volname flag. After this change,
the --volname flag is respected, making it possible to set a custom volume name.

macOS users should note that Finder will show the correct volume name in most
places, but a notable exception is the sidebar, which will show "localhost".
This seems to be a system limitation (at least without `sudo`), but see the
discussion at https://github.com/rclone/rclone/issues/7503#issuecomment-1933997678
for some possible workarounds.
2024-02-18 05:08:59 -05:00
nielash
0e2f1d64e3 nfsmount: fix exit after external unmount #7503
Before this change, if a user unmounted externally (for example, via the Finder
UI), rclone would not be aware of this and wait forever to exit -- effectively
causing a deadlock that would require Ctrl+C to terminate.

After this change, when the handler detects an external unmount, it calls a
function which allows rclone to cleanly shutdown the VFS and exit.
2024-02-18 05:08:59 -05:00
nielash
5638a3841f serve nfs: fix writing files via Finder on macOS - fixes #7503
Before this change, writing files to an `nfsmount` via Finder on macOS would
cause critical errors, rendering `nfsmount` effectively unusable on macOS. This
change fixes the issue so that writes via Finder should be possible.

The issue was primarily caused by the handler's HandleLimit being set to -1. -1 is
the correct default for a NullAuthHandler, but not for a CachingHandler, which
interprets -1 not as "no limit" but as "no cache".

This change sets a high default of 1000000, and gives the user control over it
with a new --nfs-cache-handle-limit flag (available in both `serve nfs` and
`nfsmount`). A minimum of 5 is enforced, as any lower than this will be
insufficient to support directory listing.
2024-02-18 05:08:59 -05:00
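The clamping can be sketched as follows (field names hypothetical):

```
// translate --nfs-cache-handle-limit into the caching handler's limit
limit := opt.HandleLimit
if limit < 5 {
	limit = 5 // anything lower cannot support directory listing
}
```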
Dan McArdle
6986a43b68 bisync: delete flushCache() function from tests
The flushCache() function has a bug that causes it to never actually
flush the cache. Specifically, it checks whether DirCacheFlush is nil,
but never calls it.

The tests are already passing without flushing the dir cache, so this
commit just deletes flushCache() and its call sites.

Fixes rclone/rclone#7623
2024-02-18 04:14:51 -05:00
Oksana Zhykina
11c6489fd1 quatrix: add option to skip project folders 2024-02-18 07:38:19 +01:00
Gabriel Ramos
43823bc925
webdav: reduce priority of chunks upload log 2024-02-18 07:29:23 +01:00
dependabot[bot]
a3b661be0d build(deps): bump golangci/golangci-lint-action from 3 to 4
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 3 to 4.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-02-18 07:25:50 +01:00
Jack Provance
f113c68b13
docs: Fix a heading level in webdav.md documentation (#7631)
This fixes a heading problem under the "Provider Notes" section.
2024-02-18 07:16:23 +01:00
nielash
137f7f62fb sync: use operations.DirMove instead of sync.MoveDir for --fix-case - #7591
This should be more efficient for the purposes of --fix-case, as operations.DirMove
accepts `srcRemote` and `dstRemote` arguments, while sync.MoveDir does not.

This also factors the two-step-move logic to operations.DirMoveCaseInsensitive, so
that it is reusable by other commands.
2024-02-13 15:07:41 -05:00
nielash
dfe76570a1 operations: skip backends incapable of testing TestApplyTransforms - #7591
This adds a step to detect whether the backend is capable of supporting the
feature, and skips the test if not. A backend can be incapable if, for example,
it is non-case-preserving or automatically converts NFD to NFC.
2024-02-13 15:07:41 -05:00
nielash
f4c058e13e bisync: use global --retries and --retries-sleep flags instead of overriding 2024-02-12 13:24:54 -05:00
nielash
407a0f3733 cmd: refactor --retries and --retries-sleep to global config
This change moves the --retries and --retries-sleep flags/variables from cmd to
config (consistent with --low-level-retries), so that they can be more easily
referenced from subcommands.
2024-02-12 13:24:54 -05:00
nielash
b14269fd23 bisync: add support for --retries-sleep - fixes #7555
Before this change, bisync supported --retries but not --retries-sleep.
This change adds support for --retries-sleep.
2024-02-12 13:24:54 -05:00
nielash
76b7bcd4d7 bisync: reset errors between retries
Before this change, in the event of a retryable error, bisync would always retry
the maximum number of times allowed by the `--retries` flag, even if one of the
retries was successful. This change fixes the issue, so that bisync moves on
after the first successful retry.
2024-02-12 13:24:54 -05:00
nielash
782ab3f582 bisync: clean up docs
(as the flags in docs/content/bisync.md do not update automatically, unlike
docs/content/commands/rclone_bisync.md)
2024-02-12 13:24:54 -05:00
nielash
9c6325c131 backend: rename variables to fix CI lint test failures 2024-02-12 12:49:00 -05:00
Volodymyr
2abeda5961
quatrix: fix Content-Range header
This change does not actually affect uploads; it just makes the header correct
according to the definition of Content-Range in
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range#range-end
2024-02-09 16:44:45 +00:00
nielash
885a543023 operations: use --download for TestApplyTransforms #7591
This makes it possible to run the test even on remotes without MD5 support.
2024-02-08 16:08:05 +00:00
nielash
f3680d222c operations: fix TestCaseInsensitiveMoveFileDryRun on chunker integration tests #7591
It appears that ci.DryRun = true affects the behavior of r.WriteObject on
chunker only, and no other remotes. This change puts a quick bandaid on it by
setting it later on in the test, but perhaps the underlying issue warrants a
closer look at some point... is chunker checking ci.DryRun itself in a way that
no other remote does? If so, should it? (Does this break encapsulation?)
2024-02-08 16:08:02 +00:00
nielash
d2b37cf61e operations: fix case-insensitive moves in operations.Move #7591
Before this change, operations.moveOrCopyFile had a special section to detect
and handle changing case of a file on a case insensitive remote, but
operations.Move did not. This caused operations.Move to fail for certain
backends that are incapable of renaming a file in-place to an equal-folding name.
(Not all case-insensitive backends have this limitation -- for example, Dropbox
does but macOS local does not.)

After this change, the special two-part-move section from
operations.moveOrCopyFile is factored out to its own function,
moveCaseInsensitive, which is then called from both operations.moveOrCopyFile
and operations.Move.
2024-02-08 16:07:57 +00:00
Nick Craig-Wood
83f61a9cfb s3: GCS provider: fix server side copy of files bigger than 5G
GCS gives NotImplemented errors for multi-part server side copies. The
threshold for these is currently set just below 5G so any files bigger
than 5G that rclone attempts to server side copy will fail.

This patch works around the problem by adding a quirk for GCS raising
--s3-copy-cutoff to the maximum. This means that rclone will never use
multi-part copies for files in GCS. This includes files bigger than
5GB which (according to AWS documentation) must be copied with
multi-part copy. However this seems to work with GCS.

See: https://forum.rclone.org/t/chunker-uploads-to-gcs-s3-fail-if-the-chunk-size-is-greater-than-the-max-part-size/44349/
See: https://issuetracker.google.com/issues/323465186
2024-02-08 14:53:30 +00:00
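The quirk can be sketched like this (option names assumed from the s3 backend's config; `math` imported):

```
// never use multipart server-side copy on GCS: raise the cutoff to
// the maximum so single-part Copy is always used
if opt.Provider == "GCS" {
	opt.CopyCutoff = fs.SizeSuffix(math.MaxInt64)
}
```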
Nick Craig-Wood
b206496f63 b2: clarify exactly what --b2-download-auth-duration does in the docs
See: https://forum.rclone.org/t/what-does-b2-download-auth-duration-mean/44504/
2024-02-08 09:39:53 +00:00
Nick Craig-Wood
24fdecf107 ftp: fix mkdir with rsftp which is returning the wrong code
On a successful MKD, rsftp seems to return code 250 whereas we and
the RFC expect 257.

This patch makes rclone accept 250 here as well.

See: https://forum.rclone.org/t/rclone-pop-up-an-i-o-error-when-creating-a-folder-in-a-mounted-ftp-drive/44368/3
2024-02-07 22:09:56 +00:00
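A sketch of the workaround, assuming the ftp client surfaces server replies as *textproto.Error values (net/textproto imported):

```
err := c.MakeDir(dirPath)
// rsftp replies 250 to a successful MKD where RFC 959 expects 257,
// so accept 250 as success too
if proto, ok := err.(*textproto.Error); ok && proto.Code == 250 {
	err = nil
}
```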
Nick Craig-Wood
9bd7262dfc Add DanielEgbers to contributors 2024-02-07 22:09:56 +00:00
DanielEgbers
a0dff2dd9c
Seafile: Fix download/upload error when FILE_SERVER_ROOT is relative
A seafile server can be configured to use a relative URL as
FILE_SERVER_ROOT in order to support more than one hostname/ip. (see
https://github.com/haiwen/seahub/issues/3398#issuecomment-506920360 )

The previous backend implementation always expected an absolute
download/upload URL, resulting in an "unsupported protocol scheme"
error.

With this commit both absolute and relative URLs are supported.
2024-02-05 11:48:51 +00:00
Nick Craig-Wood
91b54aafcc rc: add srcFs and dstFs to core/stats and core/transferred stats
Before this change it wasn't possible to see where transfers were
going from and to in core/stats and core/transferred.

When use in rclone mount in particular this made interpreting the
stats very hard.
2024-02-02 11:43:10 +00:00
Nick Craig-Wood
81a29e6895 Add Thomas Müller to contributors 2024-02-02 11:43:10 +00:00
Nick Craig-Wood
f762ef668f Add Michael Eischer to contributors 2024-02-02 11:43:10 +00:00
Thomas Müller
99b9062551 owncloud: add config owncloud_exclude_shares which allows excluding shared files and folders when listing remote resources 2024-01-31 14:47:24 +00:00
Michael Eischer
ef2c5a1998
serve restic: fix error handling
* serve restic: return internal error if listing failed

If listing a remote failed, then rclone returned http status "not
found". This has become a problem since restic 0.16.0 which ignores "not
found"-errors while listing a directory.

Just return internal server error, if something unexpected happens while
listing a directory.

* serve restic: fix error handling if getting a file fails

If the call to `newObject` in `serveObject` fails, then rclone always
returned a "not found" error. This prevents restic from distinguishing
permanent "not found" errors from everything else.

Thus, only return "not found" if the object is not found and an internal
server error otherwise.
2024-01-29 17:54:23 +00:00
Nick Craig-Wood
6e4dd2ab96 docs: ignore amazon cloud drive doc stub when building the docs 2024-01-25 16:35:33 +00:00
Nick Craig-Wood
0c17a17e19 Changelog updates from Version v1.65.2 2024-01-24 16:40:47 +00:00
Nick Craig-Wood
03295bbc3c azureblob: fix data corruption bug #7590
It was reported that rclone copy occasionally uploaded corrupted data
to azure blob.

This turned out to be a race condition updating the block count which
caused blocks to be duplicated.

This bug was introduced in this commit in v1.64.0 and will be fixed in v1.65.2

0427177857 azureblob: implement OpenChunkWriter and multi-thread uploads #7056

This race only seems to happen if `--checksum` is used but can happen otherwise.

Unfortunately Azure blob does not check the MD5 that we send them so
despite sending incorrect data this corruption is not detected. The
corruption is detected when rclone tries to download the file, so
attempting to copy the files back to local disk will result in errors
such as:

    ERROR : file.pokosuf5.partial: corrupted on transfer: md5 hash differ "XXX" vs "YYY"

This adds a check to test the blocklist we upload is as we expected
which would have caught the problem had it been in place earlier.
2024-01-24 11:28:05 +00:00
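The race-free block counting can be sketched as follows (the `w` chunk-writer fields are hypothetical; sync/atomic, encoding/base64, and fmt assumed imported):

```
// derive each block ID from an atomically incremented counter so
// concurrent chunk writers can never duplicate or skip an index
n := atomic.AddInt64(&w.blockCount, 1) - 1
blockID := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%08d", n)))
```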
Nick Craig-Wood
b3a1f66759 build: add -race flag to integration tester test_all 2024-01-24 11:27:43 +00:00
Nick Craig-Wood
a947f75d3b Add Kyle Reynolds to contributors 2024-01-24 11:27:43 +00:00
Nick Craig-Wood
ae0a4c8bbf Add Tera to contributors 2024-01-24 11:27:43 +00:00
Kyle Reynolds
7835991147
fs: add more detailed logging for file includes/excludes
This adds a DEBUG log showing why files were included or excluded.

Fixes #7463
2024-01-22 16:46:26 +00:00
nielash
810644e873 bisync: add --resync-mode for customizing --resync - fixes #5681
Before this change, the path1 version of a file always prevailed during
--resync, and many users requested options to automatically select the winner
based on characteristics such as newer, older, larger, and smaller. This change
adds support for such options.

Note that ideally this feature would have been implemented by allowing the
existing `--resync` flag to optionally accept string values such as `--resync
newer`. However, this would have been a breaking change, as the existing flag
is a `bool` and it does not seem to be possible to have a `string` flag that
accepts both `--resync newer` and `--resync` (with no argument.) (`NoOptDefVal`
does not work for this, as it would force an `=` like `--resync=newer`.) So
instead, the best compromise to avoid a breaking change was to add a new
`--resync-mode CHOICE` flag that implies `--resync`, while maintaining the
existing behavior of `--resync` (which implies `--resync-mode path1`. i.e. both
flags are now valid, and either can be used without the other.

--resync-mode CHOICE

In the event that a file differs on both sides during a `--resync`,
`--resync-mode` controls which version will overwrite the other. The supported
options are similar to `--conflict-resolve`. For all of the following options,
the version that is kept is referred to as the "winner", and the version that
is overwritten (deleted) is referred to as the "loser". The options are named
after the "winner":

- `path1` - (the default) - the version from Path1 is unconditionally
considered the winner (regardless of `modtime` and `size`, if any). This can be
useful if one side is more trusted or up-to-date than the other, at the time of
the `--resync`.
- `path2` - same as `path1`, except the path2 version is considered the winner.
- `newer` - the newer file (by `modtime`) is considered the winner, regardless
of which side it came from. This may result in having a mix of some winners
from Path1, and some winners from Path2. (The implementation is analogous to
running `rclone copy --update` in both directions.)
- `older` - same as `newer`, except the older file is considered the winner,
and the newer file is considered the loser.
- `larger` - the larger file (by `size`) is considered the winner (regardless
of `modtime`, if any). This can be a useful option for remotes without
`modtime` support, or with the kinds of files (such as logs) that tend to grow
but not shrink, over time.
- `smaller` - the smaller file (by `size`) is considered the winner (regardless
of `modtime`, if any).

For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it
will be ignored and will fall back to the default of `path1`. (For example, if
`--resync-mode newer` is set, but one of the paths uses a remote that doesn't
support `modtime`.)
- If a winner can't be determined because the chosen method's attribute is
missing or equal, it will be ignored, and bisync will instead try to determine
whether the files differ by looking at the other `--compare` methods in effect.
(For example, if `--resync-mode newer` is set, but the Path1 and Path2 modtimes
are identical, bisync will compare the sizes.) If bisync concludes that they
differ, preference is given to whichever is the "source" at that moment. (In
practice, this gives a slight advantage to Path2, as the 2to1 copy comes before
the 1to2 copy.) If the files _do not_ differ, nothing is copied (as both sides
are already correct).
- These options apply only to files that exist on both sides (with the same
name and relative path). Files that exist *only* on one side and not the other
are *always* copied to the other, during `--resync` (this is one of the main
differences between resync and non-resync runs).
- `--conflict-resolve`, `--conflict-loser`, and `--conflict-suffix` do not
apply during `--resync`, and unlike these flags, nothing is renamed during
`--resync`. When a file differs on both sides during `--resync`, one version
always overwrites the other (much like in `rclone copy`.) (Consider using
`--backup-dir` to retain a backup of the losing version.)
- Unlike for `--conflict-resolve`, `--resync-mode none` is not a valid option
(or rather, it will be interpreted as "no resync", unless `--resync` has also
been specified, in which case it will be ignored.)
- Winners and losers are decided at the individual file-level only (there is
not currently an option to pick an entire winning directory atomically,
although the `path1` and `path2` options typically produce a similar result.)
- To maintain backward-compatibility, the `--resync` flag implies
`--resync-mode path1` unless a different `--resync-mode` is explicitly
specified. Similarly, all `--resync-mode` options (except `none`) imply
`--resync`, so it is not necessary to use both the `--resync` and
`--resync-mode` flags simultaneously -- either one is sufficient without the
other.
2024-01-20 17:17:01 -05:00
nielash
8d3bcc025a bisync: fix --colors flag
quick fix to get around lack of support in fs.Infof etc.
2024-01-20 17:17:01 -05:00
nielash
0f549520ef bisync: factor resync to separate file 2024-01-20 17:17:01 -05:00
nielash
ba16fcfaf5 bisync: skip empty test case dirs 2024-01-20 17:17:01 -05:00
nielash
68f0998699 bisync: add options to auto-resolve conflicts - fixes #7471
Before this change, when a file was new/changed on both paths (relative to the
prior sync), and the versions on each side were not identical, bisync would
keep both versions, renaming them with ..path1 and ..path2 suffixes,
respectively. Many users have requested more control over how bisync handles
such conflicts -- including an option to automatically select one version as
the "winner" and rename or delete the "loser". This change introduces support
for such options.

--conflict-resolve CHOICE

In bisync, a "conflict" is a file that is *new* or *changed* on *both sides*
(relative to the prior run) AND is *not currently identical* on both sides.
`--conflict-resolve` controls how bisync handles such a scenario. The currently
supported options are:

- `none` - (the default) - do not attempt to pick a winner, keep and rename
both files according to `--conflict-loser` and
`--conflict-suffix` settings. For example, with the default
settings, `file.txt` on Path1 is renamed `file.txt.conflict1` and `file.txt` on
Path2 is renamed `file.txt.conflict2`. Both are copied to the opposite path
during the run, so both sides end up with a copy of both files. (As `none` is
the default, it is not necessary to specify `--conflict-resolve none` -- you
can just omit the flag.)
- `newer` - the newer file (by `modtime`) is considered the winner and is
copied without renaming. The older file (the "loser") is handled according to
`--conflict-loser` and `--conflict-suffix` settings (either renamed or
deleted.) For example, if `file.txt` on Path1 is newer than `file.txt` on
Path2, the result on both sides (with other default settings) will be `file.txt`
(winner from Path1) and `file.txt.conflict1` (loser from Path2).
- `older` - same as `newer`, except the older file is considered the winner,
and the newer file is considered the loser.
- `larger` - the larger file (by `size`) is considered the winner (regardless
of `modtime`, if any).
- `smaller` - the smaller file (by `size`) is considered the winner (regardless
of `modtime`, if any).
- `path1` - the version from Path1 is unconditionally considered the winner
(regardless of `modtime` and `size`, if any). This can be useful if one side is
usually more trusted or up-to-date than the other.
- `path2` - same as `path1`, except the path2 version is considered the
winner.

For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it
will be ignored and fall back to `none`. (For example, if `--conflict-resolve
newer` is set, but one of the paths uses a remote that doesn't support
`modtime`.)
- If a winner can't be determined because the chosen method's attribute is
missing or equal, it will be ignored and fall back to `none`. (For example, if
`--conflict-resolve newer` is set, but the Path1 and Path2 modtimes are
identical, even if the sizes may differ.)
- If the file's content is currently identical on both sides, it is not
considered a "conflict", even if new or changed on both sides since the prior
sync. (For example, if you made a change on one side and then synced it to the
other side by other means.) Therefore, none of the conflict resolution flags
apply in this scenario.
- The conflict resolution flags do not apply during a `--resync`, as there is
no "prior run" to speak of (but see `--resync-mode` for similar
options.)

--conflict-loser CHOICE

`--conflict-loser` determines what happens to the "loser" of a sync conflict
(when `--conflict-resolve` determines a winner) or to both
files (when there is no winner.) The currently supported options are:

- `num` - (the default) - auto-number the conflicts by automatically appending
the next available number to the `--conflict-suffix`, in chronological order.
For example, with the default settings, the first conflict for `file.txt` will
be renamed `file.txt.conflict1`. If `file.txt.conflict1` already exists,
`file.txt.conflict2` will be used instead (etc., up to a maximum of
9223372036854775807 conflicts.)
- `pathname` - rename the conflicts according to which side they came from,
which was the default behavior prior to `v1.66`. For example, with
`--conflict-suffix path`, `file.txt` from Path1 will be renamed
`file.txt.path1`, and `file.txt` from Path2 will be renamed `file.txt.path2`.
If two non-identical suffixes are provided (ex. `--conflict-suffix
cloud,local`), the trailing digit is omitted. Importantly, note that with
`pathname`, there is no auto-numbering beyond `2`, so if `file.txt.path2`
somehow already exists, it will be overwritten. Using a dynamic date variable
in your `--conflict-suffix` (see below) is one possible way to avoid this. Note
also that conflicts-of-conflicts are possible, if the original conflict is not
manually resolved -- for example, if for some reason you edited
`file.txt.path1` on both sides, and those edits were different, the result
would be `file.txt.path1.path1` and `file.txt.path1.path2` (in addition to
`file.txt.path2`.)
- `delete` - keep the winner only and delete the loser, instead of renaming it.
If a winner cannot be determined (see `--conflict-resolve` for details on how
this could happen), `delete` is ignored and the default `num` is used instead
(i.e. both versions are kept and renamed, and neither is deleted.) `delete` is
inherently the most destructive option, so use it only with care.

For all of the above options, note that if a winner cannot be determined (see
`--conflict-resolve` for details on how this could happen), or if
`--conflict-resolve` is not in use, *both* files will be renamed.

--conflict-suffix STRING[,STRING]

`--conflict-suffix` controls the suffix that is appended when bisync renames a
`--conflict-loser` (default: `conflict`).
`--conflict-suffix` will accept either one string or two comma-separated
strings to assign different suffixes to Path1 vs. Path2. This may be helpful
later in identifying the source of the conflict. (For example,
`--conflict-suffix dropboxconflict,laptopconflict`)

With `--conflict-loser num`, a number is always appended to the suffix. With
`--conflict-loser pathname`, a number is appended only when one suffix is
specified (or when two identical suffixes are specified.) i.e. with
`--conflict-loser pathname`, all of the following would produce exactly the
same result:

```
--conflict-suffix path
--conflict-suffix path,path
--conflict-suffix path1,path2
```

Suffixes may be as short as 1 character. By default, the suffix is appended
after any other extensions (ex. `file.jpg.conflict1`), however, this can be
changed with the `--suffix-keep-extension` flag (i.e. to instead result in
`file.conflict1.jpg`).

`--conflict-suffix` supports several *dynamic date variables* when enclosed in
curly braces as globs. This can be helpful to track the date and/or time that
each conflict was handled by bisync. For example:

```
--conflict-suffix {DateOnly}-conflict
// result: myfile.txt.2006-01-02-conflict1
```

All of the formats described [here](https://pkg.go.dev/time#pkg-constants) and
[here](https://pkg.go.dev/time#example-Time.Format) are supported, but take
care to ensure that your chosen format does not use any characters that are
illegal on your remotes (for example, macOS does not allow colons in
filenames, and slashes are also best avoided as they are often interpreted as
directory separators.) To address this particular issue, an additional
`{MacFriendlyTime}` (or just `{mac}`) option is supported, which results in
`2006-01-02 0304PM`.

Note that `--conflict-suffix` is entirely separate from rclone's main `--suffix`
flag. This is intentional, as users may wish to use both flags simultaneously,
if also using `--backup-dir`.

Finally, note that the default in bisync prior to `v1.66` was to rename
conflicts with `..path1` and `..path2` (with two periods, and `path` instead of
`conflict`.) Bisync now defaults to a single dot instead of a double dot, but
additional dots can be added by including them in the specified suffix string.
For example, for behavior equivalent to the previous default, use:

```
[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
```
2024-01-20 17:17:01 -05:00
nielash
d031cc138d bisync: check for syntax errors in path args - fixes #7511
Before this change, certain shell quoting / escaping errors (particularly on
Windows) were not detected by Bisync, possibly resulting in incorrect expansion
and confusing errors. In particular, Windows paths with a single trailing
backslash followed by a quote would be interpreted as an escaped quote --
resulting in the quote and subsequent flags being erroneously considered part
of the path.

After this change, Bisync specifically checks for a few of the most common
patterns, and if detected, exits with a more helpful error message before doing
any damage.
2024-01-20 16:54:12 -05:00
nielash
e71b252b65 bisync: add overlapping paths check
Before this change, Bisync did not check to make sure that Path1 and Path2 do
not overlap, nor did it check for overlaps with `--backup-dir`. While `sync`
does check for these things, it can sometimes be fooled because of the way
Bisync calls it with `--files-from` filters. Relying on sync could also leave a
run in a half-finished state if it were to error in one direction but not the
other (`--backup-dir` only checks for overlaps with the dest.)

After this change, Bisync does its own check up front, so we can quickly return
an error and exit before any changes are made.
2024-01-20 16:54:12 -05:00
nielash
e9cd3e5986 bisync: allow lock file expiration/renewal with --max-lock - #7470
Background: Bisync uses lock files as a safety feature to prevent
interference from other bisync runs while it is running. Bisync normally
removes these lock files at the end of a run, but if bisync is abruptly
interrupted, these files will be left behind. By default, they will lock out
all future runs, until the user has a chance to manually check things out and
remove the lock.

Before this change, lock files blocked future runs indefinitely, so a single
interrupted run would lock out all future runs forever (absent user
intervention), and there was no way to change this behavior.

After this change, a new --max-lock flag can be used to make lock files
automatically expire after a certain period of time, so that future runs are
not locked out forever, and auto-recovery is possible. --max-lock can be any
duration 2m or greater (or 0 to disable). If set, lock files older than this
will be considered "expired", and future runs will be allowed to disregard them
and proceed. (Note that the --max-lock duration must be set by the process that
left the lock file -- not the later one interpreting it.)

If set, bisync will also "renew" these lock files every `--max-lock` minus
one minute throughout a run, for extra safety. (For example,
with --max-lock 5m, bisync would renew the lock file (for another 5 minutes)
every 4 minutes until the run has completed.) In other words, it should not be
possible for a lock file to pass its expiration time while the process that
created it is still running -- and you can therefore be reasonably sure that
any _expired_ lock file you may find was left there by an interrupted run, not
one that is still running and just taking a while.

If --max-lock is 0 or not set, the default is that lock files will never
expire, and will block future runs (of these same two bisync paths)
indefinitely.

For maximum resilience from disruptions, consider setting a relatively short
duration like --max-lock 2m along with --resilient and --recover, and a
relatively frequent cron schedule. The result will be a very robust
"set-it-and-forget-it" bisync run that can automatically bounce back from
almost any interruption it might encounter, without requiring the user to get
involved and run a --resync.
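
For example, a sketch of such a scheduled run (remote and path names are
placeholders):

```
rclone bisync remote1:path1 remote2:path2 --max-lock 2m --resilient --recover
```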
2024-01-20 16:31:28 -05:00
nielash
4025f42bd9 bisync: Graceful Shutdown, --recover from interruptions without --resync - fixes #7470
Before this change, bisync had no mechanism to gracefully cancel a sync early
and exit in a clean state. Additionally, there was no way to recover on the
next run -- any interruption at all would cause bisync to require a --resync,
which made bisync more difficult to use as a scheduled background process.

This change introduces a "Graceful Shutdown" mode and --recover flag to
robustly recover from even un-graceful shutdowns.

If --recover is set, in the event of a sudden interruption or other un-graceful
shutdown, bisync will attempt to automatically recover on the next run, instead
of requiring --resync. Bisync is able to recover robustly by keeping one
"backup" listing at all times, representing the state of both paths after the
last known successful sync. Bisync can then compare the current state with this
snapshot to determine which changes it needs to retry. Changes that were synced
after this snapshot (during the run that was later interrupted) will appear to
bisync as if they are "new or changed on both sides", but in most cases this is
not a problem, as bisync will simply do its usual "equality check" and learn
that no action needs to be taken on these files, since they are already
identical on both sides.

In the rare event that a file is synced successfully during a run that later
aborts, and then that same file changes AGAIN before the next run, bisync will
think it is a sync conflict, and handle it accordingly. (From bisync's
perspective, the file has changed on both sides since the last trusted sync,
and the files on either side are not currently identical.) Therefore, --recover
carries with it a slightly increased chance of having conflicts -- though in
practice this is pretty rare, as the conditions required to cause it are quite
specific. This risk can be reduced by using bisync's "Graceful Shutdown" mode
(triggered by sending SIGINT or Ctrl+C), when you have the choice, instead of
forcing a sudden termination.

--recover and --resilient are similar, but distinct -- the main difference is
that --resilient is about _retrying_, while --recover is about _recovering_.
Most users will probably want both. --resilient allows retrying when bisync has
chosen to abort itself due to safety features such as failing --check-access or
detecting a filter change. --resilient does not cover external interruptions
such as a user shutting down their computer in the middle of a sync -- that is
what --recover is for.

"Graceful Shutdown" mode is activated by sending SIGINT or pressing Ctrl+C
during a run. Once triggered, bisync will use best efforts to exit cleanly
before the timer runs out. If bisync is in the middle of transferring files, it
will attempt to cleanly empty its queue by finishing what it has started but
not taking more. If it cannot do so within 30 seconds, it will cancel the
in-progress transfers at that point and then give itself a maximum of 60
seconds to wrap up, save its state for next time, and exit. With the -vP flags
you will see constant status updates and a final confirmation of whether or not
the graceful shutdown was successful.

At any point during the "Graceful Shutdown" sequence, a second SIGINT or Ctrl+C
will trigger an immediate, un-graceful exit, which will leave things in a
messier state. Usually a robust recovery will still be possible if using
--recover mode, otherwise you will need to do a --resync.

If you plan to use Graceful Shutdown mode, it is recommended to use --resilient
and --recover, and it is important to NOT use --inplace, otherwise you risk
leaving partially-written files on one side, which may be confused for real
files on the next run. Note also that in the event of an abrupt interruption, a
lock file will be left behind to block concurrent runs. You will need to delete
it before you can proceed with the next run (or wait for it to expire on its
own, if using --max-lock.)
2024-01-20 16:31:28 -05:00
nielash
b4216648e4 bisync: full support for comparing checksum, size, modtime - fixes #5679 fixes #5683 fixes #5684 fixes #5675
Before this change, bisync could only detect changes based on modtime, and
would refuse to run if either path lacked modtime support. This made bisync
unavailable for many of rclone's backends. Additionally, bisync did not account
for the Fs's precision when comparing modtimes, meaning that they could only be
reliably compared within the same side -- not against the opposite side. Size
and checksum (even when available) were ignored completely for deltas.

After this change, bisync now fully supports comparing based on any combination
of size, modtime, and checksum, lifting the prior restriction on backends
without modtime support. The comparison logic considers the backend's
precision, hash types, and other features as appropriate.

The comparison features optionally use a new --compare flag (which takes any
combination of size,modtime,checksum) and even supports some combinations not
otherwise supported in `sync` (like comparing all three at the same time.) By
default (without the --compare flag), bisync inherits the same comparison
options as `sync` (that is: size and modtime by default, unless modified with
flags such as --checksum or --size-only.) If the --compare flag is set, it will
override these defaults.
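
For example (with placeholder paths), comparing all three at once might look
like:

```
rclone bisync path1 path2 --compare size,modtime,checksum
```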

If --compare includes checksum and both remotes support checksums but have no
hash types in common with each other, checksums will be considered only for
comparisons within the same side (to determine what has changed since the prior
sync), but not for comparisons against the opposite side. If one side supports
checksums and the other does not, checksums will only be considered on the side
that supports them. When comparing with checksum and/or size without modtime,
bisync cannot determine whether a file is newer or older -- only whether it is
changed or unchanged. (If it is changed on both sides, bisync still does the
standard equality-check to avoid declaring a sync conflict unless it absolutely
has to.)

Also included are some new flags to customize the checksum comparison behavior
on backends where hashes are slow or unavailable. --no-slow-hash and
--slow-hash-sync-only allow selectively ignoring checksums on backends such as
local where they are slow. --download-hash allows computing them by downloading
when (and only when) they're otherwise not available. Of course, this option
probably won't be practical with large files, but may be a good option for
syncing small-but-important files with maximum accuracy (for example, a source
code repo on a crypt remote.) An additional advantage over methods like
cryptcheck is that the original file is not required for comparison (for
example, --download-hash can be used to bisync two different crypt remotes with
different passwords.)
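
As a hedged illustration of that last use case (the crypt remote names are
placeholders):

```
rclone bisync crypt1: crypt2: --compare checksum --download-hash
```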

Additionally, all of the above are now considered during the final --check-sync
for much-improved accuracy (before this change, it only compared filenames!)

Many other details are explained in the included docs.
2024-01-20 16:08:06 -05:00
nielash
d8e07bfd8e bisync: document beta status more clearly - fixes #6082 2024-01-20 15:38:26 -05:00
nielash
199d82969b bisync: normalize session name to non-canonical - fixes #7423
Before this change, bisync used the "canonical" Fs name in the filename for its
listing files, including any {hexstring} suffix. An unintended consequence of
this was that if a user added a backend-specific flag from the command line
(thus "overriding" the config), bisync would fail to find the listing files it
created during the prior run without this flag, due to the path now having a
{hexstring} suffix that wasn't there before (or vice versa, if the flag was
present when the session was established, and later removed.) This would
sometimes cause bisync to fail with a critical error (if no listing existed
with the alternate name), or worse -- it would sometimes cause bisync to use an
old, incorrect listing (if old listings with the alternate name DID still
exist, from before the user changed their flags.)

After this change, the issue is fixed by always normalizing the SessionName to
the non-canonical version (no {hexstring} suffix), regardless of the flags. To
avoid a breaking change, we first check if a suffixed listing exists. If so, we
rename it (and overwrite the non-suffixed version, if any.) If not, we carry on
with the non-suffixed version. (We should only find a suffixed version if
created prior to this commit.)

The result for the user is that the same pair of paths will always use the same
.lst filenames, with or without backend-specific flags.
2024-01-20 15:38:26 -05:00
nielash
bb74a13c07 bisync: update version number in docs
as these changes did not make it in time for 1.65
2024-01-20 15:38:26 -05:00
nielash
57624629d6 bisync: account for differences in backend features on integration tests - see #5679
Before this change, integration tests often could not be run on backends with
differing features from the local system that goldenized them. In particular,
differences in modtime precision, checksum support, and encoding would cause
false positives. After this change, the tests more accurately account for the
features of the backend being tested, which allows us to see true positives
more clearly, and more meaningfully assess whether a backend is supported.
2024-01-20 14:50:08 -05:00
nielash
7c6f0cc455 operations: fix renaming a file on macOS
Before this change, a file would sometimes be silently deleted instead of
renamed on macOS, due to its unique handling of unicode normalization. Rclone
already had a SameObject check in place for case insensitivity before deleting
the source (for example if "hello.txt" was renamed to "HELLO.txt"), but had no
such check for unicode normalization. After this change, the delete is skipped
on macOS if the src and dst filenames normalize to the same NFC string.

Example of the previous behavior:

 ~ % rclone touch /Users/nielash/rename_test/ö
 ~ % rclone lsl /Users/nielash/rename_test/ö
        0 2023-11-21 17:28:06.170486000 ö
 ~ % rclone moveto /Users/nielash/rename_test/ö /Users/nielash/rename_test/ö -vv
2023/11/21 17:28:51 DEBUG : rclone: Version "v1.64.0" starting with parameters ["rclone" "moveto" "/Users/nielash/rename_test/ö" "/Users/nielash/rename_test/ö" "-vv"]
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/ö"
2023/11/21 17:28:51 DEBUG : Using config file from "/Users/nielash/.config/rclone/rclone.conf"
2023/11/21 17:28:51 DEBUG : fs cache: adding new entry for parent of "/Users/nielash/rename_test/ö", "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : Creating backend with remote "/Users/nielash/rename_test/"
2023/11/21 17:28:51 DEBUG : fs cache: renaming cache item "/Users/nielash/rename_test/" to be canonical "/Users/nielash/rename_test"
2023/11/21 17:28:51 DEBUG : ö: Size and modification time the same (differ by 0s, within tolerance 1ns)
2023/11/21 17:28:51 DEBUG : ö: Unchanged skipping
2023/11/21 17:28:51 INFO  : ö: Deleted
2023/11/21 17:28:51 INFO  :
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Checks:                 1 / 1, 100%
Deleted:                1 (files), 0 (dirs)
Elapsed time:         0.0s

2023/11/21 17:28:51 DEBUG : 5 go routines active
 ~ % rclone lsl /Users/nielash/rename_test/
 ~ %
2024-01-20 14:50:08 -05:00
nielash
422b037087 bisync: fallback to cryptcheck or --download when can't check hash
Bisync checks file equality before renaming sync conflicts by comparing
checksums. Before this change, backends without checksum support (notably
Crypt) would fall back to --size-only for these checks, which is not a very
safe method (differing files can sometimes have the same size, especially if
they're small.) After this change, Crypt remotes fallback to using Cryptcheck
so that checksums can be compared. As a last resort when neither Check nor
Cryptcheck are available, files are compared using --download so that we can be
certain the files are identical regardless of checksum support.
2024-01-20 14:50:08 -05:00
nielash
7f854acb05 local: fix cleanRootPath on Windows after go1.21.4 stdlib update
Similar to
acf1e2df84,
go1.21.4 appears to have broken sync.MoveDir on Windows because
filepath.VolumeName() returns `\\?` instead of `\\?\C:` in cleanRootPath. It
looks like the Go team is aware of the issue and planning a fix, so this may
only be needed temporarily.
2024-01-20 14:50:08 -05:00
nielash
bbf9b1b3d2 bisync: support two --backup-dir paths on different remotes
Before this change, bisync supported `--backup-dir` only when `Path1` and
`Path2` were different paths on the same remote. With this change, bisync
introduces new `--backup-dir1` and `--backup-dir2` flags to support separate
backup-dirs for `Path1` and `Path2`.

`--backup-dir1` and `--backup-dir2` can use different remotes from each other,
but `--backup-dir1` must use the same remote as `Path1`, and `--backup-dir2`
must use the same remote as `Path2`. Each backup directory must not overlap its
respective bisync Path without being excluded by a filter rule.

The standard `--backup-dir` will also work, if both paths use the same remote
(but note that deleted files from both paths would be mixed together in the
same dir). If either `--backup-dir1` and `--backup-dir2` are set, they will
override `--backup-dir`.
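
For example (remote and path names are placeholders), respecting the
same-remote constraints described above:

```
rclone bisync remote1:path1 remote2:path2 --backup-dir1 remote1:backups --backup-dir2 remote2:backups
```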
2024-01-20 14:50:08 -05:00
nielash
9cf783677e bisync: support files with unknown length, including Google Docs - fixes #5696
Before this change, bisync intentionally ignored Google Docs (albeit in a
buggy way that caused problems during --resync.) After this change, Google Docs
(including Google Sheets, Slides, etc.) are now supported in bisync, subject to
the same options, defaults, and limitations as in `rclone sync`. When bisyncing
drive with non-drive backends, the drive -> non-drive direction is controlled
by `--drive-export-formats` (default `"docx,xlsx,pptx,svg"`) and the non-drive
-> drive direction is controlled by `--drive-import-formats` (default none.)

For example, with the default export/import formats, a Google Sheet on the
drive side will be synced to an `.xlsx` file on the non-drive side. In the
reverse direction, `.xlsx` files with filenames that match an existing Google
Sheet will be synced to that Google Sheet, while `.xlsx` files that do NOT
match an existing Google Sheet will be copied to drive as normal `.xlsx` files
(without conversion to Sheets, although the Google Drive web browser UI may
still give you the option to open it as one.)

If `--drive-import-formats` is set (it's not, by default), then all of the
specified formats will be converted to Google Docs, if there is no existing
Google Doc with a matching name. Caution: such conversion can be quite lossy,
and in most cases it's probably not what you want!

To bisync Google Docs as URL shortcut links (in a manner similar to "Drive for
Desktop"), use: `--drive-export-formats url` (or alternatives.)

Note that these link files cannot be edited on the non-drive side -- you will
get errors if you try to sync an edited link file back to drive. They CAN be
deleted (it will result in deleting the corresponding Google Doc.) If you
create a `.url` file on the non-drive side that does not match an existing
Google Doc, bisyncing it will just result in copying the literal `.url` file
over to drive (no Google Doc will be created.) So, as a general rule of thumb,
think of them as read-only placeholders on the non-drive side, and make all
your changes on the drive side.

Likewise, even with other export-formats, it is best to only move/rename Google
Docs on the drive side. This is because otherwise, bisync will interpret this
as a file deleted and another created, and accordingly, it will delete the
Google Doc and create a new file at the new path. (Whether or not that new file
is a Google Doc depends on `--drive-import-formats`.)

Lastly, take note that all Google Docs on the drive side have a size of `-1`
and no checksum. Therefore, they cannot be reliably synced with the
`--checksum` or `--size-only` flags. (To be exact: they will still get
created/deleted, and bisync's delta engine will notice changes and queue them
for syncing, but the underlying sync function will consider them identical and
skip them.) To work around this, use the default (modtime and size) instead of
`--checksum` or `--size-only`.

To ignore Google Docs entirely, use `--drive-skip-gdocs`.

Nearly all of the Google Docs logic is outsourced to the Drive backend, so
future changes should also be supported by bisync.
2024-01-20 14:50:08 -05:00
nielash
4d5d6ee61b bisync: provide more info in critical error msgs 2024-01-20 14:50:08 -05:00
nielash
44637dcd7f bisync: high-level retries if --resilient
Before this change, bisync had no ability to retry in the event of sync errors.
After this change, bisync will retry if --resilient is passed, but only in one
direction at a time. We can safely retry in one direction because the source is
still intact, even if the dest was left in a messy state. If the first
direction still fails after our final retry, we abort and do NOT continue in
the other direction, to prevent the messy dest from polluting the source. If
the first direction succeeds, we do then allow retries in the other direction.

The number of retries is controllable by --retries (default 3)
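
For example (with placeholder paths), raising the retry count from its
default:

```
rclone bisync path1 path2 --resilient --retries 5
```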

2024-01-20 14:50:08 -05:00
nielash
98f539de8f bisync: refactor normalization code, fix deltas - fixes #7270
Refactored the case / unicode normalization logic to be much more efficient,
and fixed the last outstanding issue from #7270. Before this change, we were
doing lots of for loops and re-normalizing strings we had already normalized
earlier. Now, we leave the normalizing entirely to March and avoid
re-transforming later, which seems to make a large difference in terms of
performance.
2024-01-20 14:50:08 -05:00
nielash
58fd6d7b94 docs: add bisync to index 2024-01-20 14:50:08 -05:00
nielash
9c96c13a35 bisync: optimize --resync performance -- partially addresses #5681
Before this change, --resync was handled in three steps, and needed to do a lot
of unnecessary work to implement its own --ignore-existing logic, which also
caused problems with unicode normalization, in addition to being pretty slow.
After this change, it is refactored to produce the same result much more
efficiently, by reducing the three steps to two and letting ci.IgnoreExisting
do the work instead of reinventing the wheel.

The behavior and sync order remain unchanged for now -- just faster (but see
the ongoing lively discussions about potential future changes in #5681!)
2024-01-20 14:50:08 -05:00
nielash
f7f4651828 bisync: handle unicode and case normalization consistently - mostly-fixes #7270
Before this change, Bisync sometimes normalized NFD to NFC and sometimes
did not, causing errors in some scenarios (particularly for users of macOS).
It was similarly inconsistent in its handling of case-insensitivity.

There were three main places where Bisync should have normalized, but didn't:

1. When building the list of files that need to be transferred during --resync
2. When building the list of deltas during a non-resync
3. When comparing Path1 to Path2 during --check-sync

After this change, 1 and 3 are resolved, and bisync supports
--no-unicode-normalization and --ignore-case-sync in the same way as sync.
2 will be addressed in a future update.
2024-01-20 14:50:08 -05:00
nielash
11afc3dde0 sync: --fix-case flag to rename case insensitive dest - fixes #4854
Before this change, a sync to a case insensitive dest (such as macOS / Windows)
would not result in a matching filename if the source and dest had casing
differences but were otherwise equal. For example, syncing `hello.txt` to
`HELLO.txt` would result in the dest filename remaining `HELLO.txt`.
Furthermore, `--local-case-sensitive` did not solve this, as it actually caused
`HELLO.txt` to get deleted!

After this change, `HELLO.txt` is renamed to `hello.txt` to match the source,
only if the `--fix-case` flag is specified. (The old behavior remains the
default.)
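
For example (remote names are placeholders):

```
rclone sync source:path dest:path --fix-case
```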
2024-01-20 14:50:08 -05:00
nielash
88e516adee moveOrCopyFile: avoid panic on --dry-run
Before this change, changing the case of a file on a case insensitive remote
would fatally panic when `--dry-run` was set, due to `moveOrCopyFile`
attempting to access the non-existent `tmpObj` it (would normally have)
created. After this change, the panic is avoided by skipping this step during
a `--dry-run` (with the usual "skipped as --dry-run is set" log message.)
2024-01-20 14:50:08 -05:00
nielash
fd95511091 bisync: generate listings concurrently with march -- fixes #7332
Before this change, bisync needed to build a full listing for Path1, then a
full listing for Path2, then compare them -- and each of those tasks needed to
finish before the next one could start. In addition to being slow and
inefficient, it also caused real problems if a file changed between the time
bisync checked it on Path1 and the time it checked the corresponding file on
Path2.

This change solves these problems by listing both paths concurrently, using
the same March infrastructure that check and sync use to traverse two
directories in lock-step, optimized by Go's robust concurrency support.
Listings should now be much faster, and any given path is now checked
nearly-instantaneously on both sides, minimizing room for error.

Further discussion:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=4.%20Listings%20should%20alternate%20between%20paths%20to%20minimize%20errors
2024-01-20 14:50:08 -05:00
nielash
0cac5d67ab bisync: introduce terminal colors
This introduces a few basic color codings to make the terminal output more
readable (and more fun). Rclone's standard --color flag is supported.
(AUTO|NEVER|ALWAYS)

Only a few lines have colors right now -- more will probably be added in
future versions.
2024-01-20 14:50:08 -05:00
nielash
6d6dc00abb bisync: rollback listing on error
Before this change, bisync had no mechanism for "retrying" a file again next
time, in the event of an unexpected and possibly temporary error. After this
change, bisync is now essentially able to mark a file as needing to be
rechecked next time. Bisync does this by keeping one prior listing on hand at
all times. In a low-confidence situation, bisync can revert a given file row
back to its state at the end of the last known successful sync, ensuring that
any subsequent changes will be re-noticed on the next run.
This can potentially be helpful for a dynamically changing file system, where
files may be changing quickly while bisync is working with them.
2024-01-20 14:50:08 -05:00
nielash
079763f09a bisync: isDir check for deltas
Before this change, if --create-empty-src-dirs was specified, bisync would
include directories in the list of deltas to evaluate by their modtime,
relative to the prior sync. This was unnecessary, as rclone does not yet
support setting modtime for directories.

After this change, we skip directories when comparing modtimes. (In other
words, we care only if a directory is created or deleted, not whether it is
newer or older.)
2024-01-20 14:50:08 -05:00
nielash
978cbf9360 bisync: generate final listing from sync results, not relisting -- fixes #5676
Before this change, if there were changes to sync, bisync listed each path
twice: once before the sync and once after. The second listing caused quite
a lot of problems, in addition to making each run much slower and more
expensive. A serious side-effect was that file changes could slip through
undetected, if they happened to occur while a sync was running (between the
first and second listing snapshots.)

After this change, the second listing is eliminated by getting the underlying
sync operation to report back a list of what it changed. Not only is this more
efficient, but also much more robust to concurrent modifications. It should no
longer be necessary to avoid making changes while it's running -- bisync will
simply learn about those changes next time and handle them on the next run.
Additionally, this also makes --check-sync usable again.

For further discussion, see:
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=5.%20Final%20listings%20should%20be%20created%20from%20initial%20snapshot%20%2B%20deltas%2C%20not%20full%20re%2Dscans%2C%20to%20avoid%20errors%20if%20files%20changed%20during%20sync
2024-01-20 14:50:08 -05:00
nielash
3a50f35df9 sync: report list of synced paths to file -- see #7282
Allows rclone sync to accept the same output file flags as rclone check,
for the purpose of writing results to a file.
A new --dest-after option is also supported, which writes a list file using
the same ListFormat flags as lsf (including customizable options for hash,
modtime, etc.) Conceptually it is similar to rsync's --itemize-changes, but
not identical -- it should output an accurate list of what will be on the
destination after the sync.

Note that it has a few limitations, and certain scenarios
are not currently supported:

- --max-duration / CutoffModeHard
- --compare-dest / --copy-dest (because equal() is called multiple times for the same file)
- server-side moves of an entire dir at once (because we never get the individual file objects in the dir)
- High-level retries, because there would be dupes
- Possibly some error scenarios that didn't come up on the tests

Note also that each file is logged during the sync, as opposed to after, so it
is most useful as a predictor of what SHOULD happen to each file
(which may or may not match what actually DID.)

Only rclone sync is currently supported -- support for copy and move may be
added in the future.
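
A minimal sketch (remote names and the output filename are placeholders):

```
rclone sync source:path dest:path --dest-after dest-after.txt
```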
2024-01-20 14:50:08 -05:00
nielash
c0968a0987 operations: add logger to log list of sync results -- fixes #7282
Logger instruments the Sync routine with a status report for each file pair,
making it possible to output a list of the synced files, along with their
attributes and sigil categorization (match/differ/missing/etc.)
It is very customizable by passing in a custom LoggerFn, options, and
io.Writers to be written to. Possible uses include:
- allow sync to write path lists to a file, in the same format as rclone check
- allow sync to output a --dest-after file using the same format flags as lsf
- receive results as JSON when calling sync from an internal function
- predict the post-sync state of the destination

For usage examples, see bisync.WriteResults() or sync.SyncLoggerFn()
2024-01-20 14:50:08 -05:00
nielash
932f9ec34a bisync: document support for atomic uploads 2024-01-20 14:50:08 -05:00
nielash
0e5f12126f bisync: merge copies and deletes, support --track-renames and --backup-dir -- fixes #5690 fixes #5685
Before this change, bisync handled copies and deletes in separate operations.
After this change, they are combined in one sync operation, which is faster
and also allows bisync to support --track-renames and --backup-dir.

Bisync uses a --files-from filter containing only the paths bisync has
determined need to be synced. Just like in sync (but in both directions),
if a path is present on the dst but not the src, it's interpreted as a delete
rather than a copy.
2024-01-20 14:50:08 -05:00
nielash
5c7ba0bfd3 bisync: fix tests on macOS
normalizes unicode and ignores .DS_Store files to make testing possible
on macOS
2024-01-20 14:50:08 -05:00
nielash
9933d6c071 check: respect --no-unicode-normalization and --ignore-case-sync for --checkfile
Before this change, --no-unicode-normalization and --ignore-case-sync
were respected for rclone check but not for rclone check --checkfile,
causing them to give different results.

This change adds support for --checkfile so that the behavior is consistent.
2024-01-20 14:50:08 -05:00
nielash
66929416d4 lsf: add --time-format flag
Before this change, lsf's time format was hard-coded to "2006-01-02 15:04:05",
regardless of the Fs's precision. After this change, a new optional
--time-format flag is added to allow customizing the format (the default is
unchanged).

Examples:
	rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
	rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
	rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
	rclone lsf remote:path --format pt --time-format RFC3339
	rclone lsf remote:path --format pt --time-format DateOnly
	rclone lsf remote:path --format pt --time-format max

--time-format max will automatically truncate '2006-01-02 15:04:05.000000000'
to the maximum precision supported by the remote.
2024-01-20 14:50:08 -05:00
dependabot[bot]
b06935a12e build(deps): bump actions/cache from 3 to 4
Bumps [actions/cache](https://github.com/actions/cache) from 3 to 4.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-19 17:19:08 +00:00
Tera
806f6ab1eb add missing backtick 2024-01-19 11:17:36 +00:00
Nick Craig-Wood
c482624a6c config: add config/paths to the rc as rclone config paths equivalent
Fixes #7568
2024-01-18 17:47:39 +00:00
kapitainsky
17fea90ac9 docs: add rclone OS requirements
Adds a list of rclone OS requirements and the latest rclone versions known to work with specific historical OS versions.

Discussed on the forum:
https://forum.rclone.org/t/rclone-1-65-1-runtime-exception-error-crash-immediately-after-running-the-command/44051

Fixes: #7571
2024-01-17 16:42:33 +00:00
Harshit Budhraja
78176d39fd imagekit: updated overview - supported operations 2024-01-17 16:38:54 +00:00
Nick Craig-Wood
ae3c73f610 stats: fix race between ResetCounters and stopAverageLoop called from time.AfterFunc
Before this change StatsInfo.ResetCounters() and stopAverageLoop()
(when called from time.AfterFunc) could race on StatsInfo.average.
This was because the deferred stopAverageLoop accessed
StatsInfo.average without locking.

For some reason this only ever happened on macOS. This caused the CI
to fail on macOS thus causing the macOS builds not to appear.

This commit fixes the problem with a bit of extra locking.

It also renames all StatsInfo methods that should be called without
the lock to start with an initial underscore as this is the convention
we use elsewhere.

Fixes #7567
2024-01-17 10:23:50 +00:00
Nick Craig-Wood
d20f647487 Add Harshit Budhraja to contributors 2024-01-17 10:23:50 +00:00
Harshit Budhraja
6521394865 imagekit: Updated docs and web content 2024-01-16 18:25:25 +00:00
Nick Craig-Wood
42cac4cf53 build: use API when fetching golangci-lint as it is more reliable
This was turned off previously because we used it in the CI and it
rate limited.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
223d8c5fe3 serve dlna: now only supported on go1.21 or later
This is due to use of go1.21 only constructs in github.com/anacrolix/log
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
dd0e5b9a7f operations: use built in io.OffsetWriter for go1.20 2024-01-15 16:22:07 +00:00
Nick Craig-Wood
da244a3709 ssh: shorten wait delay for external ssh binaries now that we are using go1.20
Now we are guaranteed to have go1.20 or later we can use the WaitDelay
flag when running external ssh binaries.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
938b43c26c build: remove random.Seed since random generator is seeded automatically in go1.20
Now that the minimum version is go1.20 we can stop seeding the random
number generator.
2024-01-15 16:22:07 +00:00
Nick Craig-Wood
13fb2fb2ec build: update to go1.22rc1 and make go1.20 the minimum required version 2024-01-15 16:22:07 +00:00
Nick Craig-Wood
43cc2435c3 build: update indirect dependencies where possible 2024-01-15 16:18:42 +00:00
Nick Craig-Wood
1b1e43074f build: update direct dependencies and fix serve nfs
This updates the direct dependencies.

The latest github.com/willscott/go-nfs has changed the interface
slightly so this implements a dummy InvalidateHandle method in order
to satisfy it.
2024-01-15 16:18:42 +00:00
Nick Craig-Wood
cacfc100de docs: add warp.dev sponsorship to github home page 2024-01-15 11:57:27 +00:00
Nick Craig-Wood
f8c5695aed docs: add warp.dev as a sponsor 2024-01-15 11:55:38 +00:00
Nick Craig-Wood
a5972fe0d1 docs: update website footer 2024-01-15 11:55:38 +00:00
Nick Craig-Wood
184459ba8f vfs: fix stale data when using --vfs-cache-mode full
Before this change the VFS cache could get into a state where when an
object was updated remotely, the fingerprint of the item was correct
for the new object but the data in the VFS cache was for the old
object.

This fixes the problem by updating the fingerprint of the item at the
point we remove the stale data. The empty cache item now represents
the new item even though it has no data in.

This stops the fallback code for an empty fingerprint running (used
when we are writing items to the cache instead of reading them) which
was causing the problem.

Fixes #6053
See: https://forum.rclone.org/t/cached-webdav-mount-fingerprints-get-nuked-on-ls/43974/
2024-01-15 11:12:59 +00:00
Nick Craig-Wood
519fe98e6e azureblob: implement --azureblob-delete-snapshots
This flag controls what happens when we try to delete a blob with a
snapshot. The UI follows the azcopy tool.

See: https://forum.rclone.org/t/how-to-delete-undeleted-blobs-on-azure/43911/
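
A hedged sketch (the container/path are placeholders, and the `include` value
is an assumption based on azcopy's UI):

```
rclone delete azureblob:container/dir --azureblob-delete-snapshots include
```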
2024-01-13 14:27:54 +00:00
Nick Craig-Wood
3df6518006 Add Nikhil Ahuja to contributors 2024-01-13 14:27:54 +00:00
Nikhil Ahuja
1045f54128 oracleobjectstorage: Support "backend restore" command - fixes #7371 2024-01-09 09:43:36 +00:00
dependabot[bot]
0563cc6314 build(deps): bump github.com/cloudflare/circl from 1.3.6 to 1.3.7
Bumps [github.com/cloudflare/circl](https://github.com/cloudflare/circl) from 1.3.6 to 1.3.7.
- [Release notes](https://github.com/cloudflare/circl/releases)
- [Commits](https://github.com/cloudflare/circl/compare/v1.3.6...v1.3.7)

---
updated-dependencies:
- dependency-name: github.com/cloudflare/circl
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-08 17:38:09 +00:00
Nick Craig-Wood
e20f2eee59 Changelog updates from Version v1.65.1 2024-01-08 11:54:02 +00:00
Vincent Murphy
41b8935a6c docs: Fix broken test_proxy.py link again
The previous fix fixed the auto generated output - this fixes the source.
2024-01-08 11:54:02 +00:00
Nick Craig-Wood
fbdf71ab64 operations: fix files moved by rclone move not being counted as transfers
Before this change we were only counting moves as checks. This means
that when using `rclone move` the `Transfers` stat did not count up
like it should do.

This changes introduces a new primitive operations.MoveTransfers which
counts moves as Transfers for use where that is appropriate, such as
rclone move/moveto. Otherwise moves are counted as checks and their
bytes are not accounted.

See: #7183
See: https://forum.rclone.org/t/stats-one-line-date-broken-in-1-64-0-and-later/43263/
2024-01-07 11:26:09 +00:00
Nick Craig-Wood
d392f9fcd8 accounting: fix stats to show server side transfers
Before this fix we were not counting transferred files nor transferred
bytes for server side moves/copies.

If the server side move/copy has been marked as a transfer and not a
checker then this accounts transferred files and transferred bytes.

The transferred bytes are not accounted to the network though so this
should not affect the network stats.
2024-01-07 11:26:09 +00:00
Nick Craig-Wood
dedad9f071 onedrive: fix "unauthenticated: Unauthenticated" errors when uploading
Before this change, sometimes when uploading files the onedrive
servers return 401 Unauthorized errors with the text "unauthenticated:
Unauthenticated".

This is because we are sending the Authorization header with the
request and it says in the docs that we shouldn't.

https://learn.microsoft.com/en-us/graph/api/driveitem-createuploadsession?view=graph-rest-1.0#remarks

> If you include the Authorization header when issuing the PUT call,
> it may result in an HTTP 401 Unauthorized response. Only send the
> Authorization header and bearer token when issuing the POST during
> the first step. Don't include it when you issue the PUT call.

This patch fixes the problem by doing the PUT request with an
unauthenticated client.

Fixes #7405
See: https://forum.rclone.org/t/onedrive-unauthenticated-when-trying-to-copy-sync-but-can-use-lsd/41149/
See: https://forum.rclone.org/t/onedrive-unauthenticated-issue/43792/
2024-01-07 11:14:08 +00:00
Nick Craig-Wood
1f6271fa15 s3: copy parts in parallel when doing chunked server side copy
Before this change rclone copied each chunk serially.

After this change it does --s3-upload-concurrency at once.

See: https://forum.rclone.org/t/transfer-big-files-50gb-from-s3-bucket-to-another-s3-bucket-doesnt-starts/43209
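
For example (bucket and object names are placeholders), a server-side copy
with higher concurrency:

```
rclone copyto s3:bucket1/big.bin s3:bucket2/big.bin --s3-upload-concurrency 8
```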
2024-01-05 15:54:52 +00:00
Nick Craig-Wood
c16c22d6e1 s3: fix crash if no UploadId in multipart upload
Before this change if the S3 API returned a multipart upload with no
UploadId then rclone would crash.

This detects the problem and attempts to retry the multipart upload
creation.

See: https://forum.rclone.org/t/panic-runtime-error-invalid-memory-address-or-nil-pointer-dereference/43425
2024-01-05 15:52:52 +00:00
Nick Craig-Wood
486a10bec5 serve s3: fix listing oddities
Before this change, listing a subdirectory gave errors like this:

    Entry doesn't belong in directory "" (contains subdir) - ignoring

It also did full recursive listings when it didn't need to.

This was caused by the code using the underlying Fs to do recursive
listings on bucket based backends.

Using both the VFS and the underlying Fs is a mistake so this patch
removes the code which uses the underlying Fs and just uses the VFS.

Fixes #7500
2024-01-05 15:51:13 +00:00
Nick Craig-Wood
5fa13e3e31 protondrive: fix CVE-2023-45286 / GHSA-xwh9-gc39-5298
A race condition in go-resty can result in HTTP request body
disclosure across requests.

See: https://pkg.go.dev/vuln/GO-2023-2328
Fixes: #7491
2024-01-04 17:14:53 +00:00
Nick Craig-Wood
0e746f25a3 amazonclouddrive: remove Amazon Drive backend code and docs #7539
The Amazon Drive backend is closed from 2023-12-31.

See: https://www.amazon.com/b?ie=UTF8&node=23943055011
2024-01-04 17:05:54 +00:00
Nick Craig-Wood
578b9df6ea build: fix docker build on arm/v6
Unexpectedly the team which runs the Go docker images have removed the
arm/v6 image which means that the rclone docker images no longer
build.

One of the recommended fixes is what we've done here - switch to the
alpine builder. This has the advantage that it actually builds arm/v6
architecture unlike the previous builder which build arm/v5.

See: https://github.com/docker-library/golang/issues/502
2024-01-03 17:43:23 +00:00
Nick Craig-Wood
208e49ce4b fs: update use of math/rand to modern practice 2024-01-03 16:14:40 +00:00
Nick Craig-Wood
7aa066cff8 Add Paul Stern to contributors 2024-01-03 16:14:40 +00:00
dependabot[bot]
64df4cf2db
build(deps): bump golang.org/x/crypto to fix ssh terrapin CVE-2023-48795
Fixes SSH terrapin attack: see https://terrapin-attack.com.

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.14.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.14.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-03 15:47:34 +00:00
rkonfj
451d7badf7 oauthutil: avoid panic when *token and *ts.token are the same
The field `raw` of `oauth2.Token` may be an uncomparable type (often
`map[string]interface{}`), causing the `*token != *ts.token` expression to
panic (comparing uncomparable type ...).

The semantics of comparing whether two tokens are the same can be achieved by
comparing accessToken, refreshToken and expiry instead, avoiding the panic.
2024-01-03 15:15:14 +00:00
WeidiDeng
d977fa25fa
ftp: fix multi-thread copy
Before this change multi-thread copies using the FTP backend used to error with

    551 Error reading file

This was caused by a spurious error being reported which this code silences.

Fixes #7532
See #3942
2024-01-03 12:21:08 +00:00
Paul Stern
bb679a9def backend: add description field for all backends
Fixes #4391
2024-01-03 10:57:59 +00:00
Nick Craig-Wood
a3d19942bd googlephotos: fix nil pointer exception when batch failed
This was a simple error check that was missing. Interestingly the
errcheck linter did not spot this.

See: https://forum.rclone.org/t/invalid-memory-address-or-nil-pointer-dereference-error-when-copy-to-google-photos/43634/
2024-01-03 10:57:59 +00:00
Nick Craig-Wood
394195cfdf Add rarspace01 to contributors 2024-01-03 10:57:59 +00:00
nielash
3ca766b2f1 hasher: fix invalid memory address error when MaxAge == 0
When f.opt.MaxAge == 0, f.db is never set, however several methods later assume
it is set and attempt to access it, causing an invalid memory address error.
This change fixes the issue in a few spots (there may still be others I haven't
yet encountered.)
2024-01-02 18:14:01 +00:00
albertony
3bf8c877c3 docs/librclone: the newer and recommended ucrt64 subsystem of msys2 can now be used for building on windows 2024-01-01 21:56:45 +01:00
rarspace01
fba2d4c4a7
docs: fix broken link in serve webdav 2023-12-30 18:10:27 +01:00
Oksana
8503282a5a
azure-files: fix storage base url
Documented in https://learn.microsoft.com/en-us/azure/storage/common/storage-account-overview
2023-12-18 14:15:13 +00:00
Manoj Ghosh
743ea6ac26
oracle object storage: fix object storage endpoint for custom endpoints 2023-12-15 10:13:35 +00:00
Nick Craig-Wood
c69eb84573 chunker,compress,crypt,hasher,union: fix rclone move a file over itself deleting the file
This fixes the Root() returned by the backend when it has returned
fs.ErrorIsFile.

Before this change it returned a root which included the file path.

Because Root() was wrong this caused the detection of the file being
moved over itself check to fail.

This adds an integration test to check it for all backends.

See: https://forum.rclone.org/t/rclone-move-chunker-dir-file-chunker-dir-deletes-all-file-chunks/43333/
2023-12-10 22:29:57 +00:00
Nick Craig-Wood
f98e672f37 selfupdate: fix crash in tests if beta not found 2023-12-10 22:29:57 +00:00
Nick Craig-Wood
242fe96b18 Add keongalvin to contributors 2023-12-10 22:29:57 +00:00
rkonfj
3f159bac16 backend: fs implements the Shutdowner interface
Since `tokenRenewer` adds a Shutdown method, we should call it to
clean up resources.

changed backends: onedrive, box, pcloud, amazonclouddrive, hidrive,
jottacloud, sharefile, premiumizeme

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-12-09 11:44:50 +00:00
rkonfj
6c58e9976c oauthutil: add Shutdown method
Before this change, calling the `oauthutil.NewRenew` func may
cause goroutine leaks.

This change adds a `Shutdown` method to allow the caller to exit
the goroutine to avoid leaks.

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-12-09 11:44:50 +00:00
keongalvin
110d07548f docs: fix broken link 2023-12-08 16:21:09 +00:00
Nick Craig-Wood
f45cee831f dropbox: fix used space on dropbox team accounts
Before this change we were not using the used space from the team
stats.

This patch uses that as the used space if available as it seems to
include the user stats in it.

See: https://forum.rclone.org/t/rclone-about-with-dropbox-reporte-size-incorrectly/43269/
2023-12-08 14:26:46 +00:00
Nick Craig-Wood
ef0f3020e4 vfs: note that --vfs-refresh runs in the background #6830 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
113b2b648c Add emyarod to contributors 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
57ab4d279e Add Anthony Metzidis to contributors 2023-12-08 14:26:46 +00:00
Nick Craig-Wood
8e21c77ead Add Eli Orzitzer to contributors 2023-12-08 14:26:46 +00:00
emyarod
4751980659 docs: update contributor email 2023-12-08 11:21:26 +00:00
Anthony Metzidis
9fe343b725 s3: S3 IPv6 support with option "use_dual_stack" (bool)
dualstack_endpoint=true enables IPv6 DNS lookup for S3 endpoints.
In s3.go, this adds Options.DualstackEndpoint to support IPv6 on S3.
2023-12-08 11:11:47 +00:00
dependabot[bot]
2f5685b405 build(deps): bump actions/setup-go from 4 to 5
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 4 to 5.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-07 16:48:50 +00:00
Eli Orzitzer
c3117d9efb Doc change: Add the CreateBucket permission requirement for AWS S3 2023-12-07 16:46:04 +00:00
Nick Craig-Wood
1ebbc74f1d nfsmount: compile for all unix oses, add --sudo and fix error/option handling
- make compile on all unix OSes - this will make the docs appear on linux and rclone.org!
- add --sudo flag for using with mount (see the example below)
- improve error reporting
- fix option handling
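
For example (remote and mountpoint are placeholders), a sketch of the new
flag:

```
rclone nfsmount remote:path /mnt/point --sudo
```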
2023-12-05 10:44:53 +00:00
Nick Craig-Wood
aee787d33e serve nfs: Mark as experimental 2023-12-05 10:44:53 +00:00
Anagh Kumar Baranwal
298c13e719 systemd: Fix detection and switch to the coreos package everywhere
rather than having 2 separate libraries

Signed-off-by: Anagh Kumar Baranwal <6824881+darthShadow@users.noreply.github.com>
2023-12-02 14:17:15 +00:00
Nick Craig-Wood
f0c774156e onedrive: fix error listing: unknown object type <nil>
This error was introduced in this commit when refactoring the list
routine.

b8591b230d onedrive: implement ListR method which gives --fast-list support

The error was caused by OneNote files not being skipped properly.
2023-12-02 10:49:15 +00:00
Nick Craig-Wood
08c460dd1a Add ben-ba to contributors 2023-12-02 10:49:15 +00:00
ben-ba
e3d0bff9ca
docs: fix typo in docs.md
- OpenChunkedWriter
+ OpenChunkWriter
2023-12-01 20:45:48 +01:00
Nick Craig-Wood
caf5dd9d5e mount: notice daemon dying much quicker
Before this change we waited until the timeout to check whether the
daemon was alive.

Now we check it every 100ms like we do the mount status.

This also fixes compiling on all platforms which was broken by the
previous change

9bfbf2a4a mount: fix macOS not noticing errors with --daemon

See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2023-12-01 09:36:05 +00:00
Nick Craig-Wood
97d7945cef Add halms to contributors 2023-12-01 09:36:05 +00:00
Manoj Ghosh
9061e81850 multipart copy: create bucket if it doesn't exist. 2023-11-29 15:47:56 +00:00
halms
58339845f4 smb: fix shares not listed by updating go-smb2
Before this change the IP address of the server was used in the SMB
connect request (see CloudSoda/go-smb2#18).
The updated library now can pass the hostname instead.

The update requires a small change in the dial method call.

Fixes rclone#6672
2023-11-29 15:39:27 +00:00
Nick Craig-Wood
4d4f3de5a5 s3: add --s3-version-deleted to show delete markers in listings when using versions.
See: https://forum.rclone.org/t/s3-object-deletion-times/42781
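
For example (the bucket name is a placeholder), assuming the existing
--s3-versions flag to include versions in the listing:

```
rclone ls s3:bucket --s3-versions --s3-version-deleted
```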
2023-11-29 09:44:40 +00:00
Nick Craig-Wood
9bfbf2a4ae mount: fix macOS not noticing errors with --daemon
See: https://forum.rclone.org/t/rclone-mount-daemon-exits-successfully-even-when-mount-fails/43146
2023-11-28 19:42:00 +00:00
Nick Craig-Wood
96f8b7c827 install.sh: fix harmless error message on install
This was caused by trying to write to a non-existent file, and
changing the order of the cleanup fixed it.

https://forum.rclone.org/t/rclone-v1-65-0-release/43100/18
2023-11-28 19:10:04 +00:00
Nick Craig-Wood
85f142a206 Start v1.66.0-DEV development 2023-11-26 17:14:38 +00:00
Nick Craig-Wood
82b963e372 Version v1.65.0 2023-11-26 16:07:39 +00:00
Nick Craig-Wood
74d5477fad onedrive: add --onedrive-delta flag to enable ListR
Before this change ListR was unconditionally enabled on onedrive.

This caused performance problems for some uses, so now the
--onedrive-delta flag has to be supplied.

Fixes #7362
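
For example (the remote name is a placeholder), a recursive listing that can
now use ListR:

```
rclone lsf -R onedrive: --onedrive-delta --fast-list
```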
2023-11-26 16:06:49 +00:00
Nick Craig-Wood
b5857f0bf8 smb: fix modtime of multithread uploads by setting PartialUploads
Before this change PartialUploads was not set. This is clearly wrong
since incoming files are visible on the smb server.

Setting PartialUploads fixes the multithread upload modtime problem as
it uses the PartialUploads flag as an indication that it needs to set
the modtime explicitly.

This problem was detected by the new TestMultithreadCopy integration
tests

Fixes #7411
2023-11-25 18:46:48 +00:00
Nick Craig-Wood
edb5ccdd0b smb: fix about size wrong by switching to github.com/cloudsoda/go-smb2/ fork
Before this change smb drives sometimes showed a fraction of the
correct size using `rclone about`.

This fixes the problem by switching the upstream library from
github.com/hirochachacha/go-smb2 to github.com/cloudsoda/go-smb2 which
has a fix for the problem.

The new library passes the integration tests.

Fixes #6733
2023-11-25 18:45:41 +00:00
Nick Craig-Wood
0244caf13a serve s3: fix overwrite of files with 0 length file
Before this change overwriting an existing file with a 0 length file
didn't update the file size.

This change corrects the issue and makes sure the file is truncated
properly.

This was discovered by the full integration tests.
2023-11-24 20:47:06 +00:00
Nick Craig-Wood
aaa897337d serve s3: fix error handling for listing non-existent prefix - fixes #7455
Before this change serve s3 would return NoSuchKey errors when a non
existent prefix was listed.

This change fixes it to return an empty list like AWS does.

This was discovered by the full integration tests.
2023-11-24 20:47:06 +00:00
Nick Craig-Wood
e7c002adef test_all: make integration test for serve s3 2023-11-24 20:47:06 +00:00
Nick Craig-Wood
9e62a74a23 Add Abhinav Dhiman to contributors 2023-11-24 20:47:06 +00:00
Nick Craig-Wood
a10abf9934 Add 你知道未来吗 to contributors 2023-11-24 20:47:06 +00:00
Abhinav Dhiman
36eb3cd660
imagekit: Added ImageKit backend 2023-11-24 18:18:01 +00:00
你知道未来吗
fd2322cb41
fs/fshttp: fix --contimeout being ignored
The following command will block for 60s (default) when the network is slow or unavailable:

```
rclone  --contimeout 10s --low-level-retries 0 lsd dropbox:
```

This change will make it timeout after the expected 10s.

Signed-off-by: rkonfj <rkonfj@gmail.com>
2023-11-24 17:53:33 +00:00
Nick Craig-Wood
4eed3ae99a s3: ensure we can set upload cutoff that we use for Rclone provider
This is a workaround to make the new multipart upload integration
tests pass.
2023-11-24 16:32:06 +00:00
Nick Craig-Wood
d8855b21eb serve s3: document multipart copy doesn't work #7454
This puts in a workaround for the tests also
2023-11-24 15:49:33 +00:00
Nick Craig-Wood
8f47b6746d b2: fix streaming chunked files an exact multiple of chunk size
Before this change, streaming files an exact multiple of the chunk
size would cause rclone to attempt to stream a 0 sized chunk which was
rejected by the b2 servers.

This bug was noticed by the new integration tests for chunked streaming.
2023-11-24 14:32:01 +00:00
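A minimal sketch of the bug shape described above (illustrative Go, not rclone's actual code): when the stream length is an exact multiple of the chunk size, the final read returns 0 bytes and must not be sent as a chunk.

```
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	const chunkSize = 4
	src := bytes.NewReader([]byte("12345678")) // exactly 2 chunks
	buf := make([]byte, chunkSize)
	part := 0
	for {
		n, err := io.ReadFull(src, buf)
		if n == 0 {
			break // nothing was read: don't upload a 0-sized chunk
		}
		part++
		fmt.Printf("upload part %d: %q\n", part, buf[:n])
		if err != nil { // io.EOF or io.ErrUnexpectedEOF: stream is done
			break
		}
	}
}
```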
Nick Craig-Wood
cc2a4c2e20 fstest: factor chunked streaming tests from b2 and use in all backends 2023-11-24 12:58:40 +00:00
Nick Craig-Wood
fabeb8e44e b2: fix server side chunked copy when file size was exactly --b2-copy-cutoff
Before this change the b2 servers would complain as this was only a
single part transfer.

This was noticed by the new integration tests for server side chunked copy.
2023-11-24 12:37:11 +00:00
Nick Craig-Wood
c27977d4d5 fstest: factor chunked copy tests from b2 and use them in s3 and oos 2023-11-24 12:37:11 +00:00
Nick Craig-Wood
d5d28a7513 operations: fix overwrite of destination when multi-thread transfer fails
Before this change, if a multithread upload failed (let's say the
source became unavailable) rclone would finalise the file first before
aborting the transfer.

This caused the partial file to be written which would overwrite any
existing files.

This was fixed by making sure we Abort the transfer before Close-ing
it.

This updates the docs to encourage calling of Abort before Close and
updates writerAtChunkWriter to make sure that works properly.

This also reworks the tests to detect this and to make sure we upload
and download to each multi-thread capable backend (we were only
downloading before which isn't a full test).

Fixes #7071
2023-11-24 11:19:58 +00:00
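A minimal sketch of the Abort-before-Close pattern this commit describes, using hypothetical names (ChunkWriter, finishTransfer), not rclone's actual API:

```
package main

import (
	"context"
	"errors"
	"fmt"
)

// ChunkWriter is a simplified stand-in for a chunked-upload writer.
type ChunkWriter interface {
	Close(ctx context.Context) error // finalise the upload
	Abort(ctx context.Context) error // discard the partial upload
}

// finishTransfer aborts before ever calling Close when the transfer
// failed, so a partial file is never finalised over the destination.
func finishTransfer(ctx context.Context, w ChunkWriter, transferErr error) error {
	if transferErr != nil {
		if abortErr := w.Abort(ctx); abortErr != nil {
			return errors.Join(transferErr, abortErr)
		}
		return transferErr
	}
	return w.Close(ctx)
}

type fakeWriter struct{}

func (fakeWriter) Close(context.Context) error { fmt.Println("finalised"); return nil }
func (fakeWriter) Abort(context.Context) error { fmt.Println("aborted"); return nil }

func main() {
	ctx := context.Background()
	fmt.Println(finishTransfer(ctx, fakeWriter{}, errors.New("source went away")))
}
```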
Nick Craig-Wood
94ccc95515 random: stop using deprecated rand.Seed in go1.20 and later 2023-11-24 11:19:58 +00:00
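The replacement pattern for the deprecated global seed is a private generator (illustrative, not necessarily rclone's exact code):

```
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// rand.Seed is deprecated since go1.20; use a local source instead.
	r := rand.New(rand.NewSource(time.Now().UnixNano()))
	fmt.Println(r.Intn(100))
}
```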
Nick Craig-Wood
5d5473c8a5 random: speed up String function for generating larger blocks 2023-11-24 11:19:58 +00:00
Nick Craig-Wood
251a8e3c39 hash: allow runtime configuration of supported hashes for testing 2023-11-24 11:19:58 +00:00
Nick Craig-Wood
a259226eb2 Add Alen Šiljak to contributors 2023-11-24 11:19:58 +00:00
Alen Šiljak
5fba502516 http: enable methods used with WebDAV - fixes #7444
Without this, requests like PROPFIND, issued from a browser, fail.
2023-11-23 16:49:03 +00:00
Nick Craig-Wood
ba11040d6b s3: detect looping when using gcs and versions
Apparently gcs doesn't return an S3 compatible result when using
versions.

In particular it doesn't return a NextKeyMarker - this means rclone
loops and fetches the same page over and over again.

This patch detects the problem and stops the infinite retries but it
doesn't fix the underlying problem.

See: https://forum.rclone.org/t/list-s3-versions-files-looping-bug/42974
See: https://issuetracker.google.com/u/0/issues/312292516
2023-11-23 09:50:28 +00:00
Nick Craig-Wood
668711e432 dropbox: fix missing encoding for rclone purge again
This commit fixed the problem but made the integration tests fail.

33376bf399 dropbox: fix missing encoding for rclone purge

This fixes the problem properly by making sure we send the encoded or
non encoded root to the right places.
2023-11-21 12:23:28 +00:00
Nick Craig-Wood
a71d181cb0 test_all: limit the Zoho tests to just the backend
The free account has a very ungenerous limit of 1000 API calls per day
and the full integration test suite breaches that, so limit the
integration tests to just the backend.
2023-11-21 12:06:31 +00:00
Nick Craig-Wood
cab42107f7 test_all: remove uptobox from integration tests
The uptobox service hasn't been running since 20 September 2023.

This removes it from the integration tests to save noise.
2023-11-21 11:49:39 +00:00
Nick Craig-Wood
1f9a79ef09 operations: use less memory when doing multithread uploads
For uploads coming from disk, going to disk, or going to a backend
which doesn't need to seek except for retries, this doesn't buffer
the input.

This dramatically reduces rclone's memory usage.

Fixes #7350
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
c0fb9ebfce operations: make Open() return an io.ReadSeekCloser #7350
As part of reducing memory usage in rclone, we need to have a raw
handle to an object we can seek with.
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
e8fcde8de1 fs: add ChunkWriterDoesntSeek feature flag and set it for b2 2023-11-20 18:07:05 +00:00
Nick Craig-Wood
72dfdd97d8 mockobject: fix SetUnknownSize method to obey parameter passed in 2023-11-20 18:07:05 +00:00
Nick Craig-Wood
bb88b8499b box: fix performance problem reading metadata for single files
Before this change the backend used to list the directory to find the
metadata for a single file. For lots of files in a directory this
caused a serious performance problem.

This change uses the preflight check to check for a files existence
and find its ID.

See: https://forum.rclone.org/t/psa-box-com-has-serious-performance-issues-in-directories-with-thousands-of-files/41128/10
See: https://forum.box.com/t/is-there-an-api-to-find-a-file-by-leaf-name-given-a-folder-id/997/
See: https://developer.box.com/guides/uploads/check/
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
4ac5cb07ca gcs: fix 400 Bad request errors when using multi-thread copy
Before this change, on every Open, we added the userProject parameter
to the URL in the object.

This meant it grew and grew until Google returned Error 400 (Bad
Request) errors when the URL became too long.

This fixes the problem by adding the userProject parameter once.

See: https://forum.rclone.org/t/endlessly-repeating-userproject-parameter-in-get-to-google-storage-context-canceled-got-http-response-code-400/42652
2023-11-20 18:07:05 +00:00
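A sketch of the idempotent fix using the standard library (the URL and project name are placeholders):

```
package main

import (
	"fmt"
	"net/url"
)

func main() {
	u, _ := url.Parse("https://storage.googleapis.com/bucket/object?alt=media")
	q := u.Query()
	// Set replaces any existing value, so repeated calls stay idempotent;
	// appending userProject=... on every Open is what grew the URL.
	q.Set("userProject", "my-project")
	u.RawQuery = q.Encode()
	fmt.Println(u)
}
```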
Nick Craig-Wood
4a3e9bbabf http: implement set backend command to update running backend
See: https://forum.rclone.org/t/updating-the-url-of-http-remote-not-applied-on-mounts/42763
2023-11-20 18:07:05 +00:00
Nick Craig-Wood
33376bf399 dropbox: fix missing encoding for rclone purge
This was causing directories with encodable characters in not to be
found on purge.

See: https://forum.rclone.org/t/purge-command-does-not-work-on-directories-with-files/42793
2023-11-20 18:07:05 +00:00
asdffdsazqqq
94b7c49196 Update Docs to show SMB remote supports modtime.md 2023-11-20 17:50:28 +00:00
albertony
a7faf05393 docs: cleanup backend hashes sections 2023-11-20 17:43:57 +00:00
albertony
98a96596df docs: replace mod-time with modtime 2023-11-20 17:43:57 +00:00
Nick Craig-Wood
88bd80c1fa march: Fix excessive parallelism when using --no-traverse
When using `--no-traverse` the march routines call NewObject on each
potential object in the destination.

The concurrency limiter was accidentally arranged so that there were
`--checkers` * `--checkers` NewObject calls going on at once.

This became obvious when using the sftp backend which used too many
connections.

Fixes #5824
2023-11-20 17:36:31 +00:00
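A sketch of the intended shape: one limiter shared by all workers caps the NewObject calls at `--checkers`, whereas the bug effectively gave each of the `--checkers` workers its own batch of `--checkers` calls:

```
package main

import (
	"fmt"
	"sync"
)

func main() {
	const checkers = 4
	sem := make(chan struct{}, checkers) // single shared limiter
	var wg sync.WaitGroup
	for i := 0; i < 16; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			sem <- struct{}{}        // acquire
			defer func() { <-sem }() // release
			fmt.Println("NewObject call", i) // at most `checkers` at once
		}(i)
	}
	wg.Wait()
}
```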
Nick Craig-Wood
c6755aa768 Add Mina Galić to contributors 2023-11-20 17:36:31 +00:00
Mina Galić
01be5c75be Makefile: use POSIX compatible install arguments
install -t doesn't exist on BSD.
Flip the arguments, since we only have one.
2023-11-20 15:01:26 +00:00
Jacob Hands
20bd17f107 install.sh: Clean up temp files in install script 2023-11-20 15:00:08 +00:00
Nick Craig-Wood
64ec5709fe drive: fix integration tests by enabling metadata support from the context
Before this change, the drive backend only used metadata if it was
created with Metadata enabled.

This patch changes it so the Metadata support is enabled dynamically
if it is set in the context.

This fixes the metadata tests in the integration tests which have been
changed to make sure Metadata is enabled.
2023-11-19 12:48:27 +00:00
Nick Craig-Wood
1ea8678be2 fstests: make sure Metadata is enabled in the context for metadata tests 2023-11-19 12:48:27 +00:00
Nick Craig-Wood
8341de05c6 Refresh CONTRIBUTING.md
- add dos and don'ts section to writing a new backend
- bring markdown up to modern style
2023-11-19 12:48:27 +00:00
Nick Craig-Wood
47ca0c326e fs: implement --metadata-mapper to transform metadata with a user supplied program 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
54196f34e3 drive: fix error updating created time metadata on existing object
Google drive doesn't allow the btime (created time) metadata to be
updated when updating an existing object.

This change skips btime metadata if we are updating an existing
object but allows it otherwise.
2023-11-18 17:49:35 +00:00
Nick Craig-Wood
9fdf3d548a drive: add read/write metadata support
- fetch metadata with listings and fetch permissions in parallel
- only write permissions out if they are not inherited.
- make setting labels, owner and permissions work controlled by flags
    - `--drive-metadata-labels`, `--drive-metadata-owner`, `--drive-metadata-permissions`
2023-11-18 17:49:35 +00:00
Nick Craig-Wood
10774d297a Add moongdal to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
bf9053705d Add viktor to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
0bd059ec55 Add karan to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
59d363b3c1 Add Oksana Zhykina to contributors 2023-11-18 17:49:35 +00:00
Nick Craig-Wood
94a5de58c8 linkbox: pre-merge fixes
- convert to directoryCache - makes backend much more efficient
- don't force --low-level-retries to 2
- don't wrap paced calls in pacer
- fix shouldRetry
- fix file list searching mechanism
2023-11-18 17:14:45 +00:00
viktor
a466ababd0 backend: add Linkbox backend
Add backend for linkbox.io with read and write capabilities

fixes #6960 #6629
2023-11-18 17:14:45 +00:00
Nick Craig-Wood
168d577297 vfs: error out early if can't upload 0 length file
Before this change if a backend can't upload 0 length files and
`--vfs-cache-mode writes` was in use then the writeback logic would
try to upload the 0 length file forever.

This change causes it to exit on the first failure to upload.
2023-11-18 17:14:45 +00:00
Nick Craig-Wood
ddaf01ece9 azurefiles: finish docs and implementation and add optional interfaces
- use rclone's http Transport
- fix handling of 0 length files
- combine into one file and remove unneeded abstraction
- make `chunk_size` and `upload_concurrency` settable
- make auth the same as azureblob
- set the Features correctly
- implement `--azurefiles-max-stream-size`
- remove arbitrary sleep on Mkdir
- implement `--header-upload`
- implement read and write MimeType for objects
- implement optional methods
    - About
    - Copy
    - DirMove
    - Move
    - OpenWriterAt
    - PutStream
- finish documentation
- disable build on plan9 and js

Fixes #365
Fixes #7378
2023-11-18 16:48:23 +00:00
karan
b5301e03a6 Implement Azure Files backend
Co-authored-by: moongdal <moongdal@tutanota.com>
2023-11-18 16:42:13 +00:00
Dimitri Papadopoulos
e9763552f7 fs: fix a typo in a comment 2023-11-16 17:15:00 +00:00
Oksana Zhykina
6b60e09ff2 quatrix: overwrite files on conflict during server-side move 2023-11-16 17:14:00 +00:00
Oksana Zhykina
41a52f50df quatrix: add partial upload support 2023-11-16 17:14:00 +00:00
Nick Craig-Wood
93f35c915a serve s3: pre-merge tweaks
- Changes
    - Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace
    - Enable `Content-MD5` integrity checks
    - Remove locking after code audit
- Documentation
    - Factor out documentation into separate file
    - Add Quickstart to docs
    - Add Bugs section to docs
    - Add experimental tag to docs
    - Add rclone provider to s3 backend docs
- Fixes
    - Correct quirks in s3 backend
    - Change fmt.Printlns into fs.Logs
    - Make metadata storage per backend not global
    - Log on startup if anonymous access is enabled
- Coding style fixes
    - rename fs to vfs to save confusion with the rest of rclone code
    - rename db to b for *s3Backend

Fixes #7062
2023-11-16 16:59:56 +00:00
Nick Craig-Wood
a2c4f07a57 Add Saw-jan to contributors 2023-11-16 16:59:56 +00:00
Saw-jan
d3dcc61154 serve s3: fixes before merge
- add context to log and fallthrough to error log level
- test: use rclone random lib to generate random strings
- calculate hash from vfs cache if file is uploading
- add server started log with server url
- remove md5 hasher
2023-11-16 16:59:56 +00:00
Nick Craig-Wood
34ef5147aa Add Artur Neumann to contributors 2023-11-16 16:59:56 +00:00
Artur Neumann
aa29742be2 serve s3: fix file name encoding using s3 serve with mc client
Using the mc (minio) client, file encodings were wrong;
see Mikubill/gofakes3#2 for details.
2023-11-16 16:59:56 +00:00
Nick Craig-Wood
ef366b47f1 Add Mikubill to contributors 2023-11-16 16:59:55 +00:00
Mikubill
23abac2a59 serve s3: let rclone act as an S3 compatible server 2023-11-16 16:59:55 +00:00
Nick Craig-Wood
d3ba32c43e s3: add --s3-disable-multipart-uploads flag 2023-11-16 16:59:55 +00:00
Nick Craig-Wood
cdf5a97bb6 bin/update_authors.py: add authors from Co-authored-by: lines too 2023-11-16 16:59:55 +00:00
albertony
e1b0417c28 size: don't show duplicate object count when less than 1k 2023-11-14 16:44:12 +00:00
Nick Craig-Wood
acf1e2df84 lib/file: fix MkdirAll after go1.21.4 stdlib update
In this security-related issue the go1.21.4 stdlib changed the parsing
of volume names on Windows.

https://github.com/golang/go/issues/63713

This had the consequences of breaking the MkdirAll tests which were
looking for specific error messages which changed and using invalid
paths.

In particular under go1.21.3:

    filepath.VolumeName(`\\?\C:`) == `\\?\C:`

But under go1.21.4 it is:

    filepath.VolumeName(`\\?\C:`) == `\\?`

The path `\\?\C:` isn't actually a valid Windows path. I reported this
as an FYI bug upstream - I'm not expecting it to be fixed.

See: https://github.com/golang/go/issues/64101
2023-11-14 09:47:46 +00:00
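The behaviour change is easy to reproduce (on Windows; on other platforms VolumeName always returns an empty string):

```
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Prints `\\?\C:` under go1.21.3 and `\\?` under go1.21.4 and later.
	fmt.Println(filepath.VolumeName(`\\?\C:`))
}
```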
Nick Craig-Wood
831d1df67f docs: factor large docs into separate .md files to make them easier to maintain.
We then use the go embed command to embed them back into the binary.
2023-11-13 16:27:09 +00:00
Nick Craig-Wood
e67157cf46 Add Tayo-pasedaRJ to contributors 2023-11-13 16:27:09 +00:00
Nick Craig-Wood
ac012618db Add Adithya Kumar to contributors 2023-11-13 16:27:09 +00:00
Nick Craig-Wood
7f09d9c2a0 Add wuxingzhong to contributors 2023-11-13 16:27:09 +00:00
Tayo-pasedaRJ
0548e61910
hdfs: added support for list of namenodes in hdfs remote config
Users can now input a comma separated list of namenodes when writing
config for hdfs remotes.

This is required when you have multiple namenodes in your hdfs cluster
and cannot be certain which namenodes will be in 'standby' or 'active'
states.

This was available before but wasn't documented and didn't use the
correct rclone interfaces.
2023-11-13 15:55:52 +00:00
Adithya Kumar
ad83ff769b
webdav: added an rclone vendor to work with rclone serve webdav
Fixes #7160
2023-11-05 12:37:25 +00:00
albertony
ca14b00b34 docs: show hashsum arguments as optional in usage string 2023-11-03 23:31:00 +01:00
albertony
52d444f4a9 docs: document how to build with version info and icon resources on windows 2023-11-01 12:44:04 +01:00
albertony
4506f35f2e build: refactor version info and icon resource handling on windows
This makes it easier to add resources with any build method, and also when
building librclone.dll.

Goversioninfo is now used as a library, instead of running it as a tool.
2023-11-01 12:44:04 +01:00
wuxingzhong
4ab57eb90b
serve dlna: fix crash on graceful exit
Before this change, closing an uninitialised chan would cause a crash.
2023-10-31 16:44:25 +00:00
Nick Craig-Wood
23ab6fa3a0 operations: fix server side copies on partial upload backends after refactor
After the copy refactor:

179f978f75 operations: refactor Copy into methods on a temporary object

There was some confusion in the code about server side copies - should
they or shouldn't they use partials?

This manifested in unit test failures for remotes which supported
server side Copy and PartialUploads. This combination is rare and only
exists in the sftp backend with the --sftp-copy-is-hardlink flag.

This fix makes the choice that backends which set PartialUploads
always use partials even for server side copies.
2023-10-30 16:50:19 +00:00
Nick Craig-Wood
af8ba18580 mount: disable mount for freebsd
The upstream library rclone uses for rclone mount no longer supports
freebsd. Not only is it broken, but it no longer compiles.

This patch disables rclone mount for freebsd.

However all is not lost for freebsd users - building rclone with the
`cmount` tag, i.e. `go install -tags cmount`, will install a working
`rclone mount` command which uses cgofuse and the libfuse C library
directly.

Note that the binaries from rclone.org will not have mount support as
we don't have a freebsd build machine in CI and it is very hard to
cross compile cmount.

See: https://github.com/bazil/fuse/issues/280
Fixes #5843
2023-10-29 15:46:41 +00:00
Nick Craig-Wood
0b90dd23c1 build: update all dependencies 2023-10-29 15:46:38 +00:00
Nick Craig-Wood
e64be7652a operations: fix invalid UTF-8 when truncating file names when not using --inplace
Before this change, when not using --inplace, rclone could generate
invalid file names when truncating file names to fit within the
character size limits.

This fixes it by taking care to truncate on UTF-8 character
boundaries.

See: https://forum.rclone.org/t/ssh-fx-failure-when-copying-file-with-nonstandard-characters-to-sftp-remote-with-ntfs-drive/42560/
2023-10-29 14:04:37 +00:00
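A minimal sketch of truncating on rune boundaries (illustrative, not rclone's actual helper):

```
package main

import (
	"fmt"
	"unicode/utf8"
)

// truncateUTF8 shortens s to at most max bytes without cutting a rune
// in half, which is what produced the invalid file names.
func truncateUTF8(s string, max int) string {
	if len(s) <= max {
		return s
	}
	for max > 0 && !utf8.RuneStart(s[max]) {
		max--
	}
	return s[:max]
}

func main() {
	fmt.Println(truncateUTF8("héllo", 2)) // "h", not "h" plus half of "é"
}
```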
Nick Craig-Wood
179f978f75 operations: refactor Copy into methods on a temporary object
operations.Copy had become very unwieldy. This refactors it into
methods on a copy object which is created for the duration of the
copy. This makes it much easier to read and reason about.
2023-10-29 14:04:37 +00:00
Nick Craig-Wood
17b7ee1f3a operations: factor Copy into its own file 2023-10-29 14:04:37 +00:00
dependabot[bot]
5c73363b16 build(deps): bump google.golang.org/grpc from 1.56.2 to 1.56.3
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.56.2 to 1.56.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.56.2...v1.56.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-28 18:31:35 +01:00
Nick Craig-Wood
bf21db0ac4 b2: fix multi-thread upload with copyto going to wrong name
See: https://forum.rclone.org/t/errors-and-failure-with-big-file-upload-to-b2/42522/
2023-10-28 15:18:00 +01:00
Nick Craig-Wood
0180301b3f fstests: add integration test for OpenChunkWriter uploading to the wrong name 2023-10-28 15:18:00 +01:00
Nick Craig-Wood
adfb1f7c7d b2: fix error handler to remove confusing DEBUG messages
On a 404 error, b2 returns an empty body which, before this change,
caused the error handler to try to parse an empty string and give the
following DEBUG message:

    Couldn't decode error response: EOF

This is confusing as it is expected in normal operations and isn't an
error.

This change reads the body of an error response first then tries to
decode it only if it isn't empty, which avoids the confusing DEBUG
message.

This also upgrades failure to read the body or failure to decode the
JSON to ERROR messages as now we are certain that we should have
something to read and decode.
2023-10-28 15:18:00 +01:00
Nick Craig-Wood
6092fe2aaa s3: emit a debug message if anonymous credentials are in use
This can indicate the user is expecting `env_auth=true` to be the
default so we say that in the debug message.

See: https://forum.rclone.org/t/rclone-with-amazon-s3-access-point/42411
2023-10-27 16:00:47 +01:00
Nick Craig-Wood
53868ef4e1 ncdu: fix crash when re-entering changed directory after rescan
ncdu stores the position that it was in for each directory. However
doing a rescan can cause those positions to be out of range if the
number of files decreased in a directory. When re-entering the
directory, this causes an index out of range error.

This fixes the problem by detecting the index out of range and
flushing the saved directory position.

See: https://forum.rclone.org/t/slice-bounds-out-of-range-during-ncdu/42492/
2023-10-24 14:26:57 +01:00
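The shape of the fix (illustrative names, not ncdu's actual code): detect the stale position and reset it instead of indexing the shrunken list.

```
package main

import "fmt"

func main() {
	entries := []string{"a.txt"} // directory shrank after a rescan
	savedPos := 3                // position saved before the rescan
	if savedPos >= len(entries) {
		savedPos = 0 // flush the stale saved position
	}
	fmt.Println(entries[savedPos]) // no index-out-of-range panic
}
```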
Nick Craig-Wood
e1ad467009 fs: fix docs for Bits 2023-10-23 15:43:55 +01:00
Nick Craig-Wood
12db7b6935 fs: add IsSet convenience method to Bits 2023-10-23 15:43:42 +01:00
Nick Craig-Wood
7434ad8618 docs: remove third party logos from source tree 2023-10-23 15:35:25 +01:00
Nick Craig-Wood
e4ab59bcc7 docs: update Storj image and link 2023-10-23 15:35:25 +01:00
Nick Craig-Wood
9119c6c76f Add alfish2000 to contributors 2023-10-23 15:35:25 +01:00
alfish2000
9d4d294793 union: fix documentation 2023-10-21 10:37:43 +01:00
Nick Craig-Wood
750ed556a5 build: fix new lint errors with golangci-lint v1.55.0 2023-10-20 18:53:30 +01:00
Nick Craig-Wood
5b0d3d060f selfupdate: make sure we don't run tests if selfupdate is set 2023-10-20 18:14:27 +01:00
Nick Craig-Wood
5b0f9dc4e3 local: fix copying from Windows Volume Shadows
For some files the Windows Volume Shadow Service (VSS) advertises the
file size as X in the directory listing but returns a different number
Y on stat-ing the file. If the file is opened and read there are Y
bytes available for reading.

Existing copy tools copy Y bytes rather than X so for consistency
rclone should do the same.

This fixes the problem by stat-ing the file immediately before opening
it. This will also reduce the unnecessary occurrence of "can't copy -
source file is being updated" errors; if the file has finished
changing by the time we come to copy it then we now can copy it
successfully.

See: https://forum.rclone.org/t/consistently-getting-corrupted-on-transfer-sizes-differ-syncing-to-an-smb-share/42218/
2023-10-19 16:38:10 +01:00
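A sketch of the stat-immediately-before-open pattern (the path is a placeholder):

```
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// Stat just before opening and trust that size; the size from the
	// earlier directory listing may be stale (e.g. under VSS).
	fi, err := os.Stat("file.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	f, err := os.Open("file.txt")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	n, _ := io.Copy(io.Discard, f)
	fmt.Printf("statted %d bytes, read %d bytes\n", fi.Size(), n)
}
```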
Nick Craig-Wood
b0a87d7cf1 Changelog updates from Version 1.64.2 2023-10-19 12:34:34 +01:00
Nick Craig-Wood
37d786c82a selfupdate: fix "invalid hashsum signature" error
This was caused by a change to the upstream library
ProtonMail/go-crypto checking the flags on the keys more strictly.

However the signing key for rclone is very old and does not have those
flags. Adding those flags using `gpg --edit-key` and then the
`change-usage` subcommand (remove the signing capability, save, quit,
then re-add it, save, quit) caused the key to work.

This also adds tests for the verification and adds the selfupdate
tests into the integration test harness as they had been disabled on
CI because they rely on external sources and are sometimes unreliable.

Fixes #7373
2023-10-18 17:55:19 +01:00
Nick Craig-Wood
56fe12c479 build: add the serve docker tests to the integration tester
These had been disabled on CI for being unreliable, so test them in
the integration tests framework which will retry them.
2023-10-18 17:55:19 +01:00
Nick Craig-Wood
9197180610 build: fix docker build running out of space
This removes some unused SDKs from the build machine to free some
space up before building. It also adds some lines to show the free
space.
2023-10-18 17:55:19 +01:00
Nick Craig-Wood
f4a538371d Add Ivan Yanitra to contributors 2023-10-18 17:55:10 +01:00
Nick Craig-Wood
f2ec08cba2 Add Keigo Imai to contributors 2023-10-18 17:55:10 +01:00
Nick Craig-Wood
8f25531b7f Add Gabriel Espinoza to contributors 2023-10-18 17:55:10 +01:00
Ivan Yanitra
0ee6d0b4bf azureblob: add support for cold tier 2023-10-18 17:54:25 +01:00
Keigo Imai
4ac4597afb drive: add a note that --drive-scope accepts comma-separated list of scopes 2023-10-18 17:54:08 +01:00
Joda Stößer
143df6f6d2 docs: change authors email for SimJoSt 2023-10-18 16:31:15 +01:00
Nick Craig-Wood
8264ba987b Changelog updates from Version 1.64.1 2023-10-17 18:37:04 +01:00
Gabriel Espinoza
7a27d9a192
lib/http: export basic go strings functions
Makes the following Go strings functions available for use in custom templates: contains, hasPrefix, hasSuffix.

Added documentation for the exported funcs.
2023-10-16 19:46:19 +01:00
albertony
195ad98311 docs: update documentation for --fast-list adding info about ListR 2023-10-16 18:11:22 +02:00
Nick Craig-Wood
29baa5888f mount: fix automount not detecting drive is ready
With automount the target mount drive appears twice in /proc/self/mountinfo.

    379 27 0:70 / /mnt/rclone rw,relatime shared:433 - autofs systemd-1 rw,fd=57,...
    566 379 0:90 / /mnt/rclone rw,nosuid,nodev,relatime shared:488 - fuse.rclone remote: rw,...

Before this fix we only looked for the mount once in
/proc/self/mountinfo. It finds the automount line and since this
doesn't have fs type rclone it concludes the mount isn't ready yet.

This patch makes rclone look through all the mounts and if any of them
have fs type rclone it concludes the mount is ready.

See: https://forum.rclone.org/t/systemd-mount-works-but-automount-does-not/42287/
2023-10-16 12:13:20 +01:00
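A sketch of scanning every mountinfo entry for the mountpoint rather than stopping at the first (illustrative, not rclone's actual code):

```
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// mountReady reports whether any /proc/self/mountinfo line for the
// mountpoint has fs type fuse.rclone, rather than concluding "not
// ready" from the first match (which may be the autofs placeholder).
func mountReady(mountpoint string) bool {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		return false
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 5 || fields[4] != mountpoint { // field 5 is the mount point
			continue
		}
		for i, field := range fields {
			if field == "-" && i+1 < len(fields) && fields[i+1] == "fuse.rclone" {
				return true // the fs type follows the "-" separator
			}
		}
	}
	return false
}

func main() {
	fmt.Println(mountReady("/mnt/rclone"))
}
```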
Nick Craig-Wood
c7a2719fac sftp: implement --sftp-copy-is-hardlink to server side copy as hardlink
If the server does not support hardlinks then it falls back to normal
copy.

See: https://forum.rclone.org/t/sftp-remote-server-side-copy/41867
2023-10-16 12:08:22 +01:00
Nick Craig-Wood
c190b9b14f serve sftp: return not supported error for not supported commands
Before this change, if a hardlink command was issued, rclone would
just ignore it and not return an error.

This changes any unknown operations (including hardlink) to return an
unsupported error.
2023-10-16 12:08:22 +01:00
Nick Craig-Wood
5fa68e9ca5 b2: fix chunked streaming uploads
Streaming uploads are used by rclone rcat and rclone mount
--vfs-cache-mode off.

After the multipart chunker refactor the multipart chunked streaming
upload was accidentally mixing the first and the second parts up which
was causing corrupted uploads.

This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration tests so it
won't happen again.

Fixes #7367
2023-10-13 15:46:36 +01:00
Nick Craig-Wood
b9727cc6ab build: upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid reset
Vulnerability1: GO-2023-2102

HTTP/2 rapid reset can cause excessive work in net/http

More info: https://pkg.go.dev/vuln/GO-2023-2102
2023-10-12 17:44:16 +01:00
Nick Craig-Wood
d8d76ff647 b2: fix server side copies greater than 4GB
After the multipart chunker refactor the multipart chunked server side
copy was accidentally sending one part too many. The last part was 0
length which was rejected by b2.

This was caused by a simple off by one error in the refactoring where
we went from 1 based part number counting to 0 based part number
counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration tests so it
won't happen again.

See: https://forum.rclone.org/t/large-server-side-copy-in-b2-fails-due-to-bad-byte-range/42294
2023-10-12 11:19:56 +01:00
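The off-by-one both of these b2 fixes describe has a familiar shape (illustrative, not rclone's actual loop):

```
package main

import "fmt"

func main() {
	const nParts = 3
	// With 0-based part numbering the bound must be `<`; a bound left
	// as `<=` from the 1-based days sends one part too many, and the
	// extra 0-length part is what b2 rejected.
	for part := 0; part < nParts; part++ {
		fmt.Println("copy part", part)
	}
}
```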
Nick Craig-Wood
5afa838457 cmd: Make --progress output logs in the same format as without
See: https://forum.rclone.org/t/using-progress-change-dates-from-2023-10-05-to-2023-10-05/42173
2023-10-11 11:36:31 +01:00
Nick Craig-Wood
2de084944b operations: fix error message on delete to have file name - fixes #7355 2023-10-11 11:34:11 +01:00
Vitor Gomes
48a8bfa6b3 operations: fix OpenOptions ignored in copy if operation was a multiThreadCopy 2023-10-11 11:19:03 +01:00
Nick Craig-Wood
d3ce795c30 build: fix docker beta build running out of space
This removes some unused SDKs from the build machine to free some
space up before building. It also adds some lines to show the free
space.
2023-10-10 15:59:07 +01:00
Nick Craig-Wood
c04657cd4c Add Volodymyr to contributors 2023-10-10 15:59:07 +01:00
Volodymyr
6255d9dfaa
operations: implement --partial-suffix to control extension of temporary file names 2023-10-10 12:27:32 +01:00
Nick Craig-Wood
f56ea2bee2 s3: fix no error being returned when creating a bucket we don't own
Before this change if you tried to create a bucket that already
existed, but someone else owned then rclone did not return an error.

This now will return an error on providers that return the
AlreadyOwnedByYou error code or no error on bucket creation of an
existing bucket owned by you.

This introduces a new provider quirk and this has been set or cleared
for as many providers as can be tested. This can be overridden by the
--s3-use-already-exists flag.

Fixes #7351
2023-10-09 18:15:02 +01:00
Nick Craig-Wood
d6ba60c04d oracleobjectstorage: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-09 17:13:42 +01:00
Vitor Gomes
37eaa3682a s3: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-09 17:12:56 +01:00
Nick Craig-Wood
c5f6fc3283 drive: add --drive-show-all-gdocs to allow unexportable gdocs to be server side copied
Before this change, attempting to server side copy a google form would
give this error

    No export formats found for "application/vnd.google-apps.form"

Adding this flag allows the form to be server side copied but not
downloaded.

Fixes #6302
2023-10-09 16:53:03 +01:00
Nick Craig-Wood
4daf755da0 Add Saleh Dindar to contributors 2023-10-09 16:53:03 +01:00
Nick Craig-Wood
eee8ad5146 Add Beyond Meat to contributors 2023-10-09 16:53:03 +01:00
Saleh Dindar
bcb3289dad nfsmount: documentation for new NFS mount feature for macOS 2023-10-06 14:08:20 +01:00
Saleh Dindar
ef2ef8ef84 nfsmount: New mount command to provide mount mechanism on macOS without FUSE
Summary:
In cases where cmount is not available on macOS, we alias nfsmount to the mount command and transparently start the NFS server and mount it to the target dir.

The NFS server is started on localhost on a random port so it is reasonably secure.

Test Plan:
```
go run rclone.go mount --http-url https://beta.rclone.org :http: nfs-test
```

Added mount tests:
```
go test ./cmd/nfsmount
```
2023-10-06 14:08:20 +01:00
Saleh Dindar
c69cf46f06 serve nfs: new serve nfs command
Summary:
Adding a new command to serve any remote over NFS. This is only useful for new macOS versions where FUSE mounts are not available.
 * Added willscot/go-nfs dependency and updated go.mod and go.sum

Test Plan:
```
go run rclone.go serve nfs --http-url https://beta.rclone.org :http:
```

Test that it is serving correctly by mounting the NFS directory.

```
mkdir nfs-test
mount -oport=58654,mountport=58654 localhost: nfs-test
```

Then we can list the mounted directory to see it is working.
```
ls nfs-test
```
2023-10-06 14:08:20 +01:00
Saleh Dindar
25f59b2918 vfs: Add go-billy dependency and make sure vfs.Handle implements billy.File
billy defines a common file system interface that is used in multiple go packages.
vfs.Handle mostly implements billy.File; only two methods needed to be
added to make it compliant.

An interface check is added as well.

This is a preliminary work for adding serve nfs command.
2023-10-06 14:08:20 +01:00
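The compile-time interface check mentioned above is the standard Go idiom; a self-contained illustration (myFile and io.Reader stand in for vfs.Handle and billy.File):

```
package main

import "io"

type myFile struct{}

func (myFile) Read(p []byte) (int, error) { return 0, io.EOF }

// Compile-time assertion: costs nothing at runtime and fails the
// build if a required method goes missing.
var _ io.Reader = myFile{}

func main() {}
```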
Saleh Dindar
7801b160f2 vfs: [bugfix] Update dir modification time
A subtle bug where dir modification time is not updated when the dir already exists
in the cache. It is only noticeable when some clients use dir modification time to
invalidate cache.
2023-10-06 14:08:20 +01:00
Saleh Dindar
23f8dea182 vfs: [bugfix] Implement Name() method in WriteFileHandle and ReadFileHandle
The Name() method was originally left out and defaulted to the base
class, which always returns empty. This triggered incorrect behavior
in serve nfs, which relied on the Name() of the interface to figure
out what file it was modifying.

This method is copied from RWFileHandle struct.

Added extra assert in the tests.
2023-10-06 14:08:20 +01:00
Beyond Meat
3337fe31c7 vfs: add --vfs-refresh flag to read all the directories on start
Refreshes the directory listing recursively at VFS start time.
2023-10-06 13:11:09 +01:00
Nick Craig-Wood
a752563842 operations: add operations/check to the rc API
Fixes #7015
2023-10-04 17:52:57 +01:00
Nick Craig-Wood
c12085b265 operations: close file in TestUploadFile test so it can be deleted on Windows 2023-10-04 17:52:57 +01:00
Nick Craig-Wood
3ab9077820 googlephotos: implement batcher for uploads - fixes #6920 2023-10-03 18:01:34 +01:00
Nick Craig-Wood
b94806a143 dropbox: factor batcher into lib/batcher 2023-10-03 18:01:34 +01:00
Nick Craig-Wood
55d10f4d25 fs: re-implement DumpMode with Bits
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output DumpMode will be output as strings
instead of integers. This is a lot more convenient for the user. They
still accept integer inputs though so the fallout from this should be
minimal.
2023-10-03 15:24:09 +01:00
Nick Craig-Wood
75745fcb21 fs: create fs.Bits for easy creation of parameters from a bitset of choices 2023-10-03 15:24:09 +01:00
Nick Craig-Wood
1cc22da87d vfs: re-implement CacheMode with fs.Enum
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output CacheMode will be output as strings
instead of integers. This is a lot more convenient for the user. They
still accept integer inputs though so the fallout from this should be
minimal.
2023-10-03 15:14:24 +01:00
Nick Craig-Wood
3092f82dcc fs: re-implement CutoffMode, LogLevel, TerminalColorMode with Enum
This is almost 100% backwards compatible. The only difference is that
in the rc options/get output CutoffMode, LogLevel, TerminalColorMode
will be output as strings instead of integers. This is a lot more
convenient for the user. They still accept integer inputs though so
the fallout from this should be minimal.
2023-10-03 15:14:24 +01:00
Nick Craig-Wood
60a6ef914c fs: create fs.Enum for easy creation of parameters from a list of choices 2023-10-03 15:14:24 +01:00
Nick Craig-Wood
3553cc4a5f fs: fix option types printing incorrectly for backend flags
Before this change backend types were printing incorrectly as the name
of the type, not what was defined by the Type() method.

This was not working due to not calling the Type() method. However
this needed to be defined on a non-pointer type due to the way the
options are handled.
2023-10-03 11:23:58 +01:00
Nick Craig-Wood
b8591b230d onedrive: implement ListR method which gives --fast-list support
This implements ListR for onedrive. The API only allows doing this at
the root, so it is inefficient to use anywhere other than the root.

Fixes #7317
2023-10-02 11:12:08 +01:00
Nick Craig-Wood
ecb09badba onedrive: factor API types back into correct file 2023-10-02 10:48:06 +01:00
Nick Craig-Wood
cb43e86d16 b2: reduce default --b2-upload-concurrency to 4 to reduce memory usage
In v1.63 memory usage in the b2 backend was limited to `--transfers` *
`--b2-chunk-size`

However in v1.64 this was raised to `--transfers` * `--b2-chunk-size`
* `--b2-upload-concurrency`.

The default value for this was accidentally set quite high at 16, which
means by default rclone could use up to 6.4GB of memory!

The new default sets a more reasonable (but still high) max memory of 1.6GB.
2023-10-01 12:30:26 +01:00
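For concreteness, assuming the then-current defaults of `--transfers 4` and `--b2-chunk-size 96Mi` (an assumption, not stated above): 4 × 96 MiB × 16 ≈ 6.4 GB before this change, and 4 × 96 MiB × 4 ≈ 1.6 GB after, matching the figures given.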
Nick Craig-Wood
5c48102ede b2: fix locking window when getting multipart upload URL
Before this change, the lock was held while the upload URL was being
fetched from the server.

This meant that any other threads were blocked from getting upload
URLs unnecessarily.

It also increased the potential for deadlock.
2023-10-01 12:30:26 +01:00
Nick Craig-Wood
96438ff259 pacer: fix b2 deadlock by defaulting max connections to unlimited
Before this change, the maximum number of connections was set to 10.

This means that b2 could deadlock while uploading multipart uploads
due to a lock being held longer than it should have been.
2023-10-01 12:30:26 +01:00
albertony
c1df3ce08c docs: add utime (time of file upload) to standard system metadata 2023-09-29 13:19:57 +02:00
albertony
19ad39fa1c jottacloud: add support for reading and writing metadata
Most useful is the addition of the file created timestamp, but also a timestamp for
when the file was uploaded.

Currently supporting a rather minimalistic set of metadata items, see PR #6359 for
some thoughts about possible extensions.
2023-09-29 13:19:57 +02:00
Nick Craig-Wood
b296f37801 s3: fix slice bounds out of range error when listing
In this commit:

5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers

We checked that the remote has the prefix and then changed the remote
before removing the prefix. This sometimes causes:

    panic: runtime error: slice bounds out of range [56:55]

The fix is to do the modification of the remote after removing the
prefix.

See: https://forum.rclone.org/t/cryptcheck-panic-runtime-error-slice-bounds-out-of-range/41977
2023-09-25 11:52:23 +01:00
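A sketch of the ordering fix (illustrative, not the backend's actual code): strip the prefix first, then modify the remote, so any later slicing refers to the string actually being cut.

```
package main

import (
	"fmt"
	"strings"
)

func main() {
	const prefix = "bucket/dir/"
	remote := "bucket/dir/file.txt"
	if strings.HasPrefix(remote, prefix) {
		remote = remote[len(prefix):]               // remove the prefix first
		remote = strings.TrimSuffix(remote, ".txt") // stand-in modification
	}
	fmt.Println(remote) // "file"
}
```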
Nick Craig-Wood
23e44c6065 Add rinsuki to contributors 2023-09-25 11:52:23 +01:00
rinsuki
8fd66daab6 drive: add support of SHA-1 and SHA-256 checksum 2023-09-24 17:38:30 +01:00
Nick Craig-Wood
9e80d48b03 s3: add docs on how to add a new provider 2023-09-23 14:36:48 +01:00
Nick Craig-Wood
eb3082a1eb s3: add Linode provider 2023-09-23 14:34:00 +01:00
Nick Craig-Wood
77ea22ac5b s3: Factor providers list out and auto generate textual version 2023-09-23 14:34:00 +01:00
Nick Craig-Wood
9959712a06 docs: fix backend doc generator to not output duplicate config names
This was always the intention; it was just implemented wrong.

This shortens the s3 docs by 1369, bringing them down to just about
half the size.

Fixes #7325
2023-09-23 12:54:08 +01:00
Nick Craig-Wood
7586fecbca Add Nikita Shoshin to contributors 2023-09-23 12:54:08 +01:00
Nikita Shoshin
94cdb00eb6 rcserver: set Last-Modified header for files served by --rc-serve 2023-09-23 12:20:29 +01:00
Dimitri Papadopoulos Orfanos
3d473eb54e
docs: fix typos found by codespell in docs and code comments 2023-09-23 12:20:01 +01:00
1493 changed files with 284798 additions and 116112 deletions


@ -0,0 +1,45 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: community, triage, bug
assignees: ''
---
<!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
## Possible Solution
<!--- Not obligatory -->
<!--- If no reason/fix/additions for the bug can be suggested, -->
<!--- uncomment the following phrase: -->
<!--- No fix can be suggested by a QA engineer. Further solutions shall be up to developers. -->
## Steps to Reproduce (for bugs)
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. -->
1.
## Context
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
## Regression
<!-- Is this issue a regression? (Yes / No) -->
<!-- If Yes, optionally please include version or commit id or PR# that caused this regression, if you have these details. -->
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used:
* Server setup and configuration:
* Operating System and version (`uname -a`):


@ -0,0 +1 @@
blank_issues_enabled: false


@ -0,0 +1,24 @@
on:
pull_request:
push:
branches:
- tcl/master
jobs:
builds:
name: Builds
runs-on: ubuntu-latest
strategy:
matrix:
go_versions: [ '1.22', '1.23' ]
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '${{ matrix.go_versions }}'
- name: Build binary
run: make


@ -0,0 +1,20 @@
on: [pull_request]
jobs:
dco:
name: DCO
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Setup Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
- name: Run commit format checker
uses: https://git.frostfs.info/TrueCloudLab/dco-go@v3
with:
from: 'origin/${{ github.event.pull_request.base.ref }}'


@ -0,0 +1,67 @@
on:
pull_request:
push:
branches:
- tcl/master
jobs:
lint:
name: Lint
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '1.23'
cache: true
- name: Install linters
run: go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
- name: Run linters
run: make check
test:
name: Test
runs-on: oci-runner
strategy:
matrix:
go_versions: [ '1.23' ]
fail-fast: false
steps:
- uses: actions/checkout@v3
- name: Set up Go
uses: actions/setup-go@v3
with:
go-version: '${{ matrix.go_versions }}'
- name: Tests for the FrostFS backend
env:
RESTIC_TEST_FUSE: false
AIO_IMAGE: truecloudlab/frostfs-aio
AIO_VERSION: 1.7.0-nightly.4
RCLONE_CONFIG: /config/rclone.conf
# run only tests related to FrostFS backend
run: |-
podman-service.sh
podman info
mkdir /config
printf "[TestFrostFS]\ntype = frostfs\nendpoint = localhost:8080\nwallet = /config/wallet.json\nplacement_policy = REP 1\nrequest_timeout = 20s\nconnection_timeout = 21s" > /config/rclone.conf
echo "Run frostfs aio container"
docker run -d --net=host --name aio $AIO_IMAGE:$AIO_VERSION --restart always -p 8080:8080
echo "Wait for frostfs to start"
until docker exec aio curl --fail http://localhost:8083 > /dev/null 2>&1; do sleep 0.2; done;
echo "Issue creds"
docker exec aio /usr/bin/issue-creds.sh native
echo "Copy wallet"
docker cp aio:/config/user-wallet.json /config/wallet.json
echo "Start tests"
go test -v github.com/rclone/rclone/backend/frostfs

.gitattributes (4 changes)

@ -1,3 +1,7 @@
# Go writes go.mod and go.sum with lf even on windows
go.mod text eol=lf
go.sum text eol=lf
# Ignore generated files in GitHub language statistics and diffs
/MANUAL.* linguist-generated=true
/rclone.1 linguist-generated=true


@ -27,12 +27,12 @@ jobs:
strategy:
fail-fast: false
matrix:
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.19', 'go1.20']
job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.21', 'go1.22']
include:
- job_name: linux
os: ubuntu-latest
go: '1.21'
go: '>=1.23.0-rc.1'
gotags: cmount
build_flags: '-include "^linux/"'
check: true
@ -43,14 +43,14 @@ jobs:
- job_name: linux_386
os: ubuntu-latest
go: '1.21'
go: '>=1.23.0-rc.1'
goarch: 386
gotags: cmount
quicktest: true
- job_name: mac_amd64
os: macos-11
go: '1.21'
os: macos-latest
go: '>=1.23.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/amd64" -cgo'
quicktest: true
@ -58,15 +58,15 @@ jobs:
deploy: true
- job_name: mac_arm64
os: macos-11
go: '1.21'
os: macos-latest
go: '>=1.23.0-rc.1'
gotags: 'cmount'
build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
deploy: true
- job_name: windows
os: windows-latest
go: '1.21'
go: '>=1.23.0-rc.1'
gotags: cmount
cgo: '0'
build_flags: '-include "^windows/"'
@ -76,20 +76,20 @@ jobs:
- job_name: other_os
os: ubuntu-latest
go: '1.21'
go: '>=1.23.0-rc.1'
build_flags: '-exclude "^(windows/|darwin/|linux/)"'
compile_all: true
deploy: true
- job_name: go1.19
- job_name: go1.21
os: ubuntu-latest
go: '1.19'
go: '1.21'
quicktest: true
racequicktest: true
- job_name: go1.20
- job_name: go1.22
os: ubuntu-latest
go: '1.20'
go: '1.22'
quicktest: true
racequicktest: true
@ -104,7 +104,7 @@ jobs:
fetch-depth: 0
- name: Install Go
uses: actions/setup-go@v4
uses: actions/setup-go@v5
with:
go-version: ${{ matrix.go }}
check-latest: true
@ -124,7 +124,7 @@ jobs:
sudo modprobe fuse
sudo chmod 666 /dev/fuse
sudo chown root:$USER /etc/fuse.conf
sudo apt-get install fuse3 libfuse-dev rpm pkg-config
sudo apt-get install fuse3 libfuse-dev rpm pkg-config git-annex git-annex-remote-rclone nfs-common
if: matrix.os == 'ubuntu-latest'
- name: Install Libraries on macOS
@ -137,7 +137,8 @@ jobs:
brew untap --force homebrew/cask
brew update
brew install --cask macfuse
if: matrix.os == 'macos-11'
brew install git-annex git-annex-remote-rclone
if: matrix.os == 'macos-latest'
- name: Install Libraries on Windows
shell: powershell
@ -167,14 +168,6 @@ jobs:
printf "\n\nSystem environment:\n\n"
env
- name: Go module cache
uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
- name: Build rclone
shell: bash
run: |
@ -216,7 +209,6 @@ jobs:
shell: bash
run: |
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
if [[ "${{ matrix.os }}" == "windows-latest" ]]; then make release_dep_windows ; fi
make ci_beta
env:
RCLONE_CONFIG_PASS: ${{ secrets.RCLONE_CONFIG_PASS }}
@ -231,21 +223,71 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Get runner parameters
id: get-runner-parameters
shell: bash
run: |
echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
- name: Checkout
uses: actions/checkout@v4
- name: Code quality test
uses: golangci/golangci-lint-action@v3
with:
# Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
version: latest
# Run govulncheck on the latest go version, the one we build binaries with
- name: Install Go
uses: actions/setup-go@v4
id: setup-go
uses: actions/setup-go@v5
with:
go-version: '1.21'
go-version: '>=1.23.0-rc.1'
check-latest: true
cache: false
- name: Cache
uses: actions/cache@v4
with:
path: |
~/go/pkg/mod
~/.cache/go-build
~/.cache/golangci-lint
key: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-${{ hashFiles('go.sum') }}
restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
- name: Code quality test (Linux)
uses: golangci/golangci-lint-action@v6
with:
version: latest
skip-cache: true
- name: Code quality test (Windows)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "windows"
with:
version: latest
skip-cache: true
- name: Code quality test (macOS)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "darwin"
with:
version: latest
skip-cache: true
- name: Code quality test (FreeBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "freebsd"
with:
version: latest
skip-cache: true
- name: Code quality test (OpenBSD)
uses: golangci/golangci-lint-action@v6
env:
GOOS: "openbsd"
with:
version: latest
skip-cache: true
- name: Install govulncheck
run: go install golang.org/x/vuln/cmd/govulncheck@latest
@ -267,17 +309,9 @@ jobs:
# Upgrade together with NDK version
- name: Set up Go
uses: actions/setup-go@v4
uses: actions/setup-go@v5
with:
go-version: '1.21'
- name: Go module cache
uses: actions/cache@v3
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-
go-version: '>=1.23.0-rc.1'
- name: Set global environment variables
shell: bash


@ -10,6 +10,15 @@ jobs:
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
@ -42,9 +51,12 @@ jobs:
# See https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
# for more detailed information.
password: ${{ secrets.GITHUB_TOKEN }}
- name: Show disk usage
shell: bash
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
@ -54,8 +66,12 @@ jobs:
rclone/rclone:beta
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
cache-from: type=gha
cache-to: type=gha,mode=max
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}
provenance: false
# Eventually cache will need to be cleared if builds more frequent than once a week
# https://github.com/docker/build-push-action/issues/252
- name: Show disk usage
shell: bash
run: |
df -h .


@ -10,6 +10,15 @@ jobs:
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
@ -23,15 +32,27 @@ jobs:
- name: Get actual major version
id: actual_major_version
run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
- name: Build and publish image
uses: ilteoood/docker_buildx@1.1.0
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
imageName: rclone/rclone
platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
publish: true
dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}
username: ${{ secrets.DOCKER_HUB_USER }}
password: ${{ secrets.DOCKER_HUB_PASSWORD }}
- name: Build and publish image
uses: docker/build-push-action@v6
with:
file: Dockerfile
context: .
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
push: true
tags: |
rclone/rclone:latest
rclone/rclone:${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }}
rclone/rclone:${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }}
rclone/rclone:${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
build_docker_volume_plugin:
if: github.repository == 'rclone/rclone'
@ -39,6 +60,15 @@ jobs:
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:

.github/workflows/notify.yml (new file, 15 lines)

@ -0,0 +1,15 @@
name: Notify users based on issue labels
on:
issues:
types: [labeled]
jobs:
notify:
runs-on: ubuntu-latest
steps:
- uses: jenschelkopf/issue-label-notification-action@1.3
with:
token: ${{ secrets.NOTIFY_ACTION_TOKEN }}
recipients: |
Support Contract=@rclone/support


@ -1,14 +1,14 @@
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}
name: Publish to Winget
on:
release:
types: [released]
jobs:
publish:
runs-on: ubuntu-latest
steps:
- uses: vedantmgoyal2009/winget-releaser@v2
with:
identifier: Rclone.Rclone
installers-regex: '-windows-\w+\.zip$'
token: ${{ secrets.WINGET_TOKEN }}

.gitignore (9 changes)

@ -3,10 +3,13 @@ _junk/
rclone
rclone.exe
build
docs/public
/docs/public/
/docs/.hugo_build.lock
/docs/static/img/logos/
rclone.iml
.idea
.history
.vscode
*.test
*.iml
fuzz-build.zip
@ -14,4 +17,6 @@ fuzz-build.zip
*.rej
Thumbs.db
__pycache__
.DS_Store
.DS_Store
resource_windows_*.syso
.devcontainer


@ -13,6 +13,7 @@ linters:
- stylecheck
- unused
- misspell
- gocritic
#- prealloc
#- maligned
disable-all: true
@ -98,3 +99,46 @@ linters-settings:
# Only enable the checks performed by the staticcheck stand-alone tool,
# as documented here: https://staticcheck.io/docs/configuration/options/#checks
checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
gocritic:
# Enable all default checks with some exceptions and some additions (commented).
# Cannot use both enabled-checks and disabled-checks, so must specify all to be used.
disable-all: true
enabled-checks:
#- appendAssign # Enabled by default
- argOrder
- assignOp
- badCall
- badCond
#- captLocal # Enabled by default
- caseOrder
- codegenComment
#- commentFormatting # Enabled by default
- defaultCaseOrder
- deprecatedComment
- dupArg
- dupBranchBody
- dupCase
- dupSubExpr
- elseif
#- exitAfterDefer # Enabled by default
- flagDeref
- flagName
#- ifElseChain # Enabled by default
- mapKey
- newDeref
- offBy1
- regexpMust
- ruleguard # Not enabled by default
#- singleCaseSwitch # Enabled by default
- sloppyLen
- sloppyTypeAssert
- switchTrue
- typeSwitchVar
- underef
- unlambda
- unslice
- valSwap
- wrapperFunc
settings:
ruleguard:
rules: "${configDir}/bin/rules.go"


@ -1,8 +1,8 @@
# Contributing to rclone #
# Contributing to rclone
This is a short guide on how to contribute things to rclone.
## Reporting a bug ##
## Reporting a bug
If you've just got a question or aren't sure if you've found a bug
then please use the [rclone forum](https://forum.rclone.org/) instead
@ -12,13 +12,13 @@ When filing an issue, please include the following information if
possible as well as a description of the problem. Make sure you test
with the [latest beta of rclone](https://beta.rclone.org/):
* Rclone version (e.g. output from `rclone version`)
* Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
* The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
* A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
* if the log contains secrets then edit the file with a text editor first to obscure them
- Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
- if the log contains secrets then edit the file with a text editor first to obscure them
## Submitting a new feature or bug fix ##
## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub.
@ -73,9 +73,9 @@ This is typically enough if you made a simple bug fix, otherwise please read the
Make sure you
* Add [unit tests](#testing) for a new feature.
* Add [documentation](#writing-documentation) for a new feature.
* [Commit your changes](#committing-your-changes) using the [message guideline](#commit-messages).
- Add [unit tests](#testing) for a new feature.
- Add [documentation](#writing-documentation) for a new feature.
- [Commit your changes](#committing-your-changes) using the [commit message guidelines](#commit-messages).
When you are done with that push your changes to GitHub:
@ -88,9 +88,9 @@ Your changes will then get reviewed and you might get asked to fix some stuff. I
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
## Using Git and GitHub ##
## Using Git and GitHub
### Committing your changes ###
### Committing your changes
Follow the guideline for [commit messages](#commit-messages) and then:
@ -107,7 +107,7 @@ You can modify the message or changes in the latest commit using:
If you amend to commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits ###
### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
@ -115,7 +115,7 @@ Your previously pushed commits are replaced by:
git push --force origin my-new-feature
### Basing your changes on the latest master ###
### Basing your changes on the latest master
To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
@ -149,13 +149,21 @@ If you squash commits that have been pushed to GitHub, then you will have to [re
Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
### GitHub Continuous Integration ###
### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
## Testing ##
## Testing
### Quick testing ###
### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then you can run the same checks that get run in the CI, which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`.
Using these checks ensures that the whole rclone codebase uses the same coding standards. They also catch easy-to-make mistakes (like forgetting to check an error return).
### Quick testing
rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests.
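That is the standard Go test invocation - assuming a standard checkout:

    go test -v ./...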
@ -168,7 +176,7 @@ You can also use `make`, if supported by your platform
The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
### Backend testing ###
### Backend testing
rclone contains a mixture of unit tests and integration tests.
Because it is difficult (and in some respects pointless) to test cloud
@ -201,9 +209,9 @@ altogether with an HTML report and test retries then from the
project root:
go install github.com/rclone/rclone/fstest/test_all
test_all -backend drive
test_all -backends drive
### Full integration testing ###
### Full integration testing
If you want to run all the integration tests against all the remotes,
then change into the project root and run
@ -218,55 +226,56 @@ The commands may require some extra go packages which you can install with
The full integration tests are run daily on the integration test server. You can
find the results at https://pub.rclone.org/integration-tests/
## Code Organisation ##
## Code Organisation
Rclone code is organised into a small number of top level directories
with modules beneath.
* backend - the rclone backends for interfacing to cloud providers -
* all - import this to load all the cloud providers
* ...providers
* bin - scripts for use while building or maintaining rclone
* cmd - the rclone commands
* all - import this to load all the commands
* ...commands
* cmdtest - end-to-end tests of commands, flags, environment variables,...
* docs - the documentation and website
* content - adjust these docs only - everything else is autogenerated
* command - these are auto-generated - edit the corresponding .go file
* fs - main rclone definitions - minimal amount of code
* accounting - bandwidth limiting and statistics
* asyncreader - an io.Reader which reads ahead
* config - manage the config file and flags
* driveletter - detect if a name is a drive letter
* filter - implements include/exclude filtering
* fserrors - rclone specific error handling
* fshttp - http handling for rclone
* fspath - path handling for rclone
* hash - defines rclone's hash types and functions
* list - list a remote
* log - logging facilities
* march - iterates directories in lock step
* object - in memory Fs objects
* operations - primitives for sync, e.g. Copy, Move
* sync - sync directories
* walk - walk a directory
* fstest - provides integration test framework
* fstests - integration tests for the backends
* mockdir - mocks an fs.Directory
* mockobject - mocks an fs.Object
* test_all - Runs integration tests for everything
* graphics - the images used in the website, etc.
* lib - libraries used by the backend
* atexit - register functions to run when rclone exits
* dircache - directory ID to name caching
* oauthutil - helpers for using oauth
* pacer - retries with backoff and paces operations
* readers - a selection of useful io.Readers
* rest - a thin abstraction over net/http for REST
* vfs - Virtual FileSystem layer for implementing rclone mount and similar
- backend - the rclone backends for interfacing to cloud providers -
- all - import this to load all the cloud providers
- ...providers
- bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands
- all - import this to load all the commands
- ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website
- content - adjust these docs only - everything else is autogenerated
- command - these are auto-generated - edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code
- accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags
- driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering
- fserrors - rclone specific error handling
- fshttp - http handling for rclone
- fspath - path handling for rclone
- hash - defines rclone's hash types and functions
- list - list a remote
- log - logging facilities
- march - iterates directories in lock step
- object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move
- sync - sync directories
- walk - walk a directory
- fstest - provides integration test framework
- fstests - integration tests for the backends
- mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything
- graphics - the images used in the website, etc.
- lib - libraries used by the backend
- atexit - register functions to run when rclone exits
- dircache - directory ID to name caching
- oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar
## Writing Documentation ##
## Writing Documentation
If you are adding a new feature then please update the documentation.
@ -277,22 +286,22 @@ alphabetical order.
If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field.
* Start with the most important information about the option,
- Start with the most important information about the option,
as a single sentence on a single line.
* This text will be used for the command-line flag help.
* It will be combined with other information, such as any default value,
- This text will be used for the command-line flag help.
- It will be combined with other information, such as any default value,
and the result will look odd if not written as a single sentence.
* It should end with a period/full stop character, which will be shown
- It should end with a period/full stop character, which will be shown
in docs but automatically removed when producing the flag help.
* Try to keep it below 80 characters, to reduce text wrapping in the terminal.
* More details can be added in a new paragraph, after an empty line (`"\n\n"`).
* Like with docs generated from Markdown, a single line break is ignored
- Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
- Like with docs generated from Markdown, a single line break is ignored
and two line breaks creates a new paragraph.
* This text will be shown to the user in `rclone config`
- This text will be shown to the user in `rclone config`
and in the docs (where it will be added by `make backenddocs`,
normally run some time before next release).
* To create options of enumeration type use the `Examples:` field.
* Each example value have their own `Help:` field, but they are treated
- To create options of enumeration type use the `Examples:` field.
- Each example value has its own `Help:` field, but they are treated
a bit differently from the main option help text. They will be shown
as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like name of
@ -312,12 +321,12 @@ combined unmodified with other information (such as any default value).
Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs which makes it very easy.
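To tie these guidelines together, here is a sketch of an option definition in the style described above, as it might appear in the Options slice of a backend's `fs.RegInfo` (the option name, values and help text are invented for illustration):

```
// A sketch of a backend option following the help-text guidelines:
// a single-sentence first line ending in a full stop, further detail
// after a blank line, and an Examples list for enumeration values.
// The option itself is invented for illustration.
fs.Option{
	Name: "upload_mode",
	Help: `How uploads should be performed.

Further detail goes in later paragraphs like this one, separated by
blank lines, and will be shown in "rclone config" and in the docs.`,
	Default: "auto",
	Examples: []fs.OptionExample{{
		Value: "auto",
		Help:  "Let the backend choose",
	}, {
		Value: "multipart",
		Help:  "Always use multipart uploads",
	}},
	Advanced: true,
}
```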
## Making a release ##
## Making a release
There are separate instructions for making a release in the RELEASE.md
file.
## Commit messages ##
## Commit messages
Please make the first line of your commit message a summary of the
change that a user (not a developer) of rclone would like to read, and
@ -358,7 +367,7 @@ error fixing the hang.
Fixes #1498
```
## Adding a dependency ##
## Adding a dependency
rclone uses the [go
modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
@ -370,7 +379,7 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.
GO111MODULE=on go get github.com/ncw/new_dependency
go get github.com/ncw/new_dependency
You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
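For example, to pin a particular version (the version number here is illustrative):

    go get github.com/ncw/new_dependency@v1.2.3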
@ -378,15 +387,15 @@ go docs linked above), but don't unless you really need to.
Please check in the changes generated by `go mod` including `go.mod`
and `go.sum` in the same commit as your other changes.
## Updating a dependency ##
## Updating a dependency
If you need to update a dependency then run
GO111MODULE=on go get -u golang.org/x/crypto
go get golang.org/x/crypto
Check in a single commit as above.
## Updating all the dependencies ##
## Updating all the dependencies
In order to update all the dependencies then run `make update`. This
just uses the go modules to update all the modules to their latest
@ -395,7 +404,7 @@ stable release. Check in the changes in a single commit as above.
This should be done early in the release cycle to pick up new versions
of packages in time for them to get some testing.
## Updating a backend ##
## Updating a backend
If you update a backend then please run the unit tests and the
integration tests for that backend.
@ -410,87 +419,133 @@ integration tests.
The next section goes into more detail about the tests.
## Writing a new backend ##
## Writing a new backend
Choose a name. The docs here will use `remote` as an example.
Note that in rclone terminology a file system backend is called a
remote or an fs.
Research
### Research
* Look at the interfaces defined in `fs/types.go`
* Study one or more of the existing remotes
- Look at the interfaces defined in `fs/types.go`
- Study one or more of the existing remotes
Getting going
### Getting going
* Create `backend/remote/remote.go` (copy this from a similar remote)
* box is a good one to start from if you have a directory-based remote
* b2 is a good one to start from if you have a bucket-based remote
* Add your remote to the imports in `backend/all/all.go`
* HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good go SDK then use that instead.
* Try to implement as many optional methods as possible as it makes the remote more usable.
* Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name and `rclone info` to help determine the encodings needed
* `rclone purge -v TestRemote:rclone-info`
* `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
* `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
* open `remote.csv` in a spreadsheet and examine
- Create `backend/remote/remote.go` (copy this from a similar remote)
- box is a good one to start from if you have a directory-based remote (and shows how to use the directory cache)
- b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to make sure we can encode any path name, and use `rclone info` to help determine the encodings needed
- `rclone purge -v TestRemote:rclone-info`
- `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
- `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
- open `remote.csv` in a spreadsheet and examine
Important:
### Guidelines for a speedy merge
* Please use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST like backend and parsing XML/JSON in the backend. It makes maintenance much easier.
* If your backend is HTTP based then please use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything!
- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) if you are implementing a REST-like backend and parsing XML/JSON in the backend (see the sketch after this list).
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp) if your backend is HTTP based - this adds features like `--dump bodies`, `--tpslimit`, `--user-agent` without you having to code anything!
- **Do** follow your example backend exactly - use the same code order, function names, layout, structure. **Don't** move stuff around and **don't** delete the comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go`
- **Remember** we have >50 backends to maintain so keeping them as similar as possible to each other is a high priority!
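To make the first two points concrete, here is a minimal sketch of the usual pattern (the endpoint, path and response type are invented for illustration):

```
// A sketch of the usual pattern: rclone's HTTP client from fs/fshttp
// (which honours --dump bodies, --tpslimit, --user-agent etc.) wrapped
// by lib/rest for JSON calls. Endpoint and types are invented.
package remote

import (
	"context"

	"github.com/rclone/rclone/fs/fshttp"
	"github.com/rclone/rclone/lib/rest"
)

// itemInfo is a hypothetical API response - keep these in api/types.go.
type itemInfo struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// newSrv makes a rest client rooted at the (invented) API endpoint.
func newSrv(ctx context.Context) *rest.Client {
	return rest.NewClient(fshttp.NewClient(ctx)).SetRoot("https://api.example.com/v1")
}

// getItem fetches the metadata for one item by ID.
func getItem(ctx context.Context, srv *rest.Client, id string) (*itemInfo, error) {
	opts := rest.Opts{
		Method: "GET",
		Path:   "/items/" + id,
	}
	var result itemInfo
	_, err := srv.CallJSON(ctx, &opts, nil, &result)
	if err != nil {
		return nil, err
	}
	return &result, nil
}
```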
Unit tests
### Unit tests
* Create a config entry called `TestRemote` for the unit tests to use
* Create a `backend/remote/remote_test.go` - copy and adjust your example remote
* Make sure all tests pass with `go test -v`
- Create a config entry called `TestRemote` for the unit tests to use
- Create a `backend/remote/remote_test.go` - copy and adjust your example remote (a sketch is shown below)
- Make sure all tests pass with `go test -v`
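The test file itself is usually just boilerplate; a sketch for our example `remote`, mirroring the pattern the existing backend tests use, is:

```
// Test Remote filesystem interface - a sketch following the pattern
// used by the existing backend tests.
package remote_test

import (
	"testing"

	"github.com/rclone/rclone/backend/remote"
	"github.com/rclone/rclone/fstest/fstests"
)

// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
	fstests.Run(t, &fstests.Opt{
		RemoteName: "TestRemote:",
		NilObject:  (*remote.Object)(nil),
	})
}
```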
Integration tests
### Integration tests
* Add your backend to `fstest/test_all/config.yaml`
* Once you've done that then you can use the integration test framework from the project root:
* go install ./...
* test_all -backends remote
- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from the project root:
- go install ./...
- test_all -backends remote
Or if you want to run the integration tests manually:
* Make sure integration tests pass with
* `cd fs/operations`
* `go test -v -remote TestRemote:`
* `cd fs/sync`
* `go test -v -remote TestRemote:`
* If your remote defines `ListR` check with this also
* `go test -v -remote TestRemote: -fast-list`
- Make sure integration tests pass with
- `cd fs/operations`
- `go test -v -remote TestRemote:`
- `cd fs/sync`
- `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
- `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
Add your fs to the docs - you'll need to pick an icon for it from
### Backend documentation
Add your backend to the docs - you'll need to pick an icon for it from
[fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in
alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.
* `README.md` - main GitHub page
* `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
* make sure this has the `autogenerated options` comments in (see your reference backend docs)
* update them in your backend with `bin/make_backend_docs.py remote`
* `docs/content/overview.md` - overview docs
* `docs/content/docs.md` - list of remotes in config section
* `docs/content/_index.md` - front page of rclone.org
* `docs/layouts/chrome/navbar.html` - add it to the website navigation
* `bin/make_manual.py` - add the page to the `docs` constant
- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are automatically added to this file with `make backenddocs`)
- make sure this has the `autogenerated options` comments in (see your reference backend docs)
- update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
- `bin/make_manual.py` - add the page to the `docs` constant
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
## Writing a plugin ##
## Adding a new s3 provider
It is quite easy to add a new S3 provider to rclone.
You'll need to modify the following files:
- `backend/s3/s3.go`
- Add the provider to `providerOption` at the top of the file
- Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
- Exclude your provider from generic config questions (e.g. `region` and `endpoint`).
- Add the provider to the `setQuirks` function - see the documentation there and the sketch below.
- `docs/content/s3.md`
- Add the provider at the top of the page.
- Add a section about the provider linked from there.
- Add a transcript of a trial `rclone config` session
- Edit the transcript to remove things which might change in subsequent versions
- **Do not** alter or add to the autogenerated parts of `s3.md`
- **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- `README.md` - this is the home page on GitHub
- Add the provider and a link to the section you wrote in `docs/content/s3.md`
- `docs/content/_index.md` - this is the home page of rclone.org
- Add the provider and a link to the section you wrote in `docs/content/s3.md`
When adding the provider, endpoints, quirks, docs etc., keep them in
alphabetical order by `Provider` name, but with `AWS` first and
`Other` last.
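As an illustration of the quirks step, a new (invented) provider case might look like the sketch below - the quirk variables shown exist in `setQuirks` today, but check the real function for the current set, as it changes over time:

```
// A sketch of hooking a new provider into setQuirks in backend/s3/s3.go.
// "NewS3Provider" and the settings chosen for it are invented; the quirk
// variables themselves (urlEncodeListings, virtualHostStyle, ...) are
// real, but check the function for the current set.
switch opt.Provider {
case "AWS":
	// No quirks
case "NewS3Provider": // keep cases in alphabetical order, AWS first, Other last
	urlEncodeListings = false // listings are not URL encoded
	virtualHostStyle = false  // only path style requests are supported
}
```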
Once you've written the docs, run `make serve` and check they look OK
in the web browser and the links (internal and external) all work.
Once you've written the code, check `rclone config` works to your
satisfaction, and check the integration tests pass with `go test -v
-remote NewS3Provider:`. You may need to adjust the quirks to get them to
pass. Some providers just can't pass the tests with control characters
in the names, so if these fail and the provider doesn't support
`urlEncodeListings` in the quirks, then ignore them. Note that the
`SetTier` test may also fail on non-AWS providers.
For an example of adding an s3 provider see [eb3082a1](https://github.com/rclone/rclone/commit/eb3082a1ebdb76d5625f14cedec3f5154a5e7b10).
## Writing a plugin
New features (backends, commands) can also be added "out-of-tree", through Go plugins.
Changes will be kept in a dynamically loaded file instead of being compiled into the main binary.
This is useful if you can't merge your changes upstream or don't want to maintain a fork of rclone.
Usage
### Usage
- Naming
- Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
@ -505,7 +560,7 @@ Usage
- Plugins must be compiled against the exact version of rclone to work.
(The rclone version used to build the plugin must match the rclone binary that loads it.)
Building
### Building
To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.
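As a sketch (the repository path is invented), an out-of-tree backend plugin can be as small as a `main` package that imports your backend for its registration side effect:

```
// A sketch of a minimal backend plugin. The blank import runs the
// backend's init(), which registers it with rclone. The module path
// is invented for illustration.
package main

import (
	_ "example.com/me/rclone-remote/remote"
)
```

It can then be built with Go's plugin build mode, e.g. `go build -buildmode=plugin -o librcloneplugin_backend_remote.so .` (assuming `backend` is the right `KIND` for a backend plugin).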

View file

@ -1,8 +1,9 @@
FROM golang AS builder
FROM golang:alpine AS builder
COPY . /go/src/github.com/rclone/rclone/
WORKDIR /go/src/github.com/rclone/rclone/
RUN apk add --no-cache make bash gawk git
RUN \
CGO_ENABLED=0 \
make

View file

@ -21,6 +21,8 @@ Current active maintainers of rclone are:
| Chun-Hung Tseng | @henrybear327 | Proton Drive Backend |
| Hideo Aoyama | @boukendesho | snap packaging |
| nielash | @nielash | bisync |
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
**This is a work in progress Draft**

69023
MANUAL.html generated

File diff suppressed because it is too large

15424
MANUAL.md generated

File diff suppressed because it is too large

41410
MANUAL.txt generated

File diff suppressed because it is too large

View file

@ -30,29 +30,37 @@ ifdef RELEASE_TAG
TAG := $(RELEASE_TAG)
endif
GO_VERSION := $(shell go version)
GO_OS := $(shell go env GOOS)
ifdef BETA_SUBDIR
BETA_SUBDIR := /$(BETA_SUBDIR)
endif
BETA_PATH := $(BRANCH_PATH)$(TAG)$(BETA_SUBDIR)
BETA_URL := https://beta.rclone.org/$(BETA_PATH)/
BETA_UPLOAD_ROOT := memstore:beta-rclone-org
BETA_UPLOAD_ROOT := beta.rclone.org:
BETA_UPLOAD := $(BETA_UPLOAD_ROOT)/$(BETA_PATH)
# Pass in GOTAGS=xyz on the make command line to set build tags
ifdef GOTAGS
BUILDTAGS=-tags "$(GOTAGS)"
LINTTAGS=--build-tags "$(GOTAGS)"
endif
LDFLAGS=--ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)"
.PHONY: rclone test_all vars version
rclone:
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS)
ifeq ($(GO_OS),windows)
go run bin/resource_windows.go -version $(TAG) -syso resource_windows_`go env GOARCH`.syso
endif
go build -v $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS)
ifeq ($(GO_OS),windows)
rm resource_windows_`go env GOARCH`.syso
endif
mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
go install $(LDFLAGS) $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all
vars:
@echo SHELL="'$(SHELL)'"
@ -80,13 +88,13 @@ test: rclone test_all
# Quick test
quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) ./...
racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -cpu=2 -race ./...
compiletest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -run XXX ./...
RCLONE_CONFIG="/notfound" go test $(LDFLAGS) $(BUILDTAGS) -run XXX ./...
# Do source code quality checks
check: rclone
@ -96,16 +104,12 @@ check: rclone
# Get the build dependencies
build_dep:
go run bin/get-github-release.go -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
go run bin/get-github-release.go -use-api -extract golangci-lint golangci/golangci-lint 'golangci-lint-.*\.tar\.gz'
# Get the release dependencies we only install on linux
release_dep_linux:
go install github.com/goreleaser/nfpm/v2/cmd/nfpm@latest
# Get the release dependencies we only install on Windows
release_dep_windows:
GOOS="" GOARCH="" go install github.com/josephspurrier/goversioninfo/cmd/goversioninfo@latest
# Update dependencies
showupdates:
@echo "*** Direct dependencies that could be updated ***"
@ -150,7 +154,7 @@ rcdocs: rclone
install: rclone
install -d ${DESTDIR}/usr/bin
install -t ${DESTDIR}/usr/bin ${GOPATH}/bin/rclone
install ${GOPATH}/bin/rclone ${DESTDIR}/usr/bin
clean:
go clean ./...
@ -164,7 +168,7 @@ website:
@if grep -R "raw HTML omitted" docs/public ; then echo "ERROR: found unescaped HTML - fix the markdown source" ; fi
upload_website: website
rclone -v sync docs/public memstore:www-rclone-org
rclone -v sync docs/public www.rclone.org:
upload_test_website: website
rclone -P sync docs/public test-rclone-org:
@ -191,8 +195,8 @@ check_sign:
cd build && gpg --verify SHA256SUMS && gpg --decrypt SHA256SUMS | sha256sum -c
upload:
rclone -P copy build/ memstore:downloads-rclone-org/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "memstore:downloads-rclone-org/$(TAG)/$$i" "memstore:downloads-rclone-org/$$j"'
rclone -P copy build/ downloads.rclone.org:/$(TAG)
rclone lsf build --files-only --include '*.{zip,deb,rpm}' --include version.txt | xargs -i bash -c 'i={}; j="$$i"; [[ $$i =~ (.*)(-v[0-9\.]+-)(.*) ]] && j=$${BASH_REMATCH[1]}-current-$${BASH_REMATCH[3]}; rclone copyto -v "downloads.rclone.org:/$(TAG)/$$i" "downloads.rclone.org:/$$j"'
upload_github:
./bin/upload-github $(TAG)
@ -202,7 +206,7 @@ cross: doc
beta:
go run bin/cross-compile.go $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
rclone -v copy build/ pub.rclone.org:/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/
log_since_last_release:
@ -215,18 +219,18 @@ ci_upload:
sudo chown -R $$USER build
find build -type l -delete
gzip -r9v build
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
./rclone --no-check-dest --config bin/ci.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif
@echo Beta release ready at $(BETA_URL)/testbuilds
ci_beta:
git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),)
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
rclone --no-check-dest --config bin/ci.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
endif
@echo Beta release ready at $(BETA_URL)
@ -235,7 +239,7 @@ fetch_binaries:
rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/
serve: website
cd docs && hugo server -v -w --disableFastRender
cd docs && hugo server --logLevel info -w --disableFastRender
tag: retag doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new

View file

@ -1,3 +1,21 @@
<div align="center">
<sup>Special thanks to our sponsor:</sup>
<br>
<br>
<a href="https://www.warp.dev/?utm_source=github&utm_medium=referral&utm_campaign=rclone_20231103">
<div>
<img src="https://rclone.org/img/logos/warp-github.svg" width="300" alt="Warp">
</div>
<b>Warp is a modern, Rust-based terminal with AI built in so you and your team can build great software, faster.</b>
<div>
<sup>Visit warp.dev to learn more.</sup>
</div>
</a>
<br>
<hr>
</div>
<br>
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
@ -23,7 +41,6 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
@ -38,14 +55,18 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FTP [:page_facing_up:](https://rclone.org/ftp/)
* GoFile [:page_facing_up:](https://rclone.org/gofile/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
* ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
@ -53,11 +74,15 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Linkbox [:page_facing_up:](https://rclone.org/linkbox)
* Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
* Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/)
* Memory [:page_facing_up:](https://rclone.org/memory/)
* Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
* Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
* Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
* Minio [:page_facing_up:](https://rclone.org/s3/#minio)
* Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
@ -71,6 +96,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
* PikPak [:page_facing_up:](https://rclone.org/pikpak/)
* Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/)
* Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
@ -79,6 +105,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
@ -89,6 +116,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
* Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
@ -117,6 +145,7 @@ These backends adapt or modify other storage providers
* Partial syncs supported on a whole file basis
* [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed files
* [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory identical
* [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync bidirectionally
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/))

View file

@ -37,16 +37,45 @@ This file describes how to make the various kinds of releases
## Update dependencies
Early in the next release cycle update the dependencies
Early in the next release cycle update the dependencies.
* Review any pinned packages in go.mod and remove if possible
* make updatedirect
* make
* git commit -a -v
* make update
* make
* `make updatedirect`
* `make GOTAGS=cmount`
* `make compiletest`
* Fix anything which doesn't compile at this point and commit changes here
* `git commit -a -v -m "build: update all dependencies"`
If `make updatedirect` upgrades the version of go in the `go.mod`
then go to manual mode. `go1.20` here is the lowest supported version
in the `go.mod`.
```
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.20 -compat=1.20
```
If the `go mod tidy` fails, use its output to remove the packages
which can't be upgraded from `/tmp/potential-upgrades`. When done,
restore the module files with
```
git co go.mod go.sum
```
and try again.
Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time-consuming to fix.
* `make update`
* `make GOTAGS=cmount`
* `make compiletest`
* roll back any updates which didn't compile
* git commit -a -v --amend
* `git commit -a -v --amend`
* **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
and there can occasionally be forwards compatibility problems with
@ -54,6 +83,9 @@ doing that so it may be necessary to roll back dependencies to the
version specified by `make updatedirect` in order to get rclone to
build.
Once it compiles locally, push it on a test branch and commit fixes
until the tests pass.
## Tidy beta
At some point after the release run
@ -90,6 +122,13 @@ Now
* git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
* git push
## Sponsor logos
If updating the website, note that the sponsor logos have been moved out of the main repository.
You will need to check out `/docs/static/img/logos` from https://github.com/rclone/third-party-logos
which is a private repo containing artwork from sponsors.
## Update the website between releases
Create an update website branch based off the last release
@ -114,32 +153,23 @@ Cherry pick any changes back to master and the stable branch if it is active.
## Making a manual build of docker
The rclone docker image should autobuild via GitHub actions. If it doesn't,
or needs to be updated, then rebuild like this.
See: https://github.com/ilteoood/docker_buildx/issues/19
See: https://github.com/ilteoood/docker_buildx/blob/master/scripts/install_buildx.sh
To do a basic build of rclone's docker image to debug builds locally:
```
git co v1.54.1
docker pull golang
export DOCKER_CLI_EXPERIMENTAL=enabled
docker buildx create --name actions_builder --use
docker run --rm --privileged docker/binfmt:820fdd95a9972a5308930a2bdfb8573dd4447ad3
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
SUPPORTED_PLATFORMS=$(docker buildx inspect --bootstrap | grep 'Platforms:*.*' | cut -d : -f2,3)
echo "Supported platforms: $SUPPORTED_PLATFORMS"
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
docker buildx stop actions_builder
docker buildx build --load -t rclone/rclone:testing --progress=plain .
docker run --rm rclone/rclone:testing version
```
### Old build for linux/amd64 only
To test the multiplatform build
```
docker pull golang
docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest .
docker push rclone/rclone:1.52.0
docker push rclone/rclone:1.52
docker push rclone/rclone:1
docker push rclone/rclone:latest
docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 .
```
To make a full build, set the tags correctly and add `--push`.
Note that you can't only build one architecture - you need to build them all.
```
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```

View file

@ -1 +1 @@
v1.65.0
v1.68.2

View file

@ -23,8 +23,8 @@ func prepare(t *testing.T, root string) {
configfile.Install()
// Configure the remote
config.FileSet(remoteName, "type", "alias")
config.FileSet(remoteName, "remote", root)
config.FileSetValue(remoteName, "type", "alias")
config.FileSetValue(remoteName, "remote", root)
}
func TestNewFS(t *testing.T) {
@ -81,10 +81,12 @@ func TestNewFS(t *testing.T) {
for i, gotEntry := range gotEntries {
what := fmt.Sprintf("%s, entry=%d", what, i)
wantEntry := test.entries[i]
_, isDir := gotEntry.(fs.Directory)
require.Equal(t, wantEntry.remote, gotEntry.Remote(), what)
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
_, isDir := gotEntry.(fs.Directory)
if !isDir {
require.Equal(t, wantEntry.size, gotEntry.Size(), what)
}
require.Equal(t, wantEntry.isDir, isDir, what)
}
}

View file

@ -4,8 +4,8 @@ package all
import (
// Active file systems
_ "github.com/rclone/rclone/backend/alias"
_ "github.com/rclone/rclone/backend/amazonclouddrive"
_ "github.com/rclone/rclone/backend/azureblob"
_ "github.com/rclone/rclone/backend/azurefiles"
_ "github.com/rclone/rclone/backend/b2"
_ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache"
@ -17,16 +17,21 @@ import (
_ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/filescom"
_ "github.com/rclone/rclone/backend/frostfs"
_ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/gofile"
_ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/hasher"
_ "github.com/rclone/rclone/backend/hdfs"
_ "github.com/rclone/rclone/backend/hidrive"
_ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/imagekit"
_ "github.com/rclone/rclone/backend/internetarchive"
_ "github.com/rclone/rclone/backend/jottacloud"
_ "github.com/rclone/rclone/backend/koofr"
_ "github.com/rclone/rclone/backend/linkbox"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega"
@ -37,6 +42,7 @@ import (
_ "github.com/rclone/rclone/backend/oracleobjectstorage"
_ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/pikpak"
_ "github.com/rclone/rclone/backend/pixeldrain"
_ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/protondrive"
_ "github.com/rclone/rclone/backend/putio"
@ -51,6 +57,7 @@ import (
_ "github.com/rclone/rclone/backend/storj"
_ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/ulozto"
_ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav"

File diff suppressed because it is too large

View file

@ -1,21 +0,0 @@
// Test AmazonCloudDrive filesystem interface
//go:build acd
// +build acd
package amazonclouddrive_test
import (
"testing"
"github.com/rclone/rclone/backend/amazonclouddrive"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.NilObject = fs.Object((*amazonclouddrive.Object)(nil))
fstests.RemoteName = "TestAmazonCloudDrive:"
fstests.Run(t)
}

View file

@ -1,5 +1,4 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob
@ -8,6 +7,7 @@ import (
"context"
"crypto/md5"
"encoding/base64"
"encoding/binary"
"encoding/hex"
"encoding/json"
"errors"
@ -295,10 +295,10 @@ avoid the time out.`,
Advanced: true,
}, {
Name: "access_tier",
Help: `Access tier of blob: hot, cool or archive.
Help: `Access tier of blob: hot, cool, cold or archive.
Archived blobs can be restored by setting access tier to hot or
cool. Leave blank if you intend to use default access tier, which is
Archived blobs can be restored by setting access tier to hot, cool or
cold. Leave blank if you intend to use default access tier, which is
set at account level
If there is no "access tier" specified, rclone doesn't apply any tier.
@ -306,7 +306,7 @@ rclone performs "Set Tier" operation on blobs while uploading, if objects
are not modified, specifying "access tier" to new one will have no effect.
If blobs are in "archive tier" at remote, trying to perform data transfer
operations from remote will not be allowed. User should first restore by
tiering blob to "Hot" or "Cool".`,
tiering blob to "Hot", "Cool" or "Cold".`,
Advanced: true,
}, {
Name: "archive_tier_delete",
@ -401,6 +401,24 @@ rclone does if you know the container exists already.
Help: `If set, do not do HEAD before GET when getting objects.`,
Default: false,
Advanced: true,
}, {
Name: "delete_snapshots",
Help: `Set to specify how to deal with snapshots on blob deletion.`,
Examples: []fs.OptionExample{
{
Value: "",
Help: "By default, the delete operation fails if a blob has snapshots",
}, {
Value: string(blob.DeleteSnapshotsOptionTypeInclude),
Help: "Specify 'include' to remove the root blob and all its snapshots",
}, {
Value: string(blob.DeleteSnapshotsOptionTypeOnly),
Help: "Specify 'only' to remove only the snapshots but keep the root blob.",
},
},
Default: "",
Exclusive: true,
Advanced: true,
}},
})
}
@ -437,6 +455,7 @@ type Options struct {
DirectoryMarkers bool `config:"directory_markers"`
NoCheckContainer bool `config:"no_check_container"`
NoHeadObject bool `config:"no_head_object"`
DeleteSnapshots string `config:"delete_snapshots"`
}
// Fs represents a remote azure server
@ -520,6 +539,7 @@ func (o *Object) split() (container, containerPath string) {
func validateAccessTier(tier string) bool {
return strings.EqualFold(tier, string(blob.AccessTierHot)) ||
strings.EqualFold(tier, string(blob.AccessTierCool)) ||
strings.EqualFold(tier, string(blob.AccessTierCold)) ||
strings.EqualFold(tier, string(blob.AccessTierArchive))
}
@ -649,8 +669,8 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.AccessTier == "" {
opt.AccessTier = string(defaultAccessTier)
} else if !validateAccessTier(opt.AccessTier) {
return nil, fmt.Errorf("supported access tiers are %s, %s and %s",
string(blob.AccessTierHot), string(blob.AccessTierCool), string(blob.AccessTierArchive))
return nil, fmt.Errorf("supported access tiers are %s, %s, %s and %s",
string(blob.AccessTierHot), string(blob.AccessTierCool), string(blob.AccessTierCold), string(blob.AccessTierArchive))
}
if !validatePublicAccess((opt.PublicAccess)) {
@ -691,10 +711,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ClientOptions: policyClientOptions,
}
// Here we auth by setting one of cred, sharedKeyCred or f.svc
// Here we auth by setting one of cred, sharedKeyCred, f.svc or anonymous
var (
cred azcore.TokenCredential
sharedKeyCred *service.SharedKeyCredential
anonymous = false
)
switch {
case opt.EnvAuth:
@ -854,6 +875,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
}
case opt.Account != "":
// Anonymous access
anonymous = true
default:
return nil, errors.New("no authentication method configured")
}
@ -883,6 +907,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("create client failed: %w", err)
}
} else if anonymous {
// Anonymous public access
f.svc, err = service.NewClientWithNoCredential(opt.Endpoint, &clientOpt)
if err != nil {
return nil, fmt.Errorf("create public client failed: %w", err)
}
}
}
if f.svc == nil {
@ -1068,7 +1098,7 @@ func (f *Fs) list(ctx context.Context, containerName, directory, prefix string,
isDirectory := isDirectoryMarker(*file.Properties.ContentLength, file.Metadata, remote)
if isDirectory {
// Don't insert the root directory
if remote == directory {
if remote == f.opt.Enc.ToStandardPath(directory) {
continue
}
// process directory markers as directories
@ -1899,7 +1929,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
var offset int64
var count int64
if o.AccessTier() == blob.AccessTierArchive {
return nil, fmt.Errorf("blob in archive tier, you need to set tier to hot or cool first")
return nil, fmt.Errorf("blob in archive tier, you need to set tier to hot, cool, cold first")
}
fs.FixRangeOption(options, o.size)
for _, option := range options {
@ -1965,34 +1995,21 @@ func (rs *readSeekCloser) Close() error {
return nil
}
// increment the array as LSB binary
func increment(xs *[8]byte) {
for i, digit := range xs {
newDigit := digit + 1
xs[i] = newDigit
if newDigit >= digit {
// exit if no carry
break
}
}
}
// record chunk number and id for Close
type azBlock struct {
chunkNumber int
chunkNumber uint64
id string
}
// Implements the fs.ChunkWriter interface
type azChunkWriter struct {
chunkSize int64
size int64
f *Fs
ui uploadInfo
blocksMu sync.Mutex // protects the below
blocks []azBlock // list of blocks for finalize
binaryBlockID [8]byte // block counter as LSB first 8 bytes
o *Object
chunkSize int64
size int64
f *Fs
ui uploadInfo
blocksMu sync.Mutex // protects the below
blocks []azBlock // list of blocks for finalize
o *Object
}
// OpenChunkWriter returns the chunk size and a ChunkWriter
@ -2077,16 +2094,16 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return 0, nil
}
md5sum := m.Sum(nil)
transactionalMD5 := md5sum[:]
// increment the blockID and save the blocks for finalize
increment(&w.binaryBlockID)
blockID := base64.StdEncoding.EncodeToString(w.binaryBlockID[:])
var binaryBlockID [8]byte // block counter as LSB first 8 bytes
binary.LittleEndian.PutUint64(binaryBlockID[:], uint64(chunkNumber))
blockID := base64.StdEncoding.EncodeToString(binaryBlockID[:])
// Save the blockID for the commit
w.blocksMu.Lock()
w.blocks = append(w.blocks, azBlock{
chunkNumber: chunkNumber,
chunkNumber: uint64(chunkNumber),
id: blockID,
})
w.blocksMu.Unlock()
@ -2099,7 +2116,7 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
}
options := blockblob.StageBlockOptions{
// Specify the transactional md5 for the body, to be validated by the service.
TransactionalValidation: blob.TransferValidationTypeMD5(transactionalMD5),
TransactionalValidation: blob.TransferValidationTypeMD5(md5sum),
}
_, err = w.ui.blb.StageBlock(ctx, blockID, &readSeekCloser{Reader: reader, Seeker: reader}, &options)
if err != nil {
@ -2119,7 +2136,7 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return currentChunkSize, err
}
// Abort the multpart upload.
// Abort the multipart upload.
//
// FIXME it would be nice to delete uncommitted blocks.
//
@ -2151,9 +2168,20 @@ func (w *azChunkWriter) Close(ctx context.Context) (err error) {
return w.blocks[i].chunkNumber < w.blocks[j].chunkNumber
})
// Create a list of block IDs
// Create and check a list of block IDs
blockIDs := make([]string, len(w.blocks))
for i := range w.blocks {
if w.blocks[i].chunkNumber != uint64(i) {
return fmt.Errorf("internal error: expecting chunkNumber %d but got %d", i, w.blocks[i].chunkNumber)
}
chunkBytes, err := base64.StdEncoding.DecodeString(w.blocks[i].id)
if err != nil {
return fmt.Errorf("internal error: bad block ID: %w", err)
}
chunkNumber := binary.LittleEndian.Uint64(chunkBytes)
if w.blocks[i].chunkNumber != chunkNumber {
return fmt.Errorf("internal error: expecting decoded chunkNumber %d but got %d", w.blocks[i].chunkNumber, chunkNumber)
}
blockIDs[i] = w.blocks[i].id
}
@ -2355,9 +2383,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
blb := o.getBlobSVC()
//only := blob.DeleteSnapshotsOptionTypeOnly
opt := blob.DeleteOptions{
//DeleteSnapshots: &only,
opt := blob.DeleteOptions{}
if o.fs.opt.DeleteSnapshots != "" {
action := blob.DeleteSnapshotsOptionType(o.fs.opt.DeleteSnapshots)
opt.DeleteSnapshots = &action
}
return o.fs.pacer.Call(func() (bool, error) {
_, err := blb.Delete(ctx, &opt)

View file

@ -1,5 +1,4 @@
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
@ -17,21 +16,3 @@ func (f *Fs) InternalTest(t *testing.T) {
enabled = f.Features().GetTier
assert.True(t, enabled)
}
func TestIncrement(t *testing.T) {
for _, test := range []struct {
in [8]byte
want [8]byte
}{
{[8]byte{0, 0, 0, 0}, [8]byte{1, 0, 0, 0}},
{[8]byte{0xFE, 0, 0, 0}, [8]byte{0xFF, 0, 0, 0}},
{[8]byte{0xFF, 0, 0, 0}, [8]byte{0, 1, 0, 0}},
{[8]byte{0, 1, 0, 0}, [8]byte{1, 1, 0, 0}},
{[8]byte{0xFF, 0xFF, 0xFF, 0xFE}, [8]byte{0, 0, 0, 0xFF}},
{[8]byte{0xFF, 0xFF, 0xFF, 0xFF}, [8]byte{0, 0, 0, 0, 1}},
{[8]byte{0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF}, [8]byte{0, 0, 0, 0, 0, 0, 0}},
} {
increment(&test.in)
assert.Equal(t, test.want, test.in)
}
}

View file

@ -1,7 +1,6 @@
// Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js
// +build !plan9,!solaris,!js
package azureblob
@ -19,7 +18,7 @@ func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureBlob:",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
@ -35,7 +34,7 @@ func TestIntegration2(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"},
TiersToTest: []string{"Hot", "Cool", "Cold"},
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize,
},
@ -62,6 +61,7 @@ func TestValidateAccessTier(t *testing.T) {
"HOT": {"HOT", true},
"Hot": {"Hot", true},
"cool": {"cool", true},
"cold": {"cold", true},
"archive": {"archive", true},
"empty": {"", false},
"unknown": {"unknown", false},

View file

@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || solaris || js
// +build plan9 solaris js
// Package azureblob provides an interface to the Microsoft Azure blob object storage system
package azureblob

File diff suppressed because it is too large

View file

@ -0,0 +1,69 @@
//go:build !plan9 && !js
package azurefiles
import (
"context"
"math/rand"
"strings"
"testing"
"github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
)
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Authentication", f.InternalTestAuth)
}
var _ fstests.InternalTester = (*Fs)(nil)
func (f *Fs) InternalTestAuth(t *testing.T) {
t.Skip("skipping since this requires authentication credentials which are not part of repo")
shareName := "test-rclone-oct-2023"
testCases := []struct {
name string
options *Options
}{
{
name: "ConnectionString",
options: &Options{
ShareName: shareName,
ConnectionString: "",
},
},
{
name: "AccountAndKey",
options: &Options{
ShareName: shareName,
Account: "",
Key: "",
}},
{
name: "SASUrl",
options: &Options{
ShareName: shareName,
SASURL: "",
}},
}
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
fs, err := newFsFromOptions(context.TODO(), "TestAzureFiles", "", tc.options)
assert.NoError(t, err)
dirName := randomString(10)
assert.NoError(t, fs.Mkdir(context.TODO(), dirName))
})
}
}
const chars = "abcdefghijklmnopqrstuvwzyxABCDEFGHIJKLMNOPQRSTUVWZYX"
func randomString(charCount int) string {
strBldr := strings.Builder{}
for i := 0; i < charCount; i++ {
randPos := rand.Int63n(52)
strBldr.WriteByte(chars[randPos])
}
return strBldr.String()
}

View file

@ -0,0 +1,17 @@
//go:build !plan9 && !js
package azurefiles
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
func TestIntegration(t *testing.T) {
var objPtr *Object
fstests.Run(t, &fstests.Opt{
RemoteName: "TestAzureFiles:",
NilObject: objPtr,
})
}

View file

@ -0,0 +1,7 @@
// Build for azurefiles for unsupported platforms to stop go complaining
// about "no buildable Go source files "
//go:build plan9 || js
// Package azurefiles provides an interface to Microsoft Azure Files
package azurefiles

View file

@ -42,11 +42,11 @@ func TestTimestampIsZero(t *testing.T) {
}
func TestTimestampEqual(t *testing.T) {
assert.False(t, emptyT.Equal(emptyT))
assert.False(t, emptyT.Equal(emptyT)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.False(t, t0.Equal(emptyT))
assert.False(t, emptyT.Equal(t0))
assert.False(t, t0.Equal(t1))
assert.False(t, t1.Equal(t0))
assert.True(t, t0.Equal(t0))
assert.True(t, t1.Equal(t1))
assert.True(t, t0.Equal(t0)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
assert.True(t, t1.Equal(t1)) //nolint:gocritic // Don't include gocritic when running golangci-lint to avoid dupArg: suspicious method call with the same argument and receiver
}

View file

@ -9,6 +9,7 @@ import (
"bytes"
"context"
"crypto/sha1"
"encoding/json"
"errors"
"fmt"
gohash "hash"
@ -59,6 +60,7 @@ const (
defaultChunkSize = 96 * fs.Mebi
defaultUploadCutoff = 200 * fs.Mebi
largeFileCopyCutoff = 4 * fs.Gibi // 5E9 is the max
defaultMaxAge = 24 * time.Hour
)
// Globals
@ -100,7 +102,7 @@ below will cause b2 to return specific errors:
* "force_cap_exceeded"
These will be set in the "X-Bz-Test-Mode" header which is documented
in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration_checklist.html).`,
in the [b2 integrations checklist](https://www.backblaze.com/docs/cloud-storage-integration-checklist).`,
Default: "",
Hide: fs.OptionHideConfigurator,
Advanced: true,
@ -159,7 +161,7 @@ concurrently.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--b2-upload-concurrency" chunks stored at once
in memory.`,
Default: 16,
Default: 4,
Advanced: true,
}, {
Name: "disable_checksum",
@ -192,9 +194,12 @@ Example:
Advanced: true,
}, {
Name: "download_auth_duration",
Help: `Time before the authorization token will expire in s or suffix ms|s|m|h|d.
Help: `Time before the public link authorization token will expire in s or suffix ms|s|m|h|d.
This is used in combination with "rclone link" for making files
accessible to the public and sets the duration before the download
authorization token will expire.
The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.`,
Default: fs.Duration(7 * 24 * time.Hour),
Advanced: true,
@ -239,7 +244,7 @@ See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
// See: https://www.backblaze.com/b2/docs/files.html
// See: https://www.backblaze.com/docs/cloud-storage-files
// Encode invalid UTF-8 bytes as json doesn't handle them properly.
// FIXME: allow /, but not leading, trailing or double
Default: (encoder.Display |
@ -294,13 +299,14 @@ type Fs struct {
// Object describes a b2 object
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
fs *Fs // what this object is part of
remote string // The remote path
id string // b2 id of the file
modTime time.Time // The modified time of the object if known
sha1 string // SHA-1 hash if known
size int64 // Size of the object
mimeType string // Content-Type of the object
meta map[string]string // The object metadata if known - may be nil - with lower case keys
}
// ------------------------------------------------------------
@ -358,7 +364,7 @@ var retryErrorCodes = []int{
504, // Gateway Time-out
}
// shouldRetryNoAuth returns a boolean as to whether this resp and err
// shouldRetryNoReauth returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetryNoReauth(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
@ -399,11 +405,18 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
// errorHandler parses a non 2xx error response into an error
func errorHandler(resp *http.Response) error {
// Decode error response
errResponse := new(api.Error)
err := rest.DecodeJSON(resp, &errResponse)
body, err := rest.ReadBody(resp)
if err != nil {
fs.Debugf(nil, "Couldn't decode error response: %v", err)
fs.Errorf(nil, "Couldn't read error out of body: %v", err)
body = nil
}
// Decode error response if there was one - they can be blank
errResponse := new(api.Error)
if len(body) > 0 {
err = json.Unmarshal(body, errResponse)
if err != nil {
fs.Errorf(nil, "Couldn't decode error response: %v", err)
}
}
if errResponse.Code == "" {
errResponse.Code = "unknown"
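To see why the handler now reads the body first, a minimal standalone sketch (type name hypothetical): decoding an empty body always errors, so blank error responses from B2 have to be tolerated explicitly rather than passed straight to the JSON decoder.

package main

import (
	"encoding/json"
	"fmt"
)

type apiError struct {
	Code string `json:"code"`
}

func main() {
	var e apiError
	// Unmarshalling an empty body fails, which is why errorHandler only
	// decodes when len(body) > 0.
	fmt.Println(json.Unmarshal(nil, &e)) // unexpected end of JSON input
	err := json.Unmarshal([]byte(`{"code":"bad_auth"}`), &e)
	fmt.Println(err, e.Code) // <nil> bad_auth
}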
@ -447,6 +460,14 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
return
}
func (f *Fs) setCopyCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
err = checkUploadChunkSize(cs)
if err == nil {
old, f.opt.CopyCutoff = f.opt.CopyCutoff, cs
}
return
}
// setRoot changes the root of the Fs
func (f *Fs) setRoot(root string) {
f.root = parsePath(root)
@ -497,10 +518,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
f.setRoot(root)
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
ReadMimeType: true,
WriteMimeType: true,
BucketBased: true,
BucketBasedRootOK: true,
ChunkWriterDoesntSeek: true,
}).Fill(ctx, f)
// Set the test flag if required
if opt.TestMode != "" {
@ -1228,7 +1250,7 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
// if oldOnly is true then it deletes only non current files.
//
// Implemented here so we can make sure we delete old versions.
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool, deleteHidden bool, deleteUnfinished bool, maxAge time.Duration) error {
bucket, directory := f.split(dir)
if bucket == "" {
return errors.New("can't purge from root")
@ -1246,7 +1268,7 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
}
}
var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool {
return time.Since(time.Time(timestamp)).Hours() > 24
return time.Since(time.Time(timestamp)) > maxAge
}
// Delete Config.Transfers in parallel
@ -1269,6 +1291,21 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
}
}()
}
if oldOnly {
if deleteHidden && deleteUnfinished {
fs.Infof(f, "cleaning bucket %q of all hidden files, and pending multipart uploads older than %v", bucket, maxAge)
} else if deleteHidden {
fs.Infof(f, "cleaning bucket %q of all hidden files", bucket)
} else if deleteUnfinished {
fs.Infof(f, "cleaning bucket %q of pending multipart uploads older than %v", bucket, maxAge)
} else {
fs.Errorf(f, "cleaning bucket %q of nothing. This should never happen!", bucket)
return nil
}
} else {
fs.Infof(f, "cleaning bucket %q of all files", bucket)
}
last := ""
checkErr(f.list(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", true, 0, true, false, func(remote string, object *api.File, isDirectory bool) error {
if !isDirectory {
@ -1279,14 +1316,14 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
tr := accounting.Stats(ctx).NewCheckingTransfer(oi, "checking")
if oldOnly && last != remote {
// Check current version of the file
if object.Action == "hide" {
if deleteHidden && object.Action == "hide" {
fs.Debugf(remote, "Deleting current version (id %q) as it is a hide marker", object.ID)
toBeDeleted <- object
} else if object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
} else if deleteUnfinished && object.Action == "start" && isUnfinishedUploadStale(object.UploadTimestamp) {
fs.Debugf(remote, "Deleting current version (id %q) as it is a start marker (upload started at %s)", object.ID, time.Time(object.UploadTimestamp).Local())
toBeDeleted <- object
} else {
fs.Debugf(remote, "Not deleting current version (id %q) %q", object.ID, object.Action)
fs.Debugf(remote, "Not deleting current version (id %q) %q dated %v (%v ago)", object.ID, object.Action, time.Time(object.UploadTimestamp).Local(), time.Since(time.Time(object.UploadTimestamp)))
}
} else {
fs.Debugf(remote, "Deleting (id %q)", object.ID)
@ -1308,12 +1345,17 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
// Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purge(ctx, dir, false)
return f.purge(ctx, dir, false, false, false, defaultMaxAge)
}
// CleanUp deletes all the hidden files.
// CleanUp deletes all hidden files and pending multipart uploads older than 24 hours.
func (f *Fs) CleanUp(ctx context.Context) error {
return f.purge(ctx, "", true)
return f.purge(ctx, "", true, true, true, defaultMaxAge)
}
// cleanUp deletes all hidden files and/or pending multipart uploads older than the specified age.
func (f *Fs) cleanUp(ctx context.Context, deleteHidden bool, deleteUnfinished bool, maxAge time.Duration) (err error) {
return f.purge(ctx, "", true, deleteHidden, deleteUnfinished, maxAge)
}
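Taken together, the three entry points map onto the new purge flags like this (a sketch, assuming an *Fs value f, a context ctx, and the time package imported):

_ = f.Purge(ctx, "dir")                         // everything under dir, all versions
_ = f.CleanUp(ctx)                              // hidden files + uploads stale for 24h
_ = f.cleanUp(ctx, false, true, 7*24*time.Hour) // only uploads stale for a week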
// copy does a server-side copy from dstObj <- srcObj
@ -1321,7 +1363,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
// If newInfo is nil then the metadata will be copied otherwise it
// will be replaced with newInfo
func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *api.File) (err error) {
if srcObj.size >= int64(f.opt.CopyCutoff) {
if srcObj.size > int64(f.opt.CopyCutoff) {
if newInfo == nil {
newInfo, err = srcObj.getMetaData(ctx)
if err != nil {
@ -1332,7 +1374,11 @@ func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *
if err != nil {
return err
}
return up.Copy(ctx)
err = up.Copy(ctx)
if err != nil {
return err
}
return dstObj.decodeMetaDataFileInfo(up.info)
}
dstBucket, dstPath := dstObj.split()
@ -1521,7 +1567,7 @@ func (o *Object) Size() int64 {
//
// Make sure it is lower case.
//
// Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html
// Remove unverified prefix - see https://www.backblaze.com/docs/cloud-storage-upload-files-with-the-native-api
// Some tools (e.g. Cyberduck) use this
func cleanSHA1(sha1 string) string {
const unverified = "unverified:"
@ -1548,7 +1594,14 @@ func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp
o.size = Size
// Use the UploadTimestamp if can't get file info
o.modTime = time.Time(UploadTimestamp)
return o.parseTimeString(Info[timeKey])
err = o.parseTimeString(Info[timeKey])
if err != nil {
return err
}
// For now, just set "mtime" in metadata
o.meta = make(map[string]string, 1)
o.meta["mtime"] = o.modTime.Format(time.RFC3339Nano)
return nil
}
// decodeMetaData sets the metadata in the object from an api.File
@ -1650,6 +1703,16 @@ func timeString(modTime time.Time) string {
return strconv.FormatInt(modTime.UnixNano()/1e6, 10)
}
// parseTimeStringHelper converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time
func parseTimeStringHelper(timeString string) (time.Time, error) {
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
if err != nil {
return time.Time{}, err
}
return time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC(), nil
}
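A quick round-trip check of the new helper against its counterpart timeString (a sketch that assumes it sits in the b2 package's tests, with the testing, time, require and assert imports those files already use):

func TestTimeStringRoundTrip(t *testing.T) {
	// timeString encodes a time.Time as decimal milliseconds;
	// parseTimeStringHelper decodes it back as UTC at millisecond precision.
	in := time.Date(2020, 9, 13, 12, 26, 40, 123000000, time.UTC)
	got, err := parseTimeStringHelper(timeString(in))
	require.NoError(t, err)
	assert.True(t, in.Equal(got))
}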
// parseTimeString converts a decimal string number of milliseconds
// elapsed since January 1, 1970 UTC into a time.Time and stores it in
// the modTime variable.
@ -1657,12 +1720,12 @@ func (o *Object) parseTimeString(timeString string) (err error) {
if timeString == "" {
return nil
}
unixMilliseconds, err := strconv.ParseInt(timeString, 10, 64)
modTime, err := parseTimeStringHelper(timeString)
if err != nil {
fs.Debugf(o, "Failed to parse mod time string %q: %v", timeString, err)
return nil
}
o.modTime = time.Unix(unixMilliseconds/1e3, (unixMilliseconds%1e3)*1e6).UTC()
o.modTime = modTime
return nil
}
@ -1739,14 +1802,14 @@ func (file *openFile) Close() (err error) {
// Check to see we read the correct number of bytes
if file.o.Size() != file.bytes {
return fmt.Errorf("object corrupted on transfer - length mismatch (want %d got %d)", file.o.Size(), file.bytes)
return fmt.Errorf("corrupted on transfer: lengths differ want %d vs got %d", file.o.Size(), file.bytes)
}
// Check the SHA1
receivedSHA1 := file.o.sha1
calculatedSHA1 := fmt.Sprintf("%x", file.hash.Sum(nil))
if receivedSHA1 != "" && receivedSHA1 != calculatedSHA1 {
return fmt.Errorf("object corrupted on transfer - SHA1 mismatch (want %q got %q)", receivedSHA1, calculatedSHA1)
return fmt.Errorf("corrupted on transfer: SHA1 hashes differ want %q vs got %q", receivedSHA1, calculatedSHA1)
}
return nil
@ -1816,6 +1879,14 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
ContentType: resp.Header.Get("Content-Type"),
Info: Info,
}
// Embryonic metadata support - just mtime
o.meta = make(map[string]string, 1)
modTime, err := parseTimeStringHelper(info.Info[timeKey])
if err == nil {
o.meta["mtime"] = modTime.Format(time.RFC3339Nano)
}
// When reading files from B2 via cloudflare using
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
@ -1913,13 +1984,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err == nil {
fs.Debugf(o, "File is big enough for chunked streaming")
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil)
up, err := o.fs.newLargeUpload(ctx, o, in, src, o.fs.opt.ChunkSize, false, nil, options...)
if err != nil {
o.fs.putRW(rw)
return err
}
// NB Stream returns the buffer and token
return up.Stream(ctx, rw)
err = up.Stream(ctx, rw)
if err != nil {
return err
}
return o.decodeMetaDataFileInfo(up.info)
} else if err == io.EOF {
fs.Debugf(o, "File has %d bytes, which makes only one chunk. Using direct upload.", n)
defer o.fs.putRW(rw)
@ -1941,7 +2016,10 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(up.info)
}
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return err
}
calculatedSha1, _ := src.Hash(ctx, hash.SHA1)
if calculatedSha1 == "" {
@ -2046,6 +2124,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return o.decodeMetaDataFileInfo(&response)
}
// Get modTime from the source; if --metadata is set, fetch the src metadata and get it from there.
// When metadata support is added to b2, this method will need a more generic name
func (o *Object) getModTime(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption) (time.Time, error) {
modTime := src.ModTime(ctx)
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return time.Time{}, fmt.Errorf("failed to read metadata from source object: %w", err)
}
// merge metadata into request and user metadata
for k, v := range meta {
k = strings.ToLower(k)
// For now, the only metadata we're concerned with is "mtime"
switch k {
case "mtime":
// mtime in meta overrides source ModTime
metaModTime, err := time.Parse(time.RFC3339Nano, v)
if err != nil {
fs.Debugf(o, "failed to parse metadata %s: %q: %v", k, v, err)
} else {
modTime = metaModTime
}
default:
// Do nothing for now
}
}
return modTime, nil
}
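The override relies on the "mtime" value being RFC3339Nano, the same format decodeMetaDataRaw writes out. A standalone sketch of that parse:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The same parse getModTime performs on a metadata "mtime" entry.
	modTime, err := time.Parse(time.RFC3339Nano, "2009-05-06T04:05:06.499Z")
	fmt.Println(modTime.UTC(), err) // 2009-05-06 04:05:06.499 +0000 UTC <nil>
}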
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
@ -2063,7 +2171,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
// Temporary Object under construction
o := &Object{
fs: f,
remote: src.Remote(),
remote: remote,
}
bucket, _ := o.split()
@ -2077,7 +2185,7 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
Concurrency: o.fs.opt.UploadConcurrency,
//LeavePartsOnError: o.fs.opt.LeavePartsOnError,
}
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil)
up, err := f.newLargeUpload(ctx, o, nil, src, f.opt.ChunkSize, false, nil, options...)
return info, up, err
}
@ -2215,8 +2323,56 @@ func (f *Fs) lifecycleCommand(ctx context.Context, name string, arg []string, op
return bucket.LifecycleRules, nil
}
var cleanupHelp = fs.CommandHelp{
Name: "cleanup",
Short: "Remove unfinished large file uploads.",
Long: `This command removes unfinished large file uploads of age greater than
max-age, which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
`,
Opts: map[string]string{
"max-age": "Max age of upload to delete",
},
}
func (f *Fs) cleanupCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
maxAge := defaultMaxAge
if opt["max-age"] != "" {
maxAge, err = fs.ParseDuration(opt["max-age"])
if err != nil {
return nil, fmt.Errorf("bad max-age: %w", err)
}
}
return nil, f.cleanUp(ctx, false, true, maxAge)
}
var cleanupHiddenHelp = fs.CommandHelp{
Name: "cleanup-hidden",
Short: "Remove old versions of files.",
Long: `This command removes any old hidden versions of files.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.
rclone backend cleanup-hidden b2:bucket/path/to/dir
`,
}
func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
return nil, f.cleanUp(ctx, true, false, 0)
}
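Both commands are dispatched through Command below; typical invocations, mirroring the help texts above (bucket name and max-age value are illustrative):

rclone backend cleanup -o max-age=30d b2:bucket
rclone backend cleanup-hidden b2:bucket/path/to/dir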
var commandHelp = []fs.CommandHelp{
lifecycleHelp,
cleanupHelp,
cleanupHiddenHelp,
}
// Command the backend to run a named command
@ -2232,6 +2388,10 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
switch name {
case "lifecycle":
return f.lifecycleCommand(ctx, name, arg, opt)
case "cleanup":
return f.cleanupCommand(ctx, name, arg, opt)
case "cleanup-hidden":
return f.cleanupHiddenCommand(ctx, name, arg, opt)
default:
return nil, fs.ErrorCommandNotFound
}

View file

@ -1,14 +1,29 @@
package b2
import (
"context"
"crypto/sha1"
"fmt"
"path"
"strings"
"testing"
"time"
"github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/bucket"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/lib/version"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Test b2 string encoding
// https://www.backblaze.com/b2/docs/string_encoding.html
// https://www.backblaze.com/docs/cloud-storage-native-api-string-encoding
var encodeTest = []struct {
fullyEncoded string
@ -168,3 +183,304 @@ func TestParseTimeString(t *testing.T) {
}
}
// Return a map of the headers in the options with keys stripped of the "x-bz-info-" prefix
func OpenOptionToMetaData(options []fs.OpenOption) map[string]string {
var headers = make(map[string]string)
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
headers[k[len(headerPrefix):]] = v
}
}
return headers
}
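For illustration, a small fragment inside the b2 test package (this assumes the backend's headerPrefix constant is "x-bz-info-", matching how the test options below are built):

opts := []fs.OpenOption{
	&fs.HTTPOption{Key: "X-Bz-Info-Src_Last_Modified_Millis", Value: "1600000000000"},
}
fmt.Println(OpenOptionToMetaData(opts)) // map[src_last_modified_millis:1600000000000]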
func (f *Fs) internalTestMetadata(t *testing.T, size string, uploadCutoff string, chunkSize string) {
what := fmt.Sprintf("Size%s/UploadCutoff%s/ChunkSize%s", size, uploadCutoff, chunkSize)
t.Run(what, func(t *testing.T) {
ctx := context.Background()
ss := fs.SizeSuffix(0)
err := ss.Set(size)
require.NoError(t, err)
original := random.String(int(ss))
contents := fstest.Gz(t, original)
mimeType := "text/html"
if chunkSize != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(chunkSize)
require.NoError(t, err)
_, err = f.SetUploadChunkSize(ss)
require.NoError(t, err)
}
if uploadCutoff != "" {
ss := fs.SizeSuffix(0)
err := ss.Set(uploadCutoff)
require.NoError(t, err)
_, err = f.SetUploadCutoff(ss)
require.NoError(t, err)
}
item := fstest.NewItem("test-metadata", contents, fstest.Time("2001-05-06T04:05:06.499Z"))
btime := time.Now()
metadata := fs.Metadata{
// Just mtime for now - limit to milliseconds since x-bz-info-src_last_modified_millis can't support any more precision
"mtime": "2009-05-06T04:05:06.499Z",
}
// Need to specify HTTP options with the header prefix since they are passed as-is
options := []fs.OpenOption{
&fs.HTTPOption{Key: "X-Bz-Info-a", Value: "1"},
&fs.HTTPOption{Key: "X-Bz-Info-b", Value: "2"},
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, mimeType, metadata, options...)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
o := obj.(*Object)
gotMetadata, err := o.getMetaData(ctx)
require.NoError(t, err)
// X-Bz-Info-a & X-Bz-Info-b
optMetadata := OpenOptionToMetaData(options)
for k, v := range optMetadata {
got := gotMetadata.Info[k]
assert.Equal(t, v, got, k)
}
// mtime
for k, v := range metadata {
got := o.meta[k]
assert.Equal(t, v, got, k)
}
assert.Equal(t, mimeType, gotMetadata.ContentType, "Content-Type")
// Modification time from the x-bz-info-src_last_modified_millis header
var mtime api.Timestamp
err = mtime.UnmarshalJSON([]byte(gotMetadata.Info[timeKey]))
if err != nil {
fs.Debugf(o, "Bad "+timeHeader+" header: %v", err)
}
assert.Equal(t, item.ModTime, time.Time(mtime), "Modification time")
// Upload time
gotBtime := time.Time(gotMetadata.UploadTimestamp)
dt := gotBtime.Sub(btime)
assert.True(t, dt < time.Minute && dt > -time.Minute, fmt.Sprintf("btime more than 1 minute out want %v got %v delta %v", btime, gotBtime, dt))
t.Run("GzipEncoding", func(t *testing.T) {
// Test that the gzipped file we uploaded can be
// downloaded
checkDownload := func(wantContents string, wantSize int64, wantHash string) {
gotContents := fstests.ReadObject(ctx, t, o, -1)
assert.Equal(t, wantContents, gotContents)
assert.Equal(t, wantSize, o.Size())
gotHash, err := o.Hash(ctx, hash.SHA1)
require.NoError(t, err)
assert.Equal(t, wantHash, gotHash)
}
t.Run("NoDecompress", func(t *testing.T) {
checkDownload(contents, int64(len(contents)), sha1Sum(t, contents))
})
})
})
}
func (f *Fs) InternalTestMetadata(t *testing.T) {
// 1 kB regular file
f.internalTestMetadata(t, "1kiB", "", "")
// 10 MiB large file
f.internalTestMetadata(t, "10MiB", "6MiB", "6MiB")
}
func sha1Sum(t *testing.T, s string) string {
hash := sha1.Sum([]byte(s))
return fmt.Sprintf("%x", hash)
}
// This is adapted from the s3 equivalent.
func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background()
// Small pause to make the LastModified different since AWS
// only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// Create an object
const dirName = "versions"
const fileName = dirName + "/" + "test-versions.txt"
contents := random.String(100)
item := fstest.NewItem(fileName, contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
objMetadata, err := obj.(*Object).getMetaData(ctx)
require.NoError(t, err)
// Small pause
time.Sleep(2 * time.Second)
// Remove it
assert.NoError(t, obj.Remove(ctx))
// Small pause to make the LastModified different since AWS only seems to track them to 1 second granularity
time.Sleep(2 * time.Second)
// And create it with different size and contents
newContents := random.String(101)
newItem := fstest.NewItem(fileName, newContents, fstest.Time("2002-05-06T04:05:06.499999999Z"))
newObj := fstests.PutTestContents(ctx, t, f, &newItem, newContents, true)
newObjMetadata, err := newObj.(*Object).getMetaData(ctx)
require.NoError(t, err)
t.Run("Versions", func(t *testing.T) {
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
// Read the contents
entries, err := f.List(ctx, dirName)
require.NoError(t, err)
tests := 0
var fileNameVersion string
for _, entry := range entries {
t.Log(entry)
remote := entry.Remote()
if remote == fileName {
t.Run("ReadCurrent", func(t *testing.T) {
assert.Equal(t, newContents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
tests++
} else if versionTime, p := version.Remove(remote); !versionTime.IsZero() && p == fileName {
t.Run("ReadVersion", func(t *testing.T) {
assert.Equal(t, contents, fstests.ReadObject(ctx, t, entry.(fs.Object), -1))
})
assert.WithinDuration(t, time.Time(objMetadata.UploadTimestamp), versionTime, time.Second, "object time must be within 1 second of version time")
fileNameVersion = remote
tests++
}
}
assert.Equal(t, 2, tests, "object missing from listing")
// Check we can read the object with a version suffix
t.Run("NewObject", func(t *testing.T) {
o, err := f.NewObject(ctx, fileNameVersion)
require.NoError(t, err)
require.NotNil(t, o)
assert.Equal(t, int64(100), o.Size(), o.Remote())
})
// Check we can make a NewFs from that object with a version suffix
t.Run("NewFs", func(t *testing.T) {
newPath := bucket.Join(fs.ConfigStringFull(f), fileNameVersion)
// Make sure --b2-versions is set in the config of the new remote
fs.Debugf(nil, "oldPath = %q", newPath)
lastColon := strings.LastIndex(newPath, ":")
require.True(t, lastColon >= 0)
newPath = newPath[:lastColon] + ",versions" + newPath[lastColon:]
fs.Debugf(nil, "newPath = %q", newPath)
fNew, err := cache.Get(ctx, newPath)
// This should return pointing to a file
require.Equal(t, fs.ErrorIsFile, err)
require.NotNil(t, fNew)
// With the directory above
assert.Equal(t, dirName, path.Base(fs.ConfigStringFull(fNew)))
})
})
t.Run("VersionAt", func(t *testing.T) {
// We set --b2-version-at for this test so make sure we reset it at the end
defer func() {
f.opt.VersionAt = fs.Time{}
}()
var (
firstObjectTime = time.Time(objMetadata.UploadTimestamp)
secondObjectTime = time.Time(newObjMetadata.UploadTimestamp)
)
for _, test := range []struct {
what string
at time.Time
want []fstest.Item
wantErr error
wantSize int64
}{
{
what: "Before",
at: firstObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterOne",
at: firstObjectTime.Add(time.Second),
want: append([]fstest.Item{item}, fstests.InternalTestFiles...),
wantSize: 100,
},
{
what: "AfterDelete",
at: secondObjectTime.Add(-time.Second),
want: fstests.InternalTestFiles,
wantErr: fs.ErrorObjectNotFound,
},
{
what: "AfterTwo",
at: secondObjectTime.Add(time.Second),
want: append([]fstest.Item{newItem}, fstests.InternalTestFiles...),
wantSize: 101,
},
} {
t.Run(test.what, func(t *testing.T) {
f.opt.VersionAt = fs.Time(test.at)
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size())
// }
//})
})
}
})
t.Run("Cleanup", func(t *testing.T) {
require.NoError(t, f.cleanUp(ctx, true, false, 0))
items := append([]fstest.Item{newItem}, fstests.InternalTestFiles...)
fstest.CheckListing(t, f, items)
// Set --b2-versions for this test
f.opt.Versions = true
defer func() {
f.opt.Versions = false
}()
fstest.CheckListing(t, f, items)
})
// Purge gets tested later
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Metadata", f.InternalTestMetadata)
t.Run("Versions", f.InternalTestVersions)
}
var _ fstests.InternalTester = (*Fs)(nil)


@ -28,7 +28,12 @@ func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
func (f *Fs) SetCopyCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setCopyCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
_ fstests.SetCopyCutoffer = (*Fs)(nil)
)


@ -1,6 +1,6 @@
// Upload large files for b2
//
// Docs - https://www.backblaze.com/b2/docs/large_files.html
// Docs - https://www.backblaze.com/docs/cloud-storage-large-files
package b2
@ -91,7 +91,7 @@ type largeUpload struct {
// newLargeUpload starts an upload of object o from in with metadata in src
//
// If newInfo is set then metadata from that will be used instead of reading it from src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) {
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File, options ...fs.OpenOption) (up *largeUpload, err error) {
size := src.Size()
parts := 0
chunkSize := defaultChunkSize
@ -104,11 +104,6 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
parts++
}
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
}
bucket, bucketPath := o.split()
bucketID, err := f.getBucketID(ctx, bucket)
if err != nil {
@ -118,12 +113,27 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
BucketID: bucketID,
Name: f.opt.Enc.FromStandardPath(bucketPath),
}
optionsToSend := make([]fs.OpenOption, 0, len(options))
if newInfo == nil {
modTime := src.ModTime(ctx)
modTime, err := o.getModTime(ctx, src, options)
if err != nil {
return nil, err
}
request.ContentType = fs.MimeType(ctx, src)
request.Info = map[string]string{
timeKey: timeString(modTime),
}
// Custom upload headers - remove header prefix since they are sent in the body
for _, option := range options {
k, v := option.Header()
k = strings.ToLower(k)
if strings.HasPrefix(k, headerPrefix) {
request.Info[k[len(headerPrefix):]] = v
} else {
optionsToSend = append(optionsToSend, option)
}
}
// Set the SHA1 if known
if !o.fs.opt.DisableCheckSum || doCopy {
if calculatedSha1, err := src.Hash(ctx, hash.SHA1); err == nil && calculatedSha1 != "" {
@ -134,6 +144,11 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
request.ContentType = newInfo.ContentType
request.Info = newInfo.Info
}
opts := rest.Opts{
Method: "POST",
Path: "/b2_start_large_file",
Options: optionsToSend,
}
var response api.StartLargeFileResponse
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &request, &response)
@ -169,24 +184,26 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
// This should be returned with returnUploadURL when finished
func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadPartURLResponse, err error) {
up.uploadMu.Lock()
defer up.uploadMu.Unlock()
if len(up.uploads) == 0 {
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed to get upload URL: %w", err)
}
} else {
if len(up.uploads) > 0 {
upload, up.uploads = up.uploads[0], up.uploads[1:]
up.uploadMu.Unlock()
return upload, nil
}
up.uploadMu.Unlock()
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err = up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed to get upload URL: %w", err)
}
return upload, nil
}
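The restructure also narrows the mutex scope: popping a cached URL happens under the lock, but the POST for a fresh URL now runs unlocked, so concurrent part uploads are no longer serialized behind the network round trip. The general shape of the pattern, with hypothetical names:

mu.Lock()
if len(pool) > 0 {
	item, pool = pool[0], pool[1:]
	mu.Unlock()
	return item, nil
}
mu.Unlock()
return fetchNew(ctx) // network I/O happens without holding mu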
@ -391,10 +408,11 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
hasMoreParts = true
)
up.size = initialUploadBlock.Size()
up.parts = 0
for part := 0; hasMoreParts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
var rw *pool.RW
if part == 1 {
if part == 0 {
rw = initialUploadBlock
} else {
rw = up.f.getRW(false)
@ -409,12 +427,18 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
// Read the chunk
var n int64
if part == 1 {
if part == 0 {
n = rw.Size()
} else {
n, err = io.CopyN(rw, up.in, up.chunkSize)
if err == io.EOF {
fs.Debugf(up.o, "Read less than a full chunk, making this the last one.")
if n == 0 {
fs.Debugf(up.o, "Not sending empty chunk after EOF - ending.")
up.f.putRW(rw)
break
} else {
fs.Debugf(up.o, "Read less than a full chunk %d, making this the last one.", n)
}
hasMoreParts = false
} else if err != nil {
// other kinds of errors indicate failure
@ -424,7 +448,7 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
}
// Keep stats up to date
up.parts = part
up.parts += 1
up.size += n
if part > maxParts {
up.f.putRW(rw)
@ -454,7 +478,7 @@ func (up *largeUpload) Copy(ctx context.Context) (err error) {
remaining = up.size
)
g.SetLimit(up.f.opt.UploadConcurrency)
for part := 0; part <= up.parts; part++ {
for part := 0; part < up.parts; part++ {
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in copying all the other parts.
if gCtx.Err() != nil {


@ -167,19 +167,7 @@ type PreUploadCheckResponse struct {
// PreUploadCheckConflict is returned in the ContextInfo error field
// from PreUploadCheck when the error code is "item_name_in_use"
type PreUploadCheckConflict struct {
Conflicts struct {
Type string `json:"type"`
ID string `json:"id"`
FileVersion struct {
Type string `json:"type"`
ID string `json:"id"`
Sha1 string `json:"sha1"`
} `json:"file_version"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
Sha1 string `json:"sha1"`
Name string `json:"name"`
} `json:"conflicts"`
Conflicts ItemMini `json:"conflicts"`
}
// UpdateFileModTime is used in Update File Info


@ -154,7 +154,7 @@ func init() {
Default: "",
Help: `Impersonate this user ID when using a service account.
Settng this flag allows rclone, when using a JWT service account, to
Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can be found for
@ -380,7 +380,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.Item, err error) {
// defer fs.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
// defer log.Trace(f, "path=%q", path)("info=%+v, err=%v", &info, &err)
leaf, directoryID, err := f.dirCache.FindPath(ctx, path, false)
if err != nil {
if err == fs.ErrorDirNotFound {
@ -389,20 +389,30 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
return nil, err
}
found, err := f.listAll(ctx, directoryID, false, true, true, func(item *api.Item) bool {
if strings.EqualFold(item.Name, leaf) {
info = item
return true
}
return false
// Use preupload to find the ID
itemMini, err := f.preUploadCheck(ctx, leaf, directoryID, -1)
if err != nil {
return nil, err
}
if itemMini == nil {
return nil, fs.ErrorObjectNotFound
}
// Now we have the ID we can look up the object proper
opts := rest.Opts{
Method: "GET",
Path: "/files/" + itemMini.ID,
Parameters: fieldsValue(),
}
var item api.Item
err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
if !found {
return nil, fs.ErrorObjectNotFound
}
return info, nil
return &item, nil
}
// errorHandler parses a non 2xx error response into an error
@ -762,7 +772,7 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
//
// It returns "", nil if the file is good to go
// It returns "ID", nil if the file must be updated
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (ID string, err error) {
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (item *api.ItemMini, err error) {
check := api.PreUploadCheck{
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
@ -787,16 +797,16 @@ func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size
var conflict api.PreUploadCheckConflict
err = json.Unmarshal(apiErr.ContextInfo, &conflict)
if err != nil {
return "", fmt.Errorf("pre-upload check: JSON decode failed: %w", err)
return nil, fmt.Errorf("pre-upload check: JSON decode failed: %w", err)
}
if conflict.Conflicts.Type != api.ItemTypeFile {
return "", fmt.Errorf("pre-upload check: can't overwrite non file with file: %w", err)
return nil, fs.ErrorIsDir
}
return conflict.Conflicts.ID, nil
return &conflict.Conflicts, nil
}
return "", fmt.Errorf("pre-upload check: %w", err)
return nil, fmt.Errorf("pre-upload check: %w", err)
}
return "", nil
return nil, nil
}
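A sketch of how callers now interpret the result (mirroring Put below):

item, err := f.preUploadCheck(ctx, leaf, directoryID, size)
switch {
case err != nil: // hard failure, or fs.ErrorIsDir when the name clashes with a folder
case item == nil: // no conflict - upload as a brand new file
default: // conflict - update the existing file via item.ID
}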
// Put the object
@ -817,11 +827,11 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// Preflight check the upload, which returns the ID if the
// object already exists
ID, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
item, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if ID == "" {
if item == nil {
return f.PutUnchecked(ctx, in, src, options...)
}
@ -829,7 +839,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
o := &Object{
fs: f,
remote: remote,
id: ID,
id: item.ID,
}
return o, o.Update(ctx, in, src, options...)
}
@ -1197,6 +1207,12 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
return err
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// ChangeNotify calls the passed function with a path that has had changes.
// If the implementation uses polling, it should adhere to the given interval.
//
@ -1212,7 +1228,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
}
// box can send duplicate Event IDs. Use this map to track and filter
// the ones we've alread processed.
// the ones we've already processed.
processedEventIDs := make(map[string]time.Time)
var ticker *time.Ticker
@ -1709,6 +1725,7 @@ var (
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)


@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
// Package cache implements a virtual provider to cache existing remotes.
package cache
@ -410,18 +409,16 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else {
if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
} else if opt.PlexPassword != "" && opt.PlexUsername != "" {
decPass, err := obscure.Reveal(opt.PlexPassword)
if err != nil {
decPass = opt.PlexPassword
}
f.plexConnector, err = newPlexConnector(f, opt.PlexURL, opt.PlexUsername, decPass, opt.PlexInsecure, func(token string) {
m.Set("plex_token", token)
})
if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err)
}
}
}


@ -1,5 +1,4 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@ -11,7 +10,6 @@ import (
goflag "flag"
"fmt"
"io"
"log"
"math/rand"
"os"
"path"
@ -30,10 +28,11 @@ import (
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs/vfsflags"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/stretchr/testify/require"
)
@ -93,7 +92,7 @@ func TestMain(m *testing.M) {
goflag.Parse()
var rc int
log.Printf("Running with the following params: \n remote: %v", remoteName)
fs.Logf(nil, "Running with the following params: \n remote: %v", remoteName)
runInstance = newRun()
rc = m.Run()
os.Exit(rc)
@ -123,10 +122,10 @@ func TestInternalListRootAndInnerRemotes(t *testing.T) {
/* TODO: is this testing something?
func TestInternalVfsCache(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 30
vfscommon.Opt.DirCacheTime = time.Second * 30
testSize := int64(524288000)
vfsflags.Opt.CacheMode = vfs.CacheModeWrites
vfscommon.Opt.CacheMode = vfs.CacheModeWrites
id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, map[string]string{"writes": "true", "info_age": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
@ -338,7 +337,7 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
func TestInternalWrappedWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tiwwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@ -368,7 +367,7 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
func TestInternalLargeWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tilwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second)
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote")
@ -408,7 +407,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
// update in the wrapped fs
originalSize, err := runInstance.size(t, rootFs, "data.bin")
require.NoError(t, err)
log.Printf("original size: %v", originalSize)
fs.Logf(nil, "original size: %v", originalSize)
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
@ -417,7 +416,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
if runInstance.rootIsCrypt {
data2, err = base64.StdEncoding.DecodeString(cryptedText3Base64)
require.NoError(t, err)
expectedSize = expectedSize + 1 // FIXME newline gets in, likely test data issue
expectedSize++ // FIXME newline gets in, likely test data issue
} else {
data2 = []byte("test content")
}
@ -425,7 +424,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
err = o.Update(context.Background(), bytes.NewReader(data2), objInfo)
require.NoError(t, err)
require.Equal(t, int64(len(data2)), o.Size())
log.Printf("updated size: %v", len(data2))
fs.Logf(nil, "updated size: %v", len(data2))
// get a new instance from the cache
if runInstance.wrappedIsExternal {
@ -485,49 +484,49 @@ func TestInternalMoveWithNotify(t *testing.T) {
err = runInstance.retryBlock(func() error {
li, err := runInstance.list(t, rootFs, "test")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 2 {
log.Printf("not expected listing /test: %v", li)
fs.Logf(nil, "not expected listing /test: %v", li)
return fmt.Errorf("not expected listing /test: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 0 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
li, err = runInstance.list(t, rootFs, "test/second")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/second: %v", li)
fs.Logf(nil, "not expected listing /test/second: %v", li)
return fmt.Errorf("not expected listing /test/second: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "data.bin" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/second/data.bin" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing: %v", li)
fs.Logf(nil, "complete listing: %v", li)
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@ -577,43 +576,43 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
err = runInstance.retryBlock(func() error {
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
if !found {
log.Printf("not found /test")
fs.Logf(nil, "not found /test")
return fmt.Errorf("not found /test")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one")))
if !found {
log.Printf("not found /test/one")
fs.Logf(nil, "not found /test/one")
return fmt.Errorf("not found /test/one")
}
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2")))
if !found {
log.Printf("not found /test/one/test2")
fs.Logf(nil, "not found /test/one/test2")
return fmt.Errorf("not found /test/one/test2")
}
li, err := runInstance.list(t, rootFs, "test/one")
if err != nil {
log.Printf("err: %v", err)
fs.Logf(nil, "err: %v", err)
return err
}
if len(li) != 1 {
log.Printf("not expected listing /test/one: %v", li)
fs.Logf(nil, "not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li)
}
if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "test2" {
log.Printf("not expected name: %v", fi.Name())
fs.Logf(nil, "not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name())
}
} else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/one/test2" {
log.Printf("not expected remote: %v", di.Remote())
fs.Logf(nil, "not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote())
}
} else {
log.Printf("unexpected listing: %v", li)
fs.Logf(nil, "unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li)
}
log.Printf("complete listing /test/one/test2")
fs.Logf(nil, "complete listing /test/one/test2")
return nil
}, 12, time.Second*10)
require.NoError(t, err)
@ -708,7 +707,7 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
func TestInternalExpiredEntriesRemoved(t *testing.T) {
id := fmt.Sprintf("tieer%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 4) // needs to be lower than the defined
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
@ -743,7 +742,7 @@ func TestInternalExpiredEntriesRemoved(t *testing.T) {
}
func TestInternalBug2117(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 10
vfscommon.Opt.DirCacheTime = fs.Duration(time.Second * 10)
id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
@ -771,24 +770,24 @@ func TestInternalBug2117(t *testing.T) {
di, err := runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
time.Sleep(time.Second * 30)
di, err = runInstance.list(t, rootFs, "test/dir1/dir2")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 1)
di, err = runInstance.list(t, rootFs, "test/dir1")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
di, err = runInstance.list(t, rootFs, "test")
require.NoError(t, err)
log.Printf("len: %v", len(di))
fs.Logf(nil, "len: %v", len(di))
require.Len(t, di, 4)
}
@ -829,7 +828,7 @@ func newRun() *run {
} else {
r.tmpUploadDir = uploadDir
}
log.Printf("Temp Upload Dir: %v", r.tmpUploadDir)
fs.Logf(nil, "Temp Upload Dir: %v", r.tmpUploadDir)
return r
}
@ -850,8 +849,8 @@ func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string {
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, flags map[string]string) (fs.Fs, *cache.Persistent) {
fstest.Initialise()
remoteExists := false
for _, s := range config.FileSections() {
if s == remote {
for _, s := range config.GetRemotes() {
if s.Name == remote {
remoteExists = true
}
}
@ -875,12 +874,12 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
cacheRemote := remote
if !remoteExists {
localRemote := remote + "-local"
config.FileSet(localRemote, "type", "local")
config.FileSet(localRemote, "nounc", "true")
config.FileSetValue(localRemote, "type", "local")
config.FileSetValue(localRemote, "nounc", "true")
m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else {
remoteType := config.FileGet(remote, "type")
remoteType := config.GetValue(remote, "type")
if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil
@ -891,14 +890,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2)
}
remoteRemote := config.FileGet(remote, "remote")
remoteRemote := config.GetValue(remote, "remote")
if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil
}
remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type")
remoteType := config.GetValue(remoteWrapping, "type")
if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil
@ -935,8 +934,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
if purge {
_ = f.Features().Purge(context.Background(), "")
require.NoError(t, err)
_ = operations.Purge(context.Background(), f, "")
}
err = f.Mkdir(context.Background(), "")
require.NoError(t, err)
@ -949,7 +947,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
}
func (r *run) cleanupFs(t *testing.T, f fs.Fs) {
err := f.Features().Purge(context.Background(), "")
err := operations.Purge(context.Background(), f, "")
require.NoError(t, err)
cfs, err := r.getCacheFs(f)
require.NoError(t, err)
@ -1193,7 +1191,7 @@ func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) e
func (r *run) cleanSize(t *testing.T, size int64) int64 {
if r.rootIsCrypt {
denominator := int64(65536 + 16)
size = size - 32
size -= 32
quotient := size / denominator
remainder := size % denominator
return (quotient*65536 + remainder - 16)

View file

@ -1,7 +1,6 @@
// Test Cache filesystem interface
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test
@ -16,10 +15,11 @@ import (
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt", "OpenChunkWriter", "DirSetModTime", "MkdirMetadata"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata", "SetMetadata"},
UnimplementableDirectoryMethods: []string{"Metadata", "SetMetadata", "SetModTime"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
})
}


@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9 || js
// +build plan9 js
// Package cache implements a virtual provider to cache existing remotes.
package cache


@ -1,5 +1,4 @@
//go:build !plan9 && !js && !race
// +build !plan9,!js,!race
package cache_test


@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
@ -119,7 +118,7 @@ func (r *Handle) startReadWorkers() {
r.scaleWorkers(totalWorkers)
}
// scaleOutWorkers will increase the worker pool count by the provided amount
// scaleWorkers will increase the worker pool count by the provided amount
func (r *Handle) scaleWorkers(desired int) {
current := r.workers
if current == desired {
@ -209,7 +208,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
offset := chunkStart % int64(r.cacheFs().opt.ChunkSize)
// we align the start offset of the first chunk to a likely chunk in the storage
chunkStart = chunkStart - offset
chunkStart -= offset
r.queueOffset(chunkStart)
found := false
@ -328,7 +327,7 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))
if chunkStart >= int64(r.cacheFs().opt.ChunkSize) {
chunkStart = chunkStart - int64(r.cacheFs().opt.ChunkSize)
chunkStart -= int64(r.cacheFs().opt.ChunkSize)
}
r.queueOffset(chunkStart)
@ -416,10 +415,8 @@ func (w *worker) run() {
continue
}
}
} else {
if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
} else if w.r.storage().HasChunk(w.r.cachedObject, chunkStart) {
continue
}
chunkEnd := chunkStart + int64(w.r.cacheFs().opt.ChunkSize)


@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@ -1,5 +1,4 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache


@ -1,3 +1,6 @@
//go:build !plan9 && !js
// +build !plan9,!js
package cache
import bolt "go.etcd.io/bbolt"


@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/encoder"
)
// Chunker's composite files have one or more chunks
@ -101,8 +102,10 @@ var (
//
// And still chunker's primary function is to chunk large files
// rather than serve as a generic metadata container.
const maxMetadataSize = 1023
const maxMetadataSizeWritten = 255
const (
maxMetadataSize = 1023
maxMetadataSizeWritten = 255
)
// Current/highest supported metadata format.
const metadataVersion = 2
@ -305,7 +308,6 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
root: rpath,
opt: *opt,
}
cache.PinUntilFinalized(f.base, f)
f.dirSort = true // processEntries requires that meta Objects precede data chunks at the moment.
if err := f.configure(opt.NameFormat, opt.MetaFormat, opt.HashType, opt.Transactions); err != nil {
@ -317,26 +319,41 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
// i.e. `rpath` does not exist in the wrapped remote, but chunker
// detects a composite file because it finds the first chunk!
// (yet can't satisfy fstest.CheckListing, will ignore)
if err == nil && !f.useMeta && strings.Contains(rpath, "/") {
if err == nil && !f.useMeta {
firstChunkPath := f.makeChunkName(remotePath, 0, "", "")
_, testErr := cache.Get(ctx, baseName+firstChunkPath)
newBase, testErr := cache.Get(ctx, baseName+firstChunkPath)
if testErr == fs.ErrorIsFile {
f.base = newBase
err = testErr
}
}
cache.PinUntilFinalized(f.base, f)
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
// Note 1: the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs.
// Note 2: features.Fill() points features.PutStream to our PutStream,
// but features.Mask() will nullify it if wrappedFs does not have it.
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: true,
ReadMimeType: false, // Object.MimeType not supported
WriteMimeType: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
CaseInsensitive: true,
DuplicateFiles: true,
ReadMimeType: false, // Object.MimeType not supported
WriteMimeType: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f).Mask(ctx, baseFs).WrapsFs(f, baseFs)
f.features.Disable("ListR") // Recursive listing may cause chunker skip files
@ -813,8 +830,7 @@ func (f *Fs) processEntries(ctx context.Context, origEntries fs.DirEntries, dirP
}
case fs.Directory:
isSubdir[entry.Remote()] = true
wrapDir := fs.NewDirCopy(ctx, entry)
wrapDir.SetRemote(entry.Remote())
wrapDir := fs.NewDirWrapper(entry.Remote(), entry)
tempEntries = append(tempEntries, wrapDir)
default:
if f.opt.FailHard {
@ -947,6 +963,11 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
if caseInsensitive {
sameMain = strings.EqualFold(mainRemote, remote)
if sameMain && f.base.Features().IsLocal {
// on local, make sure the EqualFold still holds true when accounting for encoding.
// sometimes paths with special characters will only normalize the same way in Standard Encoding.
sameMain = strings.EqualFold(encoder.OS.FromStandardPath(mainRemote), encoder.OS.FromStandardPath(remote))
}
} else {
sameMain = mainRemote == remote
}
@ -960,13 +981,13 @@ func (f *Fs) scanObject(ctx context.Context, remote string, quickScan bool) (fs.
}
continue
}
//fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
// fs.Debugf(f, "%q belongs to %q as chunk %d", entryRemote, mainRemote, chunkNo)
if err := o.addChunk(entry, chunkNo); err != nil {
return nil, err
}
}
if o.main == nil && (o.chunks == nil || len(o.chunks) == 0) {
if o.main == nil && len(o.chunks) == 0 {
// Scanning hasn't found data chunks with conforming names.
if f.useMeta || quickScan {
// Metadata is required but absent and there are no chunks.
@ -1122,8 +1143,8 @@ func (o *Object) readXactID(ctx context.Context) (xactID string, err error) {
// put implements Put, PutStream, PutUnchecked, Update
func (f *Fs) put(
ctx context.Context, in io.Reader, src fs.ObjectInfo, remote string, options []fs.OpenOption,
basePut putFn, action string, target fs.Object) (obj fs.Object, err error) {
basePut putFn, action string, target fs.Object,
) (obj fs.Object, err error) {
// Perform consistency checks
if err := f.forbidChunk(src, remote); err != nil {
return nil, fmt.Errorf("%s refused: %w", action, err)
@ -1563,6 +1584,14 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.base.Mkdir(ctx, dir)
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.base.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@ -1880,6 +1909,14 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return do(ctx, srcFs.base, srcRemote, dstRemote)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.base.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@ -1928,7 +1965,7 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
return
}
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
//fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
// fs.Debugf(f, "ChangeNotify: path %q entryType %d", path, entryType)
if entryType == fs.EntryObject {
mainPath, _, _, xactID := f.parseChunkName(path)
metaXactID := ""
@ -2540,6 +2577,8 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)

View file

@ -36,6 +36,7 @@ func TestIntegration(t *testing.T) {
"GetTier",
"SetTier",
"Metadata",
"SetMetadata",
},
UnimplementableFsMethods: []string{
"PublicLink",

View file

@ -222,18 +222,23 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
}
// check features
var features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f)
canMove := true
for _, u := range f.upstreams {
@ -440,6 +445,32 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return u.f.Mkdir(ctx, uRemote)
}
// MkdirMetadata makes the directory passed in as dir
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
}
do := u.f.Features().MkdirMetadata
if do == nil {
return nil, fs.ErrorNotImplemented
}
newDir, err := do(ctx, uRemote, metadata)
if err != nil {
return nil, err
}
entries := fs.DirEntries{newDir}
entries, err = u.wrapEntries(ctx, entries)
if err != nil {
return nil, err
}
newDir, ok := entries[0].(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be fs.Directory", entries[0])
}
return newDir, nil
}
// purge the upstream or fallback to a slow way
func (u *upstream) purge(ctx context.Context, dir string) (err error) {
if do := u.f.Features().Purge; do != nil {
@ -755,12 +786,11 @@ func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.D
case fs.Object:
entries[i] = u.newObject(x)
case fs.Directory:
newDir := fs.NewDirCopy(ctx, x)
newPath, err := u.pathAdjustment.do(newDir.Remote())
newPath, err := u.pathAdjustment.do(x.Remote())
if err != nil {
return nil, err
}
newDir.SetRemote(newPath)
newDir := fs.NewDirWrapper(newPath, x)
entries[i] = newDir
default:
return nil, fmt.Errorf("unknown entry type %T", entry)
@ -783,7 +813,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if f.root == "" && dir == "" {
entries = make(fs.DirEntries, 0, len(f.upstreams))
for combineDir := range f.upstreams {
d := fs.NewDir(combineDir, f.when)
d := fs.NewLimitedDirWrapper(combineDir, fs.NewDir(combineDir, f.when))
entries = append(entries, d)
}
return entries, nil
@ -965,6 +995,22 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
return do(ctx, uDirs)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
u, uDir, err := f.findUpstream(dir)
if err != nil {
return err
}
if uDir == "" {
fs.Debugf(dir, "Can't set modtime on upstream root. skipping.")
return nil
}
if do := u.f.Features().DirSetModTime; do != nil {
return do(ctx, uDir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@ -1073,6 +1119,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
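For context, a hedged sketch of the caller side of this optional interface: the caller upgrades the object with a type assertion, exactly mirroring the delegation above (the helper name is illustrative):

// trySetMetadata sets metadata on any object that supports it.
func trySetMetadata(ctx context.Context, o fs.Object, m fs.Metadata) error {
	do, ok := o.(fs.SetMetadataer)
	if !ok {
		return fs.ErrorNotImplemented
	}
	return do.SetMetadata(ctx, m)
}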
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
@ -1099,6 +1156,8 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.OpenWriterAter = (*Fs)(nil)
_ fs.FullObject = (*Object)(nil)

View file

@ -14,6 +14,7 @@ import (
"fmt"
"io"
"os"
"path"
"regexp"
"strings"
"time"
@ -37,6 +38,7 @@ import (
const (
initialChunkSize = 262144 // Initial and max sizes of chunks when reading parts of the file. Currently
maxChunkSize = 8388608 // at 256 KiB and 8 MiB.
chunkStreams = 0 // Streams to use for reading
bufferSize = 8388608
heuristicBytes = 1048576
@ -172,21 +174,33 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
opt: *opt,
mode: compressionModeFromName(opt.CompressionMode),
}
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: false,
WriteMimeType: false,
GetTier: true,
SetTier: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: false,
WriteMimeType: false,
GetTier: true,
SetTier: true,
BucketBased: true,
CanHaveEmptyDirectories: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
// We support reading MIME types no matter the wrapped fs
f.features.ReadMimeType = true
@ -442,7 +456,7 @@ func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.Mul
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ %q vs %q", ht, srcHash, dstHash)
return fmt.Errorf("corrupted on transfer: %v compressed hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
}
return nil
}
@ -776,6 +790,14 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, dir)
}
// MkdirMetadata makes the directory passed in as dir
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.Fs.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@ -919,6 +941,14 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return do(ctx, srcFs.Fs, srcRemote, dstRemote)
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.Fs.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// CleanUp the trash in the Fs
//
// Implement this if you have a way of emptying the trash or
@ -1257,6 +1287,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ht hash.Type) (string, error) {
@ -1322,7 +1363,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
}
}
// Get a chunkedreader for the wrapped object
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize)
chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize, chunkStreams)
// Get file handle
var file io.Reader
if offset != 0 {
@ -1489,6 +1530,8 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)

View file

@ -329,7 +329,7 @@ func (c *Cipher) obfuscateSegment(plaintext string) string {
for _, runeValue := range plaintext {
dir += int(runeValue)
}
dir = dir % 256
dir %= 256
// We'll use this number to store in the result filename...
var result bytes.Buffer
@ -450,7 +450,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
if pos >= 26 {
pos -= 6
}
pos = pos - thisdir
pos -= thisdir
if pos < 0 {
pos += 52
}
@ -888,7 +888,7 @@ func (fh *decrypter) fillBuffer() (err error) {
fs.Errorf(nil, "crypt: ignoring: %v", ErrorEncryptedBadBlock)
// Zero out the bad block and continue
for i := range (*fh.buf)[:n] {
(*fh.buf)[i] = 0
fh.buf[i] = 0
}
}
fh.bufIndex = 0

View file

@ -130,6 +130,16 @@ trying to recover an encrypted file with errors and it is desired to
recover as much of the file as possible.`,
Default: false,
Advanced: true,
}, {
Name: "strict_names",
Help: `If set, this will raise an error when crypt comes across a filename that can't be decrypted.
(By default, rclone will just log a NOTICE and continue as normal.)
This can happen if encrypted and unencrypted files are stored in the same
directory (which is not recommended). It may also indicate a more serious
problem that should be investigated.`,
Default: false,
Advanced: true,
}, {
Name: "filename_encoding",
Help: `How to encode the encrypted filename to text string.
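A hedged usage note for the new option above: assuming it surfaces as a command line flag via rclone's usual backend_option to --backend-option mapping, a strict listing would look like

    rclone lsf secret: --crypt-strict-names

With the flag set, an undecryptable name aborts the operation with an error instead of the default log-a-NOTICE-and-continue behaviour.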
@ -253,22 +263,34 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
cipher: cipher,
}
cache.PinUntilFinalized(f.Fs, f)
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
// the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs
f.features = (&fs.Features{
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false,
BucketBased: true,
CanHaveEmptyDirectories: true,
SetTier: true,
GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
return f, err
@ -287,6 +309,7 @@ type Options struct {
PassBadBlocks bool `config:"pass_bad_blocks"`
FilenameEncoding string `config:"filename_encoding"`
Suffix string `config:"suffix"`
StrictNames bool `config:"strict_names"`
}
// Fs represents a wrapped fs.Fs
@ -321,45 +344,64 @@ func (f *Fs) String() string {
}
// Encrypt an object file name to entries.
func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) {
func (f *Fs) add(entries *fs.DirEntries, obj fs.Object) error {
remote := obj.Remote()
decryptedRemote, err := f.cipher.DecryptFileName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable file name: %v", err)
return
if f.opt.StrictNames {
return fmt.Errorf("%s: undecryptable file name detected: %v", remote, err)
}
fs.Logf(remote, "Skipping undecryptable file name: %v", err)
return nil
}
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newObject(obj))
return nil
}
// Encrypt a directory file name to entries.
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) {
func (f *Fs) addDir(ctx context.Context, entries *fs.DirEntries, dir fs.Directory) error {
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Skipping undecryptable dir name: %v", err)
return
if f.opt.StrictNames {
return fmt.Errorf("%s: undecryptable dir name detected: %v", remote, err)
}
fs.Logf(remote, "Skipping undecryptable dir name: %v", err)
return nil
}
if f.opt.ShowMapping {
fs.Logf(decryptedRemote, "Encrypts to %q", remote)
}
*entries = append(*entries, f.newDir(ctx, dir))
return nil
}
// Encrypt some directory entries. This alters entries returning it as newEntries.
func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntries fs.DirEntries, err error) {
newEntries = entries[:0] // in place filter
errors := 0
var firsterr error
for _, entry := range entries {
switch x := entry.(type) {
case fs.Object:
f.add(&newEntries, x)
err = f.add(&newEntries, x)
case fs.Directory:
f.addDir(ctx, &newEntries, x)
err = f.addDir(ctx, &newEntries, x)
default:
return nil, fmt.Errorf("unknown object type %T", entry)
}
if err != nil {
errors++
if firsterr == nil {
firsterr = err
}
}
}
if firsterr != nil {
return nil, fmt.Errorf("there were %v undecryptable name errors. first error: %v", errors, firsterr)
}
return newEntries, nil
}
@ -478,7 +520,7 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hash differ src %q vs dst %q", ht, srcHash, dstHash)
return nil, fmt.Errorf("corrupted on transfer: %v encrypted hashes differ src(%s) %q vs dst(%s) %q", ht, f.Fs, srcHash, o.Fs(), dstHash)
}
fs.Debugf(src, "%v = %s OK", ht, srcHash)
}
@ -513,6 +555,37 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.Fs.Mkdir(ctx, f.cipher.EncryptDirName(dir))
}
// MkdirMetadata makes the directory passed in as dir
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
do := f.Fs.Features().MkdirMetadata
if do == nil {
return nil, fs.ErrorNotImplemented
}
newDir, err := do(ctx, f.cipher.EncryptDirName(dir), metadata)
if err != nil {
return nil, err
}
var entries = make(fs.DirEntries, 0, 1)
err = f.addDir(ctx, &entries, newDir)
if err != nil {
return nil, err
}
newDir, ok := entries[0].(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be fs.Directory", entries[0])
}
return newDir, nil
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
do := f.Fs.Features().DirSetModTime
if do == nil {
return fs.ErrorNotImplemented
}
return do(ctx, f.cipher.EncryptDirName(dir), modTime)
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
@ -754,7 +827,7 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
}
out := make([]fs.Directory, len(dirs))
for i, dir := range dirs {
out[i] = fs.NewDirCopy(ctx, dir).SetRemote(f.cipher.EncryptDirName(dir.Remote()))
out[i] = fs.NewDirWrapper(f.cipher.EncryptDirName(dir.Remote()), dir)
}
return do(ctx, out)
}
@ -990,14 +1063,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// newDir returns a dir with the Name decrypted
func (f *Fs) newDir(ctx context.Context, dir fs.Directory) fs.Directory {
newDir := fs.NewDirCopy(ctx, dir)
remote := dir.Remote()
decryptedRemote, err := f.cipher.DecryptDirName(remote)
if err != nil {
fs.Debugf(remote, "Undecryptable dir name: %v", err)
} else {
newDir.SetRemote(decryptedRemote)
remote = decryptedRemote
}
newDir := fs.NewDirWrapper(remote, dir)
return newDir
}
@ -1175,6 +1248,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// MimeType returns the content type of the Object if
// known, or "" if not
//
@ -1200,6 +1284,8 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)

View file

@ -71,7 +71,7 @@ const (
// 1<<18 is the minimum size supported by the Google uploader, and there is no maximum.
minChunkSize = fs.SizeSuffix(googleapi.MinUploadChunkSize)
defaultChunkSize = 8 * fs.Mebi
partialFields = "id,name,size,md5Checksum,trashed,explicitlyTrashed,modifiedTime,createdTime,mimeType,parents,webViewLink,shortcutDetails,exportLinks,resourceKey"
partialFields = "id,name,size,md5Checksum,sha1Checksum,sha256Checksum,trashed,explicitlyTrashed,modifiedTime,createdTime,mimeType,parents,webViewLink,shortcutDetails,exportLinks,resourceKey"
listRGrouping = 50 // number of IDs to search at once when using ListR
listRInputBuffer = 1000 // size of input buffer when using ListR
defaultXDGIcon = "text-html"
@ -143,6 +143,46 @@ var (
_linkTemplates map[string]*template.Template // available link types
)
// rwChoices type for fs.Bits
type rwChoices struct{}
func (rwChoices) Choices() []fs.BitsChoicesInfo {
return []fs.BitsChoicesInfo{
{Bit: uint64(rwOff), Name: "off"},
{Bit: uint64(rwRead), Name: "read"},
{Bit: uint64(rwWrite), Name: "write"},
{Bit: uint64(rwFailOK), Name: "failok"},
}
}
// rwChoice type alias
type rwChoice = fs.Bits[rwChoices]
const (
rwRead rwChoice = 1 << iota
rwWrite
rwFailOK
rwOff rwChoice = 0
)
// Examples for the options
var rwExamples = fs.OptionExamples{{
Value: rwOff.String(),
Help: "Do not read or write the value",
}, {
Value: rwRead.String(),
Help: "Read the value only",
}, {
Value: rwWrite.String(),
Help: "Write the value only",
}, {
Value: rwFailOK.String(),
Help: "If writing fails log errors only, don't fail the transfer",
}, {
Value: (rwRead | rwWrite).String(),
Help: "Read and Write the value.",
}}
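A minimal sketch of how such a bit set might be tested (illustrative helpers only; since rwChoice is an integer bit set, plain masking is sufficient whatever methods fs.Bits also provides):

// shouldRead reports whether the given metadata item should be read.
func shouldRead(choice rwChoice) bool {
	return choice&rwRead != 0
}

// failOK reports whether write failures should be logged rather than fatal.
func failOK(choice rwChoice) bool {
	return choice&rwFailOK != 0
}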
// Parse the scopes option returning a slice of scopes
func driveScopes(scopesString string) (scopes []string) {
if scopesString == "" {
@ -250,9 +290,16 @@ func init() {
}
return nil, fmt.Errorf("unknown state %q", config.State)
},
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored in the properties field of the drive object.
Metadata is supported on files and directories.
`,
},
Options: append(driveOAuthOptions(), []fs.Option{{
Name: "scope",
Help: "Scope that rclone should use when requesting access from drive.",
Help: "Comma separated list of scopes that rclone should use when requesting access from drive.",
Examples: []fs.OptionExample{{
Value: "drive",
Help: "Full access all files, excluding Application Data Folder.",
@ -320,16 +367,35 @@ rather than shortcuts themselves when doing server side copies.`,
Default: false,
Help: "Skip google documents in all listings.\n\nIf given, gdocs practically become invisible to rclone.",
Advanced: true,
}, {
Name: "show_all_gdocs",
Default: false,
Help: `Show all Google Docs including non-exportable ones in listings.
If you try a server side copy on a Google Form without this flag, you
will get this error:
No export formats found for "application/vnd.google-apps.form"
However adding this flag will allow the form to be server side copied.
Note that rclone doesn't add extensions to the Google Docs file names
in this mode.
Do **not** use this flag when trying to download Google Docs - rclone
will fail to download them.
`,
Advanced: true,
}, {
Name: "skip_checksum_gphotos",
Default: false,
Help: `Skip MD5 checksum on Google photos and videos only.
Help: `Skip checksums on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or
videos.
Setting this flag will cause Google photos and videos to return a
blank MD5 checksum.
blank checksums.
Google photos are identified by being in the "photos" space.
@ -620,6 +686,56 @@ having trouble with like many empty directories.
`,
Advanced: true,
Default: true,
}, {
Name: "metadata_owner",
Help: `Control whether owner should be read or written in metadata.
Owner is a standard part of the file metadata so is easy to read. But it
isn't always desirable to set the owner from the metadata.
Note that you can't set the owner on Shared Drives, and that setting
ownership will generate an email to the new owner (this can't be
disabled), and you can't transfer ownership to someone outside your
organization.
`,
Advanced: true,
Default: rwRead,
Examples: rwExamples,
}, {
Name: "metadata_permissions",
Help: `Control whether permissions should be read or written in metadata.
Reading permissions metadata from files can be done quickly, but it
isn't always desirable to set the permissions from the metadata.
Note that rclone drops any inherited permissions on Shared Drives and
any owner permission on My Drives as these are duplicated in the owner
metadata.
`,
Advanced: true,
Default: rwOff,
Examples: rwExamples,
}, {
Name: "metadata_labels",
Help: `Control whether labels should be read or written in metadata.
Reading labels metadata from files takes an extra API transaction and
will slow down listings. It isn't always desirable to set the labels
from the metadata.
The format of labels is documented in the drive API documentation at
https://developers.google.com/drive/api/reference/rest/v3/Label -
rclone just provides a JSON dump of this format.
When setting labels, the label and fields must already exist - rclone
will not create them. This means that if you are transferring labels
between two different accounts you will have to create the labels in
advance and use the metadata mapper to translate the IDs between the
two accounts.
`,
Advanced: true,
Default: rwOff,
Examples: rwExamples,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
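A hedged example of enabling these for a transfer, combining the global --metadata switch with the flag names given in the help texts above (the read,write form follows the combined example value listed earlier):

    rclone copy --metadata --drive-metadata-owner read --drive-metadata-permissions read,write --drive-metadata-labels read src: drive:dst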
@ -667,6 +783,7 @@ type Options struct {
UseTrash bool `config:"use_trash"`
CopyShortcutContent bool `config:"copy_shortcut_content"`
SkipGdocs bool `config:"skip_gdocs"`
ShowAllGdocs bool `config:"show_all_gdocs"`
SkipChecksumGphotos bool `config:"skip_checksum_gphotos"`
SharedWithMe bool `config:"shared_with_me"`
TrashedOnly bool `config:"trashed_only"`
@ -695,6 +812,9 @@ type Options struct {
SkipDanglingShortcuts bool `config:"skip_dangling_shortcuts"`
ResourceKey string `config:"resource_key"`
FastListBugFix bool `config:"fast_list_bug_fix"`
MetadataOwner rwChoice `config:"metadata_owner"`
MetadataPermissions rwChoice `config:"metadata_permissions"`
MetadataLabels rwChoice `config:"metadata_labels"`
Enc encoder.MultiEncoder `config:"encoding"`
EnvAuth bool `config:"env_auth"`
}
@ -716,23 +836,25 @@ type Fs struct {
exportExtensions []string // preferred extensions to download docs
importMimeTypes []string // MIME types to convert to docs
isTeamDrive bool // true if this is a team drive
fileFields googleapi.Field // fields to fetch file info with
m configmap.Mapper
grouping int32 // number of IDs to search at once in ListR - read with atomic
listRmu *sync.Mutex // protects listRempties
listRempties map[string]struct{} // IDs of supposedly empty directories which triggered grouping disable
dirResourceKeys *sync.Map // map directory ID to resource key
grouping int32 // number of IDs to search at once in ListR - read with atomic
listRmu *sync.Mutex // protects listRempties
listRempties map[string]struct{} // IDs of supposedly empty directories which triggered grouping disable
dirResourceKeys *sync.Map // map directory ID to resource key
permissionsMu *sync.Mutex // protect the below
permissions map[string]*drive.Permission // map permission IDs to Permissions
}
type baseObject struct {
fs *Fs // what this object is part of
remote string // The remote path
id string // Drive Id of this object
modifiedDate string // RFC3339 time it was last modified
mimeType string // The object MIME type
bytes int64 // size of the object
parents []string // IDs of the parent directories
resourceKey *string // resourceKey is needed for link shared objects
fs *Fs // what this object is part of
remote string // The remote path
id string // Drive Id of this object
modifiedDate string // RFC3339 time it was last modified
mimeType string // The object MIME type
bytes int64 // size of the object
parents []string // IDs of the parent directories
resourceKey *string // resourceKey is needed for link shared objects
metadata *fs.Metadata // metadata if known
}
type documentObject struct {
baseObject
@ -751,9 +873,16 @@ type Object struct {
baseObject
url string // Download URL of this object
md5sum string // md5sum of the object
sha1sum string // sha1sum of the object
sha256sum string // sha256sum of the object
v2Download bool // generate v2 download link ondemand
}
// Directory describes a drive directory
type Directory struct {
baseObject
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
@ -979,7 +1108,7 @@ func (f *Fs) list(ctx context.Context, dirIDs []string, title string, directorie
list.Header().Add("X-Goog-Drive-Resource-Keys", resourceKeysHeader)
}
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.fileFields)
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.getFileFields(ctx))
OUTER:
for {
@ -1253,16 +1382,25 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
listRmu: new(sync.Mutex),
listRempties: make(map[string]struct{}),
dirResourceKeys: new(sync.Map),
permissionsMu: new(sync.Mutex),
permissions: make(map[string]*drive.Permission),
}
f.isTeamDrive = opt.TeamDriveID != ""
f.fileFields = f.getFileFields()
f.features = (&fs.Features{
DuplicateFiles: true,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
FilterAware: true,
DuplicateFiles: true,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
FilterAware: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: false, // FIXME need to check!
}).Fill(ctx, f)
// Create a new authorized Drive client.
@ -1367,7 +1505,7 @@ func NewFs(ctx context.Context, name, path string, m configmap.Mapper) (fs.Fs, e
return f, nil
}
func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
func (f *Fs) newBaseObject(ctx context.Context, remote string, info *drive.File) (o baseObject, err error) {
modifiedDate := info.ModifiedTime
if f.opt.UseCreatedDate {
modifiedDate = info.CreatedTime
@ -1378,7 +1516,7 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
if f.opt.SizeAsQuota {
size = info.QuotaBytesUsed
}
return baseObject{
o = baseObject{
fs: f,
remote: remote,
id: info.Id,
@ -1387,10 +1525,15 @@ func (f *Fs) newBaseObject(remote string, info *drive.File) baseObject {
bytes: size,
parents: info.Parents,
}
err = nil
if fs.GetConfig(ctx).Metadata {
err = o.parseMetadata(ctx, info)
}
return o, err
}
// getFileFields gets the fields for a normal file Get or List
func (f *Fs) getFileFields() (fields googleapi.Field) {
func (f *Fs) getFileFields(ctx context.Context) (fields googleapi.Field) {
fields = partialFields
if f.opt.AuthOwnerOnly {
fields += ",owners"
@ -1404,40 +1547,53 @@ func (f *Fs) getFileFields() (fields googleapi.Field) {
if f.opt.SizeAsQuota {
fields += ",quotaBytesUsed"
}
if fs.GetConfig(ctx).Metadata {
fields += "," + metadataFields
}
return fields
}
// newRegularObject creates an fs.Object for a normal drive.File
func (f *Fs) newRegularObject(remote string, info *drive.File) fs.Object {
func (f *Fs) newRegularObject(ctx context.Context, remote string, info *drive.File) (obj fs.Object, err error) {
// wipe checksum if SkipChecksumGphotos and file is type Photo or Video
if f.opt.SkipChecksumGphotos {
for _, space := range info.Spaces {
if space == "photos" {
info.Md5Checksum = ""
info.Sha1Checksum = ""
info.Sha256Checksum = ""
break
}
}
}
o := &Object{
baseObject: f.newBaseObject(remote, info),
url: fmt.Sprintf("%sfiles/%s?alt=media", f.svc.BasePath, actualID(info.Id)),
md5sum: strings.ToLower(info.Md5Checksum),
sha1sum: strings.ToLower(info.Sha1Checksum),
sha256sum: strings.ToLower(info.Sha256Checksum),
v2Download: f.opt.V2DownloadMinSize != -1 && info.Size >= int64(f.opt.V2DownloadMinSize),
}
o.baseObject, err = f.newBaseObject(ctx, remote, info)
if err != nil {
return nil, err
}
if info.ResourceKey != "" {
o.resourceKey = &info.ResourceKey
}
return o
return o, nil
}
// newDocumentObject creates an fs.Object for a google docs drive.File
func (f *Fs) newDocumentObject(remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) {
func (f *Fs) newDocumentObject(ctx context.Context, remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) {
mediaType, _, err := mime.ParseMediaType(exportMimeType)
if err != nil {
return nil, err
}
url := info.ExportLinks[mediaType]
baseObject := f.newBaseObject(remote+extension, info)
baseObject, err := f.newBaseObject(ctx, remote+extension, info)
if err != nil {
return nil, err
}
baseObject.bytes = -1
baseObject.mimeType = exportMimeType
return &documentObject{
@ -1449,7 +1605,7 @@ func (f *Fs) newDocumentObject(remote string, info *drive.File, extension, expor
}
// newLinkObject creates an fs.Object that represents a link a google docs drive.File
func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) {
func (f *Fs) newLinkObject(ctx context.Context, remote string, info *drive.File, extension, exportMimeType string) (fs.Object, error) {
t := linkTemplate(exportMimeType)
if t == nil {
return nil, fmt.Errorf("unsupported link type %s", exportMimeType)
@ -1468,7 +1624,10 @@ func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMim
return nil, fmt.Errorf("executing template failed: %w", err)
}
baseObject := f.newBaseObject(remote+extension, info)
baseObject, err := f.newBaseObject(ctx, remote+extension, info)
if err != nil {
return nil, err
}
baseObject.bytes = int64(buf.Len())
baseObject.mimeType = exportMimeType
return &linkObject{
@ -1484,7 +1643,7 @@ func (f *Fs) newLinkObject(remote string, info *drive.File, extension, exportMim
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, info *drive.File) (fs.Object, error) {
// If item has MD5 sum it is a file stored on drive
if info.Md5Checksum != "" {
return f.newRegularObject(remote, info), nil
return f.newRegularObject(ctx, remote, info)
}
extension, exportName, exportMimeType, isDocument := f.findExportFormat(ctx, info)
@ -1515,13 +1674,15 @@ func (f *Fs) newObjectWithExportInfo(
case info.MimeType == shortcutMimeTypeDangling:
// Pretend a dangling shortcut is a regular object
// It will error if used, but appear in listings so it can be deleted
return f.newRegularObject(remote, info), nil
return f.newRegularObject(ctx, remote, info)
case info.Md5Checksum != "":
// If item has MD5 sum it is a file stored on drive
return f.newRegularObject(remote, info), nil
return f.newRegularObject(ctx, remote, info)
case f.opt.SkipGdocs:
fs.Debugf(remote, "Skipping google document type %q", info.MimeType)
return nil, fs.ErrorObjectNotFound
case f.opt.ShowAllGdocs:
return f.newDocumentObject(ctx, remote, info, "", info.MimeType)
default:
// If item MimeType is in the ExportFormats then it is a google doc
if !isDocument {
@ -1533,9 +1694,9 @@ func (f *Fs) newObjectWithExportInfo(
return nil, fs.ErrorObjectNotFound
}
if isLinkMimeType(exportMimeType) {
return f.newLinkObject(remote, info, extension, exportMimeType)
return f.newLinkObject(ctx, remote, info, extension, exportMimeType)
}
return f.newDocumentObject(remote, info, extension, exportMimeType)
return f.newDocumentObject(ctx, remote, info, extension, exportMimeType)
}
}
@ -1586,26 +1747,72 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
return pathIDOut, found, err
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
// createDir makes a directory with pathID as parent and name leaf with optional metadata
func (f *Fs) createDir(ctx context.Context, pathID, leaf string, metadata fs.Metadata) (info *drive.File, err error) {
leaf = f.opt.Enc.FromStandardName(leaf)
// fmt.Println("Making", path)
// Define the metadata for the directory we are going to create.
pathID = actualID(pathID)
createInfo := &drive.File{
Name: leaf,
Description: leaf,
MimeType: driveFolderType,
Parents: []string{pathID},
Name: leaf,
MimeType: driveFolderType,
Parents: []string{pathID},
}
var updateMetadata updateMetadataFn
if len(metadata) > 0 {
updateMetadata, err = f.updateMetadata(ctx, createInfo, metadata, true)
if err != nil {
return nil, fmt.Errorf("create dir: failed to update metadata: %w", err)
}
}
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Create(createInfo).
Fields("id").
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
if updateMetadata != nil {
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
}
return info, nil
}
// updateDir updates an existing directory with the metadata passed in
func (f *Fs) updateDir(ctx context.Context, dirID string, metadata fs.Metadata) (info *drive.File, err error) {
if len(metadata) == 0 {
return f.getFile(ctx, dirID, f.getFileFields(ctx))
}
dirID = actualID(dirID)
updateInfo := &drive.File{}
updateMetadata, err := f.updateMetadata(ctx, updateInfo, metadata, true)
if err != nil {
return nil, fmt.Errorf("update dir: failed to update metadata from source object: %w", err)
}
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Update(dirID, updateInfo).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return info, nil
}
// CreateDir makes a directory with pathID as parent and name leaf
func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string, err error) {
info, err := f.createDir(ctx, pathID, leaf, nil)
if err != nil {
return "", err
}
@ -1716,7 +1923,7 @@ func (f *Fs) findExportFormatByMimeType(ctx context.Context, itemMimeType string
return "", "", isDocument
}
// findExportFormatByMimeType works out the optimum export settings
// findExportFormat works out the optimum export settings
// for the given drive.File.
//
// Look through the exportExtensions and find the first format that can be
@ -2012,13 +2219,13 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
case in <- job:
default:
overflow = append(overflow, job)
wg.Add(-1)
wg.Done()
}
}
// Send the entry to the caller, queueing any directories as new jobs
cb := func(entry fs.DirEntry) error {
if d, isDir := entry.(*fs.Dir); isDir {
if d, isDir := entry.(fs.Directory); isDir {
job := listREntry{actualID(d.ID()), d.Remote()}
sendJob(job)
}
@ -2163,7 +2370,7 @@ func (f *Fs) resolveShortcut(ctx context.Context, item *drive.File) (newItem *dr
fs.Errorf(nil, "Expecting shortcutDetails in %v", item)
return item, nil
}
newItem, err = f.getFile(ctx, item.ShortcutDetails.TargetId, f.fileFields)
newItem, err = f.getFile(ctx, item.ShortcutDetails.TargetId, f.getFileFields(ctx))
if err != nil {
var gerr *googleapi.Error
if errors.As(err, &gerr) && gerr.Code == 404 {
@ -2195,11 +2402,11 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, item *drive.File
if item.ResourceKey != "" {
f.dirResourceKeys.Store(item.Id, item.ResourceKey)
}
when, _ := time.Parse(timeFormatIn, item.ModifiedTime)
d := fs.NewDir(remote, when).SetID(item.Id)
if len(item.Parents) > 0 {
d.SetParentID(item.Parents[0])
baseObject, err := f.newBaseObject(ctx, remote, item)
if err != nil {
return nil, err
}
d := &Directory{baseObject: baseObject}
return d, nil
case f.opt.AuthOwnerOnly && !isAuthOwned(item):
// ignore object
@ -2227,7 +2434,6 @@ func (f *Fs) createFileInfo(ctx context.Context, remote string, modTime time.Tim
// Define the metadata for the file we are going to create.
createInfo := &drive.File{
Name: leaf,
Description: leaf,
Parents: []string{directoryID},
ModifiedTime: modTime.Format(timeFormatOut),
}
@ -2295,6 +2501,10 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
} else {
createInfo.MimeType = fs.MimeTypeFromName(remote)
}
updateMetadata, err := f.fetchAndUpdateMetadata(ctx, src, options, createInfo, false)
if err != nil {
return nil, err
}
var info *drive.File
if size >= 0 && size < int64(f.opt.UploadCutoff) {
@ -2319,6 +2529,10 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
return nil, err
}
}
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
@ -2384,6 +2598,59 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return err
}
// MkdirMetadata makes the directory passed in as dir.
//
// It shouldn't return an error if it already exists.
//
// If the metadata is not nil it is set.
//
// It returns the directory that was created.
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
var info *drive.File
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err == fs.ErrorDirNotFound {
// Directory does not exist so create it
var leaf, parentID string
leaf, parentID, err = f.dirCache.FindPath(ctx, dir, true)
if err != nil {
return nil, err
}
info, err = f.createDir(ctx, parentID, leaf, metadata)
} else if err == nil {
// Directory exists and needs updating
info, err = f.updateDir(ctx, dirID, metadata)
}
if err != nil {
return nil, err
}
// Convert the info into a directory entry
entry, err := f.itemToDirEntry(ctx, dir, info)
if err != nil {
return nil, err
}
dirEntry, ok := entry.(fs.Directory)
if !ok {
return nil, fmt.Errorf("internal error: expecting %T to be an fs.Directory", entry)
}
return dirEntry, nil
}
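For illustration, a hedged sketch of calling this through the optional feature lookup; the helper name is hypothetical and "description" is one of the system metadata keys documented later in this diff:

// mkdirWithDescription creates dir if missing, or updates its metadata
// if it already exists, per the MkdirMetadata contract above.
func mkdirWithDescription(ctx context.Context, f fs.Fs, dir, desc string) (fs.Directory, error) {
	do := f.Features().MkdirMetadata
	if do == nil {
		return nil, fs.ErrorNotImplemented
	}
	return do(ctx, dir, fs.Metadata{"description": desc})
}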
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
dirID, err := f.dirCache.FindDir(ctx, dir, false)
if err != nil {
return err
}
o := baseObject{
fs: f,
remote: dir,
id: dirID,
}
return o.SetModTime(ctx, modTime)
}
// delete a file or directory unconditionally by ID
func (f *Fs) delete(ctx context.Context, id string, useTrash bool) error {
return f.pacer.Call(func() (bool, error) {
@ -2527,6 +2794,12 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
createInfo.Description = ""
}
// Adjust metadata if required
updateMetadata, err := f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), createInfo, false)
if err != nil {
return nil, err
}
// get the ID of the thing to copy
// copy the contents if CopyShortcutContent
// else copy the shortcut only
@ -2540,7 +2813,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
copy := f.svc.Files.Copy(id, createInfo).
Fields(partialFields).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
KeepRevisionForever(f.opt.KeepRevisionForever)
srcObj.addResourceKey(copy.Header())
@ -2560,7 +2833,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// FIXME remove this when google fixes the problem!
if isDoc {
// A short sleep is needed here in order to make the
// change effective, without it is is ignored. This is
// change effective, without it is ignored. This is
// probably some eventual consistency nastiness.
sleepTime := 2 * time.Second
fs.Debugf(f, "Sleeping for %v before setting the modtime to work around drive bug - see #4517", sleepTime)
@ -2576,6 +2849,11 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Errorf(existingObject, "Failed to remove existing object after copy: %v", err)
}
}
// Finalise metadata
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return newObject, nil
}
@ -2749,13 +3027,19 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
dstParents := strings.Join(dstInfo.Parents, ",")
dstInfo.Parents = nil
// Adjust metadata if required
updateMetadata, err := f.fetchAndUpdateMetadata(ctx, src, fs.MetadataAsOpenOptions(ctx), dstInfo, true)
if err != nil {
return nil, err
}
// Do the move
var info *drive.File
err = f.pacer.Call(func() (bool, error) {
info, err = f.svc.Files.Update(shortcutID(srcObj.id), dstInfo).
RemoveParents(srcParentID).
AddParents(dstParents).
Fields(partialFields).
Fields(f.getFileFields(ctx)).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
@ -2764,6 +3048,11 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
return nil, err
}
// Finalise metadata
err = updateMetadata(ctx, info)
if err != nil {
return nil, err
}
return f.newObjectWithInfo(ctx, remote, info)
}
@ -3007,7 +3296,7 @@ func (f *Fs) DirCacheFlush() {
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5)
return hash.NewHashSet(hash.MD5, hash.SHA1, hash.SHA256)
}
func (f *Fs) changeChunkSize(chunkSizeString string) (err error) {
@ -3236,7 +3525,7 @@ func (f *Fs) unTrashDir(ctx context.Context, dir string, recurse bool) (r unTras
// copy file with id to dest
func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
info, err := f.getFile(ctx, id, f.fileFields)
info, err := f.getFile(ctx, id, f.getFileFields(ctx))
if err != nil {
return fmt.Errorf("couldn't find id: %w", err)
}
@ -3269,6 +3558,50 @@ func (f *Fs) copyID(ctx context.Context, id, dest string) (err error) {
return nil
}
func (f *Fs) query(ctx context.Context, query string) (entries []*drive.File, err error) {
list := f.svc.Files.List()
if query != "" {
list.Q(query)
}
if f.opt.ListChunk > 0 {
list.PageSize(f.opt.ListChunk)
}
list.SupportsAllDrives(true)
list.IncludeItemsFromAllDrives(true)
if f.isTeamDrive && !f.opt.SharedWithMe {
list.DriveId(f.opt.TeamDriveID)
list.Corpora("drive")
}
// If using appDataFolder then need to add Spaces
if f.rootFolderID == "appDataFolder" {
list.Spaces("appDataFolder")
}
fields := fmt.Sprintf("files(%s),nextPageToken,incompleteSearch", f.getFileFields(ctx))
var results []*drive.File
for {
var files *drive.FileList
err = f.pacer.Call(func() (bool, error) {
files, err = list.Fields(googleapi.Field(fields)).Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("failed to execute query: %w", err)
}
if files.IncompleteSearch {
fs.Errorf(f, "search result INCOMPLETE")
}
results = append(results, files.Files...)
if files.NextPageToken == "" {
break
}
list.PageToken(files.NextPageToken)
}
return results, nil
}
var commandHelp = []fs.CommandHelp{{
Name: "get",
Short: "Get command for fetching the drive config parameters",
@ -3419,6 +3752,47 @@ Use the --interactive/-i or --dry-run flag to see what would be copied before co
}, {
Name: "importformats",
Short: "Dump the import formats for debug purposes",
}, {
Name: "query",
Short: "List files using Google Drive query language",
Long: `This command lists files based on a query
Usage:
rclone backend query drive: query
The query syntax is documented at [Google Drive Search query terms and
operators](https://developers.google.com/drive/api/guides/ref-search-terms).
For example:
rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
If the query contains literal ' or \ characters, these need to be escaped with
\ characters. "'" becomes "\'" and "\" becomes "\\\", for example to match a
file named "foo ' \.txt":
rclone backend query drive: "name = 'foo \' \\\.txt'"
The result is a JSON array of matches, for example:
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
"md5Checksum": "68518d16be0c6fbfab918be61d658032",
"mimeType": "text/plain",
"modifiedTime": "2024-02-02T10:40:02.874Z",
"name": "foo ' \\.txt",
"parents": [
"0BxAe_BCDE4zkFGZpcWJGek0xbzC"
],
"resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
"sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
]`,
}}
// Command the backend to run a named command
@ -3536,6 +3910,17 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
return f.exportFormats(ctx), nil
case "importformats":
return f.importFormats(ctx), nil
case "query":
if len(arg) == 1 {
query := arg[0]
var results, err = f.query(ctx, query)
if err != nil {
return nil, fmt.Errorf("failed to execute query: %q, error: %w", query, err)
}
return results, nil
} else {
return nil, errors.New("need a query argument")
}
default:
return nil, fs.ErrorCommandNotFound
}
@ -3568,13 +3953,19 @@ func (o *baseObject) Remote() string {
// Hash returns the Md5sum of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
return "", hash.ErrUnsupported
if t == hash.MD5 {
return o.md5sum, nil
}
return o.md5sum, nil
if t == hash.SHA1 {
return o.sha1sum, nil
}
if t == hash.SHA256 {
return o.sha256sum, nil
}
return "", hash.ErrUnsupported
}
func (o *baseObject) Hash(ctx context.Context, t hash.Type) (string, error) {
if t != hash.MD5 {
if t != hash.MD5 && t != hash.SHA1 && t != hash.SHA256 {
return "", hash.ErrUnsupported
}
return "", nil
@ -3910,10 +4301,20 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
MimeType: srcMimeType,
ModifiedTime: src.ModTime(ctx).Format(timeFormatOut),
}
updateMetadata, err := o.fs.fetchAndUpdateMetadata(ctx, src, options, updateInfo, true)
if err != nil {
return err
}
info, err := o.baseObject.update(ctx, updateInfo, srcMimeType, in, src)
if err != nil {
return err
}
err = updateMetadata(ctx, info)
if err != nil {
return err
}
newO, err := o.fs.newObjectWithInfo(ctx, o.remote, info)
if err != nil {
return err
@ -3999,6 +4400,26 @@ func (o *baseObject) ParentID() string {
return ""
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *baseObject) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
if o.metadata != nil {
return *o.metadata, nil
}
fs.Debugf(o, "Fetching metadata")
id := actualID(o.id)
info, err := o.fs.getFile(ctx, id, o.fs.getFileFields(ctx))
if err != nil {
return nil, err
}
err = o.parseMetadata(ctx, info)
if err != nil {
return nil, err
}
return *o.metadata, nil
}
func (o *documentObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
@ -4006,6 +4427,37 @@ func (o *linkObject) ext() string {
return o.baseObject.remote[len(o.baseObject.remote)-o.extLen:]
}
// Items returns the count of items in this directory or this
// directory and subdirectories if known, -1 for unknown
func (d *Directory) Items() int64 {
return -1
}
// SetMetadata sets metadata for a Directory
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (d *Directory) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
info, err := d.fs.updateDir(ctx, d.id, metadata)
if err != nil {
return fmt.Errorf("failed to update directory info: %w", err)
}
// Update directory from info returned
baseObject, err := d.fs.newBaseObject(ctx, d.remote, info)
if err != nil {
return fmt.Errorf("failed to process directory info: %w", err)
}
d.baseObject = baseObject
return err
}
// Hash does nothing on a directory
//
// This method is implemented with the incorrect type signature to
// stop the Directory type asserting to fs.Object or fs.ObjectInfo
func (d *Directory) Hash() {
// Does nothing
}
// templates for document link files
const (
urlTemplate = `[InternetShortcut]{{"\r"}}
@ -4055,11 +4507,14 @@ var (
_ fs.PublicLinker = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.ParentIDer = (*Object)(nil)
_ fs.Metadataer = (*Object)(nil)
_ fs.Object = (*documentObject)(nil)
_ fs.MimeTyper = (*documentObject)(nil)
_ fs.IDer = (*documentObject)(nil)
@ -4068,4 +4523,8 @@ var (
_ fs.MimeTyper = (*linkObject)(nil)
_ fs.IDer = (*linkObject)(nil)
_ fs.ParentIDer = (*linkObject)(nil)
_ fs.Directory = (*Directory)(nil)
_ fs.SetModTimer = (*Directory)(nil)
_ fs.SetMetadataer = (*Directory)(nil)
_ fs.ParentIDer = (*Directory)(nil)
)

View file

@ -524,12 +524,49 @@ func (f *Fs) InternalTestCopyID(t *testing.T) {
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/Query
func (f *Fs) InternalTestQuery(t *testing.T) {
ctx := context.Background()
var err error
t.Run("BadQuery", func(t *testing.T) {
_, err = f.query(ctx, "this is a bad query")
require.Error(t, err)
assert.Contains(t, err.Error(), "failed to execute query")
})
t.Run("NoMatch", func(t *testing.T) {
results, err := f.query(ctx, fmt.Sprintf("name='%s' and name!='%s'", existingSubDir, existingSubDir))
require.NoError(t, err)
assert.Len(t, results, 0)
})
t.Run("GoodQuery", func(t *testing.T) {
pathSegments := strings.Split(existingFile, "/")
var parent string
for _, item := range pathSegments {
// the file name contains ' characters which must be escaped
escapedItem := f.opt.Enc.FromStandardName(item)
escapedItem = strings.ReplaceAll(escapedItem, `\`, `\\`)
escapedItem = strings.ReplaceAll(escapedItem, `'`, `\'`)
results, err := f.query(ctx, fmt.Sprintf("%strashed=false and name='%s'", parent, escapedItem))
require.NoError(t, err)
require.True(t, len(results) > 0)
for _, result := range results {
assert.True(t, len(result.Id) > 0)
assert.Equal(t, result.Name, item)
}
parent = fmt.Sprintf("'%s' in parents and ", results[0].Id)
}
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/AgeQuery
func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
assert.True(t, f.Features().FilterAware)
opt := &filter.Opt{}
opt := &filter.Options{}
err := opt.MaxAge.Set("1h")
assert.NoError(t, err)
flt, err := filter.NewFilter(opt)
@ -611,6 +648,7 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("Query", f.InternalTestQuery)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)
}

638
backend/drive/metadata.go Normal file
View file

@ -0,0 +1,638 @@
package drive
import (
"context"
"encoding/json"
"fmt"
"strconv"
"strings"
"sync"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/errcount"
"golang.org/x/sync/errgroup"
drive "google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
)
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"content-type": {
Help: "The MIME type of the file.",
Type: "string",
Example: "text/plain",
},
"mtime": {
Help: "Time of last modification with mS accuracy.",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999Z07:00",
},
"btime": {
Help: "Time of file birth (creation) with mS accuracy. Note that this is only writable on fresh uploads - it can't be written for updates.",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999Z07:00",
},
"copy-requires-writer-permission": {
Help: "Whether the options to copy, print, or download this file, should be disabled for readers and commenters.",
Type: "boolean",
Example: "true",
},
"writers-can-share": {
Help: "Whether users with only writer permission can modify the file's permissions. Not populated and ignored when setting for items in shared drives.",
Type: "boolean",
Example: "false",
},
"viewed-by-me": {
Help: "Whether the file has been viewed by this user.",
Type: "boolean",
Example: "true",
ReadOnly: true,
},
"owner": {
Help: "The owner of the file. Usually an email address. Enable with --drive-metadata-owner.",
Type: "string",
Example: "user@example.com",
},
"permissions": {
Help: "Permissions in a JSON dump of Google drive format. On shared drives these will only be present if they aren't inherited. Enable with --drive-metadata-permissions.",
Type: "JSON",
Example: "{}",
},
"folder-color-rgb": {
Help: "The color for a folder or a shortcut to a folder as an RGB hex string.",
Type: "string",
Example: "881133",
},
"description": {
Help: "A short description of the file.",
Type: "string",
Example: "Contract for signing",
},
"starred": {
Help: "Whether the user has starred the file.",
Type: "boolean",
Example: "false",
},
"labels": {
Help: "Labels attached to this file in a JSON dump of Googled drive format. Enable with --drive-metadata-labels.",
Type: "JSON",
Example: "[]",
},
}
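A short, hedged sketch of reading these keys from an object via the fs.Metadataer interface asserted elsewhere in this diff (fs.Metadata behaves as a string-keyed map; the helper is illustrative, and per the help above the owner key is only populated with --drive-metadata-owner enabled):

// printMetadata dumps a few of the system metadata keys listed above.
func printMetadata(ctx context.Context, o fs.Object) error {
	do, ok := o.(fs.Metadataer)
	if !ok {
		return fs.ErrorNotImplemented
	}
	m, err := do.Metadata(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("owner=%s mtime=%s starred=%s\n", m["owner"], m["mtime"], m["starred"])
	return nil
}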
// Extra fields we need to fetch to implement the system metadata above
var metadataFields = googleapi.Field(strings.Join([]string{
"copyRequiresWriterPermission",
"description",
"folderColorRgb",
"hasAugmentedPermissions",
"owners",
"permissionIds",
"permissions",
"properties",
"starred",
"viewedByMe",
"viewedByMeTime",
"writersCanShare",
}, ","))
// Fields we need to read from permissions
var permissionsFields = googleapi.Field(strings.Join([]string{
"*",
"permissionDetails/*",
}, ","))
// getPermission returns permissions for the fileID and permissionID passed in
func (f *Fs) getPermission(ctx context.Context, fileID, permissionID string, useCache bool) (perm *drive.Permission, inherited bool, err error) {
f.permissionsMu.Lock()
defer f.permissionsMu.Unlock()
if useCache {
perm = f.permissions[permissionID]
if perm != nil {
return perm, false, nil
}
}
fs.Debugf(f, "Fetching permission %q", permissionID)
err = f.pacer.Call(func() (bool, error) {
perm, err = f.svc.Permissions.Get(fileID, permissionID).
Fields(permissionsFields).
SupportsAllDrives(true).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, false, err
}
inherited = len(perm.PermissionDetails) > 0 && perm.PermissionDetails[0].Inherited
cleanPermission(perm)
// cache the permission
f.permissions[permissionID] = perm
return perm, inherited, err
}
// Set the permissions on the info
func (f *Fs) setPermissions(ctx context.Context, info *drive.File, permissions []*drive.Permission) (err error) {
errs := errcount.New()
for _, perm := range permissions {
if perm.Role == "owner" {
// ignore owner permissions - these are set with owner
continue
}
cleanPermissionForWrite(perm)
err := f.pacer.Call(func() (bool, error) {
_, err := f.svc.Permissions.Create(info.Id, perm).
SupportsAllDrives(true).
SendNotificationEmail(false).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
fs.Errorf(f, "Failed to set permission %s for %q: %v", perm.Role, perm.EmailAddress, err)
errs.Add(err)
}
}
err = errs.Err("failed to set permission")
if err != nil {
err = fserrors.NoRetryError(err)
}
return err
}
// Clean attributes from permissions which we can't write
func cleanPermissionForWrite(perm *drive.Permission) {
perm.Deleted = false
perm.DisplayName = ""
perm.Id = ""
perm.Kind = ""
perm.PermissionDetails = nil
perm.TeamDrivePermissionDetails = nil
}
// Clean and cache the permission if not already cached
func (f *Fs) cleanAndCachePermission(perm *drive.Permission) {
f.permissionsMu.Lock()
defer f.permissionsMu.Unlock()
cleanPermission(perm)
if _, found := f.permissions[perm.Id]; !found {
f.permissions[perm.Id] = perm
}
}
// Clean fields we don't need to keep from the permission
func cleanPermission(perm *drive.Permission) {
// DisplayName: Output only. The "pretty" name of the value of the
// permission. The following is a list of examples for each type of
// permission: * `user` - User's full name, as defined for their Google
// account, such as "Joe Smith." * `group` - Name of the Google Group,
// such as "The Company Administrators." * `domain` - String domain
// name, such as "thecompany.com." * `anyone` - No `displayName` is
// present.
perm.DisplayName = ""
// Kind: Output only. Identifies what kind of resource this is. Value:
// the fixed string "drive#permission".
perm.Kind = ""
// PermissionDetails: Output only. Details of whether the permissions on
// this shared drive item are inherited or directly on this item. This
// is an output-only field which is present only for shared drive items.
perm.PermissionDetails = nil
// PhotoLink: Output only. A link to the user's profile photo, if
// available.
perm.PhotoLink = ""
// TeamDrivePermissionDetails: Output only. Deprecated: Output only. Use
// `permissionDetails` instead.
perm.TeamDrivePermissionDetails = nil
}
// Fields we need to read from labels
var labelsFields = googleapi.Field(strings.Join([]string{
"*",
}, ","))
// getLabels returns labels for the fileID passed in
func (f *Fs) getLabels(ctx context.Context, fileID string) (labels []*drive.Label, err error) {
fs.Debugf(f, "Fetching labels for %q", fileID)
listLabels := f.svc.Files.ListLabels(fileID).
Fields(labelsFields).
Context(ctx)
for {
var info *drive.LabelList
err = f.pacer.Call(func() (bool, error) {
info, err = listLabels.Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
labels = append(labels, info.Labels...)
if info.NextPageToken == "" {
break
}
listLabels.PageToken(info.NextPageToken)
}
for _, label := range labels {
cleanLabel(label)
}
return labels, nil
}
// Set the labels on the info
func (f *Fs) setLabels(ctx context.Context, info *drive.File, labels []*drive.Label) (err error) {
if len(labels) == 0 {
return nil
}
req := drive.ModifyLabelsRequest{}
for _, label := range labels {
req.LabelModifications = append(req.LabelModifications, &drive.LabelModification{
FieldModifications: labelFieldsToFieldModifications(label.Fields),
LabelId: label.Id,
})
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Files.ModifyLabels(info.Id, &req).
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set labels: %w", err)
}
return nil
}
// Convert label fields into something which can set the fields
func labelFieldsToFieldModifications(fields map[string]drive.LabelField) (out []*drive.LabelFieldModification) {
for id, field := range fields {
var emails []string
for _, user := range field.User {
emails = append(emails, user.EmailAddress)
}
out = append(out, &drive.LabelFieldModification{
// FieldId: The ID of the field to be modified.
FieldId: id,
// SetDateValues: Replaces the value of a dateString Field with these
// new values. The string must be in the RFC 3339 full-date format:
// YYYY-MM-DD.
SetDateValues: field.DateString,
// SetIntegerValues: Replaces the value of an `integer` field with these
// new values.
SetIntegerValues: field.Integer,
// SetSelectionValues: Replaces a `selection` field with these new
// values.
SetSelectionValues: field.Selection,
// SetTextValues: Sets the value of a `text` field.
SetTextValues: field.Text,
// SetUserValues: Replaces a `user` field with these new values. The
// values must be valid email addresses.
SetUserValues: emails,
})
}
return out
}
// Clean fields we don't need to keep from the label
func cleanLabel(label *drive.Label) {
// Kind: This is always drive#label
label.Kind = ""
for name, field := range label.Fields {
// Kind: This is always drive#labelField.
field.Kind = ""
// Note the fields are copies so we need to write them
// back to the map
label.Fields[name] = field
}
}
// Parse the metadata from drive item
//
// It should return nil if there is no Metadata
func (o *baseObject) parseMetadata(ctx context.Context, info *drive.File) (err error) {
metadata := make(fs.Metadata, 16)
// Dump user metadata first as it overrides system metadata
for k, v := range info.Properties {
metadata[k] = v
}
// System metadata
metadata["copy-requires-writer-permission"] = fmt.Sprint(info.CopyRequiresWriterPermission)
metadata["writers-can-share"] = fmt.Sprint(info.WritersCanShare)
metadata["viewed-by-me"] = fmt.Sprint(info.ViewedByMe)
metadata["content-type"] = info.MimeType
// Owners: Output only. The owner of this file. Only certain legacy
// files may have more than one owner. This field isn't populated for
// items in shared drives.
if o.fs.opt.MetadataOwner.IsSet(rwRead) && len(info.Owners) > 0 {
user := info.Owners[0]
if len(info.Owners) > 1 {
fs.Logf(o, "Ignoring more than 1 owner")
}
if user != nil {
id := user.EmailAddress
if id == "" {
id = user.DisplayName
}
metadata["owner"] = id
}
}
if o.fs.opt.MetadataPermissions.IsSet(rwRead) {
// We only write permissions out if they are not inherited.
//
// On My Drives permissions seem to be attached to every item
// so they will always be written out.
//
// On Shared Drives only non-inherited permissions will be
// written out.
// Reading the inherited flag means reading the permissions
// for each object, which would make the cache useless.
// However shared drives only return permissionIds, not
// permissions, so we need to fetch them for each object
// anyway. We use HasAugmentedPermissions to see if there are
// any directly-set permissions before fetching them, to save
// transactions.
// HasAugmentedPermissions: Output only. Whether there are permissions
// directly on this file. This field is only populated for items in
// shared drives.
if o.fs.isTeamDrive && !info.HasAugmentedPermissions {
// Don't process permissions if there aren't any specifically set
fs.Debugf(o, "Ignoring %d permissions and %d permissionIds as is shared drive with hasAugmentedPermissions false", len(info.Permissions), len(info.PermissionIds))
info.Permissions = nil
info.PermissionIds = nil
}
// PermissionIds: Output only. List of permission IDs for users with
// access to this file.
//
// Only process these if we have no Permissions
if len(info.PermissionIds) > 0 && len(info.Permissions) == 0 {
info.Permissions = make([]*drive.Permission, 0, len(info.PermissionIds))
g, gCtx := errgroup.WithContext(ctx)
g.SetLimit(o.fs.ci.Checkers)
var mu sync.Mutex // protect the info.Permissions from concurrent writes
for _, permissionID := range info.PermissionIds {
permissionID := permissionID
g.Go(func() error {
// must fetch the team drive ones individually to check the inherited flag
perm, inherited, err := o.fs.getPermission(gCtx, actualID(info.Id), permissionID, !o.fs.isTeamDrive)
if err != nil {
return fmt.Errorf("failed to read permission: %w", err)
}
// Don't write inherited permissions out
if inherited {
return nil
}
// Don't write owner role out - these are covered by the owner metadata
if perm.Role == "owner" {
return nil
}
mu.Lock()
info.Permissions = append(info.Permissions, perm)
mu.Unlock()
return nil
})
}
err = g.Wait()
if err != nil {
return err
}
} else {
// Clean the fetched permissions
for _, perm := range info.Permissions {
o.fs.cleanAndCachePermission(perm)
}
}
// Permissions: Output only. The full list of permissions for the file.
// This is only available if the requesting user can share the file. Not
// populated for items in shared drives.
if len(info.Permissions) > 0 {
buf, err := json.Marshal(info.Permissions)
if err != nil {
return fmt.Errorf("failed to marshal permissions: %w", err)
}
metadata["permissions"] = string(buf)
}
// Permission propagation
// https://developers.google.com/drive/api/guides/manage-sharing#permission-propagation
// This suggests that on non-shared drives permissions are
// added to each item when you set permissions for a folder,
// whereas on shared drives they are inherited rather than
// placed on the item directly.
}
if info.FolderColorRgb != "" {
metadata["folder-color-rgb"] = info.FolderColorRgb
}
if info.Description != "" {
metadata["description"] = info.Description
}
metadata["starred"] = fmt.Sprint(info.Starred)
metadata["btime"] = info.CreatedTime
metadata["mtime"] = info.ModifiedTime
if o.fs.opt.MetadataLabels.IsSet(rwRead) {
// FIXME would be really nice if we knew if files had labels
// before listing but we need to know all possible label IDs
// to get it in the listing.
labels, err := o.fs.getLabels(ctx, actualID(info.Id))
if err != nil {
return fmt.Errorf("failed to fetch labels: %w", err)
}
buf, err := json.Marshal(labels)
if err != nil {
return fmt.Errorf("failed to marshal labels: %w", err)
}
metadata["labels"] = string(buf)
}
o.metadata = &metadata
return nil
}
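// Editor's note - an illustrative sketch, not part of this change: for
// a plain text file, parseMetadata might produce a map along these
// lines (all values hypothetical):
//
//	fs.Metadata{
//		"content-type":                    "text/plain",
//		"mtime":                           "2024-03-01T10:00:00.000Z",
//		"btime":                           "2024-02-01T09:00:00.000Z",
//		"copy-requires-writer-permission": "false",
//		"writers-can-share":               "true",
//		"viewed-by-me":                    "true",
//		"starred":                         "false",
//	}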
// Set the owner on the info
func (f *Fs) setOwner(ctx context.Context, info *drive.File, owner string) (err error) {
perm := drive.Permission{
Role: "owner",
EmailAddress: owner,
// Type: The type of the grantee. Valid values are: * `user` * `group` *
// `domain` * `anyone` When creating a permission, if `type` is `user`
// or `group`, you must provide an `emailAddress` for the user or group.
// When `type` is `domain`, you must provide a `domain`. There isn't
// extra information required for an `anyone` type.
Type: "user",
}
err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Permissions.Create(info.Id, &perm).
SupportsAllDrives(true).
TransferOwnership(true).
// SendNotificationEmail(false). - required apparently!
Context(ctx).Do()
return f.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to set owner: %w", err)
}
return nil
}
// Call back to set metadata that can't be set on the upload/update
//
// The *drive.File passed in holds the current state of the drive.File
// and this should update it with any modifications.
type updateMetadataFn func(context.Context, *drive.File) error
// read the metadata from meta and write it into updateInfo
//
// update should be true if this is being used to create metadata for
// an update/PATCH call as the rules on what can be updated are
// slightly different there.
//
// It returns a callback which should be called to finish the updates
// after the data is uploaded.
func (f *Fs) updateMetadata(ctx context.Context, updateInfo *drive.File, meta fs.Metadata, update bool) (callback updateMetadataFn, err error) {
callbackFns := []updateMetadataFn{}
callback = func(ctx context.Context, info *drive.File) error {
for _, fn := range callbackFns {
err := fn(ctx, info)
if err != nil {
return err
}
}
return nil
}
// merge metadata into request and user metadata
for k, v := range meta {
k, v := k, v
// parse a boolean from v and write into out
parseBool := func(out *bool) error {
b, err := strconv.ParseBool(v)
if err != nil {
return fmt.Errorf("can't parse metadata %q = %q: %w", k, v, err)
}
*out = b
return nil
}
switch k {
case "copy-requires-writer-permission":
if err := parseBool(&updateInfo.CopyRequiresWriterPermission); err != nil {
return nil, err
}
case "writers-can-share":
if !f.isTeamDrive {
if err := parseBool(&updateInfo.WritersCanShare); err != nil {
return nil, err
}
} else {
fs.Debugf(f, "Ignoring %s=%s as can't set on shared drives", k, v)
}
case "viewed-by-me":
// Can't write this
case "content-type":
updateInfo.MimeType = v
case "owner":
if !f.opt.MetadataOwner.IsSet(rwWrite) {
continue
}
// Can't set Owner on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setOwner(ctx, info, v)
if err != nil && f.opt.MetadataOwner.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "permissions":
if !f.opt.MetadataPermissions.IsSet(rwWrite) {
continue
}
var perms []*drive.Permission
err := json.Unmarshal([]byte(v), &perms)
if err != nil {
return nil, fmt.Errorf("failed to unmarshal permissions: %w", err)
}
// Can't set Permissions on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setPermissions(ctx, info, perms)
if err != nil && f.opt.MetadataPermissions.IsSet(rwFailOK) {
// We've already logged the permissions errors individually here
fs.Debugf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "labels":
if !f.opt.MetadataLabels.IsSet(rwWrite) {
continue
}
var labels []*drive.Label
err := json.Unmarshal([]byte(v), &labels)
if err != nil {
return nil, fmt.Errorf("failed to unmarshal labels: %w", err)
}
// Can't set Labels on upload so need to set afterwards
callbackFns = append(callbackFns, func(ctx context.Context, info *drive.File) error {
err := f.setLabels(ctx, info, labels)
if err != nil && f.opt.MetadataLabels.IsSet(rwFailOK) {
fs.Errorf(f, "Ignoring error as failok is set: %v", err)
return nil
}
return err
})
case "folder-color-rgb":
updateInfo.FolderColorRgb = v
case "description":
updateInfo.Description = v
case "starred":
if err := parseBool(&updateInfo.Starred); err != nil {
return nil, err
}
case "btime":
if update {
fs.Debugf(f, "Skipping btime metadata as can't update it on an existing file: %v", v)
} else {
updateInfo.CreatedTime = v
}
case "mtime":
updateInfo.ModifiedTime = v
default:
if updateInfo.Properties == nil {
updateInfo.Properties = make(map[string]string, 1)
}
updateInfo.Properties[k] = v
}
}
return callback, nil
}
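// Editor's note - an illustrative sketch, not part of this change: the
// intended two-phase calling pattern. Fields that can be set in the
// upload request are written into updateInfo; the rest (owner,
// permissions, labels) are applied by the returned callback after the
// upload. "doUpload" is a hypothetical placeholder:
//
//	updateInfo := &drive.File{}
//	callback, err := f.updateMetadata(ctx, updateInfo, meta, false)
//	if err != nil {
//		return err
//	}
//	info, err := doUpload(ctx, updateInfo)
//	if err == nil {
//		err = callback(ctx, info)
//	}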
// Fetch metadata and update updateInfo if --metadata is in use
func (f *Fs) fetchAndUpdateMetadata(ctx context.Context, src fs.ObjectInfo, options []fs.OpenOption, updateInfo *drive.File, update bool) (callback updateMetadataFn, err error) {
meta, err := fs.GetMetadataOptions(ctx, f, src, options)
if err != nil {
return nil, fmt.Errorf("failed to read metadata from source object: %w", err)
}
callback, err = f.updateMetadata(ctx, updateInfo, meta, update)
if err != nil {
return nil, fmt.Errorf("failed to update metadata from source object: %w", err)
}
return callback, nil
}

View file

@@ -8,121 +8,19 @@ package dropbox
import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/atexit"
)
const (
maxBatchSize = 1000 // max size the batch can be
defaultTimeoutSync = 500 * time.Millisecond // kick off the batch if nothing added for this long (sync)
defaultTimeoutAsync = 10 * time.Second // kick off the batch if nothing added for this long (async)
defaultBatchSizeAsync = 100 // default batch size if async
)
// batcher holds info about the current items waiting for upload
type batcher struct {
f *Fs // Fs this batch is part of
mode string // configured batch mode
size int // maximum size for batch
timeout time.Duration // idle timeout for batch
async bool // whether we are using async batching
in chan batcherRequest // incoming items to batch
closed chan struct{} // close to indicate batcher shut down
atexit atexit.FnHandle // atexit handle
shutOnce sync.Once // make sure we shutdown once only
wg sync.WaitGroup // wait for shutdown
}
// batcherRequest holds an incoming request with a place for a reply
type batcherRequest struct {
commitInfo *files.UploadSessionFinishArg
result chan<- batcherResponse
}
// Return true if batcherRequest is the quit request
func (br *batcherRequest) isQuit() bool {
return br.commitInfo == nil
}
// Send this to get the engine to quit
var quitRequest = batcherRequest{}
// batcherResponse holds a response to be delivered to clients waiting
// for a batch to complete.
type batcherResponse struct {
err error
entry *files.FileMetadata
}
// newBatcher creates a new batcher structure
func newBatcher(ctx context.Context, f *Fs, mode string, size int, timeout time.Duration) (*batcher, error) {
// fs.Debugf(f, "Creating batcher with mode %q, size %d, timeout %v", mode, size, timeout)
if size > maxBatchSize || size < 0 {
return nil, fmt.Errorf("dropbox: batch size must be < %d and >= 0 - it is currently %d", maxBatchSize, size)
}
async := false
switch mode {
case "sync":
if size <= 0 {
ci := fs.GetConfig(ctx)
size = ci.Transfers
}
if timeout <= 0 {
timeout = defaultTimeoutSync
}
case "async":
if size <= 0 {
size = defaultBatchSizeAsync
}
if timeout <= 0 {
timeout = defaultTimeoutAsync
}
async = true
case "off":
size = 0
default:
return nil, fmt.Errorf("dropbox: batch mode must be sync|async|off not %q", mode)
}
b := &batcher{
f: f,
mode: mode,
size: size,
timeout: timeout,
async: async,
in: make(chan batcherRequest, size),
closed: make(chan struct{}),
}
if b.Batching() {
b.atexit = atexit.Register(b.Shutdown)
b.wg.Add(1)
go b.commitLoop(context.Background())
}
return b, nil
}
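// Editor's note (worked example, not part of the diff): with mode
// "sync", size 0 and timeout 0 the batcher resolves to
// size == --transfers and timeout == 500ms; with mode "async" it
// resolves to size 100 and timeout 10s; mode "off" disables batching
// entirely (size 0).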
// Batching returns true if batching is active
func (b *batcher) Batching() bool {
return b.size > 0
}
// finishBatch commits the batch, returning a batch status to poll or maybe complete
func (b *batcher) finishBatch(ctx context.Context, items []*files.UploadSessionFinishArg) (complete *files.UploadSessionFinishBatchResult, err error) {
func (f *Fs) finishBatch(ctx context.Context, items []*files.UploadSessionFinishArg) (complete *files.UploadSessionFinishBatchResult, err error) {
var arg = &files.UploadSessionFinishBatchArg{
Entries: items,
}
err = b.f.pacer.Call(func() (bool, error) {
complete, err = b.f.srv.UploadSessionFinishBatchV2(arg)
err = f.pacer.Call(func() (bool, error) {
complete, err = f.srv.UploadSessionFinishBatchV2(arg)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
@@ -139,23 +37,10 @@ func (b *batcher) finishBatch(ctx context.Context, items []*files.UploadSessionF
return complete, nil
}
// commit a batch
func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionFinishArg, results []chan<- batcherResponse) (err error) {
// If commit fails then signal clients if sync
var signalled = b.async
defer func() {
if err != nil && !signalled {
// Signal to clients that there was an error
for _, result := range results {
result <- batcherResponse{err: err}
}
}
}()
desc := fmt.Sprintf("%s batch length %d starting with: %s", b.mode, len(items), items[0].Commit.Path)
fs.Debugf(b.f, "Committing %s", desc)
// Called by the batcher to commit a batch
func (f *Fs) commitBatch(ctx context.Context, items []*files.UploadSessionFinishArg, results []*files.FileMetadata, errors []error) (err error) {
// finalise the batch getting either a result or a job id to poll
complete, err := b.finishBatch(ctx, items)
complete, err := f.finishBatch(ctx, items)
if err != nil {
return err
}
@@ -166,19 +51,13 @@ func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionF
return fmt.Errorf("expecting %d items in batch but got %d", len(results), len(entries))
}
// Report results to clients
var (
errorTag = ""
errorCount = 0
)
// Format results for return
for i := range results {
item := entries[i]
resp := batcherResponse{}
if item.Tag == "success" {
resp.entry = item.Success
results[i] = item.Success
} else {
errorCount++
errorTag = item.Tag
errorTag := item.Tag
if item.Failure != nil {
errorTag = item.Failure.Tag
if item.Failure.LookupFailed != nil {
@@ -191,112 +70,9 @@ func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionF
errorTag += "/" + item.Failure.PropertiesError.Tag
}
}
resp.err = fmt.Errorf("batch upload failed: %s", errorTag)
}
if !b.async {
results[i] <- resp
errors[i] = fmt.Errorf("upload failed: %s", errorTag)
}
}
// Show signalled so no need to report error to clients from now on
signalled = true
// Report an error if any failed in the batch
if errorTag != "" {
return fmt.Errorf("batch had %d errors: last error: %s", errorCount, errorTag)
}
fs.Debugf(b.f, "Committed %s", desc)
return nil
}
// commitLoop runs the commit engine in the background
func (b *batcher) commitLoop(ctx context.Context) {
var (
items []*files.UploadSessionFinishArg // current batch of uncommitted files
results []chan<- batcherResponse // current batch of clients awaiting results
idleTimer = time.NewTimer(b.timeout)
commit = func() {
err := b.commitBatch(ctx, items, results)
if err != nil {
fs.Errorf(b.f, "%s batch commit: failed to commit batch length %d: %v", b.mode, len(items), err)
}
items, results = nil, nil
}
)
defer b.wg.Done()
defer idleTimer.Stop()
idleTimer.Stop()
outer:
for {
select {
case req := <-b.in:
if req.isQuit() {
break outer
}
items = append(items, req.commitInfo)
results = append(results, req.result)
idleTimer.Stop()
if len(items) >= b.size {
commit()
} else {
idleTimer.Reset(b.timeout)
}
case <-idleTimer.C:
if len(items) > 0 {
fs.Debugf(b.f, "Batch idle for %v so committing", b.timeout)
commit()
}
}
}
// commit any remaining items
if len(items) > 0 {
commit()
}
}
// Shutdown finishes any pending batches then shuts everything down
//
// Can be called from atexit handler
func (b *batcher) Shutdown() {
if !b.Batching() {
return
}
b.shutOnce.Do(func() {
atexit.Unregister(b.atexit)
fs.Infof(b.f, "Committing uploads - please wait...")
// show that batcher is shutting down
close(b.closed)
// quit the commitLoop by sending a quitRequest message
//
// Note that we don't close b.in because that will
// cause write to closed channel in Commit when we are
// exiting due to a signal.
b.in <- quitRequest
b.wg.Wait()
})
}
// Commit commits the file using a batch call, first adding it to the
// batch and then waiting for the batch to complete in a synchronous
// way if async is not set.
func (b *batcher) Commit(ctx context.Context, commitInfo *files.UploadSessionFinishArg) (entry *files.FileMetadata, err error) {
select {
case <-b.closed:
return nil, fserrors.FatalError(errors.New("batcher is shutting down"))
default:
}
fs.Debugf(b.f, "Adding %q to batch", commitInfo.Commit.Path)
resp := make(chan batcherResponse, 1)
b.in <- batcherRequest{
commitInfo: commitInfo,
result: resp,
}
// If running async then don't wait for the result
if b.async {
return nil, nil
}
result := <-resp
return result.entry, result.err
}
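// Editor's note - an illustrative sketch, not part of the diff: callers
// used the old batcher roughly like this (names hypothetical):
//
//	entry, err := b.Commit(ctx, commitInfo)
//	// sync mode: blocks until the batch commits; entry/err hold the result
//	// async mode: returns (nil, nil) immediately without checking completion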

View file

@@ -47,6 +47,7 @@ import (
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/batcher"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
@@ -121,6 +122,14 @@ var (
// Errors
errNotSupportedInSharedMode = fserrors.NoRetryError(errors.New("not supported in shared files mode"))
// Configure the batcher
defaultBatcherOptions = batcher.Options{
MaxBatchSize: 1000,
DefaultTimeoutSync: 500 * time.Millisecond,
DefaultTimeoutAsync: 10 * time.Second,
DefaultBatchSizeAsync: 100,
}
)
// Gets an oauth config with the right scopes
@@ -152,7 +161,7 @@ func init() {
},
})
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Options: append(append(oauthutil.SharedOptions, []fs.Option{{
Name: "chunk_size",
Help: fmt.Sprintf(`Upload chunk size (< %v).
@@ -207,71 +216,12 @@ are supported.
Note that we don't unmount the shared folder afterwards so the
--dropbox-shared-folders can be omitted after the first use of a particular
shared folder.`,
shared folder.
See also --dropbox-root-namespace for an alternative way to work with shared
folders.`,
Default: false,
Advanced: true,
}, {
Name: "batch_mode",
Help: `Upload file batching sync|async|off.
This sets the batch mode used by rclone.
For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)
This has 3 possible values
- off - no batching
- sync - batch uploads and check completion (default)
- async - batch upload and don't check completion
Rclone will close any outstanding batches when it exits which may cause
a delay on quit.
`,
Default: "sync",
Advanced: true,
}, {
Name: "batch_size",
Help: `Max number of files in upload batch.
This sets the batch size of files to upload. It has to be less than 1000.
By default this is 0 which means rclone will calculate the batch size
depending on the setting of batch_mode.
- batch_mode: async - default batch_size is 100
- batch_mode: sync - default batch_size is the same as --transfers
- batch_mode: off - not in use
Rclone will close any outstanding batches when it exits which may cause
a delay on quit.
Setting this is a great idea if you are uploading lots of small files
as it will make them a lot quicker. You can use --transfers 32 to
maximise throughput.
`,
Default: 0,
Advanced: true,
}, {
Name: "batch_timeout",
Help: `Max time to allow an idle upload batch before uploading.
If an upload batch is idle for more than this long then it will be
uploaded.
The default for this is 0 which means rclone will choose a sensible
default based on the batch_mode in use.
- batch_mode: async - default batch_timeout is 10s
- batch_mode: sync - default batch_timeout is 500ms
- batch_mode: off - not in use
`,
Default: fs.Duration(0),
Advanced: true,
}, {
Name: "batch_commit_timeout",
Help: `Max time to wait for a batch to finish committing`,
Default: fs.Duration(10 * time.Minute),
Advanced: true,
}, {
Name: "pacer_min_sleep",
Default: defaultMinSleep,
@@ -290,23 +240,28 @@ default based on the batch_mode in use.
encoder.EncodeDel |
encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8,
}}...),
}, {
Name: "root_namespace",
Help: "Specify a different Dropbox namespace ID to use as the root for all paths.",
Default: "",
Advanced: true,
}}...), defaultBatcherOptions.FsOptions("For full info see [the main docs](https://rclone.org/dropbox/#batch-mode)\n\n")...),
})
}
// Options defines the configuration for this backend
type Options struct {
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
SharedFiles bool `config:"shared_files"`
SharedFolders bool `config:"shared_folders"`
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
BatchCommitTimeout fs.Duration `config:"batch_commit_timeout"`
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
Impersonate string `config:"impersonate"`
SharedFiles bool `config:"shared_files"`
SharedFolders bool `config:"shared_folders"`
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
AsyncBatch bool `config:"async_batch"`
PacerMinSleep fs.Duration `config:"pacer_min_sleep"`
Enc encoder.MultiEncoder `config:"encoding"`
RootNsid string `config:"root_namespace"`
}
// Fs represents a remote dropbox server
@@ -325,7 +280,7 @@ type Fs struct {
slashRootSlash string // root with "/" prefix and postfix, lowercase
pacer *fs.Pacer // To pace the API calls
ns string // The namespace we are using or "" for none
batcher *batcher // batch builder
batcher *batcher.Batcher[*files.UploadSessionFinishArg, *files.FileMetadata]
}
// Object describes a dropbox object
@@ -431,7 +386,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
oldToken = strings.TrimSpace(oldToken)
if ok && oldToken != "" && oldToken[0] != '{' {
fs.Infof(name, "Converting token to new format")
newToken := fmt.Sprintf(`{"access_token":"%s","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
newToken := fmt.Sprintf(`{"access_token":%q,"token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}`, oldToken)
err := config.SetValueAndSave(name, config.ConfigToken, newToken)
if err != nil {
return nil, fmt.Errorf("NewFS convert token: %w", err)
@@ -451,7 +406,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ci: ci,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(opt.PacerMinSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.batcher, err = newBatcher(ctx, f, f.opt.BatchMode, f.opt.BatchSize, time.Duration(f.opt.BatchTimeout))
batcherOptions := defaultBatcherOptions
batcherOptions.Mode = f.opt.BatchMode
batcherOptions.Size = f.opt.BatchSize
batcherOptions.Timeout = time.Duration(f.opt.BatchTimeout)
f.batcher, err = batcher.New(ctx, f, f.commitBatch, batcherOptions)
if err != nil {
return nil, err
}
@@ -478,15 +437,15 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
members := []*team.UserSelectorArg{&user}
args := team.NewMembersGetInfoArgs(members)
memberIds, err := f.team.MembersGetInfo(args)
memberIDs, err := f.team.MembersGetInfo(args)
if err != nil {
return nil, fmt.Errorf("invalid dropbox team member: %q: %w", opt.Impersonate, err)
}
if len(memberIds) == 0 || memberIds[0].MemberInfo == nil || memberIds[0].MemberInfo.Profile == nil {
if len(memberIDs) == 0 || memberIDs[0].MemberInfo == nil || memberIDs[0].MemberInfo.Profile == nil {
return nil, fmt.Errorf("dropbox team member not found: %q", opt.Impersonate)
}
cfg.AsMemberID = memberIds[0].MemberInfo.Profile.MemberProfile.TeamMemberId
cfg.AsMemberID = memberIDs[0].MemberInfo.Profile.MemberProfile.TeamMemberId
}
f.srv = files.New(cfg)
@@ -552,8 +511,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features.Fill(ctx, f)
// If root starts with / then use the actual root
if strings.HasPrefix(root, "/") {
if f.opt.RootNsid != "" {
f.ns = f.opt.RootNsid
fs.Debugf(f, "Overriding root namespace to %q", f.ns)
} else if strings.HasPrefix(root, "/") {
// If root starts with / then use the actual root
var acc *users.FullAccount
err = f.pacer.Call(func() (bool, error) {
acc, err = f.users.GetCurrentAccount()
@@ -694,7 +656,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// listSharedFoldersApi lists all available shared folders mounted and not mounted
// listSharedFolders lists all available shared folders mounted and not mounted
// we'll need the id later so we have to return them in original format
func (f *Fs) listSharedFolders(ctx context.Context) (entries fs.DirEntries, err error) {
started := false
@@ -996,6 +958,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
if root == "/" {
return errors.New("can't remove root directory")
}
encRoot := f.opt.Enc.FromStandardPath(root)
if check {
// check directory exists
@@ -1004,10 +967,9 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
return fmt.Errorf("Rmdir: %w", err)
}
root = f.opt.Enc.FromStandardPath(root)
// check directory empty
arg := files.ListFolderArg{
Path: root,
Path: encRoot,
Recursive: false,
}
if root == "/" {
@@ -1028,7 +990,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) (err error)
// remove it
err = f.pacer.Call(func() (bool, error) {
_, err = f.srv.DeleteV2(&files.DeleteArg{Path: root})
_, err = f.srv.DeleteV2(&files.DeleteArg{Path: encRoot})
return shouldRetry(ctx, err)
})
return err
@@ -1281,18 +1243,21 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
return nil, err
}
var total uint64
used := q.Used
if q.Allocation != nil {
if q.Allocation.Individual != nil {
total += q.Allocation.Individual.Allocated
}
if q.Allocation.Team != nil {
total += q.Allocation.Team.Allocated
// Override used with Team.Used as this includes q.Used already
used = q.Allocation.Team.Used
}
}
usage = &fs.Usage{
Total: fs.NewUsageValue(int64(total)), // quota of bytes that can be used
Used: fs.NewUsageValue(int64(q.Used)), // bytes in use
Free: fs.NewUsageValue(int64(total - q.Used)), // bytes which can be uploaded before reaching the quota
Total: fs.NewUsageValue(int64(total)), // quota of bytes that can be used
Used: fs.NewUsageValue(int64(used)), // bytes in use
Free: fs.NewUsageValue(int64(total - used)), // bytes which can be uploaded before reaching the quota
}
return usage, nil
}
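// Editor's note (worked example, not part of the diff): for a team
// account with no individual allocation, Team.Allocated = 5 TiB and
// Team.Used = 1 TiB, this reports Total = 5 TiB, Used = 1 TiB
// (overriding q.Used, which only counts this member) and Free = 4 TiB.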
@@ -1722,7 +1687,7 @@ func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *f
// If we are batching then we should have written all the data now
// store the commit info now for a batch commit
if o.fs.batcher.Batching() {
return o.fs.batcher.Commit(ctx, args)
return o.fs.batcher.Commit(ctx, o.remote, args)
}
err = o.fs.pacer.Call(func() (bool, error) {

View file

@@ -61,7 +61,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
return false, err // No such user
case 186:
return false, err // IP blocked?
case 374:
case 374, 412: // Flood detected seems to be #412 now
fs.Debugf(nil, "Sleeping for 30 seconds due to: %v", err)
time.Sleep(30 * time.Second)
default:

View file

@@ -441,23 +441,28 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
srcFs := srcObj.fs
// Find current directory ID
_, currentDirectoryID, err := f.dirCache.FindPath(ctx, remote, false)
srcLeaf, srcDirectoryID, err := srcFs.dirCache.FindPath(ctx, srcObj.remote, false)
if err != nil {
return nil, err
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
dstObj, dstLeaf, dstDirectoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// If it is in the correct directory, just rename it
var url string
if currentDirectoryID == directoryID {
resp, err := f.renameFile(ctx, srcObj.file.URL, leaf)
if srcDirectoryID == dstDirectoryID {
// No rename needed
if srcLeaf == dstLeaf {
return src, nil
}
resp, err := f.renameFile(ctx, srcObj.file.URL, dstLeaf)
if err != nil {
return nil, fmt.Errorf("couldn't rename file: %w", err)
}
@@ -466,11 +471,16 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
}
url = resp.URLs[0].URL
} else {
folderID, err := strconv.Atoi(directoryID)
dstFolderID, err := strconv.Atoi(dstDirectoryID)
if err != nil {
return nil, err
}
resp, err := f.moveFile(ctx, srcObj.file.URL, folderID, leaf)
rename := dstLeaf
// No rename needed
if srcLeaf == dstLeaf {
rename = ""
}
resp, err := f.moveFile(ctx, srcObj.file.URL, dstFolderID, rename)
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}

View file

@@ -0,0 +1,901 @@
// Package filescom provides an interface to the Files.com
// object storage system.
package filescom
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"path"
"strings"
"time"
files_sdk "github.com/Files-com/files-sdk-go/v3"
"github.com/Files-com/files-sdk-go/v3/bundle"
"github.com/Files-com/files-sdk-go/v3/file"
file_migration "github.com/Files-com/files-sdk-go/v3/filemigration"
"github.com/Files-com/files-sdk-go/v3/folder"
"github.com/Files-com/files-sdk-go/v3/session"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
)
/*
Run of rclone info
stringNeedsEscaping = []rune{
'/', '\x00'
}
maxFileLength = 512 // for 1 byte unicode characters
maxFileLength = 512 // for 2 byte unicode characters
maxFileLength = 512 // for 3 byte unicode characters
maxFileLength = 512 // for 4 byte unicode characters
canWriteUnnormalized = true
canReadUnnormalized = true
canReadRenormalized = true
canStream = true
*/
const (
minSleep = 10 * time.Millisecond
maxSleep = 2 * time.Second
decayConstant = 2 // bigger for slower decay, exponential
folderNotEmpty = "processing-failure/folder-not-empty"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "filescom",
Description: "Files.com",
NewFs: NewFs,
Options: []fs.Option{
{
Name: "site",
Help: "Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com).",
}, {
Name: "username",
Help: "The username used to authenticate with Files.com.",
}, {
Name: "password",
Help: "The password used to authenticate with Files.com.",
IsPassword: true,
}, {
Name: "api_key",
Help: "The API key used to authenticate with Files.com.",
Advanced: true,
Sensitive: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Display |
encoder.EncodeBackSlash |
encoder.EncodeRightSpace |
encoder.EncodeRightCrLfHtVt |
encoder.EncodeInvalidUtf8),
}},
})
}
// Options defines the configuration for this backend
type Options struct {
Site string `config:"site"`
Username string `config:"username"`
Password string `config:"password"`
APIKey string `config:"api_key"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote files.com server
type Fs struct {
name string // name of this remote
root string // the path we are working on
opt Options // parsed options
features *fs.Features // optional features
fileClient *file.Client // the connection to the file API
folderClient *folder.Client // the connection to the folder API
migrationClient *file_migration.Client // the connection to the file migration API
bundleClient *bundle.Client // the connection to the bundle API
pacer *fs.Pacer // pacer for API calls
}
// Object describes a files object
//
// Will definitely have info but maybe not meta
type Object struct {
fs *Fs // what this object is part of
remote string // The remote path
size int64 // size of the object
crc32 string // CRC32 of the object content
md5 string // MD5 of the object content
mimeType string // Content-Type of the object
modTime time.Time // modification time of the object
}
// ------------------------------------------------------------
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("files root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Encode remote and turn it into an absolute path in the share
func (f *Fs) absPath(remote string) string {
return f.opt.Enc.FromStandardPath(path.Join(f.root, remote))
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
429, // Too Many Requests.
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// shouldRetry returns a boolean as to whether this err deserves to be
// retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if apiErr, ok := err.(files_sdk.ResponseError); ok {
for _, e := range retryErrorCodes {
if apiErr.HttpCode == e {
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
}
}
return fserrors.ShouldRetry(err), err
}
// readMetaDataForPath reads the metadata from the path
func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *files_sdk.File, err error) {
params := files_sdk.FileFindParams{
Path: f.absPath(path),
}
var file files_sdk.File
err = f.pacer.Call(func() (bool, error) {
file, err = f.fileClient.Find(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
return &file, nil
}
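// Editor's note - an illustrative sketch, not part of this change:
// every Files.com SDK call in this backend is wrapped the same way,
// with the pacer retrying while shouldRetry reports a retryable error
// ("sdkCall" and "result" are placeholders):
//
//	err = f.pacer.Call(func() (bool, error) {
//		result, err = sdkCall(params, files_sdk.WithContext(ctx))
//		return shouldRetry(ctx, err)
//	})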
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
root = strings.Trim(root, "/")
config, err := newClientConfig(ctx, opt)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
fileClient: &file.Client{Config: config},
folderClient: &folder.Client{Config: config},
migrationClient: &file_migration.Client{Config: config},
bundleClient: &bundle.Client{Config: config},
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
CaseInsensitive: true,
CanHaveEmptyDirectories: true,
ReadMimeType: true,
DirModTimeUpdatesOnWrite: true,
}).Fill(ctx, f)
if f.root != "" {
info, err := f.readMetaDataForPath(ctx, "")
if err == nil && !info.IsDir() {
f.root = path.Dir(f.root)
if f.root == "." {
f.root = ""
}
return f, fs.ErrorIsFile
}
}
return f, err
}
func newClientConfig(ctx context.Context, opt *Options) (config files_sdk.Config, err error) {
if opt.Site != "" {
if strings.Contains(opt.Site, ".") {
config.EndpointOverride = opt.Site
} else {
config.Subdomain = opt.Site
}
_, err = url.ParseRequestURI(config.Endpoint())
if err != nil {
err = fmt.Errorf("invalid domain or subdomain: %v", opt.Site)
return
}
}
config = config.Init().SetCustomClient(fshttp.NewClient(ctx))
if opt.APIKey != "" {
config.APIKey = opt.APIKey
return
}
if opt.Username == "" {
err = errors.New("username not found")
return
}
if opt.Password == "" {
err = errors.New("password not found")
return
}
opt.Password, err = obscure.Reveal(opt.Password)
if err != nil {
return
}
sessionClient := session.Client{Config: config}
params := files_sdk.SessionCreateParams{
Username: opt.Username,
Password: opt.Password,
}
thisSession, err := sessionClient.Create(params, files_sdk.WithContext(ctx))
if err != nil {
err = fmt.Errorf("couldn't create session: %w", err)
return
}
config.SessionId = thisSession.Id
return
}
// Return an Object from a path
//
// If it can't be found it returns the error fs.ErrorObjectNotFound.
func (f *Fs) newObjectWithInfo(ctx context.Context, remote string, file *files_sdk.File) (fs.Object, error) {
o := &Object{
fs: f,
remote: remote,
}
var err error
if file != nil {
err = o.setMetaData(file)
} else {
err = o.readMetaData(ctx) // reads info and meta, returning an error
}
if err != nil {
return nil, err
}
return o, nil
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error fs.ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
return f.newObjectWithInfo(ctx, remote, nil)
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
var it *folder.Iter
params := files_sdk.FolderListForParams{
Path: f.absPath(dir),
}
err = f.pacer.Call(func() (bool, error) {
it, err = f.folderClient.ListFor(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't list files: %w", err)
}
for it.Next() {
item := ptr(it.File())
remote := f.opt.Enc.ToStandardPath(item.DisplayName)
remote = path.Join(dir, remote)
if remote == dir {
continue
}
if item.IsDir() {
d := fs.NewDir(remote, item.ModTime())
entries = append(entries, d)
} else {
o, err := f.newObjectWithInfo(ctx, remote, item)
if err != nil {
return nil, err
}
entries = append(entries, o)
}
}
err = it.Err()
if files_sdk.IsNotExist(err) {
return nil, fs.ErrorDirNotFound
}
return
}
// Creates from the parameters passed in a half-finished Object which
// must have setMetaData called on it
//
// Returns the object and error.
//
// Used to create new objects
func (f *Fs) createObject(ctx context.Context, remote string) (o *Object, err error) {
// Create the directory for the object if it doesn't exist
err = f.mkParentDir(ctx, remote)
if err != nil {
return
}
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
}
return o, nil
}
// Put the object
//
// Copy the reader in to the new object which is returned.
//
// The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// Temporary Object under construction
fs := &Object{
fs: f,
remote: src.Remote(),
}
return fs, fs.Update(ctx, in, src, options...)
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
func (f *Fs) mkdir(ctx context.Context, path string) error {
if path == "" || path == "." {
return nil
}
params := files_sdk.FolderCreateParams{
Path: path,
MkdirParents: ptr(true),
}
err := f.pacer.Call(func() (bool, error) {
_, err := f.folderClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if files_sdk.IsExist(err) {
return nil
}
return err
}
// Make the parent directory of remote
func (f *Fs) mkParentDir(ctx context.Context, remote string) error {
return f.mkdir(ctx, path.Dir(f.absPath(remote)))
}
// Mkdir creates the container if it doesn't exist
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return f.mkdir(ctx, f.absPath(dir))
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
o := Object{
fs: f,
remote: dir,
}
return o.SetModTime(ctx, modTime)
}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
path := f.absPath(dir)
if path == "" || path == "." {
return errors.New("can't purge root directory")
}
params := files_sdk.FileDeleteParams{
Path: path,
Recursive: ptr(!check),
}
err := f.pacer.Call(func() (bool, error) {
err := f.fileClient.Delete(params, files_sdk.WithContext(ctx))
// Allow for eventual consistency deletion of child objects.
if isFolderNotEmpty(err) {
return true, err
}
return shouldRetry(ctx, err)
})
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorDirNotFound
} else if isFolderNotEmpty(err) {
return fs.ErrorDirectoryNotEmpty
}
return fmt.Errorf("rmdir failed: %w", err)
}
return nil
}
// Rmdir deletes the root folder
//
// Returns an error if it isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, true)
}
// Precision return the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dstObj fs.Object, err error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
err = srcObj.readMetaData(ctx)
if err != nil {
return
}
srcPath := srcObj.fs.absPath(srcObj.remote)
dstPath := f.absPath(remote)
if strings.EqualFold(srcPath, dstPath) {
return nil, fmt.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath)
}
// Create temporary object
dstObj, err = f.createObject(ctx, remote)
if err != nil {
return
}
// Copy the object
params := files_sdk.FileCopyParams{
Path: srcPath,
Destination: dstPath,
Overwrite: ptr(true),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Copy(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return
}
err = f.waitForAction(ctx, action, "copy")
if err != nil {
return
}
err = dstObj.SetModTime(ctx, srcObj.modTime)
return
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.purgeCheck(ctx, dir, false)
}
// move a file or folder
func (f *Fs) move(ctx context.Context, src *Fs, srcRemote string, dstRemote string) (info *files_sdk.File, err error) {
// Move the object
params := files_sdk.FileMoveParams{
Path: src.absPath(srcRemote),
Destination: f.absPath(dstRemote),
}
var action files_sdk.FileAction
err = f.pacer.Call(func() (bool, error) {
action, err = f.fileClient.Move(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
}
err = f.waitForAction(ctx, action, "move")
if err != nil {
return nil, err
}
info, err = f.readMetaDataForPath(ctx, dstRemote)
return
}
func (f *Fs) waitForAction(ctx context.Context, action files_sdk.FileAction, operation string) (err error) {
var migration files_sdk.FileMigration
err = f.pacer.Call(func() (bool, error) {
migration, err = f.migrationClient.Wait(action, func(migration files_sdk.FileMigration) {
// noop
}, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err == nil && migration.Status != "completed" {
return fmt.Errorf("%v did not complete successfully: %v", operation, migration.Status)
}
return
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Create temporary object
dstObj, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// Do the move
info, err := f.move(ctx, srcObj.fs, srcObj.remote, dstObj.remote)
if err != nil {
return nil, err
}
err = dstObj.setMetaData(info)
if err != nil {
return nil, err
}
return dstObj, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(srcFs, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
// Check if destination exists
_, err = f.readMetaDataForPath(ctx, dstRemote)
if err == nil {
return fs.ErrorDirExists
}
// Create temporary object
dstObj, err := f.createObject(ctx, dstRemote)
if err != nil {
return
}
// Do the move
_, err = f.move(ctx, srcFs, srcRemote, dstObj.remote)
return
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (url string, err error) {
params := files_sdk.BundleCreateParams{
Paths: []string{f.absPath(remote)},
}
if expire < fs.DurationOff {
params.ExpiresAt = ptr(time.Now().Add(time.Duration(expire)))
}
var bundle files_sdk.Bundle
err = f.pacer.Call(func() (bool, error) {
bundle, err = f.bundleClient.Create(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
url = bundle.Url
return
}
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet(hash.CRC32, hash.MD5)
}
// ------------------------------------------------------------
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.remote
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Hash returns the MD5 of an object returning a lowercase hex string
func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
switch t {
case hash.CRC32:
if o.crc32 == "" {
return "", nil
}
return fmt.Sprintf("%08s", o.crc32), nil
case hash.MD5:
return o.md5, nil
}
return "", hash.ErrUnsupported
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.size
}
// setMetaData sets the metadata from info
func (o *Object) setMetaData(file *files_sdk.File) error {
o.modTime = file.ModTime()
if !file.IsDir() {
o.size = file.Size
o.crc32 = file.Crc32
o.md5 = file.Md5
o.mimeType = file.MimeType
}
return nil
}
// readMetaData gets the metadata if it hasn't already been fetched
//
// it also sets the info
func (o *Object) readMetaData(ctx context.Context) (err error) {
file, err := o.fs.readMetaDataForPath(ctx, o.remote)
if err != nil {
if files_sdk.IsNotExist(err) {
return fs.ErrorObjectNotFound
}
return err
}
if file.IsDir() {
return fs.ErrorIsDir
}
return o.setMetaData(file)
}
// ModTime returns the modification time of the object
//
// It attempts to read the object's mtime and if that isn't present the
// LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error) {
params := files_sdk.FileUpdateParams{
Path: o.fs.absPath(o.remote),
ProvidedMtime: &modTime,
}
var file files_sdk.File
err = o.fs.pacer.Call(func() (bool, error) {
file, err = o.fs.fileClient.Update(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.setMetaData(&file)
}
// Storable returns a boolean showing whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
// Offset and Count for range download
var offset, count int64
fs.FixRangeOption(options, o.size)
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(o.size)
if count < 0 {
count = o.size - offset
}
case *fs.SeekOption:
offset = x.Offset
count = o.size - offset
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
params := files_sdk.FileDownloadParams{
Path: o.fs.absPath(o.remote),
}
headers := &http.Header{}
headers.Set("Range", fmt.Sprintf("bytes=%v-%v", offset, offset+count-1))
err = o.fs.pacer.Call(func() (bool, error) {
_, err = o.fs.fileClient.Download(
params,
files_sdk.WithContext(ctx),
files_sdk.RequestHeadersOption(headers),
files_sdk.ResponseBodyOption(func(closer io.ReadCloser) error {
in = closer
return err
}),
)
return shouldRetry(ctx, err)
})
return
}
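// Editor's note (worked example, not part of the diff): a RangeOption
// decoding to offset=100, count=50 sets the request header
// "Range: bytes=100-149".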
// Returns a pointer to t - useful for returning pointers to constants
func ptr[T any](t T) *T {
return &t
}
func isFolderNotEmpty(err error) bool {
var re files_sdk.ResponseError
ok := errors.As(err, &re)
return ok && re.Type == folderNotEmpty
}
// Update the object with the contents of the io.Reader, modTime and size
//
// If existing is set then it updates the object rather than creating a new one.
//
// The new object may have been created if an error is returned.
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
uploadOpts := []file.UploadOption{
file.UploadWithContext(ctx),
file.UploadWithReader(in),
file.UploadWithDestinationPath(o.fs.absPath(o.remote)),
file.UploadWithProvidedMtime(src.ModTime(ctx)),
}
err := o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Upload(uploadOpts...)
return shouldRetry(ctx, err)
})
if err != nil {
return err
}
return o.readMetaData(ctx)
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
params := files_sdk.FileDeleteParams{
Path: o.fs.absPath(o.remote),
}
return o.fs.pacer.Call(func() (bool, error) {
err := o.fs.fileClient.Delete(params, files_sdk.WithContext(ctx))
return shouldRetry(ctx, err)
})
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
return o.mimeType
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.MimeTyper = (*Object)(nil)
)


@ -0,0 +1,17 @@
// Test Files filesystem interface
package filescom_test
import (
"testing"
"github.com/rclone/rclone/backend/filescom"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFilesCom:",
NilObject: (*filescom.Object)(nil),
})
}

1254
backend/frostfs/frostfs.go Normal file

File diff suppressed because it is too large


@ -0,0 +1,16 @@
package frostfs
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFrostFS:",
NilObject: (*Object)(nil),
SkipInvalidUTF8: true,
})
}

326
backend/frostfs/util.go Normal file

@ -0,0 +1,326 @@
package frostfs
import (
"context"
"fmt"
"strconv"
"strings"
"time"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container"
cid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/container/id"
resolver "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/ns"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object"
oid "git.frostfs.info/TrueCloudLab/frostfs-sdk-go/object/id"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/pool"
"git.frostfs.info/TrueCloudLab/frostfs-sdk-go/user"
"git.frostfs.info/TrueCloudLab/policy-engine/pkg/chain"
"git.frostfs.info/TrueCloudLab/policy-engine/schema/native"
"github.com/nspcc-dev/neo-go/cli/flags"
"github.com/nspcc-dev/neo-go/pkg/crypto/keys"
"github.com/nspcc-dev/neo-go/pkg/wallet"
"github.com/rclone/rclone/fs"
)
type endpointInfo struct {
Address string
Priority int
Weight float64
}
func publicReadWriteCCPRules() []chain.Rule {
return []chain.Rule{
{
Status: chain.Allow, Actions: chain.Actions{
Inverted: false,
Names: []string{
native.MethodPutObject,
native.MethodGetObject,
native.MethodHeadObject,
native.MethodDeleteObject,
native.MethodSearchObject,
native.MethodRangeObject,
native.MethodHashObject,
native.MethodPatchObject,
},
}, Resources: chain.Resources{
Inverted: false,
Names: []string{native.ResourceFormatRootObjects},
}, Any: false},
}
}
func privateCCPRules() []chain.Rule {
rule := publicReadWriteCCPRules()
// The same as public-read-write, except that only the owner is allowed to perform the listed actions
rule[0].Condition = []chain.Condition{
{
Op: chain.CondStringEquals,
Kind: chain.KindRequest,
Key: native.PropertyKeyActorRole,
Value: native.PropertyValueContainerRoleOwner,
},
}
return rule
}
func publicReadCCPRules() []chain.Rule {
rule := privateCCPRules()
// Add a rule that allows other users to perform reading actions.
rule = append(rule, chain.Rule{
Status: chain.Allow, Actions: chain.Actions{
Inverted: false,
Names: []string{
native.MethodGetObject,
native.MethodHeadObject,
native.MethodRangeObject,
native.MethodHashObject,
native.MethodSearchObject,
},
}, Resources: chain.Resources{
Inverted: false,
Names: []string{native.ResourceFormatRootObjects},
}, Condition: []chain.Condition{
{
Op: chain.CondStringEquals,
Kind: chain.KindRequest,
Key: native.PropertyKeyActorRole,
Value: native.PropertyValueContainerRoleOthers,
},
}, Any: false})
return rule
}
func parseContainerCreationPolicyString(policyString string) ([]chain.Rule, error) {
switch policyString {
case "private":
return privateCCPRules(), nil
case "public-read":
return publicReadCCPRules(), nil
case "public-read-write":
return publicReadWriteCCPRules(), nil
}
return nil, fmt.Errorf("invalid container creation policy: %s", policyString)
}
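An illustrative call (a sketch, not part of the diff), showing how the config string selects a rule chain:

// "public-read" builds on the private rule set and appends a read-only rule for role "others".
rules, err := parseContainerCreationPolicyString("public-read")
// err == nil; len(rules) == 2 (owner rule + read rule for others)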
func parseEndpoints(endpointParam string) ([]endpointInfo, error) {
var err error
expectedLength := -1 // to make sure all endpoints have the same format
endpoints := strings.Split(strings.TrimSpace(endpointParam), " ")
res := make([]endpointInfo, 0, len(endpoints))
seen := make(map[string]struct{}, len(endpoints))
for _, endpoint := range endpoints {
endpointInfoSplit := strings.Split(endpoint, ",")
address := endpointInfoSplit[0]
if len(address) == 0 {
continue
}
if _, ok := seen[address]; ok {
return nil, fmt.Errorf("endpoint '%s' is already defined", address)
}
seen[address] = struct{}{}
epInfo := endpointInfo{
Address: address,
Priority: 1,
Weight: 1,
}
if expectedLength == -1 {
expectedLength = len(endpointInfoSplit)
}
if len(endpointInfoSplit) != expectedLength {
return nil, fmt.Errorf("all endpoints must have the same format: '%s'", endpointParam)
}
switch len(endpointInfoSplit) {
case 1:
case 2:
epInfo.Priority, err = parsePriority(endpointInfoSplit[1])
if err != nil {
return nil, fmt.Errorf("invalid endpoint '%s': %w", endpoint, err)
}
case 3:
epInfo.Priority, err = parsePriority(endpointInfoSplit[1])
if err != nil {
return nil, fmt.Errorf("invalid endpoint '%s': %w", endpoint, err)
}
epInfo.Weight, err = parseWeight(endpointInfoSplit[2])
if err != nil {
return nil, fmt.Errorf("invalid endpoint '%s': %w", endpoint, err)
}
default:
return nil, fmt.Errorf("invalid endpoint format '%s'", endpoint)
}
res = append(res, epInfo)
}
return res, nil
}
func parsePriority(priorityStr string) (int, error) {
priority, err := strconv.Atoi(priorityStr)
if err != nil {
return 0, fmt.Errorf("invalid priority '%s': %w", priorityStr, err)
}
if priority <= 0 {
return 0, fmt.Errorf("priority must be positive '%s'", priorityStr)
}
return priority, nil
}
func parseWeight(weightStr string) (float64, error) {
weight, err := strconv.ParseFloat(weightStr, 64)
if err != nil {
return 0, fmt.Errorf("invalid weight '%s': %w", weightStr, err)
}
if weight <= 0 {
return 0, fmt.Errorf("weight must be positive '%s'", weightStr)
}
return weight, nil
}
func createPool(ctx context.Context, key *keys.PrivateKey, cfg *Options) (*pool.Pool, error) {
var prm pool.InitParameters
prm.SetKey(&key.PrivateKey)
prm.SetNodeDialTimeout(time.Duration(cfg.FrostfsConnectionTimeout))
prm.SetHealthcheckTimeout(time.Duration(cfg.FrostfsRequestTimeout))
prm.SetClientRebalanceInterval(time.Duration(cfg.FrostfsRebalanceInterval))
prm.SetSessionExpirationDuration(cfg.FrostfsSessionExpiration)
nodes, err := getNodePoolParams(cfg.FrostfsEndpoint)
if err != nil {
return nil, err
}
for _, node := range nodes {
prm.AddNode(node)
}
p, err := pool.NewPool(prm)
if err != nil {
return nil, fmt.Errorf("create pool: %w", err)
}
if err = p.Dial(ctx); err != nil {
return nil, fmt.Errorf("dial pool: %w", err)
}
return p, nil
}
func getNodePoolParams(endpointParam string) ([]pool.NodeParam, error) {
endpointInfos, err := parseEndpoints(endpointParam)
if err != nil {
return nil, fmt.Errorf("parse endpoints params: %w", err)
}
res := make([]pool.NodeParam, len(endpointInfos))
for i, info := range endpointInfos {
res[i] = pool.NewNodeParam(info.Priority, info.Address, info.Weight)
}
return res, nil
}
func createNNSResolver(cfg *Options) (*resolver.NNS, error) {
if cfg.RPCEndpoint == "" {
return nil, nil
}
var nns resolver.NNS
if err := nns.Dial(cfg.RPCEndpoint); err != nil {
return nil, fmt.Errorf("dial NNS resolver: %w", err)
}
return &nns, nil
}
func getAccount(cfg *Options) (*wallet.Account, error) {
w, err := wallet.NewWalletFromFile(cfg.Wallet)
if err != nil {
return nil, err
}
addr := w.GetChangeAddress()
if cfg.Address != "" {
addr, err = flags.ParseAddress(cfg.Address)
if err != nil {
return nil, fmt.Errorf("invalid address")
}
}
acc := w.GetAccount(addr)
err = acc.Decrypt(cfg.Password, w.Scrypt)
if err != nil {
return nil, err
}
return acc, nil
}
func newAddress(cnrID cid.ID, objID oid.ID) oid.Address {
var addr oid.Address
addr.SetContainer(cnrID)
addr.SetObject(objID)
return addr
}
func formObject(own *user.ID, cnrID cid.ID, name string, header map[string]string) *object.Object {
attributes := make([]object.Attribute, 0, 1+len(header))
filename := object.NewAttribute()
filename.SetKey(object.AttributeFileName)
filename.SetValue(name)
attributes = append(attributes, *filename)
for key, val := range header {
attr := object.NewAttribute()
attr.SetKey(key)
attr.SetValue(val)
attributes = append(attributes, *attr)
}
obj := object.New()
obj.SetOwnerID(*own)
obj.SetContainerID(cnrID)
obj.SetAttributes(attributes...)
return obj
}
func newDir(cnrID cid.ID, cnr container.Container, defaultZone string) *fs.Dir {
remote := cnrID.EncodeToString()
timestamp := container.CreatedAt(cnr)
if domain := container.ReadDomain(cnr); domain.Name() != "" {
if defaultZone != domain.Zone() {
remote = domain.Name() + "." + domain.Zone()
} else {
remote = domain.Name()
}
}
dir := fs.NewDir(remote, timestamp)
dir.SetID(cnrID.String())
return dir
}
func getContainerNameAndZone(containerStr, defaultZone string) (cnrName string, cnrZone string) {
defer func() {
if len(cnrZone) == 0 {
cnrZone = defaultZone
}
}()
if idx := strings.Index(containerStr, "."); idx >= 0 {
return containerStr[:idx], containerStr[idx+1:]
}
return containerStr, defaultZone
}


@ -0,0 +1,205 @@
package frostfs
import (
"strconv"
"testing"
"github.com/stretchr/testify/require"
)
func TestGetZoneAndContainerNames(t *testing.T) {
for i, tc := range []struct {
cnrStr string
defZone string
expectedName string
expectedZone string
}{
{
cnrStr: "",
defZone: "def_zone",
expectedName: "",
expectedZone: "def_zone",
},
{
cnrStr: "",
defZone: "def_zone",
expectedName: "",
expectedZone: "def_zone",
},
{
cnrStr: "cnr_name",
defZone: "def_zone",
expectedName: "cnr_name",
expectedZone: "def_zone",
},
{
cnrStr: "cnr_name.",
defZone: "def_zone",
expectedName: "cnr_name",
expectedZone: "def_zone",
},
{
cnrStr: ".cnr_zone",
defZone: "def_zone",
expectedName: "",
expectedZone: "cnr_zone",
}, {
cnrStr: ".cnr_zone",
defZone: "def_zone",
expectedName: "",
expectedZone: "cnr_zone",
},
} {
t.Run(strconv.Itoa(i), func(t *testing.T) {
actualName, actualZone := getContainerNameAndZone(tc.cnrStr, tc.defZone)
require.Equal(t, tc.expectedZone, actualZone)
require.Equal(t, tc.expectedName, actualName)
})
}
}
func TestParseContainerCreationPolicy(t *testing.T) {
for i, tc := range []struct {
ACLString string
ExpectedError bool
}{
{
ACLString: "",
ExpectedError: true,
},
{
ACLString: "public-ready",
ExpectedError: true,
},
{
ACLString: "public-read",
ExpectedError: false,
},
{
ACLString: "public-read-write",
ExpectedError: false,
},
{
ACLString: "private",
ExpectedError: false,
},
} {
t.Run(strconv.Itoa(i), func(t *testing.T) {
rules, err := parseContainerCreationPolicyString(tc.ACLString)
if tc.ExpectedError {
require.Error(t, err)
require.Nil(t, rules)
} else {
require.NoError(t, err)
require.NotNil(t, rules)
}
})
}
}
func TestParseEndpoints(t *testing.T) {
for i, tc := range []struct {
EndpointsParam string
ExpectedError bool
ExpectedResult []endpointInfo
}{
{
EndpointsParam: "s01.frostfs.devenv:8080",
ExpectedResult: []endpointInfo{{
Address: "s01.frostfs.devenv:8080",
Priority: 1,
Weight: 1,
}},
},
{
EndpointsParam: "s01.frostfs.devenv:8080,2",
ExpectedResult: []endpointInfo{{
Address: "s01.frostfs.devenv:8080",
Priority: 2,
Weight: 1,
}},
},
{
EndpointsParam: "s01.frostfs.devenv:8080,2,3",
ExpectedResult: []endpointInfo{{
Address: "s01.frostfs.devenv:8080",
Priority: 2,
Weight: 3,
}},
},
{
EndpointsParam: " s01.frostfs.devenv:8080 s02.frostfs.devenv:8080 ",
ExpectedResult: []endpointInfo{
{
Address: "s01.frostfs.devenv:8080",
Priority: 1,
Weight: 1,
},
{
Address: "s02.frostfs.devenv:8080",
Priority: 1,
Weight: 1,
},
},
},
{
EndpointsParam: "s01.frostfs.devenv:8080,1,1 s02.frostfs.devenv:8080,2,1 s03.frostfs.devenv:8080,2,9",
ExpectedResult: []endpointInfo{
{
Address: "s01.frostfs.devenv:8080",
Priority: 1,
Weight: 1,
},
{
Address: "s02.frostfs.devenv:8080",
Priority: 2,
Weight: 1,
},
{
Address: "s03.frostfs.devenv:8080",
Priority: 2,
Weight: 9,
},
},
},
{
EndpointsParam: "s01.frostfs.devenv:8080,-1,1",
ExpectedError: true,
},
{
EndpointsParam: "s01.frostfs.devenv:8080,,",
ExpectedError: true,
},
{
EndpointsParam: "s01.frostfs.devenv:8080,sd,sd",
ExpectedError: true,
},
{
EndpointsParam: "s01.frostfs.devenv:8080,1,0",
ExpectedError: true,
},
{
EndpointsParam: "s01.frostfs.devenv:8080,1 s02.frostfs.devenv:8080",
ExpectedError: true,
},
{
EndpointsParam: "s01.frostfs.devenv:8080,1,2 s02.frostfs.devenv:8080",
ExpectedError: true,
},
{
EndpointsParam: "s01.frostfs.devenv:8080,1,2 s02.frostfs.devenv:8080,1",
ExpectedError: true,
},
} {
t.Run(strconv.Itoa(i), func(t *testing.T) {
res, err := parseEndpoints(tc.EndpointsParam)
if tc.ExpectedError {
require.Error(t, err)
return
}
require.NoError(t, err)
require.Equal(t, tc.ExpectedResult, res)
})
}
}


@ -85,7 +85,7 @@ to an encrypted one. Cannot be used in combination with implicit FTPS.`,
Default: false,
}, {
Name: "concurrency",
Help: strings.Replace(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Help: strings.ReplaceAll(`Maximum number of FTP simultaneous connections, 0 for unlimited.
Note that setting this is very likely to cause deadlocks so it should
be used with care.
@ -99,7 +99,7 @@ maximum of |--checkers| and |--transfers|.
So for |concurrency 3| you'd use |--checkers 2 --transfers 2
--check-first| or |--checkers 1 --transfers 1|.
`, "|", "`", -1),
`, "|", "`"),
Default: 0,
Advanced: true,
}, {
@ -970,6 +970,8 @@ func (f *Fs) mkdir(ctx context.Context, abspath string) error {
f.putFtpConnection(&c, err)
if errX := textprotoError(err); errX != nil {
switch errX.Code {
case ftp.StatusRequestedFileActionOK: // some ftp servers apparently return 250 instead of 257
err = nil // see: https://forum.rclone.org/t/rclone-pop-up-an-i-o-error-when-creating-a-folder-in-a-mounted-ftp-drive/44368/
case ftp.StatusFileUnavailable: // dir already exists: see issue #2181
err = nil
case 521: // dir already exists: error number according to RFC 959: issue #2363

311
backend/gofile/api/types.go Normal file

@ -0,0 +1,311 @@
// Package api has type definitions for gofile
//
// Converted from the API docs with help from https://mholt.github.io/json-to-go/
package api
import (
"fmt"
"time"
)
const (
// 2017-05-03T07:26:10-07:00
timeFormat = `"` + time.RFC3339 + `"`
)
// Time represents date and time information for the
// gofile API, by using RFC3339
type Time time.Time
// MarshalJSON turns a Time into JSON (note: it formats in the time's own location, not UTC)
func (t *Time) MarshalJSON() (out []byte, err error) {
timeString := (*time.Time)(t).Format(timeFormat)
return []byte(timeString), nil
}
// UnmarshalJSON turns JSON into a Time
func (t *Time) UnmarshalJSON(data []byte) error {
newT, err := time.Parse(timeFormat, string(data))
if err != nil {
return err
}
*t = Time(newT)
return nil
}
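Because timeFormat embeds the surrounding quotes, both methods can operate on the raw JSON token directly. An illustrative round trip (not part of the diff):

var t Time
_ = t.UnmarshalJSON([]byte(`"2017-05-03T07:26:10-07:00"`)) // the quotes are consumed by the format string
b, _ := t.MarshalJSON()                                    // b re-emits the same quoted RFC3339 token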
// Error is returned from gofile when things go wrong
type Error struct {
Status string `json:"status"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("Error %q", e.Status)
return out
}
// IsError returns true if there is an error
func (e Error) IsError() bool {
return e.Status != "ok"
}
// Err returns err if not nil, e if IsError, otherwise nil
func (e Error) Err(err error) error {
if err != nil {
return err
}
if e.IsError() {
return e
}
return nil
}
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// Types of things in Item
const (
ItemTypeFolder = "folder"
ItemTypeFile = "file"
)
// Item describes a folder or a file as returned by /contents
type Item struct {
ID string `json:"id"`
ParentFolder string `json:"parentFolder"`
Type string `json:"type"`
Name string `json:"name"`
Size int64 `json:"size"`
Code string `json:"code"`
CreateTime int64 `json:"createTime"`
ModTime int64 `json:"modTime"`
Link string `json:"link"`
MD5 string `json:"md5"`
MimeType string `json:"mimetype"`
ChildrenCount int `json:"childrenCount"`
DirectLinks map[string]*DirectLink `json:"directLinks"`
//Public bool `json:"public"`
//ServerSelected string `json:"serverSelected"`
//Thumbnail string `json:"thumbnail"`
//DownloadCount int `json:"downloadCount"`
//TotalDownloadCount int64 `json:"totalDownloadCount"`
//TotalSize int64 `json:"totalSize"`
//ChildrenIDs []string `json:"childrenIds"`
Children map[string]*Item `json:"children"`
}
// ToNativeTime converts a go time to a native time
func ToNativeTime(t time.Time) int64 {
return t.Unix()
}
// FromNativeTime converts native time to a go time
func FromNativeTime(t int64) time.Time {
return time.Unix(t, 0)
}
// DirectLink describes a direct link to a file so it can be
// downloaded by third parties.
type DirectLink struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
DirectLink string `json:"directLink"`
}
// Contents is returned from the /contents call
type Contents struct {
Error
Data struct {
Item
} `json:"data"`
Metadata Metadata `json:"metadata"`
}
// Metadata is returned when paging is in use
type Metadata struct {
TotalCount int `json:"totalCount"`
TotalPages int `json:"totalPages"`
Page int `json:"page"`
PageSize int `json:"pageSize"`
HasNextPage bool `json:"hasNextPage"`
}
// AccountsGetID is the result of /accounts/getid
type AccountsGetID struct {
Error
Data struct {
ID string `json:"id"`
} `json:"data"`
}
// Stats of storage and traffic
type Stats struct {
FolderCount int64 `json:"folderCount"`
FileCount int64 `json:"fileCount"`
Storage int64 `json:"storage"`
TrafficDirectGenerated int64 `json:"trafficDirectGenerated"`
TrafficReqDownloaded int64 `json:"trafficReqDownloaded"`
TrafficWebDownloaded int64 `json:"trafficWebDownloaded"`
}
// AccountsGet is the result of /accounts/{id}
type AccountsGet struct {
Error
Data struct {
ID string `json:"id"`
Email string `json:"email"`
Tier string `json:"tier"`
PremiumType string `json:"premiumType"`
Token string `json:"token"`
RootFolder string `json:"rootFolder"`
SubscriptionProvider string `json:"subscriptionProvider"`
SubscriptionEndDate int `json:"subscriptionEndDate"`
SubscriptionLimitDirectTraffic int64 `json:"subscriptionLimitDirectTraffic"`
SubscriptionLimitStorage int64 `json:"subscriptionLimitStorage"`
StatsCurrent Stats `json:"statsCurrent"`
// StatsHistory map[int]map[int]map[int]Stats `json:"statsHistory"`
} `json:"data"`
}
// CreateFolderRequest is the input to /contents/createFolder
type CreateFolderRequest struct {
ParentFolderID string `json:"parentFolderId"`
FolderName string `json:"folderName"`
ModTime int64 `json:"modTime,omitempty"`
}
// CreateFolderResponse is the output from /contents/createFolder
type CreateFolderResponse struct {
Error
Data Item `json:"data"`
}
// DeleteRequest is the input to DELETE /contents
type DeleteRequest struct {
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// DeleteResponse is the result of DELETE /contents
type DeleteResponse struct {
Error
Data map[string]Error
}
// Server is an upload server
type Server struct {
Name string `json:"name"`
Zone string `json:"zone"`
}
// String returns a string representation of the Server
func (s *Server) String() string {
return fmt.Sprintf("%s (%s)", s.Name, s.Zone)
}
// Root returns the root URL for the server
func (s *Server) Root() string {
return fmt.Sprintf("https://%s.gofile.io/", s.Name)
}
// URL returns the upload URL for the server
func (s *Server) URL() string {
return fmt.Sprintf("https://%s.gofile.io/contents/uploadfile", s.Name)
}
// ServersResponse is the output from /servers
type ServersResponse struct {
Error
Data struct {
Servers []Server `json:"servers"`
} `json:"data"`
}
// UploadResponse is returned by POST /contents/uploadfile
type UploadResponse struct {
Error
Data Item `json:"data"`
}
// DirectLinksRequest specifies the parameters for the direct link
type DirectLinksRequest struct {
ExpireTime int64 `json:"expireTime,omitempty"`
SourceIpsAllowed []any `json:"sourceIpsAllowed,omitempty"`
DomainsAllowed []any `json:"domainsAllowed,omitempty"`
Auth []any `json:"auth,omitempty"`
}
// DirectLinksResult is returned from POST /contents/{id}/directlinks
type DirectLinksResult struct {
Error
Data struct {
ExpireTime int64 `json:"expireTime"`
SourceIpsAllowed []any `json:"sourceIpsAllowed"`
DomainsAllowed []any `json:"domainsAllowed"`
Auth []any `json:"auth"`
IsReqLink bool `json:"isReqLink"`
ID string `json:"id"`
DirectLink string `json:"directLink"`
} `json:"data"`
}
// UpdateItemRequest describes the updates to be done to an item for PUT /contents/{id}/update
//
// The Value of the attribute to define:
// For Attribute "name" : The name of the content (file or folder)
// For Attribute "description" : The description displayed on the download page (folder only)
// For Attribute "tags" : A comma-separated list of tags (folder only)
// For Attribute "public" : either true or false (folder only)
// For Attribute "expiry" : A unix timestamp of the expiration date (folder only)
// For Attribute "password" : The password to set (folder only)
type UpdateItemRequest struct {
Attribute string `json:"attribute"`
Value any `json:"attributeValue"`
}
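For example (illustrative, not part of the diff), renaming an item would send:

req := UpdateItemRequest{
	Attribute: "name",         // which attribute to change
	Value:     "new-name.txt", // its new value, per the rules above
}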
// UpdateItemResponse is returned by PUT /contents/{id}/update
type UpdateItemResponse struct {
Error
Data Item `json:"data"`
}
// MoveRequest is the input to /contents/move
type MoveRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// MoveResponse is returned by POST /contents/move
type MoveResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// CopyRequest is the input to /contents/copy
type CopyRequest struct {
FolderID string `json:"folderId"`
ContentsID string `json:"contentsId"` // comma separated list of IDs
}
// CopyResponse is returned by POST /contents/copy
type CopyResponse struct {
Error
Data map[string]struct {
Error
Item `json:"data"`
} `json:"data"`
}
// UploadServerStatus is returned when fetching the root of an upload server
type UploadServerStatus struct {
Error
Data struct {
Server string `json:"server"`
Test string `json:"test"`
} `json:"data"`
}

1646
backend/gofile/gofile.go Normal file

File diff suppressed because it is too large


@ -0,0 +1,17 @@
// Test Gofile filesystem interface
package gofile_test
import (
"testing"
"github.com/rclone/rclone/backend/gofile"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestGoFile:",
NilObject: (*gofile.Object)(nil),
})
}


@ -697,7 +697,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
// is this a directory marker?
if isDirectory {
// Don't insert the root directory
if remote == directory {
if remote == f.opt.Enc.ToStandardPath(directory) {
continue
}
// process directory markers as directories
@ -1310,10 +1310,11 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
url := o.url
if o.fs.opt.UserProject != "" {
o.url = o.url + "&userProject=" + o.fs.opt.UserProject
url += "&userProject=" + o.fs.opt.UserProject
}
req, err := http.NewRequestWithContext(ctx, "GET", o.url, nil)
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, err
}


@ -56,8 +56,7 @@ type MediaItem struct {
CreationTime time.Time `json:"creationTime"`
Width string `json:"width"`
Height string `json:"height"`
Photo struct {
} `json:"photo"`
Photo struct{} `json:"photo"`
} `json:"mediaMetadata"`
Filename string `json:"filename"`
}
@ -68,7 +67,7 @@ type MediaItems struct {
NextPageToken string `json:"nextPageToken"`
}
//Content categories
// Content categories
// NONE Default content category. This category is ignored when any other category is used in the filter.
// LANDSCAPES Media items containing landscapes.
// RECEIPTS Media items containing receipts.
@ -187,5 +186,5 @@ type BatchCreateResponse struct {
// BatchRemoveItems is for removing items from an album
type BatchRemoveItems struct {
MediaItemIds []string `json:"mediaItemIds"`
MediaItemIDs []string `json:"mediaItemIds"`
}


@ -29,6 +29,7 @@ import (
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/batcher"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer"
@ -71,6 +72,14 @@ var (
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL,
}
// Configure the batcher
defaultBatcherOptions = batcher.Options{
MaxBatchSize: 50,
DefaultTimeoutSync: 1000 * time.Millisecond,
DefaultTimeoutAsync: 10 * time.Second,
DefaultBatchSizeAsync: 50,
}
)
// Register with Fs
@ -111,7 +120,7 @@ will count towards storage in your Google Account.`)
}
return nil, fmt.Errorf("unknown state %q", config.State)
},
Options: append(oauthutil.SharedOptions, []fs.Option{{
Options: append(append(oauthutil.SharedOptions, []fs.Option{{
Name: "read_only",
Default: false,
Help: `Set to make the Google Photos backend read only.
@ -158,7 +167,7 @@ listings and won't be transferred.`,
Default: (encoder.Base |
encoder.EncodeCrLf |
encoder.EncodeInvalidUtf8),
}}...),
}}...), defaultBatcherOptions.FsOptions("")...),
})
}
@ -169,6 +178,9 @@ type Options struct {
StartYear int `config:"start_year"`
IncludeArchived bool `config:"include_archived"`
Enc encoder.MultiEncoder `config:"encoding"`
BatchMode string `config:"batch_mode"`
BatchSize int `config:"batch_size"`
BatchTimeout fs.Duration `config:"batch_timeout"`
}
// Fs represents a remote storage server
@ -187,6 +199,7 @@ type Fs struct {
uploadedMu sync.Mutex // to protect the below
uploaded dirtree.DirTree // record of uploaded items
createMu sync.Mutex // held when creating albums to prevent dupes
batcher *batcher.Batcher[uploadedItem, *api.MediaItem]
}
// Object describes a storage object
@ -267,7 +280,7 @@ func errorHandler(resp *http.Response) error {
if strings.HasPrefix(resp.Header.Get("Content-Type"), "image/") {
body = []byte("Image not found or broken")
}
var e = api.Error{
e := api.Error{
Details: api.ErrorDetails{
Code: resp.StatusCode,
Message: string(body),
@ -312,6 +325,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
albums: map[bool]*albums{},
uploaded: dirtree.New(),
}
batcherOptions := defaultBatcherOptions
batcherOptions.Mode = f.opt.BatchMode
batcherOptions.Size = f.opt.BatchSize
batcherOptions.Timeout = time.Duration(f.opt.BatchTimeout)
f.batcher, err = batcher.New(ctx, f, f.commitBatch, batcherOptions)
if err != nil {
return nil, err
}
f.features = (&fs.Features{
ReadMimeType: true,
}).Fill(ctx, f)
@ -599,9 +620,7 @@ func (f *Fs) listDir(ctx context.Context, prefix string, filter api.SearchFilter
if err != nil {
return err
}
if entry != nil {
entries = append(entries, entry)
}
entries = append(entries, entry)
return nil
})
if err != nil {
@ -681,7 +700,7 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
Path: "/albums",
Parameters: url.Values{},
}
var request = api.CreateAlbum{
request := api.CreateAlbum{
Album: &api.Album{
Title: albumTitle,
},
@ -781,6 +800,13 @@ func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.None)
}
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
f.batcher.Shutdown()
return nil
}
// ------------------------------------------------------------
// Fs returns the parent Fs
@ -961,6 +987,82 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
return resp.Body, err
}
// input to the batcher
type uploadedItem struct {
AlbumID string // desired album
UploadToken string // upload ID
}
// Commit a batch of items to albumID returning the errors in errors
func (f *Fs) commitBatchAlbumID(ctx context.Context, items []uploadedItem, results []*api.MediaItem, errors []error, albumID string) {
// Create the media item from an UploadToken, optionally adding to an album
opts := rest.Opts{
Method: "POST",
Path: "/mediaItems:batchCreate",
}
request := api.BatchCreateRequest{
AlbumID: albumID,
}
itemsInBatch := 0
for i := range items {
if items[i].AlbumID == albumID {
request.NewMediaItems = append(request.NewMediaItems, api.NewMediaItem{
SimpleMediaItem: api.SimpleMediaItem{
UploadToken: items[i].UploadToken,
},
})
itemsInBatch++
}
}
var result api.BatchCreateResponse
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(ctx, resp, err)
})
if err != nil {
err = fmt.Errorf("failed to create media item: %w", err)
}
if err == nil && len(result.NewMediaItemResults) != itemsInBatch {
err = fmt.Errorf("bad response to BatchCreate expecting %d items but got %d", itemsInBatch, len(result.NewMediaItemResults))
}
j := 0
for i := range items {
if items[i].AlbumID == albumID {
if err == nil {
media := &result.NewMediaItemResults[j]
if media.Status.Code != 0 {
errors[i] = fmt.Errorf("upload failed: %s (%d)", media.Status.Message, media.Status.Code)
} else {
results[i] = &media.MediaItem
}
} else {
errors[i] = err
}
j++
}
}
}
// Called by the batcher to commit a batch
func (f *Fs) commitBatch(ctx context.Context, items []uploadedItem, results []*api.MediaItem, errors []error) (err error) {
// Discover all the AlbumIDs as we have to upload these separately
//
// Should maybe have one batcher per AlbumID
albumIDs := map[string]struct{}{}
for i := range items {
albumIDs[items[i].AlbumID] = struct{}{}
}
// batch the albums
for albumID := range albumIDs {
// errors returned in errors
f.commitBatchAlbumID(ctx, items, results, errors, albumID)
}
return nil
}
// Update the object with the contents of the io.Reader, modTime and size
//
// The new object may have been created if an error is returned
@ -1021,37 +1123,29 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return errors.New("empty upload token")
}
// Create the media item from an UploadToken, optionally adding to an album
opts = rest.Opts{
Method: "POST",
Path: "/mediaItems:batchCreate",
uploaded := uploadedItem{
AlbumID: albumID,
UploadToken: uploadToken,
}
var request = api.BatchCreateRequest{
AlbumID: albumID,
NewMediaItems: []api.NewMediaItem{
{
SimpleMediaItem: api.SimpleMediaItem{
UploadToken: uploadToken,
},
},
},
// Save the upload into an album
var info *api.MediaItem
if o.fs.batcher.Batching() {
info, err = o.fs.batcher.Commit(ctx, o.remote, uploaded)
} else {
errors := make([]error, 1)
results := make([]*api.MediaItem, 1)
err = o.fs.commitBatch(ctx, []uploadedItem{uploaded}, results, errors)
if err == nil {
err = errors[0]
info = results[0]
}
}
var result api.BatchCreateResponse
err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return fmt.Errorf("failed to create media item: %w", err)
return fmt.Errorf("failed to commit batch: %w", err)
}
if len(result.NewMediaItemResults) != 1 {
return errors.New("bad response to BatchCreate wrong number of items")
}
mediaItemResult := result.NewMediaItemResults[0]
if mediaItemResult.Status.Code != 0 {
return fmt.Errorf("upload failed: %s (%d)", mediaItemResult.Status.Message, mediaItemResult.Status.Code)
}
o.setMetaData(&mediaItemResult.MediaItem)
o.setMetaData(info)
// Add upload to internal storage
if pattern.isUpload {
@ -1078,8 +1172,8 @@ func (o *Object) Remove(ctx context.Context) (err error) {
Path: "/albums/" + album.ID + ":batchRemoveMediaItems",
NoResponse: true,
}
var request = api.BatchRemoveItems{
MediaItemIds: []string{o.id},
request := api.BatchRemoveItems{
MediaItemIDs: []string{o.id},
}
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) {


@ -38,7 +38,7 @@ type dirPattern struct {
toEntries func(ctx context.Context, f lister, prefix string, match []string) (fs.DirEntries, error)
}
// dirPatters is a slice of all the directory patterns
// dirPatterns is a slice of all the directory patterns
type dirPatterns []dirPattern
// patterns describes the layout of the google photos backend file system.


@ -80,6 +80,14 @@ func (f *Fs) dbDump(ctx context.Context, full bool, root string) error {
}
root = fspath.JoinRootPath(remoteFs.Root(), f.Root())
}
if f.db == nil {
if f.opt.MaxAge == 0 {
fs.Errorf(f, "db not found. (disabled with max_age = 0)")
} else {
fs.Errorf(f, "db not found.")
}
return kv.ErrInactive
}
op := &kvDump{
full: full,
root: root,


@ -114,6 +114,13 @@ func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs
root: rpath,
opt: opt,
}
// Correct root if definitely pointing to a file
if err == fs.ErrorIsFile {
f.root = path.Dir(f.root)
if f.root == "." || f.root == "/" {
f.root = ""
}
}
baseFeatures := baseFs.Features()
f.fpTime = baseFs.Precision() != fs.ModTimeNotSupported
@ -157,16 +164,21 @@ func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs
}
stubFeatures := &fs.Features{
CanHaveEmptyDirectories: true,
IsLocal: true,
ReadMimeType: true,
WriteMimeType: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
PartialUploads: true,
CanHaveEmptyDirectories: true,
IsLocal: true,
ReadMimeType: true,
WriteMimeType: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
ReadDirMetadata: true,
WriteDirMetadata: true,
WriteDirSetModTime: true,
UserDirMetadata: true,
DirModTimeUpdatesOnWrite: true,
PartialUploads: true,
}
f.features = stubFeatures.Fill(ctx, f).Mask(ctx, f.Fs).WrapsFs(f, f.Fs)
@ -334,6 +346,22 @@ func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
return errors.New("MergeDirs not supported")
}
// DirSetModTime sets the directory modtime for dir
func (f *Fs) DirSetModTime(ctx context.Context, dir string, modTime time.Time) error {
if do := f.Fs.Features().DirSetModTime; do != nil {
return do(ctx, dir, modTime)
}
return fs.ErrorNotImplemented
}
// MkdirMetadata makes the root directory of the Fs object
func (f *Fs) MkdirMetadata(ctx context.Context, dir string, metadata fs.Metadata) (fs.Directory, error) {
if do := f.Fs.Features().MkdirMetadata; do != nil {
return do(ctx, dir, metadata)
}
return nil, fs.ErrorNotImplemented
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
@ -411,7 +439,9 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Shutdown the backend, closing any background tasks and any cached connections.
func (f *Fs) Shutdown(ctx context.Context) (err error) {
err = f.db.Stop(false)
if f.db != nil && !f.db.IsStopped() {
err = f.db.Stop(false)
}
if do := f.Fs.Features().Shutdown; do != nil {
if err2 := do(ctx); err2 != nil {
err = err2
@ -505,6 +535,17 @@ func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
return do.Metadata(ctx)
}
// SetMetadata sets metadata for an Object
//
// It should return fs.ErrorNotImplemented if it can't set metadata
func (o *Object) SetMetadata(ctx context.Context, metadata fs.Metadata) error {
do, ok := o.Object.(fs.SetMetadataer)
if !ok {
return fs.ErrorNotImplemented
}
return do.SetMetadata(ctx, metadata)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
@ -521,6 +562,8 @@ var (
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirSetModTimer = (*Fs)(nil)
_ fs.MkdirMetadataer = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)


@ -60,9 +60,11 @@ func (f *Fs) testUploadFromCrypt(t *testing.T) {
assert.NotNil(t, dst)
// check that hash was created
hash, err = f.getRawHash(ctx, hashType, fileName, anyFingerprint, longTime)
assert.NoError(t, err)
assert.NotEmpty(t, hash)
if f.opt.MaxAge > 0 {
hash, err = f.getRawHash(ctx, hashType, fileName, anyFingerprint, longTime)
assert.NoError(t, err)
assert.NotEmpty(t, hash)
}
//t.Logf("hash is %q", hash)
_ = operations.Purge(ctx, f, dirName)
}


@ -37,4 +37,9 @@ func TestIntegration(t *testing.T) {
opt.QuickTestOK = true
}
fstests.Run(t, &opt)
// test again with MaxAge = 0
if *fstest.RemoteName == "" {
opt.ExtraConfig = append(opt.ExtraConfig, fstests.ExtraConfigItem{Name: "TestHasher", Key: "max_age", Value: "0"})
fstests.Run(t, &opt)
}
}


@ -71,7 +71,14 @@ func (o *Object) Hash(ctx context.Context, hashType hash.Type) (hashVal string,
f := o.f
if f.passHashes.Contains(hashType) {
fs.Debugf(o, "pass %s", hashType)
return o.Object.Hash(ctx, hashType)
hashVal, err = o.Object.Hash(ctx, hashType)
if hashVal != "" {
return hashVal, err
}
if err != nil {
fs.Debugf(o, "error passing %s: %v", hashType, err)
}
fs.Debugf(o, "passed %s is blank -- trying other methods", hashType)
}
if !f.suppHashes.Contains(hashType) {
fs.Debugf(o, "unsupp %s", hashType)


@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
package hdfs
@ -93,7 +92,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
options := hdfs.ClientOptions{
Addresses: []string{opt.Namenode},
Addresses: opt.Namenode,
UseDatanodeHostname: false,
}
@ -150,7 +149,7 @@ func (f *Fs) Root() string {
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("hdfs://%s", f.opt.Namenode)
return fmt.Sprintf("hdfs://%s/%s", f.opt.Namenode, f.root)
}
// Features returns the optional features of this Fs
@ -210,7 +209,8 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
fs: f,
remote: remote,
size: x.Size(),
modTime: x.ModTime()})
modTime: x.ModTime(),
})
}
}
return entries, nil


@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs
@ -20,9 +19,10 @@ func init() {
NewFs: NewFs,
Options: []fs.Option{{
Name: "namenode",
Help: "Hadoop name node and port.\n\nE.g. \"namenode:8020\" to connect to host namenode at port 8020.",
Help: "Hadoop name nodes and ports.\n\nE.g. \"namenode-1:8020,namenode-2:8020,...\" to connect to host namenodes at port 8020.",
Required: true,
Sensitive: true,
Default: fs.CommaSepList{},
}, {
Name: "username",
Help: "Hadoop user name.",
@ -65,7 +65,7 @@ and 'privacy'. Used only with KERBEROS enabled.`,
// Options for this backend
type Options struct {
Namenode string `config:"namenode"`
Namenode fs.CommaSepList `config:"namenode"`
Username string `config:"username"`
ServicePrincipalName string `config:"service_principal_name"`
DataTransferProtection string `config:"data_transfer_protection"`


@ -1,7 +1,6 @@
// Test HDFS filesystem interface
//go:build !plan9
// +build !plan9
package hdfs_test


@ -2,6 +2,6 @@
// about "no buildable Go source files "
//go:build plan9
// +build plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs


@ -1,5 +1,4 @@
//go:build !plan9
// +build !plan9
package hdfs


@ -762,6 +762,12 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return nil
}
// Shutdown shutdown the fs
func (f *Fs) Shutdown(ctx context.Context) error {
f.tokenRenewer.Shutdown()
return nil
}
// ------------------------------------------------------------
// Fs returns the parent Fs.
@ -997,6 +1003,7 @@ var (
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)


@ -36,6 +36,7 @@ func init() {
Name: "http",
Description: "HTTP",
NewFs: NewFs,
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "url",
Help: "URL of HTTP host to connect to.\n\nE.g. \"https://example.com\", or \"https://user:pass@example.com\" to use a username and password.",
@ -88,6 +89,10 @@ that directory listings are much quicker, but rclone won't have the times or
sizes of any files, and some files that don't exist may be in the listing.`,
Default: false,
Advanced: true,
}, {
Name: "no_escape",
Help: "Do not escape URL metacharacters in path names.",
Default: false,
}},
}
fs.Register(fsi)
@ -99,6 +104,7 @@ type Options struct {
NoSlash bool `config:"no_slash"`
NoHead bool `config:"no_head"`
Headers fs.CommaSepList `config:"headers"`
NoEscape bool `config:"no_escape"`
}
// Fs stores the interface to the remote HTTP files
@ -210,6 +216,42 @@ func getFsEndpoint(ctx context.Context, client *http.Client, url string, opt *Op
return createFileResult()
}
// Make the http connection with opt
func (f *Fs) httpConnection(ctx context.Context, opt *Options) (isFile bool, err error) {
if len(opt.Headers)%2 != 0 {
return false, errors.New("odd number of headers supplied")
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
}
// Parse the endpoint and stick the root onto it
base, err := url.Parse(opt.Endpoint)
if err != nil {
return false, err
}
u, err := rest.URLJoin(base, rest.URLPathEscape(f.root))
if err != nil {
return false, err
}
client := fshttp.NewClient(ctx)
endpoint, isFile := getFsEndpoint(ctx, client, u.String(), opt)
fs.Debugf(nil, "Root: %s", endpoint)
u, err = url.Parse(endpoint)
if err != nil {
return false, err
}
// Update f with the new parameters
f.httpClient = client
f.endpoint = u
f.endpointURL = u.String()
return isFile, nil
}
// NewFs creates a new Fs object from the name and root. It connects to
// the host specified in the config file.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
@ -220,47 +262,23 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, err
}
if len(opt.Headers)%2 != 0 {
return nil, errors.New("odd number of headers supplied")
}
if !strings.HasSuffix(opt.Endpoint, "/") {
opt.Endpoint += "/"
}
// Parse the endpoint and stick the root onto it
base, err := url.Parse(opt.Endpoint)
if err != nil {
return nil, err
}
u, err := rest.URLJoin(base, rest.URLPathEscape(root))
if err != nil {
return nil, err
}
client := fshttp.NewClient(ctx)
endpoint, isFile := getFsEndpoint(ctx, client, u.String(), opt)
fs.Debugf(nil, "Root: %s", endpoint)
u, err = url.Parse(endpoint)
if err != nil {
return nil, err
}
ci := fs.GetConfig(ctx)
f := &Fs{
name: name,
root: root,
opt: *opt,
ci: ci,
httpClient: client,
endpoint: u,
endpointURL: u.String(),
name: name,
root: root,
opt: *opt,
ci: ci,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
// Make the http connection
isFile, err := f.httpConnection(ctx, opt)
if err != nil {
return nil, err
}
if isFile {
// return an error with an fs which points to the parent
return f, fs.ErrorIsFile
@ -313,6 +331,11 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// Join's the remote onto the base URL
func (f *Fs) url(remote string) string {
if f.opt.NoEscape {
// Directly concatenate without escaping, no_escape behavior
return f.endpointURL + remote
}
// Default behavior
return f.endpointURL + rest.URLPathEscape(remote)
}
@ -685,10 +708,66 @@ func (o *Object) MimeType(ctx context.Context) string {
return o.contentType
}
var commandHelp = []fs.CommandHelp{{
Name: "set",
Short: "Set command for updating the config parameters.",
Long: `This set command can be used to update the config parameters
for a running http backend.
Usage Examples:
rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: -o url=https://example.com
The option keys are named as they are in the config file.
This rebuilds the connection to the http backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
It doesn't return anything.
`,
}}
// Command the backend to run a named command
//
// The command run is name
// args may be used to read arguments from
// opts may be used to read optional arguments from
//
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
switch name {
case "set":
newOpt := f.opt
err := configstruct.Set(configmap.Simple(opt), &newOpt)
if err != nil {
return nil, fmt.Errorf("reading config: %w", err)
}
_, err = f.httpConnection(ctx, &newOpt)
if err != nil {
return nil, fmt.Errorf("updating session: %w", err)
}
f.opt = newOpt
keys := []string{}
for k := range opt {
keys = append(keys, k)
}
fs.Logf(f, "Updated config values: %s", strings.Join(keys, ", "))
return nil, nil
default:
return nil, fs.ErrorCommandNotFound
}
}
// Check the interfaces are satisfied
var (
_ fs.Fs = &Fs{}
_ fs.PutStreamer = &Fs{}
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.Commander = &Fs{}
)


@ -0,0 +1,66 @@
// Package client provides a client for interacting with the ImageKit API.
package client
import (
"context"
"fmt"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/lib/rest"
)
// ImageKit main struct
type ImageKit struct {
Prefix string
UploadPrefix string
Timeout int64
UploadTimeout int64
PrivateKey string
PublicKey string
URLEndpoint string
HTTPClient *rest.Client
}
// NewParams is a struct to define parameters to imagekit
type NewParams struct {
PrivateKey string
PublicKey string
URLEndpoint string
}
// New returns an ImageKit object built from the given NewParams
func New(ctx context.Context, params NewParams) (*ImageKit, error) {
privateKey := params.PrivateKey
publicKey := params.PublicKey
endpointURL := params.URLEndpoint
switch {
case privateKey == "":
return nil, fmt.Errorf("ImageKit.io private key is required")
case publicKey == "":
return nil, fmt.Errorf("ImageKit.io public key is required")
case endpointURL == "":
return nil, fmt.Errorf("ImageKit.io URL endpoint is required")
}
cliCtx, cliCfg := fs.AddConfig(ctx)
cliCfg.UserAgent = "rclone/imagekit"
client := rest.NewClient(fshttp.NewClient(cliCtx))
client.SetUserPass(privateKey, "")
client.SetHeader("Accept", "application/json")
return &ImageKit{
Prefix: "https://api.imagekit.io/v2",
UploadPrefix: "https://upload.imagekit.io/api/v2",
Timeout: 60,
UploadTimeout: 3600,
PrivateKey: params.PrivateKey,
PublicKey: params.PublicKey,
URLEndpoint: params.URLEndpoint,
HTTPClient: client,
}, nil
}


@ -0,0 +1,252 @@
package client
import (
"context"
"errors"
"fmt"
"net/http"
"net/url"
"time"
"github.com/rclone/rclone/lib/rest"
"gopkg.in/validator.v2"
)
// FilesOrFolderParam struct is a parameter type to ListFiles() function to search / list media library files.
type FilesOrFolderParam struct {
Path string `json:"path,omitempty"`
Limit int `json:"limit,omitempty"`
Skip int `json:"skip,omitempty"`
SearchQuery string `json:"searchQuery,omitempty"`
}
// AITag represents an AI tag for a media library file.
type AITag struct {
Name string `json:"name"`
Confidence float32 `json:"confidence"`
Source string `json:"source"`
}
// File represents media library File details.
type File struct {
FileID string `json:"fileId"`
Name string `json:"name"`
FilePath string `json:"filePath"`
Type string `json:"type"`
VersionInfo map[string]string `json:"versionInfo"`
IsPrivateFile *bool `json:"isPrivateFile"`
CustomCoordinates *string `json:"customCoordinates"`
URL string `json:"url"`
Thumbnail string `json:"thumbnail"`
FileType string `json:"fileType"`
Mime string `json:"mime"`
Height int `json:"height"`
Width int `json:"Width"`
Size uint64 `json:"size"`
HasAlpha bool `json:"hasAlpha"`
CustomMetadata map[string]any `json:"customMetadata,omitempty"`
EmbeddedMetadata map[string]any `json:"embeddedMetadata"`
CreatedAt time.Time `json:"createdAt"`
UpdatedAt time.Time `json:"updatedAt"`
Tags []string `json:"tags"`
AITags []AITag `json:"AITags"`
}
// Folder represents media library Folder details.
type Folder struct {
*File
FolderPath string `json:"folderPath"`
}
// CreateFolderParam represents parameter to create folder api
type CreateFolderParam struct {
FolderName string `validate:"nonzero" json:"folderName"`
ParentFolderPath string `validate:"nonzero" json:"parentFolderPath"`
}
// DeleteFolderParam represents parameter to delete folder api
type DeleteFolderParam struct {
FolderPath string `validate:"nonzero" json:"folderPath"`
}
// MoveFolderParam represents parameter to move folder api
type MoveFolderParam struct {
SourceFolderPath string `validate:"nonzero" json:"sourceFolderPath"`
DestinationPath string `validate:"nonzero" json:"destinationPath"`
}
// JobIDResponse respresents response struct with JobID for folder operations
type JobIDResponse struct {
JobID string `json:"jobId"`
}
// JobStatus represents response Data to job status api
type JobStatus struct {
JobID string `json:"jobId"`
Type string `json:"type"`
Status string `json:"status"`
}
// File retrieves media library File details for the given fileID.
func (ik *ImageKit) File(ctx context.Context, fileID string) (*http.Response, *File, error) {
data := &File{}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "GET",
Path: fmt.Sprintf("/files/%s/details", fileID),
RootURL: ik.Prefix,
IgnoreStatus: true,
}, nil, data)
return response, data, err
}
// Files retrieves media library files. Filter options can be supplied as FilesOrFolderParam.
func (ik *ImageKit) Files(ctx context.Context, params FilesOrFolderParam, includeVersion bool) (*http.Response, *[]File, error) {
var SearchQuery = `type = "file"`
if includeVersion {
SearchQuery = `type IN ["file", "file-version"]`
}
if params.SearchQuery != "" {
SearchQuery = params.SearchQuery
}
parameters := url.Values{}
parameters.Set("skip", fmt.Sprintf("%d", params.Skip))
parameters.Set("limit", fmt.Sprintf("%d", params.Limit))
parameters.Set("path", params.Path)
parameters.Set("searchQuery", SearchQuery)
data := &[]File{}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "GET",
Path: "/files",
RootURL: ik.Prefix,
Parameters: parameters,
}, nil, data)
return response, data, err
}
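An illustrative call (ctx is assumed to be in scope; not part of the diff):

// List up to 100 current-version files under /photos.
resp, files, err := ik.Files(ctx, FilesOrFolderParam{Path: "/photos", Limit: 100}, false)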
// DeleteFile removes file by FileID from media library
func (ik *ImageKit) DeleteFile(ctx context.Context, fileID string) (*http.Response, error) {
var err error
if fileID == "" {
return nil, errors.New("fileID can not be empty")
}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "DELETE",
Path: fmt.Sprintf("/files/%s", fileID),
RootURL: ik.Prefix,
NoResponse: true,
}, nil, nil)
return response, err
}
// Folders retrieves media library folders. Filter options can be supplied as FilesOrFolderParam.
func (ik *ImageKit) Folders(ctx context.Context, params FilesOrFolderParam) (*http.Response, *[]Folder, error) {
var SearchQuery = `type = "folder"`
if params.SearchQuery != "" {
SearchQuery = params.SearchQuery
}
parameters := url.Values{}
parameters.Set("skip", fmt.Sprintf("%d", params.Skip))
parameters.Set("limit", fmt.Sprintf("%d", params.Limit))
parameters.Set("path", params.Path)
parameters.Set("searchQuery", SearchQuery)
data := &[]Folder{}
resp, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "GET",
Path: "/files",
RootURL: ik.Prefix,
Parameters: parameters,
}, nil, data)
return resp, data, err
}
// CreateFolder creates a new folder in media library
func (ik *ImageKit) CreateFolder(ctx context.Context, param CreateFolderParam) (*http.Response, error) {
var err error
if err = validator.Validate(&param); err != nil {
return nil, err
}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "POST",
Path: "/folder",
RootURL: ik.Prefix,
NoResponse: true,
}, param, nil)
return response, err
}
// DeleteFolder removes the folder from media library
func (ik *ImageKit) DeleteFolder(ctx context.Context, param DeleteFolderParam) (*http.Response, error) {
var err error
if err = validator.Validate(&param); err != nil {
return nil, err
}
response, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "DELETE",
Path: "/folder",
RootURL: ik.Prefix,
NoResponse: true,
}, param, nil)
return response, err
}
// MoveFolder moves given folder to new path in media library
func (ik *ImageKit) MoveFolder(ctx context.Context, param MoveFolderParam) (*http.Response, *JobIDResponse, error) {
var err error
var response = &JobIDResponse{}
if err = validator.Validate(&param); err != nil {
return nil, nil, err
}
resp, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "PUT",
Path: "bulkJobs/moveFolder",
RootURL: ik.Prefix,
}, param, response)
return resp, response, err
}
// BulkJobStatus retrieves the status of a bulk job by job ID.
func (ik *ImageKit) BulkJobStatus(ctx context.Context, jobID string) (*http.Response, *JobStatus, error) {
var err error
var response = &JobStatus{}
if jobID == "" {
return nil, nil, errors.New("jobId can not be blank")
}
resp, err := ik.HTTPClient.CallJSON(ctx, &rest.Opts{
Method: "GET",
Path: "bulkJobs/" + jobID,
RootURL: ik.Prefix,
}, nil, response)
return resp, response, err
}


@ -0,0 +1,96 @@
package client
import (
"context"
"errors"
"fmt"
"io"
"net/http"
"net/url"
"github.com/rclone/rclone/lib/rest"
)
// UploadParam defines upload parameters
type UploadParam struct {
FileName string `json:"fileName"`
Folder string `json:"folder,omitempty"` // default value: /
Tags string `json:"tags,omitempty"`
IsPrivateFile *bool `json:"isPrivateFile,omitempty"` // default: false
}
// UploadResult defines the response structure for the upload API
type UploadResult struct {
FileID string `json:"fileId"`
Name string `json:"name"`
URL string `json:"url"`
ThumbnailURL string `json:"thumbnailUrl"`
Height int `json:"height"`
Width int `json:"Width"`
Size uint64 `json:"size"`
FilePath string `json:"filePath"`
AITags []map[string]any `json:"AITags"`
VersionInfo map[string]string `json:"versionInfo"`
}
// Upload uploads an asset to a imagekit account.
//
// The asset can be:
// - the actual data (io.Reader)
// - the Data URI (Base64 encoded), max ~60 MB (62,910,000 chars)
// - the remote FTP, HTTP or HTTPS URL address of an existing file
//
// https://docs.imagekit.io/api-reference/upload-file-api/server-side-file-upload
func (ik *ImageKit) Upload(ctx context.Context, file io.Reader, param UploadParam) (*http.Response, *UploadResult, error) {
var err error
if param.FileName == "" {
return nil, nil, errors.New("Upload: Filename is required")
}
// Initialize URL values
formParams := url.Values{}
formParams.Add("useUniqueFileName", fmt.Sprint(false))
// Add individual fields to URL values
if param.FileName != "" {
formParams.Add("fileName", param.FileName)
}
if param.Tags != "" {
formParams.Add("tags", param.Tags)
}
if param.Folder != "" {
formParams.Add("folder", param.Folder)
}
if param.IsPrivateFile != nil {
formParams.Add("isPrivateFile", fmt.Sprintf("%v", *param.IsPrivateFile))
}
response := &UploadResult{}
formReader, contentType, _, err := rest.MultipartUpload(ctx, file, formParams, "file", param.FileName)
if err != nil {
return nil, nil, fmt.Errorf("failed to make multipart upload: %w", err)
}
opts := rest.Opts{
Method: "POST",
Path: "/files/upload",
RootURL: ik.UploadPrefix,
Body: formReader,
ContentType: contentType,
}
resp, err := ik.HTTPClient.CallJSON(ctx, &opts, nil, response)
return resp, response, err
}
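A minimal usage sketch (ctx, reader and all values are illustrative, not part of the diff):

private := false
resp, result, err := ik.Upload(ctx, reader, UploadParam{
	FileName:      "photo.jpg",
	Folder:        "/holiday",
	IsPrivateFile: &private, // pointer so unset can be told apart from false
})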


@ -0,0 +1,72 @@
package client
import (
"crypto/hmac"
"crypto/sha1"
"encoding/hex"
"fmt"
neturl "net/url"
"strconv"
"strings"
"time"
)
// URLParam represents parameters for generating url
type URLParam struct {
Path string
Src string
URLEndpoint string
Signed bool
ExpireSeconds int64
QueryParameters map[string]string
}
// URL generates url from URLParam
func (ik *ImageKit) URL(params URLParam) (string, error) {
var resultURL string
var url *neturl.URL
var err error
var endpoint = params.URLEndpoint
if endpoint == "" {
endpoint = ik.URLEndpoint
}
endpoint = strings.TrimRight(endpoint, "/") + "/"
if params.QueryParameters == nil {
params.QueryParameters = make(map[string]string)
}
if url, err = neturl.Parse(params.Src); err != nil {
return "", err
}
query := url.Query()
for k, v := range params.QueryParameters {
query.Set(k, v)
}
url.RawQuery = query.Encode()
resultURL = url.String()
if params.Signed {
now := time.Now().Unix()
var expires = strconv.FormatInt(now+params.ExpireSeconds, 10)
var path = strings.Replace(resultURL, endpoint, "", 1)
path += expires
mac := hmac.New(sha1.New, []byte(ik.PrivateKey))
mac.Write([]byte(path))
signature := hex.EncodeToString(mac.Sum(nil))
if strings.Contains(resultURL, "?") {
resultURL = resultURL + "&" + fmt.Sprintf("ik-t=%s&ik-s=%s", expires, signature)
} else {
resultURL = resultURL + "?" + fmt.Sprintf("ik-t=%s&ik-s=%s", expires, signature)
}
}
return resultURL, nil
}
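The signature scheme above can be reproduced standalone: HMAC-SHA1 over the endpoint-relative path concatenated with the expiry timestamp, hex encoded, emitted as the ik-t and ik-s query parameters. A minimal sketch with a placeholder key, path, and timestamp:

package main

import (
    "crypto/hmac"
    "crypto/sha1"
    "encoding/hex"
    "fmt"
)

func main() {
    privateKey := "private_xxx" // placeholder
    path := "holiday/photo.jpg" // URL with the endpoint prefix stripped
    expires := "1700000000"     // unix seconds as a decimal string
    mac := hmac.New(sha1.New, []byte(privateKey))
    mac.Write([]byte(path + expires))
    signature := hex.EncodeToString(mac.Sum(nil))
    fmt.Printf("ik-t=%s&ik-s=%s\n", expires, signature)
}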

View file

@ -0,0 +1,828 @@
// Package imagekit provides an interface to the ImageKit.io media library.
package imagekit
import (
"context"
"errors"
"fmt"
"io"
"math"
"net/http"
"path"
"strconv"
"strings"
"time"
"github.com/rclone/rclone/backend/imagekit/client"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/version"
)
const (
minSleep = 1 * time.Millisecond
maxSleep = 100 * time.Millisecond
decayConstant = 2
)
var systemMetadataInfo = map[string]fs.MetadataHelp{
"btime": {
Help: "Time of file birth (creation) read from Last-Modified header",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999999999Z07:00",
ReadOnly: true,
},
"size": {
Help: "Size of the object in bytes",
Type: "int64",
ReadOnly: true,
},
"file-type": {
Help: "Type of the file",
Type: "string",
Example: "image",
ReadOnly: true,
},
"height": {
Help: "Height of the image or video in pixels",
Type: "int",
ReadOnly: true,
},
"width": {
Help: "Width of the image or video in pixels",
Type: "int",
ReadOnly: true,
},
"has-alpha": {
Help: "Whether the image has alpha channel or not",
Type: "bool",
ReadOnly: true,
},
"tags": {
Help: "Tags associated with the file",
Type: "string",
Example: "tag1,tag2",
ReadOnly: true,
},
"google-tags": {
Help: "AI generated tags by Google Cloud Vision associated with the image",
Type: "string",
Example: "tag1,tag2",
ReadOnly: true,
},
"aws-tags": {
Help: "AI generated tags by AWS Rekognition associated with the image",
Type: "string",
Example: "tag1,tag2",
ReadOnly: true,
},
"is-private-file": {
Help: "Whether the file is private or not",
Type: "bool",
ReadOnly: true,
},
"custom-coordinates": {
Help: "Custom coordinates of the file",
Type: "string",
Example: "0,0,100,100",
ReadOnly: true,
},
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "imagekit",
Description: "ImageKit.io",
NewFs: NewFs,
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `Any metadata supported by the underlying remote is read and written.`,
},
Options: []fs.Option{
{
Name: "endpoint",
Help: "You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)",
Required: true,
},
{
Name: "public_key",
Help: "You can find your ImageKit.io public key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)",
Required: true,
Sensitive: true,
},
{
Name: "private_key",
Help: "You can find your ImageKit.io private key in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)",
Required: true,
Sensitive: true,
},
{
Name: "only_signed",
Help: "If you have configured `Restrict unsigned image URLs` in your dashboard settings, set this to true.",
Default: false,
Advanced: true,
},
{
Name: "versions",
Help: "Include old versions in directory listings.",
Default: false,
Advanced: true,
},
{
Name: "upload_tags",
Help: "Tags to add to the uploaded files, e.g. \"tag1,tag2\".",
Default: "",
Advanced: true,
},
{
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.EncodeZero |
encoder.EncodeSlash |
encoder.EncodeQuestion |
encoder.EncodeHashPercent |
encoder.EncodeCtl |
encoder.EncodeDel |
encoder.EncodeDot |
encoder.EncodeDoubleQuote |
encoder.EncodePercent |
encoder.EncodeBackSlash |
encoder.EncodeDollar |
encoder.EncodeLtGt |
encoder.EncodeSquareBracket |
encoder.EncodeInvalidUtf8),
},
},
})
}
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"endpoint"`
PublicKey string `config:"public_key"`
PrivateKey string `config:"private_key"`
OnlySigned bool `config:"only_signed"`
Versions bool `config:"versions"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// Fs represents a remote to ImageKit
type Fs struct {
name string // name of remote
root string // root path
opt Options // parsed options
features *fs.Features // optional features
ik *client.ImageKit // ImageKit client
pacer *fs.Pacer // pacer for API calls
}
// Object describes an ImageKit file
type Object struct {
fs *Fs // The Fs this object is part of
remote string // The remote path
filePath string // The path to the file
contentType string // The content type of the object if known - may be ""
timestamp time.Time // The timestamp of the object if known - may be zero
file client.File // The media file if known - may be nil
versionID string // If present this points to an object version
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name string, root string, m configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
ik, err := client.New(ctx, client.NewParams{
URLEndpoint: opt.Endpoint,
PublicKey: opt.PublicKey,
PrivateKey: opt.PrivateKey,
})
if err != nil {
return nil, err
}
f := &Fs{
name: name,
opt: *opt,
ik: ik,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.root = path.Join("/", root)
f.features = (&fs.Features{
CaseInsensitive: false,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: false,
CanHaveEmptyDirectories: true,
BucketBased: false,
ServerSideAcrossConfigs: false,
IsLocal: false,
SlowHash: true,
ReadMetadata: true,
WriteMetadata: false,
UserMetadata: false,
FilterAware: true,
PartialUploads: false,
NoMultiThreading: false,
}).Fill(ctx, f)
if f.root != "/" {
r := f.root
folderPath := f.EncodePath(r[:strings.LastIndex(r, "/")+1])
fileName := f.EncodeFileName(r[strings.LastIndex(r, "/")+1:])
file := f.getFileByName(ctx, folderPath, fileName)
if file != nil {
newRoot := path.Dir(f.root)
f.root = newRoot
return f, fs.ErrorIsFile
}
}
return f, nil
}
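As a sketch of how NewFs is reached when rclone is used as a library, a configmap.Simple can stand in for the parsed remote config; all values below are placeholders:

package main

import (
    "context"
    "log"

    "github.com/rclone/rclone/backend/imagekit"
    "github.com/rclone/rclone/fs/config/configmap"
)

func main() {
    // Placeholder credentials - substitute real dashboard values.
    m := configmap.Simple{
        "endpoint":    "https://ik.imagekit.io/your_id",
        "public_key":  "public_xxx",
        "private_key": "private_xxx",
    }
    f, err := imagekit.NewFs(context.Background(), "imagekit", "", m)
    if err != nil {
        log.Fatal(err)
    }
    log.Println(f.String())
}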
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return strings.TrimLeft(f.root, "/")
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("FS imagekit: %s", f.root)
}
// Precision of the ModTimes in this Fs
func (f *Fs) Precision() time.Duration {
return fs.ModTimeNotSupported
}
// Hashes returns the supported hash types of the filesystem.
func (f *Fs) Hashes() hash.Set {
return hash.NewHashSet()
}
// Features returns the optional features of this Fs.
func (f *Fs) Features() *fs.Features {
return f.features
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
remote := path.Join(f.root, dir)
remote = f.EncodePath(remote)
if remote != "/" {
parentFolderPath, folderName := path.Split(remote)
folderExists, err := f.getFolderByName(ctx, parentFolderPath, folderName)
if err != nil {
return make(fs.DirEntries, 0), err
}
if folderExists == nil {
return make(fs.DirEntries, 0), fs.ErrorDirNotFound
}
}
folders, folderError := f.getFolders(ctx, remote)
if folderError != nil {
return make(fs.DirEntries, 0), folderError
}
files, fileError := f.getFiles(ctx, remote, f.opt.Versions)
if fileError != nil {
return make(fs.DirEntries, 0), fileError
}
res := make([]fs.DirEntry, 0, len(folders)+len(files))
for _, folder := range folders {
folderPath := f.DecodePath(strings.TrimLeft(strings.Replace(folder.FolderPath, f.EncodePath(f.root), "", 1), "/"))
res = append(res, fs.NewDir(folderPath, folder.UpdatedAt))
}
for _, file := range files {
res = append(res, f.newObject(ctx, remote, file))
}
return res, nil
}
func (f *Fs) newObject(ctx context.Context, remote string, file client.File) *Object {
remoteFile := strings.TrimLeft(strings.Replace(file.FilePath, f.EncodePath(f.root), "", 1), "/")
folderPath, fileName := path.Split(remoteFile)
folderPath = f.DecodePath(folderPath)
fileName = f.DecodeFileName(fileName)
remoteFile = path.Join(folderPath, fileName)
if file.Type == "file-version" {
remoteFile = version.Add(remoteFile, file.UpdatedAt)
return &Object{
fs: f,
remote: remoteFile,
filePath: file.FilePath,
contentType: file.Mime,
timestamp: file.UpdatedAt,
file: file,
versionID: file.VersionInfo["id"],
}
}
return &Object{
fs: f,
remote: remoteFile,
filePath: file.FilePath,
contentType: file.Mime,
timestamp: file.UpdatedAt,
file: file,
}
}
// NewObject finds the Object at remote. If it can't be found
// it returns the error ErrorObjectNotFound.
//
// If remote points to a directory then it should return
// ErrorIsDir if possible without doing any extra work,
// otherwise ErrorObjectNotFound.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
r := path.Join(f.root, remote)
folderPath, fileName := path.Split(r)
folderPath = f.EncodePath(folderPath)
fileName = f.EncodeFileName(fileName)
isFolder, err := f.getFolderByName(ctx, folderPath, fileName)
if err != nil {
return nil, err
}
if isFolder != nil {
return nil, fs.ErrorIsDir
}
file := f.getFileByName(ctx, folderPath, fileName)
if file == nil {
return nil, fs.ErrorObjectNotFound
}
return f.newObject(ctx, r, *file), nil
}
// Put in to the remote path with the modTime given of the given size
//
// When called from outside an Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Put should either
// return an error or upload it properly (rather than e.g. calling panic).
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return uploadFile(ctx, f, in, src.Remote(), options...)
}
// Mkdir makes the directory (container, bucket)
//
// Shouldn't return an error if it already exists
func (f *Fs) Mkdir(ctx context.Context, dir string) (err error) {
remote := path.Join(f.root, dir)
parentFolderPath, folderName := path.Split(remote)
parentFolderPath = f.EncodePath(parentFolderPath)
folderName = f.EncodeFileName(folderName)
err = f.pacer.Call(func() (bool, error) {
var res *http.Response
res, err = f.ik.CreateFolder(ctx, client.CreateFolderParam{
ParentFolderPath: parentFolderPath,
FolderName: folderName,
})
return f.shouldRetry(ctx, res, err)
})
return err
}
// Rmdir removes the directory (container, bucket) if empty
//
// Return an error if it doesn't exist or isn't empty
func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
entries, err := f.List(ctx, dir)
if err != nil {
return err
}
if len(entries) > 0 {
return errors.New("directory is not empty")
}
err = f.pacer.Call(func() (bool, error) {
var res *http.Response
res, err = f.ik.DeleteFolder(ctx, client.DeleteFolderParam{
FolderPath: f.EncodePath(path.Join(f.root, dir)),
})
if res != nil && res.StatusCode == http.StatusNotFound {
return false, fs.ErrorDirNotFound
}
return f.shouldRetry(ctx, res, err)
})
return err
}
// Purge deletes all the files and the container
//
// Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the
// result of List()
func (f *Fs) Purge(ctx context.Context, dir string) (err error) {
remote := path.Join(f.root, dir)
err = f.pacer.Call(func() (bool, error) {
var res *http.Response
res, err = f.ik.DeleteFolder(ctx, client.DeleteFolderParam{
FolderPath: f.EncodePath(remote),
})
if res != nil && res.StatusCode == http.StatusNotFound {
return false, fs.ErrorDirNotFound
}
return f.shouldRetry(ctx, res, err)
})
return err
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
duration := time.Duration(math.Abs(float64(expire)))
expireSeconds := duration.Seconds()
fileRemote := path.Join(f.root, remote)
folderPath, fileName := path.Split(fileRemote)
folderPath = f.EncodePath(folderPath)
fileName = f.EncodeFileName(fileName)
file := f.getFileByName(ctx, folderPath, fileName)
if file == nil {
return "", fs.ErrorObjectNotFound
}
// Pacer not needed as this doesn't use the API
url, err := f.ik.URL(client.URLParam{
Src: file.URL,
Signed: *file.IsPrivateFile || f.opt.OnlySigned,
ExpireSeconds: int64(expireSeconds),
QueryParameters: map[string]string{
"updatedAt": file.UpdatedAt.String(),
},
})
if err != nil {
return "", err
}
return url, nil
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.fs
}
// Hash returns the selected checksum of the file
// If no checksum is available it returns ""
func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Storable says whether this object can be stored
func (o *Object) Storable() bool {
return true
}
// String returns a description of the Object
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.file.Name
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// ModTime returns the modification date of the file
// It should return a best guess if one isn't available
func (o *Object) ModTime(context.Context) time.Time {
return o.file.UpdatedAt
}
// Size returns the size of the file
func (o *Object) Size() int64 {
return int64(o.file.Size)
}
// MimeType returns the MIME type of the file
func (o *Object) MimeType(context.Context) string {
return o.contentType
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
// Offset and Count for range download
var offset int64
var count int64
fs.FixRangeOption(options, -1)
partialContent := false
for _, option := range options {
switch x := option.(type) {
case *fs.RangeOption:
offset, count = x.Decode(-1)
partialContent = true
case *fs.SeekOption:
offset = x.Offset
partialContent = true
default:
if option.Mandatory() {
fs.Logf(o, "Unsupported mandatory option: %v", option)
}
}
}
// Pacer not needed as this doesn't use the API
url, err := o.fs.ik.URL(client.URLParam{
Src: o.file.URL,
Signed: *o.file.IsPrivateFile || o.fs.opt.OnlySigned,
QueryParameters: map[string]string{
"tr": "orig-true",
"updatedAt": o.file.UpdatedAt.String(),
},
})
if err != nil {
return nil, err
}
httpClient := &http.Client{} // renamed so it doesn't shadow the imported client package
req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
return nil, err
}
req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+count-1))
resp, err := httpClient.Do(req)
if err != nil {
return nil, err
}
end := resp.ContentLength
if partialContent && resp.StatusCode == http.StatusOK {
skip := offset
if offset < 0 {
skip = end + offset + 1
}
_, err = io.CopyN(io.Discard, resp.Body, skip)
if err != nil {
if resp != nil {
_ = resp.Body.Close()
}
return nil, err
}
return readers.NewLimitedReadCloser(resp.Body, end-skip), nil
}
return resp.Body, nil
}
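The skip-then-limit fallback above (for servers that answer a range request with 200 and the full body) can be shown in isolation with stdlib readers; io.LimitReader stands in here for the readers.NewLimitedReadCloser helper used in Open:

package main

import (
    "fmt"
    "io"
    "strings"
)

func main() {
    // Full body returned despite the caller wanting bytes 3-6.
    body := io.NopCloser(strings.NewReader("0123456789"))
    offset, count := int64(3), int64(4)
    // Discard the bytes before the requested offset ...
    if _, err := io.CopyN(io.Discard, body, offset); err != nil {
        panic(err)
    }
    // ... then cap what the caller can read at the requested count.
    b, err := io.ReadAll(io.LimitReader(body, count))
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b)) // "3456"
}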
// Update in to the object with the modTime given of the given size
//
// When called from outside an Fs by rclone, src.Size() will always be >= 0.
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
srcRemote := o.Remote()
remote := path.Join(o.fs.root, srcRemote)
folderPath, fileName := path.Split(remote)
var resp *client.UploadResult
err = o.fs.pacer.Call(func() (bool, error) {
var res *http.Response
res, resp, err = o.fs.ik.Upload(ctx, in, client.UploadParam{
FileName: fileName,
Folder: folderPath,
IsPrivateFile: o.file.IsPrivateFile,
})
return o.fs.shouldRetry(ctx, res, err)
})
if err != nil {
return err
}
fileID := resp.FileID
_, file, err := o.fs.ik.File(ctx, fileID)
if err != nil {
return err
}
o.file = *file
return nil
}
// Remove this object
func (o *Object) Remove(ctx context.Context) (err error) {
err = o.fs.pacer.Call(func() (bool, error) {
var res *http.Response
res, err = o.fs.ik.DeleteFile(ctx, o.file.FileID)
return o.fs.shouldRetry(ctx, res, err)
})
return err
}
// SetModTime sets the metadata on the object to set the modification date
func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
return fs.ErrorCantSetModTime
}
func uploadFile(ctx context.Context, f *Fs, in io.Reader, srcRemote string, options ...fs.OpenOption) (fs.Object, error) {
remote := path.Join(f.root, srcRemote)
folderPath, fileName := path.Split(remote)
folderPath = f.EncodePath(folderPath)
fileName = f.EncodeFileName(fileName)
err := f.pacer.Call(func() (bool, error) {
var res *http.Response
var err error
res, _, err = f.ik.Upload(ctx, in, client.UploadParam{
FileName: fileName,
Folder: folderPath,
IsPrivateFile: &f.opt.OnlySigned,
})
return f.shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
return f.NewObject(ctx, srcRemote)
}
// Metadata returns the metadata for the object
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
metadata.Set("btime", o.file.CreatedAt.Format(time.RFC3339))
metadata.Set("size", strconv.FormatUint(o.file.Size, 10))
metadata.Set("file-type", o.file.FileType)
metadata.Set("height", strconv.Itoa(o.file.Height))
metadata.Set("width", strconv.Itoa(o.file.Width))
metadata.Set("has-alpha", strconv.FormatBool(o.file.HasAlpha))
for k, v := range o.file.EmbeddedMetadata {
metadata.Set(k, fmt.Sprint(v))
}
if o.file.Tags != nil {
metadata.Set("tags", strings.Join(o.file.Tags, ","))
}
if o.file.CustomCoordinates != nil {
metadata.Set("custom-coordinates", *o.file.CustomCoordinates)
}
if o.file.IsPrivateFile != nil {
metadata.Set("is-private-file", strconv.FormatBool(*o.file.IsPrivateFile))
}
if o.file.AITags != nil {
googleTags := []string{}
awsTags := []string{}
for _, tag := range o.file.AITags {
if tag.Source == "google-auto-tagging" {
googleTags = append(googleTags, tag.Name)
} else if tag.Source == "aws-auto-tagging" {
awsTags = append(awsTags, tag.Name)
}
}
if len(googleTags) > 0 {
metadata.Set("google-tags", strings.Join(googleTags, ","))
}
if len(awsTags) > 0 {
metadata.Set("aws-tags", strings.Join(awsTags, ","))
}
}
return metadata, nil
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantCopy
}
file, err := srcObj.Open(ctx)
if err != nil {
return nil, err
}
defer func() { _ = file.Close() }()
return uploadFile(ctx, f, file, remote)
}
// Check the interfaces are satisfied.
var (
_ fs.Fs = &Fs{}
_ fs.Purger = &Fs{}
_ fs.PublicLinker = &Fs{}
_ fs.Object = &Object{}
_ fs.Copier = &Fs{}
)

View file

@ -0,0 +1,18 @@
package imagekit
import (
"testing"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
func TestIntegration(t *testing.T) {
debug := true
fstest.Verbose = &debug
fstests.Run(t, &fstests.Opt{
RemoteName: "TestImageKit:",
NilObject: (*Object)(nil),
SkipFsCheckWrap: true,
})
}

backend/imagekit/util.go
View file

@ -0,0 +1,193 @@
package imagekit
import (
"context"
"fmt"
"net/http"
"strconv"
"time"
"github.com/rclone/rclone/backend/imagekit/client"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/pacer"
)
func (f *Fs) getFiles(ctx context.Context, path string, includeVersions bool) (files []client.File, err error) {
files = make([]client.File, 0)
var hasMore = true
for hasMore {
err = f.pacer.Call(func() (bool, error) {
var data *[]client.File
var res *http.Response
res, data, err = f.ik.Files(ctx, client.FilesOrFolderParam{
Skip: len(files),
Limit: 100,
Path: path,
}, includeVersions)
hasMore = data != nil && len(*data) == 100
if data != nil {
files = append(files, *data...)
}
return f.shouldRetry(ctx, res, err)
})
}
if err != nil {
return make([]client.File, 0), err
}
return files, nil
}
func (f *Fs) getFolders(ctx context.Context, path string) (folders []client.Folder, err error) {
folders = make([]client.Folder, 0)
var hasMore = true
for hasMore {
err = f.pacer.Call(func() (bool, error) {
var data *[]client.Folder
var res *http.Response
res, data, err = f.ik.Folders(ctx, client.FilesOrFolderParam{
Skip: len(folders),
Limit: 100,
Path: path,
})
hasMore = data != nil && len(*data) == 100
if data != nil {
folders = append(folders, *data...)
}
return f.shouldRetry(ctx, res, err)
})
}
if err != nil {
return make([]client.Folder, 0), err
}
return folders, nil
}
func (f *Fs) getFileByName(ctx context.Context, path string, name string) (file *client.File) {
err := f.pacer.Call(func() (bool, error) {
res, data, err := f.ik.Files(ctx, client.FilesOrFolderParam{
Limit: 1,
Path: path,
SearchQuery: fmt.Sprintf(`type = "file" AND name = %s`, strconv.Quote(name)),
}, false)
if data == nil || len(*data) == 0 {
file = nil
} else {
file = &(*data)[0]
}
return f.shouldRetry(ctx, res, err)
})
if err != nil {
return nil
}
return file
}
func (f *Fs) getFolderByName(ctx context.Context, path string, name string) (folder *client.Folder, err error) {
err = f.pacer.Call(func() (bool, error) {
res, data, err := f.ik.Folders(ctx, client.FilesOrFolderParam{
Limit: 1,
Path: path,
SearchQuery: fmt.Sprintf(`type = "folder" AND name = %s`, strconv.Quote(name)),
})
if data == nil || len(*data) == 0 {
folder = nil
} else {
folder = &(*data)[0]
}
return f.shouldRetry(ctx, res, err)
})
if err != nil {
return nil, err
}
return folder, nil
}
// retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{
401, // Unauthorized (e.g. "Token has expired")
408, // Request Timeout
429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error
503, // Service Unavailable
504, // Gateway Time-out
}
func shouldRetryHTTP(resp *http.Response, retryErrorCodes []int) bool {
if resp == nil {
return false
}
for _, e := range retryErrorCodes {
if resp.StatusCode == e {
return true
}
}
return false
}
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if resp != nil && (resp.StatusCode == 429 || resp.StatusCode == 503) {
var retryAfter = 1
retryAfterString := resp.Header.Get("X-RateLimit-Reset")
if retryAfterString != "" {
var err error
retryAfter, err = strconv.Atoi(retryAfterString)
if err != nil {
fs.Errorf(f, "Malformed %s header %q: %v", "X-RateLimit-Reset", retryAfterString, err)
}
}
return true, pacer.RetryAfterError(err, time.Duration(retryAfter)*time.Millisecond)
}
return fserrors.ShouldRetry(err) || shouldRetryHTTP(resp, retryErrorCodes), err
}
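shouldRetry is consumed through the pacer: every API call in this backend is wrapped in f.pacer.Call, which re-invokes the callback while it returns true and honours any RetryAfterError delay. A fragment mirroring the calls above, where doAPICall is a hypothetical stand-in returning (*http.Response, error):

err := f.pacer.Call(func() (bool, error) {
    res, err := doAPICall(ctx) // hypothetical API call
    return f.shouldRetry(ctx, res, err)
})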
// EncodePath encapsulates the logic for encoding a path
func (f *Fs) EncodePath(str string) string {
return f.opt.Enc.FromStandardPath(str)
}
// DecodePath encapsulates the logic for decoding a path
func (f *Fs) DecodePath(str string) string {
return f.opt.Enc.ToStandardPath(str)
}
// EncodeFileName encapsulates the logic for encoding a file name
func (f *Fs) EncodeFileName(str string) string {
return f.opt.Enc.FromStandardName(str)
}
// DecodeFileName encapsulates the logic for decoding a file name
func (f *Fs) DecodeFileName(str string) string {
return f.opt.Enc.ToStandardName(str)
}

Some files were not shown because too many files have changed in this diff.