Compare commits


33 commits

Author SHA1 Message Date
Nick Craig-Wood
ea73ac75ba Version v1.59.2 2022-09-15 10:34:50 +01:00
Nick Craig-Wood
50657752fd config: move locking to fix fatal error: concurrent map read and map write
Before this change we assumed that github.com/Unknwon/goconfig was
threadsafe as documented.

However, it turns out it is not threadsafe, and looking at the code it
appears that making it threadsafe might be quite hard.

So this change increases the lock coverage in configfile to cover the
goconfig uses also.

Fixes #6378
2022-09-14 17:04:01 +01:00
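The shape of the fix can be sketched as follows, assuming a map-backed store like goconfig's (the `storage` type and method names here are illustrative, not rclone's actual configfile code). In Go, a concurrent map read and map write is a fatal runtime error, so reads need to hold the lock just as much as writes do.

```go
package main

import (
	"fmt"
	"sync"
)

// storage mimics a config store backed by a plain map that is not safe
// for concurrent use (as github.com/Unknwon/goconfig turned out to be).
// The fix is to widen the lock coverage so that every read and write of
// the underlying map goes through the same mutex.
type storage struct {
	mu   sync.Mutex
	data map[string]string // not threadsafe on its own
}

func (s *storage) Set(k, v string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[k] = v
}

func (s *storage) Get(k string) string {
	s.mu.Lock() // reads must be locked too, or the runtime can fatal
	defer s.mu.Unlock()
	return s.data[k]
}

func main() {
	s := &storage{data: map[string]string{}}
	var wg sync.WaitGroup
	// Concurrent readers and writers: without the locked Get this is
	// exactly the "concurrent map read and map write" crash.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.Set("key", "value")
			_ = s.Get("key")
		}()
	}
	wg.Wait()
	fmt.Println(s.Get("key"))
}
```

Running this under `go run -race` is a quick way to confirm the lock coverage is sufficient.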
Nick Craig-Wood
603efbfe76 azureblob,b2,s3: fix chunksize calculations producing too many parts
Before this fix, the chunksize calculator was using the previous size
of the object, not the new size of the object to calculate the chunk
sizes.

This meant that uploading a replacement object which needed a new
chunk size would fail because it used too many parts.

This fix changes the calculator to take the size explicitly.
2022-08-09 12:56:55 +01:00
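A minimal sketch of a calculator that takes the size explicitly, assuming a simple linear scheme (rclone's real calculator differs in detail): the key point is that the chunk size must be derived from the size of the object being uploaded now, not from a previously cached object size.

```go
package main

import "fmt"

// chunkSize picks the smallest multiple of defaultChunkSize that lets an
// object of the given size fit within maxParts parts. Passing size in
// explicitly, rather than reading it from a possibly stale object, is
// the essence of the fix.
func chunkSize(size, defaultChunkSize, maxParts int64) int64 {
	chunk := defaultChunkSize
	for size/chunk >= maxParts {
		chunk += defaultChunkSize
	}
	return chunk
}

func main() {
	// 100 GiB object, 5 MiB default chunks, 10000 part limit (S3-style):
	// the default chunk size would need over 20000 parts, so it grows.
	fmt.Println(chunkSize(100<<30, 5<<20, 10000))
}
```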
Nick Craig-Wood
831c79b11d local: disable xattr support if the filesystem indicates it is not supported
Before this change, if rclone was run with `-M` on a filesystem
without xattr support, it would error out.

This patch makes rclone detect the not supported errors and disable
xattrs from then on. It prints one ERROR level message about this.

See: https://forum.rclone.org/t/metadata-update-local-s3/32277/7
2022-08-09 12:56:55 +01:00
Nick Craig-Wood
acec3dbf11 Start v1.59.2-DEV development 2022-08-08 19:00:01 +01:00
Nick Craig-Wood
5f710a2d48 Version v1.59.1 2022-08-08 18:10:58 +01:00
Joram Schrijver
f675da9053 dlna: fix SOAP action header parsing - fixes #6354
Changes in github.com/anacrolix/dms changed upnp.ServiceURN to include a
namespace identifier. This identifier was previously hardcoded, but is
now parsed out of the URN. The old SOAP action header parsing logic was
duplicated in rclone and did not handle this field. Resulting responses
included a URN with an empty namespace identifier, breaking clients.
2022-08-07 12:18:08 +01:00
Nick Craig-Wood
5788d3fc29 accounting: fix panic in core/stats-reset with unknown group - fixes #6327
This also adds tests for the rc commands for stats groups
2022-08-05 17:32:19 +01:00
Nick Craig-Wood
c5a371d9e4 build: disable goimports linter to avoid backporting lots of changes 2022-08-05 17:27:21 +01:00
Nick Craig-Wood
5e441673e3 serve sftp: fix checksum detection - Fixes #6351
Before this change, rclone serve sftp failed to signal to the remote
that md5sum/sha1sum wasn't supported, as in

71e172a139 serve/sftp: support empty "md5sum" and "sha1sum" commands

This regressed when the md5sum/sha1sum detection was reworked to just
run a plain `md5sum`/`sha1sum` command in

3ea82032e7 sftp: support md5/sha1 with rsync.net #3254

The server unconditionally returned good hashes even if the remote
being served didn't support the hash type in question.

This fix checks that the hash type is supported and returns an error if
it is not

    MD5 hash not supported

When the backend is first contacted this will cause the sftp backend
to detect that the hash type isn't available.

Unfortunately this may have cached the wrong state so editing or
remaking the config may be necessary to fix it.
2022-08-05 17:26:32 +01:00
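A sketch of the check, with illustrative names rather than rclone's real serve sftp API: the served backend's supported hash set is consulted before answering, and an unsupported type produces the error the sftp client needs to see.

```go
package main

import "fmt"

// hashSum returns the checksum for a path, but only if the backend being
// served actually supports the hash type; otherwise it returns an error
// so the connecting sftp backend can mark that hash as unavailable.
func hashSum(supported map[string]bool, hashType, path string) (string, error) {
	if !supported[hashType] {
		return "", fmt.Errorf("%s hash not supported", hashType)
	}
	return lookupHash(hashType, path), nil
}

// lookupHash stands in for reading the real hash from the served remote.
func lookupHash(hashType, path string) string {
	return "d41d8cd98f00b204e9800998ecf8427e"
}

func main() {
	supported := map[string]bool{"SHA-1": true} // a remote without MD5
	_, err := hashSum(supported, "MD5", "file.txt")
	fmt.Println(err) // the error the client uses to disable MD5
}
```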
Nick Craig-Wood
274eca148c fs: fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS"
Before this fix, the parsing code gave an error like this

    parsing "2022-08-02 07:00:00" as fs.Time failed: expected newline

This was due to the Scan call failing to read all the data.

This patch fixes that, and redoes the tests
2022-08-05 17:26:23 +01:00
Nick Craig-Wood
28925414b8 dropbox: fix ChangeNotify was unable to decrypt errors
Before this fix, the dropbox backend wasn't decoding the file names
received in changenotify events into rclone standard format.

This meant that changenotify events for filenames which had encoded
characters were failing to be decrypted properly if wrapped in crypt.

See: https://forum.rclone.org/t/rclone-vfs-cache-says-file-name-too-long/31535
2022-08-04 10:26:42 +01:00
Nick Craig-Wood
9563a770a1 mega: Fix nil pointer exception when bad node received
Fixes: #6336
2022-08-04 10:23:44 +01:00
Nick Craig-Wood
386ca20792 combine: fix errors with backends shutting down while in use
Before this patch, backends could be shut down when they fell out of
the cache while still in use by combine. This was particularly
noticeable with the dropbox backend, which gave this error when
uploading files after its Shutdown method had been called.

    Failed to copy: upload failed: batcher is shutting down

This patch gets the combine remote to pin them until it is finished.

See: https://forum.rclone.org/t/rclone-combine-upload-failed-batcher-is-shutting-down/32168
2022-08-04 10:18:07 +01:00
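The pinning idea can be sketched with a small reference count, using illustrative names rather than rclone's actual cache API: while the pin count is non-zero, cache expiry must not call Shutdown on the backend.

```go
package main

import "fmt"

// cacheEntry sketches the pinning the combine remote now does: a pinned
// backend survives cache expiry until the work using it is finished.
type cacheEntry struct {
	pins     int
	shutdown bool
}

func (e *cacheEntry) Pin()   { e.pins++ }
func (e *cacheEntry) Unpin() { e.pins-- }

// Expire shuts the backend down only when nothing holds a pin, and
// reports whether it actually did so.
func (e *cacheEntry) Expire() bool {
	if e.pins > 0 {
		return false // still in use by combine: keep it alive
	}
	e.shutdown = true
	return true
}

func main() {
	e := &cacheEntry{}
	e.Pin() // combine pins the upstream for the duration of an upload
	fmt.Println(e.Expire())
	e.Unpin()
	fmt.Println(e.Expire())
}
```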
Nick Craig-Wood
8390ba4ca9 build: fix android build after GitHub actions change
Before this change the android build started failing with

    gomobile: ANDROID_NDK_HOME specifies /usr/local/lib/android/sdk/ndk/25.0.8775105
    which is unusable: unsupported API version 16 (not in 19..33)

This was caused by a change to github actions, but is ultimately due
to an issue in gomobile with the newest version of the SDK.

This change fixes the problem by declaring a minimum API version of 21
and using version 21 compilers to build everything and using the
default NDK in github actions.

See: https://github.com/actions/virtual-environments/issues/5930
See: https://github.com/lightningnetwork/lnd/issues/6651
2022-08-04 10:18:07 +01:00
Nick Craig-Wood
4eea0ca8bb dropbox: fix infinite loop on uploading a corrupted file
Before this change, if rclone attempted to upload a file which read
more bytes than the size it declared then the uploader would enter an
infinite loop.

See: https://forum.rclone.org/t/transfer-percentages-100-again/32109
2022-08-04 10:18:07 +01:00
albertony
60d59e2600 jottacloud: do not store username in config when using standard auth
Previously, with standard auth, the username would be stored in config - but only after
entering the non-standard device/mountpoint sequence during config (a feature introduced
with #5926). Regardless of that, rclone always requests the username from the api at
startup (NewFS).

In #6270 (commit 9dbed02329) this was changed to always
store username in config (consistency), and then also use it to avoid the repeated
customer info request in NewFs (performance). But, as reported in #6309, it did not work
with legacy auth, where user enters username manually, if user entered an email address
instead of the internal username required for api requests. This change was therefore
recently reverted.

The current commit takes another step back to not store the username in config during
the non-standard device/mountpoint config sequence (consistency). The username will
now only be stored in config when using legacy auth, where it is an input parameter.
2022-07-25 18:25:15 +01:00
Nick Craig-Wood
bf0c7e0a6b Revert "jottacloud: always store username in config and use it to avoid initial api request"
This reverts commit 9dbed02329.

See: #6309
2022-07-25 18:25:15 +01:00
Lesmiscore
8a8a77ebc5 internetarchive: handle hash symbol in the middle of filename 2022-07-22 13:09:44 +01:00
Nick Craig-Wood
d211372c9e build: disable revive linter pending a fix in golangci-lint
The revive linter got extremely slow in golangci-lint 1.47.1 causing
the CI to time out.

Disable for the time being until it is fixed.

See: https://github.com/golangci/golangci-lint/issues/2997
2022-07-20 23:09:04 +01:00
albertony
1ebe9a800d sftp: fix issue with WS_FTP by working around failing RealPath 2022-07-20 18:08:42 +01:00
Nick Craig-Wood
31f0db544f s3: fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput
In

22abd785eb s3: implement reading and writing of metadata #111

The reading of object information was refactored to use the
s3.HeadObjectOutput structure.

Unfortunately the code branch with `--s3-no-head` was not tested,
otherwise this panic would have been discovered.

This shows that this path is not integration tested, so this adds a
new integration test.

Fixes #6322
2022-07-18 23:39:11 +01:00
Lesmiscore
37bcc3df14 backend/internetarchive: ignore checksums for files using the different method 2022-07-17 17:01:09 +01:00
Nick Craig-Wood
74401077dc dropbox: fix hang on quit with --dropbox-batch-mode off
This problem arose because we are now much more diligent about calling
Shutdown, and the dropbox backend's Shutdown method hung when batch
mode was "off".

See: https://forum.rclone.org/t/dropbox-lsjson-in-1-59-stuck-on-commiting-upload/31853
2022-07-17 17:01:09 +01:00
Nick Naumann
d96789e1b8 sync: update docs and error messages to reflect fixes to overlap checks 2022-07-17 17:01:09 +01:00
Nick Naumann
69165c0924 sync: add filter-sensitivity to --backup-dir option
The old Overlapping function and corresponding tests have been removed, as it has been completely replaced by the OverlappingFilterCheck function.
2022-07-17 17:01:09 +01:00
albertony
c8a2aa310e docs: fix links to mount command from install docs 2022-07-17 17:00:35 +01:00
r-ricci
9417732f07 union: fix panic due to misalignment of struct field in 32 bit architectures
`FS.cacheExpiry` is accessed through sync/atomic.
According to the documentation, "On ARM, 386, and 32-bit MIPS, it is
the caller's responsibility to arrange for 64-bit alignment of 64-bit
words accessed atomically. The first word in a variable or in an
allocated struct, array, or slice can be relied upon to be 64-bit
aligned."
Before commit 1d2fe0d856 this field was
aligned, but then a new field was added to the structure, causing the
test suite to panic on linux/386.
No other field is used with sync/atomic, so `cacheExpiry` can just be
placed at the beginning of the struct to ensure it is always aligned.
2022-07-17 16:59:50 +01:00
Nick Craig-Wood
9ba253a355 union: fix multiple files being uploaded when roots don't exist
See: https://forum.rclone.org/t/union-backend-copying-to-all-remotes-while-it-shouldnt/31781
2022-07-17 16:59:50 +01:00
Nick Craig-Wood
a6fba1f0c6 union: fix duplicated files when using directories with leading /
See: https://forum.rclone.org/t/union-backend-copying-to-all-remotes-while-it-shouldnt/31781
2022-07-17 16:59:50 +01:00
Nick Craig-Wood
80c5850ee8 combine: throw error if duplicate directory name is specified
See: https://forum.rclone.org/t/v1-59-combine-qs/31814
2022-07-17 16:59:50 +01:00
Nick Craig-Wood
727387ab1e combine: fix docs showing remote= instead of upstreams=
See: https://forum.rclone.org/t/v1-59-combine-qs/31814
2022-07-17 16:59:50 +01:00
Nick Craig-Wood
8226b6ada2 Start v1.59.1-DEV development 2022-07-17 16:56:29 +01:00
51 changed files with 1056 additions and 402 deletions


@ -245,10 +245,6 @@ jobs:
with:
go-version: 1.18.x
# Upgrade together with Go version. Using a GitHub-provided version saves around 2 minutes.
- name: Force NDK version
run: echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install "ndk;23.1.7779620" | grep -v = || true
- name: Go module cache
uses: actions/cache@v2
with:
@ -271,27 +267,29 @@ jobs:
go install golang.org/x/mobile/cmd/gobind@latest
go install golang.org/x/mobile/cmd/gomobile@latest
env PATH=$PATH:~/go/bin gomobile init
echo "RCLONE_NDK_VERSION=21" >> $GITHUB_ENV
- name: arm-v7a gomobile build
run: env PATH=$PATH:~/go/bin gomobile bind -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
- name: arm-v7a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi16-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm' >> $GITHUB_ENV
echo 'GOARM=7' >> $GITHUB_ENV
echo 'CGO_ENABLED=1' >> $GITHUB_ENV
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm-v7a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-16-armv7a .
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
- name: arm64-v8a Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=arm64' >> $GITHUB_ENV
@ -299,12 +297,12 @@ jobs:
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: arm64-v8a build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-21-armv8a .
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
- name: x86 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android16-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=386' >> $GITHUB_ENV
@ -312,12 +310,12 @@ jobs:
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x86 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-16-x86 .
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
- name: x64 Set environment variables
shell: bash
run: |
echo "CC=$(echo $ANDROID_HOME/ndk/23.1.7779620/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android21-clang)" >> $GITHUB_ENV
echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
echo 'GOOS=android' >> $GITHUB_ENV
echo 'GOARCH=amd64' >> $GITHUB_ENV
@ -325,7 +323,7 @@ jobs:
echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
- name: x64 build
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-21-x64 .
run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x64 .
- name: Upload artifacts
run: |


@ -4,8 +4,8 @@ linters:
enable:
- deadcode
- errcheck
- goimports
- revive
#- goimports
#- revive
- ineffassign
- structcheck
- varcheck

MANUAL.html generated

@ -19,7 +19,7 @@
<header id="title-block-header">
<h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p>
<p class="date">Jul 09, 2022</p>
<p class="date">Sep 15, 2022</p>
</header>
<h1 id="rclone-syncs-your-files-to-cloud-storage">Rclone syncs your files to cloud storage</h1>
<p><img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" ></p>
@ -300,7 +300,7 @@ go build</code></pre>
<p>Run the <a href="https://rclone.org/commands/rclone_config_paths/">config paths</a> command to see the locations that rclone will use.</p>
<p>To override them set the corresponding options (as command-line arguments, or as <a href="https://rclone.org/docs/#environment-variables">environment variables</a>): - <a href="https://rclone.org/docs/#config-config-file">--config</a> - <a href="https://rclone.org/docs/#cache-dir-dir">--cache-dir</a> - <a href="https://rclone.org/docs/#temp-dir-dir">--temp-dir</a></p>
<h2 id="autostart">Autostart</h2>
<p>After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform <em>periodic</em> operations, such as a regular <a href="https://rclone.org/commands/rclone_sync/">sync</a>, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose <em>service</em>-like features, such as <a href="https://rclone.org/rc/">remote control</a>, <a href="https://rclone.org/gui/">GUI</a>, <a href="https://rclone.org/commands/rclone_serve/">serve</a> or <a href="https://rclone.org/commands/rclone_move/">mount</a>, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.</p>
<p>After installing and configuring rclone, as described above, you are ready to use rclone as an interactive command line utility. If your goal is to perform <em>periodic</em> operations, such as a regular <a href="https://rclone.org/commands/rclone_sync/">sync</a>, you will probably want to configure your rclone command in your operating system's scheduler. If you need to expose <em>service</em>-like features, such as <a href="https://rclone.org/rc/">remote control</a>, <a href="https://rclone.org/gui/">GUI</a>, <a href="https://rclone.org/commands/rclone_serve/">serve</a> or <a href="https://rclone.org/commands/rclone_mount/">mount</a>, you will often want an rclone command always running in the background, and configuring it to run in a service infrastructure may be a better option. Below are some alternatives on how to achieve this on different operating systems.</p>
<p>NOTE: Before setting up autorun it is highly recommended that you have tested your command manually from a Command Prompt first.</p>
<h3 id="autostart-on-windows">Autostart on Windows</h3>
<p>The most relevant alternatives for autostart on Windows are: - Run at user log on using the Startup folder - Run at user log on, at system startup or at schedule using Task Scheduler - Run at system startup using Windows service</p>
@ -309,7 +309,7 @@ go build</code></pre>
<p>Example command to run a sync in background:</p>
<pre><code>c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclone\logs\sync_files.txt</code></pre>
<h4 id="user-account">User account</h4>
<p>As mentioned in the <a href="https://rclone.org/commands/rclone_move/">mount</a> documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in <code>SYSTEM</code> user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.</p>
<p>As mentioned in the <a href="https://rclone.org/commands/rclone_mount/">mount</a> documentation, mounted drives created as Administrator are not visible to other accounts, not even the account that was elevated as Administrator. By running the mount command as the built-in <code>SYSTEM</code> user account, it will create drives accessible for everyone on the system. Both scheduled task and Windows service can be used to achieve this.</p>
<p>NOTE: Remember that when rclone runs as the <code>SYSTEM</code> user, the user profile that it sees will not be yours. This means that if you normally run rclone with configuration file in the default location, to be able to use the same configuration when running as the system user you must explicitly tell rclone where to find it with the <a href="https://rclone.org/docs/#config-config-file"><code>--config</code></a> option, or else it will look in the system user's profile path (<code>C:\Windows\System32\config\systemprofile</code>). To test your command manually from a Command Prompt, you can run it with the <a href="https://docs.microsoft.com/en-us/sysinternals/downloads/psexec">PsExec</a> utility from Microsoft's Sysinternals suite, which takes option <code>-s</code> to execute commands as the <code>SYSTEM</code> user.</p>
<h4 id="start-from-startup-folder">Start from Startup folder</h4>
<p>To quickly execute an rclone command you can simply create a standard Windows Explorer shortcut for the complete rclone command you want to run. If you store this shortcut in the special "Startup" start-menu folder, Windows will automatically run it at login. To open this folder in Windows Explorer, enter path <code>%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup</code>, or <code>C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp</code> if you want the command to start for <em>every</em> user that logs in.</p>
@ -469,6 +469,7 @@ destpath/sourcepath/two.txt</code></pre>
<p>Note that files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that support it) are also not yet handled.</p>
<p>It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the <a href="https://rclone.org/commands/rclone_copy/">copy</a> command if unsure.</p>
<p>If dest:path doesn't exist, it is created and the source:path contents go there.</p>
<p>It is not possible to sync overlapping remotes. However, you may exclude the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory.</p>
<p><strong>Note</strong>: Use the <code>-P</code>/<code>--progress</code> flag to view real-time transfer statistics</p>
<p><strong>Note</strong>: Use the <code>rclone dedupe</code> command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See <a href="https://forum.rclone.org/t/sync-not-clearing-duplicates/14372">this forum post</a> for more info.</p>
<pre><code>rclone sync source:path dest:path [flags]</code></pre>
@ -4236,7 +4237,7 @@ rclone sync -i /path/to/files remote:current-backup</code></pre>
<h3 id="backup-dirdir">--backup-dir=DIR</h3>
<p>When using <code>sync</code>, <code>copy</code> or <code>move</code> any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.</p>
<p>If <code>--suffix</code> is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.</p>
<p>The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.</p>
<p>The remote in use must support server-side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory without it being excluded by a filter rule.</p>
<p>For example</p>
<pre><code>rclone sync -i /path/to/local remote:current --backup-dir remote:old</code></pre>
<p>will sync <code>/path/to/local</code> to <code>remote:current</code>, but for any files which would have been updated or deleted will be stored in <code>remote:old</code>.</p>
@ -8378,7 +8379,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.59.0&quot;)
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.59.2&quot;)
-v, --verbose count Print lots more stuff (repeat for more)</code></pre>
<h2 id="backend-flags">Backend Flags</h2>
<p>These flags are available for every command. They control the backends and may be set in the config file.</p>
@ -17156,7 +17157,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
remote = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</code></pre>
upstreams = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</code></pre>
<p>If you then add that config to your config file (find it with <code>rclone config file</code>) then you can access all the shared drives in one place with the <code>AllDrives:</code> remote.</p>
<p>See <a href="https://rclone.org/drive/#drives">the Google Drive docs</a> for full info.</p>
<h3 id="standard-options-11">Standard options</h3>
@ -19657,7 +19658,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
remote = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</code></pre>
upstreams = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</code></pre>
<p>Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be substituted with "_" and duplicate names will have numbers suffixed. It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree.</p>
<h3 id="untrash">untrash</h3>
<p>Untrash files and directories</p>
@ -20952,8 +20953,8 @@ y/e/d&gt; y</code></pre>
<p>The Internet Archive backend utilizes Items on <a href="https://archive.org/">archive.org</a></p>
<p>Refer to <a href="https://archive.org/services/docs/api/ias3.html">IAS3 API documentation</a> for the API this backend uses.</p>
<p>Paths are specified as <code>remote:bucket</code> (or <code>remote:</code> for the <code>lsd</code> command.) You may put subdirectories in too, e.g. <code>remote:item/path/to/dir</code>.</p>
<p>Once you have made a remote (see the provider specific section above) you can use it like this:</p>
<p>Unlike S3, listing all the items you have uploaded is not supported.</p>
<p>Once you have made a remote, you can use it like this:</p>
<p>Make a new item</p>
<pre><code>rclone mkdir remote:item</code></pre>
<p>List the contents of a item</p>
@ -20965,7 +20966,7 @@ y/e/d&gt; y</code></pre>
<p>You can optionally wait for the server's processing to finish, by setting non-zero value to <code>wait_archive</code> key. By making it wait, rclone can do normal file comparison. Make sure to set a large enough value (e.g. <code>30m0s</code> for smaller files) as it can take a long time depending on server's queue.</p>
<h2 id="about-metadata">About metadata</h2>
<p>This backend supports setting, updating and reading metadata of each file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone.</p>
<p>The following are reserved by Internet Archive: - <code>name</code> - <code>source</code> - <code>size</code> - <code>md5</code> - <code>crc32</code> - <code>sha1</code> - <code>format</code> - <code>old_version</code> - <code>viruscheck</code></p>
<p>The following are reserved by Internet Archive: - <code>name</code> - <code>source</code> - <code>size</code> - <code>md5</code> - <code>crc32</code> - <code>sha1</code> - <code>format</code> - <code>old_version</code> - <code>viruscheck</code> - <code>summation</code></p>
<p>Trying to set values to these keys is ignored with a warning. Only setting <code>mtime</code> is an exception. Doing so makes it behave identically to setting ModTime.</p>
<p>rclone reserves all the keys starting with <code>rclone-</code>. Setting value for these keys will give you warnings, but values are set according to request.</p>
<p>If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, that supports one value per one key. It can be triggered when you did a server-side copy.</p>
@ -21138,42 +21139,42 @@ y/e/d&gt; y</code></pre>
<td>CRC32 calculated by Internet Archive</td>
<td>string</td>
<td>01234567</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="even">
<td>format</td>
<td>Name of format identified by Internet Archive</td>
<td>string</td>
<td>Comma-Separated Values</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="odd">
<td>md5</td>
<td>MD5 hash calculated by Internet Archive</td>
<td>string</td>
<td>01234567012345670123456701234567</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="even">
<td>mtime</td>
<td>Time of last modification, managed by Rclone</td>
<td>RFC 3339</td>
<td>2006-01-02T15:04:05.999999999Z</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="odd">
<td>name</td>
<td>Full file path, without the bucket part</td>
<td>filename</td>
<td>backend/internetarchive/internetarchive.go</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="even">
<td>old_version</td>
<td>Whether the file was replaced and moved by keep-old-version flag</td>
<td>boolean</td>
<td>true</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="odd">
<td>rclone-ia-mtime</td>
@ -21201,28 +21202,35 @@ y/e/d&gt; y</code></pre>
<td>SHA1 hash calculated by Internet Archive</td>
<td>string</td>
<td>0123456701234567012345670123456701234567</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="odd">
<td>size</td>
<td>File size in bytes</td>
<td>decimal number</td>
<td>123456</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="even">
<td>source</td>
<td>The source of the file</td>
<td>string</td>
<td>original</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
<tr class="odd">
<td>summation</td>
<td>Check https://forum.rclone.org/t/31922 for how it is used</td>
<td>string</td>
<td>md5</td>
<td><strong>Y</strong></td>
</tr>
<tr class="even">
<td>viruscheck</td>
<td>The last time viruscheck process was run for the file (?)</td>
<td>unixtime</td>
<td>1654191352</td>
<td>N</td>
<td><strong>Y</strong></td>
</tr>
</tbody>
</table>
@ -27757,6 +27765,84 @@ $ tree /tmp/b
<li>"error": return an error based on option value</li>
</ul>
<h1 id="changelog">Changelog</h1>
<h2 id="v1.59.2---2022-09-15">v1.59.2 - 2022-09-15</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)</li>
</ul></li>
<li>Local
<ul>
<li>Disable xattr support if the filesystem indicates it is not supported (Nick Craig-Wood)</li>
</ul></li>
<li>Azure Blob
<ul>
<li>Fix chunksize calculations producing too many parts (Nick Craig-Wood)</li>
</ul></li>
<li>B2
<ul>
<li>Fix chunksize calculations producing too many parts (Nick Craig-Wood)</li>
</ul></li>
<li>S3
<ul>
<li>Fix chunksize calculations producing too many parts (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.59.1---2022-08-08">v1.59.1 - 2022-08-08</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)</li>
<li>build: Fix android build after GitHub actions change (Nick Craig-Wood)</li>
<li>dlna: Fix SOAP action header parsing (Joram Schrijver)</li>
<li>docs: Fix links to mount command from install docs (albertony)</li>
<li>dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)</li>
<li>fs: Fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS" (Nick Craig-Wood)</li>
<li>serve sftp: Fix checksum detection (Nick Craig-Wood)</li>
<li>sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)</li>
</ul></li>
<li>Combine
<ul>
<li>Fix docs showing <code>remote=</code> instead of <code>upstreams=</code> (Nick Craig-Wood)</li>
<li>Throw error if duplicate directory name is specified (Nick Craig-Wood)</li>
<li>Fix errors with backends shutting down while in use (Nick Craig-Wood)</li>
</ul></li>
<li>Dropbox
<ul>
<li>Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)</li>
<li>Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)</li>
</ul></li>
<li>Internetarchive
<ul>
<li>Ignore checksums for files using a different method (Lesmiscore)</li>
<li>Handle hash symbol in the middle of filename (Lesmiscore)</li>
</ul></li>
<li>Jottacloud
<ul>
<li>Fix working with whitelabel Elgiganten Cloud</li>
<li>Do not store username in config when using standard auth (albertony)</li>
</ul></li>
<li>Mega
<ul>
<li>Fix nil pointer exception when bad node received (Nick Craig-Wood)</li>
</ul></li>
<li>S3
<ul>
<li>Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)</li>
</ul></li>
<li>SFTP
<ul>
<li>Fix issue with WS_FTP by working around failing RealPath (albertony)</li>
</ul></li>
<li>Union
<ul>
<li>Fix duplicated files when using directories with leading / (Nick Craig-Wood)</li>
<li>Fix multiple files being uploaded when roots don't exist (Nick Craig-Wood)</li>
<li>Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)</li>
</ul></li>
</ul>
<h2 id="v1.59.0---2022-07-09">v1.59.0 - 2022-07-09</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0">See commits</a></p>
<ul>

MANUAL.md generated

@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Jul 09, 2022
% Sep 15, 2022
# Rclone syncs your files to cloud storage
@ -506,7 +506,7 @@ such as a regular [sync](https://rclone.org/commands/rclone_sync/), you will pro
to configure your rclone command in your operating system's scheduler. If you need to
expose *service*-like features, such as [remote control](https://rclone.org/rc/),
[GUI](https://rclone.org/gui/), [serve](https://rclone.org/commands/rclone_serve/)
or [mount](https://rclone.org/commands/rclone_move/), you will often want an rclone
or [mount](https://rclone.org/commands/rclone_mount/), you will often want an rclone
command always running in the background, and configuring it to run in a service infrastructure
may be a better option. Below are some alternatives on how to achieve this on
different operating systems.
@ -539,7 +539,7 @@ c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclo
#### User account
As mentioned in the [mount](https://rclone.org/commands/rclone_move/) documentation,
As mentioned in the [mount](https://rclone.org/commands/rclone_mount/) documentation,
mounted drives created as Administrator are not visible to other accounts, not even the
account that was elevated as Administrator. By running the mount command as the
built-in `SYSTEM` user account, it will create drives accessible for everyone on
@ -897,6 +897,11 @@ extended explanation in the [copy](https://rclone.org/commands/rclone_copy/) com
If dest:path doesn't exist, it is created and the source:path contents
go there.
It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
@ -8735,7 +8740,8 @@ been added) in DIR, then it will be overwritten.
The remote in use must support server-side move or copy and you must
use the same remote as the destination of the sync. The backup
directory must not overlap the destination directory.
directory must not overlap the destination directory without it being
excluded by a filter rule.
For example
@ -14336,7 +14342,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.2")
-v, --verbose count Print lots more stuff (repeat for more)
```
@ -25136,7 +25142,7 @@ This would produce something like this:
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with `rclone
config file`) then you can access all the shared drives in one place
@ -28343,7 +28349,7 @@ drives found and a combined drive.
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. Any illegal characters will be
@ -30495,11 +30501,10 @@ Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.htm
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
Once you have made a remote (see the provider specific section above)
you can use it like this:
Unlike S3, listing all the items you have uploaded is not supported.
Once you have made a remote, you can use it like this:
Make a new item
rclone mkdir remote:item
@ -30536,6 +30541,7 @@ The following are reserved by Internet Archive:
- `format`
- `old_version`
- `viruscheck`
- `summation`
Trying to set values to these keys is ignored with a warning.
Only setting `mtime` is an exception; doing so behaves identically to setting ModTime.
@ -30741,19 +30747,20 @@ Here are the possible system metadata items for the internetarchive backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N |
| format | Name of format identified by Internet Archive | string | Comma-Separated Values | N |
| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N |
| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N |
| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N |
| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | **Y** |
| format | Name of format identified by Internet Archive | string | Comma-Separated Values | **Y** |
| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | **Y** |
| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | **Y** |
| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | **Y** |
| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | **Y** |
| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N |
| size | File size in bytes | decimal number | 123456 | N |
| source | The source of the file | string | original | N |
| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N |
| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | **Y** |
| size | File size in bytes | decimal number | 123456 | **Y** |
| source | The source of the file | string | original | **Y** |
| summation | Check https://forum.rclone.org/t/31922 for how it is used | string | md5 | **Y** |
| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | **Y** |
See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
@ -39413,6 +39420,58 @@ Options:
# Changelog
## v1.59.2 - 2022-09-15
[See commits](https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
* Bug Fixes
* config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)
* Local
* Disable xattr support if the filesystem indicates it is not supported (Nick Craig-Wood)
* Azure Blob
* Fix chunksize calculations producing too many parts (Nick Craig-Wood)
* B2
* Fix chunksize calculations producing too many parts (Nick Craig-Wood)
* S3
* Fix chunksize calculations producing too many parts (Nick Craig-Wood)
## v1.59.1 - 2022-08-08
[See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
* Bug Fixes
* accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)
* build: Fix android build after GitHub actions change (Nick Craig-Wood)
* dlna: Fix SOAP action header parsing (Joram Schrijver)
* docs: Fix links to mount command from install docs (albertony)
* dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)
* fs: Fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS" (Nick Craig-Wood)
* serve sftp: Fix checksum detection (Nick Craig-Wood)
* sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)
* Combine
* Fix docs showing `remote=` instead of `upstreams=` (Nick Craig-Wood)
* Throw error if duplicate directory name is specified (Nick Craig-Wood)
* Fix errors with backends shutting down while in use (Nick Craig-Wood)
* Dropbox
* Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
* Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
* Internetarchive
* Ignore checksums for files using a different method (Lesmiscore)
* Handle hash symbol in the middle of filename (Lesmiscore)
* Jottacloud
* Fix working with whitelabel Elgiganten Cloud
* Do not store username in config when using standard auth (albertony)
* Mega
* Fix nil pointer exception when bad node received (Nick Craig-Wood)
* S3
* Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)
* SFTP
* Fix issue with WS_FTP by working around failing RealPath (albertony)
* Union
* Fix duplicated files when using directories with leading / (Nick Craig-Wood)
* Fix multiple files being uploaded when roots don't exist (Nick Craig-Wood)
* Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)
## v1.59.0 - 2022-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)

MANUAL.txt generated

@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Jul 09, 2022
Sep 15, 2022
Rclone syncs your files to cloud storage
@ -857,6 +857,11 @@ extended explanation in the copy command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
Note: Use the -P/--progress flag to view real-time transfer statistics
Note: Use the rclone dedupe command to deal with "Duplicate
@ -8332,7 +8337,8 @@ added) in DIR, then it will be overwritten.
The remote in use must support server-side move or copy and you must use
the same remote as the destination of the sync. The backup directory
must not overlap the destination directory.
must not overlap the destination directory without it being excluded by
a filter rule.
For example
@ -13887,7 +13893,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.2")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
@ -24544,7 +24550,7 @@ This would produce something like this:
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with
rclone config file) then you can access all the shared drives in one
@ -27752,7 +27758,7 @@ found and a combined drive.
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to be
accessible with the aliases shown. Any illegal characters will be
@ -29886,11 +29892,10 @@ Refer to IAS3 API documentation for the API this backend uses.
Paths are specified as remote:bucket (or remote: for the lsd command.)
You may put subdirectories in too, e.g. remote:item/path/to/dir.
Once you have made a remote (see the provider specific section above)
you can use it like this:
Unlike S3, listing all the items you have uploaded is not supported.
Once you have made a remote, you can use it like this:
Make a new item
rclone mkdir remote:item
@ -29929,7 +29934,7 @@ file. The metadata will appear as file metadata on Internet Archive.
However, some fields are reserved by both Internet Archive and rclone.
The following are reserved by Internet Archive: - name - source - size -
md5 - crc32 - sha1 - format - old_version - viruscheck
md5 - crc32 - sha1 - format - old_version - viruscheck - summation
Trying to set values to these keys is ignored with a warning. Only
setting mtime is an exception. Doing so makes it the identical behavior
@ -30140,65 +30145,52 @@ including them.
Here are the possible system metadata items for the internetarchive
backend.
----------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------
Name Help Type Example Read Only
--------------------- ------------------ ----------- -------------------------------------------- --------------------
crc32 CRC32 calculated string 01234567 N
by Internet
--------------------- ---------------------------------- ----------- -------------------------------------------- --------------------
crc32 CRC32 calculated by Internet string 01234567 Y
Archive
format Name of format string Comma-Separated Values N
identified by
format Name of format identified by string Comma-Separated Values Y
Internet Archive
md5 MD5 hash string 01234567012345670123456701234567 N
calculated by
Internet Archive
md5 MD5 hash calculated by Internet string 01234567012345670123456701234567 Y
Archive
mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
modification,
managed by Rclone
mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z Y
by Rclone
name Full file path, filename backend/internetarchive/internetarchive.go N
without the bucket
name Full file path, without the bucket filename backend/internetarchive/internetarchive.go Y
part
old_version Whether the file boolean true N
was replaced and
moved by
keep-old-version
flag
old_version Whether the file was replaced and boolean true Y
moved by keep-old-version flag
rclone-ia-mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
modification,
managed by
Internet Archive
rclone-ia-mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z N
by Internet Archive
rclone-mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z N
modification,
managed by Rclone
rclone-mtime Time of last modification, managed RFC 3339 2006-01-02T15:04:05.999999999Z N
by Rclone
rclone-update-track Random value used string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa N
by Rclone for
tracking changes
inside Internet
rclone-update-track Random value used by Rclone for string aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa N
tracking changes inside Internet
Archive
sha1 SHA1 hash string 0123456701234567012345670123456701234567 N
calculated by
Internet Archive
sha1 SHA1 hash calculated by Internet string 0123456701234567012345670123456701234567 Y
Archive
size File size in bytes decimal 123456 N
size File size in bytes decimal 123456 Y
number
source The source of the string original N
file
source The source of the file string original Y
viruscheck The last time unixtime 1654191352 N
viruscheck process
was run for the
file (?)
----------------------------------------------------------------------------------------------------------------------
summation Check string md5 Y
https://forum.rclone.org/t/31922
for how it is used
viruscheck The last time viruscheck process unixtime 1654191352 Y
was run for the file (?)
--------------------------------------------------------------------------------------------------------------------------------------
See the metadata docs for more info.
@ -38939,6 +38931,79 @@ Options:
Changelog
v1.59.2 - 2022-09-15
See commits
- Bug Fixes
- config: Move locking to fix fatal error: concurrent map read and
map write (Nick Craig-Wood)
- Local
- Disable xattr support if the filesystem indicates it is not
supported (Nick Craig-Wood)
- Azure Blob
- Fix chunksize calculations producing too many parts (Nick
Craig-Wood)
- B2
- Fix chunksize calculations producing too many parts (Nick
Craig-Wood)
- S3
- Fix chunksize calculations producing too many parts (Nick
Craig-Wood)
v1.59.1 - 2022-08-08
See commits
- Bug Fixes
- accounting: Fix panic in core/stats-reset with unknown group
(Nick Craig-Wood)
- build: Fix android build after GitHub actions change (Nick
Craig-Wood)
- dlna: Fix SOAP action header parsing (Joram Schrijver)
- docs: Fix links to mount command from install docs (albertony)
- dropbox: Fix ChangeNotify was unable to decrypt errors (Nick
Craig-Wood)
- fs: Fix parsing of times and durations of the form "YYYY-MM-DD
HH:MM:SS" (Nick Craig-Wood)
- serve sftp: Fix checksum detection (Nick Craig-Wood)
- sync: Add accidentally missed filter-sensitivity to --backup-dir
option (Nick Naumann)
- Combine
- Fix docs showing remote= instead of upstreams= (Nick Craig-Wood)
- Throw error if duplicate directory name is specified (Nick
Craig-Wood)
- Fix errors with backends shutting down while in use (Nick
Craig-Wood)
- Dropbox
- Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
- Fix infinite loop on uploading a corrupted file (Nick
Craig-Wood)
- Internetarchive
- Ignore checksums for files using a different method
(Lesmiscore)
- Handle hash symbol in the middle of filename (Lesmiscore)
- Jottacloud
- Fix working with whitelabel Elgiganten Cloud
- Do not store username in config when using standard auth
(albertony)
- Mega
- Fix nil pointer exception when bad node received (Nick
Craig-Wood)
- S3
- Fix --s3-no-head panic: reflect: Elem of invalid type
s3.PutObjectInput (Nick Craig-Wood)
- SFTP
- Fix issue with WS_FTP by working around failing RealPath
(albertony)
- Union
- Fix duplicated files when using directories with leading / (Nick
Craig-Wood)
- Fix multiple files being uploaded when roots don't exist (Nick
Craig-Wood)
- Fix panic due to misalignment of struct field in 32 bit
architectures (r-ricci)
v1.59.0 - 2022-07-09
See commits


@ -1 +1 @@
v1.59.0
v1.59.2


@ -1676,14 +1676,14 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
}
}
uploadParts := int64(maxUploadParts)
uploadParts := maxUploadParts
if uploadParts < 1 {
uploadParts = 1
} else if uploadParts > maxUploadParts {
uploadParts = maxUploadParts
}
// calculate size of parts/blocks
partSize := chunksize.Calculator(o, int(uploadParts), o.fs.opt.ChunkSize)
partSize := chunksize.Calculator(o, src.Size(), uploadParts, o.fs.opt.ChunkSize)
putBlobOptions := azblob.UploadStreamToBlockBlobOptions{
BufferSize: int(partSize),


@ -97,7 +97,7 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
if size == -1 {
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
} else {
chunkSize = chunksize.Calculator(src, maxParts, defaultChunkSize)
chunkSize = chunksize.Calculator(o, size, maxParts, defaultChunkSize)
parts = size / int64(chunkSize)
if size%int64(chunkSize) != 0 {
parts++
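The azureblob and b2 hunks above both change `chunksize.Calculator` to take the new object's size explicitly, instead of deriving it from the previous object, so that a replacement object gets a chunk size that keeps the part count within the backend's limit. As a rough standalone sketch of that calculation (hypothetical names, not rclone's actual `chunksize.Calculator`):

```go
package main

import "fmt"

// chunkSizeFor grows the chunk size until the upload of an object of the
// given size fits within maxParts parts. This illustrates why the *new*
// size must be passed in: sizing chunks from the old object's size can
// produce too many parts for a larger replacement.
func chunkSizeFor(size int64, maxParts int, defaultChunkSize int64) int64 {
	chunkSize := defaultChunkSize
	// Double the chunk size until size/chunkSize fits within maxParts.
	for size/chunkSize >= int64(maxParts) {
		chunkSize *= 2
	}
	return chunkSize
}

func main() {
	const MiB = 1024 * 1024
	// A 100 GiB object with a 10000-part limit and a 5 MiB default chunk
	// needs its chunk size doubled twice (5 -> 10 -> 20 MiB).
	fmt.Println(chunkSizeFor(100*1024*MiB, 10000, 5*MiB) / MiB) // prints 20
}
```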


@ -145,6 +145,7 @@ func (f *Fs) newUpstream(ctx context.Context, dir, remote string) (*upstream, er
dir: dir,
pathAdjustment: newAdjustment(f.root, dir),
}
cache.PinUntilFinalized(u.f, u)
return u, nil
}
@ -206,9 +207,13 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
return err
}
mu.Lock()
if _, found := f.upstreams[dir]; found {
err = fmt.Errorf("duplicate directory name %q", dir)
} else {
f.upstreams[dir] = u
}
mu.Unlock()
return nil
return err
})
}
err = g.Wait()


@ -3299,7 +3299,7 @@ drives found and a combined drive.
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. Any illegal characters will be


@ -304,6 +304,9 @@ outer:
//
// Can be called from atexit handler
func (b *batcher) Shutdown() {
if !b.Batching() {
return
}
b.shutOnce.Do(func() {
atexit.Unregister(b.atexit)
fs.Infof(b.f, "Commiting uploads - please wait...")


@ -1435,7 +1435,7 @@ func (f *Fs) changeNotifyRunner(ctx context.Context, notifyFunc func(string, fs.
}
if entryPath != "" {
notifyFunc(entryPath, entryType)
notifyFunc(f.opt.Enc.ToStandardPath(entryPath), entryType)
}
}
if !changeList.HasMore {
@ -1697,6 +1697,9 @@ func (o *Object) uploadChunked(ctx context.Context, in0 io.Reader, commitInfo *f
if size > 0 {
// if size is known, check if next chunk is final
appendArg.Close = uint64(size)-in.BytesRead() <= uint64(chunkSize)
if in.BytesRead() > uint64(size) {
return nil, fmt.Errorf("expected %d bytes in input, but have read %d so far", size, in.BytesRead())
}
} else {
// if size is unknown, upload as long as we can read full chunks from the reader
appendArg.Close = in.BytesRead()-cursor.Offset < uint64(chunkSize)


@ -45,51 +45,67 @@ func init() {
Help: "Full file path, without the bucket part",
Type: "filename",
Example: "backend/internetarchive/internetarchive.go",
ReadOnly: true,
},
"source": {
Help: "The source of the file",
Type: "string",
Example: "original",
ReadOnly: true,
},
"mtime": {
Help: "Time of last modification, managed by Rclone",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999999999Z",
ReadOnly: true,
},
"size": {
Help: "File size in bytes",
Type: "decimal number",
Example: "123456",
ReadOnly: true,
},
"md5": {
Help: "MD5 hash calculated by Internet Archive",
Type: "string",
Example: "01234567012345670123456701234567",
ReadOnly: true,
},
"crc32": {
Help: "CRC32 calculated by Internet Archive",
Type: "string",
Example: "01234567",
ReadOnly: true,
},
"sha1": {
Help: "SHA1 hash calculated by Internet Archive",
Type: "string",
Example: "0123456701234567012345670123456701234567",
ReadOnly: true,
},
"format": {
Help: "Name of format identified by Internet Archive",
Type: "string",
Example: "Comma-Separated Values",
ReadOnly: true,
},
"old_version": {
Help: "Whether the file was replaced and moved by keep-old-version flag",
Type: "boolean",
Example: "true",
ReadOnly: true,
},
"viruscheck": {
Help: "The last time viruscheck process was run for the file (?)",
Type: "unixtime",
Example: "1654191352",
ReadOnly: true,
},
"summation": {
Help: "Check https://forum.rclone.org/t/31922 for how it is used",
Type: "string",
Example: "md5",
ReadOnly: true,
},
"rclone-ia-mtime": {
@ -173,7 +189,7 @@ var roMetadataKey = map[string]interface{}{
// do not add mtime here, it's a documented exception
"name": nil, "source": nil, "size": nil, "md5": nil,
"crc32": nil, "sha1": nil, "format": nil, "old_version": nil,
"viruscheck": nil,
"viruscheck": nil, "summation": nil,
}
// Options defines the configuration for this backend
@ -222,6 +238,7 @@ type IAFile struct {
Md5 string `json:"md5"`
Crc32 string `json:"crc32"`
Sha1 string `json:"sha1"`
Summation string `json:"summation"`
rawData json.RawMessage
}
@ -555,7 +572,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
return "", err
}
bucket, bucketPath := f.split(remote)
return path.Join(f.opt.FrontEndpoint, "/download/", bucket, bucketPath), nil
return path.Join(f.opt.FrontEndpoint, "/download/", bucket, quotePath(bucketPath)), nil
}
// Copy src to this remote using server-side copy operations.
@ -743,7 +760,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// make a GET request to (frontend)/download/:item/:path
opts := rest.Opts{
Method: "GET",
Path: path.Join("/download/", o.fs.root, o.fs.opt.Enc.FromStandardPath(o.remote)),
Path: path.Join("/download/", o.fs.root, quotePath(o.fs.opt.Enc.FromStandardPath(o.remote))),
Options: optionsFixed,
}
err = o.fs.pacer.Call(func() (bool, error) {
@ -1135,16 +1152,21 @@ func (f *Fs) waitDelete(ctx context.Context, bucket, bucketPath string) (err err
}
func makeValidObject(f *Fs, remote string, file IAFile, mtime time.Time, size int64) *Object {
return &Object{
ret := &Object{
fs: f,
remote: remote,
modTime: mtime,
size: size,
md5: file.Md5,
crc32: file.Crc32,
sha1: file.Sha1,
rawData: file.rawData,
}
// hashes from _files.xml (where summation != "") is different from one in other files
// https://forum.rclone.org/t/internet-archive-md5-tag-in-id-files-xml-interpreted-incorrectly/31922
if file.Summation == "" {
ret.md5 = file.Md5
ret.crc32 = file.Crc32
ret.sha1 = file.Sha1
}
return ret
}
func makeValidObject2(f *Fs, file IAFile, bucket string) *Object {


@ -152,7 +152,7 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
m.Set(configClientSecret, "")
srv := rest.NewClient(fshttp.NewClient(ctx))
token, tokenEndpoint, username, err := doTokenAuth(ctx, srv, loginToken)
token, tokenEndpoint, err := doTokenAuth(ctx, srv, loginToken)
if err != nil {
return nil, fmt.Errorf("failed to get oauth token: %w", err)
}
@ -161,7 +161,6 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
if err != nil {
return nil, fmt.Errorf("error while saving token: %w", err)
}
m.Set(configUsername, username)
return fs.ConfigGoto("choose_device")
case "legacy": // configure a jottacloud backend using legacy authentication
m.Set("configVersion", fmt.Sprint(legacyConfigVersion))
@ -272,30 +271,21 @@ sync or the backup section, for example, you must choose yes.`)
if config.Result != "true" {
m.Set(configDevice, "")
m.Set(configMountpoint, "")
}
username, userOk := m.Get(configUsername)
if userOk && config.Result != "true" {
return fs.ConfigGoto("end")
}
oAuthClient, _, err := getOAuthClient(ctx, name, m)
if err != nil {
return nil, err
}
if !userOk {
jfsSrv := rest.NewClient(oAuthClient).SetRoot(jfsURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return nil, err
}
username = cust.Username
m.Set(configUsername, username)
if config.Result != "true" {
return fs.ConfigGoto("end")
}
}
jfsSrv := rest.NewClient(oAuthClient).SetRoot(jfsURL)
acc, err := getDriveInfo(ctx, jfsSrv, username)
acc, err := getDriveInfo(ctx, jfsSrv, cust.Username)
if err != nil {
return nil, err
}
@ -326,10 +316,14 @@ a new by entering a unique name.`, defaultDevice)
return nil, err
}
jfsSrv := rest.NewClient(oAuthClient).SetRoot(jfsURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
username, _ := m.Get(configUsername)
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return nil, err
}
acc, err := getDriveInfo(ctx, jfsSrv, username)
acc, err := getDriveInfo(ctx, jfsSrv, cust.Username)
if err != nil {
return nil, err
}
@ -344,7 +338,7 @@ a new by entering a unique name.`, defaultDevice)
var dev *api.JottaDevice
if isNew {
fs.Debugf(nil, "Creating new device: %s", device)
dev, err = createDevice(ctx, jfsSrv, path.Join(username, device))
dev, err = createDevice(ctx, jfsSrv, path.Join(cust.Username, device))
if err != nil {
return nil, err
}
@ -352,7 +346,7 @@ a new by entering a unique name.`, defaultDevice)
m.Set(configDevice, device)
if !isNew {
dev, err = getDeviceInfo(ctx, jfsSrv, path.Join(username, device))
dev, err = getDeviceInfo(ctx, jfsSrv, path.Join(cust.Username, device))
if err != nil {
return nil, err
}
@ -382,11 +376,16 @@ You may create a new by entering a unique name.`, device)
return nil, err
}
jfsSrv := rest.NewClient(oAuthClient).SetRoot(jfsURL)
apiSrv := rest.NewClient(oAuthClient).SetRoot(apiURL)
cust, err := getCustomerInfo(ctx, apiSrv)
if err != nil {
return nil, err
}
username, _ := m.Get(configUsername)
device, _ := m.Get(configDevice)
dev, err := getDeviceInfo(ctx, jfsSrv, path.Join(username, device))
dev, err := getDeviceInfo(ctx, jfsSrv, path.Join(cust.Username, device))
if err != nil {
return nil, err
}
@ -404,7 +403,7 @@ You may create a new by entering a unique name.`, device)
return nil, fmt.Errorf("custom mountpoints not supported on built-in %s device: %w", defaultDevice, err)
}
fs.Debugf(nil, "Creating new mountpoint: %s", mountpoint)
_, err := createMountPoint(ctx, jfsSrv, path.Join(username, device, mountpoint))
_, err := createMountPoint(ctx, jfsSrv, path.Join(cust.Username, device, mountpoint))
if err != nil {
return nil, err
}
@ -591,10 +590,10 @@ func doLegacyAuth(ctx context.Context, srv *rest.Client, oauthConfig *oauth2.Con
}
// doTokenAuth runs the actual token request for V2 authentication
func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 string) (token oauth2.Token, tokenEndpoint string, username string, err error) {
func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 string) (token oauth2.Token, tokenEndpoint string, err error) {
loginTokenBytes, err := base64.RawURLEncoding.DecodeString(loginTokenBase64)
if err != nil {
return token, "", "", err
return token, "", err
}
// decode login token
@ -602,7 +601,7 @@ func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 stri
decoder := json.NewDecoder(bytes.NewReader(loginTokenBytes))
err = decoder.Decode(&loginToken)
if err != nil {
return token, "", "", err
return token, "", err
}
// retrieve endpoint urls
@ -613,7 +612,7 @@ func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 stri
var wellKnown api.WellKnown
_, err = apiSrv.CallJSON(ctx, &opts, nil, &wellKnown)
if err != nil {
return token, "", "", err
return token, "", err
}
// prepare out token request with username and password
@ -635,14 +634,14 @@ func doTokenAuth(ctx context.Context, apiSrv *rest.Client, loginTokenBase64 stri
var jsonToken api.TokenJSON
_, err = apiSrv.CallJSON(ctx, &opts, nil, &jsonToken)
if err != nil {
return token, "", "", err
return token, "", err
}
token.AccessToken = jsonToken.AccessToken
token.RefreshToken = jsonToken.RefreshToken
token.TokenType = jsonToken.TokenType
token.Expiry = time.Now().Add(time.Duration(jsonToken.ExpiresIn) * time.Second)
return token, wellKnown.TokenEndpoint, loginToken.Username, err
return token, wellKnown.TokenEndpoint, err
}
// getCustomerInfo queries general information about the account
@ -944,17 +943,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return err
})
user, userOk := m.Get(configUsername)
if userOk {
f.user = user
} else {
fs.Infof(nil, "Username not found in config and must be looked up, reconfigure to avoid the extra request")
cust, err := getCustomerInfo(ctx, f.apiSrv)
if err != nil {
return nil, err
}
f.user = cust.Username
}
f.setEndpoints()
if root != "" && !rootIsDir {


@ -243,6 +243,7 @@ type Fs struct {
precision time.Duration // precision of local filesystem
warnedMu sync.Mutex // used for locking access to 'warned'.
warned map[string]struct{} // whether we have warned about this string
xattrSupported int32 // whether xattrs are supported (atomic access)
// do os.Lstat or os.Stat
lstat func(name string) (os.FileInfo, error)
@ -286,6 +287,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
dev: devUnset,
lstat: os.Lstat,
}
if xattrSupported {
f.xattrSupported = 1
}
f.root = cleanRootPath(root, f.opt.NoUNC, f.opt.Enc)
f.features = (&fs.Features{
CaseInsensitive: f.caseInsensitive(),


@ -6,6 +6,8 @@ package local
import (
"fmt"
"strings"
"sync/atomic"
"syscall"
"github.com/pkg/xattr"
"github.com/rclone/rclone/fs"
@ -16,12 +18,30 @@ const (
xattrSupported = xattr.XATTR_SUPPORTED
)
// Check to see if the error supplied is a not supported error, and if
// so, disable xattrs
func (f *Fs) xattrIsNotSupported(err error) bool {
xattrErr, ok := err.(*xattr.Error)
if !ok {
return false
}
// Xattrs not supported can be ENOTSUP or ENOATTR or EINVAL (on Solaris)
if xattrErr.Err == syscall.EINVAL || xattrErr.Err == syscall.ENOTSUP || xattrErr.Err == xattr.ENOATTR {
// Show xattrs not supported
if atomic.CompareAndSwapInt32(&f.xattrSupported, 1, 0) {
fs.Errorf(f, "xattrs not supported - disabling: %v", err)
}
return true
}
return false
}
// getXattr returns the extended attributes for an object
//
// It doesn't return any attributes owned by this backend in
// metadataKeys
func (o *Object) getXattr() (metadata fs.Metadata, err error) {
if !xattrSupported {
if !xattrSupported || atomic.LoadInt32(&o.fs.xattrSupported) == 0 {
return nil, nil
}
var list []string
@ -31,6 +51,9 @@ func (o *Object) getXattr() (metadata fs.Metadata, err error) {
list, err = xattr.LList(o.path)
}
if err != nil {
if o.fs.xattrIsNotSupported(err) {
return nil, nil
}
return nil, fmt.Errorf("failed to read xattr: %w", err)
}
if len(list) == 0 {
@ -45,6 +68,9 @@ func (o *Object) getXattr() (metadata fs.Metadata, err error) {
v, err = xattr.LGet(o.path, k)
}
if err != nil {
if o.fs.xattrIsNotSupported(err) {
return nil, nil
}
return nil, fmt.Errorf("failed to read xattr key %q: %w", k, err)
}
k = strings.ToLower(k)
@ -64,7 +90,7 @@ func (o *Object) getXattr() (metadata fs.Metadata, err error) {
//
// It doesn't set any attributes owned by this backend in metadataKeys
func (o *Object) setXattr(metadata fs.Metadata) (err error) {
if !xattrSupported {
if !xattrSupported || atomic.LoadInt32(&o.fs.xattrSupported) == 0 {
return nil
}
for k, value := range metadata {
@ -80,6 +106,9 @@ func (o *Object) setXattr(metadata fs.Metadata) (err error) {
err = xattr.LSet(o.path, k, v)
}
if err != nil {
if o.fs.xattrIsNotSupported(err) {
return nil
}
return fmt.Errorf("failed to set xattr key %q: %w", k, err)
}
}


@ -2076,7 +2076,7 @@ type Options struct {
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CopyCutoff fs.SizeSuffix `config:"copy_cutoff"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
MaxUploadParts int64 `config:"max_upload_parts"`
MaxUploadParts int `config:"max_upload_parts"`
DisableChecksum bool `config:"disable_checksum"`
SharedCredentialsFile string `config:"shared_credentials_file"`
Profile string `config:"profile"`
@ -4108,10 +4108,10 @@ func (o *Object) uploadMultipart(ctx context.Context, req *s3.PutObjectInput, si
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
f.opt.ChunkSize, fs.SizeSuffix(int64(partSize)*uploadParts))
f.opt.ChunkSize, fs.SizeSuffix(int64(partSize)*int64(uploadParts)))
})
} else {
partSize = chunksize.Calculator(o, int(uploadParts), f.opt.ChunkSize)
partSize = chunksize.Calculator(o, size, uploadParts, f.opt.ChunkSize)
}
memPool := f.getMemoryPool(int64(partSize))
@ -4570,7 +4570,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// uploaded properly. If size < 0 then we need to do the HEAD.
if o.fs.opt.NoHead && size >= 0 {
var head s3.HeadObjectOutput
structs.SetFrom(&head, req)
structs.SetFrom(&head, &req)
head.ETag = &md5sumHex // doesn't matter quotes are missing
head.ContentLength = &size
// If we have done a single part PUT request then we can read these


@ -67,8 +67,26 @@ func (f *Fs) InternalTestMetadata(t *testing.T) {
}
}
func (f *Fs) InternalTestNoHead(t *testing.T) {
ctx := context.Background()
// Set NoHead for this test
f.opt.NoHead = true
defer func() {
f.opt.NoHead = false
}()
contents := random.String(1000)
item := fstest.NewItem("test-no-head", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
defer func() {
assert.NoError(t, obj.Remove(ctx))
}()
// PutTestContents checks the received object
}
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Metadata", f.InternalTestMetadata)
t.Run("NoHead", f.InternalTestNoHead)
}
var _ fstests.InternalTester = (*Fs)(nil)


@ -935,11 +935,22 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
// It appears that WS FTP doesn't like relative paths,
// and the openssh sftp tool also uses absolute paths.
if !path.IsAbs(f.root) {
path, err := c.sftpClient.RealPath(f.root)
// Trying RealPath first, to perform proper server-side canonicalization.
// It may fail (SSH_FX_FAILURE reported on WS FTP) and will then resort
// to simple path join with current directory from Getwd (which can work
// on WS FTP, even though it is also based on RealPath).
absRoot, err := c.sftpClient.RealPath(f.root)
if err != nil {
fs.Debugf(f, "Failed to resolve path - using relative paths: %v", err)
fs.Debugf(f, "Failed to resolve path using RealPath: %v", err)
cwd, err := c.sftpClient.Getwd()
if err != nil {
fs.Debugf(f, "Failed to read current directory - using relative paths: %v", err)
} else {
f.absRoot = path
f.absRoot = path.Join(cwd, f.root)
fs.Debugf(f, "Relative path joined with current directory to get absolute path %q", f.absRoot)
}
} else {
f.absRoot = absRoot
fs.Debugf(f, "Relative path resolved to %q", f.absRoot)
}
}


@ -169,7 +169,11 @@ func (f *Fs) mkdir(ctx context.Context, dir string) ([]*upstream.Fs, error) {
if err != nil {
return nil, err
}
return upstreams, nil
// If created roots then choose one
if dir == "" {
upstreams, err = f.create(ctx, dir)
}
return upstreams, err
}
// Mkdir makes the root directory of the Fs object
@ -834,6 +838,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
}
root = strings.Trim(root, "/")
upstreams := make([]*upstream.Fs, len(opt.Upstreams))
errs := Errors(make([]error, len(opt.Upstreams)))
multithread(len(opt.Upstreams), func(i int) {


@ -24,6 +24,10 @@ var (
// Fs is a wrap of any fs and its configs
type Fs struct {
// In order to ensure memory alignment on 32-bit architectures
// when this field is accessed through sync/atomic functions,
// it must be the first entry in the struct
cacheExpiry int64 // usage cache expiry time
fs.Fs
RootFs fs.Fs
RootPath string
@ -32,7 +36,6 @@ type Fs struct {
creatable bool
usage *fs.Usage // Cache the usage
cacheTime time.Duration // cache duration
cacheExpiry int64 // usage cache expiry time
cacheMutex sync.RWMutex
cacheOnce sync.Once
cacheUpdate bool // if the cache is updating


@ -186,7 +186,7 @@ func (s *server) rootDescHandler(w http.ResponseWriter, r *http.Request) {
// Handle a service control HTTP request.
func (s *server) serviceControlHandler(w http.ResponseWriter, r *http.Request) {
soapActionString := r.Header.Get("SOAPACTION")
soapAction, err := parseActionHTTPHeader(soapActionString)
soapAction, err := upnp.ParseActionHTTPHeader(soapActionString)
if err != nil {
serveError(s, w, "Could not parse SOAPACTION header", err)
return


@ -119,6 +119,8 @@ func TestContentDirectoryBrowseMetadata(t *testing.T) {
assert.Equal(t, http.StatusOK, resp.StatusCode)
body, err := ioutil.ReadAll(resp.Body)
require.NoError(t, err)
// should contain an appropriate URN
require.Contains(t, string(body), "urn:schemas-upnp-org:service:ContentDirectory:1")
// expect a <container> element
require.Contains(t, string(body), html.EscapeString("<container "))
require.NotContains(t, string(body), html.EscapeString("<item "))


@ -3,7 +3,6 @@ package dlna
import (
"crypto/md5"
"encoding/xml"
"errors"
"fmt"
"io"
"log"
@ -12,9 +11,6 @@ import (
"net/http/httptest"
"net/http/httputil"
"os"
"regexp"
"strconv"
"strings"
"github.com/anacrolix/dms/soap"
"github.com/anacrolix/dms/upnp"
@ -89,36 +85,6 @@ func marshalSOAPResponse(sa upnp.SoapAction, args map[string]string) []byte {
sa.Action, sa.ServiceURN.String(), mustMarshalXML(soapArgs)))
}
var serviceURNRegexp = regexp.MustCompile(`:service:(\w+):(\d+)$`)
func parseServiceType(s string) (ret upnp.ServiceURN, err error) {
matches := serviceURNRegexp.FindStringSubmatch(s)
if matches == nil {
err = errors.New(s)
return
}
if len(matches) != 3 {
log.Panicf("Invalid serviceURNRegexp ?")
}
ret.Type = matches[1]
ret.Version, err = strconv.ParseUint(matches[2], 0, 0)
return
}
func parseActionHTTPHeader(s string) (ret upnp.SoapAction, err error) {
if s[0] != '"' || s[len(s)-1] != '"' {
return
}
s = s[1 : len(s)-1]
hashIndex := strings.LastIndex(s, "#")
if hashIndex == -1 {
return
}
ret.Action = s[hashIndex+1:]
ret.ServiceURN, err = parseServiceType(s[:hashIndex])
return
}
type loggingResponseWriter struct {
http.ResponseWriter
request *http.Request


@ -101,6 +101,9 @@ func (c *conn) execCommand(ctx context.Context, out io.Writer, command string) (
if binary == "sha1sum" {
ht = hash.SHA1
}
if !c.vfs.Fs().Hashes().Contains(ht) {
return fmt.Errorf("%v hash not supported", ht)
}
var hashSum string
if args == "" {
// empty hash for no input


@ -49,6 +49,11 @@ extended explanation in the [copy](/commands/rclone_copy/) command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
**Note**: Use the ` + "`-P`" + `/` + "`--progress`" + ` flag to view real-time transfer statistics
**Note**: Use the ` + "`rclone dedupe`" + ` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.
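As a concrete illustration of the workaround described above (the paths here are hypothetical), the destination can live inside the source as long as a filter rule stops the sync from descending into it:

```sh
# dest is inside source, so exclude it from the sync to avoid overlap
rclone sync remote:dir remote:dir/backup --exclude "/backup/**"
```

Alternatively, place an empty marker file inside `remote:dir/backup` and pass its name to `--exclude-if-present`.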


@ -5,6 +5,58 @@ description: "Rclone Changelog"
# Changelog
## v1.59.2 - 2022-09-15
[See commits](https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
* Bug Fixes
* config: Move locking to fix fatal error: concurrent map read and map write (Nick Craig-Wood)
* Local
* Disable xattr support if the filesystems indicates it is not supported (Nick Craig-Wood)
* Azure Blob
* Fix chunksize calculations producing too many parts (Nick Craig-Wood)
* B2
* Fix chunksize calculations producing too many parts (Nick Craig-Wood)
* S3
* Fix chunksize calculations producing too many parts (Nick Craig-Wood)
## v1.59.1 - 2022-08-08
[See commits](https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
* Bug Fixes
* accounting: Fix panic in core/stats-reset with unknown group (Nick Craig-Wood)
* build: Fix android build after GitHub actions change (Nick Craig-Wood)
* dlna: Fix SOAP action header parsing (Joram Schrijver)
* docs: Fix links to mount command from install docs (albertony)
* dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)
* fs: Fix parsing of times and durations of the form "YYYY-MM-DD HH:MM:SS" (Nick Craig-Wood)
* serve sftp: Fix checksum detection (Nick Craig-Wood)
* sync: Add accidentally missed filter-sensitivity to --backup-dir option (Nick Naumann)
* Combine
* Fix docs showing `remote=` instead of `upstreams=` (Nick Craig-Wood)
* Throw error if duplicate directory name is specified (Nick Craig-Wood)
* Fix errors with backends shutting down while in use (Nick Craig-Wood)
* Dropbox
* Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
* Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
* Internetarchive
* Ignore checksums for files using the different method (Lesmiscore)
* Handle hash symbol in the middle of filename (Lesmiscore)
* Jottacloud
* Fix working with whitelabel Elgiganten Cloud
* Do not store username in config when using standard auth (albertony)
* Mega
* Fix nil pointer exception when bad node received (Nick Craig-Wood)
* S3
* Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput (Nick Craig-Wood)
* SFTP
* Fix issue with WS_FTP by working around failing RealPath (albertony)
* Union
* Fix duplicated files when using directories with leading / (Nick Craig-Wood)
* Fix multiple files being uploaded when roots don't exist (Nick Craig-Wood)
* Fix panic due to misalignment of struct field in 32 bit architectures (r-ricci)
## v1.59.0 - 2022-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)


@ -116,7 +116,7 @@ This would produce something like this:
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
If you then add that config to your config file (find it with `rclone
config file`) then you can access all the shared drives in one place


@ -37,6 +37,11 @@ extended explanation in the [copy](/commands/rclone_copy/) command if unsure.
If dest:path doesn't exist, it is created and the source:path contents
go there.
It is not possible to sync overlapping remotes. However, you may exclude
the destination from the sync with a filter rule or by putting an
exclude-if-present file inside the destination directory and sync to a
destination that is inside the source directory.
**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics
**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.


@ -582,7 +582,8 @@ been added) in DIR, then it will be overwritten.
The remote in use must support server-side move or copy and you must
use the same remote as the destination of the sync. The backup
directory must not overlap the destination directory.
directory must not overlap the destination directory without it being
excluded by a filter rule.
For example


@ -1332,7 +1332,7 @@ drives found and a combined drive.
[AllDrives]
type = combine
remote = "My Drive=My Drive:" "Test Drive=Test Drive:"
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. Any illegal characters will be


@ -160,7 +160,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.59.2")
-v, --verbose count Print lots more stuff (repeat for more)
```


@ -318,7 +318,7 @@ such as a regular [sync](https://rclone.org/commands/rclone_sync/), you will pro
to configure your rclone command in your operating system's scheduler. If you need to
expose *service*-like features, such as [remote control](https://rclone.org/rc/),
[GUI](https://rclone.org/gui/), [serve](https://rclone.org/commands/rclone_serve/)
or [mount](https://rclone.org/commands/rclone_move/), you will often want an rclone
or [mount](https://rclone.org/commands/rclone_mount/), you will often want an rclone
command always running in the background, and configuring it to run in a service infrastructure
may be a better option. Below are some alternatives on how to achieve this on
different operating systems.
@ -351,7 +351,7 @@ c:\rclone\rclone.exe sync c:\files remote:/files --no-console --log-file c:\rclo
#### User account
As mentioned in the [mount](https://rclone.org/commands/rclone_move/) documentation,
As mentioned in the [mount](https://rclone.org/commands/rclone_mount/) documentation,
mounted drives created as Administrator are not visible to other accounts, not even the
account that was elevated as Administrator. By running the mount command as the
built-in `SYSTEM` user account, it will create drives accessible for everyone on


@ -12,11 +12,10 @@ Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.htm
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
Once you have made a remote (see the provider specific section above)
you can use it like this:
Unlike S3, listing all the items you have uploaded is not supported.
Once you have made a remote, you can use it like this:
Make a new item
rclone mkdir remote:item
@ -53,6 +52,7 @@ The following are reserved by Internet Archive:
- `format`
- `old_version`
- `viruscheck`
- `summation`
Trying to set values for these keys is ignored with a warning.
Only setting `mtime` is an exception: doing so behaves identically to setting ModTime.
@ -258,19 +258,20 @@ Here are the possible system metadata items for the internetarchive backend.
| Name | Help | Type | Example | Read Only |
|------|------|------|---------|-----------|
| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | N |
| format | Name of format identified by Internet Archive | string | Comma-Separated Values | N |
| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | N |
| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | N |
| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | N |
| crc32 | CRC32 calculated by Internet Archive | string | 01234567 | **Y** |
| format | Name of format identified by Internet Archive | string | Comma-Separated Values | **Y** |
| md5 | MD5 hash calculated by Internet Archive | string | 01234567012345670123456701234567 | **Y** |
| mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | **Y** |
| name | Full file path, without the bucket part | filename | backend/internetarchive/internetarchive.go | **Y** |
| old_version | Whether the file was replaced and moved by keep-old-version flag | boolean | true | **Y** |
| rclone-ia-mtime | Time of last modification, managed by Internet Archive | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-mtime | Time of last modification, managed by Rclone | RFC 3339 | 2006-01-02T15:04:05.999999999Z | N |
| rclone-update-track | Random value used by Rclone for tracking changes inside Internet Archive | string | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | N |
| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | N |
| size | File size in bytes | decimal number | 123456 | N |
| source | The source of the file | string | original | N |
| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | N |
| sha1 | SHA1 hash calculated by Internet Archive | string | 0123456701234567012345670123456701234567 | **Y** |
| size | File size in bytes | decimal number | 123456 | **Y** |
| source | The source of the file | string | original | **Y** |
| summation | Check https://forum.rclone.org/t/31922 for how it is used | string | md5 | **Y** |
| viruscheck | The last time viruscheck process was run for the file (?) | unixtime | 1654191352 | **Y** |
See the [metadata](/docs/#metadata) docs for more info.


@ -1 +1 @@
v1.59.0
v1.59.2


@ -2,6 +2,7 @@ package accounting
import (
"context"
"fmt"
"sync"
"github.com/rclone/rclone/fs/rc"
@ -190,6 +191,9 @@ func rcResetStats(ctx context.Context, in rc.Params) (rc.Params, error) {
if group != "" {
stats := groups.get(group)
if stats == nil {
return rc.Params{}, fmt.Errorf("group %q not found", group)
}
stats.ResetErrors()
stats.ResetCounters()
} else {


@ -7,8 +7,10 @@ import (
"testing"
"time"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fstest/testy"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestStatsGroupOperations(t *testing.T) {
@ -117,6 +119,89 @@ func TestStatsGroupOperations(t *testing.T) {
t.Errorf("HeapObjects = %d, expected %d", end.HeapObjects, start.HeapObjects)
}
})
testGroupStatsInfo := NewStatsGroup(ctx, "test-group")
testGroupStatsInfo.Deletes(1)
GlobalStats().Deletes(41)
t.Run("core/group-list", func(t *testing.T) {
call := rc.Calls.Get("core/group-list")
require.NotNil(t, call)
got, err := call.Fn(ctx, rc.Params{})
require.NoError(t, err)
require.Equal(t, rc.Params{
"groups": []string{
"test-group",
},
}, got)
})
t.Run("core/stats", func(t *testing.T) {
call := rc.Calls.Get("core/stats")
require.NotNil(t, call)
gotNoGroup, err := call.Fn(ctx, rc.Params{})
require.NoError(t, err)
gotGroup, err := call.Fn(ctx, rc.Params{"group": "test-group"})
require.NoError(t, err)
assert.Equal(t, int64(42), gotNoGroup["deletes"])
assert.Equal(t, int64(1), gotGroup["deletes"])
})
t.Run("core/transferred", func(t *testing.T) {
call := rc.Calls.Get("core/transferred")
require.NotNil(t, call)
gotNoGroup, err := call.Fn(ctx, rc.Params{})
require.NoError(t, err)
gotGroup, err := call.Fn(ctx, rc.Params{"group": "test-group"})
require.NoError(t, err)
assert.Equal(t, rc.Params{
"transferred": []TransferSnapshot{},
}, gotNoGroup)
assert.Equal(t, rc.Params{
"transferred": []TransferSnapshot{},
}, gotGroup)
})
t.Run("core/stats-reset", func(t *testing.T) {
call := rc.Calls.Get("core/stats-reset")
require.NotNil(t, call)
assert.Equal(t, int64(41), GlobalStats().deletes)
assert.Equal(t, int64(1), testGroupStatsInfo.deletes)
_, err := call.Fn(ctx, rc.Params{"group": "test-group"})
require.NoError(t, err)
assert.Equal(t, int64(41), GlobalStats().deletes)
assert.Equal(t, int64(0), testGroupStatsInfo.deletes)
_, err = call.Fn(ctx, rc.Params{})
require.NoError(t, err)
assert.Equal(t, int64(0), GlobalStats().deletes)
assert.Equal(t, int64(0), testGroupStatsInfo.deletes)
_, err = call.Fn(ctx, rc.Params{"group": "not-found"})
require.ErrorContains(t, err, `group "not-found" not found`)
})
testGroupStatsInfo = NewStatsGroup(ctx, "test-group")
t.Run("core/stats-delete", func(t *testing.T) {
call := rc.Calls.Get("core/stats-delete")
require.NotNil(t, call)
assert.Equal(t, []string{"test-group"}, groups.names())
_, err := call.Fn(ctx, rc.Params{"group": "test-group"})
require.NoError(t, err)
assert.Equal(t, []string{}, groups.names())
_, err = call.Fn(ctx, rc.Params{"group": "not-found"})
require.NoError(t, err)
})
}
func percentDiff(start, end uint64) uint64 {


@ -5,18 +5,26 @@ import (
"github.com/rclone/rclone/fs"
)
/*
Calculator calculates the minimum chunk size needed to fit within the maximum number of parts, rounded up to the nearest fs.Mebi
// Calculator calculates the minimum chunk size needed to fit within
// the maximum number of parts, rounded up to the nearest fs.Mebi.
//
// For most backends, (chunk_size) * (concurrent_upload_routines)
// memory will be required so we want to use the smallest possible
// chunk size that's going to allow the upload to proceed. Rounding up
// to the nearest fs.Mebi on the assumption that some backends may
// only allow integer type parameters when specifying the chunk size.
//
// Returns the default chunk size if it is sufficiently large enough
// to support the given file size otherwise returns the smallest chunk
// size necessary to allow the upload to proceed.
func Calculator(o interface{}, size int64, maxParts int, defaultChunkSize fs.SizeSuffix) fs.SizeSuffix {
// If streaming then use default chunk size
if size < 0 {
fs.Debugf(o, "Streaming upload with chunk_size %s allows uploads of up to %s and will fail only when that limit is reached.", defaultChunkSize, fs.SizeSuffix(maxParts)*defaultChunkSize)
For most backends, (chunk_size) * (concurrent_upload_routines) memory will be required so we want to use the smallest
possible chunk size that's going to allow the upload to proceed. Rounding up to the nearest fs.Mebi on the assumption
that some backends may only allow integer type parameters when specifying the chunk size.
Returns the default chunk size if it is sufficiently large enough to support the given file size otherwise returns the
smallest chunk size necessary to allow the upload to proceed.
*/
func Calculator(objInfo fs.ObjectInfo, maxParts int, defaultChunkSize fs.SizeSuffix) fs.SizeSuffix {
fileSize := fs.SizeSuffix(objInfo.Size())
return defaultChunkSize
}
fileSize := fs.SizeSuffix(size)
requiredChunks := fileSize / defaultChunkSize
if requiredChunks < fs.SizeSuffix(maxParts) || (requiredChunks == fs.SizeSuffix(maxParts) && fileSize%defaultChunkSize == 0) {
return defaultChunkSize
@ -31,6 +39,6 @@ func Calculator(objInfo fs.ObjectInfo, maxParts int, defaultChunkSize fs.SizeSuf
minChunk += fs.Mebi
}
fs.Debugf(objInfo, "size: %v, parts: %v, default: %v, new: %v; default chunk size insufficient, returned new chunk size", fileSize, maxParts, defaultChunkSize, minChunk)
fs.Debugf(o, "size: %v, parts: %v, default: %v, new: %v; default chunk size insufficient, returned new chunk size", fileSize, maxParts, defaultChunkSize, minChunk)
return minChunk
}
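The rounding rule described in the doc comment above can be sketched standalone. This is an illustrative re-implementation, not rclone's own function: sizes are plain int64 bytes and `minChunkSize` is a name invented here:

```go
package main

import "fmt"

const mebi int64 = 1 << 20

// minChunkSize returns defaultChunkSize when it already fits fileSize
// into at most maxParts parts; otherwise it returns the smallest chunk
// size, rounded up to a whole MiB, that does.
func minChunkSize(fileSize, maxParts, defaultChunkSize int64) int64 {
	if fileSize < 0 {
		return defaultChunkSize // streaming upload: size unknown
	}
	if fileSize/defaultChunkSize < maxParts ||
		(fileSize/defaultChunkSize == maxParts && fileSize%defaultChunkSize == 0) {
		return defaultChunkSize
	}
	minChunk := fileSize / maxParts
	if fileSize%maxParts != 0 {
		minChunk++ // ceiling of the division
	}
	if rem := minChunk % mebi; rem != 0 {
		minChunk += mebi - rem // round up to the nearest MiB
	}
	return minChunk
}

func main() {
	// The "issue from forum #1" case: 120864818840 bytes with a
	// 10000-part limit needs 12 MiB chunks, not the 5 MiB default.
	fmt.Println(minChunkSize(120864818840, 10000, 5*mebi) / mebi)
}
```

Because the new `Calculator` takes the size explicitly instead of reading it from the previous object, replacing an object with a larger one now picks a chunk size based on the new size, which is the bug the commit fixes.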


@ -2,34 +2,100 @@ package chunksize
import (
"testing"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/object"
)
func TestComputeChunkSize(t *testing.T) {
tests := map[string]struct {
fileSize fs.SizeSuffix
for _, test := range []struct {
name string
size fs.SizeSuffix
maxParts int
defaultChunkSize fs.SizeSuffix
expected fs.SizeSuffix
want fs.SizeSuffix
}{
"default size returned when file size is small enough": {fileSize: 1000, maxParts: 10000, defaultChunkSize: toSizeSuffixMiB(10), expected: toSizeSuffixMiB(10)},
"default size returned when file size is just 1 byte small enough": {fileSize: toSizeSuffixMiB(100000) - 1, maxParts: 10000, defaultChunkSize: toSizeSuffixMiB(10), expected: toSizeSuffixMiB(10)},
"no rounding up when everything divides evenly": {fileSize: toSizeSuffixMiB(1000000), maxParts: 10000, defaultChunkSize: toSizeSuffixMiB(100), expected: toSizeSuffixMiB(100)},
"rounding up to nearest MiB when not quite enough parts": {fileSize: toSizeSuffixMiB(1000000), maxParts: 9999, defaultChunkSize: toSizeSuffixMiB(100), expected: toSizeSuffixMiB(101)},
"rounding up to nearest MiB when one extra byte": {fileSize: toSizeSuffixMiB(1000000) + 1, maxParts: 10000, defaultChunkSize: toSizeSuffixMiB(100), expected: toSizeSuffixMiB(101)},
"expected MiB value when rounding sets to absolute minimum": {fileSize: toSizeSuffixMiB(1) - 1, maxParts: 1, defaultChunkSize: toSizeSuffixMiB(1), expected: toSizeSuffixMiB(1)},
"expected MiB value when rounding to absolute min with extra": {fileSize: toSizeSuffixMiB(1) + 1, maxParts: 1, defaultChunkSize: toSizeSuffixMiB(1), expected: toSizeSuffixMiB(2)},
{
name: "streaming file",
size: -1,
maxParts: 10000,
defaultChunkSize: toSizeSuffixMiB(10),
want: toSizeSuffixMiB(10),
}, {
name: "default size returned when file size is small enough",
size: 1000,
maxParts: 10000,
defaultChunkSize: toSizeSuffixMiB(10),
want: toSizeSuffixMiB(10),
}, {
name: "default size returned when file size is just 1 byte small enough",
size: toSizeSuffixMiB(100000) - 1,
maxParts: 10000,
defaultChunkSize: toSizeSuffixMiB(10),
want: toSizeSuffixMiB(10),
}, {
name: "no rounding up when everything divides evenly",
size: toSizeSuffixMiB(1000000),
maxParts: 10000,
defaultChunkSize: toSizeSuffixMiB(100),
want: toSizeSuffixMiB(100),
}, {
name: "rounding up to nearest MiB when not quite enough parts",
size: toSizeSuffixMiB(1000000),
maxParts: 9999,
defaultChunkSize: toSizeSuffixMiB(100),
want: toSizeSuffixMiB(101),
}, {
name: "rounding up to nearest MiB when one extra byte",
size: toSizeSuffixMiB(1000000) + 1,
maxParts: 10000,
defaultChunkSize: toSizeSuffixMiB(100),
want: toSizeSuffixMiB(101),
}, {
name: "expected MiB value when rounding sets to absolute minimum",
size: toSizeSuffixMiB(1) - 1,
maxParts: 1,
defaultChunkSize: toSizeSuffixMiB(1),
want: toSizeSuffixMiB(1),
}, {
name: "expected MiB value when rounding to absolute min with extra",
size: toSizeSuffixMiB(1) + 1,
maxParts: 1,
defaultChunkSize: toSizeSuffixMiB(1),
want: toSizeSuffixMiB(2),
}, {
name: "issue from forum #1",
size: 120864818840,
maxParts: 10000,
defaultChunkSize: 5 * 1024 * 1024,
want: toSizeSuffixMiB(12),
},
} {
t.Run(test.name, func(t *testing.T) {
got := Calculator(test.name, int64(test.size), test.maxParts, test.defaultChunkSize)
if got != test.want {
t.Fatalf("expected: %v, got: %v", test.want, got)
}
if test.size < 0 {
return
}
parts := func(result fs.SizeSuffix) int {
n := test.size / result
r := test.size % result
if r != 0 {
n++
}
return int(n)
}
// Check this gives the parts in range
if parts(got) > test.maxParts {
t.Fatalf("too many parts %d", parts(got))
}
// Check that setting chunk size smaller gave too many parts
if got > test.defaultChunkSize {
if parts(got-toSizeSuffixMiB(1)) <= test.maxParts {
t.Fatalf("chunk size %v too big as %v only gives %d parts", got, got-toSizeSuffixMiB(1), parts(got-toSizeSuffixMiB(1)))
}
for name, tc := range tests {
t.Run(name, func(t *testing.T) {
src := object.NewStaticObjectInfo("mock", time.Now(), int64(tc.fileSize), true, nil, nil)
result := Calculator(src, tc.maxParts, tc.defaultChunkSize)
if result != tc.expected {
t.Fatalf("expected: %v, got: %v", tc.expected, result)
}
})
}


@ -24,16 +24,15 @@ func Install() {
// Storage implements config.Storage for saving and loading config
// data in a simple INI based file.
type Storage struct {
gc *goconfig.ConfigFile // config file loaded - thread safe
mu sync.Mutex // to protect the following variables
gc *goconfig.ConfigFile // config file loaded - not thread safe
fi os.FileInfo // stat of the file when last loaded
}
// Check to see if we need to reload the config
func (s *Storage) check() {
s.mu.Lock()
defer s.mu.Unlock()
//
// mu must be held when calling this
func (s *Storage) _check() {
if configPath := config.GetConfigPath(); configPath != "" {
// Check to see if config file has changed since it was last loaded
fi, err := os.Stat(configPath)
@ -174,7 +173,10 @@ func (s *Storage) Save() error {
// Serialize the config into a string
func (s *Storage) Serialize() (string, error) {
s.check()
s.mu.Lock()
defer s.mu.Unlock()
s._check()
var buf bytes.Buffer
if err := goconfig.SaveConfigData(s.gc, &buf); err != nil {
return "", fmt.Errorf("failed to save config file: %w", err)
@ -185,7 +187,10 @@ func (s *Storage) Serialize() (string, error) {
// HasSection returns true if section exists in the config file
func (s *Storage) HasSection(section string) bool {
s.check()
s.mu.Lock()
defer s.mu.Unlock()
s._check()
_, err := s.gc.GetSection(section)
return err == nil
}
@ -193,26 +198,38 @@ func (s *Storage) HasSection(section string) bool {
// DeleteSection removes the named section and all config from the
// config file
func (s *Storage) DeleteSection(section string) {
s.check()
s.mu.Lock()
defer s.mu.Unlock()
s._check()
s.gc.DeleteSection(section)
}
// GetSectionList returns a slice of strings with names for all the
// sections
func (s *Storage) GetSectionList() []string {
s.check()
s.mu.Lock()
defer s.mu.Unlock()
s._check()
return s.gc.GetSectionList()
}
// GetKeyList returns the keys in this section
func (s *Storage) GetKeyList(section string) []string {
s.check()
s.mu.Lock()
defer s.mu.Unlock()
s._check()
return s.gc.GetKeyList(section)
}
// GetValue returns the key in section with a found flag
func (s *Storage) GetValue(section string, key string) (value string, found bool) {
s.check()
s.mu.Lock()
defer s.mu.Unlock()
s._check()
value, err := s.gc.GetValue(section, key)
if err != nil {
return "", false
@ -222,7 +239,10 @@ func (s *Storage) GetValue(section string, key string) (value string, found bool
// SetValue sets the value under key in section
func (s *Storage) SetValue(section string, key string, value string) {
s.check()
s.mu.Lock()
defer s.mu.Unlock()
s._check()
if strings.HasPrefix(section, ":") {
fs.Logf(nil, "Can't save config %q for on the fly backend %q", key, section)
return
@ -232,7 +252,10 @@ func (s *Storage) SetValue(section string, key string, value string) {
// DeleteKey removes the key under section
func (s *Storage) DeleteKey(section string, key string) bool {
s.check()
s.mu.Lock()
defer s.mu.Unlock()
s._check()
return s.gc.DeleteKey(section, key)
}
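The locking fix follows a common Go convention visible in the diff above: exported methods take the mutex, and the helper renamed to `_check` signals by its leading underscore that it must only be called with the lock already held. A standalone sketch of the pattern, with a plain map standing in for the non-threadsafe goconfig store (the `Store` type here is illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

// Store guards a map (not thread safe on its own) with a mutex.
type Store struct {
	mu   sync.Mutex
	data map[string]string
}

// _check must be called with mu held.
func (s *Store) _check() {
	if s.data == nil {
		s.data = make(map[string]string)
	}
}

// SetValue is safe for concurrent use.
func (s *Store) SetValue(key, value string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s._check()
	s.data[key] = value
}

// GetValue is safe for concurrent use.
func (s *Store) GetValue(key string) (string, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s._check()
	v, ok := s.data[key]
	return v, ok
}

func main() {
	var s Store
	var wg sync.WaitGroup
	// Concurrent reads and writes no longer race, because every
	// public entry point holds mu around the map access.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.SetValue("key", "value")
			s.GetValue("key")
		}()
	}
	wg.Wait()
	v, _ := s.GetValue("key")
	fmt.Println(v)
}
```

The old code locked inside `check` only, leaving the goconfig calls in `Serialize`, `GetValue` and friends unprotected, which is exactly what produced `fatal error: concurrent map read and map write`.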


@ -40,7 +40,7 @@ var (
ErrorNotAFile = errors.New("is not a regular file")
ErrorNotDeleting = errors.New("not deleting files as there were IO errors")
ErrorNotDeletingDirs = errors.New("not deleting directories as there were IO errors")
ErrorOverlapping = errors.New("can't sync or move files on overlapping remotes")
ErrorOverlapping = errors.New("can't sync or move files on overlapping remotes (try excluding the destination with a filter rule)")
ErrorDirectoryNotEmpty = errors.New("directory not empty")
ErrorImmutableModified = errors.New("immutable file modified")
ErrorPermissionDenied = errors.New("permission denied")


@ -814,17 +814,6 @@ func fixRoot(f fs.Info) string {
return s
}
// Overlapping returns true if fdst and fsrc point to the same
// underlying Fs and they overlap.
func Overlapping(fdst, fsrc fs.Info) bool {
if !SameConfig(fdst, fsrc) {
return false
}
fdstRoot := fixRoot(fdst)
fsrcRoot := fixRoot(fsrc)
return strings.HasPrefix(fdstRoot, fsrcRoot) || strings.HasPrefix(fsrcRoot, fdstRoot)
}
// OverlappingFilterCheck returns true if fdst and fsrc point to the same
// underlying Fs and they overlap without fdst being excluded by any filter rule.
func OverlappingFilterCheck(ctx context.Context, fdst fs.Fs, fsrc fs.Fs) bool {
@ -1848,10 +1837,10 @@ func BackupDir(ctx context.Context, fdst fs.Fs, fsrc fs.Fs, srcFileName string)
return nil, fserrors.FatalError(errors.New("parameter to --backup-dir has to be on the same remote as destination"))
}
if srcFileName == "" {
if Overlapping(fdst, backupDir) {
if OverlappingFilterCheck(ctx, backupDir, fdst) {
return nil, fserrors.FatalError(errors.New("destination and parameter to --backup-dir mustn't overlap"))
}
if Overlapping(fsrc, backupDir) {
if OverlappingFilterCheck(ctx, backupDir, fsrc) {
return nil, fserrors.FatalError(errors.New("source and parameter to --backup-dir mustn't overlap"))
}
} else {
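The removed `Overlapping` shows the core test that `OverlappingFilterCheck` keeps: two remotes on the same backend overlap when one normalised root is a path prefix of the other; the replacement additionally consults the filter rules, so an excluded destination no longer counts as overlapping. A sketch of the prefix check alone (normalisation here uses `path.Clean` for simplicity; rclone's `fixRoot` differs in detail):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// normRoot anchors the root at "/" and gives it a trailing slash so
// that prefix comparison matches whole path components only.
func normRoot(root string) string {
	s := path.Clean("/" + root)
	if s != "/" {
		s += "/"
	}
	return s
}

// overlapping reports whether two roots (assumed to be on the same
// backend) point at the same tree or at nested trees.
func overlapping(rootA, rootB string) bool {
	a := normRoot(rootA)
	b := normRoot(rootB)
	return strings.HasPrefix(a, b) || strings.HasPrefix(b, a)
}

func main() {
	fmt.Println(overlapping("root", "root/toot")) // nested: true
	fmt.Println(overlapping("root", "rooty"))     // sibling: false
	fmt.Println(overlapping("root", ""))          // whole remote: true
}
```

The trailing slash matters: without it `"root"` would be treated as a prefix of `"rooty"`, which is a sibling directory, not an overlap.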


@ -1243,35 +1243,6 @@ func TestSame(t *testing.T) {
}
}
func TestOverlapping(t *testing.T) {
a := &testFsInfo{name: "name", root: "root"}
slash := string(os.PathSeparator) // native path separator
for _, test := range []struct {
name string
root string
expected bool
}{
{"name", "root", true},
{"namey", "root", false},
{"name", "rooty", false},
{"namey", "rooty", false},
{"name", "roo", false},
{"name", "root/toot", true},
{"name", "root/toot/", true},
{"name", "root" + slash + "toot", true},
{"name", "root" + slash + "toot" + slash, true},
{"name", "", true},
{"name", "/", true},
} {
b := &testFsInfo{name: test.name, root: test.root}
what := fmt.Sprintf("(%q,%q) vs (%q,%q)", a.name, a.root, b.name, b.root)
actual := operations.Overlapping(a, b)
assert.Equal(t, test.expected, actual, what)
actual = operations.Overlapping(b, a)
assert.Equal(t, test.expected, actual, what)
}
}
// testFs is for unit testing fs.Fs
type testFs struct {
testFsInfo


@ -126,7 +126,7 @@ func parseDurationFromNow(age string, getNow func() time.Time) (d time.Duration,
// ParseDuration parses a duration string. Accept ms|s|m|h|d|w|M|y suffixes. Defaults to second if not provided
func ParseDuration(age string) (time.Duration, error) {
return parseDurationFromNow(age, time.Now)
return parseDurationFromNow(age, timeNowFunc)
}
// ReadableString parses d into a human-readable duration.
@ -216,7 +216,7 @@ func (d *Duration) UnmarshalJSON(in []byte) error {
// Scan implements the fmt.Scanner interface
func (d *Duration) Scan(s fmt.ScanState, ch rune) error {
token, err := s.Token(true, nil)
token, err := s.Token(true, func(rune) bool { return true })
if err != nil {
return err
}


@ -145,11 +145,28 @@ func TestDurationReadableString(t *testing.T) {
}
func TestDurationScan(t *testing.T) {
var v Duration
n, err := fmt.Sscan(" 17m ", &v)
now := time.Date(2020, 9, 5, 8, 15, 5, 250, time.UTC)
oldTimeNowFunc := timeNowFunc
timeNowFunc = func() time.Time { return now }
defer func() { timeNowFunc = oldTimeNowFunc }()
for _, test := range []struct {
in string
want Duration
}{
{"17m", Duration(17 * time.Minute)},
{"-12h", Duration(-12 * time.Hour)},
{"0", Duration(0)},
{"off", DurationOff},
{"2022-03-26T17:48:19Z", Duration(now.Sub(time.Date(2022, 03, 26, 17, 48, 19, 0, time.UTC)))},
{"2022-03-26 17:48:19", Duration(now.Sub(time.Date(2022, 03, 26, 17, 48, 19, 0, time.Local)))},
} {
var got Duration
n, err := fmt.Sscan(test.in, &got)
require.NoError(t, err)
assert.Equal(t, 1, n)
assert.Equal(t, Duration(17*60*time.Second), v)
assert.Equal(t, test.want, got)
}
}
func TestParseUnmarshalJSON(t *testing.T) {


@ -83,7 +83,7 @@ func (t *Time) UnmarshalJSON(in []byte) error {
// Scan implements the fmt.Scanner interface
func (t *Time) Scan(s fmt.ScanState, ch rune) error {
token, err := s.Token(true, nil)
token, err := s.Token(true, func(rune) bool { return true })
if err != nil {
return err
}


@ -93,15 +93,23 @@ func TestTimeScan(t *testing.T) {
timeNowFunc = func() time.Time { return now }
defer func() { timeNowFunc = oldTimeNowFunc }()
var v1, v2, v3, v4, v5 Time
n, err := fmt.Sscan(" 17m -12h 0 off 2022-03-26T17:48:19Z ", &v1, &v2, &v3, &v4, &v5)
for _, test := range []struct {
in string
want Time
}{
{"17m", Time(now.Add(-17 * time.Minute))},
{"-12h", Time(now.Add(12 * time.Hour))},
{"0", Time(now)},
{"off", Time(time.Time{})},
{"2022-03-26T17:48:19Z", Time(time.Date(2022, 03, 26, 17, 48, 19, 0, time.UTC))},
{"2022-03-26 17:48:19", Time(time.Date(2022, 03, 26, 17, 48, 19, 0, time.Local))},
} {
var got Time
n, err := fmt.Sscan(test.in, &got)
require.NoError(t, err)
assert.Equal(t, 5, n)
assert.Equal(t, Time(now.Add(-17*time.Minute)), v1)
assert.Equal(t, Time(now.Add(12*time.Hour)), v2)
assert.Equal(t, Time(now), v3)
assert.Equal(t, Time(time.Time{}), v4)
assert.Equal(t, Time(time.Date(2022, 03, 26, 17, 48, 19, 0, time.UTC)), v5)
assert.Equal(t, 1, n)
assert.Equal(t, test.want, got)
}
}
func TestParseTimeUnmarshalJSON(t *testing.T) {


@ -1,4 +1,4 @@
package fs
// VersionTag of rclone
var VersionTag = "v1.59.0"
var VersionTag = "v1.59.2"

go.mod

@ -51,7 +51,7 @@ require (
github.com/spf13/cobra v1.4.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.7.2
github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8
github.com/t3rm1n4l/go-mega v0.0.0-20220725095014-c4e0c2b5debf
github.com/winfsp/cgofuse v1.5.1-0.20220421173602-ce7e5a65cac7
github.com/xanzy/ssh-agent v0.3.1
github.com/youmark/pkcs8 v0.0.0-20201027041543-1326539a0a0a

go.sum

@ -591,8 +591,8 @@ github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.2 h1:4jaiDzPyXQvSd7D0EjG45355tLlV3VOECpq10pLC+8s=
github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8 h1:IGJQmLBLYBdAknj21W3JsVof0yjEXfy1Q0K3YZebDOg=
github.com/t3rm1n4l/go-mega v0.0.0-20200416171014-ffad7fcb44b8/go.mod h1:XWL4vDyd3JKmJx+hZWUVgCNmmhZ2dTBcaNDcxH465s0=
github.com/t3rm1n4l/go-mega v0.0.0-20220725095014-c4e0c2b5debf h1:Y43S3e9P1NPs/QF4R5/SdlXj2d31540hP4Gk8VKNvDg=
github.com/t3rm1n4l/go-mega v0.0.0-20220725095014-c4e0c2b5debf/go.mod h1:c+cGNU1qi9bO7ZF4IRMYk+KaZTNiQ/gQrSbyMmGFq1Q=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
github.com/tinylib/msgp v1.0.2/go.mod h1:+d+yLhGm8mzTaHzB+wgMYrodPfmZrzkirds8fDWklFE=
github.com/tklauser/go-sysconf v0.3.10 h1:IJ1AZGZRWbY8T5Vfk04D9WOA5WSejdflXxP03OUqALw=

rclone.1 generated

@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
.TH "rclone" "1" "Jul 09, 2022" "User Manual" ""
.TH "rclone" "1" "Sep 15, 2022" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP
@ -732,9 +732,9 @@ system\[aq]s scheduler.
If you need to expose \f[I]service\f[R]-like features, such as remote
control (https://rclone.org/rc/), GUI (https://rclone.org/gui/),
serve (https://rclone.org/commands/rclone_serve/) or
mount (https://rclone.org/commands/rclone_move/), you will often want an
rclone command always running in the background, and configuring it to
run in a service infrastructure may be a better option.
mount (https://rclone.org/commands/rclone_mount/), you will often want
an rclone command always running in the background, and configuring it
to run in a service infrastructure may be a better option.
Below are some alternatives on how to achieve this on different
operating systems.
.PP
@ -770,7 +770,7 @@ c:\[rs]rclone\[rs]rclone.exe sync c:\[rs]files remote:/files --no-console --log-
.fi
.SS User account
.PP
As mentioned in the mount (https://rclone.org/commands/rclone_move/)
As mentioned in the mount (https://rclone.org/commands/rclone_mount/)
documentation, mounted drives created as Administrator are not visible
to other accounts, not even the account that was elevated as
Administrator.
@ -1271,6 +1271,11 @@ copy (https://rclone.org/commands/rclone_copy/) command if unsure.
If dest:path doesn\[aq]t exist, it is created and the source:path
contents go there.
.PP
It is not possible to sync overlapping remotes.
However, you may exclude the destination from the sync with a filter
rule or by putting an exclude-if-present file inside the destination
directory and sync to a destination that is inside the source directory.
.PP
\f[B]Note\f[R]: Use the \f[C]-P\f[R]/\f[C]--progress\f[R] flag to view
real-time transfer statistics
.PP
@ -10973,7 +10978,8 @@ in DIR, then it will be overwritten.
.PP
The remote in use must support server-side move or copy and you must use
the same remote as the destination of the sync.
The backup directory must not overlap the destination directory.
The backup directory must not overlap the destination directory without
it being excluded by a filter rule.
.PP
For example
.IP
@ -19707,7 +19713,7 @@ These flags are available for every command.
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.59.0\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.59.2\[dq])
-v, --verbose count Print lots more stuff (repeat for more)
\f[R]
.fi
@ -34701,7 +34707,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
remote = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
\f[R]
.fi
.PP
@ -39081,7 +39087,7 @@ remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
remote = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
upstreams = \[dq]My Drive=My Drive:\[dq] \[dq]Test Drive=Test Drive:\[dq]
\f[R]
.fi
.PP
@ -41682,11 +41688,10 @@ Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
\f[C]remote:item/path/to/dir\f[R].
.PP
Once you have made a remote (see the provider specific section above)
you can use it like this:
.PP
Unlike S3, listing all items you have uploaded isn\[aq]t supported.
.PP
Once you have made a remote, you can use it like this:
.PP
Make a new item
.IP
.nf
@ -41741,7 +41746,7 @@ However, some fields are reserved by both Internet Archive and rclone.
The following are reserved by Internet Archive: - \f[C]name\f[R] -
\f[C]source\f[R] - \f[C]size\f[R] - \f[C]md5\f[R] - \f[C]crc32\f[R] -
\f[C]sha1\f[R] - \f[C]format\f[R] - \f[C]old_version\f[R] -
\f[C]viruscheck\f[R]
\f[C]viruscheck\f[R] - \f[C]summation\f[R]
.PP
Trying to set values to these keys is ignored with a warning.
Only setting \f[C]mtime\f[R] is an exception.
@ -41999,7 +42004,7 @@ string
T}@T{
01234567
T}@T{
N
\f[B]Y\f[R]
T}
T{
format
@ -42010,7 +42015,7 @@ string
T}@T{
Comma-Separated Values
T}@T{
N
\f[B]Y\f[R]
T}
T{
md5
@ -42021,7 +42026,7 @@ string
T}@T{
01234567012345670123456701234567
T}@T{
N
\f[B]Y\f[R]
T}
T{
mtime
@ -42032,7 +42037,7 @@ RFC 3339
T}@T{
2006-01-02T15:04:05.999999999Z
T}@T{
N
\f[B]Y\f[R]
T}
T{
name
@ -42043,7 +42048,7 @@ filename
T}@T{
backend/internetarchive/internetarchive.go
T}@T{
N
\f[B]Y\f[R]
T}
T{
old_version
@ -42054,7 +42059,7 @@ boolean
T}@T{
true
T}@T{
N
\f[B]Y\f[R]
T}
T{
rclone-ia-mtime
@ -42098,7 +42103,7 @@ string
T}@T{
0123456701234567012345670123456701234567
T}@T{
N
\f[B]Y\f[R]
T}
T{
size
@ -42109,7 +42114,7 @@ decimal number
T}@T{
123456
T}@T{
N
\f[B]Y\f[R]
T}
T{
source
@ -42120,7 +42125,18 @@ string
T}@T{
original
T}@T{
N
\f[B]Y\f[R]
T}
T{
summation
T}@T{
Check https://forum.rclone.org/t/31922 for how it is used
T}@T{
string
T}@T{
md5
T}@T{
\f[B]Y\f[R]
T}
T{
viruscheck
@ -42131,7 +42147,7 @@ unixtime
T}@T{
1654191352
T}@T{
N
\f[B]Y\f[R]
T}
.TE
.PP
@ -53965,6 +53981,134 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
.SS v1.59.2 - 2022-09-15
.PP
See commits (https://github.com/rclone/rclone/compare/v1.59.1...v1.59.2)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
config: Move locking to fix fatal error: concurrent map read and map
write (Nick Craig-Wood)
.RE
.IP \[bu] 2
Local
.RS 2
.IP \[bu] 2
Disable xattr support if the filesystem indicates it is not supported
(Nick Craig-Wood)
.RE
.IP \[bu] 2
Azure Blob
.RS 2
.IP \[bu] 2
Fix chunksize calculations producing too many parts (Nick Craig-Wood)
.RE
.IP \[bu] 2
B2
.RS 2
.IP \[bu] 2
Fix chunksize calculations producing too many parts (Nick Craig-Wood)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Fix chunksize calculations producing too many parts (Nick Craig-Wood)
.RE
.SS v1.59.1 - 2022-08-08
.PP
See commits (https://github.com/rclone/rclone/compare/v1.59.0...v1.59.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
accounting: Fix panic in core/stats-reset with unknown group (Nick
Craig-Wood)
.IP \[bu] 2
build: Fix android build after GitHub actions change (Nick Craig-Wood)
.IP \[bu] 2
dlna: Fix SOAP action header parsing (Joram Schrijver)
.IP \[bu] 2
docs: Fix links to mount command from install docs (albertony)
.IP \[bu] 2
dropbox: Fix ChangeNotify was unable to decrypt errors (Nick Craig-Wood)
.IP \[bu] 2
fs: Fix parsing of times and durations of the form \[dq]YYYY-MM-DD
HH:MM:SS\[dq] (Nick Craig-Wood)
.IP \[bu] 2
serve sftp: Fix checksum detection (Nick Craig-Wood)
.IP \[bu] 2
sync: Add accidentally missed filter-sensitivity to --backup-dir option
(Nick Naumann)
.RE
.IP \[bu] 2
Combine
.RS 2
.IP \[bu] 2
Fix docs showing \f[C]remote=\f[R] instead of \f[C]upstreams=\f[R] (Nick
Craig-Wood)
.IP \[bu] 2
Throw error if duplicate directory name is specified (Nick Craig-Wood)
.IP \[bu] 2
Fix errors with backends shutting down while in use (Nick Craig-Wood)
.RE
.IP \[bu] 2
Dropbox
.RS 2
.IP \[bu] 2
Fix hang on quit with --dropbox-batch-mode off (Nick Craig-Wood)
.IP \[bu] 2
Fix infinite loop on uploading a corrupted file (Nick Craig-Wood)
.RE
.IP \[bu] 2
Internetarchive
.RS 2
.IP \[bu] 2
Ignore checksums for files using the different method (Lesmiscore)
.IP \[bu] 2
Handle hash symbol in the middle of filename (Lesmiscore)
.RE
.IP \[bu] 2
Jottacloud
.RS 2
.IP \[bu] 2
Fix working with whitelabel Elgiganten Cloud
.IP \[bu] 2
Do not store username in config when using standard auth (albertony)
.RE
.IP \[bu] 2
Mega
.RS 2
.IP \[bu] 2
Fix nil pointer exception when bad node received (Nick Craig-Wood)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Fix --s3-no-head panic: reflect: Elem of invalid type s3.PutObjectInput
(Nick Craig-Wood)
.RE
.IP \[bu] 2
SFTP
.RS 2
.IP \[bu] 2
Fix issue with WS_FTP by working around failing RealPath (albertony)
.RE
.IP \[bu] 2
Union
.RS 2
.IP \[bu] 2
Fix duplicated files when using directories with leading / (Nick
Craig-Wood)
.IP \[bu] 2
Fix multiple files being uploaded when roots don\[aq]t exist (Nick
Craig-Wood)
.IP \[bu] 2
Fix panic due to misalignment of struct field in 32 bit architectures
(r-ricci)
.RE
.SS v1.59.0 - 2022-07-09
.PP
See commits (https://github.com/rclone/rclone/compare/v1.58.0...v1.59.0)