Compare commits

...

41 commits

Author SHA1 Message Date
Nick Craig-Wood
0b2e17b396 Version v1.64.2 2023-10-19 10:09:04 +01:00
Nick Craig-Wood
4325a7c362 selfupdate: fix "invalid hashsum signature" error
This was caused by a change to the upstream library
ProtonMail/go-crypto checking the flags on the keys more strictly.

However the signing key for rclone is very old and does not have those
flags. Adding those flags using `gpg --edit-key` and the `change-usage`
subcommand (remove the signing capability, save and quit, then re-add
it, save and quit again) made the key work.

This also adds tests for the verification and adds the selfupdate
tests to the integration test harness, as they had been disabled on
CI because they rely on external sources and are sometimes unreliable.

Fixes #7373
2023-10-18 17:57:34 +01:00
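For illustration (not part of the commit message), the verification path the new tests exercise looks roughly like this, condensed from the verifyHashsumDownloaded function in the diff further down. It assumes the ProtonMail/go-crypto openpgp and clearsign packages which the selfupdate code uses:

    package selfupdate

    import (
        "bytes"
        "errors"
        "fmt"
        "strings"

        "github.com/ProtonMail/go-crypto/openpgp"
        "github.com/ProtonMail/go-crypto/openpgp/clearsign"
    )

    // verifySums checks that sumsBuf is a clearsigned document whose
    // signature verifies against the armored public key.
    func verifySums(pubKeyArmored string, sumsBuf []byte) error {
        keyRing, err := openpgp.ReadArmoredKeyRing(strings.NewReader(pubKeyArmored))
        if err != nil {
            return fmt.Errorf("unsupported signing key: %w", err)
        }
        block, rest := clearsign.Decode(sumsBuf)
        if block == nil {
            return errors.New("couldn't find detached signature")
        }
        if len(rest) > 0 {
            return fmt.Errorf("%d bytes of unsigned data", len(rest))
        }
        _, err = openpgp.CheckDetachedSignature(keyRing, bytes.NewReader(block.Bytes), block.ArmoredSignature.Body, nil)
        return err
    }
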
Nick Craig-Wood
2730e9ff08 build: fix docker build running out of space
This removes some unused SDKs from the build machine to free some
space up before building. It also adds some lines to show the free
space.
2023-10-18 17:56:46 +01:00
Nick Craig-Wood
929d8b8a6d Start v1.64.2-DEV development 2023-10-17 18:35:56 +01:00
Nick Craig-Wood
583eee635f Version v1.64.1 2023-10-17 16:50:14 +01:00
Nick Craig-Wood
ba836e729e mount: fix automount not detecting drive is ready
With automount the target mount drive appears twice in /proc/self/mountinfo.

    379 27 0:70 / /mnt/rclone rw,relatime shared:433 - autofs systemd-1 rw,fd=57,...
    566 379 0:90 / /mnt/rclone rw,nosuid,nodev,relatime shared:488 - fuse.rclone remote: rw,...

Before this fix we only looked for the mount once in
/proc/self/mountinfo. It found the automount line and, since that
doesn't have fs type rclone, concluded the mount wasn't ready yet.

This patch makes rclone look through all the mounts and if any of them
have fs type rclone it concludes the mount is ready.

See: https://forum.rclone.org/t/systemd-mount-works-but-automount-does-not/42287/
2023-10-16 12:14:34 +01:00
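The shape of the fix with github.com/moby/sys/mountinfo looks roughly like this (an illustrative sketch, not the commit itself; the filter mirrors the singleEntryFilter helper added in the diff further down):

    package mountlib

    import "github.com/moby/sys/mountinfo"

    // isRcloneMounted reports whether any mountinfo entry for mountpoint
    // has an rclone filesystem type. With automount the mountpoint can
    // appear more than once, so every matching entry must be checked.
    func isRcloneMounted(mountpoint string) (bool, error) {
        infos, err := mountinfo.GetMounts(func(m *mountinfo.Info) (skip, stop bool) {
            return m.Mountpoint != mountpoint, false
        })
        if err != nil {
            return false, err
        }
        for _, m := range infos {
            if m.FSType == "rclone" || m.FSType == "fuse.rclone" {
                return true, nil
            }
        }
        return false, nil
    }
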
Nick Craig-Wood
a6db2e7320 serve sftp: return not supported error for not supported commands
Before this change, if a hardlink command was issued, rclone would
just ignore it and not return an error.

This changes any unknown operation (including hardlink) to return an
unsupported error.
2023-10-16 12:10:02 +01:00
Nick Craig-Wood
31486aa7e3 b2: fix chunked streaming uploads
Streaming uploads are used by rclone rcat and rclone mount
--vfs-cache-mode off.

After the multipart chunker refactor the multipart chunked streaming
upload was accidentally mixing up the first and second parts, which
caused corrupted uploads.

This was caused by a simple off-by-one error in the refactoring where
we went from 1-based to 0-based part number counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration test so it
won't happen again.

Fixes #7367
2023-10-14 13:06:08 +01:00
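An illustrative sketch of the hazard (not from the commit): with 0-based part numbers the special case is the first iteration, so testing `part == 1`, a leftover from 1-based counting, feeds the initial buffered block in as the second part. The diff further down changes `if part == 1` to `if part == 0`:

    package main

    import "fmt"

    func main() {
        for part := 0; part < 3; part++ {
            src := "next chunk read from the stream"
            if part == 0 { // the buggy code tested part == 1 here
                src = "initial buffered block"
            }
            fmt.Printf("part %d <- %s\n", part, src)
        }
    }
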
Nick Craig-Wood
d24e5bc7e4 build: upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid reset
Vulnerability #1: GO-2023-2102

HTTP/2 rapid reset can cause excessive work in net/http

More info: https://pkg.go.dev/vuln/GO-2023-2102
2023-10-12 20:08:30 +01:00
Nick Craig-Wood
077c1a0f57 b2: fix server side copies greater than 4GB
After the multipart chunker refactor the multipart chunked server side
copy was accidentally sending one part too many. The last part was zero
length, which b2 rejected.

This was caused by a simple off-by-one error in the refactoring where
we went from 1-based to 0-based part number counting.

Fixing this revealed that the metadata wasn't being re-read for the
copied object either.

This fixes both of those issues and adds an integration test so it
won't happen again.

See: https://forum.rclone.org/t/large-server-side-copy-in-b2-fails-due-to-bad-byte-range/42294
2023-10-12 20:08:24 +01:00
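The copy loop shows the other face of the same off-by-one (again an illustration, not the commit): with 0-based numbering `up.parts` is a count, so the loop condition must be strictly less-than or one extra part gets issued:

    package main

    import "fmt"

    func main() {
        const parts = 4 // number of parts to copy
        // Before the fix the condition was `part <= parts`, which issued
        // a fifth, zero-length part that b2 rejected.
        for part := 0; part < parts; part++ {
            fmt.Println("copying part", part)
        }
    }
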
Nick Craig-Wood
3bb82b4dd5 cmd: Make --progress output logs in the same format as without
See: https://forum.rclone.org/t/using-progress-change-dates-from-2023-10-05-to-2023-10-05/42173
2023-10-11 12:13:55 +01:00
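Go formats times against the reference time Mon Jan 2 15:04:05 MST 2006, so the fix is a one-line layout-string change (visible in the progress diff further down). A small illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const logTimeFormat = "2006/01/02 15:04:05" // was "2006-01-02 15:04:05"
        t := time.Date(2023, 10, 5, 12, 30, 0, 0, time.UTC)
        fmt.Println(t.Format(logTimeFormat)) // 2023/10/05 12:30:00
    }
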
Nick Craig-Wood
1591592936 operations: fix error message on delete to have file name - fixes #7355 2023-10-11 12:13:55 +01:00
Vitor Gomes
cc036884d4 operations: fix OpenOptions ignored in copy if operation was a multiThreadCopy 2023-10-11 12:13:55 +01:00
Nick Craig-Wood
f6b9fdf7c6 build: fix docker beta build running out of space
This removes some unused SDKs from the build machine to free some
space up before building. It also adds some lines to show the free
space.
2023-10-11 12:13:55 +01:00
Nick Craig-Wood
c9fe2f75a8 oracleobjectstorage: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-11 12:13:55 +01:00
Vitor Gomes
340a67c012 s3: fix OpenOptions being ignored in uploadMultipart with chunkWriter 2023-10-11 12:13:55 +01:00
Saleh Dindar
264b3f0c90 vfs: [bugfix] Update dir modification time
Fixes a subtle bug where the dir modification time was not updated when
the dir already exists in the cache. It is only noticeable when some
clients use dir modification time to invalidate the cache.
2023-10-11 12:13:55 +01:00
Nick Craig-Wood
a7978cea56 operations: close file in TestUploadFile test so it can be deleted on Windows 2023-10-11 12:13:55 +01:00
Nick Craig-Wood
bebd82c586 b2: reduce default --b2-upload-concurrency to 4 to reduce memory usage
In v1.63 memory usage in the b2 backend was limited to `--transfers` *
`--b2-chunk-size`.

However in v1.64 this was raised to `--transfers` * `--b2-chunk-size`
* `--b2-upload-concurrency`.

The default value for this was accidentally set quite high at 16, which
means that by default rclone could use up to 6.4GB of memory!

The new default sets a more reasonable (but still high) maximum memory of 1.6GB.
2023-10-11 12:13:55 +01:00
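Assuming the default `--transfers` of 4 and the default `--b2-chunk-size` of 96Mi (stated here only to make the arithmetic concrete), that works out as:

    4 (--transfers) × 96Mi (--b2-chunk-size) × 16 = 6144 MiB ≈ 6.4 GB   (old default)
    4 (--transfers) × 96Mi (--b2-chunk-size) ×  4 = 1536 MiB ≈ 1.6 GB   (new default)
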
Nick Craig-Wood
af02c3b2a7 b2: fix locking window when getting multipart upload URL
Before this change, the lock was held while the upload URL was being
fetched from the server.

This meant that any other threads were blocked from getting upload
URLs unnecessarily.

It also increased the potential for deadlock.
2023-10-11 12:13:55 +01:00
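A condensed sketch of the narrowed critical section (the shape of the getUploadURL change in the b2 diff further down; fetchNewUploadURL is a hypothetical helper standing in for the b2_get_upload_part_url call):

    // Hold uploadMu only while touching the shared slice, never across
    // the network round trip.
    func (up *largeUpload) getUploadURL(ctx context.Context) (*api.GetUploadPartURLResponse, error) {
        up.uploadMu.Lock()
        if len(up.uploads) > 0 {
            upload := up.uploads[0]
            up.uploads = up.uploads[1:]
            up.uploadMu.Unlock()
            return upload, nil
        }
        up.uploadMu.Unlock()
        return up.fetchNewUploadURL(ctx) // network call made without holding the lock
    }
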
Nick Craig-Wood
77dfe5f1fd pacer: fix b2 deadlock by defaulting max connections to unlimited
Before this change, the maximum number of connections was set to 10.

This meant that b2 could deadlock while doing multipart uploads
due to a lock being held longer than it should have been.
2023-10-11 12:13:55 +01:00
Nick Craig-Wood
e9a95a78de s3: fix slice bounds out of range error when listing
In this commit:

5f938fb9ed s3: fix "Entry doesn't belong in directory" errors when using directory markers

We checked that the remote had the prefix and then changed the remote
before removing the prefix. This sometimes caused:

    panic: runtime error: slice bounds out of range [56:55]

The fix is to do the modification of the remote after removing the
prefix.

See: https://forum.rclone.org/t/cryptcheck-panic-runtime-error-slice-bounds-out-of-range/41977
2023-10-11 12:13:55 +01:00
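A simplified illustration with hypothetical names; the panic message is exactly what happens when a prefix-length slice is applied to a string that was shortened first:

    package main

    import (
        "fmt"
        "strings"
    )

    // buggy mutates remote before slicing by the original prefix length, so
    // a directory marker ending in "/" panics with bounds like [56:55].
    func buggy(remote, prefix string) string {
        remote = strings.TrimRight(remote, "/")
        return remote[len(prefix):]
    }

    // fixed strips the prefix first, then trims the trailing slash.
    func fixed(remote, prefix string) string {
        remote = remote[len(prefix):]
        return strings.TrimRight(remote, "/")
    }

    func main() {
        fmt.Printf("%q\n", fixed("bucket/dir/", "bucket/dir/")) // ""
        // buggy("bucket/dir/", "bucket/dir/") would panic:
        // slice bounds out of range [11:10]
    }
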
Nick Craig-Wood
82ca5295f4 docs: fix backend doc generator to not output duplicate config names
This was always the intention; it was just implemented wrong.

This shortens the s3 docs by 1369 lines, bringing them down to just
about half the size.

Fixes #7325
2023-10-11 12:13:55 +01:00
Dimitri Papadopoulos Orfanos
9d8a40b813 docs: fix typos found by codespell in docs and code comments 2023-10-11 12:13:55 +01:00
Nick Craig-Wood
12d80c5219 onedrive: fix the configurator to allow /teams/ID in the config
See: https://forum.rclone.org/t/sharepoint-to-google/41548/
2023-10-11 12:08:40 +01:00
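The regex change (shown in the onedrive diff further down) widens the match from /sites/ paths to any server-relative path. A quick check:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        oldRe := regexp.MustCompile(`https://.*\.sharepoint\.com/sites/(.*)`)
        newRe := regexp.MustCompile(`https://.*\.sharepoint\.com(/.*)`)
        url := "https://contoso.sharepoint.com/teams/ID"
        fmt.Println(oldRe.MatchString(url))           // false
        fmt.Println(newRe.FindStringSubmatch(url)[1]) // /teams/ID
    }
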
Nick Craig-Wood
038a87c569 lsjson: make sure we set the global metadata flag too 2023-10-11 12:08:40 +01:00
Nick Craig-Wood
3ef97993ad b2: fix multipart upload: corrupted on transfer: sizes differ XXX vs 0
Before this change the b2 backend wasn't writing the metadata to the
object properly after a multipart upload.

The symptom of this was that sometimes it would give the error:

    corrupted on transfer: sizes differ XXX vs 0

This was fixed by returning the metadata in the chunk writer and setting it in Update.

See: https://forum.rclone.org/t/multipart-upload-to-b2-sometimes-failing-with-corrupted-on-transfer-sizes-differ/41829
2023-10-11 12:08:40 +01:00
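The shape of the fix, condensed from the b2 diff further down (finishLargeFile is a hypothetical stand-in for the b2_finish_large_file call): Close stashes the final file info on the upload instead of decoding it, and Update decodes it once the upload has finished:

    func (up *largeUpload) Close(ctx context.Context) error {
        response, err := up.finishLargeFile(ctx) // hypothetical wrapper around b2_finish_large_file
        if err != nil {
            return err
        }
        up.info = &response // keep the final file info for the caller
        return nil
    }

    // ...and in (*Object).Update, once Stream or Copy has finished:
    //     return o.decodeMetaDataFileInfo(up.info)
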
Nick Craig-Wood
04bba67cd5 azureblob: fix "fatal error: concurrent map writes"
Before this change, the metadata map could be accessed from multiple
goroutines at once, sometimes causing this error.

This fix adds a global mutex for adjusting the metadata map to make
all accesses safe.

See: https://forum.rclone.org/t/azure-blob-storage-with-vfs-cache-concurrent-map-writes-exception/41686
2023-10-11 12:08:40 +01:00
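The pattern, matching the metadataMu changes in the azureblob diff further down, is a single package-level mutex taken around every read and write of the map. A minimal self-contained version:

    package sketch

    import "sync"

    // One package-level mutex guards the metadata map so Object does not
    // need its own, rarely contended, lock.
    var metadataMu sync.Mutex

    type object struct{ meta map[string]string }

    func (o *object) setMeta(key, value string) {
        metadataMu.Lock()
        defer metadataMu.Unlock()
        if o.meta == nil {
            o.meta = make(map[string]string, 1)
        }
        o.meta[key] = value
    }

    func (o *object) getMeta(key string) (string, bool) {
        metadataMu.Lock()
        defer metadataMu.Unlock()
        v, ok := o.meta[key]
        return v, ok
    }
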
dependabot[bot]
29dd29b9f3 build(deps): bump docker/setup-qemu-action from 2 to 3
Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 12:08:40 +01:00
dependabot[bot]
532248352b build(deps): bump docker/setup-buildx-action from 2 to 3
Bumps [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action) from 2 to 3.
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 12:08:40 +01:00
Kaloyan Raev
ab803942de storj: update storj.io/uplink to v1.12.0
The improved upload logic is active by default in uplink v1.12.0, so the
`testuplink.WithConcurrentSegmentUploadsDefaultConfig(ctx)` is not
required anymore.

See https://github.com/rclone/rclone/pull/7198
2023-10-11 12:08:40 +01:00
Nick Craig-Wood
f933e80258 docs: add notes on how to update the website between releases 2023-10-11 12:08:40 +01:00
Nick Craig-Wood
1c6f0101a5 docs: remove minio sponsor box for the moment 2023-10-11 12:08:40 +01:00
Nick Craig-Wood
c6f161de90 docs: update Storj partner link 2023-10-11 12:08:40 +01:00
Herby Gillot
bdcf7fe28c docs: add MacPorts install info
https://ports.macports.org/port/rclone/
2023-10-11 12:08:40 +01:00
dependabot[bot]
776dc47eb8 build(deps): bump docker/metadata-action from 4 to 5
Bumps [docker/metadata-action](https://github.com/docker/metadata-action) from 4 to 5.
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Upgrade guide](https://github.com/docker/metadata-action/blob/master/UPGRADE.md)
- [Commits](https://github.com/docker/metadata-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 12:08:40 +01:00
dependabot[bot]
167046e21a build(deps): bump docker/login-action from 2 to 3
Bumps [docker/login-action](https://github.com/docker/login-action) from 2 to 3.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 12:08:40 +01:00
dependabot[bot]
98d50d545a build(deps): bump docker/build-push-action from 4 to 5
Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 4 to 5.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v4...v5)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-10-11 12:08:40 +01:00
Manoj Ghosh
48242c5357 fix overview of oracle object storage as it supports multithreaded uploads 2023-10-11 12:08:40 +01:00
Pat Patterson
e437e6c209 operations: ensure concurrency is no greater than the number of chunks - fixes #7299 2023-10-11 12:08:40 +01:00
Nick Craig-Wood
b813a01718 Start v1.64.1-DEV development 2023-10-11 11:55:24 +01:00
57 changed files with 926 additions and 9884 deletions

View file

@@ -10,26 +10,35 @@ jobs:
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Login to Docker Hub
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Extract metadata (tags, labels) for Docker
id: meta
uses: docker/metadata-action@v4
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
uses: docker/login-action@v3
with:
registry: ghcr.io
# This is the user that triggered the Workflow. In this case, it will
@@ -42,9 +51,12 @@ jobs:
# See https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret
# for more detailed information.
password: ${{ secrets.GITHUB_TOKEN }}
- name: Show disk usage
shell: bash
run: |
df -h .
- name: Build and publish image
uses: docker/build-push-action@v4
uses: docker/build-push-action@v5
with:
file: Dockerfile
context: .
@@ -54,8 +66,12 @@ jobs:
rclone/rclone:beta
labels: ${{ steps.meta.outputs.labels }}
platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
cache-from: type=gha
cache-to: type=gha,mode=max
cache-from: type=gha, scope=${{ github.workflow }}
cache-to: type=gha, mode=max, scope=${{ github.workflow }}
provenance: false
# Eventually cache will need to be cleared if builds more frequent than once a week
# https://github.com/docker/build-push-action/issues/252
- name: Show disk usage
shell: bash
run: |
df -h .

View file

@@ -10,6 +10,15 @@ jobs:
runs-on: ubuntu-latest
name: Build image job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:
@@ -39,6 +48,15 @@ jobs:
runs-on: ubuntu-latest
name: Build docker plugin job
steps:
- name: Free some space
shell: bash
run: |
df -h .
# Remove android SDK
sudo rm -rf /usr/local/lib/android || true
# Remove .net runtime
sudo rm -rf /usr/share/dotnet || true
df -h .
- name: Checkout master
uses: actions/checkout@v4
with:

MANUAL.html (generated, 2288 changed lines)

File diff suppressed because it is too large.

MANUAL.md (generated, 1485 changed lines)

File diff suppressed because it is too large.

MANUAL.txt (generated, 1505 changed lines)

File diff suppressed because it is too large.

View file

@@ -90,6 +90,28 @@ Now
* git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
* git push
## Update the website between releases

Create an update website branch based off the last release

    git co -b update-website

If the branch already exists, double check there are no commits that need saving.

Now reset the branch to the last release

    git reset --hard v1.64.0

Create the changes, check them in, test with `make serve` then

    make upload_test_website

Check out https://test.rclone.org and when happy

    make upload_website

Cherry pick any changes back to master and the stable branch if it is active.
## Making a manual build of docker
The rclone docker image should autobuild via GitHub actions. If it doesn't

View file

@@ -1 +1 @@
v1.64.0
v1.64.2

View file

@@ -71,6 +71,12 @@ const (
var (
errCantUpdateArchiveTierBlobs = fserrors.NoRetryError(errors.New("can't update archive tier blob without --azureblob-archive-tier-delete"))
// Take this when changing or reading metadata.
//
// It acts as global metadata lock so we don't bloat Object
// with an extra lock that will only very rarely be contended.
metadataMu sync.Mutex
)
// Register with Fs
@@ -461,7 +467,7 @@ type Object struct {
size int64 // Size of the object
mimeType string // Content-Type of the object
accessTier blob.AccessTier // Blob Access Tier
meta map[string]string // blob metadata
meta map[string]string // blob metadata - take metadataMu when accessing
}
// ------------------------------------------------------------
@@ -955,6 +961,9 @@ func (f *Fs) getBlockBlobSVC(container, containerPath string) *blockblob.Client
// updateMetadataWithModTime adds the modTime passed in to o.meta.
func (o *Object) updateMetadataWithModTime(modTime time.Time) {
metadataMu.Lock()
defer metadataMu.Unlock()
// Make sure o.meta is not nil
if o.meta == nil {
o.meta = make(map[string]string, 1)
@@ -1623,6 +1632,9 @@ func (o *Object) Size() int64 {
// Set o.metadata from metadata
func (o *Object) setMetadata(metadata map[string]*string) {
metadataMu.Lock()
defer metadataMu.Unlock()
if len(metadata) > 0 {
// Lower case the metadata
o.meta = make(map[string]string, len(metadata))
@@ -1647,6 +1659,9 @@ func (o *Object) setMetadata(metadata map[string]*string) {
// Get metadata from o.meta
func (o *Object) getMetadata() (metadata map[string]*string) {
metadataMu.Lock()
defer metadataMu.Unlock()
if len(o.meta) == 0 {
return nil
}
@@ -1858,12 +1873,7 @@ func (o *Object) ModTime(ctx context.Context) (result time.Time) {
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
// Make sure o.meta is not nil
if o.meta == nil {
o.meta = make(map[string]string, 1)
}
// Set modTimeKey in it
o.meta[modTimeKey] = modTime.Format(timeFormatOut)
o.updateMetadataWithModTime(modTime)
blb := o.getBlobSVC()
opt := blob.SetMetadataOptions{}
@@ -2109,7 +2119,7 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return currentChunkSize, err
}
// Abort the multpart upload.
// Abort the multipart upload.
//
// FIXME it would be nice to delete uncommitted blocks.
//
@@ -2233,7 +2243,9 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
return ui, fmt.Errorf("can't upload to root - need a container")
}
// Create parent dir/bucket if not saving directory marker
metadataMu.Lock()
_, ui.isDirMarker = o.meta[dirMetaKey]
metadataMu.Unlock()
if !ui.isDirMarker {
err = o.fs.mkdirParent(ctx, o.remote)
if err != nil {

View file

@@ -158,7 +158,7 @@ concurrently.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--b2-upload-concurrency" chunks stored at once
in memory.`,
Default: 16,
Default: 4,
Advanced: true,
}, {
Name: "disable_checksum",
@@ -1297,7 +1297,11 @@ func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *
if err != nil {
return err
}
return up.Copy(ctx)
err = up.Copy(ctx)
if err != nil {
return err
}
return dstObj.decodeMetaDataFileInfo(up.info)
}
dstBucket, dstPath := dstObj.split()
@@ -1884,7 +1888,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
// NB Stream returns the buffer and token
return up.Stream(ctx, rw)
err = up.Stream(ctx, rw)
if err != nil {
return err
}
return o.decodeMetaDataFileInfo(up.info)
} else if err == io.EOF {
fs.Debugf(o, "File has %d bytes, which makes only one chunk. Using direct upload.", n)
defer o.fs.putRW(rw)
@@ -1895,11 +1903,15 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
return err
}
} else if size > int64(o.fs.opt.UploadCutoff) {
_, err := multipart.UploadMultipart(ctx, src, in, multipart.UploadMultipartOptions{
chunkWriter, err := multipart.UploadMultipart(ctx, src, in, multipart.UploadMultipartOptions{
Open: o.fs,
OpenOptions: options,
})
return err
if err != nil {
return err
}
up := chunkWriter.(*largeUpload)
return o.decodeMetaDataFileInfo(up.info)
}
modTime := src.ModTime(ctx)

View file

@@ -1,10 +1,19 @@
package b2
import (
"bytes"
"context"
"fmt"
"testing"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Test b2 string encoding
@@ -168,3 +177,100 @@ func TestParseTimeString(t *testing.T) {
}
}
// The integration tests do a reasonable job of testing the normal
// copy but don't test the chunked copy.
func (f *Fs) InternalTestChunkedCopy(t *testing.T) {
ctx := context.Background()
contents := random.String(8 * 1024 * 1024)
item := fstest.NewItem("chunked-copy", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
src := fstests.PutTestContents(ctx, t, f, &item, contents, true)
defer func() {
assert.NoError(t, src.Remove(ctx))
}()
var itemCopy = item
itemCopy.Path += ".copy"
// Set copy cutoff to minimum value so we make chunks
origCutoff := f.opt.CopyCutoff
f.opt.CopyCutoff = minChunkSize
defer func() {
f.opt.CopyCutoff = origCutoff
}()
// Do the copy
dst, err := f.Copy(ctx, src, itemCopy.Path)
require.NoError(t, err)
defer func() {
assert.NoError(t, dst.Remove(ctx))
}()
// Check size
assert.Equal(t, src.Size(), dst.Size())
// Check modtime
srcModTime := src.ModTime(ctx)
dstModTime := dst.ModTime(ctx)
assert.True(t, srcModTime.Equal(dstModTime))
// Make sure contents are correct
gotContents := fstests.ReadObject(ctx, t, dst, -1)
assert.Equal(t, contents, gotContents)
}
// The integration tests do a reasonable job of testing the normal
// streaming upload but don't test the chunked streaming upload.
func (f *Fs) InternalTestChunkedStreamingUpload(t *testing.T, size int) {
ctx := context.Background()
contents := random.String(size)
item := fstest.NewItem(fmt.Sprintf("chunked-streaming-upload-%d", size), contents, fstest.Time("2001-05-06T04:05:06.499Z"))
// Set chunk size to minimum value so we make chunks
origOpt := f.opt
f.opt.ChunkSize = minChunkSize
f.opt.UploadCutoff = 0
defer func() {
f.opt = origOpt
}()
// Do the streaming upload
src := object.NewStaticObjectInfo(item.Path, item.ModTime, -1, true, item.Hashes, f)
in := bytes.NewBufferString(contents)
dst, err := f.PutStream(ctx, in, src)
require.NoError(t, err)
defer func() {
assert.NoError(t, dst.Remove(ctx))
}()
// Check size
assert.Equal(t, int64(size), dst.Size())
// Check modtime
srcModTime := src.ModTime(ctx)
dstModTime := dst.ModTime(ctx)
assert.Equal(t, srcModTime, dstModTime)
// Make sure contents are correct
gotContents := fstests.ReadObject(ctx, t, dst, -1)
assert.Equal(t, contents, gotContents, "Contents incorrect")
}
// -run TestIntegration/FsMkdir/FsPutFiles/Internal
func (f *Fs) InternalTest(t *testing.T) {
t.Run("ChunkedCopy", f.InternalTestChunkedCopy)
for _, size := range []fs.SizeSuffix{
minChunkSize - 1,
minChunkSize,
minChunkSize + 1,
(3 * minChunkSize) / 2,
(5 * minChunkSize) / 2,
} {
t.Run(fmt.Sprintf("ChunkedStreamingUpload/%d", size), func(t *testing.T) {
f.InternalTestChunkedStreamingUpload(t, int(size))
})
}
}
var _ fstests.InternalTester = (*Fs)(nil)

View file

@@ -85,6 +85,7 @@ type largeUpload struct {
uploads []*api.GetUploadPartURLResponse // result of get upload URL calls
chunkSize int64 // chunk size to use
src *Object // if copying, object we are reading from
info *api.FileInfo // final response with info about the object
}
// newLargeUpload starts an upload of object o from in with metadata in src
@@ -168,24 +169,26 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
// This should be returned with returnUploadURL when finished
func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadPartURLResponse, err error) {
up.uploadMu.Lock()
defer up.uploadMu.Unlock()
if len(up.uploads) == 0 {
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err := up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed to get upload URL: %w", err)
}
} else {
if len(up.uploads) > 0 {
upload, up.uploads = up.uploads[0], up.uploads[1:]
up.uploadMu.Unlock()
return upload, nil
}
up.uploadMu.Unlock()
opts := rest.Opts{
Method: "POST",
Path: "/b2_get_upload_part_url",
}
var request = api.GetUploadPartURLRequest{
ID: up.id,
}
err = up.f.pacer.Call(func() (bool, error) {
resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &upload)
return up.f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed to get upload URL: %w", err)
}
return upload, nil
}
@@ -352,7 +355,8 @@ func (up *largeUpload) Close(ctx context.Context) error {
if err != nil {
return err
}
return up.o.decodeMetaDataFileInfo(&response)
up.info = &response
return nil
}
// Abort aborts the large upload
@@ -389,10 +393,11 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
hasMoreParts = true
)
up.size = initialUploadBlock.Size()
up.parts = 0
for part := 0; hasMoreParts; part++ {
// Get a block of memory from the pool and token which limits concurrency.
var rw *pool.RW
if part == 1 {
if part == 0 {
rw = initialUploadBlock
} else {
rw = up.f.getRW(false)
@@ -407,7 +412,7 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
// Read the chunk
var n int64
if part == 1 {
if part == 0 {
n = rw.Size()
} else {
n, err = io.CopyN(rw, up.in, up.chunkSize)
@@ -422,7 +427,7 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock *pool.RW)
}
// Keep stats up to date
up.parts = part
up.parts += 1
up.size += n
if part > maxParts {
up.f.putRW(rw)
@@ -452,7 +457,7 @@ func (up *largeUpload) Copy(ctx context.Context) (err error) {
remaining = up.size
)
g.SetLimit(up.f.opt.UploadConcurrency)
for part := 0; part <= up.parts; part++ {
for part := 0; part < up.parts; part++ {
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in copying all the other parts.
if gCtx.Err() != nil {

View file

@@ -154,7 +154,7 @@ func init() {
Default: "",
Help: `Impersonate this user ID when using a service account.
Settng this flag allows rclone, when using a JWT service account, to
Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can found for

View file

@@ -206,7 +206,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
ci := fs.GetConfig(ctx)
// cache *mega.Mega on username so we can re-use and share
// cache *mega.Mega on username so we can reuse and share
// them between remotes. They are expensive to make as they
// contain all the objects and sharing the objects makes the
// move code easier as we don't have to worry about mixing

View file

@@ -572,15 +572,18 @@ func Config(ctx context.Context, name string, m configmap.Mapper, config fs.Conf
case "url":
return fs.ConfigInput("url_end", "config_site_url", `Site URL
Example: "https://contoso.sharepoint.com/sites/mysite" or "mysite"
Examples:
- "mysite"
- "https://XXX.sharepoint.com/sites/mysite"
- "https://XXX.sharepoint.com/teams/ID"
`)
case "url_end":
siteURL := config.Result
re := regexp.MustCompile(`https://.*\.sharepoint\.com/sites/(.*)`)
re := regexp.MustCompile(`https://.*\.sharepoint\.com(/.*)`)
match := re.FindStringSubmatch(siteURL)
if len(match) == 2 {
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{
relativePath: "/sites/" + match[1],
relativePath: match[1],
})
}
return chooseDrive(ctx, name, m, srv, chooseDriveOpt{

View file

@@ -401,7 +401,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
multipart = false
}
if multipart {
err = o.uploadMultipart(ctx, src, in)
err = o.uploadMultipart(ctx, src, in, options...)
if err != nil {
return err
}

View file

@@ -1215,7 +1215,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, sha1Str stri
return nil, fmt.Errorf("failed to upload: %w", err)
}
// refresh uploaded file info
// Compared to `newfile.File` this upgrades several feilds...
// Compared to `newfile.File` this upgrades several fields...
// audit, links, modified_time, phase, revision, and web_content_link
return f.getFile(ctx, newfile.File.ID)
}

View file

@@ -3800,11 +3800,13 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
if remote == opt.directory {
continue
}
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
}
}
remote = remote[len(opt.prefix):]
if isDirectory {
// process directory markers as directories
remote = strings.TrimRight(remote, "/")
}
if opt.addBucket {
remote = bucket.Join(opt.bucket, remote)
}
@@ -5611,7 +5613,7 @@ func (w *s3ChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
return currentChunkSize, err
}
// Abort the multpart upload
// Abort the multipart upload
func (w *s3ChunkWriter) Abort(ctx context.Context) error {
err := w.f.pacer.Call(func() (bool, error) {
_, err := w.f.c.AbortMultipartUploadWithContext(context.Background(), &s3.AbortMultipartUploadInput{
@@ -6000,7 +6002,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var err error
var ui uploadInfo
if multipart {
wantETag, gotETag, versionID, ui, err = o.uploadMultipart(ctx, src, in)
wantETag, gotETag, versionID, ui, err = o.uploadMultipart(ctx, src, in, options...)
} else {
ui, err = o.prepareUpload(ctx, src, options)
if err != nil {

View file

@@ -1014,7 +1014,7 @@ func (f *Fs) keyboardInteractiveReponse(user, instruction string, questions []st
// save it so on reconnection we give back the previous string.
// This removes the ability to let the user correct a mistaken entry,
// but means that reconnects are transparent.
// We'll re-use config.Pass for this, 'cos we know it's not been
// We'll reuse config.Pass for this, 'cos we know it's not been
// specified.
func (f *Fs) getPass() (string, error) {
for f.savedpswd == "" {
@@ -1602,7 +1602,7 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
fs.Debugf(f, "About path %q", aboutPath)
vfsStats, err = c.sftpClient.StatVFS(aboutPath)
}
f.putSftpConnection(&c, err) // Return to pool asap, if running shell command below it will be re-used
f.putSftpConnection(&c, err) // Return to pool asap, if running shell command below it will be reused
if vfsStats != nil {
total := vfsStats.TotalSpace()
free := vfsStats.FreeSpace()
@@ -2044,7 +2044,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if err != nil {
return fmt.Errorf("Update: %w", err)
}
// Hang on to the connection for the whole upload so it doesn't get re-used while we are uploading
// Hang on to the connection for the whole upload so it doesn't get reused while we are uploading
file, err := c.sftpClient.OpenFile(o.path(), os.O_WRONLY|os.O_CREATE|os.O_TRUNC)
if err != nil {
o.fs.putSftpConnection(&c, err)

View file

@@ -24,7 +24,6 @@ import (
"storj.io/uplink"
"storj.io/uplink/edge"
"storj.io/uplink/private/testuplink"
)
const (
@@ -277,8 +276,6 @@ func (f *Fs) connect(ctx context.Context) (project *uplink.Project, err error) {
UserAgent: "rclone",
}
ctx = testuplink.WithConcurrentSegmentUploadsDefaultConfig(ctx)
project, err = cfg.OpenProject(ctx, f.access)
if err != nil {
return nil, fmt.Errorf("storj: project: %w", err)

View file

@@ -1,6 +1,6 @@
// Package cmd implements the rclone command
//
// It is in a sub package so it's internals can be re-used elsewhere
// It is in a sub package so it's internals can be reused elsewhere
package cmd
// FIXME only attach the remote flags when using a remote???

View file

@@ -309,6 +309,7 @@ func showBackend(name string) {
if _, doneAlready := done[opt.Name]; doneAlready {
continue
}
done[opt.Name] = struct{}{}
if opt.Advanced {
advancedOptions = append(advancedOptions, opt)
} else {

View file

@@ -117,6 +117,12 @@ can be processed line by line as each item is written one to a line.
"groups": "Filter,Listing",
},
RunE: func(command *cobra.Command, args []string) error {
// Make sure we set the global Metadata flag too as it
// isn't parsed by cobra. We need to do this first
// before any backends are created.
ci := fs.GetConfig(context.Background())
ci.Metadata = opt.Metadata
cmd.CheckArgs(1, 1, command, args)
var fsrc fs.Fs
var remote string

View file

@@ -83,7 +83,7 @@ func mountOptions(fsys *FS, f fs.Fs, opt *mountlib.Options) (mountOpts *fuse.Mou
// (128 kiB on Linux) and cannot be larger than MaxWrite.
//
// MaxReadAhead only affects buffered reads (=non-direct-io), but even then, the
// kernel can and does send larger reads to satisfy read reqests from applications
// kernel can and does send larger reads to satisfy read requests from applications
// (up to MaxWrite or VM_READAHEAD_PAGES=128 kiB, whichever is less).
MaxReadAhead int

View file

@@ -47,6 +47,15 @@ func CheckMountEmpty(mountpoint string) error {
return checkMountEmpty(mountpoint)
}
// singleEntryFilter looks for a specific entry.
//
// It may appear more than once and we return all of them if so.
func singleEntryFilter(mp string) mountinfo.FilterFunc {
return func(m *mountinfo.Info) (skip, stop bool) {
return m.Mountpoint != mp, false
}
}
// CheckMountReady checks whether mountpoint is mounted by rclone.
// Only mounts with type "rclone" or "fuse.rclone" count.
func CheckMountReady(mountpoint string) error {
@@ -57,7 +66,7 @@ func CheckMountReady(mountpoint string) error {
return fmt.Errorf("cannot get absolute path: %s: %w", mountpoint, err)
}
infos, err := mountinfo.GetMounts(mountinfo.SingleEntryFilter(mountpointAbs))
infos, err := mountinfo.GetMounts(singleEntryFilter(mountpointAbs))
if err != nil {
return fmt.Errorf("cannot get mounts: %w", err)
}

View file

@@ -20,7 +20,7 @@ const (
// interval between progress prints
defaultProgressInterval = 500 * time.Millisecond
// time format for logging
logTimeFormat = "2006-01-02 15:04:05"
logTimeFormat = "2006/01/02 15:04:05"
)
// startProgress starts the progress bar printing

View file

@@ -14,6 +14,7 @@ import (
"time"
"github.com/rclone/rclone/fs"
_ "github.com/rclone/rclone/fstest" // needed to run under integration tests
"github.com/rclone/rclone/fstest/testy"
"github.com/stretchr/testify/assert"
)

View file

@@ -0,0 +1,10 @@
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
b20b47f579a2c790ca752fb5d8e5651fade7d5867cbac0a4f71e805fc5c468d0 archive.zip
-----BEGIN PGP SIGNATURE-----
iF0EARECAB0WIQT79zfs6firGGBL0qyTk14C/ztU+gUCZS+oVQAKCRCTk14C/ztU
+lNsAJ9XRiODlM4fIW9yqiltO3N+lLeucwCfRzD3cXk6BCB5wdz7pTgnItk9N74=
=1GTr
-----END PGP SIGNATURE-----

Binary file not shown.

View file

@@ -26,24 +26,37 @@ QbogRGodbKhqY4v+cMNkKiemBuTQiWPkpKjifwNsD1fNjNKfDP3pJ64Yz7a4fuzV
X1YwBACpKVuEen34lmcX6ziY4jq8rKibKBs4JjQCRO24kYoHDULVe+RS9krQWY5b
e0foDhru4dsKccefK099G+WEzKVCKxupstWkTT/iJwajR8mIqd4AhD0wO9W3MCfV
Ov8ykMDZ7qBWk1DHc87Ep3W1o8t8wq74ifV+HjhhWg8QAylXg7QlTmljayBDcmFp
Zy1Xb29kIDxuaWNrQGNyYWlnLXdvb2QuY29tPohxBBMRCAAxBQsHCgMEAxUDAgMW
AgECF4AWIQT79zfs6firGGBL0qyTk14C/ztU+gUCXjg2UgIZAQAKCRCTk14C/ztU
+lmmAJ4jH5FyULzStjisuTvHLTVz6G44eQCfaR5QGZFPseenE5ic2WeQcBcmtoG5
Ag0EO7LdgRAIAI6QdFBg3/xa1gFKPYy1ihV9eSdGqwWZGJvokWsfCvHy5180tj/v
UNOLAJrdqglMSvevNTXe8bT65D6423AAsLhch9wq/aNqrHolTYABzxRigjcS1//T
yln5naGUzlVQXDVfrDk3Md/NrkdOFj7r/YyMF0+iWwpFz2qAjL95i5wfVZ1kWGrT
2AmivE1wD1sWT/Ja3FDI0NRkU0Nbz/a0TKe4ml8iLVtZXpTRbxxCCPdkHXXgSyu1
eZ4NrF/wTJuvwGn12TJ1EF95aVkHxAUw0+KmLGdcyBG+IKuHamrsjWIAXGXV///K
AxPgUthccQ03HMjltFsrdmen5Q034YM3eOsAAwUH/jAKiIAA8LpZmZPnt9GZ4+Ol
Zp22VAfyfDOFl4Ol+cWjkLAgjAFsm5gnOKcRSE/9XPxnQqkhw7+ZygYuUMgTDJ99
/5IM1UQL3ooS+oFrDaE99S8bLeOe17skcdXcA/K83VqD9m93rQRnbtD+75zqKkZn
9WNFyKCXg5P6PFPdNYRtlQKOcwFR9mHRLUmapQSAM8Y2pCgALZ7GViKQca8/TT1T
gZk9fJMZYGez+IlOPxTJxjn80+vywk4/wdIWSiQj+8u5RzT9sjmm77wbMVNGRqYd
W/EemW9Zz9vi0CIvJGgbPMqcuxw8e/5lnuQ6Mi3uDR0P2RNIAhFrdZpVSME8xQaI
RgQYEQIABgUCO7LdgQAKCRCTk14C/ztU+mLBAKC2cdFy7eLaQAvyzcE2VK6HVIjn
JACguA00bxLQuJ4+RCJrLFZP8ZlN2sc=
=TtR5
-----END PGP PUBLIC KEY BLOCK-----`
Zy1Xb29kIDxuaWNrQGNyYWlnLXdvb2QuY29tPoh0BBMRCAA0BQsHCgMEAxUDAgMW
AgECF4ACGQEWIQT79zfs6firGGBL0qyTk14C/ztU+gUCZS/mXAIbIwAKCRCTk14C
/ztU+tX+AJ9CUAnPvT4w5yRAPRfDiwWIPUqBOgCgiTelkzvUxvLWnYmpowwzKmsx
qaSJAjMEEAEIAB0WIQTjs1jchY+zB/SBcLnLDb68XzLIHQUCZPRnNAAKCRDLDb68
XzLIHZSAD/oCk9Z0xJfbpriphTBxFy7bWyPKF1lM1GZZaLKkktGfunf1i0Q7rhwp
Nu+u1launlOTp6ZoY36Ce2Qa1eSxWAQdjVajw9kOHXCAewrTREOMY/mb7RVGjajo
0Egl8T9iD3JRyaxu2iVtbpZYuqehtGG28CaCzmtqE+EJcx1cGqAGSuuaDWRYlVX8
KDip44GQB5Lut30vwSIoZG1CPCR6VE82u4cl3mYZUfcJkCHsiLzoeadVzb+fOd+2
ybzBn8Y77ifGgM+dSFSHe03mFfcHPdp0QImF9HQR7XI0UMZmEJsw7c2vDrRa+kRY
2A4/amGn4Tahuazq8g2yqgGm3yAj49qGNarAau849lDr7R49j73ESnNVBGJ9ShzU
4Ls+S1A5gohZVu2s1fkE3mbAmoTfU4JCrpRydOuL9xRJk5gbL44sKeuGODNshyTP
JzG9DmRHpLsBn59v8mg5tqSfBIGqcqBxxnYHJnkK801MkaLW2m7wDmtz6P3TW86g
GukzfIN3/OufLjnpN3Nx376JwWDDIyif7sn6/q+ZMwGz9uLKZkAeM5c3Dh4ygpgl
iSLoV2bZzDz0iLxKWW7QOVVdWHmlEqbTldpQ7gUEPG7mxpzVo0xd6nHncSq0M91x
29It4B3fATx/iJB2eardMzSsbzHiwTg0eswhYYGpSKZLgp4RShnVAbkCDQQ7st2B
EAgAjpB0UGDf/FrWAUo9jLWKFX15J0arBZkYm+iRax8K8fLnXzS2P+9Q04sAmt2q
CUxK9681Nd7xtPrkPrjbcACwuFyH3Cr9o2qseiVNgAHPFGKCNxLX/9PKWfmdoZTO
VVBcNV+sOTcx382uR04WPuv9jIwXT6JbCkXPaoCMv3mLnB9VnWRYatPYCaK8TXAP
WxZP8lrcUMjQ1GRTQ1vP9rRMp7iaXyItW1lelNFvHEII92QddeBLK7V5ng2sX/BM
m6/AafXZMnUQX3lpWQfEBTDT4qYsZ1zIEb4gq4dqauyNYgBcZdX//8oDE+BS2Fxx
DTccyOW0Wyt2Z6flDTfhgzd46wADBQf+MAqIgADwulmZk+e30Znj46VmnbZUB/J8
M4WXg6X5xaOQsCCMAWybmCc4pxFIT/1c/GdCqSHDv5nKBi5QyBMMn33/kgzVRAve
ihL6gWsNoT31Lxst457XuyRx1dwD8rzdWoP2b3etBGdu0P7vnOoqRmf1Y0XIoJeD
k/o8U901hG2VAo5zAVH2YdEtSZqlBIAzxjakKAAtnsZWIpBxrz9NPVOBmT18kxlg
Z7P4iU4/FMnGOfzT6/LCTj/B0hZKJCP7y7lHNP2yOabvvBsxU0ZGph1b8R6Zb1nP
2+LQIi8kaBs8ypy7HDx7/mWe5DoyLe4NHQ/ZE0gCEWt1mlVIwTzFBohGBBgRAgAG
BQI7st2BAAoJEJOTXgL/O1T6YsEAoLZx0XLt4tpAC/LNwTZUrodUiOckAKC4DTRv
EtC4nj5EImssVk/xmU3axw==
=VUqh
-----END PGP PUBLIC KEY BLOCK-----
`
func verifyHashsum(ctx context.Context, siteURL, version, archive string, hash []byte) error {
sumsURL := fmt.Sprintf("%s/%s/SHA256SUMS", siteURL, version)
@@ -52,16 +65,26 @@ func verifyHashsum(ctx context.Context, siteURL, version, archive string, hash [
return err
}
fs.Debugf(nil, "downloaded hashsum list: %s", sumsURL)
return verifyHashsumDownloaded(ctx, sumsBuf, archive, hash)
}
func verifyHashsumDownloaded(ctx context.Context, sumsBuf []byte, archive string, hash []byte) error {
keyRing, err := openpgp.ReadArmoredKeyRing(strings.NewReader(ncwPublicKeyPGP))
if err != nil {
return errors.New("unsupported signing key")
return fmt.Errorf("unsupported signing key: %w", err)
}
block, rest := clearsign.Decode(sumsBuf)
// block.Bytes = block.Bytes[1:] // uncomment to test invalid signature
if block == nil {
return errors.New("invalid hashsum signature: couldn't find detached signature")
}
if len(rest) > 0 {
return fmt.Errorf("invalid hashsum signature: %d bytes of unsigned data", len(rest))
}
_, err = openpgp.CheckDetachedSignature(keyRing, bytes.NewReader(block.Bytes), block.ArmoredSignature.Body, nil)
if err != nil || len(rest) > 0 {
return errors.New("invalid hashsum signature")
if err != nil {
return fmt.Errorf("invalid hashsum signature: %w", err)
}
wantHash, err := findFileHash(sumsBuf, archive)

View file

@@ -0,0 +1,40 @@
package selfupdate
import (
"context"
"encoding/hex"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestVerify(t *testing.T) {
ctx := context.Background()
sumsBuf, err := os.ReadFile("testdata/verify/SHA256SUMS")
require.NoError(t, err)
hash, err := hex.DecodeString("b20b47f579a2c790ca752fb5d8e5651fade7d5867cbac0a4f71e805fc5c468d0")
require.NoError(t, err)
t.Run("NoError", func(t *testing.T) {
err = verifyHashsumDownloaded(ctx, sumsBuf, "archive.zip", hash)
require.NoError(t, err)
})
t.Run("BadSig", func(t *testing.T) {
sumsBuf[0x60] ^= 1 // change the signature by one bit
err = verifyHashsumDownloaded(ctx, sumsBuf, "archive.zip", hash)
assert.ErrorContains(t, err, "invalid signature")
sumsBuf[0x60] ^= 1 // undo the change
})
t.Run("BadSum", func(t *testing.T) {
hash[0] ^= 1 // change the SHA256 by one bit
err = verifyHashsumDownloaded(ctx, sumsBuf, "archive.zip", hash)
assert.ErrorContains(t, err, "archive hash mismatch")
hash[0] ^= 1 // undo the change
})
t.Run("BadName", func(t *testing.T) {
err = verifyHashsumDownloaded(ctx, sumsBuf, "archive.zipX", hash)
assert.ErrorContains(t, err, "unable to find hash")
})
}

View file

@@ -83,6 +83,10 @@ func (v vfsHandler) Filecmd(r *sftp.Request) error {
// link.symlink = r.Filepath
// v.files[r.Target] = link
return sftp.ErrSshFxOpUnsupported
case "Link":
return sftp.ErrSshFxOpUnsupported
default:
return sftp.ErrSshFxOpUnsupported
}
return nil
}

View file

@@ -508,7 +508,7 @@ Properties:
- Config: upload_concurrency
- Env Var: RCLONE_B2_UPLOAD_CONCURRENCY
- Type: int
- Default: 16
- Default: 4
#### --b2-disable-checksum

View file

@@ -442,7 +442,7 @@ Properties:
Impersonate this user ID when using a service account.
Settng this flag allows rclone, when using a JWT service account, to
Setting this flag allows rclone, when using a JWT service account, to
act on behalf of another user by setting the as-user header.
The user ID is the Box identifier for a user. User IDs can found for

View file

@@ -5,6 +5,51 @@ description: "Rclone Changelog"
# Changelog
## v1.64.2 - 2023-10-19
[See commits](https://github.com/rclone/rclone/compare/v1.64.1...v1.64.2)
* Bug Fixes
* selfupdate: Fix "invalid hashsum signature" error (Nick Craig-Wood)
* build: Fix docker build running out of space (Nick Craig-Wood)
## v1.64.1 - 2023-10-17
[See commits](https://github.com/rclone/rclone/compare/v1.64.0...v1.64.1)
* Bug Fixes
* cmd: Make `--progress` output logs in the same format as without (Nick Craig-Wood)
* docs fixes (Dimitri Papadopoulos Orfanos, Herby Gillot, Manoj Ghosh, Nick Craig-Wood)
* lsjson: Make sure we set the global metadata flag too (Nick Craig-Wood)
* operations
* Ensure concurrency is no greater than the number of chunks (Pat Patterson)
* Fix OpenOptions ignored in copy if operation was a multiThreadCopy (Vitor Gomes)
* Fix error message on delete to have file name (Nick Craig-Wood)
* serve sftp: Return not supported error for not supported commands (Nick Craig-Wood)
* build: Upgrade golang.org/x/net to v0.17.0 to fix HTTP/2 rapid reset (Nick Craig-Wood)
* pacer: Fix b2 deadlock by defaulting max connections to unlimited (Nick Craig-Wood)
* Mount
* Fix automount not detecting drive is ready (Nick Craig-Wood)
* VFS
* Fix update dir modification time (Saleh Dindar)
* Azure Blob
* Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
* B2
* Fix multipart upload: corrupted on transfer: sizes differ XXX vs 0 (Nick Craig-Wood)
* Fix locking window when getting multipart upload URL (Nick Craig-Wood)
* Fix server side copies greater than 4GB (Nick Craig-Wood)
* Fix chunked streaming uploads (Nick Craig-Wood)
* Reduce default `--b2-upload-concurrency` to 4 to reduce memory usage (Nick Craig-Wood)
* Onedrive
* Fix the configurator to allow `/teams/ID` in the config (Nick Craig-Wood)
* Oracleobjectstorage
* Fix OpenOptions being ignored in uploadMultipart with chunkWriter (Nick Craig-Wood)
* S3
* Fix slice bounds out of range error when listing (Nick Craig-Wood)
* Fix OpenOptions being ignored in uploadMultipart with chunkWriter (Vitor Gomes)
* Storj
* Update storj.io/uplink to v1.12.0 (Kaloyan Raev)
## v1.64.0 - 2023-09-11
[See commits](https://github.com/rclone/rclone/compare/v1.63.0...v1.64.0)
@@ -105,14 +150,14 @@ description: "Rclone Changelog"
* Fix 425 "TLS session of data connection not resumed" errors (Nick Craig-Wood)
* Hdfs
* Retry "replication in progress" errors when uploading (Nick Craig-Wood)
* Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
* Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* HTTP
* CORS should not be sent if not set (yuudi)
* Fix webdav OPTIONS response (yuudi)
* Opendrive
* Fix List on a just deleted and remade directory (Nick Craig-Wood)
* Oracleobjectstorage
* Use rclone's rate limiter in mutipart transfers (Manoj Ghosh)
* Use rclone's rate limiter in multipart transfers (Manoj Ghosh)
* Implement `OpenChunkWriter` and multi-thread uploads (Manoj Ghosh)
* S3
* Refactor multipart upload to use `OpenChunkWriter` and `ChunkWriter` (Vitor Gomes)
@@ -285,14 +330,14 @@ description: "Rclone Changelog"
* Fix quickxorhash on 32 bit architectures (Nick Craig-Wood)
* Report any list errors during `rclone cleanup` (albertony)
* Putio
* Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
* Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* Fix modification times not being preserved for server side copy and move (Nick Craig-Wood)
* Fix server side copy failures (400 errors) (Nick Craig-Wood)
* S3
* Empty directory markers (Jānis Bebrītis, Nick Craig-Wood)
* Update Scaleway storage classes (Brian Starkey)
* Fix `--s3-versions` on individual objects (Nick Craig-Wood)
* Fix hang on aborting multpart upload with iDrive e2 (Nick Craig-Wood)
* Fix hang on aborting multipart upload with iDrive e2 (Nick Craig-Wood)
* Fix missing "tier" metadata (Nick Craig-Wood)
* Fix V3sign: add missing subresource delete (cc)
* Fix Arvancloud Domain and region changes and alphabetise the provider (Ehsan Tadayon)
@@ -309,7 +354,7 @@ description: "Rclone Changelog"
* Code cleanup to avoid overwriting ctx before first use (fixes issue reported by the staticcheck linter) (albertony)
* Storj
* Fix "uplink: too many requests" errors when uploading to the same file (Nick Craig-Wood)
* Fix uploading to the wrong object on Update with overriden remote name (Nick Craig-Wood)
* Fix uploading to the wrong object on Update with overridden remote name (Nick Craig-Wood)
* Swift
* Ignore 404 error when deleting an object (Nick Craig-Wood)
* Union
@@ -3938,7 +3983,7 @@ Point release to fix hubic and azureblob backends.
* Revert to copy when moving file across file system boundaries
* `--skip-links` to suppress symlink warnings (thanks Zhiming Wang)
* Mount
* Re-use `rcat` internals to support uploads from all remotes
* Reuse `rcat` internals to support uploads from all remotes
* Dropbox
* Fix "entry doesn't belong in directory" error
* Stop using deprecated API methods

View file

@@ -80,7 +80,7 @@ rclone [flags]
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-concurrency int Concurrency for multipart uploads (default 16)
--b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings
@@ -784,7 +784,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.2")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)

View file

@@ -712,7 +712,7 @@ has a header and is divided into chunks.
The initial nonce is generated from the operating systems crypto
strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
The chance of a nonce being re-used is minuscule. If you wrote an
The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of re-using a nonce.

View file

@@ -111,7 +111,7 @@ General networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.0")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.64.2")
```
@@ -345,7 +345,7 @@ Backend only flags. These can be set in the config file also.
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-concurrency int Concurrency for multipart uploads (default 16)
--b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--b2-version-at Time Show file versions as they were at the specified time (default off)
--b2-versions Include old versions in directory listings

View file

@@ -80,6 +80,19 @@ developers so it may be out of date. Its current version is as below.
[![Homebrew package](https://repology.org/badge/version-for-repo/homebrew/rclone.svg)](https://repology.org/project/rclone/versions)
### Installation with MacPorts {#macos-macports}
On macOS, rclone can also be installed via [MacPorts](https://www.macports.org):
    sudo port install rclone
Note that this is a third party installer not controlled by the rclone
developers so it may be out of date. Its current version is as below.
[![MacPorts port](https://repology.org/badge/version-for-repo/macports/rclone.svg)](https://repology.org/project/rclone/versions)
More information [here](https://ports.macports.org/port/rclone/).
### Precompiled binary, using curl {#macos-precompiled}
To avoid problems with macOS gatekeeper enforcing the binary to be signed and
@@ -302,7 +315,7 @@ Make sure you have [Snapd installed](https://snapcraft.io/docs/installing-snapd)
```bash
$ sudo snap install rclone
```
Due to the strict confinement of Snap, rclone snap cannot acess real /home/$USER/.config/rclone directory, default config path is as below.
Due to the strict confinement of Snap, rclone snap cannot access real /home/$USER/.config/rclone directory, default config path is as below.
- Default config directory:
- /home/$USER/snap/rclone/current/.config/rclone
@@ -572,7 +585,7 @@ It requires .NET Framework, but it is preinstalled on newer versions of Windows,
also provides alternative standalone distributions which includes necessary runtime (.NET 5).
WinSW is a command-line only utility, where you have to manually create an XML file with
service configuration. This may be a drawback for some, but it can also be an advantage
as it is easy to back up and re-use the configuration
as it is easy to back up and reuse the configuration
settings, without having go through manual steps in a GUI. One thing to note is that
by default it does not restart the service on error, one have to explicit enable this
in the configuration file (via the "onfailure" parameter).

View file

@@ -171,34 +171,6 @@ Properties:
- Type: string
- Required: true
#### --koofr-password
Your password for rclone (generate one at https://storage.rcs-rds.ro/app/admin/preferences/password).
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: password
- Env Var: RCLONE_KOOFR_PASSWORD
- Provider: digistorage
- Type: string
- Required: true
#### --koofr-password
Your password for rclone (generate one at your service's settings page).
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: password
- Env Var: RCLONE_KOOFR_PASSWORD
- Provider: other
- Type: string
- Required: true
### Advanced options
Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).

View file

@@ -209,7 +209,7 @@ rclone mount \
# its exact meaning will depend on the backend. For HTTP based backends it is an HTTP PUT/GET/POST/etc and its response
--cache-dir /tmp/rclone/cache # Directory rclone will use for caching.
--dir-cache-time 5m \ # Time to cache directory entries for (default 5m0s)
--vfs-cache-mode writes \ # Cache mode off|minimal|writes|full (default off), writes gives the maximum compatiblity like a local disk
--vfs-cache-mode writes \ # Cache mode off|minimal|writes|full (default off), writes gives the maximum compatibility like a local disk
--vfs-cache-max-age 20m \ # Max age of objects in the cache (default 1h0m0s)
--vfs-cache-max-size 10G \ # Max total size of objects in the cache (default off)
--vfs-cache-poll-interval 1m \ # Interval to poll the cache for stale objects (default 1m0s)
@@ -372,7 +372,7 @@ Install NFS Utils
sudo yum install -y nfs-utils
```
Export the desired directory via NFS Server in the same machine where rclone has mounted to, ensure NFS serivce has
Export the desired directory via NFS Server in the same machine where rclone has mounted to, ensure NFS service has
desired permissions to read the directory. If it runs as root, then it will have permissions for sure, but if it runs
as separate user then ensure that user has necessary desired privileges.
```shell

View file

@@ -495,7 +495,7 @@ upon backend-specific capabilities.
| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
| OpenStack Swift | Yes † | Yes | No | No | No | Yes | Yes | No | No | Yes | No |
| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | No | No | No | No |
| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | No |
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
| premiumize.me | Yes | No | Yes | Yes | No | No | No | No | Yes | Yes | Yes |

File diff suppressed because it is too large.

View file

@@ -25,15 +25,7 @@
Silver Sponsor
</div>
<div class="card-body">
<a href="https://min.io/persona/developers?utm_source=rclone&utm_medium=display&utm_campaign=developers-know-rclone-part-0823" target="_blank" rel="noopener" title="High Performance Object Storage, Any Cloud, Any HW, Any Workload"><img src="/img/logos/minio.svg"></a><br />
</div>
</div>
<div class="card">
<div class="card-header" style="padding: 5px 15px;">
Silver Sponsor
</div>
<div class="card-body">
<a href="https://www.storj.io/partner-solutions/rclone" target="_blank" rel="noopener" title="Visit rclone's sponsor Storj to see offer"><img src="/img/logos/storj-rclone-highlight.jpeg" style="max-width: 100%; height: auto;"></a><br />
<a href="https://hubs.li/Q0225cFG0" target="_blank" rel="noopener" title="Visit rclone's sponsor Storj to see offer"><img src="/img/logos/storj-rclone-highlight.jpeg" style="max-width: 100%; height: auto;"></a><br />
</div>
</div>
{{end}}

View file

@@ -1 +1 @@
v1.64.0
v1.64.2

View file

@@ -152,7 +152,7 @@ func TestMemoryObject(t *testing.T) {
err = o.Update(context.Background(), newContent, src)
assert.NoError(t, err)
checkContent(o, newStr)
assert.Equal(t, "Rutaba", string(content)) // check we didn't re-use the buffer
assert.Equal(t, "Rutaba", string(content)) // check we didn't reuse the buffer
// now try streaming
newStr = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

View file

@@ -127,7 +127,7 @@ func calculateNumChunks(size int64, chunkSize int64) int {
// Copy src to (f, remote) using streams download threads. It tries to use the OpenChunkWriter feature
// and if that's not available it creates an adapter using OpenWriterAt
func multiThreadCopy(ctx context.Context, f fs.Fs, remote string, src fs.Object, concurrency int, tr *accounting.Transfer) (newDst fs.Object, err error) {
func multiThreadCopy(ctx context.Context, f fs.Fs, remote string, src fs.Object, concurrency int, tr *accounting.Transfer, options ...fs.OpenOption) (newDst fs.Object, err error) {
openChunkWriter := f.Features().OpenChunkWriter
ci := fs.GetConfig(ctx)
noseek := false
@@ -148,7 +148,7 @@ func multiThreadCopy(ctx context.Context, f fs.Fs, remote string, src fs.Object,
return nil, fmt.Errorf("multi-thread copy: can't copy zero sized file")
}
info, chunkWriter, err := openChunkWriter(ctx, remote, src)
info, chunkWriter, err := openChunkWriter(ctx, remote, src, options...)
if err != nil {
return nil, fmt.Errorf("multi-thread copy: failed to open chunk writer: %w", err)
}
@@ -172,17 +172,18 @@ func multiThreadCopy(ctx context.Context, f fs.Fs, remote string, src fs.Object,
info.ChunkSize = src.Size()
}
// Use the backend concurrency if it is higher than --multi-thread-streams or if --multi-thread-streams wasn't set explicitly
if !ci.MultiThreadSet || info.Concurrency > concurrency {
fs.Debugf(src, "multi-thread copy: using backend concurrency of %d instead of --multi-thread-streams %d", info.Concurrency, concurrency)
concurrency = info.Concurrency
}
numChunks := calculateNumChunks(src.Size(), info.ChunkSize)
if concurrency > numChunks {
fs.Debugf(src, "multi-thread copy: number of streams %d was bigger than number of chunks %d", concurrency, numChunks)
concurrency = numChunks
}
// Use the backend concurrency if it is higher than --multi-thread-streams or if --multi-thread-streams wasn't set explicitly
if !ci.MultiThreadSet || info.Concurrency > concurrency {
fs.Debugf(src, "multi-thread copy: using backend concurrency of %d instead of --multi-thread-streams %d", info.Concurrency, concurrency)
concurrency = info.Concurrency
}
if concurrency < 1 {
concurrency = 1
}
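The reordering above is also why the new `concurrency < 1` guard is needed: once the backend's (possibly zero) preferred concurrency is applied after the chunk cap, the result needs a floor. A minimal standalone sketch of the clamping, with names simplified from the diff:

```go
package main

import "fmt"

// clampStreams mirrors the ordering in the patched multiThreadCopy:
// cap the stream count by the number of chunks, then let the backend's
// preferred concurrency override it (if higher, or if the user never
// set --multi-thread-streams), and finally floor the result at 1.
func clampStreams(streams, numChunks, backendConcurrency int, setExplicitly bool) int {
	if streams > numChunks {
		streams = numChunks
	}
	if !setExplicitly || backendConcurrency > streams {
		streams = backendConcurrency
	}
	if streams < 1 {
		streams = 1
	}
	return streams
}

func main() {
	// 4 streams requested, 8 chunks, backend reports no preference (0),
	// flag not set explicitly: without the floor this would end up as 0.
	fmt.Println(clampStreams(4, 8, 0, false)) // prints 1
}
```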


@@ -418,8 +418,17 @@ func Copy(ctx context.Context, f fs.Fs, dst fs.Object, remote string, src fs.Obj
removeFailedPartialCopy(ctx, f, remotePartial)
})
}
uploadOptions := []fs.OpenOption{hashOption}
for _, option := range ci.UploadHeaders {
uploadOptions = append(uploadOptions, option)
}
if ci.MetadataSet != nil {
uploadOptions = append(uploadOptions, fs.MetadataOption(ci.MetadataSet))
}
if doMultiThreadCopy(ctx, f, src) {
dst, err = multiThreadCopy(ctx, f, remotePartial, src, ci.MultiThreadStreams, tr)
dst, err = multiThreadCopy(ctx, f, remotePartial, src, ci.MultiThreadStreams, tr, uploadOptions...)
if err == nil {
newDst = dst
}
@@ -463,17 +472,10 @@ func Copy(ctx context.Context, f fs.Fs, dst fs.Object, remote string, src fs.Obj
if src.Remote() != remotePartial {
wrappedSrc = fs.NewOverrideRemote(src, remotePartial)
}
options := []fs.OpenOption{hashOption}
for _, option := range ci.UploadHeaders {
options = append(options, option)
}
if ci.MetadataSet != nil {
options = append(options, fs.MetadataOption(ci.MetadataSet))
}
if doUpdate && inplace {
err = dst.Update(ctx, in, wrappedSrc, options...)
err = dst.Update(ctx, in, wrappedSrc, uploadOptions...)
} else {
dst, err = f.Put(ctx, in, wrappedSrc, options...)
dst, err = f.Put(ctx, in, wrappedSrc, uploadOptions...)
}
if doUpdate {
actionTaken = "Copied (replaced existing)"
@@ -766,7 +768,7 @@ func DeleteFilesWithBackupDir(ctx context.Context, toBeDeleted fs.ObjectsChan, b
if err != nil {
errorCount.Add(1)
if fserrors.IsFatalError(err) {
fs.Errorf(nil, "Got fatal error on delete: %s", err)
fs.Errorf(dst, "Got fatal error on delete: %s", err)
fatalErrorCount.Add(1)
return
}
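The effect of hoisting `uploadOptions` above the multi-thread branch is that `--header-upload` headers and `--metadata-set` values now reach `multiThreadCopy` as well, not only the single-stream `Update`/`Put` path. A toy sketch of the forwarding pattern (simplified types, not rclone's real `fs.OpenOption`):

```go
package main

import "fmt"

// option stands in for rclone's fs.OpenOption; the real interface is richer.
type option struct{ key, value string }

// put stands in for the single-stream upload path.
func put(name string, options ...option) {
	for _, o := range options {
		fmt.Printf("put %s: %s=%s\n", name, o.key, o.value)
	}
}

// multiThreadCopy now accepts and forwards the same variadic options,
// mirroring the signature change shown earlier in this diff.
func multiThreadCopy(name string, options ...option) {
	put(name, options...) // every chunk writer sees the options too
}

func main() {
	uploadOptions := []option{{"Cache-Control", "no-cache"}}
	put("single.bin", uploadOptions...)              // single-stream path
	multiThreadCopy("chunked.bin", uploadOptions...) // multi-thread path
}
```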


@@ -318,7 +318,7 @@ func TestRcSetTier(t *testing.T) {
r.CheckRemoteItems(t, file1)
// Because we don't know what the current tier options here are, let's
// just get the current tier, and re-use that
// just get the current tier, and reuse that
o, err := r.Fremote.NewObject(ctx, file1.Path)
require.NoError(t, err)
trr, ok := o.(fs.GetTierer)
@@ -345,7 +345,7 @@ func TestRcSetTierFile(t *testing.T) {
r.CheckRemoteItems(t, file1)
// Because we don't know what the current tier options here are, let's
// just get the current tier, and re-use that
// just get the current tier, and reuse that
o, err := r.Fremote.NewObject(ctx, file1.Path)
require.NoError(t, err)
trr, ok := o.(fs.GetTierer)
@@ -544,7 +544,7 @@ func TestUploadFile(t *testing.T) {
r, call := rcNewRun(t, "operations/uploadfile")
ctx := context.Background()
testFileName := "test.txt"
testFileName := "uploadfile-test.txt"
testFileContent := "Hello World"
r.WriteFile(testFileName, testFileContent, t1)
testItem1 := fstest.NewItem(testFileName, testFileContent, t1)
@@ -553,6 +553,10 @@ func TestUploadFile(t *testing.T) {
currentFile, err := os.Open(path.Join(r.LocalName, testFileName))
require.NoError(t, err)
defer func() {
assert.NoError(t, currentFile.Close())
}()
formReader, contentType, _, err := rest.MultipartUpload(ctx, currentFile, url.Values{}, "file", testFileName)
require.NoError(t, err)
@@ -572,10 +576,14 @@ func TestUploadFile(t *testing.T) {
assert.NoError(t, r.Fremote.Mkdir(context.Background(), "subdir"))
currentFile, err = os.Open(path.Join(r.LocalName, testFileName))
currentFile2, err := os.Open(path.Join(r.LocalName, testFileName))
require.NoError(t, err)
formReader, contentType, _, err = rest.MultipartUpload(ctx, currentFile, url.Values{}, "file", testFileName)
defer func() {
assert.NoError(t, currentFile2.Close())
}()
formReader, contentType, _, err = rest.MultipartUpload(ctx, currentFile2, url.Values{}, "file", testFileName)
require.NoError(t, err)
httpReq = httptest.NewRequest("POST", "/", formReader)
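The `defer`/`assert.NoError(... Close())` pairs added above are the usual Go pattern for closing a test fixture while still surfacing a failed `Close`. In isolation (a sketch using testify, with a temporary file standing in for the fixture):

```go
package example

import (
	"os"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestFixtureClose(t *testing.T) {
	f, err := os.CreateTemp(t.TempDir(), "fixture-*.txt")
	require.NoError(t, err) // abort if the fixture can't be created
	defer func() {
		// assert rather than require: report a failed Close without
		// skipping any cleanup that runs after this deferred call.
		assert.NoError(t, f.Close())
	}()
	// ... exercise the code under test with f ...
}
```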


@@ -29,7 +29,7 @@ func NewPacer(ctx context.Context, c pacer.Calculator) *Pacer {
p := &Pacer{
Pacer: pacer.New(
pacer.InvokerOption(pacerInvoker),
pacer.MaxConnectionsOption(ci.Checkers+ci.Transfers),
// pacer.MaxConnectionsOption(ci.Checkers+ci.Transfers),
pacer.RetriesOption(retries),
pacer.CalculatorOption(c),
),


@@ -1,4 +1,4 @@
package fs
// VersionTag of rclone
var VersionTag = "v1.64.0"
var VersionTag = "v1.64.2"


@@ -10,6 +10,8 @@ tests:
- path: vfs
- path: cmd/serve/restic
localonly: true
- path: cmd/selfupdate
localonly: true
backends:
# - backend: "amazonclouddrive"
# remote: "TestAmazonCloudDrive:"

go.mod (14 lines changed)

@@ -65,16 +65,16 @@ require (
github.com/yunify/qingstor-sdk-go/v3 v3.2.0
go.etcd.io/bbolt v1.3.7
goftp.io/server/v2 v2.0.1
golang.org/x/crypto v0.13.0
golang.org/x/net v0.15.0
golang.org/x/crypto v0.14.0
golang.org/x/net v0.17.0
golang.org/x/oauth2 v0.10.0
golang.org/x/sync v0.3.0
golang.org/x/sys v0.12.0
golang.org/x/sys v0.13.0
golang.org/x/text v0.13.0
golang.org/x/time v0.3.0
google.golang.org/api v0.134.0
gopkg.in/yaml.v2 v2.4.0
storj.io/uplink v1.11.0
storj.io/uplink v1.12.0
)
require (
@@ -156,9 +156,9 @@ require (
google.golang.org/grpc v1.56.2 // indirect
google.golang.org/protobuf v1.31.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
storj.io/common v0.0.0-20230602145716-d6ea82d58b3d // indirect
storj.io/common v0.0.0-20230907123639-5fd0608fd947 // indirect
storj.io/drpc v0.0.33 // indirect
storj.io/picobuf v0.0.1 // indirect
storj.io/picobuf v0.0.2-0.20230906122608-c4ba17033c6c // indirect
)
require (
@@ -169,5 +169,5 @@ require (
github.com/google/go-querystring v1.1.0 // indirect
github.com/pkg/xattr v0.4.9
golang.org/x/mobile v0.0.0-20230531173138-3c911d8e3eda
golang.org/x/term v0.12.0
golang.org/x/term v0.13.0
)

go.sum (35 lines changed)

@@ -197,6 +197,7 @@ github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJn
github.com/go-playground/validator/v10 v10.14.0 h1:vgvQWe3XCz3gIeFDm/HnTIbj6UGmg/+t63MyGU2n5js=
github.com/go-resty/resty/v2 v2.7.0 h1:me+K9p3uhSmXtrBZ4k9jcEAfJmuC8IivWHwaLZwPrFY=
github.com/go-resty/resty/v2 v2.7.0/go.mod h1:9PWDzw47qPphMRFfhsyk0NnSgvluHcljSMVIq3w7q0I=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
@@ -218,6 +219,7 @@ github.com/golang/mock v1.4.0/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt
github.com/golang/mock v1.4.1/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.3/go.mod h1:UOMv5ysSaYNkG+OFQykRIcU/QvvxJf3p21QfJ2Bt3cw=
github.com/golang/mock v1.4.4/go.mod h1:l3mdAwkq5BuhzHwde/uurv3sEJeZMXNpwsxVWU71h+4=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -396,6 +398,7 @@ github.com/ncw/swift/v2 v2.0.2/go.mod h1:z0A9RVdYPjNjXVo2pDOPxZ4eu3oarO1P91fTItc
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646/go.mod h1:jpp1/29i3P1S/RLdc7JQKbRpFeM1dOBd8T9ki5s+AY8=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo/v2 v2.9.5 h1:+6Hr4uxzP4XIUyAkg61dWBw8lb/gc4/X5luuxN/EC+Q=
github.com/onsi/gomega v1.27.1 h1:rfztXRbg6nv/5f+Raen9RcGoSecHIFgBBLQK3Wdj754=
github.com/oracle/oci-go-sdk/v65 v65.45.0 h1:EpCst/iZma9s8eYS0QJ9qsTmGxX5GPehYGN1jwGIteU=
github.com/oracle/oci-go-sdk/v65 v65.45.0/go.mod h1:IBEV9l1qBzUpo7zgGaRUhbB05BVfcDGYRFBCPlTcPp0=
@@ -429,6 +432,8 @@ github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJf
github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY=
github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8 h1:Y258uzXU/potCYnQd1r6wlAnoMB68BiCkCcCnKx1SH8=
github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8/go.mod h1:bSJjRokAHHOhA+XFxplld8w2R/dXLH7Z3BZ532vhFwU=
github.com/quic-go/qtls-go1-20 v0.3.2 h1:rRgN3WfnKbyik4dBV8A6girlJVxGand/d+jVKbQq5GI=
github.com/quic-go/quic-go v0.38.0 h1:T45lASr5q/TrVwt+jrVccmqHhPL2XuSyoCLVCpfOSLc=
github.com/relvacode/iso8601 v1.3.0 h1:HguUjsGpIMh/zsTczGN3DVJFxTU/GX+MMmzcKoMO7ko=
github.com/relvacode/iso8601 v1.3.0/go.mod h1:FlNp+jz+TXpyRqgmM7tnzHHzBnz776kmAH2h3sZCn0I=
github.com/rfjakob/eme v1.1.2 h1:SxziR8msSOElPayZNFfQw4Tjx/Sbaeeh3eRvrHVMUs4=
@@ -555,8 +560,8 @@ golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw
golang.org/x/crypto v0.3.1-0.20221117191849-2c476679df9a/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/crypto v0.13.0 h1:mvySKfSWJ+UKUii46M40LOvyWfN0s2U+46/jDd0e6Ck=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -593,6 +598,7 @@ golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -635,8 +641,8 @@ golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.15.0 h1:ugBLEUaxABaB5AJqW9enI0ACdci2RUd4eP51NTBvuJ8=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.17.0 h1:pVaXccu2ozPjCXewfr1S7xza/zcXTity9cCdXQYSjIM=
golang.org/x/net v0.17.0/go.mod h1:NxSsAGuq816PNPmqtQdLE42eU2Fs7NoRIZrHJAlaCOE=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -714,8 +720,8 @@ golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.9.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.13.0 h1:Af8nKPmuFypiUBjVoU9V20FiaFXOcuZI21p0ycVYYGE=
golang.org/x/sys v0.13.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -723,8 +729,8 @@ golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.12.0 h1:/ZfYdc3zq+q02Rv9vGqTeSItdzZTSNDmfTi0mBAuidU=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.13.0 h1:bb+I9cTfFazGW51MZqBVmZy7+JEJMouUHTUSKVQLBek=
golang.org/x/term v0.13.0/go.mod h1:LTmsnFJwVN6bCy1rVCoS+qHT1HhALEFxKncY3WNNh4U=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -790,6 +796,7 @@ golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -912,11 +919,11 @@ honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
storj.io/common v0.0.0-20230602145716-d6ea82d58b3d h1:AXdJxmg4Jqdz1nmogSrImKOHAU+bn8JCy8lHYnTwP0Y=
storj.io/common v0.0.0-20230602145716-d6ea82d58b3d/go.mod h1:zu2L8WdpvfIBrCbBTgPsz4qhHSArYSiDgRcV1RLlIF8=
storj.io/common v0.0.0-20230907123639-5fd0608fd947 h1:X75A5hX1nFjQH8GIvei4T1LNQTLa++bsDKMxXxfPHE8=
storj.io/common v0.0.0-20230907123639-5fd0608fd947/go.mod h1:FMVOxf2+SgsmfjxwFCM1MZCKwXis4U7l22M/6nIhIas=
storj.io/drpc v0.0.33 h1:yCGZ26r66ZdMP0IcTYsj7WDAUIIjzXk6DJhbhvt9FHI=
storj.io/drpc v0.0.33/go.mod h1:vR804UNzhBa49NOJ6HeLjd2H3MakC1j5Gv8bsOQT6N4=
storj.io/picobuf v0.0.1 h1:ekEvxSQCbEjTVIi/qxj2za13SJyfRE37yE30IBkZeT0=
storj.io/picobuf v0.0.1/go.mod h1:7ZTAMs6VesgTHbbhFU79oQ9hDaJ+MD4uoFQZ1P4SEz0=
storj.io/uplink v1.11.0 h1:zGmCcMx1JMRI4NlQi/pN8+z2Jzy7pVVCUDhMVTfboHw=
storj.io/uplink v1.11.0/go.mod h1:cDlpDWGJykXfYE7NtO1EeArGFy12K5Xj8pV8ufpUCKE=
storj.io/picobuf v0.0.2-0.20230906122608-c4ba17033c6c h1:or/DtG5uaZpzimL61ahlgAA+MTYn/U3txz4fe+XBFUg=
storj.io/picobuf v0.0.2-0.20230906122608-c4ba17033c6c/go.mod h1:JCuc3C0gzCJHQ4J6SOx/Yjg+QTpX0D+Fvs5H46FETCk=
storj.io/uplink v1.12.0 h1:rTODjbKRo/lzz5Hp0isjoRfqDcH7kJg6aujD2M9v9Ro=
storj.io/uplink v1.12.0/go.mod h1:nMAuoWi5AHio+8NQa33VRzCiRg0B0UhYKuT0a0CdXOg=


@@ -75,7 +75,7 @@ type Paced func() (bool, error)
// New returns a Pacer with sensible defaults.
func New(options ...Option) *Pacer {
opts := pacerOptions{
maxConnections: 10,
maxConnections: 0,
retries: 3,
}
for _, o := range options {
@@ -103,7 +103,7 @@ func New(options ...Option) *Pacer {
// SetMaxConnections sets the maximum number of concurrent connections.
// Setting the value to 0 will allow unlimited number of connections.
// Should not be changed once you have started calling the pacer.
// By default this will be set to fs.Config.Checkers.
// By default this will be 0.
func (p *Pacer) SetMaxConnections(n int) {
p.mu.Lock()
defer p.mu.Unlock()
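Together with the `NewPacer` hunk earlier that commented out `pacer.MaxConnectionsOption`, this means a pacer no longer caps concurrent connections unless asked to. A hedged usage sketch against the API visible in this diff (`New`, `SetMaxConnections`, and the `Paced` callback type); the `Call` method is assumed from the pacer package:

```go
package main

import (
	"fmt"

	"github.com/rclone/rclone/lib/pacer"
)

func main() {
	p := pacer.New()        // maxConnections now defaults to 0: unlimited
	p.SetMaxConnections(10) // opt back in to a connection cap if needed

	attempts := 0
	err := p.Call(func() (bool, error) { // Paced: return (retry, err)
		attempts++
		return attempts < 2, nil // pretend the first attempt needs a retry
	})
	fmt.Println(attempts, err) // 2 <nil>
}
```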

rclone.1 generated (3433 lines changed)

File diff suppressed because it is too large.


@@ -694,9 +694,10 @@ func (d *Dir) _readDirFromEntries(entries fs.DirEntries, dirTree dirtree.DirTree
if node == nil || !node.IsDir() {
node = newDir(d.vfs, d.f, d, item)
}
dir := node.(*Dir)
dir.mu.Lock()
dir.modTime = item.ModTime(context.TODO())
if dirTree != nil {
dir := node.(*Dir)
dir.mu.Lock()
err = dir._readDirFromDirTree(dirTree, when)
if err != nil {
dir.read = time.Time{}
@@ -704,10 +705,10 @@ func (d *Dir) _readDirFromEntries(entries fs.DirEntries, dirTree dirtree.DirTre
dir.read = when
dir.cleanupTimer.Reset(d.vfs.Opt.DirCacheTime * 2)
}
dir.mu.Unlock()
if err != nil {
return err
}
}
dir.mu.Unlock()
if err != nil {
return err
}
default:
err = fmt.Errorf("unknown type %T", item)


@@ -345,7 +345,7 @@ func (dls *Downloaders) _ensureDownloader(r ranges.Range) (err error) {
start, offset := dl.getRange()
// The downloader's offset to offset+window is the gap
// in which we would like to re-use this
// in which we would like to reuse this
// downloader. The downloader will never reach before
// start and offset+windows is too far away - we'd
// rather start another downloader.
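The comment describes a simple window test: a downloader whose current position is `offset` is only worth reusing for a read starting inside `[offset, offset+window)`; anything earlier can never be reached, and anything past the window is cheaper to serve with a fresh downloader. As a standalone sketch (names illustrative, not the real `_ensureDownloader` code):

```go
package main

import "fmt"

// canReuse reports whether a downloader currently at offset can serve a
// read starting at reqStart, given the look-ahead window.
func canReuse(reqStart, offset, window int64) bool {
	return reqStart >= offset && reqStart < offset+window
}

func main() {
	const window = 1 << 20 // 1 MiB look-ahead; an assumed value
	fmt.Println(canReuse(4096, 0, window))   // true: just ahead, reuse it
	fmt.Println(canReuse(10<<20, 0, window)) // false: too far away
	fmt.Println(canReuse(0, 4096, window))   // false: can't seek backwards
}
```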