There were a lot of instances of this lint error:
    printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)
These were fixed by re-arranging the arguments and adding "%s".
Quite a few genuine bugs were found too.
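For illustration, a typical rewrite looked like this (the helper and variable names are made up):

```go
// logProblem is a hypothetical helper showing the pattern of the fix.
// fs here is github.com/rclone/rclone/fs.
func logProblem(f fs.Fs, msg string) {
	// Before (flagged by govet, and any "%" in msg would be misread as a verb):
	//   fs.Logf(f, msg)
	// After: constant format string, message passed as an argument.
	fs.Logf(f, "%s", msg)
}
```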
This implements --auth-proxy for serve s3. In addition:
* add listbuckets tests with and without authProxy
* use auth proxy test framework
* servetest: implement workaround for #7454
* update github.com/rclone/gofakes3 to fix race condition
This also:
- move the in-use options (Opt) from vfsflags to vfscommon
- change os.FileMode to vfscommon.FileMode in parameters
- rework vfscommon.FileMode and add tests (see the sketch below)
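As an illustration, a pflag-style wrapper around os.FileMode might look roughly like the following; this is a sketch under that assumption, not the actual vfscommon.FileMode implementation:

```go
// FileMode is a hypothetical flag-friendly wrapper around os.FileMode.
// Uses fmt, os and strconv.
type FileMode os.FileMode

// String renders the mode in octal, e.g. "644".
func (m FileMode) String() string {
	return fmt.Sprintf("%03o", uint32(m))
}

// Set parses an octal string such as "0644" into the mode.
func (m *FileMode) Set(s string) error {
	n, err := strconv.ParseUint(s, 8, 32)
	if err != nil {
		return fmt.Errorf("invalid file mode %q: %w", s, err)
	}
	*m = FileMode(n)
	return nil
}

// Type describes the value to the pflag library.
func (m FileMode) Type() string {
	return "FileMode"
}
```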
Before this change, serve s3 did not consistently save the correct modtime value
in memory after putting or copying an object, which could sometimes cause an
incorrect modtime to be returned. This change fixes the issue by ensuring that
both "mtime" and "X-Amz-Meta-Mtime" are updated in b.meta when we have fresh data.
The issue was discovered on the TestBisyncRemoteRemote/ext_paths test.
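A hedged sketch of the idea follows; the shape of b.meta and the helper below are assumptions rather than the actual serve s3 code:

```go
// storeMtime records a fresh modtime under both metadata keys that later
// requests may consult, so they stay consistent.
func (b *s3Backend) storeMtime(object string, modTime time.Time) {
	meta := b.meta[object]
	if meta == nil {
		meta = map[string]string{}
		b.meta[object] = meta
	}
	// rclone stores modtimes as seconds since the epoch, fractions allowed.
	ts := strconv.FormatFloat(float64(modTime.UnixNano())/1e9, 'f', -1, 64)
	meta["mtime"] = ts            // key used internally
	meta["X-Amz-Meta-Mtime"] = ts // key returned to S3 clients
}
```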
Before this change, if a user unmounted externally (for example, via the Finder
UI), rclone would not be aware of this and wait forever to exit -- effectively
causing a deadlock that would require Ctrl+C to terminate.
After this change, when the handler detects an external unmount, it calls a
function which allows rclone to cleanly shut down the VFS and exit.
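A hedged sketch of the mechanism, with made-up names; the real handler and mount code are more involved:

```go
// mountHandle is a hypothetical coordination point between the NFS
// handler and the mount loop.
type mountHandle struct {
	unmounted chan struct{} // closed exactly once on external unmount
	once      sync.Once
}

// onExternalUnmount is called by the handler when it detects that the
// volume was unmounted outside of rclone (e.g. via Finder).
func (m *mountHandle) onExternalUnmount() {
	m.once.Do(func() { close(m.unmounted) })
}

// wait blocks until either rclone asks to unmount or an external unmount
// is detected, then continues with VFS shutdown instead of hanging.
func (m *mountHandle) wait(quit <-chan struct{}) {
	select {
	case <-quit:
	case <-m.unmounted:
	}
	// ... shut down the VFS and return from the mount here ...
}
```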
Before this change, writing files to an `nfsmount` via Finder on macOS would
cause critical errors, rendering `nfsmount` effectively unusable on macOS. This
change fixes the issue so that writes via Finder should be possible.
The issue was primarily caused by the handler's HandleLimit being set to -1. -1 is
the correct default for a NullAuthHandler, but not for a CachingHandler, which
interprets -1 not as "no limit" but as "no cache".
This change sets a high default of 1000000, and gives the user control over it
with a new --nfs-cache-handle-limit flag (available in both `serve nfs` and
`nfsmount`). A minimum of 5 is enforced, as anything lower would be
insufficient to support directory listing.
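A sketch of the clamping described above, with illustrative names:

```go
const (
	defaultHandleLimit = 1000000 // generous default for the caching handler
	minHandleLimit     = 5       // anything lower breaks directory listing
)

// effectiveHandleLimit is a hypothetical helper showing how the user's
// --nfs-cache-handle-limit value might be normalised.
func effectiveHandleLimit(userLimit int) int {
	if userLimit <= 0 {
		return defaultHandleLimit
	}
	if userLimit < minHandleLimit {
		return minHandleLimit
	}
	return userLimit
}
```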
Before this change it wasn't possible to see where transfers were
coming from and going to in core/stats and core/transferred.
When used with rclone mount in particular, this made interpreting the
stats very hard.
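For illustration, a per-transfer record could now carry the source and destination remotes; the field and JSON names below are assumptions, not necessarily what rclone's RC API uses:

```go
// transferSnapshot is a hypothetical shape of one entry in the
// "transferring" list returned by core/stats after this change.
type transferSnapshot struct {
	Name  string `json:"name"`
	Size  int64  `json:"size"`
	Bytes int64  `json:"bytes"`
	SrcFs string `json:"srcFs,omitempty"` // e.g. "s3:bucket/path" (assumed name)
	DstFs string `json:"dstFs,omitempty"` // e.g. "/local/dir" (assumed name)
}
```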
* serve restic: return internal error if listing failed
If listing a remote failed, then rclone returned HTTP status "not
found". This has become a problem since restic 0.16.0, which ignores
"not found" errors while listing a directory.
Just return an internal server error if something unexpected happens
while listing a directory.
* serve restic: fix error handling if getting a file fails
If the call to `newObject` in `serveObject` failed, then rclone always
returned a "not found" error. This prevented restic from distinguishing
permanent "not found" errors from everything else.
Thus, only return "not found" if the object is not found, and an
internal server error otherwise.
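A hedged sketch of the mapping for the object case, assuming rclone's fs.ErrorObjectNotFound sentinel and the standard net/http helpers (the real handler is structured differently):

```go
// writeObjectError reports a fetch failure to the client: only a missing
// object becomes 404, anything unexpected becomes 500 so that restic
// does not mistake it for "not found".
func writeObjectError(w http.ResponseWriter, err error) {
	if errors.Is(err, fs.ErrorObjectNotFound) {
		http.Error(w, "not found", http.StatusNotFound)
		return
	}
	http.Error(w, "internal server error", http.StatusInternalServerError)
}
```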
This updates the direct dependencies.
The latest github.com/willscott/go-nfs has changed the interface
slightly, so this implements a dummy InvalidateHandle method in order
to satisfy it.
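A dummy method of roughly this shape satisfies the new interface; the receiver name and exact signature here are assumed from the upstream go-nfs Handler interface rather than copied from the rclone source:

```go
// InvalidateHandle is a no-op: rclone's handler has nothing it needs to
// invalidate, so the method exists only to satisfy the updated interface.
// (billy is github.com/go-git/go-billy/v5; signature assumed.)
func (h *Handler) InvalidateHandle(fs billy.Filesystem, handle []byte) error {
	return nil
}
```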
Before this change, listing a subdirectory gave errors like this:
Entry doesn't belong in directory "" (contains subdir) - ignoring
It also did full recursive listings when it didn't need to.
This was caused by the code using the underlying Fs to do recursive
listings on bucket-based backends.
Using both the VFS and the underlying Fs is a mistake, so this patch
removes the code which uses the underlying Fs and just uses the VFS.
Fixes #7500
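A rough sketch of listing purely through the VFS, under the assumption that the directory is resolved with VFS.Stat and read with Dir.ReadDirAll (the real serve code differs in detail):

```go
// listDir lists one directory level via the VFS only, with no recursive
// listing on the underlying Fs.
func listDir(v *vfs.VFS, dirPath string) (vfs.Nodes, error) {
	node, err := v.Stat(dirPath)
	if err != nil {
		return nil, err
	}
	dir, ok := node.(*vfs.Dir)
	if !ok {
		return nil, fmt.Errorf("%q is not a directory", dirPath)
	}
	return dir.ReadDirAll()
}
```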
Before this change, overwriting an existing file with a 0-length file
didn't update the file size.
This change corrects the issue and makes sure the file is truncated
properly.
This was discovered by the full integration tests.
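A hedged sketch of the intended behaviour, assuming the write path goes through the VFS (the helper name is made up):

```go
// overwrite replaces the contents of path with data, making sure an
// empty write still truncates the existing file.
func overwrite(v *vfs.VFS, path string, data []byte) error {
	f, err := v.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
	if err != nil {
		return err
	}
	defer f.Close()
	// With O_TRUNC the file size is reset even when len(data) == 0,
	// which is exactly the case that was previously left stale.
	_, err = f.Write(data)
	return err
}
```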
Before this change, serve s3 would return NoSuchKey errors when a
non-existent prefix was listed.
This change fixes it to return an empty list like AWS does.
This was discovered by the full integration tests.
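An illustrative sketch of the listing semantics (types and names are made up, not the gofakes3 API):

```go
// listResult is a stand-in for an S3 ListObjects response.
type listResult struct {
	Prefix   string
	Contents []string
}

// listPrefix returns every key under prefix. When nothing matches it
// returns an empty Contents slice rather than an error, mirroring what
// AWS S3 does for a non-existent prefix.
func listPrefix(keys []string, prefix string) listResult {
	result := listResult{Prefix: prefix, Contents: []string{}}
	for _, k := range keys {
		if strings.HasPrefix(k, prefix) {
			result.Contents = append(result.Contents, k)
		}
	}
	return result
}
```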
- Changes
- Rename `--s3-authkey` to `--auth-key` to get it out of the s3 backend namespace
- Enable `Content-MD5` integrity checks
- Remove locking after code audit
- Documentation
- Factor out documentation into separate file
- Add Quickstart to docs
- Add Bugs section to docs
- Add experimental tag to docs
- Add rclone provider to s3 backend docs
- Fixes
- Correct quirks in s3 backend
- Change fmt.Printlns into fs.Logs
- Make metadata storage per backend not global
- Log on startup if anonymous access is enabled
- Coding style fixes
- rename fs to vfs to avoid confusion with the rest of the rclone code
- rename db to b for *s3Backend
Fixes #7062
- add context to log and fallthrough to error log level
- test: use rclone random lib to generate random strings
- calculate hash from vfs cache if file is uploading
- add server started log with server url
- remove md5 hasher
Before this change, if a hardlink command was issued, rclone would
just ignore it and not return an error.
This change makes any unknown operation (including hardlink) return an
unsupported error.
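A generic sketch of the pattern, with made-up operation names rather than the real server's dispatch code:

```go
// errNotSupported is a hypothetical sentinel for operations the server
// does not implement.
var errNotSupported = errors.New("operation not supported")

// handleOp dispatches a request by name. Previously unknown operations
// (such as hardlink) were silently ignored; now they return an error.
func handleOp(op string) error {
	switch op {
	case "rename", "remove", "mkdir":
		// ... operations that are actually implemented ...
		return nil
	default:
		return fmt.Errorf("%q: %w", op, errNotSupported)
	}
}
```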