Before this change all exports were served as root and the --uid and
--gid flags of the VFS were ignored.
This fixes the issue by exporting the correct UID and GID, which
default to the current user and group unless set explicitly.
This problem was caused by the defaults not being set for the options
after the conversion to the new config system in
28ba4b832d serve nfs: convert options to new style
This makes the nfs serve options globally available so nfsmount can
use them directly.
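A minimal sketch of how the defaults can be initialised from the
current user and group (the struct and field names here are
illustrative, not the exact rclone symbols):

    // Sketch: initialise the option defaults from the current user and
    // group so exports are no longer served as root when --uid/--gid
    // are not given.
    package nfs

    import "os"

    // Options holds the UID/GID the files are exported with.
    type Options struct {
        UID uint32
        GID uint32
    }

    // Opt carries the defaults; explicit --uid/--gid flags overwrite them.
    var Opt = Options{
        UID: uint32(os.Getuid()),
        GID: uint32(os.Getgid()),
    }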
Fixes #8029
The code currently hardcodes `text/srt` for all subtitles.
`text/srt` is wrong; `application/x-subrip` seems to be the official
MIME type, at least according to the official mime database (and it
still works with the Samsung TV I tested with). Also add that type to
`fs/mimetype.go`.
Compared to previous iterations of this PR, I dropped tests ensuring
certain mime types are present - as detection still seems to be fairly
platform-specific.
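One way to make the type available regardless of the platform's
database is to register it explicitly with the stdlib mime package; a
sketch, assuming that is roughly what `fs/mimetype.go` does:

    package fs

    import "mime"

    func init() {
        // Register application/x-subrip for .srt so lookups do not
        // depend on the platform's mime database. AddExtensionType only
        // errors on malformed extensions, so the error is ignored here.
        _ = mime.AddExtensionType(".srt", "application/x-subrip")
    }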
.idx and .sub subtitle files only work if both are present, but the
code was overwriting the first-inserted entry in subtitlesByName, as it
was keyed by the basename without extension.
Make subtitlesByName point to a slice of nodes instead.
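A sketch of the new shape of the map, with hypothetical surrounding
names (vfs.Node/vfs.Nodes are the real rclone VFS types):

    package dlna

    import (
        "path"
        "strings"

        "github.com/rclone/rclone/vfs"
    )

    // groupSubtitles keys subtitle nodes by basename without extension
    // and keeps every match, so that foo.idx and foo.sub can coexist
    // instead of the second overwriting the first. Sketch only.
    func groupSubtitles(nodes vfs.Nodes) map[string][]vfs.Node {
        subtitlesByName := make(map[string][]vfs.Node)
        for _, node := range nodes {
            name := node.Name()
            ext := path.Ext(name)
            switch strings.ToLower(ext) {
            case ".srt", ".idx", ".sub":
                base := strings.TrimSuffix(name, ext)
                subtitlesByName[base] = append(subtitlesByName[base], node)
            }
        }
        return subtitlesByName
    }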
It seems to be pretty common for subtitles to be put in a subdirectory
called "Subs", rather than in the same directory as the media file
itself.
This covers that use case by checking the returned listing for a
directory called "Subs".
If one exists, its child nodes are added to the list before it is
passed to mediaWithResources, allowing these subtitles to be discovered
automatically.
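A sketch of that check against the VFS listing (the helper name is
hypothetical; "Subs" and mediaWithResources are from the description
above):

    package dlna

    import (
        "strings"

        "github.com/rclone/rclone/vfs"
    )

    // appendSubsDir merges the children of a "Subs" subdirectory into
    // the listing before it is handed to mediaWithResources. Sketch only.
    func appendSubsDir(nodes vfs.Nodes) vfs.Nodes {
        for _, node := range nodes {
            subs, ok := node.(*vfs.Dir)
            if !ok || !strings.EqualFold(node.Name(), "Subs") {
                continue
            }
            if children, err := subs.ReadDirAll(); err == nil {
                nodes = append(nodes, children...)
            }
        }
        return nodes
    }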
This adds a new optional parameter to the backend, to specify a path
to a Unix domain socket to connect to, instead of the specified URL.
The URL itself is still used for the rest of the HTTP client, allowing
host and subpath to stay intact.
This allows using rclone with the webdav backend to connect to a WebDAV
server served on a Unix domain socket:
rclone serve webdav --addr unix:///tmp/my.socket remote:path
rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
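Under the hood this boils down to a custom dialer on the HTTP
transport; a sketch of that wiring (the helper name is illustrative):

    package webdav

    import (
        "context"
        "net"
        "net/http"
    )

    // newUnixSocketClient returns an HTTP client whose connections go
    // to socketPath, while the request URL keeps supplying the Host
    // header and the request path. Sketch only.
    func newUnixSocketClient(socketPath string) *http.Client {
        return &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                    var d net.Dialer
                    return d.DialContext(ctx, "unix", socketPath)
                },
            },
        }
    }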
There were a lot of instances of this lint error:
printf: non-constant format string in call to github.com/rclone/rclone/fs.Logf (govet)
These were fixed by rearranging the arguments and adding "%s".
Quite a few genuine bugs were found in the process too.
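The fix pattern looks like this (a self-contained sketch; fs.Logf is
the real rclone logging call):

    package main

    import (
        "errors"

        "github.com/rclone/rclone/fs"
    )

    func main() {
        err := errors.New("upload failed: quota 90% used")

        // Before: a dynamic string used as the format string. If it
        // contains '%' it is misinterpreted, and govet flags it.
        fs.Logf(nil, err.Error())

        // After: a constant format string with the dynamic value as an
        // argument, which is both safe and lint-clean.
        fs.Logf(nil, "%s", err.Error())
    }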
This implements --auth-proxy for serve s3. In addition it:
* adds listbuckets tests with and without authProxy
* uses the auth proxy test framework
* servetest: implements a workaround for #7454
* updates github.com/rclone/gofakes3 to fix a race condition
This also:
- moves the in-use options (Opt) from vfsflags to vfscommon
- changes os.FileMode to vfscommon.FileMode in parameters
- reworks vfscommon.FileMode and adds tests (a sketch follows below)
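A rough sketch of what a flag-friendly FileMode wrapper can look like;
the real vfscommon.FileMode may differ in detail:

    package vfscommon

    import (
        "fmt"
        "os"
        "strconv"
    )

    // FileMode wraps os.FileMode so it can be parsed from and printed
    // as an octal string in option handling. Sketch only.
    type FileMode os.FileMode

    // String returns the mode in octal, e.g. "0777".
    func (m FileMode) String() string {
        return fmt.Sprintf("%#o", uint32(m))
    }

    // Set parses an octal string such as "0640".
    func (m *FileMode) Set(s string) error {
        n, err := strconv.ParseUint(s, 8, 32)
        if err != nil {
            return fmt.Errorf("invalid file mode %q: %w", s, err)
        }
        *m = FileMode(n)
        return nil
    }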
Before this change, serve s3 did not consistently save the correct modtime value
in memory after putting or copying an object, which could sometimes cause an
incorrect modtime to be returned. This change fixes the issue by ensuring that
both "mtime" and "X-Amz-Meta-Mtime" are updated in b.meta when we have fresh data.
The issue was discovered on the TestBisyncRemoteRemote/ext_paths test.
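A sketch of keeping both keys in sync whenever fresh data is available
(only the key names come from the change above; the surrounding types
are illustrative):

    package s3

    import (
        "strconv"
        "sync"
        "time"
    )

    // metaStore sketches the per-object metadata kept by serve s3.
    type metaStore struct {
        mu   sync.Mutex
        meta map[string]map[string]string
    }

    // setModtime records the modtime under both keys so later lookups
    // agree no matter which one they read.
    func (m *metaStore) setModtime(key string, modTime time.Time) {
        m.mu.Lock()
        defer m.mu.Unlock()
        entry := m.meta[key]
        if entry == nil {
            entry = make(map[string]string)
            m.meta[key] = entry
        }
        ts := strconv.FormatInt(modTime.UnixNano(), 10)
        entry["mtime"] = ts
        entry["X-Amz-Meta-Mtime"] = ts
    }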
Before this change, if a user unmounted externally (for example, via the Finder
UI), rclone would not be aware of this and wait forever to exit -- effectively
causing a deadlock that would require Ctrl+C to terminate.
After this change, when the handler detects an external unmount, it calls a
function which allows rclone to cleanly shutdown the VFS and exit.
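A sketch of how the handler can signal the rest of rclone when it sees
the mount disappear (the channel wiring here is illustrative):

    package nfsmount

    import "sync"

    // mounter sketches the shutdown wiring.
    type mounter struct {
        unmountOnce sync.Once
        unmounted   chan struct{}
    }

    // onExternalUnmount is called by the handler when it detects that
    // the mount has gone away (for example via Finder's eject button).
    func (m *mounter) onExternalUnmount() {
        m.unmountOnce.Do(func() { close(m.unmounted) })
    }

    // Wait blocks until the mount is gone, letting rclone shut down the
    // VFS and exit cleanly instead of waiting forever.
    func (m *mounter) Wait() {
        <-m.unmounted
    }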
Before this change, writing files to an `nfsmount` via Finder on macOS would
cause critical errors, rendering `nfsmount` effectively unusable on macOS. This
change fixes the issue so that writes via Finder should be possible.
The issue was primarily caused by the handler's HandleLimit being set to -1. -1 is
the correct default for a NullAuthHandler, but not for a CachingHandler, which
interprets -1 not as "no limit" but as "no cache".
This change sets a high default of 1000000, and gives the user control
over it with a new --nfs-cache-handle-limit flag (available in both
`serve nfs` and `nfsmount`). A minimum of 5 is enforced, as anything
lower would be insufficient to support directory listing.
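A sketch of how the limit can be resolved before it is handed to the
caching handler (the option plumbing is illustrative; the numbers are
from the description above):

    package nfs

    // defaultHandleLimit is used when --nfs-cache-handle-limit is not set.
    const defaultHandleLimit = 1000000

    // effectiveHandleLimit converts the user's setting into a value
    // that is safe to pass to the caching handler, which treats -1 as
    // "no cache" rather than "no limit".
    func effectiveHandleLimit(limit int) int {
        if limit <= 0 {
            return defaultHandleLimit
        }
        if limit < 5 {
            // Anything lower cannot support directory listing.
            return 5
        }
        return limit
    }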
Before this change it wasn't possible to see where transfers were
coming from and going to in core/stats and core/transferred.
When used in rclone mount in particular this made interpreting the
stats very hard.
* serve restic: return internal error if listing failed
If listing a remote failed, then rclone returned http status "not
found". This has become a problem since restic 0.16.0, which ignores
"not found" errors while listing a directory.
Just return an internal server error if something unexpected happens
while listing a directory.
* serve restic: fix error handling if getting a file fails
If the call to `newObject` in `serveObject` failed, rclone always
returned a "not found" error. This prevented restic from distinguishing
permanent "not found" errors from everything else.
Thus, only return "not found" if the object is not found, and an
internal server error otherwise.
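A sketch of the resulting error mapping in the object handler
(fs.ErrorObjectNotFound is the real rclone sentinel; the handler names
are illustrative):

    package restic

    import (
        "errors"
        "net/http"

        "github.com/rclone/rclone/fs"
    )

    type server struct {
        f fs.Fs
    }

    // serveObject only turns a genuinely missing object into 404; any
    // other failure becomes 500 so restic can tell the cases apart.
    func (s *server) serveObject(w http.ResponseWriter, r *http.Request, remote string) {
        o, err := s.f.NewObject(r.Context(), remote)
        if errors.Is(err, fs.ErrorObjectNotFound) {
            http.Error(w, "not found", http.StatusNotFound)
            return
        }
        if err != nil {
            http.Error(w, "internal server error", http.StatusInternalServerError)
            return
        }
        _ = o // ... stream the object contents to the client ...
    }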
This updates the direct dependencies.
The latest github.com/willscott/go-nfs has changed the interface
slightly so this implements a dummy InvalidateHandle method in order
to satisfy it.
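The dummy method is essentially a no-op; a sketch, with the signature
assumed from the description (it may differ from the actual go-nfs
interface):

    package nfs

    import billy "github.com/go-git/go-billy/v5"

    // Handler stands in for rclone's NFS handler here.
    type Handler struct{}

    // InvalidateHandle satisfies the updated go-nfs handler interface;
    // this handler has nothing to invalidate, so it simply returns nil.
    func (h *Handler) InvalidateHandle(f billy.Filesystem, handle []byte) error {
        return nil
    }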
Before this change, listing a subdirectory gave errors like this:
Entry doesn't belong in directory "" (contains subdir) - ignoring
It also did full recursive listings when it didn't need to.
This was caused by the code using the underlying Fs to do recursive
listings on bucket based backends.
Using both the VFS and the underlying Fs is a mistake so this patch
removes the code which uses the underlying Fs and just uses the VFS.
Fixes #7500
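Listing through the VFS alone looks roughly like this (a sketch with a
hypothetical wrapper name; vfs.VFS, Stat and ReadDirAll are the real
VFS calls):

    package serve

    import (
        "fmt"

        "github.com/rclone/rclone/vfs"
    )

    // listDir returns the entries of a subdirectory using only the VFS,
    // so bucket based remotes are handled correctly and nothing is
    // listed recursively. Sketch only.
    func listDir(v *vfs.VFS, dirPath string) (vfs.Nodes, error) {
        node, err := v.Stat(dirPath)
        if err != nil {
            return nil, err
        }
        dir, ok := node.(*vfs.Dir)
        if !ok {
            return nil, fmt.Errorf("%q is not a directory", dirPath)
        }
        return dir.ReadDirAll()
    }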
Before this change overwriting an existing file with a 0 length file
didn't update the file size.
This change corrects the issue and makes sure the file is truncated
properly.
This was discovered by the full integration tests.