- add `-macos-sdk` and `-macos-arch` to adjust CGO_CFLAGS and CGO_LDFLAGS
- select macOS SDK 11.1 and arch arm64 when building
- add -cgo-cflags and -cgo-ldflags to set CGO_CFLAGS and CGO_LDFLAGS
- add back /usr/local to pick up fuse headers and library
- add `-env` to cross-compile
- add macOS/arm64 to download matrix
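Put together, a cross-compile invocation using these flags might look something like the following (the helper path, argument order and trailing version are illustrative assumptions, not taken from the build scripts):
go run bin/cross-compile.go -env darwin/arm64 -macos-sdk macosx11.1 -macos-arch arm64 -cgo-cflags "-I/usr/local/include" -cgo-ldflags "-L/usr/local/lib" v1.54.0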
Before this change options were read and set in native format. This
means for example nanoseconds for durations or an integer for
enumerated types, which isn't very convenient for humans.
This change enables these types to be set as strings using the same
syntax as on the command line, so `"10s"` rather than
`10000000000` or `"DEBUG"` rather than `8` for log level.
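For example, setting the log level over the rc could now look something like this (the option block and key names here are illustrative):
rclone rc options/set --json '{"main": {"LogLevel": "DEBUG"}}'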
This is an attempt at rewriting the rclone filter documentation page.
I have drawn largely from what appears to be the strong original
structure of the page; existing text, and forum comments.
The term flag is used throughout rather than differentiating `--`
options with more complex arguments. That diverges from some standard
practice but is consistent with messages in the rclone binary and `go`
documentation.
The term directory, not folder, is used throughout.
I tried referring to objects more broadly rather than files and it
just did not seem to work. Apart from a note at the top the
explanations refer entirely to paths, directories and files. My
justification is that bucket store users understand the concept of
files. Not all users of directory aware storage are so familiar with
objects, keys and metadata.
Many of the changes I have made involve moving issues into what seemed
to me to be more relevant parts of the original page structure. I
still find the content repetitious and overly long but that may be
inevitable when users can only be expected to read the section of the
page they think most relevant.
I have eliminated the rsync section from the original structure. It is
hard enough explaining how rclone filters work without also setting
out how they do not. Comment on sync is instead relegated to a
paragraph in the directory filter section.
The structure of the page is intended to work with a hugo toc card
from html Header2 to Header3.
My original intention was to establish a separate examples section. I
have instead retained examples in each section, added to them and
tried to make clear what is documentation and what example.
The changes draw on GitHub and Forum issues too numerous to mention,
for instance:
https://forum.rclone.org/t/certain-exclusion-flags-seem-to-be-ignored/20049/2
I am **especially** grateful for
https://forum.rclone.org/t/object-key-remote-directory-filter-clarification/20386/2
for making sense of directory filters for me.
@ncw has a fun (and useful) online filter app at
https://filterdemo.rclone.org/ I have not referred to it at this stage
though I particularly like the fact that it is tied to the same
codebase as an rclone version.
I have added cautions about mixing the `--filter...` flags with
`--exclude...` or `--include...`. The same issues seem to arise as
already recognised between the latter two.
The formal summary of glob syntax introduced at the top of the page is
shamelessly stolen from https://godoc.org/github.com/gobwas/glob
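To give a flavour of that syntax, a few illustrative patterns (my own examples, not lifted from the page):
*.jpg            matches "file.jpg" and "dir/file.jpg"
/*.jpg           matches "file.jpg" in the root of the remote only
photo*.{png,jpg} matches "photo1.png" and "photograph.jpg"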
I have tried not to alter too many header descriptions and thereby
break existing links to them.
The reference to 'lass' in the example has been retained to confuse
all those not of Scottish or Yorkshire heritage.
Some of my activity was to remove ambiguity and I anticipate
suggestions to roll that back where it has become overly complex.
I tried particularly to bring together and make clear material about
directory filters. It was previously scattered throughout the page and
I couldn't understand it. I am particularly grateful for the
explanations I received about directory filters though any remaining
errors are entirely my own.
Removed erroneous references to non-existent `--filter...` flags.
In some ways the best person to write this page would be one with no
knowledge whatsoever of how rclone filters work. The further I got
into it the better qualified I found myself to be.
E&OE
- fix test case FsNewObjectCaseInsensitive (PR #4830)
- continue PR #4917, add comments in metadata detection code
- add warning about metadata detection in user documentation
- change metadata size limits, make room for future development
- hide critical chunker parameters from command line
This includes an HDFS docker image to use with the integration tests.
Co-authored-by: Ivan Andreev <ivandeex@gmail.com>
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
I followed this guide and successfully set up OneDrive on rclone, but there is a detail which stumped me for some time.
You cannot simply copy and paste `http://localhost:53682/` from this document into Microsoft's website; you must type it, otherwise it is not recognized as a valid address. I edited the line with this information.
This commit modifies the operations.hashSum function by adding an alternate code path. This code path is triggered by passing downloadFlag = True. When activated, rclone will download files from the remote and hash them locally. downloadFlag = False preserves the existing behavior of using the remote to retrieve the hash.
This commit modifies HashLister to support the new hashSum method as well as consolidating the roles of HashLister, HashListerBase64, Md5sum, and Sha1sum. The printing of hashes from the function defined in HashLister has been revised to work with --progress. There are light changes to operations.syncFprintf and cmd.startProgress for this.
The unit test operations_test.TestHashSums is modified to support this change and test the download functionality.
The command functions hashsum, md5sum, sha1sum, and dbhashsum are modified to support this change. A download flag has been added and an output-file flag has been added. The output-file flag writes hashes to a file instead of stdout to avoid the need to redirect stdout.
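Taken together, hashing remote files locally and writing the result to a file might look something like this (flag spellings are how I expect them to land, paths are placeholders):
rclone hashsum MD5 remote:path --download --output-file hashes.md5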
Yandex appears to ignore mime types set as part of the PUT request or
as part of a PATCH request.
The docs make no mention of being able to set a mime type, so set
WriteMimeType=false indicating the backend can't set mime types on
uploaded files.
Create a full loop of documentation for rclone about, backends overview
and individual backend pages.
Discussion:
https://github.com/rclone/rclone/pull/4774 relates
A pull was previously requested, in part, under ref
https://github.com/rclone/rclone/pull/4801
Notes:
Introduce a tentative draft "see" link format at the end of sections to
try, rather than lots of in-paragraph links.
Update about.go incl link to list of backends not supporting about.
In list of backends not supporting about, include link to about command
reference.
I appreciate there may be decisions to make going forward about whether
command links should be code formatted and whether to use proper pretty
URL links, but I have fudged that for now.
Update backend pages that do not support about with the wording used
previously for ftp - it is in passive voice but I can live with it (my
own wording and fault). The note is applied to a limitations section; if
one does not already exist it is created (even if there are other
limitations with their own sections).
Before this change, small objects uploaded with SSE-AWS/SSE-C would
not have MD5 sums.
This change adds metadata for these objects in the same way that the
metadata is stored for multipart uploaded objects.
See: #1824 #2827
If rclone is configured for server side encryption - either aws:kms or
sse-c (but not sse-s3) then don't treat the ETags returned on objects
as MD5 hashes.
This fixes being able to upload small files.
Fixes #1824
The topic is mostly about limitations, so all of these are grouped together with a section hyperlink near the top of the page. The intention is to avoid potential duplication and to make it more straightforward to add notes about limitations in future, harvested from rclone forum postings (there is now one place for them, and as it is essentially just a list the wording doesn't need to be elegant).
Minor wording alterations that do not intend to change meaning
Adds a flag, --progress-terminal-title, that when used with --progress,
will print the string `ETA: %s` to the terminal title.
This also adds WriteTerminalTitle to lib/terminal
Based on Issue 4087
https://github.com/rclone/rclone/issues/4087
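An illustrative invocation (paths are placeholders):
rclone sync /path/to/src remote:dst --progress --progress-terminal-title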
Current behaviour is insecure. If the user specifies this value then we
switch to validating the server hostkey and so can detect server changes
or MITM-type attacks.
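If I read this right the value in question is the path to a known_hosts file; assuming so (the option name below is my assumption, not stated above), enabling the check might look like:
rclone config update mysftp known_hosts_file ~/.ssh/known_hosts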
From now on the betas will be numbered for the version that they will
become, so:
v1.53.0-beta.NNNN.CCCCC
Where NNNN is the commit number and CCCCC is the commit hash. When
released this will become v1.53.0 and the beta will become
v1.54.0-beta.NNNN.CCCCC.
The commit number is the count of commits since the root of the tree,
since we can no longer use the git version numbers since the last tag.
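So, for illustration, a beta built from the 1234th commit at hash abcdef12 would be named something like:
v1.53.0-beta.1234.abcdef12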
This will simplify building the stable branch but that release
procedure hasn't been revised yet.
This commit also injects the name of the branch for the beta builds
into the download path.
The Go team made the decision to drop support for 32 bit macOS as 32
bit apps are no longer supported by macOS and 32 bit hardware hasn't
been produced by Apple for over 10 years.
This implements `rclone cleanup` to remove multipart uploads over 24
hours old. It also implements the backend command
`list-multipart-uploads` to see which ones are available and `cleanup`
to delete them with a configurable expiry interval.
See #4302
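For example (bucket names are placeholders, and the -o max-age syntax is my recollection of how the expiry is passed rather than being stated above):
rclone backend list-multipart-uploads s3:bucket
rclone backend cleanup s3:bucket -o max-age=24h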
If the parameter being passed is an object then it can be passed as a
JSON string rather than using the `--json` flag which simplifies the
command line.
rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'
Rather than
rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'
Removed comma from the end of the Azure AD Applications List Blade URL since it was not resolving and customers were opening up support tickets with the Microsoft Azure AD team.
Currently credentials are required to download a public bucket file
which is not really necessary and makes automated usage more complex.
Add a new option "anonymous" which when enabled configures the gcs
backend to use an anonymous HTTP client. This of course only works
for read access and trying to write will lead to errors like:
"googleapi: Error 401: Anonymous caller does not have
storage.objects.create access to the Google Cloud Storage object.",
as expected. The anonymous access option is disabled by default so that
the GCS Application Default Credentials are still used as before and an
error is given if they can't be found.
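With that option enabled, reading a public bucket might look like this (the flag spelling is my assumption based on the option name above):
rclone lsd --gcs-anonymous gcs:public-bucket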
Dropbox files protected by copyright can't be synced; attempting to
do so results in an error. This change reflects that in Dropbox's
limitations section to warn rclone users and explain why this error
occurs, given that by default the Dropbox API doesn't give enough info.
Proposed by the original repo owner in this comment:
https://github.com/rclone/rclone/issues/2301#issuecomment-388291079
Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
The option of 'other' seems to be gone from the
https://console.developers.google.com/apis/credentials/oauthclient
page. It only lists these now:
- Web application
- Android
- Chrome app
- iOS
- TVs and Limited Input devices
- Desktop app
- Universal Windows Platform (UWP)
If your filenames contain two near-identical Unicode characters,
rclone will normalize these, making them identical. This flag
gives you the ability to keep them unique. This might
create unintended side effects, such as duplicating files that
contain certain Unicode characters, when downloading them from
certain cloud providers to a macOS filesystem.
Fixes #4228
Before this change we were setting the headers on the PUT request. However this isn't where GCS needs them.
After this fix we set the headers in the object upload request itself.
This means that we only support a limited range of headers
- Cache-Control
- Content-Disposition
- Content-Encoding
- Content-Language
- Content-Type
- X-Goog-Meta-
Note that the last of those is for setting custom metadata in the form
"X-Goog-Meta-Key: value".
Before this change rclone would skip all shortcuts with a message
Ignoring unknown document type "application/vnd.google-apps.shortcut"
After this change rclone resolves the shortcuts by default to the
actual files that they point to. See the docs for more info.
The --drive-skip-shortcuts flag can be used to skip shortcuts.
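For instance, listing a drive remote without resolving shortcuts (the remote name is a placeholder):
rclone lsl drive: --drive-skip-shortcuts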
This allows rclone to exit with a non-zero return code if no files are
transferred. This is useful when calling rclone as part of a workflow/script
pipeline as it allows the end user to stop processing if no files have been
transferred.
NB: Enabling this option will result in rclone exiting non-zero if there are no
transfers. Depending on how you're currently using rclone in your scripts,
this may break existing setups!
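Assuming the flag is --error-on-no-transfer (treat the name as my assumption, it is not stated above), a scripted usage might be:
rclone copy /path/to/src remote:dst --error-on-no-transfer || echo "nothing was transferred"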
--files-from parses input files by ignoring comments starting with # and ;
and stripping whitespace from start and end of strings.
The --files-from-raw flag was added, which reads every line from the file,
ignoring comment characters and not stripping whitespace, while
maintaining backwards compatibility for --files-from.
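So, for example, a raw list produced by another tool can be fed straight in (paths are placeholders):
rclone copy /path/to/src remote:dst --files-from-raw list.txt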
Fixes #3762
Without explaining exactly how this is generated, it can be confusing
and worrying to not know how the password that encrypts your data is
stored.
This also brings peace of mind to the user that even though
the same password is obscured differently each time, all the data
needed to get back to the original password remains. Explaining how it
works is much better than asking the reader of the documentation to
trust a blackboxy/magical mechanism.
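For instance, running the same command twice prints two different obscured strings, both of which decode back to the same original password:
rclone obscure mysecretpassword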
This commit adds the `--track-renames-strategy` flag which allows the
user to choose the strategy for tracking renames when using the
`--track-renames` flag.
This can be "hash" or "modtime" or both currently.
This, when used with `--track-renames-strategy modtime`, enables
support for tracking renames in encrypted remotes.
Fixes #3696 Fixes #2721
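For example, tracking renames on a crypt remote by modification time (paths and remote names are placeholders):
rclone sync /path/to/src crypt:dst --track-renames --track-renames-strategy modtime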
In 8a0775ce3c, which was released in v1.49.0, we inadvertently
stopped SAS URLs working from the root without a container name.
Prior to this change you could use `rclone mount azsas:` and it
would actually be equivalent to `rclone mount azsas:container`. After
this change, only `rclone mount azsas:container` will work, and `rclone
mount azsas:` will show a directory in the root called "container".
After some discussion it was decided not to revert this change as the
current behaviour is more logical and in line with the similar
behaviour for the b2 backend.
Instead the documentation was updated to show exactly how container
level SAS URLs behave.
Fixes #4028