Compare commits
34 commits: pr-6343-on ... tcl/master

Author | SHA1 | Date
---|---|---
 | f2d16ab4c5 |
 | c0fc4fe0ca |
 | 669b2f2669 |
 | e1ba10a86e |
 | 022442cf58 |
 | 5cc4488294 |
 | ec9566c5c3 |
 | f6976eb4c4 |
 | c242c00799 |
 | bf954b74ff |
 | 88f0770d0a |
 | 41d905c9b0 |
 | 300a063b5e |
 | 61bf29ed5e |
 | 3191717572 |
 | 961dfe97b5 |
 | 22612b4b38 |
 | b9927461c3 |
 | 6d04be99f2 |
 | 06ae0dfa54 |
 | 912f29b5b8 |
 | 8d78768aaa |
 | 6aa924f28d |
 | 48f2c2db70 |
 | a88066aff3 |
 | 75f5b06ff7 |
 | daeeb7c145 |
 | d6a5fc6ffa |
 | c0bfedf99c |
 | 76b76c30bf |
 | 737fcc804f |
 | 70f3965354 |
 | d5c100edaf |
 | dc7458cea0 |
53 changed files with 1777 additions and 579 deletions
@@ -32,15 +32,27 @@ jobs:
      - name: Get actual major version
        id: actual_major_version
        run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
      - name: Build and publish image
        uses: ilteoood/docker_buildx@1.1.0
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
          imageName: rclone/rclone
          platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
          publish: true
          dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
          dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}
          username: ${{ secrets.DOCKER_HUB_USER }}
          password: ${{ secrets.DOCKER_HUB_PASSWORD }}
      - name: Build and publish image
        uses: docker/build-push-action@v6
        with:
          file: Dockerfile
          context: .
          platforms: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
          push: true
          tags: |
            rclone/rclone:latest
            rclone/rclone:${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }}
            rclone/rclone:${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }}
            rclone/rclone:${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}

  build_docker_volume_plugin:
    if: github.repository == 'rclone/rclone'
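For reference, a quick shell sketch of what the version-extraction step near the top of this hunk computes; the tag value below is a made-up example, not taken from a real workflow run:

```bash
# Assumed example value; in the workflow $GITHUB_REF holds the pushed tag ref.
GITHUB_REF="refs/tags/v1.68.2"
# Same pipeline as the "Get actual major version" step: take the 3rd
# slash-separated field, strip the leading "v", keep the first dot field.
echo "$GITHUB_REF" | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1   # prints: 1
```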
MANUAL.html (generated): 336 changes
|
@@ -81,7 +81,7 @@
|
|||
<header id="title-block-header">
|
||||
<h1 class="title">rclone(1) User Manual</h1>
|
||||
<p class="author">Nick Craig-Wood</p>
|
||||
<p class="date">Sep 08, 2024</p>
|
||||
<p class="date">Nov 15, 2024</p>
|
||||
</header>
|
||||
<h1 id="rclone-syncs-your-files-to-cloud-storage">Rclone syncs your files to cloud storage</h1>
|
||||
<p><img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" ></p>
|
||||
|
@@ -2964,7 +2964,9 @@ rclone mount remote:path/to/files \\cloud\remote</code></pre>
|
|||
<p>When running in background mode the user will have to stop the mount manually:</p>
|
||||
<pre><code># Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount</code></pre>
|
||||
<p>The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.</p>
|
||||
<p>The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the <a href="https://rclone.org/commands/rclone_about/">rclone about</a> command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not <a href="https://rclone.org/overview/#optional-features">support</a> the about feature at all, then 1 PiB is set as both the total and the free size.</p>
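<p>As a quick check of the sizes the mount will report, the same information can be queried directly (the remote name is a placeholder):</p>
<pre><code>rclone about remote:</code></pre>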
|
||||
|
@@ -3048,7 +3050,7 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib</code></pre>
|
|||
<p>Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.</p>
|
||||
<h2 id="systemd">systemd</h2>
|
||||
<p>When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.</p>
|
||||
<p>Note that systemd runs mount units without any environment variables including <code>PATH</code> or <code>HOME</code>. This means that tilde (<code>~</code>) expansion will not work and you should provide <code>--config</code> and <code>--cache-dir</code> explicitly as absolute paths via rclone arguments. Since mounting requires the <code>fusermount</code> program, rclone will use the fallback PATH of <code>/bin:/usr/bin</code> in this scenario. Please ensure that <code>fusermount</code> is present on this PATH.</p>
|
||||
<p>Note that systemd runs mount units without any environment variables including <code>PATH</code> or <code>HOME</code>. This means that tilde (<code>~</code>) expansion will not work and you should provide <code>--config</code> and <code>--cache-dir</code> explicitly as absolute paths via rclone arguments. Since mounting requires the <code>fusermount</code> or <code>fusermount3</code> program, rclone will use the fallback PATH of <code>/bin:/usr/bin</code> in this scenario. Please ensure that <code>fusermount</code>/<code>fusermount3</code> is present on this PATH.</p>
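<p>For illustration, a mount invocation written the way systemd would run it, with absolute paths instead of tilde expansion (the remote, mountpoint and paths are made up for this sketch):</p>
<pre><code># hypothetical ExecStart-style command line for a systemd mount service
/usr/bin/rclone mount remote:path /mnt/data \
    --config /home/user/.config/rclone/rclone.conf \
    --cache-dir /var/cache/rclone</code></pre>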
|
||||
<h2 id="rclone-as-unix-mount-helper">Rclone as Unix mount helper</h2>
|
||||
<p>The core Unix program <code>/bin/mount</code> normally takes the <code>-t FSTYPE</code> argument then runs the <code>/sbin/mount.FSTYPE</code> helper program passing it mount options as <code>-o key=val,...</code> or <code>--opt=...</code>. Automount (classic or systemd) behaves in a similar way.</p>
|
||||
<p>rclone by default expects GNU-style flags <code>--key val</code>. To run it as a mount helper you should symlink rclone binary to <code>/sbin/mount.rclone</code> and optionally <code>/usr/bin/rclonefs</code>, e.g. <code>ln -s /usr/bin/rclone /sbin/mount.rclone</code>. rclone will detect it and translate command-line arguments appropriately.</p>
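<p>A minimal sketch of the helper setup described above; the remote and mountpoint are placeholders and any extra <code>-o</code> options may need adjusting for your system:</p>
<pre><code>sudo ln -s /usr/bin/rclone /sbin/mount.rclone
# /bin/mount then finds and runs the helper via the -t argument:
mount -t rclone remote:path/to/files /mnt/data</code></pre>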
|
||||
|
@@ -3482,7 +3484,9 @@ rclone nfsmount remote:path/to/files \\cloud\remote</code></pre>
|
|||
<p>When running in background mode the user will have to stop the mount manually:</p>
|
||||
<pre><code># Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount</code></pre>
|
||||
<p>The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.</p>
|
||||
<p>The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the <a href="https://rclone.org/commands/rclone_about/">rclone about</a> command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not <a href="https://rclone.org/overview/#optional-features">support</a> the about feature at all, then 1 PiB is set as both the total and the free size.</p>
|
||||
|
@@ -3566,7 +3570,7 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib</code></pre>
|
|||
<p>Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.</p>
|
||||
<h2 id="systemd-1">systemd</h2>
|
||||
<p>When running rclone nfsmount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone nfsmount service specified as a requirement will see all files and folders immediately in this mode.</p>
|
||||
<p>Note that systemd runs mount units without any environment variables including <code>PATH</code> or <code>HOME</code>. This means that tilde (<code>~</code>) expansion will not work and you should provide <code>--config</code> and <code>--cache-dir</code> explicitly as absolute paths via rclone arguments. Since mounting requires the <code>fusermount</code> program, rclone will use the fallback PATH of <code>/bin:/usr/bin</code> in this scenario. Please ensure that <code>fusermount</code> is present on this PATH.</p>
|
||||
<p>Note that systemd runs mount units without any environment variables including <code>PATH</code> or <code>HOME</code>. This means that tilde (<code>~</code>) expansion will not work and you should provide <code>--config</code> and <code>--cache-dir</code> explicitly as absolute paths via rclone arguments. Since mounting requires the <code>fusermount</code> or <code>fusermount3</code> program, rclone will use the fallback PATH of <code>/bin:/usr/bin</code> in this scenario. Please ensure that <code>fusermount</code>/<code>fusermount3</code> is present on this PATH.</p>
|
||||
<h2 id="rclone-as-unix-mount-helper-1">Rclone as Unix mount helper</h2>
|
||||
<p>The core Unix program <code>/bin/mount</code> normally takes the <code>-t FSTYPE</code> argument then runs the <code>/sbin/mount.FSTYPE</code> helper program passing it mount options as <code>-o key=val,...</code> or <code>--opt=...</code>. Automount (classic or systemd) behaves in a similar way.</p>
|
||||
<p>rclone by default expects GNU-style flags <code>--key val</code>. To run it as a mount helper you should symlink rclone binary to <code>/sbin/mount.rclone</code> and optionally <code>/usr/bin/rclonefs</code>, e.g. <code>ln -s /usr/bin/rclone /sbin/mount.rclone</code>. rclone will detect it and translate command-line arguments appropriately.</p>
|
||||
|
@@ -4056,7 +4060,7 @@ htpasswd -B htpasswd anotherUser</code></pre>
|
|||
<h3 id="rc-options">RC Options</h3>
|
||||
<p>Flags to control the Remote Control API</p>
|
||||
<pre><code> --rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@@ -7849,6 +7853,38 @@ export RCLONE_CONFIG_PASS</code></pre>
|
|||
<p>Verbosity is slightly different, the environment variable equivalent of <code>--verbose</code> or <code>-v</code> is <code>RCLONE_VERBOSE=1</code>, or for <code>-vv</code>, <code>RCLONE_VERBOSE=2</code>.</p>
|
||||
<p>The same parser is used for the options and the environment variables so they take exactly the same form.</p>
|
||||
<p>The options set by environment variables can be seen with the <code>-vv</code> flag, e.g. <code>rclone version -vv</code>.</p>
|
||||
<p>Options that can appear multiple times (type <code>stringArray</code>) are treated slightly differently as environment variables can only be defined once. In order to allow a simple mechanism for adding one or many items, the input is treated as a <a href="https://godoc.org/encoding/csv">CSV encoded</a> string. For example:</p>
|
||||
<table>
|
||||
<colgroup>
|
||||
<col style="width: 52%" />
|
||||
<col style="width: 47%" />
|
||||
</colgroup>
|
||||
<thead>
|
||||
<tr class="header">
|
||||
<th>Environment Variable</th>
|
||||
<th>Equivalent options</th>
|
||||
</tr>
|
||||
</thead>
|
||||
<tbody>
|
||||
<tr class="odd">
|
||||
<td><code>RCLONE_EXCLUDE="*.jpg"</code></td>
|
||||
<td><code>--exclude "*.jpg"</code></td>
|
||||
</tr>
|
||||
<tr class="even">
|
||||
<td><code>RCLONE_EXCLUDE="*.jpg,*.png"</code></td>
|
||||
<td><code>--exclude "*.jpg"</code> <code>--exclude "*.png"</code></td>
|
||||
</tr>
|
||||
<tr class="odd">
|
||||
<td><code>RCLONE_EXCLUDE='"*.jpg","*.png"'</code></td>
|
||||
<td><code>--exclude "*.jpg"</code> <code>--exclude "*.png"</code></td>
|
||||
</tr>
|
||||
<tr class="even">
|
||||
<td><code>RCLONE_EXCLUDE='"/directory with comma , in it /**"'</code></td>
|
||||
<td><code>--exclude "/directory with comma , in it /**"</code></td>
|
||||
</tr>
|
||||
</tbody>
|
||||
</table>
|
||||
<p>If <code>stringArray</code> options are defined as environment variables <strong>and</strong> options on the command line then all the values will be used.</p>
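<p>As a concrete illustration of the CSV form (the values are only examples), exporting the variable before a command behaves like repeating the flag:</p>
<pre><code>export RCLONE_EXCLUDE='"*.jpg","*.png"'
rclone lsf remote:   # behaves as if --exclude "*.jpg" --exclude "*.png" were passed</code></pre>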
|
||||
<h3 id="config-file">Config file</h3>
|
||||
<p>You can set defaults for values in the config file on an individual remote basis. The names of the config items are documented in the page for each backend.</p>
|
||||
<p>To find the name of the environment variable, you need to set, take <code>RCLONE_CONFIG_</code> + name of remote + <code>_</code> + name of config file option and make it all uppercase. Note one implication here is the remote's name must be convertible into a valid environment variable name, so it can only contain letters, digits, or the <code>_</code> (underscore) character.</p>
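<p>For example, for a hypothetical remote named <code>mys3</code> using the S3 backend, its <code>provider</code> option could be set like this:</p>
<pre><code>export RCLONE_CONFIG_MYS3_PROVIDER=AWS</code></pre>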
|
||||
|
@@ -8287,12 +8323,14 @@ file2.avi</code></pre>
|
|||
<p>Adds path/file names to an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules. Include rules start with <code>+</code> and exclude rules with <code>-</code>. <code>!</code> clears existing rules. Rules are processed in the order they are defined.</p>
|
||||
<p>This flag can be repeated. See above for the order filter flags are processed in.</p>
|
||||
<p>Arrange the order of filter rules with the most restrictive first and work down.</p>
|
||||
<p>Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. <em>Use <code>-vv --dump filters</code> to see how they appear in the final regexp.</em></p>
|
||||
<p>E.g. for <code>filter-file.txt</code>:</p>
|
||||
<pre><code># a sample filter rule file
|
||||
- secret*.jpg
|
||||
+ *.jpg
|
||||
+ *.png
|
||||
+ file2.avi
|
||||
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
|
||||
- /dir/Trash/**
|
||||
+ /dir/**
|
||||
# exclude everything else
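<p>A sketch of applying such a rule file to a listing (the remote and file names are placeholders):</p>
<pre><code>rclone ls remote: --filter-from filter-file.txt -vv --dump filters</code></pre>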
|
||||
|
@@ -10382,7 +10420,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
|
|||
<tr class="odd">
|
||||
<td>pCloud</td>
|
||||
<td style="text-align: center;">MD5, SHA1 ⁷</td>
|
||||
<td style="text-align: center;">R</td>
|
||||
<td style="text-align: center;">R/W</td>
|
||||
<td style="text-align: center;">No</td>
|
||||
<td style="text-align: center;">No</td>
|
||||
<td style="text-align: center;">W</td>
|
||||
|
@@ -11932,7 +11970,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
|
|||
--tpslimit float Limit HTTP transactions per second to this
|
||||
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
|
||||
--use-cookies Enable session cookiejar
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")</code></pre>
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")</code></pre>
|
||||
<h2 id="performance">Performance</h2>
|
||||
<p>Flags helpful for increasing performance.</p>
|
||||
<pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
|
||||
|
@@ -12033,7 +12071,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
|
|||
<h2 id="rc-1">RC</h2>
|
||||
<p>Flags to control the Remote Control API.</p>
|
||||
<pre><code> --rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@@ -12063,7 +12101,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
|
|||
--rc-web-gui-update Check and update to latest version of web gui</code></pre>
|
||||
<h2 id="metrics-1">Metrics</h2>
|
||||
<p>Flags to control the Metrics HTTP endpoint.</p>
|
||||
<pre><code> --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
|
||||
<pre><code> --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
|
||||
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--metrics-baseurl string Prefix for URLs - leave blank for root
|
||||
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@@ -12551,21 +12589,18 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
|
|||
--pcloud-token string OAuth Access Token as a JSON blob
|
||||
--pcloud-token-url string Token server url
|
||||
--pcloud-username string Your pcloud username
|
||||
--pikpak-auth-url string Auth server URL
|
||||
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
|
||||
--pikpak-client-id string OAuth Client Id
|
||||
--pikpak-client-secret string OAuth Client Secret
|
||||
--pikpak-description string Description of the remote
|
||||
--pikpak-device-id string Device ID used for authorization
|
||||
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
|
||||
--pikpak-pass string Pikpak password (obscured)
|
||||
--pikpak-root-folder-id string ID of the root folder
|
||||
--pikpak-token string OAuth Access Token as a JSON blob
|
||||
--pikpak-token-url string Token server url
|
||||
--pikpak-trashed-only Only show files that are in the trash
|
||||
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
|
||||
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
|
||||
--pikpak-user string Pikpak username
|
||||
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
|
||||
--pixeldrain-api-key string API key for your pixeldrain account
|
||||
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
|
||||
--pixeldrain-description string Description of the remote
|
||||
|
@ -14529,6 +14564,18 @@ y/e/d></code></pre>
|
|||
<p>By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly.</p>
|
||||
<p>You can disable this with the <a href="#s3-no-head">--s3-no-head</a> option - see there for more details.</p>
|
||||
<p>Setting this flag increases the chance for undetected upload failures.</p>
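<p>A hedged example of what that looks like on the command line (the bucket and path are placeholders):</p>
<pre><code>rclone copy /path/to/files s3:bucket --s3-no-head</code></pre>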
|
||||
<h3 id="increasing-performance">Increasing performance</h3>
|
||||
<h4 id="using-server-side-copy">Using server-side copy</h4>
|
||||
<p>If you are copying objects between S3 buckets in the same region, you should use server-side copy. This is much faster than downloading and re-uploading the objects, as no data is transferred.</p>
|
||||
<p>For rclone to use server-side copy, you must use the same remote for the source and destination.</p>
|
||||
<pre><code>rclone copy s3:source-bucket s3:destination-bucket</code></pre>
|
||||
<p>When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.</p>
|
||||
<h4 id="increasing-the-rate-of-api-requests">Increasing the rate of API requests</h4>
|
||||
<p>You can increase the rate of API requests to S3 by increasing the parallelism using <code>--transfers</code> and <code>--checkers</code> options.</p>
|
||||
<p>Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests. Depending on your provider, you may be able to increase the number of transfers and checkers significantly.</p>
|
||||
<p>For example, with AWS S3, you can increase the number of checkers to values like 200. If you are doing a server-side copy, you can also increase the number of transfers to 200.</p>
|
||||
<pre><code>rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket</code></pre>
|
||||
<p>You will need to experiment with these values to find the optimal settings for your setup.</p>
|
||||
<h3 id="versions">Versions</h3>
|
||||
<p>When bucket versioning is enabled (this can be done with rclone with the <a href="#versioning"><code>rclone backend versioning</code></a> command), when rclone uploads a new version of a file it creates a <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html">new version of it</a>. Likewise, when you delete a file, the old version will be marked hidden and still be available.</p>
|
||||
<p>Old versions of files, where available, are visible using the <a href="#s3-versions"><code>--s3-versions</code></a> flag.</p>
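<p>For example (the bucket name is a placeholder), versioning can be enabled and old versions listed like this:</p>
<pre><code>rclone backend versioning s3:bucket Enabled
rclone ls --s3-versions s3:bucket</code></pre>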
|
||||
|
@@ -17123,7 +17170,7 @@ acl = private
|
|||
upload_cutoff = 5M
|
||||
chunk_size = 5M
|
||||
copy_cutoff = 5M</code></pre>
|
||||
<p><a href="https://www.online.net/en/storage/c14-cold-storage">C14 Cold Storage</a> is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" <code>storage_class</code>. So you can configure your remote with the <code>storage_class = GLACIER</code> option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)</p>
|
||||
<p><a href="https://www.scaleway.com/en/glacier-cold-storage/">Scaleway Glacier</a> is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" <code>storage_class</code>. So you can configure your remote with the <code>storage_class = GLACIER</code> option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)</p>
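<p>One possible way to set this on an existing remote from the command line (the remote name is a placeholder):</p>
<pre><code>rclone config update scaleway storage_class GLACIER</code></pre>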
|
||||
<h3 id="lyve">Seagate Lyve Cloud</h3>
|
||||
<p><a href="https://www.seagate.com/gb/en/services/cloud/storage/">Seagate Lyve Cloud</a> is an S3 compatible object storage platform from <a href="https://seagate.com/">Seagate</a> intended for enterprise use.</p>
|
||||
<p>Here is a config run through for a remote called <code>remote</code> - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.</p>
|
||||
|
@@ -18322,7 +18369,7 @@ cos s3</code></pre>
|
|||
<p>For Netease NOS configure as per the configurator <code>rclone config</code> setting the provider <code>Netease</code>. This will automatically set <code>force_path_style = false</code> which is necessary for it to run properly.</p>
|
||||
<h3 id="petabox">Petabox</h3>
|
||||
<p>Here is an example of making a <a href="https://petabox.io/">Petabox</a> configuration. First run:</p>
|
||||
<div class="sourceCode" id="cb967"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb967-1"><a href="#cb967-1" aria-hidden="true"></a><span class="ex">rclone</span> config</span></code></pre></div>
|
||||
<div class="sourceCode" id="cb969"><pre class="sourceCode bash"><code class="sourceCode bash"><span id="cb969-1"><a href="#cb969-1" aria-hidden="true"></a><span class="ex">rclone</span> config</span></code></pre></div>
|
||||
<p>This will guide you through an interactive setup process.</p>
|
||||
<pre><code>No remotes found, make a new one?
|
||||
n) New remote
|
||||
|
@@ -24568,7 +24615,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2</code></pre>
|
|||
<li><p>Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".</p></li>
|
||||
<li><p>Choose an application type of "Desktop app" and click "Create". (the default name is fine)</p></li>
|
||||
<li><p>It will show you a client ID and client secret. Make a note of these.</p>
|
||||
<p>(If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step 10 but your destination drive must be part of the same Google Workspace.)</p></li>
|
||||
<p>(If you selected "External" at Step 5 continue to Step 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11 but your destination drive must be part of the same Google Workspace.)</p></li>
|
||||
<li><p>Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. You will also want to add yourself as a test user.</p></li>
|
||||
<li><p>Provide the noted client ID and client secret to rclone.</p></li>
|
||||
</ol>
|
||||
|
@@ -29660,75 +29707,75 @@ rclone rc vfs/refresh recursive=true</code></pre>
|
|||
<p>Permissions are also supported, if <code>--onedrive-metadata-permissions</code> is set. The accepted values for <code>--onedrive-metadata-permissions</code> are "<code>read</code>", "<code>write</code>", "<code>read,write</code>", and "<code>off</code>" (the default). "<code>write</code>" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "<code>read,write</code>" instead of "<code>write</code>" if you wish to update/remove permissions.</p>
|
||||
<p>Permissions are read/written in JSON format using the same schema as the <a href="https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online">OneDrive API</a>, which differs slightly between OneDrive Personal and Business.</p>
|
||||
<p>Example for OneDrive Personal:</p>
|
||||
<div class="sourceCode" id="cb1258"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1258-1"><a href="#cb1258-1" aria-hidden="true"></a><span class="ot">[</span></span>
|
||||
<span id="cb1258-2"><a href="#cb1258-2" aria-hidden="true"></a> <span class="fu">{</span></span>
|
||||
<span id="cb1258-3"><a href="#cb1258-3" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"1234567890ABC!123"</span><span class="fu">,</span></span>
|
||||
<span id="cb1258-4"><a href="#cb1258-4" aria-hidden="true"></a> <span class="dt">"grantedTo"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1258-5"><a href="#cb1258-5" aria-hidden="true"></a> <span class="dt">"user"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1258-6"><a href="#cb1258-6" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"ryan@contoso.com"</span></span>
|
||||
<span id="cb1258-7"><a href="#cb1258-7" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1258-8"><a href="#cb1258-8" aria-hidden="true"></a> <span class="dt">"application"</span><span class="fu">:</span> <span class="fu">{},</span></span>
|
||||
<span id="cb1258-9"><a href="#cb1258-9" aria-hidden="true"></a> <span class="dt">"device"</span><span class="fu">:</span> <span class="fu">{}</span></span>
|
||||
<span id="cb1258-10"><a href="#cb1258-10" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1258-11"><a href="#cb1258-11" aria-hidden="true"></a> <span class="dt">"invitation"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1258-12"><a href="#cb1258-12" aria-hidden="true"></a> <span class="dt">"email"</span><span class="fu">:</span> <span class="st">"ryan@contoso.com"</span></span>
|
||||
<span id="cb1258-13"><a href="#cb1258-13" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1258-14"><a href="#cb1258-14" aria-hidden="true"></a> <span class="dt">"link"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1258-15"><a href="#cb1258-15" aria-hidden="true"></a> <span class="dt">"webUrl"</span><span class="fu">:</span> <span class="st">"https://1drv.ms/t/s!1234567890ABC"</span></span>
|
||||
<span id="cb1258-16"><a href="#cb1258-16" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1258-17"><a href="#cb1258-17" aria-hidden="true"></a> <span class="dt">"roles"</span><span class="fu">:</span> <span class="ot">[</span></span>
|
||||
<span id="cb1258-18"><a href="#cb1258-18" aria-hidden="true"></a> <span class="st">"read"</span></span>
|
||||
<span id="cb1258-19"><a href="#cb1258-19" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
|
||||
<span id="cb1258-20"><a href="#cb1258-20" aria-hidden="true"></a> <span class="dt">"shareId"</span><span class="fu">:</span> <span class="st">"s!1234567890ABC"</span></span>
|
||||
<span id="cb1258-21"><a href="#cb1258-21" aria-hidden="true"></a> <span class="fu">}</span></span>
|
||||
<span id="cb1258-22"><a href="#cb1258-22" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
|
||||
<div class="sourceCode" id="cb1260"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1260-1"><a href="#cb1260-1" aria-hidden="true"></a><span class="ot">[</span></span>
|
||||
<span id="cb1260-2"><a href="#cb1260-2" aria-hidden="true"></a> <span class="fu">{</span></span>
|
||||
<span id="cb1260-3"><a href="#cb1260-3" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"1234567890ABC!123"</span><span class="fu">,</span></span>
|
||||
<span id="cb1260-4"><a href="#cb1260-4" aria-hidden="true"></a> <span class="dt">"grantedTo"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1260-5"><a href="#cb1260-5" aria-hidden="true"></a> <span class="dt">"user"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1260-6"><a href="#cb1260-6" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"ryan@contoso.com"</span></span>
|
||||
<span id="cb1260-7"><a href="#cb1260-7" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1260-8"><a href="#cb1260-8" aria-hidden="true"></a> <span class="dt">"application"</span><span class="fu">:</span> <span class="fu">{},</span></span>
|
||||
<span id="cb1260-9"><a href="#cb1260-9" aria-hidden="true"></a> <span class="dt">"device"</span><span class="fu">:</span> <span class="fu">{}</span></span>
|
||||
<span id="cb1260-10"><a href="#cb1260-10" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1260-11"><a href="#cb1260-11" aria-hidden="true"></a> <span class="dt">"invitation"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1260-12"><a href="#cb1260-12" aria-hidden="true"></a> <span class="dt">"email"</span><span class="fu">:</span> <span class="st">"ryan@contoso.com"</span></span>
|
||||
<span id="cb1260-13"><a href="#cb1260-13" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1260-14"><a href="#cb1260-14" aria-hidden="true"></a> <span class="dt">"link"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1260-15"><a href="#cb1260-15" aria-hidden="true"></a> <span class="dt">"webUrl"</span><span class="fu">:</span> <span class="st">"https://1drv.ms/t/s!1234567890ABC"</span></span>
|
||||
<span id="cb1260-16"><a href="#cb1260-16" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1260-17"><a href="#cb1260-17" aria-hidden="true"></a> <span class="dt">"roles"</span><span class="fu">:</span> <span class="ot">[</span></span>
|
||||
<span id="cb1260-18"><a href="#cb1260-18" aria-hidden="true"></a> <span class="st">"read"</span></span>
|
||||
<span id="cb1260-19"><a href="#cb1260-19" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
|
||||
<span id="cb1260-20"><a href="#cb1260-20" aria-hidden="true"></a> <span class="dt">"shareId"</span><span class="fu">:</span> <span class="st">"s!1234567890ABC"</span></span>
|
||||
<span id="cb1260-21"><a href="#cb1260-21" aria-hidden="true"></a> <span class="fu">}</span></span>
|
||||
<span id="cb1260-22"><a href="#cb1260-22" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
|
||||
<p>Example for OneDrive Business:</p>
|
||||
<div class="sourceCode" id="cb1259"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1259-1"><a href="#cb1259-1" aria-hidden="true"></a><span class="ot">[</span></span>
|
||||
<span id="cb1259-2"><a href="#cb1259-2" aria-hidden="true"></a> <span class="fu">{</span></span>
|
||||
<span id="cb1259-3"><a href="#cb1259-3" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"48d31887-5fad-4d73-a9f5-3c356e68a038"</span><span class="fu">,</span></span>
|
||||
<span id="cb1259-4"><a href="#cb1259-4" aria-hidden="true"></a> <span class="dt">"grantedToIdentities"</span><span class="fu">:</span> <span class="ot">[</span></span>
|
||||
<span id="cb1259-5"><a href="#cb1259-5" aria-hidden="true"></a> <span class="fu">{</span></span>
|
||||
<span id="cb1259-6"><a href="#cb1259-6" aria-hidden="true"></a> <span class="dt">"user"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1259-7"><a href="#cb1259-7" aria-hidden="true"></a> <span class="dt">"displayName"</span><span class="fu">:</span> <span class="st">"ryan@contoso.com"</span></span>
|
||||
<span id="cb1259-8"><a href="#cb1259-8" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1259-9"><a href="#cb1259-9" aria-hidden="true"></a> <span class="dt">"application"</span><span class="fu">:</span> <span class="fu">{},</span></span>
|
||||
<span id="cb1259-10"><a href="#cb1259-10" aria-hidden="true"></a> <span class="dt">"device"</span><span class="fu">:</span> <span class="fu">{}</span></span>
|
||||
<span id="cb1259-11"><a href="#cb1259-11" aria-hidden="true"></a> <span class="fu">}</span></span>
|
||||
<span id="cb1259-12"><a href="#cb1259-12" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
|
||||
<span id="cb1259-13"><a href="#cb1259-13" aria-hidden="true"></a> <span class="dt">"link"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1259-14"><a href="#cb1259-14" aria-hidden="true"></a> <span class="dt">"type"</span><span class="fu">:</span> <span class="st">"view"</span><span class="fu">,</span></span>
|
||||
<span id="cb1259-15"><a href="#cb1259-15" aria-hidden="true"></a> <span class="dt">"scope"</span><span class="fu">:</span> <span class="st">"users"</span><span class="fu">,</span></span>
|
||||
<span id="cb1259-16"><a href="#cb1259-16" aria-hidden="true"></a> <span class="dt">"webUrl"</span><span class="fu">:</span> <span class="st">"https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"</span></span>
|
||||
<span id="cb1259-17"><a href="#cb1259-17" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1259-18"><a href="#cb1259-18" aria-hidden="true"></a> <span class="dt">"roles"</span><span class="fu">:</span> <span class="ot">[</span></span>
|
||||
<span id="cb1259-19"><a href="#cb1259-19" aria-hidden="true"></a> <span class="st">"read"</span></span>
|
||||
<span id="cb1259-20"><a href="#cb1259-20" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
|
||||
<span id="cb1259-21"><a href="#cb1259-21" aria-hidden="true"></a> <span class="dt">"shareId"</span><span class="fu">:</span> <span class="st">"u!LKj1lkdlals90j1nlkascl"</span></span>
|
||||
<span id="cb1259-22"><a href="#cb1259-22" aria-hidden="true"></a> <span class="fu">}</span><span class="ot">,</span></span>
|
||||
<span id="cb1259-23"><a href="#cb1259-23" aria-hidden="true"></a> <span class="fu">{</span></span>
|
||||
<span id="cb1259-24"><a href="#cb1259-24" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"5D33DD65C6932946"</span><span class="fu">,</span></span>
|
||||
<span id="cb1259-25"><a href="#cb1259-25" aria-hidden="true"></a> <span class="dt">"grantedTo"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1259-26"><a href="#cb1259-26" aria-hidden="true"></a> <span class="dt">"user"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1259-27"><a href="#cb1259-27" aria-hidden="true"></a> <span class="dt">"displayName"</span><span class="fu">:</span> <span class="st">"John Doe"</span><span class="fu">,</span></span>
|
||||
<span id="cb1259-28"><a href="#cb1259-28" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"efee1b77-fb3b-4f65-99d6-274c11914d12"</span></span>
|
||||
<span id="cb1259-29"><a href="#cb1259-29" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1259-30"><a href="#cb1259-30" aria-hidden="true"></a> <span class="dt">"application"</span><span class="fu">:</span> <span class="fu">{},</span></span>
|
||||
<span id="cb1259-31"><a href="#cb1259-31" aria-hidden="true"></a> <span class="dt">"device"</span><span class="fu">:</span> <span class="fu">{}</span></span>
|
||||
<span id="cb1259-32"><a href="#cb1259-32" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1259-33"><a href="#cb1259-33" aria-hidden="true"></a> <span class="dt">"roles"</span><span class="fu">:</span> <span class="ot">[</span></span>
|
||||
<span id="cb1259-34"><a href="#cb1259-34" aria-hidden="true"></a> <span class="st">"owner"</span></span>
|
||||
<span id="cb1259-35"><a href="#cb1259-35" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
|
||||
<span id="cb1259-36"><a href="#cb1259-36" aria-hidden="true"></a> <span class="dt">"shareId"</span><span class="fu">:</span> <span class="st">"FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"</span></span>
|
||||
<span id="cb1259-37"><a href="#cb1259-37" aria-hidden="true"></a> <span class="fu">}</span></span>
|
||||
<span id="cb1259-38"><a href="#cb1259-38" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
|
||||
<div class="sourceCode" id="cb1261"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1261-1"><a href="#cb1261-1" aria-hidden="true"></a><span class="ot">[</span></span>
|
||||
<span id="cb1261-2"><a href="#cb1261-2" aria-hidden="true"></a> <span class="fu">{</span></span>
|
||||
<span id="cb1261-3"><a href="#cb1261-3" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"48d31887-5fad-4d73-a9f5-3c356e68a038"</span><span class="fu">,</span></span>
|
||||
<span id="cb1261-4"><a href="#cb1261-4" aria-hidden="true"></a> <span class="dt">"grantedToIdentities"</span><span class="fu">:</span> <span class="ot">[</span></span>
|
||||
<span id="cb1261-5"><a href="#cb1261-5" aria-hidden="true"></a> <span class="fu">{</span></span>
|
||||
<span id="cb1261-6"><a href="#cb1261-6" aria-hidden="true"></a> <span class="dt">"user"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1261-7"><a href="#cb1261-7" aria-hidden="true"></a> <span class="dt">"displayName"</span><span class="fu">:</span> <span class="st">"ryan@contoso.com"</span></span>
|
||||
<span id="cb1261-8"><a href="#cb1261-8" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1261-9"><a href="#cb1261-9" aria-hidden="true"></a> <span class="dt">"application"</span><span class="fu">:</span> <span class="fu">{},</span></span>
|
||||
<span id="cb1261-10"><a href="#cb1261-10" aria-hidden="true"></a> <span class="dt">"device"</span><span class="fu">:</span> <span class="fu">{}</span></span>
|
||||
<span id="cb1261-11"><a href="#cb1261-11" aria-hidden="true"></a> <span class="fu">}</span></span>
|
||||
<span id="cb1261-12"><a href="#cb1261-12" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
|
||||
<span id="cb1261-13"><a href="#cb1261-13" aria-hidden="true"></a> <span class="dt">"link"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1261-14"><a href="#cb1261-14" aria-hidden="true"></a> <span class="dt">"type"</span><span class="fu">:</span> <span class="st">"view"</span><span class="fu">,</span></span>
|
||||
<span id="cb1261-15"><a href="#cb1261-15" aria-hidden="true"></a> <span class="dt">"scope"</span><span class="fu">:</span> <span class="st">"users"</span><span class="fu">,</span></span>
|
||||
<span id="cb1261-16"><a href="#cb1261-16" aria-hidden="true"></a> <span class="dt">"webUrl"</span><span class="fu">:</span> <span class="st">"https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"</span></span>
|
||||
<span id="cb1261-17"><a href="#cb1261-17" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1261-18"><a href="#cb1261-18" aria-hidden="true"></a> <span class="dt">"roles"</span><span class="fu">:</span> <span class="ot">[</span></span>
|
||||
<span id="cb1261-19"><a href="#cb1261-19" aria-hidden="true"></a> <span class="st">"read"</span></span>
|
||||
<span id="cb1261-20"><a href="#cb1261-20" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
|
||||
<span id="cb1261-21"><a href="#cb1261-21" aria-hidden="true"></a> <span class="dt">"shareId"</span><span class="fu">:</span> <span class="st">"u!LKj1lkdlals90j1nlkascl"</span></span>
|
||||
<span id="cb1261-22"><a href="#cb1261-22" aria-hidden="true"></a> <span class="fu">}</span><span class="ot">,</span></span>
|
||||
<span id="cb1261-23"><a href="#cb1261-23" aria-hidden="true"></a> <span class="fu">{</span></span>
|
||||
<span id="cb1261-24"><a href="#cb1261-24" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"5D33DD65C6932946"</span><span class="fu">,</span></span>
|
||||
<span id="cb1261-25"><a href="#cb1261-25" aria-hidden="true"></a> <span class="dt">"grantedTo"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1261-26"><a href="#cb1261-26" aria-hidden="true"></a> <span class="dt">"user"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1261-27"><a href="#cb1261-27" aria-hidden="true"></a> <span class="dt">"displayName"</span><span class="fu">:</span> <span class="st">"John Doe"</span><span class="fu">,</span></span>
|
||||
<span id="cb1261-28"><a href="#cb1261-28" aria-hidden="true"></a> <span class="dt">"id"</span><span class="fu">:</span> <span class="st">"efee1b77-fb3b-4f65-99d6-274c11914d12"</span></span>
|
||||
<span id="cb1261-29"><a href="#cb1261-29" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1261-30"><a href="#cb1261-30" aria-hidden="true"></a> <span class="dt">"application"</span><span class="fu">:</span> <span class="fu">{},</span></span>
|
||||
<span id="cb1261-31"><a href="#cb1261-31" aria-hidden="true"></a> <span class="dt">"device"</span><span class="fu">:</span> <span class="fu">{}</span></span>
|
||||
<span id="cb1261-32"><a href="#cb1261-32" aria-hidden="true"></a> <span class="fu">},</span></span>
|
||||
<span id="cb1261-33"><a href="#cb1261-33" aria-hidden="true"></a> <span class="dt">"roles"</span><span class="fu">:</span> <span class="ot">[</span></span>
|
||||
<span id="cb1261-34"><a href="#cb1261-34" aria-hidden="true"></a> <span class="st">"owner"</span></span>
|
||||
<span id="cb1261-35"><a href="#cb1261-35" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
|
||||
<span id="cb1261-36"><a href="#cb1261-36" aria-hidden="true"></a> <span class="dt">"shareId"</span><span class="fu">:</span> <span class="st">"FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"</span></span>
|
||||
<span id="cb1261-37"><a href="#cb1261-37" aria-hidden="true"></a> <span class="fu">}</span></span>
|
||||
<span id="cb1261-38"><a href="#cb1261-38" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
|
||||
<p>To write permissions, pass in a "permissions" metadata key using this same format. The <a href="https://rclone.org/docs/#metadata-mapper"><code>--metadata-mapper</code></a> tool can be very helpful for this.</p>
|
||||
<p>When adding permissions, an email address can be provided in the <code>User.ID</code> or <code>DisplayName</code> properties of <code>grantedTo</code> or <code>grantedToIdentities</code>. Alternatively, an ObjectID can be provided in <code>User.ID</code>. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if <code>Link.Scope</code> is set to <code>"anonymous"</code>.</p>
|
||||
<p>Example request to add a "read" permission with <code>--metadata-mapper</code>:</p>
|
||||
<div class="sourceCode" id="cb1260"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1260-1"><a href="#cb1260-1" aria-hidden="true"></a><span class="fu">{</span></span>
|
||||
<span id="cb1260-2"><a href="#cb1260-2" aria-hidden="true"></a> <span class="dt">"Metadata"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1260-3"><a href="#cb1260-3" aria-hidden="true"></a> <span class="dt">"permissions"</span><span class="fu">:</span> <span class="st">"[{</span><span class="ch">\"</span><span class="st">grantedToIdentities</span><span class="ch">\"</span><span class="st">:[{</span><span class="ch">\"</span><span class="st">user</span><span class="ch">\"</span><span class="st">:{</span><span class="ch">\"</span><span class="st">id</span><span class="ch">\"</span><span class="st">:</span><span class="ch">\"</span><span class="st">ryan@contoso.com</span><span class="ch">\"</span><span class="st">}}],</span><span class="ch">\"</span><span class="st">roles</span><span class="ch">\"</span><span class="st">:[</span><span class="ch">\"</span><span class="st">read</span><span class="ch">\"</span><span class="st">]}]"</span></span>
|
||||
<span id="cb1260-4"><a href="#cb1260-4" aria-hidden="true"></a> <span class="fu">}</span></span>
|
||||
<span id="cb1260-5"><a href="#cb1260-5" aria-hidden="true"></a><span class="fu">}</span></span></code></pre></div>
|
||||
<div class="sourceCode" id="cb1262"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1262-1"><a href="#cb1262-1" aria-hidden="true"></a><span class="fu">{</span></span>
|
||||
<span id="cb1262-2"><a href="#cb1262-2" aria-hidden="true"></a> <span class="dt">"Metadata"</span><span class="fu">:</span> <span class="fu">{</span></span>
|
||||
<span id="cb1262-3"><a href="#cb1262-3" aria-hidden="true"></a> <span class="dt">"permissions"</span><span class="fu">:</span> <span class="st">"[{</span><span class="ch">\"</span><span class="st">grantedToIdentities</span><span class="ch">\"</span><span class="st">:[{</span><span class="ch">\"</span><span class="st">user</span><span class="ch">\"</span><span class="st">:{</span><span class="ch">\"</span><span class="st">id</span><span class="ch">\"</span><span class="st">:</span><span class="ch">\"</span><span class="st">ryan@contoso.com</span><span class="ch">\"</span><span class="st">}}],</span><span class="ch">\"</span><span class="st">roles</span><span class="ch">\"</span><span class="st">:[</span><span class="ch">\"</span><span class="st">read</span><span class="ch">\"</span><span class="st">]}]"</span></span>
|
||||
<span id="cb1262-4"><a href="#cb1262-4" aria-hidden="true"></a> <span class="fu">}</span></span>
|
||||
<span id="cb1262-5"><a href="#cb1262-5" aria-hidden="true"></a><span class="fu">}</span></span></code></pre></div>
|
||||
<p>Note that adding a permission can fail if a conflicting permission already exists for the file/folder.</p>
|
||||
<p>To update an existing permission, include both the Permission ID and the new <code>roles</code> to be assigned. <code>roles</code> is the only property that can be changed.</p>
|
||||
<p>To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.) Note that the <code>owner</code> role will be ignored, as it cannot be removed.</p>
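<p>To inspect the permissions rclone reads for a single object, something along these lines can be used (the path is a placeholder):</p>
<pre><code>rclone lsjson --stat -M --onedrive-metadata-permissions read remote:path/to/file</code></pre>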
|
||||
|
@@ -32200,54 +32247,24 @@ y/e/d> y</code></pre>
|
|||
</ul>
|
||||
<h3 id="advanced-options-42">Advanced options</h3>
|
||||
<p>Here are the Advanced options specific to pikpak (PikPak).</p>
|
||||
<h4 id="pikpak-client-id">--pikpak-client-id</h4>
|
||||
<p>OAuth Client Id.</p>
|
||||
<p>Leave blank normally.</p>
|
||||
<h4 id="pikpak-device-id">--pikpak-device-id</h4>
|
||||
<p>Device ID used for authorization.</p>
|
||||
<p>Properties:</p>
|
||||
<ul>
|
||||
<li>Config: client_id</li>
|
||||
<li>Env Var: RCLONE_PIKPAK_CLIENT_ID</li>
|
||||
<li>Config: device_id</li>
|
||||
<li>Env Var: RCLONE_PIKPAK_DEVICE_ID</li>
|
||||
<li>Type: string</li>
|
||||
<li>Required: false</li>
|
||||
</ul>
|
||||
<h4 id="pikpak-client-secret">--pikpak-client-secret</h4>
|
||||
<p>OAuth Client Secret.</p>
|
||||
<p>Leave blank normally.</p>
|
||||
<h4 id="pikpak-user-agent">--pikpak-user-agent</h4>
|
||||
<p>HTTP user agent for pikpak.</p>
|
||||
<p>Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.</p>
|
||||
<p>Properties:</p>
|
||||
<ul>
|
||||
<li>Config: client_secret</li>
|
||||
<li>Env Var: RCLONE_PIKPAK_CLIENT_SECRET</li>
|
||||
<li>Config: user_agent</li>
|
||||
<li>Env Var: RCLONE_PIKPAK_USER_AGENT</li>
|
||||
<li>Type: string</li>
|
||||
<li>Required: false</li>
|
||||
</ul>
|
||||
<h4 id="pikpak-token">--pikpak-token</h4>
|
||||
<p>OAuth Access Token as a JSON blob.</p>
|
||||
<p>Properties:</p>
|
||||
<ul>
|
||||
<li>Config: token</li>
|
||||
<li>Env Var: RCLONE_PIKPAK_TOKEN</li>
|
||||
<li>Type: string</li>
|
||||
<li>Required: false</li>
|
||||
</ul>
|
||||
<h4 id="pikpak-auth-url">--pikpak-auth-url</h4>
|
||||
<p>Auth server URL.</p>
|
||||
<p>Leave blank to use the provider defaults.</p>
|
||||
<p>Properties:</p>
|
||||
<ul>
|
||||
<li>Config: auth_url</li>
|
||||
<li>Env Var: RCLONE_PIKPAK_AUTH_URL</li>
|
||||
<li>Type: string</li>
|
||||
<li>Required: false</li>
|
||||
</ul>
|
||||
<h4 id="pikpak-token-url">--pikpak-token-url</h4>
|
||||
<p>Token server url.</p>
|
||||
<p>Leave blank to use the provider defaults.</p>
|
||||
<p>Properties:</p>
|
||||
<ul>
|
||||
<li>Config: token_url</li>
|
||||
<li>Env Var: RCLONE_PIKPAK_TOKEN_URL</li>
|
||||
<li>Type: string</li>
|
||||
<li>Required: false</li>
|
||||
<li>Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"</li>
|
||||
</ul>
|
||||
<h4 id="pikpak-root-folder-id">--pikpak-root-folder-id</h4>
|
||||
<p>ID of the root folder. Leave blank normally.</p>
|
||||
|
@@ -36949,6 +36966,79 @@ $ tree /tmp/c
|
|||
<li>"error": return an error based on option value</li>
|
||||
</ul>
|
||||
<h1 id="changelog-1">Changelog</h1>
|
||||
<h2 id="v1.68.2---2024-11-15">v1.68.2 - 2024-11-15</h2>
|
||||
<p><a href="https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2">See commits</a></p>
|
||||
<ul>
|
||||
<li>Security fixes
|
||||
<ul>
|
||||
<li>local backend: CVE-2024-52522: fix permission and ownership on symlinks with <code>--links</code> and <code>--metadata</code> (Nick Craig-Wood)
|
||||
<ul>
|
||||
<li>Only affects users using <code>--metadata</code> and <code>--links</code> and copying files to the local backend</li>
|
||||
<li>See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv</li>
|
||||
</ul></li>
|
||||
<li>build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
|
||||
<ul>
|
||||
<li>This is an issue in a dependency which is used for JWT certificates</li>
|
||||
<li>See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r</li>
|
||||
</ul></li>
|
||||
</ul></li>
|
||||
<li>Bug Fixes
|
||||
<ul>
|
||||
<li>accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)</li>
|
||||
<li>bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)</li>
|
||||
<li>dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)</li>
|
||||
<li>serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)</li>
|
||||
<li>doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)</li>
|
||||
</ul></li>
|
||||
<li>Local
|
||||
<ul>
|
||||
<li>Fix permission and ownership on symlinks with <code>--links</code> and <code>--metadata</code> (Nick Craig-Wood)</li>
|
||||
<li>Fix <code>--copy-links</code> on macOS when cloning (nielash)</li>
|
||||
</ul></li>
|
||||
<li>Onedrive
|
||||
<ul>
|
||||
<li>Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)</li>
|
||||
</ul></li>
|
||||
<li>Pikpak
|
||||
<ul>
|
||||
<li>Fix cid/gcid calculations for fs.OverrideRemote (wiserain)</li>
|
||||
<li>Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)</li>
|
||||
</ul></li>
|
||||
<li>S3
|
||||
<ul>
|
||||
<li>Fix crash when using <code>--s3-download-url</code> after migration to SDKv2 (Nick Craig-Wood)</li>
|
||||
<li>Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)</li>
|
||||
<li>Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)</li>
|
||||
</ul></li>
|
||||
</ul>
|
||||
<h2 id="v1.68.1---2024-09-24">v1.68.1 - 2024-09-24</h2>
|
||||
<p><a href="https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1">See commits</a></p>
|
||||
<ul>
|
||||
<li>Bug Fixes
|
||||
<ul>
|
||||
<li>build: Fix docker release build (ttionya)</li>
|
||||
<li>doc fixes (Nick Craig-Wood, Pawel Palucha)</li>
|
||||
<li>fs
|
||||
<ul>
|
||||
<li>Fix <code>--dump filters</code> not always appearing (Nick Craig-Wood)</li>
|
||||
<li>Fix setting <code>stringArray</code> config values from environment variables (Nick Craig-Wood)</li>
|
||||
</ul></li>
|
||||
<li>rc: Fix default value of <code>--metrics-addr</code> (Nick Craig-Wood)</li>
|
||||
<li>serve docker: Add missing <code>vfs-read-chunk-streams</code> option in docker volume driver (Divyam)</li>
|
||||
</ul></li>
|
||||
<li>Onedrive
|
||||
<ul>
|
||||
<li>Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick Craig-Wood)</li>
|
||||
</ul></li>
|
||||
<li>Pikpak
|
||||
<ul>
|
||||
<li>Fix login issue where token retrieval fails (wiserain)</li>
|
||||
</ul></li>
|
||||
<li>S3
|
||||
<ul>
|
||||
<li>Fix rclone ignoring static credentials when <code>env_auth=true</code> (Nick Craig-Wood)</li>
|
||||
</ul></li>
|
||||
</ul>
|
||||
<h2 id="v1.68.0---2024-09-08">v1.68.0 - 2024-09-08</h2>
|
||||
<p><a href="https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0">See commits</a></p>
|
||||
<ul>
|
||||
MANUAL.md (generated): 206 changes
|
@@ -1,6 +1,6 @@
|
|||
% rclone(1) User Manual
|
||||
% Nick Craig-Wood
|
||||
% Sep 08, 2024
|
||||
% Nov 15, 2024
|
||||
|
||||
# Rclone syncs your files to cloud storage
|
||||
|
||||
|
@@ -5259,7 +5259,9 @@ When running in background mode the user will have to stop the mount manually:
|
|||
|
||||
# Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount
|
||||
|
||||
The umount operation can fail, for example when the mountpoint is busy.
|
||||
|
@@ -5603,9 +5605,9 @@ Note that systemd runs mount units without any environment variables including
|
|||
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
|
||||
and you should provide `--config` and `--cache-dir` explicitly as absolute
|
||||
paths via rclone arguments.
|
||||
Since mounting requires the `fusermount` program, rclone will use the fallback
|
||||
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
|
||||
is present on this PATH.
|
||||
Since mounting requires the `fusermount` or `fusermount3` program,
|
||||
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
|
||||
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
|
||||
|
||||
## Rclone as Unix mount helper
|
||||
|
||||
|
@ -6472,7 +6474,9 @@ When running in background mode the user will have to stop the mount manually:
|
|||
|
||||
# Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount
|
||||
|
||||
The umount operation can fail, for example when the mountpoint is busy.
|
||||
|
@ -6816,9 +6820,9 @@ Note that systemd runs mount units without any environment variables including
|
|||
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
|
||||
and you should provide `--config` and `--cache-dir` explicitly as absolute
|
||||
paths via rclone arguments.
|
||||
Since mounting requires the `fusermount` program, rclone will use the fallback
|
||||
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
|
||||
is present on this PATH.
|
||||
Since mounting requires the `fusermount` or `fusermount3` program,
|
||||
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
|
||||
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
|
||||
|
||||
## Rclone as Unix mount helper
|
||||
|
||||
|
@ -7734,7 +7738,7 @@ Flags to control the Remote Control API
|
|||
|
||||
```
|
||||
--rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -16254,6 +16258,22 @@ so they take exactly the same form.
|
|||
|
||||
The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
|
||||
|
||||
Options that can appear multiple times (type `stringArray`) are
|
||||
treated slightly differently as environment variables can only be
|
||||
defined once. In order to allow a simple mechanism for adding one or
|
||||
many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/csv)
|
||||
string. For example
|
||||
|
||||
| Environment Variable | Equivalent options |
|
||||
|----------------------|--------------------|
|
||||
| `RCLONE_EXCLUDE="*.jpg"` | `--exclude "*.jpg"` |
|
||||
| `RCLONE_EXCLUDE="*.jpg,*.png"` | `--exclude "*.jpg"` `--exclude "*.png"` |
|
||||
| `RCLONE_EXCLUDE='"*.jpg","*.png"'` | `--exclude "*.jpg"` `--exclude "*.png"` |
|
||||
| `RCLONE_EXCLUDE='"/directory with comma , in it /**"'` | `--exclude "/directory with comma , in it /**" |
|
||||
|
||||
If `stringArray` options are defined as environment variables **and**
|
||||
options on the command line then all the values will be used.
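
As a concrete illustration of the CSV treatment described above, here is a minimal Go sketch (not rclone's actual option parser; the `RCLONE_EXCLUDE` fallback value is only an example) that splits a `stringArray`-style environment variable into one value per flag using the `encoding/csv` package linked above.

```
// Minimal sketch only - this is not rclone's real option parser.
package main

import (
	"encoding/csv"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Assume something like RCLONE_EXCLUDE='"*.jpg","*.png"' was exported.
	raw := os.Getenv("RCLONE_EXCLUDE")
	if raw == "" {
		raw = `"*.jpg","*.png"` // fallback so the sketch runs on its own
	}

	// One CSV record yields one string per repeated --exclude flag,
	// so quoted values may safely contain commas.
	values, err := csv.NewReader(strings.NewReader(raw)).Read()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot parse CSV value:", err)
		os.Exit(1)
	}
	for _, v := range values {
		fmt.Printf("--exclude %q\n", v)
	}
}
```

With the variable unset, the sketch prints `--exclude "*.jpg"` and `--exclude "*.png"`, matching the second row of the table above.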
|
||||
|
||||
### Config file ###
|
||||
|
||||
You can set defaults for values in the config file on an individual
|
||||
|
@ -16950,6 +16970,8 @@ processed in.
|
|||
Arrange the order of filter rules with the most restrictive first and
|
||||
work down.
|
||||
|
||||
Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. _Use `-vv --dump filters` to see how they appear in the final regexp._
|
||||
|
||||
E.g. for `filter-file.txt`:
|
||||
|
||||
# a sample filter rule file
|
||||
|
@ -16957,6 +16979,7 @@ E.g. for `filter-file.txt`:
|
|||
+ *.jpg
|
||||
+ *.png
|
||||
+ file2.avi
|
||||
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
|
||||
- /dir/Trash/**
|
||||
+ /dir/**
|
||||
# exclude everything else
|
||||
|
@ -19767,7 +19790,7 @@ Here is an overview of the major features of each cloud storage system.
|
|||
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
|
||||
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
|
||||
| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
|
||||
| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
|
||||
| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
|
||||
| PikPak | MD5 | R | No | No | R | - |
|
||||
| Pixeldrain | SHA256 | R/W | No | No | R | RW |
|
||||
| premiumize.me | - | - | Yes | No | R | - |
|
||||
|
@ -20474,7 +20497,7 @@ Flags for general networking and HTTP stuff.
|
|||
--tpslimit float Limit HTTP transactions per second to this
|
||||
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
|
||||
--use-cookies Enable session cookiejar
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
|
||||
```
|
||||
|
||||
|
||||
|
@ -20623,7 +20646,7 @@ Flags to control the Remote Control API.
|
|||
|
||||
```
|
||||
--rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -20659,7 +20682,7 @@ Flags to control the Remote Control API.
|
|||
Flags to control the Metrics HTTP endpoint.
|
||||
|
||||
```
|
||||
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
|
||||
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
|
||||
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--metrics-baseurl string Prefix for URLs - leave blank for root
|
||||
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -21153,21 +21176,18 @@ Backend-only flags (these can be set in the config file also).
|
|||
--pcloud-token string OAuth Access Token as a JSON blob
|
||||
--pcloud-token-url string Token server url
|
||||
--pcloud-username string Your pcloud username
|
||||
--pikpak-auth-url string Auth server URL
|
||||
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
|
||||
--pikpak-client-id string OAuth Client Id
|
||||
--pikpak-client-secret string OAuth Client Secret
|
||||
--pikpak-description string Description of the remote
|
||||
--pikpak-device-id string Device ID used for authorization
|
||||
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
|
||||
--pikpak-pass string Pikpak password (obscured)
|
||||
--pikpak-root-folder-id string ID of the root folder
|
||||
--pikpak-token string OAuth Access Token as a JSON blob
|
||||
--pikpak-token-url string Token server url
|
||||
--pikpak-trashed-only Only show files that are in the trash
|
||||
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
|
||||
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
|
||||
--pikpak-user string Pikpak username
|
||||
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
|
||||
--pixeldrain-api-key string API key for your pixeldrain account
|
||||
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
|
||||
--pixeldrain-description string Description of the remote
|
||||
|
@ -24741,6 +24761,38 @@ there for more details.
|
|||
|
||||
Setting this flag increases the chance for undetected upload failures.
|
||||
|
||||
### Increasing performance
|
||||
|
||||
#### Using server-side copy
|
||||
|
||||
If you are copying objects between S3 buckets in the same region, you should
|
||||
use server-side copy.
|
||||
This is much faster than downloading and re-uploading the objects, as no data is transferred.
|
||||
|
||||
For rclone to use server-side copy, you must use the same remote for the source and destination.
|
||||
|
||||
rclone copy s3:source-bucket s3:destination-bucket
|
||||
|
||||
When using server-side copy, the performance is limited by the rate at which rclone issues
|
||||
API requests to S3.
|
||||
See below for how to increase the number of API requests rclone makes.
|
||||
|
||||
#### Increasing the rate of API requests
|
||||
|
||||
You can increase the rate of API requests to S3 by increasing the parallelism using `--transfers` and `--checkers`
|
||||
options.
|
||||
|
||||
Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests.
|
||||
Depending on your provider, you can significantly increase the number of transfers and checkers.
|
||||
|
||||
For example, with AWS S3, you can increase the number of checkers to values like 200.
|
||||
If you are doing a server-side copy, you can also increase the number of transfers to 200.
|
||||
|
||||
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
|
||||
|
||||
You will need to experiment with these values to find the optimal settings for your setup.
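
To illustrate why raising these values raises the request rate, here is a conceptual Go sketch of a bounded worker pool; this is not rclone's scheduler, and the object names and timings are made up. The semaphore plays the role of `--transfers`/`--checkers`: the larger its capacity, the more API calls are in flight at once.

```
// Conceptual sketch only - not rclone's transfer scheduler. It shows why a
// larger "transfers"-style concurrency limit raises the rate of API calls.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	const transfers = 4 // analogous to --transfers / --checkers
	objects := []string{"a", "b", "c", "d", "e", "f", "g", "h"}

	sem := make(chan struct{}, transfers) // at most `transfers` calls in flight
	var wg sync.WaitGroup

	start := time.Now()
	for _, obj := range objects {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot; blocks when the limit is reached
		go func(name string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			time.Sleep(100 * time.Millisecond) // stand-in for one S3 API request
			fmt.Println("copied", name)
		}(obj)
	}
	wg.Wait()
	fmt.Println("elapsed:", time.Since(start).Round(10*time.Millisecond))
}
```

With a limit of 4, the eight simulated requests take roughly twice as long as with a limit of 8, which is the effect increasing `--transfers` and `--checkers` has on real API traffic.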
|
||||
|
||||
|
||||
### Versions
|
||||
|
||||
When bucket versioning is enabled (this can be done with rclone with
|
||||
|
@ -27784,8 +27836,8 @@ chunk_size = 5M
|
|||
copy_cutoff = 5M
|
||||
```
|
||||
|
||||
[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
|
||||
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
|
||||
[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
|
||||
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
|
||||
|
||||
### Seagate Lyve Cloud {#lyve}
|
||||
|
||||
|
@ -37849,9 +37901,9 @@ then select "OAuth client ID".
|
|||
|
||||
9. It will show you a client ID and client secret. Make a note of these.
|
||||
|
||||
(If you selected "External" at Step 5 continue to Step 9.
|
||||
(If you selected "External" at Step 5 continue to Step 10.
|
||||
If you chose "Internal" you don't need to publish and can skip straight to
|
||||
Step 10 but your destination drive must be part of the same Google Workspace.)
|
||||
Step 11 but your destination drive must be part of the same Google Workspace.)
|
||||
|
||||
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
|
||||
You will also want to add yourself as a test user.
|
||||
|
@ -48038,68 +48090,29 @@ Properties:
|
|||
|
||||
Here are the Advanced options specific to pikpak (PikPak).
|
||||
|
||||
#### --pikpak-client-id
|
||||
#### --pikpak-device-id
|
||||
|
||||
OAuth Client Id.
|
||||
|
||||
Leave blank normally.
|
||||
Device ID used for authorization.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_PIKPAK_CLIENT_ID
|
||||
- Config: device_id
|
||||
- Env Var: RCLONE_PIKPAK_DEVICE_ID
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --pikpak-client-secret
|
||||
#### --pikpak-user-agent
|
||||
|
||||
OAuth Client Secret.
|
||||
HTTP user agent for pikpak.
|
||||
|
||||
Leave blank normally.
|
||||
Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
|
||||
- Config: user_agent
|
||||
- Env Var: RCLONE_PIKPAK_USER_AGENT
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --pikpak-token
|
||||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_PIKPAK_TOKEN
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --pikpak-auth-url
|
||||
|
||||
Auth server URL.
|
||||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_PIKPAK_AUTH_URL
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --pikpak-token-url
|
||||
|
||||
Token server url.
|
||||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_PIKPAK_TOKEN_URL
|
||||
- Type: string
|
||||
- Required: false
|
||||
- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
|
||||
|
||||
#### --pikpak-root-folder-id
|
||||
|
||||
|
@ -54602,6 +54615,55 @@ Options:
|
|||
|
||||
# Changelog
|
||||
|
||||
## v1.68.2 - 2024-11-15
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
|
||||
|
||||
* Security fixes
|
||||
* local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
|
||||
* Only affects users using `--metadata` and `--links` and copying files to the local backend
|
||||
* See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
|
||||
* build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
|
||||
* This is an issue in a dependency which is used for JWT certificates
|
||||
* See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
|
||||
* Bug Fixes
|
||||
* accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
|
||||
* bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
|
||||
* dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
|
||||
* serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
|
||||
* doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
|
||||
* Local
|
||||
* Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
|
||||
* Fix `--copy-links` on macOS when cloning (nielash)
|
||||
* Onedrive
|
||||
* Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
|
||||
* Pikpak
|
||||
* Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
|
||||
* Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
|
||||
* S3
|
||||
* Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
|
||||
* Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
|
||||
* Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
|
||||
|
||||
## v1.68.1 - 2024-09-24
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
|
||||
|
||||
* Bug Fixes
|
||||
* build: Fix docker release build (ttionya)
|
||||
* doc fixes (Nick Craig-Wood, Pawel Palucha)
|
||||
* fs
|
||||
* Fix `--dump filters` not always appearing (Nick Craig-Wood)
|
||||
* Fix setting `stringArray` config values from environment variables (Nick Craig-Wood)
|
||||
* rc: Fix default value of `--metrics-addr` (Nick Craig-Wood)
|
||||
* serve docker: Add missing `vfs-read-chunk-streams` option in docker volume driver (Divyam)
|
||||
* Onedrive
|
||||
* Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick Craig-Wood)
|
||||
* Pikpak
|
||||
* Fix login issue where token retrieval fails (wiserain)
|
||||
* S3
|
||||
* Fix rclone ignoring static credentials when `env_auth=true` (Nick Craig-Wood)
|
||||
|
||||
## v1.68.0 - 2024-09-08
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0)
|
||||
|
|
245 MANUAL.txt generated
|
@ -1,6 +1,6 @@
|
|||
rclone(1) User Manual
|
||||
Nick Craig-Wood
|
||||
Sep 08, 2024
|
||||
Nov 15, 2024
|
||||
|
||||
Rclone syncs your files to cloud storage
|
||||
|
||||
|
@ -4843,7 +4843,9 @@ manually:
|
|||
|
||||
# Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount
|
||||
|
||||
The umount operation can fail, for example when the mountpoint is busy.
|
||||
|
@ -5188,8 +5190,9 @@ Note that systemd runs mount units without any environment variables
|
|||
including PATH or HOME. This means that tilde (~) expansion will not
|
||||
work and you should provide --config and --cache-dir explicitly as
|
||||
absolute paths via rclone arguments. Since mounting requires the
|
||||
fusermount program, rclone will use the fallback PATH of /bin:/usr/bin
|
||||
in this scenario. Please ensure that fusermount is present on this PATH.
|
||||
fusermount or fusermount3 program, rclone will use the fallback PATH of
|
||||
/bin:/usr/bin in this scenario. Please ensure that
|
||||
fusermount/fusermount3 is present on this PATH.
|
||||
|
||||
Rclone as Unix mount helper
|
||||
|
||||
|
@ -6027,7 +6030,9 @@ manually:
|
|||
|
||||
# Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount
|
||||
|
||||
The umount operation can fail, for example when the mountpoint is busy.
|
||||
|
@ -6372,8 +6377,9 @@ Note that systemd runs mount units without any environment variables
|
|||
including PATH or HOME. This means that tilde (~) expansion will not
|
||||
work and you should provide --config and --cache-dir explicitly as
|
||||
absolute paths via rclone arguments. Since mounting requires the
|
||||
fusermount program, rclone will use the fallback PATH of /bin:/usr/bin
|
||||
in this scenario. Please ensure that fusermount is present on this PATH.
|
||||
fusermount or fusermount3 program, rclone will use the fallback PATH of
|
||||
/bin:/usr/bin in this scenario. Please ensure that
|
||||
fusermount/fusermount3 is present on this PATH.
|
||||
|
||||
Rclone as Unix mount helper
|
||||
|
||||
|
@ -7298,7 +7304,7 @@ RC Options
|
|||
Flags to control the Remote Control API
|
||||
|
||||
--rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -15704,6 +15710,29 @@ they take exactly the same form.
|
|||
The options set by environment variables can be seen with the -vv flag,
|
||||
e.g. rclone version -vv.
|
||||
|
||||
Options that can appear multiple times (type stringArray) are treated
|
||||
slightly differently as environment variables can only be defined once.
|
||||
In order to allow a simple mechanism for adding one or many items, the
|
||||
input is treated as a CSV encoded string. For example
|
||||
|
||||
----------------------------------------------------------------------------------------
|
||||
Environment Variable Equivalent options
|
||||
------------------------------------------------------ ---------------------------------
|
||||
RCLONE_EXCLUDE="*.jpg" --exclude "*.jpg"
|
||||
|
||||
RCLONE_EXCLUDE="*.jpg,*.png" --exclude "*.jpg"
|
||||
--exclude "*.png"
|
||||
|
||||
RCLONE_EXCLUDE='"*.jpg","*.png"' --exclude "*.jpg"
|
||||
--exclude "*.png"
|
||||
|
||||
RCLONE_EXCLUDE='"/directory with comma , in it /**"' `--exclude "/directory with comma
|
||||
, in it /**"
|
||||
----------------------------------------------------------------------------------------
|
||||
|
||||
If stringArray options are defined as environment variables and options
|
||||
on the command line then all the values will be used.
|
||||
|
||||
Config file
|
||||
|
||||
You can set defaults for values in the config file on an individual
|
||||
|
@ -16399,6 +16428,10 @@ processed in.
|
|||
Arrange the order of filter rules with the most restrictive first and
|
||||
work down.
|
||||
|
||||
Lines starting with # or ; are ignored, and can be used to write
|
||||
comments. Inline comments are not supported. Use -vv --dump filters to
|
||||
see how they appear in the final regexp.
|
||||
|
||||
E.g. for filter-file.txt:
|
||||
|
||||
# a sample filter rule file
|
||||
|
@ -16406,6 +16439,7 @@ E.g. for filter-file.txt:
|
|||
+ *.jpg
|
||||
+ *.png
|
||||
+ file2.avi
|
||||
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
|
||||
- /dir/Trash/**
|
||||
+ /dir/**
|
||||
# exclude everything else
|
||||
|
@ -19240,7 +19274,7 @@ Here is an overview of the major features of each cloud storage system.
|
|||
OpenDrive MD5 R/W Yes Partial ⁸ - -
|
||||
OpenStack Swift MD5 R/W No No R/W -
|
||||
Oracle Object Storage MD5 R/W No No R/W -
|
||||
pCloud MD5, SHA1 ⁷ R No No W -
|
||||
pCloud MD5, SHA1 ⁷ R/W No No W -
|
||||
PikPak MD5 R No No R -
|
||||
Pixeldrain SHA256 R/W No No R RW
|
||||
premiumize.me - - Yes No R -
|
||||
|
@ -20047,7 +20081,7 @@ Flags for general networking and HTTP stuff.
|
|||
--tpslimit float Limit HTTP transactions per second to this
|
||||
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
|
||||
--use-cookies Enable session cookiejar
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
|
||||
|
||||
Performance
|
||||
|
||||
|
@ -20172,7 +20206,7 @@ RC
|
|||
Flags to control the Remote Control API.
|
||||
|
||||
--rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -20205,7 +20239,7 @@ Metrics
|
|||
|
||||
Flags to control the Metrics HTTP endpoint..
|
||||
|
||||
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
|
||||
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
|
||||
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--metrics-baseurl string Prefix for URLs - leave blank for root
|
||||
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -20696,21 +20730,18 @@ Backend-only flags (these can be set in the config file also).
|
|||
--pcloud-token string OAuth Access Token as a JSON blob
|
||||
--pcloud-token-url string Token server url
|
||||
--pcloud-username string Your pcloud username
|
||||
--pikpak-auth-url string Auth server URL
|
||||
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
|
||||
--pikpak-client-id string OAuth Client Id
|
||||
--pikpak-client-secret string OAuth Client Secret
|
||||
--pikpak-description string Description of the remote
|
||||
--pikpak-device-id string Device ID used for authorization
|
||||
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
|
||||
--pikpak-pass string Pikpak password (obscured)
|
||||
--pikpak-root-folder-id string ID of the root folder
|
||||
--pikpak-token string OAuth Access Token as a JSON blob
|
||||
--pikpak-token-url string Token server url
|
||||
--pikpak-trashed-only Only show files that are in the trash
|
||||
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
|
||||
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
|
||||
--pikpak-user string Pikpak username
|
||||
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
|
||||
--pixeldrain-api-key string API key for your pixeldrain account
|
||||
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
|
||||
--pixeldrain-description string Description of the remote
|
||||
|
@ -24275,6 +24306,41 @@ details.
|
|||
|
||||
Setting this flag increases the chance for undetected upload failures.
|
||||
|
||||
Increasing performance
|
||||
|
||||
Using server-side copy
|
||||
|
||||
If you are copying objects between S3 buckets in the same region, you
|
||||
should use server-side copy. This is much faster than downloading and
|
||||
re-uploading the objects, as no data is transferred.
|
||||
|
||||
For rclone to use server-side copy, you must use the same remote for the
|
||||
source and destination.
|
||||
|
||||
rclone copy s3:source-bucket s3:destination-bucket
|
||||
|
||||
When using server-side copy, the performance is limited by the rate at
|
||||
which rclone issues API requests to S3. See below for how to increase
|
||||
the number of API requests rclone makes.
|
||||
|
||||
Increasing the rate of API requests
|
||||
|
||||
You can increase the rate of API requests to S3 by increasing the
|
||||
parallelism using --transfers and --checkers options.
|
||||
|
||||
Rclone uses very conservative defaults for these settings, as not all
|
||||
providers support high rates of requests. Depending on your provider,
|
||||
you can significantly increase the number of transfers and checkers.
|
||||
|
||||
For example, with AWS S3, you can increase the number of checkers to
|
||||
values like 200. If you are doing a server-side copy, you can also
|
||||
increase the number of transfers to 200.
|
||||
|
||||
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
|
||||
|
||||
You will need to experiment with these values to find the optimal
|
||||
settings for your setup.
|
||||
|
||||
Versions
|
||||
|
||||
When bucket versioning is enabled (this can be done with rclone with the
|
||||
|
@ -27303,13 +27369,13 @@ rclone like this:
|
|||
chunk_size = 5M
|
||||
copy_cutoff = 5M
|
||||
|
||||
C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway
|
||||
Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway
|
||||
and it works the same way as on S3 by accepting the "GLACIER"
|
||||
storage_class. So you can configure your remote with the
|
||||
storage_class = GLACIER option to upload directly to C14. Don't forget
|
||||
that in this state you can't read files back after, you will need to
|
||||
restore them to "STANDARD" storage_class first before being able to read
|
||||
them (see "restore" section above)
|
||||
storage_class = GLACIER option to upload directly to Scaleway Glacier.
|
||||
Don't forget that in this state you can't read files back after, you
|
||||
will need to restore them to "STANDARD" storage_class first before being
|
||||
able to read them (see "restore" section above)
|
||||
|
||||
Seagate Lyve Cloud
|
||||
|
||||
|
@ -37263,9 +37329,9 @@ Here is how to create your own Google Drive client ID for rclone:
|
|||
9. It will show you a client ID and client secret. Make a note of
|
||||
these.
|
||||
|
||||
(If you selected "External" at Step 5 continue to Step 9. If you
|
||||
(If you selected "External" at Step 5 continue to Step 10. If you
|
||||
chose "Internal" you don't need to publish and can skip straight to
|
||||
Step 10 but your destination drive must be part of the same Google
|
||||
Step 11 but your destination drive must be part of the same Google
|
||||
Workspace.)
|
||||
|
||||
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and
|
||||
|
@ -47695,68 +47761,32 @@ Advanced options
|
|||
|
||||
Here are the Advanced options specific to pikpak (PikPak).
|
||||
|
||||
--pikpak-client-id
|
||||
--pikpak-device-id
|
||||
|
||||
OAuth Client Id.
|
||||
|
||||
Leave blank normally.
|
||||
Device ID used for authorization.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_PIKPAK_CLIENT_ID
|
||||
- Config: device_id
|
||||
- Env Var: RCLONE_PIKPAK_DEVICE_ID
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
--pikpak-client-secret
|
||||
--pikpak-user-agent
|
||||
|
||||
OAuth Client Secret.
|
||||
HTTP user agent for pikpak.
|
||||
|
||||
Leave blank normally.
|
||||
Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
|
||||
Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on
|
||||
command line.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
|
||||
- Config: user_agent
|
||||
- Env Var: RCLONE_PIKPAK_USER_AGENT
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
--pikpak-token
|
||||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_PIKPAK_TOKEN
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
--pikpak-auth-url
|
||||
|
||||
Auth server URL.
|
||||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_PIKPAK_AUTH_URL
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
--pikpak-token-url
|
||||
|
||||
Token server url.
|
||||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_PIKPAK_TOKEN_URL
|
||||
- Type: string
|
||||
- Required: false
|
||||
- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
|
||||
Gecko/20100101 Firefox/129.0"
|
||||
|
||||
--pikpak-root-folder-id
|
||||
|
||||
|
@ -54265,6 +54295,75 @@ Options:
|
|||
|
||||
Changelog
|
||||
|
||||
v1.68.2 - 2024-11-15
|
||||
|
||||
See commits
|
||||
|
||||
- Security fixes
|
||||
- local backend: CVE-2024-52522: fix permission and ownership on
|
||||
symlinks with --links and --metadata (Nick Craig-Wood)
|
||||
- Only affects users using --metadata and --links and copying
|
||||
files to the local backend
|
||||
- See
|
||||
https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
|
||||
- build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
|
||||
(dependabot)
|
||||
- This is an issue in a dependency which is used for JWT
|
||||
certificates
|
||||
- See
|
||||
https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
|
||||
- Bug Fixes
|
||||
- accounting: Fix wrong message on SIGUSR2 to enable/disable
|
||||
bwlimit (Nick Craig-Wood)
|
||||
- bisync: Fix output capture restoring the wrong output for logrus
|
||||
(Dimitrios Slamaris)
|
||||
- dlna: Fix loggingResponseWriter disregarding log level (Simon
|
||||
Bos)
|
||||
- serve s3: Fix excess locking which was making serve s3 single
|
||||
threaded (Nick Craig-Wood)
|
||||
- doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy
|
||||
Bush)
|
||||
- Local
|
||||
- Fix permission and ownership on symlinks with --links and
|
||||
--metadata (Nick Craig-Wood)
|
||||
- Fix --copy-links on macOS when cloning (nielash)
|
||||
- Onedrive
|
||||
- Fix Retry-After handling to look at 503 errors also (Nick
|
||||
Craig-Wood)
|
||||
- Pikpak
|
||||
- Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
|
||||
- Fix fatal crash on startup with token that can't be refreshed
|
||||
(Nick Craig-Wood)
|
||||
- S3
|
||||
- Fix crash when using --s3-download-url after migration to SDKv2
|
||||
(Nick Craig-Wood)
|
||||
- Storj provider: fix server-side copy of files bigger than 5GB
|
||||
(Kaloyan Raev)
|
||||
- Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
|
||||
|
||||
v1.68.1 - 2024-09-24
|
||||
|
||||
See commits
|
||||
|
||||
- Bug Fixes
|
||||
- build: Fix docker release build (ttionya)
|
||||
- doc fixes (Nick Craig-Wood, Pawel Palucha)
|
||||
- fs
|
||||
- Fix --dump filters not always appearing (Nick Craig-Wood)
|
||||
- Fix setting stringArray config values from environment
|
||||
variables (Nick Craig-Wood)
|
||||
- rc: Fix default value of --metrics-addr (Nick Craig-Wood)
|
||||
- serve docker: Add missing vfs-read-chunk-streams option in
|
||||
docker volume driver (Divyam)
|
||||
- Onedrive
|
||||
- Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick
|
||||
Craig-Wood)
|
||||
- Pikpak
|
||||
- Fix login issue where token retrieval fails (wiserain)
|
||||
- S3
|
||||
- Fix rclone ignoring static credentials when env_auth=true (Nick
|
||||
Craig-Wood)
|
||||
|
||||
v1.68.0 - 2024-09-08
|
||||
|
||||
See commits
|
||||
|
|
|
@ -168,6 +168,8 @@ docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/a
|
|||
|
||||
To make a full build then set the tags correctly and add `--push`
|
||||
|
||||
Note that you can't only build one architecture - you need to build them all.
|
||||
|
||||
```
|
||||
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
|
||||
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
|
||||
```
|
||||
|
|
2 VERSION
|
@ -1 +1 @@
|
|||
v1.68.0
|
||||
v1.68.2
|
||||
|
|
|
@ -6,6 +6,7 @@ package local
|
|||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"path/filepath"
|
||||
"runtime"
|
||||
|
||||
"github.com/go-darwin/apfs"
|
||||
|
@ -22,7 +23,7 @@ import (
|
|||
//
|
||||
// If it isn't possible then return fs.ErrorCantCopy
|
||||
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
|
||||
if runtime.GOOS != "darwin" || f.opt.TranslateSymlinks || f.opt.NoClone {
|
||||
if runtime.GOOS != "darwin" || f.opt.NoClone {
|
||||
return nil, fs.ErrorCantCopy
|
||||
}
|
||||
srcObj, ok := src.(*Object)
|
||||
|
@ -30,6 +31,9 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
|
|||
fs.Debugf(src, "Can't clone - not same remote type")
|
||||
return nil, fs.ErrorCantCopy
|
||||
}
|
||||
if f.opt.TranslateSymlinks && srcObj.translatedLink { // in --links mode, use cloning only for regular files
|
||||
return nil, fs.ErrorCantCopy
|
||||
}
|
||||
|
||||
// Fetch metadata if --metadata is in use
|
||||
meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
|
||||
|
@ -44,11 +48,18 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
|
|||
return nil, err
|
||||
}
|
||||
|
||||
err = Clone(srcObj.path, f.localPath(remote))
|
||||
srcPath := srcObj.path
|
||||
if f.opt.FollowSymlinks { // in --copy-links mode, find the real file being pointed to and pass that in instead
|
||||
srcPath, err = filepath.EvalSymlinks(srcPath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
err = Clone(srcPath, f.localPath(remote))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
fs.Debugf(remote, "server-side cloned!")
|
||||
|
||||
// Set metadata if --metadata is in use
|
||||
if meta != nil {
|
||||
|
|
16 backend/local/lchmod.go Normal file
|
@ -0,0 +1,16 @@
|
|||
//go:build windows || plan9 || js || linux
|
||||
|
||||
package local
|
||||
|
||||
import "os"
|
||||
|
||||
const haveLChmod = false
|
||||
|
||||
// lChmod changes the mode of the named file to mode. If the file is a symbolic
|
||||
// link, it changes the link, not the target. If there is an error,
|
||||
// it will be of type *PathError.
|
||||
func lChmod(name string, mode os.FileMode) error {
|
||||
// Can't do this safely on this OS - chmoding a symlink always
|
||||
// changes the destination.
|
||||
return nil
|
||||
}
|
41 backend/local/lchmod_unix.go Normal file
|
@ -0,0 +1,41 @@
|
|||
//go:build !windows && !plan9 && !js && !linux
|
||||
|
||||
package local
|
||||
|
||||
import (
|
||||
"os"
|
||||
"syscall"
|
||||
|
||||
"golang.org/x/sys/unix"
|
||||
)
|
||||
|
||||
const haveLChmod = true
|
||||
|
||||
// syscallMode returns the syscall-specific mode bits from Go's portable mode bits.
|
||||
//
|
||||
// Borrowed from the syscall source since it isn't public.
|
||||
func syscallMode(i os.FileMode) (o uint32) {
|
||||
o |= uint32(i.Perm())
|
||||
if i&os.ModeSetuid != 0 {
|
||||
o |= syscall.S_ISUID
|
||||
}
|
||||
if i&os.ModeSetgid != 0 {
|
||||
o |= syscall.S_ISGID
|
||||
}
|
||||
if i&os.ModeSticky != 0 {
|
||||
o |= syscall.S_ISVTX
|
||||
}
|
||||
return o
|
||||
}
|
||||
|
||||
// lChmod changes the mode of the named file to mode. If the file is a symbolic
|
||||
// link, it changes the link, not the target. If there is an error,
|
||||
// it will be of type *PathError.
|
||||
func lChmod(name string, mode os.FileMode) error {
|
||||
// NB linux does not support AT_SYMLINK_NOFOLLOW as a parameter to fchmodat
|
||||
// and returns ENOTSUP if you try, so we don't support this on linux
|
||||
if e := unix.Fchmodat(unix.AT_FDCWD, name, syscallMode(mode), unix.AT_SYMLINK_NOFOLLOW); e != nil {
|
||||
return &os.PathError{Op: "lChmod", Path: name, Err: e}
|
||||
}
|
||||
return nil
|
||||
}
|
|
@ -1,4 +1,4 @@
|
|||
//go:build windows || plan9 || js
|
||||
//go:build plan9 || js
|
||||
|
||||
package local
|
||||
|
||||
|
|
19 backend/local/lchtimes_windows.go Normal file
|
@ -0,0 +1,19 @@
|
|||
//go:build windows
|
||||
|
||||
package local
|
||||
|
||||
import (
|
||||
"time"
|
||||
)
|
||||
|
||||
const haveLChtimes = true
|
||||
|
||||
// lChtimes changes the access and modification times of the named
|
||||
// link, similar to the Unix utime() or utimes() functions.
|
||||
//
|
||||
// The underlying filesystem may truncate or round the values to a
|
||||
// less precise time unit.
|
||||
// If there is an error, it will be of type *PathError.
|
||||
func lChtimes(name string, atime time.Time, mtime time.Time) error {
|
||||
return setTimes(name, atime, mtime, time.Time{}, true)
|
||||
}
|
|
@ -73,7 +73,6 @@ func TestUpdatingCheck(t *testing.T) {
|
|||
r.WriteFile(filePath, "content updated", time.Now())
|
||||
_, err = in.Read(buf)
|
||||
require.NoError(t, err)
|
||||
|
||||
}
|
||||
|
||||
// Test corrupted on transfer
|
||||
|
@ -224,7 +223,7 @@ func TestHashOnUpdate(t *testing.T) {
|
|||
assert.Equal(t, "9a0364b9e99bb480dd25e1f0284c8555", md5)
|
||||
|
||||
// Reupload it with different contents but same size and timestamp
|
||||
var b = bytes.NewBufferString("CONTENT")
|
||||
b := bytes.NewBufferString("CONTENT")
|
||||
src := object.NewStaticObjectInfo(filePath, when, int64(b.Len()), true, nil, f)
|
||||
err = o.Update(ctx, b, src)
|
||||
require.NoError(t, err)
|
||||
|
@ -269,22 +268,66 @@ func TestMetadata(t *testing.T) {
|
|||
r := fstest.NewRun(t)
|
||||
const filePath = "metafile.txt"
|
||||
when := time.Now()
|
||||
const dayLength = len("2001-01-01")
|
||||
whenRFC := when.Format(time.RFC3339Nano)
|
||||
r.WriteFile(filePath, "metadata file contents", when)
|
||||
f := r.Flocal.(*Fs)
|
||||
|
||||
// Set fs into "-l" / "--links" mode
|
||||
f.opt.TranslateSymlinks = true
|
||||
|
||||
// Write a symlink to the file
|
||||
symlinkPath := "metafile-link.txt"
|
||||
osSymlinkPath := filepath.Join(f.root, symlinkPath)
|
||||
symlinkPath += linkSuffix
|
||||
require.NoError(t, os.Symlink(filePath, osSymlinkPath))
|
||||
symlinkModTime := fstest.Time("2002-02-03T04:05:10.123123123Z")
|
||||
require.NoError(t, lChtimes(osSymlinkPath, symlinkModTime, symlinkModTime))
|
||||
|
||||
// Get the object
|
||||
obj, err := f.NewObject(ctx, filePath)
|
||||
require.NoError(t, err)
|
||||
o := obj.(*Object)
|
||||
|
||||
// Get the symlink object
|
||||
symlinkObj, err := f.NewObject(ctx, symlinkPath)
|
||||
require.NoError(t, err)
|
||||
symlinkO := symlinkObj.(*Object)
|
||||
|
||||
// Record metadata for o
|
||||
oMeta, err := o.Metadata(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Test symlink first to check it doesn't mess up file
|
||||
t.Run("Symlink", func(t *testing.T) {
|
||||
testMetadata(t, r, symlinkO, symlinkModTime)
|
||||
})
|
||||
|
||||
// Read it again
|
||||
oMetaNew, err := o.Metadata(ctx)
|
||||
require.NoError(t, err)
|
||||
|
||||
// Check that operating on the symlink didn't change the file it was pointing to
|
||||
// See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
|
||||
assert.Equal(t, oMeta, oMetaNew, "metadata setting on symlink messed up file")
|
||||
|
||||
// Now run the same tests on the file
|
||||
t.Run("File", func(t *testing.T) {
|
||||
testMetadata(t, r, o, when)
|
||||
})
|
||||
}
|
||||
|
||||
func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
|
||||
ctx := context.Background()
|
||||
whenRFC := when.Format(time.RFC3339Nano)
|
||||
const dayLength = len("2001-01-01")
|
||||
|
||||
f := r.Flocal.(*Fs)
|
||||
features := f.Features()
|
||||
|
||||
var hasXID, hasAtime, hasBtime bool
|
||||
var hasXID, hasAtime, hasBtime, canSetXattrOnLinks bool
|
||||
switch runtime.GOOS {
|
||||
case "darwin", "freebsd", "netbsd", "linux":
|
||||
hasXID, hasAtime, hasBtime = true, true, true
|
||||
canSetXattrOnLinks = runtime.GOOS != "linux"
|
||||
case "openbsd", "solaris":
|
||||
hasXID, hasAtime = true, true
|
||||
case "windows":
|
||||
|
@ -307,6 +350,10 @@ func TestMetadata(t *testing.T) {
|
|||
require.NoError(t, err)
|
||||
assert.Nil(t, m)
|
||||
|
||||
if !canSetXattrOnLinks && o.translatedLink {
|
||||
t.Skip("Skip remainder of test as can't set xattr on symlinks on this OS")
|
||||
}
|
||||
|
||||
inM := fs.Metadata{
|
||||
"potato": "chips",
|
||||
"cabbage": "soup",
|
||||
|
@ -321,18 +368,21 @@ func TestMetadata(t *testing.T) {
|
|||
})
|
||||
|
||||
checkTime := func(m fs.Metadata, key string, when time.Time) {
|
||||
t.Helper()
|
||||
mt, ok := o.parseMetadataTime(m, key)
|
||||
assert.True(t, ok)
|
||||
dt := mt.Sub(when)
|
||||
precision := time.Second
|
||||
assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v", key, dt, precision))
|
||||
assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v want %v got %v", key, dt, precision, mt, when))
|
||||
}
|
||||
|
||||
checkInt := func(m fs.Metadata, key string, base int) int {
|
||||
t.Helper()
|
||||
value, ok := o.parseMetadataInt(m, key, base)
|
||||
assert.True(t, ok)
|
||||
return value
|
||||
}
|
||||
|
||||
t.Run("Read", func(t *testing.T) {
|
||||
m, err := o.Metadata(ctx)
|
||||
require.NoError(t, err)
|
||||
|
@ -342,13 +392,12 @@ func TestMetadata(t *testing.T) {
|
|||
checkInt(m, "mode", 8)
|
||||
checkTime(m, "mtime", when)
|
||||
|
||||
assert.Equal(t, len(whenRFC), len(m["mtime"]))
|
||||
assert.Equal(t, whenRFC[:dayLength], m["mtime"][:dayLength])
|
||||
|
||||
if hasAtime {
|
||||
if hasAtime && !o.translatedLink { // symlinks generally don't record atime
|
||||
checkTime(m, "atime", when)
|
||||
}
|
||||
if hasBtime {
|
||||
if hasBtime && !o.translatedLink { // symlinks generally don't record btime
|
||||
checkTime(m, "btime", when)
|
||||
}
|
||||
if hasXID {
|
||||
|
@ -372,6 +421,10 @@ func TestMetadata(t *testing.T) {
|
|||
"mode": "0767",
|
||||
"potato": "wedges",
|
||||
}
|
||||
if !canSetXattrOnLinks && o.translatedLink {
|
||||
// Don't change xattr if not supported on symlinks
|
||||
delete(newM, "potato")
|
||||
}
|
||||
err := o.writeMetadata(newM)
|
||||
require.NoError(t, err)
|
||||
|
||||
|
@ -381,7 +434,11 @@ func TestMetadata(t *testing.T) {
|
|||
|
||||
mode := checkInt(m, "mode", 8)
|
||||
if runtime.GOOS != "windows" {
|
||||
assert.Equal(t, 0767, mode&0777, fmt.Sprintf("mode wrong - expecting 0767 got 0%o", mode&0777))
|
||||
expectedMode := 0767
|
||||
if o.translatedLink && runtime.GOOS == "linux" {
|
||||
expectedMode = 0777 // perms of symlinks always read as 0777 on linux
|
||||
}
|
||||
assert.Equal(t, expectedMode, mode&0777, fmt.Sprintf("mode wrong - expecting 0%o got 0%o", expectedMode, mode&0777))
|
||||
}
|
||||
|
||||
checkTime(m, "mtime", newMtime)
|
||||
|
@ -391,11 +448,10 @@ func TestMetadata(t *testing.T) {
|
|||
if haveSetBTime {
|
||||
checkTime(m, "btime", newBtime)
|
||||
}
|
||||
if xattrSupported {
|
||||
if xattrSupported && (canSetXattrOnLinks || !o.translatedLink) {
|
||||
assert.Equal(t, "wedges", m["potato"])
|
||||
}
|
||||
})
|
||||
|
||||
}
|
||||
|
||||
func TestFilter(t *testing.T) {
|
||||
|
@ -572,4 +628,35 @@ func TestCopySymlink(t *testing.T) {
|
|||
linkContents, err := os.Readlink(dstPath)
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, "file.txt", linkContents)
|
||||
|
||||
// Set fs into "-L/--copy-links" mode
|
||||
f.opt.FollowSymlinks = true
|
||||
f.opt.TranslateSymlinks = false
|
||||
f.lstat = os.Stat
|
||||
|
||||
// Create dst
|
||||
require.NoError(t, f.Mkdir(ctx, "dst2"))
|
||||
|
||||
// Do copy from src into dst
|
||||
src, err = f.NewObject(ctx, "src/link.txt")
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, src)
|
||||
dst, err = operations.Copy(ctx, f, nil, "dst2/link.txt", src)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, dst)
|
||||
|
||||
// Test that we made a NON-symlink and it has the right contents
|
||||
dstPath = filepath.Join(r.LocalName, "dst2", "link.txt")
|
||||
fi, err := os.Lstat(dstPath)
|
||||
require.NoError(t, err)
|
||||
assert.True(t, fi.Mode()&os.ModeSymlink == 0)
|
||||
want := fstest.NewItem("dst2/link.txt", "hello world", when)
|
||||
fstest.CompareItems(t, []fs.DirEntry{dst}, []fstest.Item{want}, nil, f.precision, "")
|
||||
|
||||
// Test that copying a normal file also works
|
||||
dst, err = operations.Copy(ctx, f, nil, "dst2/file.txt", dst)
|
||||
require.NoError(t, err)
|
||||
require.NotNil(t, dst)
|
||||
want = fstest.NewItem("dst2/file.txt", "hello world", when)
|
||||
fstest.CompareItems(t, []fs.DirEntry{dst}, []fstest.Item{want}, nil, f.precision, "")
|
||||
}
|
||||
|
|
|
@ -105,7 +105,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
|
|||
}
|
||||
if haveSetBTime {
|
||||
if btimeOK {
|
||||
err = setBTime(o.path, btime)
|
||||
if o.translatedLink {
|
||||
err = lsetBTime(o.path, btime)
|
||||
} else {
|
||||
err = setBTime(o.path, btime)
|
||||
}
|
||||
if err != nil {
|
||||
outErr = fmt.Errorf("failed to set birth (creation) time: %w", err)
|
||||
}
|
||||
|
@ -121,7 +125,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
|
|||
if runtime.GOOS == "windows" || runtime.GOOS == "plan9" {
|
||||
fs.Debugf(o, "Ignoring request to set ownership %o.%o on this OS", gid, uid)
|
||||
} else {
|
||||
err = os.Chown(o.path, uid, gid)
|
||||
if o.translatedLink {
|
||||
err = os.Lchown(o.path, uid, gid)
|
||||
} else {
|
||||
err = os.Chown(o.path, uid, gid)
|
||||
}
|
||||
if err != nil {
|
||||
outErr = fmt.Errorf("failed to change ownership: %w", err)
|
||||
}
|
||||
|
@ -132,7 +140,16 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
|
|||
if mode >= 0 {
|
||||
umode := uint(mode)
|
||||
if umode <= math.MaxUint32 {
|
||||
err = os.Chmod(o.path, os.FileMode(umode))
|
||||
if o.translatedLink {
|
||||
if haveLChmod {
|
||||
err = lChmod(o.path, os.FileMode(umode))
|
||||
} else {
|
||||
fs.Debugf(o, "Unable to set mode %v on a symlink on this OS", os.FileMode(umode))
|
||||
err = nil
|
||||
}
|
||||
} else {
|
||||
err = os.Chmod(o.path, os.FileMode(umode))
|
||||
}
|
||||
if err != nil {
|
||||
outErr = fmt.Errorf("failed to change permissions: %w", err)
|
||||
}
|
||||
|
|
|
@ -13,3 +13,9 @@ func setBTime(name string, btime time.Time) error {
|
|||
// Does nothing
|
||||
return nil
|
||||
}
|
||||
|
||||
// lsetBTime changes the birth time of the link passed in
|
||||
func lsetBTime(name string, btime time.Time) error {
|
||||
// Does nothing
|
||||
return nil
|
||||
}
|
||||
|
|
|
@ -9,15 +9,20 @@ import (
|
|||
|
||||
const haveSetBTime = true
|
||||
|
||||
// setBTime sets the birth time of the file passed in
|
||||
func setBTime(name string, btime time.Time) (err error) {
|
||||
// setTimes sets any of atime, mtime or btime
|
||||
// if link is set it sets a link rather than the target
|
||||
func setTimes(name string, atime, mtime, btime time.Time, link bool) (err error) {
|
||||
pathp, err := syscall.UTF16PtrFromString(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
fileFlag := uint32(syscall.FILE_FLAG_BACKUP_SEMANTICS)
|
||||
if link {
|
||||
fileFlag |= syscall.FILE_FLAG_OPEN_REPARSE_POINT
|
||||
}
|
||||
h, err := syscall.CreateFile(pathp,
|
||||
syscall.FILE_WRITE_ATTRIBUTES, syscall.FILE_SHARE_WRITE, nil,
|
||||
syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
|
||||
syscall.OPEN_EXISTING, fileFlag, 0)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -27,6 +32,28 @@ func setBTime(name string, btime time.Time) (err error) {
|
|||
err = closeErr
|
||||
}
|
||||
}()
|
||||
bFileTime := syscall.NsecToFiletime(btime.UnixNano())
|
||||
return syscall.SetFileTime(h, &bFileTime, nil, nil)
|
||||
var patime, pmtime, pbtime *syscall.Filetime
|
||||
if !atime.IsZero() {
|
||||
t := syscall.NsecToFiletime(atime.UnixNano())
|
||||
patime = &t
|
||||
}
|
||||
if !mtime.IsZero() {
|
||||
t := syscall.NsecToFiletime(mtime.UnixNano())
|
||||
pmtime = &t
|
||||
}
|
||||
if !btime.IsZero() {
|
||||
t := syscall.NsecToFiletime(btime.UnixNano())
|
||||
pbtime = &t
|
||||
}
|
||||
return syscall.SetFileTime(h, pbtime, patime, pmtime)
|
||||
}
|
||||
|
||||
// setBTime sets the birth time of the file passed in
|
||||
func setBTime(name string, btime time.Time) (err error) {
|
||||
return setTimes(name, time.Time{}, time.Time{}, btime, false)
|
||||
}
|
||||
|
||||
// lsetBTime changes the birth time of the link passed in
|
||||
func lsetBTime(name string, btime time.Time) error {
|
||||
return setTimes(name, time.Time{}, time.Time{}, btime, true)
|
||||
}
|
||||
|
|
|
@ -202,9 +202,14 @@ type SharingLinkType struct {
|
|||
type LinkType string
|
||||
|
||||
const (
|
||||
ViewLinkType LinkType = "view" // ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
|
||||
EditLinkType LinkType = "edit" // EditLinkType (role: write) An edit sharing link, allowing read-write access.
|
||||
EmbedLinkType LinkType = "embed" // EmbedLinkType (role: read) A view-only sharing link that can be used to embed content into a host webpage. Embed links are not available for OneDrive for Business or SharePoint.
|
||||
// ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
|
||||
ViewLinkType LinkType = "view"
|
||||
// EditLinkType (role: write) An edit sharing link, allowing read-write access.
|
||||
EditLinkType LinkType = "edit"
|
||||
// EmbedLinkType (role: read) A view-only sharing link that can be used to embed
|
||||
// content into a host webpage. Embed links are not available for OneDrive for
|
||||
// Business or SharePoint.
|
||||
EmbedLinkType LinkType = "embed"
|
||||
)
|
||||
|
||||
// LinkScope represents the scope of the link represented by this permission.
|
||||
|
@ -212,9 +217,12 @@ const (
|
|||
type LinkScope string
|
||||
|
||||
const (
|
||||
AnonymousScope LinkScope = "anonymous" // AnonymousScope = Anyone with the link has access, without needing to sign in. This may include people outside of your organization.
|
||||
OrganizationScope LinkScope = "organization" // OrganizationScope = Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
|
||||
|
||||
// AnonymousScope = Anyone with the link has access, without needing to sign in.
|
||||
// This may include people outside of your organization.
|
||||
AnonymousScope LinkScope = "anonymous"
|
||||
// OrganizationScope = Anyone signed into your organization (tenant) can use the
|
||||
// link to get access. Only available in OneDrive for Business and SharePoint.
|
||||
OrganizationScope LinkScope = "organization"
|
||||
)
|
||||
|
||||
// PermissionsType provides information about a sharing permission granted for a DriveItem resource.
|
||||
|
@ -236,10 +244,14 @@ type PermissionsType struct {
|
|||
type Role string
|
||||
|
||||
const (
|
||||
ReadRole Role = "read" // ReadRole provides the ability to read the metadata and contents of the item.
|
||||
WriteRole Role = "write" // WriteRole provides the ability to read and modify the metadata and contents of the item.
|
||||
OwnerRole Role = "owner" // OwnerRole represents the owner role for SharePoint and OneDrive for Business.
|
||||
MemberRole Role = "member" // MemberRole represents the member role for SharePoint and OneDrive for Business.
|
||||
// ReadRole provides the ability to read the metadata and contents of the item.
|
||||
ReadRole Role = "read"
|
||||
// WriteRole provides the ability to read and modify the metadata and contents of the item.
|
||||
WriteRole Role = "write"
|
||||
// OwnerRole represents the owner role for SharePoint and OneDrive for Business.
|
||||
OwnerRole Role = "owner"
|
||||
// MemberRole represents the member role for SharePoint and OneDrive for Business.
|
||||
MemberRole Role = "member"
|
||||
)
|
||||
|
||||
// PermissionsResponse is the response to the list permissions method
|
||||
|
|
|
@ -827,7 +827,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
|
|||
retry = true
|
||||
fs.Debugf(nil, "HTTP 401: Unable to initialize RPS. Trying again.")
|
||||
}
|
||||
case 429: // Too Many Requests.
|
||||
case 429, 503: // Too Many Requests, Server Too Busy
|
||||
// see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
|
||||
if values := resp.Header["Retry-After"]; len(values) == 1 && values[0] != "" {
|
||||
retryAfter, parseErr := strconv.Atoi(values[0])
|
||||
|
@ -942,7 +942,8 @@ func errorHandler(resp *http.Response) error {
|
|||
// Decode error response
|
||||
errResponse := new(api.Error)
|
||||
err := rest.DecodeJSON(resp, &errResponse)
|
||||
if err != nil {
|
||||
// Redirects have no body so don't report an error
|
||||
if err != nil && resp.Header.Get("Location") == "" {
|
||||
fs.Debugf(nil, "Couldn't decode error response: %v", err)
|
||||
}
|
||||
if errResponse.ErrorInfo.Code == "" {
|
||||
|
|
|
@ -513,6 +513,72 @@ type RequestDecompress struct {
|
|||
DefaultParent bool `json:"default_parent,omitempty"`
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------ authorization
|
||||
|
||||
// CaptchaToken is a response to requestCaptchaToken api call
|
||||
type CaptchaToken struct {
|
||||
CaptchaToken string `json:"captcha_token"`
|
||||
ExpiresIn int64 `json:"expires_in"` // currently 300s
|
||||
// API doesn't provide Expiry field and thus it should be populated from ExpiresIn on retrieval
|
||||
Expiry time.Time `json:"expiry,omitempty"`
|
||||
URL string `json:"url,omitempty"` // a link for users to solve captcha
|
||||
}
|
||||
|
||||
// expired reports whether the token is expired.
|
||||
// t must be non-nil.
|
||||
func (t *CaptchaToken) expired() bool {
|
||||
if t.Expiry.IsZero() {
|
||||
return false
|
||||
}
|
||||
|
||||
expiryDelta := time.Duration(10) * time.Second // same as oauth2's defaultExpiryDelta
|
||||
return t.Expiry.Round(0).Add(-expiryDelta).Before(time.Now())
|
||||
}
|
||||
|
||||
// Valid reports whether t is non-nil, has an AccessToken, and is not expired.
|
||||
func (t *CaptchaToken) Valid() bool {
|
||||
return t != nil && t.CaptchaToken != "" && !t.expired()
|
||||
}
|
||||
|
||||
// CaptchaTokenRequest is to request for captcha token
|
||||
type CaptchaTokenRequest struct {
|
||||
Action string `json:"action,omitempty"`
|
||||
CaptchaToken string `json:"captcha_token,omitempty"`
|
||||
ClientID string `json:"client_id,omitempty"`
|
||||
DeviceID string `json:"device_id,omitempty"`
|
||||
Meta *CaptchaTokenMeta `json:"meta,omitempty"`
|
||||
}
|
||||
|
||||
// CaptchaTokenMeta contains meta info for CaptchaTokenRequest
|
||||
type CaptchaTokenMeta struct {
|
||||
CaptchaSign string `json:"captcha_sign,omitempty"`
|
||||
ClientVersion string `json:"client_version,omitempty"`
|
||||
PackageName string `json:"package_name,omitempty"`
|
||||
Timestamp string `json:"timestamp,omitempty"`
|
||||
UserID string `json:"user_id,omitempty"` // webdrive uses this instead of UserName
|
||||
UserName string `json:"username,omitempty"`
|
||||
Email string `json:"email,omitempty"`
|
||||
PhoneNumber string `json:"phone_number,omitempty"`
|
||||
}
|
||||
|
||||
// Token represents oauth2 token used for pikpak which needs to be converted to be compatible with oauth2.Token
|
||||
type Token struct {
|
||||
TokenType string `json:"token_type"`
|
||||
AccessToken string `json:"access_token"`
|
||||
RefreshToken string `json:"refresh_token"`
|
||||
ExpiresIn int `json:"expires_in"`
|
||||
Sub string `json:"sub"`
|
||||
}
|
||||
|
||||
// Expiry returns the expiry time computed from ExpiresIn, so it should be called on retrieval
|
||||
// e must be non-nil.
|
||||
func (e *Token) Expiry() (t time.Time) {
|
||||
if v := e.ExpiresIn; v != 0 {
|
||||
return time.Now().Add(time.Duration(v) * time.Second)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------
|
||||
|
||||
// NOT implemented YET
|
||||
|
|
|
@ -3,8 +3,10 @@ package pikpak
|
|||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"crypto/md5"
|
||||
"crypto/sha1"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
|
@ -14,10 +16,13 @@ import (
|
|||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/rclone/rclone/backend/pikpak/api"
|
||||
"github.com/rclone/rclone/fs"
|
||||
"github.com/rclone/rclone/fs/config/configmap"
|
||||
"github.com/rclone/rclone/fs/fserrors"
|
||||
"github.com/rclone/rclone/lib/rest"
|
||||
)
|
||||
|
||||
|
@ -262,15 +267,20 @@ func (f *Fs) getGcid(ctx context.Context, src fs.ObjectInfo) (gcid string, err e
|
|||
if err != nil {
|
||||
return
|
||||
}
|
||||
if src.Size() == 0 {
|
||||
// If src is zero-length, the API will return
|
||||
// Error "cid and file_size is required" (400)
|
||||
// In this case, we can simply return cid == gcid
|
||||
return cid, nil
|
||||
}
|
||||
|
||||
params := url.Values{}
|
||||
params.Set("cid", cid)
|
||||
params.Set("file_size", strconv.FormatInt(src.Size(), 10))
|
||||
opts := rest.Opts{
|
||||
Method: "GET",
|
||||
Path: "/drive/v1/resource/cid",
|
||||
Parameters: params,
|
||||
ExtraHeaders: map[string]string{"x-device-id": f.deviceID},
|
||||
Method: "GET",
|
||||
Path: "/drive/v1/resource/cid",
|
||||
Parameters: params,
|
||||
}
|
||||
|
||||
info := struct {
|
||||
|
@ -368,11 +378,23 @@ func calcGcid(r io.Reader, size int64) (string, error) {
|
|||
return hex.EncodeToString(totalHash.Sum(nil)), nil
|
||||
}
|
||||
|
||||
// unWrapObjectInfo returns the underlying Object unwrapped as much as
|
||||
// possible or nil even if it is an OverrideRemote
|
||||
func unWrapObjectInfo(oi fs.ObjectInfo) fs.Object {
|
||||
if o, ok := oi.(fs.Object); ok {
|
||||
return fs.UnWrapObject(o)
|
||||
} else if do, ok := oi.(*fs.OverrideRemote); ok {
|
||||
// Unwrap if it is an operations.OverrideRemote
|
||||
return do.UnWrap()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// calcCid calculates Cid from source
|
||||
//
|
||||
// Cid is a simplified version of Gcid
|
||||
func calcCid(ctx context.Context, src fs.ObjectInfo) (cid string, err error) {
|
||||
srcObj := fs.UnWrapObjectInfo(src)
|
||||
srcObj := unWrapObjectInfo(src)
|
||||
if srcObj == nil {
|
||||
return "", fmt.Errorf("failed to unwrap object from src: %s", src)
|
||||
}
|
||||
|
@ -408,6 +430,8 @@ func calcCid(ctx context.Context, src fs.ObjectInfo) (cid string, err error) {
|
|||
return
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------ authorization
|
||||
|
||||
// randomly generates device id used for request header 'x-device-id'
|
||||
//
|
||||
// original javascript implementation
|
||||
|
@ -428,3 +452,206 @@ func genDeviceID() string {
|
|||
}
|
||||
return string(base)
|
||||
}
|
||||
|
||||
var md5Salt = []string{
|
||||
"C9qPpZLN8ucRTaTiUMWYS9cQvWOE",
|
||||
"+r6CQVxjzJV6LCV",
|
||||
"F",
|
||||
"pFJRC",
|
||||
"9WXYIDGrwTCz2OiVlgZa90qpECPD6olt",
|
||||
"/750aCr4lm/Sly/c",
|
||||
"RB+DT/gZCrbV",
|
||||
"",
|
||||
"CyLsf7hdkIRxRm215hl",
|
||||
"7xHvLi2tOYP0Y92b",
|
||||
"ZGTXXxu8E/MIWaEDB+Sm/",
|
||||
"1UI3",
|
||||
"E7fP5Pfijd+7K+t6Tg/NhuLq0eEUVChpJSkrKxpO",
|
||||
"ihtqpG6FMt65+Xk+tWUH2",
|
||||
"NhXXU9rg4XXdzo7u5o",
|
||||
}
|
||||
|
||||
func md5Sum(text string) string {
|
||||
hash := md5.Sum([]byte(text))
|
||||
return hex.EncodeToString(hash[:])
|
||||
}
|
||||
|
||||
func calcCaptchaSign(deviceID string) (timestamp, sign string) {
|
||||
timestamp = fmt.Sprint(time.Now().UnixMilli())
|
||||
str := fmt.Sprint(clientID, clientVersion, packageName, deviceID, timestamp)
|
||||
for _, salt := range md5Salt {
|
||||
str = md5Sum(str + salt)
|
||||
}
|
||||
sign = "1." + str
|
||||
return
|
||||
}
|
||||
|
||||
func newCaptchaTokenRequest(action, oldToken string, opt *Options) (req *api.CaptchaTokenRequest) {
|
||||
req = &api.CaptchaTokenRequest{
|
||||
Action: action,
|
||||
CaptchaToken: oldToken, // can be empty initially
|
||||
ClientID: clientID,
|
||||
DeviceID: opt.DeviceID,
|
||||
Meta: new(api.CaptchaTokenMeta),
|
||||
}
|
||||
switch action {
|
||||
case "POST:/v1/auth/signin":
|
||||
req.Meta.UserName = opt.Username
|
||||
default:
|
||||
timestamp, captchaSign := calcCaptchaSign(opt.DeviceID)
|
||||
req.Meta.CaptchaSign = captchaSign
|
||||
req.Meta.Timestamp = timestamp
|
||||
req.Meta.ClientVersion = clientVersion
|
||||
req.Meta.PackageName = packageName
|
||||
req.Meta.UserID = opt.UserID
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// CaptchaTokenSource stores updated captcha tokens in the config file
|
||||
type CaptchaTokenSource struct {
|
||||
mu sync.Mutex
|
||||
m configmap.Mapper
|
||||
opt *Options
|
||||
token *api.CaptchaToken
|
||||
ctx context.Context
|
||||
rst *pikpakClient
|
||||
}
|
||||
|
||||
// initialize CaptchaTokenSource from rclone.conf if possible
|
||||
func newCaptchaTokenSource(ctx context.Context, opt *Options, m configmap.Mapper) *CaptchaTokenSource {
|
||||
token := new(api.CaptchaToken)
|
||||
tokenString, ok := m.Get("captcha_token")
|
||||
if !ok || tokenString == "" {
|
||||
fs.Debugf(nil, "failed to read captcha token out of config file")
|
||||
} else {
|
||||
if err := json.Unmarshal([]byte(tokenString), token); err != nil {
|
||||
fs.Debugf(nil, "failed to parse captcha token out of config file: %v", err)
|
||||
}
|
||||
}
|
||||
return &CaptchaTokenSource{
|
||||
m: m,
|
||||
opt: opt,
|
||||
token: token,
|
||||
ctx: ctx,
|
||||
rst: newPikpakClient(getClient(ctx, opt), opt),
|
||||
}
|
||||
}
|
||||
|
||||
// requestToken retrieves captcha token from API
|
||||
func (cts *CaptchaTokenSource) requestToken(ctx context.Context, req *api.CaptchaTokenRequest) (err error) {
|
||||
opts := rest.Opts{
|
||||
Method: "POST",
|
||||
RootURL: "https://user.mypikpak.com/v1/shield/captcha/init",
|
||||
}
|
||||
var info *api.CaptchaToken
|
||||
_, err = cts.rst.CallJSON(ctx, &opts, &req, &info)
|
||||
if err == nil && info.ExpiresIn != 0 {
|
||||
// populate to Expiry
|
||||
info.Expiry = time.Now().Add(time.Duration(info.ExpiresIn) * time.Second)
|
||||
cts.token = info // update with a new one
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (cts *CaptchaTokenSource) refreshToken(opts *rest.Opts) (string, error) {
|
||||
oldToken := ""
|
||||
if cts.token != nil {
|
||||
oldToken = cts.token.CaptchaToken
|
||||
}
|
||||
action := "GET:/drive/v1/about"
|
||||
if opts.RootURL == "" && opts.Path != "" {
|
||||
action = fmt.Sprintf("%s:%s", opts.Method, opts.Path)
|
||||
} else if u, err := url.Parse(opts.RootURL); err == nil {
|
||||
action = fmt.Sprintf("%s:%s", opts.Method, u.Path)
|
||||
}
|
||||
req := newCaptchaTokenRequest(action, oldToken, cts.opt)
|
||||
if err := cts.requestToken(cts.ctx, req); err != nil {
|
||||
return "", fmt.Errorf("failed to retrieve captcha token from api: %w", err)
|
||||
}
|
||||
|
||||
// put it into rclone.conf
|
||||
tokenBytes, err := json.Marshal(cts.token)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to marshal captcha token: %w", err)
|
||||
}
|
||||
cts.m.Set("captcha_token", string(tokenBytes))
|
||||
return cts.token.CaptchaToken, nil
|
||||
}
|
||||
|
||||
// Invalidate resets existing captcha token for a forced refresh
|
||||
func (cts *CaptchaTokenSource) Invalidate() {
|
||||
cts.mu.Lock()
|
||||
cts.token.CaptchaToken = ""
|
||||
cts.mu.Unlock()
|
||||
}
|
||||
|
||||
// Token returns a valid captcha token
|
||||
func (cts *CaptchaTokenSource) Token(opts *rest.Opts) (string, error) {
|
||||
cts.mu.Lock()
|
||||
defer cts.mu.Unlock()
|
||||
if cts.token.Valid() {
|
||||
return cts.token.CaptchaToken, nil
|
||||
}
|
||||
return cts.refreshToken(opts)
|
||||
}
|
||||
|
||||
// pikpakClient wraps rest.Client with a handle of captcha token
|
||||
type pikpakClient struct {
|
||||
opt *Options
|
||||
client *rest.Client
|
||||
captcha *CaptchaTokenSource
|
||||
}
|
||||
|
||||
// newPikpakClient takes an (oauth) http.Client and makes a new api instance for pikpak with
|
||||
// * error handler
|
||||
// * root url
|
||||
// * default headers
|
||||
func newPikpakClient(c *http.Client, opt *Options) *pikpakClient {
|
||||
client := rest.NewClient(c).SetErrorHandler(errorHandler).SetRoot(rootURL)
|
||||
for key, val := range map[string]string{
|
||||
"Referer": "https://mypikpak.com/",
|
||||
"x-client-id": clientID,
|
||||
"x-client-version": clientVersion,
|
||||
"x-device-id": opt.DeviceID,
|
||||
// "x-device-model": "firefox%2F129.0",
|
||||
// "x-device-name": "PC-Firefox",
|
||||
// "x-device-sign": fmt.Sprintf("wdi10.%sxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", opt.DeviceID),
|
||||
// "x-net-work-type": "NONE",
|
||||
// "x-os-version": "Win32",
|
||||
// "x-platform-version": "1",
|
||||
// "x-protocol-version": "301",
|
||||
// "x-provider-name": "NONE",
|
||||
// "x-sdk-version": "8.0.3",
|
||||
} {
|
||||
client.SetHeader(key, val)
|
||||
}
|
||||
return &pikpakClient{
|
||||
client: client,
|
||||
opt: opt,
|
||||
}
|
||||
}
|
||||
|
||||
// This should be called right after pikpakClient is initialized
|
||||
func (c *pikpakClient) SetCaptchaTokener(ctx context.Context, m configmap.Mapper) *pikpakClient {
|
||||
c.captcha = newCaptchaTokenSource(ctx, c.opt, m)
|
||||
return c
|
||||
}
|
||||
|
||||
func (c *pikpakClient) CallJSON(ctx context.Context, opts *rest.Opts, request interface{}, response interface{}) (resp *http.Response, err error) {
|
||||
if c.captcha != nil {
|
||||
token, err := c.captcha.Token(opts)
|
||||
if err != nil || token == "" {
|
||||
return nil, fserrors.FatalError(fmt.Errorf("couldn't get captcha token: %v", err))
|
||||
}
|
||||
if opts.ExtraHeaders == nil {
|
||||
opts.ExtraHeaders = make(map[string]string)
|
||||
}
|
||||
opts.ExtraHeaders["x-captcha-token"] = token
|
||||
}
|
||||
return c.client.CallJSON(ctx, opts, request, response)
|
||||
}
|
||||
|
||||
func (c *pikpakClient) Call(ctx context.Context, opts *rest.Opts) (resp *http.Response, err error) {
|
||||
return c.client.Call(ctx, opts)
|
||||
}
|
||||
|
|
|
@ -23,6 +23,7 @@ package pikpak
|
|||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
|
@ -51,6 +52,7 @@ import (
|
|||
"github.com/rclone/rclone/fs/config/configstruct"
|
||||
"github.com/rclone/rclone/fs/config/obscure"
|
||||
"github.com/rclone/rclone/fs/fserrors"
|
||||
"github.com/rclone/rclone/fs/fshttp"
|
||||
"github.com/rclone/rclone/fs/hash"
|
||||
"github.com/rclone/rclone/lib/atexit"
|
||||
"github.com/rclone/rclone/lib/dircache"
|
||||
|
@ -64,15 +66,17 @@ import (
|
|||
|
||||
// Constants
|
||||
const (
|
||||
rcloneClientID = "YNxT9w7GMdWvEOKa"
|
||||
rcloneEncryptedClientSecret = "aqrmB6M1YJ1DWCBxVxFSjFo7wzWEky494YMmkqgAl1do1WKOe2E"
|
||||
minSleep = 100 * time.Millisecond
|
||||
maxSleep = 2 * time.Second
|
||||
taskWaitTime = 500 * time.Millisecond
|
||||
decayConstant = 2 // bigger for slower decay, exponential
|
||||
rootURL = "https://api-drive.mypikpak.com"
|
||||
minChunkSize = fs.SizeSuffix(manager.MinUploadPartSize)
|
||||
defaultUploadConcurrency = manager.DefaultUploadConcurrency
|
||||
clientID = "YUMx5nI8ZU8Ap8pm"
|
||||
clientVersion = "2.0.0"
|
||||
packageName = "mypikpak.com"
|
||||
defaultUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
|
||||
minSleep = 100 * time.Millisecond
|
||||
maxSleep = 2 * time.Second
|
||||
taskWaitTime = 500 * time.Millisecond
|
||||
decayConstant = 2 // bigger for slower decay, exponential
|
||||
rootURL = "https://api-drive.mypikpak.com"
|
||||
minChunkSize = fs.SizeSuffix(manager.MinUploadPartSize)
|
||||
defaultUploadConcurrency = manager.DefaultUploadConcurrency
|
||||
)
|
||||
|
||||
// Globals
|
||||
|
@ -85,43 +89,53 @@ var (
|
|||
TokenURL: "https://user.mypikpak.com/v1/auth/token",
|
||||
AuthStyle: oauth2.AuthStyleInParams,
|
||||
},
|
||||
ClientID: rcloneClientID,
|
||||
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
|
||||
RedirectURL: oauthutil.RedirectURL,
|
||||
ClientID: clientID,
|
||||
RedirectURL: oauthutil.RedirectURL,
|
||||
}
|
||||
)
|
||||
|
||||
// Returns OAuthOptions modified for pikpak
|
||||
func pikpakOAuthOptions() []fs.Option {
|
||||
opts := []fs.Option{}
|
||||
for _, opt := range oauthutil.SharedOptions {
|
||||
if opt.Name == config.ConfigClientID {
|
||||
opt.Advanced = true
|
||||
} else if opt.Name == config.ConfigClientSecret {
|
||||
opt.Advanced = true
|
||||
}
|
||||
opts = append(opts, opt)
|
||||
}
|
||||
return opts
|
||||
}
|
||||
|
||||
// pikpakAuthorize retrieves an OAuth token using user/pass and saves it to rclone.conf
|
||||
func pikpakAuthorize(ctx context.Context, opt *Options, name string, m configmap.Mapper) error {
|
||||
// override default client id/secret
|
||||
if id, ok := m.Get("client_id"); ok && id != "" {
|
||||
oauthConfig.ClientID = id
|
||||
}
|
||||
if secret, ok := m.Get("client_secret"); ok && secret != "" {
|
||||
oauthConfig.ClientSecret = secret
|
||||
if opt.Username == "" {
|
||||
return errors.New("no username")
|
||||
}
|
||||
pass, err := obscure.Reveal(opt.Password)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to decode password - did you obscure it?: %w", err)
|
||||
}
|
||||
t, err := oauthConfig.PasswordCredentialsToken(ctx, opt.Username, pass)
|
||||
// new device id if necessary
|
||||
if len(opt.DeviceID) != 32 {
|
||||
opt.DeviceID = genDeviceID()
|
||||
m.Set("device_id", opt.DeviceID)
|
||||
fs.Infof(nil, "Using new device id %q", opt.DeviceID)
|
||||
}
|
||||
opts := rest.Opts{
|
||||
Method: "POST",
|
||||
RootURL: "https://user.mypikpak.com/v1/auth/signin",
|
||||
}
|
||||
req := map[string]string{
|
||||
"username": opt.Username,
|
||||
"password": pass,
|
||||
"client_id": clientID,
|
||||
}
|
||||
var token api.Token
|
||||
rst := newPikpakClient(getClient(ctx, opt), opt).SetCaptchaTokener(ctx, m)
|
||||
_, err = rst.CallJSON(ctx, &opts, req, &token)
|
||||
if apiErr, ok := err.(*api.Error); ok {
|
||||
if apiErr.Reason == "captcha_invalid" && apiErr.Code == 4002 {
|
||||
rst.captcha.Invalidate()
|
||||
_, err = rst.CallJSON(ctx, &opts, req, &token)
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to retrieve token using username/password: %w", err)
|
||||
}
|
||||
t := &oauth2.Token{
|
||||
AccessToken: token.AccessToken,
|
||||
TokenType: token.TokenType,
|
||||
RefreshToken: token.RefreshToken,
|
||||
Expiry: token.Expiry(),
|
||||
}
|
||||
return oauthutil.PutToken(name, m, t, false)
|
||||
}
|
||||
|
||||
|
@ -160,7 +174,7 @@ func init() {
|
|||
}
|
||||
return nil, fmt.Errorf("unknown state %q", config.State)
|
||||
},
|
||||
Options: append(pikpakOAuthOptions(), []fs.Option{{
|
||||
Options: []fs.Option{{
|
||||
Name: "user",
|
||||
Help: "Pikpak username.",
|
||||
Required: true,
|
||||
|
@ -170,6 +184,18 @@ func init() {
|
|||
Help: "Pikpak password.",
|
||||
Required: true,
|
||||
IsPassword: true,
|
||||
}, {
|
||||
Name: "device_id",
|
||||
Help: "Device ID used for authorization.",
|
||||
Advanced: true,
|
||||
Sensitive: true,
|
||||
}, {
|
||||
Name: "user_agent",
|
||||
Default: defaultUserAgent,
|
||||
Advanced: true,
|
||||
Help: fmt.Sprintf(`HTTP user agent for pikpak.
|
||||
|
||||
Defaults to "%s" or "--pikpak-user-agent" provided on command line.`, defaultUserAgent),
|
||||
}, {
|
||||
Name: "root_folder_id",
|
||||
Help: `ID of the root folder.
|
||||
|
@ -248,7 +274,7 @@ this may help to speed up the transfers.`,
|
|||
encoder.EncodeRightSpace |
|
||||
encoder.EncodeRightPeriod |
|
||||
encoder.EncodeInvalidUtf8),
|
||||
}}...),
|
||||
}},
|
||||
})
|
||||
}
|
||||
|
||||
|
@ -256,6 +282,9 @@ this may help to speed up the transfers.`,
|
|||
type Options struct {
|
||||
Username string `config:"user"`
|
||||
Password string `config:"pass"`
|
||||
UserID string `config:"user_id"` // only available during runtime
|
||||
DeviceID string `config:"device_id"`
|
||||
UserAgent string `config:"user_agent"`
|
||||
RootFolderID string `config:"root_folder_id"`
|
||||
UseTrash bool `config:"use_trash"`
|
||||
TrashedOnly bool `config:"trashed_only"`
|
||||
|
@ -271,11 +300,10 @@ type Fs struct {
|
|||
root string // the path we are working on
|
||||
opt Options // parsed options
|
||||
features *fs.Features // optional features
|
||||
rst *rest.Client // the connection to the server
|
||||
rst *pikpakClient // the connection to the server
|
||||
dirCache *dircache.DirCache // Map of directory path to directory id
|
||||
pacer *fs.Pacer // pacer for API calls
|
||||
rootFolderID string // the id of the root folder
|
||||
deviceID string // device id used for api requests
|
||||
client *http.Client // authorized client
|
||||
m configmap.Mapper
|
||||
tokenMu *sync.Mutex // when renewing tokens
|
||||
|
@ -429,6 +457,12 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
|
|||
} else if apiErr.Reason == "file_space_not_enough" {
|
||||
// "file_space_not_enough" (8): Storage space is not enough
|
||||
return false, fserrors.FatalError(err)
|
||||
} else if apiErr.Reason == "captcha_invalid" && apiErr.Code == 9 {
|
||||
// "captcha_invalid" (9): Verification code is invalid
|
||||
// This error occurred on the POST:/drive/v1/files endpoint
|
||||
// when a zero-byte file was uploaded with an invalid captcha token
|
||||
f.rst.captcha.Invalidate()
|
||||
return true, err
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -452,13 +486,36 @@ func errorHandler(resp *http.Response) error {
|
|||
return errResponse
|
||||
}
|
||||
|
||||
// getClient makes an http client according to the options
|
||||
func getClient(ctx context.Context, opt *Options) *http.Client {
|
||||
// Override few config settings and create a client
|
||||
newCtx, ci := fs.AddConfig(ctx)
|
||||
ci.UserAgent = opt.UserAgent
|
||||
return fshttp.NewClient(newCtx)
|
||||
}
|
||||
|
||||
// newClientWithPacer sets a new http/rest client with a pacer to Fs
|
||||
func (f *Fs) newClientWithPacer(ctx context.Context) (err error) {
|
||||
f.client, _, err = oauthutil.NewClient(ctx, f.name, f.m, oauthConfig)
|
||||
var ts *oauthutil.TokenSource
|
||||
f.client, ts, err = oauthutil.NewClientWithBaseClient(ctx, f.name, f.m, oauthConfig, getClient(ctx, &f.opt))
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to create oauth client: %w", err)
|
||||
}
|
||||
f.rst = rest.NewClient(f.client).SetRoot(rootURL).SetErrorHandler(errorHandler)
|
||||
token, err := ts.Token()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// parse user_id from oauth access token for later use
|
||||
if parts := strings.Split(token.AccessToken, "."); len(parts) > 1 {
|
||||
jsonStr, _ := base64.URLEncoding.DecodeString(parts[1] + "===")
|
||||
info := struct {
|
||||
UserID string `json:"sub,omitempty"`
|
||||
}{}
|
||||
if jsonErr := json.Unmarshal(jsonStr, &info); jsonErr == nil {
|
||||
f.opt.UserID = info.UserID
|
||||
}
|
||||
}
|
||||
f.rst = newPikpakClient(f.client, &f.opt).SetCaptchaTokener(ctx, f.m)
|
||||
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
|
||||
return nil
|
||||
}
|
||||
|
@ -491,9 +548,19 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
|
|||
CanHaveEmptyDirectories: true, // can have empty directories
|
||||
NoMultiThreading: true, // can't have multiple threads downloading
|
||||
}).Fill(ctx, f)
|
||||
f.deviceID = genDeviceID()
|
||||
|
||||
// new device id if necessary
|
||||
if len(f.opt.DeviceID) != 32 {
|
||||
f.opt.DeviceID = genDeviceID()
|
||||
m.Set("device_id", f.opt.DeviceID)
|
||||
fs.Infof(nil, "Using new device id %q", f.opt.DeviceID)
|
||||
}
|
||||
|
||||
if err := f.newClientWithPacer(ctx); err != nil {
|
||||
// re-authorize if necessary
|
||||
if strings.Contains(err.Error(), "invalid_grant") {
|
||||
return f, f.reAuthorize(ctx)
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
|
||||
|
@ -1707,7 +1774,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
|
|||
gcid, err := o.fs.getGcid(ctx, src)
|
||||
if err != nil || gcid == "" {
|
||||
fs.Debugf(o, "calculating gcid: %v", err)
|
||||
if srcObj := fs.UnWrapObjectInfo(src); srcObj != nil && srcObj.Fs().Features().IsLocal {
|
||||
if srcObj := unWrapObjectInfo(src); srcObj != nil && srcObj.Fs().Features().IsLocal {
|
||||
// No buffering; directly calculate gcid from source
|
||||
rc, err := srcObj.Open(ctx)
|
||||
if err != nil {
|
||||
|
|
|
@ -3052,9 +3052,16 @@ func (s3logger) Logf(classification logging.Classification, format string, v ...
|
|||
func s3Connection(ctx context.Context, opt *Options, client *http.Client) (s3Client *s3.Client, err error) {
|
||||
ci := fs.GetConfig(ctx)
|
||||
var awsConfig aws.Config
|
||||
// Make the default static auth
|
||||
v := aws.Credentials{
|
||||
AccessKeyID: opt.AccessKeyID,
|
||||
SecretAccessKey: opt.SecretAccessKey,
|
||||
SessionToken: opt.SessionToken,
|
||||
}
|
||||
awsConfig.Credentials = &credentials.StaticCredentialsProvider{Value: v}
|
||||
|
||||
// Try to fill in the config from the environment if env_auth=true
|
||||
if opt.EnvAuth {
|
||||
if opt.EnvAuth && opt.AccessKeyID == "" && opt.SecretAccessKey == "" {
|
||||
configOpts := []func(*awsconfig.LoadOptions) error{}
|
||||
// Set the name of the profile if supplied
|
||||
if opt.Profile != "" {
|
||||
|
@ -3079,13 +3086,7 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (s3Cli
|
|||
case opt.SecretAccessKey == "":
|
||||
return nil, errors.New("secret_access_key not found")
|
||||
default:
|
||||
// Make the static auth
|
||||
v := aws.Credentials{
|
||||
AccessKeyID: opt.AccessKeyID,
|
||||
SecretAccessKey: opt.SecretAccessKey,
|
||||
SessionToken: opt.SessionToken,
|
||||
}
|
||||
awsConfig.Credentials = &credentials.StaticCredentialsProvider{Value: v}
|
||||
// static credentials are already set
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -3367,6 +3368,10 @@ func setQuirks(opt *Options) {
|
|||
opt.ChunkSize = 64 * fs.Mebi
|
||||
}
|
||||
useAlreadyExists = false // returns BucketAlreadyExists
|
||||
// Storj doesn't support multi-part server side copy:
|
||||
// https://github.com/storj/roadmap/issues/40
|
||||
// So make cutoff very large which it does support
|
||||
opt.CopyCutoff = math.MaxInt64
|
||||
case "Synology":
|
||||
useMultipartEtag = false
|
||||
useAlreadyExists = false // untested
|
||||
|
@ -5745,7 +5750,7 @@ func (o *Object) downloadFromURL(ctx context.Context, bucketPath string, options
|
|||
ContentEncoding: header("Content-Encoding"),
|
||||
ContentLanguage: header("Content-Language"),
|
||||
ContentType: header("Content-Type"),
|
||||
StorageClass: types.StorageClass(*header("X-Amz-Storage-Class")),
|
||||
StorageClass: types.StorageClass(deref(header("X-Amz-Storage-Class"))),
|
||||
}
|
||||
o.setMetaData(&head)
|
||||
return resp.Body, err
|
||||
|
@ -5939,8 +5944,8 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
|
|||
chunkSize: int64(chunkSize),
|
||||
size: size,
|
||||
f: f,
|
||||
bucket: mOut.Bucket,
|
||||
key: mOut.Key,
|
||||
bucket: ui.req.Bucket,
|
||||
key: ui.req.Key,
|
||||
uploadID: mOut.UploadId,
|
||||
multiPartUploadInput: &mReq,
|
||||
completedParts: make([]types.CompletedPart, 0),
|
||||
|
|
|
@ -5,20 +5,13 @@ import (
|
|||
"bytes"
|
||||
"log"
|
||||
|
||||
"github.com/rclone/rclone/fs"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
// CaptureOutput runs a function capturing its output.
|
||||
func CaptureOutput(fun func()) []byte {
|
||||
logSave := log.Writer()
|
||||
logrusSave := logrus.StandardLogger().Writer()
|
||||
defer func() {
|
||||
err := logrusSave.Close()
|
||||
if err != nil {
|
||||
fs.Errorf(nil, "error closing logrusSave: %v", err)
|
||||
}
|
||||
}()
|
||||
logrusSave := logrus.StandardLogger().Out
|
||||
buf := &bytes.Buffer{}
|
||||
log.SetOutput(buf)
|
||||
logrus.SetOutput(buf)
|
||||
|
|
|
@ -66,7 +66,8 @@ func quotePath(path string) string {
|
|||
return escapePath(path, true)
|
||||
}
|
||||
|
||||
var Colors bool // Colors controls whether terminal colors are enabled
|
||||
// Colors controls whether terminal colors are enabled
|
||||
var Colors bool
|
||||
|
||||
// Color handles terminal colors for bisync
|
||||
func Color(style string, s string) string {
|
||||
|
|
|
@ -42,7 +42,9 @@ When running in background mode the user will have to stop the mount manually:
|
|||
|
||||
# Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount
|
||||
|
||||
The umount operation can fail, for example when the mountpoint is busy.
|
||||
|
@ -386,9 +388,9 @@ Note that systemd runs mount units without any environment variables including
|
|||
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
|
||||
and you should provide `--config` and `--cache-dir` explicitly as absolute
|
||||
paths via rclone arguments.
|
||||
Since mounting requires the `fusermount` program, rclone will use the fallback
|
||||
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
|
||||
is present on this PATH.
|
||||
Since mounting requires the `fusermount` or `fusermount3` program,
|
||||
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
|
||||
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
|
||||
|
||||
### Rclone as Unix mount helper
|
||||
|
||||
|
|
|
@ -107,7 +107,7 @@ func (lrw *loggingResponseWriter) logRequest(code int, err interface{}) {
|
|||
err = ""
|
||||
}
|
||||
|
||||
fs.LogPrintf(level, lrw.request.URL, "%s %s %d %s %s",
|
||||
fs.LogLevelPrintf(level, lrw.request.URL, "%s %s %d %s %s",
|
||||
lrw.request.RemoteAddr, lrw.request.Method, code,
|
||||
lrw.request.Header.Get("SOAPACTION"), err)
|
||||
}
|
||||
|
|
|
@ -251,6 +251,15 @@ func getVFSOption(vfsOpt *vfscommon.Options, opt rc.Params, key string) (ok bool
|
|||
err = getFVarP(&vfsOpt.ReadAhead, opt, key)
|
||||
case "vfs-used-is-size":
|
||||
vfsOpt.UsedIsSize, err = opt.GetBool(key)
|
||||
case "vfs-read-chunk-streams":
|
||||
intVal, err = opt.GetInt64(key)
|
||||
if err == nil {
|
||||
if intVal >= 0 && intVal <= math.MaxInt {
|
||||
vfsOpt.ChunkStreams = int(intVal)
|
||||
} else {
|
||||
err = fmt.Errorf("key %q (%v) overflows int", key, intVal)
|
||||
}
|
||||
}
|
||||
|
||||
// unprefixed vfs options
|
||||
case "no-modtime":
|
||||
|
|
|
@ -6,7 +6,9 @@ package cmdtest
|
|||
|
||||
import (
|
||||
"os"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
|
@ -344,4 +346,42 @@ func TestEnvironmentVariables(t *testing.T) {
|
|||
env = ""
|
||||
out, err = rcloneEnv(env, "version", "-vv", "--use-json-log")
|
||||
jsonLogOK()
|
||||
|
||||
// Find all the File filter lines in out and return them
|
||||
parseFileFilters := func(out string) (extensions []string) {
|
||||
// Match: - (^|/)[^/]*\.jpg$
|
||||
find := regexp.MustCompile(`^- \(\^\|\/\)\[\^\/\]\*\\\.(.*?)\$$`)
|
||||
for _, line := range strings.Split(out, "\n") {
|
||||
if m := find.FindStringSubmatch(line); m != nil {
|
||||
extensions = append(extensions, m[1])
|
||||
}
|
||||
}
|
||||
return extensions
|
||||
}
|
||||
|
||||
// Make sure that multiple valued (stringArray) environment variables are handled properly
|
||||
env = ``
|
||||
out, err = rcloneEnv(env, "version", "-vv", "--dump", "filters", "--exclude", "*.gif", "--exclude", "*.tif")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []string{"gif", "tif"}, parseFileFilters(out))
|
||||
|
||||
env = `RCLONE_EXCLUDE=*.jpg`
|
||||
out, err = rcloneEnv(env, "version", "-vv", "--dump", "filters", "--exclude", "*.gif")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []string{"jpg", "gif"}, parseFileFilters(out))
|
||||
|
||||
env = `RCLONE_EXCLUDE=*.jpg,*.png`
|
||||
out, err = rcloneEnv(env, "version", "-vv", "--dump", "filters", "--exclude", "*.gif", "--exclude", "*.tif")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []string{"jpg", "png", "gif", "tif"}, parseFileFilters(out))
|
||||
|
||||
env = `RCLONE_EXCLUDE="*.jpg","*.png"`
|
||||
out, err = rcloneEnv(env, "version", "-vv", "--dump", "filters")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []string{"jpg", "png"}, parseFileFilters(out))
|
||||
|
||||
env = `RCLONE_EXCLUDE="*.,,,","*.png"`
|
||||
out, err = rcloneEnv(env, "version", "-vv", "--dump", "filters")
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, []string{",,,", "png"}, parseFileFilters(out))
|
||||
}
|
||||
|
|
|
@ -5,6 +5,55 @@ description: "Rclone Changelog"
|
|||
|
||||
# Changelog
|
||||
|
||||
## v1.68.2 - 2024-11-15
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
|
||||
|
||||
* Security fixes
|
||||
* local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
|
||||
* Only affects users using `--metadata` and `--links` and copying files to the local backend
|
||||
* See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
|
||||
* build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
|
||||
* This is an issue in a dependency which is used for JWT certificates
|
||||
* See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
|
||||
* Bug Fixes
|
||||
* accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
|
||||
* bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
|
||||
* dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
|
||||
* serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
|
||||
* doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
|
||||
* Local
|
||||
* Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
|
||||
* Fix `--copy-links` on macOS when cloning (nielash)
|
||||
* Onedrive
|
||||
* Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
|
||||
* Pikpak
|
||||
* Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
|
||||
* Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
|
||||
* S3
|
||||
* Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
|
||||
* Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
|
||||
* Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
|
||||
|
||||
## v1.68.1 - 2024-09-24
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
|
||||
|
||||
* Bug Fixes
|
||||
* build: Fix docker release build (ttionya)
|
||||
* doc fixes (Nick Craig-Wood, Pawel Palucha)
|
||||
* fs
|
||||
* Fix `--dump filters` not always appearing (Nick Craig-Wood)
|
||||
* Fix setting `stringArray` config values from environment variables (Nick Craig-Wood)
|
||||
* rc: Fix default value of `--metrics-addr` (Nick Craig-Wood)
|
||||
* serve docker: Add missing `vfs-read-chunk-streams` option in docker volume driver (Divyam)
|
||||
* Onedrive
|
||||
* Fix spurious "Couldn't decode error response: EOF" DEBUG (Nick Craig-Wood)
|
||||
* Pikpak
|
||||
* Fix login issue where token retrieval fails (wiserain)
|
||||
* S3
|
||||
* Fix rclone ignoring static credentials when `env_auth=true` (Nick Craig-Wood)
|
||||
|
||||
## v1.68.0 - 2024-09-08
|
||||
|
||||
[See commits](https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0)
|
||||
|
|
|
@ -510,7 +510,7 @@ rclone [flags]
|
|||
--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
|
||||
--metadata-mapper SpaceSepList Program to run to transforming metadata before upload
|
||||
--metadata-set stringArray Add metadata key=value when uploading
|
||||
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
|
||||
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
|
||||
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--metrics-baseurl string Prefix for URLs - leave blank for root
|
||||
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -616,21 +616,18 @@ rclone [flags]
|
|||
--pcloud-token string OAuth Access Token as a JSON blob
|
||||
--pcloud-token-url string Token server url
|
||||
--pcloud-username string Your pcloud username
|
||||
--pikpak-auth-url string Auth server URL
|
||||
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
|
||||
--pikpak-client-id string OAuth Client Id
|
||||
--pikpak-client-secret string OAuth Client Secret
|
||||
--pikpak-description string Description of the remote
|
||||
--pikpak-device-id string Device ID used for authorization
|
||||
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
|
||||
--pikpak-pass string Pikpak password (obscured)
|
||||
--pikpak-root-folder-id string ID of the root folder
|
||||
--pikpak-token string OAuth Access Token as a JSON blob
|
||||
--pikpak-token-url string Token server url
|
||||
--pikpak-trashed-only Only show files that are in the trash
|
||||
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
|
||||
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
|
||||
--pikpak-user string Pikpak username
|
||||
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
|
||||
--pixeldrain-api-key string API key for your pixeldrain account
|
||||
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
|
||||
--pixeldrain-description string Description of the remote
|
||||
|
@ -683,7 +680,7 @@ rclone [flags]
|
|||
--quatrix-skip-project-folders Skip project folders in operations
|
||||
-q, --quiet Print as little stuff as possible
|
||||
--rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -932,7 +929,7 @@ rclone [flags]
|
|||
--use-json-log Use json log format
|
||||
--use-mmap Use mmap allocator (see docs)
|
||||
--use-server-modtime Use server modified time instead of object metadata
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
|
||||
-v, --verbose count Print lots more stuff (repeat for more)
|
||||
-V, --version Print the version number
|
||||
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
|
||||
|
|
|
@ -54,7 +54,9 @@ When running in background mode the user will have to stop the mount manually:
|
|||
|
||||
# Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount
|
||||
|
||||
The umount operation can fail, for example when the mountpoint is busy.
|
||||
|
@ -398,9 +400,9 @@ Note that systemd runs mount units without any environment variables including
|
|||
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
|
||||
and you should provide `--config` and `--cache-dir` explicitly as absolute
|
||||
paths via rclone arguments.
|
||||
Since mounting requires the `fusermount` program, rclone will use the fallback
|
||||
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
|
||||
is present on this PATH.
|
||||
Since mounting requires the `fusermount` or `fusermount3` program,
|
||||
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
|
||||
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
|
||||
|
||||
## Rclone as Unix mount helper
|
||||
|
||||
|
|
|
@ -55,7 +55,9 @@ When running in background mode the user will have to stop the mount manually:
|
|||
|
||||
# Linux
|
||||
fusermount -u /path/to/local/mount
|
||||
# OS X
|
||||
#... or on some systems
|
||||
fusermount3 -u /path/to/local/mount
|
||||
# OS X or Linux when using nfsmount
|
||||
umount /path/to/local/mount
|
||||
|
||||
The umount operation can fail, for example when the mountpoint is busy.
|
||||
|
@ -399,9 +401,9 @@ Note that systemd runs mount units without any environment variables including
|
|||
`PATH` or `HOME`. This means that tilde (`~`) expansion will not work
|
||||
and you should provide `--config` and `--cache-dir` explicitly as absolute
|
||||
paths via rclone arguments.
|
||||
Since mounting requires the `fusermount` program, rclone will use the fallback
|
||||
PATH of `/bin:/usr/bin` in this scenario. Please ensure that `fusermount`
|
||||
is present on this PATH.
|
||||
Since mounting requires the `fusermount` or `fusermount3` program,
|
||||
rclone will use the fallback PATH of `/bin:/usr/bin` in this scenario.
|
||||
Please ensure that `fusermount`/`fusermount3` is present on this PATH.
|
||||
|
||||
## Rclone as Unix mount helper
|
||||
|
||||
|
|
|
@ -166,7 +166,7 @@ Flags to control the Remote Control API
|
|||
|
||||
```
|
||||
--rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
|
|
@ -2911,6 +2911,22 @@ so they take exactly the same form.
|
|||
|
||||
The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
|
||||
|
||||
Options that can appear multiple times (type `stringArray`) are
treated slightly differently as environment variables can only be
defined once. In order to allow a simple mechanism for adding one or
many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/csv)
string. For example

| Environment Variable | Equivalent options |
|----------------------|--------------------|
| `RCLONE_EXCLUDE="*.jpg"` | `--exclude "*.jpg"` |
| `RCLONE_EXCLUDE="*.jpg,*.png"` | `--exclude "*.jpg"` `--exclude "*.png"` |
| `RCLONE_EXCLUDE='"*.jpg","*.png"'` | `--exclude "*.jpg"` `--exclude "*.png"` |
| `RCLONE_EXCLUDE='"/directory with comma , in it /**"'` | `--exclude "/directory with comma , in it /**"` |

If `stringArray` options are defined as environment variables **and**
options on the command line then all the values will be used.

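To illustrate the CSV treatment above, here is a minimal Go sketch (standard library only, not rclone's actual implementation; the environment variable value is just an example) that expands a CSV-encoded value into separate items:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Example input, e.g. RCLONE_EXCLUDE="*.jpg,*.png" or RCLONE_EXCLUDE='"*.jpg","*.png"'
	in := os.Getenv("RCLONE_EXCLUDE")
	if in == "" {
		fmt.Println("no excludes set")
		return
	}
	// Read one CSV record; quoting allows values that themselves contain commas.
	values, err := csv.NewReader(strings.NewReader(in)).Read()
	if err != nil {
		fmt.Fprintln(os.Stderr, "invalid CSV:", err)
		os.Exit(1)
	}
	for _, v := range values {
		fmt.Printf("--exclude %q\n", v)
	}
}
```

Running this with `RCLONE_EXCLUDE='"*.jpg","*.png"'` prints two separate `--exclude` values, matching the table above.
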
### Config file ###
|
||||
|
||||
You can set defaults for values in the config file on an individual
|
||||
|
|
|
@ -1809,9 +1809,9 @@ then select "OAuth client ID".
|
|||
|
||||
9. It will show you a client ID and client secret. Make a note of these.
|
||||
|
||||
(If you selected "External" at Step 5 continue to Step 9.
|
||||
(If you selected "External" at Step 5 continue to Step 10.
|
||||
If you chose "Internal" you don't need to publish and can skip straight to
|
||||
Step 10 but your destination drive must be part of the same Google Workspace.)
|
||||
Step 11 but your destination drive must be part of the same Google Workspace.)
|
||||
|
||||
10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
|
||||
You will also want to add yourself as a test user.
|
||||
|
|
|
@ -505,6 +505,8 @@ processed in.
|
|||
Arrange the order of filter rules with the most restrictive first and
|
||||
work down.
|
||||
|
||||
Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. _Use `-vv --dump filters` to see how they appear in the final regexp._
|
||||
|
||||
E.g. for `filter-file.txt`:
|
||||
|
||||
# a sample filter rule file
|
||||
|
@ -512,6 +514,7 @@ E.g. for `filter-file.txt`:
|
|||
+ *.jpg
|
||||
+ *.png
|
||||
+ file2.avi
|
||||
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
|
||||
- /dir/Trash/**
|
||||
+ /dir/**
|
||||
# exclude everything else
|
||||
|
|
|
@ -115,7 +115,7 @@ Flags for general networking and HTTP stuff.
|
|||
--tpslimit float Limit HTTP transactions per second to this
|
||||
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
|
||||
--use-cookies Enable session cookiejar
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
|
||||
--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
|
||||
```
|
||||
|
||||
|
||||
|
@ -264,7 +264,7 @@ Flags to control the Remote Control API.
|
|||
|
||||
```
|
||||
--rc Enable the remote control server
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
|
||||
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
|
||||
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--rc-baseurl string Prefix for URLs - leave blank for root
|
||||
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -300,7 +300,7 @@ Flags to control the Remote Control API.
|
|||
Flags to control the Metrics HTTP endpoint..
|
||||
|
||||
```
|
||||
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
|
||||
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
|
||||
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
|
||||
--metrics-baseurl string Prefix for URLs - leave blank for root
|
||||
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
|
||||
|
@ -794,21 +794,18 @@ Backend-only flags (these can be set in the config file also).
|
|||
--pcloud-token string OAuth Access Token as a JSON blob
|
||||
--pcloud-token-url string Token server url
|
||||
--pcloud-username string Your pcloud username
|
||||
--pikpak-auth-url string Auth server URL
|
||||
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
|
||||
--pikpak-client-id string OAuth Client Id
|
||||
--pikpak-client-secret string OAuth Client Secret
|
||||
--pikpak-description string Description of the remote
|
||||
--pikpak-device-id string Device ID used for authorization
|
||||
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
|
||||
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
|
||||
--pikpak-pass string Pikpak password (obscured)
|
||||
--pikpak-root-folder-id string ID of the root folder
|
||||
--pikpak-token string OAuth Access Token as a JSON blob
|
||||
--pikpak-token-url string Token server url
|
||||
--pikpak-trashed-only Only show files that are in the trash
|
||||
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
|
||||
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
|
||||
--pikpak-user string Pikpak username
|
||||
--pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
|
||||
--pixeldrain-api-key string API key for your pixeldrain account
|
||||
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
|
||||
--pixeldrain-description string Description of the remote
|
||||
|
|
|
@ -46,7 +46,7 @@ Here is an overview of the major features of each cloud storage system.
|
|||
| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
|
||||
| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
|
||||
| Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
|
||||
| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
|
||||
| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
|
||||
| PikPak | MD5 | R | No | No | R | - |
|
||||
| Pixeldrain | SHA256 | R/W | No | No | R | RW |
|
||||
| premiumize.me | - | - | Yes | No | R | - |
|
||||
|
|
|
@ -111,68 +111,29 @@ Properties:
|
|||
|
||||
Here are the Advanced options specific to pikpak (PikPak).
|
||||
|
||||
#### --pikpak-client-id
|
||||
#### --pikpak-device-id
|
||||
|
||||
OAuth Client Id.
|
||||
|
||||
Leave blank normally.
|
||||
Device ID used for authorization.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_id
|
||||
- Env Var: RCLONE_PIKPAK_CLIENT_ID
|
||||
- Config: device_id
|
||||
- Env Var: RCLONE_PIKPAK_DEVICE_ID
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --pikpak-client-secret
|
||||
#### --pikpak-user-agent
|
||||
|
||||
OAuth Client Secret.
|
||||
HTTP user agent for pikpak.
|
||||
|
||||
Leave blank normally.
|
||||
Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: client_secret
|
||||
- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
|
||||
- Config: user_agent
|
||||
- Env Var: RCLONE_PIKPAK_USER_AGENT
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --pikpak-token
|
||||
|
||||
OAuth Access Token as a JSON blob.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token
|
||||
- Env Var: RCLONE_PIKPAK_TOKEN
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --pikpak-auth-url
|
||||
|
||||
Auth server URL.
|
||||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: auth_url
|
||||
- Env Var: RCLONE_PIKPAK_AUTH_URL
|
||||
- Type: string
|
||||
- Required: false
|
||||
|
||||
#### --pikpak-token-url
|
||||
|
||||
Token server url.
|
||||
|
||||
Leave blank to use the provider defaults.
|
||||
|
||||
Properties:
|
||||
|
||||
- Config: token_url
|
||||
- Env Var: RCLONE_PIKPAK_TOKEN_URL
|
||||
- Type: string
|
||||
- Required: false
|
||||
- Default: "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
|
||||
|
||||
#### --pikpak-root-folder-id
|
||||
|
||||
|
|
|
@ -401,6 +401,38 @@ there for more details.
|
|||
|
||||
Setting this flag increases the chance for undetected upload failures.
|
||||
|
||||
### Increasing performance
|
||||
|
||||
#### Using server-side copy
|
||||
|
||||
If you are copying objects between S3 buckets in the same region, you should
|
||||
use server-side copy.
|
||||
This is much faster than downloading and re-uploading the objects, as no data is transferred.
|
||||
|
||||
For rclone to use server-side copy, you must use the same remote for the source and destination.
|
||||
|
||||
rclone copy s3:source-bucket s3:destination-bucket
|
||||
|
||||
When using server-side copy, the performance is limited by the rate at which rclone issues
|
||||
API requests to S3.
|
||||
See below for how to increase the number of API requests rclone makes.
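
For context, a server-side copy corresponds to a single S3 `CopyObject` API call, so only the request and metadata travel between the client and S3. The sketch below uses the AWS SDK for Go v2 with placeholder bucket and key names; it illustrates the underlying API call, not rclone's own code:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)
	// S3 copies source-bucket/file.txt to destination-bucket/file.txt internally;
	// the object data is never downloaded or re-uploaded by the caller.
	_, err = client.CopyObject(context.TODO(), &s3.CopyObjectInput{
		Bucket:     aws.String("destination-bucket"),
		Key:        aws.String("file.txt"),
		CopySource: aws.String("source-bucket/file.txt"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("copied server-side")
}
```

Note that a single `CopyObject` call is limited to objects of up to 5 GiB; larger objects require a multipart copy, which is why rclone exposes a copy cutoff setting for S3.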
|
||||
|
||||
#### Increasing the rate of API requests
|
||||
|
||||
You can increase the rate of API requests to S3 by increasing the parallelism using the `--transfers` and `--checkers`
options.

Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests.
Depending on your provider, you can significantly increase the number of transfers and checkers.

For example, with AWS S3, you can increase the number of checkers to values like 200.
If you are doing a server-side copy, you can also increase the number of transfers to 200.
|
||||
|
||||
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
|
||||
|
||||
You will need to experiment with these values to find the optimal settings for your setup.
|
||||
|
||||
|
||||
### Versions
|
||||
|
||||
When bucket versioning is enabled (this can be done with rclone with
|
||||
|
@ -3444,8 +3476,8 @@ chunk_size = 5M
|
|||
copy_cutoff = 5M
|
||||
```
|
||||
|
||||
[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
|
||||
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
|
||||
[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
|
||||
So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back; you will need to restore them to the "STANDARD" storage_class first before being able to read them (see the "restore" section above).
|
||||
|
||||
### Seagate Lyve Cloud {#lyve}
|
||||
|
||||
|
|
|
@ -61,3 +61,4 @@ Thank you very much to our sponsors:
|
|||
{{< sponsor src="/img/logos/warp.svg" width="300" height="200" title="Visit our sponsor warp.dev" link="https://www.warp.dev/?utm_source=rclone&utm_medium=referral&utm_campaign=rclone_20231103">}}
|
||||
{{< sponsor src="/img/logos/sia.svg" width="200" height="200" title="Visit our sponsor sia" link="https://sia.tech">}}
|
||||
{{< sponsor src="/img/logos/route4me.svg" width="400" height="200" title="Visit our sponsor Route4Me" link="https://route4me.com/">}}
|
||||
{{< sponsor src="/img/logos/rcloneview.svg" width="300" height="200" title="Visit our sponsor RcloneView" link="https://rcloneview.com/">}}
|
||||
|
|
|
@ -1 +1 @@
|
|||
v1.68.0
|
||||
v1.68.2
|
|
@ -41,7 +41,12 @@ type tokenBucket struct {
|
|||
//
|
||||
// Call with lock held
|
||||
func (bs *buckets) _isOff() bool { //nolint:unused // Don't include unused when running golangci-lint in case its on windows where this is not called
|
||||
return bs[0] == nil
|
||||
for i := range bs {
|
||||
if bs[i] != nil {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// Disable the limits
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
package configstruct
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"encoding/csv"
|
||||
"errors"
|
||||
"fmt"
|
||||
"reflect"
|
||||
|
@ -31,7 +31,7 @@ func camelToSnake(in string) string {
|
|||
//
|
||||
// Builtin types are expected to be encoded as their natural
|
||||
// stringifications as produced by fmt.Sprint except for []string which
|
||||
// is expected to be encoded as JSON with empty array encoded as "".
|
||||
// is expected to be encoded as a CSV with empty array encoded as "".
|
||||
//
|
||||
// Any other types are expected to be encoded by their String()
|
||||
// methods and decoded by their `Set(s string) error` methods.
|
||||
|
@ -58,14 +58,18 @@ func StringToInterface(def interface{}, in string) (newValue interface{}, err er
|
|||
case time.Duration:
|
||||
newValue, err = time.ParseDuration(in)
|
||||
case []string:
|
||||
// JSON decode arrays of strings
|
||||
if in != "" {
|
||||
var out []string
|
||||
err = json.Unmarshal([]byte(in), &out)
|
||||
newValue = out
|
||||
} else {
|
||||
// Empty string we will treat as empty array
|
||||
// CSV decode arrays of strings - ideally we would use
|
||||
// fs.CommaSepList here but we can't as it would cause
|
||||
// a circular import.
|
||||
if len(in) == 0 {
|
||||
newValue = []string{}
|
||||
} else {
|
||||
r := csv.NewReader(strings.NewReader(in))
|
||||
newValue, err = r.Read()
|
||||
switch _err := err.(type) {
|
||||
case *csv.ParseError:
|
||||
err = _err.Err // remove line numbers from the error message
|
||||
}
|
||||
}
|
||||
default:
|
||||
// Try using a Set method
|
||||
|
|
|
@ -204,9 +204,11 @@ func TestStringToInterface(t *testing.T) {
|
|||
{"1m1s", fs.Duration(0), fs.Duration(61 * time.Second), ""},
|
||||
{"1potato", fs.Duration(0), nil, `parsing "1potato" as fs.Duration failed: parsing time "1potato" as "2006-01-02": cannot parse "1potato" as "2006"`},
|
||||
{``, []string{}, []string{}, ""},
|
||||
{`[]`, []string(nil), []string{}, ""},
|
||||
{`["hello"]`, []string{}, []string{"hello"}, ""},
|
||||
{`["hello","world!"]`, []string(nil), []string{"hello", "world!"}, ""},
|
||||
{`""`, []string(nil), []string{""}, ""},
|
||||
{`hello`, []string{}, []string{"hello"}, ""},
|
||||
{`"hello"`, []string{}, []string{"hello"}, ""},
|
||||
{`hello,world!`, []string(nil), []string{"hello", "world!"}, ""},
|
||||
{`"hello","world!"`, []string(nil), []string{"hello", "world!"}, ""},
|
||||
{"1s", time.Duration(0), time.Second, ""},
|
||||
{"1m1s", time.Duration(0), 61 * time.Second, ""},
|
||||
{"1potato", time.Duration(0), nil, `parsing "1potato" as time.Duration failed: time: unknown unit "potato" in duration "1potato"`},
|
||||
|
|
|
@ -143,12 +143,32 @@ func installFlag(flags *pflag.FlagSet, name string, groupsString string) {
// Read default from environment if possible
envKey := fs.OptionToEnv(name)
if envValue, envFound := os.LookupEnv(envKey); envFound {
err := flags.Set(name, envValue)
if err != nil {
fs.Fatalf(nil, "Invalid value when setting --%s from environment variable %s=%q: %v", name, envKey, envValue, err)
isStringArray := false
opt, isOption := flag.Value.(*fs.Option)
if isOption {
_, isStringArray = opt.Default.([]string)
}
if isStringArray {
// Treat stringArray differently, treating the environment variable as a CSV array
var list fs.CommaSepList
err := list.Set(envValue)
if err != nil {
fs.Fatalf(nil, "Invalid value when setting stringArray --%s from environment variable %s=%q: %v", name, envKey, envValue, err)
}
// Set both the Value (so items on the command line get added) and DefValue so the help is correct
opt.Value = ([]string)(list)
flag.DefValue = list.String()
for _, v := range list {
fs.Debugf(nil, "Setting --%s %q from environment variable %s=%q", name, v, envKey, envValue)
}
} else {
err := flags.Set(name, envValue)
if err != nil {
fs.Fatalf(nil, "Invalid value when setting --%s from environment variable %s=%q: %v", name, envKey, envValue, err)
}
fs.Debugf(nil, "Setting --%s %q from environment variable %s=%q", name, flag.Value, envKey, envValue)
flag.DefValue = envValue
}
fs.Debugf(nil, "Setting --%s %q from environment variable %s=%q", name, flag.Value, envKey, envValue)
flag.DefValue = envValue
}

// Add flag to Group if it is a global flag
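The installFlag change above means a stringArray option set from the environment is now split into several values rather than taken as one literal string. A rough standalone illustration of that splitting, using encoding/csv directly where rclone uses fs.CommaSepList; the RCLONE_EXCLUDE value here is only an example.

package main

import (
	"encoding/csv"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Pretend the user exported this before running rclone.
	os.Setenv("RCLONE_EXCLUDE", `*.jpg,*.png`)

	var values []string
	if envValue, found := os.LookupEnv("RCLONE_EXCLUDE"); found && envValue != "" {
		record, err := csv.NewReader(strings.NewReader(envValue)).Read()
		if err != nil {
			fmt.Fprintln(os.Stderr, "invalid value:", err)
			os.Exit(1)
		}
		values = record
	}
	// Each entry now behaves like a separate --exclude flag on the command line.
	fmt.Println(values) // [*.jpg *.png]
}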
@ -85,7 +85,7 @@ var OptionsInfo = fs.Options{{
Groups: "RC",
}, {
Name: "metrics_addr",
Default: []string{""},
Default: []string{},
Help: "IPaddress:Port or :Port to bind metrics server to",
Groups: "Metrics",
}}.
@ -37,7 +37,7 @@ func init() {
// If the server wasn't configured the *Server returned may be nil
func MetricsStart(ctx context.Context, opt *rc.Options) (*MetricsServer, error) {
jobs.SetOpt(opt) // set the defaults for jobs
if opt.MetricsHTTP.ListenAddr[0] != "" {
if len(opt.MetricsHTTP.ListenAddr) > 0 {
// Serve on the DefaultServeMux so can have global registrations appear
s, err := newMetricsServer(ctx, opt)
if err != nil {
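This guard pairs with the preceding hunk, which changes the metrics_addr default from []string{""} to []string{}: with an empty default there is no element to index, so the check has to test the length. An illustrative sketch (not rclone code) of the difference:

package main

import "fmt"

func main() {
	addrs := []string{} // the new default for --metrics-addr

	// New-style guard: only start the server when an address is configured.
	if len(addrs) > 0 {
		fmt.Println("would listen on", addrs[0])
	} else {
		fmt.Println("metrics server not configured")
	}

	// The old guard, addrs[0] != "", would panic here with
	// "index out of range [0] with length 0".
}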
@ -264,14 +264,9 @@ func (o *Option) String() string {
if len(stringArray) == 0 {
return ""
}
// Encode string arrays as JSON
// Encode string arrays as CSV
// The default Go encoding can't be decoded uniquely
buf, err := json.Marshal(stringArray)
if err != nil {
Errorf(nil, "Can't encode default value for %q key - ignoring: %v", o.Name, err)
return "[]"
}
return string(buf)
return CommaSepList(stringArray).String()
}
return fmt.Sprint(v)
}
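For the opposite direction, here is a small sketch of how a []string default can be rendered as a single CSV line; encodeStringArray is a made-up helper intended to behave like the CommaSepList(stringArray).String() call above, not rclone's actual implementation.

package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
	"strings"
)

// encodeStringArray renders a slice as one CSV record, with an empty slice
// encoded as the empty string.
func encodeStringArray(a []string) string {
	if len(a) == 0 {
		return ""
	}
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	_ = w.Write(a) // only fails if the underlying writer fails
	w.Flush()
	return strings.TrimRight(buf.String(), "\r\n")
}

func main() {
	fmt.Println(encodeStringArray([]string{"hello", "world!"}))     // hello,world!
	fmt.Println(encodeStringArray([]string{"with,comma", "plain"})) // "with,comma",plain
}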
@ -531,7 +526,22 @@ func (oi *OptionsInfo) load() error {
// their values read from the options, environment variables and
// command line parameters.
func GlobalOptionsInit() error {
for _, opt := range OptionsRegistry {
var keys []string
for key := range OptionsRegistry {
keys = append(keys, key)
}
sort.Slice(keys, func(i, j int) bool {
// Sort alphabetically, but with "main" first
if keys[i] == "main" {
return true
}
if keys[j] == "main" {
return false
}
return keys[i] < keys[j]
})
for _, key := range keys {
opt := OptionsRegistry[key]
err := opt.load()
if err != nil {
return err
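The new code loads the option groups in a deterministic order with "main" first. A compact runnable sketch of just that ordering; the group names here are illustrative, not the real registry keys.

package main

import (
	"fmt"
	"sort"
)

func main() {
	keys := []string{"vfs", "main", "filter", "log"}
	sort.Slice(keys, func(i, j int) bool {
		// Sort alphabetically, but with "main" first.
		if keys[i] == "main" {
			return true
		}
		if keys[j] == "main" {
			return false
		}
		return keys[i] < keys[j]
	})
	fmt.Println(keys) // [main filter log vfs]
}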
@ -1,4 +1,4 @@
package fs

// VersionTag of rclone
var VersionTag = "v1.68.0"
var VersionTag = "v1.68.2"
4 go.mod
@ -59,7 +59,7 @@ require (
github.com/prometheus/client_golang v1.19.1
github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8
github.com/quasilyte/go-ruleguard/dsl v0.3.22
github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87
github.com/rclone/gofakes3 v0.0.3
github.com/rfjakob/eme v1.1.2
github.com/rivo/uniseg v0.4.7
github.com/rogpeppe/go-internal v1.12.0
@ -223,7 +223,7 @@ require (
require (
github.com/Microsoft/go-winio v0.6.1 // indirect
github.com/ProtonMail/go-crypto v1.0.0
github.com/golang-jwt/jwt/v4 v4.5.0
github.com/golang-jwt/jwt/v4 v4.5.1
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/pkg/xattr v0.4.9
golang.org/x/mobile v0.0.0-20240716161057-1ad2df20a8b6
8 go.sum
@ -275,8 +275,8 @@ github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo=
github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@ -519,8 +519,8 @@ github.com/quic-go/quic-go v0.40.1 h1:X3AGzUNFs0jVuO3esAGnTfvdgvL4fq655WaOi1snv1
github.com/quic-go/quic-go v0.40.1/go.mod h1:PeN7kuVJ4xZbxSv/4OX6S1USOX8MJvydwpTx31vx60c=
github.com/rasky/go-xdr v0.0.0-20170124162913-1a41d1a06c93 h1:UVArwN/wkKjMVhh2EQGC0tEc1+FqiLlvYXY5mQ2f8Wg=
github.com/rasky/go-xdr v0.0.0-20170124162913-1a41d1a06c93/go.mod h1:Nfe4efndBz4TibWycNE+lqyJZiMX4ycx+QKV8Ta0f/o=
github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87 h1:0YRo2aYhE+SCZsjWYMFe8zLD18xieXy7wQ8M9Ywcr/g=
github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87/go.mod h1:z7+o2VUwitO0WuVHReQlOW9jZ03LpeJ0PUFSULyTIds=
github.com/rclone/gofakes3 v0.0.3 h1:0sKCxJ8TUUAG5KXGuc/fcDKGnzB/j6IjNQui9ntIZPo=
github.com/rclone/gofakes3 v0.0.3/go.mod h1:z7+o2VUwitO0WuVHReQlOW9jZ03LpeJ0PUFSULyTIds=
github.com/relvacode/iso8601 v1.3.0 h1:HguUjsGpIMh/zsTczGN3DVJFxTU/GX+MMmzcKoMO7ko=
github.com/relvacode/iso8601 v1.3.0/go.mod h1:FlNp+jz+TXpyRqgmM7tnzHHzBnz776kmAH2h3sZCn0I=
github.com/rfjakob/eme v1.1.2 h1:SxziR8msSOElPayZNFfQw4Tjx/Sbaeeh3eRvrHVMUs4=
@ -17,7 +17,8 @@ import (
)

const (
BufferSize = 1024 * 1024 // BufferSize is the default size of the pages used in the reader
// BufferSize is the default size of the pages used in the reader
BufferSize = 1024 * 1024
bufferCacheSize = 64 // max number of buffers to keep in cache
bufferCacheFlushTime = 5 * time.Second // flush the cached buffers after this long
)
340 rclone.1 generated
@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
.TH "rclone" "1" "Sep 08, 2024" "User Manual" ""
.TH "rclone" "1" "Nov 15, 2024" "User Manual" ""
.hy
.SH Rclone syncs your files to cloud storage
.PP

@ -6429,7 +6429,9 @@ manually:
\f[C]
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
\f[R]
.fi

@ -6859,9 +6861,11 @@ including \f[C]PATH\f[R] or \f[C]HOME\f[R].
This means that tilde (\f[C]\[ti]\f[R]) expansion will not work and you
should provide \f[C]--config\f[R] and \f[C]--cache-dir\f[R] explicitly
as absolute paths via rclone arguments.
Since mounting requires the \f[C]fusermount\f[R] program, rclone will
use the fallback PATH of \f[C]/bin:/usr/bin\f[R] in this scenario.
Please ensure that \f[C]fusermount\f[R] is present on this PATH.
Since mounting requires the \f[C]fusermount\f[R] or
\f[C]fusermount3\f[R] program, rclone will use the fallback PATH of
\f[C]/bin:/usr/bin\f[R] in this scenario.
Please ensure that \f[C]fusermount\f[R]/\f[C]fusermount3\f[R] is present
on this PATH.
.SS Rclone as Unix mount helper
.PP
The core Unix program \f[C]/bin/mount\f[R] normally takes the

@ -7886,7 +7890,9 @@ manually:
\f[C]
# Linux
fusermount -u /path/to/local/mount
# OS X
#... or on some systems
fusermount3 -u /path/to/local/mount
# OS X or Linux when using nfsmount
umount /path/to/local/mount
\f[R]
.fi

@ -8317,9 +8323,11 @@ including \f[C]PATH\f[R] or \f[C]HOME\f[R].
This means that tilde (\f[C]\[ti]\f[R]) expansion will not work and you
should provide \f[C]--config\f[R] and \f[C]--cache-dir\f[R] explicitly
as absolute paths via rclone arguments.
Since mounting requires the \f[C]fusermount\f[R] program, rclone will
use the fallback PATH of \f[C]/bin:/usr/bin\f[R] in this scenario.
Please ensure that \f[C]fusermount\f[R] is present on this PATH.
Since mounting requires the \f[C]fusermount\f[R] or
\f[C]fusermount3\f[R] program, rclone will use the fallback PATH of
\f[C]/bin:/usr/bin\f[R] in this scenario.
Please ensure that \f[C]fusermount\f[R]/\f[C]fusermount3\f[R] is present
on this PATH.
.SS Rclone as Unix mount helper
.PP
The core Unix program \f[C]/bin/mount\f[R] normally takes the

@ -9527,7 +9535,7 @@ Flags to control the Remote Control API
.nf
\f[C]
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default [\[dq]localhost:5572\[dq]])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -19836,6 +19844,49 @@ they take exactly the same form.
The options set by environment variables can be seen with the
\f[C]-vv\f[R] flag, e.g.
\f[C]rclone version -vv\f[R].
.PP
Options that can appear multiple times (type \f[C]stringArray\f[R]) are
treated slightly differently as environment variables can only be defined
once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded (https://godoc.org/encoding/csv)
string.
For example
.PP
.TS
tab(@);
lw(36.7n) lw(33.3n).
T{
Environment Variable
T}@T{
Equivalent options
T}
_
T{
\f[C]RCLONE_EXCLUDE=\[dq]*.jpg\[dq]\f[R]
T}@T{
\f[C]--exclude \[dq]*.jpg\[dq]\f[R]
T}
T{
\f[C]RCLONE_EXCLUDE=\[dq]*.jpg,*.png\[dq]\f[R]
T}@T{
\f[C]--exclude \[dq]*.jpg\[dq]\f[R] \f[C]--exclude \[dq]*.png\[dq]\f[R]
T}
T{
\f[C]RCLONE_EXCLUDE=\[aq]\[dq]*.jpg\[dq],\[dq]*.png\[dq]\[aq]\f[R]
T}@T{
\f[C]--exclude \[dq]*.jpg\[dq]\f[R] \f[C]--exclude \[dq]*.png\[dq]\f[R]
T}
T{
\f[C]RCLONE_EXCLUDE=\[aq]\[dq]/directory with comma , in it /**\[dq]\[aq]\f[R]
T}@T{
\[ga]--exclude \[dq]/directory with comma , in it /**\[dq]
T}
.TE
.PP
If \f[C]stringArray\f[R] options are defined as environment variables
\f[B]and\f[R] options on the command line then all the values will be
used.
.SS Config file
.PP
You can set defaults for values in the config file on an individual
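To make the quoting rule in the table above concrete, here is a small Go illustration, using the same encoding/csv package the documentation points at, of how quoting keeps a comma inside a single value instead of splitting it into two.

package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

func main() {
	inputs := []string{
		`*.jpg,*.png`,                         // two values
		`"/directory with comma , in it /**"`, // one quoted value
	}
	for _, in := range inputs {
		record, err := csv.NewReader(strings.NewReader(in)).Read()
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("%s -> %d value(s): %q\n", in, len(record), record)
	}
}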
@ -20904,6 +20955,12 @@ See above for the order filter flags are processed in.
Arrange the order of filter rules with the most restrictive first and
work down.
.PP
Lines starting with # or ; are ignored, and can be used to write
comments.
Inline comments are not supported.
\f[I]Use \f[CI]-vv --dump filters\f[I] to see how they appear in the
final regexp.\f[R]
.PP
E.g.
for \f[C]filter-file.txt\f[R]:
.IP

@ -20914,6 +20971,7 @@ for \f[C]filter-file.txt\f[R]:
+ *.jpg
+ *.png
+ file2.avi
- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else

@ -24914,7 +24972,7 @@ pCloud
T}@T{
MD5, SHA1 \[u2077]
T}@T{
R
R/W
T}@T{
No
T}@T{

@ -27668,7 +27726,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.68.0\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.68.2\[dq])
\f[R]
.fi
.SS Performance

@ -27817,7 +27875,7 @@ Flags to control the Remote Control API.
.nf
\f[C]
--rc Enable the remote control server
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default [\[dq]localhost:5572\[dq]])
--rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)

@ -27853,7 +27911,7 @@ Flags to control the Metrics HTTP endpoint..
.IP
.nf
\f[C]
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [\[dq]\[dq]])
--metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@ -28347,21 +28405,18 @@ Backend-only flags (these can be set in the config file also).
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
--pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
--pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
--pikpak-token string OAuth Access Token as a JSON blob
--pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
--pikpak-user-agent string HTTP user agent for pikpak (default \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0\[dq])
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it\[aq]s fine to leave (default \[dq]https://pixeldrain.com/api\[dq])
--pixeldrain-description string Description of the remote
@ -33001,6 +33056,50 @@ You can disable this with the --s3-no-head option - see there for more
details.
.PP
Setting this flag increases the chance for undetected upload failures.
.SS Increasing performance
.SS Using server-side copy
.PP
If you are copying objects between S3 buckets in the same region, you
should use server-side copy.
This is much faster than downloading and re-uploading the objects, as no
data is transferred.
.PP
For rclone to use server-side copy, you must use the same remote for the
source and destination.
.IP
.nf
\f[C]
rclone copy s3:source-bucket s3:destination-bucket
\f[R]
.fi
.PP
When using server-side copy, the performance is limited by the rate at
which rclone issues API requests to S3.
See below for how to increase the number of API requests rclone makes.
.SS Increasing the rate of API requests
.PP
You can increase the rate of API requests to S3 by increasing the
parallelism using \f[C]--transfers\f[R] and \f[C]--checkers\f[R]
options.
.PP
Rclone uses very conservative defaults for these settings, as not all
providers support high rates of requests.
Depending on your provider, you can significantly increase the number of
transfers and checkers.
.PP
For example, with AWS S3, you can increase the number of checkers to
values like 200.
If you are doing a server-side copy, you can also increase the number of
transfers to 200.
.IP
.nf
\f[C]
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
\f[R]
.fi
.PP
You will need to experiment with these values to find the optimal
settings for your setup.
.SS Versions
.PP
When bucket versioning is enabled (this can be done with rclone with the
@ -37301,11 +37400,12 @@ copy_cutoff = 5M
\f[R]
.fi
.PP
C14 Cold Storage (https://www.online.net/en/storage/c14-cold-storage) is
Scaleway Glacier (https://www.scaleway.com/en/glacier-cold-storage/) is
the low-cost S3 Glacier alternative from Scaleway and it works the same
way as on S3 by accepting the \[dq]GLACIER\[dq] \f[C]storage_class\f[R].
So you can configure your remote with the
\f[C]storage_class = GLACIER\f[R] option to upload directly to C14.
\f[C]storage_class = GLACIER\f[R] option to upload directly to Scaleway
Glacier.
Don\[aq]t forget that in this state you can\[aq]t read files back after,
you will need to restore them to \[dq]STANDARD\[dq] storage_class first
before being able to read them (see \[dq]restore\[dq] section above)

@ -50124,9 +50224,9 @@ It will show you a client ID and client secret.
Make a note of these.
.RS 4
.PP
(If you selected \[dq]External\[dq] at Step 5 continue to Step 9.
(If you selected \[dq]External\[dq] at Step 5 continue to Step 10.
If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can
skip straight to Step 10 but your destination drive must be part of the
skip straight to Step 11 but your destination drive must be part of the
same Google Workspace.)
.RE
.IP "10." 4
@ -63841,79 +63941,37 @@ Required: true
.SS Advanced options
.PP
Here are the Advanced options specific to pikpak (PikPak).
.SS --pikpak-client-id
.SS --pikpak-device-id
.PP
OAuth Client Id.
.PP
Leave blank normally.
Device ID used for authorization.
.PP
Properties:
.IP \[bu] 2
Config: client_id
Config: device_id
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_CLIENT_ID
Env Var: RCLONE_PIKPAK_DEVICE_ID
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --pikpak-client-secret
.SS --pikpak-user-agent
.PP
OAuth Client Secret.
HTTP user agent for pikpak.
.PP
Leave blank normally.
Defaults to \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
Gecko/20100101 Firefox/129.0\[dq] or \[dq]--pikpak-user-agent\[dq]
provided on command line.
.PP
Properties:
.IP \[bu] 2
Config: client_secret
Config: user_agent
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_CLIENT_SECRET
Env Var: RCLONE_PIKPAK_USER_AGENT
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --pikpak-token
.PP
OAuth Access Token as a JSON blob.
.PP
Properties:
.IP \[bu] 2
Config: token
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_TOKEN
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --pikpak-auth-url
.PP
Auth server URL.
.PP
Leave blank to use the provider defaults.
.PP
Properties:
.IP \[bu] 2
Config: auth_url
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_AUTH_URL
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
.SS --pikpak-token-url
.PP
Token server url.
.PP
Leave blank to use the provider defaults.
.PP
Properties:
.IP \[bu] 2
Config: token_url
.IP \[bu] 2
Env Var: RCLONE_PIKPAK_TOKEN_URL
.IP \[bu] 2
Type: string
.IP \[bu] 2
Required: false
Default: \[dq]Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0)
Gecko/20100101 Firefox/129.0\[dq]
.SS --pikpak-root-folder-id
.PP
ID of the root folder.
@ -72472,6 +72530,132 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
.SS v1.68.2 - 2024-11-15
.PP
See commits (https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
.IP \[bu] 2
Security fixes
.RS 2
.IP \[bu] 2
local backend: CVE-2024-52522: fix permission and ownership on symlinks
with \f[C]--links\f[R] and \f[C]--metadata\f[R] (Nick Craig-Wood)
.RS 2
.IP \[bu] 2
Only affects users using \f[C]--metadata\f[R] and \f[C]--links\f[R] and
copying files to the local backend
.IP \[bu] 2
See
https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
.RE
.IP \[bu] 2
build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
(dependabot)
.RS 2
.IP \[bu] 2
This is an issue in a dependency which is used for JWT certificates
.IP \[bu] 2
See
https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
.RE
.RE
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick
Craig-Wood)
.IP \[bu] 2
bisync: Fix output capture restoring the wrong output for logrus
(Dimitrios Slamaris)
.IP \[bu] 2
dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
.IP \[bu] 2
serve s3: Fix excess locking which was making serve s3 single threaded
(Nick Craig-Wood)
.IP \[bu] 2
doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
.RE
.IP \[bu] 2
Local
.RS 2
.IP \[bu] 2
Fix permission and ownership on symlinks with \f[C]--links\f[R] and
\f[C]--metadata\f[R] (Nick Craig-Wood)
.IP \[bu] 2
Fix \f[C]--copy-links\f[R] on macOS when cloning (nielash)
.RE
.IP \[bu] 2
Onedrive
.RS 2
.IP \[bu] 2
Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
.RE
.IP \[bu] 2
Pikpak
.RS 2
.IP \[bu] 2
Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
.IP \[bu] 2
Fix fatal crash on startup with token that can\[aq]t be refreshed (Nick
Craig-Wood)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Fix crash when using \f[C]--s3-download-url\f[R] after migration to
SDKv2 (Nick Craig-Wood)
.IP \[bu] 2
Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan
Raev)
.IP \[bu] 2
Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
.RE
.SS v1.68.1 - 2024-09-24
.PP
See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
build: Fix docker release build (ttionya)
.IP \[bu] 2
doc fixes (Nick Craig-Wood, Pawel Palucha)
.IP \[bu] 2
fs
.RS 2
.IP \[bu] 2
Fix \f[C]--dump filters\f[R] not always appearing (Nick Craig-Wood)
.IP \[bu] 2
Fix setting \f[C]stringArray\f[R] config values from environment
variables (Nick Craig-Wood)
.RE
.IP \[bu] 2
rc: Fix default value of \f[C]--metrics-addr\f[R] (Nick Craig-Wood)
.IP \[bu] 2
serve docker: Add missing \f[C]vfs-read-chunk-streams\f[R] option in
docker volume driver (Divyam)
.RE
.IP \[bu] 2
Onedrive
.RS 2
.IP \[bu] 2
Fix spurious \[dq]Couldn\[aq]t decode error response: EOF\[dq] DEBUG
(Nick Craig-Wood)
.RE
.IP \[bu] 2
Pikpak
.RS 2
.IP \[bu] 2
Fix login issue where token retrieval fails (wiserain)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Fix rclone ignoring static credentials when \f[C]env_auth=true\f[R]
(Nick Craig-Wood)
.RE
.SS v1.68.0 - 2024-09-08
.PP
See commits (https://github.com/rclone/rclone/compare/v1.67.0...v1.68.0)