Fix typos

Author: Andrea Gelmini
Date: 2023-12-06 13:11:55 +01:00
Parent: b72de5a883
Commit: 241916d55b
GPG key ID: A4309075F05960F6 (no known key found for this signature in database)
45 changed files with 67 additions and 67 deletions


@@ -2684,7 +2684,7 @@ Details
 * Enhancement #3106: Parallelize scan of snapshot content in `copy` and `prune`
 The `copy` and `prune` commands used to traverse the directories of snapshots one by one to find
-used data. This snapshot traversal is now parallized which can speed up this step several
+used data. This snapshot traversal is now parallelized which can speed up this step several
 times.
 In addition the `check` command now reports how many snapshots have already been processed.
@@ -2784,7 +2784,7 @@ Details
 * Bugfix #1756: Mark repository files as read-only when using the local backend
-Files stored in a local repository were marked as writeable on the filesystem for non-Windows
+Files stored in a local repository were marked as writable on the filesystem for non-Windows
 systems, which did not prevent accidental file modifications outside of restic. In addition,
 the local backend did not work with certain filesystems and network mounts which do not permit
 modifications of file permissions.
@@ -2874,7 +2874,7 @@ Details
 an exclusive lock through a filesystem snapshot. Restic was unable to backup those files
 before. This update enables backing up these files.
-This needs to be enabled explicitely using the --use-fs-snapshot option of the backup
+This needs to be enabled explicitly using the --use-fs-snapshot option of the backup
 command.
 https://github.com/restic/restic/issues/340
@@ -3079,7 +3079,7 @@ Details
 * Bugfix #2668: Don't abort the stats command when data blobs are missing
-Runing the stats command in the blobs-per-file mode on a repository with missing data blobs
+Running the stats command in the blobs-per-file mode on a repository with missing data blobs
 previously resulted in a crash.
 https://github.com/restic/restic/pull/2668
@@ -3454,7 +3454,7 @@ Details
 check will be disabled if the --ignore-inode flag was given.
 If this change causes problems for you, please open an issue, and we can look in to adding a
-seperate flag to disable just the ctime check.
+separate flag to disable just the ctime check.
 https://github.com/restic/restic/issues/2179
 https://github.com/restic/restic/pull/2212
@@ -3826,7 +3826,7 @@ Details
 * Enhancement #1876: Display reason why forget keeps snapshots
 We've added a column to the list of snapshots `forget` keeps which details the reasons to keep a
-particuliar snapshot. This makes debugging policies for forget much easier. Please remember
+particular snapshot. This makes debugging policies for forget much easier. Please remember
 to always try things out with `--dry-run`!
 https://github.com/restic/restic/pull/1876
@@ -4139,7 +4139,7 @@ Summary
 * Enh #1665: Improve cache handling for `restic check`
 * Enh #1709: Improve messages `restic check` prints
 * Enh #1721: Add `cache` command to list cache dirs
-* Enh #1735: Allow keeping a time range of snaphots
+* Enh #1735: Allow keeping a time range of snapshots
 * Enh #1758: Allow saving OneDrive folders in Windows
 * Enh #1782: Use default AWS credentials chain for S3 backend
@@ -4339,7 +4339,7 @@ Details
 https://github.com/restic/restic/issues/1721
 https://github.com/restic/restic/pull/1749
-* Enhancement #1735: Allow keeping a time range of snaphots
+* Enhancement #1735: Allow keeping a time range of snapshots
 We've added the `--keep-within` option to the `forget` command. It instructs restic to keep
 all snapshots within the given duration since the newest snapshot. For example, running
@@ -4440,7 +4440,7 @@ Details
 HTTP) and returning an error when the file already exists.
 This is not accurate, the file could have been created between the HTTP request testing for it,
-and when writing starts, so we've relaxed this requeriment, which saves one additional HTTP
+and when writing starts, so we've relaxed this requirement, which saves one additional HTTP
 request per newly added file.
 https://github.com/restic/restic/pull/1623
@@ -4463,7 +4463,7 @@ restic users. The changes are ordered by importance.
 Summary
 -------
-* Fix #1506: Limit bandwith at the http.RoundTripper for HTTP based backends
+* Fix #1506: Limit bandwidth at the http.RoundTripper for HTTP based backends
 * Fix #1512: Restore directory permissions as the last step
 * Fix #1528: Correctly create missing subdirs in data/
 * Fix #1589: Complete intermediate index upload
@@ -4484,7 +4484,7 @@ Summary
 Details
 -------
-* Bugfix #1506: Limit bandwith at the http.RoundTripper for HTTP based backends
+* Bugfix #1506: Limit bandwidth at the http.RoundTripper for HTTP based backends
 https://github.com/restic/restic/issues/1506
 https://github.com/restic/restic/pull/1511
@@ -4537,7 +4537,7 @@ Details
 * Bugfix #1595: Backup: Remove bandwidth display
 This commit removes the bandwidth displayed during backup process. It is misleading and
-seldomly correct, because it's neither the "read bandwidth" (only for the very first backup)
+seldom correct, because it's neither the "read bandwidth" (only for the very first backup)
 nor the "upload bandwidth". Many users are confused about (and rightly so), c.f. #1581, #1033,
 #1591
@@ -4807,7 +4807,7 @@ Details
 We've added a local cache for metadata so that restic doesn't need to load all metadata
 (snapshots, indexes, ...) from the repo each time it starts. By default the cache is active, but
-there's a new global option `--no-cache` that can be used to disable the cache. By deafult, the
+there's a new global option `--no-cache` that can be used to disable the cache. By default, the
 cache a standard cache folder for the OS, which can be overridden with `--cache-dir`. The cache
 will automatically populate, indexes and snapshots are saved as they are loaded. Cache
 directories for repos that haven't been used recently can automatically be removed by restic
@@ -4893,7 +4893,7 @@ Details
 * Enhancement #1319: Make `check` print `no errors found` explicitly
-The `check` command now explicetly prints `No errors were found` when no errors could be found.
+The `check` command now explicitly prints `No errors were found` when no errors could be found.
 https://github.com/restic/restic/issues/1303
 https://github.com/restic/restic/pull/1319


@@ -61,7 +61,7 @@ uploading it somewhere or post only the parts that are really relevant.
 If restic gets stuck, please also include a stacktrace in the description.
 On non-Windows systems, you can send a SIGQUIT signal to restic or press
 `Ctrl-\` to achieve the same result. This causes restic to print a stacktrace
-and then exit immediatelly. This will not damage your repository, however,
+and then exit immediately. This will not damage your repository, however,
 it might be necessary to manually clean up stale lock files using
 `restic unlock`.


@@ -10,7 +10,7 @@ https://github.com/restic/restic/issues/2244
 NOTE: This new implementation does not guarantee order in which blobs
 are written to the target files and, for example, the last blob of a
-file can be written to the file before any of the preceeding file blobs.
+file can be written to the file before any of the preceding file blobs.
 It is therefore possible to have gaps in the data written to the target
 files if restore fails or interrupted by the user.


@@ -1,6 +1,6 @@
 Bugfix: Don't abort the stats command when data blobs are missing
-Runing the stats command in the blobs-per-file mode on a repository with
+Running the stats command in the blobs-per-file mode on a repository with
 missing data blobs previously resulted in a crash.
 https://github.com/restic/restic/pull/2668


@@ -2,7 +2,7 @@ Enhancement: Parallelize scan of snapshot content in `copy` and `prune`
 The `copy` and `prune` commands used to traverse the directories of
 snapshots one by one to find used data. This snapshot traversal is
-now parallized which can speed up this step several times.
+now parallelized which can speed up this step several times.
 In addition the `check` command now reports how many snapshots have
 already been processed.
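The parallelized traversal this entry describes can be sketched with a plain worker pool; `snapshot`, `scanSnapshots`, and the progress counter below are illustrative stand-ins, not restic's actual types or API.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// snapshot is a stand-in for restic's snapshot type; the real code walks
// each snapshot's directory tree via the repository index.
type snapshot struct{ id string }

// scanSnapshots distributes snapshots over `workers` goroutines and
// returns how many have been processed, so progress can be reported.
func scanSnapshots(snaps []snapshot, workers int, process func(snapshot)) uint64 {
	jobs := make(chan snapshot)
	var processed uint64
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for s := range jobs {
				process(s)
				atomic.AddUint64(&processed, 1)
			}
		}()
	}
	for _, s := range snaps {
		jobs <- s
	}
	close(jobs)
	wg.Wait()
	return processed
}

func main() {
	snaps := []snapshot{{"a"}, {"b"}, {"c"}, {"d"}}
	fmt.Println(scanSnapshots(snaps, 2, func(snapshot) {})) // 4
}
```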


@@ -3,7 +3,7 @@ Enhancement: Add local metadata cache
 We've added a local cache for metadata so that restic doesn't need to load
 all metadata (snapshots, indexes, ...) from the repo each time it starts. By
 default the cache is active, but there's a new global option `--no-cache`
-that can be used to disable the cache. By deafult, the cache a standard
+that can be used to disable the cache. By default, the cache a standard
 cache folder for the OS, which can be overridden with `--cache-dir`. The
 cache will automatically populate, indexes and snapshots are saved as they
 are loaded. Cache directories for repos that haven't been used recently can
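The cache-location rule described above (standard OS cache folder unless `--cache-dir` overrides it) could be resolved roughly like this; `cacheDir` and the `restic/<repo-ID>` layout are assumptions for illustration, not restic's actual implementation.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cacheDir returns the directory used for a repository's metadata cache:
// an explicit override (--cache-dir) wins, otherwise a per-repository
// folder under the OS's standard cache location is used.
func cacheDir(override, repoID string) (string, error) {
	if override != "" {
		return filepath.Join(override, repoID), nil
	}
	base, err := os.UserCacheDir()
	if err != nil {
		return "", err
	}
	return filepath.Join(base, "restic", repoID), nil
}

func main() {
	d, _ := cacheDir("/tmp/mycache", "abc123")
	fmt.Println(d) // /tmp/mycache/abc123 on Unix
}
```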


@@ -1,6 +1,6 @@
 Enhancement: Make `check` print `no errors found` explicitly
-The `check` command now explicetly prints `No errors were found` when no errors
+The `check` command now explicitly prints `No errors were found` when no errors
 could be found.
 https://github.com/restic/restic/pull/1319


@@ -1,4 +1,4 @@
-Bugfix: Limit bandwith at the http.RoundTripper for HTTP based backends
+Bugfix: Limit bandwidth at the http.RoundTripper for HTTP based backends
 https://github.com/restic/restic/issues/1506
 https://github.com/restic/restic/pull/1511


@@ -6,7 +6,7 @@ that means making a request (e.g. via HTTP) and returning an error when the
 file already exists.
 This is not accurate, the file could have been created between the HTTP request
-testing for it, and when writing starts, so we've relaxed this requeriment,
+testing for it, and when writing starts, so we've relaxed this requirement,
 which saves one additional HTTP request per newly added file.
 https://github.com/restic/restic/pull/1623


@@ -1,4 +1,4 @@
-Enhancement: Allow keeping a time range of snaphots
+Enhancement: Allow keeping a time range of snapshots
 We've added the `--keep-within` option to the `forget` command. It instructs
 restic to keep all snapshots within the given duration since the newest


@@ -1,7 +1,7 @@
 Enhancement: Display reason why forget keeps snapshots
 We've added a column to the list of snapshots `forget` keeps which details the
-reasons to keep a particuliar snapshot. This makes debugging policies for
+reasons to keep a particular snapshot. This makes debugging policies for
 forget much easier. Please remember to always try things out with `--dry-run`!
 https://github.com/restic/restic/pull/1876


@@ -9,7 +9,7 @@ file should be noticed, and the modified file will be backed up. The ctime check
 will be disabled if the --ignore-inode flag was given.
 If this change causes problems for you, please open an issue, and we can look in
-to adding a seperate flag to disable just the ctime check.
+to adding a separate flag to disable just the ctime check.
 https://github.com/restic/restic/issues/2179
 https://github.com/restic/restic/pull/2212


@@ -417,7 +417,7 @@ func selectPacksByBucket(allPacks map[restic.ID]int64, bucket, totalBuckets uint
     return packs
 }
-// selectRandomPacksByPercentage selects the given percentage of packs which are randomly choosen.
+// selectRandomPacksByPercentage selects the given percentage of packs which are randomly chosen.
 func selectRandomPacksByPercentage(allPacks map[restic.ID]int64, percentage float64) map[restic.ID]int64 {
     packCount := len(allPacks)
     packsToCheck := int(float64(packCount) * (percentage / 100.0))
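A self-contained version of the selection logic in this hunk, with string keys standing in for `restic.ID`; the shuffle-based choice is one reasonable reading of "randomly chosen", not necessarily restic's exact mechanism.

```go
package main

import (
	"fmt"
	"math/rand"
)

// selectRandomPacksByPercentage picks the given percentage of packs,
// chosen at random without repetition.
func selectRandomPacksByPercentage(allPacks map[string]int64, percentage float64) map[string]int64 {
	packCount := len(allPacks)
	packsToCheck := int(float64(packCount) * (percentage / 100.0))

	ids := make([]string, 0, packCount)
	for id := range allPacks {
		ids = append(ids, id)
	}
	rand.Shuffle(len(ids), func(i, j int) { ids[i], ids[j] = ids[j], ids[i] })

	selected := make(map[string]int64, packsToCheck)
	for _, id := range ids[:packsToCheck] {
		selected[id] = allPacks[id]
	}
	return selected
}

func main() {
	packs := map[string]int64{"p1": 10, "p2": 20, "p3": 30, "p4": 40}
	fmt.Println(len(selectRandomPacksByPercentage(packs, 50))) // 2
}
```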


@@ -71,7 +71,7 @@ func TestSelectPacksByBucket(t *testing.T) {
     var testPacks = make(map[restic.ID]int64)
     for i := 1; i <= 10; i++ {
         id := restic.NewRandomID()
-        // ensure relevant part of generated id is reproducable
+        // ensure relevant part of generated id is reproducible
         id[0] = byte(i)
         testPacks[id] = 0
     }
@@ -124,7 +124,7 @@ func TestSelectRandomPacksByPercentage(t *testing.T) {
 }
 func TestSelectNoRandomPacksByPercentage(t *testing.T) {
-    // that the a repository without pack files works
+    // that the repository without pack files works
     var testPacks = make(map[restic.ID]int64)
     selectedPacks := selectRandomPacksByPercentage(testPacks, 10.0)
     rtest.Assert(t, len(selectedPacks) == 0, "Expected 0 selected packs")
@@ -158,7 +158,7 @@ func TestSelectRandomPacksByFileSize(t *testing.T) {
 }
 func TestSelectNoRandomPacksByFileSize(t *testing.T) {
-    // that the a repository without pack files works
+    // that the repository without pack files works
     var testPacks = make(map[restic.ID]int64)
     selectedPacks := selectRandomPacksByFileSize(testPacks, 10, 500)
     rtest.Assert(t, len(selectedPacks) == 0, "Expected 0 selected packs")


@@ -290,7 +290,7 @@ func tryRepairWithBitflip(ctx context.Context, key *crypto.Key, input []byte, by
     })
     err := wg.Wait()
     if err != nil {
-        panic("all go rountines can only return nil")
+        panic("all go routines can only return nil")
     }
     if !found {


@@ -406,7 +406,7 @@ func packInfoFromIndex(ctx context.Context, idx restic.MasterIndex, usedBlobs re
     })
     // if duplicate blobs exist, those will be set to either "used" or "unused":
-    // - mark only one occurence of duplicate blobs as used
+    // - mark only one occurrence of duplicate blobs as used
     // - if there are already some used blobs in a pack, possibly mark duplicates in this pack as "used"
     // - if there are no used blobs in a pack, possibly mark duplicates as "unused"
     if hasDuplicates {
@@ -415,7 +415,7 @@ func packInfoFromIndex(ctx context.Context, idx restic.MasterIndex, usedBlobs re
         bh := blob.BlobHandle
         count, ok := usedBlobs[bh]
         // skip non-duplicate, aka. normal blobs
-        // count == 0 is used to mark that this was a duplicate blob with only a single occurence remaining
+        // count == 0 is used to mark that this was a duplicate blob with only a single occurrence remaining
         if !ok || count == 1 {
             return
         }
@@ -424,7 +424,7 @@ func packInfoFromIndex(ctx context.Context, idx restic.MasterIndex, usedBlobs re
         size := uint64(blob.Length)
         switch {
         case ip.usedBlobs > 0, count == 0:
-            // other used blobs in pack or "last" occurence -> transition to used
+            // other used blobs in pack or "last" occurrence -> transition to used
             ip.usedSize += size
             ip.usedBlobs++
             ip.unusedSize -= size
@@ -434,7 +434,7 @@ func packInfoFromIndex(ctx context.Context, idx restic.MasterIndex, usedBlobs re
             stats.blobs.used++
             stats.size.duplicate -= size
             stats.blobs.duplicate--
-            // let other occurences remain marked as unused
+            // let other occurrences remain marked as unused
             usedBlobs[bh] = 1
         default:
             // remain unused and decrease counter


@@ -290,7 +290,7 @@ func PrintSnapshotGroupHeader(stdout io.Writer, groupKeyJSON string) error {
     return nil
 }
-// Snapshot helps to print Snaphots as JSON with their ID included.
+// Snapshot helps to print Snapshots as JSON with their ID included.
 type Snapshot struct {
     *restic.Snapshot


@@ -127,7 +127,7 @@ func init() {
     f.StringVarP(&globalOptions.KeyHint, "key-hint", "", "", "`key` ID of key to try decrypting first (default: $RESTIC_KEY_HINT)")
     f.StringVarP(&globalOptions.PasswordCommand, "password-command", "", "", "shell `command` to obtain the repository password from (default: $RESTIC_PASSWORD_COMMAND)")
     f.BoolVarP(&globalOptions.Quiet, "quiet", "q", false, "do not output comprehensive progress report")
-    // use empty paremeter name as `-v, --verbose n` instead of the correct `--verbose=n` is confusing
+    // use empty parameter name as `-v, --verbose n` instead of the correct `--verbose=n` is confusing
     f.CountVarP(&globalOptions.Verbose, "verbose", "v", "be verbose (specify multiple times or a level using --verbose=n``, max level/times is 2)")
     f.BoolVar(&globalOptions.NoLock, "no-lock", false, "do not lock the repository, this allows some operations on read-only repositories")
     f.DurationVar(&globalOptions.RetryLock, "retry-lock", 0, "retry to lock the repository if it is already locked, takes a value like 5m or 2h (default: no retries)")


@@ -79,7 +79,7 @@ function __restic_clear_perform_completion_once_result
     __restic_debug ""
     __restic_debug "========= clearing previously set __restic_perform_completion_once_result variable =========="
     set --erase __restic_perform_completion_once_result
-    __restic_debug "Succesfully erased the variable __restic_perform_completion_once_result"
+    __restic_debug "Successfully erased the variable __restic_perform_completion_once_result"
 end
 function __restic_requires_order_preservation


@@ -379,7 +379,7 @@ func readdir(dir string) []string {
 }
 func sha256sums(inputDir, outputFile string) {
-    msg("runnnig sha256sum in %v", inputDir)
+    msg("running sha256sum in %v", inputDir)
     filenames := readdir(inputDir)


@@ -267,7 +267,7 @@ func (arch *Archiver) SaveDir(ctx context.Context, snPath string, dir string, fi
 // FutureNode holds a reference to a channel that returns a FutureNodeResult
 // or a reference to an already existing result. If the result is available
-// immediatelly, then storing a reference directly requires less memory than
+// immediately, then storing a reference directly requires less memory than
 // using the indirection via a channel.
 type FutureNode struct {
     ch <-chan futureNodeResult


@@ -31,7 +31,7 @@ type b2Backend struct {
     canDelete bool
 }
-// Billing happens in 1000 item granlarity, but we are more interested in reducing the number of network round trips
+// Billing happens in 1000 item granularity, but we are more interested in reducing the number of network round trips
 const defaultListMaxItems = 10 * 1000
 // ensure statically that *b2Backend implements backend.Backend.


@@ -18,7 +18,7 @@ type Backend interface {
     // repository.
     Location() string
-    // Connections returns the maxmimum number of concurrent backend operations.
+    // Connections returns the maximum number of concurrent backend operations.
     Connections() uint
     // Hasher may return a hash function for calculating a content hash for the backend


@@ -5,8 +5,8 @@ import (
     "net/http"
 )
-// Limiter defines an interface that implementors can use to rate limit I/O
-// according to some policy defined and configured by the implementor.
+// Limiter defines an interface that implementers can use to rate limit I/O
+// according to some policy defined and configured by the implementer.
 type Limiter interface {
     // Upstream returns a rate limited reader that is intended to be used in
     // uploads.


@@ -194,7 +194,7 @@ func (b *Local) Save(_ context.Context, h backend.Handle, rd backend.RewindReade
         }
     }
-    // try to mark file as read-only to avoid accidential modifications
+    // try to mark file as read-only to avoid accidental modifications
     // ignore if the operation fails as some filesystems don't allow the chmod call
     // e.g. exfat and network file systems with certain mount options
     err = setFileReadonly(finalname, b.Modes.File)


@@ -302,7 +302,7 @@ func Join(parts ...string) string {
 }
 // tempSuffix generates a random string suffix that should be sufficiently long
-// to avoid accidential conflicts
+// to avoid accidental conflicts
 func tempSuffix() string {
     var nonce [16]byte
     _, err := rand.Read(nonce[:])
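The hunk above shows only the start of `tempSuffix`. A complete, self-contained variant of the same idea follows; the hex encoding is an assumption for illustration, and the real suffix format may differ.

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// tempSuffix generates a random string suffix that should be sufficiently
// long to avoid accidental conflicts (16 random bytes, hex-encoded).
func tempSuffix() string {
	var nonce [16]byte
	if _, err := rand.Read(nonce[:]); err != nil {
		panic(err) // crypto/rand failing is unrecoverable here
	}
	return hex.EncodeToString(nonce[:])
}

func main() {
	fmt.Println(len(tempSuffix())) // 32
}
```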


@@ -6,7 +6,7 @@ import (
     "github.com/restic/restic/internal/errors"
 )
-// shellSplitter splits a command string into separater arguments. It supports
+// shellSplitter splits a command string into separated arguments. It supports
 // single and double quoted strings.
 type shellSplitter struct {
     quote rune
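A minimal splitter in the spirit of `shellSplitter` (single and double quotes, deliberately no backslash escapes); `splitArgs` is a sketch, not restic's actual implementation, which handles more cases.

```go
package main

import "fmt"

// splitArgs splits a command string into arguments, honouring single- and
// double-quoted substrings so that quoted whitespace is preserved.
func splitArgs(s string) []string {
	var args []string
	var cur []rune
	var quote rune // the currently open quote character, or 0
	inArg := false
	for _, r := range s {
		switch {
		case quote != 0: // inside a quoted section
			if r == quote {
				quote = 0
			} else {
				cur = append(cur, r)
			}
		case r == '\'' || r == '"':
			quote = r
			inArg = true
		case r == ' ' || r == '\t':
			if inArg { // whitespace ends the current argument
				args = append(args, string(cur))
				cur = cur[:0]
				inArg = false
			}
		default:
			cur = append(cur, r)
			inArg = true
		}
	}
	if inArg {
		args = append(args, string(cur))
	}
	return args
}

func main() {
	fmt.Printf("%q\n", splitArgs(`restic -r "/srv/my repo" snapshots`))
	// ["restic" "-r" "/srv/my repo" "snapshots"]
}
```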


@@ -11,7 +11,7 @@ import (
 )
 func startForeground(cmd *exec.Cmd) (bg func() error, err error) {
-    // run the command in it's own process group so that SIGINT
+    // run the command in its own process group so that SIGINT
     // is not sent to it.
     cmd.SysProcAttr = &syscall.SysProcAttr{
         Setpgid: true,


@@ -442,7 +442,7 @@ func (c *Checker) checkTree(id restic.ID, tree *restic.Tree) (errs []error) {
     }
     // Note that we do not use the blob size. The "obvious" check
     // whether the sum of the blob sizes matches the file size
-    // unfortunately fails in some cases that are not resolveable
+    // unfortunately fails in some cases that are not resolvable
     // by users, so we omit this check, see #1887
     _, found := c.repo.LookupBlobSize(blobID, restic.DataBlob)


@@ -166,7 +166,7 @@ func (h HRESULT) Str() string {
     return "UNKNOWN"
 }
-// VssError encapsulates errors retruned from calling VSS api.
+// VssError encapsulates errors returned from calling VSS api.
 type vssError struct {
     text    string
     hresult HRESULT
@@ -190,7 +190,7 @@ func (e *vssError) Error() string {
     return fmt.Sprintf("VSS error: %s: %s (%#x)", e.text, e.hresult.Str(), e.hresult)
 }
-// VssError encapsulates errors retruned from calling VSS api.
+// VssError encapsulates errors returned from calling VSS api.
 type vssTextError struct {
     text string
 }
@@ -615,7 +615,7 @@ func (vssAsync *IVSSAsync) QueryStatus() (HRESULT, uint32) {
     return HRESULT(result), state
 }
-// WaitUntilAsyncFinished waits until either the async call is finshed or
+// WaitUntilAsyncFinished waits until either the async call is finished or
 // the given timeout is reached.
 func (vssAsync *IVSSAsync) WaitUntilAsyncFinished(millis uint32) error {
     hresult := vssAsync.Wait(millis)
@@ -858,7 +858,7 @@ func NewVssSnapshot(
     if err != nil {
         // After calling PrepareForBackup one needs to call AbortBackup() before releasing the VSS
         // instance for proper cleanup.
-        // It is not neccessary to call BackupComplete before releasing the VSS instance afterwards.
+        // It is not necessary to call BackupComplete before releasing the VSS instance afterwards.
         iVssBackupComponents.AbortBackup()
         iVssBackupComponents.Release()
         return VssSnapshot{}, err

View file

@ -46,7 +46,7 @@ func newDir(root *Root, inode, parentInode uint64, node *restic.Node) (*dir, err
}, nil }, nil
} }
// returing a wrapped context.Canceled error will instead result in returing // returning a wrapped context.Canceled error will instead result in returning
// an input / output error to the user. Thus unwrap the error to match the // an input / output error to the user. Thus unwrap the error to match the
// expectations of bazil/fuse // expectations of bazil/fuse
func unwrapCtxCanceled(err error) error { func unwrapCtxCanceled(err error) error {

View file

@ -142,7 +142,7 @@ func (f *openFile) Read(ctx context.Context, req *fuse.ReadRequest, resp *fuse.R
// Multiple goroutines may call service methods simultaneously; // Multiple goroutines may call service methods simultaneously;
// the methods being called are responsible for appropriate synchronization. // the methods being called are responsible for appropriate synchronization.
// //
// However, no lock needed here as getBlobAt can be called conurrently // However, no lock needed here as getBlobAt can be called concurrently
// (blobCache has its own locking) // (blobCache has its own locking)
for i := startContent; remainingBytes > 0 && i < len(f.cumsize)-1; i++ { for i := startContent; remainingBytes > 0 && i < len(f.cumsize)-1; i++ {
blob, err := f.getBlobAt(ctx, i) blob, err := f.getBlobAt(ctx, i)

View file

@ -25,7 +25,7 @@ type MasterIndex struct {
func NewMasterIndex() *MasterIndex { func NewMasterIndex() *MasterIndex {
// Always add an empty final index, such that MergeFinalIndexes can merge into this. // Always add an empty final index, such that MergeFinalIndexes can merge into this.
// Note that removing this index could lead to a race condition in the rare // Note that removing this index could lead to a race condition in the rare
// sitation that only two indexes exist which are saved and merged concurrently. // situation that only two indexes exist which are saved and merged concurrently.
idx := []*Index{NewIndex()} idx := []*Index{NewIndex()}
idx[0].Finalize() idx[0].Finalize()
return &MasterIndex{idx: idx, pendingBlobs: restic.NewBlobSet()} return &MasterIndex{idx: idx, pendingBlobs: restic.NewBlobSet()}
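The comment fixed in this hunk hints at a small design trick: keeping one empty, already-finalized index around at all times gives concurrent save/merge steps a stable merge target. A simplified sketch of that sentinel idea, with illustrative types that are not restic's real ones:

```go
package main

import "fmt"

// index is a toy stand-in for restic's Index type.
type index struct {
	final bool
	blobs []string
}

type masterIndex struct {
	idx []*index
}

// newMasterIndex always starts with an empty finalized sentinel index, so
// mergeFinalIndexes can merge into it even when no other index exists yet.
func newMasterIndex() *masterIndex {
	sentinel := &index{final: true}
	return &masterIndex{idx: []*index{sentinel}}
}

// mergeFinalIndexes folds every finalized index into the first (sentinel)
// index and keeps the unfinished ones untouched.
func (m *masterIndex) mergeFinalIndexes() {
	target := m.idx[0]
	rest := []*index{target}
	for _, ix := range m.idx[1:] {
		if ix.final {
			target.blobs = append(target.blobs, ix.blobs...)
		} else {
			rest = append(rest, ix)
		}
	}
	m.idx = rest
}

func main() {
	m := newMasterIndex()
	m.idx = append(m.idx, &index{final: true, blobs: []string{"a", "b"}})
	m.idx = append(m.idx, &index{final: false, blobs: []string{"c"}})
	m.mergeFinalIndexes()
	fmt.Println(len(m.idx), m.idx[0].blobs) // 2 [a b]
}
```

Without the sentinel, the situation the comment describes (exactly two indexes, saved and merged concurrently) would leave the merge step without a guaranteed target.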

View file

@ -189,7 +189,7 @@ const (
// MaxHeaderSize is the max size of header including header-length field // MaxHeaderSize is the max size of header including header-length field
MaxHeaderSize = 16*1024*1024 + headerLengthSize MaxHeaderSize = 16*1024*1024 + headerLengthSize
// number of header enries to download as part of header-length request // number of header entries to download as part of header-length request
eagerEntries = 15 eagerEntries = 15
) )

View file

@ -39,7 +39,7 @@ type packerManager struct {
packSize uint packSize uint
} }
// newPackerManager returns an new packer manager which writes temporary files // newPackerManager returns a new packer manager which writes temporary files
// to a temporary directory // to a temporary directory
func newPackerManager(key *crypto.Key, tpe restic.BlobType, packSize uint, queueFn func(ctx context.Context, t restic.BlobType, p *Packer) error) *packerManager { func newPackerManager(key *crypto.Key, tpe restic.BlobType, packSize uint, queueFn func(ctx context.Context, t restic.BlobType, p *Packer) error) *packerManager {
return &packerManager{ return &packerManager{

View file

@ -83,7 +83,7 @@ func createRandomWrongBlob(t testing.TB, repo restic.Repository) {
} }
// selectBlobs splits the list of all blobs randomly into two lists. A blob // selectBlobs splits the list of all blobs randomly into two lists. A blob
// will be contained in the firstone ith probability p. // will be contained in the first one with probability p.
// will be contained in the firstone ith probability p. // will be contained in the first one with probability p.
func selectBlobs(t *testing.T, repo restic.Repository, p float32) (list1, list2 restic.BlobSet) { func selectBlobs(t *testing.T, repo restic.Repository, p float32) (list1, list2 restic.BlobSet) {
list1 = restic.NewBlobSet() list1 = restic.NewBlobSet()
list2 = restic.NewBlobSet() list2 = restic.NewBlobSet()

View file

@ -932,7 +932,7 @@ func streamPackPart(ctx context.Context, beLoad BackendLoadFn, key *crypto.Key,
ctx, cancel := context.WithCancel(ctx) ctx, cancel := context.WithCancel(ctx)
// stream blobs in pack // stream blobs in pack
err = beLoad(ctx, h, int(dataEnd-dataStart), int64(dataStart), func(rd io.Reader) error { err = beLoad(ctx, h, int(dataEnd-dataStart), int64(dataStart), func(rd io.Reader) error {
// prevent callbacks after cancelation // prevent callbacks after cancellation
if ctx.Err() != nil { if ctx.Err() != nil {
return ctx.Err() return ctx.Err()
} }

View file

@ -523,7 +523,7 @@ func testStreamPack(t *testing.T, version uint) {
case 2: case 2:
compress = true compress = true
default: default:
t.Fatal("test does not suport repository version", version) t.Fatal("test does not support repository version", version)
} }
packfileBlobs, packfile := buildPackfileWithoutHeader(blobSizes, &key, compress) packfileBlobs, packfile := buildPackfileWithoutHeader(blobSizes, &key, compress)

View file

@ -13,7 +13,7 @@ func TestCountedBlobSet(t *testing.T) {
test.Equals(t, bs.List(), restic.BlobHandles{}) test.Equals(t, bs.List(), restic.BlobHandles{})
bh := restic.NewRandomBlobHandle() bh := restic.NewRandomBlobHandle()
// check non existant // check non existent
test.Equals(t, bs.Has(bh), false) test.Equals(t, bs.Has(bh), false)
// test insert // test insert

View file

@ -38,7 +38,7 @@ func TestGroupByOptions(t *testing.T) {
var opts restic.SnapshotGroupByOptions var opts restic.SnapshotGroupByOptions
test.OK(t, opts.Set(exp.from)) test.OK(t, opts.Set(exp.from))
if !cmp.Equal(opts, exp.opts) { if !cmp.Equal(opts, exp.opts) {
t.Errorf("unexpeted opts %s", cmp.Diff(opts, exp.opts)) t.Errorf("unexpected opts %s", cmp.Diff(opts, exp.opts))
} }
test.Equals(t, opts.String(), exp.normalized) test.Equals(t, opts.String(), exp.normalized)
} }

View file

@ -296,7 +296,7 @@ func testPartialDownloadError(t *testing.T, part int) {
// loader always returns an error // loader always returns an error
loader := repo.loader loader := repo.loader
repo.loader = func(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error { repo.loader = func(ctx context.Context, h backend.Handle, length int, offset int64, fn func(rd io.Reader) error) error {
// only load partial data to execise fault handling in different places // only load partial data to exercise fault handling in different places
err := loader(ctx, h, length*part/100, offset, fn) err := loader(ctx, h, length*part/100, offset, fn)
if err == nil { if err == nil {
return nil return nil

View file

@ -22,7 +22,7 @@ func NewHardlinkIndex[T any]() *HardlinkIndex[T] {
} }
} }
// Has checks wether the link already exist in the index. // Has checks whether the link already exists in the index.
func (idx *HardlinkIndex[T]) Has(inode uint64, device uint64) bool { func (idx *HardlinkIndex[T]) Has(inode uint64, device uint64) bool {
idx.m.Lock() idx.m.Lock()
defer idx.m.Unlock() defer idx.m.Unlock()
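The hunk above shows the lock/defer-unlock idiom guarding a lookup keyed by inode and device. A self-contained sketch of such an index; field and method names follow the hunk, but the map layout is an assumption, not restic's actual representation:

```go
package main

import (
	"fmt"
	"sync"
)

// inodeDeviceID identifies a file uniquely enough to detect hardlinks:
// two paths with the same (inode, device) pair are links to the same file.
type inodeDeviceID struct {
	inode, device uint64
}

type hardlinkIndex[T any] struct {
	m     sync.Mutex
	index map[inodeDeviceID]T
}

func newHardlinkIndex[T any]() *hardlinkIndex[T] {
	return &hardlinkIndex[T]{index: make(map[inodeDeviceID]T)}
}

// Has checks whether the link already exists in the index.
func (idx *hardlinkIndex[T]) Has(inode, device uint64) bool {
	idx.m.Lock()
	defer idx.m.Unlock()
	_, ok := idx.index[inodeDeviceID{inode, device}]
	return ok
}

// Add records a value for the given (inode, device) pair.
func (idx *hardlinkIndex[T]) Add(inode, device uint64, value T) {
	idx.m.Lock()
	defer idx.m.Unlock()
	idx.index[inodeDeviceID{inode, device}] = value
}

func main() {
	idx := newHardlinkIndex[string]()
	fmt.Println(idx.Has(42, 1)) // false
	idx.Add(42, 1, "/some/path")
	fmt.Println(idx.Has(42, 1)) // true
}
```

`defer idx.m.Unlock()` directly after `Lock()` guarantees the mutex is released on every return path, which is why the pattern appears verbatim in the hunk.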

View file

@ -791,7 +791,7 @@ func TestRestorerConsistentTimestampsAndPermissions(t *testing.T) {
} }
} }
// VerifyFiles must not report cancelation of its context through res.Error. // VerifyFiles must not report cancellation of its context through res.Error.
func TestVerifyCancel(t *testing.T) { func TestVerifyCancel(t *testing.T) {
snapshot := Snapshot{ snapshot := Snapshot{
Nodes: map[string]Node{ Nodes: map[string]Node{

View file

@ -325,7 +325,7 @@ func Truncate(s string, w int) string {
// Guess whether the first rune in s would occupy two terminal cells // Guess whether the first rune in s would occupy two terminal cells
// instead of one. This cannot be determined exactly without knowing // instead of one. This cannot be determined exactly without knowing
// the terminal font, so we treat all ambigous runes as full-width, // the terminal font, so we treat all ambiguous runes as full-width,
// i.e., two cells. // i.e., two cells.
func wideRune(s string) (wide bool, utfsize uint) { func wideRune(s string) (wide bool, utfsize uint) {
prop, size := width.LookupString(s) prop, size := width.LookupString(s)

View file

@ -69,7 +69,7 @@ func checkRewriteItemOrder(want []string) checkRewriteFunc {
} }
} }
// checkRewriteSkips excludes nodes if path is in skipFor, it checks that rewriting proceedes in the correct order. // checkRewriteSkips excludes nodes if path is in skipFor, it checks that rewriting proceeds in the correct order.
func checkRewriteSkips(skipFor map[string]struct{}, want []string, disableCache bool) checkRewriteFunc { func checkRewriteSkips(skipFor map[string]struct{}, want []string, disableCache bool) checkRewriteFunc {
var pos int var pos int