initial
This commit is contained in:
commit
1cf33e5ffd
87 changed files with 6283 additions and 0 deletions
1
.gitattributes
vendored
Normal file
1
.gitattributes
vendored
Normal file
|
@ -0,0 +1 @@
|
||||||
|
/**/*.pb.go -diff binary
|
3
.gitignore
vendored
Normal file
3
.gitignore
vendored
Normal file
|
@ -0,0 +1,3 @@
|
||||||
|
bin
|
||||||
|
temp
|
||||||
|
/vendor/
|
675
LICENSE.md
Normal file
675
LICENSE.md
Normal file
|
@ -0,0 +1,675 @@
|
||||||
|
### GNU GENERAL PUBLIC LICENSE
|
||||||
|
|
||||||
|
Version 3, 29 June 2007
|
||||||
|
|
||||||
|
Copyright (C) 2007 Free Software Foundation, Inc.
|
||||||
|
<https://fsf.org/>
|
||||||
|
|
||||||
|
Everyone is permitted to copy and distribute verbatim copies of this
|
||||||
|
license document, but changing it is not allowed.
|
||||||
|
|
||||||
|
### Preamble
|
||||||
|
|
||||||
|
The GNU General Public License is a free, copyleft license for
|
||||||
|
software and other kinds of works.
|
||||||
|
|
||||||
|
The licenses for most software and other practical works are designed
|
||||||
|
to take away your freedom to share and change the works. By contrast,
|
||||||
|
the GNU General Public License is intended to guarantee your freedom
|
||||||
|
to share and change all versions of a program--to make sure it remains
|
||||||
|
free software for all its users. We, the Free Software Foundation, use
|
||||||
|
the GNU General Public License for most of our software; it applies
|
||||||
|
also to any other work released this way by its authors. You can apply
|
||||||
|
it to your programs, too.
|
||||||
|
|
||||||
|
When we speak of free software, we are referring to freedom, not
|
||||||
|
price. Our General Public Licenses are designed to make sure that you
|
||||||
|
have the freedom to distribute copies of free software (and charge for
|
||||||
|
them if you wish), that you receive source code or can get it if you
|
||||||
|
want it, that you can change the software or use pieces of it in new
|
||||||
|
free programs, and that you know you can do these things.
|
||||||
|
|
||||||
|
To protect your rights, we need to prevent others from denying you
|
||||||
|
these rights or asking you to surrender the rights. Therefore, you
|
||||||
|
have certain responsibilities if you distribute copies of the
|
||||||
|
software, or if you modify it: responsibilities to respect the freedom
|
||||||
|
of others.
|
||||||
|
|
||||||
|
For example, if you distribute copies of such a program, whether
|
||||||
|
gratis or for a fee, you must pass on to the recipients the same
|
||||||
|
freedoms that you received. You must make sure that they, too, receive
|
||||||
|
or can get the source code. And you must show them these terms so they
|
||||||
|
know their rights.
|
||||||
|
|
||||||
|
Developers that use the GNU GPL protect your rights with two steps:
|
||||||
|
(1) assert copyright on the software, and (2) offer you this License
|
||||||
|
giving you legal permission to copy, distribute and/or modify it.
|
||||||
|
|
||||||
|
For the developers' and authors' protection, the GPL clearly explains
|
||||||
|
that there is no warranty for this free software. For both users' and
|
||||||
|
authors' sake, the GPL requires that modified versions be marked as
|
||||||
|
changed, so that their problems will not be attributed erroneously to
|
||||||
|
authors of previous versions.
|
||||||
|
|
||||||
|
Some devices are designed to deny users access to install or run
|
||||||
|
modified versions of the software inside them, although the
|
||||||
|
manufacturer can do so. This is fundamentally incompatible with the
|
||||||
|
aim of protecting users' freedom to change the software. The
|
||||||
|
systematic pattern of such abuse occurs in the area of products for
|
||||||
|
individuals to use, which is precisely where it is most unacceptable.
|
||||||
|
Therefore, we have designed this version of the GPL to prohibit the
|
||||||
|
practice for those products. If such problems arise substantially in
|
||||||
|
other domains, we stand ready to extend this provision to those
|
||||||
|
domains in future versions of the GPL, as needed to protect the
|
||||||
|
freedom of users.
|
||||||
|
|
||||||
|
Finally, every program is threatened constantly by software patents.
|
||||||
|
States should not allow patents to restrict development and use of
|
||||||
|
software on general-purpose computers, but in those that do, we wish
|
||||||
|
to avoid the special danger that patents applied to a free program
|
||||||
|
could make it effectively proprietary. To prevent this, the GPL
|
||||||
|
assures that patents cannot be used to render the program non-free.
|
||||||
|
|
||||||
|
The precise terms and conditions for copying, distribution and
|
||||||
|
modification follow.
|
||||||
|
|
||||||
|
### TERMS AND CONDITIONS
|
||||||
|
|
||||||
|
#### 0. Definitions.
|
||||||
|
|
||||||
|
"This License" refers to version 3 of the GNU General Public License.
|
||||||
|
|
||||||
|
"Copyright" also means copyright-like laws that apply to other kinds
|
||||||
|
of works, such as semiconductor masks.
|
||||||
|
|
||||||
|
"The Program" refers to any copyrightable work licensed under this
|
||||||
|
License. Each licensee is addressed as "you". "Licensees" and
|
||||||
|
"recipients" may be individuals or organizations.
|
||||||
|
|
||||||
|
To "modify" a work means to copy from or adapt all or part of the work
|
||||||
|
in a fashion requiring copyright permission, other than the making of
|
||||||
|
an exact copy. The resulting work is called a "modified version" of
|
||||||
|
the earlier work or a work "based on" the earlier work.
|
||||||
|
|
||||||
|
A "covered work" means either the unmodified Program or a work based
|
||||||
|
on the Program.
|
||||||
|
|
||||||
|
To "propagate" a work means to do anything with it that, without
|
||||||
|
permission, would make you directly or secondarily liable for
|
||||||
|
infringement under applicable copyright law, except executing it on a
|
||||||
|
computer or modifying a private copy. Propagation includes copying,
|
||||||
|
distribution (with or without modification), making available to the
|
||||||
|
public, and in some countries other activities as well.
|
||||||
|
|
||||||
|
To "convey" a work means any kind of propagation that enables other
|
||||||
|
parties to make or receive copies. Mere interaction with a user
|
||||||
|
through a computer network, with no transfer of a copy, is not
|
||||||
|
conveying.
|
||||||
|
|
||||||
|
An interactive user interface displays "Appropriate Legal Notices" to
|
||||||
|
the extent that it includes a convenient and prominently visible
|
||||||
|
feature that (1) displays an appropriate copyright notice, and (2)
|
||||||
|
tells the user that there is no warranty for the work (except to the
|
||||||
|
extent that warranties are provided), that licensees may convey the
|
||||||
|
work under this License, and how to view a copy of this License. If
|
||||||
|
the interface presents a list of user commands or options, such as a
|
||||||
|
menu, a prominent item in the list meets this criterion.
|
||||||
|
|
||||||
|
#### 1. Source Code.
|
||||||
|
|
||||||
|
The "source code" for a work means the preferred form of the work for
|
||||||
|
making modifications to it. "Object code" means any non-source form of
|
||||||
|
a work.
|
||||||
|
|
||||||
|
A "Standard Interface" means an interface that either is an official
|
||||||
|
standard defined by a recognized standards body, or, in the case of
|
||||||
|
interfaces specified for a particular programming language, one that
|
||||||
|
is widely used among developers working in that language.
|
||||||
|
|
||||||
|
The "System Libraries" of an executable work include anything, other
|
||||||
|
than the work as a whole, that (a) is included in the normal form of
|
||||||
|
packaging a Major Component, but which is not part of that Major
|
||||||
|
Component, and (b) serves only to enable use of the work with that
|
||||||
|
Major Component, or to implement a Standard Interface for which an
|
||||||
|
implementation is available to the public in source code form. A
|
||||||
|
"Major Component", in this context, means a major essential component
|
||||||
|
(kernel, window system, and so on) of the specific operating system
|
||||||
|
(if any) on which the executable work runs, or a compiler used to
|
||||||
|
produce the work, or an object code interpreter used to run it.
|
||||||
|
|
||||||
|
The "Corresponding Source" for a work in object code form means all
|
||||||
|
the source code needed to generate, install, and (for an executable
|
||||||
|
work) run the object code and to modify the work, including scripts to
|
||||||
|
control those activities. However, it does not include the work's
|
||||||
|
System Libraries, or general-purpose tools or generally available free
|
||||||
|
programs which are used unmodified in performing those activities but
|
||||||
|
which are not part of the work. For example, Corresponding Source
|
||||||
|
includes interface definition files associated with source files for
|
||||||
|
the work, and the source code for shared libraries and dynamically
|
||||||
|
linked subprograms that the work is specifically designed to require,
|
||||||
|
such as by intimate data communication or control flow between those
|
||||||
|
subprograms and other parts of the work.
|
||||||
|
|
||||||
|
The Corresponding Source need not include anything that users can
|
||||||
|
regenerate automatically from other parts of the Corresponding Source.
|
||||||
|
|
||||||
|
The Corresponding Source for a work in source code form is that same
|
||||||
|
work.
|
||||||
|
|
||||||
|
#### 2. Basic Permissions.
|
||||||
|
|
||||||
|
All rights granted under this License are granted for the term of
|
||||||
|
copyright on the Program, and are irrevocable provided the stated
|
||||||
|
conditions are met. This License explicitly affirms your unlimited
|
||||||
|
permission to run the unmodified Program. The output from running a
|
||||||
|
covered work is covered by this License only if the output, given its
|
||||||
|
content, constitutes a covered work. This License acknowledges your
|
||||||
|
rights of fair use or other equivalent, as provided by copyright law.
|
||||||
|
|
||||||
|
You may make, run and propagate covered works that you do not convey,
|
||||||
|
without conditions so long as your license otherwise remains in force.
|
||||||
|
You may convey covered works to others for the sole purpose of having
|
||||||
|
them make modifications exclusively for you, or provide you with
|
||||||
|
facilities for running those works, provided that you comply with the
|
||||||
|
terms of this License in conveying all material for which you do not
|
||||||
|
control copyright. Those thus making or running the covered works for
|
||||||
|
you must do so exclusively on your behalf, under your direction and
|
||||||
|
control, on terms that prohibit them from making any copies of your
|
||||||
|
copyrighted material outside their relationship with you.
|
||||||
|
|
||||||
|
Conveying under any other circumstances is permitted solely under the
|
||||||
|
conditions stated below. Sublicensing is not allowed; section 10 makes
|
||||||
|
it unnecessary.
|
||||||
|
|
||||||
|
#### 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
|
||||||
|
|
||||||
|
No covered work shall be deemed part of an effective technological
|
||||||
|
measure under any applicable law fulfilling obligations under article
|
||||||
|
11 of the WIPO copyright treaty adopted on 20 December 1996, or
|
||||||
|
similar laws prohibiting or restricting circumvention of such
|
||||||
|
measures.
|
||||||
|
|
||||||
|
When you convey a covered work, you waive any legal power to forbid
|
||||||
|
circumvention of technological measures to the extent such
|
||||||
|
circumvention is effected by exercising rights under this License with
|
||||||
|
respect to the covered work, and you disclaim any intention to limit
|
||||||
|
operation or modification of the work as a means of enforcing, against
|
||||||
|
the work's users, your or third parties' legal rights to forbid
|
||||||
|
circumvention of technological measures.
|
||||||
|
|
||||||
|
#### 4. Conveying Verbatim Copies.
|
||||||
|
|
||||||
|
You may convey verbatim copies of the Program's source code as you
|
||||||
|
receive it, in any medium, provided that you conspicuously and
|
||||||
|
appropriately publish on each copy an appropriate copyright notice;
|
||||||
|
keep intact all notices stating that this License and any
|
||||||
|
non-permissive terms added in accord with section 7 apply to the code;
|
||||||
|
keep intact all notices of the absence of any warranty; and give all
|
||||||
|
recipients a copy of this License along with the Program.
|
||||||
|
|
||||||
|
You may charge any price or no price for each copy that you convey,
|
||||||
|
and you may offer support or warranty protection for a fee.
|
||||||
|
|
||||||
|
#### 5. Conveying Modified Source Versions.
|
||||||
|
|
||||||
|
You may convey a work based on the Program, or the modifications to
|
||||||
|
produce it from the Program, in the form of source code under the
|
||||||
|
terms of section 4, provided that you also meet all of these
|
||||||
|
conditions:
|
||||||
|
|
||||||
|
- a) The work must carry prominent notices stating that you modified
|
||||||
|
it, and giving a relevant date.
|
||||||
|
- b) The work must carry prominent notices stating that it is
|
||||||
|
released under this License and any conditions added under
|
||||||
|
section 7. This requirement modifies the requirement in section 4
|
||||||
|
to "keep intact all notices".
|
||||||
|
- c) You must license the entire work, as a whole, under this
|
||||||
|
License to anyone who comes into possession of a copy. This
|
||||||
|
License will therefore apply, along with any applicable section 7
|
||||||
|
additional terms, to the whole of the work, and all its parts,
|
||||||
|
regardless of how they are packaged. This License gives no
|
||||||
|
permission to license the work in any other way, but it does not
|
||||||
|
invalidate such permission if you have separately received it.
|
||||||
|
- d) If the work has interactive user interfaces, each must display
|
||||||
|
Appropriate Legal Notices; however, if the Program has interactive
|
||||||
|
interfaces that do not display Appropriate Legal Notices, your
|
||||||
|
work need not make them do so.
|
||||||
|
|
||||||
|
A compilation of a covered work with other separate and independent
|
||||||
|
works, which are not by their nature extensions of the covered work,
|
||||||
|
and which are not combined with it such as to form a larger program,
|
||||||
|
in or on a volume of a storage or distribution medium, is called an
|
||||||
|
"aggregate" if the compilation and its resulting copyright are not
|
||||||
|
used to limit the access or legal rights of the compilation's users
|
||||||
|
beyond what the individual works permit. Inclusion of a covered work
|
||||||
|
in an aggregate does not cause this License to apply to the other
|
||||||
|
parts of the aggregate.
|
||||||
|
|
||||||
|
#### 6. Conveying Non-Source Forms.
|
||||||
|
|
||||||
|
You may convey a covered work in object code form under the terms of
|
||||||
|
sections 4 and 5, provided that you also convey the machine-readable
|
||||||
|
Corresponding Source under the terms of this License, in one of these
|
||||||
|
ways:
|
||||||
|
|
||||||
|
- a) Convey the object code in, or embodied in, a physical product
|
||||||
|
(including a physical distribution medium), accompanied by the
|
||||||
|
Corresponding Source fixed on a durable physical medium
|
||||||
|
customarily used for software interchange.
|
||||||
|
- b) Convey the object code in, or embodied in, a physical product
|
||||||
|
(including a physical distribution medium), accompanied by a
|
||||||
|
written offer, valid for at least three years and valid for as
|
||||||
|
long as you offer spare parts or customer support for that product
|
||||||
|
model, to give anyone who possesses the object code either (1) a
|
||||||
|
copy of the Corresponding Source for all the software in the
|
||||||
|
product that is covered by this License, on a durable physical
|
||||||
|
medium customarily used for software interchange, for a price no
|
||||||
|
more than your reasonable cost of physically performing this
|
||||||
|
conveying of source, or (2) access to copy the Corresponding
|
||||||
|
Source from a network server at no charge.
|
||||||
|
- c) Convey individual copies of the object code with a copy of the
|
||||||
|
written offer to provide the Corresponding Source. This
|
||||||
|
alternative is allowed only occasionally and noncommercially, and
|
||||||
|
only if you received the object code with such an offer, in accord
|
||||||
|
with subsection 6b.
|
||||||
|
- d) Convey the object code by offering access from a designated
|
||||||
|
place (gratis or for a charge), and offer equivalent access to the
|
||||||
|
Corresponding Source in the same way through the same place at no
|
||||||
|
further charge. You need not require recipients to copy the
|
||||||
|
Corresponding Source along with the object code. If the place to
|
||||||
|
copy the object code is a network server, the Corresponding Source
|
||||||
|
may be on a different server (operated by you or a third party)
|
||||||
|
that supports equivalent copying facilities, provided you maintain
|
||||||
|
clear directions next to the object code saying where to find the
|
||||||
|
Corresponding Source. Regardless of what server hosts the
|
||||||
|
Corresponding Source, you remain obligated to ensure that it is
|
||||||
|
available for as long as needed to satisfy these requirements.
|
||||||
|
- e) Convey the object code using peer-to-peer transmission,
|
||||||
|
provided you inform other peers where the object code and
|
||||||
|
Corresponding Source of the work are being offered to the general
|
||||||
|
public at no charge under subsection 6d.
|
||||||
|
|
||||||
|
A separable portion of the object code, whose source code is excluded
|
||||||
|
from the Corresponding Source as a System Library, need not be
|
||||||
|
included in conveying the object code work.
|
||||||
|
|
||||||
|
A "User Product" is either (1) a "consumer product", which means any
|
||||||
|
tangible personal property which is normally used for personal,
|
||||||
|
family, or household purposes, or (2) anything designed or sold for
|
||||||
|
incorporation into a dwelling. In determining whether a product is a
|
||||||
|
consumer product, doubtful cases shall be resolved in favor of
|
||||||
|
coverage. For a particular product received by a particular user,
|
||||||
|
"normally used" refers to a typical or common use of that class of
|
||||||
|
product, regardless of the status of the particular user or of the way
|
||||||
|
in which the particular user actually uses, or expects or is expected
|
||||||
|
to use, the product. A product is a consumer product regardless of
|
||||||
|
whether the product has substantial commercial, industrial or
|
||||||
|
non-consumer uses, unless such uses represent the only significant
|
||||||
|
mode of use of the product.
|
||||||
|
|
||||||
|
"Installation Information" for a User Product means any methods,
|
||||||
|
procedures, authorization keys, or other information required to
|
||||||
|
install and execute modified versions of a covered work in that User
|
||||||
|
Product from a modified version of its Corresponding Source. The
|
||||||
|
information must suffice to ensure that the continued functioning of
|
||||||
|
the modified object code is in no case prevented or interfered with
|
||||||
|
solely because modification has been made.
|
||||||
|
|
||||||
|
If you convey an object code work under this section in, or with, or
|
||||||
|
specifically for use in, a User Product, and the conveying occurs as
|
||||||
|
part of a transaction in which the right of possession and use of the
|
||||||
|
User Product is transferred to the recipient in perpetuity or for a
|
||||||
|
fixed term (regardless of how the transaction is characterized), the
|
||||||
|
Corresponding Source conveyed under this section must be accompanied
|
||||||
|
by the Installation Information. But this requirement does not apply
|
||||||
|
if neither you nor any third party retains the ability to install
|
||||||
|
modified object code on the User Product (for example, the work has
|
||||||
|
been installed in ROM).
|
||||||
|
|
||||||
|
The requirement to provide Installation Information does not include a
|
||||||
|
requirement to continue to provide support service, warranty, or
|
||||||
|
updates for a work that has been modified or installed by the
|
||||||
|
recipient, or for the User Product in which it has been modified or
|
||||||
|
installed. Access to a network may be denied when the modification
|
||||||
|
itself materially and adversely affects the operation of the network
|
||||||
|
or violates the rules and protocols for communication across the
|
||||||
|
network.
|
||||||
|
|
||||||
|
Corresponding Source conveyed, and Installation Information provided,
|
||||||
|
in accord with this section must be in a format that is publicly
|
||||||
|
documented (and with an implementation available to the public in
|
||||||
|
source code form), and must require no special password or key for
|
||||||
|
unpacking, reading or copying.
|
||||||
|
|
||||||
|
#### 7. Additional Terms.
|
||||||
|
|
||||||
|
"Additional permissions" are terms that supplement the terms of this
|
||||||
|
License by making exceptions from one or more of its conditions.
|
||||||
|
Additional permissions that are applicable to the entire Program shall
|
||||||
|
be treated as though they were included in this License, to the extent
|
||||||
|
that they are valid under applicable law. If additional permissions
|
||||||
|
apply only to part of the Program, that part may be used separately
|
||||||
|
under those permissions, but the entire Program remains governed by
|
||||||
|
this License without regard to the additional permissions.
|
||||||
|
|
||||||
|
When you convey a copy of a covered work, you may at your option
|
||||||
|
remove any additional permissions from that copy, or from any part of
|
||||||
|
it. (Additional permissions may be written to require their own
|
||||||
|
removal in certain cases when you modify the work.) You may place
|
||||||
|
additional permissions on material, added by you to a covered work,
|
||||||
|
for which you have or can give appropriate copyright permission.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, for material you
|
||||||
|
add to a covered work, you may (if authorized by the copyright holders
|
||||||
|
of that material) supplement the terms of this License with terms:
|
||||||
|
|
||||||
|
- a) Disclaiming warranty or limiting liability differently from the
|
||||||
|
terms of sections 15 and 16 of this License; or
|
||||||
|
- b) Requiring preservation of specified reasonable legal notices or
|
||||||
|
author attributions in that material or in the Appropriate Legal
|
||||||
|
Notices displayed by works containing it; or
|
||||||
|
- c) Prohibiting misrepresentation of the origin of that material,
|
||||||
|
or requiring that modified versions of such material be marked in
|
||||||
|
reasonable ways as different from the original version; or
|
||||||
|
- d) Limiting the use for publicity purposes of names of licensors
|
||||||
|
or authors of the material; or
|
||||||
|
- e) Declining to grant rights under trademark law for use of some
|
||||||
|
trade names, trademarks, or service marks; or
|
||||||
|
- f) Requiring indemnification of licensors and authors of that
|
||||||
|
material by anyone who conveys the material (or modified versions
|
||||||
|
of it) with contractual assumptions of liability to the recipient,
|
||||||
|
for any liability that these contractual assumptions directly
|
||||||
|
impose on those licensors and authors.
|
||||||
|
|
||||||
|
All other non-permissive additional terms are considered "further
|
||||||
|
restrictions" within the meaning of section 10. If the Program as you
|
||||||
|
received it, or any part of it, contains a notice stating that it is
|
||||||
|
governed by this License along with a term that is a further
|
||||||
|
restriction, you may remove that term. If a license document contains
|
||||||
|
a further restriction but permits relicensing or conveying under this
|
||||||
|
License, you may add to a covered work material governed by the terms
|
||||||
|
of that license document, provided that the further restriction does
|
||||||
|
not survive such relicensing or conveying.
|
||||||
|
|
||||||
|
If you add terms to a covered work in accord with this section, you
|
||||||
|
must place, in the relevant source files, a statement of the
|
||||||
|
additional terms that apply to those files, or a notice indicating
|
||||||
|
where to find the applicable terms.
|
||||||
|
|
||||||
|
Additional terms, permissive or non-permissive, may be stated in the
|
||||||
|
form of a separately written license, or stated as exceptions; the
|
||||||
|
above requirements apply either way.
|
||||||
|
|
||||||
|
#### 8. Termination.
|
||||||
|
|
||||||
|
You may not propagate or modify a covered work except as expressly
|
||||||
|
provided under this License. Any attempt otherwise to propagate or
|
||||||
|
modify it is void, and will automatically terminate your rights under
|
||||||
|
this License (including any patent licenses granted under the third
|
||||||
|
paragraph of section 11).
|
||||||
|
|
||||||
|
However, if you cease all violation of this License, then your license
|
||||||
|
from a particular copyright holder is reinstated (a) provisionally,
|
||||||
|
unless and until the copyright holder explicitly and finally
|
||||||
|
terminates your license, and (b) permanently, if the copyright holder
|
||||||
|
fails to notify you of the violation by some reasonable means prior to
|
||||||
|
60 days after the cessation.
|
||||||
|
|
||||||
|
Moreover, your license from a particular copyright holder is
|
||||||
|
reinstated permanently if the copyright holder notifies you of the
|
||||||
|
violation by some reasonable means, this is the first time you have
|
||||||
|
received notice of violation of this License (for any work) from that
|
||||||
|
copyright holder, and you cure the violation prior to 30 days after
|
||||||
|
your receipt of the notice.
|
||||||
|
|
||||||
|
Termination of your rights under this section does not terminate the
|
||||||
|
licenses of parties who have received copies or rights from you under
|
||||||
|
this License. If your rights have been terminated and not permanently
|
||||||
|
reinstated, you do not qualify to receive new licenses for the same
|
||||||
|
material under section 10.
|
||||||
|
|
||||||
|
#### 9. Acceptance Not Required for Having Copies.
|
||||||
|
|
||||||
|
You are not required to accept this License in order to receive or run
|
||||||
|
a copy of the Program. Ancillary propagation of a covered work
|
||||||
|
occurring solely as a consequence of using peer-to-peer transmission
|
||||||
|
to receive a copy likewise does not require acceptance. However,
|
||||||
|
nothing other than this License grants you permission to propagate or
|
||||||
|
modify any covered work. These actions infringe copyright if you do
|
||||||
|
not accept this License. Therefore, by modifying or propagating a
|
||||||
|
covered work, you indicate your acceptance of this License to do so.
|
||||||
|
|
||||||
|
#### 10. Automatic Licensing of Downstream Recipients.
|
||||||
|
|
||||||
|
Each time you convey a covered work, the recipient automatically
|
||||||
|
receives a license from the original licensors, to run, modify and
|
||||||
|
propagate that work, subject to this License. You are not responsible
|
||||||
|
for enforcing compliance by third parties with this License.
|
||||||
|
|
||||||
|
An "entity transaction" is a transaction transferring control of an
|
||||||
|
organization, or substantially all assets of one, or subdividing an
|
||||||
|
organization, or merging organizations. If propagation of a covered
|
||||||
|
work results from an entity transaction, each party to that
|
||||||
|
transaction who receives a copy of the work also receives whatever
|
||||||
|
licenses to the work the party's predecessor in interest had or could
|
||||||
|
give under the previous paragraph, plus a right to possession of the
|
||||||
|
Corresponding Source of the work from the predecessor in interest, if
|
||||||
|
the predecessor has it or can get it with reasonable efforts.
|
||||||
|
|
||||||
|
You may not impose any further restrictions on the exercise of the
|
||||||
|
rights granted or affirmed under this License. For example, you may
|
||||||
|
not impose a license fee, royalty, or other charge for exercise of
|
||||||
|
rights granted under this License, and you may not initiate litigation
|
||||||
|
(including a cross-claim or counterclaim in a lawsuit) alleging that
|
||||||
|
any patent claim is infringed by making, using, selling, offering for
|
||||||
|
sale, or importing the Program or any portion of it.
|
||||||
|
|
||||||
|
#### 11. Patents.
|
||||||
|
|
||||||
|
A "contributor" is a copyright holder who authorizes use under this
|
||||||
|
License of the Program or a work on which the Program is based. The
|
||||||
|
work thus licensed is called the contributor's "contributor version".
|
||||||
|
|
||||||
|
A contributor's "essential patent claims" are all patent claims owned
|
||||||
|
or controlled by the contributor, whether already acquired or
|
||||||
|
hereafter acquired, that would be infringed by some manner, permitted
|
||||||
|
by this License, of making, using, or selling its contributor version,
|
||||||
|
but do not include claims that would be infringed only as a
|
||||||
|
consequence of further modification of the contributor version. For
|
||||||
|
purposes of this definition, "control" includes the right to grant
|
||||||
|
patent sublicenses in a manner consistent with the requirements of
|
||||||
|
this License.
|
||||||
|
|
||||||
|
Each contributor grants you a non-exclusive, worldwide, royalty-free
|
||||||
|
patent license under the contributor's essential patent claims, to
|
||||||
|
make, use, sell, offer for sale, import and otherwise run, modify and
|
||||||
|
propagate the contents of its contributor version.
|
||||||
|
|
||||||
|
In the following three paragraphs, a "patent license" is any express
|
||||||
|
agreement or commitment, however denominated, not to enforce a patent
|
||||||
|
(such as an express permission to practice a patent or covenant not to
|
||||||
|
sue for patent infringement). To "grant" such a patent license to a
|
||||||
|
party means to make such an agreement or commitment not to enforce a
|
||||||
|
patent against the party.
|
||||||
|
|
||||||
|
If you convey a covered work, knowingly relying on a patent license,
|
||||||
|
and the Corresponding Source of the work is not available for anyone
|
||||||
|
to copy, free of charge and under the terms of this License, through a
|
||||||
|
publicly available network server or other readily accessible means,
|
||||||
|
then you must either (1) cause the Corresponding Source to be so
|
||||||
|
available, or (2) arrange to deprive yourself of the benefit of the
|
||||||
|
patent license for this particular work, or (3) arrange, in a manner
|
||||||
|
consistent with the requirements of this License, to extend the patent
|
||||||
|
license to downstream recipients. "Knowingly relying" means you have
|
||||||
|
actual knowledge that, but for the patent license, your conveying the
|
||||||
|
covered work in a country, or your recipient's use of the covered work
|
||||||
|
in a country, would infringe one or more identifiable patents in that
|
||||||
|
country that you have reason to believe are valid.
|
||||||
|
|
||||||
|
If, pursuant to or in connection with a single transaction or
|
||||||
|
arrangement, you convey, or propagate by procuring conveyance of, a
|
||||||
|
covered work, and grant a patent license to some of the parties
|
||||||
|
receiving the covered work authorizing them to use, propagate, modify
|
||||||
|
or convey a specific copy of the covered work, then the patent license
|
||||||
|
you grant is automatically extended to all recipients of the covered
|
||||||
|
work and works based on it.
|
||||||
|
|
||||||
|
A patent license is "discriminatory" if it does not include within the
|
||||||
|
scope of its coverage, prohibits the exercise of, or is conditioned on
|
||||||
|
the non-exercise of one or more of the rights that are specifically
|
||||||
|
granted under this License. You may not convey a covered work if you
|
||||||
|
are a party to an arrangement with a third party that is in the
|
||||||
|
business of distributing software, under which you make payment to the
|
||||||
|
third party based on the extent of your activity of conveying the
|
||||||
|
work, and under which the third party grants, to any of the parties
|
||||||
|
who would receive the covered work from you, a discriminatory patent
|
||||||
|
license (a) in connection with copies of the covered work conveyed by
|
||||||
|
you (or copies made from those copies), or (b) primarily for and in
|
||||||
|
connection with specific products or compilations that contain the
|
||||||
|
covered work, unless you entered into that arrangement, or that patent
|
||||||
|
license was granted, prior to 28 March 2007.
|
||||||
|
|
||||||
|
Nothing in this License shall be construed as excluding or limiting
|
||||||
|
any implied license or other defenses to infringement that may
|
||||||
|
otherwise be available to you under applicable patent law.
|
||||||
|
|
||||||
|
#### 12. No Surrender of Others' Freedom.
|
||||||
|
|
||||||
|
If conditions are imposed on you (whether by court order, agreement or
|
||||||
|
otherwise) that contradict the conditions of this License, they do not
|
||||||
|
excuse you from the conditions of this License. If you cannot convey a
|
||||||
|
covered work so as to satisfy simultaneously your obligations under
|
||||||
|
this License and any other pertinent obligations, then as a
|
||||||
|
consequence you may not convey it at all. For example, if you agree to
|
||||||
|
terms that obligate you to collect a royalty for further conveying
|
||||||
|
from those to whom you convey the Program, the only way you could
|
||||||
|
satisfy both those terms and this License would be to refrain entirely
|
||||||
|
from conveying the Program.
|
||||||
|
|
||||||
|
#### 13. Use with the GNU Affero General Public License.
|
||||||
|
|
||||||
|
Notwithstanding any other provision of this License, you have
|
||||||
|
permission to link or combine any covered work with a work licensed
|
||||||
|
under version 3 of the GNU Affero General Public License into a single
|
||||||
|
combined work, and to convey the resulting work. The terms of this
|
||||||
|
License will continue to apply to the part which is the covered work,
|
||||||
|
but the special requirements of the GNU Affero General Public License,
|
||||||
|
section 13, concerning interaction through a network will apply to the
|
||||||
|
combination as such.
|
||||||
|
|
||||||
|
#### 14. Revised Versions of this License.
|
||||||
|
|
||||||
|
The Free Software Foundation may publish revised and/or new versions
|
||||||
|
of the GNU General Public License from time to time. Such new versions
|
||||||
|
will be similar in spirit to the present version, but may differ in
|
||||||
|
detail to address new problems or concerns.
|
||||||
|
|
||||||
|
Each version is given a distinguishing version number. If the Program
|
||||||
|
specifies that a certain numbered version of the GNU General Public
|
||||||
|
License "or any later version" applies to it, you have the option of
|
||||||
|
following the terms and conditions either of that numbered version or
|
||||||
|
of any later version published by the Free Software Foundation. If the
|
||||||
|
Program does not specify a version number of the GNU General Public
|
||||||
|
License, you may choose any version ever published by the Free
|
||||||
|
Software Foundation.
|
||||||
|
|
||||||
|
If the Program specifies that a proxy can decide which future versions
|
||||||
|
of the GNU General Public License can be used, that proxy's public
|
||||||
|
statement of acceptance of a version permanently authorizes you to
|
||||||
|
choose that version for the Program.
|
||||||
|
|
||||||
|
Later license versions may give you additional or different
|
||||||
|
permissions. However, no additional obligations are imposed on any
|
||||||
|
author or copyright holder as a result of your choosing to follow a
|
||||||
|
later version.
|
||||||
|
|
||||||
|
#### 15. Disclaimer of Warranty.
|
||||||
|
|
||||||
|
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||||
|
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||||
|
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
|
||||||
|
WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT
|
||||||
|
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||||
|
A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND
|
||||||
|
PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
|
||||||
|
DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
|
||||||
|
CORRECTION.
|
||||||
|
|
||||||
|
#### 16. Limitation of Liability.
|
||||||
|
|
||||||
|
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||||
|
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR
|
||||||
|
CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
|
||||||
|
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES
|
||||||
|
ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT
|
||||||
|
NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR
|
||||||
|
LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM
|
||||||
|
TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER
|
||||||
|
PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
|
||||||
|
|
||||||
|
#### 17. Interpretation of Sections 15 and 16.
|
||||||
|
|
||||||
|
If the disclaimer of warranty and limitation of liability provided
|
||||||
|
above cannot be given local legal effect according to their terms,
|
||||||
|
reviewing courts shall apply local law that most closely approximates
|
||||||
|
an absolute waiver of all civil liability in connection with the
|
||||||
|
Program, unless a warranty or assumption of liability accompanies a
|
||||||
|
copy of the Program in return for a fee.
|
||||||
|
|
||||||
|
END OF TERMS AND CONDITIONS
|
||||||
|
|
||||||
|
### How to Apply These Terms to Your New Programs
|
||||||
|
|
||||||
|
If you develop a new program, and you want it to be of the greatest
|
||||||
|
possible use to the public, the best way to achieve this is to make it
|
||||||
|
free software which everyone can redistribute and change under these
|
||||||
|
terms.
|
||||||
|
|
||||||
|
To do so, attach the following notices to the program. It is safest to
|
||||||
|
attach them to the start of each source file to most effectively state
|
||||||
|
the exclusion of warranty; and each file should have at least the
|
||||||
|
"copyright" line and a pointer to where the full notice is found.
|
||||||
|
|
||||||
|
<one line to give the program's name and a brief idea of what it does.>
|
||||||
|
Copyright (C) <year> <name of author>
|
||||||
|
|
||||||
|
This program is free software: you can redistribute it and/or modify
|
||||||
|
it under the terms of the GNU General Public License as published by
|
||||||
|
the Free Software Foundation, either version 3 of the License, or
|
||||||
|
(at your option) any later version.
|
||||||
|
|
||||||
|
This program is distributed in the hope that it will be useful,
|
||||||
|
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||||
|
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||||
|
GNU General Public License for more details.
|
||||||
|
|
||||||
|
You should have received a copy of the GNU General Public License
|
||||||
|
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
Also add information on how to contact you by electronic and paper
|
||||||
|
mail.
|
||||||
|
|
||||||
|
If the program does terminal interaction, make it output a short
|
||||||
|
notice like this when it starts in an interactive mode:
|
||||||
|
|
||||||
|
<program> Copyright (C) <year> <name of author>
|
||||||
|
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||||
|
This is free software, and you are welcome to redistribute it
|
||||||
|
under certain conditions; type `show c' for details.
|
||||||
|
|
||||||
|
The hypothetical commands \`show w' and \`show c' should show the
|
||||||
|
appropriate parts of the General Public License. Of course, your
|
||||||
|
program's commands might be different; for a GUI interface, you would
|
||||||
|
use an "about box".
|
||||||
|
|
||||||
|
You should also get your employer (if you work as a programmer) or
|
||||||
|
school, if any, to sign a "copyright disclaimer" for the program, if
|
||||||
|
necessary. For more information on this, and how to apply and follow
|
||||||
|
the GNU GPL, see <https://www.gnu.org/licenses/>.
|
||||||
|
|
||||||
|
The GNU General Public License does not permit incorporating your
|
||||||
|
program into proprietary programs. If your program is a subroutine
|
||||||
|
library, you may consider it more useful to permit linking proprietary
|
||||||
|
applications with the library. If this is what you want to do, use the
|
||||||
|
GNU Lesser General Public License instead of this License. But first,
|
||||||
|
please read <https://www.gnu.org/licenses/why-not-lgpl.html>.
|
12
Makefile
Normal file
12
Makefile
Normal file
|
@ -0,0 +1,12 @@
|
||||||
|
protoc:
|
||||||
|
@go mod tidy -v
|
||||||
|
@go mod vendor
|
||||||
|
# Install specific version for gogo-proto
|
||||||
|
@go list -f '{{.Path}}/...@{{.Version}}' -m github.com/gogo/protobuf | xargs go get -v
|
||||||
|
# Install specific version for protobuf lib
|
||||||
|
@go list -f '{{.Path}}/...@{{.Version}}' -m github.com/golang/protobuf | xargs go get -v
|
||||||
|
# Protoc generate
|
||||||
|
@find . -type f -name '*.proto' -not -path './vendor/*' \
|
||||||
|
-exec protoc \
|
||||||
|
--proto_path=.:./vendor \
|
||||||
|
--gofast_out=plugins=grpc,paths=source_relative:. '{}' \;
|
99
README.md
Normal file
99
README.md
Normal file
|
@ -0,0 +1,99 @@
|
||||||
|
# NeoFS-proto
|
||||||
|
|
||||||
|
NeoFS-proto repository contains implementation of core NeoFS structures that
|
||||||
|
can be used for integration with NeoFS.
|
||||||
|
|
||||||
|
## Description
|
||||||
|
|
||||||
|
Repository contains 13 packages that implement NeoFS core structures. These
|
||||||
|
packages mostly contain protobuf files with service and structure definitions
|
||||||
|
or NeoFS core types with complemented functions.
|
||||||
|
|
||||||
|
### Accounting
|
||||||
|
|
||||||
|
Accounting package defines services and structures for accounting operations:
|
||||||
|
balance request and `cheque` operations for withdraw. `Cheque` is a structure
|
||||||
|
with inner ring signatures, which approve that user can withdraw requested
|
||||||
|
amount of assets. NeoFS smart contract takes binary formatted `cheque` as a
|
||||||
|
parameter in withdraw call.
|
||||||
|
|
||||||
|
### Bootstrap
|
||||||
|
|
||||||
|
Bootstrap package defines bootstrap service which is used by storage nodes to
|
||||||
|
connect to the storage network.
|
||||||
|
|
||||||
|
### Chain
|
||||||
|
|
||||||
|
Chain package contains util functions for operations with NEO Blockchain types:
|
||||||
|
wallet addresses, script-hashes.
|
||||||
|
|
||||||
|
### Container
|
||||||
|
|
||||||
|
Container package defines service and structures for operations with containers.
|
||||||
|
Objects in NeoFS are stored in containers. Container defines storage
|
||||||
|
policy for the objects.
|
||||||
|
|
||||||
|
### Decimal
|
||||||
|
|
||||||
|
Decimal defines custom decimal implementation which is used in accounting
|
||||||
|
operations.
|
||||||
|
|
||||||
|
### Hash
|
||||||
|
|
||||||
|
Hash package defines homomorphic hash type.
|
||||||
|
|
||||||
|
### Internal
|
||||||
|
|
||||||
|
Internal package defines constant error type and proto interface for custom
|
||||||
|
protobuf structures.
|
||||||
|
|
||||||
|
### Object
|
||||||
|
|
||||||
|
Object package defines service and structures for object operations. Object is
|
||||||
|
a core storage structure in NeoFS. Package contains detailed information
|
||||||
|
about object internal structure.
|
||||||
|
|
||||||
|
### Query
|
||||||
|
|
||||||
|
Query package defines structure for object search requests.
|
||||||
|
|
||||||
|
### Refs
|
||||||
|
|
||||||
|
Refs package defines core identity types: Object ID, Container ID, etc.
|
||||||
|
|
||||||
|
### Service
|
||||||
|
|
||||||
|
Service package defines util structure and functions for all NeoFS services
|
||||||
|
operations: TTL and request signature management, node roles, epoch retriever.
|
||||||
|
|
||||||
|
### Session
|
||||||
|
|
||||||
|
Session package defines service and structures for session obtain. Object
|
||||||
|
operations require an established session with pair of session keys signed by
|
||||||
|
owner of the object.
|
||||||
|
|
||||||
|
### State
|
||||||
|
|
||||||
|
State package defines service and structures for metrics gathering.
|
||||||
|
|
||||||
|
## How to use
|
||||||
|
|
||||||
|
NeoFS-proto packages contain godoc documentation. Examples of using most of
|
||||||
|
these packages can be found in NeoFS-CLI repository. CLI implements and
|
||||||
|
demonstrates all basic interactions with NeoFS: container, object, storage
|
||||||
|
group, and accounting operations.
|
||||||
|
|
||||||
|
Protobuf files are recompiled with the command:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ make protoc
|
||||||
|
```
|
||||||
|
|
||||||
|
## Contributing
|
||||||
|
|
||||||
|
At this moment, we do not accept contributions.
|
||||||
|
|
||||||
|
## License
|
||||||
|
|
||||||
|
This project is licensed under the GPLv3 License -
|
||||||
|
see the [LICENSE.md](LICENSE.md) file for details
|
8
accounting/fixtures/cheque.sh
Executable file
8
accounting/fixtures/cheque.sh
Executable file
|
@ -0,0 +1,8 @@
|
||||||
|
#!/bin/bash
|
||||||
|
|
||||||
|
CHEQUE=d6520dabb6cb9b981792608c73670eff14775e9a65bbc189271723ba2703c53263e8d6e522dc32203339dcd8eee9c6b7439a0000000053724e000000000000001e61000603012d47e76210aec73be39ab3d186e0a40fe8d86bfa3d4fabfda57ba13b88f96abe1de4c7ecd46cb32081c0ff199e0b32708d2ce709dd146ce096484073a9b15a259ca799f8d848eb5bea16f6d0842a0181ccd47384af2cdb0fd0af0819e8a08802f7528ce97c9a93558efe7d4f62577aabdf771c931f54a71be6ad21e7d9cc1777686ad19b5dc4b80d7b8decf90054c5aad66c0e6fe63d8473b751cd77c1bd0557516e0f3e7d0ccb485809023b0c08a89f33ae38b2f99ce3f1ebc7905dddf0ed0f023e00f03a16e8707ce045eb42ee80d392451541ee510dc18e1c8befbac54d7426087d37d32d836537d317deafbbd193002a36f80fbdfbf3a730cf011bc6c75c7e6d5724f3adee7015fcb3068d321e2ae555e79107be0c46070efdae2f724dbc9f0340750b92789821683283bcb98e32b7e032b94f267b6964613fc31a7ce5813fddeea47a1db525634237e924178b5c8ea745549ae60aa3570ce6cf52e370e6ab87652bdf8a179176f1acaf48896bef9ab300818a53f410d86241d506a550f4915403fef27f744e829131d0ec980829fafa51db1714c2761d9f78762c008c323e9d6612e4f9efdc609f191fd9ca5431dd9dc037130150107ab8769780d728e9ffdf314019b57c8d2b940b9ec078afa951ed8b06c1bf352edd2037e29b8f24cca3ec700368a6f5829fb2a34fa03d0308ae6b05f433f2904d9a852fed1f5d2eb598ca79475b74ef6394e712d275cd798062c6d8e41fad822ac5a4fcb167f0a2e196f61f9f65a0adef9650f49150e7eb7bb08dd1739fa6e86b341f1b2cf5657fcd200637e8
|
||||||
|
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
|
||||||
|
|
||||||
|
echo $CHEQUE | xxd -p -r > $DIR/cheque_data
|
||||||
|
|
||||||
|
exit 0
|
BIN
accounting/fixtures/cheque_data
Normal file
BIN
accounting/fixtures/cheque_data
Normal file
Binary file not shown.
49
accounting/service.go
Normal file
49
accounting/service.go
Normal file
|
@ -0,0 +1,49 @@
|
||||||
|
package accounting
|
||||||
|
|
||||||
|
import (
|
||||||
|
"github.com/nspcc-dev/neofs-proto/decimal"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/internal"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
)
|
||||||
|
|
||||||
|
type (
|
||||||
|
// OwnerID type alias.
|
||||||
|
OwnerID = refs.OwnerID
|
||||||
|
|
||||||
|
// Decimal type alias.
|
||||||
|
Decimal = decimal.Decimal
|
||||||
|
|
||||||
|
// Filter is used to filter accounts by criteria.
|
||||||
|
Filter func(acc *Account) bool
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// ErrEmptyAddress is raised when passed Address is empty.
|
||||||
|
ErrEmptyAddress = internal.Error("empty address")
|
||||||
|
|
||||||
|
// ErrEmptyLockTarget is raised when passed LockTarget is empty.
|
||||||
|
ErrEmptyLockTarget = internal.Error("empty lock target")
|
||||||
|
|
||||||
|
// ErrEmptyContainerID is raised when passed CID is empty.
|
||||||
|
ErrEmptyContainerID = internal.Error("empty container ID")
|
||||||
|
|
||||||
|
// ErrEmptyParentAddress is raised when passed ParentAddress is empty.
|
||||||
|
ErrEmptyParentAddress = internal.Error("empty parent address")
|
||||||
|
)
|
||||||
|
|
||||||
|
// SetTTL sets ttl to BalanceRequest to satisfy TTLRequest interface.
|
||||||
|
func (m BalanceRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SumFunds goes through all accounts and sums up active funds.
|
||||||
|
func SumFunds(accounts []*Account) (res *decimal.Decimal) {
|
||||||
|
res = decimal.Zero.Copy()
|
||||||
|
|
||||||
|
for i := range accounts {
|
||||||
|
if accounts[i] == nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
res = res.Add(accounts[i].ActiveFunds)
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
BIN
accounting/service.pb.go
Normal file
BIN
accounting/service.pb.go
Normal file
Binary file not shown.
23
accounting/service.proto
Normal file
23
accounting/service.proto
Normal file
|
@ -0,0 +1,23 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package accounting;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/accounting";
|
||||||
|
|
||||||
|
import "decimal/decimal.proto";
|
||||||
|
import "accounting/types.proto";
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
service Accounting {
|
||||||
|
rpc Balance(BalanceRequest) returns (BalanceResponse);
|
||||||
|
}
|
||||||
|
|
||||||
|
message BalanceRequest {
|
||||||
|
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
message BalanceResponse {
|
||||||
|
decimal.Decimal Balance = 1;
|
||||||
|
repeated Account LockAccounts = 2;
|
||||||
|
}
|
353
accounting/types.go
Normal file
353
accounting/types.go
Normal file
|
@ -0,0 +1,353 @@
|
||||||
|
package accounting
|
||||||
|
|
||||||
|
import (
|
||||||
|
"crypto/ecdsa"
|
||||||
|
"crypto/rand"
|
||||||
|
"encoding/binary"
|
||||||
|
"reflect"
|
||||||
|
|
||||||
|
"github.com/mr-tron/base58"
|
||||||
|
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/chain"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/decimal"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/internal"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
)
|
||||||
|
|
||||||
|
type (
|
||||||
|
// Cheque structure that describes a user request for withdrawal of funds.
|
||||||
|
Cheque struct {
|
||||||
|
ID ChequeID
|
||||||
|
Owner refs.OwnerID
|
||||||
|
Amount *decimal.Decimal
|
||||||
|
Height uint64
|
||||||
|
Signatures []ChequeSignature
|
||||||
|
}
|
||||||
|
|
||||||
|
// BalanceReceiver interface that is used to retrieve user balance by address.
|
||||||
|
BalanceReceiver interface {
|
||||||
|
Balance(accountAddress string) (*Account, error)
|
||||||
|
}
|
||||||
|
|
||||||
|
// ChequeID is identifier of user request for withdrawal of funds.
|
||||||
|
ChequeID string
|
||||||
|
|
||||||
|
// CID type alias.
|
||||||
|
CID = refs.CID
|
||||||
|
|
||||||
|
// SGID type alias.
|
||||||
|
SGID = refs.SGID
|
||||||
|
|
||||||
|
// ChequeSignature contains public key and hash, and is used to verify signatures.
|
||||||
|
ChequeSignature struct {
|
||||||
|
Key *ecdsa.PublicKey
|
||||||
|
Hash []byte
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// ErrWrongSignature is raised when wrong signature is passed.
|
||||||
|
ErrWrongSignature = internal.Error("wrong signature")
|
||||||
|
|
||||||
|
// ErrWrongPublicKey is raised when wrong public key is passed.
|
||||||
|
ErrWrongPublicKey = internal.Error("wrong public key")
|
||||||
|
|
||||||
|
// ErrWrongChequeData is raised when passed bytes cannot not be parsed as valid Cheque.
|
||||||
|
ErrWrongChequeData = internal.Error("wrong cheque data")
|
||||||
|
|
||||||
|
// ErrInvalidLength is raised when passed bytes cannot not be parsed as valid ChequeID.
|
||||||
|
ErrInvalidLength = internal.Error("invalid length")
|
||||||
|
|
||||||
|
u16size = 2
|
||||||
|
u64size = 8
|
||||||
|
|
||||||
|
signaturesOffset = chain.AddressLength + refs.OwnerIDSize + u64size + u64size
|
||||||
|
)
|
||||||
|
|
||||||
|
// NewChequeID generates valid random ChequeID using crypto/rand.Reader.
|
||||||
|
func NewChequeID() (ChequeID, error) {
|
||||||
|
d := make([]byte, chain.AddressLength)
|
||||||
|
if _, err := rand.Read(d); err != nil {
|
||||||
|
return "", err
|
||||||
|
}
|
||||||
|
|
||||||
|
id := base58.Encode(d)
|
||||||
|
|
||||||
|
return ChequeID(id), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// String returns string representation of ChequeID.
|
||||||
|
func (b ChequeID) String() string { return string(b) }
|
||||||
|
|
||||||
|
// Empty returns true, if ChequeID is empty.
|
||||||
|
func (b ChequeID) Empty() bool { return len(b) == 0 }
|
||||||
|
|
||||||
|
// Valid validates ChequeID.
|
||||||
|
func (b ChequeID) Valid() bool {
|
||||||
|
d, err := base58.Decode(string(b))
|
||||||
|
return err == nil && len(d) == chain.AddressLength
|
||||||
|
}
|
||||||
|
|
||||||
|
// Bytes returns bytes representation of ChequeID.
|
||||||
|
func (b ChequeID) Bytes() []byte {
|
||||||
|
d, err := base58.Decode(string(b))
|
||||||
|
if err != nil {
|
||||||
|
return make([]byte, chain.AddressLength)
|
||||||
|
}
|
||||||
|
return d
|
||||||
|
}
|
||||||
|
|
||||||
|
// Equal checks that current ChequeID is equal to passed ChequeID.
|
||||||
|
func (b ChequeID) Equal(b2 ChequeID) bool {
|
||||||
|
return b.Valid() && b2.Valid() && string(b) == string(b2)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Unmarshal tries to parse []byte into valid ChequeID.
|
||||||
|
func (b *ChequeID) Unmarshal(data []byte) error {
|
||||||
|
*b = ChequeID(base58.Encode(data))
|
||||||
|
if !b.Valid() {
|
||||||
|
return ErrInvalidLength
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Size returns size (chain.AddressLength).
|
||||||
|
func (b ChequeID) Size() int {
|
||||||
|
return chain.AddressLength
|
||||||
|
}
|
||||||
|
|
||||||
|
// MarshalTo tries to marshal ChequeID into passed bytes and returns
|
||||||
|
// count of copied bytes or error, if bytes len is not enough to contain ChequeID.
|
||||||
|
func (b ChequeID) MarshalTo(data []byte) (int, error) {
|
||||||
|
if len(data) < chain.AddressLength {
|
||||||
|
return 0, ErrInvalidLength
|
||||||
|
}
|
||||||
|
return copy(data, b.Bytes()), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Equals checks that m and tx are valid and equal Tx values.
|
||||||
|
func (m Tx) Equals(tx Tx) bool {
|
||||||
|
return m.From == tx.From &&
|
||||||
|
m.To == tx.To &&
|
||||||
|
m.Type == tx.Type &&
|
||||||
|
m.Amount == tx.Amount
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify validates current Cheque and Signatures that are generated for current Cheque.
|
||||||
|
func (b Cheque) Verify() error {
|
||||||
|
data := b.marshalBody()
|
||||||
|
for i, sign := range b.Signatures {
|
||||||
|
if err := crypto.VerifyRFC6979(sign.Key, data, sign.Hash); err != nil {
|
||||||
|
return errors.Wrapf(ErrWrongSignature, "item #%d: %s", i, err.Error())
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Sign is used to sign current Cheque and stores result inside b.Signatures.
|
||||||
|
func (b *Cheque) Sign(key *ecdsa.PrivateKey) error {
|
||||||
|
hash, err := crypto.SignRFC6979(key, b.marshalBody())
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
b.Signatures = append(b.Signatures, ChequeSignature{
|
||||||
|
Key: &key.PublicKey,
|
||||||
|
Hash: hash,
|
||||||
|
})
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (b *Cheque) marshalBody() []byte {
|
||||||
|
buf := make([]byte, signaturesOffset)
|
||||||
|
|
||||||
|
var offset int
|
||||||
|
|
||||||
|
offset += copy(buf, b.ID.Bytes())
|
||||||
|
offset += copy(buf[offset:], b.Owner.Bytes())
|
||||||
|
|
||||||
|
binary.BigEndian.PutUint64(buf[offset:], uint64(b.Amount.Value))
|
||||||
|
offset += u64size
|
||||||
|
|
||||||
|
binary.BigEndian.PutUint64(buf[offset:], b.Height)
|
||||||
|
|
||||||
|
return buf
|
||||||
|
}
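For orientation, here is a comment-only sketch of the wire layout produced by marshalBody and the full encoding written by MarshalBinary below, expressed with the package constants used above:

// Cheque wire layout (all integers big-endian):
//
//   body, written by marshalBody (signaturesOffset bytes in total):
//     ChequeID bytes            chain.AddressLength bytes
//     OwnerID bytes             refs.OwnerIDSize bytes
//     Amount.Value              u64size bytes (uint64)
//     Height                    u64size bytes (uint64)
//
//   appended by MarshalBinary:
//     signature count           u16size bytes (uint16)
//     for each signature:
//       compressed public key   crypto.PublicKeyCompressedSize bytes
//       RFC 6979 signature      crypto.RFC6979SignatureSize bytes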
|
||||||
|
|
||||||
|
func (b *Cheque) unmarshalBody(buf []byte) error {
|
||||||
|
var offset int
|
||||||
|
|
||||||
|
if len(buf) < signaturesOffset {
|
||||||
|
return ErrWrongChequeData
|
||||||
|
}
|
||||||
|
|
||||||
|
{ // unmarshal ChequeID
|
||||||
|
if err := b.ID.Unmarshal(buf[offset : offset+chain.AddressLength]); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
offset += chain.AddressLength
|
||||||
|
}
|
||||||
|
|
||||||
|
{ // unmarshal OwnerID
|
||||||
|
if err := b.Owner.Unmarshal(buf[offset : offset+refs.OwnerIDSize]); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
offset += refs.OwnerIDSize
|
||||||
|
}
|
||||||
|
|
||||||
|
{ // unmarshal amount
|
||||||
|
amount := int64(binary.BigEndian.Uint64(buf[offset:]))
|
||||||
|
b.Amount = decimal.New(amount)
|
||||||
|
offset += u64size
|
||||||
|
}
|
||||||
|
|
||||||
|
{ // unmarshal height
|
||||||
|
b.Height = binary.BigEndian.Uint64(buf[offset:])
|
||||||
|
offset += u64size
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// MarshalBinary is used to marshal Cheque into bytes.
|
||||||
|
func (b Cheque) MarshalBinary() ([]byte, error) {
|
||||||
|
var (
|
||||||
|
count = len(b.Signatures)
|
||||||
|
buf = make([]byte, b.Size())
|
||||||
|
offset = copy(buf, b.marshalBody())
|
||||||
|
)
|
||||||
|
|
||||||
|
binary.BigEndian.PutUint16(buf[offset:], uint16(count))
|
||||||
|
offset += u16size
|
||||||
|
|
||||||
|
for _, sign := range b.Signatures {
|
||||||
|
key := crypto.MarshalPublicKey(sign.Key)
|
||||||
|
offset += copy(buf[offset:], key)
|
||||||
|
offset += copy(buf[offset:], sign.Hash)
|
||||||
|
}
|
||||||
|
|
||||||
|
return buf, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Size returns size of Cheque (count of bytes needed to store it).
|
||||||
|
func (b Cheque) Size() int {
|
||||||
|
return signaturesOffset + u16size +
|
||||||
|
len(b.Signatures)*(crypto.PublicKeyCompressedSize+crypto.RFC6979SignatureSize)
|
||||||
|
}
|
||||||
|
|
||||||
|
// UnmarshalBinary tries to parse []byte into valid Cheque.
|
||||||
|
func (b *Cheque) UnmarshalBinary(buf []byte) error {
|
||||||
|
if err := b.unmarshalBody(buf); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
body := buf[:signaturesOffset]
|
||||||
|
|
||||||
|
count := int64(binary.BigEndian.Uint16(buf[signaturesOffset:]))
|
||||||
|
offset := signaturesOffset + u16size
|
||||||
|
|
||||||
|
if ln := count * int64(crypto.PublicKeyCompressedSize+crypto.RFC6979SignatureSize); ln > int64(len(buf[offset:])) {
|
||||||
|
return ErrWrongChequeData
|
||||||
|
}
|
||||||
|
|
||||||
|
for i := int64(0); i < count; i++ {
|
||||||
|
sign := ChequeSignature{
|
||||||
|
Key: crypto.UnmarshalPublicKey(buf[offset : offset+crypto.PublicKeyCompressedSize]),
|
||||||
|
Hash: make([]byte, crypto.RFC6979SignatureSize),
|
||||||
|
}
|
||||||
|
|
||||||
|
offset += crypto.PublicKeyCompressedSize
|
||||||
|
if sign.Key == nil {
|
||||||
|
return errors.Wrapf(ErrWrongPublicKey, "item #%d", i)
|
||||||
|
}
|
||||||
|
|
||||||
|
offset += copy(sign.Hash, buf[offset:offset+crypto.RFC6979SignatureSize])
|
||||||
|
if err := crypto.VerifyRFC6979(sign.Key, body, sign.Hash); err != nil {
|
||||||
|
return errors.Wrapf(ErrWrongSignature, "item #%d: %s (offset=%d, len=%d)", i, err.Error(), offset, len(sign.Hash))
|
||||||
|
}
|
||||||
|
|
||||||
|
b.Signatures = append(b.Signatures, sign)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
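For completeness, a minimal usage sketch of the round trip implemented above. The example function name and error handling are illustrative; every API it calls appears in this commit (the key helper is the same one used in the tests).

package accounting_example // hypothetical example package

import (
	"github.com/nspcc-dev/neofs-crypto/test"
	"github.com/nspcc-dev/neofs-proto/accounting"
	"github.com/nspcc-dev/neofs-proto/decimal"
	"github.com/nspcc-dev/neofs-proto/refs"
)

func chequeRoundTrip() error {
	key := test.DecodeKey(0)

	id, err := accounting.NewChequeID()
	if err != nil {
		return err
	}
	owner, err := refs.NewOwnerID(&key.PublicKey)
	if err != nil {
		return err
	}

	cheque := &accounting.Cheque{ID: id, Owner: owner, Amount: decimal.NewGAS(42), Height: 100}
	if err := cheque.Sign(key); err != nil { // appends a ChequeSignature
		return err
	}

	data, err := cheque.MarshalBinary() // body || uint16 count || signatures
	if err != nil {
		return err
	}

	restored := new(accounting.Cheque)
	if err := restored.UnmarshalBinary(data); err != nil { // also verifies each signature
		return err
	}
	return restored.Verify()
}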
|
||||||
|
|
||||||
|
// ErrNotEnoughFunds generates error using address and amounts.
|
||||||
|
func ErrNotEnoughFunds(addr string, needed, residue *decimal.Decimal) error {
|
||||||
|
return errors.Errorf("not enough funds (requested=%s, residue=%s, addr=%s", needed, residue, addr)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m *Account) hasLockAcc(addr string) bool {
|
||||||
|
for i := range m.LockAccounts {
|
||||||
|
if m.LockAccounts[i].Address == addr {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// ValidateLock checks that account can be locked.
|
||||||
|
func (m *Account) ValidateLock() error {
|
||||||
|
switch {
|
||||||
|
case m.Address == "":
|
||||||
|
return ErrEmptyAddress
|
||||||
|
case m.ParentAddress == "":
|
||||||
|
return ErrEmptyParentAddress
|
||||||
|
case m.LockTarget == nil:
|
||||||
|
return ErrEmptyLockTarget
|
||||||
|
}
|
||||||
|
|
||||||
|
switch v := m.LockTarget.Target.(type) {
|
||||||
|
case *LockTarget_WithdrawTarget:
|
||||||
|
if v.WithdrawTarget.Cheque != m.Address {
|
||||||
|
return errors.Errorf("wrong cheque ID: expected %s, has %s", m.Address, v.WithdrawTarget.Cheque)
|
||||||
|
}
|
||||||
|
case *LockTarget_ContainerCreateTarget:
|
||||||
|
switch {
|
||||||
|
case v.ContainerCreateTarget.CID.Empty():
|
||||||
|
return ErrEmptyContainerID
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// CanLock checks possibility to lock funds.
|
||||||
|
func (m *Account) CanLock(lockAcc *Account) error {
|
||||||
|
switch {
|
||||||
|
case m.ActiveFunds.LT(lockAcc.ActiveFunds):
|
||||||
|
return ErrNotEnoughFunds(lockAcc.ParentAddress, lockAcc.ActiveFunds, m.ActiveFunds)
|
||||||
|
case m.hasLockAcc(lockAcc.Address):
|
||||||
|
return errors.Errorf("could not lock account(%s) funds: duplicating lock(%s)", m.Address, lockAcc.Address)
|
||||||
|
default:
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// LockForWithdraw checks that account contains locked funds by passed ChequeID.
|
||||||
|
func (m *Account) LockForWithdraw(chequeID string) bool {
|
||||||
|
switch v := m.LockTarget.Target.(type) {
|
||||||
|
case *LockTarget_WithdrawTarget:
|
||||||
|
return v.WithdrawTarget.Cheque == chequeID
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// LockForContainerCreate checks that account contains locked funds for container creation.
|
||||||
|
func (m *Account) LockForContainerCreate(cid refs.CID) bool {
|
||||||
|
switch v := m.LockTarget.Target.(type) {
|
||||||
|
case *LockTarget_ContainerCreateTarget:
|
||||||
|
return v.ContainerCreateTarget.CID.Equal(cid)
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// Equal checks that current Settlement is equal to passed Settlement.
|
||||||
|
func (m *Settlement) Equal(s *Settlement) bool {
|
||||||
|
if s == nil || m.Epoch != s.Epoch || len(m.Transactions) != len(s.Transactions) {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
return len(m.Transactions) == 0 || reflect.DeepEqual(m.Transactions, s.Transactions)
|
||||||
|
}
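A small sketch of how the lock helpers above compose, written as if inside this package; the account addresses and amounts are illustrative, and the generated types come from types.proto below.

func lockSketch() error {
	parent := &Account{
		Address:     "parent-account",
		ActiveFunds: decimal.NewGAS(10),
	}

	lock := &Account{
		Address:       "lock-account",
		ParentAddress: parent.Address,
		ActiveFunds:   decimal.NewGAS(3),
		LockTarget: &LockTarget{
			Target: &LockTarget_WithdrawTarget{
				// ValidateLock requires the cheque ID to match the lock account address.
				WithdrawTarget: &WithdrawTarget{Cheque: "lock-account"},
			},
		},
	}

	if err := lock.ValidateLock(); err != nil {
		return err // empty address/parent/target or mismatched cheque ID
	}
	// The parent must hold at least as many active funds and no duplicate lock.
	return parent.CanLock(lock)
}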
|
BIN
accounting/types.pb.go
Normal file
Binary file not shown.
106
accounting/types.proto
Normal file
|
@ -0,0 +1,106 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package accounting;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/accounting";
|
||||||
|
|
||||||
|
import "decimal/decimal.proto";
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
// Snapshot accounting messages
|
||||||
|
message Account {
|
||||||
|
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
string Address = 2;
|
||||||
|
string ParentAddress = 3;
|
||||||
|
decimal.Decimal ActiveFunds = 4;
|
||||||
|
Lifetime Lifetime = 5 [(gogoproto.nullable) = false];
|
||||||
|
LockTarget LockTarget = 6;
|
||||||
|
repeated Account LockAccounts = 7;
|
||||||
|
}
|
||||||
|
|
||||||
|
message LockTarget {
|
||||||
|
oneof Target {
|
||||||
|
WithdrawTarget WithdrawTarget = 1;
|
||||||
|
ContainerCreateTarget ContainerCreateTarget = 2;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Snapshot balance messages
|
||||||
|
message Balances {
|
||||||
|
repeated Account Accounts = 1 [(gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
// PayIn / PayOut messages
|
||||||
|
message PayIO {
|
||||||
|
uint64 BlockID = 1;
|
||||||
|
repeated Tx Transactions = 2 [(gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
// Clearing messages
|
||||||
|
message Clearing {
|
||||||
|
repeated Tx Transactions = 1 [(gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
// Withdraw messages
|
||||||
|
message Withdraw {
|
||||||
|
string ID = 1;
|
||||||
|
uint64 Epoch = 2;
|
||||||
|
Tx Transaction = 3;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Lifetime of locks
|
||||||
|
message Lifetime {
|
||||||
|
enum Unit {
|
||||||
|
Unlimited = 0;
|
||||||
|
NeoFSEpoch = 1;
|
||||||
|
NeoBlock = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
Unit unit = 1 [(gogoproto.customname) = "Unit"];
|
||||||
|
int64 Value = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
// Transaction messages
|
||||||
|
message Tx {
|
||||||
|
enum Type {
|
||||||
|
Unknown = 0;
|
||||||
|
Withdraw = 1;
|
||||||
|
PayIO = 2;
|
||||||
|
Inner = 3;
|
||||||
|
}
|
||||||
|
|
||||||
|
Type type = 1 [(gogoproto.customname) = "Type"];
|
||||||
|
string From = 2;
|
||||||
|
string To = 3;
|
||||||
|
decimal.Decimal Amount = 4;
|
||||||
|
bytes PublicKeys = 5; // of sender
|
||||||
|
}
|
||||||
|
|
||||||
|
message Settlement {
|
||||||
|
message Receiver {
|
||||||
|
string To = 1;
|
||||||
|
decimal.Decimal Amount = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
message Container {
|
||||||
|
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||||
|
repeated bytes SGIDs = 2 [(gogoproto.customtype) = "SGID", (gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
message Tx {
|
||||||
|
string From = 1;
|
||||||
|
Container Container = 2 [(gogoproto.nullable) = false];
|
||||||
|
repeated Receiver Receivers = 3 [(gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
repeated Tx Transactions = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
message ContainerCreateTarget {
|
||||||
|
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
message WithdrawTarget {
|
||||||
|
string Cheque = 1;
|
||||||
|
}
|
84
accounting/types_test.go
Normal file
|
@ -0,0 +1,84 @@
|
||||||
|
package accounting
|
||||||
|
|
||||||
|
import (
|
||||||
|
"io/ioutil"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/mr-tron/base58"
|
||||||
|
"github.com/nspcc-dev/neofs-crypto/test"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/chain"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/decimal"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestCheque(t *testing.T) {
|
||||||
|
t.Run("new/valid", func(t *testing.T) {
|
||||||
|
id, err := NewChequeID()
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.True(t, id.Valid())
|
||||||
|
|
||||||
|
d := make([]byte, chain.AddressLength+1)
|
||||||
|
|
||||||
|
// expected size + 1 byte
|
||||||
|
str := base58.Encode(d)
|
||||||
|
require.False(t, ChequeID(str).Valid())
|
||||||
|
|
||||||
|
// expected size - 1 byte
|
||||||
|
str = base58.Encode(d[:len(d)-2])
|
||||||
|
require.False(t, ChequeID(str).Valid())
|
||||||
|
|
||||||
|
// wrong encoding
|
||||||
|
d = d[:len(d)-1] // normal size
|
||||||
|
require.False(t, ChequeID(string(d)).Valid())
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("marshal/unmarshal", func(t *testing.T) {
|
||||||
|
var b2 = new(Cheque)
|
||||||
|
|
||||||
|
key1 := test.DecodeKey(0)
|
||||||
|
key2 := test.DecodeKey(1)
|
||||||
|
|
||||||
|
id, err := NewChequeID()
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
owner, err := refs.NewOwnerID(&key1.PublicKey)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
b1 := &Cheque{
|
||||||
|
ID: id,
|
||||||
|
Owner: owner,
|
||||||
|
Height: 100,
|
||||||
|
Amount: decimal.NewGAS(100),
|
||||||
|
}
|
||||||
|
|
||||||
|
require.NoError(t, b1.Sign(key1))
|
||||||
|
require.NoError(t, b1.Sign(key2))
|
||||||
|
|
||||||
|
data, err := b1.MarshalBinary()
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Len(t, data, b1.Size())
|
||||||
|
require.NoError(t, b2.UnmarshalBinary(data))
|
||||||
|
require.Equal(t, b1, b2)
|
||||||
|
|
||||||
|
require.NoError(t, b1.Verify())
|
||||||
|
require.NoError(t, b2.Verify())
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("example from SC", func(t *testing.T) {
|
||||||
|
var pathToCheque = "fixtures/cheque_data"
|
||||||
|
expect, err := ioutil.ReadFile(pathToCheque)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
var cheque Cheque
|
||||||
|
require.NoError(t, cheque.UnmarshalBinary(expect))
|
||||||
|
|
||||||
|
actual, err := cheque.MarshalBinary()
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, expect, actual)
|
||||||
|
|
||||||
|
require.NoError(t, cheque.Verify())
|
||||||
|
})
|
||||||
|
}
|
53
accounting/withdraw.go
Normal file
|
@ -0,0 +1,53 @@
|
||||||
|
package accounting
|
||||||
|
|
||||||
|
import (
|
||||||
|
"encoding/binary"
|
||||||
|
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
)
|
||||||
|
|
||||||
|
type (
|
||||||
|
// MessageID type alias.
|
||||||
|
MessageID = refs.MessageID
|
||||||
|
)
|
||||||
|
|
||||||
|
// SetTTL sets ttl to GetRequest to satisfy TTLRequest interface.
|
||||||
|
func (m *GetRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SetTTL sets ttl to PutRequest to satisfy TTLRequest interface.
|
||||||
|
func (m *PutRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SetTTL sets ttl to ListRequest to satisfy TTLRequest interface.
|
||||||
|
func (m *ListRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SetTTL sets ttl to DeleteRequest to satisfy TTLRequest interface.
|
||||||
|
func (m *DeleteRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SetSignature sets signature to PutRequest to satisfy SignedRequest interface.
|
||||||
|
func (m *PutRequest) SetSignature(v []byte) { m.Signature = v }
|
||||||
|
|
||||||
|
// SetSignature sets signature to DeleteRequest to satisfy SignedRequest interface.
|
||||||
|
func (m *DeleteRequest) SetSignature(v []byte) { m.Signature = v }
|
||||||
|
|
||||||
|
// PrepareData prepares bytes representation of PutRequest to satisfy SignedRequest interface.
|
||||||
|
func (m *PutRequest) PrepareData() ([]byte, error) {
|
||||||
|
var offset int
|
||||||
|
// MessageID-len + OwnerID-len + Amount + Height
|
||||||
|
buf := make([]byte, refs.UUIDSize+refs.OwnerIDSize+binary.MaxVarintLen64+binary.MaxVarintLen64)
|
||||||
|
offset += copy(buf[offset:], m.MessageID.Bytes())
|
||||||
|
offset += copy(buf[offset:], m.OwnerID.Bytes())
|
||||||
|
offset += binary.PutVarint(buf[offset:], m.Amount.Value)
|
||||||
|
binary.PutUvarint(buf[offset:], m.Height)
|
||||||
|
return buf, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// PrepareData prepares bytes representation of DeleteRequest to satisfy SignedRequest interface.
|
||||||
|
func (m *DeleteRequest) PrepareData() ([]byte, error) {
|
||||||
|
var offset int
|
||||||
|
// ID-len + OwnerID-len + MessageID-len
|
||||||
|
buf := make([]byte, refs.UUIDSize+refs.OwnerIDSize+refs.UUIDSize)
|
||||||
|
offset += copy(buf[offset:], m.ID.Bytes())
|
||||||
|
offset += copy(buf[offset:], m.OwnerID.Bytes())
|
||||||
|
copy(buf[offset:], m.MessageID.Bytes())
|
||||||
|
return buf, nil
|
||||||
|
}
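A hypothetical helper (not part of this commit) sketched inside this package, showing how PrepareData pairs with the RFC 6979 signing used elsewhere in the repository; it assumes the crypto/ecdsa and neofs-crypto imports.

func signWithdrawPut(req *PutRequest, key *ecdsa.PrivateKey) error {
	data, err := req.PrepareData() // MessageID || OwnerID || varint Amount || uvarint Height
	if err != nil {
		return err
	}
	sig, err := crypto.SignRFC6979(key, data)
	if err != nil {
		return err
	}
	req.SetSignature(sig)
	return nil
}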
|
BIN
accounting/withdraw.pb.go
Normal file
Binary file not shown.
61
accounting/withdraw.proto
Normal file
|
@ -0,0 +1,61 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package accounting;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/accounting";
|
||||||
|
|
||||||
|
import "decimal/decimal.proto";
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
service Withdraw {
|
||||||
|
rpc Get(GetRequest) returns (GetResponse);
|
||||||
|
rpc Put(PutRequest) returns (PutResponse);
|
||||||
|
rpc List(ListRequest) returns (ListResponse);
|
||||||
|
rpc Delete(DeleteRequest) returns (DeleteResponse);
|
||||||
|
}
|
||||||
|
|
||||||
|
message Item {
|
||||||
|
bytes ID = 1 [(gogoproto.customtype) = "ChequeID", (gogoproto.nullable) = false];
|
||||||
|
bytes OwnerID = 2 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
decimal.Decimal Amount = 3;
|
||||||
|
uint64 Height = 4;
|
||||||
|
bytes Payload = 5;
|
||||||
|
}
|
||||||
|
|
||||||
|
message GetRequest {
|
||||||
|
bytes ID = 1 [(gogoproto.customtype) = "ChequeID", (gogoproto.nullable) = false];
|
||||||
|
bytes OwnerID = 2 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 3;
|
||||||
|
}
|
||||||
|
message GetResponse {
|
||||||
|
Item Withdraw = 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
message PutRequest {
|
||||||
|
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
decimal.Decimal Amount = 2;
|
||||||
|
uint64 Height = 3;
|
||||||
|
bytes MessageID = 4 [(gogoproto.customtype) = "MessageID", (gogoproto.nullable) = false];
|
||||||
|
bytes Signature = 5;
|
||||||
|
uint32 TTL = 6;
|
||||||
|
}
|
||||||
|
message PutResponse {
|
||||||
|
bytes ID = 1 [(gogoproto.customtype) = "ChequeID", (gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
message ListRequest {
|
||||||
|
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 2;
|
||||||
|
}
|
||||||
|
message ListResponse {
|
||||||
|
repeated Item Items = 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
message DeleteRequest {
|
||||||
|
bytes ID = 1 [(gogoproto.customtype) = "ChequeID", (gogoproto.nullable) = false];
|
||||||
|
bytes OwnerID = 2 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
bytes MessageID = 3 [(gogoproto.customtype) = "MessageID", (gogoproto.nullable) = false];
|
||||||
|
bytes Signature = 4;
|
||||||
|
uint32 TTL = 5;
|
||||||
|
}
|
||||||
|
message DeleteResponse {}
|
11
bootstrap/service.go
Normal file
|
@ -0,0 +1,11 @@
|
||||||
|
package bootstrap
|
||||||
|
|
||||||
|
import (
|
||||||
|
"github.com/nspcc-dev/neofs-proto/service"
|
||||||
|
)
|
||||||
|
|
||||||
|
// NodeType type alias.
|
||||||
|
type NodeType = service.NodeRole
|
||||||
|
|
||||||
|
// SetTTL sets ttl to Request to satisfy TTLRequest interface.
|
||||||
|
func (m *Request) SetTTL(v uint32) { m.TTL = v }
|
BIN
bootstrap/service.pb.go
Normal file
Binary file not shown.
20
bootstrap/service.proto
Normal file
|
@ -0,0 +1,20 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package bootstrap;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/bootstrap";
|
||||||
|
|
||||||
|
import "bootstrap/types.proto";
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
// The Bootstrap service definition.
|
||||||
|
service Bootstrap {
|
||||||
|
rpc Process(Request) returns (bootstrap.SpreadMap);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Request message to communicate between DHT nodes
|
||||||
|
message Request {
|
||||||
|
int32 type = 1 [(gogoproto.customname) = "Type", (gogoproto.nullable) = false, (gogoproto.customtype) = "NodeType"];
|
||||||
|
bootstrap.NodeInfo info = 2 [(gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 3;
|
||||||
|
}
|
100
bootstrap/types.go
Normal file
|
@ -0,0 +1,100 @@
|
||||||
|
package bootstrap
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"encoding/hex"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"github.com/golang/protobuf/proto"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/object"
|
||||||
|
)
|
||||||
|
|
||||||
|
type (
|
||||||
|
// NodeStatus is a bitwise status field of the node.
|
||||||
|
NodeStatus uint64
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
storageFullMask = 0x1
|
||||||
|
|
||||||
|
optionCapacity = "/Capacity:"
|
||||||
|
optionPrice = "/Price:"
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
_ proto.Message = (*NodeInfo)(nil)
|
||||||
|
_ proto.Message = (*SpreadMap)(nil)
|
||||||
|
)
|
||||||
|
|
||||||
|
// Equals checks whether two NodeInfo have the same address and public key.
|
||||||
|
func (m NodeInfo) Equals(n1 NodeInfo) bool {
|
||||||
|
return m.Address == n1.Address && bytes.Equal(m.PubKey, n1.PubKey)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Full reports whether the node has run out of space for storing users' objects.
|
||||||
|
func (n NodeStatus) Full() bool {
|
||||||
|
return n&storageFullMask > 0
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetFull sets or clears the storage-full status bit of the node.
|
||||||
|
// If value is true, the node reports it has no space left for storing users' objects.
|
||||||
|
func (n *NodeStatus) SetFull(value bool) {
|
||||||
|
switch value {
|
||||||
|
case true:
|
||||||
|
*n |= NodeStatus(storageFullMask)
|
||||||
|
case false:
|
||||||
|
*n &= NodeStatus(^uint64(storageFullMask))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Price returns price in 1e-8*GAS/Megabyte per month.
|
||||||
|
// The user sets the price in GAS/Terabyte per month.
|
||||||
|
func (m NodeInfo) Price() uint64 {
|
||||||
|
for i := range m.Options {
|
||||||
|
if strings.HasPrefix(m.Options[i], optionPrice) {
|
||||||
|
n, err := strconv.ParseFloat(m.Options[i][len(optionPrice):], 64)
|
||||||
|
if err != nil {
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
return uint64(n*1e8) / uint64(object.UnitsMB) // UnitsMB == megabytes in 1 terabyte
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
|
||||||
|
// Capacity returns node's capacity as reported by user.
|
||||||
|
func (m NodeInfo) Capacity() uint64 {
|
||||||
|
for i := range m.Options {
|
||||||
|
if strings.HasPrefix(m.Options[i], optionCapacity) {
|
||||||
|
n, err := strconv.ParseUint(m.Options[i][len(optionCapacity):], 10, 64)
|
||||||
|
if err != nil {
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
return n
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return 0
|
||||||
|
}
|
||||||
|
|
||||||
|
// String returns string representation of NodeInfo.
|
||||||
|
func (m NodeInfo) String() string {
|
||||||
|
return "(NodeInfo)<" +
|
||||||
|
"Address:" + m.Address +
|
||||||
|
", " +
|
||||||
|
"PublicKey:" + hex.EncodeToString(m.PubKey) +
|
||||||
|
", " +
|
||||||
|
"Options: [" + strings.Join(m.Options, ",") + "]>"
|
||||||
|
}
|
||||||
|
|
||||||
|
// String returns string representation of SpreadMap.
|
||||||
|
func (m SpreadMap) String() string {
|
||||||
|
result := make([]string, 0, len(m.NetMap))
|
||||||
|
for i := range m.NetMap {
|
||||||
|
result = append(result, m.NetMap[i].String())
|
||||||
|
}
|
||||||
|
return "(SpreadMap)<" +
|
||||||
|
"Epoch: " + strconv.FormatUint(m.Epoch, 10) +
|
||||||
|
", " +
|
||||||
|
"Netmap: [" + strings.Join(result, ",") + "]>"
|
||||||
|
}
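A brief sketch of the option parsing and the status bit defined above; the address and option values are illustrative.

func nodeInfoSketch() {
	info := NodeInfo{
		Address: "node.example:8080",
		Options: []string{"/Capacity:1024", "/Price:2.5"},
	}

	_ = info.Capacity() // 1024, taken verbatim from the "/Capacity:" option
	_ = info.Price()    // 2.5 GAS/TB-month converted to 1e-8 GAS per MB-month

	var st NodeStatus
	st.SetFull(true)
	_ = st.Full() // true: the node reports it has no free space left
}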
|
BIN
bootstrap/types.pb.go
Normal file
Binary file not shown.
22
bootstrap/types.proto
Normal file
|
@ -0,0 +1,22 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package bootstrap;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/bootstrap";
|
||||||
|
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
option (gogoproto.stringer_all) = false;
|
||||||
|
option (gogoproto.goproto_stringer_all) = false;
|
||||||
|
|
||||||
|
message SpreadMap {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
repeated NodeInfo NetMap = 2 [(gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
message NodeInfo {
|
||||||
|
string Address = 1 [(gogoproto.jsontag) = "address"];
|
||||||
|
bytes PubKey = 2 [(gogoproto.jsontag) = "pubkey,omitempty"];
|
||||||
|
repeated string Options = 3 [(gogoproto.jsontag) = "options,omitempty"];
|
||||||
|
uint64 Status = 4 [(gogoproto.jsontag) = "status", (gogoproto.nullable) = false, (gogoproto.customtype) = "NodeStatus"];
|
||||||
|
}
|
185
chain/address.go
Normal file
|
@ -0,0 +1,185 @@
|
||||||
|
package chain
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"crypto/ecdsa"
|
||||||
|
"crypto/sha256"
|
||||||
|
"encoding/hex"
|
||||||
|
|
||||||
|
"github.com/mr-tron/base58"
|
||||||
|
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/internal"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
"golang.org/x/crypto/ripemd160"
|
||||||
|
)
|
||||||
|
|
||||||
|
// WalletAddress implements NEO address.
|
||||||
|
type WalletAddress [AddressLength]byte
|
||||||
|
|
||||||
|
const (
|
||||||
|
// AddressLength contains size of address,
|
||||||
|
// 1 version byte (0x17) + 20 bytes of ScriptHash + 4 bytes of checksum.
|
||||||
|
AddressLength = 25
|
||||||
|
|
||||||
|
// ScriptHashLength contains size of ScriptHash.
|
||||||
|
ScriptHashLength = 20
|
||||||
|
|
||||||
|
// ErrEmptyAddress is raised when empty Address is passed.
|
||||||
|
ErrEmptyAddress = internal.Error("empty address")
|
||||||
|
|
||||||
|
// ErrAddressLength is raised when passed address has wrong size.
|
||||||
|
ErrAddressLength = internal.Error("wrong address length")
|
||||||
|
)
|
||||||
|
|
||||||
|
func checksum(sign []byte) []byte {
|
||||||
|
hash := sha256.Sum256(sign)
|
||||||
|
hash = sha256.Sum256(hash[:])
|
||||||
|
return hash[:4]
|
||||||
|
}
|
||||||
|
|
||||||
|
// FetchPublicKeys tries to parse public keys from verification script.
|
||||||
|
func FetchPublicKeys(vs []byte) []*ecdsa.PublicKey {
|
||||||
|
var (
|
||||||
|
count int
|
||||||
|
offset int
|
||||||
|
ln = len(vs)
|
||||||
|
result []*ecdsa.PublicKey
|
||||||
|
)
|
||||||
|
|
||||||
|
switch {
|
||||||
|
case ln < 1: // wrong data size
|
||||||
|
return nil
|
||||||
|
case vs[ln-1] == 0xac: // last byte is CHECKSIG
|
||||||
|
count = 1
|
||||||
|
case vs[ln-1] == 0xae: // last byte is CHECKMULTISIG
|
||||||
|
// the 2nd byte from the end encodes the number of public keys
|
||||||
|
count = int(vs[ln-2] - 0x50)
|
||||||
|
// start reading after the leading threshold byte (PUSH1)
|
||||||
|
offset = 1
|
||||||
|
default: // unknown type
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
result = make([]*ecdsa.PublicKey, 0, count)
|
||||||
|
for i := 0; i < count; i++ {
|
||||||
|
// skip the PUSHBYTES33 prefix and try to parse the key
|
||||||
|
from, to := offset+1, offset+1+crypto.PublicKeyCompressedSize
|
||||||
|
|
||||||
|
// when passed VerificationScript has wrong size
|
||||||
|
if len(vs) < to {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
key := crypto.UnmarshalPublicKey(vs[from:to])
|
||||||
|
// when wrong public key is passed
|
||||||
|
if key == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
result = append(result, key)
|
||||||
|
|
||||||
|
offset += 1 + crypto.PublicKeyCompressedSize
|
||||||
|
}
|
||||||
|
return result
|
||||||
|
}
|
||||||
|
|
||||||
|
// VerificationScript returns VerificationScript composed from public keys.
|
||||||
|
func VerificationScript(pubs ...*ecdsa.PublicKey) []byte {
|
||||||
|
var (
|
||||||
|
pre []byte
|
||||||
|
suf []byte
|
||||||
|
body []byte
|
||||||
|
offset int
|
||||||
|
lnPK = len(pubs)
|
||||||
|
ln = crypto.PublicKeyCompressedSize*lnPK + lnPK // 33 * count + count * 1 (PUSHBYTES33)
|
||||||
|
)
|
||||||
|
|
||||||
|
if len(pubs) > 1 {
|
||||||
|
pre = []byte{0x51} // PUSH1: one required signature
|
||||||
|
suf = []byte{byte(0x50 + lnPK), 0xae} // count of PK's + CHECKMULTISIG
|
||||||
|
} else {
|
||||||
|
suf = []byte{0xac} // CHECKSIG
|
||||||
|
}
|
||||||
|
|
||||||
|
ln += len(pre) + len(suf)
|
||||||
|
|
||||||
|
body = make([]byte, ln)
|
||||||
|
offset += copy(body, pre)
|
||||||
|
|
||||||
|
for i := range pubs {
|
||||||
|
body[offset] = 0x21
|
||||||
|
offset++
|
||||||
|
offset += copy(body[offset:], crypto.MarshalPublicKey(pubs[i]))
|
||||||
|
}
|
||||||
|
|
||||||
|
copy(body[offset:], suf)
|
||||||
|
|
||||||
|
return body
|
||||||
|
}
|
||||||
|
|
||||||
|
// KeysToAddress returns a NEO address composed from public keys.
|
||||||
|
func KeysToAddress(pubs ...*ecdsa.PublicKey) string {
|
||||||
|
if len(pubs) == 0 {
|
||||||
|
return ""
|
||||||
|
}
|
||||||
|
return Address(VerificationScript(pubs...))
|
||||||
|
}
|
||||||
|
|
||||||
|
// Address returns NEO address based on passed VerificationScript.
|
||||||
|
func Address(verificationScript []byte) string {
|
||||||
|
sign := [AddressLength]byte{0x17}
|
||||||
|
hash := sha256.Sum256(verificationScript)
|
||||||
|
ripe := ripemd160.New()
|
||||||
|
ripe.Write(hash[:])
|
||||||
|
copy(sign[1:], ripe.Sum(nil))
|
||||||
|
copy(sign[21:], checksum(sign[:21]))
|
||||||
|
return base58.Encode(sign[:])
|
||||||
|
}
|
||||||
|
|
||||||
|
// ReversedScriptHashToAddress parses script hash and returns valid NEO address.
|
||||||
|
func ReversedScriptHashToAddress(sc string) (addr string, err error) {
|
||||||
|
var data []byte
|
||||||
|
if data, err = DecodeScriptHash(sc); err != nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
sign := [AddressLength]byte{0x17}
|
||||||
|
copy(sign[1:], data)
|
||||||
|
copy(sign[1+ScriptHashLength:], checksum(sign[:1+ScriptHashLength]))
|
||||||
|
return base58.Encode(sign[:]), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// IsAddress checks that passed NEO Address is valid.
|
||||||
|
func IsAddress(s string) error {
|
||||||
|
if s == "" {
|
||||||
|
return ErrEmptyAddress
|
||||||
|
} else if addr, err := base58.Decode(s); err != nil {
|
||||||
|
return errors.Wrap(err, "base58 decode")
|
||||||
|
} else if ln := len(addr); ln != AddressLength {
|
||||||
|
return errors.Wrapf(ErrAddressLength, "length %d != %d", AddressLength, ln)
|
||||||
|
} else if sum := checksum(addr[:21]); !bytes.Equal(addr[21:], sum) {
|
||||||
|
return errors.Errorf("wrong checksum %0x != %0x",
|
||||||
|
addr[21:], sum)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// ReverseBytes reverses the given byte slice in place and returns it.
|
||||||
|
func ReverseBytes(data []byte) []byte {
|
||||||
|
for i, j := 0, len(data)-1; i < j; i, j = i+1, j-1 {
|
||||||
|
data[i], data[j] = data[j], data[i]
|
||||||
|
}
|
||||||
|
return data
|
||||||
|
}
|
||||||
|
|
||||||
|
// DecodeScriptHash parses script hash into slice of bytes.
|
||||||
|
func DecodeScriptHash(s string) ([]byte, error) {
|
||||||
|
if s == "" {
|
||||||
|
return nil, ErrEmptyAddress
|
||||||
|
} else if addr, err := hex.DecodeString(s); err != nil {
|
||||||
|
return nil, errors.Wrap(err, "hex decode")
|
||||||
|
} else if ln := len(addr); ln != ScriptHashLength {
|
||||||
|
return nil, errors.Wrapf(ErrAddressLength, "length %d != %d", ScriptHashLength, ln)
|
||||||
|
} else {
|
||||||
|
return addr, nil
|
||||||
|
}
|
||||||
|
}
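The helpers above compose as follows; a short sketch inside this package, using the same test key helper as the tests below (the example function name is illustrative).

func addressSketch() error {
	key := test.DecodeKey(0) // from github.com/nspcc-dev/neofs-crypto/test

	script := VerificationScript(&key.PublicKey) // PUSHBYTES33 <key> CHECKSIG
	addr := Address(script)                      // base58(0x17 || RIPEMD160(SHA256(script)) || 4-byte checksum)

	if keys := FetchPublicKeys(script); len(keys) != 1 {
		return errors.New("expected to round-trip a single public key")
	}
	return IsAddress(addr) // nil for any address produced by Address
}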
|
292
chain/address_test.go
Normal file
|
@ -0,0 +1,292 @@
|
||||||
|
package chain
|
||||||
|
|
||||||
|
import (
|
||||||
|
"crypto/ecdsa"
|
||||||
|
"encoding/hex"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||||
|
"github.com/nspcc-dev/neofs-crypto/test"
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestAddress(t *testing.T) {
|
||||||
|
var (
|
||||||
|
multiSigVerificationScript = "512103c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c57172103fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df952ae"
|
||||||
|
multiSigAddress = "ANbvKqa2SfgTUkq43NRUhCiyxPrpUPn7S3"
|
||||||
|
|
||||||
|
normalVerificationScript = "2102a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61ac"
|
||||||
|
normalAddress = "AcraNnCuPKnUYtPYyrACRCVJhLpvskbfhu"
|
||||||
|
)
|
||||||
|
|
||||||
|
t.Run("check multi-sig address", func(t *testing.T) {
|
||||||
|
data, err := hex.DecodeString(multiSigVerificationScript)
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, multiSigAddress, Address(data))
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("check normal address", func(t *testing.T) {
|
||||||
|
data, err := hex.DecodeString(normalVerificationScript)
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, normalAddress, Address(data))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestVerificationScript(t *testing.T) {
|
||||||
|
t.Run("check normal", func(t *testing.T) {
|
||||||
|
pkString := "02a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61"
|
||||||
|
|
||||||
|
pkBytes, err := hex.DecodeString(pkString)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk := crypto.UnmarshalPublicKey(pkBytes)
|
||||||
|
|
||||||
|
expect, err := hex.DecodeString(
|
||||||
|
"21" + pkString + // PUSHBYTES33
|
||||||
|
"ac", // CHECKSIG
|
||||||
|
)
|
||||||
|
|
||||||
|
require.Equal(t, expect, VerificationScript(pk))
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("check multisig", func(t *testing.T) {
|
||||||
|
pk1String := "03c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c5717"
|
||||||
|
pk2String := "03fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df9"
|
||||||
|
|
||||||
|
pk1Bytes, err := hex.DecodeString(pk1String)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk1 := crypto.UnmarshalPublicKey(pk1Bytes)
|
||||||
|
|
||||||
|
pk2Bytes, err := hex.DecodeString(pk2String)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk2 := crypto.UnmarshalPublicKey(pk2Bytes)
|
||||||
|
|
||||||
|
expect, err := hex.DecodeString(
|
||||||
|
"51" + // one address
|
||||||
|
"21" + pk1String + // PUSHBYTES33
|
||||||
|
"21" + pk2String + // PUSHBYTES33
|
||||||
|
"52" + // 2 PublicKeys
|
||||||
|
"ae", // CHECKMULTISIG
|
||||||
|
)
|
||||||
|
|
||||||
|
require.Equal(t, expect, VerificationScript(pk1, pk2))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestKeysToAddress(t *testing.T) {
|
||||||
|
t.Run("check normal", func(t *testing.T) {
|
||||||
|
pkString := "02a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61"
|
||||||
|
|
||||||
|
pkBytes, err := hex.DecodeString(pkString)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk := crypto.UnmarshalPublicKey(pkBytes)
|
||||||
|
|
||||||
|
expect := "AcraNnCuPKnUYtPYyrACRCVJhLpvskbfhu"
|
||||||
|
|
||||||
|
actual := KeysToAddress(pk)
|
||||||
|
require.Equal(t, expect, actual)
|
||||||
|
require.NoError(t, IsAddress(actual))
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("check multisig", func(t *testing.T) {
|
||||||
|
pk1String := "03c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c5717"
|
||||||
|
pk2String := "03fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df9"
|
||||||
|
|
||||||
|
pk1Bytes, err := hex.DecodeString(pk1String)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk1 := crypto.UnmarshalPublicKey(pk1Bytes)
|
||||||
|
|
||||||
|
pk2Bytes, err := hex.DecodeString(pk2String)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk2 := crypto.UnmarshalPublicKey(pk2Bytes)
|
||||||
|
|
||||||
|
expect := "ANbvKqa2SfgTUkq43NRUhCiyxPrpUPn7S3"
|
||||||
|
actual := KeysToAddress(pk1, pk2)
|
||||||
|
require.Equal(t, expect, actual)
|
||||||
|
require.NoError(t, IsAddress(actual))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestFetchPublicKeys(t *testing.T) {
|
||||||
|
var (
|
||||||
|
multiSigVerificationScript = "512103c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c57172103fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df952ae"
|
||||||
|
normalVerificationScript = "2102a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61ac"
|
||||||
|
|
||||||
|
pk1String = "03c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c5717"
|
||||||
|
pk2String = "03fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df9"
|
||||||
|
pk3String = "02a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61"
|
||||||
|
)
|
||||||
|
|
||||||
|
t.Run("shouls not fail", func(t *testing.T) {
|
||||||
|
wrongVS, err := hex.DecodeString(multiSigVerificationScript)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
wrongVS[len(wrongVS)-1] = 0x1
|
||||||
|
|
||||||
|
wrongPK, err := hex.DecodeString(multiSigVerificationScript)
|
||||||
|
require.NoError(t, err)
|
||||||
|
wrongPK[2] = 0x1
|
||||||
|
|
||||||
|
var testCases = []struct {
|
||||||
|
name string
|
||||||
|
value []byte
|
||||||
|
}{
|
||||||
|
{name: "empty VerificationScript"},
|
||||||
|
{
|
||||||
|
name: "wrong size VerificationScript",
|
||||||
|
value: []byte{0x1},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "wrong VerificationScript type",
|
||||||
|
value: wrongVS,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "wrong public key in VerificationScript",
|
||||||
|
value: wrongPK,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for i := range testCases {
|
||||||
|
tt := testCases[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
var keys []*ecdsa.PublicKey
|
||||||
|
require.NotPanics(t, func() {
|
||||||
|
keys = FetchPublicKeys(tt.value)
|
||||||
|
})
|
||||||
|
require.Nil(t, keys)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("check multi-sig address", func(t *testing.T) {
|
||||||
|
data, err := hex.DecodeString(multiSigVerificationScript)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk1Bytes, err := hex.DecodeString(pk1String)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk2Bytes, err := hex.DecodeString(pk2String)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk1 := crypto.UnmarshalPublicKey(pk1Bytes)
|
||||||
|
pk2 := crypto.UnmarshalPublicKey(pk2Bytes)
|
||||||
|
|
||||||
|
keys := FetchPublicKeys(data)
|
||||||
|
require.Len(t, keys, 2)
|
||||||
|
require.Equal(t, keys[0], pk1)
|
||||||
|
require.Equal(t, keys[1], pk2)
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("check normal address", func(t *testing.T) {
|
||||||
|
data, err := hex.DecodeString(normalVerificationScript)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pkBytes, err := hex.DecodeString(pk3String)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
pk := crypto.UnmarshalPublicKey(pkBytes)
|
||||||
|
|
||||||
|
keys := FetchPublicKeys(data)
|
||||||
|
require.Len(t, keys, 1)
|
||||||
|
require.Equal(t, keys[0], pk)
|
||||||
|
})
|
||||||
|
|
||||||
|
t.Run("generate 10 keys VerificationScript and try parse it", func(t *testing.T) {
|
||||||
|
var (
|
||||||
|
count = 10
|
||||||
|
expect = make([]*ecdsa.PublicKey, 0, count)
|
||||||
|
)
|
||||||
|
|
||||||
|
for i := 0; i < count; i++ {
|
||||||
|
key := test.DecodeKey(i)
|
||||||
|
expect = append(expect, &key.PublicKey)
|
||||||
|
}
|
||||||
|
|
||||||
|
vs := VerificationScript(expect...)
|
||||||
|
|
||||||
|
actual := FetchPublicKeys(vs)
|
||||||
|
require.Equal(t, expect, actual)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestReversedScriptHashToAddress(t *testing.T) {
|
||||||
|
var testCases = []struct {
|
||||||
|
name string
|
||||||
|
value string
|
||||||
|
expect string
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
name: "first",
|
||||||
|
expect: "APfiG5imQgn8dzTTfaDfqHnxo3QDUkF69A",
|
||||||
|
value: "5696acd07f0927fd5f01946828638c9e2c90c5dc",
|
||||||
|
},
|
||||||
|
|
||||||
|
{
|
||||||
|
name: "second",
|
||||||
|
expect: "AK2nJJpJr6o664CWJKi1QRXjqeic2zRp8y",
|
||||||
|
value: "23ba2703c53263e8d6e522dc32203339dcd8eee9",
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for i := range testCases {
|
||||||
|
tt := testCases[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
actual, err := ReversedScriptHashToAddress(tt.value)
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.Equal(t, tt.expect, actual)
|
||||||
|
require.NoError(t, IsAddress(actual))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestReverseBytes(t *testing.T) {
|
||||||
|
var testCases = []struct {
|
||||||
|
name string
|
||||||
|
value []byte
|
||||||
|
expect []byte
|
||||||
|
}{
|
||||||
|
{name: "empty"},
|
||||||
|
{
|
||||||
|
name: "single byte",
|
||||||
|
expect: []byte{0x1},
|
||||||
|
value: []byte{0x1},
|
||||||
|
},
|
||||||
|
|
||||||
|
{
|
||||||
|
name: "two bytes",
|
||||||
|
expect: []byte{0x2, 0x1},
|
||||||
|
value: []byte{0x1, 0x2},
|
||||||
|
},
|
||||||
|
|
||||||
|
{
|
||||||
|
name: "three bytes",
|
||||||
|
expect: []byte{0x3, 0x2, 0x1},
|
||||||
|
value: []byte{0x1, 0x2, 0x3},
|
||||||
|
},
|
||||||
|
|
||||||
|
{
|
||||||
|
name: "five bytes",
|
||||||
|
expect: []byte{0x5, 0x4, 0x3, 0x2, 0x1},
|
||||||
|
value: []byte{0x1, 0x2, 0x3, 0x4, 0x5},
|
||||||
|
},
|
||||||
|
|
||||||
|
{
|
||||||
|
name: "eight bytes",
|
||||||
|
expect: []byte{0x8, 0x7, 0x6, 0x5, 0x4, 0x3, 0x2, 0x1},
|
||||||
|
value: []byte{0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for i := range testCases {
|
||||||
|
tt := testCases[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
actual := ReverseBytes(tt.value)
|
||||||
|
require.Equal(t, tt.expect, actual)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
68
container/service.go
Normal file
|
@ -0,0 +1,68 @@
|
||||||
|
package container
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"encoding/binary"
|
||||||
|
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
)
|
||||||
|
|
||||||
|
type (
|
||||||
|
// CID type alias.
|
||||||
|
CID = refs.CID
|
||||||
|
// UUID type alias.
|
||||||
|
UUID = refs.UUID
|
||||||
|
// OwnerID type alias.
|
||||||
|
OwnerID = refs.OwnerID
|
||||||
|
// MessageID type alias.
|
||||||
|
MessageID = refs.MessageID
|
||||||
|
)
|
||||||
|
|
||||||
|
// SetTTL sets ttl to GetRequest to satisfy TTLRequest interface.
|
||||||
|
func (m *GetRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SetTTL sets ttl to PutRequest to satisfy TTLRequest interface.
|
||||||
|
func (m *PutRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SetTTL sets ttl to ListRequest to satisfy TTLRequest interface.
|
||||||
|
func (m *ListRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SetTTL sets ttl to DeleteRequest to satisfy TTLRequest interface.
|
||||||
|
func (m *DeleteRequest) SetTTL(v uint32) { m.TTL = v }
|
||||||
|
|
||||||
|
// SetSignature sets signature to PutRequest to satisfy SignedRequest interface.
|
||||||
|
func (m *PutRequest) SetSignature(v []byte) { m.Signature = v }
|
||||||
|
|
||||||
|
// SetSignature sets signature to DeleteRequest to satisfy SignedRequest interface.
|
||||||
|
func (m *DeleteRequest) SetSignature(v []byte) { m.Signature = v }
|
||||||
|
|
||||||
|
// PrepareData prepares bytes representation of PutRequest to satisfy SignedRequest interface.
|
||||||
|
func (m *PutRequest) PrepareData() ([]byte, error) {
|
||||||
|
var (
|
||||||
|
err error
|
||||||
|
buf = new(bytes.Buffer)
|
||||||
|
capBytes = make([]byte, 8)
|
||||||
|
)
|
||||||
|
|
||||||
|
binary.BigEndian.PutUint64(capBytes, m.Capacity)
|
||||||
|
|
||||||
|
if _, err = buf.Write(m.MessageID.Bytes()); err != nil {
|
||||||
|
return nil, errors.Wrap(err, "could not write message id")
|
||||||
|
} else if _, err = buf.Write(capBytes); err != nil {
|
||||||
|
return nil, errors.Wrap(err, "could not write capacity")
|
||||||
|
} else if _, err = buf.Write(m.OwnerID.Bytes()); err != nil {
|
||||||
|
return nil, errors.Wrap(err, "could not write pub")
|
||||||
|
} else if data, err := m.Rules.Marshal(); err != nil {
|
||||||
|
return nil, errors.Wrap(err, "could not marshal placement")
|
||||||
|
} else if _, err = buf.Write(data); err != nil {
|
||||||
|
return nil, errors.Wrap(err, "could not write placement")
|
||||||
|
}
|
||||||
|
|
||||||
|
return buf.Bytes(), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// PrepareData prepares bytes representation of DeleteRequest to satisfy SignedRequest interface.
|
||||||
|
func (m *DeleteRequest) PrepareData() ([]byte, error) {
|
||||||
|
return m.CID.Bytes(), nil
|
||||||
|
}
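A hypothetical helper (names and literal values are illustrative, not part of this commit) showing how a container PutRequest could be assembled and signed with the PrepareData above; it assumes the crypto/ecdsa, neofs-crypto and netmap imports.

func newSignedContainerPut(owner OwnerID, msgID MessageID, rules netmap.PlacementRule, key *ecdsa.PrivateKey) (*PutRequest, error) {
	req := &PutRequest{
		MessageID: msgID,
		Capacity:  100, // not a size in megabytes but a storage-availability weight, see service.proto
		OwnerID:   owner,
		Rules:     rules,
		TTL:       1,
	}

	data, err := req.PrepareData() // MessageID || Capacity || OwnerID || marshalled rules
	if err != nil {
		return nil, err
	}
	sig, err := crypto.SignRFC6979(key, data)
	if err != nil {
		return nil, err
	}
	req.SetSignature(sig)
	return req, nil
}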
|
BIN
container/service.pb.go
Normal file
Binary file not shown.
68
container/service.proto
Normal file
|
@ -0,0 +1,68 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package container;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/container";
|
||||||
|
|
||||||
|
import "container/types.proto";
|
||||||
|
import "github.com/nspcc-dev/netmap/selector.proto";
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
service Service {
|
||||||
|
// Create container
|
||||||
|
rpc Put(PutRequest) returns (PutResponse);
|
||||||
|
|
||||||
|
// Delete container ... discuss implementation later
|
||||||
|
rpc Delete(DeleteRequest) returns (DeleteResponse);
|
||||||
|
|
||||||
|
// Get container
|
||||||
|
rpc Get(GetRequest) returns (GetResponse);
|
||||||
|
|
||||||
|
rpc List(ListRequest) returns (ListResponse);
|
||||||
|
}
|
||||||
|
|
||||||
|
// PutRequest message to create a new container
|
||||||
|
message PutRequest {
|
||||||
|
bytes MessageID = 1 [(gogoproto.customtype) = "MessageID", (gogoproto.nullable) = false];
|
||||||
|
uint64 Capacity = 2; // not actual size in megabytes, but probability of storage availability
|
||||||
|
bytes OwnerID = 3 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
netmap.PlacementRule rules = 4 [(gogoproto.nullable) = false];
|
||||||
|
bytes Signature = 5;
|
||||||
|
uint32 TTL = 6;
|
||||||
|
}
|
||||||
|
|
||||||
|
// PutResponse message returns the CID of the created container
|
||||||
|
message PutResponse {
|
||||||
|
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
message DeleteRequest {
|
||||||
|
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 2;
|
||||||
|
bytes Signature = 3;
|
||||||
|
}
|
||||||
|
|
||||||
|
message DeleteResponse { }
|
||||||
|
|
||||||
|
|
||||||
|
// GetRequest message to fetch container placement rules
|
||||||
|
message GetRequest {
|
||||||
|
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetResponse message with container structure
|
||||||
|
message GetResponse {
|
||||||
|
container.Container Container = 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ListRequest message to list containers for user
|
||||||
|
message ListRequest {
|
||||||
|
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
// ListResponse message to respond about all user containers
|
||||||
|
message ListResponse {
|
||||||
|
repeated bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||||
|
}
|
94
container/types.go
Normal file
|
@ -0,0 +1,94 @@
|
||||||
|
package container
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
|
||||||
|
"github.com/google/uuid"
|
||||||
|
"github.com/nspcc-dev/neofs-crypto/test"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/internal"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
"github.com/nspcc-dev/netmap"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
_ internal.Custom = (*Container)(nil)
|
||||||
|
|
||||||
|
emptySalt = (UUID{}).Bytes()
|
||||||
|
emptyOwner = (OwnerID{}).Bytes()
|
||||||
|
)
|
||||||
|
|
||||||
|
// New creates new user container based on capacity, OwnerID and PlacementRules.
|
||||||
|
func New(cap uint64, owner OwnerID, rules netmap.PlacementRule) (*Container, error) {
|
||||||
|
if bytes.Equal(owner[:], emptyOwner) {
|
||||||
|
return nil, refs.ErrEmptyOwner
|
||||||
|
} else if cap == 0 {
|
||||||
|
return nil, refs.ErrEmptyCapacity
|
||||||
|
}
|
||||||
|
|
||||||
|
salt, err := uuid.NewRandom()
|
||||||
|
if err != nil {
|
||||||
|
return nil, errors.Wrap(err, "could not create salt")
|
||||||
|
}
|
||||||
|
|
||||||
|
return &Container{
|
||||||
|
OwnerID: owner,
|
||||||
|
Salt: UUID(salt),
|
||||||
|
Capacity: cap,
|
||||||
|
Rules: rules,
|
||||||
|
}, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Bytes returns bytes representation of Container.
|
||||||
|
func (m *Container) Bytes() []byte {
|
||||||
|
data, err := m.Marshal()
|
||||||
|
if err != nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
return data
|
||||||
|
}
|
||||||
|
|
||||||
|
// ID returns generated ContainerID based on Container (data).
|
||||||
|
func (m *Container) ID() (CID, error) {
|
||||||
|
if m.Empty() {
|
||||||
|
return CID{}, refs.ErrEmptyContainer
|
||||||
|
}
|
||||||
|
data, err := m.Marshal()
|
||||||
|
if err != nil {
|
||||||
|
return CID{}, err
|
||||||
|
}
|
||||||
|
|
||||||
|
return refs.CIDForBytes(data), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Empty checks that container is empty.
|
||||||
|
func (m *Container) Empty() bool {
|
||||||
|
return m.Capacity == 0 || bytes.Equal(m.Salt.Bytes(), emptySalt) || bytes.Equal(m.OwnerID.Bytes(), emptyOwner)
|
||||||
|
}
|
||||||
|
|
||||||
|
// -- Test container definition -- //
|
||||||
|
// NewTestContainer returns test container.
|
||||||
|
//
|
||||||
|
// WARNING: DON'T USE THIS OUTSIDE TESTS.
|
||||||
|
func NewTestContainer() (*Container, error) {
|
||||||
|
key := test.DecodeKey(0)
|
||||||
|
owner, err := refs.NewOwnerID(&key.PublicKey)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return New(100, owner, netmap.PlacementRule{
|
||||||
|
ReplFactor: 2,
|
||||||
|
SFGroups: []netmap.SFGroup{
|
||||||
|
{
|
||||||
|
Selectors: []netmap.Select{
|
||||||
|
{Key: "Country", Count: 1},
|
||||||
|
{Key: netmap.NodesBucket, Count: 2},
|
||||||
|
},
|
||||||
|
Filters: []netmap.Filter{
|
||||||
|
{Key: "Country", F: netmap.FilterIn("USA")},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
})
|
||||||
|
}
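A short sketch of deriving a container ID with the constructors above; the capacity value is illustrative and the crypto/ecdsa import is assumed.

func containerIDSketch(key *ecdsa.PrivateKey, rules netmap.PlacementRule) (CID, error) {
	owner, err := refs.NewOwnerID(&key.PublicKey)
	if err != nil {
		return CID{}, err
	}

	cnr, err := New(100, owner, rules)
	if err != nil {
		return CID{}, err
	}

	// The ID is a hash of the marshalled container, so it changes with the random Salt.
	return cnr.ID()
}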
|
BIN
container/types.pb.go
Normal file
Binary file not shown.
16
container/types.proto
Normal file
|
@ -0,0 +1,16 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package container;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/container";
|
||||||
|
|
||||||
|
import "github.com/nspcc-dev/netmap/selector.proto";
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
// The Container message definition.
|
||||||
|
message Container {
|
||||||
|
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
bytes Salt = 2 [(gogoproto.customtype) = "UUID", (gogoproto.nullable) = false];
|
||||||
|
uint64 Capacity = 3;
|
||||||
|
netmap.PlacementRule Rules = 4 [(gogoproto.nullable) = false];
|
||||||
|
}
|
57
container/types_test.go
Normal file
|
@ -0,0 +1,57 @@
|
||||||
|
package container
|
||||||
|
|
||||||
|
import (
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/gogo/protobuf/proto"
|
||||||
|
"github.com/nspcc-dev/neofs-crypto/test"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
"github.com/nspcc-dev/netmap"
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestCID(t *testing.T) {
|
||||||
|
t.Run("check that marshal/unmarshal works like expected", func(t *testing.T) {
|
||||||
|
var (
|
||||||
|
c2 Container
|
||||||
|
cid2 CID
|
||||||
|
key = test.DecodeKey(0)
|
||||||
|
)
|
||||||
|
|
||||||
|
rules := netmap.PlacementRule{
|
||||||
|
ReplFactor: 2,
|
||||||
|
SFGroups: []netmap.SFGroup{
|
||||||
|
{
|
||||||
|
Selectors: []netmap.Select{
|
||||||
|
{Key: "Country", Count: 1},
|
||||||
|
{Key: netmap.NodesBucket, Count: 2},
|
||||||
|
},
|
||||||
|
Filters: []netmap.Filter{
|
||||||
|
{Key: "Country", F: netmap.FilterIn("USA")},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
owner, err := refs.NewOwnerID(&key.PublicKey)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
c1, err := New(10, owner, rules)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
data, err := proto.Marshal(c1)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.NoError(t, c2.Unmarshal(data))
|
||||||
|
require.Equal(t, c1, &c2)
|
||||||
|
|
||||||
|
cid1, err := c1.ID()
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
data, err = proto.Marshal(&cid1)
|
||||||
|
require.NoError(t, err)
|
||||||
|
require.NoError(t, cid2.Unmarshal(data))
|
||||||
|
|
||||||
|
require.Equal(t, cid1, cid2)
|
||||||
|
})
|
||||||
|
}
|
110
decimal/decimal.go
Normal file
|
@ -0,0 +1,110 @@
|
||||||
|
package decimal
|
||||||
|
|
||||||
|
import (
|
||||||
|
"math"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
)
|
||||||
|
|
||||||
|
// GASPrecision contains precision for NEO Gas token.
|
||||||
|
const GASPrecision = 8
|
||||||
|
|
||||||
|
// Zero is empty Decimal value.
|
||||||
|
var Zero = &Decimal{}
|
||||||
|
|
||||||
|
// New returns new Decimal (in satoshi).
|
||||||
|
func New(v int64) *Decimal {
|
||||||
|
return NewWithPrecision(v, GASPrecision)
|
||||||
|
}
|
||||||
|
|
||||||
|
// NewGAS returns a new Decimal for a value given in GAS (scaled by 1e8).
|
||||||
|
func NewGAS(v int64) *Decimal {
|
||||||
|
v *= int64(math.Pow10(GASPrecision))
|
||||||
|
return NewWithPrecision(v, GASPrecision)
|
||||||
|
}
|
||||||
|
|
||||||
|
// NewWithPrecision returns new Decimal with custom precision.
|
||||||
|
func NewWithPrecision(v int64, p uint32) *Decimal {
|
||||||
|
return &Decimal{Value: v, Precision: p}
|
||||||
|
}
|
||||||
|
|
||||||
|
// ParseFloat returns a new Decimal parsed from float64 * 1e8 (in GAS).
|
||||||
|
func ParseFloat(v float64) *Decimal {
|
||||||
|
return new(Decimal).Parse(v, GASPrecision)
|
||||||
|
}
|
||||||
|
|
||||||
|
// ParseFloatWithPrecision returns a new Decimal parsed from float64 * 10^p.
|
||||||
|
func ParseFloatWithPrecision(v float64, p int) *Decimal {
|
||||||
|
return new(Decimal).Parse(v, p)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Copy returns copy of current Decimal.
|
||||||
|
func (m *Decimal) Copy() *Decimal { return &Decimal{Value: m.Value, Precision: m.Precision} }
|
||||||
|
|
||||||
|
// Parse returns parsed Decimal from float64 * 10^p.
|
||||||
|
func (m *Decimal) Parse(v float64, p int) *Decimal {
|
||||||
|
m.Value = int64(v * math.Pow10(p))
|
||||||
|
m.Precision = uint32(p)
|
||||||
|
return m
|
||||||
|
}
|
||||||
|
|
||||||
|
// String returns string representation of Decimal.
|
||||||
|
func (m Decimal) String() string {
|
||||||
|
buf := new(strings.Builder)
|
||||||
|
val := m.Value
|
||||||
|
dec := int64(math.Pow10(int(m.Precision)))
|
||||||
|
if val < 0 {
|
||||||
|
buf.WriteRune('-')
|
||||||
|
val = -val
|
||||||
|
}
|
||||||
|
str := strconv.FormatInt(val/dec, 10)
|
||||||
|
buf.WriteString(str)
|
||||||
|
val %= dec
|
||||||
|
if val > 0 {
|
||||||
|
buf.WriteRune('.')
|
||||||
|
str = strconv.FormatInt(val, 10)
|
||||||
|
for i := len(str); i < int(m.Precision); i++ {
|
||||||
|
buf.WriteRune('0')
|
||||||
|
}
|
||||||
|
buf.WriteString(strings.TrimRight(str, "0"))
|
||||||
|
}
|
||||||
|
return buf.String()
|
||||||
|
}
|
||||||
|
|
||||||
|
// Add returns d + m.
|
||||||
|
func (m Decimal) Add(d *Decimal) *Decimal {
|
||||||
|
precision := m.Precision
|
||||||
|
if precision < d.Precision {
|
||||||
|
precision = d.Precision
|
||||||
|
}
|
||||||
|
return &Decimal{
|
||||||
|
Value: m.Value + d.Value,
|
||||||
|
Precision: precision,
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Zero checks that Decimal is empty.
|
||||||
|
func (m Decimal) Zero() bool { return m.Value == 0 }
|
||||||
|
|
||||||
|
// Equal checks that current Decimal is equal to passed Decimal.
|
||||||
|
func (m Decimal) Equal(v *Decimal) bool { return m.Value == v.Value && m.Precision == v.Precision }
|
||||||
|
|
||||||
|
// GT checks that m > v.
|
||||||
|
func (m Decimal) GT(v *Decimal) bool { return m.Value > v.Value }
|
||||||
|
|
||||||
|
// GTE checks that m >= v.
|
||||||
|
func (m Decimal) GTE(v *Decimal) bool { return m.Value >= v.Value }
|
||||||
|
|
||||||
|
// LT checks that m < v.
|
||||||
|
func (m Decimal) LT(v *Decimal) bool { return m.Value < v.Value }
|
||||||
|
|
||||||
|
// LTE checks that m <= v.
|
||||||
|
func (m Decimal) LTE(v *Decimal) bool { return m.Value <= v.Value }
|
||||||
|
|
||||||
|
// Neg returns negative representation of current Decimal (m * -1).
|
||||||
|
func (m Decimal) Neg() *Decimal {
|
||||||
|
return &Decimal{
|
||||||
|
Value: m.Value * -1,
|
||||||
|
Precision: m.Precision,
|
||||||
|
}
|
||||||
|
}
|
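The Decimal message itself is generated from decimal.proto below; the helpers above give it fixed-point semantics on top of an int64 value and a precision. A minimal usage sketch, assuming the generated type and GASPrecision = 8 as defined in this file (values and printed output are illustrative):

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/decimal"
)

func main() {
	// 100 GAS expressed in the smallest unit (1e8 units per GAS).
	a := decimal.NewGAS(100)     // Value: 1e10, Precision: 8
	b := decimal.ParseFloat(0.5) // Value: 5e7,  Precision: 8

	sum := a.Add(b)
	fmt.Println(sum)     // "100.5" via the custom String method
	fmt.Println(a.GT(b)) // true
	fmt.Println(b.Neg()) // "-0.5"
}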
BIN
decimal/decimal.pb.go
Normal file
Binary file not shown.
14
decimal/decimal.proto
Normal file
@@ -0,0 +1,14 @@
syntax = "proto3";
package decimal;
option go_package = "github.com/nspcc-dev/neofs-proto/decimal";

import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

message Decimal {
    option (gogoproto.goproto_stringer) = false;

    int64 Value = 1;
    uint32 Precision = 2;
}
445
decimal/decimal_test.go
Normal file
@@ -0,0 +1,445 @@
|
||||||
|
package decimal
|
||||||
|
|
||||||
|
import (
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestDecimal_Parse(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
value float64
|
||||||
|
name string
|
||||||
|
expect *Decimal
|
||||||
|
}{
|
||||||
|
{name: "empty", expect: &Decimal{Precision: GASPrecision}},
|
||||||
|
|
||||||
|
{
|
||||||
|
value: 100,
|
||||||
|
name: "100 GAS",
|
||||||
|
expect: &Decimal{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.Equal(t, tt.expect, ParseFloat(tt.value))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_ParseWithPrecision(t *testing.T) {
|
||||||
|
type args struct {
|
||||||
|
v float64
|
||||||
|
p int
|
||||||
|
}
|
||||||
|
tests := []struct {
|
||||||
|
args args
|
||||||
|
name string
|
||||||
|
expect *Decimal
|
||||||
|
}{
|
||||||
|
{name: "empty", expect: &Decimal{}},
|
||||||
|
|
||||||
|
{
|
||||||
|
name: "empty precision",
|
||||||
|
expect: &Decimal{Value: 0, Precision: 0},
|
||||||
|
},
|
||||||
|
|
||||||
|
{
|
||||||
|
name: "100 GAS",
|
||||||
|
args: args{100, GASPrecision},
|
||||||
|
expect: &Decimal{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.Equal(t, tt.expect,
|
||||||
|
ParseFloatWithPrecision(tt.args.v, tt.args.p))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestNew(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
val int64
|
||||||
|
expect *Decimal
|
||||||
|
}{
|
||||||
|
{name: "empty", expect: &Decimal{Value: 0, Precision: GASPrecision}},
|
||||||
|
{name: "100 GAS", val: 1e10, expect: &Decimal{Value: 1e10, Precision: GASPrecision}},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.Equalf(t, tt.expect, New(tt.val), tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestNewGAS(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
val int64
|
||||||
|
expect *Decimal
|
||||||
|
}{
|
||||||
|
{name: "empty", expect: &Decimal{Value: 0, Precision: GASPrecision}},
|
||||||
|
{name: "100 GAS", val: 100, expect: &Decimal{Value: 1e10, Precision: GASPrecision}},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.Equalf(t, tt.expect, NewGAS(tt.val), tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
func TestNewWithPrecision(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
val int64
|
||||||
|
pre uint32
|
||||||
|
expect *Decimal
|
||||||
|
}{
|
||||||
|
{name: "empty", expect: &Decimal{}},
|
||||||
|
{name: "100 GAS", val: 1e10, pre: GASPrecision, expect: &Decimal{Value: 1e10, Precision: GASPrecision}},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.Equalf(t, tt.expect, NewWithPrecision(tt.val, tt.pre), tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_Neg(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
val int64
|
||||||
|
expect *Decimal
|
||||||
|
}{
|
||||||
|
{name: "empty", expect: &Decimal{Value: 0, Precision: GASPrecision}},
|
||||||
|
{name: "100 GAS", val: 1e10, expect: &Decimal{Value: -1e10, Precision: GASPrecision}},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
require.Equalf(t, tt.expect, New(tt.val).Neg(), tt.name)
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_String(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect string
|
||||||
|
value *Decimal
|
||||||
|
}{
|
||||||
|
{name: "empty", expect: "0", value: &Decimal{}},
|
||||||
|
{name: "100 GAS", expect: "100", value: &Decimal{Value: 1e10, Precision: GASPrecision}},
|
||||||
|
{name: "-100 GAS", expect: "-100", value: &Decimal{Value: -1e10, Precision: GASPrecision}},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.Equalf(t, tt.expect, tt.value.String(), tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
const SomethingElsePrecision = 5
|
||||||
|
|
||||||
|
func TestDecimal_Add(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect *Decimal
|
||||||
|
values [2]*Decimal
|
||||||
|
}{
|
||||||
|
{name: "empty", expect: &Decimal{}, values: [2]*Decimal{{}, {}}},
|
||||||
|
{
|
||||||
|
name: "5 GAS + 2 GAS",
|
||||||
|
expect: &Decimal{Value: 7e8, Precision: GASPrecision},
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 2e8, Precision: GASPrecision},
|
||||||
|
{Value: 5e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "1e2 + 1e3",
|
||||||
|
expect: &Decimal{Value: 1.1e3, Precision: 3},
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 1e2, Precision: 2},
|
||||||
|
{Value: 1e3, Precision: 3},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "5 GAS + 10 SomethingElse",
|
||||||
|
expect: &Decimal{Value: 5.01e8, Precision: GASPrecision},
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 5e8, Precision: GASPrecision},
|
||||||
|
{Value: 1e6, Precision: SomethingElsePrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
{ // A + B
|
||||||
|
one := tt.values[0]
|
||||||
|
two := tt.values[1]
|
||||||
|
require.Equalf(t, tt.expect, one.Add(two), tt.name)
|
||||||
|
t.Log(one.Add(two))
|
||||||
|
}
|
||||||
|
|
||||||
|
{ // B + A
|
||||||
|
one := tt.values[0]
|
||||||
|
two := tt.values[1]
|
||||||
|
require.Equalf(t, tt.expect, two.Add(one), tt.name)
|
||||||
|
t.Log(two.Add(one))
|
||||||
|
}
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_Copy(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect *Decimal
|
||||||
|
value *Decimal
|
||||||
|
}{
|
||||||
|
{name: "zero", expect: Zero},
|
||||||
|
{
|
||||||
|
name: "5 GAS",
|
||||||
|
expect: &Decimal{Value: 5e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "100 GAS",
|
||||||
|
expect: &Decimal{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
require.Equal(t, tt.expect, tt.expect.Copy())
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_Zero(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect bool
|
||||||
|
value *Decimal
|
||||||
|
}{
|
||||||
|
{name: "zero", expect: true, value: Zero},
|
||||||
|
{
|
||||||
|
name: "5 GAS",
|
||||||
|
expect: false,
|
||||||
|
value: &Decimal{Value: 5e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "100 GAS",
|
||||||
|
expect: false,
|
||||||
|
value: &Decimal{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
require.Truef(t, tt.expect == tt.value.Zero(), tt.name)
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_Equal(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect bool
|
||||||
|
values [2]*Decimal
|
||||||
|
}{
|
||||||
|
{name: "zero == zero", expect: true, values: [2]*Decimal{Zero, Zero}},
|
||||||
|
{
|
||||||
|
name: "5 GAS != 2 GAS",
|
||||||
|
expect: false,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 5e8, Precision: GASPrecision},
|
||||||
|
{Value: 2e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "100 GAS == 100 GAS",
|
||||||
|
expect: true,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
require.Truef(t, tt.expect == (tt.values[0].Equal(tt.values[1])), tt.name)
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_GT(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect bool
|
||||||
|
values [2]*Decimal
|
||||||
|
}{
|
||||||
|
{name: "two zeros", expect: false, values: [2]*Decimal{Zero, Zero}},
|
||||||
|
{
|
||||||
|
name: "5 GAS > 2 GAS",
|
||||||
|
expect: true,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 5e8, Precision: GASPrecision},
|
||||||
|
{Value: 2e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "100 GAS !> 100 GAS",
|
||||||
|
expect: false,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
require.Truef(t, tt.expect == (tt.values[0].GT(tt.values[1])), tt.name)
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_GTE(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect bool
|
||||||
|
values [2]*Decimal
|
||||||
|
}{
|
||||||
|
{name: "two zeros", expect: true, values: [2]*Decimal{Zero, Zero}},
|
||||||
|
{
|
||||||
|
name: "5 GAS >= 2 GAS",
|
||||||
|
expect: true,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 5e8, Precision: GASPrecision},
|
||||||
|
{Value: 2e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "1 GAS !>= 100 GAS",
|
||||||
|
expect: false,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 1e8, Precision: GASPrecision},
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
require.Truef(t, tt.expect == (tt.values[0].GTE(tt.values[1])), tt.name)
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_LT(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect bool
|
||||||
|
values [2]*Decimal
|
||||||
|
}{
|
||||||
|
{name: "two zeros", expect: false, values: [2]*Decimal{Zero, Zero}},
|
||||||
|
{
|
||||||
|
name: "5 GAS !< 2 GAS",
|
||||||
|
expect: false,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 5e8, Precision: GASPrecision},
|
||||||
|
{Value: 2e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "1 GAS < 100 GAS",
|
||||||
|
expect: true,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 1e8, Precision: GASPrecision},
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "100 GAS !< 100 GAS",
|
||||||
|
expect: false,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
require.Truef(t, tt.expect == (tt.values[0].LT(tt.values[1])), tt.name)
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDecimal_LTE(t *testing.T) {
|
||||||
|
tests := []struct {
|
||||||
|
name string
|
||||||
|
expect bool
|
||||||
|
values [2]*Decimal
|
||||||
|
}{
|
||||||
|
{name: "two zeros", expect: true, values: [2]*Decimal{Zero, Zero}},
|
||||||
|
{
|
||||||
|
name: "5 GAS <= 2 GAS",
|
||||||
|
expect: false,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 5e8, Precision: GASPrecision},
|
||||||
|
{Value: 2e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "1 GAS <= 100 GAS",
|
||||||
|
expect: true,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 1e8, Precision: GASPrecision},
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
name: "100 GAS !<= 1 GAS",
|
||||||
|
expect: false,
|
||||||
|
values: [2]*Decimal{
|
||||||
|
{Value: 1e10, Precision: GASPrecision},
|
||||||
|
{Value: 1e8, Precision: GASPrecision},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
for i := range tests {
|
||||||
|
tt := tests[i]
|
||||||
|
t.Run(tt.name, func(t *testing.T) {
|
||||||
|
require.NotPanicsf(t, func() {
|
||||||
|
require.Truef(t, tt.expect == (tt.values[0].LTE(tt.values[1])), tt.name)
|
||||||
|
}, tt.name)
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
22
go.mod
Normal file
@@ -0,0 +1,22 @@
module github.com/nspcc-dev/neofs-proto

go 1.13

require (
	code.cloudfoundry.org/bytefmt v0.0.0-20190819182555-854d396b647c
	github.com/gogo/protobuf v1.3.1
	github.com/golang/protobuf v1.3.2
	github.com/google/uuid v1.1.1
	github.com/mr-tron/base58 v1.1.2
	github.com/nspcc-dev/neofs-crypto v0.2.1
	github.com/nspcc-dev/netmap v1.6.1
	github.com/nspcc-dev/tzhash v1.3.0
	github.com/onsi/ginkgo v1.10.2 // indirect
	github.com/onsi/gomega v1.7.0 // indirect
	github.com/pkg/errors v0.8.1
	github.com/prometheus/client_golang v1.2.1
	github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4
	github.com/stretchr/testify v1.4.0
	golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550
	google.golang.org/grpc v1.24.0
)
165
go.sum
Normal file
@@ -0,0 +1,165 @@
|
||||||
|
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||||
|
code.cloudfoundry.org/bytefmt v0.0.0-20190819182555-854d396b647c h1:2RuXx1+tSNWRjxhY0Bx52kjV2odJQ0a6MTbfTPhGAkg=
|
||||||
|
code.cloudfoundry.org/bytefmt v0.0.0-20190819182555-854d396b647c/go.mod h1:wN/zk7mhREp/oviagqUXY3EwuHhWyOvAdsn5Y4CzOrc=
|
||||||
|
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||||
|
github.com/abiosoft/ishell v2.0.0+incompatible/go.mod h1:HQR9AqF2R3P4XXpMpI0NAzgHf/aS6+zVXRj14cVk9qg=
|
||||||
|
github.com/abiosoft/readline v0.0.0-20180607040430-155bce2042db/go.mod h1:rB3B4rKii8V21ydCbIzH5hZiCQE7f5E9SzUb/ZZx530=
|
||||||
|
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||||
|
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||||
|
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||||
|
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||||
|
github.com/awalterschulze/gographviz v0.0.0-20181013152038-b2885df04310 h1:t+qxRrRtwNiUYA+Xh2jSXhoG2grnMCMKX4Fg6lx9X1U=
|
||||||
|
github.com/awalterschulze/gographviz v0.0.0-20181013152038-b2885df04310/go.mod h1:GEV5wmg4YquNw7v1kkyoX9etIk8yVmXj+AkDHuuETHs=
|
||||||
|
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||||
|
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||||
|
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||||
|
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||||
|
github.com/cespare/xxhash/v2 v2.1.0 h1:yTUvW7Vhb89inJ+8irsUqiWjh8iT6sQPZiQzI6ReGkA=
|
||||||
|
github.com/cespare/xxhash/v2 v2.1.0/go.mod h1:dgIUBU3pDso/gPgZ1osOZ0iQf77oPR28Tjxl5dIMyVM=
|
||||||
|
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
|
||||||
|
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
|
||||||
|
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||||
|
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
|
||||||
|
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||||
|
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||||
|
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||||
|
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
|
||||||
|
github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:rZfgFAXFS/z/lEd6LJmf9HVZ1LkgYiHx5pHhV5DR16M=
|
||||||
|
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
|
||||||
|
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||||
|
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||||
|
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||||
|
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||||
|
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
|
||||||
|
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||||
|
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||||
|
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||||
|
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
|
||||||
|
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||||
|
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
|
||||||
|
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||||
|
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||||
|
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||||
|
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||||
|
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
|
||||||
|
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||||
|
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||||
|
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
|
||||||
|
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||||
|
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||||
|
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
|
||||||
|
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||||
|
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
|
||||||
|
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
|
||||||
|
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
|
||||||
|
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
||||||
|
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
|
||||||
|
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
|
||||||
|
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||||
|
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||||
|
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
|
||||||
|
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
|
||||||
|
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
|
||||||
|
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
|
||||||
|
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
|
||||||
|
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||||
|
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||||
|
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||||
|
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||||
|
github.com/mr-tron/base58 v1.1.2 h1:ZEw4I2EgPKDJ2iEw0cNmLB3ROrEmkOtXIkaG7wZg+78=
|
||||||
|
github.com/mr-tron/base58 v1.1.2/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
|
||||||
|
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
|
||||||
|
github.com/nspcc-dev/hrw v1.0.8 h1:vwRuJXZXgkMvf473vFzeWGCfY1WBVeSHAEHvR4u3/Cg=
|
||||||
|
github.com/nspcc-dev/hrw v1.0.8/go.mod h1:l/W2vx83vMQo6aStyx2AuZrJ+07lGv2JQGlVkPG06MU=
|
||||||
|
github.com/nspcc-dev/neofs-crypto v0.2.1 h1:NxKexcW88vlHO/u7EYjx5Q1UaOQ7XhYrCsLSVgOcCxw=
|
||||||
|
github.com/nspcc-dev/neofs-crypto v0.2.1/go.mod h1:F/96fUzPM3wR+UGsPi3faVNmFlA9KAEAUQR7dMxZmNA=
|
||||||
|
github.com/nspcc-dev/netmap v1.6.1 h1:Pigqpqi6QSdRiusbq5XlO20A18k6Eyu7j9MzOfAE3CM=
|
||||||
|
github.com/nspcc-dev/netmap v1.6.1/go.mod h1:mhV3UOg9ljQmu0teQShD6+JYX09XY5gu2I4hIByCH9M=
|
||||||
|
github.com/nspcc-dev/rfc6979 v0.1.0 h1:Lwg7esRRoyK1Up/IN1vAef1EmvrBeMHeeEkek2fAJ6c=
|
||||||
|
github.com/nspcc-dev/rfc6979 v0.1.0/go.mod h1:exhIh1PdpDC5vQmyEsGvc4YDM/lyQp/452QxGq/UEso=
|
||||||
|
github.com/nspcc-dev/tzhash v1.3.0 h1:n6FTHsfPYbMi5Jmo6SwGVVRQD8i2w1P2ScCaW6rz69Q=
|
||||||
|
github.com/nspcc-dev/tzhash v1.3.0/go.mod h1:Lc4DersKS8MNIrunTmsAzANO56qnG+LZ4GOE/WYGVzU=
|
||||||
|
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
|
||||||
|
github.com/onsi/ginkgo v1.10.2 h1:uqH7bpe+ERSiDa34FDOF7RikN6RzXgduUF8yarlZp94=
|
||||||
|
github.com/onsi/ginkgo v1.10.2/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
|
||||||
|
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
|
||||||
|
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
|
||||||
|
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||||
|
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
|
||||||
|
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||||
|
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||||
|
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||||
|
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
|
||||||
|
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
|
||||||
|
github.com/prometheus/client_golang v1.2.1 h1:JnMpQc6ppsNgw9QPAGF6Dod479itz7lvlsMzzNayLOI=
|
||||||
|
github.com/prometheus/client_golang v1.2.1/go.mod h1:XMU6Z2MjaRKVu/dC1qupJI9SiNkDYzz3xecMgSW/F+U=
|
||||||
|
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
|
||||||
|
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||||
|
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM=
|
||||||
|
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||||
|
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
|
||||||
|
github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY=
|
||||||
|
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
|
||||||
|
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
|
||||||
|
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
|
||||||
|
github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
|
||||||
|
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
|
||||||
|
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
|
||||||
|
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
||||||
|
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
|
||||||
|
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
|
||||||
|
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||||
|
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||||
|
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||||
|
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||||
|
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
|
||||||
|
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||||
|
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||||
|
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||||
|
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550 h1:ObdrDkeb4kJdCP557AjRjq69pTHfNouLtWZG7j9rPN8=
|
||||||
|
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||||
|
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||||
|
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
|
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||||
|
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||||
|
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3 h1:0GoQqolDA55aaLxZyTzK/Y2ePZzZTUrRacwib7cNsYQ=
|
||||||
|
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||||
|
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980 h1:dfGZHvZk057jK2MCeWus/TowKpJ8y4AmooUzdBSR9GU=
|
||||||
|
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||||
|
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||||
|
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||||
|
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||||
|
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||||
|
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||||
|
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
|
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
|
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
|
golang.org/x/sys v0.0.0-20181228144115-9a3f9b0469bb/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
|
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||||
|
golang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=
|
||||||
|
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
|
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
|
golang.org/x/sys v0.0.0-20191010194322-b09406accb47 h1:/XfQ9z7ib8eEJX2hdgFTZJ/ntt0swNk5oYBziWeTCvY=
|
||||||
|
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||||
|
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
|
||||||
|
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||||
|
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||||
|
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||||
|
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||||
|
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||||
|
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc=
|
||||||
|
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||||
|
google.golang.org/grpc v1.24.0 h1:vb/1TCsVn3DcJlQ0Gs1yB1pKI6Do2/QNwxdKqmc/b0s=
|
||||||
|
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
|
||||||
|
gopkg.in/abiosoft/ishell.v2 v2.0.0/go.mod h1:sFp+cGtH6o4s1FtpVPTMcHq2yue+c4DGOVohJCPUzwY=
|
||||||
|
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||||
|
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
|
||||||
|
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||||
|
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
|
||||||
|
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
||||||
|
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
|
||||||
|
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
|
||||||
|
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
|
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
|
||||||
|
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||||
|
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
98
hash/hash.go
Normal file
@@ -0,0 +1,98 @@
package hash

import (
	"bytes"

	"github.com/mr-tron/base58"
	"github.com/nspcc-dev/neofs-proto/internal"
	"github.com/nspcc-dev/tzhash/tz"
	"github.com/pkg/errors"
)

// HomomorphicHashSize contains size of HH.
const HomomorphicHashSize = 64

// Hash is an implementation of HomomorphicHash.
type Hash [HomomorphicHashSize]byte

// ErrWrongDataSize is raised when wrong size of bytes is passed to unmarshal HH.
const ErrWrongDataSize = internal.Error("wrong data size")

var (
	_ internal.Custom = (*Hash)(nil)

	emptyHH [HomomorphicHashSize]byte
)

// Size returns size of Hash (HomomorphicHashSize).
func (h Hash) Size() int { return HomomorphicHashSize }

// Empty checks that Hash is empty.
func (h Hash) Empty() bool { return bytes.Equal(h.Bytes(), emptyHH[:]) }

// Reset sets current Hash to empty value.
func (h *Hash) Reset() { *h = Hash{} }

// ProtoMessage method to satisfy proto.Message interface.
func (h Hash) ProtoMessage() {}

// Bytes returns a copy of Hash as bytes.
func (h Hash) Bytes() []byte {
	buf := make([]byte, HomomorphicHashSize)
	copy(buf, h[:])
	return buf
}

// Marshal returns bytes representation of Hash.
func (h Hash) Marshal() ([]byte, error) { return h.Bytes(), nil }

// MarshalTo tries to marshal Hash into passed bytes and returns count of copied bytes.
func (h *Hash) MarshalTo(data []byte) (int, error) { return copy(data, h.Bytes()), nil }

// Unmarshal tries to parse bytes into valid Hash.
func (h *Hash) Unmarshal(data []byte) error {
	if ln := len(data); ln != HomomorphicHashSize {
		return errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", HomomorphicHashSize, ln)
	}

	copy((*h)[:], data)
	return nil
}

// String returns string representation of Hash.
func (h Hash) String() string { return base58.Encode(h[:]) }

// Equal checks that current Hash is equal to passed Hash.
func (h Hash) Equal(hash Hash) bool { return h == hash }

// Verify validates whether current hash was generated from passed data.
func (h Hash) Verify(data []byte) bool { return h.Equal(Sum(data)) }

// Validate checks if combined hashes are equal to current Hash.
func (h Hash) Validate(hashes []Hash) bool {
	var hashBytes = make([][]byte, 0, len(hashes))
	for i := range hashes {
		hashBytes = append(hashBytes, hashes[i].Bytes())
	}
	ok, err := tz.Validate(h.Bytes(), hashBytes)
	return err == nil && ok
}

// Sum returns Tillich-Zémor checksum of data.
func Sum(data []byte) Hash { return tz.Sum(data) }

// Concat combines hashes based on homomorphic property.
func Concat(hashes []Hash) (Hash, error) {
	var (
		hash Hash
		h    = make([][]byte, 0, len(hashes))
	)
	for i := range hashes {
		h = append(h, hashes[i].Bytes())
	}
	cat, err := tz.Concat(h)
	if err != nil {
		return hash, err
	}
	return hash, hash.Unmarshal(cat)
}
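The point of the Tillich-Zémor hash is that hashes of payload pieces can be recombined into the hash of the whole payload, which is what Concat and Validate above rely on. A short sketch of that round trip using only the helpers from this package (the sample payload and split point are arbitrary):

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/hash"
)

func main() {
	payload := []byte("hello homomorphic world")

	// Hash the whole payload and each half separately.
	full := hash.Sum(payload)
	left := hash.Sum(payload[:10])
	right := hash.Sum(payload[10:])

	// Concatenation of the part hashes reproduces the full hash.
	combined, err := hash.Concat([]hash.Hash{left, right})
	if err != nil {
		panic(err)
	}
	fmt.Println(full.Equal(combined))                    // true
	fmt.Println(full.Validate([]hash.Hash{left, right})) // true
}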
166
hash/hash_test.go
Normal file
@@ -0,0 +1,166 @@
package hash

import (
	"bytes"
	"crypto/rand"
	"testing"

	"github.com/pkg/errors"
	"github.com/stretchr/testify/require"
)

func Test_Sum(t *testing.T) {
	var (
		data = []byte("Hello world")
		sum  = Sum(data)
		hash = []byte{0, 0, 0, 0, 1, 79, 16, 173, 134, 90, 176, 77, 114, 165, 253, 114, 0, 0, 0, 0, 0, 148,
			172, 222, 98, 248, 15, 99, 205, 129, 66, 91, 0, 0, 0, 0, 0, 138, 173, 39, 228, 231, 239, 123,
			170, 96, 186, 61, 0, 0, 0, 0, 0, 90, 69, 237, 131, 90, 161, 73, 38, 164, 185, 55}
	)

	require.Equal(t, hash, sum.Bytes())
}

func Test_Validate(t *testing.T) {
	var (
		data   = []byte("Hello world")
		hash   = Sum(data)
		pieces = splitData(data, 2)
		ln     = len(pieces)
		hashes = make([]Hash, 0, ln)
	)

	for i := 0; i < ln; i++ {
		hashes = append(hashes, Sum(pieces[i]))
	}

	require.True(t, hash.Validate(hashes))
}

func Test_Concat(t *testing.T) {
	var (
		data   = []byte("Hello world")
		hash   = Sum(data)
		pieces = splitData(data, 2)
		ln     = len(pieces)
		hashes = make([]Hash, 0, ln)
	)

	for i := 0; i < ln; i++ {
		hashes = append(hashes, Sum(pieces[i]))
	}

	res, err := Concat(hashes)
	require.NoError(t, err)
	require.Equal(t, hash, res)
}

func Test_HashChunks(t *testing.T) {
	var (
		chars = []byte("+")
		size  = 1400
		data  = bytes.Repeat(chars, size)
		hash  = Sum(data)
		count = 150
	)

	hashes, err := dataHashes(data, count)
	require.NoError(t, err)
	require.Len(t, hashes, count)

	require.True(t, hash.Validate(hashes))

	// 100 / 150 = 0
	hashes, err = dataHashes(data[:100], count)
	require.Error(t, err)
	require.Nil(t, hashes)
}

func TestXOR(t *testing.T) {
	var (
		dl   = 10
		data = make([]byte, dl)
	)

	_, err := rand.Read(data)
	require.NoError(t, err)

	t.Run("XOR with <nil> salt", func(t *testing.T) {
		res := SaltXOR(data, nil)
		require.Equal(t, res, data)
	})

	t.Run("XOR with empty salt", func(t *testing.T) {
		xorWithSalt(t, data, 0)
	})

	t.Run("XOR with salt same data size", func(t *testing.T) {
		xorWithSalt(t, data, dl)
	})

	t.Run("XOR with salt shorter than data aliquot", func(t *testing.T) {
		xorWithSalt(t, data, dl/2)
	})

	t.Run("XOR with salt shorter than data aliquant", func(t *testing.T) {
		xorWithSalt(t, data, dl/3+1)
	})

	t.Run("XOR with salt longer than data aliquot", func(t *testing.T) {
		xorWithSalt(t, data, dl*2)
	})

	t.Run("XOR with salt longer than data aliquant", func(t *testing.T) {
		xorWithSalt(t, data, dl*2-1)
	})
}

func xorWithSalt(t *testing.T, data []byte, saltSize int) {
	var (
		direct, reverse []byte
		salt            = make([]byte, saltSize)
	)

	_, err := rand.Read(salt)
	require.NoError(t, err)

	direct = SaltXOR(data, salt)
	require.Len(t, direct, len(data))

	reverse = SaltXOR(direct, salt)
	require.Len(t, reverse, len(data))

	require.Equal(t, reverse, data)
}

func splitData(buf []byte, lim int) [][]byte {
	var piece []byte
	pieces := make([][]byte, 0, len(buf)/lim+1)
	for len(buf) >= lim {
		piece, buf = buf[:lim], buf[lim:]
		pieces = append(pieces, piece)
	}
	if len(buf) > 0 {
		pieces = append(pieces, buf)
	}
	return pieces
}

func dataHashes(data []byte, count int) ([]Hash, error) {
	var (
		ln     = len(data)
		mis    = ln / count
		off    = (count - 1) * mis
		hashes = make([]Hash, 0, count)
	)
	if mis == 0 {
		return nil, errors.Errorf("could not split %d bytes to %d pieces", ln, count)
	}

	pieces := splitData(data[:off], mis)
	pieces = append(pieces, data[off:])
	for i := 0; i < count; i++ {
		hashes = append(hashes, Sum(pieces[i]))
	}
	return hashes, nil
}
20
hash/hashesslice.go
Normal file
@@ -0,0 +1,20 @@
package hash

import (
	"bytes"
)

// HashesSlice is a collection that satisfies sort.Interface and can be
// sorted by the routines in sort package.
type HashesSlice []Hash

// -- HashesSlice -- an inner type to sort Objects

// Len is the number of elements in the collection.
func (hs HashesSlice) Len() int { return len(hs) }

// Less reports whether the element with
// index i should be sorted before the element with index j.
func (hs HashesSlice) Less(i, j int) bool { return bytes.Compare(hs[i].Bytes(), hs[j].Bytes()) == -1 }

// Swap swaps the elements with indexes i and j.
func (hs HashesSlice) Swap(i, j int) { hs[i], hs[j] = hs[j], hs[i] }
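A brief sketch of why HashesSlice exists: it plugs straight into the standard sort package. The sample values below are arbitrary:

package main

import (
	"fmt"
	"sort"

	"github.com/nspcc-dev/neofs-proto/hash"
)

func main() {
	hs := hash.HashesSlice{
		hash.Sum([]byte("b")),
		hash.Sum([]byte("a")),
		hash.Sum([]byte("c")),
	}

	// HashesSlice satisfies sort.Interface, so the standard library can
	// order it byte-wise by the underlying hash values.
	sort.Sort(hs)

	for _, h := range hs {
		fmt.Println(h) // base58 representation, in ascending byte order
	}
}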
17
hash/salt.go
Normal file
@@ -0,0 +1,17 @@
package hash

// SaltXOR xors bits of data with salt
// repeating salt if necessary.
func SaltXOR(data, salt []byte) (result []byte) {
	result = make([]byte, len(data))
	ls := len(salt)
	if ls == 0 {
		copy(result, data)
		return
	}

	for i := range result {
		result[i] = data[i] ^ salt[i%ls]
	}
	return
}
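Since XOR is its own inverse, applying SaltXOR twice with the same salt restores the input, which is exactly what the round-trip checks in hash_test.go rely on. A compact sketch (the salt bytes are arbitrary):

package main

import (
	"bytes"
	"fmt"

	"github.com/nspcc-dev/neofs-proto/hash"
)

func main() {
	data := []byte("payload to be salted")
	salt := []byte{0xde, 0xad, 0xbe, 0xef} // shorter than data, so it is repeated

	masked := hash.SaltXOR(data, salt)
	restored := hash.SaltXOR(masked, salt) // XOR with the same salt restores the input

	fmt.Println(bytes.Equal(data, restored)) // true
}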
7
internal/error.go
Normal file
@@ -0,0 +1,7 @@
package internal

// Error is a custom error.
type Error string

// Error is an implementation of error interface.
func (e Error) Error() string { return string(e) }
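Because internal.Error is a string type, packages in this module can declare sentinel errors as untyped constants, which is how ErrWrongDataSize in the hash package is built. A small sketch; the errNotReady name is illustrative, and the snippet assumes it lives inside this module since Go restricts imports of internal packages:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/internal"
	"github.com/pkg/errors"
)

// errNotReady is a hypothetical constant sentinel error.
const errNotReady = internal.Error("service is not ready")

func main() {
	err := errors.Wrap(errNotReady, "bootstrap")
	fmt.Println(err)                              // "bootstrap: service is not ready"
	fmt.Println(errors.Cause(err) == errNotReady) // true
}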
16
internal/proto.go
Normal file
@@ -0,0 +1,16 @@
package internal

import "github.com/gogo/protobuf/proto"

// Custom contains methods to satisfy proto.Message
// including custom methods to satisfy protobuf for
// non-proto defined types.
type Custom interface {
	Size() int
	Empty() bool
	Bytes() []byte
	Marshal() ([]byte, error)
	MarshalTo(data []byte) (int, error)
	Unmarshal(data []byte) error
	proto.Message
}
143
object/doc.go
Normal file
@@ -0,0 +1,143 @@
/*
Package object manages main storage structure in the system. All storage
operations are performed with the objects. During its lifetime an object might
be transformed into another object by cutting its payload or adding meta
information. All transformations may be reversed, therefore the source object
can be restored.

Object structure

Object consists of Payload and Header. Payload is unlimited but storage nodes
may have a policy to store objects with a limited payload. In this case object
with large payload will be transformed into the chain of objects with small
payload.

Headers are simple key-value fields that are divided into two groups: system
headers and extended headers. System headers contain information about
protocol version, object id, payload length in bytes, owner id, container id
and object creation timestamp (both in epochs and unix time). All these fields
must be set up in the correct object.

	+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
	| System Headers |
	+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
	| Version : 1 |
	| Payload Length : 21673465 |
	| Object ID : 465208e2-ba4f-4f99-ad47-82a59f4192d4 |
	| Owner ID : AShvoCbSZ7VfRiPkVb1tEcBLiJrcbts1tt |
	| Container ID : FGobtRZA6sBZv2i9k4L7TiTtnuP6E788qa278xfj3Fxj |
	| Created At : Epoch#10, 1573033162 |
	+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
	| Extended Headers |
	+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
	| User Header : <user-defined-key>, <user-defined-value> |
	| Verification Header : <session public key>, <owner's signature> |
	| Homomorphic Hash : 0x23d35a56ae... |
	| Payload Checksum : 0x1bd34abs75... |
	| Integrity Header : <header checksum>, <session signature> |
	| Transformation : Payload Split |
	| Link-parent : cae08935-b4ba-499a-bf6c-98276c1e6c0b |
	| Link-next : c3b40fbf-3798-4b61-a189-2992b5fb5070 |
	| Payload Checksum : 0x1f387a5c36... |
	| Integrity Header : <header checksum>, <session signature> |
	+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
	| Payload |
	+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
	| 0xd1581963a342d231... |
	+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-

There are different kinds of extended headers. A correct object must contain
verification header, homomorphic hash header, payload checksum and
integrity header. The order of headers matters. Let's look through all
these headers.

Link header points to the connected objects. During object transformation, large
object might be transformed into the chain of smaller objects. One of these
objects drops payload and has several "Child" links. We call this object a
zero-object. Others will have "Parent" link to the zero-object, "Previous"
and "Next" links in the payload chain.

	[ Object ID:1 ] => transformed
	`- [ Zero-Object ID:1 ]
	   `- Link-child ID:2
	   `- Link-child ID:3
	   `- Link-child ID:4
	   `- Payload [null]
	`- [ Object ID:2 ]
	   `- Link-parent ID:1
	   `- Link-next ID:3
	   `- Payload [ 0x13ba... ]
	`- [ Object ID:3 ]
	   `- Link-parent ID:1
	   `- Link-previous ID:2
	   `- Link-next ID:4
	   `- Payload [ 0xcd34... ]
	`- [ Object ID:4 ]
	   `- Link-parent ID:1
	   `- Link-previous ID:3
	   `- Payload [ 0xef86... ]

Storage groups are also objects. They have "Storage Group" links to all
objects in the group. Links are set by nodes during transformations and,
in general, they should not be set by user manually.

Redirect headers are not used yet, they will be implemented and described
later.

User header is a key-value pair of strings that can be defined by user. User
can use these headers as search attributes. You can store any meta information
about object there, e.g. the object's nickname.

Transformation header notifies that object was transformed in some pre-defined
way. This header is set before the object is transformed and all headers after
transformation must be located after transformation header. During reverse
transformation, all headers under transformation header will be cut out.

	+-+-+-+-+-+-+-+-+-+- +-+-+-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+-+-+
	| Payload checksum | | Payload checksum | | Payload checksum |
	| Integrity header | => | Integrity header | + | Integrity header |
	+-+-+-+-+-+-+-+-+-+- | Transformation | | Transformation |
	| Large payload | | New Checksum | | New Checksum |
	+-+-+-+-+-+-+-+-+-+- | New Integrity | | New Integrity |
	                     +-+-+-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+-+-+
	                     | Small payload | | Small payload |
	                     +-+-+-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+-+-+

For now, we use only one type of transformation: payload split transformation.
This header is set by the node automatically.

Tombstone header notifies that this object was deleted by user. Objects with
tombstone header do not have payload, but they still contain meta information
in the headers. This way we implement two-phase commit for object removal.
Storage nodes will eventually delete all tombstone objects. If you want to
delete an object, you must create a new object with the same object id, with
tombstone header, correct signatures and without payload.

Verification header contains session information. To put the object into
the system, the user must create a session. It is required because objects
might be transformed and therefore must be re-signed. To do that the node
creates a pair of session public and private keys. Object owner delegates
permission to re-sign objects by signing the session public key. This header
contains the session public key and owner's signature of this key. You must
specify this header manually.

Homomorphic hash header contains homomorphic hash of the source object.
Transformations do not affect this header. This header is used by data audit
and is set by the node automatically.

Payload checksum contains checksum of the actual object payload. All payload
transformations must set new payload checksum headers. This header is set by
the node automatically.

Integrity header contains checksum of the header and signature of the
session key. This header must be last in the list of extended headers.
Checksum is calculated by marshaling all above headers, including system
headers. This header is set by the node automatically.

Storage group header is presented in storage group objects. It contains
information for data audit: size of validated data, homomorphic hash of this
data, storage group expiration time in epochs or unix time.

*/
package object
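The split transformation described above turns one large object into a zero-object plus a chain of payload objects. A sketch of how a client might walk that chain using the link accessors defined later in this package; collectChildren and the way the zero-object is obtained are illustrative, not part of this commit:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/object"
)

// collectChildren is a hypothetical helper: given a zero-object produced by
// the payload split transformation, it returns the ids of its payload children.
func collectChildren(zero *object.Object) []object.ID {
	if !zero.IsLinking() {
		return nil // not a zero-object: it carries its own payload
	}
	return zero.Links(object.Link_Child)
}

func main() {
	var zero object.Object // assume this was fetched from a storage node
	for _, id := range collectChildren(&zero) {
		fmt.Println("child object:", id)
	}
}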
84
object/extensions.go
Normal file
84
object/extensions.go
Normal file
|
@ -0,0 +1,84 @@
|
||||||
|
package object
|
||||||
|
|
||||||
|
import (
|
||||||
|
"github.com/nspcc-dev/neofs-proto/hash"
|
||||||
|
)
|
||||||
|
|
||||||
|
// IsLinking checks if object has children links to another objects.
|
||||||
|
// We have to check payload size because zero-object must have zero
|
||||||
|
// payload and non-zero payload length field in system header.
|
||||||
|
func (m Object) IsLinking() bool {
|
||||||
|
for i := range m.Headers {
|
||||||
|
switch v := m.Headers[i].Value.(type) {
|
||||||
|
case *Header_Link:
|
||||||
|
if v.Link.GetType() == Link_Child {
|
||||||
|
return m.SystemHeader.PayloadLength > 0 && len(m.Payload) == 0
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// VerificationHeader returns verification header if it is presented in extended headers.
|
||||||
|
func (m Object) VerificationHeader() (*VerificationHeader, error) {
|
||||||
|
_, vh := m.LastHeader(HeaderType(VerifyHdr))
|
||||||
|
if vh == nil {
|
||||||
|
return nil, ErrHeaderNotFound
|
||||||
|
}
|
||||||
|
return vh.Value.(*Header_Verify).Verify, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetVerificationHeader sets verification header in the object.
|
||||||
|
// It will replace existing verification header or add a new one.
|
||||||
|
func (m *Object) SetVerificationHeader(header *VerificationHeader) {
|
||||||
|
m.SetHeader(&Header{Value: &Header_Verify{Verify: header}})
|
||||||
|
}
|
||||||
|
|
||||||
|
// Links returns slice of ids of specified link type
|
||||||
|
func (m *Object) Links(t Link_Type) []ID {
|
||||||
|
var res []ID
|
||||||
|
for i := range m.Headers {
|
||||||
|
switch v := m.Headers[i].Value.(type) {
|
||||||
|
case *Header_Link:
|
||||||
|
if v.Link.GetType() == t {
|
||||||
|
res = append(res, v.Link.ID)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return res
|
||||||
|
}
|
||||||
|
|
||||||
|
// Tombstone returns tombstone header if it is presented in extended headers.
|
||||||
|
func (m Object) Tombstone() *Tombstone {
|
||||||
|
_, h := m.LastHeader(HeaderType(TombstoneHdr))
|
||||||
|
if h != nil {
|
||||||
|
return h.Value.(*Header_Tombstone).Tombstone
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// IsTombstone checks if object has tombstone header.
|
||||||
|
func (m Object) IsTombstone() bool {
|
||||||
|
n, _ := m.LastHeader(HeaderType(TombstoneHdr))
|
||||||
|
return n != -1
|
||||||
|
}
|
||||||
|
|
||||||
|
// StorageGroup returns storage group structure if it is presented in extended headers.
|
||||||
|
func (m Object) StorageGroup() (*StorageGroup, error) {
|
||||||
|
_, sgHdr := m.LastHeader(HeaderType(StorageGroupHdr))
|
||||||
|
if sgHdr == nil {
|
||||||
|
return nil, ErrHeaderNotFound
|
||||||
|
}
|
||||||
|
return sgHdr.Value.(*Header_StorageGroup).StorageGroup, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetStorageGroup sets storage group header in the object.
|
||||||
|
// It will replace existing storage group header or add a new one.
|
||||||
|
func (m *Object) SetStorageGroup(sg *StorageGroup) {
|
||||||
|
m.SetHeader(&Header{Value: &Header_StorageGroup{StorageGroup: sg}})
|
||||||
|
}
|
||||||
|
|
||||||
|
// Empty checks if storage group has some data for validation.
|
||||||
|
func (m StorageGroup) Empty() bool {
|
||||||
|
return m.ValidationDataSize == 0 && m.ValidationHash.Equal(hash.Hash{})
|
||||||
|
}
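A short usage sketch for the helpers above (IsLinking, Links, Tombstone); the object is assumed to be assembled elsewhere and the output is illustrative:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/object"
)

func describe(obj *object.Object) {
	// A linking object carries child links, an empty payload and a non-zero
	// payload length in the system header.
	if obj.IsLinking() {
		children := obj.Links(object.Link_Child)
		fmt.Println("linking object with", len(children), "children")
	}

	// Tombstone returns nil when no tombstone header is attached.
	if ts := obj.Tombstone(); ts != nil {
		fmt.Println("object removed at epoch", ts.Epoch)
	}
}

func main() {
	describe(new(object.Object))
}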
|
215
object/service.go
Normal file
|
@ -0,0 +1,215 @@
|
||||||
|
package object
|
||||||
|
|
||||||
|
import (
|
||||||
|
"github.com/nspcc-dev/neofs-proto/hash"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/internal"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/service"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/session"
|
||||||
|
)
|
||||||
|
|
||||||
|
type (
|
||||||
|
// ID is a type alias of object id.
|
||||||
|
ID = refs.ObjectID
|
||||||
|
|
||||||
|
// CID is a type alias of container id.
|
||||||
|
CID = refs.CID
|
||||||
|
|
||||||
|
// SGID is a type alias of storage group id.
|
||||||
|
SGID = refs.SGID
|
||||||
|
|
||||||
|
// OwnerID is a type alias of owner id.
|
||||||
|
OwnerID = refs.OwnerID
|
||||||
|
|
||||||
|
// Hash is a type alias of Homomorphic hash.
|
||||||
|
Hash = hash.Hash
|
||||||
|
|
||||||
|
// Token is a type alias of session token.
|
||||||
|
Token = session.Token
|
||||||
|
|
||||||
|
// Request defines object rpc requests.
|
||||||
|
// All object operations must have a TTL, an Epoch, a Container ID and
// permission to use the previous network map.
|
||||||
|
Request interface {
|
||||||
|
service.TTLRequest
|
||||||
|
service.EpochRequest
|
||||||
|
|
||||||
|
CID() CID
|
||||||
|
AllowPreviousNetMap() bool
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// UnitsB starts enum for amount of bytes.
|
||||||
|
UnitsB int64 = 1 << (10 * iota)
|
||||||
|
|
||||||
|
// UnitsKB defines amount of bytes in one kilobyte.
|
||||||
|
UnitsKB
|
||||||
|
|
||||||
|
// UnitsMB defines amount of bytes in one megabyte.
|
||||||
|
UnitsMB
|
||||||
|
|
||||||
|
// UnitsGB defines amount of bytes in one gigabyte.
|
||||||
|
UnitsGB
|
||||||
|
|
||||||
|
// UnitsTB defines amount of bytes in one terabyte.
|
||||||
|
UnitsTB
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// ErrNotFound is raised when object is not found in the system.
|
||||||
|
ErrNotFound = internal.Error("could not find object")
|
||||||
|
|
||||||
|
// ErrHeaderExpected is raised when first message in protobuf stream does not contain user header.
|
||||||
|
ErrHeaderExpected = internal.Error("expected header as a first message in stream")
|
||||||
|
|
||||||
|
// KeyStorageGroup is a key for searching an object by storage group id.
|
||||||
|
KeyStorageGroup = "STORAGE_GROUP"
|
||||||
|
|
||||||
|
// KeyNoChildren is a key for searching objects that have no child links.
|
||||||
|
KeyNoChildren = "LEAF"
|
||||||
|
|
||||||
|
// KeyParent is a key for searching object by id of parent object.
|
||||||
|
KeyParent = "PARENT"
|
||||||
|
|
||||||
|
// KeyHasParent is a key for searching objects that have a parent link.
|
||||||
|
KeyHasParent = "HAS_PAR"
|
||||||
|
|
||||||
|
// KeyTombstone is a key for searching objects that have a tombstone header.
|
||||||
|
KeyTombstone = "TOMBSTONE"
|
||||||
|
|
||||||
|
// KeyChild is a key for searching object by id of child link.
|
||||||
|
KeyChild = "CHILD"
|
||||||
|
|
||||||
|
// KeyPrev is a key for searching object by id of previous link.
|
||||||
|
KeyPrev = "PREV"
|
||||||
|
|
||||||
|
// KeyNext is a key for searching object by id of next link.
|
||||||
|
KeyNext = "NEXT"
|
||||||
|
|
||||||
|
// KeyID is a key for searching object by object id.
|
||||||
|
KeyID = "ID"
|
||||||
|
|
||||||
|
// KeyCID is a key for searching object by container id.
|
||||||
|
KeyCID = "CID"
|
||||||
|
|
||||||
|
// KeyOwnerID is a key for searching object by owner id.
|
||||||
|
KeyOwnerID = "OWNERID"
|
||||||
|
|
||||||
|
// KeyRootObject is a key for searching objects that are zero-objects or do
// not have any children.
|
||||||
|
KeyRootObject = "ROOT_OBJECT"
|
||||||
|
)
|
||||||
|
|
||||||
|
func checkIsNotFull(v interface{}) bool {
|
||||||
|
var obj *Object
|
||||||
|
|
||||||
|
switch t := v.(type) {
|
||||||
|
case *GetResponse:
|
||||||
|
obj = t.GetObject()
|
||||||
|
case *PutRequest:
|
||||||
|
if h := t.GetHeader(); h != nil {
|
||||||
|
obj = h.Object
|
||||||
|
}
|
||||||
|
default:
|
||||||
|
panic("unknown type")
|
||||||
|
}
|
||||||
|
|
||||||
|
return obj == nil || obj.SystemHeader.PayloadLength != uint64(len(obj.Payload)) && !obj.IsLinking()
|
||||||
|
}
|
||||||
|
|
||||||
|
// NotFull checks if protobuf stream provided whole object for get operation.
|
||||||
|
func (m *GetResponse) NotFull() bool { return checkIsNotFull(m) }
|
||||||
|
|
||||||
|
// NotFull checks if protobuf stream provided whole object for put operation.
|
||||||
|
func (m *PutRequest) NotFull() bool { return checkIsNotFull(m) }
|
||||||
|
|
||||||
|
// GetTTL returns TTL value from object put request.
|
||||||
|
func (m *PutRequest) GetTTL() uint32 { return m.GetHeader().TTL }
|
||||||
|
|
||||||
|
// GetEpoch returns epoch value from object put request.
|
||||||
|
func (m *PutRequest) GetEpoch() uint64 { return m.GetHeader().GetEpoch() }
|
||||||
|
|
||||||
|
// SetTTL sets TTL value into object put request.
|
||||||
|
func (m *PutRequest) SetTTL(ttl uint32) { m.GetHeader().TTL = ttl }
|
||||||
|
|
||||||
|
// SetTTL sets TTL value into object get request.
|
||||||
|
func (m *GetRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||||
|
|
||||||
|
// SetTTL sets TTL value into object head request.
|
||||||
|
func (m *HeadRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||||
|
|
||||||
|
// SetTTL sets TTL value into object search request.
|
||||||
|
func (m *SearchRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||||
|
|
||||||
|
// SetTTL sets TTL value into object delete request.
|
||||||
|
func (m *DeleteRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||||
|
|
||||||
|
// SetTTL sets TTL value into object get range request.
|
||||||
|
func (m *GetRangeRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||||
|
|
||||||
|
// SetTTL sets TTL value into object get range hash request.
|
||||||
|
func (m *GetRangeHashRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||||
|
|
||||||
|
// SetEpoch sets epoch value into object put request.
|
||||||
|
func (m *PutRequest) SetEpoch(v uint64) { m.GetHeader().Epoch = v }
|
||||||
|
|
||||||
|
// SetEpoch sets epoch value into object get request.
|
||||||
|
func (m *GetRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||||
|
|
||||||
|
// SetEpoch sets epoch value into object head request.
|
||||||
|
func (m *HeadRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||||
|
|
||||||
|
// SetEpoch sets epoch value into object search request.
|
||||||
|
func (m *SearchRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||||
|
|
||||||
|
// SetEpoch sets epoch value into object delete request.
|
||||||
|
func (m *DeleteRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||||
|
|
||||||
|
// SetEpoch sets epoch value into object get range request.
|
||||||
|
func (m *GetRangeRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||||
|
|
||||||
|
// SetEpoch sets epoch value into object get range hash request.
|
||||||
|
func (m *GetRangeHashRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||||
|
|
||||||
|
// CID returns container id value from object put request.
|
||||||
|
func (m *PutRequest) CID() CID { return m.GetHeader().Object.SystemHeader.CID }
|
||||||
|
|
||||||
|
// CID returns container id value from object get request.
|
||||||
|
func (m *GetRequest) CID() CID { return m.Address.CID }
|
||||||
|
|
||||||
|
// CID returns container id value from object head request.
|
||||||
|
func (m *HeadRequest) CID() CID { return m.Address.CID }
|
||||||
|
|
||||||
|
// CID returns container id value from object search request.
|
||||||
|
func (m *SearchRequest) CID() CID { return m.ContainerID }
|
||||||
|
|
||||||
|
// CID returns container id value from object delete request.
|
||||||
|
func (m *DeleteRequest) CID() CID { return m.Address.CID }
|
||||||
|
|
||||||
|
// CID returns container id value from object get range request.
|
||||||
|
func (m *GetRangeRequest) CID() CID { return m.Address.CID }
|
||||||
|
|
||||||
|
// CID returns container id value from object get range hash request.
|
||||||
|
func (m *GetRangeHashRequest) CID() CID { return m.Address.CID }
|
||||||
|
|
||||||
|
// AllowPreviousNetMap returns permission to use previous network map in object put request.
|
||||||
|
func (m *PutRequest) AllowPreviousNetMap() bool { return false }
|
||||||
|
|
||||||
|
// AllowPreviousNetMap returns permission to use previous network map in object get request.
|
||||||
|
func (m *GetRequest) AllowPreviousNetMap() bool { return true }
|
||||||
|
|
||||||
|
// AllowPreviousNetMap returns permission to use previous network map in object head request.
|
||||||
|
func (m *HeadRequest) AllowPreviousNetMap() bool { return true }
|
||||||
|
|
||||||
|
// AllowPreviousNetMap returns permission to use previous network map in object search request.
|
||||||
|
func (m *SearchRequest) AllowPreviousNetMap() bool { return true }
|
||||||
|
|
||||||
|
// AllowPreviousNetMap returns permission to use previous network map in object delete request.
|
||||||
|
func (m *DeleteRequest) AllowPreviousNetMap() bool { return false }
|
||||||
|
|
||||||
|
// AllowPreviousNetMap returns permission to use previous network map in object get range request.
|
||||||
|
func (m *GetRangeRequest) AllowPreviousNetMap() bool { return false }
|
||||||
|
|
||||||
|
// AllowPreviousNetMap returns permission to use previous network map in object get range hash request.
|
||||||
|
func (m *GetRangeHashRequest) AllowPreviousNetMap() bool { return false }
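Every request type above exposes the same CID, TTL, Epoch and previous-netmap accessors, which is what the Request interface at the top of this file captures. A hedged sketch with a head request; the concrete values are illustrative:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/object"
)

func main() {
	var addr object.Address // zero value, used purely for illustration

	req := &object.HeadRequest{Address: addr, FullHeaders: true}
	req.SetEpoch(1) // setters defined above
	req.SetTTL(3)

	fmt.Println("container:", req.CID())
	fmt.Println("previous netmap allowed:", req.AllowPreviousNetMap())
}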
|
BIN
object/service.pb.go
Normal file
Binary file not shown.
119
object/service.proto
Normal file
|
@ -0,0 +1,119 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package object;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/object";
|
||||||
|
|
||||||
|
import "refs/types.proto";
|
||||||
|
import "object/types.proto";
|
||||||
|
import "session/types.proto";
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
service Service {
|
||||||
|
// Get the object from a container
|
||||||
|
rpc Get(GetRequest) returns (stream GetResponse);
|
||||||
|
|
||||||
|
// Put the object into a container
|
||||||
|
rpc Put(stream PutRequest) returns (PutResponse);
|
||||||
|
|
||||||
|
// Delete the object from a container
|
||||||
|
rpc Delete(DeleteRequest) returns (DeleteResponse);
|
||||||
|
|
||||||
|
// Get MetaInfo
|
||||||
|
rpc Head(HeadRequest) returns (HeadResponse);
|
||||||
|
|
||||||
|
// Search by MetaInfo
|
||||||
|
rpc Search(SearchRequest) returns (SearchResponse);
|
||||||
|
|
||||||
|
// Get ranges of object payload
|
||||||
|
rpc GetRange(GetRangeRequest) returns (GetRangeResponse);
|
||||||
|
|
||||||
|
// Get hashes of object ranges
|
||||||
|
rpc GetRangeHash(GetRangeHashRequest) returns (GetRangeHashResponse);
|
||||||
|
}
|
||||||
|
|
||||||
|
message GetRequest {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
refs.Address Address = 2 [(gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 3;
|
||||||
|
}
|
||||||
|
|
||||||
|
message GetResponse {
|
||||||
|
oneof R {
|
||||||
|
Object object = 1;
|
||||||
|
bytes Chunk = 2;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
message PutRequest {
|
||||||
|
message PutHeader {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
Object Object = 2;
|
||||||
|
uint32 TTL = 3;
|
||||||
|
session.Token Token = 4;
|
||||||
|
}
|
||||||
|
|
||||||
|
oneof R {
|
||||||
|
PutHeader Header = 1;
|
||||||
|
bytes Chunk = 2;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
message PutResponse {
|
||||||
|
refs.Address Address = 1 [(gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
message DeleteRequest {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
refs.Address Address = 2 [(gogoproto.nullable) = false];
|
||||||
|
bytes OwnerID = 3 [(gogoproto.nullable) = false, (gogoproto.customtype) = "OwnerID"];
|
||||||
|
uint32 TTL = 4;
|
||||||
|
session.Token Token = 5;
|
||||||
|
}
|
||||||
|
message DeleteResponse {}
|
||||||
|
|
||||||
|
// HeadRequest.FullHeader == true, for fetch all headers
|
||||||
|
message HeadRequest {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
refs.Address Address = 2 [(gogoproto.nullable) = false, (gogoproto.customtype) = "Address"];
|
||||||
|
bool FullHeaders = 3;
|
||||||
|
uint32 TTL = 4;
|
||||||
|
}
|
||||||
|
message HeadResponse {
|
||||||
|
Object Object = 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
message SearchRequest {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
uint32 Version = 2;
|
||||||
|
bytes ContainerID = 3 [(gogoproto.nullable) = false, (gogoproto.customtype) = "CID"];
|
||||||
|
bytes Query = 4;
|
||||||
|
uint32 TTL = 5;
|
||||||
|
}
|
||||||
|
|
||||||
|
message SearchResponse {
|
||||||
|
repeated refs.Address Addresses = 1 [(gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
message GetRangeRequest {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
refs.Address Address = 2 [(gogoproto.nullable) = false];
|
||||||
|
repeated Range Ranges = 3 [(gogoproto.nullable) = false];
|
||||||
|
uint32 TTL = 4;
|
||||||
|
}
|
||||||
|
|
||||||
|
message GetRangeResponse {
|
||||||
|
repeated bytes Fragments = 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
message GetRangeHashRequest {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
refs.Address Address = 2 [(gogoproto.nullable) = false];
|
||||||
|
repeated Range Ranges = 3 [(gogoproto.nullable) = false];
|
||||||
|
bytes Salt = 4;
|
||||||
|
uint32 TTL = 5;
|
||||||
|
}
|
||||||
|
|
||||||
|
message GetRangeHashResponse {
|
||||||
|
repeated bytes Hashes = 1 [(gogoproto.customtype) = "Hash", (gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
66
object/sg.go
Normal file
|
@ -0,0 +1,66 @@
|
||||||
|
package object
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"sort"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Here are defined getter functions for objects that contain storage group
|
||||||
|
// information.
|
||||||
|
|
||||||
|
type (
|
||||||
|
// IDList is a slice of object ids, that can be sorted.
|
||||||
|
IDList []ID
|
||||||
|
|
||||||
|
// ZoneInfo provides validation info of storage group.
|
||||||
|
ZoneInfo struct {
|
||||||
|
Hash
|
||||||
|
Size uint64
|
||||||
|
}
|
||||||
|
|
||||||
|
// IdentificationInfo provides meta information about storage group.
|
||||||
|
IdentificationInfo struct {
|
||||||
|
SGID
|
||||||
|
CID
|
||||||
|
OwnerID
|
||||||
|
}
|
||||||
|
)
|
||||||
|
|
||||||
|
// Len returns amount of object ids in IDList.
|
||||||
|
func (s IDList) Len() int { return len(s) }
|
||||||
|
|
||||||
|
// Less returns the byte comparison between IDList[i] and IDList[j].
|
||||||
|
func (s IDList) Less(i, j int) bool { return bytes.Compare(s[i].Bytes(), s[j].Bytes()) == -1 }
|
||||||
|
|
||||||
|
// Swap swaps element with i and j index in IDList.
|
||||||
|
func (s IDList) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
|
||||||
|
|
||||||
|
// Group returns slice of object ids that are part of a storage group.
|
||||||
|
func (m *Object) Group() []ID {
|
||||||
|
sgLinks := m.Links(Link_StorageGroup)
|
||||||
|
sort.Sort(IDList(sgLinks))
|
||||||
|
return sgLinks
|
||||||
|
}
|
||||||
|
|
||||||
|
// Zones returns validation zones of storage group.
|
||||||
|
func (m *Object) Zones() []ZoneInfo {
|
||||||
|
sgInfo, err := m.StorageGroup()
|
||||||
|
if err != nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
return []ZoneInfo{
|
||||||
|
{
|
||||||
|
Hash: sgInfo.ValidationHash,
|
||||||
|
Size: sgInfo.ValidationDataSize,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// IDInfo returns meta information about storage group.
|
||||||
|
func (m *Object) IDInfo() *IdentificationInfo {
|
||||||
|
return &IdentificationInfo{
|
||||||
|
SGID: m.SystemHeader.ID,
|
||||||
|
CID: m.SystemHeader.CID,
|
||||||
|
OwnerID: m.SystemHeader.OwnerID,
|
||||||
|
}
|
||||||
|
}
|
87
object/sg_test.go
Normal file
|
@ -0,0 +1,87 @@
|
||||||
|
package object
|
||||||
|
|
||||||
|
import (
|
||||||
|
"math/rand"
|
||||||
|
"sort"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/nspcc-dev/neofs-proto/hash"
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestObject_StorageGroup(t *testing.T) {
|
||||||
|
t.Run("group method", func(t *testing.T) {
|
||||||
|
var linkCount byte = 100
|
||||||
|
|
||||||
|
obj := &Object{Headers: make([]Header, 0, linkCount)}
|
||||||
|
require.Empty(t, obj.Group())
|
||||||
|
|
||||||
|
idList := make([]ID, linkCount)
|
||||||
|
for i := byte(0); i < linkCount; i++ {
|
||||||
|
idList[i] = ID{i}
|
||||||
|
obj.Headers = append(obj.Headers, Header{
|
||||||
|
Value: &Header_Link{Link: &Link{
|
||||||
|
Type: Link_StorageGroup,
|
||||||
|
ID: idList[i],
|
||||||
|
}},
|
||||||
|
})
|
||||||
|
}
|
||||||
|
|
||||||
|
rand.Shuffle(len(obj.Headers), func(i, j int) { obj.Headers[i], obj.Headers[j] = obj.Headers[j], obj.Headers[i] })
|
||||||
|
sort.Sort(IDList(idList))
|
||||||
|
require.Equal(t, idList, obj.Group())
|
||||||
|
})
|
||||||
|
t.Run("identification method", func(t *testing.T) {
|
||||||
|
oid, cid, owner := ID{1}, CID{2}, OwnerID{3}
|
||||||
|
obj := &Object{
|
||||||
|
SystemHeader: SystemHeader{
|
||||||
|
ID: oid,
|
||||||
|
OwnerID: owner,
|
||||||
|
CID: cid,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
idInfo := obj.IDInfo()
|
||||||
|
require.Equal(t, oid, idInfo.SGID)
|
||||||
|
require.Equal(t, cid, idInfo.CID)
|
||||||
|
require.Equal(t, owner, idInfo.OwnerID)
|
||||||
|
})
|
||||||
|
t.Run("zones method", func(t *testing.T) {
|
||||||
|
sgSize := uint64(100)
|
||||||
|
|
||||||
|
d := make([]byte, sgSize)
|
||||||
|
_, err := rand.Read(d)
|
||||||
|
require.NoError(t, err)
|
||||||
|
sgHash := hash.Sum(d)
|
||||||
|
|
||||||
|
obj := &Object{
|
||||||
|
Headers: []Header{
|
||||||
|
{
|
||||||
|
Value: &Header_StorageGroup{
|
||||||
|
StorageGroup: &StorageGroup{
|
||||||
|
ValidationDataSize: sgSize,
|
||||||
|
ValidationHash: sgHash,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
var (
|
||||||
|
sumSize uint64
|
||||||
|
zones = obj.Zones()
|
||||||
|
hashes = make([]Hash, len(zones))
|
||||||
|
)
|
||||||
|
|
||||||
|
for i := range zones {
|
||||||
|
sumSize += zones[i].Size
|
||||||
|
hashes[i] = zones[i].Hash
|
||||||
|
}
|
||||||
|
|
||||||
|
sumHash, err := hash.Concat(hashes)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
require.Equal(t, sgSize, sumSize)
|
||||||
|
require.Equal(t, sgHash, sumHash)
|
||||||
|
})
|
||||||
|
}
|
219
object/types.go
Normal file
|
@ -0,0 +1,219 @@
|
||||||
|
package object
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"context"
|
||||||
|
|
||||||
|
"github.com/gogo/protobuf/proto"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/internal"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/session"
|
||||||
|
)
|
||||||
|
|
||||||
|
type (
|
||||||
|
// Pred defines a predicate function that can check if passed header
|
||||||
|
// satisfies predicate condition. It is used to find headers of
|
||||||
|
// specific type.
|
||||||
|
Pred = func(*Header) bool
|
||||||
|
|
||||||
|
// Address is a type alias of object Address.
|
||||||
|
Address = refs.Address
|
||||||
|
|
||||||
|
// VerificationHeader is a type alias of session's verification header.
|
||||||
|
VerificationHeader = session.VerificationHeader
|
||||||
|
|
||||||
|
// PositionReader defines object reader that returns slice of bytes
|
||||||
|
// for specified object and data range.
|
||||||
|
PositionReader interface {
|
||||||
|
PRead(ctx context.Context, addr refs.Address, rng Range) ([]byte, error)
|
||||||
|
}
|
||||||
|
|
||||||
|
headerType int
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// ErrVerifyPayload is raised when payload checksum cannot be verified.
|
||||||
|
ErrVerifyPayload = internal.Error("can't verify payload")
|
||||||
|
|
||||||
|
// ErrVerifyHeader is raised when object integrity cannot be verified.
|
||||||
|
ErrVerifyHeader = internal.Error("can't verify header")
|
||||||
|
|
||||||
|
// ErrHeaderNotFound is raised when requested header not found.
|
||||||
|
ErrHeaderNotFound = internal.Error("header not found")
|
||||||
|
|
||||||
|
// ErrVerifySignature is raised when signature cannot be verified.
|
||||||
|
ErrVerifySignature = internal.Error("can't verify signature")
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
_ headerType = iota
|
||||||
|
// LinkHdr is a link header type.
|
||||||
|
LinkHdr
|
||||||
|
// RedirectHdr is a redirect header type.
|
||||||
|
RedirectHdr
|
||||||
|
// UserHdr is a user defined header type.
|
||||||
|
UserHdr
|
||||||
|
// TransformHdr is a transformation header type.
|
||||||
|
TransformHdr
|
||||||
|
// TombstoneHdr is a tombstone header type.
|
||||||
|
TombstoneHdr
|
||||||
|
// VerifyHdr is a verification header type.
|
||||||
|
VerifyHdr
|
||||||
|
// HomoHashHdr is a homomorphic hash header type.
|
||||||
|
HomoHashHdr
|
||||||
|
// PayloadChecksumHdr is a payload checksum header type.
|
||||||
|
PayloadChecksumHdr
|
||||||
|
// IntegrityHdr is an integrity header type.
|
||||||
|
IntegrityHdr
|
||||||
|
// StorageGroupHdr is a storage group header type.
|
||||||
|
StorageGroupHdr
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
_ internal.Custom = (*Object)(nil)
|
||||||
|
|
||||||
|
emptyObject = new(Object).Bytes()
|
||||||
|
)
|
||||||
|
|
||||||
|
// Bytes returns marshaled object in a binary format.
|
||||||
|
func (m Object) Bytes() []byte { data, _ := m.Marshal(); return data }
|
||||||
|
|
||||||
|
// Empty checks if object does not contain any information.
|
||||||
|
func (m Object) Empty() bool { return bytes.Equal(m.Bytes(), emptyObject) }
|
||||||
|
|
||||||
|
// LastHeader returns last header of the specified type. Type must be
|
||||||
|
// specified as a Pred function.
|
||||||
|
func (m Object) LastHeader(f Pred) (int, *Header) {
|
||||||
|
for i := len(m.Headers) - 1; i >= 0; i-- {
|
||||||
|
if f != nil && f(&m.Headers[i]) {
|
||||||
|
return i, &m.Headers[i]
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return -1, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// AddHeader adds passed header to the end of extended header list.
|
||||||
|
func (m *Object) AddHeader(h *Header) {
|
||||||
|
m.Headers = append(m.Headers, *h)
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetPayload sets payload field and payload length in the system header.
|
||||||
|
func (m *Object) SetPayload(payload []byte) {
|
||||||
|
m.Payload = payload
|
||||||
|
m.SystemHeader.PayloadLength = uint64(len(payload))
|
||||||
|
}
|
||||||
|
|
||||||
|
// SetHeader replaces existing extended header or adds new one to the end of
|
||||||
|
// extended header list.
|
||||||
|
func (m *Object) SetHeader(h *Header) {
|
||||||
|
// looking for the header of that type
|
||||||
|
for i := range m.Headers {
|
||||||
|
if m.Headers[i].typeOf(h.Value) {
|
||||||
|
// if we found one - set it with new value and return
|
||||||
|
m.Headers[i] = *h
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// if we did not find one - add this header
|
||||||
|
m.AddHeader(h)
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m Header) typeOf(t isHeader_Value) (ok bool) {
|
||||||
|
switch t.(type) {
|
||||||
|
case *Header_Link:
|
||||||
|
_, ok = m.Value.(*Header_Link)
|
||||||
|
case *Header_Redirect:
|
||||||
|
_, ok = m.Value.(*Header_Redirect)
|
||||||
|
case *Header_UserHeader:
|
||||||
|
_, ok = m.Value.(*Header_UserHeader)
|
||||||
|
case *Header_Transform:
|
||||||
|
_, ok = m.Value.(*Header_Transform)
|
||||||
|
case *Header_Tombstone:
|
||||||
|
_, ok = m.Value.(*Header_Tombstone)
|
||||||
|
case *Header_Verify:
|
||||||
|
_, ok = m.Value.(*Header_Verify)
|
||||||
|
case *Header_HomoHash:
|
||||||
|
_, ok = m.Value.(*Header_HomoHash)
|
||||||
|
case *Header_PayloadChecksum:
|
||||||
|
_, ok = m.Value.(*Header_PayloadChecksum)
|
||||||
|
case *Header_Integrity:
|
||||||
|
_, ok = m.Value.(*Header_Integrity)
|
||||||
|
case *Header_StorageGroup:
|
||||||
|
_, ok = m.Value.(*Header_StorageGroup)
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// HeaderType returns a predicate that checks if an extended header is a header
// of the specified type.
|
||||||
|
func HeaderType(t headerType) Pred {
|
||||||
|
switch t {
|
||||||
|
case LinkHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_Link); return ok }
|
||||||
|
case RedirectHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_Redirect); return ok }
|
||||||
|
case UserHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_UserHeader); return ok }
|
||||||
|
case TransformHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_Transform); return ok }
|
||||||
|
case TombstoneHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_Tombstone); return ok }
|
||||||
|
case VerifyHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_Verify); return ok }
|
||||||
|
case HomoHashHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_HomoHash); return ok }
|
||||||
|
case PayloadChecksumHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_PayloadChecksum); return ok }
|
||||||
|
case IntegrityHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_Integrity); return ok }
|
||||||
|
case StorageGroupHdr:
|
||||||
|
return func(h *Header) bool { _, ok := h.Value.(*Header_StorageGroup); return ok }
|
||||||
|
default:
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Copy creates full copy of the object.
|
||||||
|
func (m *Object) Copy() (obj *Object) {
|
||||||
|
obj = new(Object)
|
||||||
|
m.CopyTo(obj)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// CopyTo fills the passed object with the data from the current object.
// This function creates a copy of every available data slice.
|
||||||
|
func (m *Object) CopyTo(o *Object) {
|
||||||
|
o.SystemHeader = m.SystemHeader
|
||||||
|
o.Headers = make([]Header, len(m.Headers))
|
||||||
|
o.Payload = make([]byte, len(m.Payload))
|
||||||
|
|
||||||
|
for i := range m.Headers {
|
||||||
|
switch v := m.Headers[i].Value.(type) {
|
||||||
|
case *Header_Link:
|
||||||
|
link := *v.Link
|
||||||
|
o.Headers[i] = Header{
|
||||||
|
Value: &Header_Link{
|
||||||
|
Link: &link,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
case *Header_HomoHash:
|
||||||
|
o.Headers[i] = Header{
|
||||||
|
Value: &Header_HomoHash{
|
||||||
|
HomoHash: v.HomoHash,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
default:
|
||||||
|
o.Headers[i] = *proto.Clone(&m.Headers[i]).(*Header)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
copy(o.Payload, m.Payload)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Address returns object's address.
|
||||||
|
func (m Object) Address() *refs.Address {
|
||||||
|
return &refs.Address{
|
||||||
|
ObjectID: m.SystemHeader.ID,
|
||||||
|
CID: m.SystemHeader.CID,
|
||||||
|
}
|
||||||
|
}
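A hedged sketch of the header helpers defined in this file (SetHeader, LastHeader, HeaderType); the user header content is illustrative:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/object"
)

func main() {
	obj := new(object.Object)

	// SetHeader replaces a header of the same kind or appends a new one.
	obj.SetHeader(&object.Header{Value: &object.Header_UserHeader{
		UserHeader: &object.UserHeader{Key: "color", Value: "blue"},
	}})

	// LastHeader walks the header list backwards using a HeaderType predicate.
	if _, h := obj.LastHeader(object.HeaderType(object.UserHdr)); h != nil {
		fmt.Println(h.Value.(*object.Header_UserHeader).UserHeader.Value) // prints "blue"
	}
}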
|
BIN
object/types.pb.go
Normal file
Binary file not shown.
107
object/types.proto
Normal file
|
@ -0,0 +1,107 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package object;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/object";
|
||||||
|
|
||||||
|
import "refs/types.proto";
|
||||||
|
import "session/types.proto";
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
message Range {
|
||||||
|
uint64 Offset = 1;
|
||||||
|
uint64 Length = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
message UserHeader {
|
||||||
|
string Key = 1;
|
||||||
|
string Value = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
message Header {
|
||||||
|
oneof Value {
|
||||||
|
Link Link = 1;
|
||||||
|
refs.Address Redirect = 2;
|
||||||
|
UserHeader UserHeader = 3;
|
||||||
|
Transform Transform = 4;
|
||||||
|
Tombstone Tombstone = 5;
|
||||||
|
// session-related info: session.VerificationHeader
|
||||||
|
session.VerificationHeader Verify = 6;
|
||||||
|
// integrity-related info
|
||||||
|
bytes HomoHash = 7 [(gogoproto.customtype) = "Hash"];
|
||||||
|
bytes PayloadChecksum = 8;
|
||||||
|
IntegrityHeader Integrity = 9;
|
||||||
|
StorageGroup StorageGroup = 10;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
message Tombstone {
|
||||||
|
uint64 Epoch = 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
message SystemHeader {
|
||||||
|
uint64 Version = 1;
|
||||||
|
uint64 PayloadLength = 2;
|
||||||
|
|
||||||
|
bytes ID = 3 [(gogoproto.customtype) = "ID", (gogoproto.nullable) = false];
|
||||||
|
bytes OwnerID = 4 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||||
|
bytes CID = 5 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||||
|
CreationPoint CreatedAt = 6 [(gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
message CreationPoint {
|
||||||
|
int64 UnixTime = 1;
|
||||||
|
uint64 Epoch = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
message IntegrityHeader {
|
||||||
|
bytes HeadersChecksum = 1;
|
||||||
|
bytes ChecksumSignature = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
message Link {
|
||||||
|
enum Type {
|
||||||
|
Unknown = 0;
|
||||||
|
Parent = 1;
|
||||||
|
Previous = 2;
|
||||||
|
Next = 3;
|
||||||
|
Child = 4;
|
||||||
|
StorageGroup = 5;
|
||||||
|
}
|
||||||
|
Type type = 1;
|
||||||
|
bytes ID = 2 [(gogoproto.customtype) = "ID", (gogoproto.nullable) = false];
|
||||||
|
}
|
||||||
|
|
||||||
|
message Transform {
|
||||||
|
enum Type {
|
||||||
|
Unknown = 0;
|
||||||
|
Split = 1;
|
||||||
|
Sign = 2;
|
||||||
|
Mould = 3;
|
||||||
|
}
|
||||||
|
Type type = 1;
|
||||||
|
}
|
||||||
|
|
||||||
|
message Object {
|
||||||
|
SystemHeader SystemHeader = 1 [(gogoproto.nullable) = false];
|
||||||
|
repeated Header Headers = 2 [(gogoproto.nullable) = false];
|
||||||
|
bytes Payload = 3;
|
||||||
|
}
|
||||||
|
|
||||||
|
message StorageGroup {
|
||||||
|
uint64 ValidationDataSize = 1;
|
||||||
|
bytes ValidationHash = 2 [(gogoproto.customtype) = "Hash", (gogoproto.nullable) = false];
|
||||||
|
|
||||||
|
message Lifetime {
|
||||||
|
enum Unit {
|
||||||
|
Unlimited = 0;
|
||||||
|
NeoFSEpoch = 1;
|
||||||
|
UnixTime = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
Unit unit = 1 [(gogoproto.customname) = "Unit"];
|
||||||
|
int64 Value = 2;
|
||||||
|
}
|
||||||
|
|
||||||
|
Lifetime lifetime = 3 [(gogoproto.customname) = "Lifetime"];
|
||||||
|
}
|
107
object/utils.go
Normal file
|
@ -0,0 +1,107 @@
|
||||||
|
package object
|
||||||
|
|
||||||
|
import (
|
||||||
|
"io"
|
||||||
|
|
||||||
|
"code.cloudfoundry.org/bytefmt"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/session"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
)
|
||||||
|
|
||||||
|
const maxGetPayloadSize = 3584 * 1024 // 3.5 MiB
|
||||||
|
|
||||||
|
func splitBytes(data []byte, maxSize int) (result [][]byte) {
|
||||||
|
l := len(data)
|
||||||
|
if l == 0 {
|
||||||
|
return [][]byte{data}
|
||||||
|
}
|
||||||
|
for i := 0; i < l; i += maxSize {
|
||||||
|
last := i + maxSize
|
||||||
|
if last > l {
|
||||||
|
last = l
|
||||||
|
}
|
||||||
|
result = append(result, data[i:last])
|
||||||
|
}
|
||||||
|
return
|
||||||
|
}
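// For illustration (not part of the original file): splitting 10 bytes with
// maxSize = 4 yields three chunks of 4, 4 and 2 bytes; splitting an empty
// slice yields a single empty chunk, so a header-only put still sends one message.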
|
||||||
|
|
||||||
|
// SendPutRequest prepares object and sends it in chunks through protobuf stream.
|
||||||
|
func SendPutRequest(s Service_PutClient, obj *Object, epoch uint64, ttl uint32) (*PutResponse, error) {
|
||||||
|
// TODO split must take into account size of the marshalled Object
|
||||||
|
chunks := splitBytes(obj.Payload, maxGetPayloadSize)
|
||||||
|
obj.Payload = chunks[0]
|
||||||
|
if err := s.Send(MakePutRequestHeader(obj, epoch, ttl, nil)); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
for i := range chunks[1:] {
|
||||||
|
if err := s.Send(MakePutRequestChunk(chunks[i+1])); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
resp, err := s.CloseAndRecv()
|
||||||
|
if err != nil && err != io.EOF {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
return resp, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// MakePutRequestHeader combines object, epoch, ttl and session token value
|
||||||
|
// into header of object put request.
|
||||||
|
func MakePutRequestHeader(obj *Object, epoch uint64, ttl uint32, token *session.Token) *PutRequest {
|
||||||
|
return &PutRequest{
|
||||||
|
R: &PutRequest_Header{
|
||||||
|
Header: &PutRequest_PutHeader{
|
||||||
|
Epoch: epoch,
|
||||||
|
Object: obj,
|
||||||
|
TTL: ttl,
|
||||||
|
Token: token,
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// MakePutRequestChunk wraps a single payload chunk into a put request that will
// be transferred in the protobuf stream.
|
||||||
|
func MakePutRequestChunk(chunk []byte) *PutRequest {
|
||||||
|
return &PutRequest{R: &PutRequest_Chunk{Chunk: chunk}}
|
||||||
|
}
|
||||||
|
|
||||||
|
func errMaxSizeExceeded(size uint64) error {
|
||||||
|
return errors.Errorf("object payload size exceeded: %s", bytefmt.ByteSize(size))
|
||||||
|
}
|
||||||
|
|
||||||
|
// ReceiveGetResponse receives an object by chunks from the protobuf stream
// and combines it into a single get response structure.
|
||||||
|
func ReceiveGetResponse(c Service_GetClient, maxSize uint64) (*GetResponse, error) {
|
||||||
|
res, err := c.Recv()
|
||||||
|
if err == io.EOF {
|
||||||
|
return res, err
|
||||||
|
} else if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
obj := res.GetObject()
|
||||||
|
if obj == nil {
|
||||||
|
return nil, ErrHeaderExpected
|
||||||
|
}
|
||||||
|
|
||||||
|
if obj.SystemHeader.PayloadLength > maxSize {
|
||||||
|
return nil, errMaxSizeExceeded(maxSize)
|
||||||
|
}
|
||||||
|
|
||||||
|
if res.NotFull() {
|
||||||
|
payload := make([]byte, obj.SystemHeader.PayloadLength)
|
||||||
|
offset := copy(payload, obj.Payload)
|
||||||
|
|
||||||
|
var r *GetResponse
|
||||||
|
for r, err = c.Recv(); err == nil; r, err = c.Recv() {
|
||||||
|
offset += copy(payload[offset:], r.GetChunk())
|
||||||
|
}
|
||||||
|
if err != io.EOF {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
obj.Payload = payload
|
||||||
|
}
|
||||||
|
|
||||||
|
return res, nil
|
||||||
|
}
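A hedged client-side sketch combining SendPutRequest and ReceiveGetResponse. The endpoint, the NewServiceClient constructor name and the request values are assumptions based on the standard gRPC code generated for the Service defined in object/service.proto, not something this commit pins down:

package main

import (
	"context"
	"log"

	"github.com/nspcc-dev/neofs-proto/object"
	"google.golang.org/grpc"
)

func main() {
	conn, err := grpc.Dial("example-node:8080", grpc.WithInsecure()) // assumed endpoint
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := object.NewServiceClient(conn) // assumed generated constructor
	ctx := context.Background()

	// Put: open the stream and let SendPutRequest emit the header and payload chunks.
	putStream, err := client.Put(ctx)
	if err != nil {
		log.Fatal(err)
	}
	obj := new(object.Object)
	obj.SetPayload([]byte("hello"))
	if _, err := object.SendPutRequest(putStream, obj, 1, 3); err != nil {
		log.Fatal(err)
	}

	// Get: open the stream and reassemble the chunked payload.
	getStream, err := client.Get(ctx, &object.GetRequest{Epoch: 1, TTL: 3}) // zero Address, illustration only
	if err != nil {
		log.Fatal(err)
	}
	if _, err := object.ReceiveGetResponse(getStream, uint64(object.UnitsMB)); err != nil {
		log.Fatal(err)
	}
}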
|
132
object/verification.go
Normal file
|
@ -0,0 +1,132 @@
|
||||||
|
package object
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"crypto/ecdsa"
|
||||||
|
"crypto/sha256"
|
||||||
|
|
||||||
|
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
)
|
||||||
|
|
||||||
|
func (m Object) headersData(check bool) ([]byte, error) {
|
||||||
|
var bytebuf = new(bytes.Buffer)
|
||||||
|
|
||||||
|
// fixme: we must marshal fields one by one without protobuf marshaling
|
||||||
|
// protobuf marshaling does not guarantee the same result
|
||||||
|
|
||||||
|
if sysheader, err := m.SystemHeader.Marshal(); err != nil {
|
||||||
|
return nil, err
|
||||||
|
} else if _, err := bytebuf.Write(sysheader); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
|
||||||
|
n, _ := m.LastHeader(HeaderType(IntegrityHdr))
|
||||||
|
for i := range m.Headers {
|
||||||
|
if check && i == n {
|
||||||
|
// ignore last integrity header in order to check headers data
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
if header, err := m.Headers[i].Marshal(); err != nil {
|
||||||
|
return nil, err
|
||||||
|
} else if _, err := bytebuf.Write(header); err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return bytebuf.Bytes(), nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m Object) headersChecksum(check bool) ([]byte, error) {
|
||||||
|
data, err := m.headersData(check)
|
||||||
|
if err != nil {
|
||||||
|
return nil, err
|
||||||
|
}
|
||||||
|
checksum := sha256.Sum256(data)
|
||||||
|
return checksum[:], nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// PayloadChecksum calculates sha256 checksum of object payload.
|
||||||
|
func (m Object) PayloadChecksum() []byte {
|
||||||
|
checksum := sha256.Sum256(m.Payload)
|
||||||
|
return checksum[:]
|
||||||
|
}
|
||||||
|
|
||||||
|
func (m Object) verifySignature(key []byte, ih *IntegrityHeader) error {
|
||||||
|
pk := crypto.UnmarshalPublicKey(key)
|
||||||
|
if crypto.Verify(pk, ih.HeadersChecksum, ih.ChecksumSignature) == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
return ErrVerifySignature
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify performs a local integrity check by finding the verification header and
// the integrity header. If the header integrity check passes, the function verifies
// the checksum of the object payload.
|
||||||
|
func (m Object) Verify() error {
|
||||||
|
var (
|
||||||
|
err error
|
||||||
|
checksum []byte
|
||||||
|
)
|
||||||
|
// Prepare structures
|
||||||
|
_, vh := m.LastHeader(HeaderType(VerifyHdr))
|
||||||
|
if vh == nil {
|
||||||
|
return ErrHeaderNotFound
|
||||||
|
}
|
||||||
|
verify := vh.Value.(*Header_Verify).Verify
|
||||||
|
|
||||||
|
_, ih := m.LastHeader(HeaderType(IntegrityHdr))
|
||||||
|
if ih == nil {
|
||||||
|
return ErrHeaderNotFound
|
||||||
|
}
|
||||||
|
integrity := ih.Value.(*Header_Integrity).Integrity
|
||||||
|
|
||||||
|
// Verify signature
|
||||||
|
err = m.verifySignature(verify.PublicKey, integrity)
|
||||||
|
if err != nil {
|
||||||
|
return errors.Wrapf(err, "public key: %x", verify.PublicKey)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify checksum of header
|
||||||
|
checksum, err = m.headersChecksum(true)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
if !bytes.Equal(integrity.HeadersChecksum, checksum) {
|
||||||
|
return ErrVerifyHeader
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify checksum of payload
|
||||||
|
if m.SystemHeader.PayloadLength > 0 && !m.IsLinking() {
|
||||||
|
checksum = m.PayloadChecksum()
|
||||||
|
|
||||||
|
_, ph := m.LastHeader(HeaderType(PayloadChecksumHdr))
|
||||||
|
if ph == nil {
|
||||||
|
return ErrHeaderNotFound
|
||||||
|
}
|
||||||
|
if !bytes.Equal(ph.Value.(*Header_PayloadChecksum).PayloadChecksum, checksum) {
|
||||||
|
return ErrVerifyPayload
|
||||||
|
}
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Sign creates new integrity header and adds it to the end of the list of
|
||||||
|
// extended headers.
|
||||||
|
func (m *Object) Sign(key *ecdsa.PrivateKey) error {
|
||||||
|
headerChecksum, err := m.headersChecksum(false)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
headerChecksumSignature, err := crypto.Sign(key, headerChecksum)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
m.AddHeader(&Header{Value: &Header_Integrity{
|
||||||
|
Integrity: &IntegrityHeader{
|
||||||
|
HeadersChecksum: headerChecksum,
|
||||||
|
ChecksumSignature: headerChecksumSignature,
|
||||||
|
},
|
||||||
|
}})
|
||||||
|
return nil
|
||||||
|
}
|
105
object/verification_test.go
Normal file
|
@ -0,0 +1,105 @@
|
||||||
|
package object
|
||||||
|
|
||||||
|
import (
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/google/uuid"
|
||||||
|
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||||
|
"github.com/nspcc-dev/neofs-crypto/test"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/container"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/refs"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/session"
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestObject_Verify(t *testing.T) {
|
||||||
|
key := test.DecodeKey(0)
|
||||||
|
sessionkey := test.DecodeKey(1)
|
||||||
|
|
||||||
|
payload := make([]byte, 1024*1024)
|
||||||
|
|
||||||
|
cnr, err := container.NewTestContainer()
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
cid, err := cnr.ID()
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
id, err := uuid.NewRandom()
|
||||||
|
uid := refs.UUID(id)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
obj := &Object{
|
||||||
|
SystemHeader: SystemHeader{
|
||||||
|
ID: uid,
|
||||||
|
CID: cid,
|
||||||
|
OwnerID: refs.OwnerID([refs.OwnerIDSize]byte{}),
|
||||||
|
},
|
||||||
|
Headers: []Header{
|
||||||
|
{
|
||||||
|
Value: &Header_UserHeader{
|
||||||
|
UserHeader: &UserHeader{
|
||||||
|
Key: "Profession",
|
||||||
|
Value: "Developer",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
{
|
||||||
|
Value: &Header_UserHeader{
|
||||||
|
UserHeader: &UserHeader{
|
||||||
|
Key: "Language",
|
||||||
|
Value: "GO",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
obj.SetPayload(payload)
|
||||||
|
obj.SetHeader(&Header{Value: &Header_PayloadChecksum{[]byte("incorrect checksum")}})
|
||||||
|
|
||||||
|
t.Run("error no integrity header", func(t *testing.T) {
|
||||||
|
err = obj.Verify()
|
||||||
|
require.EqualError(t, err, ErrHeaderNotFound.Error())
|
||||||
|
})
|
||||||
|
|
||||||
|
badHeaderChecksum := []byte("incorrect checksum")
|
||||||
|
signature, err := crypto.Sign(sessionkey, badHeaderChecksum)
|
||||||
|
require.NoError(t, err)
|
||||||
|
ih := &IntegrityHeader{
|
||||||
|
HeadersChecksum: badHeaderChecksum,
|
||||||
|
ChecksumSignature: signature,
|
||||||
|
}
|
||||||
|
obj.SetHeader(&Header{Value: &Header_Integrity{ih}})
|
||||||
|
|
||||||
|
t.Run("error no validation header", func(t *testing.T) {
|
||||||
|
err = obj.Verify()
|
||||||
|
require.EqualError(t, err, ErrHeaderNotFound.Error())
|
||||||
|
})
|
||||||
|
|
||||||
|
dataPK := crypto.MarshalPublicKey(&sessionkey.PublicKey)
|
||||||
|
signature, err = crypto.Sign(key, dataPK)
|
||||||
|
vh := &session.VerificationHeader{
|
||||||
|
PublicKey: dataPK,
|
||||||
|
KeySignature: signature,
|
||||||
|
}
|
||||||
|
obj.SetVerificationHeader(vh)
|
||||||
|
|
||||||
|
t.Run("error invalid header checksum", func(t *testing.T) {
|
||||||
|
err = obj.Verify()
|
||||||
|
require.EqualError(t, err, ErrVerifyHeader.Error())
|
||||||
|
})
|
||||||
|
|
||||||
|
require.NoError(t, obj.Sign(sessionkey))
|
||||||
|
|
||||||
|
t.Run("error invalid payload checksum", func(t *testing.T) {
|
||||||
|
err = obj.Verify()
|
||||||
|
require.EqualError(t, err, ErrVerifyPayload.Error())
|
||||||
|
})
|
||||||
|
|
||||||
|
obj.SetHeader(&Header{Value: &Header_PayloadChecksum{obj.PayloadChecksum()}})
|
||||||
|
require.NoError(t, obj.Sign(sessionkey))
|
||||||
|
|
||||||
|
t.Run("correct", func(t *testing.T) {
|
||||||
|
err = obj.Verify()
|
||||||
|
require.NoError(t, err)
|
||||||
|
})
|
||||||
|
}
|
7
proto.go
Normal file
|
@ -0,0 +1,7 @@
|
||||||
|
package neofs_proto // import "github.com/nspcc-dev/neofs-proto"
|
||||||
|
|
||||||
|
import (
|
||||||
|
_ "github.com/gogo/protobuf/gogoproto"
|
||||||
|
_ "github.com/gogo/protobuf/proto"
|
||||||
|
_ "github.com/golang/protobuf/proto"
|
||||||
|
)
|
43
query/types.go
Normal file
|
@ -0,0 +1,43 @@
|
||||||
|
package query
|
||||||
|
|
||||||
|
import (
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"github.com/gogo/protobuf/proto"
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
_ proto.Message = (*Query)(nil)
|
||||||
|
_ proto.Message = (*Filter)(nil)
|
||||||
|
)
|
||||||
|
|
||||||
|
// String returns string representation of Filter.
|
||||||
|
func (m Filter) String() string {
|
||||||
|
b := new(strings.Builder)
|
||||||
|
b.WriteString("<Filter '$" + m.Name + "' ")
|
||||||
|
switch m.Type {
|
||||||
|
case Filter_Exact:
|
||||||
|
b.WriteString("==")
|
||||||
|
case Filter_Regex:
|
||||||
|
b.WriteString("~=")
|
||||||
|
default:
|
||||||
|
b.WriteString("??")
|
||||||
|
}
|
||||||
|
b.WriteString(" '" + m.Value + "'>")
|
||||||
|
return b.String()
|
||||||
|
}
|
||||||
|
|
||||||
|
// String returns string representation of Query.
|
||||||
|
func (m Query) String() string {
|
||||||
|
b := new(strings.Builder)
|
||||||
|
b.WriteString("<Query [")
|
||||||
|
ln := len(m.Filters)
|
||||||
|
for i := 0; i < ln; i++ {
|
||||||
|
b.WriteString(m.Filters[i].String())
|
||||||
|
if ln-1 != i {
|
||||||
|
b.WriteByte(',')
|
||||||
|
}
|
||||||
|
}
|
||||||
|
b.WriteByte(']')
|
||||||
|
return b.String()
|
||||||
|
}
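A small usage example for the two stringers above; the filter values are illustrative:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/query"
)

func main() {
	q := query.Query{Filters: []query.Filter{
		{Type: query.Filter_Exact, Name: "ID", Value: "object-id"},
		{Type: query.Filter_Regex, Name: "PARENT", Value: ".*"},
	}}

	// Prints: <Query [<Filter '$ID' == 'object-id'>,<Filter '$PARENT' ~= '.*'>]
	fmt.Println(q.String())
}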
|
BIN
query/types.pb.go
Normal file
Binary file not shown.
25
query/types.proto
Normal file
|
@ -0,0 +1,25 @@
|
||||||
|
syntax = "proto3";
|
||||||
|
package query;
|
||||||
|
option go_package = "github.com/nspcc-dev/neofs-proto/query";
|
||||||
|
|
||||||
|
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||||
|
|
||||||
|
option (gogoproto.stable_marshaler_all) = true;
|
||||||
|
|
||||||
|
message Filter {
|
||||||
|
option (gogoproto.goproto_stringer) = false;
|
||||||
|
|
||||||
|
enum Type {
|
||||||
|
Exact = 0;
|
||||||
|
Regex = 1;
|
||||||
|
}
|
||||||
|
Type type = 1 [(gogoproto.customname) = "Type"];
|
||||||
|
string Name = 2;
|
||||||
|
string Value = 3;
|
||||||
|
}
|
||||||
|
|
||||||
|
message Query {
|
||||||
|
option (gogoproto.goproto_stringer) = false;
|
||||||
|
|
||||||
|
repeated Filter Filters = 1 [(gogoproto.nullable) = false];
|
||||||
|
}
|
68
refs/address.go
Normal file
|
@ -0,0 +1,68 @@
|
||||||
|
package refs
|
||||||
|
|
||||||
|
import (
|
||||||
|
"crypto/sha256"
|
||||||
|
"strings"
|
||||||
|
|
||||||
|
"github.com/nspcc-dev/neofs-proto/internal"
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
joinSeparator = "/"
|
||||||
|
|
||||||
|
// ErrWrongAddress is raised when a wrong address is passed to Address.Parse or ParseAddress.
|
||||||
|
ErrWrongAddress = internal.Error("wrong address")
|
||||||
|
|
||||||
|
// ErrEmptyAddress is raised when an empty address is passed to Address.Parse or ParseAddress.
|
||||||
|
ErrEmptyAddress = internal.Error("empty address")
|
||||||
|
)
|
||||||
|
|
||||||
|
// ParseAddress parses address from string representation into new Address.
|
||||||
|
func ParseAddress(str string) (*Address, error) {
|
||||||
|
var addr Address
|
||||||
|
return &addr, addr.Parse(str)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Parse parses address from string representation into current Address.
|
||||||
|
func (m *Address) Parse(addr string) error {
|
||||||
|
if m == nil {
|
||||||
|
return ErrEmptyAddress
|
||||||
|
}
|
||||||
|
|
||||||
|
items := strings.Split(addr, joinSeparator)
|
||||||
|
if len(items) != 2 {
|
||||||
|
return ErrWrongAddress
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := m.CID.Parse(items[0]); err != nil {
|
||||||
|
return err
|
||||||
|
} else if err := m.ObjectID.Parse(items[1]); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// String returns string representation of Address.
|
||||||
|
func (m Address) String() string {
|
||||||
|
return strings.Join([]string{m.CID.String(), m.ObjectID.String()}, joinSeparator)
|
||||||
|
}
|
||||||
|
|
||||||
|
// IsFull checks that the ContainerID and ObjectID are not empty.
|
||||||
|
func (m Address) IsFull() bool {
|
||||||
|
return !m.CID.Empty() && !m.ObjectID.Empty()
|
||||||
|
}
|
||||||
|
|
||||||
|
// Equal checks that current Address is equal to passed Address.
|
||||||
|
func (m Address) Equal(a2 *Address) bool {
|
||||||
|
return m.CID.Equal(a2.CID) && m.ObjectID.Equal(a2.ObjectID)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Hash returns a []byte that is used as a key for a storage bucket.
|
||||||
|
func (m Address) Hash() ([]byte, error) {
|
||||||
|
if !m.IsFull() {
|
||||||
|
return nil, ErrEmptyAddress
|
||||||
|
}
|
||||||
|
h := sha256.Sum256(append(m.ObjectID.Bytes(), m.CID.Bytes()...))
|
||||||
|
return h[:], nil
|
||||||
|
}
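A hedged round-trip sketch for the address helpers above; the identifiers are freshly generated rather than real ones:

package main

import (
	"fmt"
	"log"

	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	oid, err := refs.NewObjectID()
	if err != nil {
		log.Fatal(err)
	}
	addr := refs.Address{
		ObjectID: oid,
		CID:      refs.CIDForBytes([]byte("container")), // illustrative container bytes
	}

	// String joins "<CID>/<ObjectID>"; Parse reverses it.
	parsed, err := refs.ParseAddress(addr.String())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(parsed.Equal(&addr)) // expected: true
}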
|
96
refs/cid.go
Normal file
|
@ -0,0 +1,96 @@
|
||||||
|
package refs
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"crypto/sha256"
|
||||||
|
|
||||||
|
"github.com/mr-tron/base58"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
)
|
||||||
|
|
||||||
|
// CIDForBytes creates CID for passed bytes.
|
||||||
|
func CIDForBytes(data []byte) CID { return sha256.Sum256(data) }
|
||||||
|
|
||||||
|
// CIDFromBytes parses CID from passed bytes.
|
||||||
|
func CIDFromBytes(data []byte) (cid CID, err error) {
|
||||||
|
if ln := len(data); ln != CIDSize {
|
||||||
|
return CID{}, errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", CIDSize, ln)
|
||||||
|
}
|
||||||
|
|
||||||
|
copy(cid[:], data)
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
// CIDFromString parses CID from string representation of CID.
|
||||||
|
func CIDFromString(c string) (CID, error) {
|
||||||
|
var cid CID
|
||||||
|
decoded, err := base58.Decode(c)
|
||||||
|
if err != nil {
|
||||||
|
return cid, err
|
||||||
|
}
|
||||||
|
return CIDFromBytes(decoded)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Size returns size of CID (CIDSize).
|
||||||
|
func (c CID) Size() int { return CIDSize }
|
||||||
|
|
||||||
|
// Parse tries to parse CID from string representation.
|
||||||
|
func (c *CID) Parse(cid string) error {
|
||||||
|
var err error
|
||||||
|
if *c, err = CIDFromString(cid); err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Empty checks that current CID is empty.
|
||||||
|
func (c CID) Empty() bool { return bytes.Equal(c.Bytes(), emptyCID) }
|
||||||
|
|
||||||
|
// Equal checks that current CID is equal to passed CID.
|
||||||
|
func (c CID) Equal(cid CID) bool { return bytes.Equal(c.Bytes(), cid.Bytes()) }
|
||||||
|
|
||||||
|
// Marshal returns CID bytes representation.
|
||||||
|
func (c CID) Marshal() ([]byte, error) { return c.Bytes(), nil }
|
||||||
|
|
||||||
|
// MarshalBinary returns CID bytes representation.
|
||||||
|
func (c CID) MarshalBinary() ([]byte, error) { return c.Bytes(), nil }
|
||||||
|
|
||||||
|
// MarshalTo copies the CID bytes representation into the passed slice of bytes.
|
||||||
|
func (c *CID) MarshalTo(data []byte) (int, error) { return copy(data, c.Bytes()), nil }
|
||||||
|
|
||||||
|
// ProtoMessage method to satisfy proto.Message interface.
|
||||||
|
func (c CID) ProtoMessage() {}
|
||||||
|
|
||||||
|
// String returns string representation of CID.
|
||||||
|
func (c CID) String() string { return base58.Encode(c[:]) }
|
||||||
|
|
||||||
|
// Reset resets current CID to zero value.
|
||||||
|
func (c *CID) Reset() { *c = CID{} }
|
||||||
|
|
||||||
|
// Bytes returns CID bytes representation.
|
||||||
|
func (c CID) Bytes() []byte {
|
||||||
|
buf := make([]byte, CIDSize)
|
||||||
|
copy(buf, c[:])
|
||||||
|
return buf
|
||||||
|
}
|
||||||
|
|
||||||
|
// UnmarshalBinary tries to parse bytes representation of CID.
|
||||||
|
func (c *CID) UnmarshalBinary(data []byte) error { return c.Unmarshal(data) }
|
||||||
|
|
||||||
|
// Unmarshal tries to parse bytes representation of CID.
|
||||||
|
func (c *CID) Unmarshal(data []byte) error {
|
||||||
|
if ln := len(data); ln != CIDSize {
|
||||||
|
return errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", CIDSize, ln)
|
||||||
|
}
|
||||||
|
|
||||||
|
copy((*c)[:], data)
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify validates that current CID is generated for passed bytes data.
|
||||||
|
func (c CID) Verify(data []byte) error {
|
||||||
|
if id := CIDForBytes(data); !bytes.Equal(c[:], id[:]) {
|
||||||
|
return errors.New("wrong hash for data")
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
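A short sketch tying together the CID helpers above; the input bytes are illustrative:

package main

import (
	"fmt"
	"log"

	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	data := []byte("container definition bytes")

	// A CID is the sha256 digest of the data it identifies.
	cid := refs.CIDForBytes(data)
	fmt.Println(cid.String()) // base58-encoded digest

	// Verify recomputes the digest and compares it with the CID.
	if err := cid.Verify(data); err != nil {
		log.Fatal(err)
	}

	// Round trip through the string form.
	parsed, err := refs.CIDFromString(cid.String())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(parsed.Equal(cid)) // expected: true
}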
|
65
refs/owner.go
Normal file
|
@ -0,0 +1,65 @@
|
||||||
|
package refs
|
||||||
|
|
||||||
|
import (
|
||||||
|
"bytes"
|
||||||
|
"crypto/ecdsa"
|
||||||
|
|
||||||
|
"github.com/mr-tron/base58"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/chain"
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
)
|
||||||
|
|
||||||
|
// NewOwnerID returns generated OwnerID from passed public keys.
|
||||||
|
func NewOwnerID(keys ...*ecdsa.PublicKey) (owner OwnerID, err error) {
|
||||||
|
if len(keys) == 0 {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
var d []byte
|
||||||
|
d, err = base58.Decode(chain.KeysToAddress(keys...))
|
||||||
|
if err != nil {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
copy(owner[:], d)
|
||||||
|
return owner, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Size returns OwnerID size in bytes (OwnerIDSize).
|
||||||
|
func (OwnerID) Size() int { return OwnerIDSize }
|
||||||
|
|
||||||
|
// Empty checks that current OwnerID is empty value.
|
||||||
|
func (o OwnerID) Empty() bool { return bytes.Equal(o.Bytes(), emptyOwner) }
|
||||||
|
|
||||||
|
// Equal checks that current OwnerID is equal to passed OwnerID.
|
||||||
|
func (o OwnerID) Equal(id OwnerID) bool { return bytes.Equal(o.Bytes(), id.Bytes()) }
|
||||||
|
|
||||||
|
// Reset sets current OwnerID to empty value.
|
||||||
|
func (o *OwnerID) Reset() { *o = OwnerID{} }
|
||||||
|
|
||||||
|
// ProtoMessage method to satisfy proto.Message interface.
|
||||||
|
func (OwnerID) ProtoMessage() {}
|
||||||
|
|
||||||
|
// Marshal returns OwnerID bytes representation.
|
||||||
|
func (o OwnerID) Marshal() ([]byte, error) { return o.Bytes(), nil }
|
||||||
|
|
||||||
|
// MarshalTo copies OwnerID bytes representation into passed slice of bytes.
|
||||||
|
func (o OwnerID) MarshalTo(data []byte) (int, error) { return copy(data, o.Bytes()), nil }
|
||||||
|
|
||||||
|
// String returns string representation of OwnerID.
|
||||||
|
func (o OwnerID) String() string { return base58.Encode(o[:]) }
|
||||||
|
|
||||||
|
// Bytes returns OwnerID bytes representation.
|
||||||
|
func (o OwnerID) Bytes() []byte {
|
||||||
|
buf := make([]byte, OwnerIDSize)
|
||||||
|
copy(buf, o[:])
|
||||||
|
return buf
|
||||||
|
}
|
||||||
|
|
||||||
|
// Unmarshal tries to parse OwnerID bytes representation into current OwnerID.
|
||||||
|
func (o *OwnerID) Unmarshal(data []byte) error {
|
||||||
|
if ln := len(data); ln != OwnerIDSize {
|
||||||
|
return errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", OwnerIDSize, ln)
|
||||||
|
}
|
||||||
|
|
||||||
|
copy((*o)[:], data)
|
||||||
|
return nil
|
||||||
|
}
|
14
refs/sgid.go
Normal file
|
@ -0,0 +1,14 @@
|
||||||
|
package refs
|
||||||
|
|
||||||
|
import (
|
||||||
|
"github.com/pkg/errors"
|
||||||
|
)
|
||||||
|
|
||||||
|
// SGIDFromBytes parses the bytes representation of an SGID into a new SGID value.
|
||||||
|
func SGIDFromBytes(data []byte) (sgid SGID, err error) {
|
||||||
|
if ln := len(data); ln != SGIDSize {
|
||||||
|
return SGID{}, errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", SGIDSize, ln)
|
||||||
|
}
|
||||||
|
copy(sgid[:], data)
|
||||||
|
return
|
||||||
|
}
|
106
refs/types.go
Normal file
|
@ -0,0 +1,106 @@
|
||||||
|
// This package contains basic structures implemented in Go, such as
|
||||||
|
//
|
||||||
|
// CID - container id
|
||||||
|
// OwnerID - owner id
|
||||||
|
// ObjectID - object id
|
||||||
|
// SGID - storage group id
|
||||||
|
// Address - contains object id and container id
|
||||||
|
// UUID - a 128 bit (16 byte) Universal Unique Identifier as defined in RFC 4122
|
||||||
|
|
||||||
|
package refs
|
||||||
|
|
||||||
|
import (
|
||||||
|
"crypto/sha256"
|
||||||
|
|
||||||
|
"github.com/google/uuid"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/chain"
|
||||||
|
"github.com/nspcc-dev/neofs-proto/internal"
|
||||||
|
)
|
||||||
|
|
||||||
|
type (
|
||||||
|
// CID is implementation of ContainerID.
|
||||||
|
CID [CIDSize]byte
|
||||||
|
|
||||||
|
// UUID wrapper over github.com/google/uuid.UUID.
|
||||||
|
UUID uuid.UUID
|
||||||
|
|
||||||
|
// SGID is type alias of UUID.
|
||||||
|
SGID = UUID
|
||||||
|
|
||||||
|
// ObjectID is type alias of UUID.
|
||||||
|
ObjectID = UUID
|
||||||
|
|
||||||
|
// MessageID is type alias of UUID.
|
||||||
|
MessageID = UUID
|
||||||
|
|
||||||
|
// OwnerID is wrapper over neofs-proto/chain.WalletAddress.
|
||||||
|
OwnerID chain.WalletAddress
|
||||||
|
)
|
||||||
|
|
||||||
|
const (
|
||||||
|
// UUIDSize contains size of UUID.
|
||||||
|
UUIDSize = 16
|
||||||
|
|
||||||
|
// SGIDSize contains size of SGID.
|
||||||
|
SGIDSize = UUIDSize
|
||||||
|
|
||||||
|
// CIDSize contains size of CID.
|
||||||
|
CIDSize = sha256.Size
|
||||||
|
|
||||||
|
// OwnerIDSize contains size of OwnerID.
|
||||||
|
OwnerIDSize = chain.AddressLength
|
||||||
|
|
||||||
|
// ErrWrongDataSize is raised when passed bytes into Unmarshal have wrong size.
|
||||||
|
ErrWrongDataSize = internal.Error("wrong data size")
|
||||||
|
|
||||||
|
// ErrEmptyOwner is raised when empty OwnerID is passed into container.New.
|
||||||
|
ErrEmptyOwner = internal.Error("owner cant be empty")
|
||||||
|
|
||||||
|
// ErrEmptyCapacity is raised when empty Capacity is passed container.New.
|
||||||
|
ErrEmptyCapacity = internal.Error("capacity cant be empty")
|
||||||
|
|
||||||
|
// ErrEmptyContainer is raised when it CID method is called for an empty container.
|
||||||
|
ErrEmptyContainer = internal.Error("cannot return ID for empty container")
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
emptyCID = (CID{}).Bytes()
|
||||||
|
emptyUUID = (UUID{}).Bytes()
|
||||||
|
emptyOwner = (OwnerID{}).Bytes()
|
||||||
|
|
||||||
|
_ internal.Custom = (*CID)(nil)
|
||||||
|
_ internal.Custom = (*SGID)(nil)
|
||||||
|
_ internal.Custom = (*UUID)(nil)
|
||||||
|
_ internal.Custom = (*OwnerID)(nil)
|
||||||
|
_ internal.Custom = (*ObjectID)(nil)
|
||||||
|
_ internal.Custom = (*MessageID)(nil)
|
||||||
|
|
||||||
|
// NewSGID method alias.
|
||||||
|
NewSGID = NewUUID
|
||||||
|
|
||||||
|
// NewObjectID method alias.
|
||||||
|
NewObjectID = NewUUID
|
||||||
|
|
||||||
|
// NewMessageID method alias.
|
||||||
|
NewMessageID = NewUUID
|
||||||
|
)
|
||||||
|
|
||||||
|
// NewUUID returns a Random (Version 4) UUID.
|
||||||
|
//
|
||||||
|
// The strength of the UUIDs is based on the strength of the crypto/rand
|
||||||
|
// package.
|
||||||
|
//
|
||||||
|
// A note about uniqueness derived from the UUID Wikipedia entry:
|
||||||
|
//
|
||||||
|
// Randomly generated UUIDs have 122 random bits. One's annual risk of being
|
||||||
|
// hit by a meteorite is estimated to be one chance in 17 billion, that
|
||||||
|
// means the probability is about 0.00000000006 (6 × 10−11),
|
||||||
|
// equivalent to the odds of creating a few tens of trillions of UUIDs in a
|
||||||
|
// year and having one duplicate.
|
||||||
|
func NewUUID() (UUID, error) {
|
||||||
|
id, err := uuid.NewRandom()
|
||||||
|
if err != nil {
|
||||||
|
return UUID{}, err
|
||||||
|
}
|
||||||
|
return UUID(id), nil
|
||||||
|
}
|
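Not part of the commit: a short sketch of how the constructors and size constants above fit together. NewSGID, NewObjectID and NewMessageID are variable aliases of NewUUID, so every UUID-based identifier shares the same 16-byte layout, while CID is a SHA-256-sized array.

package main

import (
	"fmt"
	"log"

	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	oid, err := refs.NewObjectID() // same function value as refs.NewUUID
	if err != nil {
		log.Fatal(err)
	}

	sgid, err := refs.NewSGID()
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(oid.Size() == refs.UUIDSize)  // true: ObjectID is a UUID alias, 16 bytes
	fmt.Println(sgid.Size() == refs.SGIDSize) // true: SGIDSize == UUIDSize
	fmt.Println(refs.CIDSize)                 // 32, i.e. sha256.Size
}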
BIN  refs/types.pb.go  Normal file
Binary file not shown.
15  refs/types.proto  Normal file
@@ -0,0 +1,15 @@
syntax = "proto3";
package refs;
option go_package = "github.com/nspcc-dev/neofs-proto/refs";

import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

option (gogoproto.stringer_all) = false;
option (gogoproto.goproto_stringer_all) = false;

message Address {
    bytes ObjectID = 1 [(gogoproto.customtype) = "ObjectID", (gogoproto.nullable) = false]; // UUID
    bytes CID      = 2 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];      // sha256
}
112  refs/types_test.go  Normal file
@@ -0,0 +1,112 @@
package refs

import (
	"strings"
	"testing"

	"github.com/gogo/protobuf/proto"
	"github.com/google/uuid"
	"github.com/nspcc-dev/neofs-crypto/test"
	"github.com/stretchr/testify/require"
)

func TestSGID(t *testing.T) {
	t.Run("check that marshal/unmarshal works like expected", func(t *testing.T) {
		var sgid1, sgid2 UUID

		sgid1, err := NewSGID()
		require.NoError(t, err)

		data, err := proto.Marshal(&sgid1)
		require.NoError(t, err)

		require.NoError(t, sgid2.Unmarshal(data))
		require.Equal(t, sgid1, sgid2)
	})
}

func TestUUID(t *testing.T) {
	t.Run("parse should work like expected", func(t *testing.T) {
		var u UUID

		id, err := uuid.NewRandom()
		require.NoError(t, err)

		require.NoError(t, u.Parse(id.String()))
		require.Equal(t, id.String(), u.String())
	})

	t.Run("check that marshal/unmarshal works like expected", func(t *testing.T) {
		var u1, u2 UUID

		u1 = UUID{0x8f, 0xe4, 0xeb, 0xa0, 0xb8, 0xfb, 0x49, 0x3b, 0xbb, 0x1d, 0x1d, 0x13, 0x6e, 0x69, 0xfc, 0xf7}

		data, err := proto.Marshal(&u1)
		require.NoError(t, err)

		require.NoError(t, u2.Unmarshal(data))
		require.Equal(t, u1, u2)
	})

	t.Run("check that marshal/unmarshal works like expected even for msg id", func(t *testing.T) {
		var u2 MessageID

		u1, err := NewMessageID()
		require.NoError(t, err)

		data, err := proto.Marshal(&u1)
		require.NoError(t, err)

		require.NoError(t, u2.Unmarshal(data))
		require.Equal(t, u1, u2)
	})
}

func TestOwnerID(t *testing.T) {
	t.Run("check that marshal/unmarshal works like expected", func(t *testing.T) {
		var u1, u2 OwnerID

		owner, err := NewOwnerID()
		require.NoError(t, err)
		require.True(t, owner.Empty())

		key := test.DecodeKey(0)

		u1, err = NewOwnerID(&key.PublicKey)
		require.NoError(t, err)
		data, err := proto.Marshal(&u1)
		require.NoError(t, err)

		require.NoError(t, u2.Unmarshal(data))
		require.Equal(t, u1, u2)
	})
}

func TestAddress(t *testing.T) {
	cid := CIDForBytes([]byte("test"))

	id, err := NewObjectID()
	require.NoError(t, err)

	expect := strings.Join([]string{
		cid.String(),
		id.String(),
	}, joinSeparator)

	require.NotPanics(t, func() {
		actual := (Address{
			ObjectID: id,
			CID:      cid,
		}).String()

		require.Equal(t, expect, actual)
	})

	var temp Address
	require.NoError(t, temp.Parse(expect))
	require.Equal(t, expect, temp.String())

	actual, err := ParseAddress(expect)
	require.NoError(t, err)
	require.Equal(t, expect, actual.String())
}
76  refs/uuid.go  Normal file
@@ -0,0 +1,76 @@
package refs

import (
	"bytes"
	"encoding/hex"

	"github.com/google/uuid"
	"github.com/pkg/errors"
)

func encodeHex(dst []byte, uuid UUID) {
	hex.Encode(dst, uuid[:4])
	dst[8] = '-'
	hex.Encode(dst[9:13], uuid[4:6])
	dst[13] = '-'
	hex.Encode(dst[14:18], uuid[6:8])
	dst[18] = '-'
	hex.Encode(dst[19:23], uuid[8:10])
	dst[23] = '-'
	hex.Encode(dst[24:], uuid[10:])
}

// Size returns size in bytes of UUID (UUIDSize).
func (UUID) Size() int { return UUIDSize }

// Empty checks that current UUID is empty value.
func (u UUID) Empty() bool { return bytes.Equal(u.Bytes(), emptyUUID) }

// Reset sets current UUID to empty value.
func (u *UUID) Reset() { *u = [UUIDSize]byte{} }

// ProtoMessage method to satisfy proto.Message.
func (UUID) ProtoMessage() {}

// Marshal returns UUID bytes representation.
func (u UUID) Marshal() ([]byte, error) { return u.Bytes(), nil }

// MarshalTo copies UUID bytes representation into passed slice of bytes.
func (u UUID) MarshalTo(data []byte) (int, error) { return copy(data, u[:]), nil }

// Bytes returns UUID bytes representation.
func (u UUID) Bytes() []byte {
	buf := make([]byte, UUIDSize)
	copy(buf, u[:])
	return buf
}

// Equal checks that current UUID is equal to passed UUID.
func (u UUID) Equal(u2 UUID) bool { return bytes.Equal(u.Bytes(), u2.Bytes()) }

func (u UUID) String() string {
	var buf [36]byte
	encodeHex(buf[:], u)
	return string(buf[:])
}

// Unmarshal tries to parse UUID bytes representation.
func (u *UUID) Unmarshal(data []byte) error {
	if ln := len(data); ln != UUIDSize {
		return errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", UUIDSize, ln)
	}

	copy((*u)[:], data)
	return nil
}

// Parse tries to parse UUID string representation.
func (u *UUID) Parse(id string) error {
	tmp, err := uuid.Parse(id)
	if err != nil {
		return errors.Wrapf(err, "could not parse `%s`", id)
	}

	copy((*u)[:], tmp[:])
	return nil
}
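Not part of the commit: String and Parse above are intended to be inverses (encodeHex writes the canonical 36-character form, Parse accepts anything github.com/google/uuid can parse), and Unmarshal is the byte-level counterpart. A round-trip sketch under that assumption:

package main

import (
	"fmt"
	"log"

	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	u, err := refs.NewUUID()
	if err != nil {
		log.Fatal(err)
	}

	s := u.String() // canonical xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx form

	var parsed refs.UUID
	if err := parsed.Parse(s); err != nil {
		log.Fatal(err)
	}
	fmt.Println(parsed.Equal(u)) // true

	// Unmarshal requires exactly UUIDSize bytes.
	var fromBytes refs.UUID
	if err := fromBytes.Unmarshal(u.Bytes()); err != nil {
		log.Fatal(err)
	}
	fmt.Println(fromBytes.Equal(u)) // true
}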
7  service/epoch.go  Normal file
@@ -0,0 +1,7 @@
package service

// EpochRequest interface gives possibility to get or set epoch in RPC Requests.
type EpochRequest interface {
	GetEpoch() uint64
	SetEpoch(v uint64)
}
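Not part of the commit: any request message only needs a uint64 epoch field plus these two methods to satisfy EpochRequest. A sketch with a hypothetical pingRequest type (invented here for illustration):

package main

import "fmt"

// EpochRequest mirrors the interface from service/epoch.go.
type EpochRequest interface {
	GetEpoch() uint64
	SetEpoch(v uint64)
}

// pingRequest is a hypothetical message carrying an epoch.
type pingRequest struct {
	Epoch uint64
}

func (p *pingRequest) GetEpoch() uint64  { return p.Epoch }
func (p *pingRequest) SetEpoch(v uint64) { p.Epoch = v }

// stampEpoch shows the typical use: overwrite the epoch field of any
// EpochRequest with the node's current epoch before sending.
func stampEpoch(r EpochRequest, current uint64) { r.SetEpoch(current) }

func main() {
	req := &pingRequest{}
	stampEpoch(req, 42)
	fmt.Println(req.GetEpoch()) // 42
}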
24  service/role.go  Normal file
@@ -0,0 +1,24 @@
package service

// NodeRole to identify in Bootstrap service.
type NodeRole int32

const (
	_ NodeRole = iota
	// InnerRingNode that works like an IR node.
	InnerRingNode
	// StorageNode that works like a storage node.
	StorageNode
)

// String represents NodeRole as string.
func (nt NodeRole) String() string {
	switch nt {
	case InnerRingNode:
		return "InnerRingNode"
	case StorageNode:
		return "StorageNode"
	default:
		return "Unknown"
	}
}
22  service/role_test.go  Normal file
@@ -0,0 +1,22 @@
package service

import (
	"github.com/stretchr/testify/require"
	"testing"
)

func TestNodeRole_String(t *testing.T) {
	tests := []struct {
		nt   NodeRole
		want string
	}{
		{want: "Unknown"},
		{nt: StorageNode, want: "StorageNode"},
		{nt: InnerRingNode, want: "InnerRingNode"},
	}
	for _, tt := range tests {
		t.Run(tt.want, func(t *testing.T) {
			require.Equal(t, tt.want, tt.nt.String())
		})
	}
}
47  service/sign.go  Normal file
@@ -0,0 +1,47 @@
package service

import (
	"crypto/ecdsa"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/internal"
	"github.com/pkg/errors"
)

// ErrWrongSignature should be raised when wrong signature is passed into VerifyRequest.
const ErrWrongSignature = internal.Error("wrong signature")

// SignedRequest interface allows signing and verifying requests.
type SignedRequest interface {
	PrepareData() ([]byte, error)
	GetSignature() []byte
	SetSignature([]byte)
}

// SignRequest signs the request with the passed private key.
func SignRequest(r SignedRequest, key *ecdsa.PrivateKey) error {
	var signature []byte
	if data, err := r.PrepareData(); err != nil {
		return err
	} else if signature, err = crypto.Sign(key, data); err != nil {
		return errors.Wrap(err, "could not sign data")
	}

	r.SetSignature(signature)

	return nil
}

// VerifyRequest verifies the request by the passed public keys.
func VerifyRequest(r SignedRequest, keys ...*ecdsa.PublicKey) bool {
	data, err := r.PrepareData()
	if err != nil {
		return false
	}
	for i := range keys {
		if err := crypto.Verify(keys[i], data, r.GetSignature()); err == nil {
			return true
		}
	}
	return false
}
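Not part of the commit: SignRequest signs whatever PrepareData returns and stores the signature back on the request; VerifyRequest re-derives the same data and accepts the request if any supplied public key matches. A self-contained sketch with a hypothetical echoRequest type, assuming only the service package API above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
	"log"

	"github.com/nspcc-dev/neofs-proto/service"
)

// echoRequest is a hypothetical request; PrepareData returns the bytes to sign.
type echoRequest struct {
	Payload   []byte
	Signature []byte
}

func (r *echoRequest) PrepareData() ([]byte, error) { return r.Payload, nil }
func (r *echoRequest) GetSignature() []byte         { return r.Signature }
func (r *echoRequest) SetSignature(sig []byte)      { r.Signature = sig }

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	req := &echoRequest{Payload: []byte("hello")}

	// Sign the request in place with the private key.
	if err := service.SignRequest(req, key); err != nil {
		log.Fatal(err)
	}

	// Verification succeeds if any of the passed public keys signed the data.
	fmt.Println(service.VerifyRequest(req, &key.PublicKey)) // true
}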
45  service/ttl.go  Normal file
@@ -0,0 +1,45 @@
package service

import (
	"github.com/nspcc-dev/neofs-proto/internal"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// TTLRequest interface to verify and update ttl in requests.
type TTLRequest interface {
	GetTTL() uint32
	SetTTL(uint32)
}

const (
	// ZeroTTL is empty ttl, should produce ErrZeroTTL.
	ZeroTTL = iota

	// NonForwardingTTL is a ttl that allows direct connections only.
	NonForwardingTTL

	// SingleForwardingTTL is a ttl that allows connections through another node.
	SingleForwardingTTL

	// ErrZeroTTL is raised when zero ttl is passed.
	ErrZeroTTL = internal.Error("zero ttl")

	// ErrIncorrectTTL is raised when NonForwardingTTL is passed and NodeRole != InnerRingNode.
	ErrIncorrectTTL = internal.Error("incorrect ttl")
)

// CheckTTLRequest validates and updates ttl of requests.
func CheckTTLRequest(req TTLRequest, role NodeRole) error {
	var ttl = req.GetTTL()

	if ttl == ZeroTTL {
		return status.New(codes.InvalidArgument, ErrZeroTTL.Error()).Err()
	} else if ttl == NonForwardingTTL && role != InnerRingNode {
		return status.New(codes.InvalidArgument, ErrIncorrectTTL.Error()).Err()
	}

	req.SetTTL(ttl - 1)

	return nil
}
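Not part of the commit: CheckTTLRequest rejects a zero ttl outright, rejects NonForwardingTTL unless the node is an inner-ring node, and otherwise decrements the ttl in place before the request travels further. A sketch of that behaviour with a hypothetical ttlMessage type:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/service"
)

// ttlMessage is a hypothetical request carrying only a TTL field.
type ttlMessage struct{ ttl uint32 }

func (m *ttlMessage) GetTTL() uint32  { return m.ttl }
func (m *ttlMessage) SetTTL(v uint32) { m.ttl = v }

func main() {
	msg := &ttlMessage{ttl: service.SingleForwardingTTL}

	// Allowed for a storage node: the ttl is decremented from 2 to 1.
	if err := service.CheckTTLRequest(msg, service.StorageNode); err != nil {
		fmt.Println("unexpected:", err)
		return
	}
	fmt.Println(msg.GetTTL()) // 1, i.e. NonForwardingTTL

	// A second hop now fails for a storage node, since NonForwardingTTL
	// is only accepted when the role is InnerRingNode.
	err := service.CheckTTLRequest(msg, service.StorageNode)
	fmt.Println(err != nil) // true: InvalidArgument with "incorrect ttl"
}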
72  service/ttl_test.go  Normal file
@@ -0,0 +1,72 @@
package service

import (
	"github.com/stretchr/testify/require"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"testing"
)

type mockedRequest struct {
	msg  string
	ttl  uint32
	name string
	role NodeRole
	code codes.Code
}

func (m *mockedRequest) SetTTL(v uint32) { m.ttl = v }
func (m mockedRequest) GetTTL() uint32   { return m.ttl }

func TestCheckTTLRequest(t *testing.T) {
	tests := []mockedRequest{
		{
			ttl:  NonForwardingTTL,
			role: InnerRingNode,
			name: "direct to ir node",
		},
		{
			ttl:  NonForwardingTTL,
			role: StorageNode,
			code: codes.InvalidArgument,
			msg:  ErrIncorrectTTL.Error(),
			name: "direct to storage node",
		},
		{
			ttl:  ZeroTTL,
			role: StorageNode,
			msg:  ErrZeroTTL.Error(),
			code: codes.InvalidArgument,
			name: "zero ttl",
		},
		{
			ttl:  SingleForwardingTTL,
			role: InnerRingNode,
			name: "default to ir node",
		},
		{
			ttl:  SingleForwardingTTL,
			role: StorageNode,
			name: "default to storage node",
		},
	}

	for i := range tests {
		tt := tests[i]
		t.Run(tt.name, func(t *testing.T) {
			before := tt.ttl
			err := CheckTTLRequest(&tt, tt.role)
			if tt.msg != "" {
				require.Errorf(t, err, tt.msg)

				state, ok := status.FromError(err)
				require.True(t, ok)
				require.Equal(t, state.Code(), tt.code)
				require.Equal(t, state.Message(), tt.msg)
			} else {
				require.NoError(t, err)
				require.NotEqualf(t, before, tt.ttl, "ttl should be changed: %d vs %d", before, tt.ttl)
			}
		})
	}
}
57  session/service.go  Normal file
@@ -0,0 +1,57 @@
package session

import (
	"context"
	"crypto/ecdsa"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/refs"
)

type (
	// KeyStore is an interface that describes a storage
	// that allows fetching public keys by OwnerID.
	KeyStore interface {
		Get(ctx context.Context, id refs.OwnerID) ([]*ecdsa.PublicKey, error)
	}

	// TokenStore is a PToken storage manipulation interface.
	TokenStore interface {
		// New returns new token with specified parameters.
		New(p TokenParams) *PToken

		// Fetch tries to fetch a token with specified id.
		Fetch(id TokenID) *PToken

		// Remove removes token with id from store.
		Remove(id TokenID)
	}

	// TokenParams contains params to create new PToken.
	TokenParams struct {
		FirstEpoch uint64
		LastEpoch  uint64
		ObjectID   []ObjectID
		OwnerID    OwnerID
	}
)

// NewInitRequest returns new initialization CreateRequest from passed Token.
func NewInitRequest(t *Token) *CreateRequest {
	return &CreateRequest{Message: &CreateRequest_Init{Init: t}}
}

// NewSignedRequest returns new signed CreateRequest from passed Token.
func NewSignedRequest(t *Token) *CreateRequest {
	return &CreateRequest{Message: &CreateRequest_Signed{Signed: t}}
}

// Sign signs contents of the header with the private key.
func (m *VerificationHeader) Sign(key *ecdsa.PrivateKey) error {
	s, err := crypto.Sign(key, m.PublicKey)
	if err != nil {
		return err
	}
	m.KeySignature = s
	return nil
}
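Not part of the commit: the two wrapper constructors above presumably drive the bidirectional Create stream declared in session/service.proto, with the client first sending an Init token and later answering with a Signed one. A sketch of the message construction only, assuming the session package types from this commit:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/session"
)

func main() {
	// An unsigned token describing the session the client wants to open.
	t := &session.Token{
		FirstEpoch: 1,
		LastEpoch:  10,
	}

	// First message on the Create stream: the initial (unsigned) token.
	first := session.NewInitRequest(t)

	// Later, the same token (signed by the client) is wrapped as a Signed message.
	answer := session.NewSignedRequest(t)

	// Both requests carry the token inside the oneof Message field.
	fmt.Printf("%T %T\n", first.GetMessage(), answer.GetMessage())
}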
BIN  session/service.pb.go  Normal file
Binary file not shown.
27  session/service.proto  Normal file
@@ -0,0 +1,27 @@
syntax = "proto3";
package session;
option go_package = "github.com/nspcc-dev/neofs-proto/session";

import "session/types.proto";
import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

service Session {
    rpc Create (stream CreateRequest) returns (stream CreateResponse);
}


message CreateRequest {
    oneof Message {
        session.Token Init = 1;
        session.Token Signed = 2;
    }
}

message CreateResponse {
    oneof Message {
        session.Token Unsigned = 1;
        session.Token Result = 2;
    }
}
81  session/store.go  Normal file
@@ -0,0 +1,81 @@
package session

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"sync"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/refs"
)

type simpleStore struct {
	*sync.RWMutex

	tokens map[TokenID]*PToken
}

// TODO get curve from neofs-crypto
func defaultCurve() elliptic.Curve {
	return elliptic.P256()
}

// NewSimpleStore creates simple token storage
func NewSimpleStore() TokenStore {
	return &simpleStore{
		RWMutex: new(sync.RWMutex),
		tokens:  make(map[TokenID]*PToken),
	}
}

// New returns new token with specified parameters.
func (s *simpleStore) New(p TokenParams) *PToken {
	tid, err := refs.NewUUID()
	if err != nil {
		return nil
	}

	key, err := ecdsa.GenerateKey(defaultCurve(), rand.Reader)
	if err != nil {
		return nil
	}

	if p.FirstEpoch > p.LastEpoch || p.OwnerID.Empty() {
		return nil
	}

	t := &PToken{
		mtx: new(sync.Mutex),
		Token: Token{
			ID:         tid,
			Header:     VerificationHeader{PublicKey: crypto.MarshalPublicKey(&key.PublicKey)},
			FirstEpoch: p.FirstEpoch,
			LastEpoch:  p.LastEpoch,
			ObjectID:   p.ObjectID,
			OwnerID:    p.OwnerID,
		},
		PrivateKey: key,
	}

	s.Lock()
	s.tokens[t.ID] = t
	s.Unlock()

	return t
}

// Fetch tries to fetch a token with specified id.
func (s *simpleStore) Fetch(id TokenID) *PToken {
	s.RLock()
	defer s.RUnlock()

	return s.tokens[id]
}

// Remove removes token with id from store.
func (s *simpleStore) Remove(id TokenID) {
	s.Lock()
	delete(s.tokens, id)
	s.Unlock()
}
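Not part of the commit: note that New silently returns nil when the epochs are inverted or the owner is empty, so callers must always check the result; the store_test.go below exercises the happy path. A minimal sketch of the nil-return behaviour:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/session"
)

func main() {
	store := session.NewSimpleStore()

	// FirstEpoch > LastEpoch (and an empty OwnerID) make New return nil.
	token := store.New(session.TokenParams{FirstEpoch: 10, LastEpoch: 1})
	fmt.Println(token == nil) // true
}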
84  session/store_test.go  Normal file
@@ -0,0 +1,84 @@
package session

import (
	"crypto/ecdsa"
	"crypto/rand"
	"testing"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/refs"
	"github.com/stretchr/testify/require"
)

type testClient struct {
	*ecdsa.PrivateKey
	OwnerID OwnerID
}

func (c *testClient) Sign(data []byte) ([]byte, error) {
	return crypto.Sign(c.PrivateKey, data)
}

func newTestClient(t *testing.T) *testClient {
	key, err := ecdsa.GenerateKey(defaultCurve(), rand.Reader)
	require.NoError(t, err)

	owner, err := refs.NewOwnerID(&key.PublicKey)
	require.NoError(t, err)

	return &testClient{PrivateKey: key, OwnerID: owner}
}

func signToken(t *testing.T, token *PToken, c *testClient) {
	require.NotNil(t, token)

	signH, err := c.Sign(token.Header.PublicKey)
	require.NoError(t, err)
	require.NotNil(t, signH)

	// data is not yet signed
	require.False(t, token.Verify(&c.PublicKey))

	signT, err := c.Sign(token.verificationData())
	require.NoError(t, err)
	require.NotNil(t, signT)

	token.AddSignatures(signH, signT)
	require.True(t, token.Verify(&c.PublicKey))
}

func TestTokenStore(t *testing.T) {
	s := NewSimpleStore()

	oid, err := refs.NewObjectID()
	require.NoError(t, err)

	c := newTestClient(t)
	require.NotNil(t, c)

	// create new token
	token := s.New(TokenParams{ObjectID: []ObjectID{oid}, OwnerID: c.OwnerID})
	signToken(t, token, c)

	// check that it can be fetched
	t1 := s.Fetch(token.ID)
	require.NotNil(t, t1)
	require.Equal(t, token, t1)

	// create and sign another token by the same client
	t1 = s.New(TokenParams{ObjectID: []ObjectID{oid}, OwnerID: c.OwnerID})
	signToken(t, t1, c)

	data := []byte{1, 2, 3}
	sign, err := t1.SignData(data)
	require.NoError(t, err)
	require.Error(t, token.Header.VerifyData(data, sign))

	sign, err = token.SignData(data)
	require.NoError(t, err)
	require.NoError(t, token.Header.VerifyData(data, sign))

	s.Remove(token.ID)
	require.Nil(t, s.Fetch(token.ID))
	require.NotNil(t, s.Fetch(t1.ID))
}
159  session/types.go  Normal file
@@ -0,0 +1,159 @@
package session

import (
	"crypto/ecdsa"
	"encoding/binary"
	"sync"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/internal"
	"github.com/nspcc-dev/neofs-proto/refs"
	"github.com/pkg/errors"
)

type (
	// ObjectID type alias.
	ObjectID = refs.ObjectID
	// OwnerID type alias.
	OwnerID = refs.OwnerID
	// TokenID type alias.
	TokenID = refs.UUID

	// PToken is a wrapper around Token that allows signing data
	// and doing thread-safe manipulations.
	PToken struct {
		Token

		mtx        *sync.Mutex
		PrivateKey *ecdsa.PrivateKey
	}
)

const (
	// ErrWrongFirstEpoch is raised when passed Token contains wrong first epoch.
	// First epoch is the epoch since which the token is valid.
	ErrWrongFirstEpoch = internal.Error("wrong first epoch")

	// ErrWrongLastEpoch is raised when passed Token contains wrong last epoch.
	// Last epoch is the epoch until which the token is valid.
	ErrWrongLastEpoch = internal.Error("wrong last epoch")

	// ErrWrongOwner is raised when passed Token contains wrong OwnerID.
	ErrWrongOwner = internal.Error("wrong owner")

	// ErrEmptyPublicKey is raised when passed Token contains wrong public key.
	ErrEmptyPublicKey = internal.Error("empty public key")

	// ErrWrongObjectsCount is raised when passed Token contains wrong objects count.
	ErrWrongObjectsCount = internal.Error("wrong objects count")

	// ErrWrongObjects is raised when passed Token contains wrong object ids.
	ErrWrongObjects = internal.Error("wrong objects")

	// ErrInvalidSignature is raised when wrong signature is passed to VerificationHeader.VerifyData().
	ErrInvalidSignature = internal.Error("invalid signature")
)

// verificationData returns byte array to sign.
// Note: protobuf serialization is inconsistent as
// wire order is unspecified.
func (m *Token) verificationData() (data []byte) {
	var size int
	if l := len(m.ObjectID); l > 0 {
		size = m.ObjectID[0].Size()
		data = make([]byte, 16+l*size)
	} else {
		data = make([]byte, 16)
	}
	binary.BigEndian.PutUint64(data, m.FirstEpoch)
	binary.BigEndian.PutUint64(data[8:], m.LastEpoch)
	for i := range m.ObjectID {
		copy(data[16+i*size:], m.ObjectID[i].Bytes())
	}
	return
}

// IsSame checks if the passed token is valid and equal to current token.
func (m *Token) IsSame(t *Token) error {
	switch {
	case m.FirstEpoch != t.FirstEpoch:
		return ErrWrongFirstEpoch
	case m.LastEpoch != t.LastEpoch:
		return ErrWrongLastEpoch
	case !m.OwnerID.Equal(t.OwnerID):
		return ErrWrongOwner
	case m.Header.PublicKey == nil:
		return ErrEmptyPublicKey
	case len(m.ObjectID) != len(t.ObjectID):
		return ErrWrongObjectsCount
	default:
		for i := range m.ObjectID {
			if !m.ObjectID[i].Equal(t.ObjectID[i]) {
				return errors.Wrapf(ErrWrongObjects, "expect %s, actual: %s", m.ObjectID[i], t.ObjectID[i])
			}
		}
	}
	return nil
}

// Sign tries to sign current Token data and stores signature inside it.
func (m *Token) Sign(key *ecdsa.PrivateKey) error {
	if err := m.Header.Sign(key); err != nil {
		return err
	}

	s, err := crypto.Sign(key, m.verificationData())
	if err != nil {
		return err
	}

	m.Signature = s
	return nil
}

// Verify checks if token is correct and signed.
func (m *Token) Verify(keys ...*ecdsa.PublicKey) bool {
	if m.FirstEpoch > m.LastEpoch {
		return false
	}
	for i := range keys {
		if m.Header.Verify(keys[i]) && crypto.Verify(keys[i], m.verificationData(), m.Signature) == nil {
			return true
		}
	}
	return false
}

// AddSignatures adds token signatures.
func (t *PToken) AddSignatures(signH, signT []byte) {
	t.mtx.Lock()

	t.Header.KeySignature = signH
	t.Signature = signT

	t.mtx.Unlock()
}

// SignData signs data with session private key.
func (t *PToken) SignData(data []byte) ([]byte, error) {
	return crypto.Sign(t.PrivateKey, data)
}

// VerifyData checks if signature of data by token t
// is equal to sign.
func (m *VerificationHeader) VerifyData(data, sign []byte) error {
	if crypto.Verify(crypto.UnmarshalPublicKey(m.PublicKey), data, sign) != nil {
		return ErrInvalidSignature
	}
	return nil
}

// Verify checks if verification header was issued by id.
func (m *VerificationHeader) Verify(keys ...*ecdsa.PublicKey) bool {
	for i := range keys {
		if crypto.Verify(keys[i], m.PublicKey, m.KeySignature) == nil {
			return true
		}
	}
	return false
}
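Not part of the commit: the token uses a two-level signing scheme — VerificationHeader.Sign signs the session public key, while Token.Sign additionally signs verificationData() (epochs plus object IDs), so Verify checks both. A compact end-to-end sketch with a single owner key, assuming only the session and refs packages from this commit:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
	"log"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/refs"
	"github.com/nspcc-dev/neofs-proto/session"
)

func main() {
	// Owner key signs the token; session key is carried inside the header.
	ownerKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sessionKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	oid, err := refs.NewObjectID()
	if err != nil {
		log.Fatal(err)
	}

	t := &session.Token{
		Header:     session.VerificationHeader{PublicKey: crypto.MarshalPublicKey(&sessionKey.PublicKey)},
		FirstEpoch: 1,
		LastEpoch:  10,
		ObjectID:   []session.ObjectID{oid},
	}

	// Sign signs the header (session public key) and the token body.
	if err := t.Sign(ownerKey); err != nil {
		log.Fatal(err)
	}

	fmt.Println(t.Verify(&ownerKey.PublicKey))   // true
	fmt.Println(t.Verify(&sessionKey.PublicKey)) // false: wrong key
}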
BIN  session/types.pb.go  Normal file
Binary file not shown.
22  session/types.proto  Normal file
@@ -0,0 +1,22 @@
syntax = "proto3";
package session;
option go_package = "github.com/nspcc-dev/neofs-proto/session";

import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

message VerificationHeader {
    bytes PublicKey = 1;
    bytes KeySignature = 2;
}

message Token {
    VerificationHeader Header = 1 [(gogoproto.nullable) = false];
    bytes OwnerID = 2 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
    uint64 FirstEpoch = 3;
    uint64 LastEpoch = 4;
    repeated bytes ObjectID = 5 [(gogoproto.customtype) = "ObjectID", (gogoproto.nullable) = false];
    bytes Signature = 6;
    bytes ID = 7 [(gogoproto.customtype) = "TokenID", (gogoproto.nullable) = false];
}
48  state/service.go  Normal file
@@ -0,0 +1,48 @@
package state

import (
	"github.com/golang/protobuf/proto"
	"github.com/prometheus/client_golang/prometheus"
	dto "github.com/prometheus/client_model/go"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// MetricFamily is type alias for proto.Message generated
// from github.com/prometheus/client_model/metrics.proto.
type MetricFamily = dto.MetricFamily

// EncodeMetrics encodes metrics from gatherer into MetricsResponse message;
// if something went wrong, returns gRPC Status error (can be returned from service).
func EncodeMetrics(g prometheus.Gatherer) (*MetricsResponse, error) {
	metrics, err := g.Gather()
	if err != nil {
		return nil, status.New(codes.Internal, err.Error()).Err()
	}

	results := make([][]byte, 0, len(metrics))
	for _, mf := range metrics {
		item, err := proto.Marshal(mf)
		if err != nil {
			return nil, status.New(codes.Internal, err.Error()).Err()
		}

		results = append(results, item)
	}

	return &MetricsResponse{Metrics: results}, nil
}

// DecodeMetrics decodes metrics from MetricsResponse to []MetricFamily;
// if something went wrong, returns error.
func DecodeMetrics(r *MetricsResponse) ([]*MetricFamily, error) {
	metrics := make([]*dto.MetricFamily, 0, len(r.Metrics))
	for i := range r.Metrics {
		mf := new(MetricFamily)
		if err := proto.Unmarshal(r.Metrics[i], mf); err != nil {
			return nil, err
		}
		metrics = append(metrics, mf)
	}

	return metrics, nil
}
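Not part of the commit: EncodeMetrics marshals each gathered MetricFamily to bytes and packs them into one MetricsResponse; DecodeMetrics reverses that on the client. A round-trip sketch against the default Prometheus registerer, assuming only the state package API above:

package main

import (
	"fmt"
	"log"

	"github.com/nspcc-dev/neofs-proto/state"
	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Register a throwaway counter so the gatherer has something to report.
	counter := prometheus.NewCounter(prometheus.CounterOpts{
		Name: "example_requests_total",
		Help: "hypothetical example counter",
	})
	prometheus.MustRegister(counter)
	counter.Inc()

	// Server side: gather and pack metrics into the gRPC response.
	resp, err := state.EncodeMetrics(prometheus.DefaultGatherer)
	if err != nil {
		log.Fatal(err)
	}

	// Client side: unpack the response back into MetricFamily messages.
	families, err := state.DecodeMetrics(resp)
	if err != nil {
		log.Fatal(err)
	}

	for _, mf := range families {
		fmt.Println(mf.GetName())
	}
}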
BIN  state/service.pb.go  Normal file
Binary file not shown.
37  state/service.proto  Normal file
@@ -0,0 +1,37 @@
syntax = "proto3";
package state;
option go_package = "github.com/nspcc-dev/neofs-proto/state";

import "bootstrap/types.proto";
import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

// The Status service definition.
service Status {
    rpc Netmap(NetmapRequest) returns (bootstrap.SpreadMap);
    rpc Metrics(MetricsRequest) returns (MetricsResponse);
    rpc HealthCheck(HealthRequest) returns (HealthResponse);
}

// NetmapRequest message to request current node netmap
message NetmapRequest {}

// MetricsRequest message to request node metrics
message MetricsRequest {}

// MetricsResponse contains [][]byte,
// every []byte is marshaled MetricFamily proto message
// from github.com/prometheus/client_model/metrics.proto
message MetricsResponse {
    repeated bytes Metrics = 1;
}

// HealthRequest message to check current state
message HealthRequest {}

// HealthResponse message with current state
message HealthResponse {
    bool Healthy = 1;
    string Status = 2;
}