forked from TrueCloudLab/frostfs-api-go

commit 1cf33e5ffd: initial

87 changed files with 29835 additions and 0 deletions
.gitattributes (vendored, new file, 1 line)
@@ -0,0 +1 @@
/**/*.pb.go -diff binary
.gitignore (vendored, new file, 3 lines)
@@ -0,0 +1,3 @@
bin
temp
/vendor/
LICENSE.md (new file, 675 lines)
@@ -0,0 +1,675 @@
[Full verbatim text of the GNU General Public License, version 3 (29 June 2007); the standard license text is included unmodified and is available at <https://www.gnu.org/licenses/gpl-3.0.md>.]
Makefile (new file, 12 lines)
@@ -0,0 +1,12 @@
protoc:
	@go mod tidy -v
	@go mod vendor
	# Install specific version for gogo-proto
	@go list -f '{{.Path}}/...@{{.Version}}' -m github.com/gogo/protobuf | xargs go get -v
	# Install specific version for protobuf lib
	@go list -f '{{.Path}}/...@{{.Version}}' -m github.com/golang/protobuf | xargs go get -v
	# Protoc generate
	@find . -type f -name '*.proto' -not -path './vendor/*' \
		-exec protoc \
		--proto_path=.:./vendor \
		--gofast_out=plugins=grpc,paths=source_relative:. '{}' \;
README.md (new file, 99 lines)
@@ -0,0 +1,99 @@
# NeoFS-proto

The NeoFS-proto repository contains implementations of the core NeoFS
structures that can be used for integration with NeoFS.

## Description

The repository contains 13 packages that implement NeoFS core structures.
These packages mostly contain protobuf files with service and structure
definitions, or NeoFS core types with accompanying helper functions.

### Accounting

The Accounting package defines the services and structures for accounting
operations: balance requests and `cheque` operations for withdrawal. A
`cheque` is a structure carrying Inner Ring signatures which confirm that the
user may withdraw the requested amount of assets. The NeoFS smart contract
takes a binary-formatted `cheque` as a parameter of the withdraw call.
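For illustration, a minimal sketch of requesting a balance through the
generated Accounting client could look like the following (the node address
and owner ID value are placeholders, and error handling is shortened):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/nspcc-dev/neofs-proto/accounting"
	"google.golang.org/grpc"
)

func main() {
	// Placeholder endpoint of a NeoFS node exposing the Accounting service.
	const nodeAddr = "example-node:8080"

	conn, err := grpc.Dial(nodeAddr, grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The owner ID would normally be derived from the user's NEO wallet key.
	var owner accounting.OwnerID

	cli := accounting.NewAccountingClient(conn)
	resp, err := cli.Balance(context.Background(), &accounting.BalanceRequest{
		OwnerID: owner,
		TTL:     2, // number of hops the request may still be forwarded
	})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("active funds:", resp.GetBalance())
	fmt.Println("lock accounts:", len(resp.GetLockAccounts()))
}
```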
### Bootstrap

The Bootstrap package defines the bootstrap service used by storage nodes to
connect to the storage network.

### Chain

The Chain package contains utility functions for working with NEO blockchain
types: wallet addresses and script hashes.

### Container

The Container package defines the service and structures for operations with
containers. Objects in NeoFS are stored in containers, and a container
defines the storage policy for its objects.

### Decimal

The Decimal package defines a custom decimal implementation that is used in
accounting operations.

### Hash

The Hash package defines the homomorphic hash type.

### Internal

The Internal package defines a constant error type and the proto interface
for custom protobuf structures.
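The constant error type is what allows package-level errors to be declared as
constants, as the accounting package does with `internal.Error("empty
address")`. A minimal sketch of this well-known pattern (the actual
definition in the internal package may differ in detail):

```go
package internal

// Error is a string-based error type, so error values can be constants.
type Error string

// Error implements the built-in error interface.
func (e Error) Error() string { return string(e) }
```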
### Object

The Object package defines the service and structures for object operations.
The object is the core storage structure in NeoFS, and the package contains
detailed information about its internal layout.

### Query

The Query package defines the structure for object search requests.

### Refs

The Refs package defines the core identity types: Object ID, Container ID,
etc.

### Service

The Service package defines utility structures and functions shared by all
NeoFS service operations: TTL and request signature management, node roles,
and epoch retrieval.
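As an example of the TTL management mentioned above, the accounting
`BalanceRequest` in this commit exposes `SetTTL`. A hedged sketch of how
shared service code could stamp a TTL on any such request; the `TTLRequest`
interface shape here is only inferred from the `SetTTL` comment in
`accounting/service.go`:

```go
package service

// TTLRequest is assumed to describe requests that carry a TTL field.
type TTLRequest interface {
	SetTTL(uint32)
}

// StampTTL sets a default TTL on a request before it is sent further.
func StampTTL(req TTLRequest, ttl uint32) {
	req.SetTTL(ttl)
}
```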
### Session

The Session package defines the service and structures for obtaining a
session. Object operations require an established session with a pair of
session keys signed by the owner of the object.

### State

The State package defines the service and structures for metrics gathering.

## How to use

NeoFS-proto packages contain godoc documentation. Examples of using most of
these packages can be found in the NeoFS-CLI repository, which implements and
demonstrates all basic interactions with NeoFS: container, object, storage
group, and accounting operations.

Protobuf files are recompiled with the command:

```
$ make protoc
```

## Contributing

At this moment, we do not accept contributions.

## License

This project is licensed under the GPLv3 License; see the
[LICENSE.md](LICENSE.md) file for details.
accounting/fixtures/cheque.sh (new executable file, 8 lines)
@@ -0,0 +1,8 @@
#!/bin/bash

CHEQUE=d6520dabb6cb9b981792608c73670eff14775e9a65bbc189271723ba2703c53263e8d6e522dc32203339dcd8eee9c6b7439a0000000053724e000000000000001e61000603012d47e76210aec73be39ab3d186e0a40fe8d86bfa3d4fabfda57ba13b88f96abe1de4c7ecd46cb32081c0ff199e0b32708d2ce709dd146ce096484073a9b15a259ca799f8d848eb5bea16f6d0842a0181ccd47384af2cdb0fd0af0819e8a08802f7528ce97c9a93558efe7d4f62577aabdf771c931f54a71be6ad21e7d9cc1777686ad19b5dc4b80d7b8decf90054c5aad66c0e6fe63d8473b751cd77c1bd0557516e0f3e7d0ccb485809023b0c08a89f33ae38b2f99ce3f1ebc7905dddf0ed0f023e00f03a16e8707ce045eb42ee80d392451541ee510dc18e1c8befbac54d7426087d37d32d836537d317deafbbd193002a36f80fbdfbf3a730cf011bc6c75c7e6d5724f3adee7015fcb3068d321e2ae555e79107be0c46070efdae2f724dbc9f0340750b92789821683283bcb98e32b7e032b94f267b6964613fc31a7ce5813fddeea47a1db525634237e924178b5c8ea745549ae60aa3570ce6cf52e370e6ab87652bdf8a179176f1acaf48896bef9ab300818a53f410d86241d506a550f4915403fef27f744e829131d0ec980829fafa51db1714c2761d9f78762c008c323e9d6612e4f9efdc609f191fd9ca5431dd9dc037130150107ab8769780d728e9ffdf314019b57c8d2b940b9ec078afa951ed8b06c1bf352edd2037e29b8f24cca3ec700368a6f5829fb2a34fa03d0308ae6b05f433f2904d9a852fed1f5d2eb598ca79475b74ef6394e712d275cd798062c6d8e41fad822ac5a4fcb167f0a2e196f61f9f65a0adef9650f49150e7eb7bb08dd1739fa6e86b341f1b2cf5657fcd200637e8
DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )

echo $CHEQUE | xxd -p -r > $DIR/cheque_data

exit 0
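The fixture script simply decodes a hex-encoded cheque into the binary form
consumed by tests. An equivalent hedged Go sketch, with the output path as a
placeholder and the hex constant truncated rather than copied in full:

```go
package main

import (
	"encoding/hex"
	"log"
	"os"
)

func main() {
	// Truncated placeholder for the full hex string from cheque.sh.
	const chequeHex = "d6520dabb6cb9b98"

	data, err := hex.DecodeString(chequeHex)
	if err != nil {
		log.Fatal(err)
	}

	// Write the binary fixture next to the script, as cheque_data is.
	if err := os.WriteFile("cheque_data", data, 0o644); err != nil {
		log.Fatal(err)
	}
}
```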
accounting/fixtures/cheque_data (new binary file)
Binary file not shown.
accounting/service.go (new file, 49 lines)
@@ -0,0 +1,49 @@
package accounting

import (
	"github.com/nspcc-dev/neofs-proto/decimal"
	"github.com/nspcc-dev/neofs-proto/internal"
	"github.com/nspcc-dev/neofs-proto/refs"
)

type (
	// OwnerID type alias.
	OwnerID = refs.OwnerID

	// Decimal type alias.
	Decimal = decimal.Decimal

	// Filter is used to filter accounts by criteria.
	Filter func(acc *Account) bool
)

const (
	// ErrEmptyAddress is raised when passed Address is empty.
	ErrEmptyAddress = internal.Error("empty address")

	// ErrEmptyLockTarget is raised when passed LockTarget is empty.
	ErrEmptyLockTarget = internal.Error("empty lock target")

	// ErrEmptyContainerID is raised when passed CID is empty.
	ErrEmptyContainerID = internal.Error("empty container ID")

	// ErrEmptyParentAddress is raised when passed ParentAddress is empty.
	ErrEmptyParentAddress = internal.Error("empty parent address")
)

// SetTTL sets the TTL of the BalanceRequest to satisfy the TTLRequest interface.
func (m *BalanceRequest) SetTTL(v uint32) { m.TTL = v }

// SumFunds goes through all accounts and sums up active funds.
func SumFunds(accounts []*Account) (res *decimal.Decimal) {
	res = decimal.Zero.Copy()

	for i := range accounts {
		if accounts[i] == nil {
			continue
		}

		res = res.Add(accounts[i].ActiveFunds)
	}
	return
}
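A short usage sketch for the helpers above, assuming the accounts come from a
BalanceResponse returned by the Accounting service (with no live response at
hand, the sum is simply the zero decimal):

```go
package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/accounting"
)

// printLockedFunds sums active funds over the lock accounts of a balance
// response; nil entries are skipped by SumFunds.
func printLockedFunds(resp *accounting.BalanceResponse) {
	total := accounting.SumFunds(resp.GetLockAccounts())
	fmt.Println("total locked funds:", total)
}

func main() {
	// In real code the response would come from AccountingClient.Balance;
	// an empty response demonstrates the zero-value behaviour.
	printLockedFunds(&accounting.BalanceResponse{})

	// The package errors are constants and compare by value.
	var err error = accounting.ErrEmptyAddress
	fmt.Println(err == accounting.ErrEmptyAddress) // true
}
```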
accounting/service.pb.go (new file, 701 lines)
@@ -0,0 +1,701 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: accounting/service.proto

package accounting

import (
	context "context"
	fmt "fmt"
	_ "github.com/gogo/protobuf/gogoproto"
	proto "github.com/golang/protobuf/proto"
	decimal "github.com/nspcc-dev/neofs-proto/decimal"
	grpc "google.golang.org/grpc"
	codes "google.golang.org/grpc/codes"
	status "google.golang.org/grpc/status"
	io "io"
	math "math"
	math_bits "math/bits"
)

// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf

// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package

type BalanceRequest struct {
	OwnerID              OwnerID  `protobuf:"bytes,1,opt,name=OwnerID,proto3,customtype=OwnerID" json:"OwnerID"`
	TTL                  uint32   `protobuf:"varint,2,opt,name=TTL,proto3" json:"TTL,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}

func (m *BalanceRequest) Reset()         { *m = BalanceRequest{} }
func (m *BalanceRequest) String() string { return proto.CompactTextString(m) }
func (*BalanceRequest) ProtoMessage()    {}
func (*BalanceRequest) Descriptor() ([]byte, []int) {
	return fileDescriptor_7f9514b8f1d4c7fe, []int{0}
}
func (m *BalanceRequest) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *BalanceRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	b = b[:cap(b)]
	n, err := m.MarshalToSizedBuffer(b)
	if err != nil {
		return nil, err
	}
	return b[:n], nil
}
func (m *BalanceRequest) XXX_Merge(src proto.Message) {
	xxx_messageInfo_BalanceRequest.Merge(m, src)
}
func (m *BalanceRequest) XXX_Size() int {
	return m.Size()
}
func (m *BalanceRequest) XXX_DiscardUnknown() {
	xxx_messageInfo_BalanceRequest.DiscardUnknown(m)
}

var xxx_messageInfo_BalanceRequest proto.InternalMessageInfo

func (m *BalanceRequest) GetTTL() uint32 {
	if m != nil {
		return m.TTL
	}
	return 0
}

type BalanceResponse struct {
	Balance              *decimal.Decimal `protobuf:"bytes,1,opt,name=Balance,proto3" json:"Balance,omitempty"`
	LockAccounts         []*Account       `protobuf:"bytes,2,rep,name=LockAccounts,proto3" json:"LockAccounts,omitempty"`
	XXX_NoUnkeyedLiteral struct{}         `json:"-"`
	XXX_unrecognized     []byte           `json:"-"`
	XXX_sizecache        int32            `json:"-"`
}

func (m *BalanceResponse) Reset()         { *m = BalanceResponse{} }
func (m *BalanceResponse) String() string { return proto.CompactTextString(m) }
func (*BalanceResponse) ProtoMessage()    {}
func (*BalanceResponse) Descriptor() ([]byte, []int) {
	return fileDescriptor_7f9514b8f1d4c7fe, []int{1}
}
func (m *BalanceResponse) XXX_Unmarshal(b []byte) error {
	return m.Unmarshal(b)
}
func (m *BalanceResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
	b = b[:cap(b)]
	n, err := m.MarshalToSizedBuffer(b)
	if err != nil {
		return nil, err
	}
	return b[:n], nil
}
func (m *BalanceResponse) XXX_Merge(src proto.Message) {
	xxx_messageInfo_BalanceResponse.Merge(m, src)
}
func (m *BalanceResponse) XXX_Size() int {
	return m.Size()
}
func (m *BalanceResponse) XXX_DiscardUnknown() {
	xxx_messageInfo_BalanceResponse.DiscardUnknown(m)
}

var xxx_messageInfo_BalanceResponse proto.InternalMessageInfo

func (m *BalanceResponse) GetBalance() *decimal.Decimal {
	if m != nil {
		return m.Balance
	}
	return nil
}

func (m *BalanceResponse) GetLockAccounts() []*Account {
	if m != nil {
		return m.LockAccounts
	}
	return nil
}

func init() {
	proto.RegisterType((*BalanceRequest)(nil), "accounting.BalanceRequest")
	proto.RegisterType((*BalanceResponse)(nil), "accounting.BalanceResponse")
}

func init() { proto.RegisterFile("accounting/service.proto", fileDescriptor_7f9514b8f1d4c7fe) }

var fileDescriptor_7f9514b8f1d4c7fe = []byte{
	// 311 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x48, 0x4c, 0x4e, 0xce,
0x2f, 0xcd, 0x2b, 0xc9, 0xcc, 0x4b, 0xd7, 0x2f, 0x4e, 0x2d, 0x2a, 0xcb, 0x4c, 0x4e, 0xd5, 0x2b,
0x28, 0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x42, 0xc8, 0x48, 0x89, 0xa6, 0xa4, 0x26, 0x67, 0xe6, 0x26,
0xe6, 0xe8, 0x43, 0x69, 0x88, 0x12, 0x29, 0x31, 0x24, 0xcd, 0x25, 0x95, 0x05, 0xa9, 0xc5, 0x50,
0x71, 0xdd, 0xf4, 0xcc, 0x92, 0x8c, 0xd2, 0x24, 0xbd, 0xe4, 0xfc, 0x5c, 0xfd, 0xf4, 0xfc, 0xf4,
0x7c, 0x7d, 0xb0, 0x70, 0x52, 0x69, 0x1a, 0x98, 0x07, 0xe6, 0x80, 0x59, 0x10, 0xe5, 0x4a, 0xbe,
0x5c, 0x7c, 0x4e, 0x89, 0x39, 0x89, 0x79, 0xc9, 0xa9, 0x41, 0xa9, 0x85, 0xa5, 0xa9, 0xc5, 0x25,
0x42, 0x9a, 0x5c, 0xec, 0xfe, 0xe5, 0x79, 0xa9, 0x45, 0x9e, 0x2e, 0x12, 0x8c, 0x0a, 0x8c, 0x1a,
0x3c, 0x4e, 0xfc, 0x27, 0xee, 0xc9, 0x33, 0xdc, 0xba, 0x27, 0x0f, 0x13, 0x0e, 0x82, 0x31, 0x84,
0x04, 0xb8, 0x98, 0x43, 0x42, 0x7c, 0x24, 0x98, 0x14, 0x18, 0x35, 0x78, 0x83, 0x40, 0x4c, 0xa5,
0x32, 0x2e, 0x7e, 0xb8, 0x71, 0xc5, 0x05, 0xf9, 0x79, 0xc5, 0xa9, 0x42, 0x5a, 0x5c, 0xec, 0x50,
0x21, 0xb0, 0x79, 0xdc, 0x46, 0x02, 0x7a, 0x30, 0x9f, 0xb8, 0x40, 0xe8, 0x20, 0x98, 0x02, 0x21,
0x73, 0x2e, 0x1e, 0x9f, 0xfc, 0xe4, 0x6c, 0x47, 0x88, 0xd7, 0x8a, 0x25, 0x98, 0x14, 0x98, 0x35,
0xb8, 0x8d, 0x84, 0xf5, 0x10, 0x7e, 0xd5, 0x83, 0xca, 0x05, 0xa1, 0x28, 0x34, 0x0a, 0xe0, 0xe2,
0x72, 0x84, 0xab, 0x11, 0x72, 0x82, 0x5b, 0x29, 0x24, 0x85, 0xac, 0x17, 0xd5, 0xa7, 0x52, 0xd2,
0x58, 0xe5, 0x20, 0xce, 0x76, 0x72, 0x3c, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x1b,
0x8f, 0xe4, 0x18, 0x1f, 0x3c, 0x92, 0x63, 0x9c, 0xf1, 0x58, 0x8e, 0x21, 0x4a, 0x1b, 0x29, 0x74,
0xf3, 0x8a, 0x0b, 0x92, 0x93, 0x75, 0x53, 0x52, 0xcb, 0xf4, 0xf3, 0x52, 0xf3, 0xd3, 0x8a, 0x75,
0x21, 0x61, 0x8b, 0x30, 0x32, 0x89, 0x0d, 0x2c, 0x62, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0x0c,
0x45, 0x3c, 0x0a, 0xe8, 0x01, 0x00, 0x00,
}

// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn

// This is a compile-time assertion to ensure that this generated file
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4

// AccountingClient is the client API for Accounting service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type AccountingClient interface {
	Balance(ctx context.Context, in *BalanceRequest, opts ...grpc.CallOption) (*BalanceResponse, error)
}

type accountingClient struct {
	cc *grpc.ClientConn
}

func NewAccountingClient(cc *grpc.ClientConn) AccountingClient {
	return &accountingClient{cc}
}

func (c *accountingClient) Balance(ctx context.Context, in *BalanceRequest, opts ...grpc.CallOption) (*BalanceResponse, error) {
	out := new(BalanceResponse)
	err := c.cc.Invoke(ctx, "/accounting.Accounting/Balance", in, out, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}

// AccountingServer is the server API for Accounting service.
type AccountingServer interface {
	Balance(context.Context, *BalanceRequest) (*BalanceResponse, error)
}

// UnimplementedAccountingServer can be embedded to have forward compatible implementations.
type UnimplementedAccountingServer struct {
}

func (*UnimplementedAccountingServer) Balance(ctx context.Context, req *BalanceRequest) (*BalanceResponse, error) {
	return nil, status.Errorf(codes.Unimplemented, "method Balance not implemented")
}

func RegisterAccountingServer(s *grpc.Server, srv AccountingServer) {
	s.RegisterService(&_Accounting_serviceDesc, srv)
}

func _Accounting_Balance_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
	in := new(BalanceRequest)
	if err := dec(in); err != nil {
		return nil, err
	}
	if interceptor == nil {
		return srv.(AccountingServer).Balance(ctx, in)
	}
	info := &grpc.UnaryServerInfo{
		Server:     srv,
		FullMethod: "/accounting.Accounting/Balance",
	}
	handler := func(ctx context.Context, req interface{}) (interface{}, error) {
		return srv.(AccountingServer).Balance(ctx, req.(*BalanceRequest))
	}
	return interceptor(ctx, in, info, handler)
}

var _Accounting_serviceDesc = grpc.ServiceDesc{
	ServiceName: "accounting.Accounting",
	HandlerType: (*AccountingServer)(nil),
	Methods: []grpc.MethodDesc{
		{
			MethodName: "Balance",
			Handler:    _Accounting_Balance_Handler,
		},
	},
	Streams:  []grpc.StreamDesc{},
	Metadata: "accounting/service.proto",
}

func (m *BalanceRequest) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *BalanceRequest) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

func (m *BalanceRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if m.XXX_unrecognized != nil {
		i -= len(m.XXX_unrecognized)
		copy(dAtA[i:], m.XXX_unrecognized)
	}
	if m.TTL != 0 {
		i = encodeVarintService(dAtA, i, uint64(m.TTL))
		i--
		dAtA[i] = 0x10
	}
	{
		size := m.OwnerID.Size()
		i -= size
		if _, err := m.OwnerID.MarshalTo(dAtA[i:]); err != nil {
			return 0, err
		}
		i = encodeVarintService(dAtA, i, uint64(size))
	}
	i--
	dAtA[i] = 0xa
	return len(dAtA) - i, nil
}

func (m *BalanceResponse) Marshal() (dAtA []byte, err error) {
	size := m.Size()
	dAtA = make([]byte, size)
	n, err := m.MarshalToSizedBuffer(dAtA[:size])
	if err != nil {
		return nil, err
	}
	return dAtA[:n], nil
}

func (m *BalanceResponse) MarshalTo(dAtA []byte) (int, error) {
	size := m.Size()
	return m.MarshalToSizedBuffer(dAtA[:size])
}

func (m *BalanceResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) {
	i := len(dAtA)
	_ = i
	var l int
	_ = l
	if m.XXX_unrecognized != nil {
		i -= len(m.XXX_unrecognized)
		copy(dAtA[i:], m.XXX_unrecognized)
	}
	if len(m.LockAccounts) > 0 {
		for iNdEx := len(m.LockAccounts) - 1; iNdEx >= 0; iNdEx-- {
			{
				size, err := m.LockAccounts[iNdEx].MarshalToSizedBuffer(dAtA[:i])
				if err != nil {
					return 0, err
				}
				i -= size
				i = encodeVarintService(dAtA, i, uint64(size))
			}
			i--
			dAtA[i] = 0x12
		}
	}
	if m.Balance != nil {
		{
			size, err := m.Balance.MarshalToSizedBuffer(dAtA[:i])
			if err != nil {
				return 0, err
			}
			i -= size
			i = encodeVarintService(dAtA, i, uint64(size))
		}
		i--
		dAtA[i] = 0xa
	}
	return len(dAtA) - i, nil
}

func encodeVarintService(dAtA []byte, offset int, v uint64) int {
	offset -= sovService(v)
	base := offset
	for v >= 1<<7 {
		dAtA[offset] = uint8(v&0x7f | 0x80)
		v >>= 7
		offset++
	}
	dAtA[offset] = uint8(v)
	return base
}
func (m *BalanceRequest) Size() (n int) {
	if m == nil {
		return 0
	}
	var l int
	_ = l
	l = m.OwnerID.Size()
	n += 1 + l + sovService(uint64(l))
	if m.TTL != 0 {
		n += 1 + sovService(uint64(m.TTL))
	}
	if m.XXX_unrecognized != nil {
		n += len(m.XXX_unrecognized)
	}
	return n
}

func (m *BalanceResponse) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Balance != nil {
|
||||
l = m.Balance.Size()
|
||||
n += 1 + l + sovService(uint64(l))
|
||||
}
|
||||
if len(m.LockAccounts) > 0 {
|
||||
for _, e := range m.LockAccounts {
|
||||
l = e.Size()
|
||||
n += 1 + l + sovService(uint64(l))
|
||||
}
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovService(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozService(x uint64) (n int) {
|
||||
return sovService(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *BalanceRequest) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: BalanceRequest: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: BalanceRequest: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field OwnerID", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.OwnerID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field TTL", wireType)
|
||||
}
|
||||
m.TTL = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.TTL |= uint32(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipService(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func (m *BalanceResponse) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: BalanceResponse: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: BalanceResponse: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Balance", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if m.Balance == nil {
|
||||
m.Balance = &decimal.Decimal{}
|
||||
}
|
||||
if err := m.Balance.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field LockAccounts", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.LockAccounts = append(m.LockAccounts, &Account{})
|
||||
if err := m.LockAccounts[len(m.LockAccounts)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipService(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipService(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthService
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupService
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthService
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthService = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowService = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupService = fmt.Errorf("proto: unexpected end of group")
|
||||
)
|
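For the server side, RegisterAccountingServer and UnimplementedAccountingServer above are all that is needed to stand up a service; a minimal sketch, where the listen address and the returned balance are placeholders rather than values from this repository:

package main

import (
	"context"
	"log"
	"net"

	"github.com/nspcc-dev/neofs-proto/accounting"
	"github.com/nspcc-dev/neofs-proto/decimal"
	"google.golang.org/grpc"
)

// accountingSvc answers every Balance request with a fixed, illustrative value.
type accountingSvc struct {
	accounting.UnimplementedAccountingServer
}

func (accountingSvc) Balance(ctx context.Context, req *accounting.BalanceRequest) (*accounting.BalanceResponse, error) {
	return &accounting.BalanceResponse{Balance: decimal.NewGAS(100)}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":8080") // placeholder address
	if err != nil {
		log.Fatal(err)
	}

	srv := grpc.NewServer()
	accounting.RegisterAccountingServer(srv, accountingSvc{})

	if err := srv.Serve(lis); err != nil {
		log.Fatal(err)
	}
}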
23
accounting/service.proto
Normal file
@@ -0,0 +1,23 @@
syntax = "proto3";
|
||||
package accounting;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/accounting";
|
||||
|
||||
import "decimal/decimal.proto";
|
||||
import "accounting/types.proto";
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
service Accounting {
|
||||
rpc Balance(BalanceRequest) returns (BalanceResponse);
|
||||
}
|
||||
|
||||
message BalanceRequest {
|
||||
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
uint32 TTL = 2;
|
||||
}
|
||||
|
||||
message BalanceResponse {
|
||||
decimal.Decimal Balance = 1;
|
||||
repeated Account LockAccounts = 2;
|
||||
}
|
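On the client side, the Balance RPC defined above is reached through the generated AccountingClient; a minimal sketch, assuming a reachable node at a placeholder address (OwnerID handling is omitted for brevity):

package main

import (
	"context"
	"log"

	"github.com/nspcc-dev/neofs-proto/accounting"
	"google.golang.org/grpc"
)

func main() {
	// Placeholder endpoint; replace with a real node address.
	conn, err := grpc.Dial("127.0.0.1:8080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cli := accounting.NewAccountingClient(conn)

	// OwnerID is left zero for brevity; a real request is built from the owner's
	// public key (the package tests use refs.NewOwnerID for that).
	req := &accounting.BalanceRequest{TTL: 1}

	resp, err := cli.Balance(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("balance:", resp.Balance)
}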
353
accounting/types.go
Normal file
@@ -0,0 +1,353 @@
package accounting
|
||||
|
||||
import (
|
||||
"crypto/ecdsa"
|
||||
"crypto/rand"
|
||||
"encoding/binary"
|
||||
"reflect"
|
||||
|
||||
"github.com/mr-tron/base58"
|
||||
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||
"github.com/nspcc-dev/neofs-proto/chain"
|
||||
"github.com/nspcc-dev/neofs-proto/decimal"
|
||||
"github.com/nspcc-dev/neofs-proto/internal"
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type (
|
||||
// Cheque structure that describes a user request for withdrawal of funds.
|
||||
Cheque struct {
|
||||
ID ChequeID
|
||||
Owner refs.OwnerID
|
||||
Amount *decimal.Decimal
|
||||
Height uint64
|
||||
Signatures []ChequeSignature
|
||||
}
|
||||
|
||||
// BalanceReceiver interface that is used to retrieve user balance by address.
|
||||
BalanceReceiver interface {
|
||||
Balance(accountAddress string) (*Account, error)
|
||||
}
|
||||
|
||||
// ChequeID is identifier of user request for withdrawal of funds.
|
||||
ChequeID string
|
||||
|
||||
// CID type alias.
|
||||
CID = refs.CID
|
||||
|
||||
// SGID type alias.
|
||||
SGID = refs.SGID
|
||||
|
||||
// ChequeSignature contains public key and hash, and is used to verify signatures.
|
||||
ChequeSignature struct {
|
||||
Key *ecdsa.PublicKey
|
||||
Hash []byte
|
||||
}
|
||||
)
|
||||
|
||||
const (
|
||||
// ErrWrongSignature is raised when wrong signature is passed.
|
||||
ErrWrongSignature = internal.Error("wrong signature")
|
||||
|
||||
// ErrWrongPublicKey is raised when wrong public key is passed.
|
||||
ErrWrongPublicKey = internal.Error("wrong public key")
|
||||
|
||||
// ErrWrongChequeData is raised when passed bytes cannot be parsed as a valid Cheque.
|
||||
ErrWrongChequeData = internal.Error("wrong cheque data")
|
||||
|
||||
// ErrInvalidLength is raised when passed bytes cannot be parsed as a valid ChequeID.
|
||||
ErrInvalidLength = internal.Error("invalid length")
|
||||
|
||||
u16size = 2
|
||||
u64size = 8
|
||||
|
||||
signaturesOffset = chain.AddressLength + refs.OwnerIDSize + u64size + u64size
|
||||
)
|
||||
|
||||
// NewChequeID generates valid random ChequeID using crypto/rand.Reader.
|
||||
func NewChequeID() (ChequeID, error) {
|
||||
d := make([]byte, chain.AddressLength)
|
||||
if _, err := rand.Read(d); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
id := base58.Encode(d)
|
||||
|
||||
return ChequeID(id), nil
|
||||
}
|
||||
|
||||
// String returns string representation of ChequeID.
|
||||
func (b ChequeID) String() string { return string(b) }
|
||||
|
||||
// Empty returns true if ChequeID is empty.
|
||||
func (b ChequeID) Empty() bool { return len(b) == 0 }
|
||||
|
||||
// Valid validates ChequeID.
|
||||
func (b ChequeID) Valid() bool {
|
||||
d, err := base58.Decode(string(b))
|
||||
return err == nil && len(d) == chain.AddressLength
|
||||
}
|
||||
|
||||
// Bytes returns bytes representation of ChequeID.
|
||||
func (b ChequeID) Bytes() []byte {
|
||||
d, err := base58.Decode(string(b))
|
||||
if err != nil {
|
||||
return make([]byte, chain.AddressLength)
|
||||
}
|
||||
return d
|
||||
}
|
||||
|
||||
// Equal checks that current ChequeID is equal to passed ChequeID.
|
||||
func (b ChequeID) Equal(b2 ChequeID) bool {
|
||||
return b.Valid() && b2.Valid() && string(b) == string(b2)
|
||||
}
|
||||
|
||||
// Unmarshal tries to parse []byte into valid ChequeID.
|
||||
func (b *ChequeID) Unmarshal(data []byte) error {
|
||||
*b = ChequeID(base58.Encode(data))
|
||||
if !b.Valid() {
|
||||
return ErrInvalidLength
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Size returns the size of ChequeID (chain.AddressLength).
|
||||
func (b ChequeID) Size() int {
|
||||
return chain.AddressLength
|
||||
}
|
||||
|
||||
// MarshalTo tries to marshal ChequeID into the passed byte slice and returns the
// number of copied bytes, or an error if the slice is too short to hold ChequeID.
|
||||
func (b ChequeID) MarshalTo(data []byte) (int, error) {
|
||||
if len(data) < chain.AddressLength {
|
||||
return 0, ErrInvalidLength
|
||||
}
|
||||
return copy(data, b.Bytes()), nil
|
||||
}
|
||||
|
||||
// Equals checks that m and tx are equal Tx values.
|
||||
func (m Tx) Equals(tx Tx) bool {
|
||||
return m.From == tx.From &&
|
||||
m.To == tx.To &&
|
||||
m.Type == tx.Type &&
|
||||
m.Amount == tx.Amount
|
||||
}
|
||||
|
||||
// Verify validates the current Cheque against all Signatures generated for it.
|
||||
func (b Cheque) Verify() error {
|
||||
data := b.marshalBody()
|
||||
for i, sign := range b.Signatures {
|
||||
if err := crypto.VerifyRFC6979(sign.Key, data, sign.Hash); err != nil {
|
||||
return errors.Wrapf(ErrWrongSignature, "item #%d: %s", i, err.Error())
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Sign signs the current Cheque and appends the result to b.Signatures.
|
||||
func (b *Cheque) Sign(key *ecdsa.PrivateKey) error {
|
||||
hash, err := crypto.SignRFC6979(key, b.marshalBody())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
b.Signatures = append(b.Signatures, ChequeSignature{
|
||||
Key: &key.PublicKey,
|
||||
Hash: hash,
|
||||
})
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (b *Cheque) marshalBody() []byte {
|
||||
buf := make([]byte, signaturesOffset)
|
||||
|
||||
var offset int
|
||||
|
||||
offset += copy(buf, b.ID.Bytes())
|
||||
offset += copy(buf[offset:], b.Owner.Bytes())
|
||||
|
||||
binary.BigEndian.PutUint64(buf[offset:], uint64(b.Amount.Value))
|
||||
offset += u64size
|
||||
|
||||
binary.BigEndian.PutUint64(buf[offset:], b.Height)
|
||||
|
||||
return buf
|
||||
}
|
||||
|
||||
func (b *Cheque) unmarshalBody(buf []byte) error {
|
||||
var offset int
|
||||
|
||||
if len(buf) < signaturesOffset {
|
||||
return ErrWrongChequeData
|
||||
}
|
||||
|
||||
{ // unmarshal ChequeID
|
||||
if err := b.ID.Unmarshal(buf[offset : offset+chain.AddressLength]); err != nil {
|
||||
return err
|
||||
}
|
||||
offset += chain.AddressLength
|
||||
}
|
||||
|
||||
{ // unmarshal OwnerID
|
||||
if err := b.Owner.Unmarshal(buf[offset : offset+refs.OwnerIDSize]); err != nil {
|
||||
return err
|
||||
}
|
||||
offset += refs.OwnerIDSize
|
||||
}
|
||||
|
||||
{ // unmarshal amount
|
||||
amount := int64(binary.BigEndian.Uint64(buf[offset:]))
|
||||
b.Amount = decimal.New(amount)
|
||||
offset += u64size
|
||||
}
|
||||
|
||||
{ // unmarshal height
|
||||
b.Height = binary.BigEndian.Uint64(buf[offset:])
|
||||
offset += u64size
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// MarshalBinary is used to marshal Cheque into bytes.
|
||||
func (b Cheque) MarshalBinary() ([]byte, error) {
|
||||
var (
|
||||
count = len(b.Signatures)
|
||||
buf = make([]byte, b.Size())
|
||||
offset = copy(buf, b.marshalBody())
|
||||
)
|
||||
|
||||
binary.BigEndian.PutUint16(buf[offset:], uint16(count))
|
||||
offset += u16size
|
||||
|
||||
for _, sign := range b.Signatures {
|
||||
key := crypto.MarshalPublicKey(sign.Key)
|
||||
offset += copy(buf[offset:], key)
|
||||
offset += copy(buf[offset:], sign.Hash)
|
||||
}
|
||||
|
||||
return buf, nil
|
||||
}
|
||||
|
||||
// Size returns the size of Cheque (the number of bytes needed to store it).
|
||||
func (b Cheque) Size() int {
|
||||
return signaturesOffset + u16size +
|
||||
len(b.Signatures)*(crypto.PublicKeyCompressedSize+crypto.RFC6979SignatureSize)
|
||||
}
|
||||
|
||||
// UnmarshalBinary tries to parse []byte into valid Cheque.
|
||||
func (b *Cheque) UnmarshalBinary(buf []byte) error {
|
||||
if err := b.unmarshalBody(buf); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
body := buf[:signaturesOffset]
|
||||
|
||||
count := int64(binary.BigEndian.Uint16(buf[signaturesOffset:]))
|
||||
offset := signaturesOffset + u16size
|
||||
|
||||
if ln := count * int64(crypto.PublicKeyCompressedSize+crypto.RFC6979SignatureSize); ln > int64(len(buf[offset:])) {
|
||||
return ErrWrongChequeData
|
||||
}
|
||||
|
||||
for i := int64(0); i < count; i++ {
|
||||
sign := ChequeSignature{
|
||||
Key: crypto.UnmarshalPublicKey(buf[offset : offset+crypto.PublicKeyCompressedSize]),
|
||||
Hash: make([]byte, crypto.RFC6979SignatureSize),
|
||||
}
|
||||
|
||||
offset += crypto.PublicKeyCompressedSize
|
||||
if sign.Key == nil {
|
||||
return errors.Wrapf(ErrWrongPublicKey, "item #%d", i)
|
||||
}
|
||||
|
||||
offset += copy(sign.Hash, buf[offset:offset+crypto.RFC6979SignatureSize])
|
||||
if err := crypto.VerifyRFC6979(sign.Key, body, sign.Hash); err != nil {
|
||||
return errors.Wrapf(ErrWrongSignature, "item #%d: %s (offset=%d, len=%d)", i, err.Error(), offset, len(sign.Hash))
|
||||
}
|
||||
|
||||
b.Signatures = append(b.Signatures, sign)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ErrNotEnoughFunds generates an error from the given address and amounts.
|
||||
func ErrNotEnoughFunds(addr string, needed, residue *decimal.Decimal) error {
|
||||
return errors.Errorf("not enough funds (requested=%s, residue=%s, addr=%s", needed, residue, addr)
|
||||
}
|
||||
|
||||
func (m *Account) hasLockAcc(addr string) bool {
|
||||
for i := range m.LockAccounts {
|
||||
if m.LockAccounts[i].Address == addr {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// ValidateLock checks that account can be locked.
|
||||
func (m *Account) ValidateLock() error {
|
||||
switch {
|
||||
case m.Address == "":
|
||||
return ErrEmptyAddress
|
||||
case m.ParentAddress == "":
|
||||
return ErrEmptyParentAddress
|
||||
case m.LockTarget == nil:
|
||||
return ErrEmptyLockTarget
|
||||
}
|
||||
|
||||
switch v := m.LockTarget.Target.(type) {
|
||||
case *LockTarget_WithdrawTarget:
|
||||
if v.WithdrawTarget.Cheque != m.Address {
|
||||
return errors.Errorf("wrong cheque ID: expected %s, has %s", m.Address, v.WithdrawTarget.Cheque)
|
||||
}
|
||||
case *LockTarget_ContainerCreateTarget:
|
||||
switch {
|
||||
case v.ContainerCreateTarget.CID.Empty():
|
||||
return ErrEmptyContainerID
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// CanLock checks whether funds can be locked for the given lock account.
|
||||
func (m *Account) CanLock(lockAcc *Account) error {
|
||||
switch {
|
||||
case m.ActiveFunds.LT(lockAcc.ActiveFunds):
|
||||
return ErrNotEnoughFunds(lockAcc.ParentAddress, lockAcc.ActiveFunds, m.ActiveFunds)
|
||||
case m.hasLockAcc(lockAcc.Address):
|
||||
return errors.Errorf("could not lock account(%s) funds: duplicating lock(%s)", m.Address, lockAcc.Address)
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// LockForWithdraw checks that the account holds funds locked for the passed ChequeID.
|
||||
func (m *Account) LockForWithdraw(chequeID string) bool {
|
||||
switch v := m.LockTarget.Target.(type) {
|
||||
case *LockTarget_WithdrawTarget:
|
||||
return v.WithdrawTarget.Cheque == chequeID
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// LockForContainerCreate checks that account contains locked funds for container creation.
|
||||
func (m *Account) LockForContainerCreate(cid refs.CID) bool {
|
||||
switch v := m.LockTarget.Target.(type) {
|
||||
case *LockTarget_ContainerCreateTarget:
|
||||
return v.ContainerCreateTarget.CID.Equal(cid)
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Equal checks that current Settlement is equal to passed Settlement.
|
||||
func (m *Settlement) Equal(s *Settlement) bool {
|
||||
if s == nil || m.Epoch != s.Epoch || len(m.Transactions) != len(s.Transactions) {
|
||||
return false
|
||||
}
|
||||
return len(m.Transactions) == 0 || reflect.DeepEqual(m.Transactions, s.Transactions)
|
||||
}
|
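The Cheque helpers above (NewChequeID, Sign, MarshalBinary/UnmarshalBinary, Verify) compose into a simple round trip; a minimal sketch that reuses the deterministic test key helper already imported by this package's tests (the amount and height are illustrative):

package main

import (
	"log"

	"github.com/nspcc-dev/neofs-crypto/test"
	"github.com/nspcc-dev/neofs-proto/accounting"
	"github.com/nspcc-dev/neofs-proto/decimal"
	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	key := test.DecodeKey(0) // deterministic test key, as in the package tests

	id, err := accounting.NewChequeID()
	if err != nil {
		log.Fatal(err)
	}

	owner, err := refs.NewOwnerID(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}

	cheque := &accounting.Cheque{
		ID:     id,
		Owner:  owner,
		Amount: decimal.NewGAS(42), // illustrative amount
		Height: 1,
	}

	// Sign appends a ChequeSignature over the marshaled body.
	if err := cheque.Sign(key); err != nil {
		log.Fatal(err)
	}

	data, err := cheque.MarshalBinary()
	if err != nil {
		log.Fatal(err)
	}

	var restored accounting.Cheque
	if err := restored.UnmarshalBinary(data); err != nil {
		log.Fatal(err)
	}
	if err := restored.Verify(); err != nil {
		log.Fatal(err)
	}

	log.Println("signatures:", len(restored.Signatures))
}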
3905
accounting/types.pb.go
Normal file
File diff suppressed because it is too large
106
accounting/types.proto
Normal file
@@ -0,0 +1,106 @@
syntax = "proto3";
|
||||
package accounting;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/accounting";
|
||||
|
||||
import "decimal/decimal.proto";
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
// Snapshot accounting messages
|
||||
message Account {
|
||||
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
string Address = 2;
|
||||
string ParentAddress = 3;
|
||||
decimal.Decimal ActiveFunds = 4;
|
||||
Lifetime Lifetime = 5 [(gogoproto.nullable) = false];
|
||||
LockTarget LockTarget = 6;
|
||||
repeated Account LockAccounts = 7;
|
||||
}
|
||||
|
||||
message LockTarget {
|
||||
oneof Target {
|
||||
WithdrawTarget WithdrawTarget = 1;
|
||||
ContainerCreateTarget ContainerCreateTarget = 2;
|
||||
}
|
||||
}
|
||||
|
||||
// Snapshot balance messages
|
||||
message Balances {
|
||||
repeated Account Accounts = 1 [(gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
// PayIn / PayOut messages
|
||||
message PayIO {
|
||||
uint64 BlockID = 1;
|
||||
repeated Tx Transactions = 2 [(gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
// Clearing messages
|
||||
message Clearing {
|
||||
repeated Tx Transactions = 1 [(gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
// Withdrawal messages
|
||||
message Withdraw {
|
||||
string ID = 1;
|
||||
uint64 Epoch = 2;
|
||||
Tx Transaction = 3;
|
||||
}
|
||||
|
||||
// Lifetime of locks
|
||||
message Lifetime {
|
||||
enum Unit {
|
||||
Unlimited = 0;
|
||||
NeoFSEpoch = 1;
|
||||
NeoBlock = 2;
|
||||
}
|
||||
|
||||
Unit unit = 1 [(gogoproto.customname) = "Unit"];
|
||||
int64 Value = 2;
|
||||
}
|
||||
|
||||
// Transaction messages
|
||||
message Tx {
|
||||
enum Type {
|
||||
Unknown = 0;
|
||||
Withdraw = 1;
|
||||
PayIO = 2;
|
||||
Inner = 3;
|
||||
}
|
||||
|
||||
Type type = 1 [(gogoproto.customname) = "Type"];
|
||||
string From = 2;
|
||||
string To = 3;
|
||||
decimal.Decimal Amount = 4;
|
||||
bytes PublicKeys = 5; // of sender
|
||||
}
|
||||
|
||||
message Settlement {
|
||||
message Receiver {
|
||||
string To = 1;
|
||||
decimal.Decimal Amount = 2;
|
||||
}
|
||||
|
||||
message Container {
|
||||
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||
repeated bytes SGIDs = 2 [(gogoproto.customtype) = "SGID", (gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
message Tx {
|
||||
string From = 1;
|
||||
Container Container = 2 [(gogoproto.nullable) = false];
|
||||
repeated Receiver Receivers = 3 [(gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
uint64 Epoch = 1;
|
||||
repeated Tx Transactions = 2;
|
||||
}
|
||||
|
||||
message ContainerCreateTarget {
|
||||
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
message WithdrawTarget {
|
||||
string Cheque = 1;
|
||||
}
|
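In the generated Go code the LockTarget oneof above becomes wrapper types such as LockTarget_WithdrawTarget, which is what ValidateLock and LockForWithdraw switch on; a small sketch with placeholder addresses and cheque ID:

package main

import (
	"log"

	"github.com/nspcc-dev/neofs-proto/accounting"
	"github.com/nspcc-dev/neofs-proto/decimal"
)

func main() {
	// Placeholder addresses and cheque ID, for illustration only.
	lock := &accounting.Account{
		Address:       "lockAddress",
		ParentAddress: "parentAddress",
		ActiveFunds:   decimal.NewGAS(10),
		LockTarget: &accounting.LockTarget{
			Target: &accounting.LockTarget_WithdrawTarget{
				WithdrawTarget: &accounting.WithdrawTarget{Cheque: "chequeID"},
			},
		},
	}

	// ValidateLock rejects this account: for withdrawal locks the cheque ID
	// must match the lock account address.
	if err := lock.ValidateLock(); err != nil {
		log.Println("invalid lock:", err)
	}

	log.Println("locked for withdraw:", lock.LockForWithdraw("chequeID")) // true
}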
84
accounting/types_test.go
Normal file
@@ -0,0 +1,84 @@
package accounting
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"testing"
|
||||
|
||||
"github.com/mr-tron/base58"
|
||||
"github.com/nspcc-dev/neofs-crypto/test"
|
||||
"github.com/nspcc-dev/neofs-proto/chain"
|
||||
"github.com/nspcc-dev/neofs-proto/decimal"
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestCheque(t *testing.T) {
|
||||
t.Run("new/valid", func(t *testing.T) {
|
||||
id, err := NewChequeID()
|
||||
require.NoError(t, err)
|
||||
require.True(t, id.Valid())
|
||||
|
||||
d := make([]byte, chain.AddressLength+1)
|
||||
|
||||
// expected size + 1 byte
|
||||
str := base58.Encode(d)
|
||||
require.False(t, ChequeID(str).Valid())
|
||||
|
||||
// expected size - 1 byte
|
||||
str = base58.Encode(d[:len(d)-2])
|
||||
require.False(t, ChequeID(str).Valid())
|
||||
|
||||
// wrong encoding
|
||||
d = d[:len(d)-1] // normal size
|
||||
require.False(t, ChequeID(string(d)).Valid())
|
||||
})
|
||||
|
||||
t.Run("marshal/unmarshal", func(t *testing.T) {
|
||||
var b2 = new(Cheque)
|
||||
|
||||
key1 := test.DecodeKey(0)
|
||||
key2 := test.DecodeKey(1)
|
||||
|
||||
id, err := NewChequeID()
|
||||
require.NoError(t, err)
|
||||
|
||||
owner, err := refs.NewOwnerID(&key1.PublicKey)
|
||||
require.NoError(t, err)
|
||||
|
||||
b1 := &Cheque{
|
||||
ID: id,
|
||||
Owner: owner,
|
||||
Height: 100,
|
||||
Amount: decimal.NewGAS(100),
|
||||
}
|
||||
|
||||
require.NoError(t, b1.Sign(key1))
|
||||
require.NoError(t, b1.Sign(key2))
|
||||
|
||||
data, err := b1.MarshalBinary()
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Len(t, data, b1.Size())
|
||||
require.NoError(t, b2.UnmarshalBinary(data))
|
||||
require.Equal(t, b1, b2)
|
||||
|
||||
require.NoError(t, b1.Verify())
|
||||
require.NoError(t, b2.Verify())
|
||||
})
|
||||
|
||||
t.Run("example from SC", func(t *testing.T) {
|
||||
var pathToCheque = "fixtures/cheque_data"
|
||||
expect, err := ioutil.ReadFile(pathToCheque)
|
||||
require.NoError(t, err)
|
||||
|
||||
var cheque Cheque
|
||||
require.NoError(t, cheque.UnmarshalBinary(expect))
|
||||
|
||||
actual, err := cheque.MarshalBinary()
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, expect, actual)
|
||||
|
||||
require.NoError(t, cheque.Verify())
|
||||
})
|
||||
}
|
53
accounting/withdraw.go
Normal file
@@ -0,0 +1,53 @@
package accounting
|
||||
|
||||
import (
|
||||
"encoding/binary"
|
||||
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
)
|
||||
|
||||
type (
|
||||
// MessageID type alias.
|
||||
MessageID = refs.MessageID
|
||||
)
|
||||
|
||||
// SetTTL sets ttl to GetRequest to satisfy TTLRequest interface.
|
||||
func (m *GetRequest) SetTTL(v uint32) { m.TTL = v }
|
||||
|
||||
// SetTTL sets ttl to PutRequest to satisfy TTLRequest interface.
|
||||
func (m *PutRequest) SetTTL(v uint32) { m.TTL = v }
|
||||
|
||||
// SetTTL sets ttl to ListRequest to satisfy TTLRequest interface.
|
||||
func (m *ListRequest) SetTTL(v uint32) { m.TTL = v }
|
||||
|
||||
// SetTTL sets ttl to DeleteRequest to satisfy TTLRequest interface.
|
||||
func (m *DeleteRequest) SetTTL(v uint32) { m.TTL = v }
|
||||
|
||||
// SetSignature sets signature to PutRequest to satisfy SignedRequest interface.
|
||||
func (m *PutRequest) SetSignature(v []byte) { m.Signature = v }
|
||||
|
||||
// SetSignature sets signature to DeleteRequest to satisfy SignedRequest interface.
|
||||
func (m *DeleteRequest) SetSignature(v []byte) { m.Signature = v }
|
||||
|
||||
// PrepareData prepares bytes representation of PutRequest to satisfy SignedRequest interface.
|
||||
func (m *PutRequest) PrepareData() ([]byte, error) {
|
||||
var offset int
|
||||
// MessageID-len + OwnerID-len + Amount + Height
|
||||
buf := make([]byte, refs.UUIDSize+refs.OwnerIDSize+binary.MaxVarintLen64+binary.MaxVarintLen64)
|
||||
offset += copy(buf[offset:], m.MessageID.Bytes())
|
||||
offset += copy(buf[offset:], m.OwnerID.Bytes())
|
||||
offset += binary.PutVarint(buf[offset:], m.Amount.Value)
|
||||
binary.PutUvarint(buf[offset:], m.Height)
|
||||
return buf, nil
|
||||
}
|
||||
|
||||
// PrepareData prepares bytes representation of DeleteRequest to satisfy SignedRequest interface.
|
||||
func (m *DeleteRequest) PrepareData() ([]byte, error) {
|
||||
var offset int
|
||||
// ID-len + OwnerID-len + MessageID-len
|
||||
buf := make([]byte, refs.UUIDSize+refs.OwnerIDSize+refs.UUIDSize)
|
||||
offset += copy(buf[offset:], m.ID.Bytes())
|
||||
offset += copy(buf[offset:], m.OwnerID.Bytes())
|
||||
copy(buf[offset:], m.MessageID.Bytes())
|
||||
return buf, nil
|
||||
}
|
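PrepareData and SetSignature above implement the SignedRequest flow: serialize the stable fields, sign them, attach the signature. A sketch of that flow for PutRequest; using SignRFC6979 here is an assumption borrowed from the cheque code, not necessarily the scheme a receiving node verifies:

package example

import (
	"crypto/ecdsa"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/accounting"
)

// SignPutRequest prepares the stable byte representation of a PutRequest and
// attaches a signature over it. RFC 6979 signing is an assumption; adjust to
// whatever scheme the receiving node expects. The caller must have filled the
// request fields (in particular Amount) before calling this helper.
func SignPutRequest(key *ecdsa.PrivateKey, req *accounting.PutRequest) error {
	data, err := req.PrepareData()
	if err != nil {
		return err
	}

	sig, err := crypto.SignRFC6979(key, data)
	if err != nil {
		return err
	}

	req.SetSignature(sig)
	return nil
}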
2641
accounting/withdraw.pb.go
Normal file
File diff suppressed because it is too large
61
accounting/withdraw.proto
Normal file
@@ -0,0 +1,61 @@
syntax = "proto3";
|
||||
package accounting;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/accounting";
|
||||
|
||||
import "decimal/decimal.proto";
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
service Withdraw {
|
||||
rpc Get(GetRequest) returns (GetResponse);
|
||||
rpc Put(PutRequest) returns (PutResponse);
|
||||
rpc List(ListRequest) returns (ListResponse);
|
||||
rpc Delete(DeleteRequest) returns (DeleteResponse);
|
||||
}
|
||||
|
||||
message Item {
|
||||
bytes ID = 1 [(gogoproto.customtype) = "ChequeID", (gogoproto.nullable) = false];
|
||||
bytes OwnerID = 2 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
decimal.Decimal Amount = 3;
|
||||
uint64 Height = 4;
|
||||
bytes Payload = 5;
|
||||
}
|
||||
|
||||
message GetRequest {
|
||||
bytes ID = 1 [(gogoproto.customtype) = "ChequeID", (gogoproto.nullable) = false];
|
||||
bytes OwnerID = 2 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
uint32 TTL = 3;
|
||||
}
|
||||
message GetResponse {
|
||||
Item Withdraw = 1;
|
||||
}
|
||||
|
||||
message PutRequest {
|
||||
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
decimal.Decimal Amount = 2;
|
||||
uint64 Height = 3;
|
||||
bytes MessageID = 4 [(gogoproto.customtype) = "MessageID", (gogoproto.nullable) = false];
|
||||
bytes Signature = 5;
|
||||
uint32 TTL = 6;
|
||||
}
|
||||
message PutResponse {
|
||||
bytes ID = 1 [(gogoproto.customtype) = "ChequeID", (gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
message ListRequest {
|
||||
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
uint32 TTL = 2;
|
||||
}
|
||||
message ListResponse {
|
||||
repeated Item Items = 1;
|
||||
}
|
||||
|
||||
message DeleteRequest {
|
||||
bytes ID = 1 [(gogoproto.customtype) = "ChequeID", (gogoproto.nullable) = false];
|
||||
bytes OwnerID = 2 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
bytes MessageID = 3 [(gogoproto.customtype) = "MessageID", (gogoproto.nullable) = false];
|
||||
bytes Signature = 4;
|
||||
uint32 TTL = 5;
|
||||
}
|
||||
message DeleteResponse {}
|
11
bootstrap/service.go
Normal file
@@ -0,0 +1,11 @@
package bootstrap
|
||||
|
||||
import (
|
||||
"github.com/nspcc-dev/neofs-proto/service"
|
||||
)
|
||||
|
||||
// NodeType type alias.
|
||||
type NodeType = service.NodeRole
|
||||
|
||||
// SetTTL sets ttl to Request to satisfy TTLRequest interface.
|
||||
func (m *Request) SetTTL(v uint32) { m.TTL = v }
|
483
bootstrap/service.pb.go
Normal file
@@ -0,0 +1,483 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: bootstrap/service.proto
|
||||
|
||||
package bootstrap
|
||||
|
||||
import (
|
||||
context "context"
|
||||
fmt "fmt"
|
||||
_ "github.com/gogo/protobuf/gogoproto"
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
grpc "google.golang.org/grpc"
|
||||
codes "google.golang.org/grpc/codes"
|
||||
status "google.golang.org/grpc/status"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
// Request message to communicate between DHT nodes
|
||||
type Request struct {
|
||||
Type NodeType `protobuf:"varint,1,opt,name=type,proto3,customtype=NodeType" json:"type"`
|
||||
Info NodeInfo `protobuf:"bytes,2,opt,name=info,proto3" json:"info"`
|
||||
TTL uint32 `protobuf:"varint,3,opt,name=TTL,proto3" json:"TTL,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *Request) Reset() { *m = Request{} }
|
||||
func (m *Request) String() string { return proto.CompactTextString(m) }
|
||||
func (*Request) ProtoMessage() {}
|
||||
func (*Request) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_21bce759c9d8eb63, []int{0}
|
||||
}
|
||||
func (m *Request) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *Request) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *Request) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_Request.Merge(m, src)
|
||||
}
|
||||
func (m *Request) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *Request) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_Request.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_Request proto.InternalMessageInfo
|
||||
|
||||
func (m *Request) GetInfo() NodeInfo {
|
||||
if m != nil {
|
||||
return m.Info
|
||||
}
|
||||
return NodeInfo{}
|
||||
}
|
||||
|
||||
func (m *Request) GetTTL() uint32 {
|
||||
if m != nil {
|
||||
return m.TTL
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*Request)(nil), "bootstrap.Request")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("bootstrap/service.proto", fileDescriptor_21bce759c9d8eb63) }
|
||||
|
||||
var fileDescriptor_21bce759c9d8eb63 = []byte{
|
||||
// 284 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4f, 0xca, 0xcf, 0x2f,
|
||||
0x29, 0x2e, 0x29, 0x4a, 0x2c, 0xd0, 0x2f, 0x4e, 0x2d, 0x2a, 0xcb, 0x4c, 0x4e, 0xd5, 0x2b, 0x28,
|
||||
0xca, 0x2f, 0xc9, 0x17, 0xe2, 0x84, 0x4b, 0x48, 0x89, 0x22, 0xd4, 0x94, 0x54, 0x16, 0xa4, 0x16,
|
||||
0x43, 0x54, 0x48, 0xe9, 0xa6, 0x67, 0x96, 0x64, 0x94, 0x26, 0xe9, 0x25, 0xe7, 0xe7, 0xea, 0xa7,
|
||||
0xe7, 0xa7, 0xe7, 0xeb, 0x83, 0x85, 0x93, 0x4a, 0xd3, 0xc0, 0x3c, 0x30, 0x07, 0xcc, 0x82, 0x28,
|
||||
0x57, 0xaa, 0xe0, 0x62, 0x0f, 0x4a, 0x2d, 0x2c, 0x4d, 0x2d, 0x2e, 0x11, 0xd2, 0xe1, 0x62, 0x01,
|
||||
0x19, 0x24, 0xc1, 0xa8, 0xc0, 0xa8, 0xc1, 0xea, 0x24, 0x71, 0xe2, 0x9e, 0x3c, 0xc3, 0xad, 0x7b,
|
||||
0xf2, 0x1c, 0x7e, 0xf9, 0x29, 0xa9, 0x21, 0x95, 0x05, 0xa9, 0x8f, 0xee, 0xc9, 0xb3, 0x80, 0xe8,
|
||||
0x20, 0xb0, 0x2a, 0x21, 0x5d, 0x2e, 0x96, 0xcc, 0xbc, 0xb4, 0x7c, 0x09, 0x26, 0x05, 0x46, 0x0d,
|
||||
0x6e, 0x23, 0x61, 0x3d, 0xb8, 0x6b, 0xf4, 0x40, 0x1a, 0x3c, 0xf3, 0xd2, 0xf2, 0x9d, 0x58, 0x40,
|
||||
0x46, 0x04, 0x81, 0x95, 0x09, 0x09, 0x70, 0x31, 0x87, 0x84, 0xf8, 0x48, 0x30, 0x2b, 0x30, 0x6a,
|
||||
0xf0, 0x06, 0x81, 0x98, 0x46, 0x0e, 0x5c, 0x9c, 0x4e, 0x30, 0x3d, 0x42, 0xc6, 0x5c, 0xec, 0x01,
|
||||
0x45, 0xf9, 0xc9, 0xa9, 0xc5, 0xc5, 0x42, 0x42, 0x48, 0x46, 0x41, 0x9d, 0x26, 0x25, 0x82, 0x24,
|
||||
0x16, 0x5c, 0x50, 0x94, 0x9a, 0x98, 0xe2, 0x9b, 0x58, 0xe0, 0xe4, 0x70, 0xe2, 0x91, 0x1c, 0xe3,
|
||||
0x85, 0x47, 0x72, 0x8c, 0x37, 0x1e, 0xc9, 0x31, 0x3e, 0x78, 0x24, 0xc7, 0x38, 0xe3, 0xb1, 0x1c,
|
||||
0x43, 0x94, 0x16, 0x52, 0x00, 0xe4, 0x15, 0x17, 0x24, 0x27, 0xeb, 0xa6, 0xa4, 0x96, 0xe9, 0xe7,
|
||||
0xa5, 0xe6, 0xa7, 0x15, 0xeb, 0x42, 0xbc, 0x0f, 0x37, 0x2b, 0x89, 0x0d, 0x2c, 0x60, 0x0c, 0x08,
|
||||
0x00, 0x00, 0xff, 0xff, 0xdf, 0x93, 0xe2, 0x48, 0x70, 0x01, 0x00, 0x00,
|
||||
}
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ context.Context
|
||||
var _ grpc.ClientConn
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the grpc package it is being compiled against.
|
||||
const _ = grpc.SupportPackageIsVersion4
|
||||
|
||||
// BootstrapClient is the client API for Bootstrap service.
|
||||
//
|
||||
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
|
||||
type BootstrapClient interface {
|
||||
Process(ctx context.Context, in *Request, opts ...grpc.CallOption) (*SpreadMap, error)
|
||||
}
|
||||
|
||||
type bootstrapClient struct {
|
||||
cc *grpc.ClientConn
|
||||
}
|
||||
|
||||
func NewBootstrapClient(cc *grpc.ClientConn) BootstrapClient {
|
||||
return &bootstrapClient{cc}
|
||||
}
|
||||
|
||||
func (c *bootstrapClient) Process(ctx context.Context, in *Request, opts ...grpc.CallOption) (*SpreadMap, error) {
|
||||
out := new(SpreadMap)
|
||||
err := c.cc.Invoke(ctx, "/bootstrap.Bootstrap/Process", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// BootstrapServer is the server API for Bootstrap service.
|
||||
type BootstrapServer interface {
|
||||
Process(context.Context, *Request) (*SpreadMap, error)
|
||||
}
|
||||
|
||||
// UnimplementedBootstrapServer can be embedded to have forward compatible implementations.
|
||||
type UnimplementedBootstrapServer struct {
|
||||
}
|
||||
|
||||
func (*UnimplementedBootstrapServer) Process(ctx context.Context, req *Request) (*SpreadMap, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method Process not implemented")
|
||||
}
|
||||
|
||||
func RegisterBootstrapServer(s *grpc.Server, srv BootstrapServer) {
|
||||
s.RegisterService(&_Bootstrap_serviceDesc, srv)
|
||||
}
|
||||
|
||||
func _Bootstrap_Process_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(Request)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(BootstrapServer).Process(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/bootstrap.Bootstrap/Process",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(BootstrapServer).Process(ctx, req.(*Request))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
var _Bootstrap_serviceDesc = grpc.ServiceDesc{
|
||||
ServiceName: "bootstrap.Bootstrap",
|
||||
HandlerType: (*BootstrapServer)(nil),
|
||||
Methods: []grpc.MethodDesc{
|
||||
{
|
||||
MethodName: "Process",
|
||||
Handler: _Bootstrap_Process_Handler,
|
||||
},
|
||||
},
|
||||
Streams: []grpc.StreamDesc{},
|
||||
Metadata: "bootstrap/service.proto",
|
||||
}
|
||||
|
||||
func (m *Request) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *Request) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *Request) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if m.TTL != 0 {
|
||||
i = encodeVarintService(dAtA, i, uint64(m.TTL))
|
||||
i--
|
||||
dAtA[i] = 0x18
|
||||
}
|
||||
{
|
||||
size, err := m.Info.MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintService(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
if m.Type != 0 {
|
||||
i = encodeVarintService(dAtA, i, uint64(m.Type))
|
||||
i--
|
||||
dAtA[i] = 0x8
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func encodeVarintService(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovService(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *Request) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Type != 0 {
|
||||
n += 1 + sovService(uint64(m.Type))
|
||||
}
|
||||
l = m.Info.Size()
|
||||
n += 1 + l + sovService(uint64(l))
|
||||
if m.TTL != 0 {
|
||||
n += 1 + sovService(uint64(m.TTL))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovService(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozService(x uint64) (n int) {
|
||||
return sovService(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *Request) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: Request: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: Request: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
|
||||
}
|
||||
m.Type = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Type |= NodeType(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Info", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.Info.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 3:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field TTL", wireType)
|
||||
}
|
||||
m.TTL = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.TTL |= uint32(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipService(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipService(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthService
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupService
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthService
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthService = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowService = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupService = fmt.Errorf("proto: unexpected end of group")
|
||||
)
|
20
bootstrap/service.proto
Normal file
@@ -0,0 +1,20 @@
syntax = "proto3";
|
||||
package bootstrap;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/bootstrap";
|
||||
|
||||
import "bootstrap/types.proto";
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
// The Bootstrap service definition.
|
||||
service Bootstrap {
|
||||
rpc Process(Request) returns (bootstrap.SpreadMap);
|
||||
}
|
||||
|
||||
// Request message to communicate between DHT nodes
|
||||
message Request {
|
||||
int32 type = 1 [(gogoproto.customname) = "Type" , (gogoproto.nullable) = false, (gogoproto.customtype) = "NodeType"];
|
||||
bootstrap.NodeInfo info = 2 [(gogoproto.nullable) = false];
|
||||
uint32 TTL = 3;
|
||||
}
|
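A node announces itself by sending a Request to the Process RPC above; a minimal client sketch with a placeholder endpoint and node info (the Type field is left at its zero value and the TTL is illustrative):

package main

import (
	"context"
	"log"

	"github.com/nspcc-dev/neofs-proto/bootstrap"
	"google.golang.org/grpc"
)

func main() {
	// Placeholder endpoint; replace with a real node address.
	conn, err := grpc.Dial("127.0.0.1:8080", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	cli := bootstrap.NewBootstrapClient(conn)

	req := &bootstrap.Request{
		Info: bootstrap.NodeInfo{
			Address: "example-node:8080", // placeholder node address
			Options: []string{"/Capacity:1", "/Price:1.0"},
		},
		TTL: 1,
	}

	m, err := cli.Process(context.Background(), req)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("epoch:", m.Epoch, "netmap size:", len(m.NetMap))
}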
100
bootstrap/types.go
Normal file
@@ -0,0 +1,100 @@
package bootstrap
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/hex"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/golang/protobuf/proto"
|
||||
"github.com/nspcc-dev/neofs-proto/object"
|
||||
)
|
||||
|
||||
type (
|
||||
// NodeStatus is a bitwise status field of the node.
|
||||
NodeStatus uint64
|
||||
)
|
||||
|
||||
const (
|
||||
storageFullMask = 0x1
|
||||
|
||||
optionCapacity = "/Capacity:"
|
||||
optionPrice = "/Price:"
|
||||
)
|
||||
|
||||
var (
|
||||
_ proto.Message = (*NodeInfo)(nil)
|
||||
_ proto.Message = (*SpreadMap)(nil)
|
||||
)
|
||||
|
||||
// Equals checks whether two NodeInfo values have the same address and public key.
|
||||
func (m NodeInfo) Equals(n1 NodeInfo) bool {
|
||||
return m.Address == n1.Address && bytes.Equal(m.PubKey, n1.PubKey)
|
||||
}
|
||||
|
||||
// Full reports whether the node has run out of space for storing users' objects.
|
||||
func (n NodeStatus) Full() bool {
|
||||
return n&storageFullMask > 0
|
||||
}
|
||||
|
||||
// SetFull changes the node state to indicate whether it has enough space for storing users' objects.
// If value is true, there is not enough space.
|
||||
func (n *NodeStatus) SetFull(value bool) {
|
||||
switch value {
|
||||
case true:
|
||||
*n |= NodeStatus(storageFullMask)
|
||||
case false:
|
||||
*n &= NodeStatus(^uint64(storageFullMask))
|
||||
}
|
||||
}
|
||||
|
||||
// Price returns price in 1e-8*GAS/Megabyte per month.
|
||||
// Users set the price in GAS/Terabyte per month.
|
||||
func (m NodeInfo) Price() uint64 {
|
||||
for i := range m.Options {
|
||||
if strings.HasPrefix(m.Options[i], optionPrice) {
|
||||
n, err := strconv.ParseFloat(m.Options[i][len(optionPrice):], 64)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
return uint64(n*1e8) / uint64(object.UnitsMB) // UnitsMB == megabytes in 1 terabyte
|
||||
}
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
// Capacity returns node's capacity as reported by user.
|
||||
func (m NodeInfo) Capacity() uint64 {
|
||||
for i := range m.Options {
|
||||
if strings.HasPrefix(m.Options[i], optionCapacity) {
|
||||
n, err := strconv.ParseUint(m.Options[i][len(optionCapacity):], 10, 64)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
return n
|
||||
}
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
// String returns string representation of NodeInfo.
|
||||
func (m NodeInfo) String() string {
|
||||
return "(NodeInfo)<" +
|
||||
"Address:" + m.Address +
|
||||
", " +
|
||||
"PublicKey:" + hex.EncodeToString(m.PubKey) +
|
||||
", " +
|
||||
"Options: [" + strings.Join(m.Options, ",") + "]>"
|
||||
}
|
||||
|
||||
// String returns string representation of SpreadMap.
|
||||
func (m SpreadMap) String() string {
|
||||
result := make([]string, 0, len(m.NetMap))
|
||||
for i := range m.NetMap {
|
||||
result = append(result, m.NetMap[i].String())
|
||||
}
|
||||
return "(SpreadMap)<" +
|
||||
"Epoch: " + strconv.FormatUint(m.Epoch, 10) +
|
||||
", " +
|
||||
"Netmap: [" + strings.Join(result, ",") + "]>"
|
||||
}
|
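The option strings parsed by Price and Capacity above, and the NodeStatus bit helpers, can be exercised directly; a short sketch with illustrative option values:

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/bootstrap"
)

func main() {
	// Option values are illustrative; Capacity is reported as an integer and
	// Price as GAS/Terabyte per month (see Price and Capacity above).
	n := bootstrap.NodeInfo{
		Address: "example-node:8080",
		Options: []string{"/Capacity:100", "/Price:0.5"},
	}

	fmt.Println("capacity:", n.Capacity())
	fmt.Println("price (1e-8 GAS/MB per month):", n.Price())

	var st bootstrap.NodeStatus
	st.SetFull(true)
	fmt.Println("full:", st.Full()) // true
	st.SetFull(false)
	fmt.Println("full:", st.Full()) // false
}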
697
bootstrap/types.pb.go
Normal file
@@ -0,0 +1,697 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: bootstrap/types.proto
|
||||
|
||||
package bootstrap
|
||||
|
||||
import (
|
||||
fmt "fmt"
|
||||
_ "github.com/gogo/protobuf/gogoproto"
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
type SpreadMap struct {
|
||||
Epoch uint64 `protobuf:"varint,1,opt,name=Epoch,proto3" json:"Epoch,omitempty"`
|
||||
NetMap []NodeInfo `protobuf:"bytes,2,rep,name=NetMap,proto3" json:"NetMap"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *SpreadMap) Reset() { *m = SpreadMap{} }
|
||||
func (*SpreadMap) ProtoMessage() {}
|
||||
func (*SpreadMap) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_423083266369adee, []int{0}
|
||||
}
|
||||
func (m *SpreadMap) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *SpreadMap) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *SpreadMap) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_SpreadMap.Merge(m, src)
|
||||
}
|
||||
func (m *SpreadMap) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *SpreadMap) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_SpreadMap.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_SpreadMap proto.InternalMessageInfo
|
||||
|
||||
func (m *SpreadMap) GetEpoch() uint64 {
|
||||
if m != nil {
|
||||
return m.Epoch
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *SpreadMap) GetNetMap() []NodeInfo {
|
||||
if m != nil {
|
||||
return m.NetMap
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type NodeInfo struct {
|
||||
Address string `protobuf:"bytes,1,opt,name=Address,proto3" json:"address"`
|
||||
PubKey []byte `protobuf:"bytes,2,opt,name=PubKey,proto3" json:"pubkey,omitempty"`
|
||||
Options []string `protobuf:"bytes,3,rep,name=Options,proto3" json:"options,omitempty"`
|
||||
Status NodeStatus `protobuf:"varint,4,opt,name=Status,proto3,customtype=NodeStatus" json:"status"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *NodeInfo) Reset() { *m = NodeInfo{} }
|
||||
func (*NodeInfo) ProtoMessage() {}
|
||||
func (*NodeInfo) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_423083266369adee, []int{1}
|
||||
}
|
||||
func (m *NodeInfo) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *NodeInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *NodeInfo) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_NodeInfo.Merge(m, src)
|
||||
}
|
||||
func (m *NodeInfo) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *NodeInfo) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_NodeInfo.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_NodeInfo proto.InternalMessageInfo
|
||||
|
||||
func (m *NodeInfo) GetAddress() string {
|
||||
if m != nil {
|
||||
return m.Address
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *NodeInfo) GetPubKey() []byte {
|
||||
if m != nil {
|
||||
return m.PubKey
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *NodeInfo) GetOptions() []string {
|
||||
if m != nil {
|
||||
return m.Options
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*SpreadMap)(nil), "bootstrap.SpreadMap")
|
||||
proto.RegisterType((*NodeInfo)(nil), "bootstrap.NodeInfo")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("bootstrap/types.proto", fileDescriptor_423083266369adee) }
|
||||
|
||||
var fileDescriptor_423083266369adee = []byte{
|
||||
// 345 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x4c, 0x90, 0xcb, 0x4a, 0xc3, 0x40,
|
||||
0x18, 0x85, 0x33, 0x6d, 0x4d, 0xcd, 0xd4, 0x85, 0xc6, 0x16, 0x82, 0x48, 0x12, 0x0a, 0x42, 0x90,
|
||||
0x36, 0xc1, 0xcb, 0x0b, 0x18, 0x10, 0x14, 0x69, 0x95, 0xd4, 0x95, 0xbb, 0x5c, 0xa6, 0x17, 0xa4,
|
||||
0xf9, 0x87, 0xcc, 0x44, 0xc8, 0xce, 0xc7, 0xf0, 0x89, 0xa4, 0x4b, 0x97, 0xc5, 0x45, 0xd0, 0xb8,
|
||||
0xcb, 0x53, 0x48, 0x27, 0x6d, 0xe9, 0xee, 0x3f, 0xe7, 0x7c, 0x33, 0xf3, 0xcf, 0xc1, 0x9d, 0x00,
|
||||
0x80, 0x33, 0x9e, 0xf8, 0xd4, 0xe1, 0x19, 0x25, 0xcc, 0xa6, 0x09, 0x70, 0x50, 0x95, 0xad, 0x7d,
|
||||
0xd2, 0x9f, 0xcc, 0xf8, 0x34, 0x0d, 0xec, 0x10, 0xe6, 0xce, 0x04, 0x26, 0xe0, 0x08, 0x22, 0x48,
|
||||
0xc7, 0x42, 0x09, 0x21, 0xa6, 0xea, 0x64, 0xf7, 0x19, 0x2b, 0x23, 0x9a, 0x10, 0x3f, 0x1a, 0xf8,
|
||||
0x54, 0x6d, 0xe3, 0xbd, 0x5b, 0x0a, 0xe1, 0x54, 0x43, 0x26, 0xb2, 0x1a, 0x5e, 0x25, 0xd4, 0x0b,
|
||||
0x2c, 0x0f, 0x09, 0x1f, 0xf8, 0x54, 0xab, 0x99, 0x75, 0xab, 0x75, 0x79, 0x6c, 0x6f, 0x5f, 0xb3,
|
||||
0x87, 0x10, 0x91, 0xfb, 0x78, 0x0c, 0x6e, 0x63, 0x91, 0x1b, 0x92, 0xb7, 0x06, 0xbb, 0x9f, 0x08,
|
||||
0xef, 0x6f, 0x22, 0xf5, 0x0c, 0x37, 0x6f, 0xa2, 0x28, 0x21, 0x8c, 0x89, 0x7b, 0x15, 0xb7, 0x55,
|
||||
0xe6, 0x46, 0xd3, 0xaf, 0x2c, 0x6f, 0x93, 0xa9, 0x3d, 0x2c, 0x3f, 0xa5, 0xc1, 0x03, 0xc9, 0xb4,
|
||||
0x9a, 0x89, 0xac, 0x03, 0xb7, 0x5d, 0xe6, 0xc6, 0x21, 0x4d, 0x83, 0x57, 0x92, 0xf5, 0x60, 0x3e,
|
||||
0xe3, 0x64, 0x4e, 0x79, 0xe6, 0xad, 0x19, 0xd5, 0xc1, 0xcd, 0x47, 0xca, 0x67, 0x10, 0x33, 0xad,
|
||||
0x6e, 0xd6, 0x2d, 0xc5, 0xed, 0x94, 0xb9, 0x71, 0x04, 0x95, 0xb5, 0xc3, 0x6f, 0x28, 0xf5, 0x1a,
|
||||
0xcb, 0x23, 0xee, 0xf3, 0x94, 0x69, 0x8d, 0xd5, 0xe7, 0xdc, 0xd3, 0xd5, 0xc2, 0xdf, 0xb9, 0x81,
|
||||
0x57, 0x7b, 0x56, 0x49, 0x99, 0x1b, 0x32, 0x13, 0x93, 0xb7, 0x66, 0xdd, 0xbb, 0xe5, 0xaf, 0x2e,
|
||||
0xbd, 0x17, 0xba, 0xb4, 0x28, 0x74, 0xf4, 0x55, 0xe8, 0x68, 0x59, 0xe8, 0xe8, 0xa7, 0xd0, 0xd1,
|
||||
0xc7, 0x9f, 0x2e, 0xbd, 0x9c, 0xef, 0x74, 0x1d, 0x33, 0x1a, 0x86, 0xfd, 0x88, 0xbc, 0x39, 0x31,
|
||||
0x81, 0x31, 0xeb, 0x57, 0x4d, 0x6f, 0x9b, 0x0a, 0x64, 0x61, 0x5c, 0xfd, 0x07, 0x00, 0x00, 0xff,
|
||||
0xff, 0x71, 0xeb, 0x37, 0x57, 0xc2, 0x01, 0x00, 0x00,
|
||||
}
|
||||
|
||||
func (m *SpreadMap) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *SpreadMap) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *SpreadMap) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if len(m.NetMap) > 0 {
|
||||
for iNdEx := len(m.NetMap) - 1; iNdEx >= 0; iNdEx-- {
|
||||
{
|
||||
size, err := m.NetMap[iNdEx].MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
}
|
||||
}
|
||||
if m.Epoch != 0 {
|
||||
i = encodeVarintTypes(dAtA, i, uint64(m.Epoch))
|
||||
i--
|
||||
dAtA[i] = 0x8
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func (m *NodeInfo) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *NodeInfo) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *NodeInfo) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if m.Status != 0 {
|
||||
i = encodeVarintTypes(dAtA, i, uint64(m.Status))
|
||||
i--
|
||||
dAtA[i] = 0x20
|
||||
}
|
||||
if len(m.Options) > 0 {
|
||||
for iNdEx := len(m.Options) - 1; iNdEx >= 0; iNdEx-- {
|
||||
i -= len(m.Options[iNdEx])
|
||||
copy(dAtA[i:], m.Options[iNdEx])
|
||||
i = encodeVarintTypes(dAtA, i, uint64(len(m.Options[iNdEx])))
|
||||
i--
|
||||
dAtA[i] = 0x1a
|
||||
}
|
||||
}
|
||||
if len(m.PubKey) > 0 {
|
||||
i -= len(m.PubKey)
|
||||
copy(dAtA[i:], m.PubKey)
|
||||
i = encodeVarintTypes(dAtA, i, uint64(len(m.PubKey)))
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
}
|
||||
if len(m.Address) > 0 {
|
||||
i -= len(m.Address)
|
||||
copy(dAtA[i:], m.Address)
|
||||
i = encodeVarintTypes(dAtA, i, uint64(len(m.Address)))
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovTypes(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *SpreadMap) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Epoch != 0 {
|
||||
n += 1 + sovTypes(uint64(m.Epoch))
|
||||
}
|
||||
if len(m.NetMap) > 0 {
|
||||
for _, e := range m.NetMap {
|
||||
l = e.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (m *NodeInfo) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
l = len(m.Address)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
l = len(m.PubKey)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
if len(m.Options) > 0 {
|
||||
for _, s := range m.Options {
|
||||
l = len(s)
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
}
|
||||
if m.Status != 0 {
|
||||
n += 1 + sovTypes(uint64(m.Status))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovTypes(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozTypes(x uint64) (n int) {
|
||||
return sovTypes(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *SpreadMap) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: SpreadMap: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: SpreadMap: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Epoch", wireType)
|
||||
}
|
||||
m.Epoch = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Epoch |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field NetMap", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.NetMap = append(m.NetMap, NodeInfo{})
|
||||
if err := m.NetMap[len(m.NetMap)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipTypes(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func (m *NodeInfo) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: NodeInfo: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: NodeInfo: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Address", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Address = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field PubKey", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.PubKey = append(m.PubKey[:0], dAtA[iNdEx:postIndex]...)
|
||||
if m.PubKey == nil {
|
||||
m.PubKey = []byte{}
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 3:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Options", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Options = append(m.Options, string(dAtA[iNdEx:postIndex]))
|
||||
iNdEx = postIndex
|
||||
case 4:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
|
||||
}
|
||||
m.Status = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Status |= NodeStatus(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipTypes(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipTypes(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupTypes
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthTypes = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowTypes = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupTypes = fmt.Errorf("proto: unexpected end of group")
|
||||
)
|
22
bootstrap/types.proto
Normal file
@ -0,0 +1,22 @@
syntax = "proto3";
|
||||
package bootstrap;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/bootstrap";
|
||||
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
option (gogoproto.stringer_all) = false;
|
||||
option (gogoproto.goproto_stringer_all) = false;
|
||||
|
||||
message SpreadMap {
|
||||
uint64 Epoch = 1;
|
||||
repeated NodeInfo NetMap = 2 [(gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
message NodeInfo {
|
||||
string Address = 1 [(gogoproto.jsontag) = "address"];
|
||||
bytes PubKey = 2 [(gogoproto.jsontag) = "pubkey,omitempty"];
|
||||
repeated string Options = 3 [(gogoproto.jsontag) = "options,omitempty"];
|
||||
uint64 Status = 4 [(gogoproto.jsontag) = "status", (gogoproto.nullable) = false, (gogoproto.customtype) = "NodeStatus"];
|
||||
}
|
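As a usage illustration (not part of this commit), a minimal sketch that builds a SpreadMap from the generated types above and round-trips it through the stable marshaler. The import path follows the go_package option; the "/Location:Europe" option string is made up for the example.

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/bootstrap"
)

func main() {
	// One-node network map for epoch 1; the option value is illustrative only.
	m := bootstrap.SpreadMap{
		Epoch: 1,
		NetMap: []bootstrap.NodeInfo{
			{Address: "127.0.0.1:8080", Options: []string{"/Location:Europe"}},
		},
	}

	// stable_marshaler_all gives every message deterministic Marshal/Unmarshal methods.
	data, err := m.Marshal()
	if err != nil {
		panic(err)
	}

	var restored bootstrap.SpreadMap
	if err := restored.Unmarshal(data); err != nil {
		panic(err)
	}

	fmt.Println(restored.String())
}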
185
chain/address.go
Normal file
@ -0,0 +1,185 @@
package chain
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/ecdsa"
|
||||
"crypto/sha256"
|
||||
"encoding/hex"
|
||||
|
||||
"github.com/mr-tron/base58"
|
||||
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||
"github.com/nspcc-dev/neofs-proto/internal"
|
||||
"github.com/pkg/errors"
|
||||
"golang.org/x/crypto/ripemd160"
|
||||
)
|
||||
|
||||
// WalletAddress implements NEO address.
|
||||
type WalletAddress [AddressLength]byte
|
||||
|
||||
const (
|
||||
// AddressLength contains size of address,
|
||||
// 0x17 byte (address version) + 20 bytes of ScriptHash + 4 bytes of checksum.
|
||||
AddressLength = 25
|
||||
|
||||
// ScriptHashLength contains size of ScriptHash.
|
||||
ScriptHashLength = 20
|
||||
|
||||
// ErrEmptyAddress is raised when empty Address is passed.
|
||||
ErrEmptyAddress = internal.Error("empty address")
|
||||
|
||||
// ErrAddressLength is raised when passed address has wrong size.
|
||||
ErrAddressLength = internal.Error("wrong address length")
|
||||
)
|
||||
|
||||
func checksum(sign []byte) []byte {
|
||||
hash := sha256.Sum256(sign)
|
||||
hash = sha256.Sum256(hash[:])
|
||||
return hash[:4]
|
||||
}
|
||||
|
||||
// FetchPublicKeys tries to parse public keys from verification script.
|
||||
func FetchPublicKeys(vs []byte) []*ecdsa.PublicKey {
|
||||
var (
|
||||
count int
|
||||
offset int
|
||||
ln = len(vs)
|
||||
result []*ecdsa.PublicKey
|
||||
)
|
||||
|
||||
switch {
|
||||
case ln < 1: // wrong data size
|
||||
return nil
|
||||
case vs[ln-1] == 0xac: // last byte is CHECKSIG
|
||||
count = 1
|
||||
case vs[ln-1] == 0xae: // last byte is CHECKMULTISIG
|
||||
// the 2nd byte from the end encodes the number of public keys
|
||||
count = int(vs[ln-2] - 0x50)
|
||||
// ignores CHECKMULTISIG
|
||||
offset = 1
|
||||
default: // unknown type
|
||||
return nil
|
||||
}
|
||||
|
||||
result = make([]*ecdsa.PublicKey, 0, count)
|
||||
for i := 0; i < count; i++ {
|
||||
// skips the PUSHBYTES33 opcode and tries to parse the key
|
||||
from, to := offset+1, offset+1+crypto.PublicKeyCompressedSize
|
||||
|
||||
// when passed VerificationScript has wrong size
|
||||
if len(vs) < to {
|
||||
return nil
|
||||
}
|
||||
|
||||
key := crypto.UnmarshalPublicKey(vs[from:to])
|
||||
// when wrong public key is passed
|
||||
if key == nil {
|
||||
return nil
|
||||
}
|
||||
result = append(result, key)
|
||||
|
||||
offset += 1 + crypto.PublicKeyCompressedSize
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
// VerificationScript returns VerificationScript composed from public keys.
|
||||
func VerificationScript(pubs ...*ecdsa.PublicKey) []byte {
|
||||
var (
|
||||
pre []byte
|
||||
suf []byte
|
||||
body []byte
|
||||
offset int
|
||||
lnPK = len(pubs)
|
||||
ln = crypto.PublicKeyCompressedSize*lnPK + lnPK // 33 * count + count * 1 (PUSHBYTES33)
|
||||
)
|
||||
|
||||
if len(pubs) > 1 {
|
||||
pre = []byte{0x51} // one address
|
||||
suf = []byte{byte(0x50 + lnPK), 0xae} // count of PK's + CHECKMULTISIG
|
||||
} else {
|
||||
suf = []byte{0xac} // CHECKSIG
|
||||
}
|
||||
|
||||
ln += len(pre) + len(suf)
|
||||
|
||||
body = make([]byte, ln)
|
||||
offset += copy(body, pre)
|
||||
|
||||
for i := range pubs {
|
||||
body[offset] = 0x21
|
||||
offset++
|
||||
offset += copy(body[offset:], crypto.MarshalPublicKey(pubs[i]))
|
||||
}
|
||||
|
||||
copy(body[offset:], suf)
|
||||
|
||||
return body
|
||||
}
|
||||
|
||||
// KeysToAddress returns a NEO address composed from the given public keys.
|
||||
func KeysToAddress(pubs ...*ecdsa.PublicKey) string {
|
||||
if len(pubs) == 0 {
|
||||
return ""
|
||||
}
|
||||
return Address(VerificationScript(pubs...))
|
||||
}
|
||||
|
||||
// Address returns NEO address based on passed VerificationScript.
|
||||
func Address(verificationScript []byte) string {
|
||||
sign := [AddressLength]byte{0x17}
|
||||
hash := sha256.Sum256(verificationScript)
|
||||
ripe := ripemd160.New()
|
||||
ripe.Write(hash[:])
|
||||
copy(sign[1:], ripe.Sum(nil))
|
||||
copy(sign[21:], checksum(sign[:21]))
|
||||
return base58.Encode(sign[:])
|
||||
}
|
||||
|
||||
// ReversedScriptHashToAddress parses script hash and returns valid NEO address.
|
||||
func ReversedScriptHashToAddress(sc string) (addr string, err error) {
|
||||
var data []byte
|
||||
if data, err = DecodeScriptHash(sc); err != nil {
|
||||
return
|
||||
}
|
||||
sign := [AddressLength]byte{0x17}
|
||||
copy(sign[1:], data)
|
||||
copy(sign[1+ScriptHashLength:], checksum(sign[:1+ScriptHashLength]))
|
||||
return base58.Encode(sign[:]), nil
|
||||
}
|
||||
|
||||
// IsAddress checks that passed NEO Address is valid.
|
||||
func IsAddress(s string) error {
|
||||
if s == "" {
|
||||
return ErrEmptyAddress
|
||||
} else if addr, err := base58.Decode(s); err != nil {
|
||||
return errors.Wrap(err, "base58 decode")
|
||||
} else if ln := len(addr); ln != AddressLength {
|
||||
return errors.Wrapf(ErrAddressLength, "length %d != %d", AddressLength, ln)
|
||||
} else if sum := checksum(addr[:21]); !bytes.Equal(addr[21:], sum) {
|
||||
return errors.Errorf("wrong checksum %0x != %0x",
|
||||
addr[21:], sum)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ReverseBytes returns reversed []byte of given.
|
||||
func ReverseBytes(data []byte) []byte {
|
||||
for i, j := 0, len(data)-1; i < j; i, j = i+1, j-1 {
|
||||
data[i], data[j] = data[j], data[i]
|
||||
}
|
||||
return data
|
||||
}
|
||||
|
||||
// DecodeScriptHash parses script hash into slice of bytes.
|
||||
func DecodeScriptHash(s string) ([]byte, error) {
|
||||
if s == "" {
|
||||
return nil, ErrEmptyAddress
|
||||
} else if addr, err := hex.DecodeString(s); err != nil {
|
||||
return nil, errors.Wrap(err, "hex decode")
|
||||
} else if ln := len(addr); ln != ScriptHashLength {
|
||||
return nil, errors.Wrapf(ErrAddressLength, "length %d != %d", ScriptHashLength, ln)
|
||||
} else {
|
||||
return addr, nil
|
||||
}
|
||||
}
|
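To make the 25-byte layout described above concrete, here is a short sketch (not part of this commit) that decodes one of the addresses used in the tests below and prints its version byte, script hash, and checksum; it only uses the base58 package already imported by address.go.

package main

import (
	"encoding/hex"
	"fmt"

	"github.com/mr-tron/base58"
)

func main() {
	// Address taken from the tests below. Decoding it exposes the layout
	// Address() produces: 0x17 version byte + 20-byte script hash + 4-byte checksum.
	raw, err := base58.Decode("AcraNnCuPKnUYtPYyrACRCVJhLpvskbfhu")
	if err != nil {
		panic(err)
	}
	fmt.Printf("len=%d version=%#x\n", len(raw), raw[0]) // len=25 version=0x17
	fmt.Println("script hash:", hex.EncodeToString(raw[1:21]))
	fmt.Println("checksum:   ", hex.EncodeToString(raw[21:]))
}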
292
chain/address_test.go
Normal file
@ -0,0 +1,292 @@
package chain
|
||||
|
||||
import (
|
||||
"crypto/ecdsa"
|
||||
"encoding/hex"
|
||||
"testing"
|
||||
|
||||
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||
"github.com/nspcc-dev/neofs-crypto/test"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestAddress(t *testing.T) {
|
||||
var (
|
||||
multiSigVerificationScript = "512103c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c57172103fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df952ae"
|
||||
multiSigAddress = "ANbvKqa2SfgTUkq43NRUhCiyxPrpUPn7S3"
|
||||
|
||||
normalVerificationScript = "2102a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61ac"
|
||||
normalAddress = "AcraNnCuPKnUYtPYyrACRCVJhLpvskbfhu"
|
||||
)
|
||||
|
||||
t.Run("check multi-sig address", func(t *testing.T) {
|
||||
data, err := hex.DecodeString(multiSigVerificationScript)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, multiSigAddress, Address(data))
|
||||
})
|
||||
|
||||
t.Run("check normal address", func(t *testing.T) {
|
||||
data, err := hex.DecodeString(normalVerificationScript)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, normalAddress, Address(data))
|
||||
})
|
||||
}
|
||||
|
||||
func TestVerificationScript(t *testing.T) {
|
||||
t.Run("check normal", func(t *testing.T) {
|
||||
pkString := "02a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61"
|
||||
|
||||
pkBytes, err := hex.DecodeString(pkString)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk := crypto.UnmarshalPublicKey(pkBytes)
|
||||
|
||||
expect, err := hex.DecodeString(
|
||||
"21" + pkString + // PUSHBYTES33
|
||||
"ac", // CHECKSIG
|
||||
)
|
||||
|
||||
require.Equal(t, expect, VerificationScript(pk))
|
||||
})
|
||||
|
||||
t.Run("check multisig", func(t *testing.T) {
|
||||
pk1String := "03c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c5717"
|
||||
pk2String := "03fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df9"
|
||||
|
||||
pk1Bytes, err := hex.DecodeString(pk1String)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk1 := crypto.UnmarshalPublicKey(pk1Bytes)
|
||||
|
||||
pk2Bytes, err := hex.DecodeString(pk2String)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk2 := crypto.UnmarshalPublicKey(pk2Bytes)
|
||||
|
||||
expect, err := hex.DecodeString(
|
||||
"51" + // one address
|
||||
"21" + pk1String + // PUSHBYTES33
|
||||
"21" + pk2String + // PUSHBYTES33
|
||||
"52" + // 2 PublicKeys
|
||||
"ae", // CHECKMULTISIG
|
||||
)
|
||||
|
||||
require.Equal(t, expect, VerificationScript(pk1, pk2))
|
||||
})
|
||||
}
|
||||
|
||||
func TestKeysToAddress(t *testing.T) {
|
||||
t.Run("check normal", func(t *testing.T) {
|
||||
pkString := "02a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61"
|
||||
|
||||
pkBytes, err := hex.DecodeString(pkString)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk := crypto.UnmarshalPublicKey(pkBytes)
|
||||
|
||||
expect := "AcraNnCuPKnUYtPYyrACRCVJhLpvskbfhu"
|
||||
|
||||
actual := KeysToAddress(pk)
|
||||
require.Equal(t, expect, actual)
|
||||
require.NoError(t, IsAddress(actual))
|
||||
})
|
||||
|
||||
t.Run("check multisig", func(t *testing.T) {
|
||||
pk1String := "03c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c5717"
|
||||
pk2String := "03fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df9"
|
||||
|
||||
pk1Bytes, err := hex.DecodeString(pk1String)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk1 := crypto.UnmarshalPublicKey(pk1Bytes)
|
||||
|
||||
pk2Bytes, err := hex.DecodeString(pk2String)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk2 := crypto.UnmarshalPublicKey(pk2Bytes)
|
||||
|
||||
expect := "ANbvKqa2SfgTUkq43NRUhCiyxPrpUPn7S3"
|
||||
actual := KeysToAddress(pk1, pk2)
|
||||
require.Equal(t, expect, actual)
|
||||
require.NoError(t, IsAddress(actual))
|
||||
})
|
||||
}
|
||||
|
||||
func TestFetchPublicKeys(t *testing.T) {
|
||||
var (
|
||||
multiSigVerificationScript = "512103c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c57172103fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df952ae"
|
||||
normalVerificationScript = "2102a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61ac"
|
||||
|
||||
pk1String = "03c02a93134f98d9c78ec54b1b1f97fc64cd81360f53a293f41e4ad54aac3c5717"
|
||||
pk2String = "03fea219d4ccfd7641cebbb2439740bb4bd7c4730c1abd6ca1dc44386533816df9"
|
||||
pk3String = "02a33413277a319cc6fd4c54a2feb9032eba668ec587f307e319dc48733087fa61"
|
||||
)
|
||||
|
||||
t.Run("shouls not fail", func(t *testing.T) {
|
||||
wrongVS, err := hex.DecodeString(multiSigVerificationScript)
|
||||
require.NoError(t, err)
|
||||
|
||||
wrongVS[len(wrongVS)-1] = 0x1
|
||||
|
||||
wrongPK, err := hex.DecodeString(multiSigVerificationScript)
|
||||
require.NoError(t, err)
|
||||
wrongPK[2] = 0x1
|
||||
|
||||
var testCases = []struct {
|
||||
name string
|
||||
value []byte
|
||||
}{
|
||||
{name: "empty VerificationScript"},
|
||||
{
|
||||
name: "wrong size VerificationScript",
|
||||
value: []byte{0x1},
|
||||
},
|
||||
{
|
||||
name: "wrong VerificationScript type",
|
||||
value: wrongVS,
|
||||
},
|
||||
{
|
||||
name: "wrong public key in VerificationScript",
|
||||
value: wrongPK,
|
||||
},
|
||||
}
|
||||
|
||||
for i := range testCases {
|
||||
tt := testCases[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
var keys []*ecdsa.PublicKey
|
||||
require.NotPanics(t, func() {
|
||||
keys = FetchPublicKeys(tt.value)
|
||||
})
|
||||
require.Nil(t, keys)
|
||||
})
|
||||
}
|
||||
})
|
||||
|
||||
t.Run("check multi-sig address", func(t *testing.T) {
|
||||
data, err := hex.DecodeString(multiSigVerificationScript)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk1Bytes, err := hex.DecodeString(pk1String)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk2Bytes, err := hex.DecodeString(pk2String)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk1 := crypto.UnmarshalPublicKey(pk1Bytes)
|
||||
pk2 := crypto.UnmarshalPublicKey(pk2Bytes)
|
||||
|
||||
keys := FetchPublicKeys(data)
|
||||
require.Len(t, keys, 2)
|
||||
require.Equal(t, keys[0], pk1)
|
||||
require.Equal(t, keys[1], pk2)
|
||||
})
|
||||
|
||||
t.Run("check normal address", func(t *testing.T) {
|
||||
data, err := hex.DecodeString(normalVerificationScript)
|
||||
require.NoError(t, err)
|
||||
|
||||
pkBytes, err := hex.DecodeString(pk3String)
|
||||
require.NoError(t, err)
|
||||
|
||||
pk := crypto.UnmarshalPublicKey(pkBytes)
|
||||
|
||||
keys := FetchPublicKeys(data)
|
||||
require.Len(t, keys, 1)
|
||||
require.Equal(t, keys[0], pk)
|
||||
})
|
||||
|
||||
t.Run("generate 10 keys VerificationScript and try parse it", func(t *testing.T) {
|
||||
var (
|
||||
count = 10
|
||||
expect = make([]*ecdsa.PublicKey, 0, count)
|
||||
)
|
||||
|
||||
for i := 0; i < count; i++ {
|
||||
key := test.DecodeKey(i)
|
||||
expect = append(expect, &key.PublicKey)
|
||||
}
|
||||
|
||||
vs := VerificationScript(expect...)
|
||||
|
||||
actual := FetchPublicKeys(vs)
|
||||
require.Equal(t, expect, actual)
|
||||
})
|
||||
}
|
||||
|
||||
func TestReversedScriptHashToAddress(t *testing.T) {
|
||||
var testCases = []struct {
|
||||
name string
|
||||
value string
|
||||
expect string
|
||||
}{
|
||||
{
|
||||
name: "first",
|
||||
expect: "APfiG5imQgn8dzTTfaDfqHnxo3QDUkF69A",
|
||||
value: "5696acd07f0927fd5f01946828638c9e2c90c5dc",
|
||||
},
|
||||
|
||||
{
|
||||
name: "second",
|
||||
expect: "AK2nJJpJr6o664CWJKi1QRXjqeic2zRp8y",
|
||||
value: "23ba2703c53263e8d6e522dc32203339dcd8eee9",
|
||||
},
|
||||
}
|
||||
|
||||
for i := range testCases {
|
||||
tt := testCases[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
actual, err := ReversedScriptHashToAddress(tt.value)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, tt.expect, actual)
|
||||
require.NoError(t, IsAddress(actual))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestReverseBytes(t *testing.T) {
|
||||
var testCases = []struct {
|
||||
name string
|
||||
value []byte
|
||||
expect []byte
|
||||
}{
|
||||
{name: "empty"},
|
||||
{
|
||||
name: "single byte",
|
||||
expect: []byte{0x1},
|
||||
value: []byte{0x1},
|
||||
},
|
||||
|
||||
{
|
||||
name: "two bytes",
|
||||
expect: []byte{0x2, 0x1},
|
||||
value: []byte{0x1, 0x2},
|
||||
},
|
||||
|
||||
{
|
||||
name: "three bytes",
|
||||
expect: []byte{0x3, 0x2, 0x1},
|
||||
value: []byte{0x1, 0x2, 0x3},
|
||||
},
|
||||
|
||||
{
|
||||
name: "five bytes",
|
||||
expect: []byte{0x5, 0x4, 0x3, 0x2, 0x1},
|
||||
value: []byte{0x1, 0x2, 0x3, 0x4, 0x5},
|
||||
},
|
||||
|
||||
{
|
||||
name: "eight bytes",
|
||||
expect: []byte{0x8, 0x7, 0x6, 0x5, 0x4, 0x3, 0x2, 0x1},
|
||||
value: []byte{0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8},
|
||||
},
|
||||
}
|
||||
|
||||
for i := range testCases {
|
||||
tt := testCases[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
actual := ReverseBytes(tt.value)
|
||||
require.Equal(t, tt.expect, actual)
|
||||
})
|
||||
}
|
||||
}
|
68
container/service.go
Normal file
@ -0,0 +1,68 @@
package container
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
type (
|
||||
// CID type alias.
|
||||
CID = refs.CID
|
||||
// UUID type alias.
|
||||
UUID = refs.UUID
|
||||
// OwnerID type alias.
|
||||
OwnerID = refs.OwnerID
|
||||
// MessageID type alias.
|
||||
MessageID = refs.MessageID
|
||||
)
|
||||
|
||||
// SetTTL sets ttl to GetRequest to satisfy TTLRequest interface.
|
||||
func (m *GetRequest) SetTTL(v uint32) { m.TTL = v }
|
||||
|
||||
// SetTTL sets ttl to PutRequest to satisfy TTLRequest interface.
|
||||
func (m *PutRequest) SetTTL(v uint32) { m.TTL = v }
|
||||
|
||||
// SetTTL sets ttl to ListRequest to satisfy TTLRequest interface.
|
||||
func (m *ListRequest) SetTTL(v uint32) { m.TTL = v }
|
||||
|
||||
// SetTTL sets ttl to DeleteRequest to satisfy TTLRequest interface.
|
||||
func (m *DeleteRequest) SetTTL(v uint32) { m.TTL = v }
|
||||
|
||||
// SetSignature sets signature to PutRequest to satisfy SignedRequest interface.
|
||||
func (m *PutRequest) SetSignature(v []byte) { m.Signature = v }
|
||||
|
||||
// SetSignature sets signature to DeleteRequest to satisfy SignedRequest interface.
|
||||
func (m *DeleteRequest) SetSignature(v []byte) { m.Signature = v }
|
||||
|
||||
// PrepareData prepares bytes representation of PutRequest to satisfy SignedRequest interface.
|
||||
func (m *PutRequest) PrepareData() ([]byte, error) {
|
||||
var (
|
||||
err error
|
||||
buf = new(bytes.Buffer)
|
||||
capBytes = make([]byte, 8)
|
||||
)
|
||||
|
||||
binary.BigEndian.PutUint64(capBytes, m.Capacity)
|
||||
|
||||
if _, err = buf.Write(m.MessageID.Bytes()); err != nil {
|
||||
return nil, errors.Wrap(err, "could not write message id")
|
||||
} else if _, err = buf.Write(capBytes); err != nil {
|
||||
return nil, errors.Wrap(err, "could not write capacity")
|
||||
} else if _, err = buf.Write(m.OwnerID.Bytes()); err != nil {
|
||||
return nil, errors.Wrap(err, "could not write pub")
|
||||
} else if data, err := m.Rules.Marshal(); err != nil {
|
||||
return nil, errors.Wrap(err, "could not marshal placement")
|
||||
} else if _, err = buf.Write(data); err != nil {
|
||||
return nil, errors.Wrap(err, "could not write placement")
|
||||
}
|
||||
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
// PrepareData prepares bytes representation of DeleteRequest to satisfy SignedRequest interface.
|
||||
func (m *DeleteRequest) PrepareData() ([]byte, error) {
|
||||
return m.CID.Bytes(), nil
|
||||
}
|
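A minimal sketch (not part of this commit) of how these helpers are meant to compose when signing a PutRequest: SetTTL fills the TTLRequest field, PrepareData yields the bytes to sign, and the result is attached with SetSignature. The signing function here is a stand-in, not an API from this repository or neofs-crypto.

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/container"
)

// signBytes stands in for whatever signature scheme the caller uses;
// it is a no-op placeholder for this sketch.
func signBytes(data []byte) []byte {
	return data
}

// signPutRequest shows the intended call order for the SignedRequest helpers above.
func signPutRequest(req *container.PutRequest) error {
	req.SetTTL(2) // TTLRequest helper

	data, err := req.PrepareData()
	if err != nil {
		return err
	}

	req.SetSignature(signBytes(data)) // SignedRequest helper
	return nil
}

func main() {
	req := new(container.PutRequest)
	if err := signPutRequest(req); err != nil {
		panic(err)
	}
	fmt.Println(len(req.Signature), "signature bytes attached")
}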
2131
container/service.pb.go
Normal file
File diff suppressed because it is too large
68
container/service.proto
Normal file
@ -0,0 +1,68 @@
syntax = "proto3";
|
||||
package container;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/container";
|
||||
|
||||
import "container/types.proto";
|
||||
import "github.com/nspcc-dev/netmap/selector.proto";
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
service Service {
|
||||
// Create container
|
||||
rpc Put(PutRequest) returns (PutResponse);
|
||||
|
||||
// Delete container ... discuss implementation later
|
||||
rpc Delete(DeleteRequest) returns (DeleteResponse);
|
||||
|
||||
// Get container
|
||||
rpc Get(GetRequest) returns (GetResponse);
|
||||
|
||||
rpc List(ListRequest) returns (ListResponse);
|
||||
}
|
||||
|
||||
// PutRequest message to create a new container
|
||||
message PutRequest {
|
||||
bytes MessageID = 1 [(gogoproto.customtype) = "MessageID", (gogoproto.nullable) = false];
|
||||
uint64 Capacity = 2; // not actual size in megabytes, but probability of storage availability
|
||||
bytes OwnerID = 3 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
netmap.PlacementRule rules = 4 [(gogoproto.nullable) = false];
|
||||
bytes Signature = 5;
|
||||
uint32 TTL = 6;
|
||||
}
|
||||
|
||||
// PutResponse message to respond with the CID of the created container
|
||||
message PutResponse {
|
||||
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
message DeleteRequest {
|
||||
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||
uint32 TTL = 2;
|
||||
bytes Signature = 3;
|
||||
}
|
||||
|
||||
message DeleteResponse { }
|
||||
|
||||
|
||||
// GetRequest message to fetch container placement rules
|
||||
message GetRequest {
|
||||
bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||
uint32 TTL = 2;
|
||||
}
|
||||
|
||||
// GetResponse message with container structure
|
||||
message GetResponse {
|
||||
container.Container Container = 1;
|
||||
}
|
||||
|
||||
// ListRequest message to list containers for user
|
||||
message ListRequest {
|
||||
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
uint32 TTL = 2;
|
||||
}
|
||||
|
||||
// ListResponse message to respond about all user containers
|
||||
message ListResponse {
|
||||
repeated bytes CID = 1 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||
}
|
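For orientation (not part of this commit), a sketch of calling the Service defined above over gRPC. It assumes the generated bindings in container/service.pb.go, whose diff is suppressed above, expose the conventional gogo/gRPC names NewServiceClient and List; treat those names, the endpoint, and the dial options as assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/nspcc-dev/neofs-proto/container"
	"google.golang.org/grpc"
)

func main() {
	// Endpoint and insecure transport are illustrative only.
	conn, err := grpc.Dial("localhost:8080", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// NewServiceClient is the name the gRPC plugin conventionally generates
	// for `service Service`; the generated file is not shown here.
	cli := container.NewServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := cli.List(ctx, &container.ListRequest{TTL: 2})
	if err != nil {
		panic(err)
	}
	fmt.Println(len(resp.CID), "containers")
}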
94
container/types.go
Normal file
@ -0,0 +1,94 @@
package container
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/nspcc-dev/neofs-crypto/test"
|
||||
"github.com/nspcc-dev/neofs-proto/internal"
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
"github.com/nspcc-dev/netmap"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
var (
|
||||
_ internal.Custom = (*Container)(nil)
|
||||
|
||||
emptySalt = (UUID{}).Bytes()
|
||||
emptyOwner = (OwnerID{}).Bytes()
|
||||
)
|
||||
|
||||
// New creates new user container based on capacity, OwnerID and PlacementRules.
|
||||
func New(cap uint64, owner OwnerID, rules netmap.PlacementRule) (*Container, error) {
|
||||
if bytes.Equal(owner[:], emptyOwner) {
|
||||
return nil, refs.ErrEmptyOwner
|
||||
} else if cap == 0 {
|
||||
return nil, refs.ErrEmptyCapacity
|
||||
}
|
||||
|
||||
salt, err := uuid.NewRandom()
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "could not create salt")
|
||||
}
|
||||
|
||||
return &Container{
|
||||
OwnerID: owner,
|
||||
Salt: UUID(salt),
|
||||
Capacity: cap,
|
||||
Rules: rules,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Bytes returns bytes representation of Container.
|
||||
func (m *Container) Bytes() []byte {
|
||||
data, err := m.Marshal()
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
return data
|
||||
}
|
||||
|
||||
// ID returns generated ContainerID based on Container (data).
|
||||
func (m *Container) ID() (CID, error) {
|
||||
if m.Empty() {
|
||||
return CID{}, refs.ErrEmptyContainer
|
||||
}
|
||||
data, err := m.Marshal()
|
||||
if err != nil {
|
||||
return CID{}, err
|
||||
}
|
||||
|
||||
return refs.CIDForBytes(data), nil
|
||||
}
|
||||
|
||||
// Empty checks that container is empty.
|
||||
func (m *Container) Empty() bool {
|
||||
return m.Capacity == 0 || bytes.Equal(m.Salt.Bytes(), emptySalt) || bytes.Equal(m.OwnerID.Bytes(), emptyOwner)
|
||||
}
|
||||
|
||||
// -- Test container definition -- //
|
||||
// NewTestContainer returns test container.
|
||||
//
|
||||
// WARNING: DON'T USE THIS OUTSIDE TESTS.
|
||||
func NewTestContainer() (*Container, error) {
|
||||
key := test.DecodeKey(0)
|
||||
owner, err := refs.NewOwnerID(&key.PublicKey)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return New(100, owner, netmap.PlacementRule{
|
||||
ReplFactor: 2,
|
||||
SFGroups: []netmap.SFGroup{
|
||||
{
|
||||
Selectors: []netmap.Select{
|
||||
{Key: "Country", Count: 1},
|
||||
{Key: netmap.NodesBucket, Count: 2},
|
||||
},
|
||||
Filters: []netmap.Filter{
|
||||
{Key: "Country", F: netmap.FilterIn("USA")},
|
||||
},
|
||||
},
|
||||
},
|
||||
})
|
||||
}
|
464
container/types.pb.go
Normal file
@ -0,0 +1,464 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: container/types.proto
|
||||
|
||||
package container
|
||||
|
||||
import (
|
||||
fmt "fmt"
|
||||
_ "github.com/gogo/protobuf/gogoproto"
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
netmap "github.com/nspcc-dev/netmap"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
// The Container service definition.
|
||||
type Container struct {
|
||||
OwnerID OwnerID `protobuf:"bytes,1,opt,name=OwnerID,proto3,customtype=OwnerID" json:"OwnerID"`
|
||||
Salt UUID `protobuf:"bytes,2,opt,name=Salt,proto3,customtype=UUID" json:"Salt"`
|
||||
Capacity uint64 `protobuf:"varint,3,opt,name=Capacity,proto3" json:"Capacity,omitempty"`
|
||||
Rules netmap.PlacementRule `protobuf:"bytes,4,opt,name=Rules,proto3" json:"Rules"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *Container) Reset() { *m = Container{} }
|
||||
func (m *Container) String() string { return proto.CompactTextString(m) }
|
||||
func (*Container) ProtoMessage() {}
|
||||
func (*Container) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_1432e52ab0b53e3e, []int{0}
|
||||
}
|
||||
func (m *Container) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *Container) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *Container) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_Container.Merge(m, src)
|
||||
}
|
||||
func (m *Container) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *Container) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_Container.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_Container proto.InternalMessageInfo
|
||||
|
||||
func (m *Container) GetCapacity() uint64 {
|
||||
if m != nil {
|
||||
return m.Capacity
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Container) GetRules() netmap.PlacementRule {
|
||||
if m != nil {
|
||||
return m.Rules
|
||||
}
|
||||
return netmap.PlacementRule{}
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*Container)(nil), "container.Container")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("container/types.proto", fileDescriptor_1432e52ab0b53e3e) }
|
||||
|
||||
var fileDescriptor_1432e52ab0b53e3e = []byte{
|
||||
// 275 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4d, 0xce, 0xcf, 0x2b,
|
||||
0x49, 0xcc, 0xcc, 0x4b, 0x2d, 0xd2, 0x2f, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x2b, 0x28, 0xca, 0x2f,
|
||||
0xc9, 0x17, 0xe2, 0x84, 0x0b, 0x4b, 0x69, 0xa5, 0x67, 0x96, 0x64, 0x94, 0x26, 0xe9, 0x25, 0xe7,
|
||||
0xe7, 0xea, 0xe7, 0x15, 0x17, 0x24, 0x27, 0xeb, 0xa6, 0xa4, 0x96, 0xe9, 0xe7, 0xa5, 0x96, 0xe4,
|
||||
0x26, 0x16, 0xe8, 0x17, 0xa7, 0xe6, 0xa4, 0x26, 0x97, 0xe4, 0x17, 0x41, 0xb4, 0x49, 0xe9, 0x22,
|
||||
0xa9, 0x4d, 0xcf, 0x4f, 0xcf, 0xd7, 0x07, 0x0b, 0x27, 0x95, 0xa6, 0x81, 0x79, 0x60, 0x0e, 0x98,
|
||||
0x05, 0x51, 0xae, 0xb4, 0x9c, 0x91, 0x8b, 0xd3, 0x19, 0x66, 0x91, 0x90, 0x26, 0x17, 0xbb, 0x7f,
|
||||
0x79, 0x5e, 0x6a, 0x91, 0xa7, 0x8b, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x8f, 0x13, 0xff, 0x89, 0x7b,
|
||||
0xf2, 0x0c, 0xb7, 0xee, 0xc9, 0xc3, 0x84, 0x83, 0x60, 0x0c, 0x21, 0x05, 0x2e, 0x96, 0xe0, 0xc4,
|
||||
0x9c, 0x12, 0x09, 0x26, 0xb0, 0x3a, 0x1e, 0xa8, 0x3a, 0x96, 0xd0, 0x50, 0x4f, 0x97, 0x20, 0xb0,
|
||||
0x8c, 0x90, 0x14, 0x17, 0x87, 0x73, 0x62, 0x41, 0x62, 0x72, 0x66, 0x49, 0xa5, 0x04, 0xb3, 0x02,
|
||||
0xa3, 0x06, 0x4b, 0x10, 0x9c, 0x2f, 0x64, 0xc8, 0xc5, 0x1a, 0x54, 0x9a, 0x93, 0x5a, 0x2c, 0xc1,
|
||||
0xa2, 0xc0, 0xa8, 0xc1, 0x6d, 0x24, 0xaa, 0x07, 0xf1, 0x8c, 0x5e, 0x40, 0x4e, 0x62, 0x72, 0x6a,
|
||||
0x6e, 0x6a, 0x5e, 0x09, 0x48, 0xd6, 0x89, 0x05, 0x64, 0x6a, 0x10, 0x44, 0xa5, 0x93, 0xc3, 0x89,
|
||||
0x47, 0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0xde, 0x78, 0x24, 0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0xe3,
|
||||
0x8c, 0xc7, 0x72, 0x0c, 0x51, 0xb8, 0x82, 0x26, 0x3f, 0xad, 0x58, 0x17, 0xe2, 0x59, 0x78, 0x30,
|
||||
0x26, 0xb1, 0x81, 0x05, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0x29, 0x6b, 0x4d, 0x08, 0x71,
|
||||
0x01, 0x00, 0x00,
|
||||
}
|
||||
|
||||
func (m *Container) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *Container) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *Container) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
{
|
||||
size, err := m.Rules.MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x22
|
||||
if m.Capacity != 0 {
|
||||
i = encodeVarintTypes(dAtA, i, uint64(m.Capacity))
|
||||
i--
|
||||
dAtA[i] = 0x18
|
||||
}
|
||||
{
|
||||
size := m.Salt.Size()
|
||||
i -= size
|
||||
if _, err := m.Salt.MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
{
|
||||
size := m.OwnerID.Size()
|
||||
i -= size
|
||||
if _, err := m.OwnerID.MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovTypes(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *Container) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
l = m.OwnerID.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
l = m.Salt.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
if m.Capacity != 0 {
|
||||
n += 1 + sovTypes(uint64(m.Capacity))
|
||||
}
|
||||
l = m.Rules.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovTypes(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozTypes(x uint64) (n int) {
|
||||
return sovTypes(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *Container) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: Container: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: Container: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field OwnerID", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.OwnerID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Salt", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.Salt.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 3:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Capacity", wireType)
|
||||
}
|
||||
m.Capacity = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Capacity |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 4:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Rules", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.Rules.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipTypes(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipTypes(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupTypes
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthTypes = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowTypes = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupTypes = fmt.Errorf("proto: unexpected end of group")
|
||||
)
|
16
container/types.proto
Normal file
@ -0,0 +1,16 @@
syntax = "proto3";
|
||||
package container;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/container";
|
||||
|
||||
import "github.com/nspcc-dev/netmap/selector.proto";
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
// The Container service definition.
|
||||
message Container {
|
||||
bytes OwnerID = 1 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
bytes Salt = 2 [(gogoproto.customtype) = "UUID", (gogoproto.nullable) = false];
|
||||
uint64 Capacity = 3;
|
||||
netmap.PlacementRule Rules = 4 [(gogoproto.nullable) = false];
|
||||
}
|
container/types_test.go (new file, 57 lines)
@@ -0,0 +1,57 @@
package container
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/gogo/protobuf/proto"
|
||||
"github.com/nspcc-dev/neofs-crypto/test"
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
"github.com/nspcc-dev/netmap"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestCID(t *testing.T) {
|
||||
t.Run("check that marshal/unmarshal works like expected", func(t *testing.T) {
|
||||
var (
|
||||
c2 Container
|
||||
cid2 CID
|
||||
key = test.DecodeKey(0)
|
||||
)
|
||||
|
||||
rules := netmap.PlacementRule{
|
||||
ReplFactor: 2,
|
||||
SFGroups: []netmap.SFGroup{
|
||||
{
|
||||
Selectors: []netmap.Select{
|
||||
{Key: "Country", Count: 1},
|
||||
{Key: netmap.NodesBucket, Count: 2},
|
||||
},
|
||||
Filters: []netmap.Filter{
|
||||
{Key: "Country", F: netmap.FilterIn("USA")},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
owner, err := refs.NewOwnerID(&key.PublicKey)
|
||||
require.NoError(t, err)
|
||||
|
||||
c1, err := New(10, owner, rules)
|
||||
require.NoError(t, err)
|
||||
|
||||
data, err := proto.Marshal(c1)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NoError(t, c2.Unmarshal(data))
|
||||
require.Equal(t, c1, &c2)
|
||||
|
||||
cid1, err := c1.ID()
|
||||
require.NoError(t, err)
|
||||
|
||||
data, err = proto.Marshal(&cid1)
|
||||
require.NoError(t, err)
|
||||
require.NoError(t, cid2.Unmarshal(data))
|
||||
|
||||
require.Equal(t, cid1, cid2)
|
||||
})
|
||||
}
|
decimal/decimal.go (new file, 110 lines)
@@ -0,0 +1,110 @@
package decimal
|
||||
|
||||
import (
|
||||
"math"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// GASPrecision contains precision for NEO Gas token.
|
||||
const GASPrecision = 8
|
||||
|
||||
// Zero is empty Decimal value.
|
||||
var Zero = &Decimal{}
|
||||
|
||||
// New returns new Decimal (in satoshi).
|
||||
func New(v int64) *Decimal {
|
||||
return NewWithPrecision(v, GASPrecision)
|
||||
}
|
||||
|
||||
// NewGAS returns a new Decimal for v whole GAS (v multiplied by 1e8).
|
||||
func NewGAS(v int64) *Decimal {
|
||||
v *= int64(math.Pow10(GASPrecision))
|
||||
return NewWithPrecision(v, GASPrecision)
|
||||
}
|
||||
|
||||
// NewWithPrecision returns new Decimal with custom precision.
|
||||
func NewWithPrecision(v int64, p uint32) *Decimal {
|
||||
return &Decimal{Value: v, Precision: p}
|
||||
}
|
||||
|
||||
// ParseFloat returns a new Decimal parsed from float64 * 1e8 (in GAS).
|
||||
func ParseFloat(v float64) *Decimal {
|
||||
return new(Decimal).Parse(v, GASPrecision)
|
||||
}
|
||||
|
||||
// ParseFloatWithPrecision returns a new Decimal parsed from float64 * 10^p.
|
||||
func ParseFloatWithPrecision(v float64, p int) *Decimal {
|
||||
return new(Decimal).Parse(v, p)
|
||||
}
|
||||
|
||||
// Copy returns copy of current Decimal.
|
||||
func (m *Decimal) Copy() *Decimal { return &Decimal{Value: m.Value, Precision: m.Precision} }
|
||||
|
||||
// Parse returns a parsed Decimal from float64 * 10^p.
|
||||
func (m *Decimal) Parse(v float64, p int) *Decimal {
|
||||
m.Value = int64(v * math.Pow10(p))
|
||||
m.Precision = uint32(p)
|
||||
return m
|
||||
}
|
||||
|
||||
// String returns string representation of Decimal.
|
||||
func (m Decimal) String() string {
|
||||
buf := new(strings.Builder)
|
||||
val := m.Value
|
||||
dec := int64(math.Pow10(int(m.Precision)))
|
||||
if val < 0 {
|
||||
buf.WriteRune('-')
|
||||
val = -val
|
||||
}
|
||||
str := strconv.FormatInt(val/dec, 10)
|
||||
buf.WriteString(str)
|
||||
val %= dec
|
||||
if val > 0 {
|
||||
buf.WriteRune('.')
|
||||
str = strconv.FormatInt(val, 10)
|
||||
for i := len(str); i < int(m.Precision); i++ {
|
||||
buf.WriteRune('0')
|
||||
}
|
||||
buf.WriteString(strings.TrimRight(str, "0"))
|
||||
}
|
||||
return buf.String()
|
||||
}
|
||||
|
||||
// Add returns d + m.
|
||||
func (m Decimal) Add(d *Decimal) *Decimal {
|
||||
precision := m.Precision
|
||||
if precision < d.Precision {
|
||||
precision = d.Precision
|
||||
}
|
||||
return &Decimal{
|
||||
Value: m.Value + d.Value,
|
||||
Precision: precision,
|
||||
}
|
||||
}
|
||||
|
||||
// Zero checks that Decimal is empty.
|
||||
func (m Decimal) Zero() bool { return m.Value == 0 }
|
||||
|
||||
// Equal checks that current Decimal is equal to passed Decimal.
|
||||
func (m Decimal) Equal(v *Decimal) bool { return m.Value == v.Value && m.Precision == v.Precision }
|
||||
|
||||
// GT checks that m > v.
|
||||
func (m Decimal) GT(v *Decimal) bool { return m.Value > v.Value }
|
||||
|
||||
// GTE checks that m >= v.
|
||||
func (m Decimal) GTE(v *Decimal) bool { return m.Value >= v.Value }
|
||||
|
||||
// LT checks that m < v.
|
||||
func (m Decimal) LT(v *Decimal) bool { return m.Value < v.Value }
|
||||
|
||||
// LTE checks that m <= v.
|
||||
func (m Decimal) LTE(v *Decimal) bool { return m.Value <= v.Value }
|
||||
|
||||
// Neg returns negative representation of current Decimal (m * -1).
|
||||
func (m Decimal) Neg() *Decimal {
|
||||
return &Decimal{
|
||||
Value: m.Value * -1,
|
||||
Precision: m.Precision,
|
||||
}
|
||||
}
|
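decimal.go above stores amounts as fixed-point integers: Value holds the amount in the smallest unit and Precision the number of decimal places. A minimal usage sketch, assuming the package is imported by the go_package path declared in decimal.proto:

package main

import (
    "fmt"

    "github.com/nspcc-dev/neofs-proto/decimal"
)

func main() {
    // 5 GAS created from the smallest unit, 2.5 GAS parsed from a float.
    a := decimal.New(5e8)
    b := decimal.ParseFloat(2.5)

    sum := a.Add(b) // 7.5 GAS, precision 8

    fmt.Println(sum)                  // 7.5
    fmt.Println(sum.GT(decimal.Zero)) // true
}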
decimal/decimal.pb.go (new file, 345 lines)
@@ -0,0 +1,345 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: decimal/decimal.proto
|
||||
|
||||
package decimal
|
||||
|
||||
import (
|
||||
fmt "fmt"
|
||||
_ "github.com/gogo/protobuf/gogoproto"
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
type Decimal struct {
|
||||
Value int64 `protobuf:"varint,1,opt,name=Value,proto3" json:"Value,omitempty"`
|
||||
Precision uint32 `protobuf:"varint,2,opt,name=Precision,proto3" json:"Precision,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *Decimal) Reset() { *m = Decimal{} }
|
||||
func (*Decimal) ProtoMessage() {}
|
||||
func (*Decimal) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_e7e70e1773836c80, []int{0}
|
||||
}
|
||||
func (m *Decimal) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *Decimal) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *Decimal) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_Decimal.Merge(m, src)
|
||||
}
|
||||
func (m *Decimal) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *Decimal) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_Decimal.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_Decimal proto.InternalMessageInfo
|
||||
|
||||
func (m *Decimal) GetValue() int64 {
|
||||
if m != nil {
|
||||
return m.Value
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Decimal) GetPrecision() uint32 {
|
||||
if m != nil {
|
||||
return m.Precision
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*Decimal)(nil), "decimal.Decimal")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("decimal/decimal.proto", fileDescriptor_e7e70e1773836c80) }
|
||||
|
||||
var fileDescriptor_e7e70e1773836c80 = []byte{
|
||||
// 181 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x4d, 0x49, 0x4d, 0xce,
|
||||
0xcc, 0x4d, 0xcc, 0xd1, 0x87, 0xd2, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9, 0x42, 0xec, 0x50, 0xae,
|
||||
0x94, 0x6e, 0x7a, 0x66, 0x49, 0x46, 0x69, 0x92, 0x5e, 0x72, 0x7e, 0xae, 0x7e, 0x7a, 0x7e, 0x7a,
|
||||
0xbe, 0x3e, 0x58, 0x3e, 0xa9, 0x34, 0x0d, 0xcc, 0x03, 0x73, 0xc0, 0x2c, 0x88, 0x3e, 0x25, 0x67,
|
||||
0x2e, 0x76, 0x17, 0x88, 0x4e, 0x21, 0x11, 0x2e, 0xd6, 0xb0, 0xc4, 0x9c, 0xd2, 0x54, 0x09, 0x46,
|
||||
0x05, 0x46, 0x0d, 0xe6, 0x20, 0x08, 0x47, 0x48, 0x86, 0x8b, 0x33, 0xa0, 0x28, 0x35, 0x39, 0xb3,
|
||||
0x38, 0x33, 0x3f, 0x4f, 0x82, 0x49, 0x81, 0x51, 0x83, 0x37, 0x08, 0x21, 0x60, 0xc5, 0x32, 0x63,
|
||||
0x81, 0x3c, 0x83, 0x93, 0xdd, 0x89, 0x47, 0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0xde, 0x78, 0x24,
|
||||
0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0xe3, 0x8c, 0xc7, 0x72, 0x0c, 0x51, 0x1a, 0x48, 0x2e, 0xc9, 0x2b,
|
||||
0x2e, 0x48, 0x4e, 0xd6, 0x4d, 0x49, 0x2d, 0xd3, 0xcf, 0x4b, 0xcd, 0x4f, 0x2b, 0xd6, 0x85, 0xb8,
|
||||
0x03, 0xea, 0xe6, 0x24, 0x36, 0x30, 0xd7, 0x18, 0x10, 0x00, 0x00, 0xff, 0xff, 0xac, 0x68, 0x21,
|
||||
0x20, 0xdc, 0x00, 0x00, 0x00,
|
||||
}
|
||||
|
||||
func (m *Decimal) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *Decimal) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *Decimal) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if m.Precision != 0 {
|
||||
i = encodeVarintDecimal(dAtA, i, uint64(m.Precision))
|
||||
i--
|
||||
dAtA[i] = 0x10
|
||||
}
|
||||
if m.Value != 0 {
|
||||
i = encodeVarintDecimal(dAtA, i, uint64(m.Value))
|
||||
i--
|
||||
dAtA[i] = 0x8
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func encodeVarintDecimal(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovDecimal(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *Decimal) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Value != 0 {
|
||||
n += 1 + sovDecimal(uint64(m.Value))
|
||||
}
|
||||
if m.Precision != 0 {
|
||||
n += 1 + sovDecimal(uint64(m.Precision))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovDecimal(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozDecimal(x uint64) (n int) {
|
||||
return sovDecimal(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *Decimal) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowDecimal
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: Decimal: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: Decimal: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
|
||||
}
|
||||
m.Value = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowDecimal
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Value |= int64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 2:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Precision", wireType)
|
||||
}
|
||||
m.Precision = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowDecimal
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Precision |= uint32(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipDecimal(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthDecimal
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthDecimal
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipDecimal(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowDecimal
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowDecimal
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowDecimal
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthDecimal
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupDecimal
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthDecimal
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthDecimal = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowDecimal = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupDecimal = fmt.Errorf("proto: unexpected end of group")
|
||||
)
|
decimal/decimal.proto (new file, 14 lines)
@@ -0,0 +1,14 @@
syntax = "proto3";
|
||||
package decimal;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/decimal";
|
||||
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
message Decimal {
|
||||
option (gogoproto.goproto_stringer) = false;
|
||||
|
||||
int64 Value = 1;
|
||||
uint32 Precision = 2;
|
||||
}
|
decimal/decimal_test.go (new file, 445 lines)
@@ -0,0 +1,445 @@
package decimal
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestDecimal_Parse(t *testing.T) {
|
||||
tests := []struct {
|
||||
value float64
|
||||
name string
|
||||
expect *Decimal
|
||||
}{
|
||||
{name: "empty", expect: &Decimal{Precision: GASPrecision}},
|
||||
|
||||
{
|
||||
value: 100,
|
||||
name: "100 GAS",
|
||||
expect: &Decimal{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.Equal(t, tt.expect, ParseFloat(tt.value))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_ParseWithPrecision(t *testing.T) {
|
||||
type args struct {
|
||||
v float64
|
||||
p int
|
||||
}
|
||||
tests := []struct {
|
||||
args args
|
||||
name string
|
||||
expect *Decimal
|
||||
}{
|
||||
{name: "empty", expect: &Decimal{}},
|
||||
|
||||
{
|
||||
name: "empty precision",
|
||||
expect: &Decimal{Value: 0, Precision: 0},
|
||||
},
|
||||
|
||||
{
|
||||
name: "100 GAS",
|
||||
args: args{100, GASPrecision},
|
||||
expect: &Decimal{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.Equal(t, tt.expect,
|
||||
ParseFloatWithPrecision(tt.args.v, tt.args.p))
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestNew(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
val int64
|
||||
expect *Decimal
|
||||
}{
|
||||
{name: "empty", expect: &Decimal{Value: 0, Precision: GASPrecision}},
|
||||
{name: "100 GAS", val: 1e10, expect: &Decimal{Value: 1e10, Precision: GASPrecision}},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.Equalf(t, tt.expect, New(tt.val), tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestNewGAS(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
val int64
|
||||
expect *Decimal
|
||||
}{
|
||||
{name: "empty", expect: &Decimal{Value: 0, Precision: GASPrecision}},
|
||||
{name: "100 GAS", val: 100, expect: &Decimal{Value: 1e10, Precision: GASPrecision}},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.Equalf(t, tt.expect, NewGAS(tt.val), tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
func TestNewWithPrecision(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
val int64
|
||||
pre uint32
|
||||
expect *Decimal
|
||||
}{
|
||||
{name: "empty", expect: &Decimal{}},
|
||||
{name: "100 GAS", val: 1e10, pre: GASPrecision, expect: &Decimal{Value: 1e10, Precision: GASPrecision}},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.Equalf(t, tt.expect, NewWithPrecision(tt.val, tt.pre), tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_Neg(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
val int64
|
||||
expect *Decimal
|
||||
}{
|
||||
{name: "empty", expect: &Decimal{Value: 0, Precision: GASPrecision}},
|
||||
{name: "100 GAS", val: 1e10, expect: &Decimal{Value: -1e10, Precision: GASPrecision}},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
require.Equalf(t, tt.expect, New(tt.val).Neg(), tt.name)
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_String(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect string
|
||||
value *Decimal
|
||||
}{
|
||||
{name: "empty", expect: "0", value: &Decimal{}},
|
||||
{name: "100 GAS", expect: "100", value: &Decimal{Value: 1e10, Precision: GASPrecision}},
|
||||
{name: "-100 GAS", expect: "-100", value: &Decimal{Value: -1e10, Precision: GASPrecision}},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.Equalf(t, tt.expect, tt.value.String(), tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
const SomethingElsePrecision = 5
|
||||
|
||||
func TestDecimal_Add(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect *Decimal
|
||||
values [2]*Decimal
|
||||
}{
|
||||
{name: "empty", expect: &Decimal{}, values: [2]*Decimal{{}, {}}},
|
||||
{
|
||||
name: "5 GAS + 2 GAS",
|
||||
expect: &Decimal{Value: 7e8, Precision: GASPrecision},
|
||||
values: [2]*Decimal{
|
||||
{Value: 2e8, Precision: GASPrecision},
|
||||
{Value: 5e8, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "1e2 + 1e3",
|
||||
expect: &Decimal{Value: 1.1e3, Precision: 3},
|
||||
values: [2]*Decimal{
|
||||
{Value: 1e2, Precision: 2},
|
||||
{Value: 1e3, Precision: 3},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "5 GAS + 10 SomethingElse",
|
||||
expect: &Decimal{Value: 5.01e8, Precision: GASPrecision},
|
||||
values: [2]*Decimal{
|
||||
{Value: 5e8, Precision: GASPrecision},
|
||||
{Value: 1e6, Precision: SomethingElsePrecision},
|
||||
},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
{ // A + B
|
||||
one := tt.values[0]
|
||||
two := tt.values[1]
|
||||
require.Equalf(t, tt.expect, one.Add(two), tt.name)
|
||||
t.Log(one.Add(two))
|
||||
}
|
||||
|
||||
{ // B + A
|
||||
one := tt.values[0]
|
||||
two := tt.values[1]
|
||||
require.Equalf(t, tt.expect, two.Add(one), tt.name)
|
||||
t.Log(two.Add(one))
|
||||
}
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_Copy(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect *Decimal
|
||||
value *Decimal
|
||||
}{
|
||||
{name: "zero", expect: Zero},
|
||||
{
|
||||
name: "5 GAS",
|
||||
expect: &Decimal{Value: 5e8, Precision: GASPrecision},
|
||||
},
|
||||
{
|
||||
name: "100 GAS",
|
||||
expect: &Decimal{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
require.Equal(t, tt.expect, tt.expect.Copy())
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_Zero(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect bool
|
||||
value *Decimal
|
||||
}{
|
||||
{name: "zero", expect: true, value: Zero},
|
||||
{
|
||||
name: "5 GAS",
|
||||
expect: false,
|
||||
value: &Decimal{Value: 5e8, Precision: GASPrecision},
|
||||
},
|
||||
{
|
||||
name: "100 GAS",
|
||||
expect: false,
|
||||
value: &Decimal{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
require.Truef(t, tt.expect == tt.value.Zero(), tt.name)
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_Equal(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect bool
|
||||
values [2]*Decimal
|
||||
}{
|
||||
{name: "zero == zero", expect: true, values: [2]*Decimal{Zero, Zero}},
|
||||
{
|
||||
name: "5 GAS != 2 GAS",
|
||||
expect: false,
|
||||
values: [2]*Decimal{
|
||||
{Value: 5e8, Precision: GASPrecision},
|
||||
{Value: 2e8, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "100 GAS == 100 GAS",
|
||||
expect: true,
|
||||
values: [2]*Decimal{
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
require.Truef(t, tt.expect == (tt.values[0].Equal(tt.values[1])), tt.name)
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_GT(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect bool
|
||||
values [2]*Decimal
|
||||
}{
|
||||
{name: "two zeros", expect: false, values: [2]*Decimal{Zero, Zero}},
|
||||
{
|
||||
name: "5 GAS > 2 GAS",
|
||||
expect: true,
|
||||
values: [2]*Decimal{
|
||||
{Value: 5e8, Precision: GASPrecision},
|
||||
{Value: 2e8, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "100 GAS !> 100 GAS",
|
||||
expect: false,
|
||||
values: [2]*Decimal{
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
require.Truef(t, tt.expect == (tt.values[0].GT(tt.values[1])), tt.name)
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_GTE(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect bool
|
||||
values [2]*Decimal
|
||||
}{
|
||||
{name: "two zeros", expect: true, values: [2]*Decimal{Zero, Zero}},
|
||||
{
|
||||
name: "5 GAS >= 2 GAS",
|
||||
expect: true,
|
||||
values: [2]*Decimal{
|
||||
{Value: 5e8, Precision: GASPrecision},
|
||||
{Value: 2e8, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "1 GAS !>= 100 GAS",
|
||||
expect: false,
|
||||
values: [2]*Decimal{
|
||||
{Value: 1e8, Precision: GASPrecision},
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
require.Truef(t, tt.expect == (tt.values[0].GTE(tt.values[1])), tt.name)
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_LT(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect bool
|
||||
values [2]*Decimal
|
||||
}{
|
||||
{name: "two zeros", expect: false, values: [2]*Decimal{Zero, Zero}},
|
||||
{
|
||||
name: "5 GAS !< 2 GAS",
|
||||
expect: false,
|
||||
values: [2]*Decimal{
|
||||
{Value: 5e8, Precision: GASPrecision},
|
||||
{Value: 2e8, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "1 GAS < 100 GAS",
|
||||
expect: true,
|
||||
values: [2]*Decimal{
|
||||
{Value: 1e8, Precision: GASPrecision},
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "100 GAS !< 100 GAS",
|
||||
expect: false,
|
||||
values: [2]*Decimal{
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
require.Truef(t, tt.expect == (tt.values[0].LT(tt.values[1])), tt.name)
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func TestDecimal_LTE(t *testing.T) {
|
||||
tests := []struct {
|
||||
name string
|
||||
expect bool
|
||||
values [2]*Decimal
|
||||
}{
|
||||
{name: "two zeros", expect: true, values: [2]*Decimal{Zero, Zero}},
|
||||
{
|
||||
name: "5 GAS <= 2 GAS",
|
||||
expect: false,
|
||||
values: [2]*Decimal{
|
||||
{Value: 5e8, Precision: GASPrecision},
|
||||
{Value: 2e8, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "1 GAS <= 100 GAS",
|
||||
expect: true,
|
||||
values: [2]*Decimal{
|
||||
{Value: 1e8, Precision: GASPrecision},
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
{
|
||||
name: "100 GAS !<= 1 GAS",
|
||||
expect: false,
|
||||
values: [2]*Decimal{
|
||||
{Value: 1e10, Precision: GASPrecision},
|
||||
{Value: 1e8, Precision: GASPrecision},
|
||||
},
|
||||
},
|
||||
}
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
require.NotPanicsf(t, func() {
|
||||
require.Truef(t, tt.expect == (tt.values[0].LTE(tt.values[1])), tt.name)
|
||||
}, tt.name)
|
||||
})
|
||||
}
|
||||
}
|
go.mod (new file, 22 lines)
@@ -0,0 +1,22 @@
module github.com/nspcc-dev/neofs-proto

go 1.13

require (
    code.cloudfoundry.org/bytefmt v0.0.0-20190819182555-854d396b647c
    github.com/gogo/protobuf v1.3.1
    github.com/golang/protobuf v1.3.2
    github.com/google/uuid v1.1.1
    github.com/mr-tron/base58 v1.1.2
    github.com/nspcc-dev/neofs-crypto v0.2.1
    github.com/nspcc-dev/netmap v1.6.1
    github.com/nspcc-dev/tzhash v1.3.0
    github.com/onsi/ginkgo v1.10.2 // indirect
    github.com/onsi/gomega v1.7.0 // indirect
    github.com/pkg/errors v0.8.1
    github.com/prometheus/client_golang v1.2.1
    github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4
    github.com/stretchr/testify v1.4.0
    golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550
    google.golang.org/grpc v1.24.0
)
go.sum (new file, 165 lines)
@@ -0,0 +1,165 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
|
||||
code.cloudfoundry.org/bytefmt v0.0.0-20190819182555-854d396b647c h1:2RuXx1+tSNWRjxhY0Bx52kjV2odJQ0a6MTbfTPhGAkg=
|
||||
code.cloudfoundry.org/bytefmt v0.0.0-20190819182555-854d396b647c/go.mod h1:wN/zk7mhREp/oviagqUXY3EwuHhWyOvAdsn5Y4CzOrc=
|
||||
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
|
||||
github.com/abiosoft/ishell v2.0.0+incompatible/go.mod h1:HQR9AqF2R3P4XXpMpI0NAzgHf/aS6+zVXRj14cVk9qg=
|
||||
github.com/abiosoft/readline v0.0.0-20180607040430-155bce2042db/go.mod h1:rB3B4rKii8V21ydCbIzH5hZiCQE7f5E9SzUb/ZZx530=
|
||||
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
|
||||
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
|
||||
github.com/awalterschulze/gographviz v0.0.0-20181013152038-b2885df04310 h1:t+qxRrRtwNiUYA+Xh2jSXhoG2grnMCMKX4Fg6lx9X1U=
|
||||
github.com/awalterschulze/gographviz v0.0.0-20181013152038-b2885df04310/go.mod h1:GEV5wmg4YquNw7v1kkyoX9etIk8yVmXj+AkDHuuETHs=
|
||||
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
|
||||
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
|
||||
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
|
||||
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
|
||||
github.com/cespare/xxhash/v2 v2.1.0 h1:yTUvW7Vhb89inJ+8irsUqiWjh8iT6sQPZiQzI6ReGkA=
|
||||
github.com/cespare/xxhash/v2 v2.1.0/go.mod h1:dgIUBU3pDso/gPgZ1osOZ0iQf77oPR28Tjxl5dIMyVM=
|
||||
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
|
||||
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
|
||||
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
|
||||
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
|
||||
github.com/flynn-archive/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:rZfgFAXFS/z/lEd6LJmf9HVZ1LkgYiHx5pHhV5DR16M=
|
||||
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
|
||||
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
|
||||
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
|
||||
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
|
||||
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
|
||||
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
|
||||
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
|
||||
github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
|
||||
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
|
||||
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
|
||||
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
|
||||
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
|
||||
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
|
||||
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
|
||||
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
|
||||
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
|
||||
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
|
||||
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
|
||||
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
|
||||
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
|
||||
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
|
||||
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
|
||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
|
||||
github.com/mr-tron/base58 v1.1.2 h1:ZEw4I2EgPKDJ2iEw0cNmLB3ROrEmkOtXIkaG7wZg+78=
|
||||
github.com/mr-tron/base58 v1.1.2/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
|
||||
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
|
||||
github.com/nspcc-dev/hrw v1.0.8 h1:vwRuJXZXgkMvf473vFzeWGCfY1WBVeSHAEHvR4u3/Cg=
|
||||
github.com/nspcc-dev/hrw v1.0.8/go.mod h1:l/W2vx83vMQo6aStyx2AuZrJ+07lGv2JQGlVkPG06MU=
|
||||
github.com/nspcc-dev/neofs-crypto v0.2.1 h1:NxKexcW88vlHO/u7EYjx5Q1UaOQ7XhYrCsLSVgOcCxw=
|
||||
github.com/nspcc-dev/neofs-crypto v0.2.1/go.mod h1:F/96fUzPM3wR+UGsPi3faVNmFlA9KAEAUQR7dMxZmNA=
|
||||
github.com/nspcc-dev/netmap v1.6.1 h1:Pigqpqi6QSdRiusbq5XlO20A18k6Eyu7j9MzOfAE3CM=
|
||||
github.com/nspcc-dev/netmap v1.6.1/go.mod h1:mhV3UOg9ljQmu0teQShD6+JYX09XY5gu2I4hIByCH9M=
|
||||
github.com/nspcc-dev/rfc6979 v0.1.0 h1:Lwg7esRRoyK1Up/IN1vAef1EmvrBeMHeeEkek2fAJ6c=
|
||||
github.com/nspcc-dev/rfc6979 v0.1.0/go.mod h1:exhIh1PdpDC5vQmyEsGvc4YDM/lyQp/452QxGq/UEso=
|
||||
github.com/nspcc-dev/tzhash v1.3.0 h1:n6FTHsfPYbMi5Jmo6SwGVVRQD8i2w1P2ScCaW6rz69Q=
|
||||
github.com/nspcc-dev/tzhash v1.3.0/go.mod h1:Lc4DersKS8MNIrunTmsAzANO56qnG+LZ4GOE/WYGVzU=
|
||||
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
|
||||
github.com/onsi/ginkgo v1.10.2 h1:uqH7bpe+ERSiDa34FDOF7RikN6RzXgduUF8yarlZp94=
|
||||
github.com/onsi/ginkgo v1.10.2/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
|
||||
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
|
||||
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
|
||||
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
|
||||
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
|
||||
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
|
||||
github.com/prometheus/client_golang v1.2.1 h1:JnMpQc6ppsNgw9QPAGF6Dod479itz7lvlsMzzNayLOI=
|
||||
github.com/prometheus/client_golang v1.2.1/go.mod h1:XMU6Z2MjaRKVu/dC1qupJI9SiNkDYzz3xecMgSW/F+U=
|
||||
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
|
||||
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM=
|
||||
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
|
||||
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
|
||||
github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY=
|
||||
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
|
||||
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
|
||||
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
|
||||
github.com/prometheus/procfs v0.0.5 h1:3+auTFlqw+ZaQYJARz6ArODtkaIwtvBTx3N2NehQlL8=
|
||||
github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
|
||||
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
|
||||
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
|
||||
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
|
||||
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
|
||||
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
|
||||
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550 h1:ObdrDkeb4kJdCP557AjRjq69pTHfNouLtWZG7j9rPN8=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3 h1:0GoQqolDA55aaLxZyTzK/Y2ePZzZTUrRacwib7cNsYQ=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980 h1:dfGZHvZk057jK2MCeWus/TowKpJ8y4AmooUzdBSR9GU=
|
||||
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20181228144115-9a3f9b0469bb/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d h1:+R4KGOnez64A81RvjARKc4UT5/tI9ujCIVX+P5KiHuI=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20191010194322-b09406accb47 h1:/XfQ9z7ib8eEJX2hdgFTZJ/ntt0swNk5oYBziWeTCvY=
|
||||
golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/grpc v1.24.0 h1:vb/1TCsVn3DcJlQ0Gs1yB1pKI6Do2/QNwxdKqmc/b0s=
|
||||
google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
|
||||
gopkg.in/abiosoft/ishell.v2 v2.0.0/go.mod h1:sFp+cGtH6o4s1FtpVPTMcHq2yue+c4DGOVohJCPUzwY=
|
||||
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
|
||||
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
|
||||
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
|
||||
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
|
||||
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
|
||||
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
|
hash/hash.go (new file, 98 lines)
@@ -0,0 +1,98 @@
package hash
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
|
||||
"github.com/mr-tron/base58"
|
||||
"github.com/nspcc-dev/neofs-proto/internal"
|
||||
"github.com/nspcc-dev/tzhash/tz"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// HomomorphicHashSize is the size of the homomorphic hash in bytes.
|
||||
const HomomorphicHashSize = 64
|
||||
|
||||
// Hash is implementation of HomomorphicHash.
|
||||
type Hash [HomomorphicHashSize]byte
|
||||
|
||||
// ErrWrongDataSize is returned when a byte slice of the wrong length is passed to Unmarshal.
|
||||
const ErrWrongDataSize = internal.Error("wrong data size")
|
||||
|
||||
var (
|
||||
_ internal.Custom = (*Hash)(nil)
|
||||
|
||||
emptyHH [HomomorphicHashSize]byte
|
||||
)
|
||||
|
||||
// Size returns size of Hash (HomomorphicHashSize).
|
||||
func (h Hash) Size() int { return HomomorphicHashSize }
|
||||
|
||||
// Empty checks that Hash is empty.
|
||||
func (h Hash) Empty() bool { return bytes.Equal(h.Bytes(), emptyHH[:]) }
|
||||
|
||||
// Reset sets current Hash to empty value.
|
||||
func (h *Hash) Reset() { *h = Hash{} }
|
||||
|
||||
// ProtoMessage method to satisfy proto.Message interface.
|
||||
func (h Hash) ProtoMessage() {}
|
||||
|
||||
// Bytes represents Hash as bytes.
|
||||
func (h Hash) Bytes() []byte {
|
||||
buf := make([]byte, HomomorphicHashSize)
|
||||
copy(buf, h[:])
|
||||
return buf
|
||||
}
|
||||
|
||||
// Marshal returns bytes representation of Hash.
|
||||
func (h Hash) Marshal() ([]byte, error) { return h.Bytes(), nil }
|
||||
|
||||
// MarshalTo tries to marshal Hash into passed bytes and returns count of copied bytes.
|
||||
func (h *Hash) MarshalTo(data []byte) (int, error) { return copy(data, h.Bytes()), nil }
|
||||
|
||||
// Unmarshal tries to parse bytes into valid Hash.
|
||||
func (h *Hash) Unmarshal(data []byte) error {
|
||||
if ln := len(data); ln != HomomorphicHashSize {
|
||||
return errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", HomomorphicHashSize, ln)
|
||||
}
|
||||
|
||||
copy((*h)[:], data)
|
||||
return nil
|
||||
}
|
||||
|
||||
// String returns string representation of Hash.
|
||||
func (h Hash) String() string { return base58.Encode(h[:]) }
|
||||
|
||||
// Equal checks that current Hash is equal to passed Hash.
|
||||
func (h Hash) Equal(hash Hash) bool { return h == hash }
|
||||
|
||||
// Verify checks whether the current hash was generated from the passed data.
|
||||
func (h Hash) Verify(data []byte) bool { return h.Equal(Sum(data)) }
|
||||
|
||||
// Validate checks if combined hashes are equal to current Hash.
|
||||
func (h Hash) Validate(hashes []Hash) bool {
|
||||
var hashBytes = make([][]byte, 0, len(hashes))
|
||||
for i := range hashes {
|
||||
hashBytes = append(hashBytes, hashes[i].Bytes())
|
||||
}
|
||||
ok, err := tz.Validate(h.Bytes(), hashBytes)
|
||||
return err == nil && ok
|
||||
}
|
||||
|
||||
// Sum returns Tillich-Zémor checksum of data.
|
||||
func Sum(data []byte) Hash { return tz.Sum(data) }
|
||||
|
||||
// Concat combines hashes based on homomorphic property.
|
||||
func Concat(hashes []Hash) (Hash, error) {
|
||||
var (
|
||||
hash Hash
|
||||
h = make([][]byte, 0, len(hashes))
|
||||
)
|
||||
for i := range hashes {
|
||||
h = append(h, hashes[i].Bytes())
|
||||
}
|
||||
cat, err := tz.Concat(h)
|
||||
if err != nil {
|
||||
return hash, err
|
||||
}
|
||||
return hash, hash.Unmarshal(cat)
|
||||
}
|
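Concat and Validate in hash.go rely on the homomorphic property of the Tillich-Zémor hash: combining the hashes of consecutive chunks reproduces the hash of the whole payload. A short sketch under that assumption, using only the functions defined above (import path inferred from the module path in go.mod):

package main

import (
    "fmt"

    "github.com/nspcc-dev/neofs-proto/hash"
)

func main() {
    left, right := []byte("Hello "), []byte("world")

    whole := hash.Sum(append(left, right...))
    parts := []hash.Hash{hash.Sum(left), hash.Sum(right)}

    // Combining the chunk hashes must reproduce the hash of the whole payload.
    combined, err := hash.Concat(parts)
    if err != nil {
        panic(err)
    }

    fmt.Println(combined.Equal(whole)) // true
    fmt.Println(whole.Validate(parts)) // true
}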
hash/hash_test.go (new file, 166 lines)
@@ -0,0 +1,166 @@
package hash
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"testing"
|
||||
|
||||
"github.com/pkg/errors"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func Test_Sum(t *testing.T) {
|
||||
var (
|
||||
data = []byte("Hello world")
|
||||
sum = Sum(data)
|
||||
hash = []byte{0, 0, 0, 0, 1, 79, 16, 173, 134, 90, 176, 77, 114, 165, 253, 114, 0, 0, 0, 0, 0, 148,
|
||||
172, 222, 98, 248, 15, 99, 205, 129, 66, 91, 0, 0, 0, 0, 0, 138, 173, 39, 228, 231, 239, 123,
|
||||
170, 96, 186, 61, 0, 0, 0, 0, 0, 90, 69, 237, 131, 90, 161, 73, 38, 164, 185, 55}
|
||||
)
|
||||
|
||||
require.Equal(t, hash, sum.Bytes())
|
||||
}
|
||||
|
||||
func Test_Validate(t *testing.T) {
|
||||
var (
|
||||
data = []byte("Hello world")
|
||||
hash = Sum(data)
|
||||
pieces = splitData(data, 2)
|
||||
ln = len(pieces)
|
||||
hashes = make([]Hash, 0, ln)
|
||||
)
|
||||
|
||||
for i := 0; i < ln; i++ {
|
||||
hashes = append(hashes, Sum(pieces[i]))
|
||||
}
|
||||
|
||||
require.True(t, hash.Validate(hashes))
|
||||
}
|
||||
|
||||
func Test_Concat(t *testing.T) {
|
||||
var (
|
||||
data = []byte("Hello world")
|
||||
hash = Sum(data)
|
||||
pieces = splitData(data, 2)
|
||||
ln = len(pieces)
|
||||
hashes = make([]Hash, 0, ln)
|
||||
)
|
||||
|
||||
for i := 0; i < ln; i++ {
|
||||
hashes = append(hashes, Sum(pieces[i]))
|
||||
}
|
||||
|
||||
res, err := Concat(hashes)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, hash, res)
|
||||
}
|
||||
|
||||
func Test_HashChunks(t *testing.T) {
|
||||
var (
|
||||
chars = []byte("+")
|
||||
size = 1400
|
||||
data = bytes.Repeat(chars, size)
|
||||
hash = Sum(data)
|
||||
count = 150
|
||||
)
|
||||
|
||||
hashes, err := dataHashes(data, count)
|
||||
require.NoError(t, err)
|
||||
require.Len(t, hashes, count)
|
||||
|
||||
require.True(t, hash.Validate(hashes))
|
||||
|
||||
// 100 / 150 = 0
|
||||
hashes, err = dataHashes(data[:100], count)
|
||||
require.Error(t, err)
|
||||
require.Nil(t, hashes)
|
||||
}
|
||||
|
||||
func TestXOR(t *testing.T) {
|
||||
var (
|
||||
dl = 10
|
||||
data = make([]byte, dl)
|
||||
)
|
||||
|
||||
_, err := rand.Read(data)
|
||||
require.NoError(t, err)
|
||||
|
||||
t.Run("XOR with <nil> salt", func(t *testing.T) {
|
||||
res := SaltXOR(data, nil)
|
||||
require.Equal(t, res, data)
|
||||
})
|
||||
|
||||
t.Run("XOR with empty salt", func(t *testing.T) {
|
||||
xorWithSalt(t, data, 0)
|
||||
})
|
||||
|
||||
t.Run("XOR with salt same data size", func(t *testing.T) {
|
||||
xorWithSalt(t, data, dl)
|
||||
})
|
||||
|
||||
t.Run("XOR with salt shorter than data aliquot", func(t *testing.T) {
|
||||
xorWithSalt(t, data, dl/2)
|
||||
})
|
||||
|
||||
t.Run("XOR with salt shorter than data aliquant", func(t *testing.T) {
|
||||
xorWithSalt(t, data, dl/3+1)
|
||||
})
|
||||
|
||||
t.Run("XOR with salt longer than data aliquot", func(t *testing.T) {
|
||||
xorWithSalt(t, data, dl*2)
|
||||
})
|
||||
|
||||
t.Run("XOR with salt longer than data aliquant", func(t *testing.T) {
|
||||
xorWithSalt(t, data, dl*2-1)
|
||||
})
|
||||
}
|
||||
|
||||
func xorWithSalt(t *testing.T, data []byte, saltSize int) {
|
||||
var (
|
||||
direct, reverse []byte
|
||||
salt = make([]byte, saltSize)
|
||||
)
|
||||
|
||||
_, err := rand.Read(salt)
|
||||
require.NoError(t, err)
|
||||
|
||||
direct = SaltXOR(data, salt)
|
||||
require.Len(t, direct, len(data))
|
||||
|
||||
reverse = SaltXOR(direct, salt)
|
||||
require.Len(t, reverse, len(data))
|
||||
|
||||
require.Equal(t, reverse, data)
|
||||
}
|
||||
|
||||
func splitData(buf []byte, lim int) [][]byte {
|
||||
var piece []byte
|
||||
pieces := make([][]byte, 0, len(buf)/lim+1)
|
||||
for len(buf) >= lim {
|
||||
piece, buf = buf[:lim], buf[lim:]
|
||||
pieces = append(pieces, piece)
|
||||
}
|
||||
if len(buf) > 0 {
|
||||
pieces = append(pieces, buf)
|
||||
}
|
||||
return pieces
|
||||
}
|
||||
|
||||
func dataHashes(data []byte, count int) ([]Hash, error) {
|
||||
var (
|
||||
ln = len(data)
|
||||
mis = ln / count
|
||||
off = (count - 1) * mis
|
||||
hashes = make([]Hash, 0, count)
|
||||
)
|
||||
if mis == 0 {
|
||||
return nil, errors.Errorf("could not split %d bytes to %d pieces", ln, count)
|
||||
}
|
||||
|
||||
pieces := splitData(data[:off], mis)
|
||||
pieces = append(pieces, data[off:])
|
||||
for i := 0; i < count; i++ {
|
||||
hashes = append(hashes, Sum(pieces[i]))
|
||||
}
|
||||
return hashes, nil
|
||||
}
|
hash/hashesslice.go (new file, 20 lines)
@@ -0,0 +1,20 @@
package hash

import (
    "bytes"
)

// HashesSlice is a collection that satisfies sort.Interface and can be
// sorted by the routines in sort package.
type HashesSlice []Hash

// -- HashesSlice -- a helper type for sorting hashes.
// Len is the number of elements in the collection.
func (hs HashesSlice) Len() int { return len(hs) }

// Less reports whether the element with
// index i should be sorted before the element with index j.
func (hs HashesSlice) Less(i, j int) bool { return bytes.Compare(hs[i].Bytes(), hs[j].Bytes()) == -1 }

// Swap swaps the elements with indexes i and j.
func (hs HashesSlice) Swap(i, j int) { hs[i], hs[j] = hs[j], hs[i] }
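Since HashesSlice satisfies sort.Interface, it can be handed directly to the standard library sort routines. A brief sketch (import path inferred from the module path in go.mod):

package main

import (
    "fmt"
    "sort"

    "github.com/nspcc-dev/neofs-proto/hash"
)

func main() {
    hs := hash.HashesSlice{
        hash.Sum([]byte("b")),
        hash.Sum([]byte("a")),
        hash.Sum([]byte("c")),
    }

    // Orders the hashes by their byte representation, as defined by Less.
    sort.Sort(hs)

    for _, h := range hs {
        fmt.Println(h) // base58-encoded hashes in ascending byte order
    }
}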
hash/salt.go (new file, 17 lines)
@@ -0,0 +1,17 @@
package hash

// SaltXOR xors bits of data with salt
// repeating salt if necessary.
func SaltXOR(data, salt []byte) (result []byte) {
    result = make([]byte, len(data))
    ls := len(salt)
    if ls == 0 {
        copy(result, data)
        return
    }

    for i := range result {
        result[i] = data[i] ^ salt[i%ls]
    }
    return
}
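Because XOR is an involution, applying SaltXOR a second time with the same salt restores the original data, which is what the round-trip test above checks. A short sketch:

package main

import (
    "bytes"
    "fmt"

    "github.com/nspcc-dev/neofs-proto/hash"
)

func main() {
    data := []byte("payload bytes")
    salt := []byte{0x0f, 0xf0} // a salt shorter than the data is repeated

    masked := hash.SaltXOR(data, salt)
    restored := hash.SaltXOR(masked, salt)

    fmt.Println(bytes.Equal(data, restored)) // true
}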
internal/error.go (new file, 7 lines)
@@ -0,0 +1,7 @@
package internal

// Error is a custom error.
type Error string

// Error is an implementation of the error interface.
func (e Error) Error() string { return string(e) }
internal/proto.go (new file, 16 lines)
@@ -0,0 +1,16 @@
package internal

import "github.com/gogo/protobuf/proto"

// Custom contains methods to satisfy proto.Message
// including custom methods to satisfy protobuf for
// non-proto defined types.
type Custom interface {
    Size() int
    Empty() bool
    Bytes() []byte
    Marshal() ([]byte, error)
    MarshalTo(data []byte) (int, error)
    Unmarshal(data []byte) error
    proto.Message
}
object/doc.go (new file, 143 lines)
@@ -0,0 +1,143 @@
/*
|
||||
Package object manages the main storage structure in the system. All storage
operations are performed with objects. During its lifetime an object might be
transformed into another object by cutting its payload or adding meta
information. All transformations are reversible, so the source object
can be restored.
|
||||
|
||||
Object structure
|
||||
|
||||
Object consists of Payload and Header. Payload is unlimited but storage nodes
|
||||
may have a policy to store objects with a limited payload. In this case object
|
||||
with large payload will be transformed into the chain of objects with small
|
||||
payload.
|
||||
|
||||
Headers are simple key-value fields that divided into two groups: system
|
||||
headers and extended headers. System headers contain information about
|
||||
protocol version, object id, payload length in bytes, owner id, container id
|
||||
and object creation timestamp (both in epochs and unix time). All these fields
|
||||
must be set up in the correct object.
|
||||
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
|
||||
| System Headers |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
|
||||
| Version : 1 |
|
||||
| Payload Length : 21673465 |
|
||||
| Object ID : 465208e2-ba4f-4f99-ad47-82a59f4192d4 |
|
||||
| Owner ID : AShvoCbSZ7VfRiPkVb1tEcBLiJrcbts1tt |
|
||||
| Container ID : FGobtRZA6sBZv2i9k4L7TiTtnuP6E788qa278xfj3Fxj |
|
||||
| Created At : Epoch#10, 1573033162 |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
|
||||
| Extended Headers |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
|
||||
| User Header : <user-defined-key>, <user-defined-value> |
|
||||
| Verification Header : <session public key>, <owner's signature> |
|
||||
| Homomorphic Hash : 0x23d35a56ae... |
|
||||
| Payload Checksum : 0x1bd34abs75... |
|
||||
| Integrity Header : <header checksum>, <session signature> |
|
||||
| Transformation : Payload Split |
|
||||
| Link-parent : cae08935-b4ba-499a-bf6c-98276c1e6c0b |
|
||||
| Link-next : c3b40fbf-3798-4b61-a189-2992b5fb5070 |
|
||||
| Payload Checksum : 0x1f387a5c36... |
|
||||
| Integrity Header : <header checksum>, <session signature> |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
|
||||
| Payload |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
|
||||
| 0xd1581963a342d231... |
|
||||
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
|
||||
|
||||
There are different kinds of extended headers. A correct object must contain
|
||||
verification header, homomorphic hash header, payload checksum and
|
||||
integrity header. The order of headers is matter. Let's look through all
|
||||
these headers.
|
||||
|
||||
Link header points to the connected objects. During object transformation, large
|
||||
object might be transformed into the chain of smaller objects. One of these
|
||||
objects drops payload and has several "Child" links. We call this object as
|
||||
zero-object. Others will have "Parent" link to the zero-object, "Previous"
|
||||
and "Next" links in the payload chain.
|
||||
|
||||
[ Object ID:1 ] = > transformed
|
||||
`- [ Zero-Object ID:1 ]
|
||||
`- Link-child ID:2
|
||||
`- Link-child ID:3
|
||||
`- Link-child ID:4
|
||||
`- Payload [null]
|
||||
`- [ Object ID:2 ]
|
||||
`- Link-parent ID:1
|
||||
`- Link-next ID:3
|
||||
`- Payload [ 0x13ba... ]
|
||||
`- [ Object ID:3 ]
|
||||
`- Link-parent ID:1
|
||||
`- Link-previous ID:2
|
||||
`- Link-next ID:4
|
||||
`- Payload [ 0xcd34... ]
|
||||
`- [ Object ID:4 ]
|
||||
`- Link-parent ID:1
|
||||
`- Link-previous ID:3
|
||||
`- Payload [ 0xef86... ]
|
||||
|
||||
Storage groups are also objects. They have "Storage Group" links to all
|
||||
objects in the group. Links are set by nodes during transformations and,
|
||||
in general, they should not be set by user manually.
|
||||
|
||||
Redirect headers are not used yet, they will be implemented and described
|
||||
later.
|
||||
|
||||
User header is a key-value pair of string that can be defined by user. User
|
||||
can use these headers as search attribute. You can store any meta information
|
||||
about object there, e.g. object's nicename.
|
||||
|
||||
Transformation header notifies that object was transformed by some pre-defined
|
||||
way. This header sets up before object is transformed and all headers after
|
||||
transformation must be located after transformation header. During reverse
|
||||
transformation, all headers under transformation header will be cut out.
|
||||
|
||||
+-+-+-+-+-+-+-+-+-+- +-+-+-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+-+-+
|
||||
| Payload checksum | | Payload checksum | | Payload checksum |
|
||||
| Integrity header | => | Integrity header | + | Integrity header |
|
||||
+-+-+-+-+-+-+-+-+-+- | Transformation | | Transformation |
|
||||
| Large payload | | New Checksum | | New Checksum |
|
||||
+-+-+-+-+-+-+-+-+-+- | New Integrity | | New Integrity |
|
||||
+-+-+-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+-+-+
|
||||
| Small payload | | Small payload |
|
||||
+-+-+-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+-+-+
|
||||
|
||||
For now, we use only one type of transformation: payload split transformation.
|
||||
This header set up by node automatically.
|
||||
|
||||
Tombstone header notifies that this object was deleted by user. Objects with
|
||||
tombstone header do not have payload, but they still contain meta information
|
||||
in the headers. This way we implement two-phase commit for object removal.
|
||||
Storage nodes will eventually delete all tombstone objects. If you want to
|
||||
delete object, you must create new object with the same object id, with
|
||||
tombstone header, correct signatures and without payload.
|
||||
|
||||
Verification header contains session information. To put the object in
|
||||
the system user must create session. It is required because objects might
|
||||
be transformed and therefore must be re-signed. To do that node creates
|
||||
a pair of session public and private keys. Object owner delegates permission to
|
||||
re-sign objects by signing session public key. This header contains session
|
||||
public key and owner's signature of this key. You must specify this header
|
||||
manually.
|
||||
|
||||
Homomorphic hash header contains homomorphic hash of the source object.
|
||||
Transformations do not affect this header. This header used by data audit and
|
||||
set by node automatically.
|
||||
|
||||
Payload checksum contains checksum of the actual object payload. All payload
|
||||
transformation must set new payload checksum headers. This header set by node
|
||||
automatically.
|
||||
|
||||
Integrity header contains checksum of the header and signature of the
|
||||
session key. This header must be last in the list of extended headers.
|
||||
Checksum is calculated by marshaling all above headers, including system
|
||||
headers. This header set by node automatically.
|
||||
|
||||
Storage group header is presented in storage group objects. It contains
|
||||
information for data audit: size of validated data, homomorphic has of this
|
||||
data, storage group expiration time in epochs or unix time.
|
||||
|
||||
|
||||
*/
|
||||
package object
|
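As a rough illustration of the structure described above, the sketch below builds an object with a system header, one user header and a payload, using helpers defined later in this package. The id values and the standalone main package are placeholders, not part of the original code:

	package main

	import (
		"fmt"

		"github.com/nspcc-dev/neofs-proto/object"
	)

	func main() {
		obj := new(object.Object)

		// Mandatory system header fields; the ids here are placeholder values.
		obj.SystemHeader = object.SystemHeader{
			Version: 1,
			ID:      object.ID{1},
			CID:     object.CID{2},
			OwnerID: object.OwnerID{3},
		}

		// A user-defined key-value header that can later serve as a search attribute.
		obj.SetHeader(&object.Header{Value: &object.Header_UserHeader{
			UserHeader: &object.UserHeader{Key: "Filename", Value: "cat.jpg"},
		}})

		// SetPayload also keeps PayloadLength in the system header consistent.
		obj.SetPayload([]byte("hello"))

		fmt.Println(obj.SystemHeader.PayloadLength) // 5
	}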
84
object/extensions.go
Normal file
84
object/extensions.go
Normal file
|
@ -0,0 +1,84 @@
|
|||
package object
|
||||
|
||||
import (
|
||||
"github.com/nspcc-dev/neofs-proto/hash"
|
||||
)
|
||||
|
||||
// IsLinking checks if the object has child links to other objects.
|
||||
// We have to check payload size because zero-object must have zero
|
||||
// payload and non-zero payload length field in system header.
|
||||
func (m Object) IsLinking() bool {
|
||||
for i := range m.Headers {
|
||||
switch v := m.Headers[i].Value.(type) {
|
||||
case *Header_Link:
|
||||
if v.Link.GetType() == Link_Child {
|
||||
return m.SystemHeader.PayloadLength > 0 && len(m.Payload) == 0
|
||||
}
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// VerificationHeader returns the verification header if it is present in the extended headers.
|
||||
func (m Object) VerificationHeader() (*VerificationHeader, error) {
|
||||
_, vh := m.LastHeader(HeaderType(VerifyHdr))
|
||||
if vh == nil {
|
||||
return nil, ErrHeaderNotFound
|
||||
}
|
||||
return vh.Value.(*Header_Verify).Verify, nil
|
||||
}
|
||||
|
||||
// SetVerificationHeader sets verification header in the object.
|
||||
// It will replace existing verification header or add a new one.
|
||||
func (m *Object) SetVerificationHeader(header *VerificationHeader) {
|
||||
m.SetHeader(&Header{Value: &Header_Verify{Verify: header}})
|
||||
}
|
||||
|
||||
// Links returns a slice of ids of the specified link type.
|
||||
func (m *Object) Links(t Link_Type) []ID {
|
||||
var res []ID
|
||||
for i := range m.Headers {
|
||||
switch v := m.Headers[i].Value.(type) {
|
||||
case *Header_Link:
|
||||
if v.Link.GetType() == t {
|
||||
res = append(res, v.Link.ID)
|
||||
}
|
||||
}
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
// Tombstone returns the tombstone header if it is present in the extended headers.
|
||||
func (m Object) Tombstone() *Tombstone {
|
||||
_, h := m.LastHeader(HeaderType(TombstoneHdr))
|
||||
if h != nil {
|
||||
return h.Value.(*Header_Tombstone).Tombstone
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsTombstone checks if object has tombstone header.
|
||||
func (m Object) IsTombstone() bool {
|
||||
n, _ := m.LastHeader(HeaderType(TombstoneHdr))
|
||||
return n != -1
|
||||
}
|
||||
|
||||
// StorageGroup returns the storage group structure if it is present in the extended headers.
|
||||
func (m Object) StorageGroup() (*StorageGroup, error) {
|
||||
_, sgHdr := m.LastHeader(HeaderType(StorageGroupHdr))
|
||||
if sgHdr == nil {
|
||||
return nil, ErrHeaderNotFound
|
||||
}
|
||||
return sgHdr.Value.(*Header_StorageGroup).StorageGroup, nil
|
||||
}
|
||||
|
||||
// SetStorageGroup sets storage group header in the object.
|
||||
// It will replace existing storage group header or add a new one.
|
||||
func (m *Object) SetStorageGroup(sg *StorageGroup) {
|
||||
m.SetHeader(&Header{Value: &Header_StorageGroup{StorageGroup: sg}})
|
||||
}
|
||||
|
||||
// Empty checks if the storage group contains no data for validation.
|
||||
func (m StorageGroup) Empty() bool {
|
||||
return m.ValidationDataSize == 0 && m.ValidationHash.Equal(hash.Hash{})
|
||||
}
|
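A small sketch of how IsLinking and Links behave for a zero-object; the ids and payload length are placeholders:

	package main

	import (
		"fmt"

		"github.com/nspcc-dev/neofs-proto/object"
	)

	func main() {
		// A zero-object: non-zero payload length in the system header,
		// empty actual payload, and child links to the chunk objects.
		zero := &object.Object{
			SystemHeader: object.SystemHeader{PayloadLength: 10},
			Headers: []object.Header{
				{Value: &object.Header_Link{Link: &object.Link{Type: object.Link_Child, ID: object.ID{1}}}},
				{Value: &object.Header_Link{Link: &object.Link{Type: object.Link_Child, ID: object.ID{2}}}},
			},
		}

		fmt.Println(zero.IsLinking())                   // true
		fmt.Println(len(zero.Links(object.Link_Child))) // 2
	}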
215
object/service.go
Normal file
215
object/service.go
Normal file
|
@ -0,0 +1,215 @@
|
|||
package object
|
||||
|
||||
import (
|
||||
"github.com/nspcc-dev/neofs-proto/hash"
|
||||
"github.com/nspcc-dev/neofs-proto/internal"
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
"github.com/nspcc-dev/neofs-proto/service"
|
||||
"github.com/nspcc-dev/neofs-proto/session"
|
||||
)
|
||||
|
||||
type (
|
||||
// ID is a type alias of object id.
|
||||
ID = refs.ObjectID
|
||||
|
||||
// CID is a type alias of container id.
|
||||
CID = refs.CID
|
||||
|
||||
// SGID is a type alias of storage group id.
|
||||
SGID = refs.SGID
|
||||
|
||||
// OwnerID is a type alias of owner id.
|
||||
OwnerID = refs.OwnerID
|
||||
|
||||
// Hash is a type alias of Homomorphic hash.
|
||||
Hash = hash.Hash
|
||||
|
||||
// Token is a type alias of session token.
|
||||
Token = session.Token
|
||||
|
||||
// Request defines object rpc requests.
|
||||
// All object operations must have TTL, Epoch, Container ID and
|
||||
// permission to use the previous network map.
|
||||
Request interface {
|
||||
service.TTLRequest
|
||||
service.EpochRequest
|
||||
|
||||
CID() CID
|
||||
AllowPreviousNetMap() bool
|
||||
}
|
||||
)
|
||||
|
||||
const (
|
||||
// UnitsB starts enum for amount of bytes.
|
||||
UnitsB int64 = 1 << (10 * iota)
|
||||
|
||||
// UnitsKB defines amount of bytes in one kilobyte.
|
||||
UnitsKB
|
||||
|
||||
// UnitsMB defines amount of bytes in one megabyte.
|
||||
UnitsMB
|
||||
|
||||
// UnitsGB defines amount of bytes in one gigabyte.
|
||||
UnitsGB
|
||||
|
||||
// UnitsTB defines amount of bytes in one terabyte.
|
||||
UnitsTB
|
||||
)
|
||||
|
||||
const (
|
||||
// ErrNotFound is raised when object is not found in the system.
|
||||
ErrNotFound = internal.Error("could not find object")
|
||||
|
||||
// ErrHeaderExpected is raised when first message in protobuf stream does not contain user header.
|
||||
ErrHeaderExpected = internal.Error("expected header as a first message in stream")
|
||||
|
||||
// KeyStorageGroup is a key for a search object by storage group id.
|
||||
KeyStorageGroup = "STORAGE_GROUP"
|
||||
|
||||
// KeyNoChildren is a key for searching objects that have no child links.
|
||||
KeyNoChildren = "LEAF"
|
||||
|
||||
// KeyParent is a key for searching object by id of parent object.
|
||||
KeyParent = "PARENT"
|
||||
|
||||
// KeyHasParent is a key for searching objects that have a parent link.
|
||||
KeyHasParent = "HAS_PAR"
|
||||
|
||||
// KeyTombstone is a key for searching objects that have a tombstone header.
|
||||
KeyTombstone = "TOMBSTONE"
|
||||
|
||||
// KeyChild is a key for searching object by id of child link.
|
||||
KeyChild = "CHILD"
|
||||
|
||||
// KeyPrev is a key for searching object by id of previous link.
|
||||
KeyPrev = "PREV"
|
||||
|
||||
// KeyNext is a key for searching object by id of next link.
|
||||
KeyNext = "NEXT"
|
||||
|
||||
// KeyID is a key for searching object by object id.
|
||||
KeyID = "ID"
|
||||
|
||||
// KeyCID is a key for searching object by container id.
|
||||
KeyCID = "CID"
|
||||
|
||||
// KeyOwnerID is a key for searching object by owner id.
|
||||
KeyOwnerID = "OWNERID"
|
||||
|
||||
// KeyRootObject is a key for searching objects that are zero-objects or do
// not have any children.
|
||||
KeyRootObject = "ROOT_OBJECT"
|
||||
)
|
||||
|
||||
func checkIsNotFull(v interface{}) bool {
|
||||
var obj *Object
|
||||
|
||||
switch t := v.(type) {
|
||||
case *GetResponse:
|
||||
obj = t.GetObject()
|
||||
case *PutRequest:
|
||||
if h := t.GetHeader(); h != nil {
|
||||
obj = h.Object
|
||||
}
|
||||
default:
|
||||
panic("unknown type")
|
||||
}
|
||||
|
||||
return obj == nil || obj.SystemHeader.PayloadLength != uint64(len(obj.Payload)) && !obj.IsLinking()
|
||||
}
|
||||
|
||||
// NotFull checks if protobuf stream provided whole object for get operation.
|
||||
func (m *GetResponse) NotFull() bool { return checkIsNotFull(m) }
|
||||
|
||||
// NotFull checks if protobuf stream provided whole object for put operation.
|
||||
func (m *PutRequest) NotFull() bool { return checkIsNotFull(m) }
|
||||
|
||||
// GetTTL returns TTL value from object put request.
|
||||
func (m *PutRequest) GetTTL() uint32 { return m.GetHeader().TTL }
|
||||
|
||||
// GetEpoch returns epoch value from object put request.
|
||||
func (m *PutRequest) GetEpoch() uint64 { return m.GetHeader().GetEpoch() }
|
||||
|
||||
// SetTTL sets TTL value into object put request.
|
||||
func (m *PutRequest) SetTTL(ttl uint32) { m.GetHeader().TTL = ttl }
|
||||
|
||||
// SetTTL sets TTL value into object get request.
|
||||
func (m *GetRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||
|
||||
// SetTTL sets TTL value into object head request.
|
||||
func (m *HeadRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||
|
||||
// SetTTL sets TTL value into object search request.
|
||||
func (m *SearchRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||
|
||||
// SetTTL sets TTL value into object delete request.
|
||||
func (m *DeleteRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||
|
||||
// SetTTL sets TTL value into object get range request.
|
||||
func (m *GetRangeRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||
|
||||
// SetTTL sets TTL value into object get range hash request.
|
||||
func (m *GetRangeHashRequest) SetTTL(ttl uint32) { m.TTL = ttl }
|
||||
|
||||
// SetEpoch sets epoch value into object put request.
|
||||
func (m *PutRequest) SetEpoch(v uint64) { m.GetHeader().Epoch = v }
|
||||
|
||||
// SetEpoch sets epoch value into object get request.
|
||||
func (m *GetRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||
|
||||
// SetEpoch sets epoch value into object head request.
|
||||
func (m *HeadRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||
|
||||
// SetEpoch sets epoch value into object search request.
|
||||
func (m *SearchRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||
|
||||
// SetEpoch sets epoch value into object delete request.
|
||||
func (m *DeleteRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||
|
||||
// SetEpoch sets epoch value into object get range request.
|
||||
func (m *GetRangeRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||
|
||||
// SetEpoch sets epoch value into object get range hash request.
|
||||
func (m *GetRangeHashRequest) SetEpoch(v uint64) { m.Epoch = v }
|
||||
|
||||
// CID returns container id value from object put request.
|
||||
func (m *PutRequest) CID() CID { return m.GetHeader().Object.SystemHeader.CID }
|
||||
|
||||
// CID returns container id value from object get request.
|
||||
func (m *GetRequest) CID() CID { return m.Address.CID }
|
||||
|
||||
// CID returns container id value from object head request.
|
||||
func (m *HeadRequest) CID() CID { return m.Address.CID }
|
||||
|
||||
// CID returns container id value from object search request.
|
||||
func (m *SearchRequest) CID() CID { return m.ContainerID }
|
||||
|
||||
// CID returns container id value from object delete request.
|
||||
func (m *DeleteRequest) CID() CID { return m.Address.CID }
|
||||
|
||||
// CID returns container id value from object get range request.
|
||||
func (m *GetRangeRequest) CID() CID { return m.Address.CID }
|
||||
|
||||
// CID returns container id value from object get range hash request.
|
||||
func (m *GetRangeHashRequest) CID() CID { return m.Address.CID }
|
||||
|
||||
// AllowPreviousNetMap returns permission to use previous network map in object put request.
|
||||
func (m *PutRequest) AllowPreviousNetMap() bool { return false }
|
||||
|
||||
// AllowPreviousNetMap returns permission to use previous network map in object get request.
|
||||
func (m *GetRequest) AllowPreviousNetMap() bool { return true }
|
||||
|
||||
// AllowPreviousNetMap returns permission to use previous network map in object head request.
|
||||
func (m *HeadRequest) AllowPreviousNetMap() bool { return true }
|
||||
|
||||
// AllowPreviousNetMap returns permission to use previous network map in object search request.
|
||||
func (m *SearchRequest) AllowPreviousNetMap() bool { return true }
|
||||
|
||||
// AllowPreviousNetMap returns permission to use previous network map in object delete request.
|
||||
func (m *DeleteRequest) AllowPreviousNetMap() bool { return false }
|
||||
|
||||
// AllowPreviousNetMap returns permission to use previous network map in object get range request.
|
||||
func (m *GetRangeRequest) AllowPreviousNetMap() bool { return false }
|
||||
|
||||
// AllowPreviousNetMap returns permission to use previous network map in object get range hash request.
|
||||
func (m *GetRangeHashRequest) AllowPreviousNetMap() bool { return false }
|
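Because every object request implements the same SetTTL/SetEpoch setters, the common fields can be applied generically. A sketch under that assumption; the prepare helper and the concrete values are hypothetical:

	package main

	import (
		"fmt"

		"github.com/nspcc-dev/neofs-proto/object"
	)

	// prepare is a hypothetical helper that applies the fields shared by all
	// object requests through their common SetEpoch/SetTTL setters.
	func prepare(epoch uint64, ttl uint32, reqs ...interface {
		SetEpoch(uint64)
		SetTTL(uint32)
	}) {
		for _, r := range reqs {
			r.SetEpoch(epoch)
			r.SetTTL(ttl)
		}
	}

	func main() {
		get := &object.GetRequest{}
		head := &object.HeadRequest{FullHeaders: true}

		prepare(100, 3, get, head)

		fmt.Println(get.GetEpoch(), get.GetTTL())   // 100 3
		fmt.Println(head.GetEpoch(), head.GetTTL()) // 100 3
	}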
4491 object/service.pb.go Normal file
File diff suppressed because it is too large
119
object/service.proto
Normal file
119
object/service.proto
Normal file
|
@ -0,0 +1,119 @@
|
|||
syntax = "proto3";
|
||||
package object;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/object";
|
||||
|
||||
import "refs/types.proto";
|
||||
import "object/types.proto";
|
||||
import "session/types.proto";
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
service Service {
|
||||
// Get the object from a container
|
||||
rpc Get(GetRequest) returns (stream GetResponse);
|
||||
|
||||
// Put the object into a container
|
||||
rpc Put(stream PutRequest) returns (PutResponse);
|
||||
|
||||
// Delete the object from a container
|
||||
rpc Delete(DeleteRequest) returns (DeleteResponse);
|
||||
|
||||
// Get MetaInfo
|
||||
rpc Head(HeadRequest) returns (HeadResponse);
|
||||
|
||||
// Search by MetaInfo
|
||||
rpc Search(SearchRequest) returns (SearchResponse);
|
||||
|
||||
// Get ranges of object payload
|
||||
rpc GetRange(GetRangeRequest) returns (GetRangeResponse);
|
||||
|
||||
// Get hashes of object ranges
|
||||
rpc GetRangeHash(GetRangeHashRequest) returns (GetRangeHashResponse);
|
||||
}
|
||||
|
||||
message GetRequest {
|
||||
uint64 Epoch = 1;
|
||||
refs.Address Address = 2 [(gogoproto.nullable) = false];
|
||||
uint32 TTL = 3;
|
||||
}
|
||||
|
||||
message GetResponse {
|
||||
oneof R {
|
||||
Object object = 1;
|
||||
bytes Chunk = 2;
|
||||
}
|
||||
}
|
||||
|
||||
message PutRequest {
|
||||
message PutHeader {
|
||||
uint64 Epoch = 1;
|
||||
Object Object = 2;
|
||||
uint32 TTL = 3;
|
||||
session.Token Token = 4;
|
||||
}
|
||||
|
||||
oneof R {
|
||||
PutHeader Header = 1;
|
||||
bytes Chunk = 2;
|
||||
}
|
||||
}
|
||||
|
||||
message PutResponse {
|
||||
refs.Address Address = 1 [(gogoproto.nullable) = false];
|
||||
}
|
||||
message DeleteRequest {
|
||||
uint64 Epoch = 1;
|
||||
refs.Address Address = 2 [(gogoproto.nullable) = false];
|
||||
bytes OwnerID = 3 [(gogoproto.nullable) = false, (gogoproto.customtype) = "OwnerID"];
|
||||
uint32 TTL = 4;
|
||||
session.Token Token = 5;
|
||||
}
|
||||
message DeleteResponse {}
|
||||
|
||||
// Set HeadRequest.FullHeaders to true to fetch all headers
|
||||
message HeadRequest {
|
||||
uint64 Epoch = 1;
|
||||
refs.Address Address = 2 [(gogoproto.nullable) = false, (gogoproto.customtype) = "Address"];
|
||||
bool FullHeaders = 3;
|
||||
uint32 TTL = 4;
|
||||
}
|
||||
message HeadResponse {
|
||||
Object Object = 1;
|
||||
}
|
||||
|
||||
message SearchRequest {
|
||||
uint64 Epoch = 1;
|
||||
uint32 Version = 2;
|
||||
bytes ContainerID = 3 [(gogoproto.nullable) = false, (gogoproto.customtype) = "CID"];
|
||||
bytes Query = 4;
|
||||
uint32 TTL = 5;
|
||||
}
|
||||
|
||||
message SearchResponse {
|
||||
repeated refs.Address Addresses = 1 [(gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
message GetRangeRequest {
|
||||
uint64 Epoch = 1;
|
||||
refs.Address Address = 2 [(gogoproto.nullable) = false];
|
||||
repeated Range Ranges = 3 [(gogoproto.nullable) = false];
|
||||
uint32 TTL = 4;
|
||||
}
|
||||
|
||||
message GetRangeResponse {
|
||||
repeated bytes Fragments = 1;
|
||||
}
|
||||
|
||||
message GetRangeHashRequest {
|
||||
uint64 Epoch = 1;
|
||||
refs.Address Address = 2 [(gogoproto.nullable) = false];
|
||||
repeated Range Ranges = 3 [(gogoproto.nullable) = false];
|
||||
bytes Salt = 4;
|
||||
uint32 TTL = 5;
|
||||
}
|
||||
|
||||
message GetRangeHashResponse {
|
||||
repeated bytes Hashes = 1 [(gogoproto.customtype) = "Hash", (gogoproto.nullable) = false];
|
||||
}
|
||||
|
66
object/sg.go
Normal file
66
object/sg.go
Normal file
|
@ -0,0 +1,66 @@
|
|||
package object
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"sort"
|
||||
)
|
||||
|
||||
// Here are defined getter functions for objects that contain storage group
|
||||
// information.
|
||||
|
||||
type (
|
||||
// IDList is a slice of object ids, that can be sorted.
|
||||
IDList []ID
|
||||
|
||||
// ZoneInfo provides validation info of storage group.
|
||||
ZoneInfo struct {
|
||||
Hash
|
||||
Size uint64
|
||||
}
|
||||
|
||||
// IdentificationInfo provides meta information about storage group.
|
||||
IdentificationInfo struct {
|
||||
SGID
|
||||
CID
|
||||
OwnerID
|
||||
}
|
||||
)
|
||||
|
||||
// Len returns amount of object ids in IDList.
|
||||
func (s IDList) Len() int { return len(s) }
|
||||
|
||||
// Less returns the byte comparison between IDList[i] and IDList[j].
|
||||
func (s IDList) Less(i, j int) bool { return bytes.Compare(s[i].Bytes(), s[j].Bytes()) == -1 }
|
||||
|
||||
// Swap swaps element with i and j index in IDList.
|
||||
func (s IDList) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
|
||||
|
||||
// Group returns slice of object ids that are part of a storage group.
|
||||
func (m *Object) Group() []ID {
|
||||
sgLinks := m.Links(Link_StorageGroup)
|
||||
sort.Sort(IDList(sgLinks))
|
||||
return sgLinks
|
||||
}
|
||||
|
||||
// Zones returns validation zones of storage group.
|
||||
func (m *Object) Zones() []ZoneInfo {
|
||||
sgInfo, err := m.StorageGroup()
|
||||
if err != nil {
|
||||
return nil
|
||||
}
|
||||
return []ZoneInfo{
|
||||
{
|
||||
Hash: sgInfo.ValidationHash,
|
||||
Size: sgInfo.ValidationDataSize,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// IDInfo returns meta information about storage group.
|
||||
func (m *Object) IDInfo() *IdentificationInfo {
|
||||
return &IdentificationInfo{
|
||||
SGID: m.SystemHeader.ID,
|
||||
CID: m.SystemHeader.CID,
|
||||
OwnerID: m.SystemHeader.OwnerID,
|
||||
}
|
||||
}
|
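A sketch of reading storage-group information back through Group and Zones, mirroring the test that follows; the member ids and data are placeholders:

	package main

	import (
		"fmt"

		"github.com/nspcc-dev/neofs-proto/hash"
		"github.com/nspcc-dev/neofs-proto/object"
	)

	func main() {
		data := []byte("audited data")

		// A storage-group object: the header carries the validation info,
		// the links enumerate the members of the group.
		sg := &object.Object{
			Headers: []object.Header{
				{Value: &object.Header_StorageGroup{StorageGroup: &object.StorageGroup{
					ValidationDataSize: uint64(len(data)),
					ValidationHash:     hash.Sum(data),
				}}},
				{Value: &object.Header_Link{Link: &object.Link{Type: object.Link_StorageGroup, ID: object.ID{2}}}},
				{Value: &object.Header_Link{Link: &object.Link{Type: object.Link_StorageGroup, ID: object.ID{1}}}},
			},
		}

		fmt.Println(len(sg.Group()))    // 2 member ids, returned in sorted order
		fmt.Println(sg.Zones()[0].Size) // 12
	}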
87
object/sg_test.go
Normal file
87
object/sg_test.go
Normal file
|
@ -0,0 +1,87 @@
|
|||
package object
|
||||
|
||||
import (
|
||||
"math/rand"
|
||||
"sort"
|
||||
"testing"
|
||||
|
||||
"github.com/nspcc-dev/neofs-proto/hash"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestObject_StorageGroup(t *testing.T) {
|
||||
t.Run("group method", func(t *testing.T) {
|
||||
var linkCount byte = 100
|
||||
|
||||
obj := &Object{Headers: make([]Header, 0, linkCount)}
|
||||
require.Empty(t, obj.Group())
|
||||
|
||||
idList := make([]ID, linkCount)
|
||||
for i := byte(0); i < linkCount; i++ {
|
||||
idList[i] = ID{i}
|
||||
obj.Headers = append(obj.Headers, Header{
|
||||
Value: &Header_Link{Link: &Link{
|
||||
Type: Link_StorageGroup,
|
||||
ID: idList[i],
|
||||
}},
|
||||
})
|
||||
}
|
||||
|
||||
rand.Shuffle(len(obj.Headers), func(i, j int) { obj.Headers[i], obj.Headers[j] = obj.Headers[j], obj.Headers[i] })
|
||||
sort.Sort(IDList(idList))
|
||||
require.Equal(t, idList, obj.Group())
|
||||
})
|
||||
t.Run("identification method", func(t *testing.T) {
|
||||
oid, cid, owner := ID{1}, CID{2}, OwnerID{3}
|
||||
obj := &Object{
|
||||
SystemHeader: SystemHeader{
|
||||
ID: oid,
|
||||
OwnerID: owner,
|
||||
CID: cid,
|
||||
},
|
||||
}
|
||||
|
||||
idInfo := obj.IDInfo()
|
||||
require.Equal(t, oid, idInfo.SGID)
|
||||
require.Equal(t, cid, idInfo.CID)
|
||||
require.Equal(t, owner, idInfo.OwnerID)
|
||||
})
|
||||
t.Run("zones method", func(t *testing.T) {
|
||||
sgSize := uint64(100)
|
||||
|
||||
d := make([]byte, sgSize)
|
||||
_, err := rand.Read(d)
|
||||
require.NoError(t, err)
|
||||
sgHash := hash.Sum(d)
|
||||
|
||||
obj := &Object{
|
||||
Headers: []Header{
|
||||
{
|
||||
Value: &Header_StorageGroup{
|
||||
StorageGroup: &StorageGroup{
|
||||
ValidationDataSize: sgSize,
|
||||
ValidationHash: sgHash,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
var (
|
||||
sumSize uint64
|
||||
zones = obj.Zones()
|
||||
hashes = make([]Hash, len(zones))
|
||||
)
|
||||
|
||||
for i := range zones {
|
||||
sumSize += zones[i].Size
|
||||
hashes[i] = zones[i].Hash
|
||||
}
|
||||
|
||||
sumHash, err := hash.Concat(hashes)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, sgSize, sumSize)
|
||||
require.Equal(t, sgHash, sumHash)
|
||||
})
|
||||
}
|
219
object/types.go
Normal file
219
object/types.go
Normal file
|
@ -0,0 +1,219 @@
|
|||
package object
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
|
||||
"github.com/gogo/protobuf/proto"
|
||||
"github.com/nspcc-dev/neofs-proto/internal"
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
"github.com/nspcc-dev/neofs-proto/session"
|
||||
)
|
||||
|
||||
type (
|
||||
// Pred defines a predicate function that can check if passed header
|
||||
// satisfies predicate condition. It is used to find headers of
|
||||
// specific type.
|
||||
Pred = func(*Header) bool
|
||||
|
||||
// Address is a type alias of object Address.
|
||||
Address = refs.Address
|
||||
|
||||
// VerificationHeader is a type alias of session's verification header.
|
||||
VerificationHeader = session.VerificationHeader
|
||||
|
||||
// PositionReader defines object reader that returns slice of bytes
|
||||
// for specified object and data range.
|
||||
PositionReader interface {
|
||||
PRead(ctx context.Context, addr refs.Address, rng Range) ([]byte, error)
|
||||
}
|
||||
|
||||
headerType int
|
||||
)
|
||||
|
||||
const (
|
||||
// ErrVerifyPayload is raised when payload checksum cannot be verified.
|
||||
ErrVerifyPayload = internal.Error("can't verify payload")
|
||||
|
||||
// ErrVerifyHeader is raised when object integrity cannot be verified.
|
||||
ErrVerifyHeader = internal.Error("can't verify header")
|
||||
|
||||
// ErrHeaderNotFound is raised when requested header not found.
|
||||
ErrHeaderNotFound = internal.Error("header not found")
|
||||
|
||||
// ErrVerifySignature is raised when signature cannot be verified.
|
||||
ErrVerifySignature = internal.Error("can't verify signature")
|
||||
)
|
||||
|
||||
const (
|
||||
_ headerType = iota
|
||||
// LinkHdr is a link header type.
|
||||
LinkHdr
|
||||
// RedirectHdr is a redirect header type.
|
||||
RedirectHdr
|
||||
// UserHdr is a user defined header type.
|
||||
UserHdr
|
||||
// TransformHdr is a transformation header type.
|
||||
TransformHdr
|
||||
// TombstoneHdr is a tombstone header type.
|
||||
TombstoneHdr
|
||||
// VerifyHdr is a verification header type.
|
||||
VerifyHdr
|
||||
// HomoHashHdr is a homomorphic hash header type.
|
||||
HomoHashHdr
|
||||
// PayloadChecksumHdr is a payload checksum header type.
|
||||
PayloadChecksumHdr
|
||||
// IntegrityHdr is an integrity header type.
|
||||
IntegrityHdr
|
||||
// StorageGroupHdr is a storage group header type.
|
||||
StorageGroupHdr
|
||||
)
|
||||
|
||||
var (
|
||||
_ internal.Custom = (*Object)(nil)
|
||||
|
||||
emptyObject = new(Object).Bytes()
|
||||
)
|
||||
|
||||
// Bytes returns marshaled object in a binary format.
|
||||
func (m Object) Bytes() []byte { data, _ := m.Marshal(); return data }
|
||||
|
||||
// Empty checks if object does not contain any information.
|
||||
func (m Object) Empty() bool { return bytes.Equal(m.Bytes(), emptyObject) }
|
||||
|
||||
// LastHeader returns last header of the specified type. Type must be
|
||||
// specified as a Pred function.
|
||||
func (m Object) LastHeader(f Pred) (int, *Header) {
|
||||
for i := len(m.Headers) - 1; i >= 0; i-- {
|
||||
if f != nil && f(&m.Headers[i]) {
|
||||
return i, &m.Headers[i]
|
||||
}
|
||||
}
|
||||
return -1, nil
|
||||
}
|
||||
|
||||
// AddHeader adds passed header to the end of extended header list.
|
||||
func (m *Object) AddHeader(h *Header) {
|
||||
m.Headers = append(m.Headers, *h)
|
||||
}
|
||||
|
||||
// SetPayload sets payload field and payload length in the system header.
|
||||
func (m *Object) SetPayload(payload []byte) {
|
||||
m.Payload = payload
|
||||
m.SystemHeader.PayloadLength = uint64(len(payload))
|
||||
}
|
||||
|
||||
// SetHeader replaces existing extended header or adds new one to the end of
|
||||
// extended header list.
|
||||
func (m *Object) SetHeader(h *Header) {
|
||||
// looking for the header of that type
|
||||
for i := range m.Headers {
|
||||
if m.Headers[i].typeOf(h.Value) {
|
||||
// if we found one - set it with new value and return
|
||||
m.Headers[i] = *h
|
||||
return
|
||||
}
|
||||
}
|
||||
// if we did not find one - add this header
|
||||
m.AddHeader(h)
|
||||
}
|
||||
|
||||
func (m Header) typeOf(t isHeader_Value) (ok bool) {
|
||||
switch t.(type) {
|
||||
case *Header_Link:
|
||||
_, ok = m.Value.(*Header_Link)
|
||||
case *Header_Redirect:
|
||||
_, ok = m.Value.(*Header_Redirect)
|
||||
case *Header_UserHeader:
|
||||
_, ok = m.Value.(*Header_UserHeader)
|
||||
case *Header_Transform:
|
||||
_, ok = m.Value.(*Header_Transform)
|
||||
case *Header_Tombstone:
|
||||
_, ok = m.Value.(*Header_Tombstone)
|
||||
case *Header_Verify:
|
||||
_, ok = m.Value.(*Header_Verify)
|
||||
case *Header_HomoHash:
|
||||
_, ok = m.Value.(*Header_HomoHash)
|
||||
case *Header_PayloadChecksum:
|
||||
_, ok = m.Value.(*Header_PayloadChecksum)
|
||||
case *Header_Integrity:
|
||||
_, ok = m.Value.(*Header_Integrity)
|
||||
case *Header_StorageGroup:
|
||||
_, ok = m.Value.(*Header_StorageGroup)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// HeaderType returns a predicate that checks whether an extended header is
// a header of the specified type.
|
||||
func HeaderType(t headerType) Pred {
|
||||
switch t {
|
||||
case LinkHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_Link); return ok }
|
||||
case RedirectHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_Redirect); return ok }
|
||||
case UserHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_UserHeader); return ok }
|
||||
case TransformHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_Transform); return ok }
|
||||
case TombstoneHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_Tombstone); return ok }
|
||||
case VerifyHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_Verify); return ok }
|
||||
case HomoHashHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_HomoHash); return ok }
|
||||
case PayloadChecksumHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_PayloadChecksum); return ok }
|
||||
case IntegrityHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_Integrity); return ok }
|
||||
case StorageGroupHdr:
|
||||
return func(h *Header) bool { _, ok := h.Value.(*Header_StorageGroup); return ok }
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// Copy creates full copy of the object.
|
||||
func (m *Object) Copy() (obj *Object) {
|
||||
obj = new(Object)
|
||||
m.CopyTo(obj)
|
||||
return
|
||||
}
|
||||
|
||||
// CopyTo fills the passed object with the data from the current object.
|
||||
// This function creates copies on every available data slice.
|
||||
func (m *Object) CopyTo(o *Object) {
|
||||
o.SystemHeader = m.SystemHeader
|
||||
o.Headers = make([]Header, len(m.Headers))
|
||||
o.Payload = make([]byte, len(m.Payload))
|
||||
|
||||
for i := range m.Headers {
|
||||
switch v := m.Headers[i].Value.(type) {
|
||||
case *Header_Link:
|
||||
link := *v.Link
|
||||
o.Headers[i] = Header{
|
||||
Value: &Header_Link{
|
||||
Link: &link,
|
||||
},
|
||||
}
|
||||
case *Header_HomoHash:
|
||||
o.Headers[i] = Header{
|
||||
Value: &Header_HomoHash{
|
||||
HomoHash: v.HomoHash,
|
||||
},
|
||||
}
|
||||
default:
|
||||
o.Headers[i] = *proto.Clone(&m.Headers[i]).(*Header)
|
||||
}
|
||||
}
|
||||
|
||||
copy(o.Payload, m.Payload)
|
||||
}
|
||||
|
||||
// Address returns object's address.
|
||||
func (m Object) Address() *refs.Address {
|
||||
return &refs.Address{
|
||||
ObjectID: m.SystemHeader.ID,
|
||||
CID: m.SystemHeader.CID,
|
||||
}
|
||||
}
|
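A sketch of SetHeader's replace-or-append behaviour together with HeaderType and LastHeader; the epoch values are placeholders:

	package main

	import (
		"fmt"

		"github.com/nspcc-dev/neofs-proto/object"
	)

	func main() {
		obj := new(object.Object)

		// SetHeader replaces an existing header of the same type, so after two
		// calls the object still carries a single tombstone header.
		obj.SetHeader(&object.Header{Value: &object.Header_Tombstone{Tombstone: &object.Tombstone{Epoch: 9}}})
		obj.SetHeader(&object.Header{Value: &object.Header_Tombstone{Tombstone: &object.Tombstone{Epoch: 10}}})

		// LastHeader walks the extended headers backwards with a predicate;
		// HeaderType builds the predicate for a given header kind.
		i, h := obj.LastHeader(object.HeaderType(object.TombstoneHdr))
		fmt.Println(i, h.Value.(*object.Header_Tombstone).Tombstone.Epoch) // 0 10
		fmt.Println(obj.IsTombstone())                                     // true
	}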
3814 object/types.pb.go Normal file
File diff suppressed because it is too large
107
object/types.proto
Normal file
107
object/types.proto
Normal file
|
@ -0,0 +1,107 @@
|
|||
syntax = "proto3";
|
||||
package object;
|
||||
option go_package = "github.com/nspcc-dev/neofs-proto/object";
|
||||
|
||||
import "refs/types.proto";
|
||||
import "session/types.proto";
|
||||
import "github.com/gogo/protobuf/gogoproto/gogo.proto";
|
||||
|
||||
option (gogoproto.stable_marshaler_all) = true;
|
||||
|
||||
message Range {
|
||||
uint64 Offset = 1;
|
||||
uint64 Length = 2;
|
||||
}
|
||||
|
||||
message UserHeader {
|
||||
string Key = 1;
|
||||
string Value = 2;
|
||||
}
|
||||
|
||||
message Header {
|
||||
oneof Value {
|
||||
Link Link = 1;
|
||||
refs.Address Redirect = 2;
|
||||
UserHeader UserHeader = 3;
|
||||
Transform Transform = 4;
|
||||
Tombstone Tombstone = 5;
|
||||
// session-related info: session.VerificationHeader
|
||||
session.VerificationHeader Verify = 6;
|
||||
// integrity-related info
|
||||
bytes HomoHash = 7 [(gogoproto.customtype) = "Hash"];
|
||||
bytes PayloadChecksum = 8;
|
||||
IntegrityHeader Integrity = 9;
|
||||
StorageGroup StorageGroup = 10;
|
||||
}
|
||||
}
|
||||
|
||||
message Tombstone {
|
||||
uint64 Epoch = 1;
|
||||
}
|
||||
|
||||
message SystemHeader {
|
||||
uint64 Version = 1;
|
||||
uint64 PayloadLength = 2;
|
||||
|
||||
bytes ID = 3 [(gogoproto.customtype) = "ID", (gogoproto.nullable) = false];
|
||||
bytes OwnerID = 4 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
|
||||
bytes CID = 5 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];
|
||||
CreationPoint CreatedAt = 6 [(gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
message CreationPoint {
|
||||
int64 UnixTime = 1;
|
||||
uint64 Epoch = 2;
|
||||
}
|
||||
|
||||
message IntegrityHeader {
|
||||
bytes HeadersChecksum = 1;
|
||||
bytes ChecksumSignature = 2;
|
||||
}
|
||||
|
||||
message Link {
|
||||
enum Type {
|
||||
Unknown = 0;
|
||||
Parent = 1;
|
||||
Previous = 2;
|
||||
Next = 3;
|
||||
Child = 4;
|
||||
StorageGroup = 5;
|
||||
}
|
||||
Type type = 1;
|
||||
bytes ID = 2 [(gogoproto.customtype) = "ID", (gogoproto.nullable) = false];
|
||||
}
|
||||
|
||||
message Transform {
|
||||
enum Type {
|
||||
Unknown = 0;
|
||||
Split = 1;
|
||||
Sign = 2;
|
||||
Mould = 3;
|
||||
}
|
||||
Type type = 1;
|
||||
}
|
||||
|
||||
message Object {
|
||||
SystemHeader SystemHeader = 1 [(gogoproto.nullable) = false];
|
||||
repeated Header Headers = 2 [(gogoproto.nullable) = false];
|
||||
bytes Payload = 3;
|
||||
}
|
||||
|
||||
message StorageGroup {
|
||||
uint64 ValidationDataSize = 1;
|
||||
bytes ValidationHash = 2 [(gogoproto.customtype) = "Hash", (gogoproto.nullable) = false];
|
||||
|
||||
message Lifetime {
|
||||
enum Unit {
|
||||
Unlimited = 0;
|
||||
NeoFSEpoch = 1;
|
||||
UnixTime = 2;
|
||||
}
|
||||
|
||||
Unit unit = 1 [(gogoproto.customname) = "Unit"];
|
||||
int64 Value = 2;
|
||||
}
|
||||
|
||||
Lifetime lifetime = 3 [(gogoproto.customname) = "Lifetime"];
|
||||
}
|
107
object/utils.go
Normal file
107
object/utils.go
Normal file
|
@ -0,0 +1,107 @@
|
|||
package object
|
||||
|
||||
import (
|
||||
"io"
|
||||
|
||||
"code.cloudfoundry.org/bytefmt"
|
||||
"github.com/nspcc-dev/neofs-proto/session"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
const maxGetPayloadSize = 3584 * 1024 // 3.5 MiB
|
||||
|
||||
func splitBytes(data []byte, maxSize int) (result [][]byte) {
|
||||
l := len(data)
|
||||
if l == 0 {
|
||||
return [][]byte{data}
|
||||
}
|
||||
for i := 0; i < l; i += maxSize {
|
||||
last := i + maxSize
|
||||
if last > l {
|
||||
last = l
|
||||
}
|
||||
result = append(result, data[i:last])
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// SendPutRequest prepares the object and sends it in chunks through the protobuf stream.
|
||||
func SendPutRequest(s Service_PutClient, obj *Object, epoch uint64, ttl uint32) (*PutResponse, error) {
|
||||
// TODO split must take into account size of the marshalled Object
|
||||
chunks := splitBytes(obj.Payload, maxGetPayloadSize)
|
||||
obj.Payload = chunks[0]
|
||||
if err := s.Send(MakePutRequestHeader(obj, epoch, ttl, nil)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
for i := range chunks[1:] {
|
||||
if err := s.Send(MakePutRequestChunk(chunks[i+1])); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
resp, err := s.CloseAndRecv()
|
||||
if err != nil && err != io.EOF {
|
||||
return nil, err
|
||||
}
|
||||
return resp, nil
|
||||
}
|
||||
|
||||
// MakePutRequestHeader combines object, epoch, ttl and session token value
|
||||
// into header of object put request.
|
||||
func MakePutRequestHeader(obj *Object, epoch uint64, ttl uint32, token *session.Token) *PutRequest {
|
||||
return &PutRequest{
|
||||
R: &PutRequest_Header{
|
||||
Header: &PutRequest_PutHeader{
|
||||
Epoch: epoch,
|
||||
Object: obj,
|
||||
TTL: ttl,
|
||||
Token: token,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// MakePutRequestChunk wraps a payload chunk into a put request message
// that is transferred in the protobuf stream.
|
||||
func MakePutRequestChunk(chunk []byte) *PutRequest {
|
||||
return &PutRequest{R: &PutRequest_Chunk{Chunk: chunk}}
|
||||
}
|
||||
|
||||
func errMaxSizeExceeded(size uint64) error {
|
||||
return errors.Errorf("object payload size exceed: %s", bytefmt.ByteSize(size))
|
||||
}
|
||||
|
||||
// ReceiveGetResponse receives an object in chunks from the protobuf stream
// and combines them into a single get response structure.
|
||||
func ReceiveGetResponse(c Service_GetClient, maxSize uint64) (*GetResponse, error) {
|
||||
res, err := c.Recv()
|
||||
if err == io.EOF {
|
||||
return res, err
|
||||
} else if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
obj := res.GetObject()
|
||||
if obj == nil {
|
||||
return nil, ErrHeaderExpected
|
||||
}
|
||||
|
||||
if obj.SystemHeader.PayloadLength > maxSize {
|
||||
return nil, errMaxSizeExceeded(maxSize)
|
||||
}
|
||||
|
||||
if res.NotFull() {
|
||||
payload := make([]byte, obj.SystemHeader.PayloadLength)
|
||||
offset := copy(payload, obj.Payload)
|
||||
|
||||
var r *GetResponse
|
||||
for r, err = c.Recv(); err == nil; r, err = c.Recv() {
|
||||
offset += copy(payload[offset:], r.GetChunk())
|
||||
}
|
||||
if err != io.EOF {
|
||||
return nil, err
|
||||
}
|
||||
obj.Payload = payload
|
||||
}
|
||||
|
||||
return res, nil
|
||||
}
|
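SendPutRequest needs a live gRPC stream, but the message sequence it produces can be sketched by hand with MakePutRequestHeader and MakePutRequestChunk. The chunk size below simply reuses the 3.5 MiB figure from maxGetPayloadSize, and the payload size is arbitrary:

	package main

	import (
		"fmt"

		"github.com/nspcc-dev/neofs-proto/object"
	)

	func main() {
		obj := new(object.Object)
		obj.SetPayload(make([]byte, 8*1024*1024)) // 8 MiB of payload

		// Reuse the 3.5 MiB figure from maxGetPayloadSize above.
		const chunkSize = 3584 * 1024

		// First stream message: the header, carrying the first chunk of payload.
		payload := obj.Payload
		head := payload
		if len(head) > chunkSize {
			head = payload[:chunkSize]
		}
		obj.Payload = head
		msgs := []*object.PutRequest{object.MakePutRequestHeader(obj, 10, 3, nil)}

		// Remaining stream messages: raw payload chunks.
		for off := len(head); off < len(payload); off += chunkSize {
			end := off + chunkSize
			if end > len(payload) {
				end = len(payload)
			}
			msgs = append(msgs, object.MakePutRequestChunk(payload[off:end]))
		}

		fmt.Println(len(msgs)) // 3: one header plus two chunks
	}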
132
object/verification.go
Normal file
132
object/verification.go
Normal file
|
@ -0,0 +1,132 @@
|
|||
package object
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/ecdsa"
|
||||
"crypto/sha256"
|
||||
|
||||
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
func (m Object) headersData(check bool) ([]byte, error) {
|
||||
var bytebuf = new(bytes.Buffer)
|
||||
|
||||
// fixme: we must marshal fields one by one without protobuf marshaling
|
||||
// protobuf marshaling does not guarantee the same result
|
||||
|
||||
if sysheader, err := m.SystemHeader.Marshal(); err != nil {
|
||||
return nil, err
|
||||
} else if _, err := bytebuf.Write(sysheader); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
n, _ := m.LastHeader(HeaderType(IntegrityHdr))
|
||||
for i := range m.Headers {
|
||||
if check && i == n {
|
||||
// ignore last integrity header in order to check headers data
|
||||
continue
|
||||
}
|
||||
|
||||
if header, err := m.Headers[i].Marshal(); err != nil {
|
||||
return nil, err
|
||||
} else if _, err := bytebuf.Write(header); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return bytebuf.Bytes(), nil
|
||||
}
|
||||
|
||||
func (m Object) headersChecksum(check bool) ([]byte, error) {
|
||||
data, err := m.headersData(check)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
checksum := sha256.Sum256(data)
|
||||
return checksum[:], nil
|
||||
}
|
||||
|
||||
// PayloadChecksum calculates sha256 checksum of object payload.
|
||||
func (m Object) PayloadChecksum() []byte {
|
||||
checksum := sha256.Sum256(m.Payload)
|
||||
return checksum[:]
|
||||
}
|
||||
|
||||
func (m Object) verifySignature(key []byte, ih *IntegrityHeader) error {
|
||||
pk := crypto.UnmarshalPublicKey(key)
|
||||
if crypto.Verify(pk, ih.HeadersChecksum, ih.ChecksumSignature) == nil {
|
||||
return nil
|
||||
}
|
||||
return ErrVerifySignature
|
||||
}
|
||||
|
||||
// Verify performs a local integrity check by finding the verification and
// integrity headers. If the header integrity check passes, the function
// verifies the checksum of the object payload.
|
||||
func (m Object) Verify() error {
|
||||
var (
|
||||
err error
|
||||
checksum []byte
|
||||
)
|
||||
// Prepare structures
|
||||
_, vh := m.LastHeader(HeaderType(VerifyHdr))
|
||||
if vh == nil {
|
||||
return ErrHeaderNotFound
|
||||
}
|
||||
verify := vh.Value.(*Header_Verify).Verify
|
||||
|
||||
_, ih := m.LastHeader(HeaderType(IntegrityHdr))
|
||||
if ih == nil {
|
||||
return ErrHeaderNotFound
|
||||
}
|
||||
integrity := ih.Value.(*Header_Integrity).Integrity
|
||||
|
||||
// Verify signature
|
||||
err = m.verifySignature(verify.PublicKey, integrity)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "public key: %x", verify.PublicKey)
|
||||
}
|
||||
|
||||
// Verify checksum of header
|
||||
checksum, err = m.headersChecksum(true)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if !bytes.Equal(integrity.HeadersChecksum, checksum) {
|
||||
return ErrVerifyHeader
|
||||
}
|
||||
|
||||
// Verify checksum of payload
|
||||
if m.SystemHeader.PayloadLength > 0 && !m.IsLinking() {
|
||||
checksum = m.PayloadChecksum()
|
||||
|
||||
_, ph := m.LastHeader(HeaderType(PayloadChecksumHdr))
|
||||
if ph == nil {
|
||||
return ErrHeaderNotFound
|
||||
}
|
||||
if !bytes.Equal(ph.Value.(*Header_PayloadChecksum).PayloadChecksum, checksum) {
|
||||
return ErrVerifyPayload
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Sign creates new integrity header and adds it to the end of the list of
|
||||
// extended headers.
|
||||
func (m *Object) Sign(key *ecdsa.PrivateKey) error {
|
||||
headerChecksum, err := m.headersChecksum(false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headerChecksumSignature, err := crypto.Sign(key, headerChecksum)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
m.AddHeader(&Header{Value: &Header_Integrity{
|
||||
Integrity: &IntegrityHeader{
|
||||
HeadersChecksum: headerChecksum,
|
||||
ChecksumSignature: headerChecksumSignature,
|
||||
},
|
||||
}})
|
||||
return nil
|
||||
}
|
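A sketch of the full sign-and-verify round trip, following the flow of the test below; freshly generated P-256 keys stand in for the owner and session keys (the test uses test.DecodeKey instead):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"fmt"

		crypto "github.com/nspcc-dev/neofs-crypto"
		"github.com/nspcc-dev/neofs-proto/object"
		"github.com/nspcc-dev/neofs-proto/session"
	)

	func main() {
		// Stand-ins for the owner key and the node-issued session key.
		ownerKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		sessionKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}

		obj := new(object.Object)
		obj.SetPayload([]byte("payload"))
		obj.SetHeader(&object.Header{Value: &object.Header_PayloadChecksum{PayloadChecksum: obj.PayloadChecksum()}})

		// Verification header: session public key, signed by the owner.
		sessionPK := crypto.MarshalPublicKey(&sessionKey.PublicKey)
		keySig, err := crypto.Sign(ownerKey, sessionPK)
		if err != nil {
			panic(err)
		}
		obj.SetVerificationHeader(&session.VerificationHeader{PublicKey: sessionPK, KeySignature: keySig})

		// Integrity header: headers checksum signed with the session key.
		if err := obj.Sign(sessionKey); err != nil {
			panic(err)
		}

		fmt.Println(obj.Verify() == nil) // true
	}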
105
object/verification_test.go
Normal file
105
object/verification_test.go
Normal file
|
@ -0,0 +1,105 @@
|
|||
package object
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/google/uuid"
|
||||
crypto "github.com/nspcc-dev/neofs-crypto"
|
||||
"github.com/nspcc-dev/neofs-crypto/test"
|
||||
"github.com/nspcc-dev/neofs-proto/container"
|
||||
"github.com/nspcc-dev/neofs-proto/refs"
|
||||
"github.com/nspcc-dev/neofs-proto/session"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestObject_Verify(t *testing.T) {
|
||||
key := test.DecodeKey(0)
|
||||
sessionkey := test.DecodeKey(1)
|
||||
|
||||
payload := make([]byte, 1024*1024)
|
||||
|
||||
cnr, err := container.NewTestContainer()
|
||||
require.NoError(t, err)
|
||||
|
||||
cid, err := cnr.ID()
|
||||
require.NoError(t, err)
|
||||
|
||||
id, err := uuid.NewRandom()
|
||||
uid := refs.UUID(id)
|
||||
require.NoError(t, err)
|
||||
|
||||
obj := &Object{
|
||||
SystemHeader: SystemHeader{
|
||||
ID: uid,
|
||||
CID: cid,
|
||||
OwnerID: refs.OwnerID([refs.OwnerIDSize]byte{}),
|
||||
},
|
||||
Headers: []Header{
|
||||
{
|
||||
Value: &Header_UserHeader{
|
||||
UserHeader: &UserHeader{
|
||||
Key: "Profession",
|
||||
Value: "Developer",
|
||||
},
|
||||
},
|
||||
},
|
||||
{
|
||||
Value: &Header_UserHeader{
|
||||
UserHeader: &UserHeader{
|
||||
Key: "Language",
|
||||
Value: "GO",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
obj.SetPayload(payload)
|
||||
obj.SetHeader(&Header{Value: &Header_PayloadChecksum{[]byte("incorrect checksum")}})
|
||||
|
||||
t.Run("error no integrity header", func(t *testing.T) {
|
||||
err = obj.Verify()
|
||||
require.EqualError(t, err, ErrHeaderNotFound.Error())
|
||||
})
|
||||
|
||||
badHeaderChecksum := []byte("incorrect checksum")
|
||||
signature, err := crypto.Sign(sessionkey, badHeaderChecksum)
|
||||
require.NoError(t, err)
|
||||
ih := &IntegrityHeader{
|
||||
HeadersChecksum: badHeaderChecksum,
|
||||
ChecksumSignature: signature,
|
||||
}
|
||||
obj.SetHeader(&Header{Value: &Header_Integrity{ih}})
|
||||
|
||||
t.Run("error no validation header", func(t *testing.T) {
|
||||
err = obj.Verify()
|
||||
require.EqualError(t, err, ErrHeaderNotFound.Error())
|
||||
})
|
||||
|
||||
dataPK := crypto.MarshalPublicKey(&sessionkey.PublicKey)
|
||||
signature, err = crypto.Sign(key, dataPK)
|
||||
vh := &session.VerificationHeader{
|
||||
PublicKey: dataPK,
|
||||
KeySignature: signature,
|
||||
}
|
||||
obj.SetVerificationHeader(vh)
|
||||
|
||||
t.Run("error invalid header checksum", func(t *testing.T) {
|
||||
err = obj.Verify()
|
||||
require.EqualError(t, err, ErrVerifyHeader.Error())
|
||||
})
|
||||
|
||||
require.NoError(t, obj.Sign(sessionkey))
|
||||
|
||||
t.Run("error invalid payload checksum", func(t *testing.T) {
|
||||
err = obj.Verify()
|
||||
require.EqualError(t, err, ErrVerifyPayload.Error())
|
||||
})
|
||||
|
||||
obj.SetHeader(&Header{Value: &Header_PayloadChecksum{obj.PayloadChecksum()}})
|
||||
require.NoError(t, obj.Sign(sessionkey))
|
||||
|
||||
t.Run("correct", func(t *testing.T) {
|
||||
err = obj.Verify()
|
||||
require.NoError(t, err)
|
||||
})
|
||||
}
|
7 proto.go Normal file
@@ -0,0 +1,7 @@
package neofs_proto // import "github.com/nspcc-dev/neofs-proto"

import (
	_ "github.com/gogo/protobuf/gogoproto"
	_ "github.com/gogo/protobuf/proto"
	_ "github.com/golang/protobuf/proto"
)
43 query/types.go Normal file
@@ -0,0 +1,43 @@
package query

import (
	"strings"

	"github.com/gogo/protobuf/proto"
)

var (
	_ proto.Message = (*Query)(nil)
	_ proto.Message = (*Filter)(nil)
)

// String returns string representation of Filter.
func (m Filter) String() string {
	b := new(strings.Builder)
	b.WriteString("<Filter '$" + m.Name + "' ")
	switch m.Type {
	case Filter_Exact:
		b.WriteString("==")
	case Filter_Regex:
		b.WriteString("~=")
	default:
		b.WriteString("??")
	}
	b.WriteString(" '" + m.Value + "'>")
	return b.String()
}

// String returns string representation of Query.
func (m Query) String() string {
	b := new(strings.Builder)
	b.WriteString("<Query [")
	ln := len(m.Filters)
	for i := 0; i < ln; i++ {
		b.WriteString(m.Filters[i].String())
		if ln-1 != i {
			b.WriteByte(',')
		}
	}
	b.WriteByte(']')
	return b.String()
}
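A sketch of building a Query, printing it and marshaling it; the filter names and values are placeholders, and feeding the marshaled bytes into SearchRequest.Query is an assumption based on that field's bytes type:

	package main

	import (
		"fmt"

		"github.com/nspcc-dev/neofs-proto/query"
	)

	func main() {
		q := query.Query{Filters: []query.Filter{
			{Type: query.Filter_Exact, Name: "CID", Value: "container-id"},
			{Type: query.Filter_Regex, Name: "Filename", Value: "^cat"},
		}}

		// Human-readable form produced by the String methods above.
		fmt.Println(q.String())

		// Marshaled form; presumably what a SearchRequest.Query would carry.
		raw, err := q.Marshal()
		if err != nil {
			panic(err)
		}
		fmt.Println(len(raw) > 0) // true
	}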
634
query/types.pb.go
Normal file
634
query/types.pb.go
Normal file
|
@ -0,0 +1,634 @@
|
|||
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: query/types.proto
|
||||
|
||||
package query
|
||||
|
||||
import (
|
||||
fmt "fmt"
|
||||
_ "github.com/gogo/protobuf/gogoproto"
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
type Filter_Type int32
|
||||
|
||||
const (
|
||||
Filter_Exact Filter_Type = 0
|
||||
Filter_Regex Filter_Type = 1
|
||||
)
|
||||
|
||||
var Filter_Type_name = map[int32]string{
|
||||
0: "Exact",
|
||||
1: "Regex",
|
||||
}
|
||||
|
||||
var Filter_Type_value = map[string]int32{
|
||||
"Exact": 0,
|
||||
"Regex": 1,
|
||||
}
|
||||
|
||||
func (x Filter_Type) String() string {
|
||||
return proto.EnumName(Filter_Type_name, int32(x))
|
||||
}
|
||||
|
||||
func (Filter_Type) EnumDescriptor() ([]byte, []int) {
|
||||
return fileDescriptor_c682aeaf51d46f4d, []int{0, 0}
|
||||
}
|
||||
|
||||
type Filter struct {
|
||||
Type Filter_Type `protobuf:"varint,1,opt,name=type,proto3,enum=query.Filter_Type" json:"type,omitempty"`
|
||||
Name string `protobuf:"bytes,2,opt,name=Name,proto3" json:"Name,omitempty"`
|
||||
Value string `protobuf:"bytes,3,opt,name=Value,proto3" json:"Value,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *Filter) Reset() { *m = Filter{} }
|
||||
func (*Filter) ProtoMessage() {}
|
||||
func (*Filter) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_c682aeaf51d46f4d, []int{0}
|
||||
}
|
||||
func (m *Filter) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *Filter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *Filter) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_Filter.Merge(m, src)
|
||||
}
|
||||
func (m *Filter) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *Filter) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_Filter.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_Filter proto.InternalMessageInfo
|
||||
|
||||
func (m *Filter) GetType() Filter_Type {
|
||||
if m != nil {
|
||||
return m.Type
|
||||
}
|
||||
return Filter_Exact
|
||||
}
|
||||
|
||||
func (m *Filter) GetName() string {
|
||||
if m != nil {
|
||||
return m.Name
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (m *Filter) GetValue() string {
|
||||
if m != nil {
|
||||
return m.Value
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
type Query struct {
|
||||
Filters []Filter `protobuf:"bytes,1,rep,name=Filters,proto3" json:"Filters"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *Query) Reset() { *m = Query{} }
|
||||
func (*Query) ProtoMessage() {}
|
||||
func (*Query) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_c682aeaf51d46f4d, []int{1}
|
||||
}
|
||||
func (m *Query) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *Query) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *Query) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_Query.Merge(m, src)
|
||||
}
|
||||
func (m *Query) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *Query) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_Query.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_Query proto.InternalMessageInfo
|
||||
|
||||
func (m *Query) GetFilters() []Filter {
|
||||
if m != nil {
|
||||
return m.Filters
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterEnum("query.Filter_Type", Filter_Type_name, Filter_Type_value)
|
||||
proto.RegisterType((*Filter)(nil), "query.Filter")
|
||||
proto.RegisterType((*Query)(nil), "query.Query")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("query/types.proto", fileDescriptor_c682aeaf51d46f4d) }
|
||||
|
||||
var fileDescriptor_c682aeaf51d46f4d = []byte{
|
||||
// 275 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x2c, 0x2c, 0x4d, 0x2d,
|
||||
0xaa, 0xd4, 0x2f, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x05,
|
||||
0x0b, 0x49, 0xe9, 0xa6, 0x67, 0x96, 0x64, 0x94, 0x26, 0xe9, 0x25, 0xe7, 0xe7, 0xea, 0xa7, 0xe7,
|
||||
0xa7, 0xe7, 0xeb, 0x83, 0x65, 0x93, 0x4a, 0xd3, 0xc0, 0x3c, 0x30, 0x07, 0xcc, 0x82, 0xe8, 0x52,
|
||||
0xea, 0x60, 0xe4, 0x62, 0x73, 0xcb, 0xcc, 0x29, 0x49, 0x2d, 0x12, 0x32, 0xe0, 0x62, 0x01, 0x99,
|
||||
0x27, 0xc1, 0xa8, 0xc0, 0xa8, 0xc1, 0x67, 0x24, 0xa4, 0x07, 0x36, 0x4f, 0x0f, 0x22, 0xa9, 0x17,
|
||||
0x52, 0x59, 0x90, 0xea, 0xc4, 0xf1, 0xe8, 0x9e, 0x3c, 0x0b, 0x88, 0x15, 0x04, 0x56, 0x29, 0x24,
|
||||
0xc4, 0xc5, 0xe2, 0x97, 0x98, 0x9b, 0x2a, 0xc1, 0xa4, 0xc0, 0xa8, 0xc1, 0x19, 0x04, 0x66, 0x0b,
|
||||
0x89, 0x70, 0xb1, 0x86, 0x25, 0xe6, 0x94, 0xa6, 0x4a, 0x30, 0x83, 0x05, 0x21, 0x1c, 0x25, 0x19,
|
||||
0x2e, 0xb0, 0x3e, 0x21, 0x4e, 0x2e, 0x56, 0xd7, 0x8a, 0xc4, 0xe4, 0x12, 0x01, 0x06, 0x10, 0x33,
|
||||
0x28, 0x35, 0x3d, 0xb5, 0x42, 0x80, 0xd1, 0x8a, 0x65, 0xc6, 0x02, 0x79, 0x06, 0x25, 0x1b, 0x2e,
|
||||
0xd6, 0x40, 0x90, 0x95, 0x42, 0xba, 0x5c, 0xec, 0x10, 0x5b, 0x8b, 0x25, 0x18, 0x15, 0x98, 0x35,
|
||||
0xb8, 0x8d, 0x78, 0x51, 0xdc, 0xe2, 0xc4, 0x72, 0xe2, 0x9e, 0x3c, 0x43, 0x10, 0x4c, 0x0d, 0x44,
|
||||
0xb7, 0x93, 0xcd, 0x89, 0x47, 0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0xde, 0x78, 0x24, 0xc7, 0xf8,
|
||||
0xe0, 0x91, 0x1c, 0xe3, 0x8c, 0xc7, 0x72, 0x0c, 0x51, 0x6a, 0x48, 0xa1, 0x91, 0x57, 0x5c, 0x90,
|
||||
0x9c, 0xac, 0x9b, 0x92, 0x5a, 0xa6, 0x9f, 0x97, 0x9a, 0x9f, 0x56, 0xac, 0x0b, 0x09, 0x0b, 0xb0,
|
||||
0xc9, 0x49, 0x6c, 0x60, 0x8e, 0x31, 0x20, 0x00, 0x00, 0xff, 0xff, 0xa6, 0xcd, 0xeb, 0xf6, 0x58,
|
||||
0x01, 0x00, 0x00,
|
||||
}
|
||||
|
||||
func (m *Filter) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *Filter) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *Filter) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if len(m.Value) > 0 {
|
||||
i -= len(m.Value)
|
||||
copy(dAtA[i:], m.Value)
|
||||
i = encodeVarintTypes(dAtA, i, uint64(len(m.Value)))
|
||||
i--
|
||||
dAtA[i] = 0x1a
|
||||
}
|
||||
if len(m.Name) > 0 {
|
||||
i -= len(m.Name)
|
||||
copy(dAtA[i:], m.Name)
|
||||
i = encodeVarintTypes(dAtA, i, uint64(len(m.Name)))
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
}
|
||||
if m.Type != 0 {
|
||||
i = encodeVarintTypes(dAtA, i, uint64(m.Type))
|
||||
i--
|
||||
dAtA[i] = 0x8
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func (m *Query) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *Query) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *Query) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if len(m.Filters) > 0 {
|
||||
for iNdEx := len(m.Filters) - 1; iNdEx >= 0; iNdEx-- {
|
||||
{
|
||||
size, err := m.Filters[iNdEx].MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
}
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovTypes(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *Filter) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Type != 0 {
|
||||
n += 1 + sovTypes(uint64(m.Type))
|
||||
}
|
||||
l = len(m.Name)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
l = len(m.Value)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (m *Query) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if len(m.Filters) > 0 {
|
||||
for _, e := range m.Filters {
|
||||
l = e.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovTypes(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozTypes(x uint64) (n int) {
|
||||
return sovTypes(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *Filter) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: Filter: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: Filter: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
|
||||
}
|
||||
m.Type = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.Type |= Filter_Type(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Name = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
case 3:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
|
||||
}
|
||||
var stringLen uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
stringLen |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
intStringLen := int(stringLen)
|
||||
if intStringLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + intStringLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Value = string(dAtA[iNdEx:postIndex])
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipTypes(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func (m *Query) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: Query: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: Query: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Filters", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Filters = append(m.Filters, Filter{})
|
||||
if err := m.Filters[len(m.Filters)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipTypes(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipTypes(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupTypes
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthTypes = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowTypes = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupTypes = fmt.Errorf("proto: unexpected end of group")
|
||||
)
25	query/types.proto	Normal file
@@ -0,0 +1,25 @@
syntax = "proto3";
package query;
option go_package = "github.com/nspcc-dev/neofs-proto/query";

import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

message Filter {
    option (gogoproto.goproto_stringer) = false;

    enum Type {
        Exact = 0;
        Regex = 1;
    }
    Type type = 1 [(gogoproto.customname) = "Type"];
    string Name  = 2;
    string Value = 3;
}

message Query {
    option (gogoproto.goproto_stringer) = false;

    repeated Filter Filters = 1 [(gogoproto.nullable) = false];
}
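The generated Go bindings above can be exercised with a short round-trip. This is an illustrative sketch that is not part of the commit; it assumes the usual gogo naming of the nested enum constants (Filter_Exact, Filter_Regex).

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/query"
)

func main() {
	// Build a Query with two filters; Filter_Exact/Filter_Regex are the
	// assumed generated names for the nested enum values Exact/Regex.
	q := query.Query{
		Filters: []query.Filter{
			{Type: query.Filter_Exact, Name: "Storage", Value: "SSD"},
			{Type: query.Filter_Regex, Name: "Country", Value: "DE|NL"},
		},
	}

	// stable_marshaler_all is enabled for the package, so marshaling the
	// same Query always yields the same bytes.
	data, err := q.Marshal()
	if err != nil {
		panic(err)
	}

	var restored query.Query
	if err := restored.Unmarshal(data); err != nil {
		panic(err)
	}
	fmt.Println(len(restored.Filters)) // 2
}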
68	refs/address.go	Normal file
@@ -0,0 +1,68 @@
package refs

import (
	"crypto/sha256"
	"strings"

	"github.com/nspcc-dev/neofs-proto/internal"
)

const (
	joinSeparator = "/"

	// ErrWrongAddress is raised when a wrong address is passed to Address.Parse or ParseAddress.
	ErrWrongAddress = internal.Error("wrong address")

	// ErrEmptyAddress is raised when an empty address is passed to Address.Parse or ParseAddress.
	ErrEmptyAddress = internal.Error("empty address")
)

// ParseAddress parses address from string representation into new Address.
func ParseAddress(str string) (*Address, error) {
	var addr Address
	return &addr, addr.Parse(str)
}

// Parse parses address from string representation into current Address.
func (m *Address) Parse(addr string) error {
	if m == nil {
		return ErrEmptyAddress
	}

	items := strings.Split(addr, joinSeparator)
	if len(items) != 2 {
		return ErrWrongAddress
	}

	if err := m.CID.Parse(items[0]); err != nil {
		return err
	} else if err := m.ObjectID.Parse(items[1]); err != nil {
		return err
	}

	return nil
}

// String returns string representation of Address.
func (m Address) String() string {
	return strings.Join([]string{m.CID.String(), m.ObjectID.String()}, joinSeparator)
}

// IsFull checks that ContainerID and ObjectID are not empty.
func (m Address) IsFull() bool {
	return !m.CID.Empty() && !m.ObjectID.Empty()
}

// Equal checks that current Address is equal to passed Address.
func (m Address) Equal(a2 *Address) bool {
	return m.CID.Equal(a2.CID) && m.ObjectID.Equal(a2.ObjectID)
}

// Hash returns []byte that is used as a key for storage bucket.
func (m Address) Hash() ([]byte, error) {
	if !m.IsFull() {
		return nil, ErrEmptyAddress
	}
	h := sha256.Sum256(append(m.ObjectID.Bytes(), m.CID.Bytes()...))
	return h[:], nil
}
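A small usage sketch for the Address helpers above (not part of the commit); it only relies on ParseAddress, String and Hash, plus the CID/ObjectID constructors defined elsewhere in this refs package.

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	cid := refs.CIDForBytes([]byte("container"))

	oid, err := refs.NewObjectID()
	if err != nil {
		panic(err)
	}

	addr := refs.Address{CID: cid, ObjectID: oid}

	// String joins CID and ObjectID with "/"; Parse reverses it.
	restored, err := refs.ParseAddress(addr.String())
	if err != nil {
		panic(err)
	}

	// Hash is only defined for a full address (both parts set).
	key, err := restored.Hash()
	if err != nil {
		panic(err)
	}
	fmt.Printf("bucket key: %x\n", key)
}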
96	refs/cid.go	Normal file
@@ -0,0 +1,96 @@
package refs
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/sha256"
|
||||
|
||||
"github.com/mr-tron/base58"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// CIDForBytes creates CID for passed bytes.
|
||||
func CIDForBytes(data []byte) CID { return sha256.Sum256(data) }
|
||||
|
||||
// CIDFromBytes parses CID from passed bytes.
|
||||
func CIDFromBytes(data []byte) (cid CID, err error) {
|
||||
if ln := len(data); ln != CIDSize {
|
||||
return CID{}, errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", CIDSize, ln)
|
||||
}
|
||||
|
||||
copy(cid[:], data)
|
||||
return
|
||||
}
|
||||
|
||||
// CIDFromString parses CID from string representation of CID.
|
||||
func CIDFromString(c string) (CID, error) {
|
||||
var cid CID
|
||||
decoded, err := base58.Decode(c)
|
||||
if err != nil {
|
||||
return cid, err
|
||||
}
|
||||
return CIDFromBytes(decoded)
|
||||
}
|
||||
|
||||
// Size returns size of CID (CIDSize).
|
||||
func (c CID) Size() int { return CIDSize }
|
||||
|
||||
// Parse tries to parse CID from string representation.
|
||||
func (c *CID) Parse(cid string) error {
|
||||
var err error
|
||||
if *c, err = CIDFromString(cid); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Empty checks that current CID is empty.
|
||||
func (c CID) Empty() bool { return bytes.Equal(c.Bytes(), emptyCID) }
|
||||
|
||||
// Equal checks that current CID is equal to passed CID.
|
||||
func (c CID) Equal(cid CID) bool { return bytes.Equal(c.Bytes(), cid.Bytes()) }
|
||||
|
||||
// Marshal returns CID bytes representation.
|
||||
func (c CID) Marshal() ([]byte, error) { return c.Bytes(), nil }
|
||||
|
||||
// MarshalBinary returns CID bytes representation.
|
||||
func (c CID) MarshalBinary() ([]byte, error) { return c.Bytes(), nil }
|
||||
|
||||
// MarshalTo marshal CID to bytes representation into passed bytes.
|
||||
func (c *CID) MarshalTo(data []byte) (int, error) { return copy(data, c.Bytes()), nil }
|
||||
|
||||
// ProtoMessage method to satisfy proto.Message interface.
|
||||
func (c CID) ProtoMessage() {}
|
||||
|
||||
// String returns string representation of CID.
|
||||
func (c CID) String() string { return base58.Encode(c[:]) }
|
||||
|
||||
// Reset resets current CID to zero value.
|
||||
func (c *CID) Reset() { *c = CID{} }
|
||||
|
||||
// Bytes returns CID bytes representation.
|
||||
func (c CID) Bytes() []byte {
|
||||
buf := make([]byte, CIDSize)
|
||||
copy(buf, c[:])
|
||||
return buf
|
||||
}
|
||||
|
||||
// UnmarshalBinary tries to parse bytes representation of CID.
|
||||
func (c *CID) UnmarshalBinary(data []byte) error { return c.Unmarshal(data) }
|
||||
|
||||
// Unmarshal tries to parse bytes representation of CID.
|
||||
func (c *CID) Unmarshal(data []byte) error {
|
||||
if ln := len(data); ln != CIDSize {
|
||||
return errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", CIDSize, ln)
|
||||
}
|
||||
|
||||
copy((*c)[:], data)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Verify validates that current CID is generated for passed bytes data.
|
||||
func (c CID) Verify(data []byte) error {
|
||||
if id := CIDForBytes(data); !bytes.Equal(c[:], id[:]) {
|
||||
return errors.New("wrong hash for data")
|
||||
}
|
||||
return nil
|
||||
}
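An illustrative sketch (not part of the commit) of the CID helpers above: the identifier is a sha256 digest, so it can be rebuilt from the payload, printed as base58 and verified against the original data.

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	payload := []byte("some container data")

	cid := refs.CIDForBytes(payload)

	// String/CIDFromString use base58; Verify recomputes the digest.
	parsed, err := refs.CIDFromString(cid.String())
	if err != nil {
		panic(err)
	}
	if err := parsed.Verify(payload); err != nil {
		panic(err)
	}
	fmt.Println(parsed.Equal(cid)) // true
}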
65	refs/owner.go	Normal file
@@ -0,0 +1,65 @@
package refs
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/ecdsa"
|
||||
|
||||
"github.com/mr-tron/base58"
|
||||
"github.com/nspcc-dev/neofs-proto/chain"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// NewOwnerID returns generated OwnerID from passed public keys.
|
||||
func NewOwnerID(keys ...*ecdsa.PublicKey) (owner OwnerID, err error) {
|
||||
if len(keys) == 0 {
|
||||
return
|
||||
}
|
||||
var d []byte
|
||||
d, err = base58.Decode(chain.KeysToAddress(keys...))
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
copy(owner[:], d)
|
||||
return owner, nil
|
||||
}
|
||||
|
||||
// Size returns OwnerID size in bytes (OwnerIDSize).
|
||||
func (OwnerID) Size() int { return OwnerIDSize }
|
||||
|
||||
// Empty checks that current OwnerID is empty value.
|
||||
func (o OwnerID) Empty() bool { return bytes.Equal(o.Bytes(), emptyOwner) }
|
||||
|
||||
// Equal checks that current OwnerID is equal to passed OwnerID.
|
||||
func (o OwnerID) Equal(id OwnerID) bool { return bytes.Equal(o.Bytes(), id.Bytes()) }
|
||||
|
||||
// Reset sets current OwnerID to empty value.
|
||||
func (o *OwnerID) Reset() { *o = OwnerID{} }
|
||||
|
||||
// ProtoMessage method to satisfy proto.Message interface.
|
||||
func (OwnerID) ProtoMessage() {}
|
||||
|
||||
// Marshal returns OwnerID bytes representation.
|
||||
func (o OwnerID) Marshal() ([]byte, error) { return o.Bytes(), nil }
|
||||
|
||||
// MarshalTo copies OwnerID bytes representation into passed slice of bytes.
|
||||
func (o OwnerID) MarshalTo(data []byte) (int, error) { return copy(data, o.Bytes()), nil }
|
||||
|
||||
// String returns string representation of OwnerID.
|
||||
func (o OwnerID) String() string { return base58.Encode(o[:]) }
|
||||
|
||||
// Bytes returns OwnerID bytes representation.
|
||||
func (o OwnerID) Bytes() []byte {
|
||||
buf := make([]byte, OwnerIDSize)
|
||||
copy(buf, o[:])
|
||||
return buf
|
||||
}
|
||||
|
||||
// Unmarshal tries to parse OwnerID bytes representation into current OwnerID.
|
||||
func (o *OwnerID) Unmarshal(data []byte) error {
|
||||
if ln := len(data); ln != OwnerIDSize {
|
||||
return errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", OwnerIDSize, ln)
|
||||
}
|
||||
|
||||
copy((*o)[:], data)
|
||||
return nil
|
||||
}
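A hedged usage sketch for NewOwnerID (not part of the commit); it assumes any ECDSA key accepted by chain.KeysToAddress, here a freshly generated P256 key.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"

	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// OwnerID is derived from the wallet address of the public key, so the
	// same key always maps to the same OwnerID.
	owner, err := refs.NewOwnerID(&key.PublicKey)
	if err != nil {
		panic(err)
	}

	same, err := refs.NewOwnerID(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Println(owner.Equal(same), owner.String()) // true <base58 address>
}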
14	refs/sgid.go	Normal file
@@ -0,0 +1,14 @@
package refs

import (
	"github.com/pkg/errors"
)

// SGIDFromBytes parses bytes representation of SGID into new SGID value.
func SGIDFromBytes(data []byte) (sgid SGID, err error) {
	if ln := len(data); ln != SGIDSize {
		return SGID{}, errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", SGIDSize, ln)
	}
	copy(sgid[:], data)
	return
}
106	refs/types.go	Normal file
@@ -0,0 +1,106 @@
// Package refs contains basic structures implemented in Go, such as
|
||||
//
|
||||
// CID - container id
|
||||
// OwnerID - owner id
|
||||
// ObjectID - object id
|
||||
// SGID - storage group id
|
||||
// Address - contains object id and container id
|
||||
// UUID - a 128 bit (16 byte) Universal Unique Identifier as defined in RFC 4122
|
||||
|
||||
package refs
|
||||
|
||||
import (
|
||||
"crypto/sha256"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/nspcc-dev/neofs-proto/chain"
|
||||
"github.com/nspcc-dev/neofs-proto/internal"
|
||||
)
|
||||
|
||||
type (
|
||||
// CID is implementation of ContainerID.
|
||||
CID [CIDSize]byte
|
||||
|
||||
// UUID wrapper over github.com/google/uuid.UUID.
|
||||
UUID uuid.UUID
|
||||
|
||||
// SGID is type alias of UUID.
|
||||
SGID = UUID
|
||||
|
||||
// ObjectID is type alias of UUID.
|
||||
ObjectID = UUID
|
||||
|
||||
// MessageID is type alias of UUID.
|
||||
MessageID = UUID
|
||||
|
||||
// OwnerID is wrapper over neofs-proto/chain.WalletAddress.
|
||||
OwnerID chain.WalletAddress
|
||||
)
|
||||
|
||||
const (
|
||||
// UUIDSize contains size of UUID.
|
||||
UUIDSize = 16
|
||||
|
||||
// SGIDSize contains size of SGID.
|
||||
SGIDSize = UUIDSize
|
||||
|
||||
// CIDSize contains size of CID.
|
||||
CIDSize = sha256.Size
|
||||
|
||||
// OwnerIDSize contains size of OwnerID.
|
||||
OwnerIDSize = chain.AddressLength
|
||||
|
||||
// ErrWrongDataSize is raised when passed bytes into Unmarshal have wrong size.
|
||||
ErrWrongDataSize = internal.Error("wrong data size")
|
||||
|
||||
// ErrEmptyOwner is raised when empty OwnerID is passed into container.New.
|
||||
ErrEmptyOwner = internal.Error("owner cant be empty")
|
||||
|
||||
// ErrEmptyCapacity is raised when empty Capacity is passed into container.New.
|
||||
ErrEmptyCapacity = internal.Error("capacity cant be empty")
|
||||
|
||||
// ErrEmptyContainer is raised when the CID method is called for an empty container.
|
||||
ErrEmptyContainer = internal.Error("cannot return ID for empty container")
|
||||
)
|
||||
|
||||
var (
|
||||
emptyCID = (CID{}).Bytes()
|
||||
emptyUUID = (UUID{}).Bytes()
|
||||
emptyOwner = (OwnerID{}).Bytes()
|
||||
|
||||
_ internal.Custom = (*CID)(nil)
|
||||
_ internal.Custom = (*SGID)(nil)
|
||||
_ internal.Custom = (*UUID)(nil)
|
||||
_ internal.Custom = (*OwnerID)(nil)
|
||||
_ internal.Custom = (*ObjectID)(nil)
|
||||
_ internal.Custom = (*MessageID)(nil)
|
||||
|
||||
// NewSGID method alias.
|
||||
NewSGID = NewUUID
|
||||
|
||||
// NewObjectID method alias.
|
||||
NewObjectID = NewUUID
|
||||
|
||||
// NewMessageID method alias.
|
||||
NewMessageID = NewUUID
|
||||
)
|
||||
|
||||
// NewUUID returns a Random (Version 4) UUID.
|
||||
//
|
||||
// The strength of the UUIDs is based on the strength of the crypto/rand
|
||||
// package.
|
||||
//
|
||||
// A note about uniqueness derived from the UUID Wikipedia entry:
|
||||
//
|
||||
// Randomly generated UUIDs have 122 random bits. One's annual risk of being
|
||||
// hit by a meteorite is estimated to be one chance in 17 billion, that
|
||||
// means the probability is about 0.00000000006 (6 × 10−11),
|
||||
// equivalent to the odds of creating a few tens of trillions of UUIDs in a
|
||||
// year and having one duplicate.
|
||||
func NewUUID() (UUID, error) {
|
||||
id, err := uuid.NewRandom()
|
||||
if err != nil {
|
||||
return UUID{}, err
|
||||
}
|
||||
return UUID(id), nil
|
||||
}
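A brief sketch (not part of the commit) showing that the New* aliases above all produce the same UUID-backed type with the sizes declared in this file.

package main

import (
	"fmt"

	"github.com/nspcc-dev/neofs-proto/refs"
)

func main() {
	// NewObjectID is an alias of NewUUID, so ObjectID, SGID and MessageID
	// all share the 16-byte UUID representation.
	oid, err := refs.NewObjectID()
	if err != nil {
		panic(err)
	}
	fmt.Println(oid.Size() == refs.UUIDSize) // true
	fmt.Println(oid.Empty())                 // false (random value)

	var zero refs.ObjectID
	fmt.Println(zero.Empty()) // true
}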
368	refs/types.pb.go	Normal file
@@ -0,0 +1,368 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: refs/types.proto
|
||||
|
||||
package refs
|
||||
|
||||
import (
|
||||
fmt "fmt"
|
||||
_ "github.com/gogo/protobuf/gogoproto"
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
type Address struct {
|
||||
ObjectID ObjectID `protobuf:"bytes,1,opt,name=ObjectID,proto3,customtype=ObjectID" json:"ObjectID"`
|
||||
CID CID `protobuf:"bytes,2,opt,name=CID,proto3,customtype=CID" json:"CID"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *Address) Reset() { *m = Address{} }
|
||||
func (*Address) ProtoMessage() {}
|
||||
func (*Address) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_063a64a96d952d31, []int{0}
|
||||
}
|
||||
func (m *Address) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *Address) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *Address) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_Address.Merge(m, src)
|
||||
}
|
||||
func (m *Address) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *Address) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_Address.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_Address proto.InternalMessageInfo
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*Address)(nil), "refs.Address")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("refs/types.proto", fileDescriptor_063a64a96d952d31) }
|
||||
|
||||
var fileDescriptor_063a64a96d952d31 = []byte{
|
||||
// 199 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x28, 0x4a, 0x4d, 0x2b,
|
||||
0xd6, 0x2f, 0xa9, 0x2c, 0x48, 0x2d, 0xd6, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x01, 0x89,
|
||||
0x48, 0xe9, 0xa6, 0x67, 0x96, 0x64, 0x94, 0x26, 0xe9, 0x25, 0xe7, 0xe7, 0xea, 0xa7, 0xe7, 0xa7,
|
||||
0xe7, 0xeb, 0x83, 0x25, 0x93, 0x4a, 0xd3, 0xc0, 0x3c, 0x30, 0x07, 0xcc, 0x82, 0x68, 0x52, 0x0a,
|
||||
0xe3, 0x62, 0x77, 0x4c, 0x49, 0x29, 0x4a, 0x2d, 0x2e, 0x16, 0xd2, 0xe1, 0xe2, 0xf0, 0x4f, 0xca,
|
||||
0x4a, 0x4d, 0x2e, 0xf1, 0x74, 0x91, 0x60, 0x54, 0x60, 0xd4, 0xe0, 0x71, 0x12, 0x38, 0x71, 0x4f,
|
||||
0x9e, 0xe1, 0xd6, 0x3d, 0x79, 0xb8, 0x78, 0x10, 0x9c, 0x25, 0x24, 0xcb, 0xc5, 0xec, 0xec, 0xe9,
|
||||
0x22, 0xc1, 0x04, 0x56, 0xc8, 0x0d, 0x55, 0x08, 0x12, 0x0a, 0x02, 0x11, 0x4e, 0xce, 0x37, 0x1e,
|
||||
0xca, 0x31, 0x34, 0x3c, 0x92, 0x63, 0x38, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x1b,
|
||||
0x8f, 0xe4, 0x18, 0x1f, 0x3c, 0x92, 0x63, 0x9c, 0xf1, 0x58, 0x8e, 0x21, 0x4a, 0x15, 0xc9, 0x91,
|
||||
0x79, 0xc5, 0x05, 0xc9, 0xc9, 0xba, 0x29, 0xa9, 0x65, 0xfa, 0x79, 0xa9, 0xf9, 0x69, 0xc5, 0xba,
|
||||
0x10, 0x27, 0x82, 0xfc, 0x92, 0xc4, 0x06, 0x66, 0x1b, 0x03, 0x02, 0x00, 0x00, 0xff, 0xff, 0xfd,
|
||||
0xb6, 0x0b, 0x68, 0xec, 0x00, 0x00, 0x00,
|
||||
}
|
||||
|
||||
func (m *Address) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *Address) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *Address) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
{
|
||||
size := m.CID.Size()
|
||||
i -= size
|
||||
if _, err := m.CID.MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
{
|
||||
size := m.ObjectID.Size()
|
||||
i -= size
|
||||
if _, err := m.ObjectID.MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovTypes(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *Address) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
l = m.ObjectID.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
l = m.CID.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovTypes(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozTypes(x uint64) (n int) {
|
||||
return sovTypes(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *Address) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: Address: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: Address: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field ObjectID", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.ObjectID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field CID", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.CID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipTypes(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipTypes(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupTypes
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthTypes = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowTypes = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupTypes = fmt.Errorf("proto: unexpected end of group")
|
||||
)
15	refs/types.proto	Normal file
@@ -0,0 +1,15 @@
syntax = "proto3";
package refs;
option go_package = "github.com/nspcc-dev/neofs-proto/refs";

import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

option (gogoproto.stringer_all) = false;
option (gogoproto.goproto_stringer_all) = false;

message Address {
    bytes ObjectID = 1 [(gogoproto.customtype) = "ObjectID", (gogoproto.nullable) = false]; // UUID
    bytes CID      = 2 [(gogoproto.customtype) = "CID", (gogoproto.nullable) = false];      // sha256
}
112	refs/types_test.go	Normal file
@@ -0,0 +1,112 @@
package refs
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/gogo/protobuf/proto"
|
||||
"github.com/google/uuid"
|
||||
"github.com/nspcc-dev/neofs-crypto/test"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestSGID(t *testing.T) {
|
||||
t.Run("check that marshal/unmarshal works like expected", func(t *testing.T) {
|
||||
var sgid1, sgid2 UUID
|
||||
|
||||
sgid1, err := NewSGID()
|
||||
require.NoError(t, err)
|
||||
|
||||
data, err := proto.Marshal(&sgid1)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NoError(t, sgid2.Unmarshal(data))
|
||||
require.Equal(t, sgid1, sgid2)
|
||||
})
|
||||
}
|
||||
|
||||
func TestUUID(t *testing.T) {
|
||||
t.Run("parse should work like expected", func(t *testing.T) {
|
||||
var u UUID
|
||||
|
||||
id, err := uuid.NewRandom()
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NoError(t, u.Parse(id.String()))
|
||||
require.Equal(t, id.String(), u.String())
|
||||
})
|
||||
|
||||
t.Run("check that marshal/unmarshal works like expected", func(t *testing.T) {
|
||||
var u1, u2 UUID
|
||||
|
||||
u1 = UUID{0x8f, 0xe4, 0xeb, 0xa0, 0xb8, 0xfb, 0x49, 0x3b, 0xbb, 0x1d, 0x1d, 0x13, 0x6e, 0x69, 0xfc, 0xf7}
|
||||
|
||||
data, err := proto.Marshal(&u1)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NoError(t, u2.Unmarshal(data))
|
||||
require.Equal(t, u1, u2)
|
||||
})
|
||||
|
||||
t.Run("check that marshal/unmarshal works like expected even for msg id", func(t *testing.T) {
|
||||
var u2 MessageID
|
||||
|
||||
u1, err := NewMessageID()
|
||||
require.NoError(t, err)
|
||||
|
||||
data, err := proto.Marshal(&u1)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NoError(t, u2.Unmarshal(data))
|
||||
require.Equal(t, u1, u2)
|
||||
})
|
||||
}
|
||||
|
||||
func TestOwnerID(t *testing.T) {
|
||||
t.Run("check that marshal/unmarshal works like expected", func(t *testing.T) {
|
||||
var u1, u2 OwnerID
|
||||
|
||||
owner, err := NewOwnerID()
|
||||
require.NoError(t, err)
|
||||
require.True(t, owner.Empty())
|
||||
|
||||
key := test.DecodeKey(0)
|
||||
|
||||
u1, err = NewOwnerID(&key.PublicKey)
|
||||
require.NoError(t, err)
|
||||
data, err := proto.Marshal(&u1)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.NoError(t, u2.Unmarshal(data))
|
||||
require.Equal(t, u1, u2)
|
||||
})
|
||||
}
|
||||
|
||||
func TestAddress(t *testing.T) {
|
||||
cid := CIDForBytes([]byte("test"))
|
||||
|
||||
id, err := NewObjectID()
|
||||
require.NoError(t, err)
|
||||
|
||||
expect := strings.Join([]string{
|
||||
cid.String(),
|
||||
id.String(),
|
||||
}, joinSeparator)
|
||||
|
||||
require.NotPanics(t, func() {
|
||||
actual := (Address{
|
||||
ObjectID: id,
|
||||
CID: cid,
|
||||
}).String()
|
||||
|
||||
require.Equal(t, expect, actual)
|
||||
})
|
||||
|
||||
var temp Address
|
||||
require.NoError(t, temp.Parse(expect))
|
||||
require.Equal(t, expect, temp.String())
|
||||
|
||||
actual, err := ParseAddress(expect)
|
||||
require.NoError(t, err)
|
||||
require.Equal(t, expect, actual.String())
|
||||
}
76	refs/uuid.go	Normal file
@@ -0,0 +1,76 @@
package refs
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/hex"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
func encodeHex(dst []byte, uuid UUID) {
|
||||
hex.Encode(dst, uuid[:4])
|
||||
dst[8] = '-'
|
||||
hex.Encode(dst[9:13], uuid[4:6])
|
||||
dst[13] = '-'
|
||||
hex.Encode(dst[14:18], uuid[6:8])
|
||||
dst[18] = '-'
|
||||
hex.Encode(dst[19:23], uuid[8:10])
|
||||
dst[23] = '-'
|
||||
hex.Encode(dst[24:], uuid[10:])
|
||||
}
|
||||
|
||||
// Size returns size in bytes of UUID (UUIDSize).
|
||||
func (UUID) Size() int { return UUIDSize }
|
||||
|
||||
// Empty checks that current UUID is empty value.
|
||||
func (u UUID) Empty() bool { return bytes.Equal(u.Bytes(), emptyUUID) }
|
||||
|
||||
// Reset sets current UUID to empty value.
|
||||
func (u *UUID) Reset() { *u = [UUIDSize]byte{} }
|
||||
|
||||
// ProtoMessage method to satisfy proto.Message.
|
||||
func (UUID) ProtoMessage() {}
|
||||
|
||||
// Marshal returns UUID bytes representation.
|
||||
func (u UUID) Marshal() ([]byte, error) { return u.Bytes(), nil }
|
||||
|
||||
// MarshalTo returns UUID bytes representation.
|
||||
func (u UUID) MarshalTo(data []byte) (int, error) { return copy(data, u[:]), nil }
|
||||
|
||||
// Bytes returns UUID bytes representation.
|
||||
func (u UUID) Bytes() []byte {
|
||||
buf := make([]byte, UUIDSize)
|
||||
copy(buf, u[:])
|
||||
return buf
|
||||
}
|
||||
|
||||
// Equal checks that current UUID is equal to passed UUID.
|
||||
func (u UUID) Equal(u2 UUID) bool { return bytes.Equal(u.Bytes(), u2.Bytes()) }
|
||||
|
||||
func (u UUID) String() string {
|
||||
var buf [36]byte
|
||||
encodeHex(buf[:], u)
|
||||
return string(buf[:])
|
||||
}
|
||||
|
||||
// Unmarshal tries to parse UUID bytes representation.
|
||||
func (u *UUID) Unmarshal(data []byte) error {
|
||||
if ln := len(data); ln != UUIDSize {
|
||||
return errors.Wrapf(ErrWrongDataSize, "expect=%d, actual=%d", UUIDSize, ln)
|
||||
}
|
||||
|
||||
copy((*u)[:], data)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Parse tries to parse UUID string representation.
|
||||
func (u *UUID) Parse(id string) error {
|
||||
tmp, err := uuid.Parse(id)
|
||||
if err != nil {
|
||||
return errors.Wrapf(err, "could not parse `%s`", id)
|
||||
}
|
||||
|
||||
copy((*u)[:], tmp[:])
|
||||
return nil
|
||||
}
7	service/epoch.go	Normal file
@@ -0,0 +1,7 @@
package service

// EpochRequest interface makes it possible to get or set the epoch in RPC requests.
type EpochRequest interface {
	GetEpoch() uint64
	SetEpoch(v uint64)
}
24	service/role.go	Normal file
@@ -0,0 +1,24 @@
package service

// NodeRole to identify the node type in the Bootstrap service.
type NodeRole int32

const (
	_ NodeRole = iota
	// InnerRingNode is a node that works as an IR node.
	InnerRingNode
	// StorageNode is a node that works as a storage node.
	StorageNode
)

// String represents NodeRole as a string.
func (nt NodeRole) String() string {
	switch nt {
	case InnerRingNode:
		return "InnerRingNode"
	case StorageNode:
		return "StorageNode"
	default:
		return "Unknown"
	}
}
22	service/role_test.go	Normal file
@@ -0,0 +1,22 @@
package service
|
||||
|
||||
import (
|
||||
"github.com/stretchr/testify/require"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func TestNodeRole_String(t *testing.T) {
|
||||
tests := []struct {
|
||||
nt NodeRole
|
||||
want string
|
||||
}{
|
||||
{want: "Unknown"},
|
||||
{nt: StorageNode, want: "StorageNode"},
|
||||
{nt: InnerRingNode, want: "InnerRingNode"},
|
||||
}
|
||||
for _, tt := range tests {
|
||||
t.Run(tt.want, func(t *testing.T) {
|
||||
require.Equal(t, tt.want, tt.nt.String())
|
||||
})
|
||||
}
|
||||
}
47	service/sign.go	Normal file
@@ -0,0 +1,47 @@
package service

import (
	"crypto/ecdsa"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/internal"
	"github.com/pkg/errors"
)

// ErrWrongSignature should be raised when a wrong signature is passed into VerifyRequest.
const ErrWrongSignature = internal.Error("wrong signature")

// SignedRequest interface allows signing and verifying requests.
type SignedRequest interface {
	PrepareData() ([]byte, error)
	GetSignature() []byte
	SetSignature([]byte)
}

// SignRequest signs the request with the passed private key.
func SignRequest(r SignedRequest, key *ecdsa.PrivateKey) error {
	var signature []byte
	if data, err := r.PrepareData(); err != nil {
		return err
	} else if signature, err = crypto.Sign(key, data); err != nil {
		return errors.Wrap(err, "could not sign data")
	}

	r.SetSignature(signature)

	return nil
}

// VerifyRequest verifies the request signature against the passed public keys.
func VerifyRequest(r SignedRequest, keys ...*ecdsa.PublicKey) bool {
	data, err := r.PrepareData()
	if err != nil {
		return false
	}
	for i := range keys {
		if err := crypto.Verify(keys[i], data, r.GetSignature()); err == nil {
			return true
		}
	}
	return false
}
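A sketch of the sign/verify helpers above (not part of the commit); pingRequest is a made-up type used only to satisfy SignedRequest, and the key is a freshly generated P256 key.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"

	"github.com/nspcc-dev/neofs-proto/service"
)

// pingRequest is a hypothetical request: the signed payload is just its body.
type pingRequest struct {
	body      []byte
	signature []byte
}

func (p *pingRequest) PrepareData() ([]byte, error) { return p.body, nil }
func (p *pingRequest) GetSignature() []byte         { return p.signature }
func (p *pingRequest) SetSignature(sig []byte)      { p.signature = sig }

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	req := &pingRequest{body: []byte("ping")}
	if err := service.SignRequest(req, key); err != nil {
		panic(err)
	}

	// Verification succeeds if any of the passed public keys matches.
	fmt.Println(service.VerifyRequest(req, &key.PublicKey)) // true
}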
45	service/ttl.go	Normal file
@@ -0,0 +1,45 @@
package service

import (
	"github.com/nspcc-dev/neofs-proto/internal"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// TTLRequest is an interface to verify and update the TTL of requests.
type TTLRequest interface {
	GetTTL() uint32
	SetTTL(uint32)
}

const (
	// ZeroTTL is an empty ttl, should produce ErrZeroTTL.
	ZeroTTL = iota

	// NonForwardingTTL is a ttl that allows direct connections only.
	NonForwardingTTL

	// SingleForwardingTTL is a ttl that allows connections through another node.
	SingleForwardingTTL

	// ErrZeroTTL is raised when zero ttl is passed.
	ErrZeroTTL = internal.Error("zero ttl")

	// ErrIncorrectTTL is raised when NonForwardingTTL is passed and NodeRole != InnerRingNode.
	ErrIncorrectTTL = internal.Error("incorrect ttl")
)

// CheckTTLRequest validates and updates the TTL of the request.
func CheckTTLRequest(req TTLRequest, role NodeRole) error {
	var ttl = req.GetTTL()

	if ttl == ZeroTTL {
		return status.New(codes.InvalidArgument, ErrZeroTTL.Error()).Err()
	} else if ttl == NonForwardingTTL && role != InnerRingNode {
		return status.New(codes.InvalidArgument, ErrIncorrectTTL.Error()).Err()
	}

	req.SetTTL(ttl - 1)

	return nil
}
72	service/ttl_test.go	Normal file
@@ -0,0 +1,72 @@
package service
|
||||
|
||||
import (
|
||||
"github.com/stretchr/testify/require"
|
||||
"google.golang.org/grpc/codes"
|
||||
"google.golang.org/grpc/status"
|
||||
"testing"
|
||||
)
|
||||
|
||||
type mockedRequest struct {
|
||||
msg string
|
||||
ttl uint32
|
||||
name string
|
||||
role NodeRole
|
||||
code codes.Code
|
||||
}
|
||||
|
||||
func (m *mockedRequest) SetTTL(v uint32) { m.ttl = v }
|
||||
func (m mockedRequest) GetTTL() uint32 { return m.ttl }
|
||||
|
||||
func TestCheckTTLRequest(t *testing.T) {
|
||||
tests := []mockedRequest{
|
||||
{
|
||||
ttl: NonForwardingTTL,
|
||||
role: InnerRingNode,
|
||||
name: "direct to ir node",
|
||||
},
|
||||
{
|
||||
ttl: NonForwardingTTL,
|
||||
role: StorageNode,
|
||||
code: codes.InvalidArgument,
|
||||
msg: ErrIncorrectTTL.Error(),
|
||||
name: "direct to storage node",
|
||||
},
|
||||
{
|
||||
ttl: ZeroTTL,
|
||||
role: StorageNode,
|
||||
msg: ErrZeroTTL.Error(),
|
||||
code: codes.InvalidArgument,
|
||||
name: "zero ttl",
|
||||
},
|
||||
{
|
||||
ttl: SingleForwardingTTL,
|
||||
role: InnerRingNode,
|
||||
name: "default to ir node",
|
||||
},
|
||||
{
|
||||
ttl: SingleForwardingTTL,
|
||||
role: StorageNode,
|
||||
name: "default to storage node",
|
||||
},
|
||||
}
|
||||
|
||||
for i := range tests {
|
||||
tt := tests[i]
|
||||
t.Run(tt.name, func(t *testing.T) {
|
||||
before := tt.ttl
|
||||
err := CheckTTLRequest(&tt, tt.role)
|
||||
if tt.msg != "" {
|
||||
require.Errorf(t, err, tt.msg)
|
||||
|
||||
state, ok := status.FromError(err)
|
||||
require.True(t, ok)
|
||||
require.Equal(t, state.Code(), tt.code)
|
||||
require.Equal(t, state.Message(), tt.msg)
|
||||
} else {
|
||||
require.NoError(t, err)
|
||||
require.NotEqualf(t, before, tt.ttl, "ttl should be changed: %d vs %d", before, tt.ttl)
|
||||
}
|
||||
})
|
||||
}
|
||||
}
57	session/service.go	Normal file
@@ -0,0 +1,57 @@
package session

import (
	"context"
	"crypto/ecdsa"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/refs"
)

type (
	// KeyStore is an interface that describes a storage
	// that allows fetching public keys by OwnerID.
	KeyStore interface {
		Get(ctx context.Context, id refs.OwnerID) ([]*ecdsa.PublicKey, error)
	}

	// TokenStore is a PToken storage manipulation interface.
	TokenStore interface {
		// New returns new token with specified parameters.
		New(p TokenParams) *PToken

		// Fetch tries to fetch a token with specified id.
		Fetch(id TokenID) *PToken

		// Remove removes token with id from store.
		Remove(id TokenID)
	}

	// TokenParams contains params to create new PToken.
	TokenParams struct {
		FirstEpoch uint64
		LastEpoch  uint64
		ObjectID   []ObjectID
		OwnerID    OwnerID
	}
)

// NewInitRequest returns new initialization CreateRequest from passed Token.
func NewInitRequest(t *Token) *CreateRequest {
	return &CreateRequest{Message: &CreateRequest_Init{Init: t}}
}

// NewSignedRequest returns new signed CreateRequest from passed Token.
func NewSignedRequest(t *Token) *CreateRequest {
	return &CreateRequest{Message: &CreateRequest_Signed{Signed: t}}
}

// Sign signs contents of the header with the private key.
func (m *VerificationHeader) Sign(key *ecdsa.PrivateKey) error {
	s, err := crypto.Sign(key, m.PublicKey)
	if err != nil {
		return err
	}
	m.KeySignature = s
	return nil
}
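A hedged sketch (not part of the commit) of the session helpers above; it assumes crypto.MarshalPublicKey from neofs-crypto and uses an empty Token, since the Token fields are defined elsewhere in this package.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"

	crypto "github.com/nspcc-dev/neofs-crypto"
	"github.com/nspcc-dev/neofs-proto/session"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}

	// The header carries the marshaled session public key (assumption:
	// crypto.MarshalPublicKey exists); Sign fills KeySignature over those bytes.
	hdr := session.VerificationHeader{PublicKey: crypto.MarshalPublicKey(&key.PublicKey)}
	if err := hdr.Sign(key); err != nil {
		panic(err)
	}

	// A Token would normally be produced by a TokenStore; an empty one is
	// enough to show how the oneof wrappers of CreateRequest are built.
	var t session.Token
	_ = session.NewInitRequest(&t)
	_ = session.NewSignedRequest(&t)
}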
922	session/service.pb.go	Normal file
@@ -0,0 +1,922 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: session/service.proto
|
||||
|
||||
package session
|
||||
|
||||
import (
|
||||
context "context"
|
||||
fmt "fmt"
|
||||
_ "github.com/gogo/protobuf/gogoproto"
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
grpc "google.golang.org/grpc"
|
||||
codes "google.golang.org/grpc/codes"
|
||||
status "google.golang.org/grpc/status"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
type CreateRequest struct {
|
||||
// Types that are valid to be assigned to Message:
|
||||
// *CreateRequest_Init
|
||||
// *CreateRequest_Signed
|
||||
Message isCreateRequest_Message `protobuf_oneof:"Message"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *CreateRequest) Reset() { *m = CreateRequest{} }
|
||||
func (m *CreateRequest) String() string { return proto.CompactTextString(m) }
|
||||
func (*CreateRequest) ProtoMessage() {}
|
||||
func (*CreateRequest) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_b329bee0fd1148e0, []int{0}
|
||||
}
|
||||
func (m *CreateRequest) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *CreateRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *CreateRequest) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_CreateRequest.Merge(m, src)
|
||||
}
|
||||
func (m *CreateRequest) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *CreateRequest) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_CreateRequest.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_CreateRequest proto.InternalMessageInfo
|
||||
|
||||
type isCreateRequest_Message interface {
|
||||
isCreateRequest_Message()
|
||||
MarshalTo([]byte) (int, error)
|
||||
Size() int
|
||||
}
|
||||
|
||||
type CreateRequest_Init struct {
|
||||
Init *Token `protobuf:"bytes,1,opt,name=Init,proto3,oneof" json:"Init,omitempty"`
|
||||
}
|
||||
type CreateRequest_Signed struct {
|
||||
Signed *Token `protobuf:"bytes,2,opt,name=Signed,proto3,oneof" json:"Signed,omitempty"`
|
||||
}
|
||||
|
||||
func (*CreateRequest_Init) isCreateRequest_Message() {}
|
||||
func (*CreateRequest_Signed) isCreateRequest_Message() {}
|
||||
|
||||
func (m *CreateRequest) GetMessage() isCreateRequest_Message {
|
||||
if m != nil {
|
||||
return m.Message
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *CreateRequest) GetInit() *Token {
|
||||
if x, ok := m.GetMessage().(*CreateRequest_Init); ok {
|
||||
return x.Init
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *CreateRequest) GetSigned() *Token {
|
||||
if x, ok := m.GetMessage().(*CreateRequest_Signed); ok {
|
||||
return x.Signed
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// XXX_OneofWrappers is for the internal use of the proto package.
|
||||
func (*CreateRequest) XXX_OneofWrappers() []interface{} {
|
||||
return []interface{}{
|
||||
(*CreateRequest_Init)(nil),
|
||||
(*CreateRequest_Signed)(nil),
|
||||
}
|
||||
}
|
||||
|
||||
type CreateResponse struct {
|
||||
// Types that are valid to be assigned to Message:
|
||||
// *CreateResponse_Unsigned
|
||||
// *CreateResponse_Result
|
||||
Message isCreateResponse_Message `protobuf_oneof:"Message"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *CreateResponse) Reset() { *m = CreateResponse{} }
|
||||
func (m *CreateResponse) String() string { return proto.CompactTextString(m) }
|
||||
func (*CreateResponse) ProtoMessage() {}
|
||||
func (*CreateResponse) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_b329bee0fd1148e0, []int{1}
|
||||
}
|
||||
func (m *CreateResponse) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *CreateResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *CreateResponse) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_CreateResponse.Merge(m, src)
|
||||
}
|
||||
func (m *CreateResponse) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *CreateResponse) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_CreateResponse.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_CreateResponse proto.InternalMessageInfo
|
||||
|
||||
type isCreateResponse_Message interface {
|
||||
isCreateResponse_Message()
|
||||
MarshalTo([]byte) (int, error)
|
||||
Size() int
|
||||
}
|
||||
|
||||
type CreateResponse_Unsigned struct {
|
||||
Unsigned *Token `protobuf:"bytes,1,opt,name=Unsigned,proto3,oneof" json:"Unsigned,omitempty"`
|
||||
}
|
||||
type CreateResponse_Result struct {
|
||||
Result *Token `protobuf:"bytes,2,opt,name=Result,proto3,oneof" json:"Result,omitempty"`
|
||||
}
|
||||
|
||||
func (*CreateResponse_Unsigned) isCreateResponse_Message() {}
|
||||
func (*CreateResponse_Result) isCreateResponse_Message() {}
|
||||
|
||||
func (m *CreateResponse) GetMessage() isCreateResponse_Message {
|
||||
if m != nil {
|
||||
return m.Message
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *CreateResponse) GetUnsigned() *Token {
|
||||
if x, ok := m.GetMessage().(*CreateResponse_Unsigned); ok {
|
||||
return x.Unsigned
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *CreateResponse) GetResult() *Token {
|
||||
if x, ok := m.GetMessage().(*CreateResponse_Result); ok {
|
||||
return x.Result
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// XXX_OneofWrappers is for the internal use of the proto package.
|
||||
func (*CreateResponse) XXX_OneofWrappers() []interface{} {
|
||||
return []interface{}{
|
||||
(*CreateResponse_Unsigned)(nil),
|
||||
(*CreateResponse_Result)(nil),
|
||||
}
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*CreateRequest)(nil), "session.CreateRequest")
|
||||
proto.RegisterType((*CreateResponse)(nil), "session.CreateResponse")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("session/service.proto", fileDescriptor_b329bee0fd1148e0) }
|
||||
|
||||
var fileDescriptor_b329bee0fd1148e0 = []byte{
|
||||
// 284 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x90, 0xbd, 0x4e, 0xc3, 0x30,
|
||||
0x10, 0xc7, 0x6b, 0x84, 0x12, 0x30, 0xa2, 0x83, 0x11, 0x50, 0x65, 0xb0, 0x50, 0xc5, 0x90, 0x81,
|
||||
0x24, 0xa8, 0xcc, 0x30, 0x94, 0xa5, 0x0c, 0x2c, 0x29, 0x2c, 0x6c, 0x4d, 0x7a, 0x35, 0xe6, 0xc3,
|
||||
0x0e, 0x39, 0xa7, 0x12, 0x6f, 0xc2, 0x23, 0x31, 0x32, 0x32, 0xa2, 0xf0, 0x22, 0x08, 0x3b, 0xad,
|
||||
0x82, 0x50, 0x36, 0xff, 0x3f, 0x7c, 0x3f, 0xfb, 0xe8, 0x3e, 0x02, 0xa2, 0xd4, 0x2a, 0x41, 0x28,
|
||||
0x97, 0x32, 0x87, 0xb8, 0x28, 0xb5, 0xd1, 0xcc, 0x6f, 0xec, 0x60, 0x6f, 0x95, 0x9b, 0xd7, 0x02,
|
||||
0xd0, 0xa5, 0x41, 0x24, 0xa4, 0xb9, 0xaf, 0xb2, 0x38, 0xd7, 0xcf, 0x89, 0xd0, 0x42, 0x27, 0xd6,
|
||||
0xce, 0xaa, 0x85, 0x55, 0x56, 0xd8, 0x93, 0xab, 0x0f, 0x1f, 0xe8, 0xee, 0x65, 0x09, 0x33, 0x03,
|
||||
0x29, 0xbc, 0x54, 0x80, 0x86, 0x1d, 0xd3, 0xcd, 0x2b, 0x25, 0xcd, 0x80, 0x1c, 0x91, 0x70, 0x67,
|
||||
0xd4, 0x8f, 0x1b, 0x46, 0x7c, 0xa3, 0x1f, 0x41, 0x4d, 0x7a, 0xa9, 0x4d, 0x59, 0x48, 0xbd, 0xa9,
|
||||
0x14, 0x0a, 0xe6, 0x83, 0x8d, 0x8e, 0x5e, 0x93, 0x8f, 0xb7, 0xa9, 0x7f, 0x0d, 0x88, 0x33, 0x01,
|
||||
0x43, 0xa4, 0xfd, 0x15, 0x0b, 0x0b, 0xad, 0x10, 0xd8, 0x09, 0xdd, 0xba, 0x55, 0xe8, 0x06, 0x75,
|
||||
0x01, 0xd7, 0x8d, 0x5f, 0x68, 0x0a, 0x58, 0x3d, 0x99, 0x6e, 0xa8, 0xcb, 0x5b, 0xd0, 0xd1, 0x84,
|
||||
0xfa, 0x53, 0xd7, 0x62, 0xe7, 0xd4, 0x73, 0x7c, 0x76, 0xb0, 0xbe, 0xf9, 0xe7, 0xf3, 0xc1, 0xe1,
|
||||
0x3f, 0xdf, 0x3d, 0x34, 0x24, 0xa7, 0x64, 0x7c, 0xf1, 0x5e, 0x73, 0xf2, 0x51, 0x73, 0xf2, 0x59,
|
||||
0x73, 0xf2, 0x55, 0x73, 0xf2, 0xf6, 0xcd, 0x7b, 0x77, 0x61, 0x6b, 0xdf, 0x0a, 0x8b, 0x3c, 0x8f,
|
||||
0xe6, 0xb0, 0x4c, 0x14, 0xe8, 0x05, 0x46, 0x6e, 0xdb, 0xcd, 0xc8, 0xcc, 0xb3, 0xf2, 0xec, 0x27,
|
||||
0x00, 0x00, 0xff, 0xff, 0x74, 0x3d, 0x2a, 0x06, 0xd7, 0x01, 0x00, 0x00,
|
||||
}
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ context.Context
|
||||
var _ grpc.ClientConn
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the grpc package it is being compiled against.
|
||||
const _ = grpc.SupportPackageIsVersion4
|
||||
|
||||
// SessionClient is the client API for Session service.
|
||||
//
|
||||
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
|
||||
type SessionClient interface {
|
||||
Create(ctx context.Context, opts ...grpc.CallOption) (Session_CreateClient, error)
|
||||
}
|
||||
|
||||
type sessionClient struct {
|
||||
cc *grpc.ClientConn
|
||||
}
|
||||
|
||||
func NewSessionClient(cc *grpc.ClientConn) SessionClient {
|
||||
return &sessionClient{cc}
|
||||
}
|
||||
|
||||
func (c *sessionClient) Create(ctx context.Context, opts ...grpc.CallOption) (Session_CreateClient, error) {
|
||||
stream, err := c.cc.NewStream(ctx, &_Session_serviceDesc.Streams[0], "/session.Session/Create", opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
x := &sessionCreateClient{stream}
|
||||
return x, nil
|
||||
}
|
||||
|
||||
type Session_CreateClient interface {
|
||||
Send(*CreateRequest) error
|
||||
Recv() (*CreateResponse, error)
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
type sessionCreateClient struct {
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
func (x *sessionCreateClient) Send(m *CreateRequest) error {
|
||||
return x.ClientStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func (x *sessionCreateClient) Recv() (*CreateResponse, error) {
|
||||
m := new(CreateResponse)
|
||||
if err := x.ClientStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
// SessionServer is the server API for Session service.
|
||||
type SessionServer interface {
|
||||
Create(Session_CreateServer) error
|
||||
}
|
||||
|
||||
// UnimplementedSessionServer can be embedded to have forward compatible implementations.
|
||||
type UnimplementedSessionServer struct {
|
||||
}
|
||||
|
||||
func (*UnimplementedSessionServer) Create(srv Session_CreateServer) error {
|
||||
return status.Errorf(codes.Unimplemented, "method Create not implemented")
|
||||
}
|
||||
|
||||
func RegisterSessionServer(s *grpc.Server, srv SessionServer) {
|
||||
s.RegisterService(&_Session_serviceDesc, srv)
|
||||
}
|
||||
|
||||
func _Session_Create_Handler(srv interface{}, stream grpc.ServerStream) error {
|
||||
return srv.(SessionServer).Create(&sessionCreateServer{stream})
|
||||
}
|
||||
|
||||
type Session_CreateServer interface {
|
||||
Send(*CreateResponse) error
|
||||
Recv() (*CreateRequest, error)
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
type sessionCreateServer struct {
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
func (x *sessionCreateServer) Send(m *CreateResponse) error {
|
||||
return x.ServerStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func (x *sessionCreateServer) Recv() (*CreateRequest, error) {
|
||||
m := new(CreateRequest)
|
||||
if err := x.ServerStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
var _Session_serviceDesc = grpc.ServiceDesc{
|
||||
ServiceName: "session.Session",
|
||||
HandlerType: (*SessionServer)(nil),
|
||||
Methods: []grpc.MethodDesc{},
|
||||
Streams: []grpc.StreamDesc{
|
||||
{
|
||||
StreamName: "Create",
|
||||
Handler: _Session_Create_Handler,
|
||||
ServerStreams: true,
|
||||
ClientStreams: true,
|
||||
},
|
||||
},
|
||||
Metadata: "session/service.proto",
|
||||
}
|
||||
|
||||
func (m *CreateRequest) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *CreateRequest) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *CreateRequest) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if m.Message != nil {
|
||||
{
|
||||
size := m.Message.Size()
|
||||
i -= size
|
||||
if _, err := m.Message.MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func (m *CreateRequest_Init) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *CreateRequest_Init) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
if m.Init != nil {
|
||||
{
|
||||
size, err := m.Init.MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintService(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
func (m *CreateRequest_Signed) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *CreateRequest_Signed) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
if m.Signed != nil {
|
||||
{
|
||||
size, err := m.Signed.MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintService(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
func (m *CreateResponse) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *CreateResponse) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *CreateResponse) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if m.Message != nil {
|
||||
{
|
||||
size := m.Message.Size()
|
||||
i -= size
|
||||
if _, err := m.Message.MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func (m *CreateResponse_Unsigned) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *CreateResponse_Unsigned) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
if m.Unsigned != nil {
|
||||
{
|
||||
size, err := m.Unsigned.MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintService(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
func (m *CreateResponse_Result) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *CreateResponse_Result) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
if m.Result != nil {
|
||||
{
|
||||
size, err := m.Result.MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintService(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
func encodeVarintService(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovService(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *CreateRequest) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Message != nil {
|
||||
n += m.Message.Size()
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (m *CreateRequest_Init) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Init != nil {
|
||||
l = m.Init.Size()
|
||||
n += 1 + l + sovService(uint64(l))
|
||||
}
|
||||
return n
|
||||
}
|
||||
func (m *CreateRequest_Signed) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Signed != nil {
|
||||
l = m.Signed.Size()
|
||||
n += 1 + l + sovService(uint64(l))
|
||||
}
|
||||
return n
|
||||
}
|
||||
func (m *CreateResponse) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Message != nil {
|
||||
n += m.Message.Size()
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (m *CreateResponse_Unsigned) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Unsigned != nil {
|
||||
l = m.Unsigned.Size()
|
||||
n += 1 + l + sovService(uint64(l))
|
||||
}
|
||||
return n
|
||||
}
|
||||
func (m *CreateResponse_Result) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
if m.Result != nil {
|
||||
l = m.Result.Size()
|
||||
n += 1 + l + sovService(uint64(l))
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovService(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozService(x uint64) (n int) {
|
||||
return sovService(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *CreateRequest) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: CreateRequest: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: CreateRequest: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Init", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
v := &Token{}
|
||||
if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
m.Message = &CreateRequest_Init{v}
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Signed", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
v := &Token{}
|
||||
if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
m.Message = &CreateRequest_Signed{v}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipService(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func (m *CreateResponse) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: CreateResponse: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: CreateResponse: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Unsigned", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
v := &Token{}
|
||||
if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
m.Message = &CreateResponse_Unsigned{v}
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Result", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
v := &Token{}
|
||||
if err := v.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
m.Message = &CreateResponse_Result{v}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipService(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthService
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipService(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowService
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthService
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupService
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthService
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthService = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowService = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupService = fmt.Errorf("proto: unexpected end of group")
|
||||
)
|
27  session/service.proto  Normal file
@@ -0,0 +1,27 @@
syntax = "proto3";
package session;
option go_package = "github.com/nspcc-dev/neofs-proto/session";

import "session/types.proto";
import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

service Session {
    rpc Create (stream CreateRequest) returns (stream CreateResponse);
}


message CreateRequest {
    oneof Message {
        session.Token Init = 1;
        session.Token Signed = 2;
    }
}

message CreateResponse {
    oneof Message {
        session.Token Unsigned = 1;
        session.Token Result = 2;
    }
}
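The Session service above exposes Create as a bidirectional stream, and the generated SessionClient in session/service.pb.go gives Send/Recv access to it. Below is a minimal client-side sketch, assuming the Init -> Unsigned -> sign -> Signed -> Result exchange suggested by the oneof field names; the exact handshake order and the helper name createSession are illustrative assumptions, not part of this change.

package example

import (
    "context"
    "crypto/ecdsa"
    "errors"

    "github.com/nspcc-dev/neofs-proto/session"
    "google.golang.org/grpc"
)

// createSession drives the bidirectional Create stream:
// Init -> Unsigned -> (sign locally) -> Signed -> Result.
func createSession(ctx context.Context, cc *grpc.ClientConn, t *session.Token, key *ecdsa.PrivateKey) (*session.Token, error) {
    stream, err := session.NewSessionClient(cc).Create(ctx)
    if err != nil {
        return nil, err
    }

    // 1. Send the initial token proposal.
    if err := stream.Send(&session.CreateRequest{
        Message: &session.CreateRequest_Init{Init: t},
    }); err != nil {
        return nil, err
    }

    // 2. Receive the unsigned token issued by the server.
    resp, err := stream.Recv()
    if err != nil {
        return nil, err
    }
    unsigned := resp.GetUnsigned()
    if unsigned == nil {
        return nil, errors.New("expected Unsigned message")
    }

    // 3. Sign it with the owner's key and send it back.
    if err := unsigned.Sign(key); err != nil {
        return nil, err
    }
    if err := stream.Send(&session.CreateRequest{
        Message: &session.CreateRequest_Signed{Signed: unsigned},
    }); err != nil {
        return nil, err
    }

    // 4. The final response carries the established session token.
    resp, err = stream.Recv()
    if err != nil {
        return nil, err
    }
    return resp.GetResult(), nil
}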
81  session/store.go  Normal file
@@ -0,0 +1,81 @@
package session

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "sync"

    crypto "github.com/nspcc-dev/neofs-crypto"
    "github.com/nspcc-dev/neofs-proto/refs"
)

type simpleStore struct {
    *sync.RWMutex

    tokens map[TokenID]*PToken
}

// TODO get curve from neofs-crypto
func defaultCurve() elliptic.Curve {
    return elliptic.P256()
}

// NewSimpleStore creates a simple in-memory token storage.
func NewSimpleStore() TokenStore {
    return &simpleStore{
        RWMutex: new(sync.RWMutex),
        tokens:  make(map[TokenID]*PToken),
    }
}

// New returns a new token with the specified parameters,
// or nil if the parameters are invalid or key generation fails.
func (s *simpleStore) New(p TokenParams) *PToken {
    tid, err := refs.NewUUID()
    if err != nil {
        return nil
    }

    key, err := ecdsa.GenerateKey(defaultCurve(), rand.Reader)
    if err != nil {
        return nil
    }

    if p.FirstEpoch > p.LastEpoch || p.OwnerID.Empty() {
        return nil
    }

    t := &PToken{
        mtx: new(sync.Mutex),
        Token: Token{
            ID:         tid,
            Header:     VerificationHeader{PublicKey: crypto.MarshalPublicKey(&key.PublicKey)},
            FirstEpoch: p.FirstEpoch,
            LastEpoch:  p.LastEpoch,
            ObjectID:   p.ObjectID,
            OwnerID:    p.OwnerID,
        },
        PrivateKey: key,
    }

    s.Lock()
    s.tokens[t.ID] = t
    s.Unlock()

    return t
}

// Fetch tries to fetch a token with the specified id.
func (s *simpleStore) Fetch(id TokenID) *PToken {
    s.RLock()
    defer s.RUnlock()

    return s.tokens[id]
}

// Remove removes the token with the given id from the store.
func (s *simpleStore) Remove(id TokenID) {
    s.Lock()
    delete(s.tokens, id)
    s.Unlock()
}
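A short usage sketch for the store above; the function name storeExample and the epoch values are placeholders, and the OwnerID/ObjectID arguments are assumed to come from the refs package as in the test file that follows.

package example

import (
    "fmt"

    "github.com/nspcc-dev/neofs-proto/refs"
    "github.com/nspcc-dev/neofs-proto/session"
)

func storeExample(owner refs.OwnerID, oid refs.ObjectID) error {
    store := session.NewSimpleStore()

    // New generates a fresh session key pair and registers the token.
    tok := store.New(session.TokenParams{
        FirstEpoch: 1,
        LastEpoch:  10,
        OwnerID:    owner,
        ObjectID:   []session.ObjectID{oid},
    })
    if tok == nil {
        return fmt.Errorf("invalid token parameters")
    }

    // The token can later be looked up by its ID and removed when the session ends.
    if store.Fetch(tok.ID) == nil {
        return fmt.Errorf("token not found")
    }
    store.Remove(tok.ID)
    return nil
}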
84  session/store_test.go  Normal file
@@ -0,0 +1,84 @@
package session

import (
    "crypto/ecdsa"
    "crypto/rand"
    "testing"

    crypto "github.com/nspcc-dev/neofs-crypto"
    "github.com/nspcc-dev/neofs-proto/refs"
    "github.com/stretchr/testify/require"
)

type testClient struct {
    *ecdsa.PrivateKey
    OwnerID OwnerID
}

func (c *testClient) Sign(data []byte) ([]byte, error) {
    return crypto.Sign(c.PrivateKey, data)
}

func newTestClient(t *testing.T) *testClient {
    key, err := ecdsa.GenerateKey(defaultCurve(), rand.Reader)
    require.NoError(t, err)

    owner, err := refs.NewOwnerID(&key.PublicKey)
    require.NoError(t, err)

    return &testClient{PrivateKey: key, OwnerID: owner}
}

func signToken(t *testing.T, token *PToken, c *testClient) {
    require.NotNil(t, token)

    signH, err := c.Sign(token.Header.PublicKey)
    require.NoError(t, err)
    require.NotNil(t, signH)

    // data is not yet signed
    require.False(t, token.Verify(&c.PublicKey))

    signT, err := c.Sign(token.verificationData())
    require.NoError(t, err)
    require.NotNil(t, signT)

    token.AddSignatures(signH, signT)
    require.True(t, token.Verify(&c.PublicKey))
}

func TestTokenStore(t *testing.T) {
    s := NewSimpleStore()

    oid, err := refs.NewObjectID()
    require.NoError(t, err)

    c := newTestClient(t)
    require.NotNil(t, c)

    // create new token
    token := s.New(TokenParams{ObjectID: []ObjectID{oid}, OwnerID: c.OwnerID})
    signToken(t, token, c)

    // check that it can be fetched
    t1 := s.Fetch(token.ID)
    require.NotNil(t, t1)
    require.Equal(t, token, t1)

    // create and sign another token by the same client
    t1 = s.New(TokenParams{ObjectID: []ObjectID{oid}, OwnerID: c.OwnerID})
    signToken(t, t1, c)

    data := []byte{1, 2, 3}
    sign, err := t1.SignData(data)
    require.NoError(t, err)
    require.Error(t, token.Header.VerifyData(data, sign))

    sign, err = token.SignData(data)
    require.NoError(t, err)
    require.NoError(t, token.Header.VerifyData(data, sign))

    s.Remove(token.ID)
    require.Nil(t, s.Fetch(token.ID))
    require.NotNil(t, s.Fetch(t1.ID))
}
159  session/types.go  Normal file
@@ -0,0 +1,159 @@
package session

import (
    "crypto/ecdsa"
    "encoding/binary"
    "sync"

    crypto "github.com/nspcc-dev/neofs-crypto"
    "github.com/nspcc-dev/neofs-proto/internal"
    "github.com/nspcc-dev/neofs-proto/refs"
    "github.com/pkg/errors"
)

type (
    // ObjectID type alias.
    ObjectID = refs.ObjectID
    // OwnerID type alias.
    OwnerID = refs.OwnerID
    // TokenID type alias.
    TokenID = refs.UUID

    // PToken is a wrapper around Token that keeps the session private key
    // and allows signing data and thread-safe manipulation.
    PToken struct {
        Token

        mtx        *sync.Mutex
        PrivateKey *ecdsa.PrivateKey
    }
)

const (
    // ErrWrongFirstEpoch is raised when the passed Token contains a wrong first epoch.
    // First epoch is the epoch since which the token is valid.
    ErrWrongFirstEpoch = internal.Error("wrong first epoch")

    // ErrWrongLastEpoch is raised when the passed Token contains a wrong last epoch.
    // Last epoch is the epoch until which the token is valid.
    ErrWrongLastEpoch = internal.Error("wrong last epoch")

    // ErrWrongOwner is raised when the passed Token contains a wrong OwnerID.
    ErrWrongOwner = internal.Error("wrong owner")

    // ErrEmptyPublicKey is raised when the passed Token contains no public key.
    ErrEmptyPublicKey = internal.Error("empty public key")

    // ErrWrongObjectsCount is raised when the passed Token contains a wrong number of objects.
    ErrWrongObjectsCount = internal.Error("wrong objects count")

    // ErrWrongObjects is raised when the passed Token contains wrong object ids.
    ErrWrongObjects = internal.Error("wrong objects")

    // ErrInvalidSignature is raised when a wrong signature is passed to VerificationHeader.VerifyData().
    ErrInvalidSignature = internal.Error("invalid signature")
)

// verificationData returns the byte array to sign:
// FirstEpoch and LastEpoch as big-endian uint64 followed by the ObjectIDs.
// Protobuf serialization is not used here because the wire order of fields is unspecified.
func (m *Token) verificationData() (data []byte) {
    var size int
    if l := len(m.ObjectID); l > 0 {
        size = m.ObjectID[0].Size()
        data = make([]byte, 16+l*size)
    } else {
        data = make([]byte, 16)
    }
    binary.BigEndian.PutUint64(data, m.FirstEpoch)
    binary.BigEndian.PutUint64(data[8:], m.LastEpoch)
    for i := range m.ObjectID {
        copy(data[16+i*size:], m.ObjectID[i].Bytes())
    }
    return
}

// IsSame checks if the passed token is valid and equal to the current token.
func (m *Token) IsSame(t *Token) error {
    switch {
    case m.FirstEpoch != t.FirstEpoch:
        return ErrWrongFirstEpoch
    case m.LastEpoch != t.LastEpoch:
        return ErrWrongLastEpoch
    case !m.OwnerID.Equal(t.OwnerID):
        return ErrWrongOwner
    case m.Header.PublicKey == nil:
        return ErrEmptyPublicKey
    case len(m.ObjectID) != len(t.ObjectID):
        return ErrWrongObjectsCount
    default:
        for i := range m.ObjectID {
            if !m.ObjectID[i].Equal(t.ObjectID[i]) {
                return errors.Wrapf(ErrWrongObjects, "expect %s, actual: %s", m.ObjectID[i], t.ObjectID[i])
            }
        }
    }
    return nil
}

// Sign signs the current Token data and stores the signature inside it.
func (m *Token) Sign(key *ecdsa.PrivateKey) error {
    if err := m.Header.Sign(key); err != nil {
        return err
    }

    s, err := crypto.Sign(key, m.verificationData())
    if err != nil {
        return err
    }

    m.Signature = s
    return nil
}

// Verify checks if the token is correct and signed by one of the given keys.
func (m *Token) Verify(keys ...*ecdsa.PublicKey) bool {
    if m.FirstEpoch > m.LastEpoch {
        return false
    }
    for i := range keys {
        if m.Header.Verify(keys[i]) && crypto.Verify(keys[i], m.verificationData(), m.Signature) == nil {
            return true
        }
    }
    return false
}

// AddSignatures sets the header and data signatures of the token in a thread-safe way.
func (t *PToken) AddSignatures(signH, signT []byte) {
    t.mtx.Lock()

    t.Header.KeySignature = signH
    t.Signature = signT

    t.mtx.Unlock()
}

// SignData signs data with the session private key.
func (t *PToken) SignData(data []byte) ([]byte, error) {
    return crypto.Sign(t.PrivateKey, data)
}

// VerifyData checks whether sign is a valid signature of data
// made with the key from the verification header.
func (m *VerificationHeader) VerifyData(data, sign []byte) error {
    if crypto.Verify(crypto.UnmarshalPublicKey(m.PublicKey), data, sign) != nil {
        return ErrInvalidSignature
    }
    return nil
}

// Verify checks if the verification header's public key was signed by one of the given keys.
func (m *VerificationHeader) Verify(keys ...*ecdsa.PublicKey) bool {
    for i := range keys {
        if crypto.Verify(keys[i], m.PublicKey, m.KeySignature) == nil {
            return true
        }
    }
    return false
}
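A compact sketch of the Token sign/verify cycle defined above. It assumes VerificationHeader.Sign (defined elsewhere in the package) signs the header's PublicKey with the given key; the curve choice mirrors defaultCurve() in store.go, and the epoch values are placeholders.

package example

import (
    "crypto/ecdsa"
    "crypto/elliptic"
    "crypto/rand"
    "fmt"

    crypto "github.com/nspcc-dev/neofs-crypto"
    "github.com/nspcc-dev/neofs-proto/session"
)

func signAndVerify() error {
    // Owner key pair; here it doubles as the session key for simplicity.
    key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    if err != nil {
        return err
    }

    t := &session.Token{
        Header:     session.VerificationHeader{PublicKey: crypto.MarshalPublicKey(&key.PublicKey)},
        FirstEpoch: 1,
        LastEpoch:  10,
    }

    // Sign produces both the header key signature and the token data signature.
    if err := t.Sign(key); err != nil {
        return err
    }

    if !t.Verify(&key.PublicKey) {
        return fmt.Errorf("token verification failed")
    }
    return nil
}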
845  session/types.pb.go  Normal file
@@ -0,0 +1,845 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
|
||||
// source: session/types.proto
|
||||
|
||||
package session
|
||||
|
||||
import (
|
||||
fmt "fmt"
|
||||
_ "github.com/gogo/protobuf/gogoproto"
|
||||
proto "github.com/golang/protobuf/proto"
|
||||
io "io"
|
||||
math "math"
|
||||
math_bits "math/bits"
|
||||
)
|
||||
|
||||
// Reference imports to suppress errors if they are not otherwise used.
|
||||
var _ = proto.Marshal
|
||||
var _ = fmt.Errorf
|
||||
var _ = math.Inf
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the proto package it is being compiled against.
|
||||
// A compilation error at this line likely means your copy of the
|
||||
// proto package needs to be updated.
|
||||
const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
|
||||
|
||||
type VerificationHeader struct {
|
||||
PublicKey []byte `protobuf:"bytes,1,opt,name=PublicKey,proto3" json:"PublicKey,omitempty"`
|
||||
KeySignature []byte `protobuf:"bytes,2,opt,name=KeySignature,proto3" json:"KeySignature,omitempty"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *VerificationHeader) Reset() { *m = VerificationHeader{} }
|
||||
func (m *VerificationHeader) String() string { return proto.CompactTextString(m) }
|
||||
func (*VerificationHeader) ProtoMessage() {}
|
||||
func (*VerificationHeader) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_c0d9d9cb855cdad8, []int{0}
|
||||
}
|
||||
func (m *VerificationHeader) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *VerificationHeader) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *VerificationHeader) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_VerificationHeader.Merge(m, src)
|
||||
}
|
||||
func (m *VerificationHeader) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *VerificationHeader) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_VerificationHeader.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_VerificationHeader proto.InternalMessageInfo
|
||||
|
||||
func (m *VerificationHeader) GetPublicKey() []byte {
|
||||
if m != nil {
|
||||
return m.PublicKey
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *VerificationHeader) GetKeySignature() []byte {
|
||||
if m != nil {
|
||||
return m.KeySignature
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type Token struct {
|
||||
Header VerificationHeader `protobuf:"bytes,1,opt,name=Header,proto3" json:"Header"`
|
||||
OwnerID OwnerID `protobuf:"bytes,2,opt,name=OwnerID,proto3,customtype=OwnerID" json:"OwnerID"`
|
||||
FirstEpoch uint64 `protobuf:"varint,3,opt,name=FirstEpoch,proto3" json:"FirstEpoch,omitempty"`
|
||||
LastEpoch uint64 `protobuf:"varint,4,opt,name=LastEpoch,proto3" json:"LastEpoch,omitempty"`
|
||||
ObjectID []ObjectID `protobuf:"bytes,5,rep,name=ObjectID,proto3,customtype=ObjectID" json:"ObjectID"`
|
||||
Signature []byte `protobuf:"bytes,6,opt,name=Signature,proto3" json:"Signature,omitempty"`
|
||||
ID TokenID `protobuf:"bytes,7,opt,name=ID,proto3,customtype=TokenID" json:"ID"`
|
||||
XXX_NoUnkeyedLiteral struct{} `json:"-"`
|
||||
XXX_unrecognized []byte `json:"-"`
|
||||
XXX_sizecache int32 `json:"-"`
|
||||
}
|
||||
|
||||
func (m *Token) Reset() { *m = Token{} }
|
||||
func (m *Token) String() string { return proto.CompactTextString(m) }
|
||||
func (*Token) ProtoMessage() {}
|
||||
func (*Token) Descriptor() ([]byte, []int) {
|
||||
return fileDescriptor_c0d9d9cb855cdad8, []int{1}
|
||||
}
|
||||
func (m *Token) XXX_Unmarshal(b []byte) error {
|
||||
return m.Unmarshal(b)
|
||||
}
|
||||
func (m *Token) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
|
||||
b = b[:cap(b)]
|
||||
n, err := m.MarshalToSizedBuffer(b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b[:n], nil
|
||||
}
|
||||
func (m *Token) XXX_Merge(src proto.Message) {
|
||||
xxx_messageInfo_Token.Merge(m, src)
|
||||
}
|
||||
func (m *Token) XXX_Size() int {
|
||||
return m.Size()
|
||||
}
|
||||
func (m *Token) XXX_DiscardUnknown() {
|
||||
xxx_messageInfo_Token.DiscardUnknown(m)
|
||||
}
|
||||
|
||||
var xxx_messageInfo_Token proto.InternalMessageInfo
|
||||
|
||||
func (m *Token) GetHeader() VerificationHeader {
|
||||
if m != nil {
|
||||
return m.Header
|
||||
}
|
||||
return VerificationHeader{}
|
||||
}
|
||||
|
||||
func (m *Token) GetFirstEpoch() uint64 {
|
||||
if m != nil {
|
||||
return m.FirstEpoch
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Token) GetLastEpoch() uint64 {
|
||||
if m != nil {
|
||||
return m.LastEpoch
|
||||
}
|
||||
return 0
|
||||
}
|
||||
|
||||
func (m *Token) GetSignature() []byte {
|
||||
if m != nil {
|
||||
return m.Signature
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
proto.RegisterType((*VerificationHeader)(nil), "session.VerificationHeader")
|
||||
proto.RegisterType((*Token)(nil), "session.Token")
|
||||
}
|
||||
|
||||
func init() { proto.RegisterFile("session/types.proto", fileDescriptor_c0d9d9cb855cdad8) }
|
||||
|
||||
var fileDescriptor_c0d9d9cb855cdad8 = []byte{
|
||||
// 344 bytes of a gzipped FileDescriptorProto
|
||||
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x64, 0x91, 0x4d, 0x4b, 0xc3, 0x30,
|
||||
0x18, 0xc7, 0x97, 0xee, 0x4d, 0xe3, 0x40, 0x89, 0x97, 0xa2, 0xd2, 0x8d, 0x9d, 0x2a, 0xb8, 0x16,
|
||||
0xf4, 0xe4, 0xc5, 0x43, 0xa9, 0x62, 0x99, 0x30, 0xa9, 0xb2, 0x83, 0xb7, 0x36, 0xcb, 0xba, 0xf8,
|
||||
0x92, 0x94, 0x26, 0x55, 0xf6, 0x4d, 0xf6, 0x91, 0x76, 0xf4, 0x28, 0x1e, 0x86, 0xd4, 0x2f, 0x22,
|
||||
0x4b, 0xbb, 0x75, 0xc3, 0xdb, 0xf3, 0xfc, 0xfe, 0xc9, 0xf3, 0xf2, 0x7f, 0xe0, 0xa1, 0x20, 0x42,
|
||||
0x50, 0xce, 0x6c, 0x39, 0x8d, 0x89, 0xb0, 0xe2, 0x84, 0x4b, 0x8e, 0x9a, 0x05, 0x3c, 0xea, 0x45,
|
||||
0x54, 0x4e, 0xd2, 0xd0, 0xc2, 0xfc, 0xcd, 0x8e, 0x78, 0xc4, 0x6d, 0xa5, 0x87, 0xe9, 0x58, 0x65,
|
||||
0x2a, 0x51, 0x51, 0xfe, 0xaf, 0x3b, 0x84, 0x68, 0x48, 0x12, 0x3a, 0xa6, 0x38, 0x90, 0x94, 0xb3,
|
||||
0x5b, 0x12, 0x8c, 0x48, 0x82, 0x4e, 0xe0, 0xee, 0x7d, 0x1a, 0xbe, 0x52, 0xdc, 0x27, 0x53, 0x1d,
|
||||
0x74, 0x80, 0xd9, 0xf2, 0x4b, 0x80, 0xba, 0xb0, 0xd5, 0x27, 0xd3, 0x07, 0x1a, 0xb1, 0x40, 0xa6,
|
||||
0x09, 0xd1, 0x35, 0xf5, 0x60, 0x8b, 0x75, 0x67, 0x1a, 0xac, 0x3f, 0xf2, 0x17, 0xc2, 0xd0, 0x25,
|
||||
0x6c, 0xe4, 0x55, 0x55, 0xa1, 0xbd, 0xf3, 0x63, 0xab, 0x18, 0xd5, 0xfa, 0xdf, 0xd8, 0xa9, 0xcd,
|
||||
0x17, 0xed, 0x8a, 0x5f, 0x7c, 0x40, 0xa7, 0xb0, 0x39, 0xf8, 0x60, 0x24, 0xf1, 0xdc, 0xbc, 0x87,
|
||||
0xb3, 0xbf, 0x94, 0xbf, 0x17, 0xed, 0x15, 0xf6, 0x57, 0x01, 0x32, 0x20, 0xbc, 0xa1, 0x89, 0x90,
|
||||
0xd7, 0x31, 0xc7, 0x13, 0xbd, 0xda, 0x01, 0x66, 0xcd, 0xdf, 0x20, 0xcb, 0x8d, 0xee, 0x82, 0x95,
|
||||
0x5c, 0x53, 0x72, 0x09, 0xd0, 0x19, 0xdc, 0x19, 0x84, 0xcf, 0x04, 0x4b, 0xcf, 0xd5, 0xeb, 0x9d,
|
||||
0xaa, 0xd9, 0x72, 0x0e, 0x8a, 0x4e, 0x6b, 0xee, 0xaf, 0xa3, 0x65, 0xad, 0x72, 0xf9, 0x46, 0xee,
|
||||
0xce, 0x1a, 0xa0, 0x36, 0xd4, 0x3c, 0x57, 0x6f, 0x6e, 0xcf, 0xab, 0xac, 0xf0, 0x5c, 0x5f, 0xf3,
|
||||
0x5c, 0xe7, 0x6a, 0x9e, 0x19, 0xe0, 0x33, 0x33, 0xc0, 0x57, 0x66, 0x80, 0x9f, 0xcc, 0x00, 0xb3,
|
||||
0x5f, 0xa3, 0xf2, 0x64, 0x6e, 0xdc, 0x8d, 0x89, 0x18, 0xe3, 0xde, 0x88, 0xbc, 0xdb, 0x8c, 0xf0,
|
||||
0xb1, 0xe8, 0xe5, 0x57, 0x2b, 0x6c, 0x0b, 0x1b, 0x2a, 0xbd, 0xf8, 0x0b, 0x00, 0x00, 0xff, 0xff,
|
||||
0xc6, 0x87, 0x25, 0xf9, 0x08, 0x02, 0x00, 0x00,
|
||||
}
|
||||
|
||||
func (m *VerificationHeader) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *VerificationHeader) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *VerificationHeader) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
if len(m.KeySignature) > 0 {
|
||||
i -= len(m.KeySignature)
|
||||
copy(dAtA[i:], m.KeySignature)
|
||||
i = encodeVarintTypes(dAtA, i, uint64(len(m.KeySignature)))
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
}
|
||||
if len(m.PublicKey) > 0 {
|
||||
i -= len(m.PublicKey)
|
||||
copy(dAtA[i:], m.PublicKey)
|
||||
i = encodeVarintTypes(dAtA, i, uint64(len(m.PublicKey)))
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
}
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func (m *Token) Marshal() (dAtA []byte, err error) {
|
||||
size := m.Size()
|
||||
dAtA = make([]byte, size)
|
||||
n, err := m.MarshalToSizedBuffer(dAtA[:size])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return dAtA[:n], nil
|
||||
}
|
||||
|
||||
func (m *Token) MarshalTo(dAtA []byte) (int, error) {
|
||||
size := m.Size()
|
||||
return m.MarshalToSizedBuffer(dAtA[:size])
|
||||
}
|
||||
|
||||
func (m *Token) MarshalToSizedBuffer(dAtA []byte) (int, error) {
|
||||
i := len(dAtA)
|
||||
_ = i
|
||||
var l int
|
||||
_ = l
|
||||
if m.XXX_unrecognized != nil {
|
||||
i -= len(m.XXX_unrecognized)
|
||||
copy(dAtA[i:], m.XXX_unrecognized)
|
||||
}
|
||||
{
|
||||
size := m.ID.Size()
|
||||
i -= size
|
||||
if _, err := m.ID.MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x3a
|
||||
if len(m.Signature) > 0 {
|
||||
i -= len(m.Signature)
|
||||
copy(dAtA[i:], m.Signature)
|
||||
i = encodeVarintTypes(dAtA, i, uint64(len(m.Signature)))
|
||||
i--
|
||||
dAtA[i] = 0x32
|
||||
}
|
||||
if len(m.ObjectID) > 0 {
|
||||
for iNdEx := len(m.ObjectID) - 1; iNdEx >= 0; iNdEx-- {
|
||||
{
|
||||
size := m.ObjectID[iNdEx].Size()
|
||||
i -= size
|
||||
if _, err := m.ObjectID[iNdEx].MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x2a
|
||||
}
|
||||
}
|
||||
if m.LastEpoch != 0 {
|
||||
i = encodeVarintTypes(dAtA, i, uint64(m.LastEpoch))
|
||||
i--
|
||||
dAtA[i] = 0x20
|
||||
}
|
||||
if m.FirstEpoch != 0 {
|
||||
i = encodeVarintTypes(dAtA, i, uint64(m.FirstEpoch))
|
||||
i--
|
||||
dAtA[i] = 0x18
|
||||
}
|
||||
{
|
||||
size := m.OwnerID.Size()
|
||||
i -= size
|
||||
if _, err := m.OwnerID.MarshalTo(dAtA[i:]); err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0x12
|
||||
{
|
||||
size, err := m.Header.MarshalToSizedBuffer(dAtA[:i])
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
i -= size
|
||||
i = encodeVarintTypes(dAtA, i, uint64(size))
|
||||
}
|
||||
i--
|
||||
dAtA[i] = 0xa
|
||||
return len(dAtA) - i, nil
|
||||
}
|
||||
|
||||
func encodeVarintTypes(dAtA []byte, offset int, v uint64) int {
|
||||
offset -= sovTypes(v)
|
||||
base := offset
|
||||
for v >= 1<<7 {
|
||||
dAtA[offset] = uint8(v&0x7f | 0x80)
|
||||
v >>= 7
|
||||
offset++
|
||||
}
|
||||
dAtA[offset] = uint8(v)
|
||||
return base
|
||||
}
|
||||
func (m *VerificationHeader) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
l = len(m.PublicKey)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
l = len(m.KeySignature)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func (m *Token) Size() (n int) {
|
||||
if m == nil {
|
||||
return 0
|
||||
}
|
||||
var l int
|
||||
_ = l
|
||||
l = m.Header.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
l = m.OwnerID.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
if m.FirstEpoch != 0 {
|
||||
n += 1 + sovTypes(uint64(m.FirstEpoch))
|
||||
}
|
||||
if m.LastEpoch != 0 {
|
||||
n += 1 + sovTypes(uint64(m.LastEpoch))
|
||||
}
|
||||
if len(m.ObjectID) > 0 {
|
||||
for _, e := range m.ObjectID {
|
||||
l = e.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
}
|
||||
l = len(m.Signature)
|
||||
if l > 0 {
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
}
|
||||
l = m.ID.Size()
|
||||
n += 1 + l + sovTypes(uint64(l))
|
||||
if m.XXX_unrecognized != nil {
|
||||
n += len(m.XXX_unrecognized)
|
||||
}
|
||||
return n
|
||||
}
|
||||
|
||||
func sovTypes(x uint64) (n int) {
|
||||
return (math_bits.Len64(x|1) + 6) / 7
|
||||
}
|
||||
func sozTypes(x uint64) (n int) {
|
||||
return sovTypes(uint64((x << 1) ^ uint64((int64(x) >> 63))))
|
||||
}
|
||||
func (m *VerificationHeader) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: VerificationHeader: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: VerificationHeader: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field PublicKey", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.PublicKey = append(m.PublicKey[:0], dAtA[iNdEx:postIndex]...)
|
||||
if m.PublicKey == nil {
|
||||
m.PublicKey = []byte{}
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field KeySignature", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.KeySignature = append(m.KeySignature[:0], dAtA[iNdEx:postIndex]...)
|
||||
if m.KeySignature == nil {
|
||||
m.KeySignature = []byte{}
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipTypes(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func (m *Token) Unmarshal(dAtA []byte) error {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
for iNdEx < l {
|
||||
preIndex := iNdEx
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
fieldNum := int32(wire >> 3)
|
||||
wireType := int(wire & 0x7)
|
||||
if wireType == 4 {
|
||||
return fmt.Errorf("proto: Token: wiretype end group for non-group")
|
||||
}
|
||||
if fieldNum <= 0 {
|
||||
return fmt.Errorf("proto: Token: illegal tag %d (wire type %d)", fieldNum, wire)
|
||||
}
|
||||
switch fieldNum {
|
||||
case 1:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Header", wireType)
|
||||
}
|
||||
var msglen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
msglen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if msglen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + msglen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.Header.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 2:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field OwnerID", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.OwnerID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 3:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field FirstEpoch", wireType)
|
||||
}
|
||||
m.FirstEpoch = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.FirstEpoch |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 4:
|
||||
if wireType != 0 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field LastEpoch", wireType)
|
||||
}
|
||||
m.LastEpoch = 0
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
m.LastEpoch |= uint64(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 5:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field ObjectID", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
var v ObjectID
|
||||
m.ObjectID = append(m.ObjectID, v)
|
||||
if err := m.ObjectID[len(m.ObjectID)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 6:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field Signature", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.Signature = append(m.Signature[:0], dAtA[iNdEx:postIndex]...)
|
||||
if m.Signature == nil {
|
||||
m.Signature = []byte{}
|
||||
}
|
||||
iNdEx = postIndex
|
||||
case 7:
|
||||
if wireType != 2 {
|
||||
return fmt.Errorf("proto: wrong wireType = %d for field ID", wireType)
|
||||
}
|
||||
var byteLen int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
byteLen |= int(b&0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if byteLen < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
postIndex := iNdEx + byteLen
|
||||
if postIndex < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if postIndex > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
if err := m.ID.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
|
||||
return err
|
||||
}
|
||||
iNdEx = postIndex
|
||||
default:
|
||||
iNdEx = preIndex
|
||||
skippy, err := skipTypes(dAtA[iNdEx:])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if skippy < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) < 0 {
|
||||
return ErrInvalidLengthTypes
|
||||
}
|
||||
if (iNdEx + skippy) > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
|
||||
iNdEx += skippy
|
||||
}
|
||||
}
|
||||
|
||||
if iNdEx > l {
|
||||
return io.ErrUnexpectedEOF
|
||||
}
|
||||
return nil
|
||||
}
|
||||
func skipTypes(dAtA []byte) (n int, err error) {
|
||||
l := len(dAtA)
|
||||
iNdEx := 0
|
||||
depth := 0
|
||||
for iNdEx < l {
|
||||
var wire uint64
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
wire |= (uint64(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
wireType := int(wire & 0x7)
|
||||
switch wireType {
|
||||
case 0:
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
iNdEx++
|
||||
if dAtA[iNdEx-1] < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
case 1:
|
||||
iNdEx += 8
|
||||
case 2:
|
||||
var length int
|
||||
for shift := uint(0); ; shift += 7 {
|
||||
if shift >= 64 {
|
||||
return 0, ErrIntOverflowTypes
|
||||
}
|
||||
if iNdEx >= l {
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
b := dAtA[iNdEx]
|
||||
iNdEx++
|
||||
length |= (int(b) & 0x7F) << shift
|
||||
if b < 0x80 {
|
||||
break
|
||||
}
|
||||
}
|
||||
if length < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
iNdEx += length
|
||||
case 3:
|
||||
depth++
|
||||
case 4:
|
||||
if depth == 0 {
|
||||
return 0, ErrUnexpectedEndOfGroupTypes
|
||||
}
|
||||
depth--
|
||||
case 5:
|
||||
iNdEx += 4
|
||||
default:
|
||||
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
|
||||
}
|
||||
if iNdEx < 0 {
|
||||
return 0, ErrInvalidLengthTypes
|
||||
}
|
||||
if depth == 0 {
|
||||
return iNdEx, nil
|
||||
}
|
||||
}
|
||||
return 0, io.ErrUnexpectedEOF
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalidLengthTypes = fmt.Errorf("proto: negative length found during unmarshaling")
|
||||
ErrIntOverflowTypes = fmt.Errorf("proto: integer overflow")
|
||||
ErrUnexpectedEndOfGroupTypes = fmt.Errorf("proto: unexpected end of group")
|
||||
)
|
22  session/types.proto  Normal file
@@ -0,0 +1,22 @@
syntax = "proto3";
package session;
option go_package = "github.com/nspcc-dev/neofs-proto/session";

import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

message VerificationHeader {
    bytes PublicKey = 1;
    bytes KeySignature = 2;
}

message Token {
    VerificationHeader Header = 1 [(gogoproto.nullable) = false];
    bytes OwnerID = 2 [(gogoproto.customtype) = "OwnerID", (gogoproto.nullable) = false];
    uint64 FirstEpoch = 3;
    uint64 LastEpoch = 4;
    repeated bytes ObjectID = 5 [(gogoproto.customtype) = "ObjectID", (gogoproto.nullable) = false];
    bytes Signature = 6;
    bytes ID = 7 [(gogoproto.customtype) = "TokenID", (gogoproto.nullable) = false];
}
48  state/service.go  Normal file
@@ -0,0 +1,48 @@
package state

import (
    "github.com/golang/protobuf/proto"
    "github.com/prometheus/client_golang/prometheus"
    dto "github.com/prometheus/client_model/go"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// MetricFamily is a type alias for the proto.Message generated
// from github.com/prometheus/client_model/metrics.proto.
type MetricFamily = dto.MetricFamily

// EncodeMetrics encodes metrics from the gatherer into a MetricsResponse message;
// if something goes wrong it returns a gRPC status error (which can be returned from a service).
func EncodeMetrics(g prometheus.Gatherer) (*MetricsResponse, error) {
    metrics, err := g.Gather()
    if err != nil {
        return nil, status.New(codes.Internal, err.Error()).Err()
    }

    results := make([][]byte, 0, len(metrics))
    for _, mf := range metrics {
        item, err := proto.Marshal(mf)
        if err != nil {
            return nil, status.New(codes.Internal, err.Error()).Err()
        }

        results = append(results, item)
    }

    return &MetricsResponse{Metrics: results}, nil
}

// DecodeMetrics decodes metrics from a MetricsResponse into a []*MetricFamily;
// if something goes wrong it returns an error.
func DecodeMetrics(r *MetricsResponse) ([]*MetricFamily, error) {
    metrics := make([]*dto.MetricFamily, 0, len(r.Metrics))
    for i := range r.Metrics {
        mf := new(MetricFamily)
        if err := proto.Unmarshal(r.Metrics[i], mf); err != nil {
            return nil, err
        }

        metrics = append(metrics, mf)
    }

    return metrics, nil
}
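A minimal round-trip sketch for the two helpers above, using a throwaway Prometheus registry; the counter name and the helper name metricsRoundTrip are placeholders.

package example

import (
    "fmt"

    "github.com/nspcc-dev/neofs-proto/state"
    "github.com/prometheus/client_golang/prometheus"
)

func metricsRoundTrip() error {
    // Register a throwaway counter so the gatherer has something to report.
    reg := prometheus.NewRegistry()
    c := prometheus.NewCounter(prometheus.CounterOpts{Name: "example_total", Help: "placeholder counter"})
    if err := reg.Register(c); err != nil {
        return err
    }
    c.Inc()

    // Server side: gather and wrap into the gRPC response message.
    resp, err := state.EncodeMetrics(reg)
    if err != nil {
        return err
    }

    // Client side: unwrap back into MetricFamily messages.
    families, err := state.DecodeMetrics(resp)
    if err != nil {
        return err
    }
    fmt.Println("decoded families:", len(families))
    return nil
}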
1111  state/service.pb.go  Normal file (diff suppressed: file too large)
37  state/service.proto  Normal file
@@ -0,0 +1,37 @@
syntax = "proto3";
package state;
option go_package = "github.com/nspcc-dev/neofs-proto/state";

import "bootstrap/types.proto";
import "github.com/gogo/protobuf/gogoproto/gogo.proto";

option (gogoproto.stable_marshaler_all) = true;

// The Status service definition.
service Status {
    rpc Netmap(NetmapRequest) returns (bootstrap.SpreadMap);
    rpc Metrics(MetricsRequest) returns (MetricsResponse);
    rpc HealthCheck(HealthRequest) returns (HealthResponse);
}

// NetmapRequest message to request the current node netmap
message NetmapRequest {}

// MetricsRequest message to request node metrics
message MetricsRequest {}

// MetricsResponse contains [][]byte,
// where every []byte is a marshaled MetricFamily proto message
// from github.com/prometheus/client_model/metrics.proto
message MetricsResponse {
    repeated bytes Metrics = 1;
}

// HealthRequest message to check the current state
message HealthRequest {}

// HealthResponse message with the current state
message HealthResponse {
    bool Healthy = 1;
    string Status = 2;
}
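The generated client for this service lives in state/service.pb.go, whose diff is suppressed above. Assuming the conventional gogo/gRPC generated names (NewStatusClient, HealthCheck, and the HealthResponse fields Healthy and Status), a health probe might look like the sketch below; it is illustrative only.

package example

import (
    "context"
    "fmt"

    "github.com/nspcc-dev/neofs-proto/state"
    "google.golang.org/grpc"
)

// checkHealth assumes the conventionally generated state.NewStatusClient and
// its HealthCheck method; the generated file itself is not shown in this diff.
func checkHealth(ctx context.Context, cc *grpc.ClientConn) error {
    resp, err := state.NewStatusClient(cc).HealthCheck(ctx, &state.HealthRequest{})
    if err != nil {
        return err
    }
    if !resp.Healthy {
        return fmt.Errorf("node unhealthy: %s", resp.Status)
    }
    return nil
}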