Also do not limit depth. It was introduced in e34fa2e915 as a simple
solution to the OOM problem. In this commit we do exactly the
refactoring described there. The maximum size is the same as the stack
item size and can be changed if needed without significant refactoring.
`1 MiB` seems sufficient, though.
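A minimal sketch of the size-based check (serContext and writeBytes are
illustrative names, not the actual neo-go API):

    package sketch

    import "errors"

    // MaxSize caps the serialized representation; it matches the stack
    // item size limit and can be changed independently if needed.
    const MaxSize = 1024 * 1024 // 1 MiB

    type serContext struct {
        data []byte
    }

    // writeBytes appends to the output, failing once it exceeds
    // MaxSize. Depth no longer matters: however deep the item is,
    // its serialized form can't outgrow this buffer.
    func (c *serContext) writeBytes(b []byte) error {
        if len(c.data)+len(b) > MaxSize {
            return errors.New("too big item")
        }
        c.data = append(c.data, b...)
        return nil
    }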
Signed-off-by: Evgeniy Stratonikov <evgeniy@nspcc.ru>
Doesn't change much, but still simpler.
name               old time/op    new time/op    delta
SerializeSimple-8    452ns ±10%     435ns ± 4%    ~     (p=0.356 n=10+9)

name               old alloc/op   new alloc/op   delta
SerializeSimple-8     432B ± 0%      432B ± 0%    ~     (all equal)

name               old allocs/op  new allocs/op  delta
SerializeSimple-8     7.00 ± 0%      7.00 ± 0%    ~     (all equal)
Turns out C# VM doesn't have it since preview2, so our limiting of
MaxArraySize is incompatible with it. Removing this limit shouldn't be
a problem with the reference counter we have: both APPEND and SETITEM
add things to the reference counter, so we can't exceed MaxStackSize.
PACK, on the other hand, can't get more than MaxStackSize-1 input
elements.
Unify NEWSTRUCT with NEWARRAY* and use better integer checks at the same time.
Multisig limit is still 1024.
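A rough illustration of why the reference counter is sufficient
(refCounter here is a simplification of the VM's actual accounting):

    package sketch

    import "errors"

    const MaxStackSize = 2 * 1024

    type refCounter int

    // Add is called for every item pushed onto the stack or inserted
    // into a compound type (APPEND, SETITEM and friends), so arrays
    // can't silently grow past MaxStackSize.
    func (r *refCounter) Add() error {
        (*r)++
        if *r > MaxStackSize {
            return errors.New("stack is too big")
        }
        return nil
    }

PACK needs no extra check either: it pops its element count from the
stack, and the stack itself (count included) holds at most MaxStackSize
items, hence the MaxStackSize-1 bound on its input.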
Follow neo-project/neo#2531. Even though it's not strictly required
(our node handles the problematic script just fine), we'd better be
compliant with regard to deserialization behavior. MaxDeserialized is
introduced to avoid moving MaxStackSize, which is a VM parameter.
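Schematically (the decoder shape is illustrative; the point is that the
serialization code gets its own constant):

    package sketch

    import "errors"

    // MaxDeserialized caps the number of items a single deserialization
    // may produce. Its value mirrors MaxStackSize, but being a separate
    // constant it doesn't drag the VM parameter into serialization code.
    const MaxDeserialized = 2048

    type decoder struct {
        count int // items decoded so far
    }

    // newItem is called once per decoded item.
    func (d *decoder) newItem() error {
        d.count++
        if d.count > MaxDeserialized {
            return errors.New("too many items")
        }
        return nil
    }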
We have a lot of native contract types that are converted to stack
items before serialization, then deserialized as stack items and
converted back to regular structures. stackitem.Convertible allows
removing a lot of repetitive io.Serializable code.
This also introduces a to/from converter in testserdes, which
unfortunately required changing util tests to avoid circular
references.
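A sketch of a type implementing it (AccountState is hypothetical, and
the interface shape follows the commit's intent; it may differ in
detail from the actual pkg/vm/stackitem code):

    package sketch

    import (
        "errors"

        "github.com/nspcc-dev/neo-go/pkg/vm/stackitem"
    )

    // AccountState is a hypothetical native contract type.
    type AccountState struct {
        Balance int64
    }

    // ToStackItem converts the structure into a stack item, replacing
    // a hand-written EncodeBinary.
    func (s *AccountState) ToStackItem() (stackitem.Item, error) {
        return stackitem.NewStruct([]stackitem.Item{
            stackitem.Make(s.Balance),
        }), nil
    }

    // FromStackItem fills the structure back from a deserialized stack
    // item, replacing a hand-written DecodeBinary.
    func (s *AccountState) FromStackItem(item stackitem.Item) error {
        arr, ok := item.Value().([]stackitem.Item)
        if !ok || len(arr) != 1 {
            return errors.New("invalid item")
        }
        balance, err := arr[0].TryInteger()
        if err != nil {
            return err
        }
        s.Balance = balance.Int64()
        return nil
    }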
The resulting item can't have more than MaxStackSize elements.
Technically this limits the result to MaxStackSize cloned elements,
but that's considered enough to mitigate the issue (the next size
check is going to happen during the push to the stack). See
neo-project/neo#2534, thanks @vang1ong7ang.
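A sketch of the counting clone (node and deepCopy are stand-ins for the
real stack item types):

    package sketch

    import "errors"

    const MaxStackSize = 2 * 1024

    // node stands in for a compound stack item.
    type node struct {
        children []*node
    }

    // deepCopy clones n, counting every produced element, so the result
    // can't have more than MaxStackSize of them.
    func deepCopy(n *node, count *int) (*node, error) {
        *count++
        if *count > MaxStackSize {
            return nil, errors.New("too big item")
        }
        cp := &node{children: make([]*node, 0, len(n.children))}
        for _, c := range n.children {
            cc, err := deepCopy(c, count)
            if err != nil {
                return nil, err
            }
            cp.children = append(cp.children, cc)
        }
        return cp, nil
    }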
Standard binary serialization/deserialization is mostly used in the VM
to put/get elements into/from storage, so they should never exceed
MaxSize (otherwise one won't be able to deserialize these items).
This patch leaves EncodeBinaryStackItem unprotected, but that's a
streaming interface, so it's up to its user to ensure appropriate use
(and our uses are mostly for native contracts' data, so they're fine).
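The split looks roughly like this (names and signatures simplified; the
cap lives only in the storage-facing path):

    package sketch

    import (
        "bytes"
        "errors"
        "io"
    )

    const MaxSize = 1024 * 1024

    // limitedWriter enforces MaxSize on whatever is streamed through it.
    type limitedWriter struct {
        w    io.Writer
        left int
    }

    func (lw *limitedWriter) Write(p []byte) (int, error) {
        if len(p) > lw.left {
            return 0, errors.New("too big item")
        }
        lw.left -= len(p)
        return lw.w.Write(p)
    }

    // encodeItem is the streaming interface: no cap of its own, like
    // EncodeBinaryStackItem, so the caller is responsible.
    func encodeItem(payload []byte, w io.Writer) error {
        _, err := w.Write(payload)
        return err
    }

    // serialize is the storage-facing path: it wraps the destination
    // in a limitedWriter, so nothing bigger than MaxSize comes out.
    func serialize(payload []byte) ([]byte, error) {
        var buf bytes.Buffer
        lw := &limitedWriter{w: &buf, left: MaxSize}
        if err := encodeItem(payload, lw); err != nil {
            return nil, err
        }
        return buf.Bytes(), nil
    }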
We can have very deep reference types, and an attempt to JSONize them
can easily lead to OOM (even though there are no recursive references
inside), therefore we have to limit them. While regular ToJSON() is
limited by the MaxSize buffer, ToJSONWithTypes is not, and limiting its
output to MaxSize would require substantial rewriting effort without
really providing a fair result: MaxSize is more about stack item size,
while the JSON representation can be much bigger because of various
overheads.
The initial thought was to limit it by element count based on
MaxIteratorResultItems, but the problem is that we don't always have
this limit in contexts using ToJSONWithTypes (like notification event
marshaling). Thus we need some generic limit that works for all users.
At the same time we have maxJSONDepth used when deserializing from
JSON, so it can be used for marshaling as well; it's not often that
real results have deeper structures.
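A sketch of the depth accounting (node is a stand-in for a compound
stack item; the value 10 is assumed here, mirroring what the JSON
parser uses):

    package sketch

    import (
        "errors"
        "fmt"
        "strings"
    )

    const maxJSONDepth = 10 // assumed; shared with JSON deserialization

    type node struct {
        typ      string
        children []*node
    }

    // toJSONWithTypes emits {"type": ..., "value": [...]} recursively,
    // failing fast once nesting exceeds maxJSONDepth instead of trying
    // to bound the (much fuzzier) output size.
    func toJSONWithTypes(n *node, depth int) (string, error) {
        if depth > maxJSONDepth {
            return "", errors.New("too deep structure")
        }
        elems := make([]string, 0, len(n.children))
        for _, c := range n.children {
            s, err := toJSONWithTypes(c, depth+1)
            if err != nil {
                return "", err
            }
            elems = append(elems, s)
        }
        return fmt.Sprintf(`{"type":%q,"value":[%s]}`,
            n.typ, strings.Join(elems, ",")), nil
    }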
Inspired by neo-project/neo#2521.