Prevent inefficient internal conversion from BigDecimal to BigInteger wrt ultra-large scale #968
Comments
/cc @pjfanning 2/2 of issues discussed separately.
It might be useful to clone the BigDecimalParser code but have the methods use BigInteger instead. This would avoid creating a BigDecimal and converting that to a BigInteger. A small duplication of code, but it should be more performant.
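To illustrate the suggestion above, here is a minimal sketch (class and method names are made up; this is not the actual jackson-core BigDecimalParser): plain integer text goes straight to BigInteger, while anything with a decimal point or exponent falls back to the BigDecimal route.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public final class BigIntegerParserSketch {

    public static BigInteger parse(String text) {
        // Plain digits (with optional leading sign) can be handed directly
        // to BigInteger, skipping the intermediate BigDecimal entirely.
        if (text.indexOf('.') < 0 && text.indexOf('e') < 0 && text.indexOf('E') < 0) {
            return new BigInteger(text);
        }
        // Otherwise fall back to BigDecimal and convert; this is the path
        // that can become very slow when the exponent is huge.
        return new BigDecimal(text).toBigInteger();
    }
}
```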
@pjfanning Not sure it'd work since input uses engineering notation... is that legal for BigInteger?
The FastDoubleParser lib won't accept '1e20000000'. Do we need to support this value for BigInteger, or do we need to ask the maintainer of the FastDoubleParser lib to support this as a valid BigInteger?
Are we better off modifying jackson-core to fail if an integer has 'e' notation?
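For context on why '1e20000000' is problematic, here is a small stand-alone illustration (plain JDK, no Jackson involved): parsing the text as BigDecimal is cheap, but converting it to BigInteger has to materialize roughly twenty million decimal digits, so this program takes a very long time and a lot of memory.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public class HugeExponentDemo {
    public static void main(String[] args) {
        // Cheap: BigDecimal just records unscaledValue=1, scale=-20000000.
        BigDecimal dec = new BigDecimal("1e20000000");

        // Expensive: toBigInteger() must expand the value to all of its
        // ~20 million decimal digits (warning: this runs for a long time).
        BigInteger big = dec.toBigInteger();
        System.out.println("bit length: " + big.bitLength());
    }
}
```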
@pjfanning It's a little bit different than that. But it seems to me that the core problem is the conversion of BigDecimal to BigInteger.
One interesting note: I can only reproduce this with 2.15 -- 2.14 and 3.0 fail with a different error, probably value overflow (I think the given value exceeds what can be represented).
Would it make sense to add a StreamReadConstraints setting for the max absolute BigInteger exponent? We can add a sensible limit but let people who know the risks and need to support big exponents change the config to suit themselves.
@pjfanning That sounds reasonable. One question I have is whether it should only apply to this conversion (BigDec->BigInt) or more broadly. And the other angle is that with a scale of 1000 you get a numeric string of ~1000 characters, so in a way we could actually simply use the existing max number length limit.
I would suggest just adding it for (big) ints.
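To make the configuration idea concrete, a hedged sketch of how it might look from the user side. The maxNumberLength() builder method exists in 2.15; the maxBigIntScale() call shown in the comment is purely hypothetical and only illustrates where such a new setting would plug in.

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;

public class ConstraintsConfigSketch {
    public static void main(String[] args) {
        // Existing 2.15 knob: cap the textual length of any number token.
        StreamReadConstraints constraints = StreamReadConstraints.builder()
                .maxNumberLength(1000)
                // A "max BigInteger scale/exponent" limit, as discussed above,
                // would presumably be another builder method here, e.g.
                // .maxBigIntScale(100_000) -- hypothetical, not an existing API.
                .build();

        JsonFactory factory = JsonFactory.builder()
                .streamReadConstraints(constraints)
                .build();
        System.out.println(constraints.getMaxNumberLength());
    }
}
```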
@pjfanning Makes sense. But I am still wondering if a new limit is even needed. Given that this is sort of an edge case (coercion from a floating-point number to an integer), and since the problematic scale magnitude is orders of magnitude bigger than the maximum number length -- that is, something like 1e1000000 becomes a one-megabyte string when written out as "plain" -- users can work around the issue of big scale by reading the value as BigDecimal instead.
It is not that I couldn't add a new limit constant, but there is some maintenance overhead. Also: I think that validation of the scale limit, whatever it is, could be done via StreamReadConstraints.
I guess I can cobble up a PR to show what I am thinking, as a PoC.
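A hedged sketch of the kind of scale validation being discussed (names and the limit value are placeholders, not jackson-core API): check BigDecimal.scale() before converting, since the check is cheap while the conversion is not.

```java
import java.math.BigDecimal;
import java.math.BigInteger;

public final class ScaleGuardSketch {
    // Placeholder limit; where the real limit lives (StreamReadConstraints,
    // derived from max number length, or hard-coded) is the open question above.
    private static final int MAX_ABS_SCALE = 100_000;

    public static BigInteger toBigIntegerSafely(BigDecimal value) {
        // BigDecimal.scale() is a cheap accessor; checking it up front avoids
        // the expensive digit expansion that toBigInteger() would perform.
        int scale = value.scale();
        if (scale < -MAX_ABS_SCALE || scale > MAX_ABS_SCALE) {
            throw new IllegalArgumentException(
                    "BigDecimal scale (" + scale + ") magnitude exceeds limit of " + MAX_ABS_SCALE);
        }
        return value.toBigInteger();
    }
}
```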
@plokhotnyuk - feel free to ignore this, but we ran into an edge case where deserialization to BigInteger can be very slow if the input has a large exponent (eg '1e20000000'). jackson-core parses numbers like this as BigDecimals and then uses BigDecimal.toBigInteger() to convert them.
You have shown great diligence about these problematic inputs in the past. I'm just wondering if you have any thoughts on the best way to handle them.
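For reference, a minimal way to hit this path through Jackson (assuming jackson-databind on the classpath; this is an illustration, not code taken from the issue itself):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.math.BigInteger;

public class SlowCoercionRepro {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // The token is only 11 characters long, so the max-number-length
        // constraint does not reject it; the time is spent afterwards in
        // the BigDecimal -> BigInteger coercion.
        BigInteger value = mapper.readValue("1e20000000", BigInteger.class);
        System.out.println(value.bitLength());
    }
}
```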
(Issue title changed from "... BigDecimal to BigInteger wrt large exponents" to "... BigDecimal to BigInteger wrt ultra-large scale".)
Quick note: there's a PR (#980) to block this specific issue, but it would be good to know if there are other approaches to avoid having to block this.
(note: somewhat related to #967)
Although we have reasonable protections (as of 2.15 release candidates) against direct parsing/decoding of "too long" numbers (by textual representation) for both BigDecimal and BigInteger, it appears there may be one performance problem that only occurs if:

- the number is parsed as BigDecimal (or double, depending), and
- it is then requested as BigInteger, so there is a coercion (BigDecimal.toBigInteger());

but if so, performance can deteriorate significantly.
If this turns out to be true, we may need to limit the magnitude (scale) of floating-point numbers that are legal to convert; this could be a configurable limit (either a new value in StreamReadConstraints, or a derivative of max number length?) or possibly just a hard-coded value.
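As a final hedged sketch of the "derivative of max number length" option mentioned above (the multiplier is an arbitrary placeholder, and nothing here is existing jackson-core behavior):

```java
import com.fasterxml.jackson.core.StreamReadConstraints;

public final class DerivedScaleLimitSketch {
    // Derive the permissible scale for BigDecimal -> BigInteger conversion
    // from the existing max-number-length constraint instead of introducing
    // a brand new setting. The factor of 100 is only a placeholder.
    static int maxConvertibleScale(StreamReadConstraints constraints) {
        return constraints.getMaxNumberLength() * 100;
    }

    public static void main(String[] args) {
        System.out.println(maxConvertibleScale(StreamReadConstraints.defaults()));
    }
}
```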