While optimizing GeoHash conversions in Garnet, I came across a small error in McLoughlin's quantization approach, which this library also seems to be based on. Quoting from myself:

> Since y is in the range [1,2], the largest power of two less than or equal to y is 1.0 and the exponent field will always be 1023.

This is wrong at the upper bound of the range. The exponent field is 1023 only for [1.0, 2.0); at exactly 2.0 it flips over to 1024. As a result, 90.0 and 180.0, which are clamped to the range maximum of 2.0, have their significand (and consequently the geohash integer representation) zeroed, and dequantization produces the same output as for -90.0 and -180.0 respectively.

The most straightforward fix would probably be to guard this corner case and return the maximum 32-bit value separately. I believe dequantization would then return the closest value to 2.0.

I'd like to ask whether this behavior was already known, and whether it actually applies to the Rust implementation.

TLDR: Quantization maps 90.0, 180.0 => -90.0, -180.0. Do we care?
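For illustration, here is a minimal sketch of the bit-trick quantization for latitude with the proposed guard, under the assumption that the input is mapped onto [1.0, 2.0] and the top 32 significand bits are taken; the function name `quantize_lat` is hypothetical and not this library's API:

```rust
/// Sketch of the bit-trick latitude quantization with the proposed
/// corner-case guard (hypothetical, not this library's actual API).
fn quantize_lat(lat: f64) -> u32 {
    // Map [-90.0, 90.0] onto [1.0, 2.0]; lat = 90.0 lands exactly on 2.0.
    let y = lat / 180.0 + 1.5;
    if y >= 2.0 {
        // 2.0 has exponent 1024 and a zero significand, so the bit trick
        // below would produce 0 -- the same result as lat = -90.0.
        // Guard it and return the maximum 32-bit value instead.
        return u32::MAX;
    }
    // For y in [1.0, 2.0) the exponent field is always 1023, so the top
    // 32 significand bits are a linear 32-bit quantization of y - 1.
    (y.to_bits() >> 20) as u32
}

fn main() {
    assert_eq!(quantize_lat(-90.0), 0);
    assert_eq!(quantize_lat(0.0), 0x8000_0000);
    assert_eq!(quantize_lat(90.0), u32::MAX);
    // Without the guard, 90.0 would alias -90.0: 2.0 quantizes to 0.
    assert_eq!((2.0f64.to_bits() >> 20) as u32, 0);
}
```

The last assertion shows the bug itself: the significand of 2.0 is all zeros, so the unguarded shift collapses the range maximum onto the range minimum.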