@DonIsaac I'm afraid I found an edge case with #3296.

Because of the switch from using floating point to integers, we need to handle arithmetic overflow now. For numbers which evaluate to more than `u64::MAX`, parsing is incorrect.

e.g.: `0x10000000000000000` should be evaluated as `2.0f64.powf(64.0)`, but instead produces `0`.

Playground
A possible fix would be to perform the arithmetic in `f64` for values which exceed `u64::MAX`.
Such enormous numbers are an uncommon case, and we wouldn't want to slow down the fast path to accommodate them. So I'd suggest splitting the overflow handling into a separate `parse_hex_slow` function which `parse_hex` calls only when the input is long enough to overflow, and marking it `#[cold]` and `#[inline(never)]` to keep it out of the hot path.

This should only cost the fast path 2 CPU ops (a length test and a conditional jump), but will preserve correct behavior in all cases.
That's just a suggestion; you may see a better way. And if you don't have time for this, just say, and I'll do it.
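To make the shape of the suggestion concrete, here is a minimal sketch. The function names `parse_hex` / `parse_hex_slow` come from the discussion above, but the bodies, the `hex_digit` helper, and the exact call signatures are assumptions for illustration; the real oxc lexer code will differ.

```rust
/// Parse the digits of a hex literal (without the `0x` prefix) to f64.
/// Fast path: accumulate in a u64, which is exact and cheap.
fn parse_hex(s: &str) -> f64 {
    // 16 hex digits is the most that can fit in a u64 without overflow,
    // so the fast path only pays a length test + conditional jump.
    if s.len() > 16 {
        return parse_hex_slow(s);
    }
    let mut result: u64 = 0;
    for c in s.bytes() {
        result = (result << 4) | u64::from(hex_digit(c));
    }
    result as f64
}

/// Slow path for inputs that would overflow u64: accumulate in f64.
/// Loses precision beyond 2^53, but so does the final f64 result anyway.
#[cold]
#[inline(never)]
fn parse_hex_slow(s: &str) -> f64 {
    let mut result = 0f64;
    for c in s.bytes() {
        result = result * 16.0 + f64::from(hex_digit(c));
    }
    result
}

/// Hypothetical helper: the lexer is assumed to have already
/// validated that every byte is a hex digit.
fn hex_digit(c: u8) -> u8 {
    match c {
        b'0'..=b'9' => c - b'0',
        b'a'..=b'f' => c - b'a' + 10,
        b'A'..=b'F' => c - b'A' + 10,
        _ => unreachable!("lexer guarantees valid hex digits"),
    }
}

fn main() {
    // The edge case from this issue: 0x10000000000000000 == 2^64.
    assert_eq!(parse_hex("10000000000000000"), 2.0f64.powf(64.0));
    // Fast path still exact for ordinary literals.
    assert_eq!(parse_hex("ff"), 255.0);
    println!("ok");
}
```

With this split, the common case never touches floating-point arithmetic, and the `#[cold]`/`#[inline(never)]` attributes hint to LLVM to lay out and branch-predict in favor of the fast path.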