The first parser issue is that on the Windows 7 (64-bit) platform, values as small as around 1e‑323 are accepted without a range error, whereas on Linux (64‑bit) and Windows XP (32‑bit), values smaller than around 1e‑308 produce a range error. That 1e‑308 boundary mirrors the double's upper limit of around 1e+308.
I thought possibly this had to do with Intel (which Windows 7 is running on) versus AMD (which Linux and the Windows XP virtual machine are running on). However, when testing the limits on a Scientific Linux 6.2 (equivalent to Red Hat Enterprise Linux 6.2) 64-bit virtual machine on the Intel box, the limit was still around 1e‑308 (though that machine has the older GCC 4.4 versus the 4.6 on the others).
The float and long double conversions were also tested on all three platforms. The float limit was around 1e‑38 and the long double limit around 1e‑4931 on all three, so I have no explanation for why only double conversions differ on Windows 7. Both 7 and XP have identical versions of MinGW installed.
So instead of fighting this problem any longer, the 1.234e‑308 test value in parser test #3 was changed to 1.234e‑324, a value small enough to produce a range error on all three platforms. Next up: dealing with the differences in the number of exponent digits output.
Sunday, October 14, 2012