Re: inexactness vs. exactness
bear (31 Jul 2005 18:47 UTC)
Limiting the precision to that of the most-precise inexact argument, as suggested by Will Clinger and me at different times, seems like a relatively practical thing to do (explanations below). However, it would be forbidden by the currently suggested wording, because it runs against the principle of always using the closest representable inexact value: a result expressible in, say, four words (because one of the arguments was an inexact with four words of precision) would be returned even when there are inexact values, expressible in 1024 words, that are actually closer to the mathematically expected result.

On Sun, 31 Jul 2005, Paul Schlie wrote:

>> From: bear <xxxxxx@sonic.net>
>>
>> I know that it's not the right thing in all cases and all times,
>> but I think it's a good thing in most of the cases I use, for
>> an operation on exact arguments whose mathematically correct
>> result is too large to be represented as an exact number, to
>> (silently) return an inexact number of the highest available
>> precision.

> - Personally, I think it's a poor idea to place on the programmer
>   the responsibility to limit the precision of exact data so that
>   calculations will not exceed the practical physical limitations
>   of an exact implementation's representational capabilities;
>   rather, I believe it's the implementation's responsibility to
>   endeavor to limit the precision of an exact value's
>   representation to some practical physical limit in an effort to
>   prevent such scenarios. I'd rather see it become acceptable that
>   an exact implementation may return an imprecise exact value
>   limited to some implementation-defined precision limit, where
>   the precision limit of an exact value's representation is
>   presumed to be greater than the precision supported by an
>   inexact value's representation.

Uh, this is faint and fuzzy thinking. No matter what the precision limit is, if you're in a situation where you're not getting the exact, mathematically correct result, you must mark the result as an inexact number. There is no fixed limit on the size of the representation of inexact numbers; the "highest available precision" for inexacts may be several kilobytes long, depending on what the implementation's author did.

I agree with you, I think, about the desirable behavior: use exact results until they get too big, and then switch to inexact results _of about the same representational precision_ to prevent further out-of-control growth (a sketch of this behavior follows below). Unfortunately, most implementors limit inexacts to 64-bit (or even 32-bit) real values, and therefore have drastically different precision ranges and limits available for exact and inexact numbers.

> As this would imply that such an exact implementation may have some
> practical precision limit, it correspondingly must either be defined
> to have the same representational magnitude bounds as an inexact
> implementation, or by implication have distinct infinite and
> reciprocal bounds defined (where the former seems simplest, although
> apparently only suggested in jest earlier).

Ehhh. Only half in jest. Most of these problems do go away if you provide inexact numbers as big and precise as your exact numbers; but general-purpose schemata can't really be expected to do so, since most of them want blazing speed out of inexact numbers and are perfectly happy with inexact precision limited to 64 bits.
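A minimal sketch of that demote-when-too-big behavior, assuming `integer-length` is available (as in SRFI 60 or SLIB); `bounded-*`, `integer-words`, and the 1024-word budget are illustrative names and numbers, not any implementation's actual API:

    (define word-limit 1024)  ; hypothetical size budget, in 64-bit words

    (define (integer-words n)
      ;; rough size of an exact integer in 64-bit words
      (+ 1 (quotient (integer-length (abs n)) 64)))

    (define (bounded-* a b)
      ;; multiply exactly, but fall back to an inexact approximation
      ;; once the exact result would blow past the size budget
      (let ((r (* a b)))
        (if (and (exact? r)
                 (integer? r)
                 (> (integer-words r) word-limit))
            (exact->inexact r)  ; the loss is now marked as inexactness
            r)))

Note that with the typical 64-bit flonums, demoting a huge bignum this way simply overflows to an infinity, which is precisely the exact/inexact precision mismatch complained about above.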
I think I'd be happy in a Scheme that represented numbers in up to 1024 words of memory, whether exact or inexact; marked them "inexact" if that wasn't enough to hold the correct answer but was enough to hold an approximation within reasonable roundoff error; and returned an error object if the result was actually out of range (beyond the highest/lowest representable number).

But for certain kinds of calculations, you'd still want a way to specify a much lower "precision limit" in order to get wrong answers really, really fast. An example is most iterative adaptive algorithms: you get closer to the correct answer by making more iterations, and much faster than by carrying each iteration out to 1024 words of precision. And since those answers were doomed to be wrong anyway (they usually home in on irrationals), and correctness within a small roundoff error is not usually critical, why *not* limit them to 4 words of precision? (A sketch of this trade-off follows below.)

So, I think it's desirable to have a numeric system where inexacts have a range of different precisions, depending on the precision of the arguments to the procedures that produced them as results. But this is where we violate the principle of always using "the" closest representable inexact value to the mathematically correct result.

Bear
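As a minimal sketch of that low-precision-iteration trade-off, here is Newton's method for square roots in which each iterate is rounded to a fixed number of fractional bits; `round-to-bits` and `newton-sqrt` are hypothetical helpers standing in for a reduced-precision inexact format, written with exact rationals so the sketch runs in plain R5RS Scheme:

    (define (round-to-bits x k)
      ;; keep only k fractional bits of x (exact rational in, exact out)
      (/ (round (* x (expt 2 k))) (expt 2 k)))

    (define (newton-sqrt n bits steps)
      ;; each iterate is rounded to `bits` fractional bits, so extra
      ;; per-step precision is traded away for speed; convergence
      ;; comes from iterating, not from wider intermediate results
      (let loop ((guess 1) (i 0))
        (if (= i steps)
            guess
            (loop (round-to-bits (/ (+ guess (/ n guess)) 2) bits)
                  (+ i 1)))))

With, say, 64 fractional bits per step, (exact->inexact (newton-sqrt 2 64 6)) already agrees with sqrt(2) to full flonum precision; carrying 1024 words of precision through each iteration would not have gotten there any sooner.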