Re: inexactness vs. exactness Paul Schlie 09 Aug 2005 17:21 UTC

(sorry, obviously I can't do arithmetic: 10^300 => ~1K bits of precision,
implying that an application-domain-specific library would be required for
public-key cryptography beyond that precision, though that still seems
reasonable for most very-large-integer and/or geometric purposes that I can
think of?)
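
(For the record, a quick check of that figure, assuming only R5RS log,
 which is the natural logarithm:

    ;; bits needed to hold 10^300 exactly:
    ;; log2(10^300) = 300 * log(10) / log(2)
    (/ (* 300 (log 10)) (log 2))  ; => ~996.6, i.e. ~1K bits

 so an exact integer covering a double's ~10^300 range needs on the order
 of 1K bits, not the ~3K quoted below.)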

> From: Paul Schlie <xxxxxx@comcast.net>
> Date: Tue, 09 Aug 2005 13:07:47 -0400
> To: bear <xxxxxx@sonic.net>
> Cc: Aubrey Jaffer <xxxxxx@alum.mit.edu>, <xxxxxx@ccs.neu.edu>,
> <srfi-70@srfi.schemers.org>
> Subject: Re: inexactness vs. exactness
> Resent-From: <srfi-70@srfi.schemers.org>
> Resent-Date: Tue,  9 Aug 2005 19:09:43 +0200 (DFT)
>
>> From: bear <xxxxxx@sonic.net>
>> On Mon, 8 Aug 2005, Paul Schlie wrote:
>>
>>> Thanks, I guess my point/question was predominantly related to the
>>> observation that there often seems little need for truly "exact"
>>> values beyond theoretical geometry and/or combinatorial mathematics,
>>> which themselves often require only a determinable finite precision;
>>
>> This is a point on which you're going to lose.  Combinatorial
>> mathematics has developed subfields called cryptography,
>> compression, and correction codes which are fundamental to
>> modern networking.  If you're doing any of those and you
>> round anything off, you lose.
>
> - no, I've already tried to consider this, but as such algorithms tend to
>   require (and actually rely on) only finite-precision modular integer
>   arithmetic, they most typically stay well within the ~3K or so
>   equivalent bits of integer precision that would be required to represent
>   a double's dynamic range of ~10^300 exactly (even for public key
>   algorithms, which in practice tend to be limited to ~2K bits even for
>   very secure key exchanges).
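>
>   For instance, a minimal sketch of the square-and-multiply modular
>   exponentiation such algorithms reduce to, using nothing beyond R5RS
>   exact integer arithmetic (the name mod-expt is hypothetical):
>
>     ;; (mod-expt b e m) computes b^e mod m; every intermediate value
>     ;; is reduced mod m, so no operand ever much exceeds the modulus width
>     (define (mod-expt b e m)
>       (cond ((zero? e) 1)
>             ((even? e) (let ((s (mod-expt b (quotient e 2) m)))
>                          (modulo (* s s) m)))
>             (else (modulo (* b (mod-expt b (- e 1) m)) m))))
>
>   so a ~2K-bit exact integer already covers the arithmetic a typical key
>   exchange performs.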
>
>>> while simultaneously observing there's often broader need for more
>>> precise potentially "inexact" values than typically supported by
>>> double precision floating point implementations; so it just seemed
>>> that in practice it may be more useful to define that "exact"
>>> values need only be as precise as necessary to support the exact
>>> representation of the integer values bounded by the dynamic range of
>>> the implementation's "inexact" representation, and their
>>> corresponding reciprocal values in practice (as you've implied);
>>
>> NACK!  If you have limited precision, and the limited precision
>> affects the answer, then the answer is inexact.  PERIOD.  There is no
>> such thing as "exact numbers limited in precision" to *ANY* limit of
>> precision.  Once you go beyond a limit of precision and round
>> something, you aren't talking about exact numbers anymore.  Exact
>> numbers are, by definition, *infinitely* precise.  You may be talking
>> about limiting the representation size of exact numbers, thereby
>> decreasing the size of the set of exact numbers you can represent; but
>> that's not the same thing.
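>
> - granted, the distinction itself is crisp; e.g., in any Scheme with
>   exact rationals (a minimal illustration, assuming R5RS syntax):
>
>     (+ 1/10 1/10 1/10)  ; => 3/10                exact, no rounding
>     (+ .1 .1 .1)        ; => .30000000000000004  inexact, rounded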
>
> - so what? in practice, all values beyond the theoretical are based on
>   measured values, which are imprecise by definition; therefore in
>   practice I find it hard to believe that there's any truly identifiable
>   value in lossless calculation beyond the precision typically required
>   by lossless cryptography by default (precision beyond which, as
>   required, more domain-specific library packages can themselves provide,
>   without having to burden the general implementation or programmer with
>   preventing the specification of calculations which may yield irrational
>   values, and/or forcing the truncation of a result to significantly less
>   precision than may be supported by an inexact implementation?)
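>
>   e.g., assuming an R5RS system with IEEE doubles, (sqrt 2) can only
>   ever be answered inexactly, and the truncation is readily visible:
>
>     (sqrt 2)               ; => 1.4142135623730951
>     (* (sqrt 2) (sqrt 2))  ; => 2.0000000000000004, not 2
>
>   the only real question is how many bits that inexact answer carries.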
>
>> Infinite precision in finite memory arises when the number happens to
>> match our representational scheme very well; integers and ratios of
>> integers happen to be infinitely precise things we can represent in
>> finite memory - but the finiteness of our memory means that we can
>> only represent an infinitesimal fraction of those in any fixed amount
>> of space. Things work because our usual calculations tend to give us
>> results that are in the set of things we can represent; and when they
>> don't, we can throw an error, if it's last-bit critical, or return an
>> inexact number, if it isn't.
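>
> - agreed on that mechanism, for what it's worth; e.g., in any Scheme
>   with exact bignums and rationals:
>
>     (expt 2 100)          ; => 1267650600228229401496703205376, exact
>     (/ 1 3)               ; => 1/3, also exact in finite memory
>     (exact->inexact 1/3)  ; => .3333333333333333, the inexact fallback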
>>
>>> thereby both providing a likely reasonably efficient "inexact" (aka
>>> double) and a likely reasonably precise corresponding "exact"
>>> representation,
>>
>> I will say it again.  Exact numbers aren't "reasonably" precise.  They
>> are *exact*, which is to say "infinitely" precise.  You are arguing
>> for extended-precision inexact numbers, and I agree with you that
>> these are needed and useful - but to call them exact is to confuse the
>> issue and does not help.
>
> - yes, I know what the definition of the word is, but I don't believe it's
>   literally significant beyond some reasonable, typically required
>   precision, as I've tried to explain above with our cryptography example.
>
>   In practice, it seems much more useful to know, for example, that an
>   inexact value may only be precise to ~50 bits, and that a hypothetical
>   exact value may only be precise to ~3000 bits, with both constrained to
>   the same dynamic range; with that knowledge, the most appropriate form
>   may be utilized directly, and/or leveraged by more domain-specific
>   application libraries as may be required.
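>
>   e.g., assuming IEEE doubles for the inexact representation, the ~50-bit
>   figure is easy to exhibit:
>
>     (= 1. (+ 1. (expt 2. -53)))  ; => #t, the 54th bit is already lost
>     (= 1. (+ 1. (expt 2. -52)))  ; => #f, ~53 significant bits survive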
>
> (I know we differ in opinion on this point, but thank you for the
>  opportunity to express it, our disagreement notwithstanding.)