On Thu, 17 Nov 2005, Michael Sperber wrote:
>
>Thomas Bushnell BSG <xxxxxx@becket.net> writes:
>
>> bear <xxxxxx@sonic.net> writes:
>>
>>> I was surprised by (and agree completely with) the suggestion that
>>> there should be multiple different functions for addition (and other
>>> functions) depending on what behaviors you want [...]
>>
>> Yay! A convert! :)
>
>Now, this reads like you're agreeing with the approach of SRFI 77, if
>not with all the details of its realization. Is that a correct
>interpretation of what you two wrote?
On my part, I think so. What the programmer wants when he uses
a mathematical operator is often not what he gets when the numbers
are exact or inexact in a configuration he did not expect, and
specifying the operation directly is better than lying about the
exactness of the results.
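
A concrete instance of the surprise, in a Scheme with exact
rationals (a sketch; the printed results are typical, not mandated):

  ;; Division of exact integers yields an exact rational...
  (/ 1 3)    ; => 1/3
  ;; ...but one inexact operand silently makes the result inexact:
  (/ 1 3.0)  ; => 0.3333333333333333
  ;; A programmer who wanted fast flonum division in the first case,
  ;; or an exact result in the second, cannot say so directly: the
  ;; operator's behavior is decided by the operands' exactness.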
I think of it in terms of principles:
1) I firmly believe that *relying* on numbers being inexact
or *relying* on some value not being exactly representable
is an error. This is because exactness is good; representations
that can represent a larger or more useful subset of the numbers
exactly are useful; and whenever an exact result is mathematically
true, no implementation should be forbidden from returning the
result exactly.
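
For instance (a sketch; the exactness of sqrt's result on exact
arguments is implementation-dependent, which is precisely why
relying on it is an error):

  ;; Some implementations return an exact root when one exists:
  (sqrt 4)             ; => 2 in some Schemes, 2.0 in others
  ;; So code that *relies* on the result being inexact is fragile:
  (inexact? (sqrt 4))  ; => #t or #f, depending on the implementation
  ;; Under principle 1, an implementation returning the exact 2
  ;; should never be considered wrong.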
2) I firmly believe that converting numbers to inexact explicitly
in order to gain access to fast mathematical operations that
run in bounded memory is something we shouldn't have to do. In
the first place, exactness is good, and this is throwing it away
for the sake of a side effect on further computation. In the
second place, if we had vocabulary to specify that "side effect"
directly, in many cases we could have it without the loss of that
information. It may be that for most inputs the fast operations
do not encounter a roundoff error, and in that case returning
exact results would be a bonus.
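
A sketch of the distinction (fast+ is a hypothetical operator of
the kind this principle asks for; it is not part of SRFI 77 or of
any existing Scheme):

  ;; Today: to request bounded-memory arithmetic we destroy
  ;; exactness up front, before any roundoff has actually occurred:
  (+ (exact->inexact 1/3) (exact->inexact 1/3))
  ;; => 0.6666666666666666

  ;; Hypothetical fast+ : "compute in the fast, bounded
  ;; representation, but return an exact result whenever no
  ;; rounding actually happened":
  ;;   (fast+ 1 2)      ; => 3, exact: no roundoff occurred
  ;;   (fast+ 1/3 1/3)  ; => inexact: the fast path had to round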
3) I firmly believe that even when results are known to be inexact,
it should be the programmer's choice when and how to reduce their
precision. If I have something that's accurate to 300 bits but known
to be inexact starting at the 301st bit, the language semantics
should not force me to jump through hoops and convert to exact
in order to reach operations that preserve the known precision.
This is even worse than the previous case, because it actively
introduces *wrong* information for the sake of a side effect;
had we the ability to specify the effect directly, we could
preserve the correct information through the operation.
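
The hoop-jumping looks like this (a sketch; the printed rational
is what a typical IEEE-double Scheme reports):

  ;; To shield an inexact value from further precision loss, the
  ;; only standard recourse is to convert it to exact:
  (inexact->exact .1)
  ;; => 3602879701896397/36028797018963968
  ;; This asserts that the value *is* exactly that rational: wrong
  ;; information, introduced purely to reach exact operations.
  ;; An operation that preserved the known 53 bits (or the 300 bits
  ;; above) without the false claim of exactness is what this
  ;; principle asks for.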
4) I think that the simplest way to eliminate a whole lot of semantic
nonsense is for there to be a one-to-one correspondence between
exact and inexact numbers. But if there is a range conflict, I
believe that the inexact numbers should *always* have the greater
dynamic range. "Number too large/small to be exactly represented"
is valid, whereas "number too large/small to be inexactly represented,
but we can represent it exactly" is just silly and invites lying
about the exactness of operands and results.
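
Today's situation is exactly the silly case (a sketch; the behavior
shown is typical of a Scheme with bignums and IEEE doubles, though
some implementations signal an error instead):

  ;; Exact integers are unbounded, but doubles top out near 1.8e308:
  (expt 10 400)                   ; => an exact 401-digit integer
  (exact->inexact (expt 10 400))  ; => +inf.0: too large to represent
                                  ;    inexactly, yet fine exactly
  ;; Under principle 4 the inexact range would always dominate,
  ;; so this case could not arise.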
Bear