Re: Exactness Marcin 'Qrczak' Kowalczyk 22 Oct 2005 12:47 UTC

Thomas Bushnell BSG <xxxxxx@becket.net> writes:

> (with-ieee-floating-point ...) can just be syntactic sugar for a
> specification of whatever options is the right set.

Let's make this the default, so users know what to expect,
and introduce options for hypothetical other interpretations.

> Oh, I think it's already agreed that Scheme should not be mandated
> to support floating point arithmetic.

I can agree with this only in the sense that Scheme implementations
may implement a subset of the numeric tower. I don't see the point
in allowing other interpretations of the default inexact syntax.
Less common formats should be chosen explicitly.

> At least, I can see no advantage to such a mandate.

It standardizes the status quo, so programs which rely on inexact
numbers being floats not only work in practice but are also formally
portable.

> Do you think that mandating it will somehow change anyone's behavior
> materially?

No, because people already rely on it. The very example program in
R5RS would make little sense if .01 meant "1/100 with the inexact
bit set" and the computation were performed on vulgar fractions.

> You must stop thinking of "inexact" as a synonym for "floating
> point".  Please.

Why? Isn't it true in practice? Which implementations make some other
format the default for inexact numbers?

Other languages see no problem with that. Common Lisp, Haskell, the C
family, Python, and probably many others not only do this in practice
but have it specified in their language definitions.

The concept of floating point was designed a long time ago. It's not
some fancy new idea which might become fashionable for a while and
then go away. Hardware is converging on it, not diverging from it.

I'm open to libraries which provide other kinds of inexact numbers.
Just don't make them the default, because programs relying on them
would be much less portable than programs relying on flonums.

> I do not think that any particular combination should be mandated,
> much less mandating all possible combinations. Those which do not
> support them (or some combination of them) should be required to
> document their behavior and behave sensibly when they cannot
> implement something.

So programs can only give hints; they can't rely on the availability
of any particular properties? Well, in this area I prefer programming
based on confidence rather than hope.

While I dislike lots of things about Java, one thing it got right:
lots of core operations have a well-defined meaning and work the same
way everywhere.

It's bad to standardize on something which is a temporary limitation
of current computers and is likely to change in the future. Floating
point doesn't look like such a limitation.

> Trig functions most certainly can be computed exactly if you have a
> clever enough implementation.  You aren't thinking correctly here,
> because you are wedded to implementation issues.  It is perfectly
> possible to implement exacts reals, and make the trig functions
> compute exact answers.  (Have you ever played with maxima?)

Scheme is not a CAS (computer algebra system). Fancy representations
are OK as long as they are not the default. Programming should be
predictable. Is there any existing Scheme implementation with exact
irrational numbers?

Anyway, you can ignore the "(e.g. trigonometric)" part of my question.
The rest of the question stands.

> Perhaps we could insist on neither, and give programmers the option of
> specifying the precision, and then implementations which can comply
> will do so, and the others will either signal an error or (at user
> preference) do the best they can.

Again programming based on hope...

> How to print a number is an entirely separate question from its
> representation.

Agreed.

> For example, on a system with exact reals, the system might know
> that the value of some computation is 2π; what should happen when
> it's printed?

Well, if it printed 2π as the result of (* 8 (atan 1)), then any
other program reading its data would be surprised. I would prefer that
a given Scheme implementation not use non-standard output notation
by default.
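
Concretely, with IEEE doubles a standard printer emits a plain
decimal that another program can read back:

   (* 8 (atan 1))   ; => 6.283185307179586
   ;; A system that printed the symbolic "2*pi" here would produce
   ;; data which another program's read cannot interpret as a number.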

>> 4. What should happen when the number being computed is getting too
>>    large to be conveniently represented? Return "infinity"? Signal an
>>    error? How an infinity is formatted into text?
>
> As Alan Watson pointed out, this is simply a normal case of memory
> exhaustion and does not need to be discussed separately.

It's a case of memory exhaustion only in the context of exact numbers.

For inexact numbers you might argue that such an interpretation is
right too, but users will say that it's broken and switch to other
implementations.

>> 9. Should the implementation try to track inexactness of real and
>>    imaginary part separately, or we don't care? If the imaginary part
>>    comes very close to 0, should the result be indistinguishable from
>>    a real number, or we care about being sure whether it's real or not?
>
> This is an ongoing difficulty for some people.  The answer is
> certainly "no", it should not be tracked differently, but that's
> because I think of complex numbers as numbers, not as pairs of
> numbers.

This philosophical standpoint doesn't resolve the practical issue.
It's a false dichotomy: I treat complex numbers as numbers which are
isomorphic to pairs of real numbers. Both interpretations are true at
the same time.

Consider these numbers:
   -5.0
   -5.0+0i
   -5.0-0i
   -5.0+0.0i
   -5.0-0.0i
Which of them should be indistinguishable (eqv?)?

FP experts would be upset if the last two were the same.
(angle -5.0+0.0i) is pi, (angle -5.0-0.0i) is -pi.

In my interpretation the first three are the same; the other two are
different from the first three and from each other.

In the strict reading of R5RS all of them are eqv?, even though some
of them can be distinguished by arithmetic operations. This is bad:
eqv? should not unify distinguishable values (this doesn't apply to
distinguishing with eq?); the error is in the definition of eqv? in
terms of = and exactness. It also breaks for 0.0 vs. -0.0 and
3s0 vs. 3L0.
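
To make the 0.0 vs. -0.0 case concrete (assuming IEEE-style signed
zeros and infinities, and division by zero not signalling an error):

   (eqv? 0.0 -0.0)   ; #t under the R5RS definition via = and exactness
   (/ 1.0 0.0)       ; => +inf.0
   (/ 1.0 -0.0)      ; => -inf.0, so the two zeros are observably different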

>> 10. When we ask whether the number is even, and it happens to be
>>    inexact, should the implementation try to answer hoping that
>>    inexactness did not change the value, or we prefer an error to be
>>    signalled?
>
> Perhaps we need two functions!

I would be happy to make this an error. Currently the implementation
is required to answer if the value looks like an integer (inexactly),
even though whether it looks like an integer depends on the inaccuracy
introduced during the computation.
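
A hedged example of my own, assuming IEEE doubles:

   (define almost-one
     (+ .1 .1 .1 .1 .1 .1 .1 .1 .1 .1))   ; => .9999999999999999
   (even? (* 2 almost-one))   ; the argument is 1.9999999999999998,
                              ; not an integer, so the call is invalid --
                              ; yet the intended value is 2, and
                              ; (even? 2.) answers #t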

It's bad when the validity of a procedure call depends on floating
point accuracy. That's why IEEE fp produces infinities and NaNs by
default: they don't always result from arguments that are truly
outside the domain; often the argument has merely slipped outside
after rounding to fp precision.
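
Hedged examples of that default behaviour (IEEE doubles assumed; some
implementations may signal instead):

   (exp 710.0)   ; e^710 is finite, but overflows the double range: +inf.0
   (log 0.0)     ; => -inf.0 -- the 0.0 may itself be an underflowed tiny
                 ;    positive value, so a hard domain error here would
                 ;    often blame the wrong place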

Aubrey Jaffer <xxxxxx@alum.mit.edu> writes:

> Were the finiteness predicates left out of srfi-77 accidentally?

I don't know. It is clear, however, that the semantics of the R5RS
predicates is unclear in the presence of special fp numbers. Are
+inf.0 and +nan.0 real? Are they allowed as real and imaginary parts
of complex numbers? Since the latter should be true (operations on
complex numbers can easily produce them), I think the former should
be true as well. They should not be rational?, though.
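
For instance, assuming IEEE doubles and inexact complex numbers,
plain overflow in complex arithmetic already yields such parts:

   (* 1e200+1e200i 1e200)   ; both parts overflow: +inf.0+inf.0i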

The questions about domains are intrinsically poorly defined for
inexact numbers. That's why thinking in terms of representations
is often more practical.

> But mixed exactness spawns many senseless combinations. What does
> 1.23+5i represent?

What is senseless about it? It's a number whose real part is not
known exactly (the computation has determined that it's about 1.23)
but whose imaginary part is exactly 5.
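
For instance (a sketch, assuming an implementation that keeps the
exactness of the two parts separate):

   (make-rectangular 1.23 5)   ; real part computed inexactly,
                               ; imaginary part exactly 5, i.e. 1.23+5i

Some implementations would coerce both parts to inexact here; the
question is precisely whether that per-part distinction is worth
keeping.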

--
   __("<         Marcin Kowalczyk
   \__/       xxxxxx@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/