Re: infinities reformulated bear 20 Jun 2005 05:46 UTC


On Sun, 19 Jun 2005, Aubrey Jaffer wrote:

> | It often happens in neural networks (read: my day job) that
> | being able to store a bunch of floats compactly (level-2
> | cache size) results in dramatic speedups, and in such cases
> | (in C) I use arrays of 32-bit floats rather than 64-bit
> | doubles.

> | But a couple of years ago, I had a (toy) project where I was
> | <clip>. And in that project, having 512-bit precise reals <clip>
> | was *NECESSARY*, since even with scaling, using "doubles" would
> | have lost crucial information in the underflow.

> Would weakening the "most precise" requirement to a recommendation
> improve Scheme as a platform for such arithmetics?

It's hard to know what to do.  No portable code relying on
particular float sizes can be written on the basis of R5RS.
The suggested change of weakening the requirement to a
recommendation would not enable such code, so the situation
for specialized calculations would not be improved.  But
I think maybe code like that *ought* to be the domain of
implementation-specific extensions rather than of Scheme
itself.
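
SRFI 4's homogeneous numeric vectors are an existing example of
such an extension.  A rough sketch, using the SRFI 4 names where
supported, of the kind of compact-storage code I mean (the
vector size is just for illustration):

(define weights (make-f32vector 1024 0.0)) ; packed 32-bit floats

(define (scale! vec k)
  ;; Multiply every element of an f32vector by k, in place.
  ;; Each product is computed in the host's full flonum
  ;; precision and rounded back to 32 bits by f32vector-set!.
  (let loop ((i 0))
    (if (< i (f32vector-length vec))
        (begin
          (f32vector-set! vec i (* k (f32vector-ref vec i)))
          (loop (+ i 1))))))

(scale! weights 0.5)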

Because I don't think that Scheme ought to concern
itself overmuch with the underlying hardware representations,
I wouldn't like the specification of one particular
floating-point representation to become part of the language
standard.  But I would like to be able to tell the
system what minimum precision I need and let it decide
what underlying representation will meet that requirement
most economically and effectively.

It is, and ought to remain, an error for code to
*rely* on a particular roundoff or wraparound error
resulting from a hardware operation on a limited-precision
number; therefore specifying an exact size, rather than
a minimum size, for inexact numbers is not "the
right thing."
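
To make that concrete: the comparison below is #t with 53-bit
flonums, because 1e-17 falls below the rounding threshold near
1.0 (about 1.1e-16, i.e. 2^-53), but it would be #f on a
higher-precision system; no portable program may depend on it.

(= 1.0 (+ 1.0 1e-17)) ; #t for IEEE doubles, #f at 512 bits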

What I would *like* is to have a way to specify what
precision to use for inexact-number calculations in a
given (ideally dynamic, but given Scheme's design more
properly lexical) scope.  I would like to be able to
say

(with-precision 512 220 expr)

in order to let the compiler know that if at least 512 bits
of mantissa and 220 bits of exponent are retained for inexact
calculations, expr (whether a single number or a function
call) will not produce an intolerably erroneous result.
The system, if capable, may allocate and use inexact numbers
of that precision or higher, or evaluate expr using only
exact numbers; otherwise it must report a violation of an
implementation restriction.  And likewise, if I say

(with-precision 10 6 expr)

it would be a promise that 10 bits of mantissa and 6 of
exponent are enough to get results tolerable for my purposes,
and the compiler could use 32-bit floats, or even 16-bit
floats of the suggested format, if the hardware and compiler
happen to support exactly that.  But if it happens to be
a Martian architecture that uses words of 27 ternary trits
instead of 32 binary bits, that would be okay too, as long
as it were capable of *at least* that precision.
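
None of this exists today, so take the following as a minimal
sketch rather than an implementation: WITH-PRECISION here only
checks the request against the host's capability, probing the
mantissa width and simply assuming the exponent width, and uses
the SRFI 23 ERROR procedure to report the restriction.

(define (flonum-mantissa-bits)
  ;; Probe the host's inexact numbers: halve eps until adding
  ;; it to 1.0 no longer changes the sum.  IEEE doubles yield 53.
  (let loop ((bits 0) (eps 1.0))
    (if (= 1.0 (+ 1.0 eps))
        bits
        (loop (+ bits 1) (/ eps 2.0)))))

(define *exponent-bits* 11) ; assumed; probing this is messier

(define-syntax with-precision
  (syntax-rules ()
    ((_ mant-bits exp-bits expr)
     (if (and (<= mant-bits (flonum-mantissa-bits))
              (<= exp-bits *exponent-bits*))
         expr
         (error "implementation restriction: precision unavailable"
                mant-bits exp-bits)))))

With that, (with-precision 10 6 (sqrt 2.0)) evaluates normally
on a typical double host, while (with-precision 512 220 expr)
reports the restriction instead of silently losing information.
A real implementation would instead pick a suitable
representation.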

In this system we wouldn't have to worry about comparisons
between inexact numbers of different precision, because,
being in the same scope, all inexact numbers would have
the same precision.  But you could still use the precision
you actually need for your calculations and not have the
system wasting resources on more precision than is needed.
And it insulates code from the hardware well enough that
future systems, no matter how strange or unexpected, would
be required neither to simulate the roundoff errors of
older systems nor to use restricted representations where
doing so would slow them down.

That's what I'd like.  But is it reasonable to require it?
I dunno.  Maybe it's proper SRFI material, seeing as every
other numeric fix under the sun is being proposed anyway.

				Bear