It looks like taking this issue into consideration would be too much of a burden for too little benefit, serving only very special use cases.  At this point I agree that it would be over-engineering.
Let's just leave this issue open to implementations.  When the demand arises, another srfi can be written to provide a sophisticated 'numeric' procedure.

I suggest adding an explanation to the description of precision to inform users and implementors of this issue.  Something like the following (feel free to edit):

===================
The precision specifies the number of digits written after the decimal point.  If the numeric value to be written requires more digits than precision to represent, the written representation chosen is the one that is closest to the numeric value and representable with the specified precision.  If the numeric value falls on the midpoint of two such representations, which representation is chosen is implementation-dependent.

When the numeric value is an inexact floating-point number, there is more than one interpretation of this "rounding".  One way is to take the effective value the floating-point number represents (e.g. if we use binary floating-point numbers, we take the value of <code>(* <i>sign</i> <i>mantissa</i> (expt 2 <i>exponent</i>))</code>) and compare it to the two closest numeric representations of the given precision.  Another way is to obtain the default notation of the floating-point number and apply rounding to it.  The former (we call it effective rounding) is consistent with most floating-point operations, but may lead to a less intuitive result than the latter (we call it notational rounding).  For example, 5.015 can't be represented exactly in binary floating point.  With IEEE 754 floating-point numbers, the floating-point number closest to 5.015 is smaller than exact 5.015, i.e. <code>(< 5.015 5015/1000) ⇒ #t</code>.  Effective rounding with precision 2 should therefore produce "5.01".  However, users who look at the notation may be confused by "5.015" not being rounded up as they usually expect.  With notational rounding the implementation chooses "5.02" (if it also adopts a round-half-to-infinity or round-half-up rule).  Which interpretation to adopt is up to the implementation.
===================
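The distinction is easy to reproduce outside Scheme as well.  Here is a small Python sketch (Python floats are IEEE 754 doubles) contrasting the two interpretations for the 5.015 example; the variable names are illustrative only:

```python
from decimal import Decimal, ROUND_HALF_UP
from fractions import Fraction

# The double nearest to 5.015 is slightly below the exact value 5015/1000,
# mirroring (< 5.015 5015/1000) => #t above.
stored = Fraction(5.015)            # exact effective value of the double
assert stored < Fraction(5015, 1000)

# Effective rounding: round the stored value itself to 2 digits.
effective = format(5.015, ".2f")    # Python's %f rounds the effective value

# Notational rounding: round the default notation "5.015" instead,
# with a round-half-up (round-half-to-infinity) tie-break.
notational = str(Decimal("5.015").quantize(Decimal("0.01"),
                                           rounding=ROUND_HALF_UP))
print(effective, notational)        # effective gives 5.01, notational 5.02
```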


On Mon, Nov 6, 2017 at 6:52 PM, Alex Shinn <xxxxxx@gmail.com> wrote:
Sorry for the late reply.

On Mon, Oct 30, 2017 at 11:01 AM, Shiro Kawai <xxxxxx@gmail.com> wrote:
That's right, and I still don't have a clear opinion there.  Using round-half-to-even would be consistent with round, and probably with other parts of the implementation (e.g. the built-in string->number and number->string might use round-half-to-even whenever tie-breaking is required).
Initially I implemented Gauche's notational rounding with round-half-to-even.  But then I thought it might not be what users expect---when a user thinks 5.015 rounded down to 5.01 is a bug, she is assuming round-half-to-infinity.  I assume nobody would pick on statistical bias in notationally rounded numbers, so I changed Gauche to use round-half-to-infinity.  It does appear to be an arbitrary choice.
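For the record, the difference between the two tie-breaking rules can be illustrated with Python's decimal module, rounding the notation directly (ROUND_HALF_UP here plays the role of round-half-to-infinity); this is only an illustration, not Gauche's actual implementation:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

two_places = Decimal("0.01")
results = {}
for s in ["5.015", "5.025"]:
    d = Decimal(s)
    results[s] = (str(d.quantize(two_places, rounding=ROUND_HALF_EVEN)),
                  str(d.quantize(two_places, rounding=ROUND_HALF_UP)))

# 5.015 rounds to 5.02 under both rules (1 is odd, so half-to-even goes up),
# but 5.025 exposes the difference: 5.02 half-to-even, 5.03 half-to-infinity.
print(results)
```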

Maybe this complexity is the reason that many programming languages/implementations are vague on this.  It's not worth defining precisely, they think.

Here are some options I can think of right now:

Option 1:
We take the same path as others, just saying that numbers falling near the midpoint may be rounded up or down; it's implementation-dependent.

Option 2:
A potentially over-engineered specification:
* Must have a way to guarantee notational rounding (it's ok to provide only notational rounding; the implementation may provide more efficient, native effective rounding as an option)
* Have an option to specify round-half-to-even or round-half-to-infinity
* Say exact numbers must be treated exactly while generating digits; e.g. converting them to flonums first isn't an option.
If we go down this strict path, we might end up providing the full Burger & Dybvig algorithm in the reference implementation, to guarantee consistent digit generation across implementations.

Option 3:
Make option 2 an "option", and add some mechanism to query what the implementation provides.
This reduces the burden on implementors, and is easier to adopt for implementations that already have a customizable flonum formatter.
An application that prefers notational rounding can at least warn users when it's not available.

Option 4:
Make the number formatter pluggable.  This can be done with the current spec by just replacing numeric, but we could also define a lower-level callback that only generates digits (commas etc. are handled in srfi-159).  The default is implementation-dependent (with a loose definition like option 1).  Separate srfis would provide various digit generators.  Can be combined with option 3.

I think for option 4 we can rely on replacing numeric - the whole system is intended to be pluggable.
Option 2 also sounds like a large burden, so the choice should be between options 1 and 3.
Since there are arguments for both notational and effective rounding, we wouldn't want to make
either the default, and would need to allow a way to achieve both.  We could consider new
parameters to control this:

  - rounding: any of 'native, 'effective, 'notational
  - rounding-direction: any of 'native, 'even, 'infinity

Plus utility procedures `show-get-supported-roundings' and `show-get-supported-rounding-directions'
to return a list of the supported values for these parameters (extensions of course allowed).
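To make the proposal concrete, here is a rough Python sketch of how such parameters could behave; the function names simply mirror the hypothetical procedures above and are not part of any SRFI:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

# Hypothetical query procedures mirroring the proposal above.
def show_get_supported_roundings():
    return ("effective", "notational")

def show_get_supported_rounding_directions():
    return ("even", "infinity")

def show_number(x, precision, rounding="effective", direction="infinity"):
    """Format the float x with `precision` digits after the decimal point."""
    mode = ROUND_HALF_EVEN if direction == "even" else ROUND_HALF_UP
    quantum = Decimal(1).scaleb(-precision)      # e.g. Decimal("0.01")
    # Effective rounding sees the exact stored value of the double;
    # notational rounding sees its shortest default notation.
    d = Decimal(x) if rounding == "effective" else Decimal(repr(x))
    return str(d.quantize(quantum, rounding=mode))

# The 5.015 example from earlier in the thread:
print(show_number(5.015, 2, rounding="effective"))   # 5.01
print(show_number(5.015, 2, rounding="notational"))  # 5.02
```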

I'm just worried this is over-engineering.  Would anyone else want this?

Note the spec as written already requires arbitrary precision for exact numbers:

Implementations should allow arbitrary precision for exact rational numbers,
for example, using string-segment from SRFI 152, the following code
returns the first 100 Fibonacci numbers: [...]

We could explicitly note that conversion to inexact is forbidden, but that
rules out implementations which convert to an arbitrary-precision inexact
representation that covers the requested precision.

--
Alex
 




On Sat, Oct 28, 2017 at 10:17 PM, Alex Shinn <xxxxxx@gmail.com> wrote:
On Sat, Oct 28, 2017 at 5:25 AM, Shiro Kawai <xxxxxx@gmail.com> wrote:
If you mean round-half-to-even, it is orthogonal to effective vs. notational rounding.  1.15 as a binary floating-point number is not on the midpoint of 1.1 and 1.2, so there is no need for tie-breaking.
If you mean always rounding to even, the result would be consistent across implementations.  We could specify that (though it's different behavior from Scheme's "round"; the difference is apparent when precision is 0).

Right, it's orthogonal.  For implementations using notational rounding,
it needs to be addressed, though.  We should also say something about
how the exact number 115/100 is supposed to round.
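As a point of comparison, rounding an exact rational needs no detour through floats at all.  A minimal Python sketch (assuming round-half-away-from-zero, nonnegative inputs, and precision >= 1, for brevity):

```python
from fractions import Fraction

def format_exact(r, precision):
    """Format a nonnegative exact rational with `precision` digits after
    the point, rounding half away from zero (precision >= 1 assumed)."""
    scaled = r * 10 ** precision            # still an exact rational
    q, rem = divmod(scaled.numerator, scaled.denominator)
    if 2 * rem >= scaled.denominator:       # tie or above: round up
        q += 1
    digits = str(q).rjust(precision + 1, "0")
    return digits[:-precision] + "." + digits[-precision:]

print(format_exact(Fraction(115, 100), 1))  # 1.2: the exact value is a true
                                            # midpoint, unlike the double 1.15
print(format_exact(Fraction(1, 3), 5))      # 0.33333, no float involved
```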

-- 
Alex