Clock precision and accuracy Lassi Kortela (11 May 2019 10:48 UTC)
Re: Clock precision and accuracy Marc Feeley (11 May 2019 12:04 UTC)
Re: Clock precision and accuracy Lassi Kortela (11 May 2019 12:34 UTC)
Re: Clock precision and accuracy Marc Feeley (11 May 2019 14:36 UTC)
Re: Clock precision and accuracy John Cowan (11 May 2019 18:19 UTC)
Re: Clock precision and accuracy Marc Feeley (11 May 2019 18:54 UTC)
Re: Clock precision and accuracy Lassi Kortela (11 May 2019 19:17 UTC)
Re: Clock precision and accuracy Marc Feeley (11 May 2019 20:07 UTC)

Re: Clock precision and accuracy Marc Feeley 11 May 2019 18:54 UTC

> On May 11, 2019, at 2:19 PM, John Cowan <xxxxxx@ccil.org> wrote:
>
> On Sat, May 11, 2019 at 8:04 AM Marc Feeley <xxxxxx@iro.umontreal.ca> wrote:
>
> As time passes, the (fixed) 53 significant bits of a 64-bit flonum will represent the seconds since the epoch with increasing error.
>
> Indeed, IEEE binary64 format could only represent nanosecond precision until April 14, 1970.  That puts all the precision long ago and very nearly in a galaxy far, far away.  64-bit seconds and 32-bit or 64-bit nanoseconds will take us to the Big Crunch (assuming there is one) and with constant precision.
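
A quick sanity check of the April 1970 figure quoted above, sketched in Scheme (the only assumption is binary64's 53-bit significand):

    ;; 2^53 is the largest count of nanoseconds a binary64 float can
    ;; hold without losing exact nanosecond resolution.
    (define ns-per-day (* 24 60 60 (expt 10 9)))
    (exact->inexact (/ (expt 2 53) ns-per-day))
    ;; => about 104.25, i.e. roughly 104 days after 1970-01-01 -- mid-April 1970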

I very much doubt that there is a real need for nanosecond precision on current computers (in the context of measuring time in Scheme).  A microsecond is pretty short (light takes about a microsecond to travel 1000 feet).  Anyway, if you really need nanosecond precision, 80-bit floats (which have 11 more bits of mantissa) will get you there and then some (about 100 picosecond precision at present).  80-bit floats have been “common” (on Intel) since the ’70s.
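
To make those figures concrete, here is a small sketch (the helper is illustrative, not a proposed API), assuming the usual significand widths of 53 bits for binary64 and 64 bits for the 80-bit extended format: near t seconds since the epoch, with 2^(e-1) <= t < 2^e, adjacent representable values are 2^(e-p) seconds apart for a p-bit significand.

    ;; Spacing between adjacent representable values of a float with the
    ;; given number of significand bits, near the given count of seconds
    ;; since the epoch.
    (define (resolution-at seconds significand-bits)
      (let loop ((e 0))
        (if (> (expt 2 e) seconds)
            (expt 2.0 (- e significand-bits))
            (loop (+ e 1)))))

    ;; 1557600000 is roughly the epoch count on the date of this message.
    (resolution-at 1557600000 53)  ; binary64:        ~2.4e-7 s  (about 240 ns)
    (resolution-at 1557600000 64)  ; 80-bit extended: ~1.2e-10 s (about 120 ps)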

The problem with an integer representation (counting microseconds or nanoseconds or picoseconds) is that it is not future-proof.  Sooner or later you will want more precision and the units (and APIs) will have to change.  In computing, the units have moved from seconds, to clock ticks at various HZ rates (1/18.2, 1/50, 1/60, 1/100, 1/250 of a second), then milliseconds, microseconds and nanoseconds.  What a mess!  Using real numbers “solves” that.
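
As a concrete sketch of that point (the conversion procedures here are only illustrative), an API that traffics in a real number of seconds leaves the unit question to the caller, and in Scheme the “real number” can just as well be an exact rational when full precision matters:

    ;; Unit conversions on top of a single representation: seconds, as a
    ;; real number.  Adding precision later changes no interfaces.
    (define (seconds->milliseconds t) (* t 1000))
    (define (seconds->nanoseconds t)  (* t (expt 10 9)))

    (seconds->nanoseconds 1557600000.25)          ; flonum in, flonum out
    (seconds->nanoseconds 123456789/1000000000)   ; => 123456789, exactly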

And using a time object abstraction solves the problem of differing exact time representations (file times on Unix vs. Windows, for example).  That was my main point.
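
A minimal sketch of such an abstraction (the record and procedure names are hypothetical, not a proposed API): the accessor fixes the meaning -- seconds since the Unix epoch, as a real number -- while foreign representations such as a Windows FILETIME are converted at the boundary and never leak into user code.

    ;; SRFI 9 / R7RS record hiding the stored representation.
    (define-record-type <time-point>
      (make-time-point seconds)        ; seconds since 1970-01-01 00:00 UTC
      time-point?
      (seconds time-point-seconds))

    ;; Windows FILETIME counts 100 ns ticks since 1601-01-01; the offset
    ;; between that epoch and the Unix epoch is 11644473600 seconds.
    (define (filetime->time-point ticks)
      (make-time-point (- (/ ticks (expt 10 7)) 11644473600)))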

> By that time architectures will probably have evolved to 128 bit flonums
>
> I'm not so certain of that.  Integer range tends to be driven by bus widths, but float precision and range by the needs of scientific work.  64-bit (though non-IEEE) floats have been around since 1964 at least, with no sign of 128-bit floats in common use yet.  Even the much more useful decimal floats are taking approximately forever to catch on: they were first published 16 years ago and standardized 11 years ago.

Some current architectures support 128-bit floats, and so did some models of the IBM 360 back in the ’70s.  I’m pretty confident 128-bit floats will be common within the next 20 years, not because they are needed for precise calculations but because CPU engineers are running out of ideas for what to do with all those transistors (and for publicity/marketing, of course).

Marc