Re: Clock precision and accuracy Marc Feeley 11 May 2019 20:07 UTC

> On May 11, 2019, at 3:17 PM, Lassi Kortela <> wrote:
>> I very much doubt that there is a real need for nanosecond precision on current computers (in the context of Scheme to measure time).
> Indeed, there is almost certainly no principled meaning that can be given to a particular moment on a standard timescale with nanosecond precision. The cycle counter on a CPU is the only thing (short of some cutting-edge lab equipment) that can hope to achieve that kind of accuracy, and a cycle counter is not based on a standard reference timescale like TAI or UTC.

Just for reference, on the fairly recent computers I have access to, gettimeofday and clock_gettime take about 15 nanoseconds to execute and return time to microsecond and nanosecond precision respectively (a few years back I ran a similar experiment and gettimeofday took more than a microsecond to execute).  So those OS functions seem accurate, in the sense that successive values don’t look like the result of scaling a lower precision timer.  So timing at the nanosecond level is possible; I just don’t think it is very useful to go below microseconds in the context of Scheme currently.  That is only an opinion about what is practical currently, and it should not be used to justify an API where time is measured in microseconds… because “practical currently” will change.

> The application that I keep worrying about is comparing two timestamps for exact equality (most likely to determine whether or not some file or other thing has changed). This comparison would not interpret those timestamps in any way -- it would just look at the raw numeric values (e.g. in the st_mtime field from stat()). This is the only purpose for which exact copying of timestamps would be needed. Exact comparison of floats is a classic thing to avoid, especially if they have been converted from integers.

I don’t follow your reasoning.  Testing float equality will work fine if they are the result of a deterministic conversion from an integer representation (as they should be in a given RTS).  However, testing that one file is exactly 1/10 of a second more recent than another will be an issue (but I wonder about what use case would need to do this…).

>> A microsecond is pretty short (light takes about a microsecond to travel 1000 feet).  Anyway, if you really need nanosecond precision, 80 bit floats (whose 64 bit mantissa has 11 more bits than a double’s 53) will get you there and actually more (about 100 picosecond precision currently).  80 bit floats have been “common” (on intel) since the 70’s.
> Also fully agreed. Interpreting time (other than cycle counts) below microsecond precision is almost certainly bogus outside a lab setting. Cutting-edge high-frequency trading is done at microsecond scales, not nanoseconds.
>> And using a time object abstraction solves the problem of different exact time representations (file times on unix vs windows for example).  That was my main point.
> The more I think and read about the abstraction, the more I like it. If the internal representation can be a float and/or a cons cell, everyone can be happy :)

Funny you say that.  In Gambit there’s a C level compilation flag that can switch between an integer time representation and a float representation.


/* Float representation of time: */

#define POS_INFINITY (1.0/0.0)  /* positive infinity */
#define NEG_INFINITY (-1.0/0.0) /* negative infinity */

/* alternative definitions using the largest finite doubles
   (presumably for compilers without IEEE infinities): */

#define POS_INFINITY (1.7976931348623157e308)  /* positive infinity */
#define NEG_INFINITY (-1.7976931348623157e308) /* negative infinity */

typedef ___F64 ___time;

/* Integer representation of time (seconds + nanoseconds): */

typedef struct ___time_struct
  {
    ___SM32 secs;
    ___SM32 nsecs;
  } ___time;

#define TIME_POS_INFINITY { 2147483647, 999999999 }
#define TIME_NEG_INFINITY { -2147483648, 0 }