Re: Clock precision and accuracy
Lassi Kortela 11 May 2019 19:17 UTC
> I very much doubt that there is a real need for nanosecond precision on current computers (in the context of Scheme to measure time).
Indeed, there is almost certainly no principled meaning that can be
given to a particular moment on a standard timescale with nanosecond
precision. The cycle counter on a CPU is the only thing (short of some
cutting-edge lab equipment) that can hope to achieve that kind of
accuracy, and a cycle counter is not based on a standard reference
timescale like TAI or UTC.
The application that I keep worrying about is comparing two timestamps
for exact equality (most likely to determine whether or not some file or
other thing has changed). This comparison would not interpret those
timestamps in any way -- it would just look at the raw numeric values
(e.g. in the st_mtime field from stat()). This is the only purpose for
which exact copying of timestamps would be needed. Exact comparison of
floats is a classic thing to avoid, especially if they have been
converted from integers.
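Roughly what I have in mind, as a sketch in R7RS-ish Scheme (file-mtime
is a made-up accessor standing in for whatever returns the raw
(seconds . nanoseconds) pair from stat(); none of these names come from
any SRFI):

;; "Has this file changed?" -- compare the raw timestamp values for
;; exact equality, without interpreting them as instants.
(define (file-changed? path old-mtime)
  (not (equal? (file-mtime path) old-mtime)))

;; Why exact comparison of converted floats goes wrong: an exact
;; nanosecond count above 2^53 does not survive a round trip through
;; a 64-bit float.
(define t 1557594000123456789)      ; exact integer nanoseconds
(= t (exact (inexact t)))           ; => #f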
> A microsecond is pretty short (light takes about a microsecond to travel 1000 feet). Anyway, if you really need nanosecond precision, 80 bit floats (which have 12 more bits of mantissa) will get you there and actually more (about 100 picosecond precision currently). 80 bit floats have been “common” (on intel) since the 70’s.
Also fully agreed. Interpreting time (other than cycle counts) below
microsecond precision is almost certainly bogus outside a lab setting.
Even cutting-edge high-frequency trading operates on microsecond
scales, not nanosecond ones.
> And using a time object abstraction solves the problem of different exact time representations (file times on unix vs windows for example). That was my main point.
The more I think and read about the abstraction, the more I like it. If
the internal representation can be a float and/or a cons cell, everyone
can be happy :)
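For instance, something along these lines (names invented for the sake
of the sketch; the internal fields could just as well be a single float
or a cons cell, since callers only go through the accessors):

(define-record-type <timestamp>
  (make-timestamp seconds nanoseconds)    ; exact integers
  timestamp?
  (seconds timestamp-seconds)
  (nanoseconds timestamp-nanoseconds))

;; Exact equality stays exact...
(define (timestamp=? a b)
  (and (= (timestamp-seconds a) (timestamp-seconds b))
       (= (timestamp-nanoseconds a) (timestamp-nanoseconds b))))

;; ...while a float view is available for arithmetic and display.
(define (timestamp->inexact-seconds t)
  (+ (timestamp-seconds t)
     (/ (timestamp-nanoseconds t) 1e9)))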