
Re: Clock precision and accuracy Marc Feeley 11 May 2019 12:04 UTC

The precision (nanosecond, millisecond, etc.) should not be part of the time API.  Doing so leads to the kind of issues you mention and to endless discussion of what the most appropriate precision is.  Time should be an abstract object that hides the precision (which can depend on the specifics of the low-level interface).

Gambit has the time->seconds and seconds->time procedures to convert between time objects and a flonum giving the elapsed time since a reference point (the Unix epoch).  Internally, Gambit uses flonums for time calculations such as I/O timeouts, thread scheduling quanta, etc.

As time passes, the (fixed) 53 significant bits of a 64-bit flonum represent the seconds since the epoch with increasing error.  Currently, and for the next 20 years, the integer part of the time takes 31 bits, leaving 22 bits to represent the fraction of a second, so the error stays sub-microsecond over that whole period.  By that time architectures will probably have evolved to 128-bit flonums (alternatively, a new epoch could be defined to reset the error).
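
A quick back-of-the-envelope check of that bound, taking the 31-bit integer part as given: the spacing between adjacent flonum values is then 2^-22 seconds, which is easy to evaluate:

  (exact->inexact (expt 2 -22))  ; => about 2.4e-7 s, comfortably sub-microsecond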

Here are a few examples from Gambit:

> (file-info ".")
#<file-info #2
   type: directory
   device: 16777220
   inode: 8606475805
   mode: 493
   number-of-links: 46
   owner: 501
   group: 20
   size: 1472
   last-access-time: #<time #3>
   last-modification-time: #<time #4>
   last-change-time: #<time #5>
   attributes: 16
   creation-time: #<time #6>>
> (time->seconds (file-info-last-modification-time (file-info ".")))
1557425438.
> (time->seconds (current-time))
1557573958.072262
> (- (time->seconds (current-time)) (time->seconds (current-time)))
-9.5367431640625e-7
> (real-time)
166.17429184913635
> (cpu-time)
.035474

Marc

> On May 11, 2019, at 6:48 AM, Lassi Kortela <xxxxxx@lassi.io> wrote:
>
> Here are some more thoughts about time. They are from a devil's
> advocate perspective (which I think is the best perspective when it
> comes to OS stuff, and especially time stuff) so unfortunately it's
> all nitpicking, all the time.
>
> Should we just return all timestamps (file time, current time of day,
> anything else) as two nonnegative integers: [seconds nanoseconds]?
>
> (A nanosecond is a billionth of a second so it fits in 30 bits, making
> it somewhat fixnum-friendly also.)
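>
> (For concreteness, a tiny sketch of that representation: given a raw
> count of nanoseconds since the epoch, the pair is just a quotient and
> remainder, and the nanosecond part always fits in 30 bits.  The name
> below is made up purely for illustration.)
>
>   ;; Sketch: split a raw nanosecond count into the proposed pair.
>   (define (nanoseconds->timestamp ns)
>     (values (quotient ns 1000000000)     ; whole seconds
>             (remainder ns 1000000000)))  ; 0..999999999, fits in 30 bits
>
>   ;; e.g. (nanoseconds->timestamp 1557573958072262000)
>   ;;      => 1557573958 and 72262000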
>
> All timestamps would have nanosecond precision, and the accuracy of
> the nanoseconds part is "make of it what you will". We would not
> provide procedures to ask "how precise is the nanosecond part really?"
>
> This is based on the observation that timestamp precision is a tricky
> thing. A more precise timestamp is often just a copy of a less precise
> timestamp upstream. So although some APIs can advertise a "precision",
> the real precision is up to the file system, or a network host, or a
> hardware oscillator with thermal issues - often a combination of
> those. FAT stores last-modified timestamps with two-second precision;
> UFS stores them with nanosecond precision (which no computer can
> actually supply). Both are "advertised" by stat() as having nanosecond
> precision. Likewise, different CPUs and motherboards have different
> clock sources, etc. I read on the internet that the Raspberry Pi can
> keep time at microsecond precision.
>
> Asking about timestamp precision in a high-level language sounds like
> an XY problem (<http://xyproblem.info/>) by default:
> - How do I know how precise the time from the xyz function is?
> - What problem are you _really_ trying to solve?
>
> Another thing is that you can't really _use_ an _actual_ nanosecond
> precision timer in the same way as a microsecond precision timer,
> which in turn can't be used in the same way as a millisecond timer,
> etc. Using a very precise timer leads to problems where the code
> around the timer needs to be written carefully so that it doesn't take
> so long to run that it disturbs the intended purpose of the timer.
>
> So the current draft would be changed as follows:
>
> (current-nanosecond) ---> [secs nsecs]
>
> I would add this procedure by analogy to current-second in R7RS. With
> the proviso that the accuracy of nsecs is "make of it what you will".
> R7RS current-second uses (or aspires to use) TAI; this one could as
> well.
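>
> (To pin down the shape of the return values, here is a rough sketch;
> it is only illustrative and simply derives the pair from R7RS
> current-second, so the nanoseconds carry no more real precision than
> that clock does:)
>
>   (define (current-nanosecond)
>     (let* ((s     (current-second))      ; R7RS: inexact TAI seconds
>            (secs  (exact (floor s)))
>            (nsecs (exact (truncate (* (- s secs) 1000000000)))))
>       (values secs nsecs)))
>
> A native implementation would of course read the clock directly.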
>
> What bothers me a bit about this is that the name says nanosecond but
> it will never have a real precision (never mind accuracy) of anything
> close to a nanosecond. But it's consistent with many on-disk formats
> and APIs that use nanoseconds so the interoperability is nice. And
> ultimately we would have more or less the same problems if we
> specified a current-microsecond procedure. We just need to add a
> prominent warning about the real precision so people don't expect
> miracles.
>
> (time+ticks) ---> [secs ticks]
>
> I would leave out this procedure that combines a precise second with a
> variable-precision sub-second tick counter:
>
> * Getting the current time with _some_ unspecified sub-second
>  precision is served well by (current-nanosecond).
>
> * Giving users more information about the tick precision or frequency
>  is misleading, as context switch overhead and many other things
>  always weaken the precision at sub-microsecond timescales. I can't
>  figure out how to give users a reliable estimate of the real
>  precision (again, ignoring accuracy concerns) and how users would
>  make use of that information.
>
> * Getting a high-performance sub-second counter would be:
>
>  1) Better served by some kind of low-overhead CPU counter, which is
>     really more CPU- than OS-dependent, so it would be out of scope
>     for an OS SRFI. Perhaps better suited for a dedicated
>     high-performance timer SRFI. There is SRFI 120: Timer APIs.
>
>  2) Not useful to tie into a clock that measures human time, because
>     the relevant computations weaken the real precision of the timer.
>     Probably in practice you can have _either_ sub-microsecond
>     precision _or_ a time based on a standard epoch of society's
>     timekeeping. It seems ill-advised to ask for both at once unless
>     you have some kind of cutting edge research equipment.
>
> Remarks on specific parts of the procedure specification:
>
> "This would be important for a system that wanted to precisely time
> the duration of some event." --> I would recommend a cycle counter for
> durations with sub-millisecond precision in a portable context. Since
> it's a duration it doesn't need reference to a standard timescale like
> UTC or TAI - a monotonically increasing clock is enough (if the clock
> wraps around, subtracting two timestamps can undo the wraparound).
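>
> (For example, with a counter known to wrap at some modulus, an
> ordinary modular subtraction recovers the elapsed ticks; the 2^32
> wrap point below is just an assumption for illustration:)
>
>   ;; Sketch: undo wraparound of a monotonically increasing tick counter.
>   (define tick-modulus (expt 2 32))      ; assumed wrap point
>   (define (elapsed-ticks earlier later)
>     (modulo (- later earlier) tick-modulus))
>
>   ;; e.g. across a wrap: (elapsed-ticks 4294967290 10) => 16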
>
> "Time stamps could be collected with little overhead, deferring the
> overhead of precisely calculating with them until after collection."
> --> This is a very good principle.
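>
> (A tiny illustration of that principle; read-raw-ticks is a made-up
> stand-in for whatever cheap counter an implementation exposes:)
>
>   ;; Capture raw readings cheaply inside the measured region...
>   (define before (read-raw-ticks))
>   (define after  (read-raw-ticks))
>   ;; ...and defer anything costly (conversion to seconds, statistics,
>   ;; formatting) until after collection.
>   (define raw-difference (- after before))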
>
> (ticks/sec) ---> real
>
> I would remove this procedure altogether, as I can't think of any use
> cases for it. For example, to find out that the RasPi's clock is
> accurate to one microsecond, I had to read a blog post that did some
> measurements rather than call a function. If we don't
> pretend to know anything about the precision/accuracy, we don't give
> misleading information and false hope to users :)
>
> (file-info:atime file-info) → integer
> (file-info:mtime file-info) → integer
> (file-info:ctime file-info) → integer
>
> These could also return [secs nsecs] instead of one integer (does that
> integer store seconds only in Scsh?) In fact, the Unix timespec from
> stat() is already happily factored into second and nanosecond parts.
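>
> (And if a caller wants a single number for arithmetic, collapsing the
> pair back into one value is trivial; this sketch gives an inexact
> count of seconds since the epoch:)
>
>   ;; Sketch: collapse [secs nsecs] into a single inexact seconds value.
>   (define (timestamp->seconds secs nsecs)
>     (exact->inexact (+ secs (/ nsecs 1000000000))))
>
>   ;; e.g. (timestamp->seconds 1557425438 72262000)
>   ;;      => approximately 1557425438.072262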
>