Re: Quick question on thread-interrupt! Marc Nieper-Wißkirchen 11 Nov 2022 11:23 UTC

PPS It should be noted that in a multi-threaded environment there is
no guarantee that the BEFORE thunk of dynamic-wind is not called twice
in succession without a call to AFTER in between.  So the
safe-dynamic-wind from the previous post is only "safe" when BEFORE
and AFTER can meaningfully be invoked in a different order (but there
will never be more calls to AFTER than to BEFORE, and within each
individual thread, the calls pair up as usual).

On Wed, Nov 9, 2022 at 13:14, Marc Nieper-Wißkirchen
<xxxxxx@nieper-wisskirchen.de> wrote:
>
> "Asynchronous" exceptions, a subset of the features thread-interrupt!
> would give us, break dynamic-wind guarantees:
>
> (dynamic-wind
>     (lambda () (increase! x))
>     thunk
>     (lambda () (decrease! x)))
>
> Here, the idea is that within the dynamic extent of the THUNK, the
> location of X should hold an increased value (see the sample
> implementation of parameters in R7RS for a similar real-world
> example).  This works if decrease! cannot fail.  This is no longer the
> case with asynchronous exceptions (and first-class continuations).
> Now one can argue that this is just the power of thread-interrupt! and
> it is the programmer's responsibility not to misuse it (a point of
> view every C programmer could subscribe to); however, dynamic-wind was
> made to uphold the concept of a dynamic environment even in the
> presence of non-local control flow.
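>
> For a concrete (if contrived) sketch of the failure mode, assuming
> SRFI 18 threads and a Gambit-style thread-interrupt! (the
> thread-sleep! in AFTER merely stands in for cleanup work that takes
> time; whether the failure is actually hit depends on when the
> interrupt is delivered):
>
> ;; Sketch only; assumes SRFI 18 threads and a thread-interrupt! as
> ;; discussed in this thread.
> (define x 0)
>
> (define (worker-body)
>   (dynamic-wind
>     (lambda () (set! x (+ x 1)))       ; increase! x
>     (lambda () 'done)                  ; THUNK returns at once
>     (lambda ()
>       (thread-sleep! 5)                ; cleanup that takes time
>       (set! x (- x 1)))))              ; decrease! x
>
> (define worker (thread-start! (make-thread worker-body)))
>
> (thread-sleep! 1)                      ; let WORKER reach AFTER
> ;; If this raise is delivered while AFTER is still running, the
> ;; decrement is skipped and X keeps its increased value.
> (thread-interrupt! worker (lambda () (raise 'interrupted)))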
>
> Thus, if we have asynchronous exceptions (or, more generally,
> thread-interrupt!), we probably need a parameter-like object
> INTERRUPT-LEVEL, a non-negative integer.  If INTERRUPT-LEVEL is
> positive, interrupts will be postponed.  Dynamic-wind would then be
> modified so that it is in itself atomic, and it would reparameterize
> the INTERRUPT-LEVEL (so that it increases) during the execution of
> BEFORE and AFTER.  A possible definition would be
>
> (define dynamic-wind
>   (lambda (before thunk after)
>     (with-interrupts-disabled
>       (unmodified-dynamic-wind
>         before
>         (lambda () (with-interrupts-enabled (thunk)))
>         after))))
>
> Here, with-interrupts-disabled and with-interrupts-enabled do the
> reparameterizing, increasing and decreasing INTERRUPT-LEVEL by one,
> respectively.
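>
> A possible sketch of these two forms (only one way to spell them; it
> assumes INTERRUPT-LEVEL is an ordinary parameter object called
> interrupt-level, and it glosses over the fact that, as said above,
> the reparameterization itself would have to be atomic):
>
> ;; Sketch only; interrupts would be delivered only while
> ;; (interrupt-level) is 0.
> (define interrupt-level (make-parameter 0))
>
> (define-syntax with-interrupts-disabled
>   (syntax-rules ()
>     ((_ body ...)
>      (parameterize ((interrupt-level (+ (interrupt-level) 1)))
>        body ...))))
>
> (define-syntax with-interrupts-enabled
>   (syntax-rules ()
>     ((_ body ...)
>      (parameterize ((interrupt-level (max 0 (- (interrupt-level) 1))))
>        body ...))))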
>
> An alternative would be to make not the modified dynamic-wind but the
> unmodified one the official dynamic-wind, which would probably lead
> to less safe code in the wild.  Note that the unmodified dynamic-wind
> (unsafe-dynamic-wind below) can also be expressed in terms of the
> modified one (safe-dynamic-wind):
>
> (define unsafe-dynamic-wind
>   (lambda (before thunk after)
>     (safe-dynamic-wind
>       (lambda () (with-interrupts-enabled (before)))
>       thunk
>       (lambda () (with-interrupts-enabled (after))))))
>
> Thoughts?
>
> PS: Thread-emergency-exit (or whatever it will be called) would still
> bypass all dynamic-winds.  But it could only be harmful for data
> shared across threads, and such data has to be treated carefully with
> mutexes anyway.
>
> On Tue, Nov 8, 2022 at 16:57, Marc Nieper-Wißkirchen
> <xxxxxx@nieper-wisskirchen.de> wrote:
> >
> > On Tue, Nov 8, 2022 at 16:01, Marc Feeley <xxxxxx@iro.umontreal.ca> wrote:
> > >
> > >
> > > > On Nov 8, 2022, at 2:27 AM, Marc Nieper-Wißkirchen <xxxxxx@nieper-wisskirchen.de> wrote:
> > > >
> > > > On Mon, Nov 7, 2022 at 21:08, Marc Feeley <xxxxxx@iro.umontreal.ca> wrote:
> > > >>
> > > >>
> > > >>> On Nov 7, 2022, at 1:43 PM, Marc Nieper-Wißkirchen <xxxxxx@nieper-wisskirchen.de> wrote:
> > > >>>
> > > >>> Thank you!
> > > >>>
> > > >>> On Mon, Nov 7, 2022 at 19:19, Marc Feeley <xxxxxx@iro.umontreal.ca> wrote:
> > > >>>>
> > > >>>>
> > > >>>>> On Nov 7, 2022, at 4:41 AM, Marc Nieper-Wißkirchen <xxxxxx@nieper-wisskirchen.de> wrote:
> > > >>>>>
> > > >>>>> Hey Marc,
> > > >>>>>
> > > >>>>> could you describe the exact semantics of Gambit's thread-interrupt!
> > > >>>>> or give me a link to where it is documented?
> > > >>>>>
> > > >>>>> Specifically, what happens when the thunk returns normally?
> > > >>>>>
> > > >>>>> We talked about capturing a continuation inside the thunk, which is
> > > >>>>> related to the question.
> > > >>>>>
> > > >>>>> Thanks,
> > > >>>>>
> > > >>>>> Marc
> > > >>>>>
> > > >>>>
> > > >>>> The API and semantics of thread-interrupt! has evolved over time since the introduction of threads in Gambit v4.0 (~2000).  The basic idea is to force a runnable or blocked thread to immediately execute a call to a thunk, regardless of what that thread is currently doing.
> > > >>>>
> > > >>>> Conceptually each thread executes a series of atomic actions (“atomic” in the sense that they are operations at a certain level of abstraction of the virtual machine, such as a call to “cons”, “car”, “pair?”, etc).  But note that some Scheme predefined procedures, such as “append”, “map”, etc are not atomic and are a series of atomic actions.  The thread interrupt mechanism inserts the call to the thunk between such atomic actions, thus ensuring that interrupts happen at “safe places” (for the Scheme virtual machine, which does not mean that it is safe for the logic of the program, which is the programmer’s concern).
> > > >>>>
> > > >>>> Conceptually, if the thread was about to evaluate <expr>, it replaces this by the evaluation of (begin (thunk) <expr>), so that the thunk’s result is ignored and the thunk is called with the thread’s current continuation as a parent, including the dynamic environment.
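> > > >>>>
> > > >>>> For illustration, a minimal usage sketch (not taken from Gambit's manual; it assumes SRFI 18 thread procedures, and exactly when the thunk runs depends on scheduling):
> > > >>>>
> > > >>>> ;; Sketch only; assumes SRFI 18 threads and thread-interrupt!
> > > >>>> ;; as described above.
> > > >>>> (define worker
> > > >>>>   (thread-start!
> > > >>>>    (make-thread (lambda () (let loop () (thread-yield!) (loop))))))
> > > >>>>
> > > >>>> ;; Conceptually, WORKER's next step <expr> becomes
> > > >>>> ;; (begin (display "interrupted\n") <expr>): the message is
> > > >>>> ;; printed, the thunk's result is discarded, and the loop resumes.
> > > >>>> (thread-interrupt! worker (lambda () (display "interrupted\n")))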
> > > >
> > > > The problem I still see is that thread-interrupt! and, to a lesser
> > > > extent, thread-exit! will expose implementation details of standard
> > > > procedures (those that are implemented as non-atomic ones).
> > > >
> > > > Assume that we have a procedure "foo" in the standard, that, by
> > > > definition, increases the global variable "x" by one and then the
> > > > global variable "y" before it returns.
> > > >
> > > > With the usual "as if" rule, it wouldn't matter if foo were implemented as
> > > >
> > > > (define foo
> > > >  (lambda ()
> > > >    (set! x (+ x 1))
> > > >    (set! y (+ y 1))))
> > > >
> > > > or, with the two assignments swapped (call this variant "bar"), as
> > > >
> > > > (define bar
> > > >  (lambda ()
> > > >    (set! y (+ y 1))
> > > >    (set! x (+ x 1))))
> > > >
> > > > With thread-interrupt!, however, a continuation in the middle of the
> > > > execution of foo could be caught, making the order of assignments
> > > > observable.  This would suddenly rule out the second implementation
> > > > because the specification of "foo" would have become an
> > > > over-specification with the introduction of thread-interrupt!.
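> > > >
> > > > For illustration (my own sketch, assuming bar runs in a thread
> > > > called WORKER and a Gambit-style thread-interrupt!): the interrupt
> > > > thunk may run between bar's two assignments, at which point y has
> > > > already been incremented while x has not, so the assignment order
> > > > leaks:
> > > >
> > > > ;; Sketch only; WORKER is assumed to be executing bar right now.
> > > > (thread-interrupt! worker
> > > >   (lambda ()
> > > >     ;; may observe y already incremented while x is not
> > > >     (display (list 'x x 'y y))
> > > >     (newline)))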
> > > >
> > > > Even more complicated is the matter with the non-atomic increase of x and y.
> > >
> > > But this issue (exposing the implementation) is already the case currently:
> > >
> > > 1) when the operation throws an exception and the exception handler captures the continuation (for example imagine the variable x in (set! x (+ x 1)) contains a non-number and “+” raises a type exception, or that x is a bignum and “+” needs to allocate a bignum and this needs to call the garbage collector because the heap is full, and the GC “hook” (a common very useful extension) captures the continuation to show the user where the GC was triggered or for profiling).
> >
> > My example "bar" procedure is probably not a good example.  A standard
> > library routine would probably first check whether "x" and "y" are
> > numbers and would employ mutexes if "x" and "y" are not thread-local.
> > I am less concerned about a GC hook unless such a hook is
> > standardized as well.
> >
> > > 2) when the operation includes a procedure call such as (set! x (f x)) where f might contain code that captures the continuation or raises an exception like for #1
> >
> > Such procedures would indeed have to be coded carefully to match the
> > spec.  We discussed this on the SRFI 231 mailing list (and used
> > `vector-map' as an example, if my memory serves me right).  (Related
> > to this are my recent messages on the SRFI 127 and SRFI 158 mailing
> > lists.)
> >
> > > 3) when “+” has been set! to a procedure that captures the continuation (this may not be allowed by R7RS, but it is in R5RS and in Scheme systems that allow this for debugging reasons)
> >
> > At least in R5RS, standard library procedures would not be affected.
> > If it is a non-standard extension, I don't see this as a problem as a
> > non-standard extension should be allowed to inspect the VM even on the
> > level of electrons and nucleons.
> >
> > > Moreover it is possible using threads to observe y being mutated before x, so the transformation you mention above is limited to non-threaded code so clearly not in situations where thread-interrupt! would be used.
> >
> > See above; every code example has its limits.  So let us assume that x
> > and y are lexically scoped but that they can be accessed through
> > getters outside of "bar".
> >
> > > If it is critical that the two assignments can’t be interrupted in the middle then this should be explicitly enforced with a critical section.  This could be achieved using a boolean parameter object that indicates if interrupts should be handled immediately or deferred until later:
> > >
> > >     (define bar
> > >       (lambda ()
> > >         (parameterize ((defer-interrupts? #t))
> > >           (set! x (+ x 1))
> > >           (set! y (+ y 1)))))
> > >
> > > When defer-interrupts? is #t any incoming interrupts are put in a queue.  When defer-interrupts? goes from #t to #f the interrupts on the queue are serviced.
> > >
> > > It could also be a special form (with-deferred-interrupts thunk) to hide the parameter object, or equivalent mechanism.
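> > >
> > > A minimal sketch of such a form, assuming the defer-interrupts? parameter from the example above (servicing the queued interrupts when the parameterization is exited would be the runtime's job, as described above):
> > >
> > >     ;; Sketch only; defer-interrupts? is the parameter shown above.
> > >     (define (with-deferred-interrupts thunk)
> > >       (parameterize ((defer-interrupts? #t))
> > >         (thunk)))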
> >
> > I have been thinking about this; Chez Scheme has something similar,
> > and on the (x86) machine level, the cli/sti instructions do the same.
> >
> > I don't entirely like it; for simplicity, programmers may just wrap
> > the entire code in a critical section.  And if an implementation's GC
> > works through the interrupt mechanism, disabling interrupts would be
> > problematic in any case.  (But, maybe, the latter should be solved
> > independently by such an implementation.)
> >
> > >
> > > >
> > > > Another problem is that the application, including the standard
> > > > libraries (but not the VM) may be in an unsafe state at the time of
> > > > the interrupt.  So any call to a standard procedure may crash the
> > > > system.  But the standard should allow an implementation to offer a
> > > > safe mode.
> > >
> > > This is why I say at that point it is the programmer’s responsibility to use this powerful construct correctly (similarly to using first class continuations and assignments together).
> >
> > If the standard libraries are written safely, and the VM can operate
> > in a safe mode, I cannot crash the system (I am not talking about the
> > application!) using first-class continuations or assignments.
> >
> > My problem is that with thread-interrupt!, it seems to become very
> > hard to write safe standard library implementations (where "safe" is
> > in the above sense).
> >
> > >
> > > >
> > > > Hypothetical procedures "thread-raise" and "thread-raise-continuable",
> > > > would make it a bit easier.  We would then have:
> > > >
> > > > (define thread-interrupt!
> > > >   (lambda (thread thunk)
> > > >     (thread-raise-continuable
> > > >       thread
> > > >       (make-interrupt-condition (lambda (exc) (thunk))))))
> > > >
> > > > (define-condition-type &interrupt-condition &condition
> > > >  make-interrupt-condition interrupt-condition?
> > > >  (handler interrupt-condition-handler))
> > > >
> > > > The initial exception handler would then be:
> > > >
> > > > (lambda (exc)
> > > >   (cond
> > > >     [(interrupt-condition? exc)
> > > >      ((interrupt-condition-handler exc) exc)]
> > > >     [else ...]))
> > > >
> > > > Procedures like foo could then install a custom exception handler
> > > > ("interrupt handler").
> > > >
> > > > This would make thread-interrupt! less powerful, though.
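> > > >
> > > > To illustrate the idea, a rough sketch (mine, glossing over the
> > > > finer points of exception handling): foo could collect interrupt
> > > > conditions that arrive during the two assignments and service them
> > > > afterwards:
> > > >
> > > > ;; Sketch only; uses the interrupt-condition definitions above.
> > > > (define (foo)
> > > >   (let ((pending '()))
> > > >     (with-exception-handler
> > > >       (lambda (exc)
> > > >         (if (interrupt-condition? exc)
> > > >             (set! pending (cons exc pending))  ; postpone it
> > > >             (raise-continuable exc)))          ; pass others on
> > > >       (lambda ()
> > > >         (set! x (+ x 1))
> > > >         (set! y (+ y 1))))
> > > >     ;; service the postponed interrupts in arrival order
> > > >     (for-each (lambda (exc) ((interrupt-condition-handler exc) exc))
> > > >               (reverse pending))))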
> > > >
> > > > What is a good way out?
> > >
> > > Isn’t it still possible to write an exception handler (called at the moment of an interrupt) that contains a continuation capture?  So I’m not sure what is gained by doing it this way.
> >
> > A critical section could install an exception handler that first
> > unwinds to the beginning of the critical section (which includes
> > resetting the continuation to the beginning) before it calls the
> > interrupt thunk in this unwound continuation.
> >
> > The problem is, of course, that a second interrupt may arrive during
> > the unwinding process, and this interrupt would see an exception
> > handler up the stack...
> >
> > > On a separate subject I have some thoughts about thread-exit!.  I think this procedure should drop the “!” for consistency with the R7RS “exit” procedure.  The semantics should also be modeled after “exit” and “emergency-exit”:
> >
> > What was your rationale for "!" vs no "!" in SRFI 18?
> >
> > >    (thread-exit [obj])
> > >    (thread-emergency-exit [obj])
> > >
> > > in such a way that in the primordial thread we have:
> > >
> > >    (exit [obj]) = (thread-exit [obj])
> > >    (emergency-exit [obj]) = (thread-emergency-exit [obj])
> > >
> > > The thread-emergency-exit procedure would “Terminate the thread without running any outstanding dynamic-wind after procedures” (same wording as “emergency-exit” but “program” is replaced by “thread”).  The optional “obj” parameter would be the thread’s result, accessible with (thread-join! <thread>).
> >
> > If anything, <obj> should be the "reason" of the exception that is
> > raised by joining an abnormally terminated thread, I think.
> >
> > If we have something like thread-interrupt!, I don't think we need
> > something like "emergency-exit".  It is enough to throw an exception
> > (that would finally be handled by the initial exception handler).
> >
> > >
> > > A thread-terminate! procedure would not be needed because a thread could terminate a target thread with either:
> > >
> > >    (thread-interrupt! <target-thread> thread-exit)
> > >    (thread-interrupt! <target-thread> thread-emergency-exit)
> > >
> > > and the first form would be preferred to let the thread do any required cleanup.  It would also be possible to do (thread-interrupt! <target-thread> thread-exit), then (thread-join! <target-thread> <timeout>), followed by (thread-interrupt! <target-thread> thread-emergency-exit) if the timeout was reached, in case the target thread is wedged.
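> > >
> > > A sketch of that last pattern (procedure names as proposed above; thread-join!'s timeout and timeout-val arguments as in SRFI 18):
> > >
> > >     ;; Sketch only: graceful termination first, forceful only if
> > >     ;; the target is still running after TIMEOUT seconds.
> > >     (define (terminate-thread! target timeout)
> > >       (thread-interrupt! target thread-exit)
> > >       (when (eq? (thread-join! target timeout 'timed-out) 'timed-out)
> > >         ;; wedged: skip the outstanding dynamic-wind AFTER thunks
> > >         (thread-interrupt! target thread-emergency-exit)
> > >         (thread-join! target)))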
> > >
> > > Marc
> > >
> > > >
> > > > [...]
> > > >
> > > >>>
> > > >>> Does Gambit handle signals through thread-int!, e.g. as in the following?
> > > >>>
> > > >>> (##thread-int! <thread> (lambda () (raise-continuable <signal-condition>)))
> > > >>>
> > > >>> or
> > > >>>
> > > >>> (##thread-int! <thread> (lambda () (signal-interrupt <info>)))
> > > >>>
> > > >>> where
> > > >>>
> > > >>> (define (signal-interrupt info)
> > > >>>   ((current-signal-interrupt-handler) info))
> > > >>
> > > >> It depends what you mean by “a signal”…  If you mean something related to POSIX signals then no it is not implemented that way because POSIX signals are executed asynchronously (with respect to the Gambit virtual machine) so they could happen in the middle of some VM operation that should not be interrupted because the VM state is temporarily inconsistent.  So instead a POSIX signal will register the signal in a bit set, and then raise an “interrupt flag” that is checked regularly at “safe points” and the handler checks the bit set.  This is a very low overhead polling mechanism that piggybacks on the stack overflow detection logic.  See my paper:
> > > >
> > > > I meant a "VM signal", which could be triggered by a POSIX signal. I
> > > > was interested in how the high-level interface/high-level semantics
> > > > worked.  It seemed to me that thread-interrupt! is a sufficiently
> > > > general primitive.
> > > >
> > > >>   Polling Efficiently on Stock Hardware, FPCA93 (http://www.iro.umontreal.ca/~feeley/papers/FeeleyFPCA93.pdf)
> > > >
> > > > I have been experimenting a bit with virtual machines.  There, I
> > > > usually use hardware detection for a stack overflow.  The overflow
> > > > handler expands the stack but then sets an interrupt flag.  The actual
> > > > code then just has to poll the interrupt flag (some volatile
> > > > sig_atomic_t).
> > > >
> > > >> Fun fact: that paper has achieved a certain notoriety because it is one of the few references in the book “The Java Language Specification” in the section 11.3.2 “Handling Asynchronous Exceptions”.
> > > >
> > > > Cool!
> > > >
> > > > Marc
> > > >
> > > > [...]
> > >
> > >