Re: New draft (#2) of SRFI 226: Control Features Marc Nieper-Wißkirchen 29 Oct 2022 13:49 UTC

On Sat, 29 Oct 2022 at 14:49, Marc Feeley
<xxxxxx@iro.umontreal.ca> wrote:
>
> > On Oct 29, 2022, at 4:13 AM, Marc Nieper-Wißkirchen <xxxxxx@gmail.com> wrote:
> >
> > On Sat, 29 Oct 2022 at 04:54, Marc Feeley
> > <xxxxxx@iro.umontreal.ca> wrote:
> >>
> >>> On Oct 27, 2022, at 2:01 AM, Marc Nieper-Wißkirchen <xxxxxx@gmail.com> wrote:
> >>>
> >>> I am also still hoping for a reply from Marc in the discussion about
> >>> "weak threads".
> >>>
> >>> [...]
> >>
> >> Sorry for not getting back sooner.  I have continued my reading of the SRFI 226 spec.  Unfortunately my time is still constrained and the spec is huge so my comments are bound to be more superficial than I’d like.  Here's what stands out.
> >
> > There's no need to apologize. I am very grateful to all that take the
> > time to read and think about the specification.
> >
> >> (call-in-continuation cont thunk) misses an opportunity of having the more general form (call-in-continuation cont proc arg1...) so that it can be called with a procedure and as many arguments as needed.  Instead of (call-in-continuation k (lambda () (values tmp ...))) you could write (call-in-continuation k values tmp ...).  See the definition of the continuation-graft form that you cite:
> >
> > I will generalize call-in-continuation in this respect.  Thank you for
> > the suggestion.
> >
> > [...]
> >
>
> Let me also suggest a name change (for a shorter name) and giving a name to the “return” operation:
>
>   (call-in k proc arg1...)
>   (return-to k val1...)        equivalent to (call-in k values val1...)
>
> These are the continuation-graft and continuation-return procedures of the “better API” paper, but with more palatable names.  It reads well to write:
>
>   (define (inverse lst)
>     (call/cc
>       (lambda (caller)  ;; caller is a continuation object
>         (map (lambda (x)
>                (if (= x 0)
>                    (return-to caller 'error)
>                    (/ 1 x)))
>              lst))))
>
> >> Note also that one of the main points of the "Better API" paper is to treat continuations as a specific type different from procedures so that the burden of the procedure representation can be avoided (conceptual and also run-time cost for creating the procedure), and also have other operations such as (continuation? obj), (continuation-length k), etc.  I view "continuations as procedures" to be a historical blunder that was motivated by CPS style.  If you have ever tried to explain how call/cc works to students you will probably understand what I'm talking about: "call/cc receives a procedure and calls this procedure with a procedure that represents the continuation".  Too many procedures for most students.  With SRFI 226 there's an opportunity to correct this by making (call-with-non-composable-continuation proc) call proc with a continuation object that is separate from procedures.  It changes very little to the API, except that those continuations have to be called with (call-in-continuation k values ...) or some new more specific procedure (return-to-continuation k ...).
> >
> > From a theoretical point of view, I agree with you, and I also see the
> > point of teaching.  For historical reasons (call/cc), however, I would
> > like to leave the API as is. Given the presence of call/cc and
> > existing code, I feel that introducing a new, theoretically more
> > appealing approach while the historical one is still there leads to
> > its own share of problems and confusion.
> >
> > If you want, you can view a continuation (as created by call/cc) as an
> > element of a new abstract datatype, which, however, happens to be
> > callable.  To enforce this point of view, SRFI 226 has introduced the
> > procedure `continuation?`, which checks for whether an object is a
> > continuation.
>
> As I say this conflation of the procedure and continuation concept hurts
>
> 1) understanding: not just explaining to students, but also this exceptional thing that in (+ 1 (k …)) the addition is never executed yet (k …) is a procedure call (I know it would still be a possibility to have a procedure behave like this, but typically for a continuation there would be a visual marker that warns of something exceptional happening: (+ 1 (return-to k …)) ).

We could mitigate this by no longer calling our continuations just "k".
If we write "return-to-k" instead, we get more or less the equivalent
of the latter expression.
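
For illustration, a trivial sketch of that convention (only the
variable name changes):

  (call/cc
   (lambda (return-to-k)
     (+ 1 (return-to-k 2))))   ; reads almost like (return-to k 2)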

> 2) performance: the underlying continuation structure needs to be wrapped in a closure that needs to be created (more memory allocation), and a procedure call is needed to restore a continuation (whereas call-in and return-to could be inlined by the compiler and optimally pass the arguments to it).

The underlying continuation structure can itself be the closure, so no
extra memory allocation is necessary.  Inlining would also remain the
same: either the compiler can infer the type of the continuation
expression, in which case inlining can happen, or it cannot, in which
case a dynamic type check has to be made in both implementation
models.

> Certainly the “continuation as procedure” API for call/cc must stay the same for compatibility, but all the new continuation operations in SRFI 226 could be using a separate type.
>
> To bridge the two points of view, instead of (continuation? obj) you could have (procedure->continuation proc) that extracts the continuation object of proc if it is a “continuation as procedure” created by call/cc and returns #f otherwise.  Moreover, all the continuation procedures defined by the SRFI could accept both a continuation object and a “continuation as procedure”.  In other words it would not be a requirement to represent continuations as procedures, except for call/cc for historical reasons.
>
> Alternatively, a variant of call/cc, perhaps called with-current-continuation, would use continuation objects but would be otherwise equivalent.  call/cc could then be defined as:
>
>   (define (call/cc receiver)
>     (with-current-continuation
>      (lambda (cont)  ;; cont is a continuation object
>        (receiver (lambda vals (apply return-to cont vals))))))
>
> This may be the last chance to get the, as you say, “theoretically more appealing approach” while preserving the API of call/cc for historical reasons.

I would like to hear from more Schemers about this idea because it is
a radical change from how continuation objects have been presented so
far.

It should also be noted that composable continuations behave like
procedures; in particular, they return.  Something like "return-to"
for composable continuations wouldn't make sense, so we would see a
split between non-composable and composable continuations.  Maybe this
is good.
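
For illustration, a rough sketch with the SRFI's
call-with-continuation-prompt and call-with-composable-continuation
(default prompt tag):

  (call-with-continuation-prompt
   (lambda ()
     (+ 1 (call-with-composable-continuation
           (lambda (k)
             ;; k behaves like an ordinary procedure: (k 2) runs the
             ;; captured (+ 1 _) and returns 3, so evaluation simply
             ;; continues with (* 10 3).
             (* 10 (k 2)))))))
  ;; => 31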

Another approach would be to leave everything as is (including the
continuation? predicate) but to add "return-to" and
"continuation->procedure" and abbreviate call-in-continuation to
"call-in".  This way, the old interface would still be supported while
there would be a canonical way to write code following the principles
from your paper.
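
A rough sketch of this approach in terms of the current draft's
(call-in-continuation k thunk), i.e. before the generalization agreed
on above; the names are, of course, still up for discussion:

  (define (call-in k proc . args)
    (call-in-continuation k (lambda () (apply proc args))))

  (define (return-to k . vals)
    (call-in-continuation k (lambda () (apply values vals))))

  (define (continuation->procedure k)
    (lambda vals (apply return-to k vals)))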

> >> Concerning (thread-terminate! thread), the part "the current thread waits until the termination of thread has occurred" is not ideal.  This was also specified by SRFI 18, and it is OK in a single processor system (because the scheduler is centralized), but I now think it causes issues in a multiprocessor system because it is impossible to predict how long the wait might be.  It is better to have an asynchronous termination, and to use (thread-join! thread), possibly with timeout, when it is necessary to ensure the thread has terminated before proceeding.
> >
> > I see your point. And adding an extra timeout parameter to
> > thread-terminate! would make the interface more complicated. The only
> > problem I see is that this change would introduce a silent
> > incompatibility with SRFI 18.  Thus, it may be better to drop the name
> > thread-terminate! and replace it with a different name, like
> > thread-kill!.
>
> This seems like a minor incompatibility when compared to the other incompatibilities of SRFI 226 and 18.

Are there other incompatibilities that cannot be detected easily
(e.g. through type errors)?

> >
> >> An alternative to thread-terminate! that is similarly powerful and more elegant is to have an asynchronous (thread-interrupt! thread thunk) procedure that causes thunk to be called at a safe point at the current point of execution of the target thread.  The thunk could then call raise or abort-current-continuation to terminate the thread “from within”, allowing the target thread to do some cleanup.
> >
> > I don't yet see how this is equally powerful.  What I have in mind is
> > an implementation of a Scheme REPL where the user starts a program (in
> > some thread) that goes astray and wishes to abnormally terminate it.
> > This must work with no cooperation from the program thread.  Raising
> > an exception or aborting a continuation doesn't necessarily do it.
> >
> > Also, thread-interrupt! breaches abstraction barriers. Given the code
> > (begin foo1 foo2) and assuming that evaluating foo1 does not raise any
> > exception (nor invokes a previously captured continuation), there is a
> > guarantee that foo2 will always be evaluated once after foo1 (bar
> > abnormal termination).  Now, using thread-interrupt!, one could
> > capture a continuation between evaluating foo1 and foo2 and use it
> > to break the invariant.
> >
>
> I said “similarly” powerful, and you are right that they are not equally powerful in all cases.  There's some cooperation needed from the target thread but it does mean termination can be done more cleanly in many cases.  A brutal termination (equivalent of thread-terminate!) could be obtained by adding a (thread-suicide!).  So this could be defined:
>
>   (define (thread-terminate! thread)
>     (thread-interrupt! thread thread-suicide!))

In principle, a rogue thread could thread-interrupt! itself all the
time: (let f () (thread-interrupt! (current-thread) f)).  Depending on
where the safe points are, the thread might then never be forced to
kill itself.

> Your point of view that thread-interrupt! breaches abstraction barriers only holds if you view code as uninterruptible to start with.  I view interrupts as a fact of life in a mature concurrent system, so it is best to plan their existence and take advantage of the power they offer.  Ask yourself how something like “user interrupt” (ctrl-c), “interval timer interrupt” and “about to lose power interrupt” might be implemented using Scheme code.  I think the best solution is to view them as some execution of (thread-interrupt! thread thunk), where thunk is the appropriate action.  If you can’t express it in Scheme it is a weakness in the design.

Few Schemes can express POSIX signal handlers or GC handlers in Scheme
code.  I don't see this as a conceptual weakness as long as the
implementation handles signals and GC transparently to the evaluator.
As for how to handle such things in Scheme, a dedicated thread could
listen for these interrupts; there does not seem to be a need to
handle them in worker threads.
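
For instance (a very rough sketch; os-signal-dequeue! stands for
whatever blocking operation the implementation provides for delivering
such events and is purely hypothetical):

  (define (start-signal-listener handle-signal)
    (thread-start!
     (make-thread
      (lambda ()
        (let loop ()
          ;; Block until the next event arrives, then dispatch it.
          (handle-signal (os-signal-dequeue!))
          (loop))))))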

I have to think about it more, though.  Some random thoughts for now:
what about something like (with-signal-handler HANDLER THUNK)?  If no
(Scheme version of a) signal handler is installed, thread-interrupt!
(or, maybe better, thread-signal!) would raise an exception in the
target thread's continuation.  Otherwise, the signal handler would be
called with one argument, the signal.

E.g.,

  (with-signal-handler
   (lambda (signal)
     (assert (procedure? signal))
     (signal))
   THUNK)

for use with the original thread-interrupt!.
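
To make this a bit more concrete, here is a very rough sketch of how
with-signal-handler and thread-signal! could be layered on top of
thread-interrupt!; tracking the handler in an ordinary parameter
object is just for illustration:

  (define current-signal-handler (make-parameter #f))

  (define (with-signal-handler handler thunk)
    (parameterize ((current-signal-handler handler))
      (thunk)))

  (define (thread-signal! thread signal)
    (thread-interrupt!
     thread
     (lambda ()
       (let ((handler (current-signal-handler)))
         (if handler
             (handler signal)
             ;; No handler installed: raise in the interrupted
             ;; thread's continuation.
             (raise signal))))))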

> The situation you mention (capturing a continuation of a running thread) is actually something that is very useful for debugging a thread from “outside” that thread.  Here’s a simple example:
>
>   $ gsi
>   Gambit v4.9.4
>
>   > (define (run) (let loop ((i 0)) (loop (+ i 1))))
>   > (define t (thread-start! (make-thread run)))
>   > (define k (thread-interrupt! t (lambda () (continuation-capture identity))))
>   > (display-continuation-backtrace k (current-output-port) #t)
>   0  +
>   1  loop                    (stdin)@1:39            (+ i 1)
>           i = 1650146841
>           loop = (lambda (i) (loop (+ i 1)))
>   #f
>
> As you see it is possible to start a thread and while it is running get a snapshot of its execution by capturing its continuation with a thread-interrupt + continuation-capture.  Then the state of that thread can be observed.
>
> It is true that thread-interrupt! is a powerful feature to have and like any powerful feature it can be misused.  The power it offers is why I chose to support it in Gambit.  This feature is heavily used in production to debug complex multithreaded Scheme programs with hundreds of threads and hundreds of thousands of lines of code.

I don't mean that such a feature, together with other powerful
debugging features, should not be provided by a Scheme implementation.
E.g., a hypothetical procedure "record-field-set!!" that mutates a
record field even if it was marked immutable can be very helpful for
debugging purposes, but offering such a procedure to portable code
counteracts offering records with immutable fields.  (The guarantee
that would be breached is that portable code receiving a record with
an immutable field cannot mutate it.)

> >> Concerning the addition of (mutex-owner mutex) as a companion to (mutex-state mutex), this has introduced a race condition.  If (eq? (mutex-state mutex) 'owned) is true then extracting the owner thread with (mutex-owner mutex) may return #f.  The API of the SRFI 18 (mutex-state mutex) was designed to not have this race condition.
> >
> > Yep. I somehow had in mind to query mutex-owner and mutex-state the
> > other way around, but this actually has the same problem.  Working
> > around this problem would need unpleasant looping.
> >
> > I will revert it to the SRFI 18 API or something equivalent.
> >
> >> The mutex-unlock! procedure's parameter list does not have a timeout parameter, but the description talks about that parameter.  Timeouts are important on all blocking operations.
> >
> > Indeed.  This oversight has already been reported by Shiro and fixed
> > in my personal repo.
> >
> >> The section on thread locals is rather vague and unconvincing.  The thread-specific field has been removed because "If these are needed, weak hash tables could be used instead." but the same can be said for thread locals which are a thin wrapper around weak hash tables indexed by thread.  The point of thread-specific was to have constant time (with small constant) access to thread specific data.
> >
> > Thread locals are natively supported on, for example, POSIX or C11
> > platforms, thus it makes sense for efficiency reasons to provide them
> > as a primitive.  The thread-specific field of SRFI 18 has the problem
> > that it really needs another high-level API to administer it.  On the
> > other hand, weak hash tables compose well when several libraries in a
> > program need thread-specific fields.
>
> I assume you mean the pthread_key_create function.  I don’t view it as efficient (some benchmarking would be interesting because the last time I looked into this was a few years ago).  For C11 what are you referring to?  If you mean the thread_local qualifier it is a static mechanism and it won’t be directly usable to implement make-thread-local which is a dynamic construct.

C11 has tss_create (https://en.cppreference.com/w/c/thread/tss_create).

> > There is another difference between thread locals and the
> > thread-specific field: A thread local is really local to the current
> > thread and a thread can only query its own copy of the value, while a
> > thread-specific field can be queried for any thread.  Of course, a
> > high-level API can provide the respective abstraction.  But even then,
> > a program could break this high-level API by accessing or mutating the
> > thread-specific field through direct access.
> >
>
> This is a question of point of view.  I prefer an open world because a closed world hinders debugging.  There is a difference between “promoting good practice + allowing people to choose” and “forcing people to use (what the designer thinks is) good practice”.  I’m all for the first and against the latter.

See my remarks above about debugging.

>
> > Note that SRFI 226 does not forbid the "specific" fields; an
> > implementation is free to provide them as an extension.  The usual
> > data types do not have "specific" fields (e.g. there is no
> > hash-table-specific), so there are no fundamental reasons why mutexes,
> > etc., should have specific fields.  One should use wrapper objects
> > instead.
>
> I understand for mutexes and condition variables, but threads are special because (current-thread) returns a reference to the current thread and it is not possible to get to a wrapper that way.
>
> >
> > The latter is a bit different for thread objects because they are
> > returned by procedures in the SRFI 18/226 API, and the API won't
> > return wrapper objects.  Still, a single specific field can only be
> > application specific, not library-specific.  Thus weak hash tables are
> > the better solution.  If you can think of an even better approach, I
> > would like to hear about it.
>
> Weak hash tables are a neat trick, but it only works in a non-distributed model (everything lives in a single process and has access to these hash tables).
>
> In a distributed setting you might want to migrate a running task to another node to continue running it there.  Attaching properties to objects (threads or otherwise) using weak hash tables does not scale to a distributed setting because the hash tables would need to be global to the whole system.  If threads have a thread-specific field it is possible to just copy that information when the thread migrates to another node.  I suggest you read some of my other papers on the subject to know more:
>
> https://www-labs.iro.umontreal.ca/~feeley/papers/GermainFeeleyMonnierSW06.pdf
> https://www-labs.iro.umontreal.ca/~feeley/papers/FeeleyDLS15.pdf
>
> One alternative to thread-local storage is to use subtyping of the thread type.  In Gambit this is done with the define-type-of-thread form.  Here’s a quick example:
>
>   $ gsi
>   Gambit v4.9.4
>
>   > (define-type-of-thread mythread (counter))  ;; add a counter field to threads
>   > (define t (make-mythread 0))  ;; make a thread with counter = 0
>   > (define (run) (let loop () (mythread-counter-set! (current-thread) (+ 1 (mythread-counter (current-thread)))) (loop)))
>   > (thread-start! (thread-init! t run))
>   #<thread #2>
>   > (mythread-counter t)
>   82460212

I like the idea of inheriting from the thread type.  This generalizes
the thread-specific field nicely.  I would retain thread locals as
well, though, because they remain a helpful API: the "parent" thread
determines the shape of the thread object for the "child" thread, so
the "child" thread cannot dynamically add information to itself.

>
> >
> >> I’ll have to address weak threads at some other time… (and also the initial continuations section which I have to read carefully).
> >
> > I am looking forward to reading your comments.
> >
> > Thanks again,
> >
> > the other Marc
>
> the original Marc
>

the Marc that came after