On Sat, Jun 6, 2020 at 12:59 PM Marc Nieper-Wißkirchen <xxxxxx@nieper-wisskirchen.de> wrote:

> Either-swap! would just have to change a tag in the internal record

The sample implementation uses different record types for Left and Right rather than a tag field.  This saves a bit of storage.
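For readers following along, the two-record-type approach might be sketched like this (my sketch, not the actual sample implementation; the constructor and accessor names are invented):

```scheme
;; Sketch only: Left and Right as two distinct record types, so no tag
;; field is stored and left?/right? are ordinary type predicates.
;; Names (raw-left, left-objs, etc.) are invented for illustration.
(define-record-type <left>
  (raw-left objs)        ; objs: list of payload values
  left?
  (objs left-objs))

(define-record-type <right>
  (raw-right objs)
  right?
  (objs right-objs))

(define (either? obj)
  (or (left? obj) (right? obj)))
```

With this representation, either-swap has to allocate a fresh record of the other type rather than flip a tag in place.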
> This is preferable because no heap allocation would occur

I suspect this will be a rare operation (neither Haskell nor Scala has such a thing), and I see little point in optimizing it.

> This will be a dangling reference forever, won't it, because Scheme's
> model is not a closed-world model? Future SRFIs can always bring in
> new types.

Yes, they can.  But the key issue here is what  the standard provides for.    When R7RS-large is complete, we will then have a fixed list of disjoint types analogous to the list in R7RS-small section 3.2.  (Not all the SRFIs introduce such types, of course; generators are procedures and list-queues are (improper) lists, for example.)  We keep the line "and all predicates created by define-record-type", and everything is resolved.  Until then, I think "disjoint" suffices.

> This is not specific to SRFI 189, but we should really think of
> something that makes sense logically. The approach I used in SRFI 146
> was to reduce the question to the behavior of `define-record-type'.

That's reasonable too.
> So what is really going on is that Scheme "types" are dynamic. They
> are values created by `define-record-type' (apart from the few
> predefined ones), but these values are not first-class. So the
> question of whether types are disjoint boils down to the question of
> whether locations are disjoint, as each "type value" conceptually
> occupies a location in the store. This location is allocated through
> the evaluation of `define-record-type'.

That strikes me as unnecessarily complex.  The types string and number are disjoint because for any object on which string? answers #t, number? answers #f, and vice versa.  This is not an operational definition, but it is a definition.
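Concretely, on this definition the disjointness of a freshly defined record type from string and number is just a fact about the predicates (hypothetical &lt;point&gt; type for illustration):

```scheme
;; Hypothetical record type, purely for illustration.
(define-record-type <point>
  (make-point x y)
  point?
  (x point-x)
  (y point-y))

(define p (make-point 1 2))
(point? p)   ; => #t
(string? p)  ; => #f, so <point> and string are disjoint
(number? p)  ; => #f, and likewise <point> and number
```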

> Thanks. Now that I see it, I see that the restriction to two
> arguments isn't necessary either. (Numeric `=' and `<' allow more
> than two arguments, after all.) It is enough that the supplied
> `equal' relation takes as many arguments as there are Maybes in the
> call to `maybe=' (or Eithers in the call to `either=').

True.  I'm adding that to the SRFI.
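One way the variadic version might be sketched, assuming SRFI 189's just?, nothing?, and maybe-&gt;list are available and reducing the n-ary comparison to pairwise calls of `equal' (a sketch under those assumptions, not the SRFI's reference implementation; the names maybe=2 and my-maybe= are invented):

```scheme
;; Pairwise comparison of two Maybes; payloads compared with `equal'.
(define (maybe=2 equal m1 m2)
  (cond ((and (nothing? m1) (nothing? m2)) #t)
        ((and (just? m1) (just? m2))
         (apply equal (append (maybe->list m1) (maybe->list m2))))
        (else #f)))

;; Variadic version: every Maybe must be equal to the next one.
(define (my-maybe= equal m . ms)
  (let loop ((m m) (ms ms))
    (or (null? ms)
        (and (maybe=2 equal m (car ms))
             (loop (car ms) (cdr ms))))))
```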

> But there is still no definition of `traversable' in the latest
> draft, if I am not mistaken.

I've rewritten it (using the term "mappable") but it's not easy to be both terse and clear.
>> I think it is more important that ->list functions always return a list, even though the distinction between Just of no values and Nothing (and likewise for Either) is lost.  Conversion functions are often lossy.

> Can you explain why it is important that they always return a list?

Because that's what "conversion to foo" procedures are all about: you take something and return a corresponding foo.  The string->number procedure is an exception because there simply is no interpretation of "@#$%" as a number.  If Maybe had already existed, this would have been an excellent place to use it.
> The current conversion function is not only lossy, it loses the
> essentials (namely the distinction between Just and Nothing).

Only in the highly specialized case of Just of no values. I agree that there are cases when you want this, corresponding to functions that accept or return no values, but most of the time it won't matter at all.
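If I have the draft's semantics right, the collision is exactly this one case:

```scheme
(maybe->list (just 1 2))  ; => (1 2)
(maybe->list (nothing))   ; => ()
(maybe->list (just))      ; => (), indistinguishable from Nothing
```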
> Again the essentials are lost. Restrict the latest versions to
> Maybes and Eithers with payloads consisting of just one object.

If you are really concerned, a restriction for these functions that Just/Left/Right of no values is an error would be reasonable.  But I don't think it's necessary.
> I have not been talking about the values that would end up wrapped
> in a Right [...] but about the values that would end up in the Left
> in case `producer' returns no values. In all other procedures in
> this SRFI, we are (now) allowing more than one `obj' argument. We
> should do so here as well, for consistency and regularity.

Allowing multiple objs in values->either now.

Note however that (lisp-)values->* and its converses are not meant to be fully general.  The idea is to translate between particular protocols *that are actually in use* and the fully general Maybe/Either objects.  In the case of values->*, the protocol is one in which a procedure normally returns a (usually fixed) number of values on success.  Suppose three values are normally returned.  Then one approach to reporting failure is to return something like #f #f #f.  But another is to simply return no values.  Obviously this protocol will never be used by a procedure that might successfully return zero values.
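For instance, with a hypothetical lookup procedure that uses this protocol:

```scheme
;; Hypothetical procedure: returns two values on success, no values on
;; failure.  It never *successfully* returns zero values, as noted above.
(define (lookup-user id)
  (if (= id 1)
      (values "alice" 42)
      (values)))

(values->either (lambda () (lookup-user 1)) 'not-found)
;; => Right of "alice" and 42
(values->either (lambda () (lookup-user 0)) 'not-found)
;; => Left of not-found
```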

Similarly, the lisp-values->* protocol is about returning a single value plus a validity indication.  If the procedure is successful, you get <value> #t, otherwise you get #f #f.  Again, this protocol will not be used by a procedure that naturally returns other than one value on success.
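Whatever the procedure ends up being named, the conversion for this protocol amounts to something like the following sketch (two-values-&gt;maybe* and safe-head are invented names for illustration):

```scheme
;; Convert the <value> #t / #f #f protocol to a Maybe.
(define (two-values->maybe* producer)
  (call-with-values producer
    (lambda (obj success?)
      (if success? (just obj) (nothing)))))

;; Hypothetical procedure using the protocol.
(define (safe-head lst)
  (if (pair? lst)
      (values (car lst) #t)
      (values #f #f)))

(two-values->maybe* (lambda () (safe-head '(1 2))))  ; => Just of 1
(two-values->maybe* (lambda () (safe-head '())))     ; => Nothing
```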

I am adding brief explanations of these things to the next draft, and will rename the procedures to use the terms list, truth, generator, values, and two-values (in SRFI order).
> (define (maybe-unfold stop? mapper successor . seed*)
>   (if (apply stop? seed*) (nothing)
>       (let* ((res (call-with-values (lambda () (apply mapper seed*)) just)))
>         (assume (call-with-values (lambda () (apply successor seed*)) stop?)

Multiple seeds are already added.

But what is the point of the call to assume?  The purpose of `successor` is to generate the next seed (or batch of seeds), but what is the point of doing so if `mapper` will never be invoked on them?  It might make some sense to call `assert` to check for correctness, but `assume` merely lets the compiler assume that stop? returns true, and I don't see what optimization that assumption would enable.  Unless I do not understand.
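For concreteness, here is how the truncated fragment might be completed (my reading of the intended semantics, not necessarily Marc's; the assume call is exactly the part in question):

```scheme
;; A Maybe holds at most one "element", so the unfold must terminate
;; after a single step; the check verifies that the seeds produced by
;; `successor' would indeed stop the unfold.
(define (maybe-unfold stop? mapper successor . seed*)
  (if (apply stop? seed*)
      (nothing)
      (let ((res (call-with-values (lambda () (apply mapper seed*))
                                   just)))
        (assume (call-with-values (lambda () (apply successor seed*))
                                  stop?)
                "maybe-unfold: successor seeds do not satisfy stop?")
        res)))
```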

>> (19) I still think that it is an error that the error MUST be
>> signaled. I still propose to change the wording so that an error
>> SHOULD be signaled ("encouraged" in the RNRS terminology), as this
>> would not preclude Schemes offering a fast unsafe execution mode.

My counterproposal is simply to require the failure continuation to be provided.  What do you think of that? 
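That would make the Nothing case explicit at every use site; assuming maybe-ref's failure argument becomes mandatory, usage would look like:

```scheme
(maybe-ref (just 5) (lambda () 'none))       ; => 5
(maybe-ref (nothing) (lambda () 'none))      ; => none
(maybe-ref (just 5) (lambda () 'none) list)  ; => (5)
```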
> If you have code that consumes a Maybe, you can use `maybe-if' to
> turn different data paths (encoded by Maybe) into different code
> paths. But when you also need to know the payload, you would use
> `maybe-case':
>
> (maybe-case maybe
>   ((just x)
>    ;; maybe is a Just and `x' is bound to the payload, which must be
>    ;; a single object in this case
>    ...)
>   ((nothing)
>    ;; maybe is a Nothing
>    ...))
Ah, I see.  This looks much more like cond than case.  I'll leave it out for now.