Re: Proposal to add HTML class attributes to SRFIs to aid machine-parsing Ciprian Dorin Craciun 07 Mar 2019 22:35 UTC

On Fri, Mar 8, 2019 at 12:18 AM Lassi Kortela <> wrote:
> I'm afraid I still don't quite understand what is meant by reliable and
> comprehensive argument signatures.

A "reliable" and "comprehensive" signature is one that permits static
analysis of the code, similar to what Erlang's `dialyzer` does:

I really advise you to read through that document a little and you'll
get an idea of what I'm talking about.  (I know this isn't yet
achievable, but I think we should set a higher goal than just basic
argument information.)

> What would be a reliable source of this information?

A document specifically written to catch all the "intricacies" of the
signature, from allowed inputs to returned outputs.  (And I don't mean
just argument types, but a little bit more than this.  See below...)

> With more comprehensive signatures, do you mean things like this:
>      (make-vector
>       (type constructor)
>       (export scheme:base)
>       (signature
>        ((range-length-zero) -> vector-empty)
>        ((range-length-zero any) -> vector-empty)
>        ((range-length-not-zero) -> vector-not-empty)
>        ((range-length-not-zero any) -> vector-not-empty))
>       ...)
> Those seem like type constraints of some kind. Where do they come from?

They came from "nowhere"; I just created them, drawing on my
previous experience with Erlang.

And here you can see what I mean by "more" than just types.  For
example, if I give `make-vector` a length of zero, I always get an
empty vector.  (And if you look through my R7RS signatures you'll see
more examples like this, especially with regard to procedures that
handle numbers.)
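For instance, a hypothetical signature for a numeric procedure such as
`abs` might look like this, in the same style as the `make-vector`
example above (the `range-*` markers are invented for illustration and
are not part of any existing tool):

```scheme
;; Hypothetical signature sketch: the return range depends on the
;; input range, which is exactly the "more than just types"
;; information a plain extractor couldn't recover.
(abs
 (type procedure)
 (export scheme:base)
 (signature
  ((range-negative) -> range-positive)
  ((range-non-negative) -> range-non-negative)))
```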

> * You could copy from the text of the SRFI document and then verify by
> hand. The auto-extractor already does basically this same thing -- it's
> just as easy (if not easier) to hand-verify auto-extracted definitions
> as it is to verify hand-copied ones. And if we're really unlucky, the
> signature in the SRFI might have a misprint so the possibility of error
> is still non-zero.

I agree with your statement above.  However, no tool would be able to
extract the kind of signatures you've seen above.

> * You could run a Scheme implementation that implements the SRFI in
> question, then use the introspection facilities in that Scheme to get
> the information.

That is not enough, as any implementation has some "corner cases" or
special extensions that are not described in the SRFI.

(BTW, do you know of such an introspection tool?)

> I'm still not sure that the information would be
> reliable and comprehensive for our purposes -- there are often small
> implementation-specific variations in e.g. the exact data type used to
> implement something. And if we're unlucky, the SRFI implementation might
> be partial or divergent from the source document.


> You would still have
> to verify by hand.

Exactly.  That is why I say that creating this "signature" document
by hand is far easier than creating the complex markup, and then
verifying and augmenting the resulting S-expressions.

> The reason I'd really like to do that, apart from the simplicity of the
> tooling (just feed the static HTML files to a simple script), is that
> the tool would check that HTML and S-expression metadata stay in sync.
> So if we discover some error or omission in the S-expressions, we are
> "forced" to go back to the HTML and fix the situation at its source.
> Unfortunately my experience is that human-updated things that are not
> checked by computer simply do not stay in sync for any length of time :/

I am very skeptical about this...