Re: How to classify Scheme implementations on Scheme.org Marc Feeley 14 Feb 2021 12:54 UTC

My original point was specifically about usage of the R6RS and R7RS labels.  The discussion can be, and was, extended to other RnRS standards, and even to the “Scheme” label itself when I asked “what is Scheme?”.

Indeed some people are content to define Scheme as “a Lisp that has a single namespace”… period.  And while this can distinguish all Scheme implementations from Common Lisp (and other Lisp) implementations, it is not what I personally feel to be the essence of what makes an implementation a “Scheme”.  Here is what I consider the “must-haves” to qualify as a Scheme implementation:

1) Lisp’s parenthesized prefix syntax (R0RS, 1975)
2) Single namespace for functions and variables (R0RS, 1975)
3) Lexical scoping (R0RS, 1975)
4) Proper tail calls (R0RS, 1975)
5) First class continuations (R2RS, 1985)
6) Facility to define macros (R5RS, 1998)

I list them in that order because that is the order in which these features were added by the designers of the language.  Features 1 to 4 are really fundamental, as they were included in the very first spec of the language and many Scheme programming idioms depend on them (such as looping via recursion, and using thunks and continuation-passing style).  First class continuations were added in the R2RS in 1985 (the R1RS in 1978 did have a “catch” form, but it was not required to be as powerful as the current call/cc).  Macros came to Scheme only in R5RS in 1998 (indeed the R3RS says “Scheme does not have any standard facility for defining new kinds of expressions.”, and macros were seen as an “Extension to Scheme” in R4RS).
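To make these points concrete, here is a small illustrative sketch (not from the original message, just portable Scheme written for this purpose): a procedure bound like any other variable in the single namespace, a loop written as a tail call, an escape captured with call-with-current-continuation, and a macro defined with syntax-rules.

  ;; Illustrative sketch of the features listed above.

  ;; 2) Single namespace and 3) lexical scoping: `add` is an ordinary
  ;;    variable bound to a procedure, looked up like any other variable.
  (define add (lambda (x y) (+ x y)))

  ;; 4) Proper tail calls: this loop runs in constant space because the
  ;;    recursive call to `loop` is in tail position.
  (define (count-down n)
    (let loop ((i n))
      (if (zero? i)
          'done
          (loop (- i 1)))))

  ;; 5) First class continuations: escape from the middle of a traversal.
  (define (first-even lst)
    (call-with-current-continuation
      (lambda (return)
        (for-each (lambda (x) (if (even? x) (return x))) lst)
        #f)))

  ;; 6) A macro facility: `swap!` is rewritten at expansion time.
  (define-syntax swap!
    (syntax-rules ()
      ((_ a b)
       (let ((tmp a))
         (set! a b)
         (set! b tmp)))))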

So the definition of the term “Scheme” clearly has evolved over time, but a basic requirement is to have Lisp syntax, a single namespace, lexical scoping and proper tail calls.  I think most people consider that first class continuations and a facility to define macros are also defining characteristics of Scheme.

Only when this first “is it a Scheme?” bar is passed does it become relevant to measure conformance to a specific standard (R5RS, R6RS, R7RS).  These standards exist so that programmers who have followed the spec of a specific RnRS have a strong expectation that their program will work on another implementation of Scheme that conforms to the same RnRS.  Yes, this is not 100% guaranteed in practice due to bugs and system-specific limitations, but the intention is still for the RnRS labels to be highly correlated with adherence to the corresponding spec.

I don’t think that assigning a single numeric measure of conformance is particularly informative.  Assigning weights to the required features is highly subjective.  A grid of checkmarks for various features would be more useful for users.  The checkmarks could be 3-valued: feature not supported, feature fully supported, and feature works “mostly” but not fully due to bugs and limitations.

This conformance grid could be extended to SRFIs and even to loosely defined features (supports Unicode? has an interpreter? has a compiler? has a debugger? produces standalone executables? has an FFI? has serialisable closures and continuations? has few external dependencies? etc).  However this should be in a separate section from RnRS conformance, because these are “bonuses” to an RnRS implementation (SRFIs after all are “Requests For Implementation” and they are sometimes incompatible with builtin features of the implementation).
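As a hypothetical sketch (the implementation name and all entries below are invented for illustration), such a 3-valued grid could be represented in Scheme itself as association lists, with the RnRS conformance grid kept in a separate table from the bonus-feature grid:

  ;; Hypothetical data layout for a 3-valued conformance grid.
  ;; Cell values: full = fully supported, partial = works mostly but not
  ;; fully (bugs or limitations), no = not supported.
  (define rnrs-conformance
    '((example-scheme (r5rs . full) (r6rs . no) (r7rs-small . partial))))

  ;; Bonus features kept in a separate grid, as suggested above.
  (define bonus-features
    '((example-scheme (srfi-1 . full) (unicode . partial) (ffi . full))))

  ;; Look up one cell, e.g.
  ;;   (grid-ref rnrs-conformance 'example-scheme 'r7rs-small)  =>  partial
  (define (grid-ref grid impl feature)
    (let ((row (assq impl grid)))
      (and row
           (let ((cell (assq feature (cdr row))))
             (and cell (cdr cell))))))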

For the record, I am all for inclusiveness and think it is good to put as many implementations on the site as possible.  What I want to avoid is the misuse of the RnRS labels.  The RnRS labels should have a meaning, especially on any web site that aims to promote “Scheme”.

Marc

> On Feb 14, 2021, at 4:34 AM, Lassi Kortela <xxxxxx@lassi.io> wrote:
>
>> What are your thoughts about this?
>
> Coverage is a good start but even with the same feature set there's a big difference between an advanced native compiler like Gambit and a slow interpreter like Chibi.
>
> There are also important features like threads and portability that fall outside the standard. It's hard to write useful programs using only RnRS features, so people use SRFIs and other libraries a lot.
>
> I would guess that it's useful to divide implementations into two or three groups. A more detailed ranking is likely to change as implementations add features, making it hard to keep up to date, and may lead to implementers competing and micro-optimizing for specific features to get a better ranking. If there are only a few clear criteria for dividing into groups, it's also easier for newcomers to understand. The presence of a native code compiler is one pretty clear dividing line, as is multi-threading.
>
> The more detailed scores you suggest can also be useful for experienced schemers. https://ecraven.github.io/r7rs-coverage/ is probably the best current example of such a table (source at https://github.com/ecraven/r7rs-coverage). Happily, RnRS conformance testing can be automated to a large degree.
>