JavaScript interpreters Jakub T. Jankiewicz (12 Feb 2021 08:25 UTC)
Re: JavaScript interpreters Marc Feeley (12 Feb 2021 12:31 UTC)
Re: JavaScript interpreters Jakub T. Jankiewicz (12 Feb 2021 14:07 UTC)
Re: JavaScript interpreters Marc Feeley (12 Feb 2021 14:54 UTC)
Re: JavaScript interpreters Jakub T. Jankiewicz (12 Feb 2021 17:38 UTC)
How to classify Scheme implementations on Scheme.org Lassi Kortela (14 Feb 2021 07:52 UTC)
Re: How to classify Scheme implementations on Scheme.org Jakub T. Jankiewicz (14 Feb 2021 09:12 UTC)
Re: How to classify Scheme implementations on Scheme.org Lassi Kortela (14 Feb 2021 09:34 UTC)
Re: How to classify Scheme implementations on Scheme.org Marc Feeley (14 Feb 2021 12:54 UTC)
Re: How to classify Scheme implementations on Scheme.org Arthur A. Gleckler (14 Feb 2021 15:45 UTC)
Re: How to classify Scheme implementations on Scheme.org Jakub T. Jankiewicz (14 Feb 2021 16:23 UTC)
Re: How to classify Scheme implementations on Scheme.org Marc Feeley (14 Feb 2021 17:13 UTC)
Re: How to classify Scheme implementations on Scheme.org Lassi Kortela (15 Feb 2021 22:11 UTC)
Re: How to classify Scheme implementations on Scheme.org Lassi Kortela (15 Feb 2021 22:22 UTC)
Re: How to classify Scheme implementations on Scheme.org Marc Feeley (15 Feb 2021 22:36 UTC)
Re: How to classify Scheme implementations on Scheme.org Lassi Kortela (15 Feb 2021 22:40 UTC)
Re: How to classify Scheme implementations on Scheme.org Marc Feeley (15 Feb 2021 22:31 UTC)

Re: How to classify Scheme implementations on Scheme.org Lassi Kortela 14 Feb 2021 09:34 UTC

> What are your thoughts about this?

Coverage is a good start, but even with the same feature set there's a
big difference between an advanced native-code compiler like Gambit and
a slow interpreter like Chibi.

There are also important features, like threads and portability, that
fall outside the standard. It's hard to write useful programs using only
RnRS features, so people rely heavily on SRFIs and other libraries.

I would guess that it's useful to divide implementations into two or
three groups. A more detailed ranking is likely to change as
implementations add features, making it hard to keep up to date; it may
also lead implementers to compete and micro-optimize for specific
features to improve their ranking. If there are only a few clear
criteria for dividing into groups, it's also easier for newcomers to
understand. The presence of a native-code compiler is one pretty clear
dividing line, as is multi-threading.

The more detailed scores you suggest can also be useful for experienced
Schemers. https://ecraven.github.io/r7rs-coverage/ is probably the best
current example of such a table (source at
https://github.com/ecraven/r7rs-coverage). Happily, RnRS conformance
testing can be automated to a large degree.
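
For example, a conformance check can be a single portable R7RS file
that compares each expression's result against its expected value; run
the same file under every implementation and compare the summaries.
This is only an illustrative sketch (the `check` helper and the test
cases here are made up, not taken from the r7rs-coverage suite):

    ;; Minimal automated conformance check, portable R7RS-small.
    (import (scheme base) (scheme write))

    (define passes 0)
    (define failures 0)

    ;; Run a thunk and compare its result against the expected value.
    (define (check name thunk expected)
      (let ((actual (thunk)))
        (if (equal? actual expected)
            (set! passes (+ passes 1))
            (begin
              (set! failures (+ failures 1))
              (display "FAIL ") (display name)
              (display ": expected ") (write expected)
              (display ", got ") (write actual)
              (newline)))))

    ;; A few illustrative R7RS-small checks:
    (check "char-upcase" (lambda () (char-upcase #\a)) #\A)
    (check "list-copy" (lambda () (list-copy '(1 2 3))) '(1 2 3))
    (check "vector-map" (lambda () (vector-map + #(1 2) #(10 20))) #(11 22))

    (display passes) (display " passed, ")
    (display failures) (display " failed") (newline)

A real suite additionally has to guard against tests that raise errors
or fail to terminate, which is where implementation differences make
full automation harder.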