SRFI-metadata-syncing SRFI? noosphere@xxxxxx (08 Nov 2020 21:50 UTC)
Re: SRFI-metadata-syncing SRFI? Vladimir Nikishkin (09 Nov 2020 01:00 UTC)
Re: SRFI-metadata-syncing SRFI? Lassi Kortela (09 Nov 2020 09:41 UTC)
Re: SRFI-metadata-syncing SRFI? noosphere@xxxxxx (09 Nov 2020 16:15 UTC)
Re: SRFI-metadata-syncing SRFI? Lassi Kortela (09 Nov 2020 16:36 UTC)
Re: SRFI-metadata-syncing SRFI? noosphere@xxxxxx (09 Nov 2020 20:35 UTC)
Re: SRFI-metadata-syncing SRFI? Lassi Kortela (09 Nov 2020 20:57 UTC)
Re: SRFI-metadata-syncing SRFI? Lassi Kortela (09 Nov 2020 21:05 UTC)
Re: SRFI-metadata-syncing SRFI? noosphere@xxxxxx (09 Nov 2020 23:41 UTC)
Re: SRFI-metadata-syncing SRFI? Lassi Kortela (10 Nov 2020 07:53 UTC)
Re: SRFI-metadata-syncing SRFI? noosphere@xxxxxx (09 Nov 2020 23:45 UTC)
Re: SRFI-metadata-syncing SRFI? noosphere@xxxxxx (09 Nov 2020 20:50 UTC)
Re: SRFI-metadata-syncing SRFI? Lassi Kortela (09 Nov 2020 21:12 UTC)
The funnel pattern Lassi Kortela (09 Nov 2020 21:30 UTC)

Re: SRFI-metadata-syncing SRFI? noosphere@xxxxxx 09 Nov 2020 16:15 UTC

On Mon 09 Nov 2020 11:41:03 AM +02, Lassi Kortela wrote:
>
> Every current Scheme implementation is maintained in a git repo. Hence
> the most natural way to implement your suggestion would be to put a
> metadata file in each repo under a standard filename. Since source
> releases (.tar.gz) are made from the repos, the metadata file would
> automatically ship as part of each source release too. This would let us
> easily find out the SRFI set for each release of each implementation.
>
> Much of the table we've gathered with Erkin is already auto-generated by
> scrapers that download a tar archive of the Scheme implementation's
> source code and grep for things in it. We currently have a tailor-made
> scraper for each implementation separately;

While workable, this seems less than ideal to me because scraping is
inherently fragile: the scraper needs manual intervention whenever the
structure of what's being scraped changes in some unforeseen way.

There's also the bandwidth wasted on repeatedly downloading tar files,
and the time spent uncompressing and searching through them, all for
what amounts to a relatively tiny bit of data.

Because there is no standard, the data you get from an arbitrary
Scheme's tar file is going to be unstructured, requiring yet more
custom rules to extract it.

Wouldn't it be so much simpler if every Scheme published the desired
data in the desired format, so that it could be consumed directly and
reliably, without anyone having to write custom code to deal with
unstructured data in random locations?
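
For instance (everything in this sketch is hypothetical and made up
for illustration; no such filename or fields have been standardized),
each implementation could keep one small S-expression file at a
standard path in its repo:

  ;; implementation-metadata.scm -- hypothetical filename and fields
  (implementation
    (name "example-scheme")
    (version "1.2.3")
    (srfi-implemented 0 1 6 9 23 39))

A consumer could then fetch this one small file, for example through
the git host's raw-file URL, instead of downloading and unpacking an
entire tar archive.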

> Following your suggestion we could eventually have one scraper that
> works for all implementations. A big simplification, as you say. Since
> Scheme is based on S-expressions, it would be most natural if the
> metadata file were an S-expression file.

Yes.  Exactly.
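
Consuming such a file would then be trivial in any Scheme.  A minimal
sketch in R7RS, assuming the hypothetical layout sketched above:

  (import (scheme base) (scheme file) (scheme read))

  ;; Read the hypothetical metadata file and pull out its SRFI list.
  (define (implementation-srfis filename)
    (let* ((form (with-input-from-file filename read))
           (entry (assq 'srfi-implemented (cdr form))))
      (if entry (cdr entry) '())))

  (implementation-srfis "implementation-metadata.scm")
  ;; => (0 1 6 9 23 39)

No grepping and no per-implementation parsing: `read' does all the
work.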

> One complication remains: Many implementations support extra SRFIs via
> third-party packages in addition to the built-in SRFIs. Chicken is an
> extreme example of this approach: almost all SRFIs are eggs. This
> problem would be most naturally solved by having a similar standard
> format for package metadata. This problem is also tractable. For
> example, the current Chicken egg and Snow-Fort.org metadata files look
> vaguely similar already. With the flexibility of S-expressions and
> `cond-expand`, I'm confident we could specify a standard package
> metadata format that works for everyone. If this were coupled with a
> standard package index format, an automated scraper could parse an
> implementation's package index and look for all packages that implement
> SRFIs.

Scanning through each package's metadata might be the most reliable way
to do this, but there is still the question of how that metadata is made
available.  To my knowledge there is currently no standard for how
packages are distributed or how their metadata is published; every
Scheme does it in its own way.  This is an opportunity for
standardization as well, with clear benefits to a metadata collection
project.
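
To make that concrete, a standardized package index might contain
entries along these lines (again with purely hypothetical field names,
loosely in the spirit of existing egg and Snow metadata):

  ;; Hypothetical entry in a standard package index:
  (package
    (name "vector-lib")
    (version "2.0")
    (license "mit")
    (srfi-implemented 133 160))

A scraper could then walk one such index per implementation and
collect every package whose metadata lists SRFIs, with no per-Scheme
custom code.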

> By adding the SRFI lists from the implementation metadata and the
> package metadata, a fully automated scraper could build a complete table.
>
> Would you (and others) be interested in collaborating on such a SRFI? If
> so, I have another related idea, about library lookup, that I'd like
> to try and merge with it :)
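
Combining the two sources should indeed be mechanical once both kinds
of metadata exist.  A minimal sketch of the merging step, taking the
implementation's SRFI list and the packages' SRFI list as plain lists
of numbers:

  (import (scheme base))

  ;; Merge the SRFI numbers reported by an implementation with those
  ;; provided by its packages, dropping duplicates.
  (define (full-srfi-coverage impl-srfis package-srfis)
    (let loop ((nums (append impl-srfis package-srfis))
               (seen '()))
      (cond ((null? nums) (reverse seen))
            ((memv (car nums) seen) (loop (cdr nums) seen))
            (else (loop (cdr nums) (cons (car nums) seen))))))

  (full-srfi-coverage '(0 1 6 9) '(1 133 158))
  ;; => (0 1 6 9 133 158)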

I would be happy to help in the immediate future, though I'm afraid
prior commitments might tear me away in the long run.