Yes; I've now moved that there, under https://github.com/schemedoc/schemedoc.el - I hope that's fine. It's still an Org-mode file. GitHub renders Org-mode files nicely, so it's readable for everyone. If someone also wants to edit the content but doesn't want to learn Org-mode just for that, please tell me; in that case I would split it into a Markdown description and .el sources.

That's fine :) Org-mode is probably fine for markup. I would put each standalone unit of source code into its own source file, but if it's just some code snippets that don't form a useful whole, then I guess it's fine to embed them in the Org-mode file.

I have a half-finished version of a community-driven Emacs Lisp snippet manager but can't find the time to clean it up for release.

Thanks; I've started working on the feature list and will also put it into an appropriate repo under the schemedoc organization. I'll probably start with a Markdown description and add some proof-of-concept sources later. I'll post links here once something useful is available.

Great. We have done some planning and drafting in the wiki at https://github.com/schemedoc/wiki but if you're more comfortable with another repo or format, that's fine. Amirouche suggested using ordinary markdown files in a Git repo instead of GitHub's wiki since it's easier to manage things with pull requests and edit locally. That may be better for the long run. The pre-made wiki was mainly a way to get things started quickly.

You have a lot of good material in the Gists, so maybe they could serve as starting points for some repos to explore all of this.

The remaining bits are the SRFI HTML scrapers, and I think they will find a nice new home in the server-side code of the API server one day ;) I'll move them once we have a concrete understanding of where to put them (where moving will include adaptation from Gauche Scheme, if required).

I think it would be good if each kind of scraper eventually became its own self-contained library (one that can also be turned into a self-contained executable program with the help of a little 'main' procedure that calls the library). That way it's easy to develop the scrapers locally without worrying about the server. Once a new version of a scraper library is working, the server can be updated to use it. So scraping and serving would be completely decoupled.
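Just to sketch what I mean (R7RS-style, with invented names like `(schemedoc scrape srfi)` and `scrape-srfi-index`; this is not an agreed design, only the library/executable split):

```scheme
;; --- library: does the scraping, knows nothing about the server ---
(define-library (schemedoc scrape srfi)
  (export scrape-srfi-index)
  (import (scheme base))
  (begin
    ;; Turn the raw HTML text of the SRFI index into one S-expression.
    ;; (Real HTML parsing omitted; only the interface shape matters here.)
    (define (scrape-srfi-index html-text)
      '(srfi-index (documents ())))))

;; --- program: a little 'main' that makes the library a CLI tool ---
;; Reads HTML on stdin, writes the scraped S-expression to stdout.
(import (scheme base) (scheme write) (schemedoc scrape srfi))

(define (read-all port)
  (let loop ((chars '()))
    (let ((c (read-char port)))
      (if (eof-object? c)
          (list->string (reverse chars))
          (loop (cons c chars))))))

(write (scrape-srfi-index (read-all (current-input-port))))
(newline)
```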

The interface between a scraper and the server could be a Lisp object (S-expression). So each scraper would just return a huge S-expression containing everything it knows about its subject. The server would then break those huge S-exprs into smaller pieces for the various API endpoints.
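For example (the field names and structure here are just invented placeholders, not a format proposal), a scraper result and the server-side slicing could look roughly like this:

```scheme
;; Illustrative only: one big S-expression per scraper...
(define example-scrape-result
  '(srfi-index
    (documents
     ((number 1) (title "List Library") (status final))
     ((number 2) (title "AND-LET*") (status final)))))

;; ...which the server slices into per-endpoint pieces with ordinary
;; list operations, e.g. one entry per /srfi/<number> endpoint.
(define (document-entries result)
  (cdr (assq 'documents (cdr result))))

(define (document-by-number result n)
  (let loop ((entries (document-entries result)))
    (cond ((null? entries) #f)
          ((= n (cadr (assq 'number (car entries)))) (car entries))
          (else (loop (cdr entries))))))
```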

This also has the advantage that the S-exprs can be stored in files for inspection, manipulated with Unix pipes, etc. I just started working on a 'lisq' program to query and manipulate S-expressions from the command line. It's inspired by the great 'jq' program (a JSON query tool).
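Not lisq itself (its interface is still in flux), but to show the general idea, here's a toy R7RS script that reads one S-expression from stdin and walks down a path of assq keys given on the command line, jq-style:

```scheme
;; Toy sketch only, not the actual 'lisq' interface.
(import (scheme base) (scheme read) (scheme write) (scheme process-context))

;; Follow a path of keys through nested association-style lists.
(define (walk sexpr keys)
  (if (null? keys)
      sexpr
      (let ((entry (assq (string->symbol (car keys)) (cdr sexpr))))
        (and entry (walk entry (cdr keys))))))

(write (walk (read) (cdr (command-line))))
(newline)
```

So something like `toy-lisq documents < srfi-index.scm` (however your Scheme implementation runs scripts) would print just the `(documents ...)` sub-expression.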