Re: Who should maintain the RnRS subdomains? Alaric Snell-Pym 29 Nov 2020 14:25 UTC
On 28/11/2020 23:23, Arthur A. Gleckler wrote:
> On Sat, Nov 28, 2020 at 3:03 PM Lassi Kortela <> wrote:
>> Should we have for novices?
> Yes, that would be great — assuming, of course, that we can find a
> volunteer!

Relatedly, I've been wondering whether we should try to agree on some
criteria for what gets to be a subdomain.

As they are the unit of delegation, subdomains should perhaps reflect
"organisational" structure - each *publishing entity* gets a subdomain.
This will sometimes only loosely follow a natural knowledge-management
hierarchy for viewers: stuff like automatically gathered API docs and
tutorials will probably be produced by different groups/people but, in
terms of front-page navigation, should all go under "Learn" or
something like that.

Should we encourage subdomain sites to have common top-level navigation
elements and styling? That would make sense to help produce a more
unified experience, but wouldn't make sense for less "core" things, like
implementation sites. We need to strike some balance involving:

1) Making it clear to users when they leave common "Scheme Community"
territory and go into a more "third party" site, such as an
implementation page, so they know that statements they find there apply
to that implementation rather than to Scheme as a whole.

2) Making it easy for users to navigate around within the "Scheme
Community" zone, without needing to backtrack all the time to find
their way to other areas.

3) Not putting an unnecessary burden on subdomain groups to comply with
some style guide, nor a burden on us to maintain one.

My hunch is that:

1) Aliasing sites that already have their own identity probably won't
take off. Chicken already has its own site; what purpose would an alias
fulfil that a link from an Implementations section doesn't? Therefore,
most * sites will be new sites, or existing sites that seek to port
themselves in and so will be doing a bit of a redesign to fit in anyway;
inconsistent design won't be a problem for users in practice.

2) We should write the CSS for the main site in such a way that it's
easy to include on subdomain sites, so subprojects can get a consistent
look and feel with minimal work. (That includes not going willy-nilly
changing the whole way the CSS works, or adjusting all the classes, in
ways that would break other HTML using the same CSS.) Using it would be
just a suggestion; subprojects can do what they want. They'll already be
aware that there are benefits to "fitting in" nicely; we should just
make it easy for them, with no shame in diverging where appropriate. I'm
a bit behind on the state of the art with regard to cross-domain
requests, so I don't know whether subdomains can just link to that CSS
directly without browsers moaning, or whether they'll need to mirror it
to their own subdomain at site deploy time, or what.
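For what it's worth, the deploy-time mirroring option is easy to sketch. Here's a minimal Python sketch under assumed names (`SHARED_CSS_URL` is hypothetical; where the shared stylesheet would actually live isn't settled): copy the shared stylesheet into the subdomain's own tree when the site is built, and never overwrite the local copy if the fetch fails.

```python
from pathlib import Path
from urllib.request import urlopen

# Hypothetical URL for illustration only -- not an agreed location.
SHARED_CSS_URL = "https://example.org/shared.css"

def mirror_asset(url, dest, fetch=None):
    """Copy a shared asset into the local site tree at deploy time.

    Only overwrites the local copy if the fetch succeeds, so a flaky
    upstream never leaves the site without its stylesheet.
    """
    fetch = fetch or (lambda u: urlopen(u).read())
    try:
        data = fetch(url)
    except Exception:
        return False  # keep whatever copy we already have
    Path(dest).write_bytes(data)
    return True
```

The `fetch` parameter is injectable so the function can be exercised without a network; a real deploy script would just call `mirror_asset(SHARED_CSS_URL, "static/shared.css")`.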

3) We should see if there's some reasonable way to share top-bar
navigation between projects in a similar manner, even if it's just that
we serve the top-bar HTML on its own at some URL so that subprojects
can pull it into their template system when they build and deploy new
HTML. That's probably a reasonable tradeoff?
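That build-time inclusion could be as simple as a placeholder substitution in each project's templates. A minimal sketch, assuming a hypothetical `NAV_URL` and placeholder comment (neither is anything we've agreed on):

```python
from urllib.request import urlopen

# Both names below are assumptions for illustration, not agreed locations.
NAV_URL = "https://example.org/navbar.html"
PLACEHOLDER = "<!-- shared-navbar -->"

def fetch_navbar(fetch=None):
    """Fetch the shared top-bar HTML once, at build/deploy time."""
    fetch = fetch or (lambda u: urlopen(u).read().decode("utf-8"))
    return fetch(NAV_URL)

def render_page(template, navbar):
    """Splice the shared navbar into a page template."""
    return template.replace(PLACEHOLDER, navbar)
```

Because the substitution happens at deploy time, readers get plain static HTML and the subproject only depends on the shared URL when it rebuilds.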

>> It sounds like a good and workable idea. Fetching the news
>> programmatically from an RSS feed would cause no admin burden to the front
>> page admins. If they are fetched by a small bit of JavaScript, the front
>> page can default to a blank news box in case the feed server doesn't
>> respond.
> I was thinking of doing it through a cron job on the server side so that we
> could still serve the page at lightning speed, even with the contents of
> the feeds, taking full advantage of caching.  (Of course, we'd have to work
> out the Nginx configuration issue we discussed earlier.)  Nightly updates —
> or even hourly updates — would be perfectly adequate for our purposes.

Yeah, things that rely on JavaScript unnecessarily make me feel a bit
ill (and that would require many, many HTTP fetches from the client
browser to display the whole page, which could take a while). An hourly
cronjob on the server that pulls a bunch of RSS feeds into (a) a static
HTML front page with links to the latest items in each subdomain's feed
and (b) a static RSS file (listed as the feed <link> from that HTML
front page) with an aggregated feed would be nice.
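The merge step of that cron job is small. A sketch, assuming plain RSS 2.0 documents with RFC 2822 `pubDate`s (the element names are standard RSS; everything else here is illustrative):

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

def collect_items(rss_documents):
    """Pull <item>s out of several cached RSS documents and merge them,
    newest first, ready to render into the static front page and the
    aggregated feed."""
    items = []
    for doc in rss_documents:
        root = ET.fromstring(doc)
        for item in root.iter("item"):
            date = parsedate_to_datetime(item.findtext("pubDate"))
            items.append((date, item.findtext("title"), item.findtext("link")))
    # Sort across all sources by publication date, most recent first.
    items.sort(key=lambda entry: entry[0], reverse=True)
    return items
```

The cron job would read each cached feed file from disk, pass the lot to `collect_items`, and write the resulting list out as both HTML and an aggregated RSS file.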

(Implementation note: remember that upstream sources can fail, so the
logic should probably be "attempt to update our cache of each RSS feed,
being careful not to overwrite any existing RSS if an error response
comes back; then work from the cached files", so that an upstream
failure freezes the updates from that source rather than breaking
anything. Extra credit will be awarded to an implementation that sends
conditional fetch headers to find out whether the upstream RSS has
actually changed from what's cached, to save bandwidth, thereby making
it practical to move up to lightning-fast ten-minutely cron job
scheduling.)
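That cache-update logic, with conditional headers and the "never clobber the cache on failure" rule, might look roughly like this. A sketch only: the `.meta` sidecar file for the saved `ETag`/`Last-Modified` values is my own invention, not a settled design.

```python
import json
import os
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def update_cache(url, cache_path, fetch=None):
    """Refresh one cached feed with a conditional request, keeping the
    old copy on a 304 Not Modified or on any error."""
    meta_path = cache_path + ".meta"  # sidecar file: hypothetical scheme
    headers = {}
    if os.path.exists(meta_path):
        meta = json.load(open(meta_path))
        if meta.get("etag"):
            headers["If-None-Match"] = meta["etag"]
        if meta.get("last_modified"):
            headers["If-Modified-Since"] = meta["last_modified"]

    def default_fetch(u, h):
        resp = urlopen(Request(u, headers=h))
        return resp.read(), resp.headers.get("ETag"), resp.headers.get("Last-Modified")

    fetch = fetch or default_fetch
    try:
        body, etag, last_modified = fetch(url, headers)
    except HTTPError:
        return False  # includes 304 Not Modified: cached copy stays valid
    except Exception:
        return False  # network failure etc.: keep the cached copy
    with open(cache_path, "wb") as f:
        f.write(body)
    json.dump({"etag": etag, "last_modified": last_modified}, open(meta_path, "w"))
    return True
```

Note that `urlopen` raises `HTTPError` for a 304 response, which conveniently falls into the same "leave the cache alone" branch as a genuine upstream error.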

Alaric Snell-Pym   (M0KTN neé M7KIT)