Re: How to store the master lists of data types Lassi Kortela 03 Oct 2019 14:55 UTC

>>> And when John brought up the issue of properly backing up such a database, well, one Git repo is not enough, nor is it ideal for a raw database. I thought I'd offer to facilitate and fund a reliable place to back up all sorts of smallish Scheme standards material. I'm ... really hardcore about backups; e.g., a few hours ago I finished this month's full backup to LTO tapes. You were CC'ed because you, at minimum, would also have full access to this account, to avoid things like a repeat of the R6RS process data loss, and would likely have other useful things to back up to an rsync.net account (500GB minimum size).
>>
>> Aha, I see. Thank you. That makes sense. So the database of record would be a Git repository holding the "master source," which would periodically be dumped into a database suitable for querying, and that database would in turn be backed up by something like rsync.net. That sounds practical.
>
> Yes. Although in this case I assume it wouldn't take more than a few minutes to recreate the database from the master source log, one would still have to follow a recipe to do that.
>
> I CC'ed the original to you because rsync.net has a 500GB minimum size for an account at a very modest price, and the account could also be used to back up other standards artifacts. For example, I could natively back up all the SRFI GitHub repositories, since rsync.net has direct git support. The same goes for the mailing list archives and whatever else is important. I'm assuming another offsite backup system, where ownership and access can be shared, would help to avoid a loss of data like the one that occurred for R6RS. I follow the standard backup rule of thumb that "if you have 1, you have 0; if you have 2, you have 1...", and rsync.net has proven itself very solid in the 10 years I've used it, allowing me to recover from a disaster where 2 out of the 3 copies of some important data were lost.
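
To make the mirroring idea above concrete, here is a rough sketch rather than a working deployment. It assumes the direct git support mentioned above behaves like an ordinary SSH remote (the user, host, and paths below are invented placeholders), and it simply shells out to git from Chicken Scheme:

    ;; Hypothetical mirroring recipe: clone a bare mirror of a SRFI
    ;; repo from GitHub, then push every ref to an rsync.net account
    ;; over SSH.  The rsync.net user/host/path are made up.
    (import (chicken process) (chicken format))

    (define rsync-remote "ssh://user@usw-s001.rsync.net/srfi-mirrors")

    (define (mirror-repo! name)
      ;; Fetch a complete bare copy of the repository.
      (system (sprintf "git clone --mirror https://github.com/scheme-requests-for-implementation/~a ~a.git"
                       name name))
      ;; Push every branch and tag to the offsite remote.
      (system (sprintf "git -C ~a.git push --mirror ~a/~a.git"
                       name rsync-remote name)))

    ;; Example: (mirror-repo! "srfi-1")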

I have no problem with a database as long as the ground truth lies (heh)
in some kind of boringly standard place like a Git repo.
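
For what it's worth, the "recipe" for recreating the database could be a very small program. A minimal sketch, assuming the master source is a file of S-expression records like (name "description") kept in the Git repo, and using Chicken's sql-de-lite egg for SQLite access; the file and table names are made up:

    (import scheme (chicken base) sql-de-lite)

    ;; Rebuild the query database from the master source file.
    (define (rebuild-database source-file db-file)
      (let ((db (open-database db-file)))
        (exec (sql db "CREATE TABLE IF NOT EXISTS types (name TEXT, description TEXT);"))
        (with-input-from-file source-file
          (lambda ()
            ;; Read one record at a time until end of file.
            (let loop ((record (read)))
              (unless (eof-object? record)
                (exec (sql db "INSERT INTO types VALUES (?, ?);")
                      (symbol->string (car record))
                      (cadr record))
                (loop (read))))))
        (close-database db)))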

It'd be fun to serve the information from api.schemers.org. I can do the
requisite work. From the API's standpoint, it doesn't matter what the
ground truth format is, as long as an up-to-date copy of the information
is kept somewhere reachable via HTTPS in some format that's convenient
to parse from Scheme.
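
As a sketch of what "convenient to parse from Scheme" might mean in practice: if the copy is published as S-expressions, one call to `read` is the whole parser. This assumes Chicken's http-client egg (with the openssl egg installed for HTTPS), and the endpoint path is invented for illustration:

    (import http-client)

    ;; GET the published copy and parse the first S-expression.
    (define (fetch-type-list)
      (with-input-from-request "https://api.schemers.org/data-types" #f read))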