Database connections as subprocesses Lassi Kortela (14 Sep 2019 07:30 UTC)
Re: Database connections as subprocesses John Cowan (15 Sep 2019 01:06 UTC)
Re: Database connections as subprocesses Lassi Kortela (15 Sep 2019 06:28 UTC)
Re: Database connections as subprocesses John Cowan (15 Sep 2019 23:02 UTC)
Re: Database connections as subprocesses Lassi Kortela (16 Sep 2019 08:22 UTC)
Binary S-expressions Lassi Kortela (16 Sep 2019 17:49 UTC)
Re: Binary S-expressions Lassi Kortela (17 Sep 2019 09:46 UTC)
Re: Binary S-expressions Alaric Snell-Pym (17 Sep 2019 11:33 UTC)
Re: Binary S-expressions Lassi Kortela (17 Sep 2019 12:05 UTC)
Re: Binary S-expressions Alaric Snell-Pym (17 Sep 2019 12:23 UTC)
Re: Binary S-expressions Lassi Kortela (17 Sep 2019 13:20 UTC)
Re: Binary S-expressions Lassi Kortela (17 Sep 2019 13:48 UTC)
Re: Binary S-expressions Alaric Snell-Pym (17 Sep 2019 15:52 UTC)
Re: Binary S-expressions hga@xxxxxx (17 Sep 2019 16:25 UTC)
Re: Binary S-expressions rain1@xxxxxx (17 Sep 2019 09:28 UTC)
Re: Binary S-expressions Lassi Kortela (17 Sep 2019 10:05 UTC)
Python library for binary S-expressions Lassi Kortela (17 Sep 2019 21:51 UTC)
R7RS library for binary S-expressions Lassi Kortela (17 Sep 2019 23:56 UTC)
Re: Database connections as subprocesses Alaric Snell-Pym (16 Sep 2019 08:40 UTC)
Re: Database connections as subprocesses Lassi Kortela (16 Sep 2019 09:22 UTC)
Re: Database connections as subprocesses Alaric Snell-Pym (16 Sep 2019 11:28 UTC)
Re: Database connections as subprocesses hga@xxxxxx (16 Sep 2019 13:28 UTC)
Re: Database connections as subprocesses Lassi Kortela (16 Sep 2019 13:50 UTC)
Re: Database connections as subprocesses hga@xxxxxx (17 Sep 2019 13:59 UTC)
Re: Database connections as subprocesses John Cowan (16 Sep 2019 22:41 UTC)
Re: Database connections as subprocesses Lassi Kortela (17 Sep 2019 09:57 UTC)
Re: Database connections as subprocesses Lassi Kortela (17 Sep 2019 10:22 UTC)

Re: Database connections as subprocesses Lassi Kortela 16 Sep 2019 13:50 UTC

>>> Well, if it's a protocol, then drivers can be written in anything -
>
> Perhaps it would be easier to mix this with my current impression that
> the highest payoff in support of "second class" databases (i.e. those
> other than MySQL, PostgreSQL, and SQLite) might be through JDBC.  In
> that case, the subprocess would be a JVM, and it would be natural to
> write a great deal of its code in Kawa, instead of something alien
> and, compared to what you can do with a JVM, less performant, like
> Python.

Very interesting. The JVM is even more heavyweight than Python, but if
it helps you get to an exotic database, why not. The JVM probably also
gives the most consistent programming environment across operating
systems.

As much as I like Kawa, it may be overkill to go with any Scheme on the
JVM. Ideally the subprocess is just a simple wrapper, which could be
written in plain Java, avoiding extra dependencies. Of course, a Kawa
JDBC wrapper would be very useful for applications written in Kawa, and
whoever writes the wrapper chooses their tools :)
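
To make that concrete, here is a rough sketch of what such a wrapper
could look like. Everything about it is made up for illustration: the
class name, and a naive line-per-query, tab-separated-output protocol
stand in for whatever wire format we eventually settle on. A real
version would also need the right JDBC driver jar on the classpath and
would have to handle non-SELECT statements and errors:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.sql.*;

    // Hypothetical minimal JDBC subprocess wrapper:
    // arg 0 is a JDBC URL, one SELECT per stdin line,
    // tab-separated rows on stdout, blank line ends a result set.
    public class JdbcPipe {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(args[0]);
                 BufferedReader in =
                     new BufferedReader(new InputStreamReader(System.in))) {
                String sql;
                while ((sql = in.readLine()) != null) {
                    try (Statement st = conn.createStatement();
                         ResultSet rs = st.executeQuery(sql)) {
                        int cols = rs.getMetaData().getColumnCount();
                        while (rs.next()) {
                            StringBuilder row = new StringBuilder();
                            for (int i = 1; i <= cols; i++) {
                                if (i > 1) row.append('\t');
                                row.append(rs.getString(i));
                            }
                            System.out.println(row);
                        }
                    }
                    System.out.println();
                    System.out.flush();
                }
            }
        }
    }

The Scheme side would then spawn something like "java -cp <driver-jar>
JdbcPipe <jdbc-url>" and talk to it over the pipes, the same way as any
other driver subprocess.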

>> this approach is more for situations where an implementation
>> doesn't have a FFI
>
> And doesn't implement TCP/IP, since aside from SQLite, how many of
> these databases lack a documented TCP/IP wire protocol?

No idea, but:

1) I wouldn't be surprised if commercial databases lack openly
documented wire protocols.

2) Each database speaks its own, completely different socket protocol;
with this approach we'd have one generic protocol covering all of them.
That generic protocol is a big part of the simplicity, even discounting
the subprocess arrangement.
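
Purely as an illustration of what "one generic protocol" could mean on
the wire (the shapes below are invented on the spot, not a proposal),
a query exchange might look the same whether the subprocess wraps JDBC,
SQLite, or anything else:

    -> (query "select name, price from products")
    <- (row "apple" 1.50)
    <- (row "banana" 0.75)
    <- (end)

The Scheme side would only ever have to speak that one dialect; each
driver subprocess translates it to its database's native interface.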

>> To the Scheme application programmer, it would be best if any
>> subprocess approach offers an identical (or nearly identical)
>> interface to an FFI-based approach.
>
> Exactly: for us, an implementation detail which we make sure we get
> right by parallel development; for a user, just a different way to
> set up their stack; and at the Scheme API level, a different
> configuration handed to the [whatever]-connect procedure, and whatever
> leaks out of the abstraction, like a Java backtrace on a blowup.

Sounds good :)

Blowups of various kinds can be expected in any complex database work.
Just wait until we get to non-ASCII text in big production databases...