On Thu, Sep 12, 2019 at 8:11 AM Peter Bex <xxxxxx@more-magic.net> wrote:
 
- It seems unnecessarily complicated to me to have a special
   "transaction" object.  All SQL databases I know of have a
   transaction be a state of the connection.  So you can't
   create multiple transactions on the same connection.

That was my original idea.  But I thought I'd future-proof it a little, as databases like CockroachDB and FoundationDB start to appear that layer a SQL engine over transaction-first ordered key-value stores.  In most cases the transaction object would be the same as the database object.  I'm open to persuasion here, though.
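To make the point concrete, here's a sketch (in Python with sqlite3, purely for illustration; the `Transaction` class and its method names are hypothetical, not the proposed API) of a transaction object that is just a thin view of the connection, which is all it amounts to on engines with one transaction per connection:

```python
import sqlite3

class Transaction:
    """Hypothetical wrapper: for engines that allow only one transaction
    per connection, the 'transaction object' is effectively the
    connection itself, just dressed up."""
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("BEGIN")

    def execute(self, sql, params=()):
        return self.conn.execute(sql, params)

    def commit(self):
        self.conn.commit()

    def rollback(self):
        self.conn.rollback()

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
tx = Transaction(conn)
tx.execute("CREATE TABLE t (x INTEGER)")
tx.execute("INSERT INTO t VALUES (1)")
tx.commit()
```

On a transaction-first store the same object could instead carry its own independent state, which is the future-proofing argument.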
 
- I'm not sure giving blobs such prominence is needed.  I still
   haven't gotten around to implementing blob support in the
   postgresql egg, and I think I've never used blobs in my professional
   career, ever.  It's a bit of an antipattern to store file data in
   databases. 

Blobs are going to be important for representing more or less arbitrary Scheme data that doesn't have an external S-expression representation.  I have an idea, not really written down properly yet, for using a variant of ASN.1 called LER (Lisp Encoding Rules) that provides binary representations of proper and improper lists, multi-dimensional arrays, and other Lispy things.  A table of encodable types is at <http://tinyurl.com/asn1-ler>.
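For flavor, here's a toy tag-length-value encoder in the general spirit of ASN.1 (sketched in Python; the tag values and the encoding itself are made up for illustration and are NOT the actual LER assignments):

```python
# Hypothetical TLV encoding of Lispy data: each value is one tag byte,
# one length byte, then the body.  Tags below are invented, not LER's.
TAG_INT, TAG_PROPER, TAG_IMPROPER = 0x02, 0x30, 0x31

def encode(obj):
    if isinstance(obj, int):
        body = obj.to_bytes((obj.bit_length() + 8) // 8, "big", signed=True)
        return bytes([TAG_INT, len(body)]) + body
    if isinstance(obj, list):                      # proper list
        body = b"".join(encode(x) for x in obj)
        return bytes([TAG_PROPER, len(body)]) + body
    if isinstance(obj, tuple) and len(obj) == 2:   # (car . cdr) improper pair
        body = encode(obj[0]) + encode(obj[1])
        return bytes([TAG_IMPROPER, len(body)]) + body
    raise TypeError(obj)

# A proper list containing an improper pair, as one self-describing blob:
blob = encode([1, (2, 3)])
```

The resulting bytes are exactly the sort of thing a database blob column would hold.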
 
- I would extend transaction support to include a "level".  Like I
  mentioned in my other mail, you can nest transactions via savepoints
  which can be individually rolled back or "committed" (which is a no-op).

That gives only partially nested transactions: you can have two levels but not three.  MySQL also has the problem that a DDL statement commits the current transaction (not true in SQLite).  And it's implementation-dependent whether an uncommitted inner transaction is visible to the outer transaction.  It's messy.
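For reference, this is the savepoint mechanism under discussion, shown against SQLite (via Python's sqlite3, for concreteness): rolling back the inner level leaves the outer level's work intact.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # take manual control of transactions
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("SAVEPOINT sp1")        # inner 'transaction' level
conn.execute("INSERT INTO t VALUES (2)")
conn.execute("ROLLBACK TO sp1")      # undo only the inner level
conn.execute("RELEASE sp1")
conn.execute("COMMIT")

rows = conn.execute("SELECT x FROM t").fetchall()
# only the outer insert survives
```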
   
- The "meta" interface does not look very well-thought out.  This stuff
  is super complicated and the API seems too limited for proper
  introspection.

I agree: it was a last-minute idea.  What matters for SQLite purposes (or at least for me) is to know what tables exist, and for an existing table, what its column names are, their types, and their nullable status.  The rest is dispensable.
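That minimal set of questions is answerable in SQLite today via sqlite_master and PRAGMA table_info; a sketch (in Python, for concreteness):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER)")

# What tables exist?
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

# For a given table: column names, declared types, and nullability.
# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
cols = [(name, ctype, bool(notnull))
        for _, name, ctype, notnull, _, _ in conn.execute("PRAGMA table_info(person)")]
```

A "meta" interface restricted to exactly this would be small but honest.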
 
- Drop the difference between statements and queries; in Postgres at
   least you can get a result set with INSERT...RETURNING, so the
   difference is moot.  Everything can return a result set.

I'm okay with saying that INSERT...RETURNING should be treated as a query.  The difference is between operations that produce a stream of results and those that do not.  Obviously a DDL statement, a delete, or whatever, could be treated as producing a stream of results that is always empty, but what for?
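The "stream that is always empty" alternative is easy to picture (sketched in Python/sqlite3; this is the behavior being weighed, not the proposed API): a DDL statement run through the query interface simply yields no rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A DDL 'statement' run through the same interface as a query
# produces an empty stream of rows.
cur = conn.execute("CREATE TABLE t (x INTEGER)")
ddl_rows = cur.fetchall()

conn.execute("INSERT INTO t VALUES (42)")
query_rows = conn.execute("SELECT x FROM t").fetchall()
```

The uniformity is tidy; the question above is whether it buys anything.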
 
- Perhaps represent result sets as opaque random-access objects? 

I don't think random access makes sense, especially for large result sets.  The idea is that a statement can be executed if you don't care about its results, but if you do, you get a query object that acts as a generator.
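The generator idea, sketched in Python against sqlite3 (the `rows` helper is hypothetical, just to show the shape): rows are pulled one at a time, so a large result set is never materialized at once.

```python
import sqlite3

def rows(conn, sql, params=()):
    """Hypothetical query-as-generator: yields rows lazily rather than
    building the whole result set in memory."""
    cur = conn.execute(sql, params)
    yield from cur

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

gen = rows(conn, "SELECT x FROM t ORDER BY x")
first = next(gen)        # one row pulled on demand
rest = list(gen)         # the remainder, only if the caller wants it
```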
 
Those
   can be converted to Scheme lists as needed.  This could provide some
   efficiency and makes it easier to access meta data like data types of
   the columns in the result set etc.

I've been thinking about whether it's better to return a row as a list, as currently, or as an a-list mapping names to values.  The latter is definitely easier to use, but also more work.
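Both flavours side by side (in Python/sqlite3 terms, just to illustrate the trade-off): the positional row is what comes cheap; the name-to-value pairing needs extra machinery, here a row factory.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, age INTEGER)")
conn.execute("INSERT INTO t VALUES ('Ada', 36)")

# Positional row (the 'list' flavour) -- the cheap default:
pos = conn.execute("SELECT name, age FROM t").fetchone()

# Name-to-value pairs (the 'a-list' flavour) -- needs a row factory:
conn.row_factory = sqlite3.Row
named = conn.execute("SELECT name, age FROM t").fetchone()
pairs = list(zip(named.keys(), tuple(named)))
```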


John Cowan          http://vrici.lojban.org/~cowan        xxxxxx@ccil.org
I must confess that I have very little notion of what [s. 4 of the British
Trade Marks Act, 1938] is intended to convey, and particularly the sentence
of 253 words, as I make them, which constitutes sub-section 1.  I doubt if
the entire statute book could be successfully searched for a sentence of
equal length which is of more fuliginous obscurity. --MacKinnon LJ, 1940