Re: HTTP request handler / middleware SRFI Peter Bex 07 Apr 2019 08:39 UTC
On Sat, Apr 06, 2019 at 01:20:38PM -0700, Arthur A. Gleckler wrote:
> On Sat, Apr 6, 2019 at 1:24 AM Peter Bex <xxxxxx@more-magic.net> wrote:
> > The original version of Spiffy had something like this as well.
> > It is very user-friendly and convenient, but it makes it quite difficult
> > to remove dispatchers.
>
> Why would that be?

Maybe I just didn't try hard enough :)

Having the routes available in an alist makes removal trivial, but with
the handler chain I think it's just as tough to remove one specific
handler from the chain.

> > It also makes it impossible to run multiple servers in different threads
> > in the same process, which handle different applications.
> >
>
> Shuttle is multi-threaded (per request),

Yeah but that means each thread serves the same application, right?

> > What I like about this approach is that the request *dispatching* is
> > completely separated from the request/response *handling*.
>
> I like the idea of exposing that abstraction, just in case people need it,
> but it's too low-level for daily use, and encourages serialization of
> dispatching, which could become a performance problem.

I agree; and as you mention, having multiple handlers doesn't preclude
having one handler which takes care of the dispatching based on a table.

Perhaps the dispatch table can be a (wrapped) first class object.
Something like:

(define router (make-request-router))

;; Register a handler for the /foo/bar path
(register-route! router '("foo" "bar") (lambda (request) ...))

;; Hook the router into the server's handler chain
(add-handler! (request-router-handler router))

It's a bit clunky, but I'm sure you get the gist.
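To make the idea concrete, here's a toy implementation of that
hypothetical API (all the names are made up, and for simplicity a
"request" here is just its path as a list of strings, where a real
server would pass a request record with a path accessor):

```scheme
;; Toy sketch of the hypothetical router API above; not a real SRFI.

;; For this toy, a "request" is just its path, e.g. '("foo" "bar");
;; a real server would pass a request record with a path accessor.
(define (request-path request) request)

;; Stand-in for whatever the server does on an unmatched route.
(define (not-found-response) '(404 () "Not found"))

(define (make-request-router)
  ;; A router is a mutable box holding an alist of (path . handler).
  (list 'router '()))

(define (router-routes router) (cadr router))
(define (router-set-routes! router routes) (set-car! (cdr router) routes))

(define (register-route! router path handler)
  (router-set-routes! router
                      (cons (cons path handler) (router-routes router))))

(define (unregister-route! router path)
  ;; Removal is trivial because the routes live in an alist.
  (router-set-routes! router
    (let loop ((routes (router-routes router)))
      (cond ((null? routes) '())
            ((equal? (caar routes) path) (loop (cdr routes)))
            (else (cons (car routes) (loop (cdr routes))))))))

(define (request-router-handler router)
  ;; Returns an ordinary handler that dispatches on the request path.
  (lambda (request)
    (let ((route (assoc (request-path request) (router-routes router))))
      (if route
          ((cdr route) request)
          (not-found-response)))))
```

The point is that the routing table is an ordinary first-class value,
so adding and removing routes stays easy.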

This way a reusable application could export a router from its module
and you can hook it into the handler chain wherever you want.

Having something like this would make routers composable, so you could
register a router ("mount" it) underneath a different path in a larger
router:

(import (only (example blog) blog-router)) ; has /article/foo
(import (only (example wiki) wiki-router)) ; also has /article/foo

(define main-router (make-request-router))

(register-route! main-router '("blog") blog-router) ; now we have /blog/article/foo
(register-route! main-router '("wiki") wiki-router) ; and /wiki/article/foo

(add-handler! (request-router-handler main-router))

> That would be really cool.  With header dispatching, would we want some way
> to specify what takes priority, or would it be safe to say that one always
> dispatched on host first, then headers, then path, then query parameters?

I'm not sure; conceptually for me at least, when an "accept" header is
passed in for a specific route, it makes more sense to determine the route
first and the content type second.  It depends a bit on the header you want
to dispatch on, I suppose.

Maybe it's overkill; you can always check the headers inside the handler
for the route as long as you have access to the request object.
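For example, negotiating on the "accept" header inside the route's
handler could look something like this (request-headers, make-response
and the write-article-* procedures are all assumed names here, and
string-contains is from SRFI 13):

```scheme
;; Sketch: content negotiation inside a route handler instead of in
;; the dispatcher.  `request-headers` is an assumed accessor returning
;; an alist of (symbol . string) header pairs.
(define (accepts? request type)
  (let ((accept (assq 'accept (request-headers request))))
    (and accept (string-contains (cdr accept) type))))

(define (article-handler request)
  (if (accepts? request "application/json")
      (make-response 200 '((content-type . "application/json"))
                     (lambda (port) (write-article-json port)))
      (make-response 200 '((content-type . "text/html"))
                     (lambda (port) (write-article-html port)))))
```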

> Yes, I use lists, too, but it would be nice to make the syntax match the
> well-understood and widely used Wright match macro.

It's not required of course, but it makes things easier to understand
and less "special-cased".

> I never implemented HTTP/1.1, so I never implemented chunked replies.
> However, I was trying to anticipate it, so I think the API will still
> handle it.  The idea is that the thunk can write for as long as it likes,
> and the server itself can read that port and convert the results into
> chunks, which it then delivers.  But if the response is short enough, it
> can skip all that and just deliver the result directly, with just a little
> buffering.  It leaves those decisions to the server, which seems okay.

The main reason I think this would be tricky is that, at least in Spiffy,
the low-level interface requires one to send the headers first, and then
you can start writing the body.  Of course, if you do it that way, you
have to set the Transfer-Encoding header first; otherwise you can't chunk
the content.

That means the port you pass to the writer needs to know if it should
"auto-chunk" or just pass on the data as-is.
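The framing itself is simple enough; the write side of such an
"auto-chunk" port would produce something like this (per RFC 7230,
section 4.1: payload size in hex, CRLF, payload, CRLF, and a
zero-length chunk to terminate):

```scheme
;; Sketch of the write side of an "auto-chunk" port.  For simplicity
;; this counts characters; a real server would count bytes, since for
;; non-ASCII payloads the two differ.
(define (write-chunk out data)           ; data is a string
  (display (number->string (string-length data) 16) out)
  (display "\r\n" out)
  (display data out)
  (display "\r\n" out))

(define (write-last-chunk out)
  ;; A zero-length chunk marks the end of the body.
  (display "0\r\n\r\n" out))
```

So (write-chunk port "hello") emits "5\r\nhello\r\n" on the wire.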

> > Perhaps you can return either a string which will be returned directly,
> > or a "writer" procedure which receives a port which, when written to,
> > sends the payload wrapped in a "chunk" of a chunked encoding response.
>
> Yes, that's a nice idea.  I actually used to take either a string or a
> thunk, but found that using the thunk consistently was easier and seemed to
> have no drawbacks.  But taking a procedure that takes a port might be
> better than taking a thunk.  I just assume that the server sets the current
> output port when it calls the thunk.

We could do both, like call-with-output-port versus with-output-to-port
in the standard, but I think it's overkill.
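For reference, the two conventions side by side, with the thunk style
layered on the port style via parameterize (names made up):

```scheme
;; (a) The writer receives the response port explicitly:
(define (writer/port port)
  (display "Hello, world" port))

;; (b) The writer is a thunk; the server binds current-output-port:
(define (writer/thunk)
  (display "Hello, world"))

;; On the server side, (b) reduces to (a) with parameterize:
(define (call-writer-as-thunk thunk port)
  (parameterize ((current-output-port port))
    (thunk)))
```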

> > I'm not sure this is the perfect solution, because you ideally want to
> > automatically support HEAD and Range requests too.  And if possible,
> > the caching headers should be set automatically as well.
>
> I'm assuming that you mean automatically supporting HEAD if no specific
> dispatcher is specified for it, just running the GET handler and truncating
> the result.  Then you just don't run the writer, right?  Since the headers
> are returned separately from the writer, they can still be delivered.
> Content-Length might not be known, but HEAD isn't required to deliver that,
> anyway, as far as I understand.

Yeah, you're right; there's no special support needed for that.

> Does anyone successfully use Range requests?  They always seemed impossible
> to fit nicely into any kind of general framework.

Maybe we should just ignore those for now.

> Using the Github repo wiki that Lassi set up, I've put together these notes
> on what we've been discussing:
>
>   https://github.com/schemeweb/wiki/wiki/Shuttle-Spiffy-ideas

Thanks for putting this up!

Cheers,
Peter