Re: coop - a concurrent ml-like wanna be (pre-SRFI)
Alex Shinn 07 May 2021 07:41 UTC
On Fri, May 7, 2021 at 4:01 PM Marc Nieper-Wißkirchen
<xxxxxx@nieper-wisskirchen.de> wrote:
>
> I agree that one has to make some assumptions about the Scheme implementation, but luckily the reality is monotone. Case-lambda is much more easily optimizable than general match code and there are no Scheme implementations with a slow case-lambda but that know how to optimize a (lambda args (match args ...)) construct.
(chibi optimize rest) is a proof-of-concept optimization pass for
rest-argument destructuring in the style of let-optionals.
If you compare the (chibi disasm) bytecode for:
(case-lambda ((x) (+ x 1)) ((x y) (+ x y)))
vs
(opt-lambda (x (y 1)) (+ x y))
you'll see the latter is already more efficient, and after
(import (chibi optimize rest))
it becomes substantially faster and no longer conses at all.
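As a minimal sketch of how to make that comparison at the Chibi REPL
(assuming a standard Chibi build where (chibi disasm) exports disasm
and opt-lambda comes from (chibi optional)):

  ;; Sketch only; module/export names assumed as noted above.
  (import (chibi) (chibi disasm) (chibi optional))

  ;; Dump bytecode for the two-clause case-lambda version:
  (disasm (case-lambda ((x) (+ x 1)) ((x y) (+ x y))))

  ;; Dump bytecode for the opt-lambda version:
  (disasm (opt-lambda (x (y 1)) (+ x y)))

  ;; Enable the proof-of-concept optimization, then recompile
  ;; and disassemble again to see the cons-free version:
  (import (chibi optimize rest))
  (disasm (opt-lambda (x (y 1)) (+ x y)))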
Because this is a proof of concept and disabled by default, it doesn't
handle all patterns.
For example, the output of opt-lambda is slightly less efficient than
just writing out:

  (lambda (x . o)
    (let ((y (if (pair? o) (car o) 1)))
      (+ x y)))
But this could be improved, and common match-lambda cases could be
supported as well.
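To sketch what that might look like (hypothetical; the current proof
of concept does not handle this), a simple arity-dispatching match
over the argument list:

  ;; Hypothetical rewrite target, not implemented today.
  (lambda args
    (match args
      ((x) (+ x 1))
      ((x y) (+ x y))))

could in principle be rewritten to dispatch directly on the rest
parameter without ever consing an argument list:

  (lambda (x . o)
    (if (null? o)
        (+ x 1)
        (+ x (car o))))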
This type of code rewriting takes advantage of the simple structure of
Chibi's AST.
It would actually be more difficult to optimize if Chibi had native
case-lambda support.
--
Alex