Re: Keyword arguments in procedures specified in SRFIs
Lassi Kortela 22 Jul 2019 08:35 UTC
Thanks for explaining Kawa's approach to us, Per. It's important to keep
efficiency in mind.
>> Racket keywords have the form #:foo and are part of the syntax of
>> function calls rather than a self-evaluating literal.
>
> In newer versions of Kawa too keywords are syntax rather than
> self-evaluating literals.
I didn't know that. Do you have something like Racket's keyword-apply?
> https://www.gnu.org/software/kawa/Application-and-Arguments-Lists.html
>
> The Common Lisp keywords-are-self-evaluating model has the advantage of
> simplicity, but the Kawa/Racket model has some advantages too, most
> importantly that static analysis is easier:
> * When calling a known procedure the compiler can statically determine if
> a keyword actual argument is valid. (Not yet implemented in Kawa.)
> * Easier to generate efficient code.
How does this go in practice? When I worked with Common Lisp, almost all
calls to functions taking keyword arguments were like this:
(some-function arg1 arg2 :keyword3 arg3 :keyword4 arg4)
i.e. the keywords are literals in the source code. I'm not well versed
in compilers, but is there some reason the static analyzer doesn't
normally know the lambda list of some-function and see that :keyword3
and :keyword4 are keyword literals at the right positions?
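To make the question concrete, here is a minimal sketch (in Python, just for illustration) of the kind of check I have in mind: if the analyzer knows the callee's lambda list, it can validate keyword literals at each call site before anything runs. The signature table and function names are invented for the example, not taken from any real implementation.

```python
# Hypothetical signature database a compiler might build from lambda lists.
KNOWN_SIGNATURES = {
    "some-function": {
        "required": 2,                       # arg1, arg2
        "keywords": {"keyword3", "keyword4"},
    },
}

def check_call(fn_name, positional_count, keyword_literals):
    """Return a list of static-analysis errors for one call site."""
    sig = KNOWN_SIGNATURES.get(fn_name)
    if sig is None:
        return []                            # unknown callee: nothing to check
    errors = []
    if positional_count < sig["required"]:
        errors.append("too few positional arguments")
    for kw in keyword_literals:
        if kw not in sig["keywords"]:
            errors.append("unknown keyword :" + kw)
    return errors

# (some-function arg1 arg2 :keyword3 arg3 :keyword4 arg4) checks out:
print(check_call("some-function", 2, ["keyword3", "keyword4"]))  # []
# A misspelled keyword is caught without running the program:
print(check_call("some-function", 2, ["keyword5"]))
```

This only works when the keywords are literals at the call site, which is exactly the common case I described.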
It's true that if keywords are just different symbol objects, it's
harder to optimize something like this:
(defun just-trolling (some-keyword)
  (some-function 1 2 some-keyword 3))
But whoever writes that is asking for slow code (and speed will be the
least of their problems). That kind of usage is not normal.
I can look at SBCL and SICL to see what the Common Lisp optimizers do.
They've been dealing with this problem for decades. In fact, I'll email
SICL's author, Robert Strandh. He is very actively and enthusiastically
developing his implementation.
> The Kawa calling convention involves sorting the keywords at both
> the caller and callee sites. This can be done at compile-time: For the
> callee,
> assuming the callee doesn't allow "other keywords"; for the caller,
> assuming argument-list objects aren't used.
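If I understand the sort-at-both-sites idea correctly, it might look something like the following sketch (in Python; this is my reading of the description, not Kawa's actual code). The callee fixes a canonical sorted order for its keyword parameters; the caller sorts its keyword arguments into that order at compile time, so at run time the callee reads them positionally with no name lookup. All names here are made up for the example.

```python
MISSING = object()  # marker for "keyword not supplied"

def make_callee(keyword_names, body):
    """Callee site: fix a canonical (sorted) keyword order once."""
    order = sorted(keyword_names)
    def callee(*slots):
        # Slots arrive pre-sorted; bind them by position, not by name.
        kwargs = {name: val
                  for name, val in zip(order, slots)
                  if val is not MISSING}
        return body(**kwargs)
    return callee, order

def compile_call(order, supplied):
    """Caller site: turn {name: value} into the positional slot vector."""
    return [supplied.get(name, MISSING) for name in order]

callee, order = make_callee(
    ["keyword4", "keyword3"],
    lambda keyword3=None, keyword4=None: (keyword3, keyword4),
)
slots = compile_call(order, {"keyword3": 30})
print(callee(*slots))   # (30, None)
```

Both sorts happen before run time, so the residual call is just a positional one.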
Does this mean that if keywords were just ordinary symbol-like objects,
then a call like the some-function example above could still be
statically analyzed to use this fast and compact calling convention,
whereas the just-trolling example would have to fall back to the same
slower calling convention as allow-other-keys?
Would this imply that each procedure taking kwargs in principle needs
two entry points - a fast path and a slow path? Could the compiler
generate those paths lazily on demand according to the needs of call
sites (i.e. whether they make fast calls, slow calls, or both)?
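To sketch what I mean by two entry points (again in Python, with invented names, not any implementation's real scheme): a fast entry takes pre-resolved positional slots, and a slow entry parses an arbitrary keyword/value list at run time, as allow-other-keys-style calls would need, then forwards to the fast entry.

```python
def draw_fast(x, y, color, width):
    # Fast path: keywords already resolved to positions by the compiler.
    return f"draw at ({x}, {y}) color={color} width={width}"

def draw_slow(x, y, *kw_list):
    # Slow path: scan the keyword/value pairs at run time, apply
    # defaults, silently ignore unrecognized keywords, then forward.
    opts = {"color": "black", "width": 1}
    for name, value in zip(kw_list[::2], kw_list[1::2]):
        if name in opts:
            opts[name] = value
    return draw_fast(x, y, opts["color"], opts["width"])

# A call site with literal keywords compiles straight to the fast entry:
print(draw_fast(1, 2, "red", 3))        # draw at (1, 2) color=red width=3
# A call with a computed keyword falls back to the slow entry:
kw = "width"
print(draw_slow(1, 2, kw, 3))           # draw at (1, 2) color=black width=3
```

A compiler could presumably emit only the entries that actual call sites need, which is the laziness I was asking about.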