On Tue, Jun 15, 2021 at 10:33 AM Marc Nieper-Wißkirchen <xxxxxx@gmail.com> wrote:

> Okay, it was the compilers to machine code that I had in mind, the ones that try to achieve code efficiency so that Scheme code has a chance to rival C.

But we must at least consider all Scheme systems.

> If you want to beat a C for loop with higher-order procedures in Scheme,

A worthy activity, but definitely not the only criterion that matters; indeed, to some Schemers it is a very unimportant criterion.

> I believe that closure allocation needn't be expensive, nor should it be in a language like Scheme (because otherwise the implementation fails to handle idiomatic code efficiently).
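
(To make the "idiomatic code" point concrete, here is a minimal sketch with made-up procedure names: the higher-order version captures a free variable and so, in an implementation that doesn't optimize it away, allocates a fresh closure on each call, while the named-let version, the moral equivalent of a C for loop, is typically compiled as a plain loop.)

  ;; Idiomatic higher-order style: the lambda passed to map captures
  ;; FACTOR, so a naive implementation allocates a closure on every
  ;; call to SCALE.
  (define (scale factor lst)
    (map (lambda (x) (* factor x)) lst))

  ;; The "C for loop" counterpart: a named let, which compilers
  ;; typically turn into a plain loop with no closure allocation.
  (define (scale-loop factor lst)
    (let loop ((lst lst) (acc '()))
      (if (null? lst)
          (reverse acc)
          (loop (cdr lst) (cons (* factor (car lst)) acc)))))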

It isn't allocation that's expensive, it's the resulting GC pressure and whether you can optimize it away.  Fast Scheme compilers like Chez attempt to avoid allocation of closures rather than making it fast: is that what you have in mind?

> Closure allocation shouldn't be much more expensive than a cons, and compilers like Chicken seem to prove this. In Chicken, all code is CPS-transformed, so every continuation becomes a closure (a closure is involved in every non-tail call), and Chicken code can still be fast.
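
(A rough sketch of what CPS conversion does to a non-tail call; the names g and g-cps are made up, and Chicken's actual output looks different, but the shape is the same: the implicit continuation becomes an explicitly allocated closure.)

  ;; Direct style: (f x) is a non-tail call, so its continuation
  ;; ("add 1 and return") is left implicit.
  (define (g f x)
    (+ 1 (f x)))

  ;; After CPS conversion (sketch): that continuation is now an
  ;; explicit closure, allocated on every call and passed to f.
  (define (g-cps f-cps x k)
    (f-cps x (lambda (v) (k (+ 1 v)))))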

Meaning that everyone pays an amortized price for closures (and continuations).  The other view is the C++ (and Chez) one, that users should only pay for features they actually use.

> I understand. A similar question is: should we design a library or a core library function (like make-coroutine-generator in SRFI 158) so that it relies on an efficient implementation of call/cc? I would say yes, because a Scheme that cannot handle call/cc efficiently shouldn't dictate an API for the rest of the Scheme world.
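
(For reference, a condensed sketch of how make-coroutine-generator can be built on call/cc, close in spirit to the SRFI 158 reference implementation but not the exact code: every call to the resulting generator, and every yield inside it, captures a continuation, which is why the cost of call/cc matters here.)

  (define (make-coroutine-generator proc)
    (define return #f)   ; continuation of the current generator call
    (define resume #f)   ; continuation saved at the last yield
    (define (yield v)
      (call/cc (lambda (r) (set! resume r) (return v))))
    (lambda ()
      (call/cc
       (lambda (cc)
         (set! return cc)
         (if resume
             (resume #f)              ; re-enter the coroutine
             (begin                   ; first call: start the coroutine
               (proc yield)
               (set! resume (lambda (ignored) (return (eof-object))))
               (return (eof-object))))))))

  ;; (define g (make-coroutine-generator
  ;;             (lambda (yield) (yield 1) (yield 2))))
  ;; (g) => 1   (g) => 2   (g) => end-of-file object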

But when the "can't handle call/cc efficiently" camp is in fact most of the Scheme world, as I believe it is, then what?  Not wanting the tail to wag the dog is fine, but which is the dog and which is the tail?