Replying to Marc, and Nala,
It doesn't seem appropriate to talk about C/C++ here, so I will skip that discussion, except to say: it's all about performance. Flexibility is great, but if an API cannot be implemented so that it's fast, then it's not a good API. It's very hard to determine whether an API is fast without measuring it; thinking about it is often misleading, and reading the implementation easily leads to false conclusions. Sad but true.
And so, an anecdote. Nala notes that "guile parameters are built on top of guile fluids". That is what the docs say, but there may be some deep implementation issues. A few years ago, I performance-tuned a medium-sized guile app (
https://github.com/MOZI-AI/annotation-scheme/issues/98) and noticed that code mixing parameters with call/cc was eating approximately half the CPU time. For a job that took hours to run, "half" is a big deal. I vaguely recall that a single parameter lookup was taking tens of thousands of CPU cycles, or more.
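To make the shape of the problem concrete, here is a minimal Guile sketch of the kind of pattern that was hot in the profile. The names (verbosity, do-something, process-items) are made up for illustration; this is not the actual annotation-scheme code:

    (define verbosity (make-parameter 0))

    (define (do-something item) item)   ; placeholder for the real work

    (define (process-items items)
      (parameterize ((verbosity 1))
        (for-each
         (lambda (item)
           ;; Each iteration captures a continuation (in the real code, k
           ;; would be stashed for backtracking) and also reads the
           ;; parameter.  If a parameter lookup under call/cc costs tens of
           ;; thousands of cycles, this loop dominates the profile.
           (call-with-current-continuation
            (lambda (k)
              (when (> (verbosity) 0)
                (do-something item)))))
         items)))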
Fluids/parameters need to be extremely fast: dozens of cycles, not tens of thousands. Recall how this concept works in C/C++:
-- A global variable is stored in the data segment (or BSS, if zero-initialized), and a lookup of its current value is a handful of CPU cycles: it's just a load from a fixed address known at link time.
-- A per-thread (thread-local) variable is stored in a block reached through the thread pointer, which points at the thread control block (TCB); glibc places the TCB next to the per-thread stack. So, a few cycles to find the TCB, a few more to compute the offset, and maybe a pointer chase.
Any sort of solution for per-thread storage in scheme, whether fluids or parameters, needs to be no more complex than the above. The scheme equivalent of the TCB for the currently running thread needs to be instantly available, and not require some traversal of an a-list or hash table. The location of the parameterized value should not be more than a couple of pointer-chases away; dereferencing it should not require locks or atomics. It needs to be fast.
It needs to be fast to avoid the fate described in the earlier-mentioned "Curse of Lisp" essay: most scheme programmers are going to be smart enough to cook up their own home-grown, thread-safe parameter object, but their home-grown thing will almost surely have mediocre performance. If "E" is going to be an effective interface to the OS, it needs to be fast. If you can't beat someone's roll-your-own system that they cooked up in an afternoon, what's the point?
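For concreteness, such a home-grown object in Guile might look something like the sketch below (make-slow-parameter is a made-up name): a hash table keyed by the current thread, guarded by a mutex. Thread-safe, written in an afternoon, and every single lookup pays for a lock plus a hash probe:

    (use-modules (ice-9 threads))   ; current-thread, make-mutex, with-mutex

    (define (make-slow-parameter default)
      (let ((table (make-hash-table))
            (lock  (make-mutex)))
        (case-lambda
          ;; read: take the lock, then hash lookup keyed on the current thread
          (()      (with-mutex lock
                     (hashq-ref table (current-thread) default)))
          ;; write: take the lock, then hash insert for the current thread
          ((value) (with-mutex lock
                     (hashq-set! table (current-thread) value))))))

    (define verbosity (make-slow-parameter 0))
    (verbosity 2)     ; set in this thread
    (verbosity)       ; => 2 here, 0 in any other thread

That is roughly the level of engineering the "Curse of Lisp" predicts, and it will never get anywhere near a handful-of-cycles TLS load.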
Conclusion: srfi-226 should almost surely come with a performance-measurement tool-suite that can spit out hard numbers for parameter-object lookups per microsecond while running 12 or 24 threads. If implementations cannot get these numbers into the many-dozens-per-microsecond range, then ... something is misconceived in the API.
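Something along these lines, say -- a rough sketch only, just to show the shape of the measurement I have in mind (the iteration count is arbitrary):

    (use-modules (ice-9 threads)   ; call-with-new-thread, join-thread
                 (ice-9 format))   ; full format, for ~,2f

    (define p (make-parameter 42))

    ;; Each worker does M parameter lookups inside a parameterize.
    (define (lookup-loop m)
      (parameterize ((p 7))
        (let loop ((i 0) (acc 0))
          (if (= i m)
              acc
              (loop (+ i 1) (+ acc (p)))))))

    ;; Start N threads all running the same thunk.
    (define (spawn n thunk)
      (let loop ((i 0) (acc '()))
        (if (= i n)
            acc
            (loop (+ i 1) (cons (call-with-new-thread thunk) acc)))))

    ;; Report aggregate parameter lookups per microsecond across N threads.
    (define (bench n-threads m-lookups)
      (let* ((start   (get-internal-real-time))
             (threads (spawn n-threads (lambda () (lookup-loop m-lookups)))))
        (for-each join-thread threads)
        (let* ((elapsed (- (get-internal-real-time) start))
               (usec    (/ (* elapsed 1e6) internal-time-units-per-second)))
          (format #t "~a threads x ~a lookups: ~,2f lookups/usec~%"
                  n-threads m-lookups
                  (/ (* n-threads m-lookups) usec)))))

    (bench 12 1000000)
    (bench 24 1000000)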
My apologies, I cannot make any specific, explicit recommendations beyond the above.
-- Linas.