I wonder how other implementations are handling this.
In order to run arbitrary Scheme code in a Scheme signal handler, Gauche defers
signal handler execution---the POSIX signal handler merely records the arrival of the signal,
and later, at a safe point, the corresponding Scheme signal handler is invoked.
When the process has multiple threads, there's a problem. By default, a signal can be
delivered to any thread. If the thread that receives a signal happens to be
blocked in pthread_cond_wait, the signal may not be handled promptly, since pthread_cond_wait
is not necessarily interrupted by a signal. The arrival of the signal is recorded,
but the check may not run until much later.
This isn't very useful, so by default Gauche blocks all signals in threads other than the
primordial thread, so that the user knows the primordial thread is the one that receives
signals. The user can change per-thread signal masks to build different settings.
The pitfall arises when the program spawns a subprocess from a non-primordial thread.
Since the subprocess inherits the signal mask, it starts with all signals blocked.
It is not as simple as unblocking signals and then calling spawn(), since a signal may be
delivered between the two calls. Changing the signal mask must be done after fork() and before exec().
Because of that, Gauche doesn't have a simple spawn() interface. The one corresponding
to spawn() is implemented with fork/exec, and takes an optional signal mask argument.
Threads are outside the scope of srfi-170, but a spawn() without a sigmask argument would be
difficult to use in Gauche. Using it in a single-threaded app is fine, but if it is used deep
inside some portable library, that library needs to be called with care.
Do other implementations have cleaner solutions for this?