Re: CloudABI and its successor, WebAssembly WASI Lassi Kortela 17 Oct 2020 23:26 UTC

> Interpreters associated with compilers can be written in Scheme, though:
> Chicken's and Gambit's, for example.  These particular compilers generate C, so
> they are still involved with C.  But Loko and SICL don't have any C code
> at all, though: they only have small inlined assembly-language
> primitives and everything else is compiled.

True. Programs written in high-level languages can still be exploited,
though bounds checking and type checking do make it way harder. There
are still bits of glue here and there, such as the assembly you mention.
And realistically, all of the big systems have FFI. If the compiler's
optimizer or the host ISA is complicated, there may be exploits.

>     to file system pathnames.
> Actually there are many such mandatory access control features,  but
> whatever.

Yes. Similar concerns apply to them.

Docker/LXC, FreeBSD jails and Solaris zones have security risks similar
to any language runtime sandbox. Docker has had various vulns; not sure
about the latter two, but again exploit discovery is influenced by
popularity. Even VirtualBox has had vulns that let the guest escape.

>     In a monolithic kernel OS, those checks belong
>     in the kernel.
> On reflection, I think we are both wrong: they do not belong in *the*
> kernel, they belong in a different non-Posix kernel altogether that has
> only the 49 CloudABI entry points.

A purpose-built, trimmed-down kernel is indeed ideal when it meets one's
requirements. For many applications, this runs into the usual tension
between a well-made solution and a complete one. Things like MirageOS and
NetBSD's rump kernel offer some hope. VirtIO ought to make writing new
kernels a lot easier, so we should see many more contenders in the next
few years.

> There is no way that a permissive
> kernel with restrictions layered over it can be more secure than an
> inherently restrictive kernel.

I fully agree that OS kernels can and do have security holes, and it
doesn't help that they're written in C.

The thing is that most applications can't do without an OS kernel, most
often a monolithic Unix-like kernel. Given that this monolith is a must,
and is in charge of process and file namespaces, the next best thing is
to put all our access control eggs in this basket. There, things can at
least be secured in one central piece of software that everyone must
use, instead of in separate language implementations maintained by lots
of different people who know less about security than the kernel folks.

OpenBSD's pledge() seems to me like the best existing contender for
something CloudABI-like in a general-purpose Unix setting. At its
strictest, it blocks all system calls except things like _exit().

pledge() now has a counterpart called unveil() for restricting access to
the file system by pathname. It ought to be easier to use than something
fully fd-based like CloudABI, but is perhaps not as elegant.

> The init process also remains to be thought out: it will need to have a
> fd open on / and perhaps some other things.  In particular, parts of the
> socket API should be replaced by a file system API, as on Plan 9.

s6 is an init replacement that does service supervision. It may be the
most popular unconventional init at the moment.

> I also came up with the idea of a fork quota: when a process is started it
> has a certain quota, and every time it forks, the parent process assigns
> the child part of that quota, but not less than 1.  Successfully waiting
> for a process returns its quota to the parent, but if a process has no
> quota it cannot fork.  An analogous quota for allocating disk space
> might make sense.

Interesting idea. I'm not aware of any existing quota system like that.

> Of course the problem that not enough eyeballs will be looking for bugs
> will remain.  And none of this is a defense against infinite loops, for
> example.  Truly, the only way to make a computer *secure* is to drop it
> down a well, say about 15m deep or down to the water table,  and then
> fill the well with concrete.  We can be quite sure it will never do
> anything unauthorized again.

"There's just one kind of man that you can trust, that's a dead man (or
a gringo like me)."

How do the academics who prove programs correct prove that the proofs
themselves don't have bugs? In navigation, triangulation resolves
uncertainty by bringing in a third point. A program and its proof are
just two things; is there a third kind of artifact to check them against?

> That's certainly if you refer to buffer-overrun attacks and the like,
> yes.  That's the advantage of Loko and SICL.

Yes - buffer overrun, parser bugs, cache and timing attacks, etc. It's a
big advantage for sure - most compilers eventually become very complex
programs - but doesn't drop the risk to zero.

Sorry about the snarky tone; I worry whenever it looks like well-known
security advice is going to be ignored.

One persistent irony with access control rules of all kinds is that the
rule set and its parser can have bugs.