I think I should mention that I regard it as a mistake to standardize
anything relating to buckybits. While I'm providing them, I'm
providing them as completely harmless chrome that makes no
presumptions or requirements about keystrokes or representation.
FWIW, I'm using the upper 11 bits in string representation to give the
index (relative to the start of the buffer) of the character to which
the codepoint belongs. I'm using them in the first codepoint of my
primitive character representation to say how many codepoints are in
this character.
(Technically, this means my character set is not, after all,
"infinite." It is limited to characters which can be expressed in
2047 (that is, 2^11 - 1) Unicode codepoints or fewer.)
Bear