I feel pretty strongly that SRFI 207 literals should not introduce a wholly new syntax, but should stick with "\xHH;".  The variation between "\xHH", "\uHHHH", and "\UHHHHHHHH" exists for hysterical raisins only.  When characters were bytes, it made sense not to bother delimiting the escape sequence, since a byte is exactly two hex digits.  Then, since "\xHH" could not be extended, Unicode 1.0 made it sensible to introduce "\uHHHH", as Unicode characters were exactly four digits.  Then ISO 10646 introduced extended codepoints up to #x7FFFFFFF, and so yet another escape sequence was needed to handle exactly eight digits.  Finally, when the highest possible codepoint was capped at #x10FFFF for compatibility with UTF-16, no new escape was introduced, and we are now stuck with the first three digits after "\U" always being "000" or (very rarely) "001".

Scheme, probably following SGML and XML, decided to require a trailing ";" in all cases, making multiple escape syntaxes unnecessary.  It's true that these bytestrings, as I call them, allow only a limited repertoire of characters, but changing that would require assuming a fixed encoding of non-ASCII characters, which I consider a mistake.  Strings can be converted to bytevectors in a wide variety of ways.  In this case, once the "..." has been lexed as a string, it is simply converted from characters to codepoints, and any codepoint > #xFF is a lexical syntax error.  (More efficient processes are also possible, but not required.)
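To make the conversion step concrete, here is a minimal sketch in R7RS-small Scheme; the name string->bytestring is hypothetical and this is not SRFI 207's reference implementation:

```scheme
;; Hypothetical sketch, not SRFI 207's reference implementation.
;; Once the reader has lexed the "..." part as an ordinary string
;; (so "\xHH;" escapes are already resolved), each character is
;; mapped to its codepoint; any codepoint above #xFF triggers the
;; lexical syntax error described above.
(define (string->bytestring s)
  (let ((cps (map char->integer (string->list s))))
    (for-each (lambda (cp)
                (when (> cp #xFF)
                  (error "bytestring: codepoint > #xFF" cp)))
              cps)
    (apply bytevector cps)))

;; (string->bytestring "A\x42;C") => #u8(65 66 67)
;; (string->bytestring "\x100;")  => signals a lexical syntax error
```

Note that the escape processing itself is the ordinary string lexer's job; the bytestring-specific work is only the range check and the character-to-byte mapping.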

As for the #u8(...) notation, it seems unnecessary to me; it is more verbose than the currently proposed notation.