Re: Why are byte ports "ports" as such? Marcin 'Qrczak' Kowalczyk 14 Apr 2006 19:22 UTC

Marc Feeley <> writes:

> I *strongly* dislike some I/O systems where you need to create layer
> upon layer to achieve what is fundamentally a simple I/O task. As an
> example taken from a Java tutorial on the web:
>     File outFile;
>     PrintWriter pw;
>     outFile = new File("output.text");
>     if (! outFile.exists() || (outFile.isFile() && outFile.canWrite()))
>       {
>         pw = new PrintWriter(new BufferedWriter(new FileWriter(outFile)));
>         ...
>       }
> This is plain ugly.  This is one of the main reasons I dislike SRFI
> 68 (Comprehensive I/O).

It's easy to wrap common layer combinations in functions. The
specification should do just that, Java silliness notwithstanding.
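To illustrate in Java itself: a small convenience function (the name `openTextWriter` is my own invention, not anything from the tutorial) buries the layer plumbing behind one call, which is all the specification would need to do:

```java
import java.io.*;

public class Wrap {
    // Hypothetical helper: hides the FileWriter -> BufferedWriter ->
    // PrintWriter layer stack behind a single call.
    static PrintWriter openTextWriter(File f) throws IOException {
        return new PrintWriter(new BufferedWriter(new FileWriter(f)));
    }

    public static void main(String[] args) throws IOException {
        File outFile = new File("output.text");
        try (PrintWriter pw = openTextWriter(outFile)) {
            pw.println("hello");
        }
    }
}
```

The layers stay available for anyone who needs an unusual combination; the common case costs one call.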

OTOH it's impossible to combine non-standard layers if they are only
provided in a prepackaged form.

I don't like the non-extensibility and non-modularity of this SRFI.

For example, it understands only a few character encodings. Suppose I want
to read a file encoded in ISO-8859-2. Even if I have implemented the
conversion itself, I can't plug it into this framework: any buffering
on top of a custom converter must be written from scratch, and the
result still can't be wrapped in the standard port types.
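What a composable design buys is exactly this: an independently written decoder plugs under the standard buffering layer unchanged. A sketch in Java, where the JDK's built-in ISO-8859-2 charset stands in for a custom converter:

```java
import java.io.*;
import java.nio.charset.Charset;

public class Layered {
    public static void main(String[] args) throws IOException {
        byte[] data = { (byte) 0xB1 };           // 0xB1 = 'ą' in ISO-8859-2
        Charset iso2 = Charset.forName("ISO-8859-2");
        // The converter slots in below the generic buffering layer;
        // neither layer knows anything about the other.
        BufferedReader r = new BufferedReader(
            new InputStreamReader(new ByteArrayInputStream(data), iso2));
        System.out.println((char) r.read());     // prints: ą
    }
}
```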

It provides some encoding converters, but they can't be used to
convert between in-memory byte arrays and strings.
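In a decoupled design the same codec objects serve both ports and plain in-memory data. Java's Charset API has that shape (a sketch of the idea, not of SRFI 68):

```java
import java.nio.charset.Charset;
import java.util.Arrays;

public class InMemory {
    public static void main(String[] args) {
        // One codec object, usable on byte arrays with no port involved.
        Charset iso2 = Charset.forName("ISO-8859-2");
        byte[] bytes = { (byte) 0xB1, (byte) 0xE6 }; // "ąć" in ISO-8859-2
        String s = new String(bytes, iso2);
        System.out.println(s);                       // prints: ąć
        assert Arrays.equals(s.getBytes(iso2), bytes);
    }
}
```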

Suppose I want to flush stdout automatically before reading from stdin
(before the OS call; there is no need to flush before pulling data from
the internal buffer). There is no place to plug that in.
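The missing hook is small. A sketch (my own illustration, nothing from SRFI 68): an input layer that flushes a tied output stream before reading from its source. For simplicity it flushes on every read; the version described above would flush only when the read actually reaches the OS, i.e. when its own buffer is empty.

```java
import java.io.*;

// An input layer tied to an output stream: before pulling data from
// the underlying source, force the pending output out (think: flush
// the prompt on stdout before blocking on stdin).
class FlushingInputStream extends FilterInputStream {
    private final OutputStream tied;

    FlushingInputStream(InputStream in, OutputStream tied) {
        super(in);
        this.tied = tied;
    }

    @Override public int read() throws IOException {
        tied.flush();
        return super.read();
    }

    @Override public int read(byte[] b, int off, int len) throws IOException {
        tied.flush();
        return super.read(b, off, len);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        BufferedOutputStream out = new BufferedOutputStream(sink);
        out.write("prompt> ".getBytes());        // still sitting in the buffer
        InputStream in = new FlushingInputStream(
            new ByteArrayInputStream("reply".getBytes()), out);
        in.read();                               // forces the prompt out first
        System.out.println(sink.size());
    }
}
```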

There is no way to plug in on-the-fly decompression.
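For contrast, in a stream-layered design this is a one-line wrap. A sketch using Java's GZIP layer, standing in for any custom decompressor:

```java
import java.io.*;
import java.util.zip.*;

public class Unzip {
    public static void main(String[] args) throws IOException {
        // Compress "hello" into memory...
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (OutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("hello".getBytes("US-ASCII"));
        }
        // ...then decompress on the fly by inserting one layer; the
        // buffering and decoding layers above it are untouched.
        InputStream in = new GZIPInputStream(
            new ByteArrayInputStream(buf.toByteArray()));
        BufferedReader r =
            new BufferedReader(new InputStreamReader(in, "US-ASCII"));
        System.out.println(r.readLine());   // prints: hello
    }
}
```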


   __("<         Marcin Kowalczyk