Re: Gathering comprehensive SRFI test suites in one place Lassi Kortela 26 Jan 2020 19:49 UTC

Thanks for chiming in and explaining your SRFI, Per. Do you have many
existing tests for SRFIs in Kawa using the SRFI 64 framework? If so,
those could be a useful starting point for a portable collection.

> Typical output on Kawa:
>
> $ ../bin/kawa  "./lib-test.scm"
> %%%% Starting test libs  (Writing full log to "libs.log")
> # of expected passes      269
> # of expected failures    9

This matches the output I get on various Schemes.

> (including 1 line for each unexpected failure or pass)

I guess Peter and I expected the output to say explicitly that all
tests produced the expected result, and that's what confused us.

For what it's worth, I also found the other test frameworks' output a
bit tricky to read. I guess these things come down to personal
preference, so configurable output formats are a win.

For portable SRFI tests, we should probably eventually write a backend
that outputs a standardized S-expression format, which everyone can
then format as they please :)
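
For instance (a purely hypothetical sketch; the record and field names
here are made up for illustration), each test result could be one
self-describing record:

    (test-result
      (name "vector-map/empty")
      (kind expected-pass)   ; or expected-fail, unexpected-fail, skip
      (source "srfi-133-test.scm")
      (expected #())
      (actual #()))

A runner could then slurp these in with plain (read) and render them
however it likes.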

> Kawa also has another test framework, which is useful for
> short simple tests:  The file is just an executable module,
> with expected output in comments:
>
> (let ((x ::int 1))
>    (set! x 0)
>    (try-finally
>     (if x (display "x is true")
>         (display "x is not true"))
>     (format #t "~%finally-clause executed~%")))
> ;; Output: x is true
> ;; Output: finally-clause executed
>
> Comments can use plain text or regexps, specify error messages,
> and compiler options.  This format is convenient for simple
> regression tests, but matching error messages is likely
> to be implementation-specific.

This is interesting, but parsing comments and matching on error-message
text seems too brittle for portable tests.

> For a portable testing framework, I recommend srfi 64.

It's a fine and widely supported choice as far as I'm concerned.
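
For anyone who hasn't used it, a minimal SRFI 64 test file looks
roughly like this (sticking to the core forms, which should behave the
same on any Scheme that supports the SRFI):

    (import (scheme base)
            (srfi 64))

    (test-begin "string-basics")
    (test-equal "append" "foobar" (string-append "foo" "bar"))
    (test-assert "non-empty" (positive? (string-length "foo")))
    (test-end "string-basics")

Running it prints a pass/fail summary much like the Kawa output above.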

> The reference implementation is a bit complicated because
> it has a lot of bells and whistles, partly for the sake
> of the detailed log written to the log file.
> However, a minimal implementation that does not support
> custom test runners would be quite simple.

This is good to know. Indeed, as far as I can tell, almost all Scheme
unit test frameworks revolve around the same few primitives for writing
tests. It's just the runners that are quite different.
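
To illustrate what I mean (a minimal sketch, not how any particular
framework actually does it, and check-equal is a made-up name), the
shared core is little more than:

    (import (scheme base) (scheme write))

    ;; Made-up check primitive: compare, count, report failures.
    (define pass-count 0)
    (define fail-count 0)

    (define (check-equal name expected thunk)
      (let ((actual (thunk)))
        (cond ((equal? expected actual)
               (set! pass-count (+ pass-count 1)))
              (else
               (set! fail-count (+ fail-count 1))
               (display "FAIL ") (display name)
               (display ": expected ") (write expected)
               (display ", got ") (write actual)
               (newline)))))

    ;; e.g. (check-equal "addition" 4 (lambda () (+ 2 2)))

Everything else -- runners, logs, configurable output -- is layered on
top of something like that.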