Re: scope of #!sweet and friends inside parens David A. Wheeler 02 May 2013 23:22 UTC

> > On Thu, May 2, 2013 at 6:00 PM, Beni Cherniavsky-Paskin <xxxxxx@users.sf.net> wrote:
> >
> >> The spec is not particularly clear on what crazy things like this mean:
> >>
> >> (( ... #!sweet ...) ... #!no-sweet ... ( ... #!curly-infix ...)) ...
> >>
> >> or this:
> >>
> >> define foo()
> >> ! a b
> >> ! #!no-sweet
> >> ! c d

I guess "shoot the developer" is not an available option :-).

Alan is right: how to *parse* it is defined.  But Beni makes a good point:

On Thu, 2 May 2013 09:39:30 -0700, Beni Cherniavsky-Paskin <xxxxxx@users.sf.net> wrote:
> It's well defined how #!foo is delimited and consumed.
> I'm talking about the effect they have on parsing "subsequent datums" -
> what does that mean precisely when it occurs in the middle of some datum?

I think it precisely means that your developers are crazy :-).

But it's a great point, better nail that down.

> >> As written, it sounds that the directives must have a flat, global effect
> >> on the port, crossing all ( ) boundaries.
> >> But correctly implementing this sounds painful to me.  E.g. you can't
> >> call a lower-level (read) / (neoteric-read) unless they understand these
> >> directives.  And every procedure must be ready for sweet processing to be
> >> turned off underneath it.
> >>
> >> I propose for simplicity to say that these directives SHOULD (MUST?) be
> >> used only at top level.
> >> Probably also require them to be alone on a line, at column 0 (trailing
> >> hspace and comments are ok)?
> >> And say that implementations MAY signal an error if used otherwise.

That makes sense.  In *practice*, these directives would only be used in the left-hand column,
at the top level.  Trying to switch modes in the middle of parsing a datum doesn't really make sense,
and it would be hideous to implement.  I don't think we need to REQUIRE it to be an error,
but "MAY signal an error if used otherwise" is sensible... and it immediately renders
any such usage non-portable.
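
For concreteness, here's the kind of usage I'd expect to stay portable
(an informal sketch; the definitions are only illustrative):

#!sweet
define double(x)     ; reads as (define (double x) (* x 2))
  {x * 2}

#!no-sweet
(define (triple x) (* x 3))   ; back to plain s-expressions

versus something like (list a #!sweet b), where an implementation MAY
signal an error because the directive is buried inside parentheses.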

How about this?  Let's change this text:
========================
An implementation of this SRFI <em>MUST</em> accept
the directive <code>#!sweet</code> followed by a whitespace character
in its standard datum readers (e.g., <code>read</code> and, if applicable,
the default implementation REPL).
...
A <code>#!curly-infix</code>
<em>SHOULD</em> cause the current port to switch to SRFI-105
semantics (e.g., sweet-expression indentation processing is disabled).
A <code>#!no-sweet</code>
<em>SHOULD</em> cause the current port to
disable sweet-expression indentation processing and
<em>MAY</em> also disable curly-infix expression processing.

To this:
========================
An implementation of this SRFI <em>MUST</em> accept
a line beginning with the un-indented directive <code>#!sweet</code>
followed by a newline
in its standard datum readers (e.g., <code>read</code> and, if applicable,
the default implementation REPL).
An implementation <em>MAY</em> signal an error if this directive is not at
the beginning of a line or cannot terminate all sweet-expressions
(e.g., because it's inside parentheses or a collecting list).
...
{After #!curly-infix and #!no-sweet}
An implementation <em>MAY</em> signal an error if the directives
<code>#!curly-infix</code> or <code>#!no-sweet</code>
are not at the beginning of a line or cannot terminate all sweet-expressions.
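
To make the intent concrete, an informal sketch (the definitions are
placeholders, not proposed spec text):

; MUST be accepted: un-indented, alone on its line, at the top level:
#!sweet
define f(x)
  {x + 1}

; MAY signal an error: the directive is indented, in the middle of a datum:
define g(x)
  {x * 2}
  #!no-sweet

; MAY signal an error: the directive is inside parentheses:
(a b #!curly-infix c)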

Thoughts?

--- David A. Wheeler