er-macro-transformer/sc-macro-transformer Marc Nieper-Wißkirchen (22 Sep 2020 11:34 UTC)
Re: er-macro-transformer/sc-macro-transformer John Cowan (23 Sep 2020 17:50 UTC)
Re: er-macro-transformer/sc-macro-transformer Marc Nieper-Wißkirchen (24 Sep 2020 08:25 UTC)
(missing)
(missing)
Re: er-macro-transformer/sc-macro-transformer John Cowan (25 Sep 2020 17:01 UTC)
Re: [scheme-reports-wg2] Re: er-macro-transformer/sc-macro-transformer Marc Nieper-Wißkirchen (25 Sep 2020 17:47 UTC)
(missing)
Re: [scheme-reports-wg2] Re: er-macro-transformer/sc-macro-transformer Marc Nieper-Wißkirchen (25 Sep 2020 18:01 UTC)
Re: [scheme-reports-wg2] Re: er-macro-transformer/sc-macro-transformer John Cowan (25 Sep 2020 22:07 UTC)
Re: [scheme-reports-wg2] Re: er-macro-transformer/sc-macro-transformer Marc Nieper-Wißkirchen (26 Sep 2020 08:56 UTC)
Re: [scheme-reports-wg2] Re: er-macro-transformer/sc-macro-transformer John Cowan (27 Sep 2020 23:28 UTC)
Re: [scheme-reports-wg2] Re: er-macro-transformer/sc-macro-transformer Marc Nieper-Wißkirchen (29 Sep 2020 15:44 UTC)
Re: er-macro-transformer/sc-macro-transformer Marc Nieper-Wißkirchen (24 Sep 2020 15:21 UTC)
Re: er-macro-transformer/sc-macro-transformer Marc Nieper-Wißkirchen (24 Sep 2020 16:26 UTC)
On Mon, 28 Sep 2020 at 01:28, John Cowan <xxxxxx@ccil.org> wrote:

> On Sat, Sep 26, 2020 at 4:56 AM Marc Nieper-Wißkirchen <xxxxxx@gmail.com> wrote:
>
>> We mustn't mix up implementations with interfaces. For example,
>> syntax-case is not an implementation but an interface, which can very
>> well be provided by fundamentally different types of implementations
>> (as, say, Chez/Racket/Larceny show).
>
> Certainly. However, if examination of a Scheme implementation's code
> shows (say) that sc-transformer is implemented using er-transformer, it
> is pretty much safe to suppose that the two interfaces are compatible.

One always needs to take a close look. For example, Chibi says that
er-macro-transformer is implemented in terms of syntactic closures, but
Chibi's syntactic closures do not implement the (full) SC macro facility.
The same is true for Picrin. So, in this case, syntactic closures are an
implementation technique for both ER (Chibi) and SC (MIT/GNU), but Chibi's
existence does not necessarily prove the compatibility of the two
interfaces.

That said, as long as we do not incorporate implementation-specific quirks
(e.g. forcing bound-identifier=? to be eq?) into the various interfaces,
Unsyntax now shows that all of the interfaces in SRFI 211 can be
implemented on the same base at the same time, making them compatible. Of
course, different implementations may handle one interface more
efficiently than another.

>> Agreed. Questions I had in mind are, say, whether pattern variables
>> are matched with those in a template in syntax-rules through
>> bound-identifier=? or free-identifier=?. Such a question came up in
>> the discussion of SRFI 148 as the various systems had implemented them
>> differently. But there is only one "right answer"
>> (bound-identifier=?), which became clear in the discussion.
>
> It "became evident" in the sense that there was consensus, which means
> either that all agreed or that those who disagreed were no longer
> willing to argue their positions. A different group might well come to
> a different consensus.

I chose this example because I believe that we found the "right answer".
(This presupposes, of course, that there is a right answer, as in the
example of primes.) There was consensus, but it was consensus that we had
found the right answer, not merely that we could agree on one.

>> Similarly, if we just specified the current behavior of ER macro
>> implementations in the case of raw symbols in the output, we would get
>> a specification that is factually bad.
>
> Again, this means that some macros will behave in a way that some
> people would not expect. One can get used to anything in time, "even
> hanging", as the saying is. Calling it "factually bad" is essentially a
> judgment of taste rather than of reason.

"Bad" may not be from the word family that is appropriate here. What would
you call it if the norm for sunglasses were such that the right lens was
so dark that one couldn't look through it? Of course, one can get used to
it, but I would nevertheless call such a norm a poor one, actually a
factually poor one. (Again, you may know a better word than "poor".) A
sketch of what "raw symbols in the output" means in practice follows after
the next quoted passage.

>> My point is that one reason for splitting the language was [...] to
>> have a language in the scope of R6RS++, for which we don't expect that
>> every hobby implementation has the manpower to provide a complete
>> implementation.
>
> Absolutely.
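To make the raw-symbol point concrete, here is a minimal sketch (not part
of the original exchange) in the style of (srfi 211 explicit-renaming);
my-if is a made-up example macro expanding into cond:

(define-syntax my-if
  (er-macro-transformer
   (lambda (expr rename compare)
     ;; Hygienic output: cond and else are renamed, so they denote the
     ;; bindings of the macro's definition environment regardless of
     ;; what is in scope at the use site.
     `(,(rename 'cond) (,(cadr expr) ,(caddr expr))
                       (,(rename 'else) ,(cadddr expr))))))

;; Writing the template with raw symbols instead,
;;   `(cond (,(cadr expr) ,(caddr expr)) (else ,(cadddr expr)))
;; leaves cond and else unrenamed.  Nothing then shields them from the
;; use site (e.g. a surrounding (let ((cond list)) ...)), and what such
;; a raw symbol denotes is exactly the implementation-dependent ER
;; behavior referred to above.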
But the further we move away from R5RS+ behavior (where by "+" I mean
"commonly accepted extensions"), the less likely actual implementation
becomes. At the moment, we haven't moved very far away from R5RS+. And we
haven't moved very far from R6RS either, which is likewise good because
there is no reason why R6RS platforms shouldn't support R7RS (large) in
the long run.

Moreover, syntactic extensions (which are the scope of SRFI 211) are far
less of a problem for implementations (barring animosities of their
maintainers) than additions to the evaluation model (system calls,
threads, continuations, environments, tail-call contexts, signals,
garbage collection, ...), because they can be implemented relatively
trivially by replacing the frontend.

On the other hand, even by adding only a bit beyond R5RS+, we may already
move away from existing implementations. Of course, nominally it is easy
for an R7RS (small) implementation to implement, say, the Tangerine
edition. And yet, with all the sample implementations of the SRFIs
bundled, it may not be efficient enough for the area of application of
R7RS (large). (To give you an example, when I tried to pretty-print about
500 kB of Scheme code using Chibi's implementation of SRFI 166, it
seemingly took forever. On the other hand, Chez's built-in pretty-printer
did the job in an instant. Of course, SRFI 166 is at a much higher level
and Chibi is not an optimizing compiler, but it shows that current
implementations of R7RS (large) cannot yet cope with existing tools.)

Whatever the final outcome of the R7RS (large) process will be (and
whether we will get high-quality implementations, for your preferred
meaning of "high quality"), the whole process has already benefitted the
Scheme world tremendously through the standardization of so many
libraries that have been created and described in SRFIs along the way.

>> What do you mean by "at the runtime level"? That you would write a
>> code generator that produces the Scheme code, which is then finally
>> compiled?
>
> Just so, for good and bad. Embedded DSLs are a reasonable use case for
> macros. But a CL-to-Scheme, to say nothing of a Fortran-to-Scheme,
> compiler would not be.

There are many intermediate cases. If you want to use your CL or Fortran
inside Scheme (so that it is integrated into the module system and aware
of the lexical context), it is by far easiest to implement this as a
procedural macro, because otherwise you would have to write a Scheme
compiler as well. Note that, at least in principle, turning a procedure
into a macro is just one instance of `define-syntax'. For example, if you
have your Fortran-to-Scheme compiler as a procedure, you can immediately
use it at expand time.

More likely than the wish to implement a Fortran-to-Scheme compiler is
the wish to implement something like lex/yacc in Scheme. In the C world,
these are source-code-generating tools with all the associated problems.
In Scheme with procedural macros, you would first write a lex/yacc
analogue that does the translation at run time, much as the traditional
lex/yacc does. But then you can lift everything to expand time through
`define-syntax'. SRFI 115 is another good use case: regular expressions
can be compiled at expansion time instead of at run time.
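To make the lifting step concrete, here is a minimal sketch (not part of
the original exchange) in syntax-case style; translate and dsl are made-up
names standing in for a lex/yacc- or SRFI 115-style compiler procedure and
its macro wrapper:

;; A run-time translator: compiles a tiny prefix DSL into an ordinary
;; Scheme expression, represented as a datum.
(define (translate form)
  (cond ((and (pair? form) (eq? (car form) 'add))
         `(+ ,@(map translate (cdr form))))
        ((and (pair? form) (eq? (car form) 'mul))
         `(* ,@(map translate (cdr form))))
        (else form)))

;; Used at run time, it produces code as data:
;;   (translate '(add 1 (mul 2 3)))  =>  (+ 1 (* 2 3))

;; Lifted to expand time: the same procedure now runs inside the
;; expander, so the translation happens once, during expansion.  (This
;; assumes translate is visible at expand time, e.g. at a REPL without
;; phase separation or via a library imported for expand.)
(define-syntax dsl
  (lambda (stx)
    (syntax-case stx ()
      ((k form)
       (datum->syntax #'k (translate (syntax->datum #'form)))))))

;;   (dsl (add 1 (mul 2 3)))  expands into  (+ 1 (* 2 3))  =>  7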
>> It is a pretty nice feature of the syntax-case system that you can
>> actually test your macro transformer at the absolute runtime level,
>> meaning at the REPL, because a transformer is nothing but a procedure,
>> which you can experiment with like any other procedure.
>
> How is that distinct from an ER-transformer?

An ER transformer itself (the result of calling/expanding
er-macro-transformer) is an opaque object. Of course, you can test the
procedure you plug into er-macro-transformer, but then you have to forge
the rename and compare procedures. It's doable somehow, but with
transformer procedures it is much more direct.
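As an illustration (a sketch, not part of the original exchange;
swap-transformer and swap! are made-up names):

;; With syntax-case, the transformer is an ordinary procedure mapping a
;; syntax object to a syntax object, so it can be exercised at the REPL
;; before being installed.
(define (swap-transformer stx)
  (syntax-case stx ()
    ((_ a b)
     #'(let ((tmp a))
         (set! a b)
         (set! b tmp)))))

;; Test it directly, like any other procedure:
;;   (syntax->datum (swap-transformer #'(swap! x y)))
;;     =>  (let ((tmp x)) (set! x y) (set! y tmp))

;; Install it once it behaves as expected (at a REPL without strict
;; phase separation; inside a library the helper would have to live at
;; the expand-time phase):
(define-syntax swap! swap-transformer)

;; With er-macro-transformer, the analogous inner procedure takes
;; (expr rename compare).  To call it outside the expander one has to
;; forge the two extra arguments, e.g.
;;   (inner-proc '(swap! x y) (lambda (id) id) eq?)
;; which no longer reflects what the real expander would pass in.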