Re: should the default be lower bounds or upper bounds? Peter McGoron 12 Apr 2026 00:35 UTC

> I think that would be quite surprising. I did a little cross-language
> investigation, and few languages support lower bounds at all despite
> their obvious utility: Fortran, PL/I, ANSI Full Basic, and Algol 60
> and 68.  In the Algols, the lower bound is always required: otherwise
> a single bound is taken to be the upper bound.  (Many Basics only
> allow the lower bound of all dimensions of all arrays to be set
> globally with the "Option Base 0|1" statement.)

For those languages, are there matrix literals, and is there syntax for
specifying the lower/upper bounds in them? Of the ones listed I am only
familiar with ALGOL 60, and it has no array literals.
An array literal is different from an array type declaration, so if the
bounds syntax appears only in declarations, it carries less weight as a
precedent for a Scheme array-literal syntax.

> That's true, and it was one of the reasons why I originally went with
> just having lower bounds.  But I think their ergonomics are terrible.
> You should be able to determine the shape of an array without looking
> at the data in it.

But you can set a static shape for the array if you want to, and use the
inferred-length syntax if you don't. There are times when it is useful
to have the language infer type information from its input.

My experience with programming with matrices has mostly been in MATLAB
with some Python/NumPy. Both are dynamically typed, interactive
languages. Neither requires lengths or dimensions when writing
matrices, because they are inferred from the input. (Granted, neither
has lower-bound support.) Requiring the length would be more annoying
than helpful, and if this syntax were to see heavy use, I foresee that
inferring lengths would become a common extension.
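Scheme's existing vector literals are an instructive analogy here: they
never state a length, and the reader infers it from the elements, even
when nested:

```scheme
;; Standard Scheme vector literals carry no explicit length;
;; it is inferred from the elements, even when nested:
(define m #(#(1 2 3)
            #(4 5 6)))
(vector-length m)                 ; => 2
(vector-length (vector-ref m 0))  ; => 3
```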

> I don't think this is an essential feature.  When would you naturally
> specify dimensions in strange bases?

1. Most implementations will probably parse bounds by calling their
number parser, so I expect it will be supported by many implementations
anyway. Adding a separate, restricted parser is probably more work.

2. There are cases where one may want a 256x256 matrix, and specifying
that bound as #x100 is easier. It would also allow #e1e10 as a bound,
although you probably wouldn't want to type that matrix in by hand.

3. Why not? There are many ways to write an integer, and baking one in
is wrong. A fixed notation would also fail to compose with other ways an
implementation may wish to spell numbers, such as allowing underscores
as digit separators.

The change is to delete the explicit <integer> definition and say that a
bound is a Scheme number restricted to be an integer. It is not a major
change.
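For reference, deferring to the standard number reader already makes
the usual notations for a single bound interchangeable; the following
is plain R7RS lexical syntax:

```scheme
;; These all denote the integer 256, so a bound parser that
;; reuses the Scheme number reader accepts any of them:
(= 256 #x100 #o400 #b100000000 #e256.0)  ; => #t

;; #e1e10 reads as the exact integer 10000000000:
(exact? #e1e10)                           ; => #t
(= #e1e10 10000000000)                    ; => #t
```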

-- Peter McGoron