I see that there has been some follow-up discussion of my previous
post, but since no one except Oleg CC'ed me on their messages, I didn't
see them until I browsed the archive today.
I have no desire to delay or derail this SRFI. I'm just trying to
make some comments on what level of abstraction I think data structures
in scientific computing should have.
Basically, my point is that an array is conceptually a mapping
  g: [0,n1) x [0,n2) x ... -> Scheme objects
that is accessed by some notation like (array-ref g i j k ...). (One
could simply use functional notation if there were easier ways to
construct and manipulate function arguments in Scheme.) There
need be no backing store. This is sometimes helpful when applying functions
to arrays (which is then just function composition), since it delays their
evaluation and avoids constructing storage for intermediate results. One just
constructs the new mapping; that's why I sent the previous code for
abstract-vectors.
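For concreteness, here is a minimal sketch of this functional view (the
names make-abstract-array, abstract-array-ref, and abstract-array-map are
made up for this message, not taken from the SRFI or from my earlier code):

  ;; An "abstract array" is just a procedure from indices to values;
  ;; there is no backing store.
  (define (make-abstract-array proc) proc)

  (define (abstract-array-ref a . indices)
    (apply a indices))

  ;; Applying f to an array is function composition: no intermediate
  ;; storage is allocated, and evaluation is delayed until reference.
  (define (abstract-array-map f a)
    (make-abstract-array
     (lambda indices (f (apply a indices)))))

  ;; Example: a two-dimensional array whose (i,j) entry is i+j, and
  ;; its elementwise square.
  (define a (make-abstract-array (lambda (i j) (+ i j))))
  (define a^2 (abstract-array-map (lambda (x) (* x x)) a))
  (abstract-array-ref a^2 2 3)   ; => 25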
When you want to use an array multiple times, it may be more efficient
to convert it to the form that Bawden and this SRFI use. And if you
want to construct new arrays as slight modifications of old arrays, then
Bawden's form, with its array component setters, is also the most efficient;
but one should not have one's ideas fixed by this particular happenstance of
computer programming and implementation efficiency.
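As a rough illustration of both points (again, the names are made up, and
this is not Bawden's or the SRFI's actual representation):

  ;; Realize a one-dimensional abstract array on [0,n) into a Scheme
  ;; vector, so that repeated references cost one vector-ref each and
  ;; components can be modified in place with vector-set!.
  (define (realize-1d a n)          ; a: procedure from [0,n) to values
    (let ((v (make-vector n)))
      (do ((i 0 (+ i 1)))
          ((= i n) v)
        (vector-set! v i (a i)))))

  (define v (realize-1d (lambda (i) (* i i)) 10))
  (vector-ref v 7)     ; => 49, no recomputation on later references
  (vector-set! v 7 0)  ; a "slight modification" done in place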
One hits a similar problem when talking about how to do numerical linear
algebra in computer programs. There are whole template packages with
names like matrix++, etc., that deal with all kinds of hairy issues of
allocation, access, composition, application, etc., at the component level.
But that level is too low for numerical linear algebra applied to the
numerical solution of partial differential equations. What one needs are
linear maps. Some of these linear maps never have explicit representations.
Some of them you want to apply to vectors (which can look nothing at all
like Scheme vectors or Fortran arrays or ...); some of them you can't
apply to vectors, but you apply their *inverse* to vectors; etc., etc.
In other words, arrays of numbers, matrices of numbers, and vectors of numbers
are all too low-level to serve as the constructs on which to build a useful
PDE solver system.
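As a toy sketch of what I mean by working at the level of linear maps
rather than component-level matrices (the representation and names here
are only illustrative, not taken from my library):

  ;; A linear map is represented by procedures, not by a matrix of
  ;; entries.  Some maps can only be applied; for others only the
  ;; inverse can be applied (say, via a solver); #f could mark an
  ;; operation that isn't available.
  (define (make-linear-map apply-map apply-inverse)
    (cons apply-map apply-inverse))
  (define (linear-map-apply L x) ((car L) x))
  (define (linear-map-solve L b) ((cdr L) b))   ; apply the inverse

  ;; Example: the map x |-> 2x on "vectors" represented as procedures
  ;; from indices to numbers; nothing here is a stored array.
  (define doubling
    (make-linear-map
     (lambda (x) (lambda (i) (* 2 (x i))))
     (lambda (b) (lambda (i) (/ (b i) 2)))))

  (define x (lambda (i) (+ i 1)))           ; the "vector" 1, 2, 3, ...
  ((linear-map-apply doubling x) 2)         ; => 6
  ((linear-map-solve doubling x) 2)         ; => 3/2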
For an implementation that incorporates some of these ideas, see the PDE
solver library at
http://www.math.purdue.edu/~lucier/615/software/
So the SAC and Sisal ideas of arrays are too low-level for me, too.
Brad