On 4/24/20 3:54 PM, Jeffrey Mark Siskind wrote:
> One thing that came to mind is that the SRFI does not support half-floats
> which are popular with GPUs.
Jeff:
Thanks for your input.
I've been thinking about your comments and what changes might be made in
response to them.
I don't think this SRFI can be extended to cover the functionality of
(py)Torch or Scorch, but it should not be designed in such a way as to
preclude, or make difficult, supporting those libraries.
The current draft has this language about u16-storage-class, etc.:
==============================================================
Each of these could be defined simply as generic-storage-class, but it
is assumed that implementations with homogeneous vectors will give
definitions that either save space, avoid boxing, etc., for the
specialized arrays.
==============================================================
Thus, these specialized storage classes are seen as possible optimized
implementations, rather than requiring, e.g., adjacent 16-bit unsigned
integers as storage.
I now think that each of these global variables should be bound either
to a storage class that implements "packed" vectors of the appropriate
type, or to #f if the implementation provides no such storage class.
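Under that convention, client code can test the variable before using
it; a minimal sketch (make-interval and make-specialized-array as in
the SRFI draft, the guard itself being hypothetical usage):

```scheme
;; Sketch of the proposed convention: f16-storage-class is either a
;; storage class implementing packed binary16 vectors, or #f.
(define (make-f16-array interval)
  (if f16-storage-class
      (make-specialized-array interval f16-storage-class)
      (error "f16-storage-class is not implemented here")))

;; (make-f16-array (make-interval '#(4 4)))
;; => a 4x4 f16 array, or an error if half floats are unsupported
```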
I don't see how to support half floats generically in Scheme, or
specifically in Gambit, for which the sample implementation is written,
so I propose to
(define f16-storage-class #f)
and perhaps
(define f8-storage-class #f)
If someone wants to "pun" u16 or u8 vectors as f16 or f8 vectors to pass
to a library, then they can do it.
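To illustrate the punning, here is one portable way to decode a u16
element as an IEEE 754 binary16 bit pattern; this is only a sketch of
what a user might write, not part of the SRFI or its sample
implementation:

```scheme
;; Decode an IEEE 754 binary16 bit pattern (held in a u16 element)
;; into a Scheme inexact real.
(define (u16->f16 h)
  (let* ((sign (if (>= h #x8000) -1 1))
         (exp  (quotient (modulo h #x8000) #x400))  ; bits 10-14
         (frac (modulo h #x400)))                   ; bits 0-9
    (cond ((= exp 31)                ; infinities and NaNs
           (if (zero? frac) (* sign +inf.0) +nan.0))
          ((zero? exp)               ; subnormals
           (* sign frac (expt 2.0 -24)))
          (else                      ; normal numbers
           (* sign (+ 1 (/ frac 1024.)) (expt 2.0 (- exp 15)))))))

;; (u16->f16 #x3C00) => 1.0
;; (u16->f16 #x7C00) => +inf.0
```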
I'll respond to your other comments in another email.
Brad