Re: f8 representation Alex Shinn 18 Feb 2021 12:21 UTC

On Thu, Feb 18, 2021 at 6:33 AM John Cowan <xxxxxx@ccil.org> wrote:
>
> I think the Right Thing is to use bytevectors/u8vectors as the storage class and to have a global parameter bound to a vector of the 256 possible values as Scheme inexact numbers.  IMO the most important use  case is the one where all the values (except the infinities and NaNs) have integer values: see <https://en.wikipedia.org/wiki/Minifloat#All_values_as_integers> for a table.

Yes, that's a sensible implementation, but my question was what are
those 256 values?
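
For concreteness, the mechanics might look something like the sketch
below (plain (scheme base) only; f8-decode-table, f8-ref and f8-set!
are made-up names, and the table contents are just a placeholder,
since filling them in is exactly the question):

  (import (scheme base))

  ;; Raw bytes live in a bytevector; a 256-entry vector held in a
  ;; parameter maps each byte to its decoded inexact value.
  (define f8-decode-table
    (make-parameter
     (let ((v (make-vector 256)))
       (do ((i 0 (+ i 1)))
           ((= i 256) v)
         ;; placeholder decoding: byte i just means the number i
         (vector-set! v i (inexact i))))))

  (define (f8-ref bv i)
    (vector-ref (f8-decode-table) (bytevector-u8-ref bv i)))

  (define (f8-set! bv i x)
    ;; encode by nearest decodable value (naive linear scan)
    (let ((table (f8-decode-table)))
      (let loop ((k 1) (best 0))
        (cond ((= k 256) (bytevector-u8-set! bv i best))
              ((< (abs (- x (vector-ref table k)))
                  (abs (- x (vector-ref table best))))
               (loop (+ k 1) k))
              (else (loop (+ k 1) best))))))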

If the numbers are normalized or otherwise within a known range,
it can be useful to divide the range evenly (fixed precision).
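
For example (still only (scheme base); make-f8-encoder and
make-f8-decoder are just illustrative names), dividing a known range
[lo, hi] into 255 equal steps:

  ;; assumes lo < hi; out-of-range inputs are clamped
  (define (make-f8-encoder lo hi)                ; value -> byte
    (let ((step (/ (- hi lo) 255)))
      (lambda (x)
        (exact (round (/ (- (min hi (max lo x)) lo) step))))))

  (define (make-f8-decoder lo hi)                ; byte -> value
    (let ((step (/ (- hi lo) 255)))
      (lambda (b) (+ lo (* b step)))))

With lo = 0.0 and hi = 1.0 every byte is then worth 1/255 of the
range, which is plenty if the data really does stay in that range.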

Otherwise, as in the case of a general-purpose library, you probably
want floating-point precision.

I've used both of these options to compress values for machine learning systems.

I've seen the 1.4.3 format from the Wikipedia article you link in a
few places, but it seems mostly pedagogical, and its range is narrow.
The split I ended up using was 1.5.2, with only 2 bits for the
significand.  As a result the number 9 could not be represented
exactly, but the range was much wider and more useful in practice.
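
For concreteness, a decoder for such a 1.5.2 layout could look like
the sketch below (again only (scheme base); I'm assuming an IEEE-style
bias of 15 and the usual subnormal/infinity/NaN conventions for this
sketch, which aren't the only possible choices):

  (define (decode-1.5.2 byte)
    (let* ((s (if (< byte 128) 1 -1))       ; 1 sign bit
           (rest (modulo byte 128))
           (e (quotient rest 4))            ; 5 exponent bits
           (m (remainder rest 4)))          ; 2 significand bits
      (cond ((= e 31) (if (zero? m) (* s +inf.0) +nan.0)) ; e=31: inf/nan
            ((= e 0)  (* s m (expt 2.0 -16)))             ; subnormals
            (else     (* s (+ 1 (/ m 4))                  ; normals
                         (expt 2.0 (- e 15)))))))

Whatever bias you pick, 2 significand bits mean the binade [8,16)
holds only 8, 10, 12 and 14, so 9 has to round to 8 or 10; the payoff
is that 5 exponent bits give roughly 2^30 of dynamic range across the
normal numbers, versus roughly 2^14 with only 4.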

--
Alex

> On Wed, Feb 17, 2021 at 1:55 PM Bradley Lucier <xxxxxx@math.purdue.edu> wrote:
>
>>
>> Or one could have storage classes with bespoke f8 formats implemented
>> within the storage class (like the u1-storage-class implementation based
>> on u16vectors).
>>
>> Perhaps other people could offer suggestions.
>>
>> Brad