Interim Report, 8-bit binary floating point formats for ML Bradley Lucier (14 Dec 2023 23:26 UTC)

I'm forwarding the following email, sent to the Numerical Analysis email
list digest, because some participants of this SRFI's mail list have
expressed an interest in 8-bit floating-point formats.

I will note that these floating-point formats (with an adjustable number
of precision bits) have special elements (infinities, NaNs, signed
zeroes, etc.) that do not correspond to similar elements in IEEE 16-,
32-, and 64-bit floating-point arithmetic, so conversion routines for
these formats cannot be generated by the macros
macro-make-representation->double and macro-make-double->representation
in the sample implementation.

Brad

From: Michael Overton xxxxxx@nyu.edu
Date: November 30, 2023
Subject: Interim Report, 8-bit binary floating point formats for ML

The IEEE-SA P3109 Arithmetic Formats for Machine Learning working
group is providing an early draft of its Interim Report on 8-bit
Binary Floating Point Formats. As a member of this committee who
subscribes to NA-Digest, I've been asked to pass on this information.

For more information on the working group, please see
https://sagroups.ieee.org/p3109wgpublic/

For the draft interim report, please see
https://github.com/P3109/Public/tree/main/Shared%20Reports

To submit comments on the draft, please go to
https://github.com/P3109/Public/issues