Re: arithmetic issues
bear
Okay... On rereading SRFI-77, I want to point out a couple of things.

First, the new external representation with a suffixed bar and decimal number indicating the bits of precision: this is a good and necessary extension to the information in inexact constants, and I applaud it. But we now have what is essentially one piece of information scattered all over the place. The inexact prefix, the exponent marker, and this new suffix all signify inexactness and carry information about inexactness. Since a lot of representations already support #<decimal>R prefixes for radix, could we consider #<decimal>I prefixes for inexactness? This would eliminate the need for the suffix entirely and put the primary inexactness information in one place. The exponent markers are still somewhat useful as a way to request an indeterminate amount of precision that happens to be efficient on the current hardware. So somebody who needs only a little precision and wants whatever the system finds "easy" or "efficient" could write 6.0F0, and somebody who knows his algorithm won't work with less than 16 bits of precision could write #16i6.0 (the first sketch below lays these notations side by side). I think you should specify that an implementation may use more bits of precision than requested, and introduce some way to request that an error be signalled if the implementation must use less.

Regarding "safe" and "unsafe" mode: I think that "unsafe" mode should _allow_ implementations to skip checks, not _require_ them to skip checks. Code that is incorrect in safe mode is still incorrect in unsafe mode, and we should not provide a canonical way to run unsafe or incorrect code. Additionally, this is a lower initial bar for implementors; they can get the system working correctly (in safe mode) and provide a trivial unsafe mode (identical to their safe mode) to start with. Finally, we can't really tell in advance which checks can or ought to be done or skipped; as compiler technology or hardware advances, some checks may become "free" either at compile time or in hardware at runtime. Theoretically, some checks may even acquire negative cost if the hardware uses them as cues for heuristic branch prediction or prefetching.

Your redefinition of eqv? makes it the case that (eqv? 6.0L0 6.0S0) => #t on implementations that use a single floating-point representation size and => #f on implementations that use multiple floating-point representation sizes (because procedures like + can produce results of differing precision depending on the precision of their arguments). So we have a situation where, first, the results of eqv? are specified but implementation-dependent, and second, two numbers can be = without being eqv? (the second sketch below makes this concrete). This is consistent with Lucier's proposal, which you mention at the end of the SRFI, but you don't highlight the difference anywhere. I think I agree with your rationale, and I think I agree with the results and the respecification of eqv?; but the ramifications for numerically equal numbers of different precisions are not immediately obvious, so I wanted to point them out.

I strongly disagree with the idea of mixed exactness in the real and imaginary parts of a complex number. 5.0+3i and 5+3.0i are the same inexact number and should not be treated differently.

I would suggest requiring an error to be signalled if inexact->exact gets an argument greater than or less than any exact number representable by the implementation, and likewise if exact->inexact receives an argument outside the range representable as an inexact number in the implementation.
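To lay the notations side by side (first sketch): none of these precision forms are settled syntax; the bar suffix is how I read the SRFI-77 draft, and the #16i prefix is only my suggestion above.

    ;; Hypothetical notation comparison -- nothing here is settled syntax.
    ;;   6.0|16     SRFI-77 draft suffix (as I read it): inexact 6.0 carrying
    ;;              at least 16 bits of precision
    ;;   #16i6.0    the prefix suggested above: the same information up front,
    ;;              parallel to #<decimal>R radix prefixes such as #2r1010
    ;;   6.0F0      plain exponent marker: whatever modest precision the
    ;;              system finds easy or efficient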
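And to make the eqv? point concrete (second sketch), here is what I would expect on an implementation that actually keeps single- and long-precision inexact reals distinct; on an implementation with a single flonum representation both results would be #t.

    (define x 6.0s0)   ; single-precision six
    (define y 6.0l0)   ; long-precision six
    (= x y)            ; => #t   numerically equal
    (eqv? x y)         ; => #f   different precisions, so not eqv? under the
                       ;         redefinition (but #t where only one flonum
                       ;         representation exists)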
Your formulation permits a fixnum range of [0,1] and then states that the modular arithmetic primitives perform math modulo hi-lo+1. I first thought, somewhat obtusely, "What does it mean to perform mathematics modulo zero??" After a fraction of a second's thought, it became clear that you meant (hi-lo)+1, not hi-(lo+1). Consider using fully parenthesized notation to avoid misinterpretation, and remember to use it in your description of the functions that do modular operations as well (a short sketch at the end of this message shows the reading I mean).

I still think that bitwise operations on numbers are incorrect. Bitwise operations should operate on bitvectors, not on numbers. You are not using them as numbers when you do bit operations on them, and their identity as numbers does not give the length of the bitvector you're using them for. This is faint and fuzzy thinking inherited from C, and it confuses bit representation with semantics.

Absolute value is just a special case of complex magnitude, restricted to the various real ranges. In fact, (define abs magnitude) works just fine. It didn't seem redundant in R5RS because complex numbers (and therefore the magnitude accessor) weren't required there. In a situation where the whole tower is required, it seems redundant.

According to your definition of real?,

    (real? z) <==> (let ((im (imag-part z))) (and (zero? im) (exact? im)))

but simultaneously (real? +nan.0) => #t. This implies that (imag-part +nan.0) is an exact zero, which seems wrong, although it is consistent with your declaration of NaN as a real number of indeterminate value. Are complex operations constrained to return some different kind of NaN, or do their results get coerced to the real line in the event of an error?

Bear
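P.S. A minimal sketch of the parenthesization I mean; wrap-to-range is a made-up name for illustration, not a SRFI-77 binding.

    (define (wrap-to-range n lo hi)
      ;; arithmetic modulo (hi - lo) + 1, i.e. the number of values in [lo, hi];
      ;; misread as hi - (lo + 1), the range [0,1] would give modulo 0.
      (+ lo (modulo (- n lo) (+ (- hi lo) 1))))

    (wrap-to-range 5 0 1)   ; => 1   ; [0,1] has (1 - 0) + 1 = 2 values
    (wrap-to-range 4 0 1)   ; => 0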