On Wed, 25 May 2005, Sebastian Egner wrote:
> Unfortunately, my experience is that this approach is highly
> unreliable. In the end, I spent more time doing analytical sanity
> checks myself than it took to write the proper numerical code
> directly after understanding the limits properly.
> Example: An important function from information theory is
>
> f(x) = -x log(x).
> This function is in principle well behaved (smooth, analytic, etc.)
> on (0,1], but its derivative does not exist at x = 0. Moreover, f(0)
> cannot directly be computed numerically because the underflow from
> log(x) is not cancelled by the multiplication with zero. Practical
> numerical code: IF x < xmin THEN 0 ELSE -x log(x), where xmin is
> chosen minimal such that log(xmin) is an ordinary number and not
> -infinity.
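Sebastian's guard can be sketched in Python; the function name `entropy_term` and the choice of xmin = sys.float_info.min are my illustration, not his actual code:

```python
import math
import sys

def entropy_term(x):
    """f(x) = -x * log(x), guarded near zero.

    Unguarded, f(0) would evaluate 0 * log(0): in IEEE arithmetic
    log(0) is -inf and 0 * -inf is NaN, not the limit value 0.
    (Python's math.log(0.0) raises ValueError rather than returning
    -inf, but the guard makes that moot.)
    """
    # Smallest positive normal double; log(xmin) is an ordinary
    # finite number (about -708.4), never -infinity.
    xmin = sys.float_info.min
    if x < xmin:
        return 0.0
    return -x * math.log(x)

print(entropy_term(0.0))   # 0.0, the limit value
print(entropy_term(1.0))   # 0.0
print(entropy_term(0.5))   # 0.5 * ln 2
```

Any positive cutoff below the region of interest would do; the point is only that the log is never asked for its value at zero.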
This is difficult; in many ways, the idea of "infinity" as a number
too large to represent requires a corresponding idea of "epsilon" as
a number too small to represent. (This is an idea subtly different
from "signed zeros": epsilon is 1/inf, a "smallest positive number".)
This saves you from mathematical errors in some situations, but the
properties of so simple an idea of epsilon are not helpful in all
cases. Where log(0) is undefined, log(epsilon) = -infinity. That is
better as far as it goes, but it still leaves you with a
mathematically undefined situation, so you have to write the sanity
check anyway.
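The residual problem shows up directly in IEEE arithmetic: even if log
underflows cleanly to -infinity instead of raising an error, multiplying
that by zero yields NaN, not the limit value. A minimal demonstration in
Python:

```python
import math

neg_inf = float('-inf')     # the IEEE analogue of log(epsilon)
product = 0.0 * neg_inf     # 0 * -inf is defined as NaN in IEEE 754
print(math.isnan(product))  # True: still mathematically undefined
```

So the indeterminate form 0 * infinity survives the introduction of
epsilon, and the explicit guard remains necessary.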
Bear