At the moment, the result of a / mp.norm(a) may unexpectedly be zero or NaN for large MPArrays (or small MPArrays with large tensor entries):
>>> import numpy as np
>>> import mpnum as mp
>>>
>>> a = mp.MPArray.from_kron([np.array([10.0])] * 308)
>>> mp.norm(a / mp.norm(a))
0.0
>>> b = mp.MPArray.from_kron([np.array([10.0])] * 309)
>>> mp.norm(b / mp.norm(b))
nan
The reason is that double-precision floats cannot represent values larger than about 1.8e308, so larger intermediate results overflow to inf:
>>> np.finfo(float).max
1.7976931348623157e+308
>>> a.lt[-1], mp.norm(a)
(array([[[ 1.00000000e+308]]]), inf)
>>> b.lt[-1], mp.norm(b)
(array([[[ inf]]]), inf)
>>>
It would be nice to add a method which computes a / mp.norm(a) without running into floating-point overflow.
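One possible approach (a sketch in plain NumPy, not an existing mpnum method): for a Kronecker product of vectors, the overall norm factorizes into the product of the local norms, so each factor can be normalized on its own and the overall norm tracked in log space. All names below are illustrative assumptions.

```python
import numpy as np

# Sketch: normalize a product state factor by factor. For
# v_1 (x) v_2 (x) ... (x) v_n, the norm is prod_i ||v_i||, so dividing
# each factor by its own (small, representable) norm yields a unit-norm
# state without ever forming the overflowing product.
factors = [np.array([10.0])] * 308
unit_factors = [v / np.linalg.norm(v) for v in factors]

# The log of the overall norm stays finite even though the norm itself
# (10**308) sits at the edge of the double-precision range:
log_norm = sum(np.log(np.linalg.norm(v)) for v in factors)  # 308 * log(10)
```

A normalization method on MPArray could do the analogous operation on the local tensors (e.g. during canonicalization), returning the unit-norm array together with the log of the norm if the caller needs it.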
Underflow can also happen:
>>> for n in 161, 162, 323, 324:
... a = mp.MPArray.from_kron([np.array([0.1])] * n)
... print('{} {!r:25} {!r:25} {!r:20}'.format(
... n, mp.norm(a), a.lt[-1].flat[0], mp.norm(a / mp.norm(a))))
...
161 9.9404793228621183e-162 1.0000000000000097e-161 1.0059877069510206
162 0.0 1.0000000000000097e-162 inf
323 0.0 9.8813129168249309e-324 inf
324 0.0 0.0 nan
>>>
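The same log-space idea covers the underflow case (again a plain-NumPy sketch, assuming the product structure above): even when the norm itself underflows to 0.0, its logarithm is an ordinary, representable float.

```python
import numpy as np

# Sketch: the norm of [0.1] (x) ... (x) [0.1] with 324 factors is
# 10**-324, which underflows to 0.0 in double precision, but its
# logarithm is well within the representable range:
factors = [np.array([0.1])] * 324
log_norm = sum(np.log(np.linalg.norm(v)) for v in factors)  # 324 * log(0.1)
```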