Saturday, 5 August 2023

Is there a fast convolution that works with large numbers?

I am convolving a lot of reasonably long arrays. The values in them are also big, although they fit comfortably within the range of float64. Unfortunately, scipy's convolve gives the wrong answer when it uses the fft method. Here is an MWE:

from scipy.signal import convolve
import numpy as np

A = np.array([0] + [1e100]*10000)
convolve(A, A)

This gives the wrong output of:

array([ 9.15304445e+187, -7.04080342e+187,  1.00000000e+200, ...,
        3.00000000e+200,  2.00000000e+200,  1.00000000e+200])
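
To be clear about what "wrong" means here: the FFT method's absolute round-off error grows with the largest values in the arrays, so with entries near 1e100 (products near 1e200) even outputs that should be exactly zero come back as noise of order 1e188. The first two outputs involve only A[0] == 0, so they show this directly:

from scipy.signal import convolve
import numpy as np

A = np.array([0] + [1e100]*10000)
out = convolve(A, A, method="fft")
# out[0] = A[0]**2 and out[1] = 2*A[0]*A[1] should both be exactly 0,
# but the FFT method returns round-off noise of order 1e188.
print(out[:2])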

It is fast though:

%timeit convolve(A, A)
458 µs ± 5.09 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

You can get the right answer with:

convolve(A, A, method="direct")
array([0.e+000, 0.e+000, 1.e+200, ..., 3.e+200, 2.e+200, 1.e+200])

However, this is much slower:

%timeit convolve(A, A, method="direct")
23.4 ms ± 511 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Is there any way to get the right answer quickly? I am happy to use any library that is freely available.
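
One partial workaround, which only applies because this particular A is a single scale factor times a 0/1 array, is to factor out the scale, convolve the small-integer part with the FFT, and round back to exact integers. It assumes the exact integer outputs (at most the array length, ~10000 here) are far below 2**53, so the rounding is lossless. A sketch:

from scipy.signal import convolve
import numpy as np

A = np.array([0] + [1e100]*10000)
scale = 1e100
B = A / scale                               # exact: entries are 0.0 or 1.0
C = np.round(convolve(B, B, method="fft"))  # FFT noise << 0.5, so rounding is exact
result = C * scale**2                       # agrees with method="direct" to float64 precision

This keeps the FFT speed (a single convolve call plus elementwise work), but it does not generalize to arrays whose values are not small integers times one common factor.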


I get the same problem with less extreme input values:

A = np.array([0] + [1e10]*10000)
convolve(A, A)
array([1.96826156e+08, 1.35742176e+08, 1.00000000e+20, ...,
       3.00000000e+20, 2.00000000e+20, 1.00000000e+20])

Bounty

The bounty is for the less extreme example of:

A = np.array([0] + [1e10]*10000)
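
For this case the entries are exact integers (1e10 < 2**53), so a more general trick than the rescaling above should work: split every value into high and low halves, a = a_hi*s + a_lo, and use the bilinearity of convolution, conv(a, b) = conv(a_hi, b_hi)*s**2 + (conv(a_hi, b_lo) + conv(a_lo, b_hi))*s + conv(a_lo, b_lo). With s chosen so each partial convolution's exact integer results stay well below 2**53, rounding recovers them exactly. A sketch (the function name and split point are my own, not from any library):

from scipy.signal import convolve
import numpy as np

def int_fftconvolve(a, b, s=2**17):
    # Split each value: a = a_hi*s + a_lo, all parts exact in float64.
    a_hi, a_lo = np.divmod(a, s)
    b_hi, b_lo = np.divmod(b, s)
    # Four FFT convolutions whose exact integer results are small enough
    # here (at most ~1e4 * s**2 ~ 1.7e14 << 2**53) for rounding to be exact.
    hh = np.round(convolve(a_hi, b_hi, method="fft"))
    hl = np.round(convolve(a_hi, b_lo, method="fft"))
    lh = np.round(convolve(a_lo, b_hi, method="fft"))
    ll = np.round(convolve(a_lo, b_lo, method="fft"))
    # Recombine; the final sums round to float64 precision once the true
    # values exceed 2**53, which is the best any float64 answer can do.
    return (hh * s + hl + lh) * s + ll

A = np.array([0] + [1e10]*10000)
result = int_fftconvolve(A, A)   # agrees with method="direct" to float64 rounding

This costs four FFT convolutions, which by the timings above should still be roughly an order of magnitude faster than method="direct".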


from Is there a fast convolution that works with large numbers?
