Wednesday 28 December 2022

Python: how to speed up this function and make it more scalable?

I have the following function, which accepts an indicator matrix of shape (20,000 x 20,000), and I have to run it 20,000 x 20,000 = 400,000,000 times. Note that indicator_Matrix has to be a pandas DataFrame when passed as a parameter into the function, as my actual problem's dataframe has a time index and integer columns, but I have simplified this a bit for the sake of understanding the problem.

Pandas Implementation

import numpy as np
import pandas as pd

indicator_Matrix = pd.DataFrame(np.random.randint(0, 2, [20000, 20000]))

def operations(indicator_Matrix):
    s = indicator_Matrix.sum(axis=1)      # row sums
    d = indicator_Matrix.div(s, axis=0)   # divide each row by its sum
    res = d[d > 0].mean(axis=0)           # column means, ignoring zeros
    return res.iloc[-1]                   # keep only the last column's mean
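To make the discussion concrete, here is the same computation sketched in plain NumPy on a small illustrative matrix (the matrix values are made up for this sketch; rows that sum to zero would need extra handling):

```python
import numpy as np

# Small stand-in for the 20,000 x 20,000 indicator matrix.
m = np.array([[1, 0, 1],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)

s = m.sum(axis=1)                       # row sums
d = m / s[:, None]                      # divide each row by its sum
masked = np.where(d > 0, d, np.nan)     # ignore zero entries
col_means = np.nanmean(masked, axis=0)  # column means over positive entries
result = col_means[-1]                  # last column's mean, as in .iloc[-1]
```

On this matrix the last column of d is [0.5, 1/3, 0.5], so result is their mean, 4/9.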

I tried to improve it using NumPy, but it is still taking ages to run. I also tried concurrent.futures.ThreadPoolExecutor, but it still takes a long time to run and is not much of an improvement over the list comprehension.

Numpy Implementation

indicator_Matrix = pd.DataFrame(np.random.randint(0, 2, [20000, 20000]))

def operations(indicator_Matrix):
    s = indicator_Matrix.to_numpy().sum(axis=1)
    d = (indicator_Matrix.to_numpy().T / s).T
    d = pd.DataFrame(d, index=indicator_Matrix.index, columns=indicator_Matrix.columns)
    res = d[d > 0].mean(axis=0)
    return res.iloc[-1]

output = [operations(indicator_Matrix) for i in range(0, 20000**2)]

Note that the reason I convert d back to a DataFrame is that I need to obtain the column means and retain only the last column's mean using .iloc[-1]. d[d > 0].mean(axis=0) returns the column means, i.e.

2478    1.0
0       1.0
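One observation (my reading of the code above, not something stated in the original post): since only the last column's mean survives .iloc[-1], and every nonzero entry of d in row i equals 1/s_i, the result reduces to the mean of 1/s over the rows whose last indicator is 1. A small sketch comparing this shortcut against the full computation:

```python
import numpy as np

m = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)

s = m.sum(axis=1)        # row sums
mask = m[:, -1] > 0      # rows that contribute to the last column
shortcut = (1.0 / s[mask]).mean()

# Full computation, for comparison.
d = m / s[:, None]
full = np.nanmean(np.where(d > 0, d, np.nan), axis=0)[-1]
```

Both paths give the same value here, but the shortcut touches only one column instead of building the full 20,000 x 20,000 intermediate.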

Update: I am still stuck on this problem. I wonder if using GPU packages like cuDF and CuPy on my local desktop would make any difference.



from Python: how to speed up this function and make it more scalable?
