Friday, 29 October 2021

performing many gradient-based optimizations in parallel with TensorFlow

I have a model which requires solving a system of ODEs with tfp.math.ode.BDF, and I would like to find the individual least-squares fits of this model to n > 1000 datasets. That is to say, if my model has m parameters then at the end of the optimization process I will have an n by m tensor of best-fit parameter values.

What would be the best way to perform this optimization in parallel? At this point I’m planning to define an objective function that adds up the n individual sums of squared residuals, and then to use tfp.optimizer.lbfgs_minimize on it to find the best-fit values of the combined n×m parameters. Since the datasets share no parameters, minimizing the summed objective is equivalent to solving all n least-squares problems at once.



