Thursday, 23 February 2023

Using a TensorFlow model from Google AutoML Tabular locally

I want to use my trained TensorFlow model from Google Vertex AI AutoML for tabular data locally, but I don't understand from the documentation how to pass input into the model so I can get predictions into a backtesting module (so I don't want to deploy it yet). The model is stored in a directory as saved_model.pb, alongside three directories: variables, assets, and assets.extra.

Model was trained with:

tensorflow: "2.8.0"
struct2tensor: "0.39.0"
tensorflow-addons: "0.16.1"

Some more info I got from my Python code:

import tensorflow as tf

# path is the directory containing saved_model.pb
model = tf.saved_model.load(path)
print(list(model.signatures.keys()))
infer = model.signatures["serving_default"]
print(infer.structured_outputs)

Output:

['serving_default']

{'classes': <tf.Tensor 'Reshape_25:0' shape=(None, None) dtype=string>, 'scores': <tf.Tensor 'truediv:0' shape=(None, 2) dtype=float32>}
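To see what inputs the signature expects, I understand structured_input_signature can be printed the same way. Here is a self-contained toy sketch of that (a hypothetical tf.Module stand-in instead of the real AutoML export, whose actual input names and dtypes I don't know):

```python
import tempfile

import tensorflow as tf


class ToyModel(tf.Module):
    """Stand-in for the AutoML export: 14 numeric features, 2-class scores."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([14, 2]))

    @tf.function(
        input_signature=[tf.TensorSpec([None, 14], tf.float32, name="features")]
    )
    def serve(self, features):
        return {"scores": tf.nn.softmax(features @ self.w)}


# Save and reload the same way the real saved_model.pb directory is loaded.
toy = ToyModel()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(toy, export_dir, signatures={"serving_default": toy.serve})

model = tf.saved_model.load(export_dir)
infer = model.signatures["serving_default"]

# (args, kwargs): kwargs maps each expected input name to its TensorSpec,
# i.e. the names, shapes, and dtypes the signature must be fed.
print(infer.structured_input_signature)
```

On the real export, the same `print(infer.structured_input_signature)` call on the loaded signature should reveal the actual input names to use.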

I trained on tabular data: a pandas DataFrame with 14 features and the target column. How can I pass my 14-column vector as input to the model? I have heard it might also be possible to repackage saved_model.pb into a Keras model, with less optimized inference, and I am ready to go that way.
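From the TF guides, the general pattern seems to be calling the signature's concrete function with named keyword tensors built from the DataFrame. A self-contained toy sketch of that call pattern (again with a hypothetical tf.Module and a made-up input name "features" standing in for the real export, whose names I would read from structured_input_signature first):

```python
import tempfile

import numpy as np
import pandas as pd
import tensorflow as tf


class ToyModel(tf.Module):
    """Stand-in for the AutoML export: 14 numeric features, 2-class scores."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([14, 2]))

    @tf.function(
        input_signature=[tf.TensorSpec([None, 14], tf.float32, name="features")]
    )
    def serve(self, features):
        return {"scores": tf.nn.softmax(features @ self.w)}


toy = ToyModel()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(toy, export_dir, signatures={"serving_default": toy.serve})

model = tf.saved_model.load(export_dir)
infer = model.signatures["serving_default"]

# One or more DataFrame rows become a float32 tensor of shape (batch, 14).
df = pd.DataFrame(np.random.rand(5, 14), columns=[f"f{i}" for i in range(14)])
batch = tf.constant(df.to_numpy(dtype=np.float32))

# Call the signature with its input name(s) as keyword arguments;
# the result is a dict of named output tensors.
preds = infer(features=batch)
print(preds["scores"])
```

The real AutoML export may instead expect one named tensor per feature (possibly string-typed) or a serialized tf.Example, so the keyword arguments and dtypes would have to match whatever structured_input_signature reports.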

So far I have tried these references from the official docs:

https://github.com/Lornatang/TensorFlow2-tutorials/blob/master/guide/serialization/saved_model.py

https://www.tensorflow.org/tutorials/keras/save_and_load

I didn't find any Vertex AI documentation on deploying locally without a Docker container to serve the model.



from Using locally a tensorflow model from Google AutoML Tabular
