I'm loading a model (not written or trained by me) and adding hooks to some of its layers using register_forward_hook.
My hooks compute transformations of a layer's input (which is the output of the previous layer).
The goal is to add the values computed by the hooks to the loss function, so that during fine-tuning the model learns to minimize the output of these transformations.
For example:
y1 = None

def hook(module, input, output):  # forward hooks receive (module, input, output)
    global y1
    y1 = foo(input)  # keep the transformed layer input for use in the loss

model.some_layer.register_forward_hook(hook)

loss = MSE(...) + L1(y1.detach())
Does it make sense to implement it that way? Would it work backpropagation-wise?
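For reference, here is a minimal, self-contained sketch of the setup described above. The model, the hooked layer, and the penalty (an L1-style mean-absolute value standing in for foo) are hypothetical placeholders, not names from the original setup; the captured tensor is kept without detach() so that it remains in the autograd graph and can contribute gradients.

import torch
import torch.nn as nn

# Hypothetical small model standing in for the pretrained one.
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))

captured = {}

def hook(module, input, output):
    # input is a tuple of the layer's positional inputs; input[0] is the tensor.
    # An L1-style penalty stands in for foo here.
    captured["y1"] = input[0].abs().mean()

handle = model[2].register_forward_hook(hook)  # hook the last Linear layer

mse = nn.MSELoss()
x = torch.randn(4, 10)
target = torch.randn(4, 1)

pred = model(x)                             # the hook fires here and fills captured["y1"]
loss = mse(pred, target) + captured["y1"]   # the penalty participates in backprop
loss.backward()

handle.remove()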
from Pytorch - adding output of hooks to loss function