Is there a way in Keras to cross-validate the early stopping metric being monitored, e.g. EarlyStopping(monitor='val_acc', patience=5)? Before allowing training to proceed to the next epoch, could the model be cross-validated to get a more robust estimate of the test error?
What I have found is that if patience is treated as a hyperparameter and cross-validated, the average number of resulting epochs across the folds is pretty close to the optimal number of epochs from a grid search. However, there can be a fair bit of variability in the stopping epoch from one fold to the next. A single randomly high value of the monitored metric can halt training, but the resulting model will not reproduce that error on a held-out test set.
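One way to make the "patience as a hyperparameter" idea concrete is to simulate patience-based stopping on each fold's validation curve, then average the per-fold best epochs and retrain on all data for that many epochs. This is a minimal, framework-agnostic sketch: the stopped_epoch function mirrors the logic of Keras's EarlyStopping with mode='max', and the per-fold accuracy curves below are made-up numbers, not real results.

```python
import numpy as np

def stopped_epoch(val_metric, patience=5):
    """Epoch index at which patience-based early stopping would halt.

    Mirrors Keras EarlyStopping with mode='max': stop once the monitored
    metric has not improved for `patience` consecutive epochs, and report
    the epoch of the best value seen so far.
    """
    best, best_epoch, wait = -np.inf, 0, 0
    for epoch, value in enumerate(val_metric):
        if value > best:
            best, best_epoch, wait = value, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch

# Hypothetical per-fold validation-accuracy curves (one list per fold).
fold_curves = [
    [0.60, 0.70, 0.72, 0.71, 0.71, 0.70, 0.69, 0.69, 0.68],
    [0.55, 0.65, 0.74, 0.76, 0.75, 0.74, 0.74, 0.73, 0.73],
    [0.58, 0.68, 0.69, 0.73, 0.72, 0.72, 0.71, 0.71, 0.70],
]
best_epochs = [stopped_epoch(c, patience=5) for c in fold_curves]
# Retrain on all the data for the fold-averaged number of epochs.
mean_epochs = int(round(np.mean(best_epochs)))
```

Because the average smooths over fold-to-fold noise, mean_epochs is usually more stable than any single fold's stopping point, which is consistent with what you observed about the grid search.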
I have tried to write a custom metric function that includes cross-validation, but I can't see how to get Keras to refit the model on the training folds and then predict on the held-out fold inside the function. Is there a way to cross-validate the early stopping metric being monitored, perhaps through a custom function inside the Keras model or a loop outside the Keras model?
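A loop outside the Keras model is probably the simpler route: keep one model per fold, advance all of them one epoch at a time, average the validation metric across folds, and apply patience to that averaged curve instead of any single fold's value. The sketch below uses a ToyModel stand-in with scripted validation curves so it is self-contained; with real Keras you would replace train_one_epoch with model.fit(..., epochs=1, verbose=0) and evaluate with model.evaluate on that fold's held-out data. All names and the curves are illustrative assumptions, not Keras API.

```python
import numpy as np

class ToyModel:
    """Stand-in for a per-fold Keras model.

    train_one_epoch/evaluate are placeholders for
    model.fit(..., epochs=1, verbose=0) and model.evaluate(x_val, y_val).
    """
    def __init__(self, curve):
        self.curve = curve   # scripted validation accuracy per epoch
        self.epoch = -1
    def train_one_epoch(self):
        self.epoch += 1
    def evaluate(self):
        return self.curve[min(self.epoch, len(self.curve) - 1)]

def cv_early_stopping(models, patience=5, max_epochs=100):
    """Advance all fold models one epoch at a time; stop when the
    fold-averaged validation metric stops improving (mode='max')."""
    best, best_epoch, wait = -np.inf, 0, 0
    for epoch in range(max_epochs):
        for m in models:
            m.train_one_epoch()
        avg = np.mean([m.evaluate() for m in models])
        if avg > best:
            best, best_epoch, wait = avg, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                break
    return best_epoch, best

# Hypothetical validation curves, one per fold.
fold_curves = [
    [0.60, 0.70, 0.72, 0.71, 0.71, 0.70, 0.69, 0.69, 0.68],
    [0.55, 0.65, 0.74, 0.76, 0.75, 0.74, 0.74, 0.73, 0.73],
    [0.58, 0.68, 0.69, 0.73, 0.72, 0.72, 0.71, 0.71, 0.70],
]
models = [ToyModel(c) for c in fold_curves]
best_epoch, best_avg = cv_early_stopping(models, patience=5)
```

The averaged metric is exactly the "more robust estimate" in the question: a single fold's lucky epoch can no longer halt training on its own, at the cost of training K models instead of one.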
Thanks!!
p.s. Sometimes, the very idea of cross-validating a model, epoch-by-epoch, doesn't make sense to me. If there are thoughts on that as well, and how to test it, I'd be grateful!
from Early Stopping with a Cross-Validated Metric in Keras