Disclosure: Self-taught
I am trying to use recall on 2 of the 3 classes as a metric: classes B and C out of classes A, B, C.
(The background is that my classes are highly imbalanced [~90% is class A], so when I use accuracy I get ~90% simply by predicting class A every time.)
model.compile(
    loss='sparse_categorical_crossentropy',  # or categorical_crossentropy
    optimizer=opt,
    metrics=[tf.keras.metrics.Recall(class_id=1, name='recall_1'),
             tf.keras.metrics.Recall(class_id=2, name='recall_2')]
)
history = model.fit(train_x, train_y, batch_size=BATCH, epochs=EPOCHS, validation_data=(validation_x, validation_y), callbacks=[tensorboard, checkpoint])
This spits out an error:
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (None, 3) and (None, 1) are incompatible
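(For context on the shape mismatch: tf.keras.metrics.Recall(class_id=...) indexes into y_true as if it were one-hot encoded, shape (None, 3), while sparse integer labels have shape (None, 1). A minimal sketch of the difference, assuming TF 2.x; the label and prediction values here are illustrative, not from the real data:

```python
import tensorflow as tf

# Sparse integer labels - what sparse_categorical_crossentropy expects
y_true_sparse = tf.constant([0, 1, 2, 1])
# One-hot labels - what Recall(class_id=...) expects
y_true_onehot = tf.one_hot(y_true_sparse, depth=3)

y_pred = tf.constant([[0.8, 0.1, 0.1],
                      [0.1, 0.8, 0.1],
                      [0.1, 0.1, 0.8],
                      [0.7, 0.2, 0.1]])

m = tf.keras.metrics.Recall(class_id=1)
m.update_state(y_true_onehot, y_pred)  # works with one-hot labels
print(m.result().numpy())  # 0.5: only one of the two true class-1 samples exceeds the 0.5 threshold
```

So one possible fix is to one-hot encode the labels and switch the loss to categorical_crossentropy; the built-in metric then works unchanged.)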
Model summary is:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm (LSTM) (None, 120, 32) 19328
_________________________________________________________________
dropout (Dropout) (None, 120, 32) 0
_________________________________________________________________
batch_normalization (BatchNo (None, 120, 32) 128
_________________________________________________________________
lstm_1 (LSTM) (None, 120, 32) 8320
_________________________________________________________________
dropout_1 (Dropout) (None, 120, 32) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 120, 32) 128
_________________________________________________________________
lstm_2 (LSTM) (None, 32) 8320
_________________________________________________________________
dropout_2 (Dropout) (None, 32) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 32) 128
_________________________________________________________________
dense (Dense) (None, 32) 1056
_________________________________________________________________
dropout_3 (Dropout) (None, 32) 0
_________________________________________________________________
dense_1 (Dense) (None, 3) 99
=================================================================
Total params: 37,507
Trainable params: 37,315
Non-trainable params: 192
Note that the model trains without errors if I instead use:
metrics=['accuracy']
but this and this made me think that something along the lines of tf.metrics.SparseCategoricalRecall() has not been implemented, by analogy with tf.metrics.SparseCategoricalAccuracy().
So I diverted to a custom metric, which descended into a rabbit hole of other issues, as I am highly illiterate when it comes to classes and decorators.
I botched this together from a custom metric example (I have no idea how to use sample_weight, so I commented it out to come back to later):
class RelevantRecall(tf.keras.metrics.Metric):

    def __init__(self, name="Relevant_Recall", **kwargs):
        super(RelevantRecall, self).__init__(name=name, **kwargs)
        self.joined_recall = self.add_weight(name="B/C Recall", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.argmax(y_pred, axis=1)
        report_dictionary = classification_report(y_true, y_pred, output_dict=True)
        # if sample_weight is not None:
        #     sample_weight = tf.cast(sample_weight, "float32")
        #     values = tf.multiply(values, sample_weight)
        #     self.joined_recall.assign_add(tf.reduce_sum(values))
        self.joined_recall.assign_add((float(report_dictionary['1.0']['recall']) + float(report_dictionary['2.0']['recall'])) / 2)

    def result(self):
        return self.joined_recall

    def reset_states(self):
        # The state of the metric will be reset at the start of each epoch.
        self.joined_recall.assign(0.0)
model.compile(
    loss='sparse_categorical_crossentropy',  # or categorical_crossentropy
    optimizer=opt,
    metrics=[RelevantRecall()]
)
history = model.fit(train_x, train_y, batch_size=BATCH, epochs=EPOCHS, validation_data=(validation_x, validation_y), callbacks=[tensorboard, checkpoint])
The aim is to return a metric of [(recall(B) + recall(C)) / 2]. I'd imagine returning both recalls separately, like metrics=[recall(b), recall(c)], would be better, but I can't get even the combined version to work anyway.
This raised a tensor-as-bool error:
OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
which googling led me to address by adding @tf.function above my custom metric class.
This in turn led to an old- vs. new-style class type error:
super(RelevantRecall, self).__init__(name=name, **kwargs)
TypeError: super() argument 1 must be type, not Function
which I don't see how I caused, since the class does inherit from a base class?
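(An aside on those graph-mode errors: they come from calling sklearn's classification_report on symbolic tensors inside update_state, which cannot run in the graph. The same joined recall can be computed from TF ops only. A sketch under that assumption; JoinedRecall is my own name, not a Keras API, and it treats argmax of the softmax output as the predicted class:

```python
import tensorflow as tf

class JoinedRecall(tf.keras.metrics.Metric):
    """Sketch: mean recall over a chosen set of classes, using TF ops only
    so it can run inside the graph. Assumes sparse integer labels."""

    def __init__(self, class_ids=(1, 2), name="joined_recall", **kwargs):
        super().__init__(name=name, **kwargs)
        self.class_ids = tuple(class_ids)
        # Running counts of true positives and actual positives per class
        self.tp = self.add_weight(name="tp", shape=(len(self.class_ids),), initializer="zeros")
        self.pos = self.add_weight(name="pos", shape=(len(self.class_ids),), initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_true = tf.cast(tf.reshape(y_true, [-1]), tf.int64)
        y_hat = tf.argmax(y_pred, axis=-1)  # predicted class = argmax of softmax
        tp, pos = [], []
        for c in self.class_ids:
            is_c = tf.equal(y_true, c)
            tp.append(tf.reduce_sum(tf.cast(tf.logical_and(is_c, tf.equal(y_hat, c)), tf.float32)))
            pos.append(tf.reduce_sum(tf.cast(is_c, tf.float32)))
        self.tp.assign_add(tf.stack(tp))
        self.pos.assign_add(tf.stack(pos))

    def result(self):
        # Mean of per-class recalls; guard against division by zero
        return tf.reduce_mean(self.tp / tf.maximum(self.pos, 1.0))

    def reset_state(self):
        self.tp.assign(tf.zeros_like(self.tp))
        self.pos.assign(tf.zeros_like(self.pos))

# Standalone check with made-up values: class 1 recall = 2/2, class 2 recall = 1/2
m = JoinedRecall()
m.update_state(tf.constant([0, 1, 2, 1, 2]),
               tf.constant([[1., 0., 0.], [0., 1., 0.], [0., 1., 0.],
                            [0., 1., 0.], [0., 0., 1.]]))
print(m.result().numpy())  # 0.75 = mean(1.0, 0.5)
```

This accumulates counts rather than recalls, so the epoch-level result is the true recall over all batches rather than an average of per-batch recalls.)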
As I said, I'm quite new to all aspects of this, so any help on how to achieve (and how best to achieve) a metric over only a selection of the prediction classes would be really appreciated.
OR
if I am going about this entirely wrong, let me know / guide me to the correct resource, please.
Ideally I'd like to go with the former method of using tf.keras.metrics.Recall(class_id=1, ...),
as it seems the neatest way, if only it worked.
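(One hedged workaround in that direction, if the labels are to stay sparse: subclass the built-in Recall and one-hot encode y_true before delegating. SparseRecall below is my sketch, not an official Keras class; note also that Recall thresholds y_pred[:, class_id] at 0.5 by default, which is not quite the same as taking the argmax of a 3-class softmax:

```python
import tensorflow as tf

class SparseRecall(tf.keras.metrics.Recall):
    """Sketch: Recall(class_id=...) that accepts sparse integer labels
    by one-hot encoding them before delegating to the built-in metric."""

    def __init__(self, num_classes, class_id, name=None, **kwargs):
        super().__init__(class_id=class_id, name=name, **kwargs)
        self.num_classes = num_classes

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Flatten (None, 1) sparse labels and expand to (None, num_classes)
        y_true = tf.one_hot(tf.cast(tf.reshape(y_true, [-1]), tf.int32),
                            self.num_classes)
        return super().update_state(y_true, y_pred, sample_weight=sample_weight)
```

This should slot into the original compile call unchanged, e.g. metrics=[SparseRecall(3, class_id=1, name='recall_1'), SparseRecall(3, class_id=2, name='recall_2')].)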
I am able to get the recall for each class by using a similar function in the callbacks part of the model, but this seems more expensive, as I have to run model.predict on the validation/test data at the end of each epoch. It is also unclear whether this even tells the model to focus on improving the selected classes (i.e., whether implementing it as a metric vs. a callback makes any difference to training).
Callback code:
class MetricsCallback(Callback):

    def __init__(self, test_data, y_true):
        # Should be the label encoding of your classes
        self.y_true = y_true
        self.test_data = test_data

    def on_epoch_end(self, epoch, logs=None):
        # Here we get the probabilities - longer process
        y_pred = self.model.predict(self.test_data)
        # Here we get the actual classes
        y_pred = tf.argmax(y_pred, axis=1)
        report_dictionary = classification_report(self.y_true, y_pred, output_dict=True)
        print("\n")
        print(f"Accuracy: {report_dictionary['accuracy']} - Holds: {report_dictionary['0.0']['recall']} - Sells: {report_dictionary['1.0']['recall']} - Buys: {report_dictionary['2.0']['recall']}")
        self._data = (float(report_dictionary['1.0']['recall']) + float(report_dictionary['2.0']['recall'])) / 2
metrics_callback = MetricsCallback(test_data=validation_x, y_true=validation_y)
history = model.fit(train_x, train_y, batch_size=BATCH, epochs=EPOCHS, validation_data=(validation_x, validation_y), callbacks=[tensorboard, checkpoint, metrics_callback])
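(One note on the metric-vs-callback question above: neither affects training; metrics and callbacks only report numbers, while gradients come from the loss alone. If the goal is to actually push the model toward classes B and C, class weighting in fit is a common lever for the ~90/5/5 imbalance described at the top. A sketch, assuming integer labels 0-2; the helper name and weighting scheme are mine, and the counts are illustrative:

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes=3):
    """Hypothetical helper: inverse-frequency class weights, so minority
    classes contribute proportionally more to the loss."""
    counts = np.bincount(np.asarray(labels).astype(int).ravel(),
                         minlength=num_classes)
    total = counts.sum()
    return {c: float(total) / (num_classes * max(int(counts[c]), 1))
            for c in range(num_classes)}

weights = inverse_frequency_weights([0] * 90 + [1] * 5 + [2] * 5)
print(weights)  # {0: ~0.37, 1: ~6.67, 2: ~6.67}
# then: model.fit(..., class_weight=weights)
```

Keras applies these per-sample during training via the class_weight argument to Model.fit, independently of whichever recall metric is being tracked.)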
From: "TensorFlow/Keras: Using specific classes' recall as metric (Sparse Categorical Recall Metric Errors / Custom Metric)"