Wednesday 4 August 2021

Keras Custom Loss for One-Hot Encoded

I currently have a trained DNN that predicts a one-hot encoded classification for the states a game is in. Essentially, imagine there are three states: 0, 1, or 2.

Now, I would normally use categorical_crossentropy for the loss function, but I realized not all classifications are equal for my states. For example (the full payoff matrix is sketched after this list):

  • If the model predicts state 1, there is no cost to my system even if that classification is wrong, since state 1 is essentially "do nothing", so the reward is 0x.
  • If the model correctly predicts state 0 or 2 (i.e. predict = 2 and correct = 2), the reward should be 3x.
  • If the model incorrectly predicts state 0 or 2 (i.e. predict = 2 and correct = 0), the reward should be -1x.
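
Taken together, those rules amount to a 3x3 payoff matrix indexed by (true state, predicted state). The matrix below is just my own restatement of the bullets, not anything from the model:

import numpy as np

# reward_matrix[true_state, predicted_state]
# Predicting 1 always scores 0; a correct 0 or 2 scores +3;
# an incorrect 0 or 2 scores -1.
reward_matrix = np.array([
    [ 3.0, 0.0, -1.0],   # true state 0
    [-1.0, 0.0, -1.0],   # true state 1
    [-1.0, 0.0,  3.0],   # true state 2
], dtype=np.float32)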

I know we can declare custom loss functions in Keras, but I keep getting stuck forming it. Does anyone have suggestions for how to translate that pseudocode? I can't tell how I'd express it as a vector-wise operation.

Additional question: I think what I'm essentially after is a reward function. Is that the same thing as a loss function? Thanks!
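
My guess is that a reward is just a negated loss: Keras minimizes whatever the loss function returns, so maximizing an expected reward would mean returning its negative, something like this (where rewards is the hypothetical per-sample reward tensor I'm trying to build below):

    # hypothetical: rewards is a per-sample reward tensor of shape (batch,)
    loss = -tf.reduce_mean(rewards)  # maximize reward by minimizing its negative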

import tensorflow as tf

def custom_expectancy(y_expected, y_pred):
    
    # Get 0, 1 or 2 for each sample in the batch
    expected_norm = tf.argmax(y_expected, axis=-1)
    predicted_norm = tf.argmax(y_pred, axis=-1)
    
    # Some pseudo code....
    # Now, if predicted == 1
    #     loss += 0
    # elif predicted == expected
    #     loss -= 3
    # elif predicted != expected
    #     loss += 1
    #
    # return loss

Sources consulted:

https://datascience.stackexchange.com/questions/55215/how-do-i-create-a-keras-custom-loss-function-for-a-one-hot-encoded-binary-classi

Custom loss in Keras with softmax to one-hot

Code Update

import tensorflow as tf
def custom_expectancy(y_expected, y_pred):
    
    # Get 0, 1 or 2 for each sample in the batch
    expected_norm = tf.argmax(y_expected, axis=-1)
    predicted_norm = tf.argmax(y_pred, axis=-1)
    
    # One slot per batch sample; every element is overwritten in the loop below
    results = tf.unstack(expected_norm)
    
    # Some pseudo code....
    # Now, if predicted == 1
    #     reward += 0
    # elif predicted == expected
    #     reward += 3
    # elif predicted != expected
    #     reward -= 1
    
    # NOTE: a Python loop like this only runs eagerly; it can't be traced into a graph
    for idx in range(len(results)):
        predicted = predicted_norm[idx]
        expected = expected_norm[idx]
        
        if predicted == 1: # do nothing
            results[idx] = 0.0
        elif predicted == expected: # reward
            results[idx] = 3.0
        else: # wrong, so we lost
            results[idx] = -1.0
    
    
    return tf.stack(results)

I think this is what I'm after, but I haven't quite figured out how to build the correct tensor (which should be of size batch) to return.
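
Here's a fully vectorized sketch of what I think the loss should look like: it indexes the 3x3 reward matrix from above with each sample's (true, predicted) pair via tf.gather_nd, which yields one reward per batch entry, then negates it so Keras can minimize it. I haven't verified it in training, so treat it as a guess:

import tensorflow as tf

# reward_matrix[true_state, predicted_state], per the rules above
reward_matrix = tf.constant([
    [ 3.0, 0.0, -1.0],   # true state 0
    [-1.0, 0.0, -1.0],   # true state 1
    [-1.0, 0.0,  3.0],   # true state 2
])

def custom_expectancy(y_expected, y_pred):
    # Collapse the one-hot targets / softmax outputs to class indices
    expected_norm = tf.argmax(y_expected, axis=-1)
    predicted_norm = tf.argmax(y_pred, axis=-1)
    # Pair up the (true, predicted) indices -> shape (batch, 2)
    indices = tf.stack([expected_norm, predicted_norm], axis=-1)
    # Look up the reward for every sample -> shape (batch,)
    rewards = tf.gather_nd(reward_matrix, indices)
    # Keras minimizes the loss, so return the negated reward
    return -rewards

Since there's no Python loop, this should trace into a graph, and because Keras averages a per-sample loss tensor of shape (batch,) on its own, it should plug straight into model.compile(loss=custom_expectancy).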



from Keras Custom Loss for One-Hot Encoded
