Tuesday, 6 November 2018

Keras - Nan in summary histogram LSTM

I've written an LSTM model using Keras, with the LeakyReLU advanced activation:

    # imports assumed by this snippet
    from keras.models import Sequential
    from keras.layers import LSTM, LeakyReLU, Dropout, Flatten, Dense
    from keras import optimizers
    import keras_metrics

    # ADAM optimizer with learning rate decay
    opt = optimizers.Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0001)

    # build the model
    model = Sequential()

    num_features = data.shape[2]
    num_samples = data.shape[1]

    # linear activations inside the LSTMs, LeakyReLU applied as a separate layer
    model.add(
        LSTM(16, batch_input_shape=(None, num_samples, num_features), return_sequences=True, activation='linear'))
    model.add(LeakyReLU(alpha=.001))
    model.add(Dropout(0.1))
    model.add(LSTM(8, return_sequences=True, activation='linear'))
    model.add(Dropout(0.1))
    model.add(LeakyReLU(alpha=.001))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))

    # f1 is a custom metric defined elsewhere in the script
    model.compile(loss='binary_crossentropy', optimizer=opt,
                  metrics=['accuracy', keras_metrics.precision(), keras_metrics.recall(), f1])

My data is a balanced, binary-labeled set, i.e. 50% labeled 1 and 50% labeled 0. I've used activation='linear' for the LSTM layers preceding the LeakyReLU activation, similar to an example I found on GitHub.

In that configuration the model throws a "Nan in summary histogram" error. Changing the LSTM activations to activation='sigmoid' works well, but seems like the wrong thing to do.
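For reference, this is the variant that trains without the error; only the activation argument on the two LSTM lines changes, everything else stays as above:

    # works, but feels wrong: sigmoid activations inside the LSTM layers
    model.add(
        LSTM(16, batch_input_shape=(None, num_samples, num_features), return_sequences=True, activation='sigmoid'))
    model.add(LSTM(8, return_sequences=True, activation='sigmoid'))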

This StackOverflow question suggests "introducing a small value when computing the loss"; I'm just not sure how to do that with a built-in loss function.
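A minimal sketch of what that could look like: wrap the built-in binary crossentropy in a custom loss that clips predictions away from exact 0 and 1, so the log inside the loss never sees them (the function name and epsilon value here are my own choices, not from the original post):

    from keras import backend as K

    def clipped_binary_crossentropy(y_true, y_pred):
        # clip predictions into (eps, 1 - eps) so log(0) can't occur
        eps = 1e-7
        y_pred = K.clip(y_pred, eps, 1.0 - eps)
        return K.binary_crossentropy(y_true, y_pred)

    # pass the function in place of the string 'binary_crossentropy'
    model.compile(loss=clipped_binary_crossentropy, optimizer=opt,
                  metrics=['accuracy'])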

Any help/explanation would be appreciated.

Update: I can see that the loss is nan from the first epoch:

    260/260 [==============================] - 6s 23ms/step -
    loss: nan - acc: 0.5000 - precision: 0.5217 - recall: 0.6512 - f1: nan -
    val_loss: nan - val_acc: 0.0000e+00 - val_precision: -2147483648.0000 -
    val_recall: -49941480.1860 - val_f1: nan
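Not part of the original question, but a handy way to fail fast while debugging this: Keras ships a TerminateOnNaN callback that stops training as soon as a batch produces a NaN loss (the labels variable and fit arguments below are placeholders):

    from keras.callbacks import TerminateOnNaN

    # aborts training on the first batch whose loss is NaN
    model.fit(data, labels, epochs=10, batch_size=32,
              callbacks=[TerminateOnNaN()])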


