Sunday, 7 October 2018

python - Implementing an LSTM network with Keras and TensorFlow

With limited knowledge, I've built an LSTM network. I would like to validate my assumptions and better understand the Keras API.

Network Code:

#...
# (assumes the elided setup above includes:
#    from keras.models import Sequential
#    from keras.layers import LSTM, LeakyReLU
#    model = Sequential()
# )
model.add(LSTM(8, batch_input_shape=(None, 100, 4), return_sequences=True))
model.add(LeakyReLU())
model.add(LSTM(4, return_sequences=True))
model.add(LeakyReLU())
model.add(LSTM(1, return_sequences=False, activation='softmax'))
#...

I have tried to build a network with a 4-feature input, two hidden layers (the first with 8 neurons, the second with 4), and a single neuron in the output layer.
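One way to check whether the code matches that intended 8-4-1 design is to build the model and inspect the layer output shapes. Below is a minimal runnable sketch; it uses `tensorflow.keras` (the snippet above used standalone Keras) and an explicit `Input(shape=(100, 4))`, which is equivalent to `batch_input_shape=(None, 100, 4)`:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, LeakyReLU

model = Sequential()
model.add(Input(shape=(100, 4)))  # 100 timesteps, 4 features per step
model.add(LSTM(8, return_sequences=True))   # -> (None, 100, 8)
model.add(LeakyReLU())                      # shape unchanged
model.add(LSTM(4, return_sequences=True))   # -> (None, 100, 4)
model.add(LeakyReLU())                      # shape unchanged
model.add(LSTM(1, return_sequences=False, activation='softmax'))  # -> (None, 1)

model.summary()  # prints the per-layer shapes above
```

The shapes confirm the 8-unit and 4-unit hidden layers and the single-unit output.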


The activation I wanted was LeakyReLU.

Q:

  1. Is the implementation correct?
    i.e.: does the code reflect what I planned?
  2. When using LeakyReLU should I add linear activation on the previous layer?
    i.e.: Do I need to add activation='linear' to the LSTM layers?
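For reference, here is a hedged sketch of the variant Q2 describes: passing `activation='linear'` to the LSTM layers so that LeakyReLU becomes the only nonlinearity applied to each hidden layer's output (by default, Keras LSTM layers use `tanh` as the output activation, so in the original code LeakyReLU is applied on top of `tanh`; the recurrent sigmoid gates are unaffected either way). This is only an illustration of the option being asked about, not a claim that it is the right choice:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, LeakyReLU

model = Sequential()
model.add(Input(shape=(100, 4)))
# 'linear' disables the default tanh on the LSTM output,
# so the following LeakyReLU is the layer's only output nonlinearity
model.add(LSTM(8, return_sequences=True, activation='linear'))
model.add(LeakyReLU())
model.add(LSTM(4, return_sequences=True, activation='linear'))
model.add(LeakyReLU())
model.add(LSTM(1, return_sequences=False, activation='softmax'))
```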


from python - Implementing an LSTM network with Keras and TensorFlow
