With limited knowledge, I've built an LSTM network. I would like to validate my assumptions and better understand the Keras API.
Network Code:
#...
model.add(LSTM(8, batch_input_shape=(None, 100, 4), return_sequences=True))
model.add(LeakyReLU())
model.add(LSTM(4, return_sequences=True))
model.add(LeakyReLU())
model.add(LSTM(1, return_sequences=False, activation='softmax'))
#...
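For reference, here is a self-contained version of the snippet above, a sketch only: the elided parts are filled in with a minimal Sequential setup that is not from the original post, and batch_input_shape is expressed as a separate Input layer (the modern Keras idiom for the same (None, 100, 4) batch shape):

```python
# Sketch: minimal runnable reconstruction of the posted snippet (assumptions noted above).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, LeakyReLU

model = Sequential()
model.add(Input(shape=(100, 4)))  # 100 timesteps, 4 features per timestep
model.add(LSTM(8, return_sequences=True))
model.add(LeakyReLU())
model.add(LSTM(4, return_sequences=True))
model.add(LeakyReLU())
model.add(LSTM(1, return_sequences=False, activation='softmax'))
# Final layer drops the time dimension, so each sample yields one value: shape (None, 1)
```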
I have tried to build a network with a 4-feature input, two hidden layers (the first with 8 neurons, the second with 4), and a single neuron in the output layer.
The activation I wanted is LeakyReLU.
Q:
- Is the implementation correct? i.e., does the code reflect what I planned?
- When using LeakyReLU, should I add a linear activation on the previous layer? i.e., do I need to add activation='linear' to the LSTM layers?
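One way to reason about the linear-activation question: a LeakyReLU layer is applied elementwise to whatever the previous layer emits, so if the LSTM keeps its default tanh output activation, the two nonlinearities compose. A small NumPy sketch (leaky_relu here is a hypothetical helper mimicking Keras's LeakyReLU with its default alpha=0.3) makes the difference visible:

```python
import numpy as np

def leaky_relu(x, alpha=0.3):
    # Elementwise: x if x > 0, else alpha * x (Keras LeakyReLU defaults to alpha=0.3)
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

# Default LSTM activation (tanh) followed by a LeakyReLU layer:
# LeakyReLU sees values already squashed into (-1, 1).
after_tanh = leaky_relu(np.tanh(x))

# With activation='linear' on the LSTM, LeakyReLU sees the raw values.
after_linear = leaky_relu(x)
```

Here after_tanh and after_linear differ, which is why the choice of activation on the preceding LSTM layer matters when stacking a separate LeakyReLU layer after it.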
from python - Implementing an LSTM network with Keras and TensorFlow
