Sunday, 30 June 2019

Unexpected results with CuDNNLSTM (instead of LSTM) layer

I have posted this question as an issue in Keras' Github but figured it might reach a broader audience here.


System information

  • Have I written custom code (as opposed to using example directory): Minimal change to the official Keras tutorial
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.2 LTS
  • TensorFlow backend (yes / no): yes
  • TensorFlow version: 1.13.1
  • Keras version: 2.2.4
  • Python version: 3.6.5
  • CUDA/cuDNN version: 10.1
  • GPU model and memory: Tesla K80 11G

Describe the current behavior
I am executing the code from the Seq2Seq tutorial. The one and only change I made was to swap the LSTM layers for CuDNNLSTM. With that change, the model predicts the same fixed output sequence regardless of the input I give it. When I run the original code, I get sensible results.

Describe the expected behavior
The CuDNNLSTM model should produce sensible, input-dependent predictions, just as the original LSTM version does.

Code to reproduce the issue
Taken from the Keras Seq2Seq tutorial linked above. Simply replace LSTM with CuDNNLSTM.
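For readers who want the swap in context, here is a minimal sketch of the tutorial's encoder–decoder wiring, parameterized over the RNN class so the LSTM/CuDNNLSTM swap is the only variable. The `build_seq2seq` helper and its default dimensions (71/93 tokens, 256 latent units, matching the tutorial's fra-eng data) are my own framing, not code from the issue; note that CuDNNLSTM only runs on a GPU.

```python
# Sketch of the Keras lstm_seq2seq model, with the RNN class as a parameter
# so that swapping LSTM for CuDNNLSTM is a one-line change at the call site.
from keras.models import Model
from keras.layers import Input, LSTM, Dense

def build_seq2seq(rnn_cls, num_encoder_tokens=71, num_decoder_tokens=93,
                  latent_dim=256):
    # Encoder: discard the output sequence, keep the final hidden/cell states.
    encoder_inputs = Input(shape=(None, num_encoder_tokens))
    _, state_h, state_c = rnn_cls(latent_dim, return_state=True)(encoder_inputs)
    encoder_states = [state_h, state_c]

    # Decoder: return full sequences, initialized from the encoder states.
    decoder_inputs = Input(shape=(None, num_decoder_tokens))
    decoder_outputs, _, _ = rnn_cls(
        latent_dim, return_sequences=True, return_state=True
    )(decoder_inputs, initial_state=encoder_states)
    decoder_outputs = Dense(num_decoder_tokens,
                            activation='softmax')(decoder_outputs)
    return Model([encoder_inputs, decoder_inputs], decoder_outputs)

model = build_seq2seq(LSTM)  # original tutorial

# The reported change (requires a GPU):
#   from keras.layers import CuDNNLSTM
#   model = build_seq2seq(CuDNNLSTM)
```

Everything except the layer class is identical between the two runs, which is what makes the fixed-output behavior surprising.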


Any insights are greatly appreciated.


