Monday 25 February 2019

keras understanding Word Embedding Layer

From the page, I got the code below:

from numpy import array
from keras.preprocessing.text import one_hot
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers.embeddings import Embedding
# define documents
docs = ['Well done!',
        'Good work',
        'Great effort',
        'nice work',
        'Excellent!',
        'Weak',
        'Poor effort!',
        'not good',
        'poor work',
        'Could have done better.']
# define class labels
labels = array([1,1,1,1,1,0,0,0,0,0])
# integer encode the documents
vocab_size = 50
encoded_docs = [one_hot(d, vocab_size) for d in docs]
print(encoded_docs)
# pad documents to a max length of 4 words
max_length = 4
padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')
print(padded_docs)
# define the model
model = Sequential()
model.add(Embedding(vocab_size, 8, input_length=max_length))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
# fit the model
model.fit(padded_docs, labels, epochs=50, verbose=0)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
print('Accuracy: %f' % (accuracy*100))

  1. I looked at encoded_docs and noticed that the words done and work both have a one_hot encoding of 2. Why is that? Is it because uniqueness of the word-to-index mapping is not guaranteed, as per this page?
  2. I got the embeddings with embeddings = model.layers[0].get_weights()[0]. In that case, why do we get an embedding matrix with 50 rows? Even though two words have the same one_hot number, do they get different embeddings?
  3. How can I tell which embedding belongs to which word, i.e. done vs. work?
  4. I also found the code below on the page, which could help with finding the embedding of each word, but I don't know how to create word_to_index (see the sketch right after this snippet).

    # word_to_index is a mapping (i.e. dict) from words to their index, e.g. love: 69
    words_embeddings = {w: embeddings[idx] for w, idx in word_to_index.items()}
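
As a rough sketch for points 1-4: one_hot is just a hashing trick, so distinct words can land on the same index, and (within a single run, since Python's built-in hash is only stable per process unless PYTHONHASHSEED is fixed) a word_to_index can be rebuilt by hashing each word the same way. This assumes the docs, vocab_size and embeddings variables from the code above:

    from keras.preprocessing.text import one_hot, text_to_word_sequence

    # one_hot hashes each word into [1, vocab_size), so 'done' and 'work' can collide;
    # uniqueness of the word-to-index mapping is not guaranteed.
    print(one_hot('done', vocab_size), one_hot('work', vocab_size))

    # Within the same run the hash is consistent, so word_to_index can be rebuilt
    # by hashing every distinct word that appears in the documents.
    words = {w for d in docs for w in text_to_word_sequence(d)}
    word_to_index = {w: one_hot(w, vocab_size)[0] for w in words}

    # Words that collide share a row of the embedding matrix, so they end up
    # with the exact same embedding vector.
    words_embeddings = {w: embeddings[idx] for w, idx in word_to_index.items()}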

  5. Please confirm that my understanding of the Param # column below is correct.

The first layer has 400 parameters because the vocabulary size is 50 and the embedding has 8 dimensions, so 50*8 = 400.

The last layer has 33 parameters because each sentence has at most 4 words, so 4*8 = 32 weights from the flattened embedding dimensions plus 1 for the bias, 33 in total.


Layer (type)                 Output Shape              Param #
=================================================================
embedding_3 (Embedding)      (None, 4, 8)              400
_________________________________________________________________
flatten_3 (Flatten)          (None, 32)                0
_________________________________________________________________
dense_3 (Dense)              (None, 1)                 33
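
For what it is worth, those shapes can be checked directly on the model defined above (a quick sketch, assuming the model has already been built and fitted):

    emb_weights = model.layers[0].get_weights()[0]            # embedding matrix, shape (50, 8) -> 400 parameters
    dense_weights, dense_bias = model.layers[2].get_weights()
    print(emb_weights.shape)     # (50, 8)
    print(dense_weights.shape)   # (32, 1) -> 32 weights
    print(dense_bias.shape)      # (1,)    -> plus 1 bias = 33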

  6. Finally, if 1 above is correct, is there a better way to build the embedding layer model.add(Embedding(vocab_size, 8, input_length=max_length)) without doing the one-hot encoding encoded_docs = [one_hot(d, vocab_size) for d in docs]?
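
One common alternative (a sketch only, not necessarily what the original page recommends) is the Keras Tokenizer, which builds a unique word-to-index mapping instead of hashing, so word_to_index comes for free:

    from keras.preprocessing.text import Tokenizer
    from keras.preprocessing.sequence import pad_sequences

    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(docs)
    encoded_docs = tokenizer.texts_to_sequences(docs)   # unique integer per word, no collisions
    padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')

    word_to_index = tokenizer.word_index                # word -> index, 1-based
    vocab_size = len(word_to_index) + 1                 # +1 because index 0 is reserved for padding
    # vocab_size can then be passed to Embedding(vocab_size, 8, input_length=max_length)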


from keras understanding Word Embedding Layer
