When I try to train an agent with a batch_size greater than 1, I get an exception. Where is my issue?
from keras.layers import Input, Embedding, LSTM, Bidirectional, Dense
from keras.models import Model
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
# PrioritizedMemory and RecoProcessor come from my own code / a keras-rl fork

lr = 1e-3
window_length = 1
emb_size = 10
look_back = 6
# "Expert" (regular DQN) model architecture
inp = Input(shape=(look_back,))
emb = Embedding(input_dim=env.action_space.n + 1, output_dim=emb_size)(inp)
rnn = Bidirectional(LSTM(5))(emb)
out = Dense(env.action_space.n, activation='softmax')(rnn)
expert_model = Model(inputs=inp, outputs=out)
expert_model.compile(loss='categorical_crossentropy', optimizer=Adam(lr))
print(expert_model.summary())
# memory
memory = PrioritizedMemory(limit=1000000, window_length=window_length)
# policy
policy = BoltzmannQPolicy()
# agent
dqn = DQNAgent(model=expert_model, nb_actions=env.action_space.n, policy=policy, memory=memory,
               enable_double_dqn=False, enable_dueling_network=False, gamma=.9, batch_size=100,  # Here
               target_model_update=1e-2, processor=RecoProcessor())
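For context on the shapes involved: keras-rl stacks window_length consecutive observations for each sample, so the array it hands to the network is 3-D, (batch_size, window_length, obs_dim), while Input(shape=(look_back,)) expects a 2-D (batch_size, look_back) array. A minimal NumPy sketch of the mismatch (shapes only, no Keras required; batch_size and the other values are taken from the config above):

```python
import numpy as np

look_back = 6
window_length = 1
batch_size = 100

# What Input(shape=(look_back,)) expects from a batch:
expected = (batch_size, look_back)            # (100, 6)

# What keras-rl assembles from memory with window_length = 1:
batch = np.zeros((batch_size, window_length, look_back))
print(batch.shape)                            # (100, 1, 6)
print(batch.shape == expected)                # False: extra window axis
```

With batch_size = 1 the same extra axis yields the (1, 1, 6) array seen in the error message.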
I'm printing some values directly from the keras-rl source and I get this output:
State[array([0., 0., 0., 0., 0., 0.])]
Batch: [[[0. 0. 0. 0. 0. 0.]]]
But also this exception:
ValueError: Error when checking input: expected input_1 to have 2 dimensions, but got array with shape (1, 1, 6)
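One way this kind of mismatch is often reconciled is by squeezing the window axis in the processor's process_state_batch hook. A hypothetical sketch (the real RecoProcessor code is not shown in the question, so this is only an illustration of the idea, assuming window_length stays 1):

```python
import numpy as np

# keras-rl calls process_state_batch on every batch before it reaches
# the network; a processor can reshape the batch there.
class RecoProcessorSketch:
    def process_state_batch(self, batch):
        # batch arrives as (batch_size, window_length, look_back);
        # flatten the trailing axes so it matches Input(shape=(look_back,)).
        batch = np.asarray(batch)
        return batch.reshape(batch.shape[0], -1)

proc = RecoProcessorSketch()
out = proc.process_state_batch(np.zeros((100, 1, 6)))
print(out.shape)  # (100, 6)
```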
I could post the code of my processor class, and I think the key to this is there, but first I want to make sure that nothing is wrong in the code above.
DQNAgent: can't use a batch size greater than 1