I am trying to use the TensorBoard debugger to debug my application. I can watch the graph being built up step by step, but once training starts only the queue nodes on the CPU remain visible; the rest of the graph, including the whole GPU section, disappears. It is easiest to explain with a few pictures.
Before I run the first session.run(var.assign(data)) to initialize data:
Then I step through the initialization of the variables:
And at the end, when I start the training with sess.run(training_op):
Why do I only see the FIFO queue in the end and not the whole graph?
It looks like the GPU part is being cut out of the debugging process, but I'm not sure why. It is certainly being used: I have written custom ops that produce output when I add print statements, but that is a cumbersome way to debug.
I can't post an MCVE because the codebase is huge and convoluted, but the principle is:
- FIFO queues:
q = tf.FIFOQueue(queue_size,
                 [tf.float32, tf.float32, tf.int32, tf.float32, tf.float32,
                  tf.float32, tf.float32, tf.float32, tf.float32, tf.float32,
                  tf.float32])
- graph build-up:
def make_var(self, name, shape, initializer=None, regularizer=None, trainable=True):
    return tf.get_variable(name, shape, initializer=initializer,
                           regularizer=regularizer, trainable=trainable)
- variable initialization:
session.run(var.assign(data))
- 2nd thread started to feed the queues:
t = threading.Thread(target=load_and_enqueue, args=(sess, self.net, data_layer, coord_train, iters_train))
- training loop
- points 5 and 6 are both shown here:
for epoch in range(epochs):
    coord_train.run = True
    coord_val.run = True
    t = threading.Thread(target=load_and_enqueue,
                         args=(sess, self.net, data_layer, coord_train, iters_train))
    t.start()
    t_val = threading.Thread(target=load_and_enqueue_val,
                             args=(sess, self.net, data_layer, coord_val, iters_val))
    print("Epoch: %d / %d" % (epoch, epochs))
    for iter_train in range(iters_train):
        timer.tic()
        loss_summary, loss_cls_summary, loss_vertex_summary, loss_pose_summary, \
            loss_regu_summary, loss_value, loss_cls_value, loss_vertex_value, \
            loss_pose_value, loss_regu_value, lr, _ = \
            sess.run([loss_op, loss_cls_op, loss_vertex_op, loss_pose_op, loss_regu_op,
                      loss, loss_cls, loss_vertex, loss_pose, loss_regu,
                      learning_rate, train_op])
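To make the overall shape of the above clearer, here is a pure-Python analog of the feed pattern, using the stdlib queue.Queue in place of tf.FIFOQueue (the names load_and_enqueue, coord_train, and iters_train are stand-ins mirroring my code, not the real implementations):

```python
import queue
import threading

def load_and_enqueue(q, coord, iters):
    # Producer thread: pushes one batch per iteration while the
    # coordinator flag is set; blocks when the bounded queue is full.
    for i in range(iters):
        if not coord["run"]:
            break
        q.put({"batch": i})

iters_train = 5
q = queue.Queue(maxsize=2)      # bounded, like tf.FIFOQueue(queue_size, ...)
coord_train = {"run": True}     # stand-in for the coordinator's run flag

t = threading.Thread(target=load_and_enqueue, args=(q, coord_train, iters_train))
t.start()

consumed = []
for iter_train in range(iters_train):
    # Consumer: plays the role of sess.run(train_op), which implicitly
    # dequeues a batch from the FIFO queue.
    consumed.append(q.get())

t.join()
assert len(consumed) == iters_train
```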
Let me know if I can provide more information and what kind would be useful to debug this problem.
from "TensorBoard debugger graph disappears, only queue left, why?"