I am training a Mask R-CNN Inception ResNet V2 1024x1024 model on my computer's GPU. The model was downloaded from the TensorFlow Detection Model Zoo, and I labeled my images (1100x1100 pixels) with LabelImg. Here is what I am working with:
- GPU: NVIDIA GeForce RTX 2060
- System: 16GB RAM, 6-core CPU
- TensorFlow: 2.3.1
- Python: 3.8.6
- CUDA: 10.1
- cuDNN: 7.6
- Anaconda 3 command prompt
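(As far as I can tell, CUDA 10.1 with cuDNN 7.6 is the combination TensorFlow 2.3 expects. GPU visibility can be double-checked from the same environment with a quick snippet like the one below.)

import tensorflow as tf

# Print the TensorFlow version, whether it was built with CUDA,
# and the GPUs visible to the runtime.
print("TensorFlow:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))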
All TFRecord files have been generated. When I start training with

python model_main_tf2.py --model_dir=models/my_faster_rcnn --pipeline_config_path=models/my_faster_rcnn/pipeline.config

I get the following error:
Traceback (most recent call last):
File "model_main_tf2.py", line 113, in <module>
tf.compat.v1.app.run()
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "model_main_tf2.py", line 104, in main
model_lib_v2.train_loop(
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\object_detection\model_lib_v2.py", line 564, in train_loop
load_fine_tune_checkpoint(detection_model,
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\object_detection\model_lib_v2.py", line 350, in load_fine_tune_checkpoint
features, labels = iter(input_dataset).next()
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 645, in next
return self.__next__()
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 649, in __next__
return self.get_next()
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 694, in get_next
self._iterators[i].get_next_as_list_static_shapes(new_name))
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\distribute\input_lib.py", line 1474, in get_next_as_list_static_shapes
return self._iterator.get_next()
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\data\ops\multi_device_iterator_ops.py", line 581, in get_next
result.append(self._device_iterators[i].get_next())
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 825, in get_next
return self._next_internal()
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\data\ops\iterator_ops.py", line 764, in _next_internal
return structure.from_compatible_tensor_list(self._element_spec, ret)
File "C:\user\anaconda3\envs\object_detection_api\lib\contextlib.py", line 131, in __exit__
self.gen.throw(type, value, traceback)
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\context.py", line 2105, in execution_mode
executor_new.wait()
File "C:\user\anaconda3\envs\object_detection_api\lib\site-packages\tensorflow\python\eager\executor.py", line 67, in wait
pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[16] = 16 is not in [0, 0)
[[]]
[[MultiDeviceIteratorGetNextFromShard]]
[[RemoteCall]]
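The failure happens on the very first read of the training dataset (load_fine_tune_checkpoint pulls one batch before restoring the checkpoint), and "is not in [0, 0)" looks like a gather into an empty tensor, so the contents of train.record may be involved. The stored features can be dumped with a short script like this (a rough sketch; the keys assumed here are the standard ones written by the object_detection dataset tools):

import tensorflow as tf

# Print every feature key in the first example of train.record, along with
# its type and number of values, to spot empty class/box/mask lists.
for raw in tf.data.TFRecordDataset("annotations/train.record").take(1):
    example = tf.train.Example.FromString(raw.numpy())
    for key, feature in sorted(example.features.feature.items()):
        kind = feature.WhichOneof("kind")
        print(key, kind, len(getattr(feature, kind).value))

In the standard format, image/object/class/label, the image/object/bbox/* keys, and image/object/mask should be non-empty for every annotated image when load_instance_masks is true.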
The pipeline config file used for training is:
# Mask R-CNN with Inception Resnet v2 (no atrous)
# Sync-trained on COCO (with 8 GPUs) with batch size 16 (1024x1024 resolution)
# Initialized from Imagenet classification checkpoint
#
# Train on GPU-8
#
# Achieves 40.4 box mAP and 35.5 mask mAP on COCO17 val

model {
  faster_rcnn {
    number_of_stages: 3
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: 1024
        width: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_inception_resnet_v2_keras'
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 17
    maxpool_kernel_size: 1
    maxpool_stride: 1
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
        mask_height: 33
        mask_width: 33
        mask_prediction_conv_depth: 0
        mask_prediction_num_conv_layers: 4
        conv_hyperparams {
          op: CONV
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.01
            }
          }
        }
        predict_instance_masks: true
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
    second_stage_mask_prediction_loss_weight: 4.0
    resize_masks: false
  }
}
train_config: {
  batch_size: 1
  num_steps: 200000
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: 0.008
          total_steps: 200000
          warmup_learning_rate: 0.0
          warmup_steps: 5000
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint_version: V2
  fine_tune_checkpoint: "pre-trained-models/mask_rcnn_inception_resnet_v2_1024x1024_coco17_gpu-8/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
train_input_reader: {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "annotations/train.record"
  }
  load_instance_masks: true
  mask_type: PNG_MASKS
}
eval_config: {
  metrics_set: "coco_detection_metrics"
  metrics_set: "coco_mask_metrics"
  eval_instance_masks: true
  use_moving_averages: false
  batch_size: 1
  include_metrics_per_category: true
}
eval_input_reader: {
  label_map_path: "annotations/label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "annotations/test.record"
  }
  load_instance_masks: true
  mask_type: PNG_MASKS
}
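Since num_classes is set to 1, the label map referenced above should contain a single item with id 1. Whether it parses, and which ids it defines, can be checked with the object_detection helpers (a quick sketch):

from object_detection.utils import label_map_util

# Parse annotations/label_map.pbtxt and print the name -> id mapping;
# ids must start at 1 and must not exceed num_classes.
print(label_map_util.get_label_map_dict("annotations/label_map.pbtxt"))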
What can be done to fix this?
Here is a link to the rest of the code that is mentioned in the errors, since it won't fit in this thread.