
mobilenetv2 faster rcnn uniform-tf #299

Open
zyxcambridge opened this issue Jul 18, 2019 · 0 comments

File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/opt/project/main.py", line 51, in main
learner = create_learner(sm_writer, model_helper)
File "/opt/project/learners/learner_utils.py", line 60, in create_learner
learner = UniformQuantTFLearner(sm_writer, model_helper)
File "/opt/project/learners/uniform_quantization_tf/learner.py", line 98, in init
self.__build_train()
File "/opt/project/learners/uniform_quantization_tf/learner.py", line 184, in __build_train
scope=self.model_scope_quan)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/quantize_graph.py", line 197, in experimental_create_training_graph
scope=scope)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/quantize_graph.py", line 70, in _create_graph
is_training=is_training)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 53, in FoldBatchNorms
graph, is_training, freeze_batch_norm_delay=freeze_batch_norm_delay)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 98, in _FoldFusedBatchNorms
freeze_batch_norm_delay=freeze_batch_norm_delay))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/contrib/quantize/python/fold_batch_norms.py", line 338, in _ComputeBatchNormCorrections
match.moving_variance_tensor + match.batch_epsilon)
TypeError: unsupported operand type(s) for +: 'NoneType' and 'float'

Related TensorFlow issues mentioning this error: https://github.com/tensorflow/tensorflow/issues?utf8=%E2%9C%93&q=is%3Aissue+is%3Aopen+match.moving_variance_tensor+%2B+match.batch_epsilon

Quantization is only supported for SSD models right now.
Could replacing slim.batch_norm with tf.layers.batch_normalization() make this work?
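As a quick way to probe that question, here is a minimal sketch (not from this repository; it assumes TF 1.x with tf.contrib.quantize available) that builds a single conv + batch-norm block with tf.layers.batch_normalization(fused=True) and runs the same graph rewrite that fails in the traceback above. If the FoldBatchNorms pass matches this pattern, the rewrite completes; if not, it raises the same TypeError.

import tensorflow as tf

# Minimal probe (assumption: TF 1.x with tf.contrib.quantize available).
# Builds one conv + fused batch-norm block using tf.layers and applies the
# same quantization rewrite that fails in the traceback above.
g = tf.Graph()
with g.as_default():
    images = tf.placeholder(tf.float32, [1, 224, 224, 3], name='images')
    # Conv without bias followed by fused batch norm -- the layout that
    # fold_batch_norms.py tries to match and fold.
    net = tf.layers.conv2d(images, 32, 3, use_bias=False, name='conv')
    net = tf.layers.batch_normalization(net, fused=True, training=True, name='bn')
    net = tf.nn.relu6(net)
    # Same call site as learner.py: rewrite the graph for quantization-aware
    # training; this is where FoldBatchNorms runs.
    tf.contrib.quantize.experimental_create_training_graph(
        input_graph=g, weight_bits=8, activation_bits=8)
print('fold_batch_norms succeeded')

If this small case folds cleanly, the failure on the full MobileNetV2 Faster R-CNN graph is more likely caused by how batch norm is configured or laid out in that model (the matcher apparently cannot find a moving-variance tensor for one of the batch-norm ops) than by the choice between slim.batch_norm and tf.layers.batch_normalization, but that is only a guess based on the traceback.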
