* Added transformation config to support the AutoML EfficientDet-4 model
* Added configuration file to convert the AutoML EfficientDet model
* Updated unit test for Pack
* Added instructions on how to convert the EfficientDet TensorFlow model (see the example after this list)
* Updated documentation on how to convert the EfficientDet model
* Updated documentation with instructions on how to convert AutoML EfficientDet.
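As a rough illustration of the conversion flow referenced above (a sketch, not the exact documented command): the frozen graph name, the transformation config path, and the input shape below are placeholder assumptions and depend on the EfficientDet variant being converted.

```python
import subprocess

# Hypothetical file names; substitute the actual frozen AutoML EfficientDet graph
# and the transformation config added in this change.
frozen_graph = "efficientdet-d4_frozen.pb"
transformations_config = "extensions/front/tf/automl_efficientdet.json"

# Run the Model Optimizer to produce OpenVINO IR (.xml/.bin).
subprocess.run(
    [
        "python3", "mo.py",
        "--input_model", frozen_graph,
        "--transformations_config", transformations_config,
        # Input shape depends on the EfficientDet variant being converted.
        "--input_shape", "[1,512,512,3]",
    ],
    check=True,
)
```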
There was a problem with replication of a simple Loop body where an
input was used directly as an output.
Also avoided the use of special prefixes like "in_" for Loop body inputs.
Signed-off-by: Alexander Peskov <alexander.peskov@intel.com>
Also:
Simplified the logic of data object name restoring. Avoided duplication of input ports
in case of multiple consumers. The provided code has a WA comment in the corresponding
name-restoring section. Also added a WA section that restores U8 precision for outputs.
This avoids having to lift the limitation of the CNNNetwork converter.
Signed-off-by: Alexander Peskov <alexander.peskov@intel.com>
* [LPT] functional tests: FakeQuantize with dynamic intervals
* [LPT] decomposeFakeQuantize: removed debug info
* [LPT] Add NetworkHelper::mark_as_dequantization_op function
[ngraph] Fix compare runtime info function
[LPT] Fix test cases with no DEQUANTIZATION runtime attribute
[LPT] Change include path for dequantization op
* [LPT] Remove Subtract functional test, enable and rename legacy tests
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Aleksandr Pertovsky <aleksandr.pertovsky@intel.com>
* Added OpenVINO Model Server
* Updated documentation to include valid links
* Minor fixes
* Fixed links and style
* Update README.md
Fixed links to model_server
* More corrections
* Dropped reference in ie_docs and minor fixes
* Update README.md
Fixed links to Inference Engine pages
Co-authored-by: Alina Alborova <alina.alborova@intel.com>
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
* [MO] Add CMake install for Model Optimizer
* [MO] Update test for version.py
* [MO] Fix file permissions for install location
The Eltwise + ReLU merge is expected to be performed unconditionally
in all cases, and since it does not require strides to be defined,
it could be performed before the adjustDataLayout pass.
Unfortunately, there are cases with unexpected degradation after
such a change is introduced. In one specific case it seems to be
caused by a slowdown of a HW operation (convolution). This was not
investigated completely and the reason is still unknown (the convolution
itself remains unchanged in the network, but for some reason runs
slower).
It has been decided to introduce the change only for dynamic
models, to gain performance in some cases and avoid
degradations in others.
Moving the mergeEltwiseAndReLU pass before adjustDataLayout for
dynamic models yields an additional performance gain because the extra
copy stages otherwise introduced by adjustDataLayout are no longer needed.
Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
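For illustration only, a minimal sketch of the conditional pass ordering described above. The pass names mirror the ones mentioned in the message, but the pipeline structure, the function signatures, and the is_dynamic flag are assumptions made for this sketch, not the actual plugin code.

```python
# Hypothetical sketch of conditional pass ordering; not the actual plugin code.

def merge_eltwise_and_relu(model):
    """Fuse Eltwise + ReLU pairs into a single stage (placeholder)."""
    model["passes_run"].append("mergeEltwiseAndReLU")

def adjust_data_layout(model):
    """Insert copy/convert stages where layouts mismatch (placeholder)."""
    model["passes_run"].append("adjustDataLayout")

def build_pipeline(is_dynamic):
    if is_dynamic:
        # For dynamic models, merge first so adjustDataLayout does not add
        # copy stages around Eltwise/ReLU pairs that get fused anyway.
        return [merge_eltwise_and_relu, adjust_data_layout]
    # Keep the original order for static models to avoid the observed
    # (and not yet explained) convolution slowdown.
    return [adjust_data_layout, merge_eltwise_and_relu]

if __name__ == "__main__":
    model = {"passes_run": []}
    for pass_fn in build_pipeline(is_dynamic=True):
        pass_fn(model)
    print(model["passes_run"])  # ['mergeEltwiseAndReLU', 'adjustDataLayout']
```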