* [LPT] NetworkHelper::roundWithTolerance: removed tolerance & rename to round
[LPT] NetworkHelper::round functional tests
[LPT] ieFuncTests: updated some test cases
* [LPT] Subtract is not used
* [LPT] AddTransformation: zero handling
* [LPT] AddTransformation test
* Add shared test for RESULT_NOT_READY return from Wait() in async mode
* Instantiate test for RESULT_NOT_READY for GNA Plugin only
* Fix compile error
* Increase model size for the RESULT_NOT_READY test
* Reuse most of the test
* Apply review
* Fix typo
* Make the test deterministic
* Use callback timestamp
* Apply review
* Use promise and future
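The deterministic-wait approach above can be sketched as follows. This is a hedged illustration in Python (the actual test is C++ using `std::promise`/`std::future`); `start_async` is a hypothetical stand-in for the plugin's asynchronous start call, and a `threading.Event` plays the role of the future so the test blocks on the callback instead of sleeping.

```python
import threading

def wait_for_callback(start_async, timeout=5.0):
    # Hypothetical helper: block until the async request's completion
    # callback fires, instead of polling or sleeping for a fixed time.
    done = threading.Event()
    result = {}

    def callback(status):
        result["status"] = status  # record what the callback reported
        done.set()                 # fulfil the "promise"

    start_async(callback)
    assert done.wait(timeout), "callback was not invoked within the timeout"
    return result["status"]
```

Blocking on the event rather than a fixed sleep is what makes the RESULT_NOT_READY test deterministic regardless of model size or machine load.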
* Deserialization implementation for Constant op.
* Add Const op implementation for NodeConverter.
* Refactor functional tests, remove Const op from layer and node creators.
* Remove Constant op from NodeConverter.
* Refactor smoke test.
* Correct parameter in addBlob function.
* Update Constant op representation for myriad functional tests.
* Correct Const op representation for TopK model test.
* Add changes according to review comments.
* Refactor constant test.
* Add review changes.
* Add custom op for testing on_adapter(void*).
* Correct library path.
* Correct test fixture class for custom op test.
* Apply review remarks, remove creators from DeconvolutionIE.
* Refactored test ReadCustomAddConstNetwork, corrected on_adapter().
* Remove on_adapter() for CoordinateDiff which is specific to Convolution op.
* Apply review remarks.
* Apply review remarks.
* Correct Const op in non_max_suppression tests.
* Resolve conflicts after rebase.
* Align MaxPool op attribute 'rounding_type' to spec.
Attribute name should be in lower case.
* Remove obsolete "cacheable" attribute from Parameter.
* Translate ReLU & SoftMax ops type names from ngraph to IR convention.
* Remove <data> node when op has no attributes.
* Translate all operation attributes values to lower case.
* Revert "Align MaxPool op attribute 'rounding_type' to spec."
This reverts commit 243eeccff3.
* Revert "Translate all operation attributes values to lower case."
This reverts commit d4c24175b3.
* Align MaxPool op attribute 'rounding_type' to spec.
Attribute name should be in lower case.
* Align auto_pad & auto_broadcast operation attributes to spec.
They should be written in lowercase.
* Rename op::PadType 'none' to 'explicit'.
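The attribute-casing commits above can be illustrated with a minimal sketch. This is not the serializer's actual code; `serialize_attributes` is a hypothetical helper, and the set of enum-like attribute names is assumed from the commit messages (`auto_pad`, `auto_broadcast`, `rounding_type`), which the IR spec requires in lower case.

```python
def serialize_attributes(attrs):
    # Hypothetical helper: enum-like attribute values must be written to
    # the IR in lower case per the spec; numeric attributes pass through.
    enum_attrs = {"auto_pad", "auto_broadcast", "rounding_type"}
    return {
        name: value.lower() if name in enum_attrs and isinstance(value, str) else value
        for name, value in attrs.items()
    }
```

Lower-casing only the known enum attributes (rather than all string values, which one of the reverted commits attempted) avoids mangling attributes whose values are case-sensitive.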
* [GNA] added support for per-channel FakeQuantize layer
* [GNA] added quantization type detection in FQ-enabled networks, and input scale factor detection from FakeQuantize layers connected to the input layer
* added a FakeQuantize callback that will be used to cast integer values stored as float in a FakeQuantize layer
* fixed per-channel multiplier calculation for int8 case
* precision improvements for int8 fake quantization and support for propagating scale factors to activation layers
* added initial int16 support
* added support for fake quantize layer with many connected output layers and support for FQ data encoded as FP16
* added support for already quantized weights
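The per-channel multiplier fix above can be sketched roughly as below. This is a hedged illustration, not the GNA plugin's implementation; `per_channel_int8_multipliers` is a hypothetical helper showing the standard scheme of one scale factor per output channel for int8 quantization.

```python
def per_channel_int8_multipliers(weights):
    # Hypothetical sketch: compute one multiplier per output channel so the
    # largest-magnitude weight in each channel maps to the int8 limit 127.
    multipliers = []
    for channel in weights:
        max_abs = max(abs(w) for w in channel)
        multipliers.append(127.0 / max_abs if max_abs else 1.0)
    return multipliers
```

Using a separate multiplier per channel preserves precision in channels with small weight ranges, which a single per-tensor multiplier would crush.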
* Shared single layer test
* Added subgraph test
* Fix comment
* int8
* Enabling FQ tests on GNA
Co-authored-by: Eugene Smirnov <eugene.smirnov@intel.com>
Co-authored-by: Andrey Dmitriev <andrey.dmitriev@intel.com>
* fix MO cli_parser when an input name contains a substring with matching scale/mean values
* some additions to cli_parser unit-tests
* fixed numpy array comparisons -- added assert_ prefix
* more general solution for mean/scale parsing in cli_parser; names consisting only of digits are now processed correctly
* minor corrections
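The mean/scale parsing fix above can be illustrated with a small sketch. This is not the actual `cli_parser` code; `parse_node_values` and the exact spec format are assumptions for illustration. Anchoring on the bracketed value groups, rather than naively splitting on commas or searching for the name as a substring, is what keeps digit-only names and names that are substrings of other names intact.

```python
import re

def parse_node_values(spec):
    # Hypothetical parser for a "--mean_values"-style string such as
    # "input1[1,2,3],22[4,5]". Matching name[...] pairs as whole units
    # handles digit-only node names and substring-overlapping names.
    result = {}
    for name, values in re.findall(r'([^\[\],]+)\[([^\]]*)\]', spec):
        result[name] = [float(v) for v in values.split(',')]
    return result
```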
* Started to write equality comparator for StridedSlice layers.
* Now only unique StridedSlice consumers of non-constant data nodes are considered.
* Fixed test for the transformation ConvertGroupedStridedSlice.
* Deleted commented code.
* Small fixes.
* Moved functions unique_by and group_by_with_binary_predicate to mo/utils/utils.py.
* Deleted function collect_sequences.
* Added asserts into the constructor of StridedSlice.
* Wrote a test for the case where 1) there are 4 StridedSlice operations; 2) two of the StridedSlice operations have the same data; 3) the other two have the same data; 4) the outputs of all StridedSlice operations are consumed by different operations.
* Added some comments.
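The utilities mentioned above can be sketched as follows. The names `unique_by` and `group_by_with_binary_predicate` come from the commits (moved to `mo/utils/utils.py`), but these bodies are an assumed illustration of the grouping-by-binary-predicate idea, not the repository's code.

```python
def unique_by(sequence, pred):
    # Keep only the first representative of each class of items that are
    # pairwise "equal" under the binary predicate pred.
    representatives = []
    for item in sequence:
        if not any(pred(item, seen) for seen in representatives):
            representatives.append(item)
    return representatives

def group_by_with_binary_predicate(sequence, pred):
    # Partition items into groups whose members all satisfy pred with the
    # group's first element (e.g. StridedSlice nodes with equal attributes).
    groups = []
    for item in sequence:
        for group in groups:
            if pred(item, group[0]):
                group.append(item)
                break
        else:
            groups.append([item])
    return groups
```

With an equality predicate on StridedSlice attributes, the ConvertGroupedStridedSlice transformation can merge each group of equivalent slices into one, as in the four-slice test case described above.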