* [IE][VPU]: Fixes addCopyForOutputsInsideNetwork
In the case of a dynamic output with a consumer, the pass tries
to connect the output's shape with the new intermediate data
twice: once at the moment of the duplicateData call (successfully)
and once more manually at the end of the pass. The second
attempt leads to an error since the child data is already connected.
Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
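For illustration only, a minimal self-contained sketch of the failure mode described above; `Data` and `connectShape` below are hypothetical stand-ins, not the actual VPU model API:

```cpp
#include <memory>
#include <stdexcept>

// Hypothetical stand-in for a data object whose shape is driven by another
// data object (as with dynamic outputs in the VPU graph).
struct Data {
    std::shared_ptr<Data> shapeParent;  // data object providing this one's shape
};

// Connects `child`'s shape to `parent`; throws if the child is already
// connected, mirroring the error hit on the second, manual connection attempt.
void connectShape(const std::shared_ptr<Data>& parent, const std::shared_ptr<Data>& child) {
    if (child->shapeParent) {
        throw std::runtime_error("child data is already connected");
    }
    child->shapeParent = parent;
}

int main() {
    auto intermediate = std::make_shared<Data>();
    auto output = std::make_shared<Data>();

    connectShape(intermediate, output);  // done implicitly during duplicateData
    // The fix, in spirit: do not attempt the second, manual connection when the
    // shape is already attached.
    if (!output->shapeParent) {
        connectShape(intermediate, output);
    }
}
```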
* [IE][VPU]: Introduces tests on addCopyForOutputsInsideNetwork
Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
* FakeQuantize single-layer test for the GNA plugin
* implemented FakeQuantize for the fp32 case as an activation function
* added proper seed randomization within a single test run
* [GNA] [FAKEQUANTIZE] fixed ref-fp32 implementation on GNA to use nearbyint instead of roundf
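For context, the difference between the two rounding functions: `roundf` rounds halfway cases away from zero, while `nearbyint` follows the current floating-point rounding mode (round-to-nearest-even by default). A minimal stand-alone check:

```cpp
#include <cfenv>
#include <cmath>
#include <cstdio>

int main() {
    // roundf: halfway cases go away from zero -> 2.5f becomes 3.
    // nearbyint: uses the current rounding mode, round-to-nearest-even by
    // default -> 2.5f becomes 2.
    std::fesetround(FE_TONEAREST);
    std::printf("roundf(2.5f)    = %.1f\n", roundf(2.5f));      // 3.0
    std::printf("nearbyint(2.5f) = %.1f\n", nearbyint(2.5f));   // 2.0
}
```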
* [GNA] [FAKEQUANTIZE] restored random seed
* [GNA][FAKEQUANTIZE] disabled 4d and integer tests for FakeQuantize
* [GNA][FAKEQUANTIZE] updated the ngraph FakeQuantize builder to accept a seed
* [GNA][FAKEQUANTIZE] aligned the order of FP calculations on GNA with the ngraph reference - this, however, gives a larger error
* [CPU] restored the build of FakeQuantize tests
* [TESTS][FAKEQUANTIZE] ignore extra inferRequests for disabled tests
* [GNA] Fixed legacy unit test failures that appeared due to an extra check for a possible segfault in import frames
* [GNA] adopted fuse-multiple-identities for the FakeQuantize layer
* [GNA] fp32 runtime code review fixes
* Backport of FQ+Mul transform to master
* Accept any type of input to FQ in the transformation
* Test the fusion when all FQ inputs are non-const
* Fusion test when only one output limit is const
* Test passing the output of FQ to second input of Mul
* Specify in and out precisions separately, add layouts for convolution
* Align convolution layer tests instantiations with updated definition
* Align convolution layer tests instantiations with updated definition for template plugin
* specify network, input and output precisions
Co-authored-by: Mikhail Treskin <mikhail.treskin@intel.com>
* ConvertPrecision - saturate Constant's value to std::numeric_limits<dst_type>::lowest() if it's below that limit.
* Remove clamping to std::numeric_limits<int32_t>::lowest() in U32/U64 case
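A generic sketch of the saturation described above (a hypothetical `saturate_cast` helper, not the actual ConvertPrecision code): values outside the destination type's representable range are clamped to its limits rather than cast directly.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <limits>

// Illustrative helper: clamp a source value to DstT's representable range
// before converting, so a large negative constant becomes
// std::numeric_limits<DstT>::lowest() instead of wrapping around.
template <typename DstT, typename SrcT>
DstT saturate_cast(SrcT value) {
    const auto lo = static_cast<SrcT>(std::numeric_limits<DstT>::lowest());
    const auto hi = static_cast<SrcT>(std::numeric_limits<DstT>::max());
    return static_cast<DstT>(std::min(std::max(value, lo), hi));
}

int main() {
    std::cout << static_cast<int>(saturate_cast<int8_t>(-1e6)) << "\n";  // -128
    std::cout << static_cast<int>(saturate_cast<int8_t>(1e6)) << "\n";   //  127
}
```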
* fix the bidirectional case in the reference implementations of sequence ops, enable decomposition of bidirectional cases in CommonOptimizations
* introduce new opset5, include GRU/RNN/LSTM Sequences to opset5
* Revert "introduce new opset5, include GRU/RNN/LSTM Sequences to opset5"
This reverts commit 73c22a11db.
* Introduced a new way to test DSR+Op cases
* Enabled DSR_Reduce, DSR_VariadicSplit, DSR_TopK, DSR_Scatter, DSR_Unsqueeze tests
* Other disabled tests remain disabled until the reference function is implemented. Added related comments
* Reduce DSR+Op tests execution time via reducing tensor shapes
* Now coordinate_transformation_mode is used for all axes in the 'nearest' mode.
* Temporarily added tests for Interpolate-4 evaluate().
* Deleted temporarily added tests.
* Fixed documentation for the 'nearest' mode.
* Small fixes.
* Disabled Interpolate-4 layer tests for CPU.
* Disabled some Interpolate-4 CPU tests.
* do not change the index table on each execution
* layout check added
* interpolate for no batch size even when the scale is 1
* coordinate transformation divides by the scale instead of multiplying by 1/scale, for higher accuracy
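A small self-contained check of the accuracy argument (generic C++, values are illustrative): dividing a coordinate by the scale performs a single rounding, while multiplying by a precomputed 1/scale compounds the reciprocal's own rounding error.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    const float scale = 2.7f;               // arbitrary non-power-of-two scale
    const float inv_scale = 1.0f / scale;   // reciprocal, already rounded to float

    double max_err_div = 0.0, max_err_mul = 0.0;
    for (int x = 0; x < 10000; ++x) {
        const double ref = static_cast<double>(x) / static_cast<double>(scale);
        max_err_div = std::max(max_err_div, std::fabs(static_cast<double>(x / scale) - ref));
        max_err_mul = std::max(max_err_mul, std::fabs(static_cast<double>(x * inv_scale) - ref));
    }
    // Division stays within half an ulp of the double reference; the reciprocal
    // multiplication can accumulate roughly twice that error.
    std::printf("max error, x / scale     : %g\n", max_err_div);
    std::printf("max error, x * (1/scale) : %g\n", max_err_mul);
}
```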
* disable tests temporarily
* test modification
* Some changes.
* Enabled some tests.
* ReverseSequence tests for some plugins
- added shared parameterized tests
- instantiated for template plugin
- instantiated for cpu plugin
- fixed CPU plugin to properly handle U16 input
- fixed CPU reverse_sequence primitive to allow input/output tensors to
be in FP32 only
- updated ngraph test_simple_computation_on_ndarrays to not expect
failure on U16 input
* [GNA] fix scale factor calculation for unfused bias after fc
* change check
* add test
* apply requested changes
* cpplint fix
* apply test changes
* modify model for test to match ::op::
* Removed shape inference for IR v7 and older
* Disabled dynamic batch tests which require reshape
* Fixed tests (part 2)
* Disabled MKLDNN tests with convolution reshape
* Fixed GPU tests
* Disable VPU tests with batch size > 1 for old IRs
* Removed most of shape infer functions for old representation
* Removed most of CNNLayer validators
* Fixed validators and keep only parseParams
* Removed tests on invalid IR v7
* Disabled more VPU tests
* Removed Bucketize validator
* Disabled one more Myriad test case where reshape for an old IR is needed
* Removed useless reshape
* Need to replace GRUCell with Unique
* Moved shape infer functions for experimental layers to Core IE
* Fixed shape inference functions not to depend on legacy
* Added missed SparseToDense
* Added descriptive error message
* Fixed comments
* TI-to-sequences transformations
* fix sequences to IE sequences conversion
* resolve review remarks
* resolve review remarks, fix TI-to-sequences transformations to support batch > 1 if the slice axis == 0
* temporary enable ngraph ti transformations for cpu plugin
* fix includes
* Revert "fix includes"
This reverts commit 6cf15b97be.
* Revert "temporary enable ngraph ti transformations for cpu plugin"
This reverts commit fd528d7216.
* delete todo comments
* Initial commit
* [SSR] Reshape(2D)->MatMul constraint relaxation
* Moved common pattern mechanics to the common function
* Moving SmartReshape to CNNNetworkNgraphImpl ctors
* Review comment
* Tests
* Fix for concat layer with more than 2 inputs
Signed-off-by: Bartosz Sochacki <bartosz.sochacki@intel.com>
* Fixed check if affine is used for crop layer
Signed-off-by: Bartosz Sochacki <bartosz.sochacki@intel.com>
* code cleanup for the affine layer check fix
Signed-off-by: Bartosz Sochacki <bartosz.sochacki@intel.com>
* added test for concat layer with multiple inputs
* simplified the test to use fewer layers
* fixed code style
* fixed coding style
* addressed review comments and one more issue that appeared during testing
* fixed code style errors
* scale factor propagation for concat layer with multiple inputs
* fix for a case when all inputs to concat are activation layers
* fix for Linux compilation - C++14 is not enabled and the build fails on a lambda with auto parameters
* corrected current year in headers in concat multi input tests
* fixes for code review issues raised by Denis Orlov
* enabled integer mode computation in GNA concat multi input test
* removed 1 space per review comment
* a fix to fail when not all scale factors are equal
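For illustration only (a hypothetical sketch, not the GNA plugin code): the bullet above implies that every Concat input must end up with the same quantization scale factor, so a propagation pass has to pick one common scale and requantize, or reject, the inputs that differ. One simple policy is sketched below; the choice of the smallest scale is an assumption for the example.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Per-input quantization scale factors arriving at a Concat layer.
    std::vector<float> input_scales = {2048.0f, 512.0f, 1024.0f};

    // Pick a common scale (here: the smallest) and mark the rest for requantization.
    const float common = *std::min_element(input_scales.begin(), input_scales.end());
    for (size_t i = 0; i < input_scales.size(); ++i) {
        if (input_scales[i] != common) {
            std::printf("input %zu: requantize %.1f -> %.1f\n", i, input_scales[i], common);
        }
    }
}
```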
* added GNA_DEVICE_MODE config to concat multi input test
* corrected the search for the next input to the concat layer
* changed selection of 2nd candidate for source quant value
* code style fix - else and brackets should be on the same line
* small code improvement
* fix for mixing line endings
* addressed the endless requantization loop and fixed failing tests
Special test case with input values which cannot be correctly processed via
decomposition with an integer AVG pool layer.
Signed-off-by: Alexander Peskov <alexander.peskov@intel.com>
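A rough, self-contained illustration of the failure mode such a test targets, assuming the decomposition averages in integer arithmetic with truncating division (hypothetical, not the actual pooling kernel):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // A 2x1 average-pooling window over two values.
    const int32_t a = 3, b = 4;

    // Float reference: the true mean.
    const float ref = (static_cast<float>(a) + static_cast<float>(b)) / 2.0f;  // 3.5

    // Integer decomposition with truncating division loses the fractional part.
    const int32_t int_avg = (a + b) / 2;  // 3

    std::printf("float avg = %.1f, int avg = %d\n", ref, int_avg);
}
```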