Roman Lyamin
77365bcb4c
[IE CLDNN] Added Round-5 operation ( #2838 )
2020-10-27 10:56:15 +03:00
Ilya Lavrenov
166ab89b95
Reorganize LPT: ( #2803 )
...
- inference_engine_lp_transformations keeps the nGraph LPT
- inference_engine_lp_transformations_legacy keeps the old CNNLayer-based LPT
2020-10-26 14:10:17 +03:00
Patryk Elszkowski
5036b12544
enable reference implementation in CTCGreedyDecoder single layer test ( #2680 )
...
* enable reference implementation for CTCGreedyDecoder single layer tests
* update unit test to have blank_index
* remove merge_repeated disable flag for CPU test because the CPU impl always merges
* add CTCGreedyDecoder single layer tests for CPU
* changes to match xPU implementations
* apply reviewers suggestions
Co-authored-by: Patryk Elszkowski <patryk.elszkowki@intel.com >
2020-10-26 13:34:50 +03:00
Vladimir Paramuzov
980bbd172a
[IE CLDNN] Enabled more functional tests and added several fixes into ops implementations ( #2763 )
2020-10-24 23:38:13 +03:00
Edward Shogulin
c2271da637
Es/lpt/lpt to ngraph fixes2 with master ( #2671 )
...
* [LPT] Replace creation of dequantization with factory
* [ngraph][LPT] Add ScaleShift replace for dequantization operations
* [LPT] SubtractMultiplyToMultiplyAdd refactoring
* [LPT] Code style fix
* [LPT] Edit SubtractMultiplyToMultiplyAdd transformation for dequantization
* [LPT] Linux compilation quick fix
* [LPT] [WIP] runtime info applying
* [LPT] Concat transformation functional tests extending
* [LPT] MultiplyToConvolution + Subtract to add fusing + improvements in LowPrecisionTransformer
* [LPT] linux compilation error fix
* [LPT] compilation error
* [LPT] MultiplyToGroupConvolution fix: 5D support
* [LPT] Multiply transformation extending: FQ weights support - wip
* [LPT] FQ folding & precision selection
* [LPT] code style fixes
* [LPT] code style fixes
* [LPT] Linux compilation error fix
* [LPT] SubtractMultiplyToMultiplyAdd: refactoring
* [LPT] Tests fixes
* [LPT] MultiplyToGroupConvolution tests
* [LPT] Convert subtract with int inputs to Eltwise sub
* [LPT] Constant folding fix for quant models
* [LPT] 1) Asymmetric quantization improvement 2) tests extending
* [LPT] 2 fixes for se_resnext_50
* [LPT] Add transformation priority branch selection test
* [LPT] AddMultiplyFusion: legacy transformation quick fix
* [LPT] nGraph tests temporary disabling
* [LPT] Fix for eltwise inputs with multiple outputs
* [LPT] Fix for FQ fuse
* [LPT] Reshape by channel, batch temporary disabled
* [nGraph][LPT] MatMul fix for reading FP16 models
* [LPT] 1) Add (not after Convolution/GroupConvolution/MatMul with Constant) to Subtract 2) precision selection fix: MultiplyToGroupConvolution quick fix
* [LPT] DenseNet improvements: AddTransformation: Add to Subtract + tests
* [LPT] AddTransformation refactoring
* [LPT] AddTransformation tests temporarily disabled
* [LPT] ReshapeTransformation improvements: degradation fix
* [LPT] code style fix
* [LPT] Concat tests temporary disabling
* [LPT] tests unification
1) plugin tests: added test-cases and nGraph-validation for clamp, split and variadic split
2) func tests: added test-cases
3) transformNGraph: added the ability to run additional transformations
* [LPT] split & variadic split merge fix
* [LPT] Clamp: added support for asymmetric quantization
* [LPT] added DequantizationAttr run-time attribute
* [LPT] debug info removal
* [LPT] ConcatTransformation: zero point fix
* [LPT] CNNNetwork ReLU transformation quick fix
* [LPT]
1) Concat fix
2) ConcatMultiChannels fix
3) Added "Concat with Split" test-cases
4) Subgraph fix
* [LPT]
1) Concat fix
2) Added "Concat with different precision on childs" test-case
* [LPT] concat fix Ubuntu18
* [LPT] Concat test fixes
* [LPT] Not fp32 FQ input support
* [LPT] MatMul Fix + separateInStandaloneBranch Fix
* [LPT] Fix reference input types in mish fusion tests
* [LPT] Fix cpuFuncTests on CentOS building
* [nGraph][LPT] ScaleShift 2d, 3d nGraph conversion enabling
* [LPT] 1) FullyConnected workaround removing 2) validate_nodes_and_infer_types for LPT
* [ngraph] Add check for children for ConvertSubtract
* [LPT] Squeeze/Unsqueeze tests unification
* [LPT] Squeeze/Unsqueeze change signature for getReference/getOriginal
* [LPT] Mul & Add -> ScaleShift quick fix
* [LPT] nGraph tests temporary disabling
* [LPT] code style fix
* [LPT] code style fix #2
* [LPT] nGraph tests temporary disabling
* [LPT] code style fix #3
* [LPT] shared plugin tests temporary disabling
* [LPT] cleanup
* [LPT] nGraph unit_tests temporary disabling
* [LPT] nGraph unit tests disabling #2
* [LPT] nGraph tests disabling
* [LPT] nGraph tests temporary disabling
* [LPT] WA removing
* [LPT] CentOS compilation fix
* [LPT] KMB WA to avoid compilation error
* [LPT] functional test temporary disabling
* [nGraph] code style fixes
* [LPT] ConcatTransformation: data movement operation as intermediate handling
* [LPT] FuseSubtractToFakeQuantize after VariadicSplit
* [LPT] ConcatWithSplitTransformation functional test temporary disabling
* [LPT] Clamp and ConcatWithDifferentPrecisionsOnChilds: tests fix
* [LPT] MatMul: bert-nv-mlperf-quantized fix
* [LPT] Add to convolution biases fuse fix
* [LPT] GPU plugin tests fixes
* [LPT] Normalize GPU plugin tests fix
* [LPT] test-commit
* [LPT] CLDNN Plugin FP16 conversion
* [LPT] AvgPool: update precision if there is no FQ after + convolution precision limitation on activation
* [LPT] Convolution fixes
* [LPT] FuseSubtractToFakequantize & FuseMultiplyToFakeQuantize improvement
* [LPT] FuseSubtractToFakeQuantize test fix
* [LPT] FuseSubtractToFakeQuantizeTransformation tests
* [LPT] code style fix
* [LPT] AvgPool child recursive extend
* [LPT] AvgPool tests + fix
* [LPT] compilation quick fix
* [LPT] Add to convolution biases fuse fix
* [LPT] Linux issues: MatMulWithOptimizedConstantFakeQuantizeTransformation temporary disabled
* [LPT] Normalize GPU plugin tests fix
* [LPT] test-commit
* [LPT]
1) added the ability to create sub without dequantizationAttribute
2) fixed optimizeMulAfter: added copying rt_info
3) Tests Unification: Convolution transformation
4) added cleanRunTimeInfo into Network Helper
* [LPT] Tests Unification: GroupConvolution
* [LPT] removed debug info
* [LPT] functional tests for Convolution & GroupConvolution extending
* [LPT] [MatMul] Quick fix ubuntu error
* [LPT] MatMulTransformation quick test fix: one constant for both intervals
* [nGraph] code style fix
* [LPT] added output_precision to NormalizeIE
* [nGraph] NormalizeIE fix for LPT support
* [LPT] nGraph WA removal
* [LPT] fixed fillSubgraph for concat multi channels
* [LPT] MatMul fix
* [nGraph] WA removal: 1) nGraph tests enabling 2) LPT extending: do not handle in FP32
* [LPT] nGraph WA removal: function tests skip config rollback
* [LPT] WA removal: precision propagation fix
* [LPT] ConvertMulOrAddFinally transformation extending
* [nGraph] ConvolutionMultiplyFusion rollback (move from legacy to common)
* [nGraph] ConvertMulAddToScaleShiftOrPower: WA removal
* [nGraph] TypeRelaxed: WA removal
* [nGraph] WA removal: TypeRelaxed
* [LPT] WA removal: ConcatTransformation
* [nGraph] WA removal: Eltwise & ConvertMulOrAddFinally fixes to support LPT
* [nGraph] MulAddConversion fix: 2D & 3D ScaleShift are supported
* [nGraph] VisualizeTree extending
* [LPT] FakeQuantizeDequantization extending: check element wise dequantization operation
* [LPT] FakeQuantizeDequantization extending: SubtractMultiplyToMultiplyAddTransformation & WeightableLayerTransformation
* [LPT] Convolution + test infrastructure update
* [LPT] GPU compilation error
* [nGraph] BatchNorm plugin tests: input tensor definition
* [LPT] LowPrecisionTransformer::isFunctionQuantized was added
* [nGraph] WA final cleanup
* [nGraph] ScaleShiftIE quick fix
* [LPT] Functional tests: added test-cases "Concat with intermediate with constant"
* [LPT] Transformer::isNetworkquantized fix
* [LPT] SubtractMultiplyToMultiplyAdd zero Add remove: fix for ssd300 on gpu
* [LPT] MultiplyToGroupConvolution not transform on Const
* [LPT] workaround for negative scales
* [LPT] Convert standalone dequantization Mul,Sub,Add to ScaleShift
* [LPT] SubtractMultiplyToMultiplyAdd test fix
* [LPT] Clamp transformation: GPU tests fix
* [LPT] Transformer tests
* [LPT] FakeQuantizePrecisionSelectionTransformation was disabled for GPU
* [LPT] TransformerIsFunctionQuantized refactoring
* [nGraph] code style fix
* [LPT] mobilenet_v2_tf_depthwise test update
* [LPT] TMP: dequantization folding
* [LPT] Elementwise transformation fix: dequantization operations constant folding
* [LPT] cleanup
* [LPT] denormal values fix
* [LPT] FuseFakeQuantize test fixed + negative multiply case
* [LPT] FP32 -> FP16 conversion info
* [LPT] FQ dot interval support + swapMultiplyAdd safe division
* [LPT] test fix
* [LPT] Tests for dot interval on FQ + tests for addTransformation enabling
* [LPT] Clamp transformation fix
* [LPT] FQ prec selection test fix
* [LPT] Clamp test case
* [LPT] Concat division precision fix
* [LPT] cleanup
* [LPT] merge fix
* [LPT] WIP: MatMul asymmetric quantization fix (BERT)
* [LPT] MatMulWithOptimizedConstantFakeQuantizeTransformation disabled
* [LPT] GPU Plugin set config fix
* [LPT] Fix merge mistakes
* [LPT] Rollback device specific INT8
* [LPT] ReshapeFullyConnected fix: FullyConnected output fix
* [LPT] bert-base-chinese GPU fix
* [ngraph/LPT] Tests for fix convert_mul_or_add_finally with dequantization
[ngraph/LPT] Fix convert mul_or_add_finally with dequantization
* [LPT] ScaleShift dim < 4 only dequantization conversion
* [LPT] MatMul transformation tests extending
* [LPT] ReshapeFullyConnected legacy transformation: LPT test case addition
* [nGraph] VisualizeTree extending: display property names to simplify search
* [LPT] getDequantization extending
* [LPT] MulAddToScaleshiftOrPower: out precision fix & tests
* [LPT] Multiply to ScaleShiftIE: Multiply transformation: remove DEQUANTIZATION if not valid
* [LPT] Concat test case
* [nGraph] try to fix opencv compatibility
* [nGraph] nGraph code style fix
* [LPT] InPlace dequantization folding
* [LPT] Multiply constant folding test
* [LPT] Fix plugin test case for MatMulWithOptimizedConstantFakeQuantize
[LPT] Enable MatMulWithOptimizedConstantFakeQuantize plugin test
* [LPT] Convolution transformation: mulConst shape fix
* [LPT] INT8 Constant folding branch for elementwise ops optimization removal
* [LPT] eltwise for const branch fix
* [LPT] linux fix
* [LPT] Multiply test refactoring
* [LPT] Convert Fuse in Constant + tests
* [LPT] function comparison: runtime info comparison rollback
* [LPT] linux build fix
* [LPT] linux build fix2
* [LPT] MatMul transformation limitation was added to be similar as CNNNetwork LPT
* [LPT] Reshape transformation update: don't broadcast by batch
* [LPT] MatMul transformation limitation was added to be similar as CNNNetwork LPT - refactoring
* [LPT] MatMul transformation: transpose input tensors fix
* [LPT] checkElementwise for AddTransformation WA: should be moved to getDequantization
* [LPT] merge fix
* [LPT] MatMul fix & tests
* [LPT] AddTransformation tests
* [LPT] Interpolate transformation enabled
* [LPT] constant folding before LPT
* [LPT] WIP: not completed tests
* [LPT] GPU degradation fix
* [LPT] FuseConvert workaround
* [LPT] code cleanup
* [LPT] Interpolate GPU test quick fix
* [LPT] GroupConvolution fix
* [LPT] Fix fusing multiply for non-dequantization layers
* [LPT] GPU pipeline update: enableInt8 initialization place update
* [LPT] tests compilation fix
* [LPT] merge fix
* [LPT] tests enabling
* [LPT] merge issue resolving
* [LPT] LPT CNNNetwork usage macros: part #1: source code
* [LPT] LPT CNNNetwork usage macros: part #2: cmake files update and tests adoption
* [LPT] LPT workaround from nGraph core removing
* [LPT] previous LPT version tests
* [LPT] inference_engine_lp_transformations was brought back
* [LPT] replace_node rollback
* [LPT] ConvertSubtract fix
* [LPT] GPU: baselineIsFP16 reuse fix
* [LPT] FakeQuantizeTransformation: GPU workaround: I32 -> FP32 Convert is not fused
* [LPT] AvgPool output precision workaround
* [LPT] Group convolution precision + Subtract to ScaleShift const fix
* [LPT] SubMulToMulAdd & Transpose: action-recognition-0001 fix
* [LPT] Transpose: added test with per-tensor quantization
Co-authored-by: Aleksandr Pertovsky <aleksandr.pertovsky@intel.com >
Co-authored-by: Zinoviev, Vladimir <vladimir.zinoviev@intel.com >
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com >
Co-authored-by: Gorokhov Dmitriy <dmitry.gorokhov@intel.com >
2020-10-23 13:22:55 +03:00
Andrew Bakalin
3dfec639f0
[VPU][GT][Tests] Make gemmTranspose pass layout agnostic ( #2666 )
...
* [VPU][GT] Make permTranspose pass layout agnostic
* [IE][Tests] Improve MatMul common test class
* [VPU][Tests] Add tests for MatMul
* [VPU][Tests] Review fixes
* [Tests] Add combineShapes for MatMul
* [VPU][GT] Fix assertion condition
2020-10-22 15:04:53 +03:00
Ilya Churaev
e364271cf6
Constant->Result networks ( #2639 )
...
* Added tests
* Changed iterator algorithm
* Fixed legacy tests
* Added plugin tests
* Disabled some tests
* Removed parameter tests
* Fixed conversion
* Use old approach for old tests
* Temp commit
* Fixed iterator
* Fixed some tests
* Change logic to compare iterators
* Disabled CPU functional test
* Temp commit
* Disabled test for GPU
* Fixed network copy
* Try to fix test for Windows
* Disabled test for GNA
* Disable plugin tests
* Disable legacy test
* Remove redundant code
2020-10-22 13:22:38 +03:00
Irina Efode
a2e49469b5
Cleanup single_layer_tests ( #2716 )
2020-10-20 14:31:59 +03:00
Anton Potapov
8715b60d88
[PP GAPI] Extended plug-ins shared precision conversion tests to use ( #2677 )
...
`GetBlob()` as well
- tests were extended to cover the case when input tensors are copied into
the Blob returned by `InferRequest::GetBlob`
- the channel number of the input tensor is made a test parameter
2020-10-19 12:35:59 +03:00
Ivan Tikhonov
84b5fc51dc
[opset5] ngraph implementation of Loop op ( #2583 )
...
* Loop op ngraph implementation, update IE IR Reader and ngraph to cnn converter
* refactoring SubGraphOp class
* type prop unit tests
* ngraph code style
* update comment
* single layer tests for Loop operation
* fix file name
* Add SpecialBodyPorts attribute in Loop op, update single layer tests
* add several new tests cases, strict checks in Loop impl, temporary disable single layer tests
* ngraph codestyle, refactoring, clone_new_args test
* resolve review remarks
* fix build
* fix tests
* add a new constructor of Loop op, resolve review remarks
2020-10-19 06:53:46 +03:00
Roman Lyamin
cc569d2254
[IE CLDNN] Added HSigmoid operation ( #2700 )
2020-10-18 20:47:22 +03:00
Kamil Magierski
95d7c29628
[GNA] Fix remove layer + identity layer insertion ( #2626 )
...
* [GNA] Fix remove layer + identity layer insertion
test stub
Test impl
style
hpp style
* disable FP16 for GPU
2020-10-16 13:23:32 +03:00
Gabriele Galiero Casay
c9b16a79f5
Reference Implementation for RegionYolo operator ( #2474 )
2020-10-15 22:30:12 +02:00
Katarzyna Mitrus
fadd16ce89
ReorgYolo reference implementation ( #2384 )
...
* Align ReorgYolo to the spec (vector strides -> int stride)
* ReorgYolo ref impl
* ReorgYolo evaluate method
* ReorgYolo tests
* Tests update
* Style apply
* Add some comments
* Code refactor
* Comment update
* Style apply
* Build fix, mark evaluate as override
* Revert "Align ReorgYolo to the spec (vector strides -> int stride)"
* Use int_executable instead of evaluate
* Use char* instead of templates
* Code refactor
* Comment update
* Code review comment
* Add constructor aligned with spec
* Update shape validation
* Update attributes tests
* Add type_prop tests
* Update backend tests
* Add single layer tests
* Update the spec
* Remove wrong transformation test
2020-10-15 13:42:21 +03:00
Gleb Kazantaev
94eacc6544
Move legacy transformations and ops to legacy library ( #2624 )
...
* Initial movement
* Divided transformations to common and legacy
* Changed ngraph visibility to ie_api.h
* CommonTransformations to Internal
* New transformations location structure
* fixed typo; move convert_quantize_dequantize to common
* Added control_flow folder
2020-10-14 10:58:01 +03:00
Nikita Kudriavtsev
5ce622f4f4
[IE Myriad] Fix layer tests for logical_and ( #2622 )
2020-10-12 16:37:31 +03:00
Alexander Perepelkin
a1b8a11000
Allow to specify both in/out precision, add in/out layout in tests ( #2516 )
...
* test definitions
* CPU plugin shared tests
* CPU plugin custom tests
* GNA plugin shared tests
* GPU plugin shared tests
* MYR plugin shared tests
* TML plugin shared tests
2020-10-11 11:05:55 +03:00
Liubov Batanina
7f78dd797e
[IE Tests] Added NormalizeL2 tests ( #2327 )
...
* Added NormalizeL2 tests
* Added NormalizeL2 reference
* Add nGraph tests
* Fix tests
* Added NormalizeL2 builder
2020-10-08 07:23:25 +03:00
Kamil Magierski
4c1ae9b339
[GNA] Issue 39975 - cascade concat fix ( #2486 )
...
* concat input not used fix rough implementation
* [GNA] Cascade concat input not assigned fix
* reduce copying in recursive function
* [GNA] Aligned cascade concat test
2020-10-06 11:01:19 +03:00
Kamil Magierski
0e62e5e17f
[GNA] FIX CopyLayerPass for concat parent cases ( #2485 )
...
* [GNA] fix cases when layer output is used in both memory and concat parent layer
* comma fixes
* Issue-36189 CopyLayerPass for concat parent cases fix test
* Fix test for CPU
* Remove test for GPU
2020-10-06 11:00:38 +03:00
Kamil Magierski
8abdc32676
[GNA] Fix LSTM Cell channel C being 0 on output ( #1174 )
...
* [GNA] get output before activation test
[GNA] SubstituteScaleShiftBroadCastPass fix for cases when there are multiple scaleshifts as an output from the layer
[GNA] Generalize Fix where LSTMCell output was zero due to being fused into activation
[GNA] Fix LSTMCell being zero on channel C if being output layer
* linux build fix
2020-10-06 10:59:03 +03:00
Anton Pankratv
b21d0fe978
Removed similar behaviour tests ( #2528 )
2020-10-05 12:21:03 +03:00
Andrey Dmitriev
949e23d0e8
[GNA] specific execution order for delayer copy layer ( #2117 )
...
[GNA] specific execution order for delayer copy layer + Test
2020-10-01 15:09:53 +03:00
Mikhail Kozlov
6ae332b072
Fix virtual class inheritance for single layer test detection output ( #2472 )
2020-10-01 14:34:04 +03:00
Egor Churaev
a05333217c
Support operation Interpolate-4 in OpenVINO ( #1596 )
...
JIRA: 26973
2020-10-01 11:41:51 +03:00
Vladislav Vinogradov
d28a5d6c4f
[CMAKE] Introduce FASTER_BUILD experimental feature ( #2438 )
...
It uses CMake 3.16 built-in utilities to speed up build time:
* Unity builds
* Precompiled headers
The feature is controlled via `ENABLE_FASTER_BUILD` CMake option (disabled by default).
The option is available only with CMake >= 3.16.
The feature is enabled per-target via `ie_faster_build` function.
Some observations:
* Don't have actual numbers for compile time, but subjectively can see
a speed-up locally with VS 2019.
* Unity builds give a much larger effect, but have some restrictions on source files,
so they are not used everywhere.
2020-09-28 18:53:11 +03:00
Andrey Markelov
63fbe78d76
Fix tile layer test header ( #2315 )
2020-09-28 17:13:49 +03:00
Anton Pankratv
c8233b7b7c
Fixed canStartSeveralAsyncInsideCompletionCallbackWithSafeDtor ( #2404 )
2020-09-28 11:08:34 +03:00
Anton Pankratv
863a7bd663
Holder test thread safe for all ( #2425 )
2020-09-25 21:12:24 +03:00
Andrew Bakalin
03d184726a
[IE][VPU]: Supports I32 for some eltwise precisions + tests ( #2364 )
2020-09-25 18:29:34 +03:00
Aleksandr Korolev
eda9498b79
[IE][VPU]: Reduce tests execution time ( #2378 )
...
* [IE][VPU]: Reduce tests execution time
* [IE TESTS] Remove 'ConfigurePlugin()' from 'memory_LSTMCell.hpp'
* [IE VPU TESTS] Myriad conv layer tests were changed
Co-authored-by: Maksim Doronin <maksim.doronin@intel.com >
Co-authored-by: kora6 <kora6@github.com >
2020-09-25 14:24:12 +03:00
Liubov Batanina
f8a17a1317
Changed pad data type ( #2354 )
2020-09-23 13:10:58 +03:00
Kamil Magierski
9fca26b21e
[GNA] LSTMCell fixes ( #2080 )
2020-09-22 18:13:28 +03:00
Eugene Smirnov
f0b10bf071
[GNA] fake quantize single layer tests for GNA plugin ( #2060 )
...
* fake quantize single layer test for GNA plugin
* implemented fakequantize for fp32 case as an activation function
* added proper seed randomisation within single test run
* [GNA] [FAKEQUANTIZE] fixed ref-fp32 implementation on GNA to use nearbyint instead of roundf
* [GNA] [FAKEQUANTIZE] restored random seed
* [GNA][FAKEQUANTIZE] disabled 4d and integer tests for FakeQuantize
* [GNA][FAKEQUANTIZE] updated ngraph FakeQuantize builder to accept seed
* [GNA][FAKEQUANTIZE] aligned FP calculation order on GNA with reference ngraph - this, however, gives more error
* [CPU] build of FakeQuantize tests restored
* [TESTS][FAKEQUANTIZE] ignore extra inferRequests for disabled tests
* [GNA] Fixed legacy unit test failures that appeared due to an extra check for a possible segfault in import frames
* [GNA] adapted fuse multiple identities for the FakeQuantize layer
* [GNA]fp32 runtime code review
2020-09-21 14:22:14 +03:00
Alexander Perepelkin
c13ec24e1e
Specify in and out precisions separately, add layouts for convolution ( #2211 )
...
* Specify in and out precisions separately, add layouts for convolution
* Align convolution layer tests instantiations with updated definition
* Align convolution layer tests instantiations with updated definition for template plugin
* net, in, out prcs
Co-authored-by: Mikhail Treskin <mikhail.treskin@intel.com >
2020-09-21 13:03:01 +03:00
Liubov Batanina
6839ef7699
Add TopK tests ( #2165 )
2020-09-21 11:37:22 +03:00
Ivan Tikhonov
1b7dfc6e4c
Fix bidirectional mode in reference implementations of GRU/LSTM/RNN Sequences ( #2264 )
...
* fix bidirectional case in references of sequences ops, enable decomposition of bidirectional cases in CommonOptimizations
* introduce new opset5, include GRU/RNN/LSTM Sequences to opset5
* Revert "introduce new opset5, include GRU/RNN/LSTM Sequences to opset5"
This reverts commit 73c22a11db .
2020-09-18 10:14:01 +03:00
Irina Efode
12abb2eb49
[IE TESTS] CoreThreading_LoadNetwork tests were disabled for GPU plugin ( #2245 )
2020-09-16 16:46:02 +03:00
Gorokhov Dmitriy
83e96891ca
Revert "[IE TESTS] dynavic batch for mvn layer ( #1010 )" ( #2257 )
...
This reverts commit 2e3378c50f .
2020-09-16 14:11:48 +03:00
Anton Potapov
d590144545
[PP GAPI] Added tests to cover existing precision conversions done by ( #1976 )
...
some plugins
- added shared parameterized tests
- instantiated for template plugin
- instantiated for cpu plugin
- fixed CPU plugin to properly handle U16 input
- fixed CPU reverse_sequence primitive to allow input/output tensors to
be in FP32 only
- updated ngraph test_simple_computation_on_ndarrays to not expect
failure on U16 input
2020-09-16 12:41:14 +03:00
Anna Alberska
3ecee2ce49
[GNA] fix scale factor calculation for unfused bias after fc ( #2097 )
...
* [GNA] fix scale factor calculation for unfused bias after fc
* change check
* add test
* apply requested changes
* cpplint fix
* apply test changes
* modify model for test to match ::op::
2020-09-15 16:04:06 +03:00
Ilya Churaev
1bae5504ca
Fixed query network for networks with KSO ( #2201 )
...
* Added a test to reproduce QueryNetwork with KSO
* Fixed QueryNetwork for networks with KSO
* Added additional test
2020-09-15 14:02:15 +03:00
Edward Shogulin
ac2370b420
[LPT] Copy constant with several outputs before blob update (cherry-pick to master) ( #2198 )
...
* [LPT] Copy constant implementation
* [LPT] the same Constant ops as FQ interval boundaries
2020-09-15 09:18:58 +03:00
Nikita Kudriavtsev
ef2581d5c6
[IE Myriad][IE Tests] Activation layer's constants parametrization. ( #2071 )
...
CI passed: https://gitlab-icv.inn.intel.com/inference-engine/product-configs/merge_requests/870
2020-09-10 12:56:21 +03:00
Anton Voronov
0e34b392ee
[CPU] Supported depthwise 6d, 7d, ..., added test ( #971 )
2020-09-10 08:33:38 +03:00
Bartosz Sochacki
8b87e1a477
[GNA] Fix for concat layer with >2 inputs ( #1475 )
...
* Fix for concat layer with more than 2 inputs
Signed-off-by: Bartosz Sochacki <bartosz.sochacki@intel.com >
* Fixed check if affine is used for crop layer
Signed-off-by: Bartosz Sochacki <bartosz.sochacki@intel.com >
* code cleanup for fix affine layer check
Signed-off-by: Bartosz Sochacki <bartosz.sochacki@intel.com >
* added test for concat layer with multiple inputs
* simplified test to use fewer layers
* fixed code style
* fixed coding style
* addressed review comments and one more issue that appeared during testing
* fixed code style errors
* scale factor propagation for concat layer with multiple inputs
* fix for a case when all inputs to concat are activation layers
* fix for linux compilation - C++14 is not enabled and fails on lambda with auto parameters
* corrected current year in headers in concat multi input tests
* fixes for code review issues raised by Denis Orlov
* enabled integer mode computation in GNA concat multi input test
* removed 1 space per review comment
* a fix to fail when not all scale factors are equal
* added GNA_DEVICE_MODE config to concat multi input test
* corrected searching for a next input to concat layer
* changed selection of 2nd candidate for source quant value
* code style fix - else and brackets should be in the same line
* small code improvement
* fix for mixing line endings
* addressed an endless requantization loop and fixed failing tests
2020-09-09 14:55:07 +03:00
Alexander Peskov
ad74204402
[TEST] One more ReduceSUM func test
...
Special test case with input values which cannot be correctly processed via
decomposition with int AVG pool layer.
Signed-off-by: Alexander Peskov <alexander.peskov@intel.com >
2020-09-09 12:41:31 +03:00
Andrey Dmitriev
8e6d9470bb
[GNA] Handling input orientation ( #1851 )
...
Added test
Add fix
2020-09-08 10:46:10 +03:00
Ivan Tikhonov
063c7ef6b9
GRU/RNN/LSTM sequence ops, reference implementations, single layer tests ( #1594 )
...
* gru/rnn sequences
* update gru/rnn sequences ops, add unit tests
* enable sequence transformations for cpu plugin
* ngraph codestyle
* update tensor iterator to rnn/gru/lstm sequence transformations, add unit tests
* ngraph codestyle
* add visitors for ngraph ie ops, fix a bug with incorrect axis, fix ngraph to ngraph ie conversion
* update GRUSequence/GRUSequenceIE according to plugin format
* fix ngraph ie implementations according to plugin restrictions
* fix naming issue
* adapt unit tests to accordance to new changes
* strict checks, additional unit tests
* add descriptions for transformations, fix unit tests
* enable ti to sequence and unroll transformations in plugins for testing
* disable tensor iterator ngraph reader tests
* delete unnecessary cmake file
* fix includes
* clean up, resolve review comments
* move ti to sequence transformation to ti folder
* validate_and_infer_types() implementation
* input parameter validation for LSTM, GRU and RNN
* style-check applied
* Add LSTMSequence dynamic shape validation and test props for RNNCell, GRUCell, LSTMCell and LSTMSequence.
* recurrent_sequence.hpp moved to ngraph/core/include/ngraph/op/util/
* style check applied
* removed unused variable from LSTMSequence::validate_and_infer_types
* Add missing newline mark at the end of file.
* Add suppression macro for FusedOp deprecation.
* Add element type initialization
* transpose,rnn cell reference implementations
* Apply PR review remarks
* reference implementations for cells op, single layer tests, align lstm cell/sequence according to the spec
* lstm/gru/rnn cell decomposition transformations
* ngraph codestyle
* clean up
* ngraph code style
* change inheritance of Cells, fix build
* fix build
* fix build again
* remove Peepholes from LSTMSeq, fix copy_runtime_info in transformations
* Rewrite tests to use gtest exception assertions.
* resolve tests issues
* ngraph codestyle
* add missed files
* fix typeprop tests
* fix lstm sequence checks
* fix arm build
* fix arm again
* delete unnecessary file
* add convert weights format function, enable lstm test, resolve review comments
* add ngraph builders
* ngraph codestyle
* fix unit tests
* revert transpose reference implementation
* move ti to sequences transformation to another branch, resolve review comments
* resolve review comments
* revert fix in ie_layer_validators
* revert LSTM Cell v0, add LSTMCell v1, update transformation lstm_cell_to_cell_ie
* v1 version of LSTMCell op
* LSTMSequence v1 operation, exclude LSTMSeq from opset4
* fix python api tests
* resolve review comments, tests for decomposition transformations, switch lstm cell to opset4 in mo
* references impl for RNN/GRU/LSTM Sequences, single layer tests, bidirectional transformation
* fix unit tests
* process dynamic ranks of rnn/gru/lstm ops
* remove sequences specifications from opset4
* resolve review comments
* fix validate_and_infer_types of GRU/RNN sequences
Co-authored-by: Szymon Durawa <szymon.durawa@intel.com >
2020-09-08 10:31:44 +03:00
Edward Shogulin
dc8bbd930f
[LPT] Multiinput with one parent and FQ with three Constant ( #2066 )
...
* [LPT] FakeQuantize with three constants
* [LPT] Dequantization ops on the inputs with one parent
2020-09-07 20:31:45 +03:00