Alexander Peskov
5ac26ff3b6
[TEST] One more trivial test case on Loop
...
Also fixed compilation with gcc4.8
Signed-off-by: Alexander Peskov <alexander.peskov@intel.com>
2020-11-02 12:37:48 +03:00
Alexander Peskov
d7e3e92b64
[TEST] Several more Loop tests with static shapes
...
Signed-off-by: Alexander Peskov <alexander.peskov@intel.com>
2020-11-02 12:37:48 +03:00
Alexander Peskov
56dc06d09f
Avoid '-' symbol in test names
...
Signed-off-by: Alexander Peskov <alexander.peskov@intel.com>
2020-11-02 12:37:48 +03:00
Andrey Babushkin
4acd117c8d
Temporarily skip ExecGraphTests.CheckExecGraphInfoSerialization on GPU (#2921)
...
* Skip ExecGraphTests.CheckExecGraphInfoSerialization on GPU
* [execution_graph_tests] Add test skipping macro
* Add missing import
2020-11-01 12:55:50 +03:00
Maxim Kurin
28de789993
[IE][VPU]: GatherND layer & tests (#2710)
...
* GatherND layer & test
* Update vpu firmware 1452
2020-10-31 02:02:23 +03:00
Edward Shogulin
997cc1e863
[LPT] nGraph nodes naming fix (#2822)
...
* [LPT] functional tests: FakeQuantize with dynamic intervals
* [LPT] decomposeFakeQuantize: removed debug info
* [LPT] Add NetworkHelper::mark_as_dequantization_op function
[ngraph] Fix compare runtime info function
[LPT] Fix test cases with no DEQUANTIZATION runtime attribute
[LPT] Change include path for dequantization op
* [LPT] Remove Subtract functional test, enable and rename legacy tests
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Aleksandr Pertovsky <aleksandr.pertovsky@intel.com>
2020-10-30 23:23:35 +03:00
Nikolay Shchegolev
257bfc9944
[CPU] GatherND implementation. (#2757)
2020-10-29 19:28:31 +03:00
Aleksandr Korolev
04b7822761
[IE][VPU][TESTS] Fix vpu split with unusable outputs & test (#2718)
...
* Fix vpu split with unusable outputs & test
Co-authored-by: kora6 <kora6@github.com>
2020-10-29 15:12:10 +03:00
Maxim Andronov
fdbfab8546
[CPU] Add tests for SetBlob + I64 (#2402)
2020-10-29 11:34:29 +03:00
Gorokhov Dmitriy
abb8817cf6
[CPU] Generic JIT Eltwise implementation (#1464)
2020-10-28 09:16:28 +03:00
Roman Lyamin
77365bcb4c
[IE CLDNN] Added Round-5 operation (#2838)
2020-10-27 10:56:15 +03:00
Ilya Lavrenov
166ab89b95
Reorganize LPT: (#2803)
...
- inference_engine_lp_transformations keeps nGraph LPT
- inference_engine_lp_transformations_legacy keeps the old CNNLayer-based LPT
2020-10-26 14:10:17 +03:00
Patryk Elszkowski
5036b12544
enable reference implementation in CTCGreedyDecoder single layer test (#2680)
...
* enable reference implementation for CTCGreedyDecoder single layer tests
* update unit test to have blank_index
* remove merge_repeated disable flag for CPU test because CPU impl always merges
* add CTCGreedyDecoder single layer tests for CPU
* changes to match xPU implementations
* apply reviewers suggestions
Co-authored-by: Patryk Elszkowski <patryk.elszkowki@intel.com>
2020-10-26 13:34:50 +03:00
Vladimir Paramuzov
980bbd172a
[IE CLDNN] Enabled more functional tests and added several fixes to ops implementations (#2763)
2020-10-24 23:38:13 +03:00
Edward Shogulin
c2271da637
Es/lpt/lpt to ngraph fixes2 with master (#2671)
...
* [LPT] Replace creation of dequantization with factory
* [ngraph][LPT] Add ScaleShift replace for dequantization operations
* [LPT] SubtractMultiplyToMultiplyAdd refactoring
* [LPT] Code style fix
* [LPT] Edit SubtractMultiplyToMultiplyAdd transformation for dequantization
* [LPT] Linux compilation quick fix
* [LPT] [WIP] runtime info applying
* [LPT] Concat transformation functional tests extending
* [LPT] MultiplyToConvolution + Subtract to add fusing + improvements in LowPrecisionTransformer
* [LPT] linux compilation error fix
* [LPT] compilation error
* [LPT] MultiplyToGroupConvolution fix: 5D support
* [LPT] Multiply transformation extending: FQ weights support - wip
* [LPT] FQ folding & precision selection
* [LPT] code style fixes
* [LPT] code style fixes
* [LPT] Linux compilation error fix
* [LPT] SubtractMultiplyToMultiplyAdd: refactoring
* [LPT] Tests fixes
* [LPT] MultiplyToGroupConvolution tests
* [LPT] Convert subtract with int inputs to Eltwise sub
* [LPT] Constant folding fix for quant models
* [LPT] 1) Asymmetric quantization improvement 2) tests extending
* [LPT] 2 fixes for se_resnext_50
* [LPT] Add transformation priority branch selection test
* [LPT] AddMultiplyFusion: legacy transformation quick fix
* [LPT] nGraph tests temporary disabling
* [LPT] Fix for eltwise inputs with multiple outputs
* [LPT] Fix for FQ fuse
* [LPT] Reshape by channel, batch temporary disabled
* [nGraph][LPT] MatMul fix for reading FP16 models
* [LPT] 1) Add (not after Convolution/GroupConvolution/MatMul with Constant) to Subtract 2) precision selection fix: MultiplyToGroupConvolution quick fix
* [LPT] DenseNet improvements: AddTransformation: Add to Subtract + tests
* [LPT] AddTransformation refactoring
* [LPT] AddTransformation tests temporarily disabled
* [LPT] ReshapeTransformation improvements: degradation fix
* [LPT] code style fix
* [LPT] Concat tests temporary disabling
* [LPT] tests unification
1) plugin tests: added test-cases and nGraph-validation for clamp, split and variadic split
2) func tests: added test-cases
3) transformNGraph: added the ability to run additional transformations
* [LPT] split & variadic split merge fix
* [LPT] Clamp: added support for asymmetric quantization
* [LPT] added DequantizationAttr run-time attribute
* [LPT] debug info removal
* [LPT] ConcatTransformation: zero point fix
* [LPT] CNNNetwork ReLU transformation quick fix
* [LPT]
1) Concat fix
2) ConcatMultiChannels fix
3) Added "Concat with Split" test-cases
4) Subgraph fix
* [LPT]
1) Concat fix
2) Added "Concat with different precision on childs" test-case
* [LPT] concat fix Ubuntu18
* [LPT] Concat test fixes
* [LPT] Not fp32 FQ input support
* [LPT] MatMul Fix + separateInStandaloneBranch Fix
* [LPT] Fix reference input types in mish fusion tests
* [LPT] Fix cpuFuncTests on CentOS building
* [nGraph][LPT] ScaleShift 2d, 3d nGraph conversion enabling
* [LPT] 1) FullyConnected workaround removing 2) validate_nodes_and_infer_types for LPT
* [ngraph] Add check for childs for ConvertSubtract
* [LPT] Squeeze/Unsqueeze tests unification
* [LPT] Squeeze/Unsqueeze change signature for getReference/getOriginal
* [LPT] Mul & Add -> ScaleShift quick fix
* [LPT] nGraph tests temporary disabling
* [LPT] code style fix
* [LPT] code style fix #2
* [LPT] nGraph tests temporary disabling
* [LPT] code style fix #3
* [LPT] shared plugin tests temporary disabling
* [LPT] cleanup
* [LPT] nGraph unit_tests tests temporary disabling
* [LPT] nGraph unit tests disabling #2
* [LPT] nGraph tests disabling
* [LPT] nGraph tests temporary disabling
* [LPT] WA removing
* [LPT] CentOS compilation fix
* [LPT] KMB wa to avoid compilation error
* [LPT] functional test temporary disabling
* [nGraph] code style fixes
* [LPT] ConcatTransformation: data movement operation as intermediate handling
* [LPT] FuseSubtractToFakeQuantize after VariadicSplit
* [LPT] ConcatWithSplitTransformation functional test temporary disabling
* [LPT] Clamp and ConcatWithDifferentPrecisionsOnChilds: tests fix
* [LPT] MatMul: bert-nv-mlperf-quantized fix
* [LPT] Add to convolution biases fuse fix
* [LPT] GPU plugin tests fixes
* [LPT] Normalize GPU plugin tests fix
* [LPT] test-commit
* [LPT] CLDNN Plugin FP16 conversion
* [LPT] AvgPool update precision if there is not FQ after + convolution
precision limitation on activation
* [LPT] Convolution fixes
* [LPT] FuseSubtractToFakequantize & FuseMultiplyToFakeQuantize improvement
* [LPT] FuseSubtractToFakeQuantize test fix
* [LPT] FuseSubtractToFakeQuantizeTransformation tests
* [LPT] code style fix
* [LPT] AvgPool child recursive extend
* [LPT] AvgPool tests + fix
* [LPT] compilation quick fix
* [LPT] Add to convolution biases fuse fix
* [LPT] Linux issues: MatMulWithOptimizedConstantFakeQuantizeTransformation temporary disabled
* [LPT] Normalize GPU plugin tests fix
* [LPT] test-commit
* [LPT]
1) added the ability to create sub without dequantizationAttribute
2) fixed optimizeMulAfter: added copying rt_info
3) Tests Unification: Convolution transformation
4) added cleanRunTimeInfo into Network Helper
* [LPT] Tests Unification: GroupConvolution
* [LPT] removed debug info
* [LPT] functional tests for Convolution & GroupConvolution extending
* [LPT] [MatMul] Quick fix ubuntu error
* [LPT] MatMulTransformation quick test fix: one constant for both intervals
* [nGraph] code style fix
* [LPT] added output_precision to NormalizeIE
* [nGraph] NormalizeIE fix for LPT support
* [LPT] nGraph WA removal
* [LPT] fixed fillSubgraph for concat multi channels
* [LPT] MatMul fix
* [nGraph] WA removal: 1) nGraph tests enabling 2) LPT extending: not handled in FP32
* [LPT] nGraph WA removal: function tests skip config rollback
* [LPT] WA removal: precision propagation fix
* [LPT] ConvertMulOrAddFinally transformation extending
* [nGraph] ConvolutionMultiplyFusion rollback (move from legacy to common)
* [nGraph] ConvertMulAddToScaleShiftOrPower: WA removal
* [nGraph] TypeRelaxed: WA removal
* [nGraph] WA removal: TypeRelaxed
* [LPT] WA removal: ConcatTransformation
* [nGraph] WA removal: Eltwise & ConvertMulOrAddFinally fixes to support LPT
* [nGraph] MulAddConversion fix: 2D & 3D ScaleShift are supported
* [nGraph] VisualizeTree extending
* [LPT] FakeQuantizeDequantization extending: check element wise dequantization operation
* [LPT] FakeQuantizeDequantization extending: SubtractMultiplyToMultiplyAddTransformation & WeightableLayerTransformation
* [LPT] Convolution + test infrastructure update
* [LPT] GPU compilation error
* [nGraph] BatchNorm plugin tests: input tensor definition
* [LPT] LowPrecisionTransformer::isFunctionQuantized was added
* [nGraph] WA final cleanup
* [nGraph] ScaleShiftIE quick fix
* [LPT] Functional tests: added test-cases "Concat with intermediate with constant"
* [LPT] Transformer::isNetworkquantized fix
* [LPT] SubtractMultiplyToMultiplyAdd zero Add remove: fix for ssd300 on gpu
* [LPT] MultiplyToGroupConvolution not transform on Const
* [LPT] workaround for negative scales
* [LPT] Convert standalone dequantization Mul,Sub,Add to ScaleShift
* [LPT] SubtractMultiplyToMultiplyAdd test fix
* [LPT] Clamp transformation: GPU tests fix
* [LPT] Transformer tests
* [LPT] FakeQuantizePrecisionSelectionTransformation was disabled for GPU
* [LPT] TransformerIsFunctionQuantized refactoring
* [nGraph] code style fix
* [LPT] mobilenet_v2_tf_depthwise test update
* [LPT] TMP: dequantization folding
* [LPT] Elementwise transformation fix: dequantization operations constant folding
* [LPT] cleanup
* [LPT] denormal values fix
* [LPT] FuseFakeQuantize test fixed + negative multiply case
* [LPT] FP32 -> FP16 conversion info
* [LPT] FQ dot interval support + swapMultiplyAdd safely division
* [LPT] test fix
* [LPT] Tests for dot interval on FQ + tests for addTransformation enabling
* [LPT] Clamp transformation fix
* [LPT] FQ prec selection test fix
* [LPT] Clamp test case
* [LPT] Concat division precision fix
* [LPT] cleanup
* [LPT] merge fix
* [LPT] WIP: MatMul asymmetric quantization fix (BERT)
* [LPT] MatMulWithOptimizedConstantFakeQuantizeTransformation disabled
* [LPT] GPU Plugin set config fix
* [LPT] Fix merge mistakes
* [LPT] Rollback device specific INT8
* [LPT] ReshapeFullyConnected fix: FullyConnected output fix
* [LPT] bert-base-chinese GPU fix
* [ngraph/LPT] Tests for fix convert_mul_or_add_finally with dequantization
[ngraph/LPT] Fix convert mul_or_add_finally with dequantization
* [LPT] ScaleShift dim < 4 only dequantization conversion
* [LPT] MatMul transformation tests extending
* [LPT] ReshapeFullyConnected legacy transformation: LPT test case addition
* [nGraph] VisualizeTree extending: property names displaying to simplify search
* [LPT] getDequantization extending
* [LPT] MulAddToScaleshiftOrPower: out precision fix & tests
* [LPT] Multiply to ScaleShiftIE: Multiply transformation: remove DEQUANTIZATION if not valid
* [LPT] Concat test case
* [nGraph] try to fix opencv compatibility
* [nGraph] nGraph code style fix
* [LPT] InPlace dequantization folding
* [LPT] Multiply constant folding test
* [LPT] Fix plugin test case for MatMulWithOptimizedConstantFakeQuantize
[LPT] Enable MatMulWithOptimizedConstantFakeQuantize plugin test
* [LPT] Convolution transformation: mulConst shape fix
* [LPT] INT8 Constant folding branch for elementwise ops optimization removal
* [LPT] eltwise for const branch fix
* [LPT] linux fix
* [LPT] Multiply test refactoring
* [LPT] Convert Fuse in Constant + tests
* [LPT] function comparison: runtime info comparison rollback
* [LPT] linux build fix
* [LPT] linux build fix2
* [LPT] MatMul transformation limitation was added to be similar as CNNNetwork LPT
* [LPT] Reshape transformation update: don't broadcast by batch
* [LPT] MatMul transformation limitation was added to be similar as CNNNetwork LPT - refactoring
* [LPT] MatMul transformation: transpose input tensors fix
* [LPT] checkElementwise for AddTransformation WA: should be moved to getDequantization
* [LPT] merge fix
* [LPT] MatMul fix & tests
* [LPT] AddTransformation tests
* [LPT] Interpolate transformation enabled
* [LPT] constant folding before LPT
* [LPT] WIP: not completed tests
* [LPT] GPU degradation fix
* [LPT] FuseConvert workaround
* [LPT] code cleanup
* [LPT] Interpolate GPU test quick fix
* [LPT] GroupConvolution fix
* [LPT] Fix fusing multiply for non-dequantization layers
* [LPT] GPU pipeline update: enableInt8 initialization place update
* [LPT] tests compilation fix
* [LPT] merge fix
* [LPT] tests enabling
* [LPT] merge issue resolving
* [LPT] LPT CNNNetwork usage macros: part #1 : source code
* [LPT] LPT CNNNetwork usage macros: part #2 : cmake files update and tests adoption
* [LPT] LPT workaround from nGraph core removing
* [LPT] previous LPT version tests
* [LPT] inference_engine_lp_transformations was returned back
* [LPT] replace_node rollback
* [LPT] ConvertSubtract fix
* [LPT] GPU: baselineIsFP16 reuse fix
* [LPT] FakeQuantizeTransformation: GPU workaround: I32 -> FP32 Convert is not fused
* [LPT] AvgPool output precision workaround
* [LPT] Group convolution precision + Subtract to ScaleShift const fix
* [LPT] SubMulToMulAdd & Transpose: action-recognition-0001 fix
* [LPT] Transpose: added test with per-tensor quantization
Co-authored-by: Aleksandr Pertovsky <aleksandr.pertovsky@intel.com>
Co-authored-by: Zinoviev, Vladimir <vladimir.zinoviev@intel.com>
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Gorokhov Dmitriy <dmitry.gorokhov@intel.com>
2020-10-23 13:22:55 +03:00
Andrew Bakalin
3dfec639f0
[VPU][GT][Tests] Make gemmTranspose pass layout agnostic (#2666)
...
* [VPU][GT] Make permTranspose pass layout agnostic
* [IE][Tests] Improve MatMul common test class
* [VPU][Tests] Add tests for MatMul
* [VPU][Tests] Review fixes
* [Tests] Add combineShapes for MatMul
* [VPU][GT] Fix assertion condition
2020-10-22 15:04:53 +03:00
Ilya Churaev
e364271cf6
Constant->Result networks (#2639)
...
* Added tests
* Changed iterator algorithm
* Fixed legacy tests
* Added plugin tests
* Disabled some tests
* Removed parameter tests
* Fixed conversion
* Use old approach for old tests
* Temp commit
* Fixed iterator
* Fixed some tests
* Change logic to compare iterators
* Disabled CPU functional test
* Temp commit
* Disabled test for GPU
* Fixed network copy
* Try to fix test for Windows
* Disabled test for GNA
* Disable plugin tests
* Disable legacy test
* Remove redundant code
2020-10-22 13:22:38 +03:00
Irina Efode
a2e49469b5
Cleanup single_layer_tests (#2716)
2020-10-20 14:31:59 +03:00
Anton Potapov
8715b60d88
[PP GAPI] Extended plug-ins shared precision conversion tests to use (#2677)
...
`GetBlob()` as well
- tests were extended to cover the case when input tensors are copied into
the Blob returned by `InferRequest::GetBlob`
- channel number of input tensor is made a test parameter
2020-10-19 12:35:59 +03:00
Ivan Tikhonov
84b5fc51dc
[opset5] ngraph implementation of Loop op (#2583)
...
* Loop op ngraph implementation, update IE IR Reader and ngraph to cnn converter
* refactoring SubGraphOp class
* type prop unit tests
* ngraph code style
* update comment
* single layer tests for Loop operation
* fix file name
* Add SpecialBodyPorts attribute in Loop op, update single layer tests
* add several new test cases, strict checks in Loop impl, temporarily disable single layer tests
* ngraph codestyle, refactoring, clone_new_args test
* resolve review remarks
* fix build
* fix tests
* add a new constructor of Loop op, resolve review remarks
2020-10-19 06:53:46 +03:00
Roman Lyamin
cc569d2254
[IE CLDNN] Added HSigmoid operation (#2700)
2020-10-18 20:47:22 +03:00
Kamil Magierski
95d7c29628
[GNA] Fix remove layer + identity layer insertion (#2626)
...
* [GNA] Fix remove layer + identity layer insertion
test stub
Test impl
style
hpp style
* disable FP16 for GPU
2020-10-16 13:23:32 +03:00
Gabriele Galiero Casay
c9b16a79f5
Reference Implementation for RegionYolo operator (#2474)
2020-10-15 22:30:12 +02:00
Katarzyna Mitrus
fadd16ce89
ReorgYolo reference implementation (#2384)
...
* Align ReorgYolo to the spec (vector strides -> int stride)
* ReorgYolo ref impl
* ReorgYolo evaluate method
* ReorgYolo tests
* Tests update
* Style apply
* Add some comments
* Code refactor
* Comment update
* Style apply
* Build fix, mark evaluate as override
* Revert "Align ReorgYolo to the spec (vector strides -> int stride)"
* Use int_executable instead of evaluate
* Use char* instead of templates
* Code refactor
* Comment update
* Code review comment
* Add constructor aligned with spec
* Update shape validation
* Update attributes tests
* Add type_prop tests
* Update backend tests
* Add single layer tests
* Update the spec
* Remove wrong transformation test
2020-10-15 13:42:21 +03:00
Gleb Kazantaev
94eacc6544
Move legacy transformations and ops to legacy library (#2624)
...
* Initial movement
* Divided transformations into common and legacy
* Changed ngraph visibility to ie_api.h
* CommonTransformations to Internal
* New transformations location structure
* fixed typo; move convert_quantize_dequantize to common
* Added control_flow folder
2020-10-14 10:58:01 +03:00
Nikita Kudriavtsev
5ce622f4f4
[IE Myriad] Fix layer tests for logical_and (#2622)
2020-10-12 16:37:31 +03:00
Alexander Perepelkin
a1b8a11000
Allow to specify both in/out precision, add in/out layout in tests (#2516)
...
* test definitions
* CPU plugin shared tests
* CPU plugin custom tests
* GNA plugin shared tests
* GPU plugin shared tests
* MYR plugin shared tests
* TML plugin shared tests
2020-10-11 11:05:55 +03:00
Liubov Batanina
7f78dd797e
[IE Tests] Added NormalizeL2 tests (#2327)
...
* Added NormalizeL2 tests
* Added NormalizeL2 reference
* Add nGraph tests
* Fix tests
* Added NormalizeL2 builder
2020-10-08 07:23:25 +03:00
Kamil Magierski
4c1ae9b339
[GNA] Issue 39975 - cascade concat fix (#2486)
...
* concat input not used fix rough implementation
* [GNA] Cascade concat input not assigned fix
* reduce copying in recursive function
* [GNA] Aligned cascade concat test
2020-10-06 11:01:19 +03:00
Kamil Magierski
0e62e5e17f
[GNA] FIX CopyLayerPass for concat parent cases (#2485)
...
* [GNA] fix cases when layer output is used in both memory and concat parent layer
* comma fixes
* Issue-36189 CopyLayerPass for concat parent cases fix test
* Fix test for CPU
* Remove test for GPU
2020-10-06 11:00:38 +03:00
Kamil Magierski
8abdc32676
[GNA] Fix LSTM Cell channel C being 0 on output (#1174)
...
* [GNA] get output before activation test
[GNA] SubstituteScaleShiftBroadCastPass fix for cases when there are multiple scaleshifts as an output from the layer
[GNA] Generalize Fix where LSTMCell output was zero due to being fused into activation
[GNA] Fix LSTMCell being zero on channel C if being output layer
* linux build fix
2020-10-06 10:59:03 +03:00
Anton Pankratv
b21d0fe978
Removed similar behaviour tests (#2528)
2020-10-05 12:21:03 +03:00
Andrey Dmitriev
949e23d0e8
[GNA] specific execution order for delayer copy layer (#2117)
...
[GNA] specific execution order for delayer copy layer + Test
2020-10-01 15:09:53 +03:00
Mikhail Kozlov
6ae332b072
Fix virtual class inheritance for single layer test detection output (#2472)
2020-10-01 14:34:04 +03:00
Egor Churaev
a05333217c
Support operation Interpolate-4 in OpenVINO (#1596)
...
JIRA: 26973
2020-10-01 11:41:51 +03:00
Vladislav Vinogradov
d28a5d6c4f
[CMAKE] Introduce FASTER_BUILD experimental feature (#2438)
...
It uses CMake 3.16 built-in utilities to speed up build time:
* Unity builds
* Precompiled headers
The feature is controlled via `ENABLE_FASTER_BUILD` CMake option (disabled by default).
The option is available only with CMake >= 3.16.
The feature is enabled per-target via `ie_faster_build` function.
Some observations:
* Don't have actual numbers for compile time, but subjectively can see
speed up locally with VS 2019.
* Unity builds give a much larger effect, but have some restrictions on source files,
so they are not used everywhere.
2020-09-28 18:53:11 +03:00
Andrey Markelov
63fbe78d76
Fix tile layer test header (#2315)
2020-09-28 17:13:49 +03:00
Anton Pankratv
c8233b7b7c
Fixed canStartSeveralAsyncInsideCompletionCallbackWithSafeDtor (#2404)
2020-09-28 11:08:34 +03:00
Anton Pankratv
863a7bd663
Holder test thread safe for all (#2425)
2020-09-25 21:12:24 +03:00
Andrew Bakalin
03d184726a
[IE][VPU]: Supports I32 for some eltwise precisions + tests (#2364)
2020-09-25 18:29:34 +03:00
Aleksandr Korolev
eda9498b79
[IE][VPU]: Reduce tests execution time (#2378)
...
* [IE][VPU]: Reduce tests execution time
* [IE TESTS] Remove 'ConfigurePlugin()' from 'memory_LSTMCell.hpp'
* [IE VPU TESTS] Myriad conv layer tests were changed
Co-authored-by: Maksim Doronin <maksim.doronin@intel.com>
Co-authored-by: kora6 <kora6@github.com>
2020-09-25 14:24:12 +03:00
Liubov Batanina
f8a17a1317
Changed pad data type (#2354)
2020-09-23 13:10:58 +03:00
Kamil Magierski
9fca26b21e
[GNA] LSTMCell fixes (#2080)
2020-09-22 18:13:28 +03:00
Eugene Smirnov
f0b10bf071
[GNA] fake quantize single layer tests for GNA plugin (#2060)
...
* fake quantize single layer test for GNA plugin
* implemented fakequantize for fp32 case as an activation function
* added proper seed randomisation within single test run
* [GNA] [FAKEQUANTIZE] fixed ref-fp32 implementation on GNA to use nearbyint instead of roundf
* [GNA] [FAKEQUANTIZE] restored random seed
* [GNA][FAKEQUANTIZE] disabled 4d and integer tests for FakeQuantize
* [GNA][FAKEQUANTIZE] updated ngraph FakeQuantize builder to accept seed
* [GNA][FAKEQUANTIZE] aligned FP calculations order on GNA with reference ngraph - this, however, gives more error
* [CPU] build of FakeQuantize tests restored
* [TESTS][FAKEQUANTIZE] ignore extra inferRequests for disabled tests
* [GNA] Fixed legacy unit test failures that appeared due to extra check for possible segfault in import frames
* [GNA] adopted fuse multiple identities for FakeQuantize layer
* [GNA] fp32 runtime code review
2020-09-21 14:22:14 +03:00
Alexander Perepelkin
c13ec24e1e
Specify in and out precisions separately, add layouts for convolution (#2211)
...
* Specify in and out precisions separately, add layouts for convolution
* Align convolution layer tests instantiations with updated definition
* Align convolution layer tests instantiations with updated definition for template plugin
* net, in, out prcs
Co-authored-by: Mikhail Treskin <mikhail.treskin@intel.com>
2020-09-21 13:03:01 +03:00
Liubov Batanina
6839ef7699
Add TopK tests (#2165)
2020-09-21 11:37:22 +03:00
Ivan Tikhonov
1b7dfc6e4c
Fix bidirectional mode in reference implementations of GRU/LSTM/RNN Sequences (#2264)
...
* fix bidirectional case in references of sequences ops, enable decomposition of bidirectional cases in CommonOptimizations
* introduce new opset5, include GRU/RNN/LSTM Sequences to opset5
* Revert "introduce new opset5, include GRU/RNN/LSTM Sequences to opset5"
This reverts commit 73c22a11db.
2020-09-18 10:14:01 +03:00
Irina Efode
12abb2eb49
[IE TESTS] CoreThreading_LoadNetwork tests were disabled for GPU plugin (#2245)
2020-09-16 16:46:02 +03:00
Gorokhov Dmitriy
83e96891ca
Revert "[IE TESTS] dynavic batch for mvn layer (#1010)" (#2257)
...
This reverts commit 2e3378c50f.
2020-09-16 14:11:48 +03:00
Anton Potapov
d590144545
[PP GAPI] Added tests to cover existing precision conversions done by (#1976)
...
some plugins
- added shared parameterized tests
- instantiated for template plugin
- instantiated for cpu plugin
- fixed CPU plugin to properly handle U16 input
- fixed CPU reverse_sequence primitive to allow input/output tensors to
be in FP32 only
- updated ngraph test_simple_computation_on_ndarrays to not expect
failure on U16 input
2020-09-16 12:41:14 +03:00