Gleb Kazantaev
c4e0b74fb1
Add dynamic shape checks to nGraph transformations ( #2735 )
* Added dynamic shape checks for BatchNormDecomposition pass
* Added dynamic shapes checks for FQTranspose fusion pass
* Added pattern::has_static_rank predicate
* Added dynamic shapes checks for BroadcastToTiles pass
* Fixed BN inputs order
* Add dynamic shape checks for DepthToSpace/SpaceToDepth passes
* Added dynamic check for ReduceToPooling pass
* Updated BN transformation
* Fix PR comments
* size_t to int64_t
* Updated reduce to pooling pattern
2020-10-23 15:39:47 +03:00
Alexey Ershov
8c97127aa7
[IE][VPU]: Proposal: Implemented support for optional 2nd output (scores) ( #2762 )
* Proposal stage: added support for optional 2nd output
* firmware updated
2020-10-23 15:19:20 +03:00
Rafal Blaczkowski
46e8c12a5d
Add watchdog of OpenVino ONNX CI ( #2550 )
2020-10-23 14:16:43 +02:00
Krzysztof Bruniecki
9c78a4855a
Remove CNN GNA1/2 compatibility enforcement when other GNA device detected ( #2745 )
2020-10-23 13:30:16 +03:00
Edward Shogulin
c2271da637
Es/lpt/lpt to ngraph fixes2 with master ( #2671 )
* [LPT] Replace creation of dequantization with factory
* [ngraph][LPT] Add ScaleShift replace for dequantization operations
* [LPT] SubtractMultiplyToMultiplyAdd refactoring
* [LPT] Code style fix
* [LPT] Edit SubtractMultiplyToMultiplyAdd transformation for dequantization
* [LPT] Linux compilation quick fix
* [LPT] [WIP] runtime info applying
* [LPT] Concat transformation functional tests extending
* [LPT] MultiplyToConvolution + Subtract to add fusing + improvements in LowPrecisionTransformer
* [LPT] linux compilation error fix
* [LPT] compilation error
* [LPT] MultiplyToGroupConvolution fix: 5D support
* [LPT] Multiply transformation extending: FQ weights support - wip
* [LPT] FQ folding & precision selection
* [LPT] code style fixes
* [LPT] code style fixes
* [LPT] Linux compilation error fix
* [LPT] SubtractMultiplyToMultiplyAdd: refactoring
* [LPT] Tests fixes
* [LPT] MultiplyToGroupConvolution tests
* [LPT] Convert subtract with int inputs to Eltwise sub
* [LPT] Constant folding fix for quant models
* [LPT] 1) Asymmetric quantization improvement 2) tests extending
* [LPT] 2 fixes for se_resnext_50
* [LPT] Add transformation priority branch selection test
* [LPT] AddMultiplyFusion: legacy transformation quick fix
* [LPT] nGraph tests temporary disabling
* [LPT] Fix for eltwise inputs with multiple outputs
* [LPT] Fix for FQ fuse
* [LPT] Reshape by channel, batch temporary disabled
* [nGraph][LPT] MatMul fix for reading FP16 models
* [LPT] 1) Add (not after Convolution/GroupConvolution/MatMul with Constant) to Subtract 2) precision selection fix: MultiplyToGroupConvolution quick fix
* [LPT] DenseNet improvements: AddTransformation: Add to Subtract + tests
* [LPT] AddTransformation refactoring
* [LPT] AddTransformation tests temporarily disabled
* [LPT] ReshapeTransformation improvements: degradation fix
* [LPT] code style fix
* [LPT] Concat tests temporary disabling
* [LPT] tests unification
1) plugin tests: added test-cases and nGraph-validation for clamp, split and variadic split
2) func tests: added test-cases
3) transformNGraph: added the ability to run additional transformations
* [LPT] split & variadic split merge fix
* [LPT] Clamp: added support for asymmetric quantization
* [LPT] added DequantizationAttr run-time attribute
* [LPT] debug info removal
* [LPT] ConcatTransformation: zero point fix
* [LPT] CNNNetwork ReLU transformation quick fix
* [LPT]
1) Concat fix
2) ConcatMultiChannels fix
3) Added "Concat with Split" test-cases
4) Subgraph fix
* [LPT]
1) Concat fix
2) Added "Concat with different precision on childs" test-case
* [LPT] concat fix Ubuntu18
* [LPT] Concat test fixes
* [LPT] Not fp32 FQ input support
* [LPT] MatMul Fix + separateInStandaloneBranch Fix
* [LPT] Fix reference input types in mish fusion tests
* [LPT] Fix cpuFuncTests on CentOS building
* [nGraph][LPT] ScaleShift 2d, 3d nGraph conversion enabling
* [LPT] 1) FullyConnected workaround removing 2) validate_nodes_and_infer_types for LPT
* [ngraph] Add check for children for ConvertSubtract
* [LPT] Squeeze/Unsqueeze tests unification
* [LPT] Squeeze/Unsqueeze change signature for getReference/getOriginal
* [LPT] Mul & Add -> ScaleShift quick fix
* [LPT] nGraph tests temporary disabling
* [LPT] code style fix
* [LPT] code style fix #2
* [LPT] nGraph tests temporary disabling
* [LPT] code style fix #3
* [LPT] shared plugin tests temporary disabling
* [LPT] cleanup
* [LPT] nGraph unit tests temporary disabling
* [LPT] nGraph unit tests disabling #2
* [LPT] nGraph tests disabling
* [LPT] nGraph tests temporary disabling
* [LPT] WA removing
* [LPT] CentOS compilation fix
* [LPT] KMB wa to avoid compilation error
* [LPT] functional test temporary disabling
* [nGraph] code style fixes
* [LPT] ConcatTransformation: data movement operation as intermediate handling
* [LPT] FuseSubtractToFakeQuantize after VariadicSplit
* [LPT] ConcatWithSplitTransformation functional test temporary disabling
* [LPT] Clamp and ConcatWithDifferentPrecisionsOnChilds: tests fix
* [LPT] MatMul: bert-nv-mlperf-quantized fix
* [LPT] Add to convolution biases fuse fix
* [LPT] GPU plugin tests fixes
* [LPT] Normalize GPU plugin tests fix
* [LPT] test-commit
* [LPT] CLDNN Plugin FP16 conversion
* [LPT] AvgPool: update precision if there is no FQ after it + convolution
precision limitation on activation
* [LPT] Convolution fixes
* [LPT] FuseSubtractToFakequantize & FuseMultiplyToFakeQuantize improvement
* [LPT] FuseSubtractToFakeQuantize test fix
* [LPT] FuseSubtractToFakeQuantizeTransformation tests
* [LPT] code style fix
* [LPT] AvgPool child recursive extend
* [LPT] AvgPool tests + fix
* [LPT] compilation quick fix
* [LPT] Add to convolution biases fuse fix
* [LPT] Linux issues: MatMulWithOptimizedConstantFakeQuantizeTransformation temporary disabled
* [LPT] Normalize GPU plugin tests fix
* [LPT] test-commit
* [LPT]
1) added the ability to create sub without dequantizationAttribute
2) fixed optimizeMulAfter: added copying rt_info
3) Tests Unification: Convolution transformation
4) added cleanRunTimeInfo into Network Helper
* [LPT] Tests Unification: GroupConvolution
* [LPT] removed debug info
* [LPT] functional tests for Convolution & GroupConvolution extending
* [LPT] [MatMul] Quick fix ubuntu error
* [LPT] MatMulTransformation quick test fix: one constant for both intervals
* [nGraph] code style fix
* [LPT] added output_precision to NormalizeIE
* [nGraph] NormalizeIE fix for LPT support
* [LPT] nGraph WA removal
* [LPT] fixed fillSubgraph for concat multi channels
* [LPT] MatMul fix
* [nGraph] WA removal: 1) nGraph tests enabling 2) LPT extending: not handled in FP32
* [LPT] nGraph WA removal: function tests skip config rollback
* [LPT] WA removal: precision propagation fix
* [LPT] ConvertMulOrAddFinally transformation extending
* [nGraph] ConvolutionMultiplyFusion rollback (move from legacy to common)
* [nGraph] ConvertMulAddToScaleShiftOrPower: WA removal
* [nGraph] TypeRelaxed: WA removal
* [nGraph] WA removal: TypeRelaxed
* [LPT] WA removal: ConcatTransformation
* [nGraph] WA removal: Eltwise & ConvertMulOrAddFinally fixes to support LPT
* [nGraph] MulAddConversion fix: 2D & 3D ScaleShift are supported
* [nGraph] VisualizeTree extending
* [LPT] FakeQuantizeDequantization extending: check element wise dequantization operation
* [LPT] FakeQuantizeDequantization extending: SubtractMultiplyToMultiplyAddTransformation & WeightableLayerTransformation
* [LPT] Convolution + test infrastructure update
* [LPT] GPU compilation error
* [nGraph] BatchNorm plugin tests: input tensor definition
* [LPT] LowPrecisionTransformer::isFunctionQuantized was added
* [nGraph] WA final cleanup
* [nGraph] ScaleShiftIE quick fix
* [LPT] Functional tests: added test-cases "Concat with intermediate with constant"
* [LPT] Transformer::isNetworkquantized fix
* [LPT] SubtractMultiplyToMultiplyAdd zero Add remove: fix for ssd300 on gpu
* [LPT] MultiplyToGroupConvolution not transform on Const
* [LPT] workaround for negative scales
* [LPT] Convert standalone dequantization Mul,Sub,Add to ScaleShift
* [LPT] SubtractMultiplyToMultiplyAdd test fix
* [LPT] Clamp transformation: GPU tests fix
* [LPT] Transformer tests
* [LPT] FakeQuantizePrecisionSelectionTransformation was disabled for GPU
* [LPT] TransformerIsFunctionQuantized refactoring
* [nGraph] code style fix
* [LPT] mobilenet_v2_tf_depthwise test update
* [LPT] TMP: dequantization folding
* [LPT] Elementwise transformation fix: dequantization operations constant folding
* [LPT] cleanup
* [LPT] denormal values fix
* [LPT] FuseFakeQuantize test fixed + negative multiply case
* [LPT] FP32 -> FP16 conversion info
* [LPT] FQ dot interval support + safe division in swapMultiplyAdd
* [LPT] test fix
* [LPT] Tests for dot interval on FQ + tests for addTransformation enabling
* [LPT] Clamp transformation fix
* [LPT] FQ prec selection test fix
* [LPT] Clamp test case
* [LPT] Concat division precision fix
* [LPT] cleanup
* [LPT] merge fix
* [LPT] WIP: MatMul asymmetric quantization fix (BERT)
* [LPT] MatMulWithOptimizedConstantFakeQuantizeTransformation disabled
* [LPT] GPU Plugin set config fix
* [LPT] Fix merge mistakes
* [LPT] Rollback device specific INT8
* [LPT] ReshapeFullyConnected fix: FullyConnected output fix
* [LPT] bert-base-chinese GPU fix
* [ngraph/LPT] Tests for fix convert_mul_or_add_finally with dequantization
[ngraph/LPT] Fix convert mul_or_add_finally with dequantization
* [LPT] ScaleShift dim < 4 only dequantization conversion
* [LPT] MatMul transformation tests extending
* [LPT] ReshapeFullyConnected legacy transformation: LPT test case addition
* [nGraph] VisualizeTree extending: property names displaying to simplify search
* [LPT] getDequantization extending
* [LPT] MulAddToScaleshiftOrPower: out precision fix & tests
* [LPT] Multiply to ScaleShiftIE: Multiply transformation: remove DEQUANTIZATION if not valid
* [LPT] Concat test case
* [nGraph] try to fix opencv compatibility
* [nGraph] nGraph code style fix
* [LPT] InPlace dequantization folding
* [LPT] Multiply constant folding test
* [LPT] Fix plugin test case for MatMulWithOptimizedConstantFakeQuantize
[LPT] Enable MatMulWithOptimizedConstantFakeQuantize plugin test
* [LPT] Convolution transformation: mulConst shape fix
* [LPT] INT8 Constant folding branch for elementwise ops optimization removal
* [LPT] eltwise for const branch fix
* [LPT] linux fix
* [LPT] Multiply test refactoring
* [LPT] Convert Fuse in Constant + tests
* [LPT] function comparison: runtime info comparison rollback
* [LPT] linux build fix
* [LPT] linux build fix2
* [LPT] MatMul transformation limitation was added to be similar as CNNNetwork LPT
* [LPT] Reshape transformation update: don't broadcast by batch
* [LPT] MatMul transformation limitation was added to be similar as CNNNetwork LPT - refactoring
* [LPT] MatMul transformation: transpose input tensors fix
* [LPT] checkElementwise for AddTransformation WA: should be moved to getDequantization
* [LPT] merge fix
* [LPT] MatMul fix & tests
* [LPT] AddTransformation tests
* [LPT] Interpolate transformation enabled
* [LPT] constant folding before LPT
* [LPT] WIP: not completed tests
* [LPT] GPU degradation fix
* [LPT] FuseConvert workaround
* [LPT] code cleanup
* [LPT] Interpolate GPU test quick fix
* [LPT] GroupConvolution fix
* [LPT] Fix fusing multiply for non-dequantization layers
* [LPT] GPU pipeline update: enableInt8 initialization place update
* [LPT] tests compilation fix
* [LPT] merge fix
* [LPT] tests enabling
* [LPT] merge issue resolving
* [LPT] LPT CNNNetwork usage macros: part #1 : source code
* [LPT] LPT CNNNetwork usage macros: part #2 : cmake files update and tests adoption
* [LPT] LPT workaround from nGraph core removing
* [LPT] previous LPT version tests
* [LPT] inference_engine_lp_transformations was returned back
* [LPT] replace_node rollback
* [LPT] ConvertSubtract fix
* [LPT] GPU: baselineIsFP16 reuse fix
* [LPT] FakeQuantizeTransformation: GPU workaround: I32 -> FP32 Convert is not fused
* [LPT] AvgPool output precision workaround
* [LPT] Group convolution precision + Subtract to ScaleShift const fix
* [LPT] SubMulToMulAdd & Transpose: action-recognition-0001 fix
* [LPT] Transpose: added test with per-tensor quantization
Co-authored-by: Aleksandr Pertovsky <aleksandr.pertovsky@intel.com>
Co-authored-by: Zinoviev, Vladimir <vladimir.zinoviev@intel.com>
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Gorokhov Dmitriy <dmitry.gorokhov@intel.com>
2020-10-23 13:22:55 +03:00
Egor Churaev
ca95240c91
[IE CLDNN] Fix linear_onnx Interpolate selection ( #2769 )
2020-10-23 13:16:47 +03:00
Evgenya Stepyreva
1bae540895
[ MO ] KSO=ON for Kaldi ( #2028 )
* [ MO ] KSO=ON for Kaldi
* [ MO ] Kaldi KSO
* set static_shape for graph cycle making transformation
2020-10-23 13:14:00 +03:00
Tomasz Dołbniak
f1444b33e7
ONNX Reader supportModel() implementation ( #2744 )
2020-10-23 12:13:04 +02:00
iliya mironov
0a59be6f1e
Transformations for hsigmoid op ( #2531 )
* Add hsigmoid op
* Add tests for hsigmoid
* Add fusion hsigmoid
* Add unit tests for fuse hsigmoid
* Add python api for hsigmoid. Update opset 5
* Update opset5 file
* Add hsigmoid decomposition transformation
* fix
* Move transformations for hsigmoid
* Hot fix
* Fix unit tests
* fix unit tests
* Fix unit test
* Fix code style
* Reverse changes
* Add includes for hsigmoid transformations
* Enable in cldnn
* Refactoring hsigmoid fusion
* Move hsigmoid transforms patterns to cpp file
* Reverse hsigmoid fusion refactoring
* Fix according to code review
* Refactoring transformation
* Hot fix
2020-10-23 12:35:56 +03:00
Piotr Szmelczynski
85b06835aa
Reference implementation for Tile op ( #2641 )
2020-10-23 10:39:00 +02:00
Mateusz Tabaka
32b886a892
Remove obsoleted Dequantize op ( #2780 )
* Remove obsoleted Dequantize op
* apply code style
2020-10-23 11:25:08 +03:00
Ilya Lavrenov
ddad7e3505
Fixed -Werror=catch-value= gcc-9 error ( #2773 )
2020-10-23 10:39:55 +03:00
Mikołaj Życzyński
6b02cd380f
[IE CLDNN] Fix padding in reduce fsv16 kernel ( #2787 )
2020-10-23 10:16:21 +03:00
Tomasz Dołbniak
d5cd8673f4
Fix the model downloader script ( #2784 )
2020-10-23 09:58:12 +03:00
Roman Donchenko
ba3fc7fb8a
Fix spelling errors in the API and bindings ( #2781 )
2020-10-23 09:17:03 +03:00
Ilya Lavrenov
258c51bd1f
Openvino extra module adding - refactored ( #2754 )
* Rename plugin to module
* Added openvino_contrib handling
* Moved NEON flags to common place
* Fixed -Werror=catch-value= gcc-9 error
2020-10-23 08:54:48 +03:00
Ilya Lavrenov
82ea01b7ff
Removed obsolete comments from cmake ( #2748 )
2020-10-22 16:11:28 +03:00
Jan Iwaszkiewicz
77794535ab
[ONNX] WA for I64 images ( #2411 )
2020-10-22 14:06:23 +02:00
Andrew Bakalin
3dfec639f0
[VPU][GT][Tests] Make gemmTranspose pass layout agnostic ( #2666 )
* [VPU][GT] Make permTranspose pass layout agnostic
* [IE][Tests] Improve MatMul common test class
* [VPU][Tests] Add tests for MatMul
* [VPU][Tests] Review fixes
* [Tests] Add combineShapes for MatMul
* [VPU][GT] Fix assertion condition
2020-10-22 15:04:53 +03:00
Vladimir Paramuzov
16a73508bd
[IE CLDNN] Base kernels refactoring ( #2758 )
2020-10-22 14:42:42 +03:00
Ilya Churaev
e364271cf6
Constant->Result networks ( #2639 )
* Added tests
* Changed iterator algorithm
* Fixed legacy tests
* Added plugin tests
* Disabled some tests
* Removed parameter tests
* Fixed conversion
* Use old approach for old tests
* Temp commit
* Fixed iterator
* Fixed some tests
* Change logic to compare iterators
* Disabled CPU functional test
* Temp commit
* Disabled test for GPU
* Fixed network copy
* Try to fix test for Windows
* Disabled test for GNA
* Disable plugin tests
* Disable legacy test
* Remove redundant code
2020-10-22 13:22:38 +03:00
Ilya Churaev
1594489a2f
Added new version of BatchNormInference ( #2728 )
* Added new version of BatchNormInference
* Fixed code style
* Fixed batch norm inference v5
* Added opset4 and opset5 to IE backend
* Fixed functional test
* Fixed cpuFunc tests
* Fixed transformation order
* Try to fix validation
* Revert some changes
* Updated python API and added tests
* Fixed code style
* Fixed python code style
* Disabled test
2020-10-22 13:21:23 +03:00
Vladimir Paramuzov
4519097e47
[IE CLDNN] Extend supported fusing cases for scale and eltwise ( #1960 )
2020-10-22 13:06:27 +03:00
Mateusz Tabaka
d901bbfce3
Use MVN in GroupNorm/InstanceNorm in ONNX importer ( #2711 )
* Use MVN in GroupNorm/InstanceNorm in ONNX importer
* Remove mosaic_8 model from xfail list
2020-10-21 13:48:53 +03:00
Jedrzej Hajduczenia
458425ac9e
[IE CLDNN] Another try to fix multiple-kernel implementations profiling ( #2630 )
2020-10-21 13:36:32 +03:00
Tomasz Dołbniak
3688ff4c51
Use LogSoftmax-5 in the onnx_importer ( #2602 )
2020-10-21 10:50:16 +02:00
Anton Pankratv
8a1653b0d1
Supported threading command line options for other devices ( #2725 )
* Supported threading command line options for other devices
* Fixed python benchmark
2020-10-21 06:40:18 +03:00
Vladislav Vinogradov
b2747e68f5
[NGRAPH] Fix UNITY build ( #2732 )
2020-10-21 06:34:35 +03:00
Anton Chetverikov
44406691e5
Add Round-5 operation ( #2328 )
* Add Round-5 operation
* Add ONNX Round to supported operation list
* Add ngraph implementation for Round operation
* Update MO part
* Create UnaryElementwise class, update Round Operation
* Fix mode attr in mxnet extractor
* Add tests for Round shape infer
* Update 'enable' attr
* Update MO IR Reader to support UnaryElementwise operations
* Minor test refactor
* Update ngraph Round operation
* Add reference implementation
* Add test for reference implementation
* Add test for shape infer
* Add test for IE IR Reader
* Add Round operation to python api
* Fix missed mode attr
* Update Round operation version
* Fix codestyle
* Add MxNet Round to supported layers list
* Fix error in reference
* Fix comments style
* Update CMake file
* Update Ngraph reference test
* Update IE IR Reader tests
* Return v0::Round operation
* Update shape infer tests
* Fix v0::Round reference
* Fix codestyle
* Enum instead of string
* Fix codestyle
* Add Mode attribute adapter
* Update Mode attr
* Fix reference for v0::Round
* Fix codestyle
* Fix mode attr
* Fix get() method
* Fix codestyle in python api
* Update test info
* Fix ngraph api part
* Add Round v5 to interpreter tests
* Fix codestyle in ie reader test
* Update ngraph python api __init__.py file
* Added opset5 to default opsets in ie_ir reader
* Add parser for Round layer
* Remove redundant spaces
* Add round creator to appropriate list
* Remove redundant import
* Commit to bump infrastructure version
I'm sorry for this, but this commit will be squashed on merge to master anyway and it is needed for your PR to correctly pass the pipeline
* Fix import
* fix codestyle
* Fix ngraph api part
* Add shape infer tests in python api
* Add .upper() for mode attr
* Refactor MO shape infer test for Round op
* Update tests and add comments
* Revert "Commit to bump infrastructure version"
This reverts commit 56e6ae1e4c.
* remove parser for Round layer
* Update Round-5 evaluate test
* Resolve review comments
Co-authored-by: User <user@nnlvdp-achetver.inn.intel.com>
Co-authored-by: Andrey Babushkin <andrey.babushkin@intel.com>
Co-authored-by: Anton Chetverikov <anton.chetverikov@.intel.com>
2020-10-20 18:36:19 +03:00
Irina Efode
a2e49469b5
Cleanup single_layer_tests ( #2716 )
2020-10-20 14:31:59 +03:00
Maxim Vafin
a405546054
Add LogSoftmax-5 to MO and ngraph ( #2409 )
Co-authored-by: Evgeny Lazarev <evgeny.lazarev@intel.com>
2020-10-20 13:40:06 +03:00
Mateusz Tabaka
83670dd5cb
Remove deprecated Any op from nGraph ( #2719 )
2020-10-20 12:36:46 +03:00
Mateusz Tabaka
8002b16eb2
[ONNX] Add type conversion for Pow op inputs ( #2589 )
Co-authored-by: mitruska <katarzyna.mitrus@intel.com>
2020-10-20 11:19:03 +02:00
Roman Kazantsev
c2394508c1
Implement LookupTableInsert shape inference ( #2348 )
* Implement LookupTableInsertV2 shape inference
It is needed when other nodes that are not being pruned from the graph
have a conditional dependence on the LookupTableInsertV2 node.
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix after code review #1
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix the code after review #2
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix after code review #3
2020-10-20 09:57:55 +03:00
Anna Likholat
347e92cc82
[JAVA] Fixed IECore constructor ( #2685 )
2020-10-19 19:38:55 +03:00
Vladimir Paramuzov
9367266ed5
[IE CLDNN] DispatchData refactoring ( #2508 )
2020-10-19 18:45:05 +03:00
Nikolay Shchegolev
ff7fc01c76
[CPU] CTCLoss performance improvement.
2020-10-19 13:01:39 +03:00
Anton Potapov
8715b60d88
[PP GAPI] Extended plug-ins shared precision conversion tests to use ( #2677 )
`GetBlob()` as well
- tests were extended to cover the case when input tensors are copied into
the Blob returned by `InferRequest::GetBlob`
- the channel number of the input tensor is made a test parameter
2020-10-19 12:35:59 +03:00
Krzysztof Bruniecki
e1428ecf1d
Improve GNA MT synchronization ( #2553 )
* Sync GNA lib calls to avoid crashes with multiple threads and plugins
* Remove TODO
* Enable sync for GNA1
* Fix GNA1 sync
* Add core_threading_tests to GNA Plugin to address story 31709
* Disable and change test description
2020-10-19 12:21:01 +03:00
Vitaliy Urusovskij
3c5aefb427
Remove memcheck_pregen_irs_tests MemCheck configs due to obsolescence ( #2693 )
2020-10-19 09:48:38 +03:00
Mateusz Tabaka
5965010bec
Revise LRN reference implementation ( #2672 )
* fix typo in LRN docs
* fix link to reference in LRN doc
* LRN, LRN_IE types alignment with spec
* align LRN ref implementation to plugins behavior
* update LRN docs
* Improve LRN reference implementation performance
* restore LRN constructor with no axes in the input
* apply code format
* revert double->float size_t->int change
* small fix to example in doc
* revert double->float size_t->int in onnx_importer and backend tests
* Changes to docs after review
2020-10-19 08:40:04 +03:00
Ivan Tikhonov
84b5fc51dc
[opset5] ngraph implementation of Loop op ( #2583 )
* Loop op ngraph implementation, update IE IR Reader and ngraph to cnn converter
* refactoring SubGraphOp class
* type prop unit tests
* ngraph code style
* update comment
* single layer tests for Loop operation
* fix file name
* Add SpecialBodyPorts attribute in Loop op, update single layer tests
* add several new test cases, strict checks in Loop impl, temporarily disable single layer tests
* ngraph codestyle, refactoring, clone_new_args test
* resolve review remarks
* fix build
* fix tests
* add a new constructor of Loop op, resolve review remarks
2020-10-19 06:53:46 +03:00
Roman Lyamin
cc569d2254
[IE CLDNN] Added HSigmoid operation ( #2700 )
2020-10-18 20:47:22 +03:00
Michał Karzyński
cc2bfcf1d7
Improve python_wheel CMake target ( #2688 )
2020-10-18 17:12:25 +02:00
Michał Karzyński
2b5ed2e9eb
Tweaks for ONNX scoreboard ( #2697 )
2020-10-18 17:08:06 +02:00
Alexey Suhov
f0a37743e1
[install_dependencies.sh] install latest cmake if current version is lower than 3.13 ( #2695 )
* [install_dependencies.sh] install latest cmake if current version is lower than 3.13
* add shellcheck for Ubuntu
* install python 2.7 for Ubuntu
2020-10-16 21:03:46 +03:00
Jesus Espinoza
595a52ae67
Updating broken link on getting started linux doc ( #2507 )
Link to build instructions was broken, updated link to the correct location.
2020-10-16 19:02:41 +03:00
Ilya Churaev
d8466cf6ee
Small fix for python doc ( #2696 )
2020-10-16 18:12:20 +03:00
Andrey Dmitriev
aff7a66082
[GNA][Speech sample] Add option to specify blob names ( #1529 )
* Added output names
* Add input, output, ref names
* Added zero scale factor
* Adding support for multiple reference files
2020-10-16 15:34:22 +03:00
Kamil Magierski
95d7c29628
[GNA] Fix remove layer + identity layer insertion ( #2626 )
* [GNA] Fix remove layer + identity layer insertion
test stub
Test impl
style
hpp style
* disable FP16 for GPU
2020-10-16 13:23:32 +03:00