Commit Graph

7872 Commits

Author SHA1 Message Date
Sungeun Kim
2c4cb88862
[83906] DG2 Systolic engine not getting used for fp16 Action Recognition model (#11815)
* shallow feature convolution needs to be set to zyx_fsv2

* bug fix: wrong get_output_layout call in is_mixedLayout
2022-06-08 16:48:40 +09:00
Mykhailo Hnap
9ee9f880bd
[GPU] Implement Bucketize-3 (#11738)
* [GPU] Implement Bucketize-3

* Add i8 and u8 to evaluates map for bucketize op.
2022-06-08 15:47:19 +09:00
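For context on what the new primitive computes: Bucketize maps each input element to the index of the boundary bucket it falls into, with the `with_right_bound` attribute choosing whether a value equal to a boundary belongs to the left or right bucket. A minimal reference sketch of those semantics (an illustration only, not the GPU kernel):

```python
import bisect

def bucketize(values, boundaries, with_right_bound=True):
    """Return the bucket index for each value, given sorted boundaries.

    with_right_bound=True  -> buckets are (b[i-1], b[i]], so a value equal
                              to a boundary lands in the bucket to its left.
    with_right_bound=False -> buckets are [b[i-1], b[i]).
    """
    # bisect_left counts boundaries strictly below v; bisect_right counts
    # boundaries at or below v -- exactly the two boundary conventions.
    pick = bisect.bisect_left if with_right_bound else bisect.bisect_right
    return [pick(boundaries, v) for v in values]
```

For example, with boundaries `[1, 4]` there are three buckets (indices 0, 1, 2), and `bucketize([0, 2, 4, 5], [1, 4])` yields `[0, 1, 1, 2]`.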
yanlan song
bd02415ad5
enable binder schedule for preview (#11763)
* enable binder schedule

Signed-off-by: fishbell <bell.song@intel.com>

* add cases

Signed-off-by: fishbell <bell.song@intel.com>

* refine

Signed-off-by: fishbell <bell.song@intel.com>

* fix build failure

Signed-off-by: fishbell <bell.song@intel.com>

* fix coredump

Signed-off-by: fishbell <bell.song@intel.com>

Co-authored-by: Chen Peter <peter.chen@intel.com>
2022-06-08 10:02:33 +08:00
Tetiana Gubanova
a3cd2bcc49
[GPU] Implement Reverse operation (#11672)
* Reverse operation single-layer test

* Reverse CreateOp(), primitive and kernel selector

* Implement Reverse CL kernel

* Implement reverse GPU unit tests

* Add boolean as extended input type to reverse operation
2022-06-08 09:46:52 +09:00
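The Reverse operation flips a tensor along a set of axes, which can be given either as a list of axis indices or (per the boolean-input extension in this commit) as a per-dimension mask. A NumPy sketch of the op's semantics, not the CL kernel (the `mode` parameter name follows the op spec; hedged as an illustration):

```python
import numpy as np

def reverse(data, axes, mode="index"):
    """Flip `data` along the requested axes.

    mode="index" -> `axes` is a list of axis indices.
    mode="mask"  -> `axes` is a boolean per-dimension mask.
    """
    if mode == "mask":
        # Convert the boolean mask into the list of flagged axis indices.
        axes = [i for i, flagged in enumerate(axes) if flagged]
    return np.flip(data, axis=tuple(axes))
```

For a 2x3 array, reversing axis 1 flips each row, while the mask `[True, False]` flips only axis 0 (the row order).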
Steve Yoo
263c184b97
Fix deformable convolution cl kernel for Multi-Groups (#11613)
* Fix the deformable convolution CL kernel for multi-group cases and add test cases

* Add batch 2, 3, and 4 test cases for multiple groups
2022-06-08 09:45:42 +09:00
David Nam
755394c1cb
[GPU] Use IE_THROW() instead of throw std::runtime_error (#11769)
* Use IE_THROW() instead of throw std::runtime_error

* Replace all occurrences of throw std::runtime_error with IE_THROW()
2022-06-08 09:44:59 +09:00
Karol Blaszczak
e70074d9db
DOCS-clear out the integrate_with_app article (#11799)
* DOCS-clear out the integrate_with_app article

* Update docs/OV_Runtime_UG/integrate_with_your_application.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2022-06-07 21:52:27 +02:00
Karol Blaszczak
275f2adf52
DOCS-add supported PdPd models (#11804) 2022-06-07 21:51:17 +02:00
Eddy Kim
3a5805fce0
quick fix in tests (#11812) 2022-06-07 15:22:39 +00:00
hyunback kim
98f989302a
[GPU] Update to use immad with oneDNN test case. (#11808)
Use the correct test case

Signed-off-by: hyunback <hyunback.kim@intel.com>
2022-06-07 17:08:23 +03:00
Felix Dohyun Kim
1c9cd04f96
[GPU] Enable blocked formats in Gather primitive (#11486)
* gather blocked format
* enable double blocked
* 5d test
* support cross dimension
* Add some disabled test for later use
* Support non-default planar formats
2022-06-07 17:50:09 +09:00
Felix Dohyun Kim
c1f4cc04de
[GPU] Enable blocked formats in Border primitive (#11652)
* fix default_value of border tensor parameters
* Enable all in/out layout
2022-06-07 17:39:57 +09:00
Min, Byungil
49942c2f80
[GPU] Forcing to use clDNN FC on small batch size (#11715)
+ Forced use of clDNN FC due to a perf drop

Signed-off-by: byungilm <byungil.min@intel.com>
2022-06-07 10:25:14 +03:00
Kelvin Choi
f9afe07c9d
[GPU] Add reorder between FC matmul and reshape (#11706) 2022-06-07 13:10:58 +09:00
Mateusz Bencer
7335984edd
[MO] Check only source layout if both changing layout and reverse input channel is applied (#11779) 2022-06-06 17:46:07 +02:00
Min, Byungil
104a9d8d52
Add debug config for forcing impl type of fc (#11713)
+ Added new debugging config OV_GPU_ForceImplType to set impl type

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-06-06 23:02:34 +09:00
Luo Cheng
bb3868d8cd
[CPU] OneDNN 2.6 migration (#11627)
* Migrate to oneDNN 2.7

* [CPU] Enabled brgconv implementation

* Post ops optimizations

* [CPU] Enabled I8 precision on activations for Convolution node

* [CPU][WA] Disabled Deconvolution + post ops fusing optimization

* Fixed FQ post op optimization

* [CPU] Optimize post ops processing

* [WA] Add node name if tensor names are empty

* [WA] remove layout compatibility check that leads to false-positive exceptions

* [CPU] Optimize processing for FQ + Sum + FQ post ops pattern

* [CPU][WA] Enabled ReduceSum -> AvgPool transformation due to perf issues

* fix compiler error

* rebase onednn master

* cherry pick from 2.7 to 2.6

* [WA] make CPU case run to completion

* fix xmm zero check

* Re-enable 'FuseDeconvolutionAndSimpleOperation' transform to fix the CPU 'ConvolutionBackpropDataLayerTest' failure

* [WR] Removed the failing ReduceMean tests caused by 21f3555.

* group deconv may crash on memory out of bound

* [WA] Remove the moc fail case by #af4731a1

* testcase conv maxpool will check brgconv instead of jit

* test subgraph added nhwc format check

* fix gemm bf16 win crash

* fix avx2 groupconv accuracy problem

* [WA] remove invalid FQ tests

* WR to disable the LPT multiplyToGroupConv test because the transformation was disabled in d5e16f

* add gemm int8 binary postops to fix GroupConvolutionQDqTransformation fail

* fix gemm bf16 fail

* Fix ConcatConvSumInPlaceTest

* Add cpuDebugFuncTests target

* [WA] bf16 crash due to MemoryInput/Output

* OVClassBasicTest case typo

* testcase subgraph sets default ENFORCE_BF16 to NO

* fix clang check

* Fix primType check issue

* Fix cpplint error

* MemoryInput/Output support bf16; Enforce bf16 'NO' should enable snippets

* disable BF16 fusing fakequant testcase

* testcase init support amx check

* testcase for conv brgconv avx512/amx

* testcase for conv brgconv avx512/amx

* WR for the enforce-reorder bug; add NSPC to the deconv supported list.

* Compiling issue fix.

* [WA] skip fakequantize fusing in bf16

* mix legacy/new binary postops

* make nightly cases run; tested on amx/avx512/avx2.

* [CPU] Add BF16 AMX test for Matmul

* Add CPU dump check tool

* Add verbose log

* Generate exec graph in cpu dump check tool

* fix binary prelu post Ops

* fix cpplint

* Update ONEDNN version to fix AVX2 bug.

* cpu dump check supports compare dump files

* Add a new CPU_DEBUG_CAPS: OV_CPU_SUMMARY_PERF

* change VERBOSE_LOG to DEBUG_LOG

* fix oneDNN register_jit_code log

* fix cpplint

* Add OV_CPU_DEBUG_LOG to control which debug logs are shown

* Revert reorder WR.

* Enhanced CPU debug logs and breakpoint support

* Enhanced cpu_dump_check with --ports

* Fix DEBUG_LOG compile issue

* GroupDeconvolutionLayerCPUTest extend to add amx test cases

* Add Node into DEBUG_LOG

* cpu_dump_check: Dump results even if no port is specified

* Fix MergeTransposeAndReorder for blocked input

* Fix cpu_dump_check result names

* Enhance DEBUG_LOG on edges

* Cpu dump check support shape mismatch

* Fix bi-directional inplace

* Cpu dump check supports inference_precision_hint f32.

* fix windows dump fail.

* fix depthwise nwc conv

* add rtol arg

* win debugbreak

* fix pooling accuracy

* GroupDeconvolutionLayerCPUTest remove invalid test param for nspc

* recover ov onednn fork

* revert af4731a1f1 '[WA] remove layout compatibility check'

* [WA] disable avx2 conv3d fusing case

* [WA] disable avx2 conv3d fusing case

* [WA] Disabled weights md transpose in FC to prevent perf degradations

Co-authored-by: dmitrygo <dmitry.gorokhov@intel.com>
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Zhang Yi3 <yi3.zhang@intel.com>
Co-authored-by: liubo-intel <bo4.liu@intel.com>
Co-authored-by: Luwei Zhou <luwei.zhou@intel.com>
Co-authored-by: Li, Tingqian <tingqian.li@intel.com>
Co-authored-by: xuchen-intel <chen.xu@intel.com>
Co-authored-by: ceciliapeng2011 <cecilia.peng@intel.com>
2022-06-06 18:30:32 +08:00
Helena Kloosterman
764a2ec012
Fixes for OpenCV script filename in docs (#11791) 2022-06-06 16:42:04 +08:00
guozhong wang
d3b7a1b86c
fix cumulative mode not handling nireq (#11775) 2022-06-06 10:10:05 +08:00
Artur Kulikowski
32580ca65b
Add ReduceMerge transformation (#11746)
Tickets: 60918
2022-06-04 13:44:50 +02:00
Sungeun Kim
8029fd9675
add mixed precision case (#11747) 2022-06-04 11:40:52 +09:00
Mateusz Bencer
dba4dbb9d6
Improve L2NormFusion transformation (#11765) 2022-06-03 18:51:21 +02:00
Mateusz Tabaka
4650ecd0b5
Disable fusings_gpu/deconv_scale_activation_quantize_i8_eltwise_quantize_u8.basic/4 (#11777) 2022-06-03 21:07:48 +09:00
Min, Byungil
aea04e275c
[GPU] Enable onednn reduction (#11570)
* Better performance by using the oneDNN reduction kernel instead of the pooling kernel for the reduction layer.
* Stop using global pooling in place of the reduce primitive
* Use oneDNN reduction if its mode is supported by an optimized oneDNN kernel
* activation pow is supported
* Use clDNN reduce for 3D or redundant reduce, or on tensor size mismatch
* Updated thirdparty onednn_gpu

Signed-off-by: Min, Byungil <byungil.min@intel.com>

Co-authored-by: Wei Tang <wei1.tang@intel.com>
Co-authored-by: Chen Kurt <kurt.chen@intel.com>
2022-06-03 18:36:27 +09:00
mei, yang
b780c61506
Extend python API for GenerateProposals (#11723) 2022-06-03 10:38:02 +02:00
Krzysztof Bruniecki
4ef0aab166
Create a copy of the read-only IR/bin file-mapped Blob to allow converting from NCHW to NHWC (#11771) 2022-06-03 08:38:52 +02:00
Krzysztof Bruniecki
10a6e56811
Return instead of throw in check function around dtors (#11745) 2022-06-03 08:37:33 +02:00
Katarzyna Mitrus
47155b43d0
[Transformation] Add GeluFusion with Tanh (#11752) 2022-06-03 08:36:12 +02:00
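GeluFusion with Tanh matches the widely used tanh-based approximation of GELU so the decomposed subgraph can be replaced with a single Gelu op. A scalar sketch of the formula the pattern corresponds to (illustration of the math only, not the transformation code):

```python
import math

def gelu_tanh(x):
    # Tanh approximation of GELU:
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    inner = math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)
    return 0.5 * x * (1.0 + math.tanh(inner))
```

The approximation is exact at 0, close to x for large positive inputs, and close to 0 for large negative inputs, tracking the exact `x * Phi(x)` GELU closely in between.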
Przemyslaw Wysocki
1db4446e2a
[MO] Extend MO for NonMaxSuppression-9 (#11576) 2022-06-02 18:18:14 +02:00
Sebastian Golebiewski
b88eed7645
Proofreading MO Guide (#11605)
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/FP16_Compression.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update Additional_Optimizations.md

* Update Deep_Learning_Model_Optimizer_DevGuide.md

* Update IR_and_opsets.md

* Update Getting_performance_numbers.md

* Update Model_Optimizer_FAQ.md

* Update Supported_Frameworks_Layers.md

* Update Convert_Model_From_Caffe.md

* Update Convert_Model_From_Kaldi.md

* Update Convert_Model_From_MxNet.md

* Update Convert_Model_From_ONNX.md

* Update Convert_Model_From_Paddle.md

* Update Convert_Model_From_PyTorch.md

* Update Convert_Model_From_TensorFlow.md

* Update Convert_Model_Tutorials.md

* Update Converting_Model.md

* Update Cutting_Model.md

* Update IR_suitable_for_INT8_inference.md

* Update Aspire_Tdnn_Model.md

* Update Convert_GluonCV_Models.md

* Update Convert_Style_Transfer_From_MXNet.md

* Update Convert_Faster_RCNN.md

* Update Convert_Mask_RCNN.md

* Update Convert_Bert_ner.md

* Update Convert_Cascade_RCNN_res101.md

* Update Convert_F3Net.md

* Update Convert_QuartzNet.md

* Update Convert_RCAN.md

* Update Convert_RNNT.md

* Update Convert_YOLACT.md

* Update Convert_AttentionOCR_From_Tensorflow.md

* Update Convert_BERT_From_Tensorflow.md

* Update Convert_CRNN_From_Tensorflow.md

* Update Convert_DeepSpeech_From_Tensorflow.md

* Update Convert_EfficientDet_Models.md

* Update Convert_FaceNet_From_Tensorflow.md

* Update Convert_GNMT_From_Tensorflow.md

* Update Convert_NCF_From_Tensorflow.md

* Update Convert_Object_Detection_API_Models.md

* Update Convert_RetinaNet_From_Tensorflow.md

* Update Convert_Slim_Library_Models.md

* Update Convert_WideAndDeep_Family_Models.md

* Update Convert_XLNet_From_Tensorflow.md

* Update Convert_YOLO_From_Tensorflow.md

* Update Convert_lm_1b_From_Tensorflow.md

* Update Customize_Model_Optimizer.md

* Update Extending_Model_Optimizer_with_Caffe_Python_Layers.md

* Update docs/MO_DG/prepare_model/FP16_Compression.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/FP16_Compression.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/FP16_Compression.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md

Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_Faster_RCNN.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_GPT2.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_Mask_RCNN.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Bert_ner.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_AttentionOCR_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_BERT_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_BERT_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_RetinaNet_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Getting_performance_numbers.md

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Additional_Optimizations.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Convert_Model_From_Kaldi.md

* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/MO_DG/IR_and_opsets.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update

* Apply suggestions from code review

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>

* Update Convert_Model_From_Paddle.md

Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-06-02 17:05:14 +02:00
Eddy Kim
04b69af0f5
[GPU] Support for PReLU with multiple dims slope tensor for GPU (#11782)
* reshape a slope tensor of channel-wise prelu

* changed to follow prelu spec

* added unittests for prelu with multiple dims slope

* Update constant.cpp

Blanks are added.

* added comments about PReLU slope reshape policy

* added int8 prelu fusion tests
2022-06-02 23:01:01 +09:00
Yuan Xu
fc61b001c0
Yuan transition guide restructure (#11778)
* Add Overview page

* Revert "Add Overview page"

* restructure

* update

* updates

* update

* update

* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* Update docs/OV_Runtime_UG/migration_ov_2_0/preprocessing.md

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* fix formatting

* fix formatting

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-06-02 11:33:44 +00:00
Katarzyna Mitrus
fb09555b6d
[Eye-9] Extend MO with Eye-9 op (#11555) 2022-06-02 11:29:55 +02:00
guozhong wang
5b75d69712
add testcase for EXCLUSIVE_ASYNC_REQUESTS when input device is AUTO (#11716)
* add AUTO cpu and gpu testcase for EXCLUSIVE_ASYNC_REQUESTS

* add AUTO myriad testcase for EXCLUSIVE_ASYNC_REQUESTS
2022-06-02 10:28:30 +08:00
guozhong wang
cd771ed23b
Simple graph correctness test for virtual devices (#11492)
* add cumulative correctness test

* add infer_correctness test

Signed-off-by: Hu, Yuan <yuan2.hu@intel.com>

* add comments

Signed-off-by: Hu, Yuan <yuan2.hu@intel.com>

Co-authored-by: Hu, Yuan <yuan2.hu@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-06-02 09:42:38 +08:00
Mykhailo Hnap
d26ed6a180
[GPU] Roll-7 (#11602)
* [GPU] Implement Roll kernel

* [GPU] Add Roll kernel selector

* [GPU] Add Roll primitive

* [GPU] Add Roll helpers

* [GPU] Implement unit tests for the Roll operation

* [GPU] Add Roll operation to GPU plugin

* [GPU] Add single layer tests for the Roll operation

* [GPU] Add changes after review

* [GPU] Improve cldnn unit test
2022-06-02 09:42:11 +09:00
Jan Iwaszkiewicz
c7f8112339
[ONNX] Extend ONNX FE with SoftSign-9 operation (#11766) 2022-06-01 13:37:17 +02:00
River Li
042bd7274a
Dynamic shape mem reuse solution (#11667)
* Dynamic shape memory reuse solution

* Fix Split node to properly work with dyn mem

* Fix race condition for Memory mgrHandle

* Avoid Memory race condition between GetData and SetDataHandle

Adding a lock for the race condition between ov::intel_cpu::Memory::GetData() and ov::intel_cpu::Memory::SetDataHandle() is not a good
solution, as it would hurt inference performance. We found that getting the edge DataPtr in InferRequest::SetBlob or GetBlob is
unnecessary; those calls only need the tensorDesc, so we fetch just the tensorDesc instead of the data pointer, avoiding the race condition.

* Resolve reviewer's comments

* Avoid the performance impact of frequently resetting MemMngrHandle

If MemMngrHandle has already been assigned an external buffer, it can be reused.
Otherwise, a new one needs to be created.
2022-06-01 18:49:47 +08:00
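The buffer-reuse policy described in the last bullet of this commit message can be sketched as follows. This is a minimal Python illustration of the idea only; `MemMngrHandle` here is a hypothetical stand-in, not the actual ov::intel_cpu class:

```python
class MemMngrHandle:
    """Hypothetical stand-in for the CPU plugin's memory manager handle."""

    def __init__(self):
        self._buffer = None
        self._external = False
        self.allocations = 0  # counts how often a fresh buffer is really created

    def set_external_buffer(self, buf):
        # Remember an externally assigned buffer so later resets
        # can reuse it instead of allocating a new one.
        self._buffer = buf
        self._external = True

    def reset(self, size):
        # Reuse the external buffer when it is present and large enough.
        if self._external and self._buffer is not None and len(self._buffer) >= size:
            return self._buffer
        # Otherwise take the expensive path and create a new buffer.
        self.allocations += 1
        self._buffer = bytearray(size)
        return self._buffer


handle = MemMngrHandle()
handle.set_external_buffer(bytearray(64))
for _ in range(10):
    handle.reset(32)  # reused every time, no new allocation
print(handle.allocations)  # 0
```

The point of the design choice is that repeated resets against an already-assigned external buffer become cheap no-ops rather than fresh allocations.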
Artur Kulikowski
8b1ed3d5b2
Fix onnx_frontend_tests (#11358) 2022-06-01 10:59:37 +02:00
Artur Kulikowski
c519aff42f
Enable ShuffleChannelsFusion and DepthToSpaceFusion in MOC (#11662)
Ticket: 79523
2022-05-31 11:09:00 +02:00
Chenhu Wang
2e4f14a9f7
fix uninitialized value reported by code scan (#11711) 2022-05-31 05:53:50 +03:00
cecilia peng
016c5f537a
Cecilia/multiclass nms9/cpu impl (#11246)
* multiclass_nms opset9 spec, api, reference, paddle fe mapper, paddle fe unittest.

* multiclass_nms opset9 cpu node impl.

* multiclass_nms opset9 shape infer fix.

* multiclass_nms opset9: add transform ConvertMulticlassNms8ToMulticlassNms9.

* ConvertMulticlassNmsToMulticlassNmsIE: to MulticlassNmsIEInternal

* add test dependency package paddledet==2.1.0

* 1. fix for roisnum overflow. 2. common shape_infer private function.

Signed-off-by: jialipen <cecilia.peng@intel.com>

* 1. use common infer_shape helper. 2. fix roisnum overflow issue. 3. fix for nmsWithEta.

* test suite for opset9 multiclass_nms smoke tests pass, with both static and dynamic shapes.

code clean for unit test.

* decouple specification from this PR.

* op fuzzy: dynamic input/output

* reference impl refactor

* multiclass_nms_base does not need clone_inputs.

* code clean

* restrict ppdet import

* fix clang format error

* change ppdet import to resolve CI fail issue related to its dependency.

* fix CI

* refactor: multiclass_nms_shape_inference for opset9 and reference impl.
TODO: could be applied to opset8 and even matrix_nms.

* fix CI build failure.

* CI fix for ambiguous namespace reference issue when
building static libs.

* update nms save_model python scripts.

* dynamic inputs for NMS with CPU plugin.

* copyright header for test scripts.

* op comformance test for multiclass_nms_9.

* minor update: is_type

* python opset9 and multiclass_nms

* flake8 CI fix

* remove NmsBase. stage1 (includes flake8 CI fix and a stage 1 follow-up fix).

* rm NmsBase. stage2.

* more multiclass_nms prop tests and fix.

* remove unchanged ops from binding opset9.

* dependcy of paddle_tests.

* fix: add MulticlassNms to op mapper.

* clang format fix

* fix merge error.
2022-05-31 07:56:01 +08:00
Sungeun Kim
82fdf165eb
[GPU] choose onednn for 3d conv (#10857)
* add formats for 3d conv
   data formats
   -bs_fs_zyx_bsv32_fsv32
   -bs_fs_zyx_bsv32_fsv16
   -bs_fs_zyx_bsv8_fsv4
   -bs_fs_zyx_bsv8_fsv2
   -bs_fs_zyx_bsv16_fsv32
   -b_fs_zyx_fsv2, b_fs_zyx_fsv4
   weight formats
   -os_is_zyx_osa2_isa8_osv8_isv2
   -os_is_zyx_osv8_isv4
   -os_is_zyx_osv8_isv2
   -gs_oizyx_gsv32
* add supported formats for primitives
* choose onednn convolution impl for 3d conv
* optimize layout of shallow depth convolution
* remove reorder for conv
* Don't remove reorder between bs_fs_zyx_b32_f16/f32 and bfyx.
* add formats to SetDefault() to optimize gws/lws for quantize/eltwise
* fall back to cldnn if the onednn pooling layout is b_fs_zyx_fsv32 and the type is i8.
* fixed wrong position for new weight formats
* restore imad_case()
* This func is used to choose the format for the cldnn fallback path
* [GPU] add debug flag: OV_GPU_SerialCompile
    0(default): parallel compile
    1: serial compile
* add is_mixed_layout
* remove format::bs_fs_zyx_bsv8_fsv4 in needs_onednn_small_ic_to_blocked
* prevent to fuse the reorder which is between quantize and conv
* shallow feature first conv
2022-05-31 07:54:00 +09:00
Yuan Xu
b67ffe303f
Fix a heading issue in Auto (#11744)
* fix the heading issue

* fix headings
2022-05-30 09:01:54 +00:00
Artur Kulikowski
93021121e3
Fix cutting the graph (#11574)
* Revert "[MO args][ONNX FE]fix cutting graph with input, output or both (#9698)"

This reverts commit 2b03d5fe66.

* Fix cutting the graph when inputs/outputs are passed to the MO

* Check that port exists

* Simplification of getting node port

* Reducing amount of nesting inside searching of node by operation name

* Refactoring

- remove mutable default arg
- changes in code style
- change variables name

* Check that user input data type is dictionary

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2022-05-30 10:04:24 +02:00
guozhong wang
f5e2a463b5
remove CPU from default candidate list when there are more than 2 GPUs (#11753) 2022-05-30 10:04:13 +08:00
Tomasz Jankowski
70e9cc0ce8
Enable ConvertNegative in MOC (#11720) 2022-05-27 13:37:35 +02:00
Artur Kulikowski
ae84e11a41
[ONNX Import] add method Node::get_attribute_as_constant() (#10783)
Tickets: 53284
2022-05-27 12:34:48 +02:00
Katarzyna Mitrus
8a975c886a
[MO] Support for TF GRUBlockCell (#11732)
* Add GRUBlockCell front extractor

* Add GRUBlockCell Op to mo ops

* Add TF GRUBlockCell mo layer tests

* Add GRUBlockCellToGRUCell Replacement init

* Update GRUBlockCellToGRUCell Replacement with gate order adjustment

* Update GRUBlockCellToGRUCell Replacement with weights transpose

* GRUBlockCellToGRUCell Replacment refactor

* Set tests eps to avoid sporadic failures

* Style
2022-05-27 11:47:42 +02:00
Bartek Szmelczynski
da09272d9f
remove xfails and update tolerance (#11729) 2022-05-27 10:26:28 +02:00