Mateusz Bencer
7335984edd
[MO] Check only source layout if both changing layout and reverse input channel are applied ( #11779 )
2022-06-06 17:46:07 +02:00
Min, Byungil
104a9d8d52
Add debug config for forcing impl type of fc ( #11713 )
...
+ Added new debugging config OV_GPU_ForceImplType to set impl type
Signed-off-by: Min, Byungil <byungil.min@intel.com>
2022-06-06 23:02:34 +09:00
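The debug config above is typically consumed as an environment variable in builds with GPU debug caps enabled. A minimal sketch of how it might be set; the value format `fc:onednn` (primitive:impl) is an assumption, not confirmed by this log:

```shell
# Hedged sketch: assumes an OpenVINO GPU plugin built with debug caps,
# and assumes the variable takes "<primitive>:<impl>" values such as fc:onednn.
export OV_GPU_ForceImplType=fc:onednn
echo "forcing impl type: $OV_GPU_ForceImplType"
```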
Luo Cheng
bb3868d8cd
[CPU] OneDNN 2.6 migration ( #11627 )
...
* Migrate to oneDNN 2.7
* [CPU] Enabled brgconv implementation
* Post ops optimizations
* [CPU] Enabled I8 precision on activations for Convolution node
* [CPU][WA] Disabled Deconvolution + post ops fusing optimization
* Fixed FQ post op optimization
* [CPU] Optimize post ops processing
* [WA] Add node name if tensor names are empty
* [WA] remove layout compatibility check that leads to false-positive exceptions
* [CPU] Optimize processing for FQ + Sum + FQ post ops pattern
* [CPU][WA] Enabled ReduceSum -> AvgPool transformation due to perf issues
* fix compiler error
* rebase onednn master
* cherry pick from 2.7 to 2.6
* [WA] make CPU case run to completion
* fix xmm zero check
* Re-enable 'FuseDeconvolutionAndSimpleOperation' transform to fix the CPU 'ConvolutionBackpropDataLayerTest' failure
* [WR] Removed the failing ReduceMean tests caused by 21f3555
* group deconv may crash on memory out of bound
* [WA] Remove the moc failing case introduced by #af4731a1
* testcase conv+maxpool checks brgconv instead of jit
* subgraph test adds nhwc format check
* fix gemm bf16 win crash
* fix avx2 groupconv accuracy problem
* [WA] remove invalid FQ tests
* WR to disable the LPT multiplyToGroupConv test because the transformation was disabled in d5e16f
* add gemm int8 binary postops to fix GroupConvolutionQDqTransformation fail
* fix gemm bf16 fail
* Fix ConcatConvSumInPlaceTest
* Add cpuDebugFuncTests target
* [WA] bf16 crash due to MemoryInput/Output
* OVClassBasicTest case typo
* testcase subgraph sets default ENFORCE_BF16 to NO
* fix clang check
* Fix primType check issue
* Fix cpplint error
* MemoryInput/Output support bf16; Enforce bf16 'NO' should enable snippets
* disable BF16 fusing fakequant testcase
* testcase init support amx check
* testcase for conv brgconv avx512/amx
* WR for enforced-reorder bug; add NSPC to the deconv supported list.
* Compiling issue fix.
* [WA] skip fakequantize fusing in bf16
* mix legacy/new binary postops
* make nightly cases run; tested on AMX/AVX512/AVX2.
* [CPU] Add BF16 AMX test for Matmul
* Add CPU dump check tool
* Add verbose log
* Generate exec graph in cpu dump check tool
* fix binary prelu post Ops
* fix cpplint
* Update ONEDNN version to fix AVX2 bug.
* cpu_dump_check supports comparing dump files
* Add a new CPU_DEBUG_CAPS: OV_CPU_SUMMARY_PERF
* change VERBOSE_LOG to DEBUG_LOG
* fix oneDNN register_jit_code log
* fix cpplint
* Add OV_CPU_DEBUG_LOG to control which debug logs to show
* Revert reorder WR.
* Enhanced CPU debug logs and breakpoint support
* Enhanced cpu_dump_check with --ports
* Fix DEBUG_LOG compile issue
* GroupDeconvolutionLayerCPUTest extend to add amx test cases
* Add node name into DEBUG_LOG
* cpu_dump_check: Dump results even if no port is specified
* Fix MergeTransposeAndReorder for blocked input
* Fix cpu_dump_check result names
* Enhance DEBUG_LOG on edges
* cpu_dump_check supports shape mismatch
* Fix bi-directional in-place
* cpu_dump_check supports inference_precision_hint f32
* Fix Windows dump failure
* fix depthwise nwc conv
* add rtol arg
* win debugbreak
* fix pooling accuracy
* GroupDeconvolutionLayerCPUTest remove invalid test param for nspc
* recover ov onednn fork
* revert af4731a1f1
'[WA] remove layout compatibility check'
* [WA] disable avx2 conv3d fusing case
* [WA] Disabled weights md transpose in FC to prevent perf degradations
Co-authored-by: dmitrygo <dmitry.gorokhov@intel.com>
Co-authored-by: Vladislav Golubev <vladislav.golubev@intel.com>
Co-authored-by: Zhang Yi3 <yi3.zhang@intel.com>
Co-authored-by: liubo-intel <bo4.liu@intel.com>
Co-authored-by: Luwei Zhou <luwei.zhou@intel.com>
Co-authored-by: Li, Tingqian <tingqian.li@intel.com>
Co-authored-by: xuchen-intel <chen.xu@intel.com>
Co-authored-by: ceciliapeng2011 <cecilia.peng@intel.com>
2022-06-06 18:30:32 +08:00
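The CPU_DEBUG_CAPS entries added in the commit above (OV_CPU_SUMMARY_PERF, OV_CPU_DEBUG_LOG) are environment-variable switches. A hedged sketch of how a run might toggle them; the value formats shown are assumptions and require a plugin built with debug caps:

```shell
# Hedged sketch: assumes a CPU plugin built with debug caps enabled.
# Value formats below are assumed (on/off flag, log filter string).
export OV_CPU_SUMMARY_PERF=1   # assumed: print a per-node perf summary
export OV_CPU_DEBUG_LOG=-      # assumed: '-' enables all debug logs
echo "summary=$OV_CPU_SUMMARY_PERF log=$OV_CPU_DEBUG_LOG"
```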
Helena Kloosterman
764a2ec012
Fixes for OpenCV script filename in docs ( #11791 )
2022-06-06 16:42:04 +08:00
guozhong wang
d3b7a1b86c
fix cumulative mode not handling nireq ( #11775 )
2022-06-06 10:10:05 +08:00
Artur Kulikowski
32580ca65b
Add ReduceMerge transformation ( #11746 )
...
Tickets: 60918
2022-06-04 13:44:50 +02:00
Sungeun Kim
8029fd9675
add mixed precision case ( #11747 )
2022-06-04 11:40:52 +09:00
Mateusz Bencer
dba4dbb9d6
Improve L2NormFusion transformation ( #11765 )
2022-06-03 18:51:21 +02:00
Mateusz Tabaka
4650ecd0b5
Disable fusings_gpu/deconv_scale_activation_quantize_i8_eltwise_quantize_u8.basic/4 ( #11777 )
2022-06-03 21:07:48 +09:00
Min, Byungil
aea04e275c
[GPU] Enable onednn reduction ( #11570 )
...
* Better performance by using the oneDNN reduction kernel instead of the pooling kernel for the reduction layer.
* Stop using global pooling instead of the reduce primitive
* Use oneDNN reduction if its mode is supported by an optimized oneDNN kernel
* activation pow is supported
* Use clDNN reduce for 3D or redundant reduce, or on tensor size mismatch
* Updated thirdparty onednn_gpu
Signed-off-by: Min, Byungil <byungil.min@intel.com>
Co-authored-by: Wei Tang <wei1.tang@intel.com>
Co-authored-by: Chen Kurt <kurt.chen@intel.com>
2022-06-03 18:36:27 +09:00
mei, yang
b780c61506
Extend python API for GenerateProposals ( #11723 )
2022-06-03 10:38:02 +02:00
Krzysztof Bruniecki
4ef0aab166
Create copy of RO IR/bin file mapped Blob to allow converting from NCHW to NHWC ( #11771 )
2022-06-03 08:38:52 +02:00
Krzysztof Bruniecki
10a6e56811
Return instead of throw in check functions around dtors ( #11745 )
2022-06-03 08:37:33 +02:00
Katarzyna Mitrus
47155b43d0
[Transformation] Add GeluFusion with Tanh ( #11752 )
2022-06-03 08:36:12 +02:00
Przemyslaw Wysocki
1db4446e2a
[MO] Extend MO for NonMaxSuppression-9 ( #11576 )
2022-06-02 18:18:14 +02:00
Sebastian Golebiewski
b88eed7645
Proofreading MO Guide ( #11605 )
...
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/FP16_Compression.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/customize_model_optimizer/Extending_Model_Optimizer_with_Caffe_Python_Layers.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_lm_1b_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update Additional_Optimizations.md
* Update Deep_Learning_Model_Optimizer_DevGuide.md
* Update IR_and_opsets.md
* Update Getting_performance_numbers.md
* Update Model_Optimizer_FAQ.md
* Update Supported_Frameworks_Layers.md
* Update Convert_Model_From_Caffe.md
* Update Convert_Model_From_Kaldi.md
* Update Convert_Model_From_MxNet.md
* Update Convert_Model_From_ONNX.md
* Update Convert_Model_From_Paddle.md
* Update Convert_Model_From_PyTorch.md
* Update Convert_Model_From_TensorFlow.md
* Update Convert_Model_Tutorials.md
* Update Converting_Model.md
* Update Cutting_Model.md
* Update IR_suitable_for_INT8_inference.md
* Update Aspire_Tdnn_Model.md
* Update Convert_GluonCV_Models.md
* Update Convert_Style_Transfer_From_MXNet.md
* Update Convert_Faster_RCNN.md
* Update Convert_Mask_RCNN.md
* Update Convert_Bert_ner.md
* Update Convert_Cascade_RCNN_res101.md
* Update Convert_F3Net.md
* Update Convert_QuartzNet.md
* Update Convert_RCAN.md
* Update Convert_RNNT.md
* Update Convert_YOLACT.md
* Update Convert_AttentionOCR_From_Tensorflow.md
* Update Convert_BERT_From_Tensorflow.md
* Update Convert_CRNN_From_Tensorflow.md
* Update Convert_DeepSpeech_From_Tensorflow.md
* Update Convert_EfficientDet_Models.md
* Update Convert_FaceNet_From_Tensorflow.md
* Update Convert_GNMT_From_Tensorflow.md
* Update Convert_NCF_From_Tensorflow.md
* Update Convert_Object_Detection_API_Models.md
* Update Convert_RetinaNet_From_Tensorflow.md
* Update Convert_Slim_Library_Models.md
* Update Convert_WideAndDeep_Family_Models.md
* Update Convert_XLNet_From_Tensorflow.md
* Update Convert_YOLO_From_Tensorflow.md
* Update Convert_lm_1b_From_Tensorflow.md
* Update Customize_Model_Optimizer.md
* Update Extending_Model_Optimizer_with_Caffe_Python_Layers.md
* Update docs/MO_DG/prepare_model/FP16_Compression.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/FP16_Compression.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/FP16_Compression.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Convert_Model_From_PyTorch.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Converting_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/IR_suitable_for_INT8_inference.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/kaldi_specific/Aspire_Tdnn_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_Faster_RCNN.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_GPT2.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/onnx_specific/Convert_Mask_RCNN.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Bert_ner.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_Cascade_RCNN_res101.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_YOLACT.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_AttentionOCR_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_BERT_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_BERT_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_CRNN_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_DeepSpeech_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_RetinaNet_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Slim_Library_Models.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update Getting_performance_numbers.md
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Additional_Optimizations.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update Convert_Model_From_Kaldi.md
* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Apply suggestions from code review
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Apply suggestions from code review
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_NCF_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_FaceNet_From_Tensorflow.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/MO_DG/IR_and_opsets.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Apply suggestions from code review
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Getting_performance_numbers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Apply suggestions from code review
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update
* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Apply suggestions from code review
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Update Convert_Model_From_Paddle.md
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-06-02 17:05:14 +02:00
Eddy Kim
04b69af0f5
[GPU] Support for PReLU with multiple dims slope tensor for GPU ( #11782 )
...
* reshape a slope tensor of channel-wise prelu
* changed to follow prelu spec
* added unittests for prelu with multiple dims slope
* Update constant.cpp
Blanks are added.
* added comments about PReLU slope reshape policy
* added int8 prelu fusion tests
2022-06-02 23:01:01 +09:00
Yuan Xu
fc61b001c0
Yuan transition guide restructure ( #11778 )
...
* Add Overview page
* Revert "Add Overview page"
* restructure
* update
* updates
* update
* update
* Update docs/OV_Runtime_UG/migration_ov_2_0/intro.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* Update docs/OV_Runtime_UG/migration_ov_2_0/preprocessing.md
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
* fix formatting
* fix formatting
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2022-06-02 11:33:44 +00:00
Katarzyna Mitrus
fb09555b6d
[Eye-9] Extend MO with Eye-9 op ( #11555 )
2022-06-02 11:29:55 +02:00
guozhong wang
5b75d69712
add testcase for EXCLUSIVE_ASYNC_REQUESTS when input device is AUTO ( #11716 )
...
* add AUTO cpu and gpu testcase for EXCLUSIVE_ASYNC_REQUESTS
* add AUTO myriad testcase for EXCLUSIVE_ASYNC_REQUESTS
2022-06-02 10:28:30 +08:00
guozhong wang
cd771ed23b
Simple graph correctness test for virtual devices ( #11492 )
...
* add cumulative correctness test
* add infer_correctness test
Signed-off-by: Hu, Yuan <yuan2.hu@intel.com>
* add comments
Signed-off-by: Hu, Yuan <yuan2.hu@intel.com>
Co-authored-by: Hu, Yuan <yuan2.hu@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
Co-authored-by: Shen, Wanglei <wanglei.shen@intel.com>
2022-06-02 09:42:38 +08:00
Mykhailo Hnap
d26ed6a180
[GPU] Roll-7 ( #11602 )
...
* [GPU] Implement Roll kernel
* [GPU] Add Roll kernel selector
* [GPU] Add Roll primitive
* [GPU] Add Roll helpers
* [GPU] Implement unit tests for the Roll operation
* [GPU] Add Roll operation to GPU plugin
* [GPU] Add single layer tests for the Roll operation
* [GPU] Add changes after review
* [GPU] Improve cldnn unit test
2022-06-02 09:42:11 +09:00
Jan Iwaszkiewicz
c7f8112339
[ONNX] Extend ONNX FE with SoftSign-9 operation ( #11766 )
2022-06-01 13:37:17 +02:00
River Li
042bd7274a
Dynamic shape mem reuse solution ( #11667 )
...
* Dynamic shape memory reuse solution
* Fix Split node to properly work with dyn mem
* Fix race condition for Memory mgrHandle
* Avoid Memory race condition between GetData and SetDataHandle
Adding a lock for the race condition between ov::intel_cpu::Memory::GetData() and ov::intel_cpu::Memory::SetDataHandle() is not a good solution,
as it would impact inference performance. We found that it is unnecessary to get the edge DataPtr in inferRequest::SetBlob or GetBlob, which
only need the tensorDesc, so we fetch only the tensorDesc instead of the dataPtr to avoid this race condition.
* Resolve reviewer's comments
* Avoid performance impact due to frequent resets of MemMngrHandle
If MemMngrHandle has already been assigned an external buffer, it can be reused.
Otherwise, a new one needs to be created.
2022-06-01 18:49:47 +08:00
Artur Kulikowski
8b1ed3d5b2
Fix onnx_frontend_tests ( #11358 )
2022-06-01 10:59:37 +02:00
Artur Kulikowski
c519aff42f
Enable ShuffleChannelsFusion and DepthToSpaceFusion in MOC ( #11662 )
...
Ticket: 79523
2022-05-31 11:09:00 +02:00
Chenhu Wang
2e4f14a9f7
fix uninitialized value in code scan ( #11711 )
2022-05-31 05:53:50 +03:00
cecilia peng
016c5f537a
Cecilia/multiclass nms9/cpu impl ( #11246 )
...
* multiclass_nms opset9 spec, api, reference, paddle fe mapper, paddle fe unittest.
* multiclass_nms opset9 cpu node impl.
* multiclass_nms opset9 shape infer fix.
* multiclass_nms opset9: add transform ConvertMulticlassNms8ToMulticlassNms9.
* ConvertMulticlassNmsToMulticlassNmsIE: to MulticlassNmsIEInternal
* add test dependency package paddledet==2.1.0
* 1. fix for roisnum overflow. 2. common shape_infer private function.
Signed-off-by: jialipen <cecilia.peng@intel.com>
* 1. use common infer_shape helper. 2. fix roisnum overflow issue. 3. fix for nmsWithEta.
* test suite for opset9 multiclass_nms smoke tests pass, with both static and dynamic shapes.
code clean for unit test.
* decouple specification from this PR.
* op fuzzy: dynamic input/output
* reference impl refactor
* multiclass_nms_base no need clone_inputs.
* code clean
* restrict ppdet import
* fix clang format error
* change ppdet import to resolve CI fail issue related to its dependency.
* fix CI
* refactor: multiclass_nms_shape_inference for opset9 and reference impl.
TODO: could be applied to opset8 and even matrix_nms.
* fix CI build failure.
* CI fix for ambiguous namespace reference issue when
building static libs.
* update nms save_model python scripts.
* dynamic inputs for NMS with CPU plugin.
* copyright header for test scripts.
* op conformance test for multiclass_nms_9.
* minor update: is_type
* python opset9 and multiclass_nms
* flake8 CI fix
flake8 CI fix
flake8 CI fix
* remove NmsBase. stage1.
flake8 CI fix
remove NmsBase. stage 1 fix.
* rm NmsBase. stage2.
* more multiclass_nms prop tests and fix.
* remove unchanged ops from binding opset9.
* dependency of paddle_tests.
* fix: add MulticlassNms to op mapper.
* clang format fix
* fix merge error.
2022-05-31 07:56:01 +08:00
Sungeun Kim
82fdf165eb
[GPU] choose onednn for 3d conv ( #10857 )
...
* add formats for 3d conv
data formats
-bs_fs_zyx_bsv32_fsv32
-bs_fs_zyx_bsv32_fsv16
-bs_fs_zyx_bsv8_fsv4
-bs_fs_zyx_bsv8_fsv2
-bs_fs_zyx_bsv16_fsv32
-b_fs_zyx_fsv2, b_fs_zyx_fsv4
weight formats
-os_is_zyx_osa2_isa8_osv8_isv2
-os_is_zyx_osv8_isv4
-os_is_zyx_osv8_isv2
-gs_oizyx_gsv32
* add supported formats for primitives
* choose onednn convolution impl for 3d conv
* optimize layout of shallow depth convolution
* remove reorder for conv
* Don't remove reorder between bs_fs_zyx_b32_f16/f32 and bfyx.
* add formats to SetDefault() to optimize gws/lws for quantize/eltwise
* fall back to cldnn if onednn pooling's layout is b_fs_zyx_fsv32 and i8.
* fixed wrong position for new weight formats
* restore imad_case()
* This function is used to choose the format for fallback cldnn
* [GPU] add debug flag: OV_GPU_SerialCompile
0 (default): parallel compile
1: serial compile
* add is_mixed_layout
* remove format::bs_fs_zyx_bsv8_fsv4 in needs_onednn_small_ic_to_blocked
* prevent fusing the reorder between quantize and conv
* shallow feature first conv
2022-05-31 07:54:00 +09:00
Yuan Xu
b67ffe303f
Fix a heading issue in Auto ( #11744 )
...
* fix the heading issue
* fix headings
2022-05-30 09:01:54 +00:00
Artur Kulikowski
93021121e3
Fix cutting the graph ( #11574 )
...
* Revert "[MO args][ONNX FE]fix cutting graph with input, output or both (#9698 )"
This reverts commit 2b03d5fe66.
* Fix cutting the graph when inputs/outputs are passed to the MO
* Check that port exists
* Simplification of getting node port
* Reducing amount of nesting inside searching of node by operation name
* Refactoring
- remove mutable default arg
- changes in code style
- change variables name
* Check that user input data type is dictionary
Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2022-05-30 10:04:24 +02:00
guozhong wang
f5e2a463b5
remove CPU from default candidate list when there are more than 2 GPUs ( #11753 )
2022-05-30 10:04:13 +08:00
Tomasz Jankowski
70e9cc0ce8
Enable ConvertNegative in MOC ( #11720 )
2022-05-27 13:37:35 +02:00
Artur Kulikowski
ae84e11a41
[ONNX Import] add method Node::get_attribute_as_constant() ( #10783 )
...
Tickets: 53284
2022-05-27 12:34:48 +02:00
Katarzyna Mitrus
8a975c886a
[MO] Support for TF GRUBlockCell ( #11732 )
...
* Add GRUBlockCell front extractor
* Add GRUBlockCell Op to mo ops
* Add TF GRUBlockCell mo layer tests
* Add GRUBlockCellToGRUCell Replacement init
* Update GRUBlockCellToGRUCell Replacement with gate order adjustment
* Update GRUBlockCellToGRUCell Replacement with weights transpose
* GRUBlockCellToGRUCell Replacement refactor
* Set tests eps to avoid sporadic failures
* Style
2022-05-27 11:47:42 +02:00
Bartek Szmelczynski
da09272d9f
remove xfails and update tolerance ( #11729 )
2022-05-27 10:26:28 +02:00
Bartek Szmelczynski
ffd797bc9f
[PYTHON][NMS-9] Extend Python API for NMS-9 ( #11681 )
...
* extend NMS-9 ngraph python
* add tests for NMS
* move tests for NMS from test_reduction to test_create_op
2022-05-27 10:25:49 +02:00
Sungeun Kim
7a1e7f122f
[GPU] some convs are in ref for WDSR ( #11728 )
...
* add supported data types for onednn conv
* Remove case: in_f32 to out_f32 in are_data_types_suitable_for_onednn
2022-05-27 13:50:30 +09:00
Artur Kulikowski
873e3dad2d
Limiting protobuf to version < 4.0.0 ( #11748 )
...
* Upgrade Protobuf to version 3.18.2 in python's requirements
* PaddlePaddle tests requires protobuf < 4.0.0
* ONNX tests use protobuf 3.18
* Python bindings protobuf <4.0.0
2022-05-26 21:32:37 +02:00
Yuan Xu
1bcdf48f42
Get started guide restructuring and updating ( #11719 )
...
* Add Overview page
* Revert "Add Overview page"
* restructure get started home page
* update navigation menu
* update formatting
* update wording
* update
* rename configurations files
* update wording
* adjust the structure
* update formatting
* reverse the heading
* test with formatting
* 2nd version of Get Started homepage
* add line breaks
* change to ordered list
* update wording
* update content
* updates
* update DL workbench reference
* update wording
* update references to pip installations
* remove redundant files
* update headings
2022-05-26 17:09:31 +02:00
Paul Youngsoo Ahn
c185198785
[GPU] Added UUID property ( #81574 ) ( #11567 )
...
Co-authored-by: Ahn, Paul Y <paul.y.ahn@intel.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
2022-05-26 16:44:53 +09:00
opoluektov-lohika
ccd001f25b
[GPU] Support axis 0 for Softmax ( #10364 )
...
* [GPU] Modify Softmax single layer tests to check Softmax-8 is supported with axes in [-rank, rank) interval
* [GPU] Fix cldnn::softmax::dimension_t documentation
* [GPU] Fix ParamsKey::EnableSoftmaxDim
Support Z dimension.
* [GPU] Add Softmax single layer test that checks 5D case
Since some Softmax kernel code contains ifdef on 5-dimensional case,
a test case is needed that covers this functionality.
* [GPU] Support axis 0 in Softmax
* [GPU] Modify Softmax single layer tests to check axis 0
* [GPU] Modify Softmax items class optimized kernel to handle axis 0 correctly
Modify single layer test accordingly.
* [GPU] Modify Softmax unit-test to check softmax::normalize_b
* Split SoftMaxLayerTest into opset1 and opset8 versions
Use SoftMax8LayerTest in the tests throughout repository.
SoftMaxLayerTest now defaults to SoftMax1LayerTest for compatibility.
* [GPU] Add f16 test-case for Softmax single-layer test
Co-authored-by: tgubanova-lohika <tgubanova@lohika.com>
2022-05-26 12:06:08 +09:00
Bo Liu
1cce278fcb
Paddle Frontend Op conversion: ROIAlign9,Sqrt,Swish ( #11661 )
...
* Paddle Frontend Op conversion: ROIAlign9,Sqrt,Swish
* modify import ppdet way based on the latest master branch
2022-05-26 08:38:31 +08:00
Tomasz Dołbniak
dd930fdb6e
GridSample-9 specification ( #11703 )
2022-05-25 21:34:11 +02:00
yanlan song
0c7840ef28
multi code refine ( #11663 )
...
* draft
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* refactor for multi
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* refactor auto draft
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* try to fix executable get config test failed issue
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* set ExecNetwork only one time
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* format code and using alias
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* clear head file
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* change name from Context to ScheduleContext
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* polishing
Signed-off-by: fishbell <bell.song@intel.com>
* polish & new implementation
Signed-off-by: fishbell <bell.song@intel.com>
* enable/test a new schedule
Signed-off-by: fishbell <bell.song@intel.com>
* port fps logs over
Signed-off-by: fishbell <bell.song@intel.com>
* restructure
Signed-off-by: fishbell <bell.song@intel.com>
* fix windows build failure
Signed-off-by: fishbell <bell.song@intel.com>
* clean up code
Signed-off-by: fishbell <bell.song@intel.com>
Co-authored-by: Hu, Yuan2 <yuan2.hu@intel.com>
2022-05-25 22:53:47 +08:00
Tetiana Gubanova
eb36b891eb
[GPU] Implement ExperimentalDetectronPriorGridGenerator-6 GPU implementation ( #11632 )
...
* Implement experimental_detectron_prior_grid_generator kernel
* register experimental_detectron_prior_grid_generator operation
* implement single layer tests
* add unit tests to detectron
2022-05-25 18:52:03 +09:00
Jan Iwaszkiewicz
320531def0
[MO] SoftSign operator extractors ( #11726 )
2022-05-25 11:44:26 +02:00
Krzysztof Bruniecki
81adc47e83
[GNA] Implement GNA memory region splitting (RO/Input/Output/State/Scratch) and export in GNA format enabled ( #11577 )
2022-05-25 11:40:50 +02:00
Serhii Pavlovskyi
4b08ce4787
[GPU] (I)Dft with single layer test ( #9891 )
...
* dft with single layer test
* idft with single layer test
* fix output param usage in dft
* update dft according to the clang-format
* move output layout setup to calc_output_layout
* add support for other dimensions
* add clDNN unit test for DFT/IDFT
* remove unnecessary original rank
* use defined formats in kernel
* fix dft docs
* changes after review
* Revert "fix dft docs"
This reverts commit 45b05172dfd161d92dae6d26e0f1b74748e56fd5.
Co-authored-by: Serhii Pavlovskyi <spavlovskyi@lohika.com>
Co-authored-by: Mykhailo Hnap <mhnap@lohika.com>
2022-05-25 16:24:46 +09:00
Mateusz Tabaka
e767e9e243
Extend python API of RDFT and IRDFT ( #11737 )
...
Tickets: 79184 and 79198
2022-05-25 08:25:01 +02:00