Commit Graph

6176 Commits

Author SHA1 Message Date
Maksim Kutakov
2870dc7d3f
[CPU] Cache for runtime data (#9192)
Caching added for Eltwise and MatMul nodes
2021-12-29 09:19:45 +03:00
Vladislav Volkov
cb9fe0910d
[CPU] Broken support for Layout::ANY in CPU plugin (#9434) 2021-12-29 09:09:56 +03:00
Luwei Zhou
ce753f41dc
[shape_infer] Shape inference implementation for the Select, DetectionOutput and ShuffleChannels OPs (#8348)
* Implement detection_output shape infer

* revise and update the code flow

* update based on review.

* Update based on review

* Implement the shuffle_channels Op shape inference.

* Fix CI coding style issue.

* Implement the select OP shape inference.

* Update based on the review comments

* Update based on the review comments.

* Add pragma once for the shape inference head.

* Add new shape_infer test file for detection_output OP.

* Ensure the header would only be included once.

* Add shuffle_channels OP shape infer test.

* Add shape_infer() invocations into shape_inference() API

shape_inference() API now supports the Select, ShuffleChannels and DetectionOutput OPs
Fix extra pragma, unnecessary friend function declaration.

* Update based on the review comments.

* Move the shape infer API helpers into new folder.

* Applied review comments.

* Applied 2nd review comments

* Applied review comments

* Fix coding style.

* Update

* Applied review comments.

* Fix compilation issue caused by an unused variable.

* Fix the CI issue.

* Update the coding style

* Move test cases into new folder

* Applied review comments.
2021-12-29 05:39:50 +03:00
Pavel Esir
a51a735d9f
[MO] cli_parser.py fix to accept scalar value for freezing (#9395)
* cli_parser.py fix to accept scalar value for freezing

* update cli help

* fixed unit-tests, clarified help for specifying data type

* typo corrections
2021-12-29 01:33:49 +03:00
Anton Chetverikov
3e6951c1da
[MO] Support for common rt_info attribute in MO IR Reader (#9272)
* Support for common rt_info attribute in MO IR Reader

* Add missed change

* Moved back wrong change

* Change attr name

* Add support for rt_info for out ports

* Add emitting for rt_info

* Fix restoration error

* Add support for rt_info for input ports

* Add more comments

* Set correct layout attr to restored graph
2021-12-29 00:59:48 +03:00
Milana Shhanukova
3cef513495
[POT] Check type for layers' renaming (#9276)
* change in installation

* develop mode

* change in install guide

* style change

* change declare

* add type checking

* revert changes

* rename directly in nx_model

* Update README_dev.md
2021-12-28 18:08:36 +03:00
Roman Kazantsev
b2aff0cd56
[MO SDL] Test MO and IR Reader on attacking inputs (#8947)
* Test MO and IR Reader on attacking inputs

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Add test to check IR Reader against untrusted well-formed IR

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Refactor IR Reader tests with corrupted IR

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Test for regular expression denial of service

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Remove undesired word like bomb

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Move tests to new location

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Use correct import

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Revert blank line

Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2021-12-28 17:49:48 +03:00
Jade Cho
da1261a1d8
[GPU] Fix a bug in post-operation optimization (#9443) 2021-12-28 17:48:29 +03:00
hyunback kim
c6bc4d0045
[GPU] Fix debug_config build failure. (#9441) 2021-12-28 17:00:26 +03:00
Ivan Novoselov
0803684f9e
[Snippets] Support decreasing output shapes (#9446) 2021-12-28 16:22:45 +03:00
Roman Lyamin
ecbeff460a
[GPU] Fix binary size check in SetBlob (#9418) 2021-12-28 16:12:15 +03:00
Vladimir Dudnik
04bb8bb9bb
[IE Samples] fix hello classification cpp (#9450)
* fix image file read error message when the sample is built w/o OpenCV

* code style and use model inputs/outputs instead of parameters and results
2021-12-28 15:58:09 +03:00
azhogov
bd82e8d000 Fix stress test install: remove requirements-caffe2.in 2021-12-28 12:07:32 +03:00
Aleksandr Korolev
c87ac722b1
[VPU] Enable new InferRequest behavior tests with OV 2.0 (#9301) 2021-12-28 11:21:00 +03:00
Alexander Zhogov
32bc7840fc
Revert "Revert "[LPT] Assign + ReadValue transformation (#8690)" (#9457)" (#9460)
This reverts commit d51f337934.
2021-12-27 22:51:23 +03:00
Vladimir Paramuzov
f565e0f854
[GPU] Merge cldnn and plugin code (#8484) 2021-12-27 18:35:01 +03:00
Alexander Zhogov
d51f337934
Revert "[LPT] Assign + ReadValue transformation (#8690)" (#9457)
This reverts commit c5824b8494.
2021-12-27 18:25:11 +03:00
azhogov
7d198a8535 Azure CI: Remove yaml for creating Windows docker 2021-12-27 18:22:31 +03:00
Vladimir Dudnik
ab10057371
update open_model_zoo submodule. remove deprecated public models, fix mo path in omz_converter (#9453) 2021-12-27 17:32:40 +03:00
Artyom Anokhov
2538ae5da1
[Deployment Manager] Remove Python common component from configs (#9448)
* Deployment Manager configs: Remove `Python common` component

* Deployment Manager configs: Fixed misprint

* DM configs: Fixed JSON syntax

* DM darwin.json: Removed double comma
2021-12-27 17:17:22 +03:00
Vladimir Zinoviev
c5824b8494
[LPT] Assign + ReadValue transformation (#8690)
* [LPT] Assign + ReadValue transformation

* [LPT] Assign+ReadValue: applied review comments

* fixed some misses
2021-12-27 17:07:20 +03:00
Eugeny Volosenkov
fa1b59b7be
Fix incorrectly working UnpackPackReverseInputChannels for CenterNet (#9201)
* fix UnpackPackReverseInputChannels

* Add UnpackPackReverseInputChannels test
2021-12-27 15:57:02 +03:00
Chenhu Wang
a83bcee4bd
[CPU] NMS optimization (#8312) 2021-12-27 15:51:50 +03:00
azhogov
5afb63b06b Azure CI: Add yaml for creating Windows docker 2021-12-27 15:18:54 +03:00
Sergey Lyubimtsev
4188dbbf9f
install_openvino_dependencies.sh update (#9398)
* Remove opencv requirements from default components list

* Remove opencv requirements from default components list

* fix typo
2021-12-27 13:06:12 +03:00
Sergey Shlyapnikov
95d86eb2bf
[GPU] Add parallel quantizes optimization (#9370) 2021-12-27 09:47:20 +03:00
Vladimir Paramuzov
0fa226a0c2
[GPU] Fixed uninitialized variable and exceptions from nothrow methods (#9426) 2021-12-27 09:46:52 +03:00
Roman Lyamin
b050d39f89
[GPU] Add batching surface to new API (#9435) 2021-12-27 09:46:37 +03:00
Dmitry Pigasin
d6dcf58846
[IE Python Speech Sample] Migrate to OV 2.0 API (#9348)
* Create mvp

* Implement new API & Refactoring

* Fix -oname for models whose output layer name contains a port number

* Fix step numbers

* Create utils.py

* Remove shebang from utils.py

* Fix `-iname` option
2021-12-27 09:22:15 +03:00
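
For context on the OV 2.0 migration in the sample entry above, a minimal sketch of the new Python inference flow, assuming the openvino.runtime bindings of that period (the model path and the zero-filled input are placeholders, not taken from the sample):

    import numpy as np
    from openvino.runtime import Core  # OV 2.0 Python API

    core = Core()
    model = core.read_model("model.xml")               # placeholder path
    compiled_model = core.compile_model(model, "CPU")
    infer_request = compiled_model.create_infer_request()
    # Dummy tensor shaped like the first input port, just to run the pipeline
    dummy_input = np.zeros(list(compiled_model.input(0).shape), dtype=np.float32)
    results = infer_request.infer({0: dummy_input})    # results keyed by output port
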
Vladimir Dudnik
a9cee5f101
[IE Samples] OV2.0 API python ngraph_function_creation_sample (#9440)
* [IE Python Speech Sample] Migrate to OV 2.0 API

* improvements

* flake notes

* improved code style to match the C++ sample

* linters changes

* changed data.py

* sync output with C++ sample

Co-authored-by: Maxim Gordeev <maxim.gordeev@intel.com>
2021-12-27 09:19:18 +03:00
Egor Shulman
b454076a56
[CPU] Fixed leftovers for ExperimentalDetectronTopKROIs and klocwork issue (#7885) 2021-12-26 21:25:33 +03:00
Artur Kulikowski
2262692ce9
Generate result names for ONNX models (#9413) 2021-12-26 17:12:01 +01:00
Sergey Lyubimtsev
73143b8c03
Add batch plugin to openvino wheel (#9432) 2021-12-25 12:42:55 +03:00
Roman Kazantsev
acfae31759
Extend FIFOQueueDequeue replacer to support OOB case (#9428)
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
2021-12-25 10:32:13 +03:00
Anton Pankratov
6dcbb13748
Removed implicit cast from runtime objects (#9419) 2021-12-24 21:14:41 +03:00
Alexandra Sidorova
fa2647f965
[CPU] Added dynamism support for If (#8967) 2021-12-24 19:43:30 +03:00
Alexey Lebedev
de136a6515
[PYTHON API] fix model inputs and outputs property (#9407)
* fix inputs.property and add tests for reshape

* add is_instance in test

* fix code style
2021-12-24 16:42:48 +03:00
Alexey Lebedev
3ca80d12b2
[tools] use ports instead of parameters and results in benchmark tool (#9422)
* use ports instead of parameters and results

* Fix element_type if preprocessing is skipped

* rename function to model

* rename exe_network to compiled_model
2021-12-24 15:55:58 +03:00
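
The ports-instead-of-parameters/results change above corresponds roughly to the following access pattern; a hedged sketch rather than the benchmark tool's actual code, with a placeholder model path:

    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")   # placeholder path
    # Port-based access, as the reworked benchmark tool uses:
    for port in model.inputs:
        print(port.get_any_name(), port.get_element_type(), port.get_partial_shape())
    for port in model.outputs:
        print(port.get_any_name(), port.get_element_type(), port.get_partial_shape())
    # rather than walking the nGraph-style nodes via model.get_parameters() / model.get_results()
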
Alexandra Sidorova
91945ba122
[CPU] Added dynamism support for TensorIterator (#8879) 2021-12-24 15:08:42 +03:00
Ilya Churaev
d1fd0d259e
Introduce get|set_layout helpers (#9401)
* Introduce get|set_layout helpers

* Added python tests and fixed comments

* Added non constant methods

* Update src/bindings/python/tests/test_ngraph/test_basic.py

Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>

* Fixed tests

* Fixed code style

* Fixed func tests

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: Alexey Lebedev <alexey.lebedev@intel.com>
2021-12-24 14:24:09 +03:00
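
Usage-wise, the get|set_layout helpers above attach or query a layout on a model port. A rough Python sketch, assuming the helpers are exposed under openvino.runtime.layout_helpers (that module path, the model path and the NCHW layout are assumptions, not taken from the PR):

    from openvino.runtime import Core, Layout
    from openvino.runtime import layout_helpers  # assumed location of the helpers

    core = Core()
    model = core.read_model("model.xml")                       # placeholder path
    layout_helpers.set_layout(model.input(0), Layout("NCHW"))  # attach a layout to the input port
    print(layout_helpers.get_layout(model.input(0)))           # read it back
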
Maxim Vafin
3f35e2a321
Enable new FP16 format and support mixed precision in MO (#8514)
* Enable new FP16 format and support mixed precision

* Apply review comments

* Fix issue with fp64 in FakeQuantWithMinMaxVars.py

* Enable decompression converts fusing for the CPU plugin

* Apply review feedback

* Fix code style

* Fix issue with np.full and apply review feedback

* Apply review feedback

* Fix HardSigmoid onnx extractor

* Replace np.arrays that were skipped with mo_array

* Fix compress_quantized_weights_test.py

* Fix import issues

* Apply review feedback and fix type of fusing linops in MO

* Apply review feedback

* Fix types for Mean/Scales and MXNET zeros

* Add RandomUniform_8 to ConvertPrecision

* Fix merge issue

* Fix consts names collision in GPU plugin
2021-12-24 14:00:37 +03:00
Mikhail Ryzhov
43c45d3065
Moved gna library cmake to plugin dir (#9393) 2021-12-24 13:02:43 +03:00
Mikhail Nosov
9cc4504b78
Removed OV_FRONTEND_PATH from 'setupvars' scripts (#9396)
* Removed OV_FRONTEND_PATH from 'setupvars' scripts

* Update linux.yml

* Change mock frontend's install dir for static builds

* revert linux.yml
2021-12-24 13:01:51 +03:00
Maxim Shevtsov
49b5e5728b
Auto Batching impl (#7883)
* auto-batching POC squashed (all commits from auto-batch-2021.3 branch)

(cherry picked from commit d7742f2c747bc514a126cc9a4d5b99f0ff5cbbc7)

* applying/accommodating the API changes after rebase to the master

* replaying modified version of actual batch selection

* early experiments with model mem footprint

* changes from rebasing to the latest master

* experimenting with DG1 on the batch size selection, also collecting the mem footprint

* WIP: moving the auto-batching to the icore to let the MULTI/AUTO support that, ALLOW_AUTO_BATCHING as a conventional config key. still fails hot device swap

* quick-n-dirty batch footprint vs device total mem

* code style

* testing which models perform badly due to kernels and NOT (batched) footprint

* stub pipeline task to communicate the readiness rather than promise/future

* quick-n-dirty timeout impl

* explicit _completionTasks, reverting BA to use the timeout

* input/output copies, works with AUTO and demo now

* accommodate the config per device-id, after rebase to the latest master

* allowing the auto-batching only with tput hint to let more conventional tests pass

* fix the premature timeout restarting via waiting for batch1 requests completion

* moved the batched request starting (along with input copies) to the dedicated thread

* [IE CLDNN] Disable bs_fs_yx_bsv16_fsv16 format for int8 convolution

* code style

* increasing the timeout to test the ssd_* models perf (timeout?) issues

* reducing the amount of output in BA to avoid bloating the logs in experiments

* more aggressive batching for experiments, not limited to 32 and also 4 as a min

* more accurate timeout debugging info

* getting the reqs limitation from the plugin SetConfig as well

* refactor the reshape logic a bit to accommodate CPU for batching, also added remote context

* let the benchmark_app consume specific batch values for the auto-batching such as BATCH:GPU(4)

* auto-batching functional test (with results check vs ref) and GPU instance for that

* fixed arithmetic on blob ptrs

* clang

* handling possible batched network failure

* BATCH as the constants device name in test

* ENABLE_BATCH

* func tests for CPU, also DetectionOutput hetero tests (CPU and GPU)

* DetectionOutput hetero test for the CPU

* reenabling the Auto-Batching in the AUTO

* auto-batching device enabled in the test

* fixed the DO test

* improve the loading loop logic

* brushed the config keys

* allow hetero code-path for explicit device name like BATCH:GPU(4), used in the hetero code-path tests

* fix the test after refactoring

* clang

* moving ThreadSafeQueue to the ie_parallel, as it is re-used in the AUTO/MULTI and BATCH now

* auto-batching hetero test (subgraph with DetectionOutput)

* fixed minor changes that were result of experiments with impl

* code-style

* brushing, disabling CPU's HETERO tests until planned activity for 22.2

* removing home-baked MAX_BATCH_SIZE and switching to the official impl by the GPU team

* remote blobs tests for the auto-batching (old API)

* brushed names a bit

* CreateContext and LoadNetwork with context for the Auto-Batching plus remote-blobs tests

* fixed the ieUnitTests with adding CreateContext stub to the MockICore

* clang

* improved remote-blobs tests

* revert back the BA from experiments with AB + device_use_mem

* conformance tests for BATCH, also batch size 1 is default for BATCH:DEVICE

* remote blobs 2.0 tests, issue with context having the orig device name

* debugging DG1 perf drop (presumably due to not fitting the device mem)

* disabling WA with batch/=2 for excessive mem footprint, leaving only streams 2

* remote blobs 2.0 tests for different tensor sharing types

* converting assert to throw to accommodate the legacy API where lock() could still be called

* revert the timeout back to avoid mixing the studies, fixed the footprint calc

* reverting to estimating the max batch by extrapolating from batch1 size

* more conservative footprint estimation (with batch1), graceful batch 1 handling without duplication

* even graceful batch 1 handling without duplication

* WA for MAX_BATCH_SIZE failure, removing batch4 as a min for the auto-batching

* AutoBatchPlugin -> ov_auto_batch_plugin

* WA for gcc 4.8

* clang

* fix misprint

* fixed errors resulting from the recent OV Variant to Any transition

* skip auto-batching for already-batched networks

* AUTO_BATCH_TIMEOUT and tests

* GPU-specific L3

* switched to pure config, also improved ALLOW_AUTO_BATCHING config key handling logic

* debugging device info

* enabling the config tests for the GPU and fixing the Auto-batching tests to pass

* making the default cache size (when the driver is not recognized) more aggressive, to accommodate recent HW with old drivers

* skip auto-batching for RNNs and alikes (e.g. single CHW input)

* fixed fallback to batch1 and moved the HETERO path under condition to avoid bloating

* brushing

* Auto plugin GetMetric support gpu auto-batch

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add test case

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add comments on test

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* brushing the vars names, also adding the exception handling

* disabling the auto-batching for the networks with non-batched outputs and faster-rcnn and alikes (CVS-74085) to minimize the # of failures

* add try catch

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* brushing the code changed in the GPU plugin

* Auto-Batch requests tests

* brushed variables a bit (ref)

* cleaned debug output from the ie_core

* cleaned cmake for the Auto-Batch

* removed batchN estimation from batch1

* cleaned out debug printf

* comments, cleanup

* WA the mock test errors introduced with merging the https://github.com/myshevts/openvino/pull/13

* Adding back removed batchN estimation from batch1 to debug degradations on DG1 (resulted from too optimistic MAX_BATCH_SIZE?). This partially reverts commit e8f1738ac1.

* brushing ie_core.cpp

* fix 32bit compilation

* Code review: ENABLE_AUTO_BATCH

* consolidate the auto-batching logic in ie_core.cpp into a single ApplyAutoBatching

* renamed/brushed the OPTIMAL_BATCH (now with _SIZE) and mimics the MAX_BATCH_SIZE wrt MODEL_PTR

* default value for the OPTIMAL_BATCH_SIZE

* clang

* accommodate new func tests location

* fix shuffle of headers after clang + copyrights

* fixed misprint made during code refactoring

* moving the common thread-safe containers (like ThreadSafeQueue) to the dedicated dev_api header

* switch from the device name to the OPTIMAL_BATCH_SIZE metric presence as a condition to consider Auto-Batching

* switching from the unsafe size() and minimizing time under lock

* code style

* brushed the ApplyAutoBatching

* brushed the metric/config names and descriptions

* completed the core integration tests for the auto-batching

* ExecGraphInfo and check for incorrect cfg

* removed explicit dependencies from cmake file of the plugin

* disabling Auto-Batching through the tput hint (to preserve the current product default), only explicit like BATCH:GPU used in the tests

Co-authored-by: Roman Lyamin <roman.lyamin@intel.com>
Co-authored-by: Hu, Yuan2 <yuan2.hu@intel.com>
2021-12-24 12:55:22 +03:00
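
As the notes above describe, the explicit entry point for the new plugin is a virtual BATCH device with the batch size in parentheses (e.g. BATCH:GPU(4), also consumable by benchmark_app). A minimal sketch of compiling through it with the 2.0 Python API; the model path is a placeholder, and the exact config plumbing (AUTO_BATCH_TIMEOUT, ALLOW_AUTO_BATCHING) is plugin-side:

    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")   # placeholder path
    # Explicit auto-batching: a virtual device that wraps GPU and collects
    # incoming requests into batches of 4 (underfilled batches are flushed on timeout).
    compiled_model = core.compile_model(model, "BATCH:GPU(4)")
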
Liubov Talamanova
bc5da8d522
[POT] Handle exception (#9405) 2021-12-24 12:34:23 +03:00
Eugeny Volosenkov
5da7a1119c
Fix ChangeOutputTypeAttributes and CenterNet model conversion (#9230)
* fix fp16 issue

* fix comments

* add test for scalar case

* fix prev commit

* fix test

* revert to size
2021-12-24 11:43:07 +03:00
Yegor Kruglov
bd2880812f
FifoQueueDequeue replacer (#8891)
* added_replacer

* updated comments

* move cut to fifo_replacer

* extend shape serializer for parameter node

* warning message and docstrings

* docs update

* doc fix
2021-12-24 11:38:21 +03:00
Anton Romanov
6b8cfac82c
Refactor install wheels on azure (#9394) 2021-12-24 11:37:05 +03:00
okhovan
31b6b034bc
[GPU] MaxPool-8 (#9064) 2021-12-24 11:18:58 +03:00
Nikolay Tyukaev
da20993272
add master version (#9408)
* add master version

* fix

* fixes
2021-12-24 11:15:17 +03:00