Commit Graph

6234 Commits

Author SHA1 Message Date
Mikhail Nosov
9cc4504b78
Removed OV_FRONTEND_PATH from 'setupvars' scripts (#9396)
* Removed OV_FRONTEND_PATH from 'setupvars' scripts

* Update linux.yml

* Change mock frontend's install dir for static builds

* revert linux.yml
2021-12-24 13:01:51 +03:00
Maxim Shevtsov
49b5e5728b
Auto Batching impl (#7883)
* auto-batching POC squashed (all commits from auto-batch-2021.3 branch)

(cherry picked from commit d7742f2c747bc514a126cc9a4d5b99f0ff5cbbc7)

* applying/accommodating the API changes after rebase to the master

* replaying modified version of actual batch selection

* early experiments with model mem footprint

* changes from rebasing to the latest master

* experimenting with DG1 on the batch size selection, also collecting the mem footprint

* WIP: moving the auto-batching to the icore to let the MULTI/AUTO support that, ALLOW_AUTO_BATCHING as a conventional config key. still fails hot device swap

* quick-n-dirty batch footprint vs device total mem

* code style

* testing which models perform badly due to kernels and NOT (batched) footprint

* stub pipeline task to communicate the readiness rather than promise/future

* quick-n-dirty timeout impl

* explicit _completionTasks, reverting BA to use the timeout

* inputs outputs copies, works with AUTO and demo now

* accommodate the config per device-id, after rebase to the latest master

* allowing the auto-batching only with tput hint to let more conventional tests pass

* fix the premature timeout restarting via waiting for batch1 requests completion

* moved the batched request starting (along with input copies) to the dedicated thread

* [IE CLDNN] Disable bs_fs_yx_bsv16_fsv16 format for int8 convolution

* code style

* increasing the timeout to test the ssd_* models perf (timeout?) issues

* reducing number of output stuff in BA to avoid bloating the logs in experiments

* more aggressive batching for experiments, not limited to 32 and also 4 as a min

* more accurate timeout debugging info

* getting the reqs limitation from the plugin SetConfig as well

* refactor the reshape logic a bit to accommodate CPU for batching, also added remote context

* let the benchmark_app consume specific batch values for the auto-batching such as BATCH:GPU(4)

* auto-batching functional test (with results check vs ref) and GPU instance for that

* fixed arithmetic on blobs ptrs

* clang

* handling possible batched network failure

* BATCH as the constant device name in test

* ENABLE_BATCH

* func tests for CPU, also DetectionOutput hetero tests (CPU and GPU)

* DetectionOutput hetero test for the CPU

* reenabling the Auto-Batching in the AUTO

* auto-batching device enabled in the test

* fixed the DO test

* improve the loading loop logic

* brushed the config keys

* allow hetero code-path for explicit device name like BATCH:GPU(4), used in the hetero code-path tests

* fix the test after refactoring

* clang

* moving ThreadSafeQueue to the ie_parallel, as it is re-used in the AUTO/MULTI and BATCH now

* auto-batching hetero test (subgraph with DetectionOutput)

* fixed minor changes that were result of experiments with impl

* code-style

* brushing, disabling CPU's HETERO tests until planned activity for 22.2

* removing home-baked MAX_BATCH_SIZE and switching to the official impl by GPU team

* remote blobs tests for the auto-batching (old API)

* brushed names a bit

* CreateContext and LoadNetwork with context for the Auto-Batching plus remote-blobs tests

* fixed the ieUnitTests with adding CreateContext stub to the MockICore

* clang

* improved remote-blobs tests

* revert back the BA from experiments with AB + device_use_mem

* conformance tests for BATCH, also batch size 1 is default for BATCH:DEVICE

* remote blobs 2.0 tests, issue with context having the orig device name

* debugging DG1 perf drop (presumably due to non-fitting the device-mem)

* disabling WA with batch/=2 for excessive mem footprint, leaving only streams 2

* remote blobs 2.0 tests for different tensor sharing types

* converting assert to throw to accommodate the legacy API where lock() could be called

* revert the timeout back to avoid mixing the studies, fixed the footprint calc

* reverting to estimating the max batch by extrapolating from batch1 size

* more conservative footprint estimation (with batch1), graceful batch 1 handling without duplication

* even more graceful batch 1 handling without duplication

* WA for MAX_BATCH_SIZE failure, removing batch4 as a min for the auto-batching

* AutoBatchPlugin -> ov_auto_batch_plugin

* WA for gcc 4.8

* clang

* fix misprint

* fixed errors resulted from recent OV's Variant to Any transition

* skip auto-batching for already-batched networks

* AUTO_BATCH_TIMEOUT and tests

* GPU-specific L3

* switched to pure config, also improved ALLOW_AUTO_BATCHING config key handling logic

* debugging device info

* enabling the config tests for the GPU and fixing the Auto-batching tests to pass

* making the default (when not recognized the driver) cache size more aggressive, to accommodate recent HW with old drivers

* skip auto-batching for RNNs and the like (e.g. single CHW input)

* fixed fallback to the batch1 and moved HETERO path under condition to avoid bloating

* brushing

* Auto plugin GetMetric support gpu auto-batch

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add test case

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add comments on test

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* brushing the var names, also adding the exception handling

* disabling the auto-batching for the networks with non-batched outputs and faster-rcnn and the like (CVS-74085) to minimize the # of failures

* add try catch

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* brushing the code changed in the GPU plugin

* Auto-Batch requests tests

* brushed variables a bit (ref)

* cleaned debug output from the ie_core

* cleaned cmake for the Auto-Batch

* removed batchN estimation from batch1

* cleaned from debug printf

* comments, cleanup

* WA the mock test errors introduced with merging the https://github.com/myshevts/openvino/pull/13

* Adding back removed batchN estimation from batch1 to debug degradations on DG1 (resulting from a too optimistic MAX_BATCH_SIZE?). This partially reverts commit e8f1738ac1.

* brushing ie_core.cpp

* fix 32bit compilation

* Code review: ENABLE_AUTO_BATCH

* consolidate the auto-batching logic in ie_core.cpp into a single ApplyAutoBatching

* renamed/brushed the OPTIMAL_BATCH (now with _SIZE) and mimics the MAX_BATCH_SIZE wrt MODEL_PTR

* default value for the OPTIMAL_BATCH_SIZE

* clang

* accommodate new func tests location

* fix shuffle of headers after clang + copyrights

* fixed misprint made during code refactoring

* moving the common thread-safe containers (like ThreadSafeQueue) to the dedicated dev_api header

* switch from the device name to the OPTIMAL_BATCH_SIZE metric presence as a condition to consider Auto-Batching

* switching from the unsafe size() and minimizing time under lock

* code style

* brushed the ApplyAutoBatching

* brushed the metric/config names and descriptions

* completed the core integration tests for the auto-batching

* ExecGraphInfo and check for incorrect cfg

* removed explicit dependencies from cmake file of the plugin

* disabling Auto-Batching through the tput hint (to preserve the current product default), only explicit like BATCH:GPU used in the tests

Co-authored-by: Roman Lyamin <roman.lyamin@intel.com>
Co-authored-by: Hu, Yuan2 <yuan2.hu@intel.com>
2021-12-24 12:55:22 +03:00
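The explicit auto-batch device naming used throughout this commit (e.g. `BATCH:GPU(4)` for benchmark_app, with batch size 1 as the default for plain `BATCH:DEVICE`) can be sketched as a small parser. `parse_batch_device` is a hypothetical helper for illustration only, not OpenVINO API:

```python
import re

def parse_batch_device(device_name: str):
    """Split an explicit auto-batch device string such as 'BATCH:GPU(4)'
    into the underlying device and the requested batch size.
    Per the commit notes, batch size 1 is the default for 'BATCH:DEVICE'."""
    match = re.fullmatch(r"BATCH:([A-Za-z_]+\d*)(?:\((\d+)\))?", device_name)
    if not match:
        raise ValueError(f"not an auto-batch device name: {device_name!r}")
    device, batch = match.groups()
    return device, int(batch) if batch else 1

print(parse_batch_device("BATCH:GPU(4)"))  # ('GPU', 4)
print(parse_batch_device("BATCH:CPU"))     # ('CPU', 1)
```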
Liubov Talamanova
bc5da8d522
[POT] Handle exception (#9405) 2021-12-24 12:34:23 +03:00
Eugeny Volosenkov
5da7a1119c
Fix ChangeOutputTypeAttributes and CenterNet model conversion (#9230)
* fix fp16 issue

* fix comments

* add test for scalar case

* fix prev commit

* fix test

* revert to size
2021-12-24 11:43:07 +03:00
Yegor Kruglov
bd2880812f
FifoQueueDequeue replacer (#8891)
* added_replacer

* updated comments

* move cut to fifo_replacer

* extend shape serializer for parametr node

* warning message and docstrings

* docs update

* doc fix
2021-12-24 11:38:21 +03:00
Anton Romanov
6b8cfac82c
Refactor install wheels on azure (#9394) 2021-12-24 11:37:05 +03:00
okhovan
31b6b034bc
[GPU] MaxPool-8 (#9064) 2021-12-24 11:18:58 +03:00
Nikolay Tyukaev
da20993272
add master version (#9408)
* add master version

* fix

* fixes
2021-12-24 11:15:17 +03:00
Alexey Varyzgin
a40b5bf15e
[CPU][INT8][Intel OMZ / Public] Third dimension issue in FuseConvolutionAndZeroPoints (#9385) 2021-12-24 10:13:30 +03:00
Mikhail Nosov
7bfbb46d73
[FE API]: Shared object (SO) holder to frontend's library for FrontEnd/InputModel/ov::Model (#9308)
* Squashed commit of previous work

* Fix mock tests

* clang

* Fix rebase errors

* remove unnecessary changes

* One more finding

* Copy ov::Model runtime info as well

* Fix review comments

* Commit missing file

* Copy m_shared_object when cloning model

* removed copy_shared_objects and use clone_model(model, NodeMap) as a friend for ov::Model

* Added OPENVINO_API to forward declaration

* add OPENVINO_API to friend function declaration
2021-12-24 02:56:45 +03:00
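The shared-object-holder idea in the commit above can be sketched language-agnostically: each object produced by a frontend keeps a reference to the frontend's loaded library, so the library cannot be unloaded while any such object is alive. A minimal Python analogy (class names here are illustrative stand-ins, not the actual OpenVINO types):

```python
import weakref

class FrontEndLibrary:
    """Stand-in for a loaded frontend shared object (.so)."""
    def __init__(self, name):
        self.name = name

class Model:
    """A model that holds the frontend's shared object for its own lifetime,
    so the library outlives every object created from it."""
    def __init__(self, shared_object):
        self._shared_object = shared_object  # the SO holder reference

lib = FrontEndLibrary("onnx_frontend")
model = Model(lib)
lib_ref = weakref.ref(lib)
del lib                       # drop the caller's reference
assert lib_ref() is not None  # the model's holder keeps the library alive
del model
assert lib_ref() is None      # released only after the last user is gone
```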
Evgenya Stepyreva
65c0f2daa7
Update Convert_Mask_RCNN.md (#9406) 2021-12-23 16:06:44 +00:00
Nikolay Tyukaev
9e91cf5c08
fix doc tests (#9400) 2021-12-23 18:15:09 +03:00
Andrey Somsikov
f91999b1bc
Update setupvars.bat with CMAKE_BUILD_TYPE (#9327)
* Update setupvars.bat with CMAKE_BUILD_TYPE

 setupvars.bat paths are set depending on CMAKE_BUILD_TYPE.
 RelWithDebInfo didn't work correctly.

* Check binaries path before patching setupvars

* Fix losing semicolon in setupvars.bat
2021-12-23 18:14:27 +03:00
Sergey Lyubimtsev
557d20e2e2
Update install guides for wheels (#9390)
* Update install guides for wheels

* remove extra comma

* License update
2021-12-23 18:12:23 +03:00
Gleb Kazantaev
c9704e7ed4
Eltwise->Transpose sinking for preprocessing ops (#9383) 2021-12-23 17:28:52 +03:00
Gleb Kazantaev
8740438a3b
Fix SmartReshape TransposeMatMul pass (#9397) 2021-12-23 16:39:41 +03:00
Evgenya Stepyreva
3a61afa2d3
Shape as i32 value (#9343)
* Shape as value propagation in i32

* Comments addressed

* code style

* Modifies test to cover both Numpy and Bidirectional broadcasting

* MYR dynamic tests: made cases truly dynamic. Due to better shape inference it turned out that test cases were actually static.

* Deleting static shape test cases
2021-12-23 16:17:34 +03:00
Katarzyna Mitrus
3ee5bcaf4d
Return (-1) if max_int stop/start provided (#9386) 2021-12-23 16:13:56 +03:00
serhii-pavlovskyi-altran
8315fe0e19
[GPU] Range v4 partial implementation (#8907) 2021-12-23 15:49:43 +03:00
Andrei Molotkov
5346a5226c
[GPU] Mark all nodes with dynamic shape as unsupported (#9372) 2021-12-23 15:04:26 +03:00
Ilya Churaev
5e1d241c11
Renamed template plugin and tests (#9389)
* Renamed template plugin and plugin's tests

* Renamed template_extension
2021-12-23 14:59:24 +03:00
Andrey Sapozhnikov
da67ba135c
[GNA] Remove GNA Library versioning (#9319) 2021-12-23 14:32:58 +03:00
Ilya Churaev
d0cdf14c47
Fixed address sanitizer (#9364) 2021-12-23 14:02:44 +03:00
Artyom Anokhov
a94a6a774e
DeploymentManager::configs: Removed OpenCV component with their python folder. (#9367)
DeploymentManager::main: Refactored code with python-black module. Added compressing for Win-archives. Added shortcuts for options.
2021-12-23 13:48:17 +03:00
Evgenya Stepyreva
41ace9d4e6
Use opsets in sample of function creation (#7792) 2021-12-23 13:41:27 +03:00
Vitaliy Urusovskij
516272aeee
Fix stack-buffer-overflow in ScatterElementsUpdateTest (#9387) 2021-12-23 10:12:45 +00:00
Irina Efode
7618fc8752
[IE TESTS][CONFORMANCE] Update conformance ReadMe files (#9374)
* [IE TESTS][CONFORMANCE] Update conformance ReadMe files

* Fix links
2021-12-23 13:00:41 +03:00
Irina Efode
f316801ccd
[IE TESTS] Move remote tests to Behavior/ov_infer_request (#9233)
* [IE TESTS] Move remote tests to Behavior/ov_infer_request

* Move instance

* Apply comments
2021-12-23 12:51:01 +03:00
Ilya Churaev
b241d5227e
Moved compile_tool to new API (#8501)
* Moved compile_tool to new API

* Fixed comments and added new tests

* Fixed comments

* Fixed build

* Fixed comments

* Fixed unit tests

* Fixed compilation

* Fixed legacy message

* Fixed readme

* Fixed comments

* FIxed build

* Fixed build

* Fixed tests

* Applied comments

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2021-12-23 12:45:02 +03:00
Ilya Churaev
9b04cc0a10
Extend reshape() method for index and for one input (#9369)
* Added reshape by index interface

* Added reshape for function with one input
2021-12-23 12:40:11 +03:00
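The extended `reshape()` behavior described above (reshape by input index, and a shorthand for single-input models) can be sketched on plain lists. The signature below is illustrative, not the actual ov::Model API:

```python
def reshape(model_input_shapes, new_shape, index=None):
    """Sketch of the extended reshape(): for a model with exactly one input,
    no index is needed; otherwise the input to reshape is selected by its
    position. Shapes are plain lists here; the real API works on ov::Model."""
    if index is None:
        if len(model_input_shapes) != 1:
            raise ValueError("an input index is required for a multi-input model")
        index = 0
    model_input_shapes[index] = list(new_shape)
    return model_input_shapes

# Single-input shorthand: the only input is reshaped without naming it.
print(reshape([[1, 3, 224, 224]], [2, 3, 224, 224]))  # [[2, 3, 224, 224]]
```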
Elizaveta Lobanova
c4ce6c5430
[IE SAMPLE] Fixed inputs and outputs element type initialization (#9375) 2021-12-23 12:06:23 +03:00
Anton Romanov
882f30cd61
Added install wheel on azure for samples tests (#9179)
* Added install wheel on azure for samples tests

* minor change
2021-12-23 11:53:17 +03:00
Vladislav Volkov
60a11a6348
[CPU] Renamed CPU plugin to ov_intel_cpu_plugin (#9342) 2021-12-23 11:49:25 +03:00
Paul Youngsoo Ahn
bbceae3bc3
[GPU] Add INT32/UINT32 to available input data types when load type is aligned in GetJitLoad (#9300)
- Modify fusibility checking to allow sub/div eltwise fusing for other primitives
- Modify dump checking code to use node name in exec graph
2021-12-23 17:48:34 +09:00
Ilya Lavrenov
4eea535e78
Includes in frontends (#9378)
* Used full paths in public includes

* Fixed install of ONNX include
2021-12-23 11:37:06 +03:00
Szymon Durawa
1fbfd426f0
Add FE add_output with tests. (#7644) 2021-12-23 09:04:37 +01:00
Mingyu Kim
16490959e6
[GPU] Use double blocked format if batch >= 16 (#9357) 2021-12-23 16:58:33 +09:00
Taylor Yeonbok Lee
3b03728807
[GPU] Fix get_estimated_device_mem_usage to handle mutable_data (#9297) 2021-12-23 16:20:04 +09:00
Sergey Shlyapnikov
507a498269
[GPU] Add OneDNN post ops description in graph dump mode (#9371) 2021-12-23 09:58:35 +03:00
Vladimir Gavrilov
20ee7fd242
Fix MO and nGraph to support the model context_rcnn_resnet101_snapshot_serengeti (#9255)
* Fixes in the infer function of MO operation Select.

* Fixes in the nGraph transformation SharedShapeOf.

* Deleted commented code.

* Added more tests for the infer function of the MO operation Select.

* Started to write tests for the transformation SharedShapeOf.

* Added more tests.

* Now the transformation can correctly process a mix of opset1::ShapeOf and opset8::ShapeOf.

* Small change.

* Used opset1 and opset3 instead of opset1 and opset8.

* Used get_output_element_type(0) instead of checking the version of ShapeOf.
2021-12-23 09:44:47 +03:00
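The SharedShapeOf transformation mentioned above collapses several ShapeOf nodes reading the same source tensor into one, re-pointing the remaining consumers. A toy sketch of that deduplication on a node map (the real pass walks an nGraph/ov graph; names here are illustrative):

```python
def shared_shape_of(shape_of_nodes):
    """Sketch of the SharedShapeOf idea: given a mapping of ShapeOf node ids
    to the source tensor they read, return which nodes should be replaced
    by the first ShapeOf seen for the same source."""
    kept = {}          # source tensor -> first ShapeOf node kept for it
    replacements = {}  # redundant node -> the node that replaces it
    for node_id, source in shape_of_nodes.items():
        if source in kept:
            replacements[node_id] = kept[source]  # reuse the first ShapeOf
        else:
            kept[source] = node_id
    return replacements

print(shared_shape_of({"s1": "A", "s2": "A", "s3": "B"}))  # {'s2': 's1'}
```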
Ilya Churaev
42350a705e
Remove legacy targets (#9333)
* Remove some legacy targets

* Replace some targets

* Removed inference_engine_plugin_api dependency

* Minor comment for developer config

* Fixed include paths

* Small fixes for static build

* Try to fix build pyopenvino

* Fixed comments

* Try to fix build

* Include OpenVINODeveloperPackage inside InferenceEngineDeveloperPackageConfig

* Try to fix GAPI tests
2021-12-23 08:16:23 +03:00
Ilya Churaev
16b39d15d0
Added BWD macro for all new ops (#9356) 2021-12-23 07:19:34 +03:00
Luwei Zhou
3d244a41ab
[shape_infer] Implement shape inference of Roll, ROIAlign, Proposal (#8610)
* Implement the proposal and experimental_detectron_generate_proposals

* Implement the proposal shape infer

* Add ROI_Align OP shape infer implement.

* Fix building issue

* Fix bug.

* Update test cases.

* Add test cases for the OPs

* Apply the CI coding style check.

* Move the shape_infer API to the new folder.

* Update some fix.

* Applied review comments

* Move the shape infer tests into new folder.

* Apply review comments.

* Fix missing header when merging with master
2021-12-23 03:02:15 +00:00
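Two of the shape-inference rules implemented in the commit above are simple enough to sketch: Roll never changes the tensor shape, and ROIAlign produces `[num_rois, channels, pooled_h, pooled_w]` (per the opset spec). A hedged toy version on plain lists, not the actual shape_infer API:

```python
def roll_shape_infer(data_shape):
    # Roll permutes elements along axes but never changes the shape,
    # so the output shape equals the input data shape.
    return list(data_shape)

def roi_align_shape_infer(data_shape, rois_shape, pooled_h, pooled_w):
    # ROIAlign output: [num_rois, channels, pooled_h, pooled_w], where
    # num_rois comes from the ROIs input and channels from the feature map.
    num_rois = rois_shape[0]
    channels = data_shape[1]
    return [num_rois, channels, pooled_h, pooled_w]

print(roll_shape_infer([2, 3, 4]))                            # [2, 3, 4]
print(roi_align_shape_infer([1, 256, 64, 64], [100, 4], 7, 7))  # [100, 256, 7, 7]
```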
Mikhail Nosov
8f908db61e
[OV20] Set tensors infer req (#9158)
* Fix incomprehensible error message during layout conversion when layout rank doesn't match with shape rank

* Stash

* stash

* Memcpy implementation
Added tests

* Revert "Fix incomprehensible error message during layout conversion when layout rank doesn't match with shape rank"

This reverts commit 37064741b2.

* Fix clang-format and remove redundant headers

* Covered "cached" case (+ tested on Myriad)

* Apply review comments
Introduced 'applyBatchedBlob' function which allows overriding 'memcpy' at inference time

* clang-format fix

* Added dynamic shape case

* - Review comments
- Deep copy of parameters/results for caching from cnnNetwork. Deep copy logic is moved to Utils
- Caching Tests: return correct inputs/outputs map after ImportNetwork mock call

* Reworked according to discussion

Also introduced 'SetBlobsImpl' which throws 'Not implemented' exception by default.
Template plugin updates internal '_batched_inputs' map

* Updated according to moved tests

* don't support 'memcpy' for ROI tensors

* Fix caching tests

* Just to retrigger CI

* Correct offset padding (however there is no test update as current implementation will not hit here due to other checks)

* Fix clang-format

* Applied review comments

* Added check that 'get_tensor' throws if set_tensors/set_input_tensors is used

* Fix review comments - part 1

* Fix caching tests - mock implementation becomes more complicated
Cached mock model shall identify its inputs/outputs, otherwise core will assert on SetExeNetworkInfo stage

* More comments fix

* More comments fixes

* More cleanup

* And more style comment

* typo fix

* Try fix caching windows tests

* Blind attempt to fix Ubuntu20 CI
2021-12-23 01:19:28 +03:00
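The 'memcpy' path this commit implements for set_input_tensors — N user tensors of batch 1 copied into one contiguous batched tensor before inference — can be sketched with tensors modeled as flat lists. The helper name is hypothetical; a real plugin copies raw bytes into a preallocated blob:

```python
def gather_batched_input(batch_tensors):
    """Sketch of the 'memcpy' fallback: each batch-1 tensor supplied by the
    user is copied, in order, into a single contiguous batched buffer."""
    batched = []
    for tensor in batch_tensors:
        batched.extend(tensor)  # copy one batch-1 tensor into the joint buffer
    return batched

# Two batch-1 tensors of 2 elements each become one batch-2 tensor.
print(gather_batched_input([[1.0, 2.0], [3.0, 4.0]]))  # [1.0, 2.0, 3.0, 4.0]
```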
Mikhail Ryzhov
7e0bf0dad5
[GNA] Precision convert support (#9282)
* Rebase master

* [gna] Fixed export/import precision

* Revert "[gna] Fixed export/import precision"

This reverts commit d381a2e216.

* Rebase master

* [gna] Fixed export/import precision

* Revert "[gna] Fixed export/import precision"

* Removed convert nodes

* Added convert transformations

* Output casting

* Fixed convert function

* Added functional tests

* Update inference-engine/tests/functional/plugin/gna/preprocess_tests/precision_convert.cpp

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>
2021-12-22 20:01:06 +03:00
Liubov Talamanova
7e85ada7e4
Handle exception with version (#9376) 2021-12-22 16:45:08 +00:00
Zhang Yi
529ab2b099
Yi3/shape infer 2nd batch (#8420)
* [ShapeInfer]shape infer 2nd batch

* [ShapeInfer]Impl Reverse Sequence

* [ShapeInfer]Fix typo

* [ShapeInfer]fix error

* [ShapeInfer]remove useless code

* [ShapeInfer]fix code style

* [ShapeInfer]enable shape_infer in mkldnn

* [ShapeInfer]use shape_inference in tests

* [ShapeInfer]add partialshape for tests

* [ShapeInfer]revise test cases

* [ShapeInfer]fix review comments

* [ShapeInfer]remove debug logs

* [ShapeInfer]fix ci build

* [ShapeInfer]Fix errors

* [ShapeInfer]fix build

* [ShapeInfer]fix bug

* [ShapeInfer]remove useless check

* [ShapeInfer]Fix vpu tests

* [ShapeInfer]Fix extract_image

* [ShapeInfer]apply reviews
2021-12-22 18:14:37 +03:00
Milana Shhanukova
eb90272331
[POT] Remove gitlab mention (#9079)
* change in installation

* develop mode

* change in install guide

* style change

* change declare
2021-12-22 13:45:37 +00:00
Ivan Tikhonov
fc689699eb
Delete ngraph doxygen file (#9365) 2021-12-22 16:23:20 +03:00
Artur Kulikowski
186f601699
For the ONNX frontend find nodes by tensor name (#9345) 2021-12-22 13:14:48 +01:00