* Squashed commit of previous work
* Fix mock tests
* clang
* Fix rebase errors
* remove unnecessary changes
* One more finding
* Copy ov::Model runtime info as well
* Fix review comments
* Commit missing file
* Copy m_shared_object when cloning model
* Removed copy_shared_objects and used clone_model(model, NodeMap) as a friend of ov::Model
* Added OPENVINO_API to forward declaration
* add OPENVINO_API to friend function declaration
* Update setupvars.bat with CMAKE_BUILD_TYPE
setupvars.bat now sets paths depending on CMAKE_BUILD_TYPE; RelWithDebInfo didn't work correctly before.
* Check binaries path before patching setupvars
* Fix lost semicolon in setupvars.bat
* Shape as value propagation in i32
* Comments addressed
* code style
* Modifies test to cover both Numpy and Bidirectional broadcasting
* MYR dynamic tests: made cases truly dynamic. Improved shape inference revealed that the test cases were actually static.
* Deleting static shape test cases
* Fixes in the infer function of MO operation Select.
* Fixes in the nGraph transformation SharedShapeOf.
* Deleted commented code.
* Added more tests for the infer function of the MO operation Select.
* Started to write tests for the transformation SharedShapeOf.
* Added more tests.
* Now the transformation can correctly process a mix of opset1::ShapeOf and opset8::ShapeOf.
* Small change.
* Used opset1 and opset3 instead of opset1 and opset8.
* Used get_output_element_type(0) instead of checking the version of ShapeOf.
* Remove some legacy targets
* Replace some targets
* Removed inference_engine_plugin_api dependency
* Minor comment for developer config
* Fixed include paths
* Small fixes for static build
* Try to fix build pyopenvino
* Fixed comments
* Try to fix build
* Include OpenVINODeveloperPackage inside InferenceEngineDeveloperPackageConfig
* Try to fix GAPI tests
* Implement the proposal and experimental_detectron_generate_proposals
* Implement the proposal shape infer
* Add ROI_Align op shape infer implementation.
* Fix building issue
* Fix bug.
* Update test cases.
* Add test cases for the OPs
* Apply the CI coding style check.
* Move the shape_infer API to the new folder.
* Update some fixes.
* Applied review comments
* Move the shape infer tests into new folder.
* Apply review comments.
* Fix missing header when merging with master
* Fix incomprehensible error message during layout conversion when layout rank doesn't match with shape rank
* Stash
* stash
* Memcpy implementation
Added tests
* Revert "Fix incomprehensible error message during layout conversion when layout rank doesn't match with shape rank"
This reverts commit 37064741b2.
* Fix clang-format and remove redundant headers
* Covered "cached" case (+ tested on Myriad)
* Apply review comments
Introduced 'applyBatchedBlob' function which allows overriding 'memcpy' at inference time
* clang-format fix
* Added dynamic shape case
* - Review comments
- Deep copy of parameters/results for caching from cnnNetwork. Deep copy logic is moved to Utils
- Caching Tests: return correct inputs/outputs map after ImportNetwork mock call
* Reworked according to discussion
Also introduced 'SetBlobsImpl', which throws a 'Not implemented' exception by default.
Template plugin updates internal '_batched_inputs' map
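The two commits above describe a common pattern: a base implementation that throws 'Not implemented' by default, with a plugin override that records batched inputs for later use. A minimal sketch of that pattern (the class and member names here are illustrative stand-ins, not the actual OpenVINO signatures):

```cpp
#include <map>
#include <stdexcept>
#include <string>
#include <vector>

// Hypothetical stand-in for a plugin blob type.
struct Blob {};

// Base infer request: SetBlobsImpl throws unless a plugin overrides it.
class InferRequestBase {
public:
    virtual ~InferRequestBase() = default;
    virtual void SetBlobsImpl(const std::string& name, const std::vector<Blob>& blobs) {
        (void)name;
        (void)blobs;
        throw std::runtime_error("Not implemented");
    }
};

// Plugin that supports batched blobs: stores them in a '_batched_inputs'-style map
// so they can be consumed at inference time.
class TemplateInferRequest : public InferRequestBase {
public:
    void SetBlobsImpl(const std::string& name, const std::vector<Blob>& blobs) override {
        _batched_inputs[name] = blobs;  // remember batched inputs for this input name
    }
    std::map<std::string, std::vector<Blob>> _batched_inputs;
};
```

A plugin that cannot handle batched blobs simply inherits the default and callers get the 'Not implemented' exception.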
* Updated according to moved tests
* don't support 'memcpy' for ROI tensors
* Fix caching tests
* Just to retrigger CI
* Correct offset padding (no test update, however, as the current implementation will not hit this path due to other checks)
* Fix clang-format
* Applied review comments
* Added check that 'get_tensor' throws if set_tensors/set_input_tensors is used
* Fix review comments - part 1
* Fix caching tests - mock implementation becomes more complicated
Cached mock model shall identify its inputs/outputs; otherwise core will assert at the SetExeNetworkInfo stage
* More comment fixes
* More comments fixes
* More cleanup
* One more style comment fix
* typo fix
* Try fix caching windows tests
* Blind attempt to fix Ubuntu20 CI
+ Modified the way padding is added in prepare_padding
+ Changed the assertion condition for oneDNN padding
Signed-off-by: Min, Byungil <byungil.min@intel.com>