* MULTI plugin - override loading a network from file
When caching is enabled, the MULTI plugin checks all devices:
- For devices with caching support, call LoadNetwork(modelPath, ...)
- For the others, call ReadNetwork once and then LoadNetwork(cnnNetwork) for each device
Caching unit tests are added for both cases
Additional helper methods:
- ICore::ToExecutableNetwork - converts internal ExeNetwork to ExecutableNetwork
- ICore::DeviceSupportsImportExport - checks whether a device supports import/export functionality; used by HETERO and MULTI
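The per-device dispatch described above can be sketched as a toy Python model (these are hypothetical names, not the actual OpenVINO API): caching-capable devices receive the model path directly so cached blobs can be used, while the others share one parsed network that is read at most once.

```python
# Hypothetical sketch of the MULTI caching dispatch; names are illustrative.

def load_for_multi(core, model_path, devices):
    """Return {device: executable}, choosing a load path per device."""
    parsed = None  # ReadNetwork is called at most once for non-caching devices
    result = {}
    for dev in devices:
        if core.device_supports_import_export(dev):
            # Caching-capable device: hand over the file path directly,
            # mirroring LoadNetwork(modelPath, ...)
            result[dev] = core.load_network_from_file(model_path, dev)
        else:
            if parsed is None:
                parsed = core.read_network(model_path)  # ReadNetwork once
            # mirroring LoadNetwork(cnnNetwork) for each remaining device
            result[dev] = core.load_network(parsed, dev)
    return result

class _FakeCore:
    """Minimal stand-in core used only to exercise the dispatch above."""
    def __init__(self, caching):
        self.caching = set(caching)
        self.reads = 0
    def device_supports_import_export(self, dev):
        return dev in self.caching
    def load_network_from_file(self, path, dev):
        return (dev, "from_file")
    def read_network(self, path):
        self.reads += 1
        return "parsed"
    def load_network(self, net, dev):
        return (dev, "from_network")

core = _FakeCore(caching={"GPU"})
loaded = load_for_multi(core, "model.xml", ["GPU", "CPU", "MYRIAD"])
```

The fake core lets the two paths be observed: only the non-caching devices trigger a single shared parse.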
* Updated according to review comments
* fixed sporadic failure of 'multi-device' test cases
Root cause:
Currently only one 'ExecutableNetwork' object is created per LoadNetwork call.
For multi-device testing, several threads could simultaneously call setNetworkInputs/Outputs/SetPointerToPlugin,
which caused a race condition and invalid data structures
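The race above can be modeled with a toy (non-OpenVINO) sketch: several threads configure one shared executable-network-like object, and serializing the setters with a lock (or, as the actual fix does, giving each load its own object) keeps the shared maps consistent. All names here are hypothetical.

```python
import threading

class ExecNetworkState:
    """Toy stand-in for a shared ExecutableNetwork's mutable state."""
    def __init__(self):
        self._lock = threading.Lock()
        self.inputs = {}
        self.outputs = {}

    def set_network_inputs(self, inputs):
        # Guard concurrent mutation of the shared input map.
        with self._lock:
            self.inputs.update(inputs)

    def set_network_outputs(self, outputs):
        with self._lock:
            self.outputs.update(outputs)

state = ExecNetworkState()
threads = [
    threading.Thread(target=state.set_network_inputs, args=({f"in{i}": i},))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After all threads join, every update is present exactly once.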
* Fix build issues after rebase
* Multi: Set network inputs/outputs/pointerToPlugin for load-from-file case
The overloaded function didn't call these methods, so the MULTI executable network was unusable
Added a caching test verifying that inputs/outputs are now copied from the first loaded device network
* adding conv2d decomposition
* save point
* build of conv_2d factorization succeeds
* working 2d conv decomposing transform
* added pseudo code for handling larger kernels
* fix conv splitting due to size
* active work on convolution 1xK without dilation
* validated NHWC ordered networks with convolution kernel sizes:
3x3
3x1
5x1
1x5
1x3
TODO: 2d max pooling
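The decomposition direction behind the kernel list above can be illustrated numerically: a valid KhxKw 2D convolution equals the sum of Kh separate 1xKw row convolutions, each applied at a different vertical offset. This is a plain-Python sketch of that identity, with no GNA specifics.

```python
def conv2d_valid(img, kern):
    """Reference dense valid 2D convolution (correlation form)."""
    H, W = len(img), len(img[0])
    Kh, Kw = len(kern), len(kern[0])
    out = [[0.0] * (W - Kw + 1) for _ in range(H - Kh + 1)]
    for y in range(H - Kh + 1):
        for x in range(W - Kw + 1):
            out[y][x] = sum(img[y + i][x + j] * kern[i][j]
                            for i in range(Kh) for j in range(Kw))
    return out

def conv2d_by_rows(img, kern):
    """Same result built from Kh independent 1xKw row convolutions."""
    H, W = len(img), len(img[0])
    Kh, Kw = len(kern), len(kern[0])
    out = [[0.0] * (W - Kw + 1) for _ in range(H - Kh + 1)]
    for i in range(Kh):                      # one 1xKw convolution per kernel row
        row_kern = kern[i]
        for y in range(H - Kh + 1):
            for x in range(W - Kw + 1):
                out[y][x] += sum(img[y + i][x + j] * row_kern[j]
                                 for j in range(Kw))
    return out

img = [[float(r * 5 + c) for c in range(5)] for r in range(5)]
kern = [[1.0, 0.0, -1.0], [2.0, 0.0, -2.0], [1.0, 0.0, -1.0]]  # 3x3 edge kernel
```

The two routes agree exactly on integer-valued inputs, which is the property the transform relies on.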
* removed debug printouts
* fusing max pooling/bias/activation function when the -disable_nhwc_to_nchw option is used
* code cleanup
* [GNA] Fixes for CI run
* [GNA] Add tests, fix transform
* [GNA] Fix padded2valid and conv2d decomposition coexistence
* [GNA] Temporarily disable tests due to mock call count issues
* [GNA] Split tests for different hw versions
Co-authored-by: prozen <piotr.rozen@intel.com>
* Exclude xbyak from install
* Added automatically generated InferenceEngineConfig.cmake
* Reverted a version back
* Fixed issues with target aliases
* Make TBB dependency private
* Made ie_parallel.cmake self-sufficient
* Don't expose ie_parallel.cmake to end users
* Fixed compilation with TBB
* Fixes for TBB
* Fixed vpu_graph_transformer compilation
* Fixed tests compilation
* Added install of ie_parallel.cmake
* Switched ENABLE_ALTERNATIVE_TEMP to OFF. Fixed COMPONENTS for TBB
* Fixed file name in install rules
* Added find_dependency for TBB in ie_parallel.cmake
* WA for cmake bug with PACKAGE_PREFIX_DIR
* Fixed no-deprecation to fix speech-library build
* Reverted version from 2.1.0 to 2.1
* Revert "Reverted version from 2.1.0 to 2.1"
This reverts commit 7cb5d1563c.
* Added versions to cmake
* Added versions to ie_version.hpp
* Returned custom version file back
* Added InferenceEngineConfig-version.cmake to share as well
* Disabled one more GPU test
* Added one more WA for CI
* WA for CI issue for C API
* Added InferenceEngineConfig-version.cmake to share as well
* Added version parsing from ie_version.hpp
* Revert "[CPU] Add Roll support (#5112)"
This reverts commit 5d8f209df6.
* Revert "[CPU] windows_Interpolate_fused-FQ_nearest-mode_nspc-layout_fix (#5317)"
This reverts commit 0808975a37.
* Revert "[INT8][BF16] INT8 + BF16 feature was enabled (#5059)"
This reverts commit 7d2ec02d65.
* Support for components
* No version for IEDevScripts package
* Removed IE_VS_VER_HAS_VERSION from vs_version.rc.in
* Added compatibility for 2.x old versioning
* Update SelectDevice policy in auto plugin
Signed-off-by: Zhengtian Xie <zhengtian.xie@intel.com>
* Implement limit device list for AUTO plugin
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Add tests for AUTO limit device feature
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Add gpu tests for auto-plugin
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Fix CI cpuFuncTests issue due to BATCHED_BLOB
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Override LoadNetwork(modelPath, config) in AUTO plugin
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Update SelectDevice() logic for LoadNetwork(model, config)
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Update GetNetworkPrecision logic for auto-plugin
Signed-off-by: Zhengtian Xie <zhengtian.xie@intel.com>
* Address reviewers' comments
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Add tests for AUTO:GPU,CPU case
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Update logic in GetNetworkPrecision for auto-plugin
Signed-off-by: Zhengtian Xie <zhengtian.xie@intel.com>
* Address reviewer's comment: clean and simplify code
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Fix wrong usage of convolution weight index
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Address reviewer comment: fix get network precision logic
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Fix rebase issue
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
* Fix ie_core.cpp header change
Signed-off-by: Shoujiang Ma <shoujiang.ma@intel.com>
Co-authored-by: zhengtian.xie <zhengtian.xie@intel.com>
* Fix errors in VariadicSplit layer restored from serialized IR
* Update VariadicSplit specification and error message to allow 1D tensors on 1st input
* Update spec
* Resolve comments
* Apply comments, add unit tests
* Update unit tests
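Per the VariadicSplit operation spec, one entry of split_lengths may be -1, meaning "take whatever remains along the axis". A minimal pure-Python sketch of those semantics (not OpenVINO code; a 1D list stands in for the data tensor):

```python
def variadic_split(data, split_lengths):
    """Split `data` into chunks of the given lengths; one length may be -1."""
    total = len(data)
    known = sum(l for l in split_lengths if l != -1)
    # Replace the single -1 entry with the remaining length.
    lengths = [l if l != -1 else total - known for l in split_lengths]
    out, pos = [], 0
    for l in lengths:
        out.append(data[pos:pos + l])
        pos += l
    return out

chunks = variadic_split(list(range(10)), [2, -1, 3])
```

Here the -1 entry resolves to 10 - (2 + 3) = 5 elements.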
Original source code repo: https://github.com/llohse/libnpy
SHA of original commit: d3fd88697889cefb466b647d3034c1d7b7f615ff
In the OpenVINO repo there are some modifications, so Intel's copyright notices are kept as well
* Swap inputs pass
* [GNA] Handle the Gemm layer case
* [GNA] Convert Matmul to FC
* VS tests
* Move to common optimization
* Execute Gemm like FC
* Test scale_factor calculation
* Some changes
* Working version
* [GNA] Insert transpose between convolution/pooling and reshape.
Insert copy layers after concat inputs with multiple connections to the concat.
Accept networks with an input connected to layers with different orientations if one of the input dimensions is 1.
Fix scale factor calculation for Eltwise layer.
Fixes for Gemm quantization.
* Insert transpose after Reshape and before Matmul
* Fix concat input alignment when it's the network input
* Applied review comments
Co-authored-by: Andrey Dmitriev <andrey.dmitriev@intel.com>
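A toy illustration of why a transpose must sit between convolution/pooling and reshape: the convolution output is laid out channel-last (NHWC-like), while a consumer flattening it expects NCHW element order. Flattening without the transpose scrambles the order; transposing first yields the expected layout. Pure Python, nested lists standing in for tensors.

```python
def transpose_hwc_to_chw(t):
    """Reorder an H x W x C nested list into C x H x W."""
    H, W, C = len(t), len(t[0]), len(t[0][0])
    return [[[t[h][w][c] for w in range(W)] for h in range(H)] for c in range(C)]

def flatten3(t):
    """Row-major flatten of a 3-level nested list (a 'reshape to 1D')."""
    return [v for plane in t for row in plane for v in row]

# Encode each element's coordinates in its value: h*100 + w*10 + c.
hwc = [[[h * 100 + w * 10 + c for c in range(2)] for w in range(2)]
       for h in range(2)]

nhwc_flat = flatten3(hwc)                       # flatten without transpose
nchw_flat = flatten3(transpose_hwc_to_chw(hwc)) # transpose, then flatten
```

The two flattenings visit the same values in different orders, which is exactly the mismatch the inserted Transpose layer repairs.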
* Added LoadNetwork(filename) to AUTO
* Added more files
* SoPointer can be used without loading
* Changed InferencePlugin, ICore to return internal interfaces
* Added SoPointers for InferRequest, ExecutableNetwork
* Fixed Windows
* Fixed KMB
* Fixes for KMB
* Removed dereference operator
* Play with include files
* Fixed compilation with older compilers
* Fixed comments
* Fixed win build
* Try to fix Windows
* Try to fix Windows 2
* Fixed windows
* Fixed windows
* Removed SOPointer as a base class
* Reverted back SOPointer split
* Code review
Co-authored-by: apankratovantonp <anton.pankratov@intel.com>
* [IE TESTS] Fix comparison in LayerTestUtils
* Fixes
* Small fix
* Int4 fixes
* remove extra
* Fix NMS
* Some fixes for tests
* Add small fix
* [IE TESTS] Remove const folding as a result engine
* Remove extra
* Revert remove constant folding (DSR test) & fix some cases for cpu
* Fix GNA
* add conversion of padded to valid convolution without other parameters change
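The padded-to-valid conversion rests on the identity that a convolution with implicit zero padding equals a valid convolution over an input that has been explicitly extended with zeros. A 1D pure-Python sketch of that equivalence (no GNA specifics):

```python
def conv1d_valid(x, k):
    """Valid 1D convolution (correlation form): no out-of-range reads."""
    K = len(k)
    return [sum(x[i + j] * k[j] for j in range(K))
            for i in range(len(x) - K + 1)]

def conv1d_implicit_pad(x, k, pad):
    """Padded convolution: positions outside x contribute zero."""
    K = len(k)
    out = []
    for i in range(-pad, len(x) - K + 1 + pad):
        acc = 0.0
        for j in range(K):
            idx = i + j
            if 0 <= idx < len(x):   # implicit zero outside the signal
                acc += x[idx] * k[j]
        out.append(acc)
    return out

x = [1.0, 2.0, 3.0, 4.0]
k = [1.0, 0.0, -1.0]
padded = conv1d_implicit_pad(x, k, 1)
# The conversion: materialize the zeros, then run a plain valid convolution.
converted = conv1d_valid([0.0] + x + [0.0], k)
```

Both routes produce identical outputs, which is what lets the transform replace a padded convolution with a valid one over an extended input.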
* [GNA] Fix graph loop when multiple connections exist from single layer to concat
* [GNA] Add 1d and 2d conv test cases
Add models covering all transform scenarios.
Add test cases covering 1d and 2d convolutions.
Update transform with the newest code.
Add minor fixes in transform and elsewhere.
* [GNA] Remove debug code
* [GNA] Fixes after review
* [GNA] Fix failing tests
Co-authored-by: prozen <piotr.rozen@intel.com>
* Implement nGraph transformation to decompose Einsum-7 operation
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Use MatMul instead of Eltwise-multiplication and ReduceSum
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
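The MatMul substitution works because an Einsum contraction over a shared label, e.g. ik,kj->ij computed as broadcast Eltwise multiplication followed by ReduceSum over k, is exactly matrix multiplication. A small pure-Python check of that equivalence (illustrative only, not the transformation code):

```python
def contract_via_mul_reducesum(a, b):
    """ik,kj->ij via a broadcast I x K x J product, then ReduceSum over k."""
    I, K, J = len(a), len(a[0]), len(b[0])
    # Eltwise multiplication with broadcasting over the shared label k...
    prod = [[[a[i][k] * b[k][j] for j in range(J)] for k in range(K)]
            for i in range(I)]
    # ...then ReduceSum along the k axis.
    return [[sum(prod[i][k][j] for k in range(K)) for j in range(J)]
            for i in range(I)]

def contract_via_matmul(a, b):
    """The same contraction expressed directly as a matrix product."""
    I, K, J = len(a), len(a[0]), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(K)) for j in range(J)]
            for i in range(I)]

a = [[1.0, 2.0], [3.0, 4.0]]
b = [[5.0, 6.0], [7.0, 8.0]]
```

MatMul avoids materializing the intermediate I x K x J tensor, which is the point of preferring it in the decomposition.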
* Add description for new methods
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix code style
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix code style #2
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Remove unused variables.py
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Apply feedback after review: fix comments, register_new_node use
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Add Reshape if needed and apply code-review feedback
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix code-style
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Remove unused variable
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Remove extern template from headers for RTTI classes
* Move instantiation out of the namespace
* Use __ANDROID__ conditional compilation for TBlob
* One more attempt
* Reference implementation for memory
* memory reference implementation tests, fixes
* new VariableContext class
* fix ngraph code style
* add new evaluate method to ngraph::function
* unordered_set instead of set in find_variables method
* added a new Memory base class; automatic memory allocation for Variable context in Assign ops; refactoring
* ngraph codestyle
* ngraph codestyle
* temporarily disable the check of variables in ngraph::function
* fix for evaluate hides overloaded virtual function warning
* ngraph codestyle
* uncomment a check in validate_and_infer method
* Removing a check (not backward compatible); adding docs
* Auto detect Parameters/Variables, new constructors in ngraph::function
* use zero initial values in ReadValue v6 evaluate
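The ReadValue/Assign behavior described here can be sketched with hypothetical names in a few lines: ReadValue returns the stored value for its variable, or zeros on first use, and Assign writes the new value into the variable context. This is an illustrative model, not the nGraph implementation.

```python
class VariableContext:
    """Toy variable store for stateful ReadValue/Assign evaluation."""
    def __init__(self):
        self._values = {}

    def read_value(self, var_id, length):
        # Zero initial value when the variable has not been assigned yet
        # (the ReadValue v6 behavior noted above); `length` stands in for
        # a real tensor shape.
        if var_id not in self._values:
            self._values[var_id] = [0.0] * length
        return list(self._values[var_id])

    def assign(self, var_id, value):
        # Assign overwrites the variable's state for subsequent reads.
        self._values[var_id] = list(value)

ctx = VariableContext()
first = ctx.read_value("v0", 3)        # no prior Assign: zeros
ctx.assign("v0", [1.0, 2.0, 3.0])
second = ctx.read_value("v0", 3)       # sees the assigned state
```

Each ReadValue after an Assign observes the updated state, which is the contract the evaluate tests exercise.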
* fix unit tests
* fix codestyle
* fix build (werror)
* ngraph codestyle
* update unit tests, refactoring
* ngraph codestyle
* refactoring, docs, new unit tests
* Resolve review remarks
* rt_attributes-like approach in EvaluationContext, codestyle
* fix build and unit tests
* resolve review comments
* resolve review comments
* codestyle
* export public API