Remove protocol checks for updating memType and the watchdog flag. This has been verified by Microsoft on their target platform with 2 ma2085 devices over PCIe. The target was able to run an OpenVINO sample with these changes.
* Update the spec
* add unit-tests
* add avgPool unit-tests to CMakeLists
* Remove second constructor and change the first one to take default values for rounding_type and pad_type
* add type_prop test for default values
* add 5d input single layer test instances
* add type_prop tests
* Require input to be 4D or 5D
* add validation check for pads size
* Update a few tests to take 5D input instead of 6D
* Update validate_and_infer_types method
* Update infer_batched_pooling_forward and try_apply_auto_padding methods
* Update auto_padding_spatial_dims_dynamic type_prop test for binary_conv, conv, deformable_conv, group_conv and max_pool
* style-apply
* add validation check for kernel size
* add xfail for avgpool python backend test
* style-apply
* remove avgpool backend test from xfail list
* Update spec
* Allow the 3D input
* Update type_prop test with 3D input
* style-apply
* Remove xfail_issue_38709
* fix typo
* Update spec
* Update outputs section in spec
* Update spec
* fix typo
* clean file
* Update detailed description and fix xml examples
* fix exclude-type typo
* fix typo in outputs section
* [IE][nGraph]: Enables begin/end iterators for PartialShape
It's convenient to be able to use STL algorithms on
PartialShape since semantically PartialShape is a
sequence of Dimensions.
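For illustration, a minimal usage sketch (assuming the newly enabled iterators; the helper name below is hypothetical and not part of the API):
```cpp
#include <algorithm>
#include <cstddef>

#include <ngraph/partial_shape.hpp>

// Hypothetical helper: counts dynamic dimensions with a standard algorithm,
// which becomes possible once PartialShape exposes begin()/end().
std::size_t countDynamicDims(const ngraph::PartialShape& shape) {
    return static_cast<std::size_t>(std::count_if(shape.begin(), shape.end(),
        [](const ngraph::Dimension& dim) { return dim.is_dynamic(); }));
}

int main() {
    const ngraph::PartialShape shape{1, ngraph::Dimension::dynamic(), 224, 224};
    return countDynamicDims(shape) == 1 ? 0 : 1;
}
```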
* [IE][VPU][nGraph]: Introduces tree utilities
Introduces Depth-First-Search and Breadth-First-Search
utilities for tree traversal. Templated arguments
make them extensible for different use-case scenarios.
BFS is designed in such a way that it is possible to
guarantee a node will be visited only after all of its
predecessors have been visited:
a
/ \
b c
| |
d |
\ /
e
Here, with appropriately provided functors (NumEntries), it's
guaranteed that node "e" will be visited after "d" and "c".
Such a property is important for node depth evaluation.
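A minimal sketch of that BFS idea (a simplified assumption, not the actual utility; the graph representation and the role of NumEntries are reduced to a predecessor counter):
```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <vector>

// Node i lists its successors; numEntries(i) returns the number of predecessors
// of node i (the role the NumEntries functor plays in the description above).
void bfs(const std::vector<std::vector<std::size_t>>& successors,
         const std::function<std::size_t(std::size_t)>& numEntries,
         const std::function<void(std::size_t)>& visit) {
    std::vector<std::size_t> remaining(successors.size());
    std::queue<std::size_t> queue;
    for (std::size_t node = 0; node < successors.size(); ++node) {
        remaining[node] = numEntries(node);
        if (remaining[node] == 0) {
            queue.push(node);  // sources go first
        }
    }
    while (!queue.empty()) {
        const auto node = queue.front();
        queue.pop();
        visit(node);
        for (const auto successor : successors[node]) {
            // A successor is enqueued only after all of its predecessors were visited.
            if (--remaining[successor] == 0) {
                queue.push(successor);
            }
        }
    }
}

int main() {
    // The graph above: a -> {b, c}, b -> {d}, c -> {e}, d -> {e}; nodes a..e numbered 0..4.
    const std::vector<std::vector<std::size_t>> successors{{1, 2}, {3}, {4}, {4}, {}};
    const std::vector<std::size_t> predecessors{0, 1, 1, 1, 2};
    bfs(successors,
        [&](std::size_t node) { return predecessors[node]; },
        [](std::size_t) { /* node "e" (4) is always visited after "d" (3) and "c" (2) */ });
    return 0;
}
```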
* [IE][VPU][nGraph]: Fixes printTo for nGraph type
For some reason, if printTo for an nGraph type is a
usual function, it is not picked up by VPU_THROW_UNLESS
triggered inside DynamicToStaticShape transformations.
Making it a template specialization does the job.
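A plausible explanation is name lookup: an overload for a type from the ngraph namespace is not found by ADL from inside the already-defined formatting templates, while a specialization of the primary template is. A self-contained illustration with simplified stand-in names (not the actual VPU utilities):
```cpp
#include <iostream>
#include <string>

namespace ngraph_like {
// Stand-in for an nGraph type such as PartialShape.
struct Shape { std::string text; };
}  // namespace ngraph_like

namespace vpu_like {
// Primary template used by the error-formatting machinery.
template <typename T>
void printTo(std::ostream& os, const T& value) { os << value; }

// A plain vpu_like::printTo overload declared after the formatting code is not
// found: unqualified lookup inside the template happens at the point of
// definition, and ADL for ngraph_like::Shape searches ngraph_like, not vpu_like.
// A full specialization of the primary template, however, is picked up.
template <>
void printTo<ngraph_like::Shape>(std::ostream& os, const ngraph_like::Shape& value) {
    os << value.text;
}

// Stand-in for the code path behind VPU_THROW_UNLESS.
template <typename T>
void throwUnless(bool condition, const T& value) {
    if (!condition) {
        printTo(std::cerr, value);  // resolves through the primary template
    }
}
}  // namespace vpu_like

int main() {
    vpu_like::throwUnless(false, ngraph_like::Shape{"{1,?,224,224}"});
    return 0;
}
```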
* [IE][VPU]: Introduces SliceConfiguration class
SliceConfiguration is a class intended to express
the result of slicing an operation by batch. The
result of slicing is a configuration that specifies
what to do with each data object associated with the
operation. There are two options defined: Slice and
Unchanged. The typical scenario is Slice, when the
operation has the same batch for all inputs and
outputs, so all corresponding data objects will be
"sliced" (replaced with a copy where the batch
equals 1).
In some cases, a data object should not be sliced
(e.g. if the operation has a constant input which
is the same for all input data batches and,
therefore, does not have a batch - Add of 2 tensors
with shapes [10, 1000] and [1000]). To represent
such cases there is the option "Unchanged".
In cases when the operation should not be sliced
at all (e.g. it does not have a batch, has different
batches for inputs and outputs, has a static batch,
and so on), the SliceConfiguration object will
return false for the "hasSlice" method call. In
these cases, calls to the inputs and outputs methods
will throw an exception.
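A rough sketch of such an interface (a simplified assumption, not the actual VPU class):
```cpp
#include <stdexcept>
#include <utility>
#include <vector>

enum class SliceMode { Slice, Unchanged };

class SliceConfiguration {
public:
    // Operation is not sliced at all (no batch, mismatched or static batch, ...).
    SliceConfiguration() = default;

    // Operation is sliced: one mode per input and per output data object.
    SliceConfiguration(std::vector<SliceMode> inputs, std::vector<SliceMode> outputs)
        : m_hasSlice(true), m_inputs(std::move(inputs)), m_outputs(std::move(outputs)) {}

    bool hasSlice() const { return m_hasSlice; }

    const std::vector<SliceMode>& inputs() const {
        if (!m_hasSlice) {
            throw std::logic_error("inputs() called on a non-sliced configuration");
        }
        return m_inputs;
    }

    const std::vector<SliceMode>& outputs() const {
        if (!m_hasSlice) {
            throw std::logic_error("outputs() called on a non-sliced configuration");
        }
        return m_outputs;
    }

private:
    bool m_hasSlice = false;
    std::vector<SliceMode> m_inputs;
    std::vector<SliceMode> m_outputs;
};
```
For the Add of tensors with shapes [10, 1000] and [1000] mentioned above, the inputs would be configured as {Slice, Unchanged} and the output as {Slice}.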
* [IE][VPU][nGraph]: Enables MatMul operation slice
In case of a static batch, the operation is not going to be sliced,
since for handling such cases another transformation is used.
Such an approach allows both passes to co-exist while one is
being replaced with the other.
If the data input has a dynamic dimension other than the batch,
an error will be thrown, since the Myriad-X plugin does not
support convolutions (HW-accelerated operations) with dynamism
in spatial dimensions.
* [IE][VPU][nGraph]: Enables Convolution operations slice
In case of a static batch, the operation is not going to be sliced,
since for handling such cases another transformation is used.
Such an approach allows both passes to co-exist while one is
being replaced with the other.
If the data input has a dynamic dimension other than the batch,
an error will be thrown, since the Myriad-X plugin does not
support convolutions (HW-accelerated operations) with dynamism
in spatial dimensions.
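A hypothetical eligibility check mirroring these rules (building on the SliceConfiguration sketch above; not the actual pass, and it assumes a statically known rank and a batch-less second input such as weights):
```cpp
#include <cstddef>
#include <stdexcept>

#include <ngraph/partial_shape.hpp>

SliceConfiguration sliceByBatch(const ngraph::PartialShape& data) {
    const auto rank = static_cast<std::size_t>(data.rank().get_length());
    for (std::size_t i = 1; i < rank; ++i) {
        // Dynamism in a non-batch (e.g. spatial) dimension is not supported.
        if (data[i].is_dynamic()) {
            throw std::runtime_error("dynamic non-batch dimension is not supported");
        }
    }
    if (data[0].is_static()) {
        return SliceConfiguration{};  // static batch: left to the other transformation
    }
    // Slice the data input, keep the batch-less weights input unchanged.
    return SliceConfiguration{{SliceMode::Slice, SliceMode::Unchanged}, {SliceMode::Slice}};
}
```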
* [IE][VPU][nGraph]: Enables unary eltwise slice
Since the extract dynamic batch transformation will handle
dynamism only by batch (so it requires the body loop to be static),
operations with dynamism in a dimension other than the batch
should not be covered by the loop.
In case of dynamism in a dimension other than the batch, the
eltwise will be considered unsupported for sub-graph extraction.
* [IE][VPU][nGraph]: Enables binary eltwise slice
Since the extract dynamic batch transformation will handle
dynamism only by batch (so it requires the body loop to be static),
operations with dynamism in a dimension other than the batch
should not be covered by the loop.
In case of dynamism in a dimension other than the batch, the
eltwise will be considered unsupported for sub-graph extraction.
It is a template function, since different binary eltwise
operations share the same broadcasting rules.
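A hypothetical shape of such a template (reusing the SliceConfiguration sketch above; names and the exact handling of broadcasting are assumptions):
```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

#include <ngraph/node.hpp>

// One body serves any binary eltwise, since they share the same broadcasting rules:
// an input is sliced only if it actually carries the dynamic batch; an input that is
// broadcast over the batch (missing it, or having it equal to 1) stays Unchanged.
template <typename BinaryEltwise>
SliceConfiguration sliceBinaryEltwiseByBatch(const BinaryEltwise& eltwise) {
    const auto& out = eltwise.get_output_partial_shape(0);
    const auto outRank = out.rank().get_length();
    for (std::int64_t i = 1; i < outRank; ++i) {
        if (out[i].is_dynamic()) {
            return SliceConfiguration{};  // dynamism outside the batch: unsupported
        }
    }
    std::vector<SliceMode> inputs;
    for (std::size_t port = 0; port < 2; ++port) {
        const auto& in = eltwise.get_input_partial_shape(port);
        const bool carriesBatch = in.rank().get_length() == outRank && in[0].is_dynamic();
        inputs.push_back(carriesBatch ? SliceMode::Slice : SliceMode::Unchanged);
    }
    return SliceConfiguration{inputs, {SliceMode::Slice}};
}
```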
* [IE][VPU][nGraph]: Enables extract dynamic batch transformation
The general approach is the following:
1. Extracted sub-graphs should have exactly one input and one output
operation. Otherwise, it's possible that the memory consumption of
the model will be increased, since the loop implementation on
Myriad-X requires keeping all inputs and outputs of the loop alive
along with the memory used by the loop body. In the layout
consolidation scenario this reflects the intention to use a minimal
number of permutations.
2. An extracted sub-graph should not have external connections (the
only nodes allowed to have a predecessor or successor outside of
the sub-graph are the input and the output). Otherwise, it's
possible that the memory consumption of the model will be increased
for the same reason as in the previous point.
To make sure this restriction is met, the transformation looks
for leaves in both directions, finds the corresponding LCA
(Lowest Common Ancestor) and checks whether such a sub-graph has
external connections. If so, it repeats the leaf search
procedure, stopping if it approaches leaves from the previous
iteration, and finds the LCA again. This is repeated until a
sub-graph without external connections is found (it exists,
since at least the source itself forms one).
A leaf in the current context is a node which satisfies one of
the following conditions (depending on direction); a simplified
sketch of the first condition is given after the list:
Top:
1. It has no predecessors other than Parameter or Constant
2. It's unknown how to slice this operation
3. It could not be sliced (different batch for inputs and
outputs)
Bottom:
1. It has no successors other than Result
2. It's unknown how to slice this operation
3. It could not be sliced (different batch for inputs and
outputs)
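A simplified sketch of the first leaf condition for both directions (hypothetical helpers, not the actual transformation; conditions 2 and 3 would additionally consult the registered slice handlers):
```cpp
#include <memory>

#include <ngraph/node.hpp>
#include <ngraph/op/constant.hpp>
#include <ngraph/op/parameter.hpp>
#include <ngraph/op/result.hpp>

bool isTopLeaf(const std::shared_ptr<ngraph::Node>& node) {
    // No predecessors other than Parameter or Constant.
    for (const auto& input : node->input_values()) {
        const auto producer = input.get_node_shared_ptr();
        if (!ngraph::is_type<ngraph::op::Parameter>(producer) &&
            !ngraph::is_type<ngraph::op::Constant>(producer)) {
            return false;
        }
    }
    return true;
}

bool isBottomLeaf(const std::shared_ptr<ngraph::Node>& node) {
    // No successors other than Result.
    for (const auto& output : node->outputs()) {
        for (const auto& consumer : output.get_target_inputs()) {
            if (!ngraph::is_type<ngraph::op::Result>(consumer.get_node())) {
                return false;
            }
        }
    }
    return true;
}
```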
Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
This change fixes the error
Input blob size is not equal network input size (1!=0)
seen when passing a scalar input to a model in the case of VPU plugins.
* [MO] Add CMake install for Model Optimizer
* [MO] Update test for version.py
* [MO] fix file permissions for install location
* enable make install for OMZ
* Add option description
* remove OMZ fetching & install
* Update firmware
* Add the test case from the network
* Disable the fp32 case, because in this case the network has an output Convert which receives a non-inner stride on its input, which is not supported at the moment.
* Support FP16 comparator.
* Add `USE_BUILD_TYPE_SUBFOLDER` CMake option to append
`CMAKE_BUILD_TYPE` to output binary directory.
Initialize it to `ON` for UNIX to keep current behavior.
* Remove forced `CMAKE_CONFIGURATION_TYPES` initialization,
use the user-provided value instead.
This will allow using single-config generators (like Ninja) on Windows
with MSVC compilers and getting binaries in per-config sub-folders in
the same way as on UNIX.
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
* Partially removed cmake duplication with IE cmake
* Deprecated API usage: fixed or suppressed
* Fix for TypeRelaxed
* Canonical form for ngraph includes
* Removed extra visibility settings; removed graphviz find_package
* Removed var_functions module; canonical includes for ngraph::reference
* Fixed deprecated API in ngraph tests
* Re-use standard cmake macro for shared libs
* Trying to fix ONNX importer tests
* Add file containing base template functions for all unary operators and add example with acos
* fix style-check
* add file for tests for all unary operators
* fix style
* rename unary_base.cpp to unary_ops.cpp
* Update test CMakeList
* fix typo
* style-apply
* Remove code blocks and add test for dynamic rank input
* remove deformableconvolution op from layer creator
* remove deformablepsroipooling op from layer creator
* remove maxpool op from layer creator
* remove nonmaxsuppression from layer creator
* remove groupconvolutionbackpropdata op from layer creator
* remove groupconvolution op from layer creator
* fix code style
* [MO] Implement support of TensorFlow 2 Keras Embedding operation in MO
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Update another requirements files
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
Otherwise CMake produces the following warning:
```
CMake Warning (dev) at /usr/local/share/cmake-3.19/Modules/FindPackageHandleStandardArgs.cmake:426 (message):
The package name passed to `find_package_handle_standard_args` (Wget) does
not match the name of the calling package (IEDevScripts). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
/usr/local/share/cmake-3.19/Modules/FindWget.cmake:26 (FIND_PACKAGE_HANDLE_STANDARD_ARGS)
cmake/developer_package/download/download_and_check.cmake:5 (include)
cmake/developer_package/download/download_and_extract.cmake:6 (include)
cmake/developer_package/download/download.cmake:25 (include)
cmake/developer_package/download/dependency_solver.cmake:5 (include)
cmake/developer_package/IEDevScriptsConfig.cmake:208 (include)
CMakeLists.txt:12 (find_package)
This warning is for project developers. Use -Wno-dev to suppress it.
```