* Initial move
* ONNX Importer is private now - CMakeLists.txt
* ONNX Importer is private now - Includes
* Make some files visible
* Style apply
* Review fix
* Public headers have a prefix now
* Style
* hide more headers
* regionyolo do_softmax attribute
* add serialization single layer tests for normalizel2 and reshape
* add prelu sslt, change letter case in op name to align with MO
* add shufflechannels sslt, add workaround to serialize the op with proper opset number
* add broadcast sslt, change attribute string representations to lowercase
* add pad sslt, change attribute string representations to lowercase
* Unify sslt name prefixes
* add prelu name translation for serialization
* change expected type of regionyolo do_softmax attribute to bool
* transform autobroadcast type attr to lowercase, add unit test, add special opset mapping in serialization
* style fix
* fix indentation
* fix indentation 2
* Allow different opset assignment for different op versions
* Update header dates in modified files
* Match special opset to type_info_t instead of a string
* Adjust the comment to match the code
* Release MO dev guide refactoring (#3266)
* Updated MO extension guide
* Minor change and adding svg images
* Added additional information about operation extractors. Fixed links and markdown issues
* Added missing file with information about Caffe Python layers and image for MO transformations dependencies graph
* Added section with common graph transformations attributes and diagram with anchor transformations. Added list of available front phase transformations
* Added description of front phase transformations except the scope-defined and points-defined ones. Removed the legacy document and examples for such transformations.
* Added sections about node-name-pattern-defined front phase transformations. Copied over the old section for the points-defined front transformation
* Added description of the rest of the front transformations and all middle and back phase transformations
* Refactored Legacy_Mode_for_Caffe_Custom_Layers and updated the Customize_Model_Optimizer with information about extractors order
* Added TOC for the MO Dev guide document and replaced SVG images with PNG ones
* Fixed broken link. Removed redundant image
* Fixed broken links
* Added information about attributes 'run_not_recursively', 'force_clean_up' and 'force_shape_inference' of the transformation
* Code review comments
* Added a section about `Port`s
* Extended Ports description with examples
* Added information about Connections
* Updated MO README.md and removed a lot of redundant and misleading information
* Updates to the Customize_Model_Optimizer.md
* More updates to the Customize_Model_Optimizer.md
* Final updates for the Customize_Model_Optimizer.md
* Fixed some broken links
* More fixed links
* Refactored Custom Layers Guide: removed legacy and incorrect text, added up-to-date content.
* Draft implementation of the Custom layer guide example for the MO part
* Fixed broken links using #. Changed layer->operation in extensibility documents
* Updated Custom operation guide with IE part
* Fixed broken links and minor updates to the Custom Operations Guide
* Updating links
* Layer->Operation
* Moved FFTOp implementation to the template extension
* Update the CMake for template_extension to build the FFT op conditionally
* Fixed template extension compilation
* Fixed CMake for template extension
* Fixed broken snippet
* Added mri_demo script and updated documentation
* One more compilation error fix
* Added missing header for a demo file
* Added reference to OpenCV
* Fixed unit test for the template extension
* Fixed typos in the template extension
* Fixed compilation of template extension for case when ONNX importer is disabled
Co-authored-by: Alexander Zhogov <alexander.zhogov@intel.com>
- Added linear_onnx mode support to the resample_opt kernel.
- Fixed byxf layout check.
- Added Resample + Eltwise fusing support
- Updated dequantize merge pass to work with eltwise instead of scale
- Fixed uninitialized m_maxBatch value for query mode
- Fixed missing AddPrimitiveToProfiler for DeformablePSRoiPooling
- Fixed 0d gather
- Added workaround for Resample+Eltwise fusing
Co-authored-by: Gleb Kazantaev <gleb.nnstu@gmail.com>
Remove protocol checks for updating memType and the watchdog flag. This has been verified by Microsoft on their target platform with two ma2085 devices over PCIe. The target was able to run an OpenVINO sample with these changes.
* Update the spec
* add unit-tests
* add avgPool unit-tests to CMakeLists.txt
* Remove second constructor and change the first one to take default values for rounding_type and pad_type
* add type_prop test for default values
* add 5d input single layer test instances
* add type_prop tests
* Require input to be 4D or 5D
* add validation check for pads size
* Update a few tests to take 5D input instead of 6D
* Update validate_and_infer_types method
* Update infer_batched_pooling_forward and try_apply_auto_padding methods
* Update auto_padding_spatial_dims_dynamic type_prop test for binary_conv, conv, deformable_conv, group_conv and max_pool
* style-apply
* add validation check for kernel size
* add xfail for avgpool python backend test
* style-apply
* remove avgpool backend test from xfail list
* Update spec
* Allow 3D input
* Update type_prop test with 3D input
* style-apply
* Remove xfail_issue_38709
* fix typo
* Update spec
* Update outputs section in spec
* Update spec
* fix typo
* clean file
* Update detailed description and fix xml examples
* fix exclude-type typo
* fix typo in outputs section
* [IE][nGraph]: Enables begin/end iterators for PartialShape
It's convenient to be able to use STL algorithms on
PartialShape since semantically PartialShape is a
sequence of Dimensions.
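As a hedged illustration of what this enables (assuming the usual `<ngraph/partial_shape.hpp>` header and that the new iterators yield `ngraph::Dimension`; the helper below is hypothetical):

```cpp
#include <algorithm>
#include <ngraph/partial_shape.hpp>

// A hypothetical helper: with begin()/end() available, STL algorithms
// such as std::any_of apply to a PartialShape directly.
bool has_dynamic_dimension(const ngraph::PartialShape& shape) {
    return std::any_of(shape.begin(), shape.end(),
                       [](const ngraph::Dimension& d) { return d.is_dynamic(); });
}
```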
* [IE][VPU][nGraph]: Introduces tree utilities
Introduces Depth-First-Search and Breadth-First-Search
utilities for tree traversal. Templated arguments
make them extensible for different use-case scenarios.
BFS is designed in a way that makes it possible to
guarantee a node will be visited only after all its
predecessors have been visited:
  a
 / \
b   c
|   |
d   |
 \ /
  e
Here, with accordingly provided functors (NumEntries), it's
guaranteed that node "e" will be visited only after "d" and
"c". Such a property is important for node depth evaluation.
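A minimal sketch of how such a guarantee can be provided (the signature is illustrative, not the actual utility's): a node is enqueued only once the number of its visited predecessors matches what a NumEntries-style functor reports.

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <unordered_map>
#include <vector>

// Illustrative BFS: visits a node only after all of its predecessors
// have been visited. "sources" are nodes with no incoming edges.
template <class Node>
void bfs(const std::vector<Node*>& sources,
         const std::function<std::size_t(Node*)>& numEntries,
         const std::function<std::vector<Node*>(Node*)>& successors,
         const std::function<void(Node*)>& visit) {
    std::unordered_map<Node*, std::size_t> seenEntries;
    std::queue<Node*> queue;
    for (auto* source : sources) {
        queue.push(source);
    }
    while (!queue.empty()) {
        auto* node = queue.front();
        queue.pop();
        visit(node);
        for (auto* next : successors(node)) {
            // In the diagram above, "e" is enqueued only after both
            // "d" and "c" have been visited.
            if (++seenEntries[next] == numEntries(next)) {
                queue.push(next);
            }
        }
    }
}
```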
* [IE][VPU][nGraph]: Fixes printTo for nGraph type
For some reason, if printTo for an nGraph type is a
usual function, it's not picked up by VPU_THROW_UNLESS
triggered inside DynamicToStaticShape transformations.
Making it a template specialization does the job.
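A hedged illustration of the distinction with a stand-in type (the real printTo signature and the real nGraph type differ): the formatting machinery calls a printTo template, and specializing that very template guarantees the call site picks up the custom behavior.

```cpp
#include <ostream>

namespace vpu {

// Primary template: what VPU_THROW_UNLESS-style formatting would call.
template <typename T>
void printTo(std::ostream& os, const T& value) {
    os << value;
}

// Hypothetical stand-in for the real nGraph type.
struct NGraphTypeStub { int id = 0; };

// A specialization of the primary template is always found at the
// call site, unlike a plain free function in another namespace.
template <>
void printTo<NGraphTypeStub>(std::ostream& os, const NGraphTypeStub& value) {
    os << "NGraphTypeStub{" << value.id << "}";
}

}  // namespace vpu
```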
* [IE][VPU]: Introduces SliceConfiguration class
SliceConfiguration is a class that's intended
to express the result of slicing an operation
by batch. The result of slicing is a
configuration that specifies what to do with
each data object associated with the operation.
There are two options defined: Slice and
Unchanged. The typical scenario is Slice, when
the operation has the same batch for all inputs
and outputs, so all corresponding data objects
will be "sliced" (replaced with a copy where
the batch equals 1).
In some cases, a data object should not be
sliced (e.g. if the operation has a constant
input which is the same for all input data
batches and so does not have a batch: Add of 2
tensors with shapes [10, 1000] and [1000]). To
represent such cases there is the option
"Unchanged".
In cases when the operation should not be
sliced at all (e.g. it does not have a batch,
has different batches for inputs and outputs,
has a static batch and so on), the
SliceConfiguration object will return false for
the "hasSlice" method call. In these cases,
calls to the "inputs" and "outputs" methods
will throw an exception.
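A rough sketch of the interface described above (member names follow the commit text; the actual class may differ in details):

```cpp
#include <stdexcept>
#include <utility>
#include <vector>

enum class SliceMode { Slice, Unchanged };

class SliceConfiguration {
public:
    SliceConfiguration() = default;  // "do not slice at all"
    SliceConfiguration(std::vector<SliceMode> inputs,
                       std::vector<SliceMode> outputs)
        : m_hasSlice(true),
          m_inputs(std::move(inputs)),
          m_outputs(std::move(outputs)) {}

    bool hasSlice() const { return m_hasSlice; }

    const std::vector<SliceMode>& inputs() const {
        if (!m_hasSlice) {
            throw std::runtime_error("operation is not supposed to be sliced");
        }
        return m_inputs;
    }

    const std::vector<SliceMode>& outputs() const {
        if (!m_hasSlice) {
            throw std::runtime_error("operation is not supposed to be sliced");
        }
        return m_outputs;
    }

private:
    bool m_hasSlice = false;
    std::vector<SliceMode> m_inputs;
    std::vector<SliceMode> m_outputs;
};
```

For the Add example above, the configuration would be Slice for the first input and the output, and Unchanged for the constant [1000] input.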
* [IE][VPU][nGraph]: Enables MatMul operation slice
In case of a static batch, the operation is not going
to be sliced, since another transformation is used for
handling such cases. This approach allows both passes
to co-exist while one is being replaced with the other.
If the data input has a dynamic dimension other than
the batch, an error will be thrown, since the Myriad-X
plugin does not support convolutions (HW accelerated
operations) with dynamism in spatial dimensions.
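Building on the SliceConfiguration sketch above, a hypothetical decision flow for this case (the shape representation and all names are illustrative; the batch is assumed to be the outermost dimension and the second MatMul input is assumed constant):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// isDynamicDim[i] is true if dimension i of the data input is dynamic.
SliceConfiguration sliceMatMulSketch(const std::vector<bool>& isDynamicDim) {
    if (!isDynamicDim.at(0)) {
        // Static batch: handled by another transformation, not sliced here.
        return SliceConfiguration{};
    }
    for (std::size_t i = 1; i < isDynamicDim.size(); ++i) {
        if (isDynamicDim[i]) {
            // Dynamism outside the batch is unsupported on Myriad-X.
            throw std::runtime_error("dynamism in a non-batch dimension");
        }
    }
    // Data is sliced by batch; constant weights are shared across batches.
    return SliceConfiguration({SliceMode::Slice, SliceMode::Unchanged},
                              {SliceMode::Slice});
}
```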
* [IE][VPU][nGraph]: Enables Convolution operations slice
In case of a static batch, the operation is not going
to be sliced, since another transformation is used for
handling such cases. This approach allows both passes
to co-exist while one is being replaced with the other.
If the data input has a dynamic dimension other than
the batch, an error will be thrown, since the Myriad-X
plugin does not support convolutions (HW accelerated
operations) with dynamism in spatial dimensions.
* [IE][VPU][nGraph]: Enables unary eltwise slice
Since the extract dynamic batch transformation handles
dynamism only by batch (and so requires the loop body
to be static), operations with dynamism in a dimension
other than the batch should not be covered by the loop.
In case of dynamism in a dimension other than the
batch, the eltwise will be considered unsupported for
sub-graph extraction.
* [IE][VPU][nGraph]: Enables binary eltwise slice
Since the extract dynamic batch transformation handles
dynamism only by batch (and so requires the loop body
to be static), operations with dynamism in a dimension
other than the batch should not be covered by the loop.
In case of dynamism in a dimension other than the
batch, the eltwise will be considered unsupported for
sub-graph extraction.
It's a template function, since different binary
eltwise operations share the same broadcasting rules.
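Building on the same sketch, a hedged illustration of why one template can cover all binary eltwises: the decision depends only on the inputs' ranks and the shared numpy-style broadcasting rules, never on the concrete arithmetic, so Add, Multiply and the rest can instantiate the same function (the accessors are hypothetical).

```cpp
#include <cstddef>

template <class BinaryEltwise>
SliceConfiguration sliceBinaryEltwiseSketch(const BinaryEltwise& node) {
    const std::size_t lhsRank = node.lhsRank();  // hypothetical accessors
    const std::size_t rhsRank = node.rhsRank();
    // Under numpy broadcasting, an input of smaller rank has no batch
    // (the batch is assumed outermost), so it stays unchanged; the
    // larger-rank input and the output are sliced by batch.
    const auto lhs = lhsRank >= rhsRank ? SliceMode::Slice : SliceMode::Unchanged;
    const auto rhs = rhsRank >= lhsRank ? SliceMode::Slice : SliceMode::Unchanged;
    return SliceConfiguration({lhs, rhs}, {SliceMode::Slice});
}
```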
* [IE][VPU][nGraph]: Enables extract dynamic batch transformation
The general approach is the following:
1. Extracted sub-graphs should have exactly one input and output
operation. Otherwise, it's possible that the memory consumption
of the model will increase, since the loop implementation on
Myriad-X requires keeping all inputs and outputs of the loop
alive along with the memory used by the loop body. In the
layout consolidation scenario this reflects the intention to
use a minimized number of permutations.
2. The extracted sub-graph should not have external connections
(the only nodes that are allowed to have a predecessor or
successor outside of the sub-graph are the input and output).
Otherwise, it's possible that the memory consumption of the
model will increase for the same reason as in the previous
point.
To make sure this restriction is met, the transformation looks
for leaves in both directions, finds the corresponding LCA
(Lowest Common Ancestor) and checks whether such a sub-graph
has external connections. If it does, the transformation
repeats the leaf search procedure, stopping if it approaches
the leaves from the previous iteration, and finds the LCA
again. This is repeated until a sub-graph without external
connections is found (it exists: at least the source itself
forms one). The external-connections check is sketched below,
after the leaf conditions.
A leaf in the current context is a node which satisfies one of
the following conditions (depending on the direction):
Top:
1. It has no predecessors other than Parameter or Constant
nodes
2. It's unknown how to slice this operation
3. It cannot be sliced (different batches for inputs and
outputs)
Bottom:
1. It has no successors other than Result nodes
2. It's unknown how to slice this operation
3. It cannot be sliced (different batches for inputs and
outputs)
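A minimal, self-contained sketch of the external-connections check from restriction 2 (the graph representation and all names are illustrative, not the plugin's):

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

struct Graph {
    // predecessors[v] and successors[v] list the neighbors of node v.
    std::vector<std::vector<std::size_t>> predecessors;
    std::vector<std::vector<std::size_t>> successors;
};

// Returns true if any node of the candidate sub-graph other than its
// designated input/output touches a node outside the sub-graph.
bool hasExternalConnections(const Graph& graph,
                            const std::unordered_set<std::size_t>& subGraph,
                            std::size_t input, std::size_t output) {
    for (const auto node : subGraph) {
        if (node != input) {
            for (const auto pred : graph.predecessors[node]) {
                if (subGraph.count(pred) == 0) return true;
            }
        }
        if (node != output) {
            for (const auto succ : graph.successors[node]) {
                if (subGraph.count(succ) == 0) return true;
            }
        }
    }
    return false;
}
```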
Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
This change fixes the error
Input blob size is not equal network input size (1!=0)
seen when passing a scalar input to a model with VPU plugins.
* [MO] Add CMake install for Model Optimizer
* [MO] Update test for version.py
* [MO] fix file permissions for install location
* enable make install for OMZ
* Add option description
* remove OMZ fetching & install
* Update firmware
* Add the test case from the network
* Disable the fp32 case: in this case the network has an output Convert which receives a non-inner stride in its input, and this is not supported yet.
* Support FP16 comparator.
* Add `USE_BUILD_TYPE_SUBFOLDER` CMake option to append
`CMAKE_BUILD_TYPE` to the output binary directory.
Initialize it to `ON` for UNIX to keep the current behavior.
* Remove forced `CMAKE_CONFIGURATION_TYPES` initialization,
use the user-provided value instead.
This allows using single-config generators (like Ninja) on Windows
with MSVC compilers and getting binaries in per-config sub-folders
in the same way as on UNIX.
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>