* [MO] Add CMake install for Model Optimizer
* [MO] Update test for version.py
* [MO] fix file permissions for install location
* Generate TensorIterator without back edges from TensorFlow models
* Added a check in the MarkSubgraphsWithCorrectLayout to not fail when port is not connected
* Updated the 'protobuf2nx' to consume the graph protobuf message
* Cleanup TI from the IRv7 specific code
* Do not run some front transformations recursively
* Draft support for the ONNX Loop operation when 'cond' = True
* LoopToTI transformation changes
* Added draft of Loop operation and parser for ONNX Loop operation body
* Updated Loop body parser + added shape and type infer for the Loop operation
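For context, the ONNX Loop semantics that the body parser and shape/type inference have to model can be sketched in plain Python (the helper below is illustrative, not the MO implementation):

```python
def onnx_loop(trip_count, cond, body, initial_state):
    # ONNX Loop: run `body` at most `trip_count` times while the execution
    # condition stays true; `body` returns the next condition, the updated
    # loop-carried value, and one scan output per iteration.
    i, state, scan = 0, initial_state, []
    while (trip_count is None or i < trip_count) and cond:
        cond, state, out = body(i, cond, state)
        scan.append(out)
        i += 1
    return state, scan
```

With `trip_count=None` this degenerates to a while-loop; with the condition held constantly true it behaves like a for-loop, which is the `cond = True` case the draft transformation targets.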
* Fixes for ONNX Loop operation parser
* Moved Loop parsing to Loop op extractor. Added generation of external edges for the Loop body ops
* Added support for ThresholdedRelu using decomposition
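The decomposition can be sketched with NumPy (function name and stand-in below are illustrative, not the MO code): ThresholdedRelu keeps values strictly above `alpha` and zeroes the rest, so it maps onto Greater + Cast + Multiply primitives instead of a dedicated kernel:

```python
import numpy as np

def thresholded_relu_decomposed(x, alpha=1.0):
    # ThresholdedRelu(x) = x * (x > alpha):
    # Greater produces a boolean mask, Cast converts it to the input
    # dtype, and Multiply applies it to the input.
    mask = (x > alpha).astype(x.dtype)
    return x * mask
```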
* Added support for Min ONNX operation
* Draft fixes for port_map generation for the Loop
* Rename transformation file and fix BOM
* Fixed shape inference for Loop scan outputs (axis is not None)
* Fixed shape inference for ONNX Loop operation
* Refactor checks in the TensorIteratorMerge transformation
* Code refactoring. Enabled commented transformations
* Documentation update for ONNX Loop, ThresholdedRelu and Min
* Fixed typo in the Loop front transformation where execution condition input is connected. Other refactorings
* Fixed the Loop extractor
* Added printing 'internal_layer_id' attribute in the graph dumper
* Updated calculation of iterations number for the Loop
* Added missing code
* Fixed output port shapes and types generation for Loop operation
* Update function names and variable names in the Loop operation
* Fixed type inference for iteration count input
* Added removal of input/output ports of the Loop if they are not used
* Fixed renumbering of Loop operation input/output ports to keep the mandatory ones
* Fixed ThresholdedReluDecomposition transformation
* Updated MO IR Reader to recognize the Loop operation, though the operation is still not supported by the MO IR Reader
* Added unit test for Slice op shape infer (reverse the sequence of elements)
* Reverted changes in the ONNX loader function call to protobuf2nx
* Enable Reshape0DToSqueeze transformation recursively
* Refactored Loop operation support implementation
* Changed ThresholdedReluDecomposition to generate Const with shape [1] instead of scalar
* Code style and wording fixes
* Restored accidentally removed 'return' statement in the TI shape infer function
* Fixed comments
* Fixed comment
Co-authored-by: Evgeny Lazarev <elazarev.nnov@gmail.com>
* Add hsigmoid fusing for MO
* Update Bom file
* Remove comments
* Refactoring hsigmoid fusion according to review
* Add div and mul patterns for hsigmoid fusion
* Refactoring code according to review
* Fix HSigmoid fusion transformation
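The subgraph patterns this fusion matches can be sketched as follows (a NumPy reference, not the pattern-matching code itself; both the divide-by-6 and multiply-by-1/6 variants mentioned above reduce to the same HSigmoid):

```python
import numpy as np

def hsigmoid_reference(x):
    # HSigmoid(x) = ReLU6(x + 3) / 6
    return np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0

# The two equivalent subgraph shapes the fusion has to recognize:
def pattern_div(x):
    return np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0

def pattern_mul(x):
    return np.minimum(np.maximum(x + 3.0, 0.0), 6.0) * (1.0 / 6.0)
```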
* Add Round-5 operation
* Add ONNX Round to supported operation list
* Add ngraph implementation for Round operation
* Update MO part
* Create UnaryElementwise class, update Round Operation
* Fix mode attr in mxnet extractor
* Add tests for Round shape infer
* Update 'enable' attr
* Update MO IR Reader to support UnaryElementwise operations
* Minor test refactor
* Update ngraph Round operation
* Add reference implementation
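Such a reference can be sketched in NumPy, assuming the two rounding modes of Round-5 are `half_to_even` and `half_away_from_zero` (the function below is an illustration, not the ngraph code):

```python
import numpy as np

def round_reference(x, mode="half_to_even"):
    # 'half_to_even' is NumPy's native behaviour (banker's rounding);
    # 'half_away_from_zero' rounds ties away from zero.
    if mode == "half_to_even":
        return np.round(x)
    elif mode == "half_away_from_zero":
        return np.sign(x) * np.floor(np.abs(x) + 0.5)
    raise ValueError(f"unsupported mode: {mode}")
```

The two modes differ only on exact halves: 2.5 goes to 2 under `half_to_even` but to 3 under `half_away_from_zero`.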
* Add test for reference implementation
* Add test for shape infer
* Add test for IE IR Reader
* Add Round operation to Python API
* Fix missed mode attr
* Update Round operation version
* Fix codestyle
* Add MxNet Round to supported layers list
* Fix error in reference
* Fix comments style
* Update CMake file
* Update Ngraph reference test
* Update IE IR Reader tests
* Return v0::Round operation
* Update shape infer tests
* Fix v0::Round reference
* Fix codestyle
* Enum instead of string
* Fix codestyle
* Add Mode attribute adapter
* Update Mode attr
* Fix reference for v0::Round
* Fix codestyle
* Fix mode attr
* Fix get() method
* Fix codestyle in python api
* Update test info
* Fix ngraph api part
* Add Round-5 to interpreter tests
* Fix codestyle in IE reader test
* Update ngraph python api __init__.py file
* Add opset5 to default opsets in ie_ir reader
* Add parser for Round layer
* Remove redundant spaces
* Add round creator to appropriate list
* Remove redundant import
* Commit to bump infrastructure version
I'm sorry for this, but this commit will be squashed on merge to master anyway, and it is needed for your PR to pass the pipeline correctly
* Fix import
* fix codestyle
* Fix ngraph api part
* Add shape infer tests in python api
* Add .upper() for mode attr
* Refactor MO shape infer test for Round op
* Update tests and add comments
* Revert "Commit to bump infrastructure version"
This reverts commit 56e6ae1e4c.
* remove parser for Round layer
* Update Round-5 evaluate test
* Resolve review comments
Co-authored-by: User <user@nnlvdp-achetver.inn.intel.com>
Co-authored-by: Andrey Babushkin <andrey.babushkin@intel.com>
Co-authored-by: Anton Chetverikov <anton.chetverikov@.intel.com>
* Implement LookupTableInsertV2 shape inference
It is needed if other nodes not being pruned from the graph
have a conditional dependence on the LookupTableInsertV2 node.
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix after code review #1
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix the code after review #2
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix after code review #3
* Extend MO for operation GatherND
* Update documentation
* Rename GatherNd.py to gathernd.py
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
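The GatherND semantics the extended MO support has to infer can be sketched in NumPy for the `batch_dims=0` case (an illustrative reference, not the MO shape-infer code):

```python
import numpy as np

def gather_nd(data, indices, batch_dims=0):
    # batch_dims=0 sketch: each innermost row of `indices` addresses a
    # slice of `data`; the output shape is
    # indices.shape[:-1] + data.shape[indices.shape[-1]:].
    assert batch_dims == 0, "sketch covers only batch_dims=0"
    out = np.array([data[tuple(idx)]
                    for idx in indices.reshape(-1, indices.shape[-1])])
    return out.reshape(indices.shape[:-1] + data.shape[indices.shape[-1]:])
```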
* [MO] [Kaldi] Added TDNN Component
* TdnnComponent replacer graphical comment updated
* Added SpecAugmentTimeMaskComponent
* some refactor of memoryoffset shape_infer
* moved memoryoffset splitting to the middle stage
* some corrections
- set `need_shape_inference`=False in split_memoryoffset
- use cycle instead of pattern in tdnn_replacer
* separated splitting of MemoryOffsets in LSTM and TDNN blocks
* set transpose_weights=True in TdnnComponent
* Corrected Supported_Frameworks_Layers
* corrected comments
* separate naming for tdnn and lstm memoryoffset splits
* corrected BOM file
* corrected generaldropout_ext.py and removed 'has_default' for tdnn_component
* corrections after PR review
* renamed LSTM -> recurrent; added setting element_size for paired nodes of tdnn_memoffset and other minor changes
* Update split_tdnn_memoryoffset.py
* corrected partial infer with new API in elemental.py and split_tdnn_memoryoffset.py
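For reference, a TdnnComponent splices input frames at fixed time offsets and applies an affine transform, which is why the replacer can express it through MemoryOffset + Concat + FullyConnected nodes. A per-frame sketch (illustrative names, not the MO code):

```python
import numpy as np

def tdnn_frame(x, weights, offsets, t):
    # x: [time, dim]. Gather the frames at the given time offsets relative
    # to output frame t, concatenate them, and apply the affine transform.
    spliced = np.concatenate([x[t + off] for off in offsets])
    return weights @ spliced
```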
* Commit.
* Added opset4 version in the class Interpolate.
* Added class ONNXResize11Op to read ONNX Resize with opset version >= 11.
* Added support for Interpolate-4 into transformations InterpolateReshapeWA and InterpolateConcat.
* Added support for Interpolate-4 into transformation InterpolateWithConcat.
* Deleted redundant checks from the transformation UpsampleToResample.
* Reverted last changes.
* Changed ONNX Resize extractor to support for Interpolate-4.
* Added conversion of ONNXResize11Op into Interpolate-4.
* Added support for Interpolate-4 into the transformation InterpolateSequenceToInterpolate.
* Small fix for formatting.
* Written tests for MO version of Interpolate-4 with shape_calculation_mode = sizes.
* Written tests for infer function of Interpolate-4.
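The two shape-calculation modes of Interpolate-4 that the infer function covers can be sketched as follows (a hypothetical helper that ignores pads for simplicity; not the MO implementation):

```python
import numpy as np

def interpolate4_out_shape(in_shape, axes, sizes=None, scales=None,
                           mode="sizes"):
    # Axes not listed in `axes` keep the input dimension. For resized axes,
    # 'sizes' mode copies the requested size directly, while 'scales' mode
    # multiplies the input dimension by the scale and floors the result.
    out = list(in_shape)
    for i, axis in enumerate(axes):
        if mode == "sizes":
            out[axis] = int(sizes[i])
        else:  # 'scales'
            out[axis] = int(np.floor(in_shape[axis] * scales[i]))
    return out
```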
* Now transformations InterpolateWithConcat, InterpolateConcat, InterpolateReshapeWA skip Interpolate-4.
* Used create_op_with_const_inputs in the transformation InterpolateSequenceToInterpolate.
* The transformation ONNXResize11ToInterpolate4 was rewritten using find_and_replace_pattern.
* Now the dictionary infers (dictionary of infer functions of Interpolate) is a class static attribute.
* Deleted unused variable.
* Restored original logic of find_and_replace_pattern method of the class InterpolateReshapeWA.
* Used create_op_with_const_inputs() in the transformation InterpolateSequenceToInterpolate for opset1 case.
* Replaced resize_name by resize.soft_get('name', resize.id).
* Small fixes.
* Added two tests for Interpolate-4 infer function.
* Fixed the transformation ONNXResize11ToInterpolateV4 for the case when ONNXResize11 operation has 3 inputs.
* Added conversion of ONNXResize11 with tf_crop_and_resize_mode to ROIPooling + ONNXResize11.
* Fixed bugs in the transformation ONNXResize11ToInterpolateV4 and in the infer function of the operation ONNXResize11.
* Small changes.
* Renamed transformation that converts ONNXResize11 into ROIPooling + ONNXResize11 and fixed BOM-file.
* Fixed tests for the transformation InterpolateSequenceToInterpolate.
* Small change.
* Now the transformation InterpolateSequenceToInterpolate preserves output layer name.
* Deleted the transformation ONNXResize11ToTFCropAndResize.
* Fix fusing Multiply node with Convolution in case group != 1
* Add transformation test
* Do not fuse if not possible to reshape const
* Update fuse_linear_ops.py
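The group != 1 case can be illustrated as follows: a per-output-channel Multiply constant can only be folded into the convolution weights if it can be reshaped to line up with the output-channel dimension of the grouped weights layout. A NumPy sketch assuming a GOIHW weights layout (illustrative, not fuse_linear_ops.py itself):

```python
import numpy as np

def fuse_multiply_into_conv_weights(weights, scale, group):
    # weights layout assumed GOIHW: [group, out/group, in/group, kH, kW].
    # A per-output-channel `scale` of size `out` must be reshaped to
    # [group, out/group, 1, 1, 1] before fusing; if the sizes do not
    # match, the fusion must be skipped.
    o_per_g = weights.shape[1]
    if scale.size != group * o_per_g:
        return None  # cannot reshape the constant -> do not fuse
    return weights * scale.reshape(group, o_per_g, 1, 1, 1)
```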
* Updated ConcatOptimization transformation to work when one dimension of input to Concat is 0D
* Fixed ConcatOptimization transformation to reconnect input edges to Concat
* Completely re-written ConcatOptimization
* Updated Concat0D optimization transformation
* Fixed order of traversing Concat input ports
* Refactored ConcatOptimization transformation to use `delete_input_port` function
* Delete trailing unconnected ports in ConcatOptimization.py
* Cleaner implementation of ConcatOptimization + unit test
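The core idea of the optimization can be sketched in NumPy (illustrative only, assuming "0D" above means an input whose dimension along the concatenation axis is 0):

```python
import numpy as np

def prune_empty_concat_inputs(inputs, axis):
    # An input whose dimension along `axis` is 0 contributes nothing to
    # the result, so the corresponding Concat input port can be removed
    # from the graph entirely.
    kept = [t for t in inputs if t.shape[axis] != 0]
    return np.concatenate(kept, axis=axis) if kept else inputs[0]
```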
* initial commit
* first reshape-able variant
* right version for reshape
* comment update
* fixes for failed e2e
* set data type to ngraph TensorIterator
* Fix dynamic shapes for cells ops
* clean up
Co-authored-by: yegor.kruglov <ykruglov@nnlvdp-mkaglins.inn.intel.com>
* Implement reshapeable CTCGreedyDecoderPlusSparseToDense transformation and test
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix consts (after code-review #1)
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Add CTCGreedyDecoderTransformation with more generic pattern
Also it adds new middle-replacer for transforming sequence length to a mask
along with tests.
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
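The sequence-length-to-mask conversion that the new middle replacer performs can be sketched in NumPy (illustrative; the [T, N] mask layout is an assumption based on what CTCGreedyDecoder consumes):

```python
import numpy as np

def seq_len_to_mask(seq_len, max_time):
    # [batch] lengths -> [max_time, batch] 0/1 mask: position (t, n) is 1
    # while t is below the n-th sequence length, 0 afterwards.
    t = np.arange(max_time).reshape(max_time, 1)
    return (t < np.array(seq_len).reshape(1, -1)).astype(np.float32)
```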
* Do fixes after review #2
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix after review #3
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix after review #4
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Added HSwish operation
* Added HSwish fusing transformation
* Fixed BOM
* Added unit test for HSwish fusing transformation
* Fixed unit tests for transformations using 'build_graph_with_edge_attrs' function to build the graph
* Added fusion transformation for Swish operation
* Added fusing transformation for Softplus operation
* Added fusion transformation for Mish operation
* Added check for the node name in the unit tests
* Fixed Mish fusion pattern
* Updated Mish fusion transformation. Added unit test
* Updated HSwish fusing transformation
* Updated Swish fusion transformation and tests
* Fixed unit tests
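For reference, the activation definitions these fusion transformations recognize can be written down as NumPy formulas (a sketch of the math, not the pattern-matching code):

```python
import numpy as np

def softplus(x):
    # Softplus(x) = ln(1 + e^x)
    return np.log1p(np.exp(x))

def hswish(x):
    # HSwish(x) = x * ReLU6(x + 3) / 6
    return x * np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0

def swish(x, beta=1.0):
    # Swish(x) = x * sigmoid(beta * x)
    return x / (1.0 + np.exp(-beta * x))

def mish(x):
    # Mish(x) = x * tanh(Softplus(x))
    return x * np.tanh(softplus(x))
```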
* Fixed order of transformation to convert the TF OD API SSD models
* Refactored the sub-graph modification for the TF OD API models related to Squeeze/Reshape after SSD heads