* Implement way to provide keep_output_port attribute to add_opoutput function
* Update tests
* Update comment
* Fake commit to pictures merge problem
* Change default value
* Add type
* Revert "Fake commit to pictures merge problem"
This reverts commit 41850765e0.
* Added handling of debug information in create_node().
* Code refactoring.
* Checks fixed.
* Added comments, added unit test.
* Renamed unit test class.
* Fixed port number in unit test.
* Make order of port names deterministic in the IR
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Make port names follow a deterministic order and adapt tests
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Removed test-generator from all MO requirement files except the dev one
* Moved all MO unit tests files to a separate directory
* Added __init__.py files to the tests directory. Fixed importing paths for some unit tests
* Fixed imports in all unit tests. Moved all unit test related files from the MO code to the dedicated directory
* Renamed directory with unit test utils
* Updated imports in unit tests
* Fixed framework name attribute for onnx, mxnet.
* Fixed framework name attribute for caffe.
* Removed unnecessary attribute setting from add_opoutput()
* Added insertion of Identity nodes for outputs in the mxnet loader.
* Removed unnecessary reformat.
* Removed unnecessary reformat.
* Added check for empty name.
* Used nodes indices instead of node names in loader.
* Code refactoring, small bug fixed.
* added condition to disconnect method
* add unittest, rewrite the fix
* revert the second implementation, update test
Co-authored-by: yegor.kruglov <ykruglov@nnlvdp-mkaglins.inn.intel.com>
* fix StridedSlice
* successfully converted
* successfully ran moved infer and normalizer unit-tests
* successfully rewritten StridedSlice infer unittests
* int64 array
* Successfully converted crash-when-loading, xj_feauture and toy nets (cherry-picked maxpoolV4 and tf_broadcast_ext)
* successfully moved PermuteAttrs to general mechanism
* successfully converted xj_feauture and crash-when-loading with the new rewritten StridedSlice infer
* fixed get_shape_from_slice and moved to common utils
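The shape arithmetic behind a helper like `get_shape_from_slice` can be sketched in a few lines (a hypothetical stand-in for illustration, not the actual MO utility; it ignores the new_axis/shrink_axis masks):

```python
def shape_from_slices(input_shape, slices):
    """Compute the output shape produced by applying `slices` (one
    slice object per axis) to a tensor of shape `input_shape`.
    slice.indices() normalizes negative bounds and steps; the output
    extent per axis is then just the length of the resulting range."""
    out = []
    for dim, sl in zip(input_shape, slices):
        start, stop, step = sl.indices(dim)
        out.append(len(range(start, stop, step)))
    return out
```

For example, a full reverse along axis 0 keeps the extent 5, while `0:6:2` along axis 1 yields 3 elements.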
* fixed extending masks and some other issues
* some refactoring
* fixed extending masks in extractor, fixed license year and some other code cleanup
* corrected a couple of unittests
* fix permute for rank-5 slice and rank-4 inputs
* WIP
* Added comments
* fixed StridedSlice in ProposalMutation.py
* rechecked shape_infer unittests, added some new cases
* added shape_infer unit-tests after StridedSliceNormalizer pass and Permute unit-tests
* corrected unittests
* Applied review comments
* general permutations for inputs implemented, corrected ellipsis unrolling when shrink_axis is at the beginning, some other corrections
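The ellipsis unrolling mentioned above can be illustrated with a small stand-alone function (a hypothetical sketch; the real normalizer also has to account for new_axis/shrink_axis interaction):

```python
def unroll_ellipsis(slices, rank):
    """Replace a single Ellipsis in a slice tuple with enough full
    slices (slice(None)) so that len(result) == rank, i.e. every
    axis of the input gets an explicit slice specification."""
    if Ellipsis not in slices:
        return list(slices)
    idx = slices.index(Ellipsis)
    n_missing = rank - (len(slices) - 1)  # axes the Ellipsis stands for
    return list(slices[:idx]) + [slice(None)] * n_missing + list(slices[idx + 1:])
```

So `x[0:1, ...]` on a rank-4 tensor expands to `x[0:1, :, :, :]`.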
* removed code duplication in infer and normalizer, moved 'slices' attr normalizing to StridedSliceNormalizer.py
* removed some code duplication and other minor improvements
* Added tests
* minor corrections
* wider range of unittests added (froze the number)
* review comments applied
* enabled skipped unit-test
* comment corrections
* applied review comments: changed op -> type, added some asserts, corrected comments and other minor corrections
* sorted inputs, updated Supported_Frameworks_Layers.md, some minor changes
* Added result rename operation
* Optimize imports
* Added ResultRename to package_BOM
* ResultRename moved to the end of back phase, code refactoring
* Revert incorrect changes
* Optimize imports
* Added comments and optimized imports.
* [MO] Implement TensorFlow 2 While support in MO
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Add extractors for both While and StatelessWhile and do minor changes
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Improve update_body_graph function and manage graph names properly
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix a map for original name of parameters from body and cond
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Implement draft version of support of TF2 Keras RNN
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Implement Keras LSTM and GRU support in MO
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Improve code for Keras RNN support
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Finalize implementation of TF2 Keras RNN support in MO
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Apply the first part of the comments after review #1
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Avoid use of explicit values of port indices in the transformation
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Finalize code after the first-round review
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Apply comments after the second-round review
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Generate TensorIterator without back edges from TensorFlow models
* Added a check in the MarkSubgraphsWithCorrectLayout to not fail when port is not connected
* Updated the 'protobuf2nx' to consume the graph protobuf message
* Cleanup TI from the IRv7 specific code
* Do not run some front transformations recursively
* Draft support for the ONNX Loop operation when 'cond' = True
* LoopToTI transformation changes
* Added draft of Loop operation and parser for ONNX Loop operation body
* Updated Loop body parser + added shape and type infer for the Loop operation
* Fixes for ONNX Loop operation parser
* Moved Loop parsing to Loop op extractor. Added generation of external edges for the Loop body ops
* Added support for ThresholdedRelu using decomposition
* Added support for Min ONNX operation
* Draft fixes for port_map generation for the Loop
* Rename transformation file and fix BOM
* Fixed shape inference for Loop scan outputs (axis is not None)
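The scan-output shape rule can be sketched as follows (a hypothetical helper for illustration, assuming each iteration contributes a slice of the body output along the given axis):

```python
def scan_output_shape(body_out_shape, iterations, axis):
    """Shape of a Loop/TensorIterator scan output: per-iteration
    results of shape `body_out_shape` are concatenated along `axis`,
    so that dimension is multiplied by the iteration count."""
    shape = list(body_out_shape)
    shape[axis] = shape[axis] * iterations
    return shape
```

E.g. a body producing `[1, 3, 4]` per iteration over 10 iterations with `axis=0` yields a `[10, 3, 4]` scan output.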
* Fixed shape inference for ONNX Loop operation
* Refactor checks in the TensorIteratorMerge transformation
* Code refactoring. Enabled commented transformations
* Documentation update for ONNX Loop, ThresholdedRelu and Min
* Fixed typo in the Loop front transformation where execution condition input is connected. Other refactorings
* Fixed a bug in the Loop extractor
* Added printing 'internal_layer_id' attribute in the graph dumper
* Updated calculation of iterations number for the Loop
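The iteration rule being computed here follows the ONNX Loop semantics: the body runs while the execution condition holds and the maximum trip count (if provided) is not exhausted. A reference-style sketch (illustrative only, not the Model Optimizer transformation):

```python
def run_onnx_loop(body, max_trip_count=None, cond=True, state=()):
    """Execute an ONNX-Loop-like iteration: `body(i, cond, *state)`
    returns (new_cond, *new_state); iteration stops when the condition
    becomes false or the trip count is exhausted, whichever is first."""
    i = 0
    while (max_trip_count is None or i < max_trip_count) and cond:
        cond, *state = body(i, cond, *state)
        i += 1
    return tuple(state)
```

With `cond` fixed to True the loop degenerates to a plain for-loop over the trip count, which is the case the earlier draft transformation supported.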
* Added missing code
* Fixed output port shapes and types generation for Loop operation
* Update function names and variable names in the Loop operation
* Fixed type inference for iteration count input
* Added removal of input/output ports of the Loop if they are not used
* Fixed renumbering Loop operations input/output ports to keep mandatory
* Fixed ThresholdedReluDecomposition transformation
* Updated MO IR Reader to know about Loop operation. But it is still not supported by the MO IR Reader
* Added unit test for Slice op shape infer (reverse the sequence of elements)
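The case this test covers, reversing a sequence with a Slice, comes down to a negative step, where shape inference must report the same extent as the input axis:

```python
import numpy as np

# A Slice with starts=[-1], ends past the first element, and steps=[-1]
# mirrors Python's x[::-1]: the elements are reversed, the shape is kept.
x = np.array([1, 2, 3, 4, 5])
reversed_x = x[-1::-1]  # equivalent to x[::-1]
```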
* Reverted changes in the ONNX loader function call to protobuf2nx
* Enable Reshape0DToSqueeze transformation recursively
* Refactored Loop operation support implementation
* Changed ThresholdedReluDecomposition to generate Const with shape [1] instead of scalar
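ONNX defines ThresholdedRelu(x) = x when x > alpha, else 0; the decomposition expresses it as an elementwise Greater followed by Mul. A minimal numpy sketch of the resulting computation (not the MO graph code):

```python
import numpy as np

def thresholded_relu_decomposed(x, alpha=1.0):
    """ThresholdedRelu via the Greater + Mul pair the decomposition
    generates. alpha is stored with shape [1] rather than as a 0-D
    scalar so it behaves as a regular broadcastable constant input."""
    alpha_const = np.full([1], alpha, dtype=x.dtype)  # shape [1], not scalar
    mask = (x > alpha_const).astype(x.dtype)
    return x * mask
```

Values at or below alpha are zeroed; values above it pass through unchanged.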
* Code style and wording fixes
* Restored accidentally removed 'return' statement in the TI shape infer function
* Fixed comments
* Fixed comment
Co-authored-by: Evgeny Lazarev <elazarev.nnov@gmail.com>
* Updated ConcatOptimization transformation to work when one dimension of input to Concat is 0D
* Fixed ConcatOptimization transformation to reconnect input edges to Concat
* Completely re-written ConcatOptimization
* Updated Concat0D optimization transformation
* Fixed order of traversing Concat input ports
* Refactored ConcatOptimization transformation to use `delete_input_port` function
* Delete trailing unconnected ports in the ConcatOptimization.py
* Cleaner implementation of ConcatOptimization + unit test
* Remove unnecessary ir_version checks in the MO
* Cleaned up 'backend_attrs_v2' function
* Small clean up from the 'TFCustomSubgraphCall'
* Clean up the MO extractor attributes mapping
* Renamed PreluOp to PReLU
* Removed back phase transformations related to IRv7
* Fixed setting value for the input port using the 'set_value' method
* Removed front and middle phase transformations related to IRv7
* Cleanup the rest of the Model Optimizer transformations from IRv7 specific transformations
* Final cleanup of the deprecated IR v7 related code
* Removed 'blobs_as_input' usage in the Model Optimizer.
* Removed function '_fuse_add' from the Model Optimizer since it is not used anymore.
* Removed 'keep_in_IR' node attribute for FakeQuantize ops in the MO
* Disabled failing gpu_engine.user_context test