* fix expand_onnx_functions
* refactor + unit test
* fixed function in function case
* fixed expand_onnx_functions
* fixed default value of shape in ValueInfo
* enable xpass model
* changed MergeFrom to Swap
* added xfail with missing test data
* added more unit tests
* styles applied
* used std::rotate, review remarks
* removed debug code
* addressed remarks from offline discussion
* fix checking input/output names on Windows
* names comparator refactor
* replace regex with custom comparison
* review remarks
* added RemoveConcatZeroDimInput transformation
* added RemoveLoopDanglingParameters transformation
* changed place of passes during replacement
* missing comment
* code refactor + unit tests
* remove unused headers
* used std::any_of in RemoveConcatZeroDimInput
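As a side note, a minimal sketch of the kind of check this refers to, assuming the pass looks for a Concat input whose static shape contains a zero dimension (the helper name and exact headers are illustrative, not the actual pass code):

```cpp
#include <algorithm>
#include <memory>
#include <openvino/core/node.hpp>
#include <openvino/op/concat.hpp>

// Illustrative only: returns true if any input of the Concat has a
// statically known dimension equal to 0, so that input can be dropped.
bool has_zero_dim_input(const std::shared_ptr<ov::op::v0::Concat>& concat) {
    const auto& inputs = concat->input_values();
    return std::any_of(inputs.begin(), inputs.end(), [](const ov::Output<ov::Node>& in) {
        const auto& shape = in.get_partial_shape();
        return std::any_of(shape.begin(), shape.end(), [](const ov::Dimension& dim) {
            return dim.is_static() && dim.get_length() == 0;
        });
    });
}
```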
* changed headers and namespaces to new ov convention
* used std::any_of in RemoveConcatZeroDimInput
* RemoveLoopDanglingParameters refactored
* changed names to RemoveMultiSubGraphOpDanglingParams
* handling multi-body cases
* Handling If case during RemoveMultiSubGraphOpDanglingParams
* comments and names refactor
* More tests for If and TensorIterator
* handle removing dangling param from one body and update all descriptors
* fixed test
* revert if change
* moved RemoveConcatZeroDimInput and RemoveMultiSubGraphOpDanglingParams to NopElimination
* return false if node is not replaced
* added validate_nodes_and_infer_types
* Revert "moved RemoveConcatZeroDimInput and RemoveMultiSubGraphOpDanglingParams to NopElimantion" + remarks
* review remarks
* review remarks
* fixed subgraph rtti
* adjust passes to new structure
Covered case for 'trivial convert' where no permutation is needed
It is needed for the Model Optimizer logic which guesses the model's layout, like "?c??"
* Removed 'inline' Preprocessing API
Even though this API provided a way to specify all pre/post-processing in one line, it was considered inconvenient
With the 'getters' API the preprocessing code looks clearer to the user, so the old 'inline' API is removed
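For context, a minimal sketch of what the getter-style preprocessing code looks like, using today's ov::preprocess::PrePostProcessor names; at the time of this change the surrounding types (e.g. ov::Model vs. ov::Function) may have differed:

```cpp
#include <openvino/core/preprocess/pre_post_process.hpp>

void apply_preprocessing(std::shared_ptr<ov::Model>& model) {
    ov::preprocess::PrePostProcessor ppp(model);
    // Describe the user's tensor via 'getters' instead of one long inline chain.
    ppp.input().tensor()
        .set_element_type(ov::element::u8)
        .set_layout("NHWC");
    // Declare conversion steps applied before inference.
    ppp.input().preprocess()
        .convert_element_type(ov::element::f32)
        .convert_layout("NCHW");
    ppp.output().tensor().set_element_type(ov::element::f32);
    model = ppp.build();
}
```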
* Fix pyopenvino build issues
* Update after merged PR#8717
* Init implementation
* Switched to shared class
* Refactoring memory commit()
* Added unit tests
* Fixed output order
* Fixed input order
* Fixed split case
* fixed compilation issue in debug mode
* Enabled compact mode by default
* Fixed default order for inputs and outputs
* Changed unit test
* Enabled compact mode by default
* reverted compact_mode flag order
* add subgraph instead of constant with fixed shape to allow the model to have an undefined batch
* updated transformation (not checked yet)
* changed ReverseV2ToReverseSequence to support dynamic shapes/reshape;
added transformation to reverse_tensor_iterator to support the new subgraph produced by ReverseV2ToReverseSequence
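A conceptual sketch of the idea behind replacing a fixed-shape constant with a subgraph: seq_lengths for ReverseSequence is built from ShapeOf of the input rather than from hard-coded dimensions. The actual ReverseV2ToReverseSequence transformation lives in Model Optimizer; this C++ sketch and its helper name only illustrate the ShapeOf-based subgraph, not the real code:

```cpp
#include <memory>
#include <openvino/op/broadcast.hpp>
#include <openvino/op/constant.hpp>
#include <openvino/op/gather.hpp>
#include <openvino/op/reverse_sequence.hpp>
#include <openvino/op/shape_of.hpp>

// Illustrative: build ReverseSequence whose seq_lengths follow the input's
// dynamic shape, so the model keeps an undefined batch dimension.
std::shared_ptr<ov::Node> make_dynamic_reverse(const ov::Output<ov::Node>& data,
                                               int64_t batch_axis,
                                               int64_t seq_axis) {
    auto shape = std::make_shared<ov::op::v3::ShapeOf>(data);
    auto axis0 = ov::op::v0::Constant::create(ov::element::i64, {}, {0});
    // Pick the batch dimension and the sequence dimension out of the shape.
    auto batch = std::make_shared<ov::op::v8::Gather>(
        shape, ov::op::v0::Constant::create(ov::element::i64, {1}, {batch_axis}), axis0);
    auto seq_len = std::make_shared<ov::op::v8::Gather>(
        shape, ov::op::v0::Constant::create(ov::element::i64, {1}, {seq_axis}), axis0);
    // seq_lengths = [seq_len] broadcast to shape [batch].
    auto seq_lengths = std::make_shared<ov::op::v3::Broadcast>(seq_len, batch);
    return std::make_shared<ov::op::v0::ReverseSequence>(data, seq_lengths, batch_axis, seq_axis);
}
```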
* remove changes that should not be on this branch
* added tests;
fixed old transformation
* added deletion of ReverseSequence ops to avoid running the transformation twice
* fixed pattern check for the case with a dynamic value on the ReverseSequence input
* Revert "fixed pattern check for the case with a dynamic value on the ReverseSequence input"
This reverts commit 0c04164e
* Revert "added delete of reversesequences to avoid run of transformation twice"
This reverts commit fcb7de9c
* reverted changes in reverse_tensor_iterator for the Squeeze case;
updated reverse_tensor_iterator with a ShapeOf subgraph;
added permutations for attributes to pass the layer test
* minor fix for dynamic shape
* updated test;
fixed backward compatibility in reverse_tensor_iterator transformation
* review comments fixed:
added comments;
refactoring done;
fixed framework name saving for rank = 1
* minor review fixes
* small fix
* [GPU] fix Constant handling when it has multiple users and one of them is a bprop conv
When a constant is connected to ConvolutionBackpropData or GroupConvolutionBackpropData weights,
we need to swap the 'O' and 'I' dimensions. That can be problematic if the same constant
is also connected to other nodes, since after the swap the dimensions may no longer match
what those other nodes expect.
To handle that, we can create a copy of the constant, replace the backprop convolution weights
with that copy, and create an additional (to the original constant) cldnn::data primitive with swapped dimensions.
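At graph level the approach can be pictured roughly as below. This is a hedged sketch of the idea only; the real fix works on cldnn primitives inside the GPU plugin's program builder, and `is_backprop_conv` / `isolate_bprop_weights` are hypothetical helpers, not functions from the codebase:

```cpp
#include <memory>
#include <openvino/core/node.hpp>
#include <openvino/core/type.hpp>
#include <openvino/op/constant.hpp>
#include <openvino/op/convolution.hpp>
#include <openvino/op/group_conv.hpp>

// Hypothetical helper: true for ConvolutionBackpropData / GroupConvolutionBackpropData.
static bool is_backprop_conv(const ov::Node* node) {
    return ov::is_type<ov::op::v1::ConvolutionBackpropData>(node) ||
           ov::is_type<ov::op::v1::GroupConvolutionBackpropData>(node);
}

// Illustrative only: if a constant feeds backprop-conv weights and also other
// nodes, give the backprop conv its own copy so the O/I swap does not affect
// the other users of the original constant.
void isolate_bprop_weights(const std::shared_ptr<ov::op::v0::Constant>& constant) {
    const auto targets = constant->output(0).get_target_inputs();
    if (targets.size() < 2)
        return;  // single user, nothing to isolate
    for (auto target : targets) {
        if (is_backprop_conv(target.get_node())) {
            auto copy = constant->clone_with_new_inputs({});
            target.replace_source_output(copy->output(0));
        }
    }
}
```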
* fix windows build
* address review comments
* Cldnn output memory size in the GatherND functional test is aligned with the TensorDesc of the output blob
* Add param for rank of input data
* Update unittests to add rank of input data
* Update gpu fusing tests
* ngraph and inference-engine parts
* add priorbox_8 python api
* remove 'PriorBoxAttrs' and 'PriorBox' from outside of opset namespace
* add common nGraph transformation 'ConvertPriorBox8To0'
* remove redundant alias of PriorBox::Attributes
* use new Tensor api for evaluate method
* change v0 operation back to the former API, pass the Attributes structure to the reference implementation
* use new Tensor api for constant_fold
* add support for dynamic shapes in constant_fold with the new Tensor API
* fix Node 'create temp tensors' issue when shape == 0
* revert to 'HostTensor' api for PriorBox8
* Apply suggestions from code review; replaced the 'backend INTERPRETER' test case with a 'template_plugin reference' test case
* transformation part: apply suggestions from code review
* python init file updated for opset8
* keep backward compatibility to fix CI issue
* rebase to new structure of OpenVINO repo
* revert mistaken changes to 'thirdparty/onednn_gpu'
* Moved openvino to src
* Moved ngraph and frontends to src
* Fixed cmake generation
* Moved inference_engine libs to src
* Moved C API to src
* Fixed CMake generation
* Moved readers to tests, snippets and preprocessing to common
* Fixed CMake
* Moved transformations and lp_transformations
* Fixed transformations cmake
* Fixed build
* Fixed unit-tests and ci paths
* Fixed docs
* Fixed C API build
* Try to fix static build
* More clear order
* Renamed inference_engine_legacy_api to legacy
* Fixed some cmake scripts
* Fixed path to legacy
* Fixed Myriad plugin
* Fixed v7 reader
* Fixed plugin.hpp
* Fixed developer config
* Fixed ie_parallel