* Added support for tuples in `input`; removed type syntax from the OVC tool (sketch below).
* Removed type syntax tests.
* Apply suggestions from code review
* Method annotation corrected.
* Type annotation corrected.
---------
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
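A minimal sketch of the tuple form of `input` (the model file and input name are hypothetical, and the removed string type syntax is shown only for contrast):

```python
from openvino.tools.ovc import convert_model

# Before: type syntax embedded in the input string, e.g. "x[1,3,224,224]{f32}" (removed).
# After: shape passed as a (name, shape) tuple.
ov_model = convert_model("model.onnx", input=("x", [1, 3, 224, 224]))
```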
* Removed 'example_output' from ovc and ovc.convert_model; the existing 'output' parameter is used for this purpose instead (migration sketch below).
* Update tools/ovc/openvino/tools/ovc/convert.py
* Update tools/ovc/openvino/tools/ovc/convert_impl.py
* Reverted MO parts not affected by the removal of example_output
* Fixed PDPD convert_model tests
---------
Co-authored-by: Xiuchuan Zhai <xiuchuan.zhai@intel.com>
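A hedged migration sketch for the removed parameter (model path and output name are hypothetical):

```python
from openvino.tools.ovc import convert_model

# Before (no longer supported):
#   ov_model = convert_model("model.pdmodel", example_output=example_tensor)
# After: name the desired model outputs explicitly via 'output'.
ov_model = convert_model("model.pdmodel", output="save_infer_model/scale_0")
```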
* Added shape and type inference for Result nodes in MOC transformations.
* Clang format.
* Added validate_nodes_and_infer_types() pass at the end of MOC pipeline.
* Clang format.
* Added test.
* Clang format.
* Fixed output_model logic.
* Removed InputCutInfo, disabled input cut in ovc.
* Disabled output cut; added tests for setting shapes or types for only a subset of inputs (sketch below).
* Restored support for numpy types.
* Separated MO and OVC python API tests.
* Small corrections.
* Added output dir test, exceptions test.
* Tests fixed.
* Corrected extension param description.
* Corrected input description, minor code corrections.
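A sketch of the partial input specification these tests cover, assuming this `convert_model` behavior (model and input names are hypothetical):

```python
import numpy as np
from openvino.tools.ovc import convert_model

# Only input 'x' is overridden; other inputs keep their original shapes
# and types, since input cut is no longer performed.
ov_model = convert_model("two_input_model.onnx", input=[("x", [1, 10], np.float32)])
```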
* [PT FE] Use weight share switch in frontend (usage sketch below)
* Return static for function
* Update src/bindings/python/src/openvino/frontend/pytorch/ts_decoder.py
* Fix issue with quantized constants
* Add tests for shared
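A usage sketch, assuming the weight-share switch is the `share_weights` flag of `openvino.convert_model`:

```python
import torch
import openvino as ov

model = torch.nn.Linear(4, 4).eval()
# share_weights=True lets converted constants reuse the original torch
# tensors' memory instead of copying it.
ov_model = ov.convert_model(model, example_input=torch.randn(1, 4), share_weights=True)
```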
* Change `VPUX`/`VPU` occurrences to `NPU`
* Switch `HARDWARE_AWARE_IGNORED_PATTERNS` VPU to NPU
* Rename `MYRIAD plugin`
* Rename vpu_patterns to npu_patterns in tools/pot
* Rename vpu.json to npu.json in tools/pot
* Rename restrict_for_vpu to restrict_for_npu in tools/pot
* Change keembayOptimalBatchNum to npuOptimalBatchNum
---------
Co-authored-by: Dan <mircea-aurelian.dan@intel.com>
* [TF FE] Support MaxPoolWithArgmax operation (conversion sketch below)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Add ticket number for TS crash
* Correct error message
* Skip crashing tests
* Set additional tensor name for MaxPool
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
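A conversion sketch for the newly supported op (the shapes and the tf.function wrapper are illustrative):

```python
import tensorflow as tf
import openvino as ov

@tf.function(input_signature=[tf.TensorSpec([1, 4, 4, 1], tf.float32)])
def pool(x):
    # MaxPoolWithArgmax returns both the pooled values and the argmax indices.
    return tf.nn.max_pool_with_argmax(x, ksize=2, strides=2, padding="SAME")

ov_model = ov.convert_model(pool)
```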
* [MO] compress_to_fp16=True by default (2nd attempt; usage sketch below)
* Fixed unit tests
* Second round of fixing unit tests
* set compress_to_fp16 default to True in ovc/cli_parser.py
* use save_model in mo_python_api_tests
* enforce compress_to_fp16=False in test_zero_copy
* Selectively compress depending on the path the user has chosen to generate the IR
* corrected doc
* allow compress_to_fp16=False/True for ovc
* Fixed failing docs and unit tests
* Use save_model in the OVC CLI tool
* Reverted serialize and compress_model, but into main instead of moc_emit_ir
* Covered more argument combinations for the CLI tool and convert_model
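A sketch of the resulting behavior: compression now happens when the IR is serialized, with `compress_to_fp16=True` as the default:

```python
import openvino as ov

ov_model = ov.convert_model("model.onnx")
# Default is compress_to_fp16=True; pass False to keep FP32 weights in the IR.
ov.save_model(ov_model, "model.xml", compress_to_fp16=False)
```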
* Added TorchScript backend (torch.compile usage sketch at the end of this section)
* First commit for backend with Torch FX Decoder
* Merging changes from Torch FX branch
* Torch FX initial fixes (Temporary)
* Fixed type/shape issues in Torch FX decoder
* Added translation for built-in getitem
* MaxPool update & Output shape fix (Torch FX)
* Torch FX graph outputs fix
* Torch FX support for sigmoid and slu_
* Torch FX graph module caching
* Torch FX partitioner cache removed
* Torch FX initial getitem replacer added
* Index check for torch fx getitem replacer
* Debug print removed from partitioner
* Added environment variables for pytorch tracing mode and openvino device
* FX translation fix for getitem & getitem replacer removed
* Added checks for PyTorch tracing mode environment variable
* Adding compile mode for fallback
* Added more ops for resnet18
* Added a check for environment variable
* Generalized addmm to work with TorchScript and TorchFX
* Added the missing batch_norm.default translation
* fx_backend: include get_attr ops in the partitions
* Added TODO note to improve get_attr algorithm
* Created a function for adding get_attr nodes
* fx_backend: added aten.mul.Tensor, re-enabled aten.empty.memory_format
* fx_backend: Additional op support/improvement for Inception V3
* Added comment for the 64-bit to 32-bit max-int conversion fix
* fx_backend: Update for avg_poolnd to support 3 inputs
* Fixed error in decoder.py
* TorchFX caching fix
* Torch backend, op support for Stable Diffusion & BERT
* Arranged ops in order and added torch tensor mapping
* Added support for more ops for SuperGlue
* TorchFX: Initial permanent fallback
* TorchFX: New ops for improved TorchVision support
* TorchFX backend optimizations for partitioning and temporary fallback
* Working operator updates for SuperGlue
* Updates to operators for SuperGlue
* Removed max.dim and stack
* Cleanup
* Cleanup
* Fixed a couple of syntax issues
* Fixed a couple of syntax issues
* Added missing method to TorchFX Decoder
* Added missing method to TorchFX Decoder
* Removed redundant code for transpose
* TorchFX: Initial StableDiffusion support
* PyTorch decoder ovtype to ctype fix for int64
* Added ops for distilbert
* Removed a few unnecessary include statements
* Separated TorchFX and TorchScript decoders
* Modified import statements to reflect two decoders
* f64 fix for TorchFX
* Import fix for PyTorch backend modules
* TorchFX serialize graph for debugging (Temporary)
* Serialize and load back feature enabled for TorchFX
* Temporary optimization to remove Broadcast
* Temporary SoftmaxReshapeElimination pass is added
* TorchFX custom model cache directory
* PyTorch bitwise translation, conversion checks enabled
* Naming fix in make_list_construct
* TorchFX: Added comments to Softmax and Slice translations
* translate_chunk temporarily removed for TS backend
* Fixed linter issues
* Addressed clang formatting issues
* Fixed a few more clang-format and linter issues
* Fixed tests to use ts_decoder
* Fixed naming convention issues
* Added missing import
* Added inlined_inputs to TorchScriptDecoder
* Added tests for torch fx backend
* Removed magic numbers in PyTorch decoder utils
* TorchFX decoder data type fix
* Added cast from size_t to int
* TorchFX output handling code cleanup
* TorchFX: Use detached input tensor
* Added missing cast from size_t to int
* Added static cast in group_norm
* Fixed casting issue in split
---------
Co-authored-by: ynimmaga <yamini.nimmagadda@intel.com>
Co-authored-by: Cavus Mustafa <mustafa.cavus@intel.com>
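A minimal sketch of the torch.compile path this backend enables, assuming the "openvino" backend is registered by importing `openvino.torch`:

```python
import torch
import openvino.torch  # noqa: F401 -- assumed to register the "openvino" backend

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Sigmoid()).eval()
compiled = torch.compile(model, backend="openvino")
with torch.no_grad():
    out = compiled(torch.randn(1, 8))
```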