* [TF FE] Use regular Convolution in case dynamic input channels
This solution is aligned with the legacy frontend, but it has limitations.
It is a temporary solution until the core obtains a ShapeOf evaluator.
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Remove unused variable from the test
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Fix unit-test
* Update mo unit-test
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* WIP Postpone fp16 in CompressFloatConstantsImpl
* Apply suggestions from code review
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
* WIP: Compression to FP16 in Serialize
* Prepared for efficient fp32 to fp16 conversion
* Update src/core/reference/src/runtime/reference/convert.cpp
* Called the slow reference implementations in the places where the optimized versions are supposed to be implemented
* Code style
* Fixed 0 values in the fast f64 to f16 compression
* Optimized convert_from_f32_to_f16_with_clamp
* Added optimized f32->f16 instance of change_constant_precision
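The clamping idea behind `convert_from_f32_to_f16_with_clamp` and `count_out_of_f16_range` can be sketched in plain Python (a hedged illustration only; the actual implementation is optimized C++ in convert.cpp, and the function names below are hypothetical stand-ins). Values outside the finite float16 range must be clamped before conversion, otherwise the narrowing overflows:

```python
import struct

F16_MAX = 65504.0  # largest finite float16 value

def f32_to_f16_with_clamp(value: float) -> bytes:
    # Clamp into the finite float16 range first; struct.pack('<e', x)
    # raises OverflowError for finite values outside that range.
    clamped = max(-F16_MAX, min(F16_MAX, value))
    return struct.pack('<e', clamped)

def count_out_of_f16_range(values) -> int:
    # Counterpart of the counting pass mentioned above:
    # how many values would actually be clamped.
    return sum(1 for v in values if abs(v) > F16_MAX)

print(struct.unpack('<e', f32_to_f16_with_clamp(1e6))[0])  # 65504.0
print(count_out_of_f16_range([1.0, -1e9, 70000.0]))        # 2
```

Counting the out-of-range values first lets a transformation decide whether fp16 compression would lose too much information before committing to it.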
* compression transformation Python test
* use tmp dir, minor corrections
* Update src/bindings/python/tests/test_transformations/test_compression.py
* Update src/bindings/python/tests/test_transformations/test_compression.py
* style fix
* define rt_info for postponed_fp16_compression
* remove redundant class
* fix temp dir for Win in test_compression.py
* update definitions in convert.hpp
* Update implementation in convert.cpp
* Update serialize.cpp
* Update compress_float_constants.cpp
* added macros for ARM/non_x86 in convert.cpp
* fix macros in convert.cpp
* change fixme placement in serialize.cpp
* style_fix
* Update src/core/reference/src/runtime/reference/convert.cpp
* style_fix
* Optimized count_out_of_f16_range
* Code style
* Revert unused
* Update src/core/src/pass/serialize.cpp
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
* Update src/core/reference/src/runtime/reference/convert.cpp
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
* use optimized convert_from_f32_to_f16_with_clamp for non postponed
* minor corrections
* Update src/common/transformations/src/transformations/common_optimizations/compress_float_constants.cpp
* Update compress_float_constants.cpp
* Switched mo and ovc to save_model instead of serialize to leverage performance improvements in fp32->fp16
* Applied minor code improvements to address review feedback
* Minor changes in code
* Update tools/ovc/openvino/tools/ovc/main.py
* Apply suggestions from code review
* Fixed a failing test for the case when both the usual XML compression and fp16 compression are applied simultaneously (disabled for now)
* Added description for CompressFloatConstantImpl postponed parameter
* Description of postponed parameter for CompressFloatConstants
* Reverted switching to save_model in mo as the compression can be applied not only via CLI and old code should be kept for Python path (not applicable for ovc)
* Removed remaining committed test artefacts and reverted remaining changes in mo
---------
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
Co-authored-by: dmitrygo <dmitry.gorokhov@intel.com>
Co-authored-by: Vladimir Paramuzov <vladimir.paramuzov@intel.com>
Co-authored-by: Pavel Esir <pavel.esir@intel.com>
Co-authored-by: Pavel Esir <pavel.esir@gmail.com>
* Added support of tuple in input, removed type syntax from OVC tool.
* Removed type syntax tests.
* Apply suggestions from code review
* Method annotation corrected.
* Type annotation corrected.
---------
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
* Added check that nncf was imported.
* Added check that nncf was imported in MO.
* Added check that nncf was imported in MO.
* Apply suggestions from code review
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
* Removed not needed import.
* Pylint fix.
---------
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
* Removed 'example_output' from ovc and ovc.convert_model, used output for this purpose
* Update tools/ovc/openvino/tools/ovc/convert.py
* Update tools/ovc/openvino/tools/ovc/convert_impl.py
* Reverted mo parts not affected by the removal of example_output
* fix PDPD convert_model tests
---------
Co-authored-by: Xiuchuan Zhai <xiuchuan.zhai@intel.com>
* Fixed output_model logic.
* Removed InputCutInfo, disabled input cut in ovc.
* Disabled output cut, added tests for setting shapes or types for only a subset of inputs.
* Returned support of numpy type.
* Separated MO and OVC python API tests.
* Small corrections.
* Added output dir test, exceptions test.
* Tests fixed.
* Corrected extension param description.
* Corrected input description, minor code corrections.
* [PT FE] Use weight share switch in frontend
* Return static for function
* Update src/bindings/python/src/openvino/frontend/pytorch/ts_decoder.py
* Fix issue with quantized constants
* Add tests for shared
* Change `VPUX`/`VPU` occurrences to `NPU`
* Switch `HARDWARE_AWARE_IGNORED_PATTERNS` VPU to NPU
* Rename `MYRIAD plugin`
* Rename vpu_patterns to npu_patterns in tools/pot
* Rename vpu.json to npu.json in tools/pot
* Rename restrict_for_vpu to restrict_for_npu in tools/pot
* Change keembayOptimalBatchNum to npuOptimalBatchNum
---------
Co-authored-by: Dan <mircea-aurelian.dan@intel.com>
* Remove inits, update main one
* Fix stacklevel
* Testing wrong solution
* Testing test test
* Fix test test test test
* mo modules mo problems
* Xfail test that checks stdout/err?
* not so correct solution to circular imports
* Fix or not to fix
* CMake magic, co-authors: my best team
* Fix package imports
* Fix tools inits
* Fix ovc tf
* Fix Q000
* Fix F401
* Fix linters
* Add save_model
* Remove LayoutMap
* Move test_utils to 'internal modules'
* First testing
* Missing Type
* Expand main namespace
* Change some more tests
* Add OVAny to namespace
* Add Constant and Parameter to namespace
* More tests changes
* Fix inits
* Add layout_helpers to main namespace
* Revert CMake and linux.yml with ovc
* Update main inits
* Remove MO from tools inits
* changes to init files
* Fix tests
---------
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
* [MO] compress_to_fp16=True by default (2nd attempt)
* fix unit-tests
* second round of fixing unit-tests
* set compress_to_fp16 default to True in ovc/cli_parser.py
* use save_model in mo_python_api_tests
* enforce compress_to_fp16=False in test_zero_copy
* selectively compress depending on the path user has chosen to generate IR
* corrected doc
* allow compress_to_fp16=False/True for ovc
* Fix failing doc and unit-tests
* use save_model in ovc cli tool
* Reverted back to serialize and compress_model, but in main instead of moc_emit_ir
* cover more argument combinations for cli tool and convert_model
* Added Torchscript Backend
* First commit for backend with Torch FX Decoder
* Merging changes from Torch FX branch
* Torch FX initial fixes (Temporary)
* Fixed type/shape issues in Torch FX decoder
* Added translation for built-in getitem
* MaxPool update & Output shape fix (Torch FX)
* Torch FX graph outputs fix
* Torch FX support for sigmoid and silu_
* Torch FX graph module caching
* Torch FX partitioner cache removed
* Torch FX initial getitem replacer added
* Index check for torch fx getitem replacer
* Debug print removed from partitioner
* Added environment variables for pytorch tracing mode and openvino device
* FX translation fix for getitem & getitem replacer removed
* Added checks for PyTorch tracing mode environment variable
* Adding compile mode for fallback
* Added more ops for resnet18
* Added a check for environment variable
* Generalized addmm to work with torchscript and torchfx
* Added the missing batch_norm.default translation
* fx_backend: include get_attr ops to the partitions
* Added TODO note to improve the get_attr algorithm
* created function for adding get_attr nodes
* fx_backend: added aten.mul.Tensor, re-enabled aten.empty.memory_format
* fx_backend: Additional op support/improvement for Inception V3
* Added comment for the 64-bit to 32-bit max int conversion fix
* fx_backend: Update for avg_poolnd to support 3 inputs
* Fixed error in decoder.py
* TorchFX caching fix
* Torch backend, op support for Stable Diff. & BERT
* Arranged ops in order and added torch tensor mapping
* Added support for more ops for super glue
* TorchFX: Initial permanent fallback
* TorchFX: New ops for improved TorchVision support
* TorchFX backend optimizations for partitioning and tmp fallback
* working operator updates for superglue
* Updates to operators for superglue
* Removed max.dim and stack
* Cleanup
* Cleanup
* Fixed a couple of syntax issues
* Fixed a couple of syntax issues
* Added missing method to TorchFX Decoder
* Added missing method to TorchFX Decoder
* Removed redundant code for transpose
* TorchFX: Initial StableDiffusion support
* PyTorch decoder ovtype to ctype fix for int64
* Added ops for distilbert
* Fixed few unnecessary include statements
* Separated TorchFX and TorchScript decoders
* Modified import statements to reflect two decoders
* f64 fix for TorchFX
* Import fix for PyTorch backend modules
* TorchFX serialize graph for debugging (Temporary)
* Serialize and load back feature enabled for TorchFX
* Temporary optimization to remove Broadcast
* Temporary SoftmaxReshapeElimination pass is added
* TorchFX custom model cache directory
* PyTorch bitwise translation, conversion checks enabled
* Naming fix in make_list_construct
* TorchFX: Added comments to Softmax and Slice translations
* translate_chunk temporarily removed for TS backend
* Fixed linter issues
* Addressed clang formatting issues
* Fixed few more clang and linter issues
* Fixed tests to use ts_decoder
* Fixed naming convention issues
* Added missing import
* Added inlined_inputs to TorchScriptDecoder
* Added tests for torch fx backend
* Removed magic numbers in PyTorch decoder utils
* TorchFX decoder data type fix
* Added cast from size_t to int
* TorchFX output handling code cleanup
* TorchFX: Use detached input tensor
* Added missing cast from size_t to int
* Added static cast in group_norm
* Fixed casting issue in split
---------
Co-authored-by: ynimmaga <yamini.nimmagadda@intel.com>
Co-authored-by: Cavus Mustafa <mustafa.cavus@intel.com>
* WIP: parameters cleanup
* Removed debug output, fixed CLI
* Fixed python objects conversion
* Finally renamed mmap to share_weights
* Fixed TF conversion from a file or a directory
* Fixed obvious errors in unit tests
* Deleted layouts from OVC. Fixed most of the failures in ovc unit tests (there are still failures)
* Cleaned up other references to layouts and fixed --version
* Fixed case when two model files are passed in TF case
* Fixed multiple model parts passing in ovc command line
* Tests fixed, support of unnamed input in cli parser.
* Remove convert_model from runtime.
* Changed silent to verbose.
* Removed transform param.
* Removed example_input, share_weights from ovc cli tool.
* Remove wrong change.
* Test fix.
* Code corrections.
* Returned comment.
* Workaround to fix process hanging after extension loading.
* Removed not needed code.
* Added comment.
---------
Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>
* [PT FE]: support nested inputs in example_inputs and arg dicts with different argtypes
* accept and handle lists as inputs
* Update tools/ovc/openvino/tools/ovc/moc_frontend/pytorch_frontend_utils.py
* update tests and add comments in code
* fix for custom types in annotations and duplicate in mo
* Update tools/mo/openvino/tools/mo/moc_frontend/pytorch_frontend_utils.py
* Fix -api sync for single -data_shape
Tickets 111187 and 111185
I wasn’t able to find a C++ equivalent of Python’s `info.original_shape.is_static`. Later I realized it shouldn’t be considered, because the -shape cmd arg should have higher priority for shape inference than the model’s shape, so I removed it from Python.
Replace
`if benchmark.inference_only and batch_size.is_dynamic:`
with
`if allow_inference_only_or_sync and batch_size.is_dynamic:`
to reset batch_size to static in case of dynamic shape with single -data_shape
* Check only app_input_info.size() == 1 because if it is greater than 1, the input shape is dynamic and there is more than one static shape. Apply TODO
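The decision described above can be sketched in a few lines of Python (a hedged illustration with hypothetical names; benchmark_app's real logic lives in its own sources). A single -data_shape implies one concrete static shape, so a dynamic batch dimension can be pinned to the batch of that shape in inference-only or sync mode:

```python
def resolve_batch_size(batch_size_is_dynamic: bool,
                       inference_only: bool,
                       sync_mode: bool,
                       data_shapes: list):
    # Mirrors the condition `allow_inference_only_or_sync and
    # batch_size.is_dynamic` quoted in the commit message above.
    allow_inference_only_or_sync = inference_only or sync_mode
    if allow_inference_only_or_sync and batch_size_is_dynamic:
        if len(data_shapes) == 1:
            # One -data_shape -> one static shape; take its batch dim.
            return data_shapes[0][0]
    return None  # keep the batch dynamic

print(resolve_batch_size(True, False, True, [[8, 3, 224, 224]]))  # 8
```

With more than one -data_shape the batch stays dynamic, matching the `app_input_info.size() == 1` check above.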
* [TF FE] Support Switch and Merge to fuse into If operation
It introduces support of TF1 control flow with Switch and Merge nodes.
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
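The Switch/Merge-to-If fusion rests on TF1 dataflow semantics: Switch routes a tensor to one of two outputs depending on a predicate (the untaken output is "dead"), and Merge forwards whichever input is alive. A minimal Python simulation of those semantics (illustrative only, not frontend code) shows why the pair behaves like a single If operation:

```python
# Marker object standing in for a "dead" tensor on the untaken branch.
DEAD = object()

def switch(data, pred):
    # TF1 Switch: (output_false, output_true); only one side is live.
    return (DEAD, data) if pred else (data, DEAD)

def merge(*inputs):
    # TF1 Merge: forwards the first live (non-dead) input.
    for value in inputs:
        if value is not DEAD:
            return value
    raise RuntimeError("all Merge inputs are dead")

def fused_if(data, pred, then_fn, else_fn):
    # Composing Switch and Merge reproduces if/else control flow.
    out_false, out_true = switch(data, pred)
    t = then_fn(out_true) if out_true is not DEAD else DEAD
    f = else_fn(out_false) if out_false is not DEAD else DEAD
    return merge(t, f)

print(fused_if(3, True, lambda x: x * 2, lambda x: x - 1))   # 6
print(fused_if(3, False, lambda x: x * 2, lambda x: x - 1))  # 2
```

Recognizing this Switch/Merge pattern in the graph is what lets the frontend emit a structured If operation instead of raw dataflow control nodes.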
* Add script for test model generation
* Fix code-style
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Fix build issue
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Fix build issue with types
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Apply code-review feedback: optimizations in utils
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Fix build issue
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Apply code-review remarks and cover more cases
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Remove commented code
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Remove unused vars
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Update MO unit-tests with Switch-Merge case
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Fix build issue: remove unused variable
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Implementation of MMAP for ONNX FE
* fix win offsets
* added virtual dtor to MappedMemory
* review remarks. part.1
* added disable mmap flag to MO
* added additional checks to mmap
* remove unnecessary const
* fix pybind default value
* Added args.disable_mmap = False to MO tests
* fixed MO test
* avoid global headers
* fix casting for win
* disable mmap for legacy frontends flow
* review remarks
* Fixed passing parameters
* added doc to MappedMemory and load_mmap_object
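The benefit of memory-mapping the weights file can be sketched with Python's stdlib `mmap` (a hedged illustration only; the actual MappedMemory / load_mmap_object implementation is C++). The weight blob is viewed in place at a given offset instead of being copied into process memory:

```python
import mmap
import os
import tempfile

def read_weights_mmapped(path: str, offset: int, length: int) -> bytes:
    # Map the file read-only and slice out the weight region;
    # the OS pages data in lazily instead of copying the whole file.
    with open(path, 'rb') as f:
        with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm[offset:offset + length]

# Demo on a small temporary file standing in for a model's weights blob.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b'headerWEIGHTSDATA')
    path = tmp.name
print(read_weights_mmapped(path, 6, 7))  # b'WEIGHTS'
os.unlink(path)
```

Because pages are loaded on demand, only the regions actually read contribute to resident memory, which is why a disable-mmap flag matters for flows (like the legacy frontends above) that cannot hold the mapping open.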
* Made MO cli parser independent from OVC, added OVC Pylint test, minor fixes.
* Small corrections.
* PyLint fixes.
* Added init files.
* PyLint fixes.
* Small correction.
* Removed OVC dependency from MO.
* Fixed MO unit tests.
* PyLint fixes.
* Unit tests fix.
* Returned MO unit tests.
* PyLint configs.
* Small correction.
* Moved offline_transformations to back.
* Moved offline_transformations to back.
* skip validation, always include cmake
* rm unconditional inclusion of zlib
* always include zlib
* correct path for builtin_extensions
* find builtin extensions recursively
* include test_utils always
* add logs for build_samples
* skip tests with dir accessing
* remove platform specification for samples build
* do not pkgconfig on win, use cmake generic on linux for samples
* rm make
* fix num_threads
* use bare numbers
* skip failing
* skip test_lrn_basic
* find zlib
* print error of downloading
* add linux pipeline
* do not save cache from PRs; add skipif only in GHA
* rm caching
* evaluate against a string
* do not include test_utils to the install dir
* add support for scalar shapes into cli_parser.py
* add test-case with scalar shapes for convert_model
* reordered inputs in test-case with scalar shapes for convert_model
* minor clarifications
---------
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>