* Optimize realloc for dynamic shape with:
- Pre-aligned allocation for bounded dynamic shape
- Reuse of the internal buffer
(a sketch of the pre-allocation idea follows this list)
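A minimal, self-contained sketch of the "allocate the upper bound once" idea; `BoundedBuffer` and `request` are illustrative names, not the actual kernel code, and real code would also align the allocation:

```cpp
#include <cstddef>
#include <vector>

// Illustrative only: when a dynamic dimension has a known upper bound,
// allocate for the worst case once so later shape changes reuse the same
// buffer instead of triggering a realloc (alignment handling omitted).
struct BoundedBuffer {
    explicit BoundedBuffer(std::size_t upper_bound_bytes) : storage(upper_bound_bytes) {}

    // Returns storage valid for `bytes`; grows only if the bound was exceeded.
    void* request(std::size_t bytes) {
        if (bytes > storage.size())
            storage.resize(bytes);  // fallback: a genuine reallocation
        return storage.data();
    }

    std::vector<unsigned char> storage;
};
```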
* Fix internal buffer of NMS kernel to be reused
- Fixed bug in NMS quick sort
* Additional fix for internal buffer reuse
* Fix legacy dynamic batch to be applied only for a 0-th-dim dynamic shape with an upper bound
* Fix unittest error
* Apply the NMS fix of padding with -1 to all buffers only when the internal buffer is reused
* Do not have a separate get_max_tensor API, because currently there is no need for it.
Currently the max tensor is only needed for memory allocation, and there is no need for a minimum tensor size for now
* Fix allocation of internal buffer to be done for each layout
* add aten::topk
* remove commented lines
* remove white space
* move include to individual ops
* switch include statements
* fix style
* trim test cases
* Remove global ENABLE_INTEL_CPU macro definition. Add local definition for some source files where it is used
* Fix1
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
* Review reverse sequence for:
- partial shapes and labels propagation
- template implementation of shape infer
- refactor shape_infer to use it when the op is created with the default ctor
* Remove friend shape_infer from reverse sequence op
* Infrastructure for tflite
* Removed submodule flatbuffers
* Added flatbuffers submodule. Fixed version to v22.12.06 aka acf39ff
* Move headers back
* Flatbuffers integration
* Small fixes
* Started parsing the Model
* flatbuffer changes
* decoder_flatbuffer changes
* Lite Input Model -- not needed as of now but looks cool
* Rolled back inheritance from ov::frontend::tensorflow::InputModel
* Results are not treated as outputs, but it's ok
* Fix misplaced input vs output
* Refactor
* Load model op-by-op. Frontend API finalized
* Debugging still, there are prints here and there. Decoder is not sane
* Convolution with all attributes is translated and quantization is applied for inputs and constants. TODO: quantize intermediate tensors, separate decoder-specific logic?
* Float ssd and posenet models are showing good accuracy
* Need to refactor, but it works flawlessly
* Telemetry and lightweight model cutting
* Code style and test changes. Extensions supported
* Quantization and style
* Style refinements
* Move onednn back
* New portion of operations enabled
* TFLite FE doesn't inherit TF FE
* Moved files to another directory
* Rename header op_table.hpp to common_op_table.hpp for all files in src/frontends/tensorflow_common/src/op/
* Removed visibility macros
* CMake changes
* Unit-test execution in .ci
* Update labeler.yml
* Codeowners
* Style check and fix
* Static Build arrangement
* Addressing the comments
* install common headers to previous place
* New approach with public decoder and graph_iterator
* New approach with public decoder and graph_iterator
* Move GraphIterator back
* Comments addressed
* Comments addressed
* Preliminary TF FE README.md changes
* Added target_compile_definitions OPENVINO_STATIC_LIBRARY for static build
* Fixed conflicts and added TF to common places
* Frontends use only openvino::core::dev API
* Merged common tensorflow changes and made code build and work on selective number of models
* Style
* Rollback unnecessary changes from Tensorflow FE
* Rollback unnecessary changes from Tensorflow Common
* Minor refactor
* cmake minor refactoring
* Mixed commit
* Style and merge fix
* Low hanging fruit operations
* Fix windows build
* Refactor quantization parameters representation
* license compliance. approved by OS PDT
* copyrights in generic file
* dependabot
* labeler
* Unit Test to be triggered in CI
* cmake variables naming. corrected copyright years in copyrights/generic file
* library renamed in .ci/ calls
* Copyright year update
* Set openvino-tf-frontend-maintainers as owner of /src/frontends/tensorflow_lite/
* Fixed flatc cross-compilation
* Cleaned flatbuffers header usage
* Nitpicks solved
* Update cmake/templates/OpenVINOConfig.cmake.in
* Compile with flatbuffers headers
* Fixed "which is prefixed in the source directory"
* Fixed typo in flatbuffers cmake
* Removed flatbuffers submodule
* Added fork submodule
* Fixed static build
* Fixed cross-compilation
* Fixed -Wshadow warning
* Fixed warning on Windows
* Use only headers from flatbuffers library
* Added LTO and fixed compilation errors on Windows
* Fixed warnings in tensorflow_common
* Move ctors implementation to cpp file
* Added information about new frontends to the common FE part
* Temporarily disable warnings
* Fixed code style using clang-format
* Fixed Windows
* reverted changes in onnx
* Revert changes in onnx_common
* Removed pragma once from cpp
Co-authored-by: missjane <estepyreva@gmail.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
* Review propagate interval shapes and labels
* Review eye template shape inference
* Add partial value and label propagation on the columns and rows inputs
* Include array in eye shape inference
* Fix compilation issues
* Remove local label_t alias from eye tests
* [GNA] Fix issue with Parameter followed by Broadcast/Tile layer
* added InsertCopyBeforeLayerToBeEliminated transformation: handles an issue with
Broadcast and Tile layers in case they are eliminated from the network
* added tests for verifying the fix
* [GNA] Fix review comment. Remove unnecessary loop and variable
* [GPU] Perform memory transfer from usm_host to usm_device only for dGPU
* [GPU] Allocate new memory buffer for biases fusions to avoid original buffer modification since it may be used by other primitives
* New static shape inference iface using ov::Tensors (a sketch follows this list)
- Add new shape inference factory
- Add helpers to create inference factory map entries
- Create map for IShapeInferCommon instead of an if-else switch
- Create new map for IStaticShapeInfer
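A rough sketch of the factory-map idea from the bullets above; IStaticShapeInfer is named in the commits, but the map key, maker type, and TileShapeInfer below are illustrative stand-ins rather than the real OpenVINO declarations:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

// Illustrative interface and one per-op implementation.
struct IStaticShapeInfer {
    virtual ~IStaticShapeInfer() = default;
};
struct TileShapeInfer : IStaticShapeInfer { /* per-op inference would live here */ };

using ShapeInferMaker = std::function<std::shared_ptr<IStaticShapeInfer>()>;

// One entry per supported op type; a map lookup replaces the previous if-else switch.
static const std::unordered_map<std::string, ShapeInferMaker> shape_infer_factory = {
    {"Tile", [] { return std::make_shared<TileShapeInfer>(); }},
};

std::shared_ptr<IStaticShapeInfer> make_shape_infer(const std::string& op_type) {
    const auto it = shape_infer_factory.find(op_type);
    return it == shape_infer_factory.end() ? nullptr : it->second();
}
```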
* Re-factor tile shape inference to use new iface
* ov::default_label_evaluator uses ov::Tensor now
* Improve cmp::lt for mixed unsigned and float types
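The point is that a plain `<` on mixed signed/unsigned or unsigned/float operands goes through implicit conversions with surprising results. A minimal sketch of a safe less-than, under an illustrative namespace (cmp_sketch, not the actual ov::cmp code):

```cpp
#include <type_traits>

namespace cmp_sketch {
template <class T, class U>
constexpr bool lt(T a, U b) {
    if constexpr (std::is_floating_point_v<T> || std::is_floating_point_v<U>) {
        // Compare in double so an unsigned operand is not pushed through a
        // signed intermediate (64-bit values may still need extra care).
        return static_cast<double>(a) < static_cast<double>(b);
    } else if constexpr (std::is_signed_v<T> && std::is_unsigned_v<U>) {
        return a < 0 || static_cast<std::make_unsigned_t<T>>(a) < b;
    } else if constexpr (std::is_unsigned_v<T> && std::is_signed_v<U>) {
        return b >= 0 && a < static_cast<std::make_unsigned_t<U>>(b);
    } else {
        return a < b;  // same signedness: the built-in comparison is fine
    }
}
}  // namespace cmp_sketch

static_assert(cmp_sketch::lt(-1, 1u), "signed vs unsigned");
static_assert(!cmp_sketch::lt(1u, -1), "unsigned vs signed");
static_assert(cmp_sketch::lt(1u, 1.5f), "unsigned vs float");
```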
* Fix cpp lint issue
* Update using tile shape_inference in GPU plugin
* Do tile shape infer before repeats lock deletion
* Fix label type conversion to element type
* Rename shape infer transformation
to type utils and change namespace from ov::sh_infer_tr to ov::util
* Update shape inference utilities
* Add unit test for safe compare of values
* Update shape infer factory to be a template
and use unordered map
* Remove from_label_type as label_t can be used
by element::from<>
* Add PushConstantToSubgraph transformation
Transformation detects constant-foldable inputs to MultiSubGraphOp,
constant-folds them and then pushes them to the inner subgraphs (a conceptual sketch follows).
Ticket: 98155
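A self-contained toy model of that idea, using hypothetical types (OuterInput, InnerBody) rather than the real OpenVINO pass API: foldable inputs are baked into the inner body and the corresponding outer inputs are dropped.

```cpp
#include <iostream>
#include <optional>
#include <vector>

struct OuterInput {
    std::optional<int> folded;   // engaged when the producer is constant-foldable
    int runtime_value = 0;       // used only when not folded
};

struct InnerBody {
    std::vector<int> baked_constants;           // parameters replaced by constants
    int run(const std::vector<int>& fed) const {
        int sum = 0;
        for (int c : baked_constants) sum += c;  // values pushed into the subgraph
        for (int v : fed) sum += v;              // values still fed from outside
        return sum;
    }
};

int main() {
    std::vector<OuterInput> inputs = {{std::nullopt, 5}, {7, 0}, {std::nullopt, 2}};
    InnerBody body;

    // The transformation: move every foldable input into the body, keep the rest.
    std::vector<OuterInput> remaining;
    for (const auto& in : inputs) {
        if (in.folded)
            body.baked_constants.push_back(*in.folded);
        else
            remaining.push_back(in);
    }

    std::vector<int> fed;
    for (const auto& in : remaining) fed.push_back(in.runtime_value);
    std::cout << body.run(fed) << "\n";  // prints 14 (5 + 7 + 2)
}
```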
* cast to int
* comments, split to functions
* remove op::util
* Excluded EXACT mode from the test
* Update src/plugins/intel_gna/tests/functional/shared_tests_instances/behavior/ov_infer_request/inference_chaining.cpp
* Remove None at outputs of the model, improve type handling in the frontend
* Fix py code style
* Add torch dependency in pybind tests
* Fix tests if FE is disabled and add backward type conversion
* Move decoder tests to layer tests
* Fix codestyle
* Add comment
* Move tests to separate folder
* Update .ci/azure/linux.yml
* [MO][TF FE] Switch MO to TF FE in default mode
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Fix code-style
* Extend operations for the fallback
* Fix MO unit-tests
* Check only legacy FE for read-from-memory functionality
* Fix failures in IR comparator tests
* Fallback to the legacy FE in case of tensorflow_custom_operations_config_update
* Revert copyright and update
* Fix unit-test since it is oriented for the legacy FE
* Fallback to the legacy FE in case of deprecated config options
* Fix value propagation from deprecated config option
* Fix the Result node name in case of cutting by input port for outputs
* Set Result node name aligned with the Legacy Frontend
* Reformat a list of operations to fallback
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>