* Update ONNX Runtime from rel-1.8.1 to rel-1.14.0
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
* Upgrade Cmake to 3.24.0
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
* Revert "Upgrade Cmake to 3.24.0"
This reverts commit 04a00f60c0.
* Update CMake to version 3.24.0
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
* Temporarily skip the CApiTest.test_custom_op_openvino_wrapper_library test; it will be added back with the new ONNX Runtime version
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
---------
Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
* Add descriptions to the transformations, add additional checks
* fix a warning
* TransposeSinking refactoring part 2: move the transformations to a separate folder, align namespaces
* TransposeSinking refactoring: class names, namespaces
* codestyle
* resolve merge conflicts
* fix special FakeQuantize (FQ) with a zero range in quantized models
* fix format & comments
* Add test case
* remove dot interval test case from smoke_LPT/FakeQuantizeTransformation.CompareFunctions
* Remove dot interval GPU test case because Pooling is also folded
* handle review comment
* fix code style
* update docs
* remove fold_zero_multiply
* Mark all failed ONNX layer tests as XFail
* Add additional xfailed marks
* Add one more failed test to XFail
* Add conditions for CPU/GPU failures
* Revert "Add conditions for CPU/GPU failures"
This reverts commit 790524c59c.
* Add separation of failures for CPU and GPU
* Replace all xfail with skip
* Remove redundant clone from serialize pass
* Revert padding changes in serialize pass
* Provide a class for a local copy of nodes with paddings
* Fixed comments
* IR serialization for dynamic models
* added ShapeOf1To3 transformation pass
* fixed input/output type mismatch
* removed unnecessary code
* moved ConvertShapeOf1To3 from common to GPU plugin
* updated copyright year
* fixed build errors
* Reduce the number of validate-and-infer-types calls in ConvertPrecision
Currently, the ConvertPrecision pass runs validate and infer types very frequently: it iterates over every precision pair and, for each pair, over the whole model, followed by validate and infer types.
The proposed solution is to iterate over the model once: for each node, iterate over the precisions array, update the node if required, and only then run validate and infer types (a sketch of this restructuring follows below).
Ticket: 81311
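A minimal C++ sketch of the restructuring described above, using illustrative stand-in types (`Node`, `Model`, `PrecisionPair`) rather than the actual OpenVINO classes:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Illustrative stand-in for a graph node; names and types are assumptions.
struct Node {
    int precision = 0;
    // Convert this node's element type; return true if anything actually changed.
    bool convert_precision(int from, int to) {
        if (precision != from)
            return false;
        precision = to;
        return true;
    }
    // Stand-in for a node's validate_and_infer_types().
    void validate_and_infer_types() { /* recompute output shapes/types */ }
};

using Model = std::vector<std::shared_ptr<Node>>;
using PrecisionPair = std::pair<int, int>;  // {from, to}

// Old scheme: a full model sweep plus a full revalidation for every precision pair.
void convert_precision_per_pair(Model& model, const std::vector<PrecisionPair>& pairs) {
    for (const auto& p : pairs) {
        for (auto& node : model)
            node->convert_precision(p.first, p.second);
        // Revalidate the whole model after every pair; this is what made the pass expensive.
        for (auto& node : model)
            node->validate_and_infer_types();
    }
}

// New scheme: a single model sweep; each node is checked against all pairs
// and revalidated at most once, only if it actually changed.
void convert_precision_per_node(Model& model, const std::vector<PrecisionPair>& pairs) {
    for (auto& node : model) {
        bool changed = false;
        for (const auto& p : pairs)
            changed |= node->convert_precision(p.first, p.second);
        if (changed)
            node->validate_and_infer_types();
    }
}

int main() {
    Model model{std::make_shared<Node>(), std::make_shared<Node>()};
    const std::vector<PrecisionPair> pairs{{0, 1}, {1, 2}};
    convert_precision_per_node(model, pairs);
    return 0;
}
```

The key difference is that the new scheme revalidates a node at most once, and only when one of the precision pairs actually modified it.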
* use map
* clang format
* move enum hasher
* fix gpu
* revalidate
* reinvalidate if node has changed
* remove validate for input prec changes
* fix gpu
* review
* find
* fix pytorch case
* revalidate
---------
Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
* Stabilize ascending comparison of the ref impl (see the comparator sketch after this group)
* Use reference to gtest param
* Create ref impl tests
* Fix descending by index sorting
* Sort by index both ways
* Make sort by index always ascending (revert)
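The sorting commits above concern making a reference implementation's comparison deterministic. A minimal sketch, assuming a value sort (ascending or descending) whose ties are always broken by index in ascending order; all names are illustrative and not the actual reference code:

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <utility>
#include <vector>

using Entry = std::pair<float, std::size_t>;  // {value, original index}

// Sort by value in the requested direction, but always break ties by index
// in ascending order so the result is stable across runs.
void sort_entries(std::vector<Entry>& entries, bool ascending) {
    std::sort(entries.begin(), entries.end(), [ascending](const Entry& a, const Entry& b) {
        if (a.first != b.first)
            return ascending ? a.first < b.first : a.first > b.first;
        return a.second < b.second;  // tie-break: index always ascending
    });
}

int main() {
    std::vector<Entry> entries{{2.0f, 0}, {1.0f, 3}, {1.0f, 1}, {2.0f, 2}};
    sort_entries(entries, /*ascending=*/true);
    for (const auto& e : entries)
        std::cout << e.first << ":" << e.second << " ";  // 1:1 1:3 2:0 2:2
    std::cout << "\n";
    return 0;
}
```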
* Add the possibility to use a memory alignment other than 64B (see the alignment sketch after this group)
* update tests for new memory api
* Remove ineffective code
* [FIX] Fix memory alignment issues for graph compiler primitives
* Update after review
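A generic illustration of the configurable-alignment idea behind the commits above; this is not the plugin's actual memory API, and all names here are assumptions:

```cpp
#include <cstddef>
#include <cstdlib>

// Round `size` up to a multiple of `alignment`, as std::aligned_alloc expects.
static std::size_t round_up(std::size_t size, std::size_t alignment) {
    return (size + alignment - 1) / alignment * alignment;
}

// Small RAII wrapper around an aligned allocation; `alignment` must be a power of two.
class AlignedBuffer {
public:
    explicit AlignedBuffer(std::size_t size, std::size_t alignment = 64)
        : data_(std::aligned_alloc(alignment, round_up(size, alignment))) {}
    AlignedBuffer(const AlignedBuffer&) = delete;
    AlignedBuffer& operator=(const AlignedBuffer&) = delete;
    ~AlignedBuffer() { std::free(data_); }
    void* data() const { return data_; }

private:
    void* data_;
};

int main() {
    AlignedBuffer default_aligned(1024);     // the previous hard-coded 64-byte alignment
    AlignedBuffer page_aligned(1024, 4096);  // a custom alignment, e.g. page-aligned
    return (default_aligned.data() != nullptr && page_aligned.data() != nullptr) ? 0 : 1;
}
```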
* Ability to provide several source dirs for ncc-style checks
* Fixed include headers; added NCC to TF common
* Fixed NCC for frontends
* Fixed NCC for frontends
* Extra fixes
* Fixest push --f
* Clang-format
* Apply comments
* Add an option to specify required clang-format version
* Update src/frontends/tensorflow/src/decoder_proto.cpp
* Update src/frontends/tensorflow/src/decoder_proto.cpp