* keep Shape subgraph in FP32 for GPU
* Replaced ngraph:: -> ov:: namespace
* Renamed ConvertModelToFP16ElementType -> ConvertCompressedToMixedPrecision and other passes; removed an unnecessary argument from the pass
Signed-off-by: Pavel Esir <pavel.esir@intel.com>
* Update src/common/transformations/src/transformations/common_optimizations/convert_compressed_to_mixed_precision.cpp
* Update src/common/transformations/src/transformations/common_optimizations/convert_compressed_to_mixed_precision.cpp
* Rephrased the ConvertPrecision call to avoid an unused-variable error
* Update convert_compressed_to_mixed_precision.cpp
* Moved the ConvertCompressedToMixedPrecision call to the beginning of CommonOptimizations
* Update transformations_pipeline.cpp
* Refactored unit tests
* Removed an obsolete commented-out line
* Code style fix
* Update src/common/transformations/src/transformations/common_optimizations/common_optimizations.cpp
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
Signed-off-by: Pavel Esir <pavel.esir@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Maxim Vafin <maxim.vafin@intel.com>
* Used a newer version of the checkout GitHub action
* Updated actions/upload-artifact action version to use Node 16
* Updated actions
* Experiment
* Name documentation artifacts
* Moved doxyrest version to environment
* Extract PR number
* Properly upload documentation
* Updated github_org_control/config.json
* [TF FE] Support body graph conversion and injection
This now serves as a base for further work to support the StatefulPartitionedOp, While, and If operations
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Remove unused variable
* Remove artifacts of serialization experiment
* Apply code-review feedback: comments for decoder_argdef
* Create a map to cache function indices by name
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
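The idea behind that cache can be sketched as follows (a minimal illustration only; the class and function names here are hypothetical, not the actual TF FE code): build a name-to-index map once, so that every lookup of a function body during graph conversion is O(1) instead of a linear scan over the function library.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical sketch: cache function-library indices by name so repeated
// lookups during body-graph conversion avoid rescanning the library.
class FunctionIndexCache {
public:
    explicit FunctionIndexCache(const std::vector<std::string>& function_names) {
        for (std::size_t i = 0; i < function_names.size(); ++i) {
            m_index_by_name.emplace(function_names[i], i);
        }
    }

    // Returns the cached index of `name`, or -1 if no such function exists.
    long long find(const std::string& name) const {
        auto it = m_index_by_name.find(name);
        return it == m_index_by_name.end() ? -1 : static_cast<long long>(it->second);
    }

private:
    std::unordered_map<std::string, std::size_t> m_index_by_name;
};
```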
* [TF FE] Fix ResizeBilinear for uint8 type and test Resize operations
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Convert to fp32 right before interpolation
* Add one more test for fp64
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
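The motivation for converting to fp32 right before interpolation can be shown with a toy midpoint computation (an illustrative sketch, not the actual code): averaging uint8 samples in the integer domain truncates the fractional part, while doing the same arithmetic in float preserves it.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative only: midpoint of two uint8 samples, as in bilinear resize.
// Integer division truncates the fractional part of the result.
std::uint8_t midpoint_u8(std::uint8_t a, std::uint8_t b) {
    return static_cast<std::uint8_t>((a + b) / 2);
}

// Converting to float before interpolating keeps the fractional part.
float midpoint_f32(std::uint8_t a, std::uint8_t b) {
    return (static_cast<float>(a) + static_cast<float>(b)) / 2.0f;
}
```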
* samples/python/benchmark/bert_benhcmark/requirements.txt: add datasets; install CPU-only torch on Linux
By default, torch is installed with GPU support on Linux, whereas on
Windows it is CPU-only
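One common way to express this in a requirements file (a sketch; the exact pins and contents of the actual file may differ) is to add PyTorch's CPU-only wheel index, which pip consults in addition to PyPI:

```
# Illustrative sketch, not the exact requirements.txt from the PR.
# The extra index serves CPU-only Linux wheels; Windows wheels on PyPI
# are CPU-only already.
--extra-index-url https://download.pytorch.org/whl/cpu
datasets
torch
```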
* Fixed typo: benhcmark -> benchmark
* Add type_prop tests
* Add shape_infer tests
* Update shape_infer to preserve interval dim and label
* Unified approach for get_data_as_shape and negative value checks
* Remove redundant gtest header
* Rename one-hot shape_infer test file
* Add test for shape_infer with default ctor and adjust resolve_axis
* Move get_data_as_shape changes to the one hot custom util
* Adjust custom get_data_as_shape
* Select shape_infer tests update
* Add Select type_prop tests
* Add evaluate_lower/upper for select
* Revert evaluate_lower/upper for Select
* Use get_node_input_partial_shapes
* Style and headers improvements
* Style apply
* Rename select shape infer file tests
* Use default ctor for output_shapes init
* Use helper for shape_labels init and add more dim test cases