Commit Graph

13260 Commits

Author SHA1 Message Date
Liu
9e7243d67c
fix typo (#20906)
Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-11-08 13:10:11 +04:00
Ilya Lavrenov
d6cc3d7058
Disable warnings about API 1.0 in GNA, Python API 1.0 (#20933) 2023-11-08 12:45:22 +04:00
Alexander Kozlov
0f260c2ccd
[DOC]: Added INT4 weight compression description (#20812)
* Added INT4 information into weight compression doc

* Added GPTQ info. Fixed comments

* Fixed list

* Fixed issues. Updated Gen.AI doc

* Applied comments

* Added additional info about GPTQ support

* Fixed typos

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Update docs/optimization_guide/nncf/code/weight_compression_openvino.py

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>

* Applied changes

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/gen_ai.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/articles_en/openvino_workflow/model_optimization_guide/weight_compression.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Added table with results

* One more comment

---------

Co-authored-by: Nico Galoppo <nico.galoppo@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-11-08 10:17:57 +04:00
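A minimal sketch of the INT4 weight compression flow that the documentation change above describes, assuming NNCF's `compress_weights` API; the model path and the `ratio`/`group_size` values are illustrative, not recommendations:

```python
# Hedged example: INT4 weight compression of an already converted OpenVINO model.
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("llm_fp16.xml")  # hypothetical model path

# Compress most weights to 4-bit (symmetric), keeping a fraction in 8-bit.
compressed = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_SYM,
    ratio=0.8,
    group_size=128,
)

ov.save_model(compressed, "llm_int4.xml")
```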
Paul Youngsoo Ahn
c42a88a190
Support dynamic tensor_iterator (#20869)
* [GPU] Support dynamic TensorIterator with -1 num_iteration
- remove redundant code

* [GPU] Refactoring methods for pre_process / post_process for body_network

* Add unit test for dynamic TensorIterator w/o trip_count_id

* Follow-up code review
* Set inner network in loading of model cache
* Fix legacy loop unit tests
2023-11-07 15:11:08 -08:00
Roman Kazantsev
c6ca7865fb
[TF FE] Fix conversion of TF1 OD models out-of-the-box (#20916)
* [TF FE] Fix conversion of TF1 OD models out-of-the-box

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add test While with nested If operation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update tests/layer_tests/tensorflow_tests/test_tf_While.py

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-11-07 17:44:09 +00:00
Sebastian Golebiewski
ac1fb7b955
Fixing OS list in System Requirements for YUM (#20934) 2023-11-07 17:18:18 +01:00
River Li
a0849edca1
[CPU] migrate cpu plugin api 2.0 (#18124)
* [CPU] CPU plugin migrates to plugin API 2.0

* Fix legacy config/metric issue

* Fix some issue of ov_cpu_func_tests

1. set_tensors_impl segment fault
2. ov::loaded_from_cache unsupported issue

* Resolve some comments

1. ov::loaded_from_cache issue
2. throw_if_cancelled issue
3. import_model issue
4. set_tensor_impl issue
5. batched_inference issue

* Fix dynamic shape inference issue

* Fix build error

* keep original model info in infer_request

* Fix minor error

* cache internal tensors for input/output precision change

* Disable import model test cases with precision changes

* fix precision issue

* Fix issue for import model

* Fix InferRequestCancellationTests exception issue

* Skip InferRequestIOBBlobTest.*secondCallGetInputDoNotReAllocateData because the new plugin API has different behavior

* Fix graph name issue

* Fix ROI issues

* Fix Transpose shape issue

* Skip vie::Version test due to change to ov::Version

* Solve input port name changes issue

* Solve preprocess layout issue

* Fix minor issue

* tidy up code

* Fix conflict after rebase

* Fix Windows build warning

* Add aux tensors for precision change issue

* Fix import/export model issue

* WA single layer name changed by preprocess

* Revert "WA single layer name changed by preprocess"

This reverts commit bc8fcdd43c.

* Skip some legacy tests because plugin API 2.0 is enabled

1. skip some Python legacy tests due to different behaviors under plugin API 2.0
2. skip some smoke tests because the output port name was changed

* Fix 2 build warnings

* Skip some AUTO plugin tests

* Fix property issue caused by AUTO plugin

* Skip PSROIPooling issues

* Follow header files reference policy

* Split out transformation fixing for nop_elimination

* Fix AUTO plugin mismatch issue for get_tensor function

* Fix aux tensor shape issue

* Fix tensor shape issue

* WA python sync inference sample's segmentfault issue

* Fix reshape issue for dynamic inference

* Fixed incorrect tensor name in e2e test

Fixed issue: e2e ONNX_Customized_Cascade_Rcnn_api_2_True_batch_1_device_CPU_precision_FP325den8cnk

* Fix python segmentfault issue of plugin api 2.0

* Fix python segmentfault issue of plugin api 2.0

* Revert "Fix python segmentfault issue of plugin api 2.0"

This reverts commit 6f502e5d86.

* Fix onnx_duplicated_output_name due to empty tensor

Co-authored-by: Bell, Song <bell.song@intel.com>

* Remove redundant code

* Remove python segment fault WA

* Keep rt_info to fix test failure in case of legacy public api

* Fix output port names missing issue

* Address some reviewers' comments

* Restore OnnxBackendNodeModelTest::test_maxpool_with_argmax_2d_precomputed_pads_cpu after fixing has been merged

* Resolve tensor sharing issue when output ports have the same name

In some cases, a model has 2 or more input/output ports with the same name; they also have the same
precision and partial_shape. Compiled_model will share the same ov::Descriptor::Tensor pointer
and ov::Tensor between multiple such ports.
Solving the Python segment fault issue by creating separated input/output ports also
needs to handle this shared-tensor case, which this patch does.

* Better method to find shared tensor desc

* rename with snake_case style

* Remove ngraph header files

* Keep external_ptr naming

* Add OPENVINO_SUPPRESS_DEPRECATED for some legacy code

* Use port's tensor_ptr instead of creating a new tensor_ptr

* Resolve some reviewer comments

* Implement ov::IInferRequestInternalWrapper::GetPreProcess to recover python GetPrepProcess tests

* Remove unnecessary header files reference

* Assert the risk of precision change and reorder at the same time

* Modify legacy python test to fit plugin api 2.0 behavior

* Recover smoke_Transpose(2|4|5|6)D/TransposeLayerTest.CompareWithRefs since the fix is merged

* Fix typo issue

* Address reviewer's comments

* Disable precision conversion

* Fix error when CpuBlockedMemoryDesc

* Remove precision mismatch WA

* WA precision issue for query_model

* Solve precision mismatch between compiled model and graph

* Fix failure of query_model

* Rebase to new plugin api update

* Recover the test cases of precision mismatch

* Try to fix name changing for graph model

* Remove test code

* Remove fp64

* Rebase to new plugin api update

* Update for some failure cases

* Fix bert_benchmark failure issue

* Avoid segment fault in ARM ACL

Legacy public API + CPU plugin API adds a convert op via preprocessing by default for unsupported precisions,
but ACLConvertExecutor cannot support dimension > 6, so these tests segfault due to dimension > 6:

smoke_TestNumpyBroadcastNgraphEvaluate/BroadcastLayerTest.CompareWithRefs/targetShape=(1.2.3.4.5.6.7.8.9.10)_axesMapping=()_mode=numpy_inShape=(1.2.1.4.1.6.1.8.1.10)_inNPrec=I8_trgDev=CPU
smoke_TestNumpyBroadcastNgraphEvaluate/BroadcastLayerTest.CompareWithRefs/targetShape=(1.2.3.4.5.6.7.8.9.10)_axesMapping=()_mode=numpy_inShape=(1.2.1.4.1.6.1.8.1.10)_inNPrec=U8_trgDev=CPU

* Remove precision change from preprocess to avoid unsupported ACL convert with dim > 6

* ACLConvertExecutor cannot support dimension > 6, don't let preprocess add Convert

* Revert "ACLConvertExecutor cannot support dimension > 6, don't let preprocess to add Convert"

This reverts commit fd7a8b35af.

* Revert "Remove precision change from preprocess to avoid ACL unsupport convert dim > 6"

This reverts commit 3c2d9a5f17.

* Debug

* Debug incorrect precision checking issue

* Debug Eltwise FP64 unsupported issue

* Add logs for precision

* debug log

* Update for new dependent PRs merged

* Fix failure caused by preprocess

Fix the failures below, caused by not being able to find ops by name:
     smoke_LPT/ReduceMaxTransformation.CompareWithRefImpl/f32_[1,3,10,10]_CPU_f32__256*

* Fix build error

* Fix failure caused by missing code during rebase

* Add debug

* Fix unsupported precision issue

* U16/I16/U64 precision support

* Resolve the issue of f64 reorder

Fix the issue below:
Cannot create reorder primitive: unsupported reorder case

* Fix convert multiple child edge issue

* Solve ROI tensor failure issues

* Temporarily disable num_nodes comparison

* Only change convert precision for fp64

* Put convert precision change before reorder to avoid confusion

* Add debug log for transformation

* Fix rebase conflict

* Fix clang issue

* Temporarily disable test_infer_mixed_values python test of bf16

* Solve issue of smoke_ConvertCPULayerTest_BOOL_Dynamic_inputPRC=BF16 choosing FP32 primType rather than BF16 primType

* Fix issue of pytorch_tests/test_outer.py

There are 2 output ports with the same port name; they should share the same tensor.

* Fix "ARM cannot find Eltwise executor" issue

smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=INPUT_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL
will report the error below:
	 [ GENERAL_ERROR ] Supported Eltwise executor is not found
It needs a convert precision change to avoid this problem.

* Fix memory overwritten issue

* Temporarily skip arm fp16 SetBlobTest

* Fix compile error after rebase

* Restore smoke_IsOp test due to fixing pr merged

* Fix float to bf16 issue in avx2 isa

* solve onnx test xfail issue

* Skip test cases where the ARM Eltwise executor does not support FP16

smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=INPUT_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL
smoke_SetBlobCPU/SetBlobTest.CompareWithRefs/Type=BOTH_Device=CPU_PrecisionInNet=FP16_PrecisionInNgraph=BOOL

      [ GENERAL_ERROR ] Supported Eltwise executor is not found

* [CPU] improve reorder to support any precision

* Implement ReorderExecutor

* Fix build error

* Do not cache executor because its primitive has already been cached

* Insert one convert at most

Insert at most one convert if needed; if the reorder still cannot be done, throw an exception rather than insert a second convert.
For example, the reorders below will not be supported:
   FP64<->I64/U64/U32
   U32<->I64/U64
   U32<->I16/U16
   FP64<->FP64
   BIN<->BIN

* Only do conversion if the layout is the same

* update for only convert case

* Update for reviewer comments

* update for failure cases

* Address reviewer comments

* Update rebase issue

* minor update

* Solve unsupported precision issue in transformation rather than init_edge

* Remove unnecessary convert in init_edge

* Minor changes

* Update Reorder::reorderData

* Solve issue when there is only conversion without reorder

* Address reviewer comments

* Address reviewer comments

* Keep exception for unsupported precision

* update

* Revert reorder executor implement

* Solve float->bool issue on transformation pipeline

* Solve I64 is not supported issues

* Solve reviewer's comments

* Fixed dynamic top_k node issue

* Skip nhwc and nChw16c test cases for ConvertLayer

* Update for reviewers' comments

* Fix some failures

* Update for several failure cases

* Update for apiConformanceTests failures

* Fix incorrect node name after import model

* update

* update comments

* Solve issue of smoke_MatMul_NoTranspose and smoke_MatMul_BothTranspose

* Fixed AlignMatMulInputRanks scalar issue

* Address reviewers' comments, remove redundant path in graph.cpp

* Remove test_div_uint8_cpu from xfail_issue_58676

* Solve invalid number of nodes for smoke_Snippets_BroadcastSelect

* ConstantResultSubgraphTest of u16/i16/u32/i64/u64

* restore smoke_SetBlobCPU BOOL tests for arm

* [CPU] Fix ARM precision issue

ARM64 ACL prefers fp16 over fp32, while API 2.0 requires the input/output precision not to change,
so an fp32 input triggers adding a convert node that converts fp32 to fp16.

* Solve some ARM64 failures

* Fix arm64 InferRequestVariableStateTest tests out of memory issue

ARM64 will force fp16 precision, so the states memory can be fp16; the memcpy to state_memory
cannot use float * element_size, otherwise it goes out of the memory bound.

* Skip 2 arm64 tests caused by forcing fp16 precision

* Revert "Fix arm64 InferRequestVariableStateTest tests out of memory issue"

This reverts commit 3e12bd48c2.

* Fix python test_get_profiling_info failure issue

---------

Co-authored-by: Bell, Song <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-11-07 15:25:05 +01:00
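For context on what the migration above targets, here is a minimal sketch of user-facing inference through plugin API 2.0 on the CPU device, assuming a static-shape f32 input; the model path and shapes are hypothetical:

```python
# Hedged example: API 2.0 inference with ov::Tensor-based ports on CPU.
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model("model.xml", "CPU")  # hypothetical model path

request = compiled.create_infer_request()
# API 2.0 keeps the model's original input/output precisions, so the tensor
# dtype is expected to match the port's element type (assumed f32 here).
input_port = compiled.input(0)
data = np.zeros(list(input_port.shape), dtype=np.float32)
request.set_tensor(input_port, ov.Tensor(data))
request.infer()
result = request.get_tensor(compiled.output(0)).data
```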
Sebastian Golebiewski
3f7989a817
[DOCS] Fixing link in Get Started article (#20881)
* Updating Get Started section

Addressing JIRA ticket: 124289

* Update get_started.md
2023-11-07 14:37:24 +01:00
Vitaliy Urusovskij
e3d7dffa83
Remove legacy API from FEs (except ONNX) (#20849)
* Remove `ngraph` from PT FE and FE tests utils

* Remove `ngraph` from Paddle FE

* Remove `InferenceEngine` from some ONNX FE test

* Port `generate_embedding.py` to API2.0

* CLangFormat

* Fix comments
2023-11-07 14:01:39 +01:00
Pawel Raasz
95aef4bf51
[core]Migrate Exp operator to new API (#20893)
* Migrate Exp operator to new API

* Add missing includes
2023-11-07 13:43:13 +01:00
Pawel Raasz
e82283cf85
[core]Migrate Mish operator to new API (#20892)
* Migrate Mish operator to new API

* Remove `visit_attributes` as it is the same as in the base class

* Refactor Mish reference implementation

* Add cast as the function is generic
- Mish calculation is floating-point but the return type can be integral.
2023-11-07 13:05:39 +01:00
Tomasz Jankowski
e8b6e17429
[core] Migrate Softplus operator to new API (#20900)
* Drop ngraph remains

* Use ov::Tensor

instead of ngraph::HostTensor
2023-11-07 12:19:34 +01:00
Alina Kladieva
f17f17acc7
Use custom labeler to label changes not matching any pattern (#20888)
Needed for Smart CI (https://github.com/openvinotoolkit/openvino/pull/19825)
2023-11-07 12:16:32 +01:00
Pawel Raasz
368e6bfb8a
Fix constant folding in MulMulMulFusion (#20803)
* Fix constant folding in MulMulMulFusion
by adding f64 precision in Multiply to perform evaluate for constant folding

* Do not transform if the input has an unsupported type
2023-11-07 11:57:29 +01:00
Anastasia Kuporosova
4a0098b26a
[PyOV] ngraph linter check update (#20870)
* [PyOV] ngraph linter check update

* Update src/bindings/python/requirements_test.txt

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-11-07 10:42:46 +00:00
Aleksandr Voron
681331d3d7
[CPU] Increase RandomUniform test mean and variance thresholds 2023-11-07 14:37:37 +04:00
Aleksandr Voron
c3948ca799
[nGraph Transformations] NMS convert precisions - change sequence of checks (#20795) 2023-11-07 14:34:12 +04:00
Pawel Raasz
cb53ee5db7
[core]Migrate ReLU operator to new API (#20874)
* Migrate ReLU operator to new API

* Optimize ReLU reference implementation

* Correctly define const value in ReLU
2023-11-07 11:29:43 +01:00
Ilya Lavrenov
c0381ab08d
Updated labeler config (#20913) 2023-11-07 14:05:03 +04:00
Maxim Vafin
cdd342ea49
[PT FE] Add ALIKED to model tests (#20899)
* Add ALIKED to model tests

* Update tests/model_hub_tests/torch_tests/test_aliked.py

* Update tests/model_hub_tests/torch_tests/test_aliked.py
2023-11-07 09:34:26 +01:00
Pawel Raasz
8f30470199
[core]Migrate MatMul operator to new API (#20857)
* Migrate MatMul operator to new API

* Correct get shapes references
2023-11-07 09:12:37 +01:00
Maxim Vafin
e976e7b90c
[PT FE] Add tests for Speech-Transformer (#20847)
* Add tests for Speech-Transformer

* Update tests/model_hub_tests/torch_tests/test_speech-transformer.py

* Update tests/model_hub_tests/torch_tests/test_speech-transformer.py
2023-11-07 08:32:22 +01:00
Pawel Raasz
dcdf6750a7
[core]Migrate Sign operator to new API (#20875)
* Migrate Sign operator to new API

* Optimize Sign reference implementation

* Fix code style
2023-11-07 08:31:04 +01:00
Tomasz Jankowski
a304f03852
[core] Migrate Softmax operator to new API (#20894)
* Drop ngraph remains

* Use ov::Tensor

instead of ngraph::HostTensor
2023-11-07 08:26:58 +01:00
Roman Kazantsev
494a9cf9a9
[TF FE] Refine tests for complex tensors support (#20905)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-11-07 10:29:45 +04:00
Tomasz Jankowski
5cd9659033
[core] Migrate HSwish operator to new API (#20854)
* Drop ngraph remains

* Use ov::Tensor

instead of ngraph::HostTensor
2023-11-06 20:56:52 +01:00
Anastasiia Pnevskaia
64c21fd6f9
[MO] Fixed MO fallback unit test. (#20868)
* Fixed MO unit test to import paddle conditionally.

* Replace generate with pytest.mark.parametrize.
2023-11-06 22:13:53 +04:00
Karol Blaszczak
3036a3d249
[DOCS] improving the "conversion" section v2 (#20887)
adjustments to conversion and workflow
2023-11-06 18:17:44 +01:00
Roman Kazantsev
d0eb27bd3b
[TF FE] Support Complex Tensors (#20860)
* [TF FE] Support complex tensors

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Align output type for Real and Imag operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update decoding complex types

* Add support for ComplexAbs, FFT and IFFT operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Correct axes based on the number of innermost dimensions

* Add layer tests

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Update supported ops documentation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add a comment for ComplexTypeMark

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-11-06 16:57:05 +04:00
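A hedged sketch of the kind of TF graph with complex tensors that the commit above enables converting; the function, shapes, and output file name are illustrative only:

```python
# Hedged example: a TF function built from complex ops (tf.complex, FFT, ComplexAbs)
# that the TF frontend can now decompose into real-valued operations.
import tensorflow as tf
import openvino as ov

@tf.function(input_signature=[
    tf.TensorSpec([1, 16], tf.float32),
    tf.TensorSpec([1, 16], tf.float32),
])
def complex_abs_fft(real, imag):
    x = tf.complex(real, imag)  # complex tensor built from two real inputs
    y = tf.signal.fft(x)        # FFT over the innermost dimension
    return tf.abs(y)            # ComplexAbs brings the result back to real

ov_model = ov.convert_model(complex_abs_fft)
ov.save_model(ov_model, "complex_abs_fft.xml")
```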
Pawel Raasz
1083b3b58c
[core]Migrate Erf operator to new API (#20867)
* Migrate Erf operator to new API

* Remove `visit_attributes` as it is the same as in the base class

* Optimize reference implementation for size
2023-11-06 13:46:03 +01:00
Mikhail Ryzhov
28279013af
aligned lin build timeouts (#20885) 2023-11-06 14:07:24 +04:00
Sebastian Golebiewski
a554611644
Updating notebooks (#20865) 2023-11-06 10:08:58 +01:00
Pawel Raasz
7d74dac3ee
[core]Migrate GridSample operator to new API (#20852)
* Migrate GridSample to new API

* Refactor GridSample to reduce binary size
- use function pointer instead of std::function (simpler, less code size)
- use RoundingGuard instead of manually setting/restoring the rounding mode
- move interpolation selection outside the main data processing loop
2023-11-06 06:31:02 +01:00
Pawel Raasz
ae343a0178
[core]Migrate FloorMod operator to new API (#20829)
* Migrate FloorMod operator to new API

* Remove `visit_attributes` as it is the same as in the base class

* Restore FloorMod calculation for signed values (floating-point and integral)
2023-11-06 06:26:47 +01:00
Vitaliy Urusovskij
47cdbb9df5
Template plugin folder to API2.0 (#20862)
* Remove unused IE namespace from template plugin

* Remove unused `ngraph::HostTensor`

* Template `ConvolutionLayerTest` to API2.0

* Template `ReshapeLayerTest` to API2.0

* Template `SplitLayerTest` to API2.0

* Remove extra `InferenceEngine::PluginConfigParams`

* CLangFormat
2023-11-04 12:36:16 +04:00
Andrey Kashchikhin
fda1fd9dc1
[CI] [GHA] Add missing setup-python action checkout; use custom action across all pipelines (#20863)
* use unified setup-python actions across all pipelines

* rm triggers
2023-11-04 11:58:01 +04:00
Anastasiia Pnevskaia
cc389c23ca
Removed logic of building example_input by shape. (#20859) 2023-11-03 20:45:34 +04:00
Alina Kladieva
86c638a595
Temporarily restrict flake8_builtins version (#20864) 2023-11-03 16:59:47 +01:00
Tomasz Jankowski
09010657e2
[core] Migrate Gelu operator to new API (#20833)
* Drop HostTensor

* Remove useless overwrite method
2023-11-03 12:16:51 +00:00
Sergey Lyalin
1960536e8e
Fix GPTQ model conversion after two breaking changes (#20823)
* Fix GPTQ model conversion after two breaking changes

* Code style fix

* Remove redundant check
2023-11-03 13:47:51 +04:00
Tomasz Jankowski
3386b85c08
[core] Migrate Divide operator to new API (#20766)
* Use ov:: namespace

* Drop HostTensor

* Use ov::util::make_tensor_of_max_value

instead of ngraph::get_constant_max_of_type

* Use ov::util::make_tensor_of_min_value instead of

ngraph::get_constant_min_of_type

* Refactor get_constant_min_of_type
2023-11-03 10:35:43 +01:00
Anatoliy Talamanov
f890bf7930
Extend throughput benchmark with device CLI parameter (#20816)
* Extend throughput benchmark CLI parameters

* Added device name as the second CLI parameter with default CPU value

* Update samples/cpp/benchmark/throughput_benchmark/main.cpp

Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>

* Address review comments

* Modified python version
* Modified documentation

* Address review comments

* Fixed the comment
* Modified python doc
* Fixed device name handling in python version

* Update main.cpp

* Update throughput_benchmark.py

---------

Co-authored-by: Zlobin Vladimir <vladimir.zlobin@intel.com>
2023-11-03 09:57:04 +01:00
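A hedged sketch of the device-selection behavior the commit above adds to the Python throughput benchmark: the device name comes from the second CLI argument and defaults to CPU. The model path handling and dummy input creation are illustrative, not the actual sample code:

```python
# Hedged example: optional device CLI parameter for a throughput-style benchmark.
import sys
import numpy as np
import openvino as ov

def main():
    model_path = sys.argv[1]
    device = sys.argv[2] if len(sys.argv) > 2 else "CPU"  # new optional parameter

    core = ov.Core()
    compiled = core.compile_model(model_path, device,
                                  {"PERFORMANCE_HINT": "THROUGHPUT"})

    # Throughput mode: keep several asynchronous requests in flight.
    queue = ov.AsyncInferQueue(compiled)
    dummy = {port.any_name: np.zeros(list(port.shape), dtype=np.float32)
             for port in compiled.inputs}  # assumes f32 static-shape inputs
    for _ in range(100):
        queue.start_async(dummy)
    queue.wait_all()

if __name__ == "__main__":
    main()
```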
Anatoliy Talamanov
c20d52dc4f
Extend sync benchmark CLI parameters (#20844) 2023-11-03 09:51:22 +01:00
Tomasz Jankowski
0effa37811
[core] Migrate HSigmoid operator to new API (#20836)
* Drop ngraph remains

* Use ov::Tensor

instead of ngraph::HostTensor
2023-11-03 09:10:32 +01:00
Vitaliy Urusovskij
0955faef93
Remove use of convertOps2Nodes() & convert2OutVect() (#20837) 2023-11-03 11:00:33 +04:00
Vitaliy Urusovskij
caa81a0b3c
Remove use of legacy ng/runtime/shared_buffer.hpp (#20840) 2023-11-03 09:09:49 +04:00
Andrei Gorbachev
ff7b49c14d
add a few tests (#20824) 2023-11-02 16:40:47 +00:00
Pawel Raasz
8e4c4c3510
[core]Drop host tensor support in TensorAccessor (#20831)
* Remove `get_tensor_data_as` functions for HostTensor

* Remove HostTensor support in TA

* Update doxy comments

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>

---------

Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
2023-11-02 16:15:52 +00:00
Aleksandr Voron
e8f21eefae
[CPU] Add FP16 support to MatrixNms (#20804) 2023-11-02 14:40:32 +00:00
Anastasiia Pnevskaia
3f5f923a70
[DOC] Update list of TF formats imported from memory. (#20834)
* Update list of TF formats.

* Minor correction.

* Added comment.

* Update docs/articles_en/openvino_workflow/model_preparation/Convert_Model_From_TensorFlow.md

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Model changed.

* Update docs/articles_en/openvino_workflow/model_preparation/Convert_Model_From_TensorFlow.md

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-11-02 17:31:08 +04:00
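A hedged illustration of the "import from memory" flow that the documentation update above covers: an in-memory tf.keras model passed straight to ov.convert_model, with no SavedModel directory on disk. The toy model itself is illustrative:

```python
# Hedged example: converting a TensorFlow model object directly from memory.
import tensorflow as tf
import openvino as ov

keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

ov_model = ov.convert_model(keras_model)      # conversion directly from memory
compiled = ov.compile_model(ov_model, "CPU")  # ready for inference
```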