[DOCS] fix npu mention (#21147)
@@ -12,8 +12,11 @@
    openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
    openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
 
-In 2023.1 OpenVINO release a new OVC (OpenVINO Model Converter) tool has been introduced with the corresponding Python API: ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` represent
-a lightweight alternative of ``mo`` and ``openvino.tools.mo.convert_model`` which are considered legacy API now. In this article, all the differences between ``mo`` and ``ovc`` are summarized and the transition guide from the legacy API to the new API is provided.
+In the 2023.1 OpenVINO release OpenVINO Model Converter was introduced with the corresponding
+Python API: ``openvino.convert_model`` method. ``ovc`` and ``openvino.convert_model`` represent
+a lightweight alternative of ``mo`` and ``openvino.tools.mo.convert_model`` which are considered
+legacy API now. In this article, all the differences between ``mo`` and ``ovc`` are summarized
+and the transition guide from the legacy API to the new API is provided.
 
 Parameters Comparison
 #####################
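The hunk above documents the move from the legacy ``openvino.tools.mo.convert_model`` API to the lightweight ``openvino.convert_model`` introduced in 2023.1. A minimal sketch of the two entry points; the helper function is illustrative only (not part of OpenVINO), and the commented usage assumes an OpenVINO >= 2023.1 installation with a placeholder model path:

```python
# Illustrative helper (not part of OpenVINO): names the conversion entry
# point for each API generation described in the docs change above.

def conversion_api(legacy: bool) -> str:
    """Return the import path of the model-conversion entry point."""
    if legacy:
        # Legacy Model Optimizer Python API, now deprecated
        return "openvino.tools.mo.convert_model"
    # Lightweight replacement introduced in the 2023.1 release
    return "openvino.convert_model"

# Typical new-API usage (requires OpenVINO installed; "model.onnx" is a
# placeholder path):
#
#     from openvino import convert_model, save_model
#     ov_model = convert_model("model.onnx")
#     save_model(ov_model, "model.xml")
```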
||||
@@ -1,6 +1,8 @@
 # GNA Device {#openvino_docs_OV_UG_supported_plugins_GNA}
 
 @sphinxdirective
 
+.. meta::
|
||||
@@ -20,11 +22,12 @@ For more details on how to configure a system to use GNA, see the :doc:`GNA conf
 
 Intel's GNA is being discontinued and Intel® Core™ Ultra (formerly known as Meteor Lake)
 will be the last generation of hardware to include it.
-For this reason, OpenVINO 2023.2 will also be the last version supporting the GNA plugin.
-Consider Intel's new Visual Processing Unit as a low-power solution for offloading
+For this reason, the GNA plugin will soon be discontinued.
+Consider Intel's new Neural Processing Unit as a low-power solution for offloading
 neural network computation, for processors offering the technology.
 
 Intel® GNA Generational Differences
 ###########################################################
||||