From 0c12ee6015205c857d0ecaa64ac9ccd7c561e7fe Mon Sep 17 00:00:00 2001
From: Karol Blaszczak
Date: Thu, 20 Apr 2023 14:11:16 +0200
Subject: [PATCH] [DOCS] fix for copyright and trademark glyphs (#17021)

---
 .../Convert_GNMT_From_Tensorflow.md           |  2 +-
 .../Convert_WideAndDeep_Family_Models.md      |  4 +--
 .../Customize_Model_Optimizer.md              |  4 +--
 .../supported_plugins/Supported_Devices.md    | 26 +++++++++----------
 .../dldt_deployment_optimization_common.md    |  2 +-
 5 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md
index 0cb950cf94a..17c738751bd 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md
@@ -219,7 +219,7 @@ To generate ``vocab.bpe.32000``, execute the ``nmt/scripts/wmt16_en_de.sh`` scri
    --output_dir /path/to/output/IR/
 
-Input and output cutting with the ``--input`` and ``--output`` options is required since OpenVINO™ does not support ``IteratorGetNext`` and ``LookupTableFindV2`` operations.
+Input and output cutting with the ``--input`` and ``--output`` options is required since OpenVINO™ does not support ``IteratorGetNext`` and ``LookupTableFindV2`` operations.
 
 Input cutting:
 
diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md
index c3583380d2e..9a632c26471 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md
@@ -41,7 +41,7 @@ The Wide and Deep model is no longer in the master branch of the repository but
 
 **Step 2**. 
Train the model -As the OpenVINO™ toolkit does not support the categorical with hash and crossed features, such feature types must be switched off in the model +As the OpenVINO™ toolkit does not support the categorical with hash and crossed features, such feature types must be switched off in the model by changing the ``build_model_columns()`` function in `census_dataset.py` as follows: .. code-block:: python @@ -146,7 +146,7 @@ Use the following command line to convert the saved model file with the checkpoi --output head/predictions/probabilities -The model contains operations unsupported by the OpenVINO™ toolkit such as ``IteratorGetNext`` and ``LookupTableFindV2``, so the Model Optimizer must prune these nodes. +The model contains operations unsupported by the OpenVINO™ toolkit such as ``IteratorGetNext`` and ``LookupTableFindV2``, so the Model Optimizer must prune these nodes. The pruning is specified through `--input` option. The prunings for ``IteratorGetNext:*`` nodes correspond to numeric features. The pruning for each categorical feature consists of three prunings for the following nodes: ``*/to_sparse_input/indices:0``, ``*/hash_table_Lookup/LookupTableFindV2:0``, and ``*/to_sparse_input/dense_shape:0``. diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md index adef0064a8e..97ba5b272c5 100644 --- a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md +++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md @@ -238,7 +238,7 @@ Methods `in_port()` and `output_port()` of the `Node` class are used to get and how to use them, refer to the [Graph Traversal and Modification Using Ports and Connections](@ref graph-ports-and-conneсtions) section. > **NOTE**: A shape inference function should perform output shape calculation in the original model layout. 
For -> example, OpenVINO™ supports Convolution operations in NCHW layout only but TensorFlow supports NHWC layout as +> example, OpenVINO™ supports Convolution operations in NCHW layout only but TensorFlow supports NHWC layout as > well. Model Optimizer shape inference function calculates output shapes for NHWC Convolutions in NHWC layout and only > during the layout change phase the shape is converted to NCHW. @@ -259,7 +259,7 @@ More information on how to develop middle transformations and dedicated API desc There are several middle transformations responsible for changing model layout from NHWC to NCHW. These transformations are triggered by default for TensorFlow models as TensorFlow supports Convolution operations in the NHWC layout. -This layout change is disabled automatically if the model does not have operations that OpenVINO&trade needs to execute in the NCHW layout, for example, Convolutions in NHWC layout. +This layout change is disabled automatically if the model does not have operations that OpenVINO™ needs to execute in the NCHW layout, for example, Convolutions in NHWC layout. 
For more details on how it works, refer to the source code of the transformations mentioned in the below summary of the process: diff --git a/docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md b/docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md index 2c1645fd83c..8a161c184db 100644 --- a/docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md +++ b/docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md @@ -17,21 +17,21 @@ The OpenVINO Runtime provides unique capabilities to infer deep learning models +--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ | OpenVINO Device | Supported Hardware | +==========================================================================+===============================================================================================================+ -|| :doc:`GPU ` | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics | +|| :doc:`GPU ` | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics | +--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ -|| :doc:`CPU ` | Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector | -|| | Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel® Core™ Processors with Intel® | -|| | AVX2, Intel® Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE) | +|| :doc:`CPU ` | Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector | +|| | Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel® Core™ Processors with Intel® | +|| | AVX2, Intel® Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE) | 
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ -|| :doc:`GNA plugin ` | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® | -|| (available in the Intel® Distribution of OpenVINO™ toolkit) | Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® | -|| | Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® | -|| | Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, | -|| | Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® | -|| | Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ | -|| | i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, | -|| | Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, | -|| | Intel® Core™ i3-1000G4 Processor | +|| :doc:`GNA plugin ` | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® | +|| (available in the Intel® Distribution of OpenVINO™ toolkit) | Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® | +|| | Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® | +|| | Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, | +|| | Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® | +|| | Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ | +|| | i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, | +|| | Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, | +|| | Intel® Core™ i3-1000G4 Processor | 
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+ || :doc:`Arm® CPU ` | Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices | || (unavailable in the Intel® Distribution of OpenVINO™ toolkit) | | diff --git a/docs/optimization_guide/dldt_deployment_optimization_common.md b/docs/optimization_guide/dldt_deployment_optimization_common.md index 9307f5db945..1502d89cc11 100644 --- a/docs/optimization_guide/dldt_deployment_optimization_common.md +++ b/docs/optimization_guide/dldt_deployment_optimization_common.md @@ -36,7 +36,7 @@ The key advantage of the Async approach is that when a device is busy with the i In the example below, inference is applied to the results of the video decoding. It is possible to keep two parallel infer requests, and while the current one is processed, the input frame for the next one is being captured. This essentially hides the latency of capturing, so that the overall frame rate is rather determined only by the slowest part of the pipeline (decoding vs inference) and not by the sum of the stages. .. image:: _static/images/synch-vs-asynch.svg - :alt: Intel® VTune™ screenshot + :alt: Intel® VTune™ screenshot Below are example-codes for the regular and async-based approaches to compare:
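The final hunk above touches the async section of ``dldt_deployment_optimization_common.md``, which describes keeping two infer requests in flight so that capturing the next input frame overlaps with inference on the current one. As a minimal sketch of that double-buffering idea (illustration only, not part of the patch; ``capture_frame`` and ``infer`` are placeholder stand-ins for the real video-decoding and inference-request calls):

```python
# Double-buffered pipelining: while the worker thread runs inference on the
# current frame, the main thread captures the next one, hiding capture latency.
from concurrent.futures import ThreadPoolExecutor


def capture_frame(i):
    return f"frame-{i}"        # placeholder for video decoding


def infer(frame):
    return f"result({frame})"  # placeholder for an inference request


def run_pipeline(num_frames):
    results = []
    with ThreadPoolExecutor(max_workers=1) as pool:
        # Submit the first request, then keep exactly one request in flight.
        pending = pool.submit(infer, capture_frame(0))
        for i in range(1, num_frames):
            nxt = capture_frame(i)          # capture overlaps with inference
            results.append(pending.result())  # wait for the previous request
            pending = pool.submit(infer, nxt)
        results.append(pending.result())
    return results
```

With real decode and inference stages, the overall frame rate is then bounded by the slower of the two stages rather than their sum, which is the point the patched section makes.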