[DOCS] fix for copyright and trademark glyphs (#17021)

Karol Blaszczak
2023-04-20 14:11:16 +02:00
committed by GitHub
parent dcfa1f6881
commit 0c12ee6015
5 changed files with 19 additions and 19 deletions


@@ -219,7 +219,7 @@ To generate ``vocab.bpe.32000``, execute the ``nmt/scripts/wmt16_en_de.sh`` scri
--output_dir /path/to/output/IR/
-Input and output cutting with the ``--input`` and ``--output`` options is required since OpenVINO™ does not support ``IteratorGetNext`` and ``LookupTableFindV2`` operations.
+Input and output cutting with the ``--input`` and ``--output`` options is required since OpenVINO does not support ``IteratorGetNext`` and ``LookupTableFindV2`` operations.
Input cutting:


@@ -41,7 +41,7 @@ The Wide and Deep model is no longer in the master branch of the repository but
**Step 2**. Train the model
-As the OpenVINO™ toolkit does not support the categorical with hash and crossed features, such feature types must be switched off in the model
+As the OpenVINO toolkit does not support the categorical with hash and crossed features, such feature types must be switched off in the model
by changing the ``build_model_columns()`` function in ``census_dataset.py`` as follows:
.. code-block:: python
@@ -146,7 +146,7 @@ Use the following command line to convert the saved model file with the checkpoi
--output head/predictions/probabilities
-The model contains operations unsupported by the OpenVINO™ toolkit such as ``IteratorGetNext`` and ``LookupTableFindV2``, so the Model Optimizer must prune these nodes.
+The model contains operations unsupported by the OpenVINO toolkit such as ``IteratorGetNext`` and ``LookupTableFindV2``, so the Model Optimizer must prune these nodes.
The pruning is specified via the ``--input`` option. The prunings for ``IteratorGetNext:*`` nodes correspond to numeric features.
The pruning for each categorical feature consists of three prunings for the following nodes: ``*/to_sparse_input/indices:0``, ``*/hash_table_Lookup/LookupTableFindV2:0``, and ``*/to_sparse_input/dense_shape:0``.
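As a minimal sketch of the naming rule above, the following helper assembles the comma-separated ``--input`` pruning list for a set of categorical features. The helper name and the feature names are hypothetical; only the three node-name suffixes come from the text:

```python
# Hypothetical helper: build the Model Optimizer ``--input`` value for
# pruning categorical features. Each feature contributes three cut points,
# matching the node patterns described above.
CATEGORICAL_NODE_SUFFIXES = (
    "to_sparse_input/indices:0",
    "hash_table_Lookup/LookupTableFindV2:0",
    "to_sparse_input/dense_shape:0",
)

def pruning_inputs(categorical_features):
    """Return the comma-separated value to pass to ``--input``."""
    cuts = []
    for feature in categorical_features:
        for suffix in CATEGORICAL_NODE_SUFFIXES:
            cuts.append(f"{feature}/{suffix}")
    return ",".join(cuts)

# Illustrative feature name only:
print(pruning_inputs(["education"]))
```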


@@ -238,7 +238,7 @@ Methods `in_port()` and `output_port()` of the `Node` class are used to get and
how to use them, refer to the [Graph Traversal and Modification Using Ports and Connections](@ref graph-ports-and-conneсtions) section.
> **NOTE**: A shape inference function should perform output shape calculation in the original model layout. For
-> example, OpenVINO™ supports Convolution operations in NCHW layout only but TensorFlow supports NHWC layout as
+> example, OpenVINO supports Convolution operations in NCHW layout only but TensorFlow supports NHWC layout as
> well. Model Optimizer shape inference function calculates output shapes for NHWC Convolutions in NHWC layout and only
> during the layout change phase the shape is converted to NCHW.
@@ -259,7 +259,7 @@ More information on how to develop middle transformations and dedicated API desc
There are several middle transformations responsible for changing model layout from NHWC to NCHW. These transformations are triggered by default for TensorFlow models as TensorFlow supports Convolution operations in the NHWC layout.
-This layout change is disabled automatically if the model does not have operations that OpenVINO&trade needs to execute in the NCHW layout, for example, Convolutions in NHWC layout.
+This layout change is disabled automatically if the model does not have operations that OpenVINO needs to execute in the NCHW layout, for example, Convolutions in NHWC layout.
For more details on how it works, refer to the source code of the transformations mentioned in the summary of the process below:
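The shape bookkeeping in the layout-change phase can be illustrated without any OpenVINO code: a shape inferred in the original NHWC layout is only rewritten to NCHW when the layout change is triggered. A minimal, pure-Python sketch (function name is illustrative, not a Model Optimizer API):

```python
# Sketch: convert a shape computed in the original NHWC layout to NCHW,
# as happens during the layout-change phase for TensorFlow Convolutions.
def nhwc_to_nchw(shape):
    n, h, w, c = shape
    return (n, c, h, w)

# A shape-inference result for an NHWC Convolution output...
nhwc_shape = (1, 224, 224, 3)
# ...is rewritten only during the layout-change phase:
print(nhwc_to_nchw(nhwc_shape))  # (1, 3, 224, 224)
```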


@@ -17,21 +17,21 @@ The OpenVINO Runtime provides unique capabilities to infer deep learning models
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
| OpenVINO Device | Supported Hardware |
+==========================================================================+===============================================================================================================+
-|| :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>` | Intel&reg; Processor Graphics, including Intel&reg; HD Graphics and Intel&reg; Iris&reg; Graphics |
+|| :doc:`GPU <openvino_docs_OV_UG_supported_plugins_GPU>` | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
-|| :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>` | Intel&reg; Xeon&reg; with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector |
-|| | Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel&reg; Core&trade; Processors with Intel&reg; |
-|| | AVX2, Intel&reg; Atom&reg; Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
+|| :doc:`CPU <openvino_docs_OV_UG_supported_plugins_CPU>` | Intel® Xeon® with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector |
+|| | Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel® Core™ Processors with Intel® |
+|| | AVX2, Intel® Atom® Processors with Intel® Streaming SIMD Extensions (Intel® SSE) |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
-|| :doc:`GNA plugin <openvino_docs_OV_UG_supported_plugins_GNA>` | Intel&reg; Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel&reg; |
-|| (available in the Intel® Distribution of OpenVINO™ toolkit) | Pentium&reg; Silver J5005 Processor, Intel&reg; Pentium&reg; Silver N5000 Processor, Intel&reg; |
-|| | Celeron&reg; J4005 Processor, Intel&reg; Celeron&reg; J4105 Processor, Intel&reg; Celeron&reg; |
-|| | Processor N4100, Intel&reg; Celeron&reg; Processor N4000, Intel&reg; Core&trade; i3-8121U Processor, |
-|| | Intel&reg; Core&trade; i7-1065G7 Processor, Intel&reg; Core&trade; i7-1060G7 Processor, Intel&reg; |
-|| | Core&trade; i5-1035G4 Processor, Intel&reg; Core&trade; i5-1035G7 Processor, Intel&reg; Core&trade; |
-|| | i5-1035G1 Processor, Intel&reg; Core&trade; i5-1030G7 Processor, Intel&reg; Core&trade; i5-1030G4 Processor, |
-|| | Intel&reg; Core&trade; i3-1005G1 Processor, Intel&reg; Core&trade; i3-1000G1 Processor, |
-|| | Intel&reg; Core&trade; i3-1000G4 Processor |
+|| :doc:`GNA plugin <openvino_docs_OV_UG_supported_plugins_GNA>` | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® |
+|| (available in the Intel® Distribution of OpenVINO™ toolkit) | Pentium® Silver J5005 Processor, Intel® Pentium® Silver N5000 Processor, Intel® |
+|| | Celeron® J4005 Processor, Intel® Celeron® J4105 Processor, Intel® Celeron® |
+|| | Processor N4100, Intel® Celeron® Processor N4000, Intel® Core™ i3-8121U Processor, |
+|| | Intel® Core™ i7-1065G7 Processor, Intel® Core™ i7-1060G7 Processor, Intel® |
+|| | Core™ i5-1035G4 Processor, Intel® Core™ i5-1035G7 Processor, Intel® Core™ |
+|| | i5-1035G1 Processor, Intel® Core™ i5-1030G7 Processor, Intel® Core™ i5-1030G4 Processor, |
+|| | Intel® Core™ i3-1005G1 Processor, Intel® Core™ i3-1000G1 Processor, |
+|| | Intel® Core™ i3-1000G4 Processor |
+--------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------+
|| :doc:`Arm® CPU <openvino_docs_OV_UG_supported_plugins_ARM_CPU>` | Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices |
|| (unavailable in the Intel® Distribution of OpenVINO™ toolkit) | |


@@ -36,7 +36,7 @@ The key advantage of the Async approach is that when a device is busy with the i
In the example below, inference is applied to the results of the video decoding. It is possible to keep two parallel infer requests, and while the current one is processed, the input frame for the next one is being captured. This essentially hides the latency of capturing, so that the overall frame rate is determined only by the slowest stage of the pipeline (decoding or inference), not by the sum of the stages.
.. image:: _static/images/synch-vs-asynch.svg
-   :alt: Intel&reg; VTune&trade; screenshot
+   :alt: Intel® VTune screenshot
Below are code examples for the regular and async-based approaches to compare:
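As a minimal illustration of the overlap described above, here is a toy double-buffered loop using the standard library only. The ``decode_frame`` and ``infer`` stand-ins are placeholders, not OpenVINO APIs; a bounded queue keeps one frame in flight while the previous one is being processed:

```python
import queue
import threading

# Toy stand-ins for the two pipeline stages; names are illustrative only.
def decode_frame(i):
    return f"frame-{i}"

def infer(frame):
    return f"result({frame})"

def pipeline(num_frames):
    frames = queue.Queue(maxsize=1)  # at most one frame captured ahead
    results = []

    def capture():
        # Producer: capture/decode the next frame while inference runs.
        for i in range(num_frames):
            frames.put(decode_frame(i))
        frames.put(None)  # sentinel marks end of stream

    producer = threading.Thread(target=capture)
    producer.start()
    while True:
        frame = frames.get()
        if frame is None:
            break
        results.append(infer(frame))
    producer.join()
    return results

print(pipeline(3))
```

Because capture and inference run on separate threads, the loop's total time approaches that of the slower stage rather than the sum of both.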