[DOCS] Fixing formatting issues in articles (#17994)
* fixing-formatting
This commit is contained in: parent c8f3ed814b, commit 4270dca591
@ -17,7 +17,7 @@ OpenVINO Runtime offers multiple inference modes to allow optimum hardware utili
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may increase its performance and portability. The automated modes are listed below; a minimal usage sketch follows the list:

* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
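As a quick Python sketch of selecting one of these modes (not part of the original article; ``model.xml`` is a placeholder path), the mode is requested through the device string passed to ``compile_model``:

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")
   # "AUTO" delegates device selection to OpenVINO Runtime; strings such as
   # "MULTI:CPU,GPU" or "HETERO:GPU,CPU" request the other automated modes.
   compiled_model = core.compile_model(model, "AUTO")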
@ -56,7 +56,7 @@ Examples of CLI Commands
.. math::

   C_{i} = \log(S \cdot C_{i})

where :math:`C` is the counts array, :math:`C_{i}` is the :math:`i^{th}` element of the counts array, and :math:`|C|` is the number of elements in the counts array;

* The normalized counts are subtracted from the biases of the last or next-to-last layer (if the last layer is SoftMax).
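As a rough numeric sketch of the normalization above (the definition of the scaling factor ``S`` as the reciprocal of the total count is an assumption here; the values are made up):

.. code-block:: python

   import math

   counts = [10.0, 20.0, 70.0]                      # hypothetical counts array C
   S = 1.0 / sum(counts)                            # assumed scaling factor
   normalized = [math.log(S * c) for c in counts]   # C_i = log(S * C_i)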
@ -64,13 +64,13 @@ Examples of CLI Commands
* If you want to remove the last SoftMax layer in the topology, launch the model conversion with the ``remove_output_softmax`` flag:

  .. code-block:: sh

     mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax

  Model conversion API finds the last layer of the topology and removes this layer only if it is a SoftMax layer.

  .. note:: Model conversion can remove the SoftMax layer only if the topology has one output.

* You can use the *OpenVINO Speech Recognition* sample application to run sample inference of Kaldi models. This sample supports models with only one output. If your model has several outputs, specify the desired one with the ``output`` option.
@ -164,23 +164,23 @@ Command-Line Interface (CLI) Examples Using TensorFlow-Specific Parameters
* Launching model conversion for Inception V1 frozen model when the model file is a plain-text protobuf:

  .. code-block:: sh

     mo --input_model inception_v1.pbtxt --input_model_is_text -b 1

* Launching model conversion for Inception V1 frozen model and dumping information about the graph to the TensorBoard log directory ``/tmp/log_dir``:

  .. code-block:: sh

     mo --input_model inception_v1.pb -b 1 --tensorboard_logdir /tmp/log_dir

* Launching model conversion for a BERT model in the SavedModel format with three inputs. Explicitly specify the input shapes, where the batch size and the sequence length equal 2 and 30, respectively:

  .. code-block:: sh

     mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]

Conversion of TensorFlow models from memory using the Python API
######################################################################
@ -189,96 +189,96 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly fro
* ``tf.keras.Model``

  .. code-block:: python

     model = tf.keras.applications.ResNet50(weights="imagenet")
     ov_model = convert_model(model)
* ``tf.keras.layers.Layer``. Requires setting the ``input_shape``:

  .. code-block:: python

     import tensorflow_hub as hub

     model = hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5")
     ov_model = convert_model(model, input_shape=[-1, 224, 224, 3])
* ``tf.Module``. Requires setting the ``input_shape``:

  .. code-block:: python

     class MyModule(tf.Module):
         def __init__(self, name=None):
             super().__init__(name=name)
             self.variable1 = tf.Variable(5.0, name="var1")
             self.variable2 = tf.Variable(1.0, name="var2")

         def __call__(self, x):
             return self.variable1 * x + self.variable2

     model = MyModule(name="simple_module")
     ov_model = convert_model(model, input_shape=[-1])
* ``tf.compat.v1.Graph``

  .. code-block:: python

     with tf.compat.v1.Session() as sess:
         inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
         inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
         output = tf.nn.relu(inp1 + inp2, name='Relu')
         tf.compat.v1.global_variables_initializer()
         model = sess.graph

     ov_model = convert_model(model)
* ``tf.compat.v1.GraphDef``

  .. code-block:: python

     with tf.compat.v1.Session() as sess:
         inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
         inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
         output = tf.nn.relu(inp1 + inp2, name='Relu')
         tf.compat.v1.global_variables_initializer()
         model = sess.graph_def

     ov_model = convert_model(model)
* ``tf.function``

  .. code-block:: python

     @tf.function(
         input_signature=[tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32),
                          tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32)])
     def func(x, y):
         return tf.nn.sigmoid(tf.nn.relu(x + y))

     ov_model = convert_model(func)
* ``tf.compat.v1.Session``

  .. code-block:: python

     with tf.compat.v1.Session() as sess:
         inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
         inp2 = tf.compat.v1.placeholder(tf.float32, [100], 'Input2')
         output = tf.nn.relu(inp1 + inp2, name='Relu')
         tf.compat.v1.global_variables_initializer()

         ov_model = convert_model(sess)
* ``tf.train.Checkpoint``

  .. code-block:: python

     model = tf.keras.Model(...)
     checkpoint = tf.train.Checkpoint(model)
     save_path = checkpoint.save(save_directory)
     # ...
     checkpoint.restore(save_path)
     ov_model = convert_model(checkpoint)

Supported TensorFlow and TensorFlow 2 Keras Layers
##################################################
@ -21,25 +21,25 @@ This article provides the instructions and examples on how to convert `GluonCV S
2. Run the model conversion API, specifying the ``enable_ssd_gluoncv`` option. Make sure the ``input_shape`` parameter is set to the input shape layout of your model (NHWC or NCHW). The examples below illustrate running model conversion for the SSD and YOLO-v3 models trained with the NHWC layout and located in ``<model_directory>``:
   * **For GluonCV SSD topologies:**

     .. code-block:: sh

        mo --input_model <model_directory>/ssd_512_mobilenet1.0.params --enable_ssd_gluoncv --input_shape [1,512,512,3] --input data --output_dir <OUTPUT_MODEL_DIR>
   * **For YOLO-v3 topology:**

     * To convert the model:

       .. code-block:: sh

          mo --input_model <model_directory>/yolo3_mobilenet1.0_voc-0000.params --input_shape [1,255,255,3] --output_dir <OUTPUT_MODEL_DIR>
     * To convert the model, replacing the subgraph with RegionYolo layers:

       .. code-block:: sh

          mo --input_model <model_directory>/models/yolo3_mobilenet1.0_voc-0000.params --input_shape [1,255,255,3] --transformations_config "front/mxnet/yolo_v3_mobilenet1_voc.json" --output_dir <OUTPUT_MODEL_DIR>

@endsphinxdirective
@ -184,7 +184,7 @@ Converting a YOLACT Model to the OpenVINO IR format
   mo --input_model /path/to/yolact.onnx

**Step 5**. Embed input preprocessing into the IR:
To get a performance gain by offloading the application of mean/scale values and the RGB->BGR conversion to OpenVINO, use the following model conversion API parameters:
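The same embedding can be expressed through the model conversion Python API. A sketch (the mean/scale numbers below are placeholders, not the article's values):

.. code-block:: python

   from openvino.tools.mo import convert_model

   # reverse_input_channels embeds the RGB->BGR swap; mean/scale values are folded into the IR
   ov_model = convert_model(
       "/path/to/yolact.onnx",
       reverse_input_channels=True,
       mean_values=[123.0, 117.0, 104.0],   # placeholder values
       scale_values=[58.0, 57.0, 57.5],     # placeholder values
   )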
@ -79,13 +79,13 @@ Follow these steps to make a pretrained TensorFlow BERT model reshapable over ba
* For UNIX-like systems, run the following command:

  .. code-block:: sh

     wget https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py
* For Windows systems:

  Download the `Python script <https://gist.githubusercontent.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e/raw/17b8dd0d724281ed7c3b2aeeda662b92809aadd5/download_glue_data.py>`__ to the current working directory.

6. Download GLUE data by running:
@ -40,15 +40,13 @@ If you have another implementation of CRNN model, it can be converted to OpenVIN
* For Linux:

  .. code-block:: sh

     export PYTHONPATH="${PYTHONPATH}:/path/to/CRNN_Tensorflow/"
* For Windows, add ``/path/to/CRNN_Tensorflow/`` to the ``PYTHONPATH`` environment variable in settings.

2. Edit the ``tools/demo_shadownet.py`` script. After the ``saver.restore(sess=sess, save_path=weights_path)`` line, add the following code:

.. code-block:: python
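   # A sketch of the typical freeze-graph code added at this point (an assumption,
   # not the article's verbatim snippet); the output node name must match your model.
   from tensorflow.python.framework import graph_io

   frozen = tf.graph_util.convert_variables_to_constants(
       sess, sess.graph_def, ['shadow/LSTMLayers/transpose_time_major'])
   graph_io.write_graph(frozen, '.', 'frozen_model.pb', as_text=False)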
@ -19,10 +19,10 @@ To download the model, follow the instruction below:
* For UNIX-like systems, run the following command:

  .. code-block:: sh

     wget -O - https://github.com/mozilla/DeepSpeech/archive/v0.8.2.tar.gz | tar xvfz -
     wget -O - https://github.com/mozilla/DeepSpeech/releases/download/v0.8.2/deepspeech-0.8.2-checkpoint.tar.gz | tar xvfz -
* For Windows systems:
@ -27,8 +27,7 @@ This tutorial explains how to convert Neural Collaborative Filtering (NCF) model
where ``rating/BiasAdd`` is an output node.

3. Convert the model to the OpenVINO format. If you look at your frozen model, you can see that it has one input that is split into four ``ResourceGather`` layers. (Click image to zoom in.)

.. image:: ./_static/images/NCF_start.svg
@ -40,16 +40,16 @@ To enable model caching, the application must specify a folder to store the cach
   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_caching.cpp
         :language: cpp
         :fragment: [ov:caching:part0]

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_caching.py
         :language: py
         :fragment: [ov:caching:part0]
With this code, if the device specified by ``device_name`` supports import/export model capability,
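The referenced snippets reduce to something like this Python sketch (``./cache`` and ``model.xml`` are placeholders):

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   core.set_property({"CACHE_DIR": "./cache"})          # enable model caching
   compiled = core.compile_model("model.xml", "GPU")    # subsequent runs may load the cached blob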
@ -74,6 +74,7 @@ Example usage:
"Postponed Return" is a practice to omit the overhead of ``OVDict``, which is always returned from
synchronous calls. "Postponed Return" can be applied when:

* only a part of the output data is required. For example, only one specific output is significant
  in a given pipeline step, and all outputs are large and, thus, expensive to copy.
* data is not required "now". For example, it can be later extracted inside the pipeline as
@ -56,101 +56,101 @@ When using the ``reshape`` method, you may take one of the approaches:
1. You can pass a new shape to the method in order to change the input shape of
   the model with a single input. See the example of adjusting spatial dimensions to the input image:

   .. tab-set::

      .. tab-item:: C++
         :sync: cpp

         .. doxygensnippet:: docs/snippets/ShapeInference.cpp
            :language: cpp
            :fragment: spatial_reshape
      .. tab-item:: Python
         :sync: py

         .. doxygensnippet:: docs/snippets/ShapeInference.py
            :language: python
            :fragment: simple_spatials_change

   To do the opposite - to resize the input image to match the input shapes of the model -
   use the :doc:`pre-processing API <openvino_docs_OV_UG_Preprocessing_Overview>`.
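   In Python, this first approach amounts to a one-liner (a sketch; the shape values are illustrative):

   .. code-block:: python

      model.reshape([1, 3, 448, 448])   # single-input model: pass the new shape directly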
2. You can express a reshape plan, specifying the input by the port, the index, or the tensor name:

   .. tab-set::

      .. tab-item:: Port

         .. tab-set::

            .. tab-item:: C++
               :sync: cpp

               ``map<ov::Output<ov::Node>, ov::PartialShape>`` specifies input by passing actual input port:

               .. doxygensnippet:: docs/snippets/ShapeInference.cpp
                  :language: cpp
                  :fragment: [obj_to_shape]

            .. tab-item:: Python
               :sync: py

               ``openvino.runtime.Output`` dictionary key specifies input by passing actual input object.
               Dictionary values representing new shapes could be ``PartialShape``:

               .. doxygensnippet:: docs/snippets/ShapeInference.py
                  :language: python
                  :fragment: [obj_to_shape]
      .. tab-item:: Index

         .. tab-set::

            .. tab-item:: C++
               :sync: cpp

               ``map<size_t, ov::PartialShape>`` specifies input by its index:

               .. doxygensnippet:: docs/snippets/ShapeInference.cpp
                  :language: cpp
                  :fragment: [idx_to_shape]

            .. tab-item:: Python
               :sync: py

               ``int`` dictionary key specifies input by its index.
               Dictionary values representing new shapes could be ``tuple``:

               .. doxygensnippet:: docs/snippets/ShapeInference.py
                  :language: python
                  :fragment: [idx_to_shape]
      .. tab-item:: Tensor Name

         .. tab-set::

            .. tab-item:: C++
               :sync: cpp

               ``map<string, ov::PartialShape>`` specifies input by its name:

               .. doxygensnippet:: docs/snippets/ShapeInference.cpp
                  :language: cpp
                  :fragment: [name_to_shape]

            .. tab-item:: Python
               :sync: py

               ``str`` dictionary key specifies input by its name.
               Dictionary values representing new shapes could be ``str``:

               .. doxygensnippet:: docs/snippets/ShapeInference.py
                  :language: python
                  :fragment: [name_to_shape]
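   For illustration, a Python sketch combining the three styles above (``model.xml`` and the tensor name ``"data"`` are placeholder assumptions):

   .. code-block:: python

      from openvino.runtime import Core, PartialShape

      core = Core()
      model = core.read_model("model.xml")

      model.reshape({0: PartialShape([1, 3, 448, 448])})                # by index
      model.reshape({"data": PartialShape([1, 3, 448, 448])})           # by tensor name
      model.reshape({model.input(0): PartialShape([1, 3, 448, 448])})   # by port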
You can find the usage scenarios of the ``reshape`` method in
@ -161,41 +161,41 @@ Considering that JIT kernels can be affected by L1/L2/L3 cache size and the numb
- L2/L3 cache emulation

  Hack the function that gets the cache size,

  ``unsigned int dnnl::impl::cpu::platform::get_per_core_cache_size(int level)``,

  to make it return an emulated cache size in the analyzed stage. The simplest way is to leverage an environment variable to pass the emulated cache size, for example:
  .. code-block:: cpp

     #if defined(SELECTIVE_BUILD_ANALYZER)
         if (level == 2) {
             const char* L2_cache_size = std::getenv("OV_CC_L2_CACHE_SIZE");
             if (L2_cache_size) {
                 int size = std::atoi(L2_cache_size);
                 if (size > 0) {
                     return size;
                 }
             }
         } else if (level == 3) {
             const char* L3_cache_size = std::getenv("OV_CC_L3_CACHE_SIZE");
             if (L3_cache_size) {
                 int size = std::atoi(L3_cache_size);
                 if (size > 0) {
                     return size;
                 }
             }
         } else if (level == 1) {
             const char* L1_cache_size = std::getenv("OV_CC_L1_CACHE_SIZE");
             if (L1_cache_size) {
                 int size = std::atoi(L1_cache_size);
                 if (size > 0) {
                     return size;
                 }
             }
         }
     #endif
- CPU core number emulation
@ -29,20 +29,20 @@ Feature Support Matrix
The table below demonstrates the support of key features by OpenVINO device plugins.

========================================================================================= ============================ =============== ===============
Capability                                                                                CPU                          GPU             GNA
========================================================================================= ============================ =============== ===============
:doc:`Heterogeneous execution <openvino_docs_OV_UG_Hetero_execution>`                     Yes                          Yes             No
:doc:`Multi-device execution <openvino_docs_OV_UG_Running_on_multiple_devices>`           Yes                          Yes             Partial
:doc:`Automatic batching <openvino_docs_OV_UG_Automatic_Batching>`                        No                           Yes             No
:doc:`Multi-stream execution <openvino_docs_deployment_optimization_guide_tput>`          Yes (Intel® x86-64 only)     Yes             No
:doc:`Models caching <openvino_docs_OV_UG_Model_caching_overview>`                        Yes                          Partial         Yes
:doc:`Dynamic shapes <openvino_docs_OV_UG_DynamicShapes>`                                 Yes                          Partial         No
:doc:`Import/Export <openvino_ecosystem>`                                                 Yes                          No              Yes
:doc:`Preprocessing acceleration <openvino_docs_OV_UG_Preprocessing_Overview>`            Yes                          Yes             No
:doc:`Stateful models <openvino_docs_OV_UG_model_state_intro>`                            Yes                          No              Yes
:doc:`Extensibility <openvino_docs_Extensibility_UG_Intro>`                               Yes                          Yes             No
========================================================================================= ============================ =============== ===============
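To check which of these devices are present on a given machine, a quick Python sketch:

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   print(core.available_devices)   # e.g. ['CPU', 'GPU'], depending on the host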
For more details on plugin-specific feature limitations, see the corresponding plugin pages.
@ -78,39 +78,29 @@ Starting with the 2021.4.1 release of OpenVINO™ and the 03.00.00.1363 version
In this mode, the GNA driver automatically falls back on CPU for a particular infer request if the HW queue is not empty.
Therefore, there is no need to switch between GNA and CPU explicitly.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/gna/configure.cpp
         :language: cpp
         :fragment: [include]

      .. doxygensnippet:: docs/snippets/gna/configure.cpp
         :language: cpp
         :fragment: [ov_gna_exec_mode_hw_with_sw_fback]

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/gna/configure.py
         :language: py
         :fragment: [import]

      .. doxygensnippet:: docs/snippets/gna/configure.py
         :language: py
         :fragment: [ov_gna_exec_mode_hw_with_sw_fback]

.. note::
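The referenced snippets boil down to something like the following Python sketch (the configuration key and value are assumed from the GNA plugin's documented names; ``model.xml`` is a placeholder path):

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   model = core.read_model("model.xml")
   # Request hardware execution with automatic software fallback
   compiled = core.compile_model(model, "GNA", {"GNA_DEVICE_MODE": "GNA_HW_WITH_SW_FBACK"})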
@ -428,12 +428,12 @@ on waiting for the completion of inference. The pseudo-code may look as follows:
Limitations
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

- Some primitives in the GPU plugin may block the host thread while waiting for the previous primitives before adding their kernels
  to the command queue. In such cases, the ``ov::InferRequest::start_async()`` call takes much more time to return control to the calling thread,
  as internally it waits for a partial or full network completion.
  Examples of such operations: Loop, TensorIterator, DetectionOutput, NonMaxSuppression.
- Synchronization of pre/post processing jobs and the inference pipeline inside a shared queue is the user's responsibility.
- Throughput mode is not available when queue sharing is used, i.e., only a single stream can be used for each compiled model.
Low-Level Methods for RemoteContext and RemoteTensor Creation
#####################################################################
@ -490,7 +490,7 @@ To see pseudo-code of usage examples, refer to the sections below.
See Also
#######################################
* ``:ref:`ov::Core <doxid-classov-1-1-core>```
* ``:ref:`ov::RemoteTensor <doxid-classov-1-1-remote-tensor>```
@endsphinxdirective
@ -68,17 +68,17 @@ What’s Next?
Now you are ready to try out OpenVINO™. You can use the following tutorials to write your applications using Python and C++.

* Developing in Python:

  * `Start with TensorFlow models with OpenVINO™ <notebooks/101-tensorflow-to-openvino-with-output.html>`__
  * `Start with ONNX and PyTorch models with OpenVINO™ <notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
  * `Start with PaddlePaddle models with OpenVINO™ <notebooks/103-paddle-to-openvino-classification-with-output.html>`__

* Developing in C++:

  * :doc:`Image Classification Async C++ Sample <openvino_inference_engine_samples_classification_sample_async_README>`
  * :doc:`Hello Classification C++ Sample <openvino_inference_engine_samples_hello_classification_README>`
  * :doc:`Hello Reshape SSD C++ Sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`
@endsphinxdirective
@ -83,21 +83,20 @@ What’s Next?
You can try out the toolkit with:

* `Python Quick Start Example <notebooks/201-vision-monodepth-with-output.html>`_ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.

  Visit the :ref:`Tutorials <notebook tutorials>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:

  * `OpenVINO Python API Tutorial <notebooks/002-openvino-api-with-output.html>`__
  * `Basic image classification program with Hello Image Classification <notebooks/001-hello-world-with-output.html>`__
  * `Convert a PyTorch model and use it for image background removal <notebooks/205-vision-background-removal-with-output.html>`__

* `C++ Quick Start Example <openvino_docs_get_started_get_started_demos.html>`__ for step-by-step instructions on building and running a basic image classification C++ application.

  Visit the :ref:`Samples <code samples>` page for other C++ example applications to get you started with OpenVINO, such as:

  * `Basic object detection with the Hello Reshape SSD C++ sample <openvino_inference_engine_samples_hello_reshape_ssd_README.html>`_
  * `Automatic speech recognition C++ sample <openvino_inference_engine_samples_speech_sample_README.html>`_
@endsphinxdirective
@ -30,7 +30,7 @@ See `Installing Additional Components <#optional-installing-additional-component
* `Homebrew <https://brew.sh/>`_
* `CMake 3.13 or higher <https://cmake.org/download/>`__ (choose "macOS 10.13 or later"). Add ``/Applications/CMake.app/Contents/bin`` to path (for default installation).
* `Python 3.7 - 3.11 <https://www.python.org/downloads/mac-osx/>`__. Install and add it to path.
* Apple Xcode Command Line Tools. In the terminal, run ``xcode-select --install`` from any directory to install it.
* (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development)
@ -107,7 +107,7 @@ All subclasses should override the following methods:
- `sampler` - `Sampler` instance that provides a way to iterate over the dataset. (See details below.)
- `metric_per_sample` - if `Metric` is specified and this parameter is set to True, the metric value is
  calculated for each data sample; otherwise, it is calculated for the whole dataset.
- `print_progress` - whether to print inference progress.

*Returns*