[DOCS] Fixing formatting issues in articles (#17994)

* fixing-formatting

parent c8f3ed814b
commit 4270dca591
@ -17,7 +17,7 @@ OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization

The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:

* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
-* :doc:``Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
+* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
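For context on how these modes are selected in practice, a minimal Python sketch of compiling a model for each of them — the model path is a placeholder, and the device lists depend on the host:

.. code-block:: python

   import openvino.runtime as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # placeholder path

   # AUTO picks the most suitable available device by itself;
   # MULTI and HETERO take explicit, priority-ordered device lists.
   compiled_auto = core.compile_model(model, "AUTO")
   compiled_multi = core.compile_model(model, "MULTI:GPU,CPU")
   compiled_hetero = core.compile_model(model, "HETERO:GPU,CPU")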
@ -64,13 +64,13 @@ Examples of CLI Commands

* If you want to remove the last SoftMax layer in the topology, launch the model conversion with the ``remove_output_softmax`` flag:

.. code-block:: sh

   mo --input_model wsj_dnn5b_smbr.nnet --counts wsj_dnn5b_smbr.counts --remove_output_softmax

Model conversion API finds the last layer of the topology and removes this layer only if it is a SoftMax layer.

.. note:: Model conversion can remove the SoftMax layer only if the topology has one output.

* You can use the *OpenVINO Speech Recognition* sample application for the sample inference of Kaldi models. This sample supports models with only one output. If your model has several outputs, specify the desired one with the ``output`` option.
@ -164,21 +164,21 @@ Command-Line Interface (CLI) Examples Using TensorFlow-Specific Parameters

* Launching model conversion for the Inception V1 frozen model when the model file is a plain text protobuf:

.. code-block:: sh

   mo --input_model inception_v1.pbtxt --input_model_is_text -b 1

* Launching model conversion for the Inception V1 frozen model and dumping information about the graph to the TensorBoard log directory ``/tmp/log_dir``:

.. code-block:: sh

   mo --input_model inception_v1.pb -b 1 --tensorboard_logdir /tmp/log_dir

* Launching model conversion for a BERT model in the SavedModel format, with three inputs. Specify the input shapes explicitly, where the batch size and the sequence length equal 2 and 30, respectively:

.. code-block:: sh

   mo --saved_model_dir BERT --input mask,word_ids,type_ids --input_shape [2,30],[2,30],[2,30]
@ -189,7 +189,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory

* ``tf.keras.Model``

.. code-block:: python

   model = tf.keras.applications.ResNet50(weights="imagenet")
   ov_model = convert_model(model)
@ -197,7 +197,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory

* ``tf.keras.layers.Layer``. Requires setting the "input_shape".

.. code-block:: python

   import tensorflow_hub as hub
@ -206,7 +206,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory

* ``tf.Module``. Requires setting the "input_shape".

.. code-block:: python

   class MyModule(tf.Module):
       def __init__(self, name=None):
@ -221,7 +221,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory

* ``tf.compat.v1.Graph``

.. code-block:: python

   with tf.compat.v1.Session() as sess:
       inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
@ -234,7 +234,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory

* ``tf.compat.v1.GraphDef``

.. code-block:: python

   with tf.compat.v1.Session() as sess:
       inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
@ -247,7 +247,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory

* ``tf.function``

.. code-block:: python

   @tf.function(
       input_signature=[tf.TensorSpec(shape=[1, 2, 3], dtype=tf.float32),
@ -259,7 +259,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory

* ``tf.compat.v1.session``

.. code-block:: python

   with tf.compat.v1.Session() as sess:
       inp1 = tf.compat.v1.placeholder(tf.float32, [100], 'Input1')
@ -271,7 +271,7 @@ Model conversion API supports passing TensorFlow/TensorFlow2 models directly from memory

* ``tf.train.checkpoint``

.. code-block:: python

   model = tf.keras.Model(...)
   checkpoint = tf.train.Checkpoint(model)
|
@ -21,13 +21,13 @@ This article provides the instructions and examples on how to convert `GluonCV S
|
|||||||
|
|
||||||
2. Run model conversion API, specifying the ``enable_ssd_gluoncv`` option. Make sure the ``input_shape`` parameter is set to the input shape layout of your model (NHWC or NCHW). The examples below illustrate running model conversion for the SSD and YOLO-v3 models trained with the NHWC layout and located in the ``<model_directory>``:
|
2. Run model conversion API, specifying the ``enable_ssd_gluoncv`` option. Make sure the ``input_shape`` parameter is set to the input shape layout of your model (NHWC or NCHW). The examples below illustrate running model conversion for the SSD and YOLO-v3 models trained with the NHWC layout and located in the ``<model_directory>``:
|
||||||
|
|
||||||
* **For GluonCV SSD topologies:**
|
* **For GluonCV SSD topologies:**
|
||||||
|
|
||||||
.. code-block:: sh
|
.. code-block:: sh
|
||||||
|
|
||||||
mo --input_model <model_directory>/ssd_512_mobilenet1.0.params --enable_ssd_gluoncv --input_shape [1,512,512,3] --input data --output_dir <OUTPUT_MODEL_DIR>
|
mo --input_model <model_directory>/ssd_512_mobilenet1.0.params --enable_ssd_gluoncv --input_shape [1,512,512,3] --input data --output_dir <OUTPUT_MODEL_DIR>
|
||||||
|
|
||||||
* **For YOLO-v3 topology:**
|
* **For YOLO-v3 topology:**
|
||||||
|
|
||||||
* To convert the model:
|
* To convert the model:
|
||||||
|
|
||||||
@ -39,7 +39,7 @@ This article provides the instructions and examples on how to convert `GluonCV S

.. code-block:: sh

   mo --input_model <model_directory>/models/yolo3_mobilenet1.0_voc-0000.params --input_shape [1,255,255,3] --transformations_config "front/mxnet/yolo_v3_mobilenet1_voc.json" --output_dir <OUTPUT_MODEL_DIR>

@endsphinxdirective
@ -184,7 +184,7 @@ Converting a YOLACT Model to the OpenVINO IR format

   mo --input_model /path/to/yolact.onnx

-**Step 4**. Embed input preprocessing into the IR:
+**Step 5**. Embed input preprocessing into the IR:

To gain performance by offloading the mean/scale values and the RGB->BGR conversion to the OpenVINO application, use the following model conversion API parameters:
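The parameter list itself falls outside this hunk. As a sketch only, the equivalent Python conversion call would pass per-channel mean/scale values and a channel-reversal flag — the numeric values below are placeholders, not taken from this diff:

.. code-block:: python

   # sketch: the values are placeholders, not YOLACT's documented statistics
   from openvino.tools.mo import convert_model

   ov_model = convert_model(
       "/path/to/yolact.onnx",
       mean_values=[123.68, 116.78, 103.94],  # placeholder per-channel means
       scale_values=[58.4, 57.12, 57.38],     # placeholder per-channel scales
       reverse_input_channels=True,           # embed the RGB->BGR swap into the IR
   )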
@ -47,8 +47,6 @@ If you have another implementation of CRNN model, it can be converted to OpenVIN

* For Windows, add ``/path/to/CRNN_Tensorflow/`` to the ``PYTHONPATH`` environment variable in settings.

2. Edit the ``tools/demo_shadownet.py`` script. After the ``saver.restore(sess=sess, save_path=weights_path)`` line, add the following code:

.. code-block:: python
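The snippet itself is cut off by the hunk boundary. A typical freeze-and-serialize sequence at that point might look as follows — the output node name is an assumption for illustration, not taken from this diff:

.. code-block:: python

   # sketch: the output node name below is an assumption, not from this diff
   from tensorflow.python.framework import graph_io

   frozen = tf.graph_util.convert_variables_to_constants(
       sess, sess.graph_def, ['shadow/LSTMLayers/transpose_time_major'])
   graph_io.write_graph(frozen, '.', 'frozen_graph.pb', as_text=False)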
@ -19,7 +19,7 @@ To download the model, follow the instruction below:

* For UNIX-like systems, run the following command:

.. code-block:: sh

   wget -O - https://github.com/mozilla/DeepSpeech/archive/v0.8.2.tar.gz | tar xvfz -
   wget -O - https://github.com/mozilla/DeepSpeech/releases/download/v0.8.2/deepspeech-0.8.2-checkpoint.tar.gz | tar xvfz -
|
@ -27,8 +27,7 @@ This tutorial explains how to convert Neural Collaborative Filtering (NCF) model
|
|||||||
|
|
||||||
where ``rating/BiasAdd`` is an output node.
|
where ``rating/BiasAdd`` is an output node.
|
||||||
|
|
||||||
3. Convert the model to the OpenVINO format. If you look at your frozen model, you can see that
|
3. Convert the model to the OpenVINO format. If you look at your frozen model, you can see that it has one input that is split into four ``ResourceGather`` layers. (Click image to zoom in.)
|
||||||
it has one input that is split into four ``ResourceGather`` layers. (Click image to zoom in.)
|
|
||||||
|
|
||||||
.. image:: ./_static/images/NCF_start.svg
|
.. image:: ./_static/images/NCF_start.svg
|
||||||
|
|
||||||
|
@ -74,6 +74,7 @@ Example usage:

"Postponed Return" is a practice to omit the overhead of ``OVDict``, which is always returned from
synchronous calls. "Postponed Return" could be applied when:

* only a part of the output data is required. For example, only one specific output is significant
  in a given pipeline step, and all outputs are large and thus expensive to copy.
* the data is not required "now". For example, it can be extracted later inside the pipeline as
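A minimal sketch of the pattern — the input name and output index are placeholders: run the request asynchronously, then pull out only the tensor you need instead of receiving a full ``OVDict``:

.. code-block:: python

   # minimal sketch of "Postponed Return"; input name and output index are placeholders
   infer_request = compiled_model.create_infer_request()
   infer_request.start_async({"data": input_tensor})  # no OVDict is constructed here
   infer_request.wait()
   # fetch only the single output that matters, skipping copies of the rest
   result = infer_request.get_output_tensor(0).data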
|
@ -56,9 +56,9 @@ When using the ``reshape`` method, you may take one of the approaches:
|
|||||||
|
|
||||||
|
|
||||||
1. You can pass a new shape to the method in order to change the input shape of
|
1. You can pass a new shape to the method in order to change the input shape of
|
||||||
the model with a single input. See the example of adjusting spatial dimensions to the input image:
|
the model with a single input. See the example of adjusting spatial dimensions to the input image:
|
||||||
|
|
||||||
.. tab-set::
|
.. tab-set::
|
||||||
|
|
||||||
.. tab-item:: C++
|
.. tab-item:: C++
|
||||||
:sync: cpp
|
:sync: cpp
|
||||||
@ -75,13 +75,13 @@ the model with a single input. See the example of adjusting spatial dimensions t

      :fragment: simple_spatials_change

To do the opposite, that is, to resize the input image to match the input shapes of the model,
use the :doc:`pre-processing API <openvino_docs_OV_UG_Preprocessing_Overview>`.

2. You can express a reshape plan, specifying the input by the port, the index, or the tensor name (see the Python sketch after this hunk):

.. tab-set::

   .. tab-item:: Port
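Both approaches can be sketched in Python as follows — the model path and the tensor name are placeholders:

.. code-block:: python

   import openvino.runtime as ov

   core = ov.Core()
   model = core.read_model("model.xml")  # placeholder path

   # approach 1: one new shape for a model with a single input
   model.reshape([1, 3, 448, 448])

   # approach 2: a reshape plan keyed by port, index, or tensor name
   model.reshape({model.input(0): [1, 3, 448, 448]})  # by port
   model.reshape({0: [1, 3, 448, 448]})               # by index
   model.reshape({"input_1": [1, 3, 448, 448]})       # by tensor name (placeholder)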
|
@ -161,7 +161,7 @@ Considering that JIT kernels can be affected by L1/L2/L3 cache size and the numb
|
|||||||
|
|
||||||
- L2/L3 cache emulation
|
- L2/L3 cache emulation
|
||||||
|
|
||||||
Hack the function of get cache size:
|
Hack the function of get cache size
|
||||||
|
|
||||||
``unsigned int dnnl::impl::cpu::platform::get_per_core_cache_size(int level)``
|
``unsigned int dnnl::impl::cpu::platform::get_per_core_cache_size(int level)``
|
||||||
|
|
||||||
|
@ -29,9 +29,9 @@ Feature Support Matrix
|
|||||||
|
|
||||||
The table below demonstrates support of key features by OpenVINO device plugins.
|
The table below demonstrates support of key features by OpenVINO device plugins.
|
||||||
|
|
||||||
========================================================================================= ============================ =============== ===============
|
========================================================================================= ============================ =============== ===============
|
||||||
Capability CPU GPU GNA
|
Capability CPU GPU GNA
|
||||||
========================================================================================= ============================ =============== ===============
|
========================================================================================= ============================ =============== ===============
|
||||||
:doc:`Heterogeneous execution <openvino_docs_OV_UG_Hetero_execution>` Yes Yes No
|
:doc:`Heterogeneous execution <openvino_docs_OV_UG_Hetero_execution>` Yes Yes No
|
||||||
:doc:`Multi-device execution <openvino_docs_OV_UG_Running_on_multiple_devices>` Yes Yes Partial
|
:doc:`Multi-device execution <openvino_docs_OV_UG_Running_on_multiple_devices>` Yes Yes Partial
|
||||||
:doc:`Automatic batching <openvino_docs_OV_UG_Automatic_Batching>` No Yes No
|
:doc:`Automatic batching <openvino_docs_OV_UG_Automatic_Batching>` No Yes No
|
||||||
@ -42,7 +42,7 @@ The table below demonstrates support of key features by OpenVINO device plugins.

:doc:`Preprocessing acceleration <openvino_docs_OV_UG_Preprocessing_Overview>`            Yes                          Yes             No
:doc:`Stateful models <openvino_docs_OV_UG_model_state_intro>`                            Yes                          No              Yes
:doc:`Extensibility <openvino_docs_Extensibility_UG_Intro>`                               Yes                          Yes             No
========================================================================================= ============================ =============== ===============

For more details on plugin-specific feature limitations, see the corresponding plugin pages.
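To check which of these device plugins are available on a given machine, a quick Python query (the output depends on the host):

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   print(core.available_devices)  # e.g. ['CPU', 'GPU'], depending on the host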
@ -78,14 +78,7 @@ Starting with the 2021.4.1 release of OpenVINO™ and the 03.00.00.1363 version

In this mode, the GNA driver automatically falls back on CPU for a particular infer request if the HW queue is not empty.
Therefore, there is no need for explicitly switching between GNA and CPU.

-
-
-
-
-
-
-
.. tab-set::

   .. tab-item:: C++
      :sync: cpp
@ -110,9 +103,6 @@ Therefore, there is no need for explicitly switching between GNA and CPU.

      :fragment: [ov_gna_exec_mode_hw_with_sw_fback]

-
-
-
.. note::

   Due to the "first come - first served" nature of GNA driver and the QoS feature, this mode may lead to increased
|
@ -428,12 +428,12 @@ on waiting for the completion of inference. The pseudo-code may look as follows:
|
|||||||
Limitations
|
Limitations
|
||||||
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||||
|
|
||||||
- Some primitives in the GPU plugin may block the host thread on waiting for the previous primitives before adding its kernels
|
- Some primitives in the GPU plugin may block the host thread on waiting for the previous primitives before adding its kernels
|
||||||
to the command queue. In such cases, the ``ov::InferRequest::start_async()`` call takes much more time to return control to the calling thread
|
to the command queue. In such cases, the ``ov::InferRequest::start_async()`` call takes much more time to return control to the calling thread
|
||||||
as internally it waits for a partial or full network completion.
|
as internally it waits for a partial or full network completion.
|
||||||
Examples of operations: Loop, TensorIterator, DetectionOutput, NonMaxSuppression
|
Examples of operations: Loop, TensorIterator, DetectionOutput, NonMaxSuppression
|
||||||
- Synchronization of pre/post processing jobs and inference pipeline inside a shared queue is user's responsibility.
|
- Synchronization of pre/post processing jobs and inference pipeline inside a shared queue is user's responsibility.
|
||||||
- Throughput mode is not available when queue sharing is used, i.e., only a single stream can be used for each compiled model.
|
- Throughput mode is not available when queue sharing is used, i.e., only a single stream can be used for each compiled model.
|
||||||
|
|
||||||
Low-Level Methods for RemoteContext and RemoteTensor Creation
|
Low-Level Methods for RemoteContext and RemoteTensor Creation
|
||||||
#####################################################################
|
#####################################################################
|
||||||
@ -490,7 +490,7 @@ To see pseudo-code of usage examples, refer to the sections below.

See Also
#######################################

-* ov::Core
+* ``:ref:`ov::Core <doxid-classov-1-1-core>```
-* ov::RemoteTensor
+* ``:ref:`ov::RemoteTensor <doxid-classov-1-1-remote-tensor>```

@endsphinxdirective
@ -68,17 +68,17 @@ What’s Next?

Now you are ready to try out OpenVINO™. You can use the following tutorials to write your applications using Python and C++.

-Developing in Python:
+* Developing in Python:

  * `Start with tensorflow models with OpenVINO™ <notebooks/101-tensorflow-to-openvino-with-output.html>`__
  * `Start with ONNX and PyTorch models with OpenVINO™ <notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
  * `Start with PaddlePaddle models with OpenVINO™ <notebooks/103-paddle-to-openvino-classification-with-output.html>`__

-Developing in C++:
+* Developing in C++:

  * :doc:`Image Classification Async C++ Sample <openvino_inference_engine_samples_classification_sample_async_README>`
  * :doc:`Hello Classification C++ Sample <openvino_inference_engine_samples_hello_classification_README>`
  * :doc:`Hello Reshape SSD C++ Sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`

@endsphinxdirective
@ -83,7 +83,7 @@ What’s Next?

You can try out the toolkit with:

-`Python Quick Start Example <notebooks/201-vision-monodepth-with-output.html>`_ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.
+* `Python Quick Start Example <notebooks/201-vision-monodepth-with-output.html>`_ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.

Visit the :ref:`Tutorials <notebook tutorials>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:
@ -91,8 +91,7 @@ You can try out the toolkit with:

* `Basic image classification program with Hello Image Classification <notebooks/001-hello-world-with-output.html>`__
* `Convert a PyTorch model and use it for image background removal <notebooks/205-vision-background-removal-with-output.html>`__

-
-`C++ Quick Start Example <openvino_docs_get_started_get_started_demos.html>`__ for step-by-step instructions on building and running a basic image classification C++ application.
+* `C++ Quick Start Example <openvino_docs_get_started_get_started_demos.html>`__ for step-by-step instructions on building and running a basic image classification C++ application.

Visit the :ref:`Samples <code samples>` page for other C++ example applications to get you started with OpenVINO, such as:
|
@ -30,7 +30,7 @@ See `Installing Additional Components <#optional-installing-additional-component
|
|||||||
|
|
||||||
* `Homebrew <https://brew.sh/>`_
|
* `Homebrew <https://brew.sh/>`_
|
||||||
* `CMake 3.13 or higher <https://cmake.org/download/>`__ (choose "macOS 10.13 or later"). Add ``/Applications/CMake.app/Contents/bin`` to path (for default installation).
|
* `CMake 3.13 or higher <https://cmake.org/download/>`__ (choose "macOS 10.13 or later"). Add ``/Applications/CMake.app/Contents/bin`` to path (for default installation).
|
||||||
* `Python 3.7 - 3.11 <https://www.python.org/downloads/mac-osx/>`__ (choose 3.7 - 3.10). Install and add it to path.
|
* `Python 3.7 - 3.11 <https://www.python.org/downloads/mac-osx/>`__ . Install and add it to path.
|
||||||
* Apple Xcode Command Line Tools. In the terminal, run ``xcode-select --install`` from any directory to install it.
|
* Apple Xcode Command Line Tools. In the terminal, run ``xcode-select --install`` from any directory to install it.
|
||||||
* (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development)
|
* (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development)
|
||||||
|
|
||||||
|