[DOCS] Fixing formatting issues in articles (#17994)
* fixing-formatting
parent c8f3ed814b
commit 4270dca591
@@ -17,7 +17,7 @@ OpenVINO Runtime offers multiple inference modes to allow optimum hardware utili
 The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:

 * :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
-* :doc:``Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
+* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
 * :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
 * :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
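All four automated modes listed above are selected through the device string passed to ``compile_model``; a minimal Python sketch of requesting AUTO (the model path is illustrative, not taken from the article) might look like:

.. code-block:: python

   # Minimal sketch: let OpenVINO pick the device via the AUTO mode.
   # "model.xml" is an illustrative path.
   import openvino.runtime as ov

   core = ov.Core()
   model = core.read_model("model.xml")
   # MULTI, HETERO and BATCH are requested the same way, by device string.
   compiled = core.compile_model(model, "AUTO")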
@@ -184,7 +184,7 @@ Converting a YOLACT Model to the OpenVINO IR format
 mo --input_model /path/to/yolact.onnx


-**Step 4**. Embed input preprocessing into the IR:
+**Step 5**. Embed input preprocessing into the IR:

 To get performance gain by offloading to the OpenVINO application of mean/scale values and RGB->BGR conversion, use the following model conversion API parameters:
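The conversion API parameters alluded to here are typically ``mean_values``, ``scale_values`` and ``reverse_input_channels``; a hedged sketch with the Python conversion API (the numeric values below are placeholders, not the ones the article prescribes) could be:

.. code-block:: python

   # Sketch: embedding mean/scale normalization and RGB->BGR conversion at
   # conversion time. The mean/scale numbers are illustrative placeholders.
   from openvino.tools.mo import convert_model
   from openvino.runtime import serialize

   ov_model = convert_model(
       "/path/to/yolact.onnx",
       mean_values=[123.68, 116.78, 103.94],   # illustrative
       scale_values=[58.40, 57.12, 57.38],     # illustrative
       reverse_input_channels=True,            # RGB -> BGR
   )
   serialize(ov_model, "yolact.xml")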
@@ -47,8 +47,6 @@ If you have another implementation of CRNN model, it can be converted to OpenVIN

 * For Windows, add ``/path/to/CRNN_Tensorflow/`` to the ``PYTHONPATH`` environment variable in settings.

-
-
 2. Edit the ``tools/demo_shadownet.py`` script. After ``saver.restore(sess=sess, save_path=weights_path)`` line, add the following code:

 .. code-block:: python
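The code added after ``saver.restore(...)`` is a standard TensorFlow 1.x graph-freezing block; a sketch of that pattern (the output node name is a placeholder, not necessarily the one CRNN_Tensorflow exposes) would be:

.. code-block:: python

   # Generic TF 1.x freeze-graph pattern. "output_node_name" is a placeholder;
   # substitute the real CRNN output node given in the article. `sess` is the
   # session already restored by the surrounding demo script.
   from tensorflow.python.framework import graph_util, graph_io

   frozen = graph_util.convert_variables_to_constants(
       sess, sess.graph_def, ["output_node_name"])
   graph_io.write_graph(frozen, ".", "frozen_model.pb", as_text=False)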
@@ -27,8 +27,7 @@ This tutorial explains how to convert Neural Collaborative Filtering (NCF) model

 where ``rating/BiasAdd`` is an output node.

-3. Convert the model to the OpenVINO format. If you look at your frozen model, you can see that
-it has one input that is split into four ``ResourceGather`` layers. (Click image to zoom in.)
+3. Convert the model to the OpenVINO format. If you look at your frozen model, you can see that it has one input that is split into four ``ResourceGather`` layers. (Click image to zoom in.)

 .. image:: ./_static/images/NCF_start.svg
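Because ``rating/BiasAdd`` is named as the output node, the conversion step usually pins it explicitly; a hedged sketch with the Python conversion API (the frozen-graph filename is illustrative) might be:

.. code-block:: python

   # Sketch: convert the frozen NCF graph while pinning the named output node.
   # "inference_graph.pb" is an illustrative filename.
   from openvino.tools.mo import convert_model
   from openvino.runtime import serialize

   ov_model = convert_model("inference_graph.pb", output="rating/BiasAdd")
   serialize(ov_model, "ncf.xml")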
@@ -74,6 +74,7 @@ Example usage:

 "Postponed Return" is a practice to omit overhead of ``OVDict``, which is always returned from
 synchronous calls. "Postponed Return" could be applied when:

 * only a part of output data is required. For example, only one specific output is significant
   in a given pipeline step and all outputs are large, thus, expensive to copy.
 * data is not required "now". For example, it can be later extracted inside the pipeline as
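In code, the practice boils down to calling ``infer`` without consuming the returned ``OVDict`` and reading only the tensor that is actually needed later; a minimal sketch (model path, input data and output index are illustrative) could look like:

.. code-block:: python

   # Sketch of "Postponed Return": ignore the OVDict returned by infer() and
   # copy out only the tensor that is needed, when it is needed.
   import numpy as np
   import openvino.runtime as ov

   core = ov.Core()
   compiled = core.compile_model(core.read_model("model.xml"), "CPU")
   request = compiled.create_infer_request()

   data = np.zeros(tuple(compiled.input(0).shape), dtype=np.float32)  # illustrative input
   request.infer({0: data})        # return value deliberately not assigned
   # ... other pipeline work ...
   needed = request.get_output_tensor(0).data   # fetch just the one output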
@@ -161,7 +161,7 @@ Considering that JIT kernels can be affected by L1/L2/L3 cache size and the numb

 - L2/L3 cache emulation

-Hack the function of get cache size:
+Hack the function of get cache size

 ``unsigned int dnnl::impl::cpu::platform::get_per_core_cache_size(int level)``
@@ -78,13 +78,6 @@ Starting with the 2021.4.1 release of OpenVINO™ and the 03.00.00.1363 version
 In this mode, the GNA driver automatically falls back on CPU for a particular infer request if the HW queue is not empty.
 Therefore, there is no need for explicitly switching between GNA and CPU.

-
-
-
-
-
-
-
 .. tab-set::

    .. tab-item:: C++
@@ -110,9 +103,6 @@ Therefore, there is no need for explicitly switching between GNA and CPU.
 :fragment: [ov_gna_exec_mode_hw_with_sw_fback]

-
-
-
 .. note::

    Due to the "first come - first served" nature of GNA driver and the QoS feature, this mode may lead to increased
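The referenced fragment enables the HW-with-SW-fallback execution mode on the GNA plugin. The following Python sketch is an assumption only; the property name and value strings may differ between releases, so treat them as placeholders:

.. code-block:: python

   # Hedged sketch only: the "GNA_DEVICE_MODE" / "GNA_HW_WITH_SW_FBACK" strings
   # are assumptions; check the GNA plugin documentation for the exact names.
   import openvino.runtime as ov

   core = ov.Core()
   model = core.read_model("model.xml")   # illustrative path
   compiled = core.compile_model(
       model, "GNA", {"GNA_DEVICE_MODE": "GNA_HW_WITH_SW_FBACK"})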
@@ -490,7 +490,7 @@ To see pseudo-code of usage examples, refer to the sections below.
 See Also
 #######################################

-* ov::Core
-* ov::RemoteTensor
+* ``:ref:`ov::Core <doxid-classov-1-1-core>```
+* ``:ref:`ov::RemoteTensor <doxid-classov-1-1-remote-tensor>```

 @endsphinxdirective
@@ -68,13 +68,13 @@ What’s Next?

 Now you are ready to try out OpenVINO™. You can use the following tutorials to write your applications using Python and C++.

-Developing in Python:
+* Developing in Python:

 * `Start with tensorflow models with OpenVINO™ <notebooks/101-tensorflow-to-openvino-with-output.html>`__
 * `Start with ONNX and PyTorch models with OpenVINO™ <notebooks/102-pytorch-onnx-to-openvino-with-output.html>`__
 * `Start with PaddlePaddle models with OpenVINO™ <notebooks/103-paddle-to-openvino-classification-with-output.html>`__

-Developing in C++:
+* Developing in C++:

 * :doc:`Image Classification Async C++ Sample <openvino_inference_engine_samples_classification_sample_async_README>`
 * :doc:`Hello Classification C++ Sample <openvino_inference_engine_samples_hello_classification_README>`
@@ -83,7 +83,7 @@ What’s Next?
 You can try out the toolkit with:


-`Python Quick Start Example <notebooks/201-vision-monodepth-with-output.html>`_ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.
+* `Python Quick Start Example <notebooks/201-vision-monodepth-with-output.html>`_ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.

 Visit the :ref:`Tutorials <notebook tutorials>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:
@@ -91,8 +91,7 @@ You can try out the toolkit with:
 * `Basic image classification program with Hello Image Classification <notebooks/001-hello-world-with-output.html>`__
 * `Convert a PyTorch model and use it for image background removal <notebooks/205-vision-background-removal-with-output.html>`__

-
-`C++ Quick Start Example <openvino_docs_get_started_get_started_demos.html>`__ for step-by-step instructions on building and running a basic image classification C++ application.
+* `C++ Quick Start Example <openvino_docs_get_started_get_started_demos.html>`__ for step-by-step instructions on building and running a basic image classification C++ application.

 Visit the :ref:`Samples <code samples>` page for other C++ example applications to get you started with OpenVINO, such as:
@@ -30,7 +30,7 @@ See `Installing Additional Components <#optional-installing-additional-component

 * `Homebrew <https://brew.sh/>`_
 * `CMake 3.13 or higher <https://cmake.org/download/>`__ (choose "macOS 10.13 or later"). Add ``/Applications/CMake.app/Contents/bin`` to path (for default installation).
-* `Python 3.7 - 3.11 <https://www.python.org/downloads/mac-osx/>`__ (choose 3.7 - 3.10). Install and add it to path.
+* `Python 3.7 - 3.11 <https://www.python.org/downloads/mac-osx/>`__ . Install and add it to path.
 * Apple Xcode Command Line Tools. In the terminal, run ``xcode-select --install`` from any directory to install it.
 * (Optional) Apple Xcode IDE (not required for OpenVINO™, but useful for development)