[DOCS] Review release docs for master (#19152)

Maciej Smyk 2023-08-14 13:40:34 +02:00 committed by GitHub
parent 86c4c6785d
commit e5c4350d92
25 changed files with 130 additions and 130 deletions

View File

@ -7,7 +7,7 @@
of Intel® Distribution of OpenVINO™ toolkit.
Performance varies by use, configuration and other factors. Learn more at [www.intel.com/PerformanceIndex](https://www.intel.com/PerformanceIndex).
Performance varies by use, configuration and other factors. Learn more at `www.intel.com/PerformanceIndex <https://www.intel.com/PerformanceIndex>`__.
Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure.

View File

@ -335,7 +335,7 @@ Q31. What does the message "Input port > 0 in --input is not supported if --inpu
**A:** When using the ``PORT:NODE`` notation for the ``--input`` command line argument and ``PORT`` > 0, you should specify ``--input_shape`` for this input. This is a limitation of the current Model Optimizer implementation.
> **NOTE**: It is no longer relevant message since the limitation on input port index for model truncation has been resolved.
.. note:: This message is no longer relevant, as the limitation on the input port index for model truncation has been resolved.
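For illustration, a hedged sketch of how the ``PORT:NODE`` notation maps to the Python conversion API (the model path, node name, and shape are assumptions, not taken from the FAQ):

.. code-block:: py

   # Hypothetical example: cutting the model at input port 1 of node "concat_node"
   # and supplying an explicit shape for that input.
   from openvino.tools.mo import convert_model

   ov_model = convert_model(
       "model.pb",                     # model path is illustrative
       input="1:concat_node",          # PORT:NODE notation with port index > 0
       input_shape=[1, 224, 224, 3],   # explicit shape for the truncated input
   )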
.. _question-32:

View File

@ -54,11 +54,11 @@ Examples of CLI Commands
.. math::
S = \frac{1}{\sum_{j = 0}^{|C|}C_{j}}
S = \frac{1}{\sum_{j = 0}^{|C|}C_{j}}
.. math::
C_{i}=log(S\*C_{i})
C_{i}=log(S*C_{i})
where :math:`C` is the counts array, :math:`C_{i}` is the :math:`i`-th element of the counts array, and :math:`|C|` is the number of elements in the counts array.
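As a quick illustration of the formulas above, a minimal NumPy sketch (the count values are made up):

.. code-block:: py

   import numpy as np

   counts = np.array([10.0, 20.0, 30.0])   # C, the counts array
   s = 1.0 / counts.sum()                   # S = 1 / sum_j C_j
   scaled = np.log(s * counts)              # C_i = log(S * C_i)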

View File

@ -28,7 +28,7 @@ To generate a FaceNet OpenVINO model, feed a TensorFlow FaceNet model to model c
--freeze_placeholder_with_value "phase_train->False"
The batch joining pattern transforms to a placeholder with the model default shape if ``input_shape`` or ``batch`*/*`-b`` are not provided. Otherwise, the placeholder shape has custom parameters.
The batch joining pattern transforms to a placeholder with the model default shape if ``--input_shape`` or ``--batch``/``-b`` are not provided. Otherwise, the placeholder shape has custom parameters.
* ``--freeze_placeholder_with_value "phase_train->False"`` switches the graph to inference mode (see the sketch below)
* ``--batch``/``-b`` overrides the original network batch
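A hedged sketch of the same options through the Python conversion API (the model path and batch value are assumptions):

.. code-block:: py

   from openvino.tools.mo import convert_model

   ov_model = convert_model(
       "facenet.pb",                                        # path is illustrative
       freeze_placeholder_with_value="phase_train->False",  # switch the graph to inference mode
       batch=1,                                             # override the original network batch
   )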

View File

@ -82,10 +82,8 @@ Example usage:
"Postponed Return" is a practice to omit overhead of ``OVDict``, which is always returned from
synchronous calls. "Postponed Return" could be applied when:
* only a part of output data is required. For example, only one specific output is significant
in a given pipeline step and all outputs are large, thus, expensive to copy.
* data is not required "now". For example, it can be later extracted inside the pipeline as
a part of latency hiding.
* only a part of output data is required. For example, only one specific output is significant in a given pipeline step and all outputs are large, thus, expensive to copy.
* data is not required "now". For example, it can be later extracted inside the pipeline as a part of latency hiding.
* data return is not required at all. For example, models are being chained with the pure ``Tensor`` interface.
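A minimal Python sketch of the "Postponed Return" idea, assuming a model with one large output (the model path, device, and input shape are illustrative):

.. code-block:: py

   import numpy as np
   import openvino.runtime as ov

   core = ov.Core()
   compiled = core.compile_model("model.xml", "CPU")
   request = compiled.create_infer_request()

   # Run inference without receiving an OVDict ("postponed return").
   request.start_async({0: np.zeros((1, 3, 224, 224), dtype=np.float32)})
   request.wait()

   # Later in the pipeline, copy out only the single output that matters.
   needed_output = request.get_output_tensor(0).data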

View File

@ -28,7 +28,7 @@ Local Deployment Options
- using PIP package manager on PyPI - the default approach for Python-based applications;
- using Docker images - if the application should be deployed as a Docker image, use a pre-built OpenVINO™ Runtime Docker image as a base image in the Dockerfile for the application container image. For more information about OpenVINO Docker images, refer to :doc:`Installing OpenVINO on Linux from Docker <openvino_docs_install_guides_installing_openvino_docker_linux>`
Furthermore, to customize your OpenVINO Docker image, use the `Docker CI Framework <https://github.com/openvinotoolkit/docker_ci>` to generate a Dockerfile and built the image.
Furthermore, to customize your OpenVINO Docker image, use the `Docker CI Framework <https://github.com/openvinotoolkit/docker_ci>`__ to generate a Dockerfile and build the image.
- Grab the necessary functionality of OpenVINO together with your application, also called a "local distribution":

View File

@ -260,7 +260,7 @@ To determine if the output has dynamic dimensions, the ``partial_shape`` propert
:fragment: ov_dynamic_shapes:print_dynamic
If the output has any dynamic dimensions, they will be reported as ``?`` or as a range (e.g.``1..10``).
If the output has any dynamic dimensions, they will be reported as ``?`` or as a range (e.g. ``1..10``).
Output layers can also be checked for dynamic dimensions using the ``partial_shape.is_dynamic()`` property. This can be used on an entire output layer, or on an individual dimension, as shown in these examples:
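For example, a hedged Python sketch following the calls named in the text (the model path is an assumption, and the exact property-vs-method spelling may differ between releases):

.. code-block:: py

   import openvino.runtime as ov

   core = ov.Core()
   model = core.read_model("model.xml")

   out_shape = model.output(0).get_partial_shape()
   print(out_shape)                   # e.g. [?,3,1..10,224]

   # Check the entire output layer, then one individual dimension.
   print(out_shape.is_dynamic())
   print(out_shape[2].is_dynamic())   # the dimension index is illustrative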

View File

@ -109,9 +109,9 @@ Additional Resources
* :doc:`Preprocessing Details <openvino_docs_OV_UG_Preprocessing_Details>`
* :doc:`Layout API overview <openvino_docs_OV_UG_Layout_Overview>`
* :doc:`Model Optimizer - Optimize Preprocessing Computation <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`
* :doc:`Model Caching Overview<openvino_docs_OV_UG_Model_caching_overview>`
* The `ov::preprocess::PrePostProcessor <classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>` C++ class documentation
* The `ov::pass::Serialize <classov_1_1pass_1_1Serialize.html#doxid-classov-1-1pass-1-1-serialize>` - pass to serialize model to XML/BIN
* The `ov::set_batch <namespaceov.html#doxid-namespaceov-1a3314e2ff91fcc9ffec05b1a77c37862b>` - update batch dimension for a given model
* :doc:`Model Caching Overview <openvino_docs_OV_UG_Model_caching_overview>`
* The `ov::preprocess::PrePostProcessor <https://docs.openvino.ai/2023.0/classov_1_1preprocess_1_1PrePostProcessor.html#doxid-classov-1-1preprocess-1-1-pre-post-processor>`__ C++ class documentation
* The `ov::pass::Serialize <https://docs.openvino.ai/2023.0/classov_1_1pass_1_1Serialize.html#doxid-classov-1-1pass-1-1-serialize>`__ - a pass to serialize a model to XML/BIN
* The `ov::set_batch <https://docs.openvino.ai/2023.0/namespaceov.html#doxid-namespaceov-1a3314e2ff91fcc9ffec05b1a77c37862b>`__ - updates the batch dimension for a given model
@endsphinxdirective

View File

@ -362,7 +362,9 @@ Read-only properties
External Dependencies
###########################################################
For some performance-critical DL operations, the CPU plugin uses third-party libraries:
- `oneDNN <https://github.com/oneapi-src/oneDNN>`__ (Intel® x86-64, Arm®)
- `Compute Library <https://github.com/ARM-software/ComputeLibrary>`__ (Arm®)

View File

@ -498,12 +498,9 @@ on waiting for the completion of inference. The pseudo-code may look as follows:
Limitations
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- Some primitives in the GPU plugin may block the host thread on waiting for the previous primitives before adding its kernels
to the command queue. In such cases, the ``ov::InferRequest::start_async()`` call takes much more time to return control to the calling thread
as internally it waits for a partial or full network completion.
Examples of operations: Loop, TensorIterator, DetectionOutput, NonMaxSuppression
- Synchronization of pre/post processing jobs and inference pipeline inside a shared queue is user's responsibility.
- Throughput mode is not available when queue sharing is used, i.e., only a single stream can be used for each compiled model.
* Some primitives in the GPU plugin may block the host thread on waiting for the previous primitives before adding its kernels to the command queue. In such cases, the ``ov::InferRequest::start_async()`` call takes much more time to return control to the calling thread as internally it waits for a partial or full network completion. Examples of operations: Loop, TensorIterator, DetectionOutput, NonMaxSuppression
* Synchronization of pre/post processing jobs and the inference pipeline inside a shared queue is the user's responsibility.
* Throughput mode is not available when queue sharing is used, i.e., only a single stream can be used for each compiled model.
Low-Level Methods for RemoteContext and RemoteTensor Creation
#####################################################################

View File

@ -83,7 +83,7 @@ Glossary of terms used in OpenVINO™
| *Model conversion API*
| A component of OpenVINO Development Tools. The API is used to import, convert, and optimize models trained in popular frameworks to a format usable by other OpenVINO components. In the ``openvino.tools.mo`` namespace, the model conversion API is represented by the Python ``mo.convert_model()`` method and the ``mo`` command-line tool.
| *OpenVINO™ <code>Core</code>*
| *OpenVINO™ Core*
| OpenVINO™ Core is a software component that manages inference on certain Intel(R) hardware devices: CPU, GPU, GNA, etc.
| *OpenVINO™ API*
@ -92,22 +92,22 @@ Glossary of terms used in OpenVINO™
| *OpenVINO™ Runtime*
| A C++ library with a set of classes that you can use in your application to infer input tensors and get the results.
| *<code>ov::Model</code>*
| *ov::Model*
| A class of the Model that OpenVINO™ Runtime reads from IR or converts from ONNX, PaddlePaddle, TensorFlow, TensorFlow Lite formats. Consists of model structure, weights and biases.
| *<code>ov::CompiledModel</code>*
| *ov::CompiledModel*
| An instance of the compiled model which allows the OpenVINO™ Runtime to request (several) infer requests and perform inference synchronously or asynchronously.
| *<code>ov::InferRequest</code>*
| *ov::InferRequest*
| A class that represents the end point of inference on the model compiled by the device and represented by a compiled model. Inputs are set here, outputs should be requested from this interface as well.
| *<code>ov::ProfilingInfo</code>*
| *ov::ProfilingInfo*
| Represents basic inference profiling information per operation.
| *<code>ov::Layout</code>*
| *ov::Layout*
| Image data layout refers to the representation of an image batch. The layout shows the order of 4D or 5D tensor data in memory. A typical NCHW format represents pixels in the horizontal direction, rows in the vertical dimension, planes by channel, and images in the batch. See also [Layout API Overview](./OV_Runtime_UG/layout_overview.md).
| *<code>ov::element::Type</code>*
| *ov::element::Type*
| Represents data element type. For example, f32 is 32-bit floating point, f16 is 16-bit floating point.
| *plugin / Inference Device / Inference Mode*

View File

@ -155,7 +155,7 @@ OpenVINO Development Tools is a set of utilities for working with OpenVINO and O
See the :doc:`Install OpenVINO Development Tools <openvino_docs_install_guides_install_dev_tools>` page for step-by-step installation instructions.
OpenCV is necessary to run demos from Open Model Zoo (OMZ). Some OpenVINO samples can also extend their capabilities when compiled with OpenCV as a dependency. To install OpenCV for OpenVINO, see the `instructions on GitHub <https://github.com/opencv/opencv/wiki/BuildOpenCV4OpenVINO>`.
OpenCV is necessary to run demos from Open Model Zoo (OMZ). Some OpenVINO samples can also extend their capabilities when compiled with OpenCV as a dependency. To install OpenCV for OpenVINO, see the `instructions on GitHub <https://github.com/opencv/opencv/wiki/BuildOpenCV4OpenVINO>`__.
.. _optional-steps-windows:

View File

@ -111,16 +111,16 @@ Now that you've installed OpenVINO Runtime, you're ready to run your own machine
.. image:: https://user-images.githubusercontent.com/15709723/127752390-f6aa371f-31b5-4846-84b9-18dd4f662406.gif
:width: 400
Try the `Python Quick Start Example <notebooks/201-vision-monodepth-with-output.html>`__ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.
Try the `Python Quick Start Example <https://docs.openvino.ai/2022.3/notebooks/201-vision-monodepth-with-output.html>`__ to estimate depth in a scene using an OpenVINO monodepth model in a Jupyter Notebook inside your web browser.
Get started with Python
+++++++++++++++++++++++
Visit the :doc:`Tutorials <tutorials>` page for more Jupyter Notebooks to get you started with OpenVINO, such as:
* `OpenVINO Python API Tutorial <notebooks/002-openvino-api-with-output.html>`__
* `Basic image classification program with Hello Image Classification <notebooks/001-hello-world-with-output.html>`__
* `Convert a PyTorch model and use it for image background removal <notebooks/205-vision-background-removal-with-output.html>`__
* `OpenVINO Python API Tutorial <https://docs.openvino.ai/2022.3/notebooks/002-openvino-api-with-output.html>`__
* `Basic image classification program with Hello Image Classification <https://docs.openvino.ai/2022.3/notebooks/001-hello-world-with-output.html>`__
* `Convert a PyTorch model and use it for image background removal <https://docs.openvino.ai/2022.3/notebooks/205-vision-background-removal-with-output.html>`__
Run OpenVINO on accelerated devices
+++++++++++++++++++++++++++++++++++

View File

@ -19,10 +19,12 @@ It performs element-wise activation function on a given input tensor, based on t
.. math::
Elu(x) = \left\begin{array}{r}
x \qquad \mbox{if } x > 0 \\
\alpha(e^{x} - 1) \quad \mbox{if } x \leq 0
\end{array}\right.
Elu(x) = \left\{
\begin{array}{r}
    x \quad \text{if } x > 0 \\
    \alpha(e^{x} - 1) \quad \text{if } x \leq 0
\end{array}
\right.
where α corresponds to the *alpha* attribute.
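A tiny NumPy sketch of the same element-wise definition (the input values are illustrative):

.. code-block:: py

   import numpy as np

   def elu(x, alpha=1.0):
       # x if x > 0, alpha * (exp(x) - 1) otherwise
       return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

   print(elu(np.array([-2.0, 0.0, 0.5])))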
@ -71,4 +73,6 @@ where α corresponds to *alpha* attribute.
</output>
</layer>
@endsphinxdirective

View File

@ -60,71 +60,70 @@ second input:
* *score_threshold*
* **Description**: The *score_threshold* attribute specifies a threshold to consider only detections whose score are
larger than the threshold.
* **Range of values**: non-negative floating-point number
* **Type**: ``float``
* **Default value**: None
* **Required**: *yes*
* **Description**: The *score_threshold* attribute specifies a threshold to consider only detections whose score is larger than the threshold.
* **Range of values**: non-negative floating-point number
* **Type**: ``float``
* **Default value**: None
* **Required**: *yes*
* *nms_threshold*
* **Description**: The *nms_threshold* attribute specifies a threshold to be used in the NMS stage.
* **Range of values**: non-negative floating-point number
* **Type**: ``float``
* **Default value**: None
* **Required**: *yes*
* **Description**: The *nms_threshold* attribute specifies a threshold to be used in the NMS stage.
* **Range of values**: non-negative floating-point number
* **Type**: ``float``
* **Default value**: None
* **Required**: *yes*
* *num_classes*
* **Description**: The *num_classes* attribute specifies the number of detected classes.
* **Range of values**: non-negative integer number
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* **Description**: The *num_classes* attribute specifies the number of detected classes.
* **Range of values**: non-negative integer number
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* *post_nms_count*
* **Description**: The *post_nms_count* attribute specifies the maximal number of detections per class.
* **Range of values**: non-negative integer number
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* **Description**: The *post_nms_count* attribute specifies the maximal number of detections per class.
* **Range of values**: non-negative integer number
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* *max_detections_per_image*
* **Description**: The *max_detections_per_image* attribute specifies maximal number of detections per image.
* **Range of values**: non-negative integer number
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* **Description**: The *max_detections_per_image* attribute specifies the maximal number of detections per image.
* **Range of values**: non-negative integer number
* **Type**: ``int``
* **Default value**: None
* **Required**: *yes*
* *class_agnostic_box_regression*
* **Description**: *class_agnostic_box_regression* attribute is a flag that specifies whether to delete background classes or not.
* **Range of values**:
* ``true`` means background classes should be deleted
* ``false`` means background classes should not be deleted
* **Type**: ``boolean``
* **Default value**: false
* **Required**: *no*
* **Description**: The *class_agnostic_box_regression* attribute is a flag that specifies whether to delete background classes or not.
* **Range of values**:
* ``true`` means background classes should be deleted
* ``false`` means background classes should not be deleted
* **Type**: ``boolean``
* **Default value**: false
* **Required**: *no*
* *max_delta_log_wh*
* **Description**: The *max_delta_log_wh* attribute specifies maximal delta of logarithms for width and height.
* **Range of values**: floating-point number
* **Type**: ``float``
* **Default value**: None
* **Required**: *yes*
* **Description**: The *max_delta_log_wh* attribute specifies the maximal delta of logarithms for width and height.
* **Range of values**: floating-point number
* **Type**: ``float``
* **Default value**: None
* **Required**: *yes*
* *deltas_weights*
* **Description**: The *deltas_weights* attribute specifies weights for bounding boxes sizes deltas.
* **Range of values**: a list of non-negative floating-point numbers
* **Type**: ``float[]``
* **Default value**: None
* **Required**: *yes*
* **Description**: The *deltas_weights* attribute specifies weights for bounding box size deltas.
* **Range of values**: a list of non-negative floating-point numbers
* **Type**: ``float[]``
* **Default value**: None
* **Required**: *yes*
**Inputs**

View File

@ -17,18 +17,18 @@
*Proposal* has three inputs: a tensor with probabilities of whether a particular bounding box corresponds to background or foreground, a tensor with bbox_deltas for each of the bounding boxes, and a tensor with the input image size in the [``image_height``, ``image_width``, ``scale_height_and_width``] or [``image_height``, ``image_width``, ``scale_height``, ``scale_width``] format. The produced tensor has two dimensions ``[batch_size * post_nms_topn, 5]`` and, for each output box, contains the batch index and box coordinates.
*Proposal* layer does the following with the input tensor:
1. Generates initial anchor boxes. Left top corner of all boxes is at (0, 0). Width and height of boxes are calculated from *base_size* with *scale* and *ratio* attributes.
2. For each point in the first input tensor:
1. Generates initial anchor boxes. Left top corner of all boxes is at (0, 0). Width and height of boxes are calculated from *base_size* with *scale* and *ratio* attributes.
2. For each point in the first input tensor:
* pins anchor boxes to the image according to the second input tensor that contains four deltas for each box: for *x* and *y* of center, for *width* and for *height*
* finds out score in the first input tensor
* pins anchor boxes to the image according to the second input tensor that contains four deltas for each box: for *x* and *y* of center, for *width* and for *height*
* finds out score in the first input tensor
3. Filters out boxes with size less than *min_size*
4. Sorts all proposals (*box*, *score*) by score from highest to lowest
5. Takes top *pre_nms_topn* proposals
6. Calculates intersections for boxes and filter out all boxes with :math:`intersection/union > nms\_thresh`
7. Takes top *post_nms_topn* proposals
8. Returns top proposals, if there is not enough proposals to fill the whole output tensor, the valid proposals will be terminated with a single -1.
3. Filters out boxes with size less than *min_size*
4. Sorts all proposals (*box*, *score*) by score from highest to lowest
5. Takes top *pre_nms_topn* proposals
6. Calculates intersections for boxes and filters out all boxes with :math:`intersection/union > nms\_thresh`
7. Takes top *post_nms_topn* proposals
8. Returns top proposals. If there are not enough proposals to fill the whole output tensor, the valid proposals are terminated with a single -1.
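A simplified NumPy sketch of steps 3–8 above (the thresholds, counts, and the IoU helper are illustrative, not the plugin implementation):

.. code-block:: py

   import numpy as np

   def iou(a, b):
       # a, b: [x1, y1, x2, y2]; a rough intersection-over-union helper
       x1, y1 = max(a[0], b[0]), max(a[1], b[1])
       x2, y2 = min(a[2], b[2]), min(a[3], b[3])
       inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
       area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
       return inter / (area(a) + area(b) - inter + 1e-9)

   def propose(boxes, scores, min_size, nms_thresh, pre_nms_topn, post_nms_topn):
       # Steps 3-5: filter by size, sort by score, keep top pre_nms_topn.
       keep = [i for i, b in enumerate(boxes)
               if b[2] - b[0] >= min_size and b[3] - b[1] >= min_size]
       keep = sorted(keep, key=lambda i: scores[i], reverse=True)[:pre_nms_topn]
       # Steps 6-7: greedy filtering by IoU threshold, keep top post_nms_topn.
       selected = []
       for i in keep:
           if all(iou(boxes[i], boxes[j]) <= nms_thresh for j in selected):
               selected.append(i)
       selected = selected[:post_nms_topn]
       # Step 8: remaining rows are filled with -1 when there are not enough proposals.
       out = -np.ones((post_nms_topn, 5), dtype=np.float32)
       for row, i in enumerate(selected):
           out[row] = [0.0, *boxes[i]]   # batch index 0 plus box coordinates
       return out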
**Attributes**:

View File

@ -24,21 +24,21 @@ the second optional tensor of shape ``[batch_size * post_nms_topn]`` with probab
*Proposal* layer does the following with the input tensor:
1. Generates initial anchor boxes. Left top corner of all boxes is at (0, 0). Width and height of boxes are calculated from *base_size* with *scale* and *ratio* attributes.
2. For each point in the first input tensor:
1. Generates initial anchor boxes. Left top corner of all boxes is at (0, 0). Width and height of boxes are calculated from *base_size* with *scale* and *ratio* attributes.
2. For each point in the first input tensor:
* pins anchor boxes to the image according to the second input tensor that contains four deltas for each box: for *x* and *y* of center, for *width* and for *height*
* finds out score in the first input tensor
* pins anchor boxes to the image according to the second input tensor that contains four deltas for each box: for *x* and *y* of center, for *width* and for *height*
* finds out score in the first input tensor
3. Filters out boxes with size less than *min_size*
4. Sorts all proposals (*box*, *score*) by score from highest to lowest
5. Takes top *pre_nms_topn* proposals
6. Calculates intersections for boxes and filter out all boxes with :math:`intersection/union > nms\_thresh`
7. Takes top *post_nms_topn* proposals
8. Returns the results:
3. Filters out boxes with size less than *min_size*
4. Sorts all proposals (*box*, *score*) by score from highest to lowest
5. Takes top *pre_nms_topn* proposals
6. Calculates intersections for boxes and filters out all boxes with :math:`intersection/union > nms\_thresh`
7. Takes top *post_nms_topn* proposals
8. Returns the results:
* Top proposals, if there is not enough proposals to fill the whole output tensor, the valid proposals will be terminated with a single -1.
* Optionally returns probabilities for each proposal, which are not terminated by any special value.
* Top proposals. If there are not enough proposals to fill the whole output tensor, the valid proposals are terminated with a single -1.
* Optionally returns probabilities for each proposal, which are not terminated by any special value.
**Attributes**:

View File

@ -1,4 +1,4 @@
# Eye <a name="Eye"></a> {#openvino_docs_ops_generation_Eye_9}
# Eye {#openvino_docs_ops_generation_Eye_9}
@sphinxdirective
@ -67,10 +67,10 @@ Example 3. *Eye* output with ``output_type`` = ``f16``:
* *output_type*
* **Description**: the type of the output
* **Range of values**: any numeric type
* **Type**: ``string``
* **Required**: *Yes*
* **Description**: the type of the output
* **Range of values**: any numeric type
* **Type**: ``string``
* **Required**: *Yes*
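As a point of reference, the NumPy analogue of such an output (the sizes and diagonal shift are illustrative):

.. code-block:: py

   import numpy as np

   # Roughly what Eye produces for num_rows=3, num_columns=4,
   # diagonal_index=1, output_type=f16.
   print(np.eye(3, 4, k=1, dtype=np.float16))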
**Inputs**:

View File

@ -89,7 +89,7 @@
* *cube_coeff*
* **Description**: *cube_coeff* specifies the parameter *a* for cubic interpolation (see, e.g. [article](https://ieeexplore.ieee.org/document/1163711/)). *cube_coeff* is used only when ``mode == cubic``.
* **Description**: *cube_coeff* specifies the parameter *a* for cubic interpolation (see, e.g., this `article <https://ieeexplore.ieee.org/document/1163711/>`__). *cube_coeff* is used only when ``mode == cubic``.
* **Range of values**: floating-point number
* **Type**: any of supported floating-point type
* **Default value**: ``-0.75``
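For context, the cubic convolution kernel from the cited article uses *a* (i.e. ``cube_coeff``) as follows; this is the standard Keys formulation, reproduced here for reference rather than taken from the OpenVINO specification:

.. math::

   W(x) = \begin{cases}
   (a+2)|x|^{3} - (a+3)|x|^{2} + 1 & \text{for } |x| \leq 1 \\
   a|x|^{3} - 5a|x|^{2} + 8a|x| - 4a & \text{for } 1 < |x| < 2 \\
   0 & \text{otherwise}
   \end{cases}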

View File

@ -11,7 +11,7 @@
**Category**: *Infrastructure*
**Short description**: *Loop* operation performs recurrent execution of the network, which is described in the ``body``, iterating through the data.
The operation has similar semantic to the ONNX* Loop `operation <https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Loop-13>`__.
The operation has semantics similar to the ONNX Loop `operation <https://github.com/onnx/onnx/blob/master/docs/Changelog.md#Loop-13>`__.
**Detailed description**

View File

@ -61,11 +61,11 @@ Where
* **4**: ``crops_end`` - Specifies the amount to crop from the ending along each axis of ``data`` input. A 1D tensor of type *T_INT* and shape ``[N]``. All element values must be greater than or equal to 0. ``crops_end[0]`` is expected to be 0. **Required.**
* **Note**: ``N`` corresponds to the rank of ``data`` input.
* **Note**: ``batch`` axis of ``data`` input must be evenly divisible by the cumulative product of ``block_shape`` elements.
* **Note**: It is required that ``crops_begin[i] + crops_end[i] <= block_shape[i] \* input_shape[i]``.
* **Note**: It is required that ``crops_begin[i] + crops_end[i] <= block_shape[i] * input_shape[i]``.
**Outputs**
* **1**: Permuted tensor of type *T* with the same rank as ``data`` input tensor, and shape ``[batch / (block_shape[0] \* block_shape[1] \* ... \* block_shape[N - 1]), D_1 \* block_shape[1] - crops_begin[1] - crops_end[1], D_2 \* block_shape[2] - crops_begin[2] - crops_end[2], ..., D_{N - 1} \* block_shape[N - 1] - crops_begin[N - 1] - crops_end[N - 1]``.
* **1**: Permuted tensor of type *T* with the same rank as ``data`` input tensor, and shape ``[batch / (block_shape[0] * block_shape[1] * ... * block_shape[N - 1]), D_1 * block_shape[1] - crops_begin[1] - crops_end[1], D_2 * block_shape[2] - crops_begin[2] - crops_end[2], ..., D_{N - 1} * block_shape[N - 1] - crops_begin[N - 1] - crops_end[N - 1]]``.
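A small NumPy sketch of the output-shape arithmetic above (the shapes and crops are made-up values):

.. code-block:: py

   import numpy as np

   data_shape  = [8, 4, 5]          # [batch, D_1, D_2]
   block_shape = [1, 2, 2]
   crops_begin = [0, 0, 1]
   crops_end   = [0, 0, 1]

   batch = data_shape[0] // int(np.prod(block_shape))
   dims = [data_shape[i] * block_shape[i] - crops_begin[i] - crops_end[i]
           for i in range(1, len(data_shape))]
   print([batch] + dims)            # [2, 8, 8]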
**Types**

View File

@ -14,7 +14,7 @@
**Attributes**:
No attributes available.
No attributes available.
**Inputs**

View File

@ -46,13 +46,13 @@ In the example below, inference is applied to the results of the video decoding.
Below are code examples comparing the regular and async-based approaches:
* Normally, the frame is captured with OpenCV and then immediately processed:<br>
* Normally, the frame is captured with OpenCV and then immediately processed:
.. doxygensnippet:: docs/snippets/dldt_optimization_guide8.cpp
:language: cpp
:fragment: [part8]
* In the "true" async mode, the ``NEXT`` request is populated in the main (application) thread, while the ``CURRENT`` request is processed:<br>
* In the "true" async mode, the ``NEXT`` request is populated in the main (application) thread, while the ``CURRENT`` request is processed:
.. doxygensnippet:: docs/snippets/dldt_optimization_guide9.cpp
:language: cpp

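The snippets above are C++; a hedged Python rendering of the same ``NEXT``/``CURRENT`` pattern might look like this (``preprocess`` and ``handle_results`` are assumed helpers, and the paths and device are illustrative):

.. code-block:: py

   import cv2
   import openvino.runtime as ov

   core = ov.Core()
   compiled = core.compile_model("model.xml", "CPU")
   current, nxt = compiled.create_infer_request(), compiled.create_infer_request()

   cap = cv2.VideoCapture("video.mp4")
   ok, frame = cap.read()
   current.start_async({0: preprocess(frame)})           # CURRENT starts on the first frame
   while ok:
       ok, next_frame = cap.read()
       if ok:
           nxt.start_async({0: preprocess(next_frame)})  # populate NEXT while CURRENT runs
       current.wait()
       handle_results(current.get_output_tensor(0).data)
       current, nxt = nxt, current                       # swap requests for the next frame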
View File

@ -20,10 +20,10 @@ Model optimization is an optional offline step of improving the final model perf
Post-training Quantization is the fastest way to optimize a model and should be applied first, but it is limited in terms of achievable accuracy-performance trade-off. In case of poor accuracy or performance after Post-training Quantization, Training-time Optimization can be used as an option.
Once the model is optimized using the aforementioned methods, it can be used for inference using the regular OpenVINO inference workflow. No changes to the inference code are required.
.. image:: _static/images/DEVELOPMENT_FLOW_V3_crunch.svg
Once the model is optimized using the aforementioned methods, it can be used for inference using the regular OpenVINO inference workflow. No changes to the inference code are required.
.. image:: _static/images/WHAT_TO_USE.svg
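For orientation, a minimal post-training quantization sketch with NNCF (the dataset objects and paths are assumptions):

.. code-block:: py

   import nncf
   import openvino.runtime as ov

   core = ov.Core()
   model = core.read_model("model.xml")

   # calibration_samples and transform_fn are placeholders for user data handling.
   calibration_dataset = nncf.Dataset(calibration_samples, transform_fn)
   quantized_model = nncf.quantize(model, calibration_dataset)

   ov.serialize(quantized_model, "model_int8.xml")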
Additional Resources

View File

@ -66,7 +66,7 @@ For example:
+====================================+===============================================================================================================================================================================================================================================================================+
| Host Machine | Physical hardware on which the KVM and Guest VM share set up. |
| Kernel-based Virtual Machine (KVM) | The OpenVINO™ Security Add-on runs in this virtual machine because it provides an isolated environment for security sensitive operations. |
| Guest VM | The Model Developer uses the Guest VM to enable access control to the completed model. <br>The Independent Software Provider uses the Guest VM to host the License Service.<br>The User uses the Guest VM to contact the License Service and run the access controlled model. |
| Guest VM | The Model Developer uses the Guest VM to enable access control to the completed model. The Independent Software Provider uses the Guest VM to host the License Service. The User uses the Guest VM to contact the License Service and run the access controlled model. |
+------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
.. _prerequisites_ovsa:
@ -76,13 +76,13 @@ Prerequisites
**Hardware**
* Intel® Core™ or Xeon® processor<br>
* Intel® Core™ or Xeon® processor
**Operating system, firmware, and software**
* Ubuntu* Linux* 18.04 on the Host Machine.<br>
* Ubuntu* Linux* 18.04 on the Host Machine.
* TPM version 2.0-conformant Discrete Trusted Platform Module (dTPM) or Firmware Trusted Platform Module (fTPM)
* Secure boot is enabled.<br>
* Secure boot is enabled.
**Other**
@ -126,8 +126,8 @@ Begin this step on the Intel® Core™ or Xeon® processor machine that meets th
kvm-ok
The output should show: <br>
``INFO: /dev/kvm exists`` <br>
The output should show:
``INFO: /dev/kvm exists``
``KVM acceleration can be used``
If your output is different, modify your BIOS settings to enable hardware virtualization.
@ -171,7 +171,7 @@ The following are installed and ready to use:
* QEMU
* SW-TPM
* HW-TPM support
* Docker<br>
* Docker
You're ready to configure the Host Machine for networking.
@ -243,7 +243,7 @@ This example in this step uses the following names. Your configuration might use
.. code-block:: sh
4: br0:<br><BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000<br>inet 123.123.123.123/<mask> brd 321.321.321.321 scope global dynamic br0
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    inet 123.123.123.123/<mask> brd 321.321.321.321 scope global dynamic br0
7. Create a script named ``br0-qemu-ifup`` to bring up the ``br0`` interface. Add the following script contents:
@ -357,7 +357,7 @@ As an option, you can use ``virsh`` and the virtual machine manager to create an
* **Option 1**: Use a script to install additional software
1. Copy the script ``install_guest_deps.sh`` from the ``Scripts/reference`` directory of the OVSA repository to the Guest VM
2. Run the script.
3. Shut down the Guest VM.<br>
3. Shut down the Guest VM.
* **Option 2** : Manually install additional software
1. Install the software tool `tpm2-tss <https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz>`__.
@ -367,7 +367,7 @@ As an option, you can use ``virsh`` and the virtual machine manager to create an
3. Install the `tpm2-tools <https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz>`__.
For installation information, follow the instructions `here <https://github.com/tpm2-software/tpm2-tools/blob/master/docs/INSTALL.md>`__.
4. Install the `Docker packages <https://docs.docker.com/engine/install/ubuntu/>`__.
5. Shut down the Guest VM.<br>
5. Shut down the Guest VM.
9. On the host, create a directory to support the virtual TPM device and provision its certificates. Only ``root`` should have read/write permission to this directory:
@ -431,7 +431,7 @@ As an option, you can use ``virsh`` and the virtual machine manager to create an
Step 5: Set Up one Guest VM for the User role
+++++++++++++++++++++++++++++++++++++++++++++
1. Choose **ONE** of these options to create a Guest VM for the User role:<br>
1. Choose **ONE** of these options to create a Guest VM for the User role:
**Option 1: Copy and Rename the ovsa_isv_dev_vm_disk.qcow2 disk image**
@ -456,7 +456,7 @@ Step 5: Set Up one Guest VM for the User role
sudo rm /etc/machine-id
systemd-machine-id-setup
6. Shut down the Guest VM.<br><br>
6. Shut down the Guest VM.
**Option 2: Manually create the Guest VM**
@ -497,7 +497,7 @@ Step 5: Set Up one Guest VM for the User role
**Option 1: Use a script to install additional software**
1. Copy the script ``install_guest_deps.sh`` from the ``Scripts/reference`` directory of the OVSA repository to the Guest VM
2. Run the script.
3. Shut down the Guest VM.<br><br>
3. Shut down the Guest VM.
**Option 2: Manually install additional software**
1. Install the software tool `tpm2-tss <https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz>`__. For installation information, follow the instructions `here <https://github.com/tpm2-software/tpm2-tss/blob/master/INSTALL.md>`__