Typo fix (#17723)
commit a757506f6f
parent dc36ec11b5
@@ -33,7 +33,7 @@ AsyncInferRequest()
 The main goal of the ``AsyncInferRequest`` constructor is to define a device pipeline ``m_pipeline``. The example below demonstrates ``m_pipeline`` creation with the following stages:

-* ``infer_preprocess_and_start_pipeline`` is a CPU ligthweight task to submit tasks to a remote device.
+* ``infer_preprocess_and_start_pipeline`` is a CPU lightweight task to submit tasks to a remote device.
 * ``wait_pipeline`` is a CPU non-compute task that waits for a response from a remote device.
 * ``infer_postprocess`` is a CPU compute task.
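
The pipeline described in this hunk is essentially an ordered list of {executor, task} stages that run one after another. The C++ sketch below only illustrates that idea; the executor labels and task bodies are made up for the example, and it does not use the real OpenVINO plugin API.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

using Task = std::function<void()>;

int main() {
    // m_pipeline analogue: ordered {executor, task} stages. In a real plugin each
    // stage would be bound to a task executor; here the executor is just a label.
    std::vector<std::pair<std::string, Task>> pipeline = {
        {"cpu_lightweight_executor", [] { std::cout << "submit work to the remote device\n"; }},
        {"cpu_wait_executor",        [] { std::cout << "wait for the remote device response\n"; }},
        {"cpu_compute_executor",     [] { std::cout << "run postprocessing on the CPU\n"; }},
    };

    // Run the stages in order, printing which "executor" handles each one.
    for (const auto& [executor, task] : pipeline) {
        std::cout << "[" << executor << "] ";
        task();
    }
    return 0;
}
```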
@@ -19,7 +19,7 @@ Overview of Artificial Neural Networks Representation
 A deep learning network is usually represented as a directed graph describing the flow of data from the network input data to the inference results.
 Input data can be in the form of images, video, text, audio, or preprocessed information representing objects from the target area of interest.

-Here is an illustration sof a small graph representing a model that consists of a single Convolutional layer and activation function:
+Here is an illustration of a small graph representing a model that consists of a single Convolutional layer and activation function:

 .. image:: _static/images/small_IR_graph_demonstration.png
@@ -52,7 +52,7 @@ A set consists of several groups of operations:
 * Generic element-wise arithmetic tensor operations such as ``Add``, ``Subtract``, and ``Multiply``.

-* Comparison operations that compare two numeric tensors and produce boolean tensors, for example, ``Less``, ``Equeal``, ``Greater``.
+* Comparison operations that compare two numeric tensors and produce boolean tensors, for example, ``Less``, ``Equal``, ``Greater``.

 * Logical operations that are dealing with boolean tensors, for example, ``And``, ``Xor``, ``Not``.
@@ -128,7 +128,7 @@ Information about layer precision is also stored in the performance counters.
 resnet\_model/add\_5/fq\_input\_1 NOT\_RUN FakeQuantize undef 0 0
 =========================================================== ============= ============== ===================== ================= ==============

-| The ``exeStatus`` column of the table includes the following possible values:
+| The ``execStatus`` column of the table includes the following possible values:
 | - ``EXECUTED`` - the layer was executed by standalone primitive.
 | - ``NOT_RUN`` - the layer was not executed by standalone primitive or was fused with another operation and executed in another layer primitive.
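
The ``execStatus`` values shown in this table can also be read programmatically from the profiling counters of an inference request. A minimal sketch, assuming the OpenVINO 2.0 C++ runtime (``ov::InferRequest::get_profiling_info()`` returning ``ov::ProfilingInfo`` entries with a ``status`` field) and a placeholder model path:

```cpp
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // "model.xml" is a placeholder path; enable profiling so counters are collected.
    auto compiled = core.compile_model("model.xml", "CPU", ov::enable_profiling(true));
    ov::InferRequest request = compiled.create_infer_request();
    request.infer();

    // Each counter reports, among other fields, the per-layer execution status.
    for (const auto& counter : request.get_profiling_info()) {
        const bool executed = counter.status == ov::ProfilingInfo::Status::EXECUTED;
        std::cout << counter.node_name << " [" << counter.node_type << "] "
                  << (executed ? "EXECUTED" : "NOT_RUN / fused") << "\n";
    }
    return 0;
}
```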
@@ -111,7 +111,7 @@ The patch modifies the framework code by adding a special command-line argument
 + else:
 +     state_dict = torch.load(path, map_location=torch.device('cpu'))

-# For backward compatability, remove these (the new variable is called layers)
+# For backward compatibility, remove these (the new variable is called layers)
 for key in list(state_dict.keys()):

 @@ -673,8 +679,11 @@ class Yolact(nn.Module):
 else:
@@ -149,7 +149,7 @@ Converting a GNMT Model to the IR
 **Step 1**. Clone the GitHub repository and check out the commit:

-1. Clone the NMT reposirory:
+1. Clone the NMT repository:

 .. code-block:: sh
@@ -461,7 +461,7 @@ weights are loaded from DDR/L3 cache in the packed format this significantly dec
 and as a consequence improve inference performance.

 To use this feature, the user is provided with property ``sparse_weights_decompression_rate``, which can take
-values from the interval \[0, 1\]. ``sparse_weights_decompression_rate`` defines sparse rate threashold: only operations
+values from the interval \[0, 1\]. ``sparse_weights_decompression_rate`` defines sparse rate threshold: only operations
 with higher sparse rate will be executed using ``sparse weights decompression feature``. The default value is ``1``,
 which means the option is disabled.
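
As a usage illustration only, and assuming the CPU plugin of the release you use exposes this setting as ``ov::intel_cpu::sparse_weights_decompression_rate`` in the C++ API, the threshold could be passed at model compilation time:

```cpp
#include <openvino/openvino.hpp>
#include <openvino/runtime/intel_cpu/properties.hpp>

int main() {
    ov::Core core;
    // "model.xml" is a placeholder; with a 0.8 threshold, only operations whose
    // weight sparsity exceeds 80% would use sparse weights decompression.
    auto compiled = core.compile_model(
        "model.xml", "CPU",
        ov::intel_cpu::sparse_weights_decompression_rate(0.8f));
    (void)compiled;
    return 0;
}
```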
@@ -142,7 +142,7 @@ A returned value appears as follows: ``Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz``
 In order to understand a list of supported properties on ``ov::Core`` or ``ov::CompiledModel`` levels, use ``ov::supported_properties``
 which contains a vector of supported property names. Properties which can be changed, has ``ov::PropertyName::is_mutable``
-returning the ``true`` value. Most of the properites which are changable on ``ov::Core`` level, cannot be changed once the model is compiled,
+returning the ``true`` value. Most of the properties which are changable on ``ov::Core`` level, cannot be changed once the model is compiled,
 so it becomes immutable read-only property.

 Configure a Work with a Model
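
A small sketch of that query, using ``ov::supported_properties`` and ``ov::PropertyName::is_mutable`` as described above (the device name ``"CPU"`` is a placeholder):

```cpp
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Ask the device which properties it supports and mark the mutable ones.
    auto properties = core.get_property("CPU", ov::supported_properties);
    for (const auto& property : properties) {
        std::cout << property
                  << (property.is_mutable() ? " (mutable)" : " (read-only)") << "\n";
    }
    return 0;
}
```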
@@ -1,6 +1,6 @@
 # OpenVINO Debug Capabilities

-OpenVINO components provides different debug capabilities, to get more infromation please read:
+OpenVINO components provides different debug capabilities, to get more information please read:

 * [OpenVINO Model Debug Capabilities](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Model_Representation.html#model-debug-capabilities)
 * [OpenVINO Pass Manager Debug Capabilities](#todo)
@@ -108,7 +108,7 @@ To install OpenVINO Development Tools to work with Caffe models (OpenVINO suppor
 Linux and macOS:

 ```sh
-#setup virtual envrinment
+#setup virtual environment
 python3 -m venv openvino_env
 source openvino_env/bin/activate
 pip install pip --upgrade
@@ -119,7 +119,7 @@ pip install openvino_dev-<version>-py3-none-any.whl[caffe] --find-links=<INSTAL
 Windows:
 ```bat
-rem setup virtual envrinment
+rem setup virtual environment
 python -m venv openvino_env
 openvino_env\Scripts\activate.bat
 pip install pip --upgrade
@@ -31,7 +31,7 @@ And build OpenVINO as usual.
 ## Generate coverage report

-In order to generate coverage reports, first of all, the tests must be run. Depending on how many tests are run, the better covegare percentage can be achieved. E.g. for `openvino` component, `InferenceEngineUnitTests`, `ieUnitTests`, `ieFuncTests` must be run as well as plugin tests.
+In order to generate coverage reports, first of all, the tests must be run. Depending on how many tests are run, the better coverage percentage can be achieved. E.g. for `openvino` component, `InferenceEngineUnitTests`, `ieUnitTests`, `ieFuncTests` must be run as well as plugin tests.

 ```bash
 $ ctest -V
@@ -287,7 +287,7 @@ The sample can also run in a serial mode for a reference and benchmarking purpos
 // happens on-the-fly here
 avg.start();
 } else {
-// Measurfe & draw FPS for all other frames
+// Measure & draw FPS for all other frames
 labels::DrawFPS(frame, frames, avg.fps(frames-1));
 }
 if (!no_show) {
@@ -296,7 +296,7 @@ The sample can also run in a serial mode for a reference and benchmarking purpos
 }
 }

-On a test machine (Intel® Core™ i5-6600), with OpenCV built with `Intel® TBB <https://www.threadingbuildingblocks.org/intel-tbb-tutorial>`__ support, detector network assigned to CPU, and classifiers to iGPU, the pipelined sample outperformes the serial one by the factor of 1.36x (thus adding +36% in overall throughput).
+On a test machine (Intel® Core™ i5-6600), with OpenCV built with `Intel® TBB <https://www.threadingbuildingblocks.org/intel-tbb-tutorial>`__ support, detector network assigned to CPU, and classifiers to iGPU, the pipelined sample outperforms the serial one by the factor of 1.36x (thus adding +36% in overall throughput).

 Conclusion
 ###########
@@ -67,7 +67,7 @@ Glossary of terms used in OpenVINO™
 | *Batch*
 | Number of images to analyze during one call of infer. Maximum batch size is a property of the model set before its compilation. In NHWC, NCHW, and NCDHW image data layout representations, the 'N' refers to the number of images in the batch.

-| *Device Affinitity*
+| *Device Affinity*
 | A preferred hardware device to run inference (CPU, GPU, GNA, etc.).

 | *Extensibility mechanism, Custom layers*
@@ -27,7 +27,7 @@
 omz_model_api_ovms_adapter

-Open Model Zoo for OpenVINO™ toolkit delivers a wide variety of free, pre-trained deep learning models and demo applications that provide full application templates to help you implement deep learning in Python, C++, or OpenCV Graph API (G-API). Models and demos are avalable in the `Open Model Zoo GitHub repo <https://github.com/openvinotoolkit/open_model_zoo>`__ and licensed under Apache License Version 2.0.
+Open Model Zoo for OpenVINO™ toolkit delivers a wide variety of free, pre-trained deep learning models and demo applications that provide full application templates to help you implement deep learning in Python, C++, or OpenCV Graph API (G-API). Models and demos are available in the `Open Model Zoo GitHub repo <https://github.com/openvinotoolkit/open_model_zoo>`__ and licensed under Apache License Version 2.0.

 Browse through over 200 neural network models, both :doc:`public <omz_models_group_public>` and from :doc:`Intel <omz_models_group_intel>`, and pick the right one for your solution. Types include object detection, classification, image segmentation, handwriting recognition, text to speech, pose estimation, and others. The Intel models have already been converted to work with OpenVINO™ toolkit, while public models can easily be converted using the :doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` utility.
@@ -1,7 +1,7 @@
 # Jupyter notebooks autodoc

 Auto fetching documentations designed for openvino notebooks tutorials.
-This module is responsible for fetching artifats, in this particular example jupyter tutorial notebooks and converting them to notebook documentation.
+This module is responsible for fetching artifacts, in this particular example jupyter tutorial notebooks and converting them to notebook documentation.

 ## Step 0. Prepare venv
@@ -6,7 +6,7 @@
 **Category**: *Arithmetic binary*

-**Short description**: *Add* performs element-wise addition operation with two given tensors applying broadcasting rule specified in the *auto_broacast* attribute.
+**Short description**: *Add* performs element-wise addition operation with two given tensors applying broadcasting rule specified in the *auto_broadcast* attribute.

 **Detailed description**
 Before performing arithmetic operation, input tensors *a* and *b* are broadcasted if their shapes are different and ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to ``auto_broadcast`` value.
@@ -16,13 +16,13 @@ Float type input:
 a_{i} = atanh(a_{i})

-Signed Intragral type put:
+Signed Integral type put:

 .. math::

 a_{i} = (i <= -1) ? std::numeric_limits<T>::min() : (i >= 1) ? std::numeric_limits<T>::max() : atanh(a_{i})

-Unsigned Intragral type put:
+Unsigned Integral type put:

 .. math::
@@ -6,7 +6,7 @@
 **Category**: *Arithmetic binary*

-**Short description**: *Divide* performs element-wise division operation with two given tensors applying broadcasting rule specified in the *auto_broacast* attribute.
+**Short description**: *Divide* performs element-wise division operation with two given tensors applying broadcasting rule specified in the *auto_broadcast* attribute.

 **Detailed description**
 Before performing arithmetic operation, input tensors *a* and *b* are broadcasted if their shapes are different and ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to ``auto_broadcast`` value.
@@ -6,7 +6,7 @@
 **Category**: *Arithmetic binary*

-**Short description**: *Multiply* performs element-wise multiplication operation with two given tensors applying broadcasting rule specified in the *auto_broacast* attribute.
+**Short description**: *Multiply* performs element-wise multiplication operation with two given tensors applying broadcasting rule specified in the *auto_broadcast* attribute.

 **Detailed description**
 Before performing arithmetic operation, input tensors *a* and *b* are broadcasted if their shapes are different and ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to ``auto_broadcast`` value.
@@ -6,7 +6,7 @@
 **Category**: *Arithmetic binary*

-**Short description**: *Subtract* performs element-wise subtraction operation with two given tensors applying broadcasting rule specified in the *auto_broacast* attribute.
+**Short description**: *Subtract* performs element-wise subtraction operation with two given tensors applying broadcasting rule specified in the *auto_broadcast* attribute.

 **Detailed description**
 Before performing arithmetic operation, input tensors *a* and *b* are broadcasted if their shapes are different and ``auto_broadcast`` attribute is not ``none``. Broadcasting is performed according to ``auto_broadcast`` value.
@@ -27,7 +27,7 @@ Rules
 1. First input tensor A is of any rank, second input B has rank smaller or equal to the first input.
 2. Input tensor B is a continuous subsequence of input A.
 3. Apply broadcast B to match the shape of A, where provided *axis* is the start dimension index for broadcasting B onto A.
-4. If *axis* is set to default (-1) calculate new value: ``axis = rank(A) - rank(B)``. Except (-1) for default valule, no other negative values are allowed for *axis*.
+4. If *axis* is set to default (-1) calculate new value: ``axis = rank(A) - rank(B)``. Except (-1) for default value, no other negative values are allowed for *axis*.
 5. The trailing dimensions of size 1 for input B will be ignored for the consideration of subsequence, such as ``shape(B) = (3, 1) => (3)``.

 Numpy examples
@@ -18,7 +18,7 @@ declared in ``variable_id`` and returns an error otherwise.
 * *variable_id*

-* **Description**: identificator of the variable to be updated
+* **Description**: identifier of the variable to be updated
 * **Range of values**: any non-empty string
 * **Type**: string
 * **Required**: *yes*
@@ -20,7 +20,7 @@ with the shape and type from the 1 input.
 * *variable_id*

-* **Description**: identificator of the variable to be read
+* **Description**: identifier of the variable to be read
 * **Range of values**: any non-empty string
 * **Type**: string
 * **Required**: *yes*
@@ -18,7 +18,7 @@ The operation supports ``equation`` in explicit and implicit modes. The formats
 In explicit mode, the einsum ``equation`` has the output subscript separated from the input subscripts by ``->``, and has the following format for ``n`` operands:
 ``<subscript for input1>, <subscript for input2>, ..., <subscript for inputn> -> <subscript for output>``.
 Each input subscript ``<subscript for input1>`` contains a sequence of labels (alphabetic letters ``['A',...,'Z','a',...,'z']``),
-where each label refers to a dimension of the corresponsing operand. Labels are case sensitive and capital letters precede lowercase letters in alphabetical sort.
+where each label refers to a dimension of the corresponding operand. Labels are case sensitive and capital letters precede lowercase letters in alphabetical sort.
 Labels do not need to appear in a subscript in alphabetical order.
 The subscript for a scalar input is empty. The input subscripts are separated with a comma ``,``.
 The output subscript ``<subscript for output>`` represents a sequence of labels (alphabetic letters ``['A',...,'Z','a',...,'z']``).
@@ -68,7 +68,7 @@ The value can be in the range ``[ -r, r - 1]``, where ``r`` is the rank of ``dat
 <dim>125</dim>
 <dim>20</dim>
 </port>
-<port id="2"> < !-- udpates -->
+<port id="2"> < !-- updates -->
 <dim>1000</dim>
 <dim>125</dim>
 <dim>20</dim>
@@ -102,7 +102,7 @@ The value can be in the range ``[ -r, r - 1]``, where ``r`` is the rank of ``dat
 <port id="1"> < !-- indices -->
 <dim>2</dim> < !-- {0, 2} -->
 </port>
-<port id="2"> < !-- udpates -->
+<port id="2"> < !-- updates -->
 <dim>3</dim> < !-- {1.0f, 1.0f} -->
 <dim>2</dim> < !-- {1.0f, 1.0f} -->
 </port> < !-- {1.0f, 2.0f} -->
@@ -12,7 +12,7 @@
 *CTCLoss* operation is presented in `Connectionist Temporal Classification - Labeling Unsegmented Sequence Data with Recurrent Neural Networks: Graves et al., 2016 <http://www.cs.toronto.edu/~graves/icml_2006.pdf>`__

-*CTCLoss* estimates likelihood that a target ``labels[i,:]`` can occur (or is real) for given input sequence of logits ``logits[i,:,:]``. Briefly, *CTCLoss* operation finds all sequences aligned with a target ``labels[i,:]``, computes log-probabilities of the aligned sequences using ``logits[i,:,:]`` and computes a negative sum of these log-probabilies.
+*CTCLoss* estimates likelihood that a target ``labels[i,:]`` can occur (or is real) for given input sequence of logits ``logits[i,:,:]``. Briefly, *CTCLoss* operation finds all sequences aligned with a target ``labels[i,:]``, computes log-probabilities of the aligned sequences using ``logits[i,:,:]`` and computes a negative sum of these log-probabilities.

 Input sequences of logits ``logits`` can have different lengths. The length of each sequence ``logits[i,:,:]`` equals ``logit_length[i]``.
 A length of target sequence ``labels[i,:]`` equals ``label_length[i]``. The length of the target sequence must not be greater than the length of corresponding input sequence ``logits[i,:,:]``.
@@ -180,9 +180,9 @@ Begin this step on the Intel® Core™ or Xeon® processor machine that meets th
 5. Build and install the `libtpm package <https://github.com/stefanberger/libtpms/>`__.
 6. Build and install the `swtpm package <https://github.com/stefanberger/swtpm/>`__.
 7. Add the ``swtpm`` package to the ``$PATH`` environment variable.
-8. Install the software tool `tpm2-tss <https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz>`__ . For innstallation information follow `here <https://github.com/tpm2-software/tpm2-tss/blob/master/INSTALL.md>`__.
-9. Install the software tool `tpm2-abmrd <https://github.com/tpm2-software/tpm2-abrmd/releases/download/2.3.3/tpm2-abrmd-2.3.3.tar.gz>`__ . For innstallation information follow `here <https://github.com/tpm2-software/tpm2-abrmd/blob/master/INSTALL.md>`__.
-10. Install the `tpm2-tools <https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz>`__ . For innstallation information follow `here <https://github.com/tpm2-software/tpm2-tools/blob/master/docs/INSTALL.md>`__.
+8. Install the software tool `tpm2-tss <https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz>`__ . For installation information follow `here <https://github.com/tpm2-software/tpm2-tss/blob/master/INSTALL.md>`__.
+9. Install the software tool `tpm2-abmrd <https://github.com/tpm2-software/tpm2-abrmd/releases/download/2.3.3/tpm2-abrmd-2.3.3.tar.gz>`__ . For installation information follow `here <https://github.com/tpm2-software/tpm2-abrmd/blob/master/INSTALL.md>`__.
+10. Install the `tpm2-tools <https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz>`__ . For installation information follow `here <https://github.com/tpm2-software/tpm2-tools/blob/master/docs/INSTALL.md>`__.
 11. Install the `Docker packages <https://docs.docker.com/engine/install/ubuntu/>`__ .

 .. note::
@@ -525,11 +525,11 @@ Step 5: Set Up one Guest VM for the User role
 3. Shut down the Guest VM.<br><br>

 **Option 2: Manually install additional software**
-1. Install the software tool `tpm2-tss <https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz>`__ For innstallation information follow `here <https://github.com/tpm2-software/tpm2-tss/blob/master/INSTALL.md>`__
+1. Install the software tool `tpm2-tss <https://github.com/tpm2-software/tpm2-tss/releases/download/2.4.4/tpm2-tss-2.4.4.tar.gz>`__ For installation information follow `here <https://github.com/tpm2-software/tpm2-tss/blob/master/INSTALL.md>`__
 2. Install the software tool `tpm2-abmrd <https://github.com/tpm2-software/tpm2-abrmd/releases/download/2.3.3/tpm2-abrmd-2.3.3.tar.gz>`__
-For innstallation information follow `here <https://github.com/tpm2-software/tpm2-abrmd/blob/master/INSTALL.md>`__
+For installation information follow `here <https://github.com/tpm2-software/tpm2-abrmd/blob/master/INSTALL.md>`__
 3. Install the `tpm2-tools <https://github.com/tpm2-software/tpm2-tools/releases/download/4.3.0/tpm2-tools-4.3.0.tar.gz>`__
-For innstallation information follow `here <https://github.com/tpm2-software/tpm2-tools/blob/master/docs/INSTALL.md>`__
+For installation information follow `here <https://github.com/tpm2-software/tpm2-tools/blob/master/docs/INSTALL.md>`__
 4. Install the `Docker packages <https://docs.docker.com/engine/install/ubuntu/>`__
 5. Shut down the Guest VM.
@@ -18,7 +18,7 @@ void shape_infer(const Select* op, const std::vector<T>& input_shapes, std::vect
 auto& result_shape = output_shapes[0];
 if (broadcast_spec.m_type == op::AutoBroadcastType::PDPD) {
 result_shape = input_shapes[1]; // 'then' tensor
-// in PDPD type, Broacast-merging 'else' into 'then' one way not each other.
+// in PDPD type, Broadcast-merging 'else' into 'then' one way not each other.
 NODE_VALIDATION_CHECK(op,
 T::broadcast_merge_into(result_shape, input_shapes[2], broadcast_spec),
 "'Else' tensor shape is not broadcastable.");