Release doc updates_port (#14784)
parent 45bf23e527 · commit f31ebd4947
@@ -273,4 +273,4 @@ exec_net = ie.load_network(network=net, device_name="CPU")
 result_ie = exec_net.infer(input_data)
 ```
 
-For more information about Python API, refer to the [OpenVINO Runtime Python API](https://docs.openvino.ai/2022.2/api/api_reference.html) guide.
+For more information about Python API, refer to the [OpenVINO Runtime Python API](https://docs.openvino.ai/latest/api/api_reference.html) guide.
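
The hunk above ends a snippet written against the legacy Inference Engine Python API. For orientation only, here is a minimal, hedged sketch of that flow; the model paths (`model.xml`/`model.bin`) and the dummy input are placeholder assumptions, not values taken from this diff:

```python
# Minimal sketch of the legacy Inference Engine Python API flow shown in the hunk above.
# Model paths and input data are illustrative placeholders.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Build a dummy input that matches the network's first input shape.
input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
input_data = {input_name: np.random.rand(*shape).astype(np.float32)}

result_ie = exec_net.infer(input_data)
print({name: out.shape for name, out in result_ie.items()})
```

The linked API reference describes the newer OpenVINO Runtime Python API, which replaces this legacy flow.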
@@ -43,7 +43,7 @@ A typical workflow with OpenVINO is shown below.
 
 .. image:: _static/images/OV_flow_optimization_hvr.svg
    :alt: link to an optimization guide
-   :target: openvino_docs_optimization_guide_dldt_optimization_guide.html
+   :target: openvino_docs_model_optimization_guide.html
 
 .. container:: hp-flow-arrow
 
@@ -55,14 +55,6 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
 
 4. Download the `OpenVINO Runtime archive file for your system <https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/>`_, extract the files, rename the extracted folder and move it to the desired path:
 
-   .. tab:: Ubuntu 18.04
-
-      .. code-block:: sh
-
-         curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
-         tar -xf openvino_2022.3.0.tgz
-         sudo mv l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
-
    .. tab:: Ubuntu 20.04
 
       .. code-block:: sh
@@ -71,6 +63,14 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
          tar -xf openvino_2022.3.0.tgz
          sudo mv l_openvino_toolkit_ubuntu20_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
 
+   .. tab:: Ubuntu 18.04
+
+      .. code-block:: sh
+
+         curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
+         tar -xf openvino_2022.3.0.tgz
+         sudo mv l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
+
    .. tab:: RHEL 8
 
       .. code-block:: sh
@@ -25,8 +25,8 @@ Adding compression to a training pipeline only requires a few lines of code. The
 ### NNCF Quick Start Examples
 See the following Jupyter Notebooks for step-by-step examples showing how to add model compression to a PyTorch or Tensorflow training pipeline with NNCF:
 
-- [Quantization Aware Training with NNCF and PyTorch](https://docs.openvino.ai/2022.2/notebooks/302-pytorch-quantization-aware-training-with-output.html).
-- [Quantization Aware Training with NNCF and TensorFlow](https://docs.openvino.ai/2022.2/notebooks/305-tensorflow-quantization-aware-training-with-output.html).
+- [Quantization Aware Training with NNCF and PyTorch](https://docs.openvino.ai/latest/notebooks/302-pytorch-quantization-aware-training-with-output.html).
+- [Quantization Aware Training with NNCF and TensorFlow](https://docs.openvino.ai/latest/notebooks/305-tensorflow-quantization-aware-training-with-output.html).
 
 ## Installation
 NNCF is open-sourced on [GitHub](https://github.com/openvinotoolkit/nncf) and distributed as a separate package from OpenVINO. It is also available on PyPI. Install it to the same Python environment where PyTorch or TensorFlow is installed.
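
The page edited above states that adding compression to a training pipeline takes only a few lines of code. As a rough, hedged sketch of what that integration typically looks like with PyTorch and NNCF quantization-aware training; the model, input size, and dataset here are illustrative assumptions, not content from this diff:

```python
# Sketch of wrapping a PyTorch model with NNCF quantization-aware training.
# The model, input size, and dataset are placeholders for a real pipeline.
import torch
from nncf import NNCFConfig
from nncf.torch import create_compressed_model, register_default_init_args

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
)

nncf_config = NNCFConfig.from_dict({
    "input_info": {"sample_size": [1, 3, 224, 224]},
    "compression": {"algorithm": "quantization"},
})

# A tiny random dataset stands in for the real training loader; NNCF uses it
# only to initialize quantizer ranges before training continues as usual.
dataset = torch.utils.data.TensorDataset(
    torch.randn(16, 3, 224, 224), torch.randint(0, 10, (16,))
)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=4)
nncf_config = register_default_init_args(nncf_config, train_loader)

compression_ctrl, compressed_model = create_compressed_model(model, nncf_config)
# ... run the usual training loop with compressed_model, then export, e.g.:
# compression_ctrl.export_model("compressed_model.onnx")
```

After training, the exported model can be converted and run with OpenVINO as usual.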
@@ -82,5 +82,5 @@ Using compression-aware training requires a training pipeline, an annotated data
 - [Quantizing Models Post-training](@ref pot_introduction)
 - [NNCF GitHub repository](https://github.com/openvinotoolkit/nncf)
 - [NNCF FAQ](https://github.com/openvinotoolkit/nncf/blob/develop/docs/FAQ.md)
-- [Quantization Aware Training with NNCF and PyTorch](https://docs.openvino.ai/2022.2/notebooks/302-pytorch-quantization-aware-training-with-output.html)
-- [Quantization Aware Training with NNCF and TensorFlow](https://docs.openvino.ai/2022.2/notebooks/305-tensorflow-quantization-aware-training-with-output.html)
+- [Quantization Aware Training with NNCF and PyTorch](https://docs.openvino.ai/latest/notebooks/302-pytorch-quantization-aware-training-with-output.html)
+- [Quantization Aware Training with NNCF and TensorFlow](https://docs.openvino.ai/latest/notebooks/305-tensorflow-quantization-aware-training-with-output.html)
@@ -1,4 +1,4 @@
-# Post-training Quantization w/ NNCF (new) {#nncf_ptq_introduction}
+# Post-training Quantization with NNCF (new) {#nncf_ptq_introduction}
 
 @sphinxdirective
 
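
The page renamed above documents the newer NNCF post-training quantization flow. Purely as orientation, a hedged sketch of that API is shown here; the model, calibration data, and `transform_fn` are assumptions, and supported frameworks and exact signatures may differ between NNCF releases:

```python
# Hedged sketch of NNCF post-training quantization; not taken from the page itself.
# The model and calibration data are illustrative placeholders.
import torch
import nncf

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
)
calibration_data = [torch.randn(1, 3, 224, 224) for _ in range(10)]

def transform_fn(data_item):
    # Map one dataset item to the model's expected input; identity here.
    return data_item

calibration_dataset = nncf.Dataset(calibration_data, transform_fn)
quantized_model = nncf.quantize(model, calibration_dataset)
```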
@@ -1,4 +1,4 @@
-# Post-training Quantization w/ POT {#pot_introduction}
+# Post-training Quantization with POT {#pot_introduction}
 
 @sphinxdirective
 
@@ -27,8 +27,8 @@ While post-training quantization makes your model run faster and take less memor
 ### Post-Training Quantization Quick Start Examples
 Try out these interactive Jupyter Notebook examples to learn the POT API and see post-training quantization in action:
 
-* [Quantization of Image Classification Models with POT](https://docs.openvino.ai/2022.2/notebooks/113-image-classification-quantization-with-output.html).
-* [Object Detection Quantization with POT](https://docs.openvino.ai/2022.2/notebooks/111-detection-quantization-with-output.html).
+* [Quantization of Image Classification Models with POT](https://docs.openvino.ai/latest/notebooks/113-image-classification-quantization-with-output.html).
+* [Object Detection Quantization with POT](https://docs.openvino.ai/latest/notebooks/111-detection-quantization-with-output.html).
 
 ## Quantizing Models with POT
 The figure below shows the post-training quantization workflow with POT. In a typical workflow, a pre-trained model is converted to OpenVINO IR format using Model Optimizer. Then, the model is quantized with a representative dataset using POT.
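
For readers skimming this diff, the workflow described above (an IR model plus a representative dataset fed to POT) roughly maps to the POT Python API as sketched below; the file names, dataset, and DefaultQuantization parameters are assumptions rather than anything defined on that page:

```python
# Rough sketch of the POT workflow described above: load an IR model,
# feed it a representative dataset, and run DefaultQuantization.
# Paths, the dataset, and parameter values are illustrative placeholders.
import numpy as np
from openvino.tools.pot import DataLoader, IEEngine, load_model, save_model, create_pipeline

class RandomImageLoader(DataLoader):
    """Stand-in dataset; a real loader would read calibration images from disk."""
    def __init__(self, num_samples=300):
        self.num_samples = num_samples
    def __len__(self):
        return self.num_samples
    def __getitem__(self, index):
        image = np.random.rand(3, 224, 224).astype(np.float32)
        # (data, annotation); some POT versions expect (annotation, data) instead.
        return image, None

model_config = {"model_name": "model", "model": "model.xml", "weights": "model.bin"}
engine_config = {"device": "CPU"}
algorithms = [{
    "name": "DefaultQuantization",
    "params": {"target_device": "ANY", "stat_subset_size": 300},
}]

model = load_model(model_config)
engine = IEEngine(config=engine_config, data_loader=RandomImageLoader())
pipeline = create_pipeline(algorithms, engine)
compressed_model = pipeline.run(model)
save_model(compressed_model, save_path="optimized_model")
```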