diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md
index 69d4a153507..3fae631ab90 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_GNMT_From_Tensorflow.md
@@ -273,4 +273,4 @@ exec_net = ie.load_network(network=net, device_name="CPU")
 result_ie = exec_net.infer(input_data)
 ```
-For more information about Python API, refer to the [OpenVINO Runtime Python API](https://docs.openvino.ai/2022.2/api/api_reference.html) guide.
+For more information about Python API, refer to the [OpenVINO Runtime Python API](https://docs.openvino.ai/latest/api/api_reference.html) guide.
diff --git a/docs/home.rst b/docs/home.rst
index 939b2fc7e36..b1ea7e7f4f1 100644
--- a/docs/home.rst
+++ b/docs/home.rst
@@ -43,7 +43,7 @@ A typical workflow with OpenVINO is shown below.
 
 .. image:: _static/images/OV_flow_optimization_hvr.svg
    :alt: link to an optimization guide
-   :target: openvino_docs_optimization_guide_dldt_optimization_guide.html
+   :target: openvino_docs_model_optimization_guide.html
 
 .. container:: hp-flow-arrow
 
diff --git a/docs/install_guides/installing-openvino-from-archive-linux.md b/docs/install_guides/installing-openvino-from-archive-linux.md
index 55658060470..f8c2a19017a 100644
--- a/docs/install_guides/installing-openvino-from-archive-linux.md
+++ b/docs/install_guides/installing-openvino-from-archive-linux.md
@@ -55,14 +55,6 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
 4. Download the `OpenVINO Runtime archive file for your system `_, extract the files, rename the extracted folder and move it to the desired path:
 
-   .. tab:: Ubuntu 18.04
-
-      .. code-block:: sh
-
-         curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
-         tar -xf openvino_2022.3.0.tgz
-         sudo mv l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
-
    .. tab:: Ubuntu 20.04
 
       .. code-block:: sh
@@ -70,7 +62,15 @@ See the [Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNo
          curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_ubuntu20_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
         tar -xf openvino_2022.3.0.tgz
         sudo mv l_openvino_toolkit_ubuntu20_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
-
+
+   .. tab:: Ubuntu 18.04
+
+      .. code-block:: sh
+
+         curl -L https://storage.openvinotoolkit.org/repositories/openvino/packages/2022.3/linux/l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64.tgz --output openvino_2022.3.0.tgz
+         tar -xf openvino_2022.3.0.tgz
+         sudo mv l_openvino_toolkit_ubuntu18_2022.3.0.9052.9752fafe8eb_x86_64 /opt/intel/openvino_2022.3.0
+
    .. tab:: RHEL 8
 
       .. code-block:: sh
diff --git a/docs/optimization_guide/nncf/introduction.md b/docs/optimization_guide/nncf/introduction.md
index 08434c040ab..ba2a2662ba3 100644
--- a/docs/optimization_guide/nncf/introduction.md
+++ b/docs/optimization_guide/nncf/introduction.md
@@ -25,8 +25,8 @@ Adding compression to a training pipeline only requires a few lines of code. The
 
 ### NNCF Quick Start Examples
 See the following Jupyter Notebooks for step-by-step examples showing how to add model compression to a PyTorch or Tensorflow training pipeline with NNCF:
-- [Quantization Aware Training with NNCF and PyTorch](https://docs.openvino.ai/2022.2/notebooks/302-pytorch-quantization-aware-training-with-output.html).
-- [Quantization Aware Training with NNCF and TensorFlow](https://docs.openvino.ai/2022.2/notebooks/305-tensorflow-quantization-aware-training-with-output.html).
+- [Quantization Aware Training with NNCF and PyTorch](https://docs.openvino.ai/latest/notebooks/302-pytorch-quantization-aware-training-with-output.html).
+- [Quantization Aware Training with NNCF and TensorFlow](https://docs.openvino.ai/latest/notebooks/305-tensorflow-quantization-aware-training-with-output.html).
 
 ## Installation
 NNCF is open-sourced on [GitHub](https://github.com/openvinotoolkit/nncf) and distributed as a separate package from OpenVINO. It is also available on PyPI. Install it to the same Python environment where PyTorch or TensorFlow is installed.
@@ -82,5 +82,5 @@ Using compression-aware training requires a training pipeline, an annotated data
 - [Quantizing Models Post-training](@ref pot_introduction)
 - [NNCF GitHub repository](https://github.com/openvinotoolkit/nncf)
 - [NNCF FAQ](https://github.com/openvinotoolkit/nncf/blob/develop/docs/FAQ.md)
-- [Quantization Aware Training with NNCF and PyTorch](https://docs.openvino.ai/2022.2/notebooks/302-pytorch-quantization-aware-training-with-output.html)
-- [Quantization Aware Training with NNCF and TensorFlow](https://docs.openvino.ai/2022.2/notebooks/305-tensorflow-quantization-aware-training-with-output.html)
\ No newline at end of file
+- [Quantization Aware Training with NNCF and PyTorch](https://docs.openvino.ai/latest/notebooks/302-pytorch-quantization-aware-training-with-output.html)
+- [Quantization Aware Training with NNCF and TensorFlow](https://docs.openvino.ai/latest/notebooks/305-tensorflow-quantization-aware-training-with-output.html)
\ No newline at end of file
diff --git a/docs/optimization_guide/nncf/ptq/ptq_introduction.md b/docs/optimization_guide/nncf/ptq/ptq_introduction.md
index 423ea00bd03..a87e5f9d293 100644
--- a/docs/optimization_guide/nncf/ptq/ptq_introduction.md
+++ b/docs/optimization_guide/nncf/ptq/ptq_introduction.md
@@ -1,4 +1,4 @@
-# Post-training Quantization w/ NNCF (new) {#nncf_ptq_introduction}
+# Post-training Quantization with NNCF (new) {#nncf_ptq_introduction}
 
 @sphinxdirective
 
diff --git a/tools/pot/docs/Introduction.md b/tools/pot/docs/Introduction.md
index 8e667d07322..a92b2839f2e 100644
--- a/tools/pot/docs/Introduction.md
+++ b/tools/pot/docs/Introduction.md
@@ -1,4 +1,4 @@
-# Post-training Quantization w/ POT {#pot_introduction}
+# Post-training Quantization with POT {#pot_introduction}
 
 @sphinxdirective
 
@@ -27,8 +27,8 @@ While post-training quantization makes your model run faster and take less memor
 ### Post-Training Quantization Quick Start Examples
 Try out these interactive Jupyter Notebook examples to learn the POT API and see post-training quantization in action:
-* [Quantization of Image Classification Models with POT](https://docs.openvino.ai/2022.2/notebooks/113-image-classification-quantization-with-output.html).
-* [Object Detection Quantization with POT](https://docs.openvino.ai/2022.2/notebooks/111-detection-quantization-with-output.html).
+* [Quantization of Image Classification Models with POT](https://docs.openvino.ai/latest/notebooks/113-image-classification-quantization-with-output.html).
+* [Object Detection Quantization with POT](https://docs.openvino.ai/latest/notebooks/111-detection-quantization-with-output.html).
 
 ## Quantizing Models with POT
 The figure below shows the post-training quantization workflow with POT. In a typical workflow, a pre-trained model is converted to OpenVINO IR format using Model Optimizer. Then, the model is quantized with a representative dataset using POT.
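The link updates across this patch all follow one mechanical pattern: a pinned version segment (`2022.2`) in `docs.openvino.ai` URLs is replaced with `latest`. A hypothetical helper sketching that rewrite, useful for applying the same change to any remaining docs files (not part of the patch itself):

```python
import re

# Matches a pinned version segment (e.g. "2022.2") in docs.openvino.ai URLs.
_VERSIONED = re.compile(r"(https://docs\.openvino\.ai/)\d{4}\.\d+(/)")


def pin_to_latest(text: str) -> str:
    """Return text with versioned docs.openvino.ai links pointed at /latest/."""
    return _VERSIONED.sub(r"\1latest\2", text)


print(pin_to_latest(
    "[guide](https://docs.openvino.ai/2022.2/api/api_reference.html)"
))
# -> [guide](https://docs.openvino.ai/latest/api/api_reference.html)
```

The regex deliberately requires a `YYYY.N`-shaped segment so that already-updated `latest` links and non-doc URLs pass through unchanged.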