From dc36ec11b5d9f0ceea6d12f638edbf3b159f94e4 Mon Sep 17 00:00:00 2001 From: Maciej Smyk Date: Wed, 31 May 2023 11:27:20 +0200 Subject: [PATCH] [DOCS] Link adjustment for dev docs + fix to build.md CPU link for master (#17744) * link-update-1 * link update * Update build.md * dl workbench * Update README.md --- README.md | 31 +++++++++---------- docs/dev/build.md | 2 +- .../cmake_options_for_custom_compilation.md | 4 +-- docs/dev/debug_capabilities.md | 2 +- docs/install_guides/pypi-openvino-dev.md | 12 +++---- docs/install_guides/pypi-openvino-rt.md | 4 +-- src/bindings/c/README.md | 8 ++--- .../how_to_wrap_openvino_interfaces_with_c.md | 2 +- .../how_to_wrap_openvino_objects_with_c.md | 2 +- src/bindings/c/docs/how_to_write_unit_test.md | 2 +- src/bindings/python/README.md | 6 ++-- src/core/README.md | 4 +-- src/core/docs/api_details.md | 2 +- src/core/docs/debug_capabilities.md | 2 +- src/frontends/ir/README.md | 2 +- src/frontends/paddle/README.md | 2 +- src/frontends/tensorflow/README.md | 8 ++--- src/inference/docs/api_details.md | 4 +-- src/plugins/auto/README.md | 2 +- src/plugins/auto/docs/architecture.md | 6 ++-- src/plugins/intel_cpu/docs/fake_quantize.md | 2 +- .../docs/internal_cpu_plugin_optimization.md | 2 +- .../docs/gpu_plugin_driver_troubleshooting.md | 4 +-- .../intel_gpu/docs/source_code_structure.md | 2 +- src/plugins/template/README.md | 4 +-- tools/pot/README.md | 11 +++---- tools/pot/docs/ModelRepresentation.md | 2 +- 27 files changed, 66 insertions(+), 68 deletions(-) diff --git a/README.md b/README.md index 23fab4d3cb3..4ff8512e994 100644 --- a/README.md +++ b/README.md @@ -71,24 +71,24 @@ The OpenVINO™ Runtime can infer models on different hardware devices. 
This sec CPU - Intel CPU + Intel CPU openvino_intel_cpu_plugin Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE) - ARM CPU + ARM CPU openvino_arm_cpu_plugin Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices GPU - Intel GPU + Intel GPU openvino_intel_gpu_plugin Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics GNA - Intel GNA + Intel GNA openvino_intel_gna_plugin Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor @@ -106,22 +106,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models - Auto + Auto openvino_auto_plugin Auto plugin enables selecting Intel device for inference automatically - Auto Batch + Auto Batch openvino_auto_batch_plugin Auto batch plugin performs on-the-fly automatic batching (i.e. 
grouping inference requests together) to improve device utilization, with no programming effort from the user - Hetero + Hetero openvino_hetero_plugin Heterogeneous execution enables automatic inference splitting between several devices - Multi + Multi openvino_auto_plugin Multi plugin enables simultaneous inference of the same model on several devices in parallel @@ -158,10 +158,10 @@ The list of OpenVINO tutorials: ## System requirements The system requirements vary depending on platform and are available on dedicated pages: -- [Linux](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_linux_header.html) -- [Windows](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_windows_header.html) -- [macOS](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_macos_header.html) -- [Raspbian](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_raspbian.html) +- [Linux](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_linux_header.html) +- [Windows](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_windows_header.html) +- [macOS](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_macos_header.html) +- [Raspbian](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_raspbian.html) ## How to build @@ -192,7 +192,6 @@ Report questions, issues and suggestions, using: * [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity * [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference. 
* [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures -* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models. * [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes. * [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets. @@ -200,7 +199,7 @@ Report questions, issues and suggestions, using: \* Other names and brands may be claimed as the property of others. [Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo -[OpenVINO™ Runtime]:https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html -[Model Optimizer]:https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html -[Post-Training Optimization Tool]:https://docs.openvino.ai/nightly/pot_introduction.html +[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html +[Model Optimizer]:https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html +[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.0/pot_introduction.html [Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples diff --git a/docs/dev/build.md b/docs/dev/build.md index e02a5770941..c70ef73f527 100644 --- a/docs/dev/build.md +++ b/docs/dev/build.md @@ -16,7 +16,7 @@ The articles below provide the basic information about the process of building * [Windows](build_windows.md) * [Linux](build_linux.md) -* [Mac (Intel GPU)](build_mac_intel_cpu.md) +* [Mac (Intel 
CPU)](build_mac_intel_cpu.md) * [Mac (ARM)](build_mac_arm.md) * [Android](build_android.md) * [Raspbian Stretch](./build_raspbian.md) diff --git a/docs/dev/cmake_options_for_custom_compilation.md b/docs/dev/cmake_options_for_custom_compilation.md index a98bd427831..30a5164ebbb 100644 --- a/docs/dev/cmake_options_for_custom_compilation.md +++ b/docs/dev/cmake_options_for_custom_compilation.md @@ -189,8 +189,8 @@ In this case OpenVINO CMake scripts take `TBBROOT` environment variable into acc [pugixml]:https://pugixml.org/ [ONNX]:https://onnx.ai/ [protobuf]:https://github.com/protocolbuffers/protobuf -[deployment manager]:https://docs.openvino.ai/latest/openvino_docs_install_guides_deployment_manager_tool.html -[OpenVINO Runtime Introduction]:https://docs.openvino.ai/latest/openvino_docs_OV_UG_Integrate_OV_with_your_application.html +[deployment manager]:https://docs.openvino.ai/2023.0/openvino_docs_install_guides_deployment_manager_tool.html +[OpenVINO Runtime Introduction]:https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Integrate_OV_with_your_application.html [PDPD]:https://github.com/PaddlePaddle/Paddle [TensorFlow]:https://www.tensorflow.org/ [TensorFlow Lite]:https://www.tensorflow.org/lite diff --git a/docs/dev/debug_capabilities.md b/docs/dev/debug_capabilities.md index 18a2e34cf42..f228c06e9d6 100644 --- a/docs/dev/debug_capabilities.md +++ b/docs/dev/debug_capabilities.md @@ -2,7 +2,7 @@ OpenVINO components provide different debug capabilities; to get more information, please read: -* [OpenVINO Model Debug Capabilities](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_Representation.html#model-debug-capabilities) +* [OpenVINO Model Debug Capabilities](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Model_Representation.html#model-debug-capabilities) * [OpenVINO Pass Manager Debug Capabilities](#todo) ## See also diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index 90d5aca7096..33b4b6207dd 
100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -119,15 +119,15 @@ For example, to install and configure the components for working with TensorFlow | Component | Console Script | Description | |------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| [Model Optimizer](https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components.
Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. | -| [Benchmark Tool](https://docs.openvino.ai/nightly/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | -| [Accuracy Checker](https://docs.openvino.ai/nightly/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvino.ai/nightly/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and a set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility that prepares datasets for evaluation with Accuracy Checker. | -| [Post-Training Optimization Tool](https://docs.openvino.ai/nightly/pot_introduction.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. | -| [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/nightly/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. | +| [Model Optimizer](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components.
Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. | +| [Benchmark Tool](https://docs.openvino.ai/2023.0/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. | +| [Accuracy Checker](https://docs.openvino.ai/2023.0/omz_tools_accuracy_checker.html) and
[Annotation Converter](https://docs.openvino.ai/2023.0/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check`
`convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and a set of supported datasets, preprocessing, postprocessing, and metrics.
**Annotation Converter** is a utility that prepares datasets for evaluation with Accuracy Checker. | +| [Post-Training Optimization Tool](https://docs.openvino.ai/2023.0/pot_introduction.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. | +| [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/2023.0/omz_tools_downloader.html)| `omz_downloader`
`omz_converter`
`omz_quantizer`
`omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](@ref omz_models_group_public) and [Intel](@ref omz_models_group_intel)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:
**Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the OpenVINO Intermediate Representation (IR) using Model Optimizer.
**Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool.
**Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format. | ## Troubleshooting -For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/nightly/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations to several error messages. +For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2023.0/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations to several error messages. ### Errors with Installing via PIP for Users in China diff --git a/docs/install_guides/pypi-openvino-rt.md b/docs/install_guides/pypi-openvino-rt.md index f0d6f31eb62..dd85c8b5c8e 100644 --- a/docs/install_guides/pypi-openvino-rt.md +++ b/docs/install_guides/pypi-openvino-rt.md @@ -5,7 +5,7 @@ Intel® Distribution of OpenVINO™ toolkit is an open-source toolkit for optimizing and deploying AI inference. It can be used to develop applications and solutions based on deep learning tasks, such as: emulation of human vision, automatic speech recognition, natural language processing, recommendation systems, etc. It provides high-performance and rich deployment options, from edge to cloud. -If you have already finished developing your models and converting them to the OpenVINO model format, you can install OpenVINO Runtime to deploy your applications on various devices. The [OpenVINO™ Runtime](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) Python package includes a set of libraries for an easy inference integration with your products. +If you have already finished developing your models and converting them to the OpenVINO model format, you can install OpenVINO Runtime to deploy your applications on various devices. 
The [OpenVINO™ Runtime](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) Python package includes a set of libraries for an easy inference integration with your products. ## System Requirements @@ -72,7 +72,7 @@ If installation was successful, you will not see any error messages (no console ## Troubleshooting -For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/nightly/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations to several error messages. +For general troubleshooting steps and issues, see [Troubleshooting Guide for OpenVINO Installation](https://docs.openvino.ai/2023.0/openvino_docs_get_started_guide_troubleshooting.html). The following sections also provide explanations to several error messages. ### Errors with Installing via PIP for Users in China diff --git a/src/bindings/c/README.md b/src/bindings/c/README.md index 4a9e18e30fc..225c976d49c 100644 --- a/src/bindings/c/README.md +++ b/src/bindings/c/README.md @@ -25,7 +25,7 @@ People from the [openvino-c-api-maintainers](https://github.com/orgs/openvinotoo OpenVINO C API has the following structure: * [docs](./docs) contains developer documentation for OpenVINO C APIs. - * [include](./include) contains all provided C API headers. [Learn more](https://docs.openvino.ai/latest/api/api_reference.html). + * [include](./include) contains all provided C API headers. [Learn more](https://docs.openvino.ai/2023.0/api/api_reference.html). * [src](./src) contains the implementations of all C APIs. * [tests](./tests) contains all tests for OpenVINO C APIs. [Learn more](./docs/how_to_write_unit_test.md). 
@@ -33,7 +33,7 @@ OpenVINO C API has the following structure: ## Tutorials -* [How to integrate OpenVINO C API with Your Application](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Integrate_OV_with_your_application.html) +* [How to integrate OpenVINO C API with Your Application](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Integrate_OV_with_your_application.html) * [How to wrap OpenVINO objects with C](./docs/how_to_wrap_openvino_objects_with_c.md) * [How to wrap OpenVINO interfaces with C](./docs/how_to_wrap_openvino_interfaces_with_c.md) * [Samples implemented by OpenVINO C API](../../../samples/c/) @@ -47,5 +47,5 @@ See [CONTRIBUTING](../../../CONTRIBUTING.md) for details. ## See also * [OpenVINO™ README](../../../README.md) - * [OpenVINO Runtime C API User Guide](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Integrate_OV_with_your_application.html) - * [Migration of OpenVINO C API](https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html) + * [OpenVINO Runtime C API User Guide](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Integrate_OV_with_your_application.html) + * [Migration of OpenVINO C API](https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html) diff --git a/src/bindings/c/docs/how_to_wrap_openvino_interfaces_with_c.md b/src/bindings/c/docs/how_to_wrap_openvino_interfaces_with_c.md index 5afb1fc1375..435e2e35529 100644 --- a/src/bindings/c/docs/how_to_wrap_openvino_interfaces_with_c.md +++ b/src/bindings/c/docs/how_to_wrap_openvino_interfaces_with_c.md @@ -78,4 +78,4 @@ The tensor create needs to specify the shape info, so C shape need to be convert ## See also * [OpenVINO™ README](../../../../README.md) * [C API developer guide](../README.md) - * [C API Reference](https://docs.openvino.ai/latest/api/api_reference.html) + * [C API Reference](https://docs.openvino.ai/2023.0/api/api_reference.html) diff --git a/src/bindings/c/docs/how_to_wrap_openvino_objects_with_c.md 
b/src/bindings/c/docs/how_to_wrap_openvino_objects_with_c.md index 11d1af83f2a..092f37138ac 100644 --- a/src/bindings/c/docs/how_to_wrap_openvino_objects_with_c.md +++ b/src/bindings/c/docs/how_to_wrap_openvino_objects_with_c.md @@ -73,4 +73,4 @@ https://github.com/openvinotoolkit/openvino/blob/d96c25844d6cfd5ad131539c8a09282 ## See also * [OpenVINO™ README](../../../../README.md) * [C API developer guide](../README.md) - * [C API Reference](https://docs.openvino.ai/latest/api/api_reference.html) \ No newline at end of file + * [C API Reference](https://docs.openvino.ai/2023.0/api/api_reference.html) \ No newline at end of file diff --git a/src/bindings/c/docs/how_to_write_unit_test.md b/src/bindings/c/docs/how_to_write_unit_test.md index d1f4a83d6c2..0cc2f0e1681 100644 --- a/src/bindings/c/docs/how_to_write_unit_test.md +++ b/src/bindings/c/docs/how_to_write_unit_test.md @@ -14,5 +14,5 @@ https://github.com/openvinotoolkit/openvino/blob/d96c25844d6cfd5ad131539c8a09282 ## See also * [OpenVINO™ README](../../../../README.md) * [C API developer guide](../README.md) - * [C API Reference](https://docs.openvino.ai/latest/api/api_reference.html) + * [C API Reference](https://docs.openvino.ai/2023.0/api/api_reference.html) diff --git a/src/bindings/python/README.md b/src/bindings/python/README.md index 76a6f646f81..d3df594657f 100644 --- a/src/bindings/python/README.md +++ b/src/bindings/python/README.md @@ -42,8 +42,8 @@ If you want to contribute to OpenVINO Python API, here is the list of learning m * [OpenVINO™ README](../../../README.md) * [OpenVINO™ Core Components](../../README.md) -* [OpenVINO™ Python API Reference](https://docs.openvino.ai/latest/api/ie_python_api/api.html) -* [OpenVINO™ Python API Advanced Inference](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Python_API_inference.html) -* [OpenVINO™ Python API Exclusives](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Python_API_exclusives.html) +* [OpenVINO™ Python API 
Reference](https://docs.openvino.ai/2023.0/api/ie_python_api/api.html) +* [OpenVINO™ Python API Advanced Inference](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Python_API_inference.html) +* [OpenVINO™ Python API Exclusives](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Python_API_exclusives.html) * [pybind11 repository](https://github.com/pybind/pybind11) * [pybind11 documentation](https://pybind11.readthedocs.io/en/stable/) diff --git a/src/core/README.md b/src/core/README.md index b8e7aa7cbda..f0321a41fcf 100644 --- a/src/core/README.md +++ b/src/core/README.md @@ -2,7 +2,7 @@ OpenVINO Core is a part of OpenVINO Runtime library. The component is responsible for: - * Model representation - component provides classes for manipulation with models inside the OpenVINO Runtime. For more information please read [Model representation in OpenVINO Runtime User Guide](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_Representation.html) + * Model representation - component provides classes for manipulation with models inside the OpenVINO Runtime. For more information please read [Model representation in OpenVINO Runtime User Guide](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Model_Representation.html) * Operation representation - contains all out-of-the-box supported OpenVINO operations and opsets. For more information read [Operations enabling flow guide](./docs/operation_enabling_flow.md). * Model modification - component provides base classes which allow to develop transformation passes for model modification. For more information read [Transformation enabling flow guide](#todo). @@ -27,7 +27,7 @@ OpenVINO Core has the following structure: ## Tutorials * [How to add new operations](./docs/operation_enabling_flow.md). - * [How to add OpenVINO Extension](https://docs.openvino.ai/latest/openvino_docs_Extensibility_UG_Intro.html). This document is based on the [template_extension](./template_extension/new/). 
+ * [How to add OpenVINO Extension](https://docs.openvino.ai/2023.0/openvino_docs_Extensibility_UG_Intro.html). This document is based on the [template_extension](./template_extension/new/). * [How to debug the component](./docs/debug_capabilities.md). ## See also diff --git a/src/core/docs/api_details.md b/src/core/docs/api_details.md index d715174bbfb..f02b94039c4 100644 --- a/src/core/docs/api_details.md +++ b/src/core/docs/api_details.md @@ -18,7 +18,7 @@ OpenVINO Core API contains two folders: ## Main structures for model representation -* `ov::Model` is located in [openvino/core/model.hpp](../include/openvino/core/model.hpp) and provides API for model representation. For more details, read [OpenVINO Model Representation Guide](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_Representation.html). +* `ov::Model` is located in [openvino/core/model.hpp](../include/openvino/core/model.hpp) and provides API for model representation. For more details, read [OpenVINO Model Representation Guide](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Model_Representation.html). * `ov::Node` is a base class for all OpenVINO operations, the class is located in the [openvino/core/node.hpp](../include/openvino/core/node.hpp). * `ov::Shape` and `ov::PartialShape` classes represent shapes in OpenVINO, these classes are located in the [openvino/core/shape.hpp](../include/openvino/core/shape.hpp) and [openvino/core/partial_shape.hpp](../include/openvino/core/partial_shape.hpp) respectively. For more information, read [OpenVINO Shapes representation](./shape_propagation.md#openvino-shapes-representation). * `ov::element::Type` class represents element type for OpenVINO Tensors and Operations. The class is located in the [openvino/core/type/element_type.hpp](../include/openvino/core/type/element_type.hpp). 
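The shape classes listed above separate fully static shapes (`ov::Shape`) from partially dynamic ones (`ov::PartialShape`). As a rough, plain-Python sketch of the concept only — `DYN`, `is_static`, and `compatible` are made-up names for illustration, not the actual C++ API:

```python
# Conceptual sketch of partial shapes: a dimension is either a fixed int
# or DYN (dynamic/unknown). All names here are illustrative only.
DYN = None

def is_static(shape):
    # A shape is static when every dimension is known.
    return all(d is not DYN for d in shape)

def compatible(a, b):
    # Two partial shapes are compatible when they have the same rank and
    # every dimension pair can agree (a dynamic dimension matches anything).
    return len(a) == len(b) and all(
        x is DYN or y is DYN or x == y for x, y in zip(a, b))

print(is_static((1, 3, 224, 224)))                      # a static NCHW shape
print(compatible((DYN, 3, 224, 224), (1, 3, 224, 224))) # dynamic batch matches
```

In the real API, `ov::PartialShape` additionally models bounded dimension ranges; this sketch captures only the static/dynamic distinction.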
diff --git a/src/core/docs/debug_capabilities.md b/src/core/docs/debug_capabilities.md index 6a282c80677..8bbc21b9a76 100644 --- a/src/core/docs/debug_capabilities.md +++ b/src/core/docs/debug_capabilities.md @@ -2,7 +2,7 @@ OpenVINO Core contains a set of different debug capabilities that make developer life easier by collecting information about object statuses during OpenVINO Runtime execution and reporting this information to the developer. -* OpenVINO Model debug capabilities are described in the [OpenVINO Model User Guide](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Model_Representation.html#model-debug-capabilities). +* OpenVINO Model debug capabilities are described in the [OpenVINO Model User Guide](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Model_Representation.html#model-debug-capabilities). ## See also * [OpenVINO™ Core README](../README.md) diff --git a/src/frontends/ir/README.md b/src/frontends/ir/README.md index 775d6a3ac96..4f418203117 100644 --- a/src/frontends/ir/README.md +++ b/src/frontends/ir/README.md @@ -11,7 +11,7 @@ flowchart LR openvino(openvino library) ir--Read ir---ir_fe ir_fe--Create ov::Model--->openvino - click ir "https://docs.openvino.ai/latest/openvino_docs_MO_DG_IR_and_opsets.html" + click ir "https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_IR_and_opsets.html" ``` The primary function of the OpenVINO IR Frontend is to load an OpenVINO IR into memory. diff --git a/src/frontends/paddle/README.md b/src/frontends/paddle/README.md index fe8d6ea8e8e..6daa4d2d9fd 100644 --- a/src/frontends/paddle/README.md +++ b/src/frontends/paddle/README.md @@ -21,7 +21,7 @@ OpenVINO Paddle Frontend has the following structure: ## Debug capabilities -Developers can use OpenVINO Model debug capabilities that are described in the [OpenVINO Model User Guide](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Model_Representation.html#model-debug-capabilities). 
+Developers can use OpenVINO Model debug capabilities that are described in the [OpenVINO Model User Guide](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Model_Representation.html#model-debug-capabilities). ## Tutorials diff --git a/src/frontends/tensorflow/README.md b/src/frontends/tensorflow/README.md index ed5ad6cbf3a..243bb665f91 100644 --- a/src/frontends/tensorflow/README.md +++ b/src/frontends/tensorflow/README.md @@ -31,7 +31,7 @@ flowchart BT ``` The MO tool and MO Python API now use the TensorFlow Frontend as the default path for conversion to IR. -Known limitations of TF FE are described [here](https://docs.openvino.ai/latest/openvino_docs_MO_DG_TensorFlow_Frontend.html). +Known limitations of TF FE are described [here](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_TensorFlow_Frontend.html). ## Key contacts @@ -140,15 +140,15 @@ The main rules for loaders implementation: In rare cases, TensorFlow operation conversion requires two transformations (`Loader` and `Internal Transformation`). In the first step, `Loader` must convert a TF operation into [Internal Operation](../tensorflow_common/helper_ops) that is used temporarily by the conversion pipeline. -The internal operation implementation must also contain the `validate_and_infer_types()` method as similar to [OpenVINO Core](https://docs.openvino.ai/nightly/groupov_ops_cpp_api.html) operations. +The internal operation implementation must also contain the `validate_and_infer_types()` method, similar to [OpenVINO Core](https://docs.openvino.ai/2023.0/groupov_ops_cpp_api.html) operations. Here is an example of an implementation for the internal operation `SparseFillEmptyRows` used to convert Wide and Deep models. 
https://github.com/openvinotoolkit/openvino/blob/7f3c95c161bc78ab2aefa6eab8b008142fb945bc/src/frontends/tensorflow/src/helper_ops/sparse_fill_empty_rows.hpp#L17-L55 In the second step, `Internal Transformation` based on `ov::pass::MatcherPass` must convert sub-graphs with internal operations into sub-graphs consisting only of the OpenVINO opset. -For more information about `ov::pass::MatcherPass` based transformations and their development, read [Overview of Transformations API](https://docs.openvino.ai/nightly/openvino_docs_transformations.html) -and [OpenVINO Matcher Pass](https://docs.openvino.ai/nightly/openvino_docs_Extensibility_UG_matcher_pass.html) documentation. +For more information about `ov::pass::MatcherPass` based transformations and their development, read [Overview of Transformations API](https://docs.openvino.ai/2023.0/openvino_docs_transformations.html) +and [OpenVINO Matcher Pass](https://docs.openvino.ai/2023.0/openvino_docs_Extensibility_UG_matcher_pass.html) documentation. The internal transformation must be called in the `ov::frontend::tensorflow::FrontEnd::normalize()` method. It is important to check the order of applying internal transformations to avoid situations when some internal operation breaks a graph pattern with an internal operation for another internal transformation. diff --git a/src/inference/docs/api_details.md b/src/inference/docs/api_details.md index cfeca7ff950..d08f923291a 100644 --- a/src/inference/docs/api_details.md +++ b/src/inference/docs/api_details.md @@ -9,12 +9,12 @@ OpenVINO Inference API contains two folders: Public OpenVINO Inference API defines global header [openvino/openvino.hpp](../include/openvino/openvino.hpp) which includes all common OpenVINO headers. All Inference components are placed inside the [openvino/runtime](../include/openvino/runtime) folder. 
-To learn more about the Inference API usage, read [How to integrate OpenVINO with your application](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Integrate_OV_with_your_application.html). +To learn more about the Inference API usage, read [How to integrate OpenVINO with your application](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Integrate_OV_with_your_application.html). The diagram with dependencies is presented on the [OpenVINO Architecture page](../../docs/architecture.md#openvino-inference-pipeline). ## Components of OpenVINO Developer API -OpenVINO Developer API is required for OpenVINO plugin development. This process is described in the [OpenVINO Plugin Development Guide](https://docs.openvino.ai/nightly/openvino_docs_ie_plugin_dg_overview.html). +OpenVINO Developer API is required for OpenVINO plugin development. This process is described in the [OpenVINO Plugin Development Guide](https://docs.openvino.ai/2023.0/openvino_docs_ie_plugin_dg_overview.html). ## See also * [OpenVINO™ Core README](../README.md) diff --git a/src/plugins/auto/README.md b/src/plugins/auto/README.md index 2ac7a8883cd..dd1940d9a90 100644 --- a/src/plugins/auto/README.md +++ b/src/plugins/auto/README.md @@ -20,7 +20,7 @@ The AUTO plugin follows the OpenVINO™ plugin architecture and consists of seve * [src](./src/) - folder contains sources of the AUTO plugin. * [tests](./tests/) - tests for Auto Plugin components. -Learn more in the [OpenVINO™ Plugin Developer Guide](https://docs.openvino.ai/latest/openvino_docs_ie_plugin_dg_overview.html). +Learn more in the [OpenVINO™ Plugin Developer Guide](https://docs.openvino.ai/2023.0/openvino_docs_ie_plugin_dg_overview.html). 
## Architecture The diagram below shows an overview of the components responsible for the basic inference flow: diff --git a/src/plugins/auto/docs/architecture.md b/src/plugins/auto/docs/architecture.md index d440d2e0f24..f633a20716c 100644 --- a/src/plugins/auto/docs/architecture.md +++ b/src/plugins/auto/docs/architecture.md @@ -8,8 +8,8 @@ AUTO is a meta plugin in OpenVINO that doesn’t bind to a specific type of hard The logic behind the choice is as follows: * Check what supported devices are available. -* Check performance hint of input setting (For detailed information of performance hint, please read more on the [ov::hint::PerformanceMode](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Performance_Hints.html)). -* Check precisions of the input model (for detailed information on precisions read more on the [ov::device::capabilities](https://docs.openvino.ai/latest/namespaceov_1_1device_1_1capability.html)). +* Check performance hint of input setting (For detailed information of performance hint, please read more on the [ov::hint::PerformanceMode](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Performance_Hints.html)). +* Check precisions of the input model (for detailed information on precisions read more on the [ov::device::capabilities](https://docs.openvino.ai/2023.0/namespaceov_1_1device_1_1capability.html)). * Select the highest-priority device capable of supporting the given model for LATENCY hint and THROUGHPUT hint. Or Select all devices capable of supporting the given model for CUMULATIVE THROUGHPUT hint. * If model’s precision is FP32 but there is no device capable of supporting it, offload the model to a device supporting FP16. @@ -21,7 +21,7 @@ The AUTO plugin is also the default plugin for OpenVINO, if the user does not se Compiling the model to accelerator-optimized kernels may take some time. 
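The device-selection rules listed above can be sketched in a few lines of plain Python. This is a simplified model for illustration only: the device priority order, the capability table, and the hint names are assumptions, and the real AUTO plugin implements this logic internally in C++ against live plugin queries.

```python
# Hypothetical sketch of AUTO's selection rules: pick the highest-priority
# capable device for LATENCY/THROUGHPUT, all capable devices for
# CUMULATIVE_THROUGHPUT, and fall back from FP32 to FP16 when needed.

DEVICE_PRIORITY = ["GPU", "CPU"]  # highest priority first (assumed order)

def select_devices(available, capabilities, model_precision, hint):
    def capable(dev, prec):
        return prec in capabilities.get(dev, set())

    precision = model_precision
    # If the FP32 model has no capable device, offload to an FP16 device.
    if precision == "FP32" and not any(capable(d, precision) for d in available):
        precision = "FP16"

    candidates = [d for d in DEVICE_PRIORITY
                  if d in available and capable(d, precision)]
    if hint == "CUMULATIVE_THROUGHPUT":
        return candidates          # all capable devices
    return candidates[:1]          # single highest-priority device

caps = {"GPU": {"FP16"}, "CPU": {"FP32", "FP16"}}
print(select_devices(["GPU", "CPU"], caps, "FP32", "LATENCY"))
print(select_devices(["GPU", "CPU"], caps, "FP16", "CUMULATIVE_THROUGHPUT"))
```

With the assumed capability table, an FP32 model lands on the CPU (the only FP32-capable device), while an FP16 model under CUMULATIVE_THROUGHPUT is dispatched to both devices.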
When AUTO selects one accelerator, it can start inference with the system's CPU by default, as it provides very low latency and can start inference with no additional delays. While the CPU is performing inference, AUTO continues to load the model to the device best suited for the purpose and transfers the task to it when ready. -![alt text](https://docs.openvino.ai/latest/_images/autoplugin_accelerate.svg "AUTO cuts first inference latency (FIL) by running inference on the CPU until the GPU is ready") +![alt text](https://docs.openvino.ai/2023.0/_images/autoplugin_accelerate.svg "AUTO cuts first inference latency (FIL) by running inference on the CPU until the GPU is ready") The user can disable this acceleration feature by excluding CPU from the priority list or disabling `ov::intel_auto::enable_startup_fallback`. Its default value is `true`. diff --git a/src/plugins/intel_cpu/docs/fake_quantize.md b/src/plugins/intel_cpu/docs/fake_quantize.md index e6afecaf548..b5cf627c67a 100644 --- a/src/plugins/intel_cpu/docs/fake_quantize.md +++ b/src/plugins/intel_cpu/docs/fake_quantize.md @@ -1,5 +1,5 @@ # FakeQuantize in OpenVINO -https://docs.openvino.ai/latest/openvino_docs_ops_quantization_FakeQuantize_1.html +https://docs.openvino.ai/2023.0/openvino_docs_ops_quantization_FakeQuantize_1.html definition: ``` diff --git a/src/plugins/intel_cpu/docs/internal_cpu_plugin_optimization.md b/src/plugins/intel_cpu/docs/internal_cpu_plugin_optimization.md index 169e6eab225..45581f9011f 100644 --- a/src/plugins/intel_cpu/docs/internal_cpu_plugin_optimization.md +++ b/src/plugins/intel_cpu/docs/internal_cpu_plugin_optimization.md @@ -3,7 +3,7 @@ The CPU plugin supports several graph optimization algorithms, such as fusing or removing layers. Refer to the sections below for details. -> **NOTE**: For layer descriptions, see the [IR Notation Reference](https://docs.openvino.ai/latest/openvino_docs_ops_opset.html). 
+> **NOTE**: For layer descriptions, see the [IR Notation Reference](https://docs.openvino.ai/2023.0/openvino_docs_ops_opset.html). ## Fusing Convolution and Simple Layers diff --git a/src/plugins/intel_gpu/docs/gpu_plugin_driver_troubleshooting.md b/src/plugins/intel_gpu/docs/gpu_plugin_driver_troubleshooting.md index 85e0cf033e5..44a497eebd5 100644 --- a/src/plugins/intel_gpu/docs/gpu_plugin_driver_troubleshooting.md +++ b/src/plugins/intel_gpu/docs/gpu_plugin_driver_troubleshooting.md @@ -28,7 +28,7 @@ Some Intel® CPUs might not have integrated GPU, so if you want to run OpenVINO ## 2. Make sure that OpenCL® Runtime is installed -OpenCL runtime is a part of the GPU driver on Windows, but on Linux it should be installed separately. For the installation tips, refer to [OpenVINO docs](https://docs.openvino.ai/latest/openvino_docs_install_guides_installing_openvino_linux_header.html) and [OpenCL Compute Runtime docs](https://github.com/intel/compute-runtime/tree/master/opencl/doc). +OpenCL runtime is a part of the GPU driver on Windows, but on Linux it should be installed separately. For the installation tips, refer to [OpenVINO docs](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_linux_header.html) and [OpenCL Compute Runtime docs](https://github.com/intel/compute-runtime/tree/master/opencl/doc). To get the support of Intel® Iris® Xe MAX Graphics with Linux, follow the [driver installation guide](https://dgpu-docs.intel.com/devices/iris-xe-max-graphics/index.html) ## 3. Make sure that user has all required permissions to work with GPU device @@ -59,7 +59,7 @@ For more details, see the [OpenCL on Linux](https://github.com/bashbaug/OpenCLPa ## 7. 
If you are using dGPU with XMX, ensure that HW_MATMUL feature is recognized -OpenVINO contains *hello_query_device* sample application: [link](https://docs.openvino.ai/latest/openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README.html) +OpenVINO contains *hello_query_device* sample application: [link](https://docs.openvino.ai/2023.0/openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README.html) With this option, you can check whether Intel XMX(Xe Matrix Extension) feature is properly recognized or not. This is a hardware feature to accelerate matrix operations and available on some discrete GPUs. diff --git a/src/plugins/intel_gpu/docs/source_code_structure.md b/src/plugins/intel_gpu/docs/source_code_structure.md index 0afc73d737f..94a0fe3d431 100644 --- a/src/plugins/intel_gpu/docs/source_code_structure.md +++ b/src/plugins/intel_gpu/docs/source_code_structure.md @@ -5,7 +5,7 @@ but at some point clDNN became a part of OpenVINO, so now it's a part of overall via embedding of [oneDNN library](https://github.com/oneapi-src/oneDNN) OpenVINO GPU plugin is responsible for: - 1. [IE Plugin API](https://docs.openvino.ai/latest/openvino_docs_ie_plugin_dg_overview.html) implementation. + 1. [IE Plugin API](https://docs.openvino.ai/2023.0/openvino_docs_ie_plugin_dg_overview.html) implementation. 2. Translation of a model from common IE semantic (`ov::Function`) into plugin-specific one (`cldnn::topology`), which is then compiled into GPU graph representation (`cldnn::network`). 3. Implementation of OpenVINO operation set for Intel® GPU. 
diff --git a/src/plugins/template/README.md b/src/plugins/template/README.md index 350c879fda4..8a9d15ad1d2 100644 --- a/src/plugins/template/README.md +++ b/src/plugins/template/README.md @@ -35,11 +35,11 @@ $ make -j8 ## Tutorials -* [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/latest/openvino_docs_ie_plugin_dg_overview.html) +* [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2023.0/openvino_docs_ie_plugin_dg_overview.html) ## See also * [OpenVINO™ README](../../../README.md) * [OpenVINO Core Components](../../README.md) * [OpenVINO Plugins](../README.md) * [Developer documentation](../../../docs/dev/index.md) - * [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/latest/openvino_docs_ie_plugin_dg_overview.html) + * [OpenVINO Plugin Developer Guide](https://docs.openvino.ai/2023.0/openvino_docs_ie_plugin_dg_overview.html) diff --git a/tools/pot/README.md b/tools/pot/README.md index 9bd8ade59e0..15230642719 100644 --- a/tools/pot/README.md +++ b/tools/pot/README.md @@ -12,14 +12,14 @@ and run on CPU with the OpenVINO™. Figure below shows the optimization workflow: ![](docs/images/workflow_simple.svg) -To get started with POT tool refer to the corresponding OpenVINO™ [documentation](https://docs.openvino.ai/latest/openvino_docs_model_optimization_guide.html). +To get started with POT tool refer to the corresponding OpenVINO™ [documentation](https://docs.openvino.ai/2023.0/openvino_docs_model_optimization_guide.html). ## Installation ### From PyPI -POT is distributed as a part of OpenVINO™ Development Tools package. For installation instruction please refer to this [document](https://docs.openvino.ai/latest/openvino_docs_install_guides_install_dev_tools.html). +POT is distributed as a part of OpenVINO™ Development Tools package. For installation instruction please refer to this [document](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_install_dev_tools.html). 
### From GitHub -As prerequisites, you should install [OpenVINO™ Runtime](https://docs.openvino.ai/latest/openvino_docs_install_guides_install_runtime.html) and other dependencies such as [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) and [Accuracy Checker](https://docs.openvino.ai/latest/omz_tools_accuracy_checker.html). +As prerequisites, you should install [OpenVINO™ Runtime](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_install_runtime.html) and other dependencies such as [Model Optimizer](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) and [Accuracy Checker](https://docs.openvino.ai/2023.0/omz_tools_accuracy_checker.html). To install POT from source: - Clone OpenVINO repository @@ -40,7 +40,7 @@ After installation POT is available as a Python library under `openvino.tools.po OpenVINO provides several examples to demonstrate the POT optimization workflow: * Command-line example: - * [Quantization of Image Classification model](https://docs.openvino.ai/latest/pot_configs_examples_README.html) + * [Quantization of Image Classification model](https://docs.openvino.ai/2023.0/pot_configs_examples_README.html) * API tutorials: * [Quantization of Image Classification model](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/301-tensorflow-training-openvino) * [Quantization of Object Detection model from Model Zoo](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/111-yolov5-quantization-migration) @@ -55,5 +55,4 @@ OpenVINO provides several examples to demonstrate the POT optimization workflow: ## See Also -* [Performance Benchmarks](https://docs.openvino.ai/latest/openvino_docs_performance_benchmarks.html) -* [INT8 Quantization by Using Web-Based Interface of the DL Workbench](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Int_8_Quantization.html) +* [Performance 
Benchmarks](https://docs.openvino.ai/2023.0/openvino_docs_performance_benchmarks.html) diff --git a/tools/pot/docs/ModelRepresentation.md b/tools/pot/docs/ModelRepresentation.md index 052dbfbeffd..8bb9f3d3fbc 100644 --- a/tools/pot/docs/ModelRepresentation.md +++ b/tools/pot/docs/ModelRepresentation.md @@ -8,7 +8,7 @@ Currently, there are two groups of optimization methods that can change the IR a ## Representation of quantized models -The OpenVINO Toolkit represents all the quantized models using the so-called [FakeQuantize](https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_convert_model_Legacy_IR_Layers_Catalog_Spec.html#fakequantize-layer) operation. This operation is very expressive and allows mapping values from arbitrary input and output ranges. We project (discretize) the input values to the low-precision data type using affine transformation (with clamp and rounding) and then re-project discrete values back to the original range and data type. It can be considered as an emulation of the quantization/dequantization process which happens at runtime. The figure below shows a part of the DL model, namely the Convolutional layer, that undergoes various transformations, from being a floating-point model to an integer model executed in the OpenVINO runtime. Column 2 of this figure below shows a model quantized with [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf). +The OpenVINO Toolkit represents all the quantized models using the so-called [FakeQuantize](https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_prepare_model_convert_model_Legacy_IR_Layers_Catalog_Spec.html#fakequantize-layer) operation. This operation is very expressive and allows mapping values from arbitrary input and output ranges. We project (discretize) the input values to the low-precision data type using affine transformation (with clamp and rounding) and then re-project discrete values back to the original range and data type. 
It can be considered as an emulation of the quantization/dequantization process which happens at runtime. The figure below shows a part of the DL model, namely the Convolutional layer, that undergoes various transformations, from being a floating-point model to an integer model executed in the OpenVINO runtime. Column 2 of this figure below shows a model quantized with [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf). ![](images/model_flow.png) To reduce memory footprint weights of quantized models are transformed to a target data type, e.g. in the case of 8-bit quantization, this is int8. During this transformation, the floating-point weights tensor and one of the FakeQuantize operations that correspond to it are replaced with 8-bit weight tensor and the sequence of Convert, Subtract, Multiply operations that represent the typecast and dequantization parameters (scale and zero-point) as it is shown in column 3 of the figure.
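The quantization/dequantization emulation described above, and the Convert/Subtract/Multiply decomposition that replaces it for stored int8 weights, can be sketched numerically. This is a scalar illustration of the affine mapping (clamp, discretize, re-project) from the FakeQuantize operation specification; the ranges, levels, and zero-point values below are chosen for the example, not taken from any particular model.

```python
# Scalar sketch of the FakeQuantize affine mapping: clamp the input to
# [in_low, in_high], discretize it to `levels` values, then re-project
# to [out_low, out_high]. Example values are illustrative only.

def fake_quantize(x, in_low, in_high, out_low, out_high, levels=256):
    x = min(max(x, in_low), in_high)                  # clamp
    step = (in_high - in_low) / (levels - 1)
    q = round((x - in_low) / step)                    # discretize
    return q * (out_high - out_low) / (levels - 1) + out_low  # re-project

# The Convert -> Subtract -> Multiply sequence stored in the IR for int8
# weights reverses the same affine mapping at runtime:
def dequantize(int8_value, zero_point, scale):
    return (float(int8_value) - zero_point) * scale

print(fake_quantize(0.6, 0.0, 1.0, 0.0, 1.0, levels=3))  # snaps to a level
print(dequantize(130, 128, 0.25))                        # int8 weight back to float
```

With 3 levels on [0, 1], the value 0.6 snaps to the nearest grid point 0.5, which is exactly the rounding the runtime quantization emulates; `dequantize` shows how the scale and zero-point parameters recover a float value from the stored 8-bit weight.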