From d7d42f79be2f1ca72087091a16f2168665f650ef Mon Sep 17 00:00:00 2001 From: Andrey Zaytsev Date: Wed, 30 Sep 2020 14:00:19 +0300 Subject: [PATCH] Replace absolute links to docs.openvinotoolkit.org by relative ones (#2439) (#2461) * Replaced direct links to docs.openvinotoolkit.org with relative links * Replaced direct links to docs.openvinotoolkit.org with relative links. Added GSGs for Win and macOS * Minor fixes in GSGs * Replaced direct links to docs.openvinotoolkit.org with relative links * Removed links to OpenVINO markdown files that contain anchor - they don't work in the current implementation of the doc process * Fixed Notes * Removed links to OpenVINO markdown files that contain anchor - they don't work in the current implementation of the doc process * fixed link to installing-openvino-linux.md --- docs/HOWTO/Custom_Layers_Guide.md | 20 +++---- docs/IE_DG/protecting_model_guide.md | 6 +- .../LowPrecisionModelRepresentation.md | 2 +- docs/IE_PLUGIN_DG/QuantizedNetworks.md | 2 +- .../prepare_model/Model_Optimizer_FAQ.md | 2 +- docs/benchmarks/performance_benchmarks.md | 2 +- docs/benchmarks/performance_benchmarks_faq.md | 10 ++-- docs/doxygen/ie_docs.xml | 2 + docs/get_started/get_started_linux.md | 59 +++++++++++-------- docs/get_started/get_started_macos.md | 20 +++---- docs/get_started/get_started_windows.md | 29 ++++----- docs/install_guides/PAC_Configure.md | 12 ++-- .../VisionAcceleratorFPGA_Configure.md | 10 ++-- ...VisionAcceleratorFPGA_Configure_Windows.md | 10 ++-- .../install_guides/deployment-manager-tool.md | 16 ++--- .../install_guides/installing-openvino-apt.md | 10 ++-- .../installing-openvino-conda.md | 10 ++-- .../installing-openvino-linux-fpga.md | 4 +- .../installing-openvino-linux.md | 6 +- .../installing-openvino-windows-fpga.md | 12 ++-- .../installing-openvino-windows.md | 2 +- .../install_guides/installing-openvino-yum.md | 6 +- 22 files changed, 129 insertions(+), 123 deletions(-) diff --git a/docs/HOWTO/Custom_Layers_Guide.md b/docs/HOWTO/Custom_Layers_Guide.md index ddbd8126798..40700917808 100644 --- a/docs/HOWTO/Custom_Layers_Guide.md +++ b/docs/HOWTO/Custom_Layers_Guide.md @@ -21,11 +21,11 @@ The original format will be a supported framework such as TensorFlow, Caffe, or ## Custom Layer Overview -The [Model Optimizer](https://docs.openvinotoolkit.org/2019_R1.1/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) searches the list of known layers for each layer contained in the input model topology before building the model's internal representation, optimizing the model, and producing the Intermediate Representation files. +The [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) searches the list of known layers for each layer contained in the input model topology before building the model's internal representation, optimizing the model, and producing the Intermediate Representation files. -The [Inference Engine](https://docs.openvinotoolkit.org/2019_R1.1/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html) loads the layers from the input model IR files into the specified device plugin, which will search a list of known layer implementations for the device. If your topology contains layers that are not in the list of known layers for the device, the Inference Engine considers the layer to be unsupported and reports an error. 
To see the layers that are supported by each device plugin for the Inference Engine, refer to the [Supported Devices](https://docs.openvinotoolkit.org/2019_R1.1/_docs_IE_DG_supported_plugins_Supported_Devices.html) documentation. +The [Inference Engine](../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) loads the layers from the input model IR files into the specified device plugin, which will search a list of known layer implementations for the device. If your topology contains layers that are not in the list of known layers for the device, the Inference Engine considers the layer to be unsupported and reports an error. To see the layers that are supported by each device plugin for the Inference Engine, refer to the [Supported Devices](../IE_DG/supported_plugins/Supported_Devices.md) documentation.
-**Note:** If a device doesn't support a particular layer, an alternative to creating a new custom layer is to target an additional device using the HETERO plugin. The [Heterogeneous Plugin](https://docs.openvinotoolkit.org/2019_R1.1/_docs_IE_DG_supported_plugins_HETERO.html) may be used to run an inference model on multiple devices allowing the unsupported layers on one device to "fallback" to run on another device (e.g., CPU) that does support those layers.
+> **NOTE:** If a device doesn't support a particular layer, an alternative to creating a new custom layer is to target an additional device using the HETERO plugin. The [Heterogeneous Plugin](../IE_DG/supported_plugins/HETERO.md) may be used to run an inference model on multiple devices, allowing the unsupported layers on one device to "fall back" to run on another device (e.g., CPU) that does support those layers.

## Custom Layer Implementation Workflow

@@ -40,7 +40,7 @@ The following figure shows the basic processing steps for the Model Optimizer hi

The Model Optimizer first extracts information from the input model which includes the topology of the model layers along with parameters, input and output format, etc., for each layer. The model is then optimized from the various known characteristics of the layers, interconnects, and data flow which partly comes from the layer operation providing details including the shape of the output for each layer. Finally, the optimized model is output to the model IR files needed by the Inference Engine to run the model.

-The Model Optimizer starts with a library of known extractors and operations for each [supported model framework](https://docs.openvinotoolkit.org/2019_R1.1/_docs_MO_DG_prepare_model_Supported_Frameworks_Layers.html) which must be extended to use each unknown custom layer. The custom layer extensions needed by the Model Optimizer are:
+The Model Optimizer starts with a library of known extractors and operations for each [supported model framework](../MO_DG/prepare_model/Supported_Frameworks_Layers.md) which must be extended to use each unknown custom layer. The custom layer extensions needed by the Model Optimizer are:

- Custom Layer Extractor
   Responsible for identifying the custom layer operation and extracting the parameters for each instance of the custom layer. The layer parameters are stored per instance and used by the layer operation before finally appearing in the output IR. Typically the input layer parameters are unchanged, which is the case covered by this tutorial.

@@ -182,10 +182,10 @@ There are two options to convert your MXNet* model that contains custom layers:

2. If you have sub-graphs that should not be expressed with the analogous sub-graph in the Intermediate Representation, but another sub-graph should appear in the model, the Model Optimizer provides such an option. In MXNet, this function is actively used for SSD models: it provides an opportunity to find the necessary subgraph sequences and replace them. To read more, see [Sub-graph Replacement in the Model Optimizer](../MO_DG/prepare_model/customize_model_optimizer/Subgraph_Replacement_Model_Optimizer.md).

## Kaldi\* Models with Custom Layers
-For information on converting your Kaldi* model containing custom layers see [Converting a Kaldi Model in the Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi.html).
+For information on converting your Kaldi* model containing custom layers see [Converting a Kaldi Model in the Model Optimizer Developer Guide](../MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md).

## ONNX\* Models with Custom Layers
-For information on converting your ONNX* model containing custom layers see [Converting an ONNX Model in the Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_ONNX.html).
+For information on converting your ONNX* model containing custom layers see [Converting an ONNX Model in the Model Optimizer Developer Guide](../MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md).

## Step-by-Step Custom Layers Tutorial
For a step-by-step walk-through creating and executing a custom layer, see the [Custom Layer Implementation Tutorial for Linux and Windows](https://github.com/david-drew/OpenVINO-Custom-Layers/tree/master/2019.r2.0).

@@ -194,10 +194,10 @@

- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
-- [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
-- [Kernel Extensivility in the Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Integrate_your_kernels_into_IE.html)
-- [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
-- [Overview of OpenVINO™ Toolkit Pre-Trained Models](https://docs.openvinotoolkit.org/latest/_intel_models_index.html)
+- [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- [Kernel Extensibility in the Inference Engine Developer Guide](../IE_DG/Integrate_your_kernels_into_IE.md)
+- [Inference Engine Samples Overview](../IE_DG/Samples_Overview.md)
+- [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
- [Inference Engine Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).

diff --git a/docs/IE_DG/protecting_model_guide.md b/docs/IE_DG/protecting_model_guide.md
index ea813ad874a..8281712e40b 100644
--- a/docs/IE_DG/protecting_model_guide.md
+++ b/docs/IE_DG/protecting_model_guide.md
@@ -51,9 +51,9 @@ weights respectively.
- Intel® Distribution of OpenVINO™ toolkit home page: [https://software.intel.com/en-us/openvino-toolkit](https://software.intel.com/en-us/openvino-toolkit)
- OpenVINO™ toolkit online documentation: [https://docs.openvinotoolkit.org](https://docs.openvinotoolkit.org)
-- Model Optimizer Developer Guide: [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
-- Inference Engine Developer Guide: [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
-- For more information on Sample Applications, see the [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
+- Model Optimizer Developer Guide: [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
+- Inference Engine Developer Guide: [Inference Engine Developer Guide](Deep_Learning_Inference_Engine_DevGuide.md)
+- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.md)
- For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_intel_index)
- For information on Inference Engine Tutorials, see the [Inference Tutorials](https://github.com/intel-iot-devkit/inference-tutorials-generic)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).

diff --git a/docs/IE_PLUGIN_DG/LowPrecisionModelRepresentation.md b/docs/IE_PLUGIN_DG/LowPrecisionModelRepresentation.md
index 8e49870b020..9ff8088a366 100644
--- a/docs/IE_PLUGIN_DG/LowPrecisionModelRepresentation.md
+++ b/docs/IE_PLUGIN_DG/LowPrecisionModelRepresentation.md
@@ -5,7 +5,7 @@ Currently, there are two groups of optimization methods that can influence on th
- **Quantization**. The rest of this document is dedicated to the representation of quantized models.

## Representation of quantized models
-The OpenVINO Toolkit represents all the quantized models using the so-called [FakeQuantize](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Legacy_IR_Layers_Catalog_Spec.html#FakeQuantize) operation. This operation is very expressive and allows mapping values from arbitrary input and output ranges. The whole idea behind that is quite simple: we project (discretize) the input values to the low-precision data type using affine transformation (with clamp and rounding) and then reproject discrete values back to the original range and data type. It can be considered as an emulation of the quantization process which happens at runtime.
+The OpenVINO Toolkit represents all the quantized models using the so-called FakeQuantize operation (see the description in [this document](../MO_DG/prepare_model/convert_model/Legacy_IR_Layers_Catalog_Spec.md)). This operation is very expressive and allows mapping values from arbitrary input and output ranges. The whole idea behind that is quite simple: we project (discretize) the input values to the low-precision data type using affine transformation (with clamp and rounding) and then reproject discrete values back to the original range and data type. It can be considered as an emulation of the quantization process which happens at runtime.

In order to be able to execute a particular DL operation in low precision, all its inputs should be quantized, i.e. should have FakeQuantize between the operation and data blobs.
The figure below shows an example of quantized Convolution which contains two FakeQuantize nodes: one for weights and one for activations (bias is quantized using the same parameters). ![quantized_convolution]
Figure 1. Example of quantized Convolution operation.
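The LowPrecisionModelRepresentation.md hunk above describes FakeQuantize as a clamp, an affine projection onto a discrete grid, and a re-projection back to the original range. A minimal NumPy sketch of that emulation follows, based on the FakeQuantize layer specification; the function name, defaults, and `levels` parameter are illustrative, not OpenVINO API:

```python
import numpy as np

def fake_quantize(x, in_low, in_high, out_low, out_high, levels=256):
    """Sketch of the FakeQuantize semantics: clamp, discretize to a
    `levels`-point grid, then re-project to the output range."""
    x = np.clip(x, in_low, in_high)                                 # clamp
    q = np.round((x - in_low) / (in_high - in_low) * (levels - 1))  # discretize
    return q / (levels - 1) * (out_high - out_low) + out_low        # re-project

# 8-bit emulation over a symmetric range: every output lands on a 256-value grid.
print(fake_quantize(np.array([-1.5, -0.2, 0.7, 2.0]), -1.0, 1.0, -1.0, 1.0))
```

Everything stays in floating point, which is why the document calls this an emulation of the quantization process rather than a change of data type.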
diff --git a/docs/IE_PLUGIN_DG/QuantizedNetworks.md b/docs/IE_PLUGIN_DG/QuantizedNetworks.md
index a8848bcf639..6e6cdd337b1 100644
--- a/docs/IE_PLUGIN_DG/QuantizedNetworks.md
+++ b/docs/IE_PLUGIN_DG/QuantizedNetworks.md
@@ -9,7 +9,7 @@ For more details about low-precision model representation please refer to this [
During the model load each plugin can interpret quantization rules expressed in *FakeQuantize* operations:
- Independently based on the definition of *FakeQuantize* operation.
- Using a special library of low-precision transformations (LPT) which applies common rules for generic operations,
-such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into the models with low-precision operations. For more information about low-precision flow please refer to the following [document](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Int8Inference.html).
+such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into the models with low-precision operations. For more information about low-precision flow please refer to the following [document](../IE_DG/Int8Inference.md).
Here we provide only a high-level overview of the interpretation rules of FakeQuantize.
At runtime each FakeQuantize can be split into two independent operations: **Quantize** and **Dequantize**.

diff --git a/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md b/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
index 82d9d79d33c..4dc93936126 100644
--- a/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
+++ b/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md
@@ -365,7 +365,7 @@ Keep in mind that there is no space between and inside the brackets for input sh

#### 58. What does the message "Please provide input layer names for input layer shapes" mean?

-When specifying input shapes for several layers, you must provide names for inputs, whose shapes will be overwritten. For usage examples, see [Converting a Caffe\* Model](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe.html). Additional information for `--input_shape` is in FAQ [#57](#question-57).
+When specifying input shapes for several layers, you must provide names for inputs whose shapes will be overwritten. For usage examples, see [Converting a Caffe* Model](convert_model/Convert_Model_From_Caffe.md). Additional information for `--input_shape` is in FAQ [#57](#question-57).

#### 59. What does the message "Values cannot be parsed" mean?

diff --git a/docs/benchmarks/performance_benchmarks.md b/docs/benchmarks/performance_benchmarks.md
index 687cb940c42..11b6dede2c5 100644
--- a/docs/benchmarks/performance_benchmarks.md
+++ b/docs/benchmarks/performance_benchmarks.md
@@ -216,7 +216,7 @@ Testing by Intel done on: see test date for each HW platform below.

| BIOS Release | September 21, 2018 | September 21, 2018 | December 03, 2019 |
| Test Date | July 8, 2020 | July 8, 2020 | July 8, 2020 |

-Please follow this link for more detailed configuration descriptions: [Configuration Details](https://docs.openvinotoolkit.org/resources/benchmark_files/system_configurations_2020.4.html)
+Please follow this link for more detailed configuration descriptions: [Configuration Details](https://docs.openvinotoolkit.org/resources/benchmark_files/system_configurations_2021.1.html)

\htmlonly
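The QuantizedNetworks.md hunk above notes that at runtime each FakeQuantize can be split into independent **Quantize** and **Dequantize** operations. A sketch of that split under the same illustrative assumptions (the names are not OpenVINO API, and the `uint8` storage assumes `levels <= 256`):

```python
import numpy as np

def quantize(x, in_low, in_high, levels=256):
    # "Quantize" half: clamp + affine transform + round to integer codes.
    x = np.clip(x, in_low, in_high)
    q = np.round((x - in_low) / (in_high - in_low) * (levels - 1))
    return q.astype(np.uint8)  # illustrative; assumes levels <= 256

def dequantize(q, out_low, out_high, levels=256):
    # "Dequantize" half: scale and shift the integer codes back to float.
    return q.astype(np.float32) / (levels - 1) * (out_high - out_low) + out_low
```

Composing `dequantize(quantize(x, ...), ...)` reproduces the single-operation emulation sketched earlier; a plugin that executes in true low precision would typically keep only the integer codes and account for the dequantization parameters in its downstream operations.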