Update install guides for wheels (#9390)
* Update install guides for wheels
* Remove extra comma
* License update
Committed via GitHub
parent c9704e7ed4
commit 557d20e2e2
@@ -1,7 +1,7 @@
 # Intel® Distribution of OpenVINO™ Toolkit Developer Package
 Copyright © 2018-2021 Intel Corporation
 > **LEGAL NOTICE**: Your use of this software and any required dependent software (the
-“Software Package”) is subject to the terms and conditions of the [software license agreements](https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf) for the Software Package, which may also include notices, disclaimers, or
+“Software Package”) is subject to the terms and conditions of the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html) for the Software Package, which may also include notices, disclaimers, or
 license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details.

 >Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the [Intel Global Human Rights Principles](https://www.intel.com/content/www/us/en/policy/policy-human-rights.html). Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.
@@ -15,11 +15,11 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio

 | Component | Console Script | Description |
 |------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Model Optimizer](https://docs.openvinotoolkit.org/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. <br>Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
-| [Benchmark Tool](https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. |
-| [Accuracy Checker](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker.html) and <br> [Annotation Converter](https://docs.openvinotoolkit.org/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check` <br> `convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and a set of supported datasets, preprocessing, postprocessing, and metrics. <br> **Annotation Converter** is a utility that prepares datasets for evaluation with Accuracy Checker. |
-| [Post-Training Optimization Tool](https://docs.openvinotoolkit.org/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. Optimizations are also available through the [API](https://docs.openvinotoolkit.org/latest/pot_compression_api_README.html). |
-| [Model Downloader and other Open Model Zoo tools](https://docs.openvinotoolkit.org/latest/omz_tools_downloader.html)| `omz_downloader` <br> `omz_converter` <br> `omz_quantizer` <br> `omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](https://docs.openvinotoolkit.org/latest/omz_models_group_public.html) and [Intel](https://docs.openvinotoolkit.org/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:<br> **Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the Inference Engine Intermediate Representation (IR) using Model Optimizer. <br> **Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool. <br> **Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format.
+| [Model Optimizer](https://docs.openvino.ai/latest/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by Intel tools, especially the Inference Engine. <br>Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, and ONNX\*. |
+| [Benchmark Tool](https://docs.openvino.ai/latest/openvino_inference_engine_tools_benchmark_tool_README.html)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. |
+| [Accuracy Checker](https://docs.openvino.ai/latest/omz_tools_accuracy_checker.html) and <br> [Annotation Converter](https://docs.openvino.ai/latest/omz_tools_accuracy_checker_annotation_converters.html) | `accuracy_check` <br> `convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and a set of supported datasets, preprocessing, postprocessing, and metrics. <br> **Annotation Converter** is a utility that prepares datasets for evaluation with Accuracy Checker. |
+| [Post-Training Optimization Tool](https://docs.openvino.ai/latest/pot_README.html)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. Optimizations are also available through the [API](https://docs.openvino.ai/latest/pot_compression_api_README.html). |
+| [Model Downloader and other Open Model Zoo tools](https://docs.openvino.ai/latest/omz_tools_downloader.html)| `omz_downloader` <br> `omz_converter` <br> `omz_quantizer` <br> `omz_info_dumper`| **Model Downloader** is a tool for getting access to the collection of high-quality and extremely fast pre-trained deep learning [public](https://docs.openvino.ai/latest/omz_models_group_public.html) and [Intel](https://docs.openvino.ai/latest/omz_models_group_intel.html)-trained models. These free pre-trained models can be used to speed up the development and production deployment process without training your own models. The tool downloads model files from online sources and, if necessary, patches them to make them more usable with Model Optimizer. A number of additional tools are also provided to automate the process of working with downloaded models:<br> **Model Converter** is a tool for converting Open Model Zoo models that are stored in an original deep learning framework format into the Inference Engine Intermediate Representation (IR) using Model Optimizer. <br> **Model Quantizer** is a tool for automatic quantization of full-precision models in the IR format into low-precision versions using the Post-Training Optimization Tool. <br> **Model Information Dumper** is a helper utility for dumping information about the models to a stable, machine-readable format.

 > **NOTE**: The developer package also installs the OpenVINO™ runtime package as a dependency.

@@ -37,11 +37,10 @@ The table below lists the supported operating systems and Python* versions requi

 | Supported Operating System | [Python* Version (64-bit)](https://www.python.org/) |
 | :------------------------------------------------------------| :---------------------------------------------------|
 | Ubuntu* 18.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8 |
-| Ubuntu* 20.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8 |
+| Ubuntu* 20.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8, 3.9 |
 | Red Hat* Enterprise Linux* 8, 64-bit | 3.6, 3.8 |
 | CentOS* 7, 64-bit | 3.6, 3.7, 3.8 |
-| macOS* 10.15.x | 3.6, 3.7, 3.8 |
-| Windows 10*, 64-bit | 3.6, 3.7, 3.8 |
+| macOS* 10.15.x | 3.6, 3.7, 3.8, 3.9 |
+| Windows 10*, 64-bit | 3.6, 3.7, 3.8, 3.9 |
-
 > **NOTE**: This package can be installed on other versions of macOS, Linux and Windows, but only the specific versions above are fully validated.

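The validated-version matrix above lends itself to a quick programmatic check. A minimal sketch, assuming the version set from the updated table (the helper name and `SUPPORTED` constant are illustrative, not part of the package):

```python
import sys

# Python versions validated for these wheels, per the updated table above.
SUPPORTED = {(3, 6), (3, 7), (3, 8), (3, 9)}

def is_validated_python(version_info=sys.version_info):
    """Return True if the interpreter's (major, minor) pair is validated."""
    return (version_info[0], version_info[1]) in SUPPORTED
```

Calling `is_validated_python()` before installation avoids a confusing pip resolution failure on an unvalidated interpreter.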
@@ -87,7 +86,6 @@ To install and configure the components of the development package for working w

 | DL Framework | Extra |
 | :------------------------------------------------------------------------------- | :-------------------------------|
 | [Caffe*](https://caffe.berkeleyvision.org/) | caffe |
-| [Caffe2*](https://github.com/pytorch/pytorch) | caffe2 |
 | [Kaldi*](https://github.com/kaldi-asr/kaldi) | kaldi |
 | [MXNet*](https://mxnet.apache.org/) | mxnet |
 | [ONNX*](https://github.com/microsoft/onnxruntime/) | onnx |
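The extras in the table above are combined into a single pip requirement string. A minimal sketch of how such a string is assembled, assuming the package name `openvino-dev` (the helper itself is illustrative):

```python
def dev_requirement(extras=()):
    """Build a pip requirement string, e.g. 'openvino-dev[kaldi,onnx]'.

    The package name 'openvino-dev' is an assumption here; the extra names
    (caffe, kaldi, mxnet, onnx) come from the table above.
    """
    if extras:
        return "openvino-dev[{}]".format(",".join(extras))
    return "openvino-dev"
```

For example, `pip install "openvino-dev[onnx]"` would pull in the ONNX framework components alongside the base developer package.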
@@ -110,7 +108,7 @@ For example, to install and configure the components for working with TensorFlow

 - To verify that Inference Engine from the **runtime package** is available, run the command below:
 ```sh
-python -c "from openvino.inference_engine import IECore"
+python -c "from openvino.runtime import Core"
 ```
 If installation was successful, you will not see any error messages (no console output).

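The verification one-liner above changes the import path from `openvino.inference_engine` to `openvino.runtime`. A generic way to probe for a module without letting the import raise, as a sketch (the helper name is illustrative):

```python
import importlib.util

def module_available(name):
    """Return True if `name` can be resolved on the current interpreter."""
    try:
        # find_spec raises ModuleNotFoundError if a parent package is missing.
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False

# With the new wheels installed, module_available("openvino.runtime") should
# be True; on a machine without OpenVINO, it returns False instead of raising.
```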
@@ -130,5 +128,5 @@ sudo apt-get install libpython3.7
 ## Additional Resources

 - [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit)
-- [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org)
+- [OpenVINO™ toolkit online documentation](https://docs.openvino.ai)
 - [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)
@@ -1,7 +1,7 @@
 # Intel® Distribution of OpenVINO™ Toolkit Runtime Package
 Copyright © 2018-2021 Intel Corporation
 > **LEGAL NOTICE**: Your use of this software and any required dependent software (the
-“Software Package”) is subject to the terms and conditions of the [software license agreements](https://software.intel.com/content/dam/develop/external/us/en/documents/intel-openvino-license-agreements.pdf) for the Software Package, which may also include notices, disclaimers, or
+“Software Package”) is subject to the terms and conditions of the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html) for the Software Package, which may also include notices, disclaimers, or
 license terms for third party or open source software included in or with the Software Package, and your use indicates your acceptance of all such terms. Please refer to the “third-party-programs.txt” or other similarly-named text file included with the Software Package for additional details.

 >Intel is committed to the respect of human rights and avoiding complicity in human rights abuses, a policy reflected in the [Intel Global Human Rights Principles](https://www.intel.com/content/www/us/en/policy/policy-human-rights.html). Accordingly, by accessing the Intel material on this platform you agree that you will not use the material in a product or application that causes or contributes to a violation of an internationally recognized human right.
@@ -19,7 +19,7 @@ The **runtime package** includes the following components installed by default:

 | Component | Description |
 |-----------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| [Inference Engine](https://docs.openvinotoolkit.org/latest/openvino_docs_IE_DG_inference_engine_intro.html) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |
+| [Inference Engine](https://docs.openvino.ai/latest/openvino_docs_IE_DG_inference_engine_intro.html) | This is the engine that runs the deep learning model. It includes a set of libraries for an easy inference integration into your applications. |

 ## System Requirements
 The complete list of supported hardware is available in the [Release Notes](https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html#inpage-nav-8).
@@ -29,11 +29,10 @@ The table below lists supported operating systems and Python* versions required

 | Supported Operating System | [Python* Version (64-bit)](https://www.python.org/) |
 | :------------------------------------------------------------| :---------------------------------------------------|
 | Ubuntu* 18.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8 |
-| Ubuntu* 20.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8 |
+| Ubuntu* 20.04 long-term support (LTS), 64-bit | 3.6, 3.7, 3.8, 3.9 |
 | Red Hat* Enterprise Linux* 8, 64-bit | 3.6, 3.8 |
 | CentOS* 7, 64-bit | 3.6, 3.7, 3.8 |
-| macOS* 10.15.x versions | 3.6, 3.7, 3.8 |
-| Windows 10*, 64-bit | 3.6, 3.7, 3.8 |
+| macOS* 10.15.x versions | 3.6, 3.7, 3.8, 3.9 |
+| Windows 10*, 64-bit | 3.6, 3.7, 3.8, 3.9 |
-
 > **NOTE**: This package can be installed on other versions of Linux and Windows OSes, but only the specific versions above are fully validated.

@@ -83,7 +82,7 @@ Run the command below: <br>

 Run the command below:
 ```sh
-python -c "from openvino.inference_engine import IECore"
+python -c "from openvino.runtime import Core"
 ```

 If installation was successful, you will not see any error messages (no console output).
@@ -104,6 +103,6 @@ sudo apt-get install libpython3.7

 ## Additional Resources

 - [Intel® Distribution of OpenVINO™ toolkit](https://software.intel.com/en-us/openvino-toolkit)
-- [OpenVINO™ toolkit online documentation](https://docs.openvinotoolkit.org)
+- [OpenVINO™ toolkit online documentation](https://docs.openvino.ai)
 - [OpenVINO™ Notebooks](https://github.com/openvinotoolkit/openvino_notebooks)