diff --git a/README.md b/README.md
index cf0a510cbfb..770c6213982 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-# [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Deep Learning Deployment Toolkit repository
+# OpenVINO™ Toolkit
 [![Stable release](https://img.shields.io/badge/version-2021.2-green.svg)](https://github.com/openvinotoolkit/openvino/releases/tag/2021.2)
 [![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
 ![GitHub branch checks state](https://img.shields.io/github/checks-status/openvinotoolkit/openvino/master?label=GitHub%20checks)
@@ -7,7 +7,7 @@
 This toolkit allows developers to deploy pre-trained deep learning models
 through a high-level C++ Inference Engine API integrated with application logic.
 
-This open source version includes several components: namely [Model Optimizer], [ngraph] and
+This open source version includes several components: namely [Model Optimizer], [nGraph] and
 [Inference Engine], as well as CPU, GPU, MYRIAD, multi device and heterogeneous plugins
 to accelerate deep learning inferencing on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained
 models from the [Open Model Zoo], along with 100+ open source and public models in popular formats such as Caffe\*, TensorFlow\*,
@@ -15,7 +15,7 @@ MXNet\* and ONNX\*.
 
 ## Repository components:
 * [Inference Engine]
-* [ngraph]
+* [nGraph]
 * [Model Optimizer]
 
 ## License
@@ -27,9 +27,10 @@ and release your contribution under these terms.
 * Docs: https://docs.openvinotoolkit.org/
 * Wiki: https://github.com/openvinotoolkit/openvino/wiki
 * Issue tracking: https://github.com/openvinotoolkit/openvino/issues
-* Additional OpenVINO modules: https://github.com/openvinotoolkit/openvino_contrib
-* [HomePage](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
-* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
+* Storage: https://storage.openvinotoolkit.org/
+* Additional OpenVINO™ modules: https://github.com/openvinotoolkit/openvino_contrib
+* [Intel® Distribution of OpenVINO™ toolkit Product Page](https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit.html)
+* [Intel® Distribution of OpenVINO™ toolkit Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
 
 ## Support
 Please report questions, issues and suggestions using:
@@ -45,4 +46,4 @@
 [Inference Engine]:https://software.intel.com/en-us/articles/OpenVINO-InferEngine
 [Model Optimizer]:https://software.intel.com/en-us/articles/OpenVINO-ModelOptimizer
 [tag on StackOverflow]:https://stackoverflow.com/search?q=%23openvino
-[ngraph]:https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_DevGuide.html
+[nGraph]:https://docs.openvinotoolkit.org/latest/openvino_docs_nGraph_DG_DevGuide.html
diff --git a/docs/IE_DG/supported_plugins/GNA.md b/docs/IE_DG/supported_plugins/GNA.md
index 68a908e77e5..82e16899705 100644
--- a/docs/IE_DG/supported_plugins/GNA.md
+++ b/docs/IE_DG/supported_plugins/GNA.md
@@ -69,7 +69,7 @@ Limitations include:
 - Only 1D convolutions are natively supported.
 - The number of output channels for convolutions must be a multiple of 4.
 - Permute layer support is limited to the cases where no data reordering is needed or when reordering is happening for two dimensions, at least one of which is not greater than 8.
-- Concatinations and splittings are supported only along the channel dimension (axis=1).
+- Concatenations and splitting are supported only along the channel dimension (axis=1).
 
 #### Experimental Support for 2D Convolutions
 
@@ -77,7 +77,7 @@ The Intel® GNA hardware natively supports only 1D convolution. However, 2D
 convolutions can be mapped to 1D when a convolution kernel moves in a single direction. GNA Plugin performs such a transformation for Kaldi `nnet1` convolution.
 From this perspective, the Intel® GNA hardware convolution operation accepts an `NHWC` input and produces an `NHWC` output. Because OpenVINO™ only supports the `NCHW` layout, you may need to insert `Permute` layers before or after convolutions.
 
-For example, the Kaldi model optimizer inserts such a permute after convolution for the [rm_cnn4a network](https://download.01.org/openvinotoolkit/models_contrib/speech/kaldi/rm_cnn4a_smbr/). This `Permute` layer is automatically removed by the GNA Plugin, because the Intel® GNA hardware convolution layer already produces the required `NHWC` result.
+For example, the Kaldi model optimizer inserts such a permute after convolution for the [rm_cnn4a network](https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/rm_cnn4a_smbr/). This `Permute` layer is automatically removed by the GNA Plugin, because the Intel® GNA hardware convolution layer already produces the required `NHWC` result.
 
 ## Operation Precision
 
diff --git a/docs/ovsa/ovsa_get_started.md b/docs/ovsa/ovsa_get_started.md
index f45d4bf299c..e9062dc7670 100644
--- a/docs/ovsa/ovsa_get_started.md
+++ b/docs/ovsa/ovsa_get_started.md
@@ -606,7 +606,7 @@ This example uses `curl` to download the `face-detection-retail-004` model from
 2. Download a model from the Model Zoo:
    ```sh
    cd $OVSA_DEV_ARTEFACTS
-   curl --create-dirs https://download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml https:// download.01.org/opencv/2021/openvinotoolkit/2021.1/open_model_zoo/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/face-detection-retail-0004.xml -o model/face-detection-retail-0004.bin
+   curl --create-dirs https://storage.openvinotoolkit.org/repositories/open_model_zoo/2021.3/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml https://storage.openvinotoolkit.org/repositories/open_model_zoo/2021.3/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin -o model/face-detection-retail-0004.xml -o model/face-detection-retail-0004.bin
    ```
    The model is downloaded to the `OVSA_DEV_ARTEFACTS/model` directory.
 
diff --git a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
index b08704bede0..e3fcd93b3a5 100644
--- a/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
+++ b/inference-engine/ie_bridges/python/sample/hello_reshape_ssd/README.md
@@ -7,7 +7,7 @@ networks like SSD-VGG. The sample shows how to use [Shape Inference feature](../
 
 ## Running
 
-To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
+To run the sample, you can use public or pre-trained models. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README).
 
 > **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
 >
diff --git a/inference-engine/samples/speech_libs_and_demos/Speech_libs_and_demos.md b/inference-engine/samples/speech_libs_and_demos/Speech_libs_and_demos.md
index 3bb4b34bbdf..212ffb26f19 100644
--- a/inference-engine/samples/speech_libs_and_demos/Speech_libs_and_demos.md
+++ b/inference-engine/samples/speech_libs_and_demos/Speech_libs_and_demos.md
@@ -32,7 +32,7 @@ The package contains the following components:
 
 * [Kaldi Statistical Language Model Conversion Tool](Kaldi_SLM_conversion_tool.md), which converts custom language models to use in the decoder
 
-Additionally, [new acoustic and language models](http://download.01.org/opencv/2020/openvinotoolkit/2020.1/models_contrib/speech/kaldi/librispeech_s5/) to be used by new demos are located at [download.01.org](https://01.org/).
+Additionally, new acoustic and language models are available in the OpenVINO™ [storage](https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/librispeech_s5/).
 
 ## Run Speech Recognition Demos with Pretrained Models
 
diff --git a/inference-engine/samples/speech_sample/README.md b/inference-engine/samples/speech_sample/README.md
index 7095d7dae2a..0b8ca6a38cb 100644
--- a/inference-engine/samples/speech_sample/README.md
+++ b/inference-engine/samples/speech_sample/README.md
@@ -136,7 +136,7 @@ The following pre-trained models are available:
 * rm\_lstm4f
 * rm\_cnn4a\_smbr
 
-All of them can be downloaded from [https://download.01.org/openvinotoolkit/models_contrib/speech/kaldi](https://download.01.org/openvinotoolkit/models_contrib/speech/kaldi) or using the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) .
+All of them can be downloaded from [https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/](https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/) or using the OpenVINO [Model Downloader](@ref omz_tools_downloader_README).
 
 ### Speech Inference
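
The documentation changes above all follow one pattern: `download.01.org` links are replaced with `storage.openvinotoolkit.org` links. As a quick pre-merge sanity check — a sketch only, assuming `curl` and network access are available and that the 2021.x paths copied from the hunks above are still live — the new locations can be probed like this:

```sh
#!/bin/sh
# Sketch: probe the new storage.openvinotoolkit.org locations referenced in this patch.
# The URLs are copied from the hunks above and may move in later releases.
urls="
https://storage.openvinotoolkit.org/
https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/rm_cnn4a_smbr/
https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/librispeech_s5/
https://storage.openvinotoolkit.org/repositories/open_model_zoo/2021.3/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.xml
https://storage.openvinotoolkit.org/repositories/open_model_zoo/2021.3/models_bin/1/face-detection-retail-0004/FP32/face-detection-retail-0004.bin
"
for url in $urls; do
    # -I sends a HEAD request, -L follows redirects, -o /dev/null discards the headers,
    # -w prints only the final HTTP status code.
    status=$(curl -s -I -L -o /dev/null -w '%{http_code}' "$url")
    echo "$status $url"
done
```

A non-200 status for any of the model files would indicate that the corresponding hunk still needs an updated path.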