DOCS: Fixing formatting in Samples - port to master (#13128)
* DOCS: Fixing formatting in Samples - porting to master

  Porting https://github.com/openvinotoolkit/openvino/pull/13085. Fixes incorrectly numbered lists and indentation of code blocks.

* Update get_started_demos.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Committed by GitHub
Parent: 7f75da93ed
Commit: 60099a19bd
@@ -99,30 +99,27 @@ Options to find a model suitable for the OpenVINO™ toolkit:

This guide uses the OpenVINO™ Model Downloader to get pre-trained models. You can use one of the following commands to find a model:

* List the models available in the downloader:

  ``` sh
  omz_info_dumper --print_all
  ```

* Use `grep` to list models that have a specific name pattern:

  ``` sh
  omz_info_dumper --print_all | grep <model_name>
  ```

* Use Model Downloader to download models.

  This guide uses `<models_dir>` and `<model_name>` as placeholders for the models directory and model name:

  ``` sh
  omz_downloader --name <model_name> --output_dir <models_dir>
  ```

* Download the following models to run the Image Classification Sample:

  |Model Name | Code Sample or Demo App |
  |-----------------------------------------------|------------------------------------------|
  |`googlenet-v1` | Image Classification Sample |
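The `grep` filtering step above can be tried before the tools are installed, using a stand-in model list (the real list comes from `omz_info_dumper`; the names below are just examples):

```shell
# Stand-in for `omz_info_dumper --print_all` output, to show the name-pattern filtering idea.
printf 'alexnet\ngooglenet-v1\ngooglenet-v2\nmobilenet-ssd\n' | grep googlenet
# -> googlenet-v1
#    googlenet-v2
```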

@sphinxdirective

.. raw:: html
@@ -350,8 +347,8 @@ To run the **Image Classification** code sample with an input image using the IR

@endsphinxdirective

3. Run the code sample executable, specifying the input media file, the IR for your model, and a target device for performing inference:

   @sphinxdirective

   .. tab:: Linux

      .. code-block:: sh

@@ -372,13 +369,16 @@ To run the **Image Classification** code sample with an input image using the IR

   @endsphinxdirective

@sphinxdirective

.. raw:: html

   <div class="collapsible-section" data-title="Click for examples of running the Image Classification code sample on different devices">

@endsphinxdirective

The following commands run the Image Classification Code Sample using the [dog.bmp](https://storage.openvinotoolkit.org/data/test_data/images/224x224/dog.bmp) file as an input image, the model in IR format from the `ir` directory, and on different hardware devices:

**CPU:**
@@ -49,21 +49,19 @@ To run the sample, you need to specify a model and image:

### Example

1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):

   ```
   python <path_to_omz_tools>/downloader.py --name alexnet
   ```

2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:

   ```
   python <path_to_omz_tools>/converter.py --name alexnet
   ```

3. Perform inference of `car.bmp` using the `alexnet` model on a `GPU`, for example:

   ```
   <path_to_sample>/hello_classification_c <path_to_model>/alexnet.xml <path_to_image>/car.bmp GPU
   ```

## Sample Output
@@ -64,21 +64,19 @@ ffmpeg -i cat.jpg -pix_fmt nv12 cat.yuv

### Example

1. Download a pre-trained model using [Model Downloader](@ref omz_tools_downloader):

   ```
   python <path_to_omz_tools>/downloader.py --name alexnet
   ```

2. If a model is not in the Inference Engine IR or ONNX format, it must be converted. You can do this using the model converter script:

   ```
   python <path_to_omz_tools>/converter.py --name alexnet
   ```

3. Perform inference of an NV12 image using the `alexnet` model on a `CPU`, for example:

   ```
   <path_to_sample>/hello_nv12_input_classification_c <path_to_model>/alexnet.xml <path_to_image>/cat.yuv 300x300 CPU
   ```
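The `300x300` argument tells the sample the frame size, because a raw `.yuv` file carries no header. NV12 stores a full-resolution luma plane plus a half-resolution interleaved chroma plane, i.e. 1.5 bytes per pixel, so the file should be exactly width × height × 3/2 bytes. A quick sanity check (a sketch; `cat.yuv` is the file produced by the `ffmpeg` command above):

```shell
# NV12 = W*H luma bytes + W*H/2 interleaved chroma bytes = 1.5 bytes per pixel.
w=300; h=300
expected=$((w * h * 3 / 2))
echo "expected NV12 size: $expected bytes"
# -> expected NV12 size: 135000 bytes
# Compare against the actual file before passing WxH to the sample:
# [ "$(wc -c < cat.yuv)" -eq "$expected" ] || echo "size mismatch: wrong WxH?"
```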

## Sample Output
@@ -83,28 +83,24 @@ To run the sample, you need to specify a model and image:

### Example

1. Install the `openvino-dev` Python package to use Open Model Zoo Tools:

   ```
   python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
   ```

2. Download a pre-trained model:

   ```
   omz_downloader --name googlenet-v1
   ```

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   ```
   omz_converter --name googlenet-v1
   ```

4. Perform inference of `dog.bmp` using the `googlenet-v1` model on a `GPU`, for example:

   ```
   classification_sample_async -m googlenet-v1.xml -i dog.bmp -d GPU
   ```

## Sample Output
@@ -54,28 +54,24 @@ To run the sample, you need to specify a model and image:

### Example

1. Install the `openvino-dev` Python package to use Open Model Zoo Tools:

   ```
   python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
   ```

2. Download a pre-trained model:

   ```
   omz_downloader --name googlenet-v1
   ```

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   ```
   omz_converter --name googlenet-v1
   ```

4. Perform inference of `car.bmp` using the `googlenet-v1` model on a `GPU`, for example:

   ```
   hello_classification googlenet-v1.xml car.bmp GPU
   ```

## Sample Output
@@ -69,27 +69,24 @@ ffmpeg -i cat.jpg -pix_fmt nv12 car.yuv

### Example

1. Install the `openvino-dev` Python package to use Open Model Zoo Tools:

   ```
   python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
   ```

2. Download a pre-trained model:

   ```
   omz_downloader --name alexnet
   ```

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   ```
   omz_converter --name alexnet
   ```

4. Perform inference of an NV12 image using the `alexnet` model on a `CPU`, for example:

   ```
   hello_nv12_input_classification alexnet.xml car.yuv 300x300 CPU
   ```

## Sample Output
@@ -55,28 +55,24 @@ To run the sample, you need to specify a model and image:

### Example

1. Install the `openvino-dev` Python package to use Open Model Zoo Tools:

   ```
   python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
   ```

2. Download a pre-trained model:

   ```
   omz_downloader --name person-detection-retail-0013
   ```

3. `person-detection-retail-0013` does not need to be converted, because it is already in the necessary format, so you can skip this step. If you want to use another model that is not in the IR or ONNX format, you can convert it using the model converter script:

   ```
   omz_converter --name <model_name>
   ```

4. Perform inference of `person_detection.bmp` using the `person-detection-retail-0013` model on a `GPU`, for example:

   ```
   hello_reshape_ssd person-detection-retail-0013.xml person_detection.bmp GPU
   ```

## Sample Output
@@ -207,31 +207,28 @@ The Wall Street Journal DNN model used in this example was prepared using the Ka

Kaldi's nnet-forward command. Since the `speech_sample` does not yet use pipes, it is necessary to use temporary files for speaker-transformed feature vectors and scores when running the Kaldi speech recognition pipeline. The following operations assume that feature extraction was already performed according to the `s5` recipe and that the working directory within the Kaldi source tree is `egs/wsj/s5`.

1. Prepare a speaker-transformed feature set given the feature transform specified in `final.feature_transform` and the feature files specified in `feats.scp`:

   ```sh
   nnet-forward --use-gpu=no final.feature_transform "ark,s,cs:copy-feats scp:feats.scp ark:- |" ark:feat.ark
   ```

2. Score the feature set using the `speech_sample`:

   ```sh
   ./speech_sample -d GNA_AUTO -bs 8 -i feat.ark -m wsj_dnn5b.xml -o scores.ark
   ```

   The OpenVINO™ toolkit Intermediate Representation file `wsj_dnn5b.xml` was generated in the previous [Model Preparation](#model-preparation) section.

3. Run the Kaldi decoder to produce n-best text hypotheses and select the most likely text given the WFST (`HCLG.fst`), vocabulary (`words.txt`), and TID/PID mapping (`final.mdl`):

   ```sh
   latgen-faster-mapped --max-active=7000 --max-mem=50000000 --beam=13.0 --lattice-beam=6.0 --acoustic-scale=0.0833 --allow-partial=true --word-symbol-table=words.txt final.mdl HCLG.fst ark:scores.ark ark:-| lattice-scale --inv-acoustic-scale=13 ark:- ark:- | lattice-best-path --word-symbol-table=words.txt ark:- ark,t:- > out.txt &
   ```

4. Run the word error rate tool to check accuracy given the vocabulary (`words.txt`) and reference transcript (`test_filt.txt`):

   ```sh
   cat out.txt | utils/int2sym.pl -f 2- words.txt | sed s:\<UNK\>::g | compute-wer --text --mode=present ark:test_filt.txt ark,p:-
   ```

All of the mentioned files can be downloaded from [https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/wsj_dnn5b_smbr](https://storage.openvinotoolkit.org/models_contrib/speech/2021.2/wsj_dnn5b_smbr)
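Because `speech_sample` cannot yet read from or write to pipes, each stage above lands in a temporary file (`feat.ark`, `scores.ark`) that the next stage picks up. The same pattern in miniature, with stand-in commands instead of the Kaldi tools:

```shell
# Stage 1 writes to a temp file instead of a pipe; stage 2 reads it back.
tmp="$(mktemp)"
printf 'stage-1 output\n' > "$tmp"   # stand-in for nnet-forward writing feat.ark
tr 'a-z' 'A-Z' < "$tmp"              # stand-in for speech_sample consuming it
rm -f "$tmp"
# -> STAGE-1 OUTPUT
```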

## See Also
@@ -69,27 +69,24 @@ To run the sample, you need to specify a model and image:

### Example

1. Install the `openvino-dev` Python package to use Open Model Zoo Tools:

   ```
   python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
   ```

2. Download a pre-trained model:

   ```
   omz_downloader --name alexnet
   ```

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   ```
   omz_converter --name alexnet
   ```

4. Perform inference of `banana.jpg` and `car.bmp` using the `alexnet` model on a `GPU`, for example:

   ```
   python classification_sample_async.py -m alexnet.xml -i banana.jpg car.bmp -d GPU
   ```
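The async sample overlaps work on its several inputs (`banana.jpg car.bmp`) instead of handling them one at a time. The scheduling idea in miniature, with stand-in jobs rather than real inference requests:

```shell
# Launch one stand-in "inference request" per input, then wait for all of them.
for image in banana.jpg car.bmp; do
    ( echo "processed $image" ) &   # stand-in for one asynchronous request
done
wait   # block until every request completes (completion order may vary)
```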

## Sample Output
@@ -47,27 +47,24 @@ To run the sample, you need to specify a model and image:

### Example

1. Install the `openvino-dev` Python package to use Open Model Zoo Tools:

   ```
   python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
   ```

2. Download a pre-trained model:

   ```
   omz_downloader --name alexnet
   ```

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   ```
   omz_converter --name alexnet
   ```

4. Perform inference of `banana.jpg` using the `alexnet` model on a `GPU`, for example:

   ```
   python hello_classification.py alexnet.xml banana.jpg GPU
   ```
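The classification samples print the highest-scoring classes for the image. The ranking itself is just a sort, which can be sketched with stand-in `(class_id, score)` pairs (the real sample reads these from the network output):

```shell
# Sort stand-in "class_id score" pairs by score, descending, and keep the top 3.
printf '0 0.01\n1 0.62\n2 0.05\n3 0.30\n4 0.02\n' | sort -k2 -nr | head -n 3
# -> 1 0.62
#    3 0.30
#    2 0.05
```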

## Sample Output
@@ -48,27 +48,24 @@ To run the sample, you need to specify a model and image:

### Example

1. Install the `openvino-dev` Python package to use Open Model Zoo Tools:

   ```
   python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
   ```

2. Download a pre-trained model:

   ```
   omz_downloader --name mobilenet-ssd
   ```

3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:

   ```
   omz_converter --name mobilenet-ssd
   ```

4. Perform inference of `banana.jpg` using the `mobilenet-ssd` model on a `GPU`, for example:

   ```
   python hello_reshape_ssd.py mobilenet-ssd.xml banana.jpg GPU
   ```

## Sample Output