MXNet renaming into Apache MXNet
This commit is contained in:
msmykx 2022-06-13 15:42:14 +02:00
parent 9fe27be1cb
commit 6dcb0fd2dd
7 changed files with 29 additions and 29 deletions

View File

@@ -15,7 +15,7 @@
@endsphinxdirective
The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi. The list of supported operations is different for
TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
each of the supported frameworks. To see the operations supported by your framework, refer to
[Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md).
@@ -52,7 +52,7 @@ Depending on model format used for import, mapping of custom operation is implem
2. If a model is represented in the TensorFlow, Caffe, Kaldi, or Apache MXNet format, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only.
Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with `read_model` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend.
The simultaneous existence of two approaches is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi, and Apache MXNet). Model Optimizer can use both types of frontends, in contrast to the direct import of a model with the `read_model` method, which can use new frontends only. Follow the appropriate guide referenced above to implement mappings, depending on the framework frontend.
If you are implementing extensions for ONNX or PaddlePaddle new frontends and plan to use Model Optimizer `--extension` option for model conversion, then the extensions should be

View File

@@ -103,11 +103,11 @@ mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255
```
For more information, refer to the [Converting a PaddlePaddle Model](prepare_model/convert_model/Convert_Model_From_Paddle.md) guide.
4. Launch Model Optimizer for an MXNet SSD Inception V3 model and specify first-channel layout for the input:
4. Launch Model Optimizer for an Apache MXNet SSD Inception V3 model and specify the channels-first layout for the input:
```sh
mo --input_model ssd_inception_v3-0000.params --layout NCHW
```
For more information, refer to the [Converting an MXNet Model](prepare_model/convert_model/Convert_Model_From_MxNet.md) guide.
For more information, refer to the [Converting an Apache MXNet Model](prepare_model/convert_model/Convert_Model_From_MxNet.md) guide.
5. Launch Model Optimizer for a Caffe AlexNet model with input channels in the RGB format, which need to be reversed:
```sh
@@ -121,6 +121,6 @@ mo --input_model librispeech_nnet2.mdl --input_shape [1,140]
```
For more information, refer to the [Converting a Kaldi Model](prepare_model/convert_model/Convert_Model_From_Kaldi.md) guide.
- To get conversion recipes for specific TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models,
- To get conversion recipes for specific TensorFlow, ONNX, PyTorch, Apache MXNet, and Kaldi models,
refer to the [Model Conversion Tutorials](prepare_model/convert_model/Convert_Model_Tutorials.md).
- For more information about IR, see [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](IR_and_opsets.md).

View File

@@ -182,11 +182,11 @@ Your model contains a custom layer and you have correctly registered it with the
#### 15. What does the message "Framework name can not be deduced from the given options. Use --framework to choose one of Caffe, TensorFlow, MXNet" mean? <a name="question-15"></a>
You have run Model Optimizer without a flag `--framework caffe|tf|mxnet`. Model Optimizer tries to deduce the framework by the extension of input model file (`.pb` for TensorFlow, `.caffemodel` for Caffe, `.params` for MXNet). Your input model might have a different extension and you need to explicitly set the source framework. For example, use `--framework caffe`.
You have run Model Optimizer without the `--framework caffe|tf|mxnet` flag. Model Optimizer tries to deduce the framework from the extension of the input model file (`.pb` for TensorFlow, `.caffemodel` for Caffe, `.params` for Apache MXNet). Your input model might have a different extension, in which case you need to set the source framework explicitly. For example, use `--framework caffe`.
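As a rough illustration of the behavior described above, the extension-based deduction could be sketched as follows. This is a hypothetical sketch, not Model Optimizer source code; the mapping and function names are made up for illustration.

```python
import os

# Hypothetical sketch of extension-based framework deduction (FAQ #15);
# this is an illustration, not actual Model Optimizer internals.
EXTENSION_TO_FRAMEWORK = {
    ".pb": "tf",             # TensorFlow
    ".caffemodel": "caffe",  # Caffe
    ".params": "mxnet",      # Apache MXNet
}

def deduce_framework(model_path):
    """Return the --framework value implied by the model file extension."""
    ext = os.path.splitext(model_path)[1]
    framework = EXTENSION_TO_FRAMEWORK.get(ext)
    if framework is None:
        # An unknown extension reproduces the situation from FAQ #15:
        # the user must pass --framework explicitly.
        raise ValueError("Framework name can not be deduced from the given options")
    return framework

print(deduce_framework("ssd_inception_v3-0000.params"))  # mxnet
```

A model saved as, for example, `my_model.bin` would hit the error branch, which is why `--framework` must then be passed on the command line.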
#### 16. What does the message "Input shape is required to convert MXNet model. Please provide it with --input_shape" mean? <a name="question-16"></a>
Input shape was not provided. That is mandatory for converting an MXNet model to the Intermediate Representation, because MXNet models do not contain information about input shapes. Use the `--input_shape` flag to specify it. For more information about using the `--input_shape`, refer to FAQ [#56](#question-56).
Input shape was not provided. It is mandatory for converting an Apache MXNet model to the OpenVINO Intermediate Representation, because Apache MXNet models do not contain information about input shapes. Use the `--input_shape` flag to specify it. For more information about using `--input_shape`, refer to FAQ [#56](#question-56).
#### 17. What does the message "Both --mean_file and mean_values are specified. Specify either mean file or mean values" mean? <a name="question-17"></a>
@@ -326,9 +326,9 @@ Model Optimizer cannot convert the model to the specified data type. Currently,
Model Optimizer tried to access a node that does not exist. This could happen if you have incorrectly specified a placeholder, input, or output node name.
#### 51. What does the message "Module mxnet was not found. Please install MXNet 1.0.0" mean? <a name="question-51"></a>
#### 51. What does the message "Module mxnet was not found. Please install MXNet 1.0.0" mean? <a name="question-51"></a>
To convert MXNet models with Model Optimizer, MXNet 1.0.0 must be installed. For more information about prerequisites, see the[Configuring Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
To convert Apache MXNet models with Model Optimizer, Apache MXNet 1.0.0 must be installed. For more information about prerequisites, see the [Configuring Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide.
#### 52. What does the message "The following error happened while loading MXNet model .." mean? <a name="question-52"></a>
@@ -480,12 +480,12 @@ For more information, refer to the [Converting a Model to Intermediate Represent
#### 83. What does the message "Specified input json ... does not exist" mean? <a name="question-83"></a>
Most likely, `.json` file does not exist or has a name that does not match the notation of MXNet. Make sure the file exists and has a correct name.
Most likely, the `.json` file does not exist or has a name that does not match the Apache MXNet notation. Make sure the file exists and has a correct name.
For more information, refer to the [Converting an Apache MXNet Model](convert_model/Convert_Model_From_MxNet.md) guide.
#### 84. What does the message "Unsupported Input model file type ... Model Optimizer support only .params and .nd files format" mean? <a name="question-84"></a>
Model Optimizer for MXNet supports only `.params` and `.nd` files formats. Most likely, you specified an unsupported file format in `--input_model`.
Model Optimizer for Apache MXNet supports only the `.params` and `.nd` file formats. Most likely, you specified an unsupported file format in `--input_model`.
For more information, refer to [Converting an Apache MXNet Model](convert_model/Convert_Model_From_MxNet.md).
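The file-type check behind this message can be sketched as follows. This is an illustrative sketch under the assumption stated in the FAQ answer (only `.params` and `.nd` are accepted); the function name is made up and this is not Model Optimizer source code.

```python
import os

# Illustrative sketch of the check behind FAQ #84: Model Optimizer accepts
# only the .params and .nd Apache MXNet file formats. Not actual tool code.
SUPPORTED_MXNET_FORMATS = {".params", ".nd"}

def validate_mxnet_input(model_path):
    """Reject model files whose extension is not a supported MXNet format."""
    ext = os.path.splitext(model_path)[1]
    if ext not in SUPPORTED_MXNET_FORMATS:
        raise ValueError(f"Unsupported input model file type: {ext}")
    return ext

print(validate_mxnet_input("ssd_inception_v3-0000.params"))  # .params
```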
#### 85. What does the message "Operation ... not supported. Please register it as custom op" mean? <a name="question-85"></a>
@@ -569,9 +569,9 @@ the file is not available or does not exist. Refer to FAQ [#89](#question-89).
#### 92. What does the message "For legacy MXNet models Model Optimizer does not support conversion of old MXNet models (trained with 1.0.0 version of MXNet and lower) with custom layers." mean? <a name="question-92"></a>
This message means that if you have a model with custom layers and its JSON file has been generated with MXNet version
This message means that if you have a model with custom layers and its JSON file has been generated with Apache MXNet version
lower than 1.0.0, Model Optimizer does not support such topologies. If you want to convert it, you have to rebuild
MXNet with unsupported layers or generate a new JSON file with MXNet version 1.0.0 or higher. You also need to implement
MXNet with unsupported layers or generate a new JSON file with Apache MXNet version 1.0.0 or higher. You also need to implement
an OpenVINO extension to use custom layers.
For more information, refer to the [OpenVINO&trade; Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide.
@@ -624,10 +624,10 @@ If a `*.caffemodel` file exists and is correct, the error occurred possibly beca
#### 100. What does the message "SyntaxError: 'yield' inside list comprehension" during Apache MXNet model conversion mean? <a name="question-100"></a>
The issue "SyntaxError: `yield` inside list comprehension" might occur during converting MXNet models (mobilefacedet-v1-mxnet, brain-tumor-segmentation-0001) on Windows platform with Python 3.8 environment. This issue is caused by the API changes for `yield expression` in Python 3.8.
The "SyntaxError: `yield` inside list comprehension" issue might occur when converting Apache MXNet models (`mobilefacedet-v1-mxnet`, `brain-tumor-segmentation-0001`) on the Windows platform with a Python 3.8 environment. This issue is caused by the API changes for `yield expression` in Python 3.8.
The following workarounds are suggested to resolve this issue:
1. Use Python 3.6/3.7 to convert Apache MXNet models on Windows.
2. Update MXNet by using `pip install mxnet=1.7.0.post2`
2. Update Apache MXNet by using `pip install mxnet==1.7.0.post2`
Note that it might have conflicts with previously installed PyPI dependencies.
#### 101. What does the message "The IR preparation was executed by the legacy MO path. ..." mean? <a name="question-101"></a>

View File

@@ -234,7 +234,7 @@ To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalli
To learn more about converting models from specific frameworks, go to:
* :ref:`Convert Your Caffe Model <convert model caffe>`
* :ref:`Convert Your TensorFlow Model <convert model tf>`
* :ref:`Convert Your MXNet Modele <convert model mxnet>`
* :ref:`Convert Your Apache MXNet Model <convert model mxnet>`
* :ref:`Convert Your Kaldi Model <convert model kaldi>`
* :ref:`Convert Your ONNX Model <convert model onnx>`
--->

View File

@@ -152,7 +152,7 @@ To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalli
To learn more about converting models from specific frameworks, go to:
* :ref:`Convert Your Caffe Model <convert model caffe>`
* :ref:`Convert Your TensorFlow Model <convert model tf>`
* :ref:`Convert Your MXNet Modele <convert model mxnet>`
* :ref:`Convert Your Apache MXNet Model <convert model mxnet>`
* :ref:`Convert Your Kaldi Model <convert model kaldi>`
* :ref:`Convert Your ONNX Model <convert model onnx>`
--->

View File

@@ -189,7 +189,7 @@ To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalli
To learn more about converting models from specific frameworks, go to:
* :ref:`Convert Your Caffe Model <convert model caffe>`
* :ref:`Convert Your TensorFlow Model <convert model tf>`
* :ref:`Convert Your MXNet Modele <convert model mxnet>`
* :ref:`Convert Your Apache MXNet Model <convert model mxnet>`
* :ref:`Convert Your Kaldi Model <convert model kaldi>`
* :ref:`Convert Your ONNX Model <convert model onnx>`
--->

View File

@@ -8,7 +8,7 @@ OpenVINO™ toolkit is a comprehensive toolkit for quickly developing applicatio
| Component | Console Script | Description |
|------------------|---------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components. <br>Supported frameworks include Caffe\*, TensorFlow\*, MXNet\*, PaddlePaddle\*, and ONNX\*. |
| [Model Optimizer](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) | `mo` |**Model Optimizer** imports, converts, and optimizes models that were trained in popular frameworks to a format usable by OpenVINO components. <br>Supported frameworks include Caffe, TensorFlow, Apache MXNet, PaddlePaddle, and ONNX. |
| [Benchmark Tool](../../tools/benchmark_tool/README.md)| `benchmark_app` | **Benchmark Application** allows you to estimate deep learning inference performance on supported devices for synchronous and asynchronous modes. |
| [Accuracy Checker](@ref omz_tools_accuracy_checker) and <br> [Annotation Converter](@ref omz_tools_accuracy_checker_annotation_converters) | `accuracy_check` <br> `convert_annotation` |**Accuracy Checker** is a deep learning accuracy validation tool that allows you to collect accuracy metrics against popular datasets. The main advantages of the tool are the flexibility of configuration and a set of supported datasets, preprocessing, postprocessing, and metrics. <br> **Annotation Converter** is a utility that prepares datasets for evaluation with Accuracy Checker. |
| [Post-Training Optimization Tool](../../tools/pot/docs/pot_introduction.md)| `pot` |**Post-Training Optimization Tool** allows you to optimize trained models with advanced capabilities, such as quantization and low-precision optimizations, without the need to retrain or fine-tune models. |
@@ -78,17 +78,17 @@ python -m pip install --upgrade pip
To install and configure the components of the development package for working with specific frameworks, use the `pip install openvino-dev[extras]` command, where `extras` is a list of extras from the table below:
| DL Framework | Extra |
| :------------------------------------------------------------------------------- | :-------------------------------|
| [Caffe*](https://caffe.berkeleyvision.org/) | caffe |
| [Kaldi*](https://github.com/kaldi-asr/kaldi) | kaldi |
| [MXNet*](https://mxnet.apache.org/) | mxnet |
| [ONNX*](https://github.com/microsoft/onnxruntime/) | onnx |
| [PyTorch*](https://pytorch.org/) | pytorch |
| [TensorFlow* 1.x](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow |
| [TensorFlow* 2.x](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 |
| DL Framework | Extra |
| :------------------------------------------------------------------------------ | :-------------------------------|
| [Caffe](https://caffe.berkeleyvision.org/) | caffe |
| [Kaldi](https://github.com/kaldi-asr/kaldi) | kaldi |
| [Apache MXNet](https://mxnet.apache.org/) | mxnet |
| [ONNX](https://github.com/microsoft/onnxruntime/) | onnx |
| [PyTorch](https://pytorch.org/) | pytorch |
| [TensorFlow 1.x](https://www.tensorflow.org/versions#tensorflow_1) | tensorflow |
| [TensorFlow 2.x](https://www.tensorflow.org/versions#tensorflow_2) | tensorflow2 |
For example, to install and configure the components for working with TensorFlow 2.x, MXNet and Caffe, use the following command:
For example, to install and configure the components for working with TensorFlow 2.x, Apache MXNet, and Caffe, use the following command:
```sh
pip install openvino-dev[tensorflow2,mxnet,caffe]
```