From d91a06ac08c8bf9dfcf75c1532ac98efff9f9692 Mon Sep 17 00:00:00 2001 From: Maciej Smyk Date: Tue, 5 Jul 2022 15:04:03 +0200 Subject: [PATCH] Apache MXNet rename (#11871) * MXNet MXNet renaming into Apache MXNet * Update docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md Co-authored-by: Helena Kloosterman * MXNet 2 * MXNet 3 * Revert "MXNet 3" This reverts commit 046c25239d33a4d5777a678c44546b04a5df9bae. Co-authored-by: Helena Kloosterman --- docs/Extensibility_UG/Intro.md | 4 ++-- .../Deep_Learning_Model_Optimizer_DevGuide.md | 6 +++--- .../prepare_model/Model_Optimizer_FAQ.md | 20 +++++++++---------- .../Supported_Frameworks_Layers.md | 4 ++-- .../convert_model/Convert_Model_From_MxNet.md | 12 +++++------ .../Convert_Style_Transfer_From_MXNet.md | 2 +- .../Customize_Model_Optimizer.md | 2 +- docs/OV_Runtime_UG/network_state_intro.md | 2 +- docs/OV_Runtime_UG/supported_plugins/VPU.md | 6 +++--- .../installing-openvino-macos.md | 2 +- .../installing-openvino-windows.md | 2 +- docs/install_guides/pypi-openvino-dev.md | 2 +- .../ops/detection/DeformablePSROIPooling_1.md | 2 +- docs/ops/detection/DetectionOutput_1.md | 4 ++-- docs/ops/detection/DetectionOutput_8.md | 4 ++-- 15 files changed, 37 insertions(+), 37 deletions(-) diff --git a/docs/Extensibility_UG/Intro.md b/docs/Extensibility_UG/Intro.md index ebb5e603319..aae5be6cd30 100644 --- a/docs/Extensibility_UG/Intro.md +++ b/docs/Extensibility_UG/Intro.md @@ -15,7 +15,7 @@ @endsphinxdirective The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including -TensorFlow, PyTorch, ONNX, PaddlePaddle, MXNet, Caffe, and Kaldi. The list of supported operations is different for +TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for each of the supported frameworks. 
To see the operations supported by your framework, refer to [Supported Framework Operations](../MO_DG/prepare_model/Supported_Frameworks_Layers.md). @@ -52,7 +52,7 @@ Depending on model format used for import, mapping of custom operation is implem 2. If model is represented in TensorFlow, Caffe, Kaldi or MXNet formats, then [Model Optimizer Extensions](../MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md) should be used. This approach is available for model conversion in Model Optimizer only. -Existing of two approaches simultaneously is explained by two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi and MXNet). Model Optimizer can use both front-ends in contrast to the direct import of model with `read_model` method which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on framework frontend. +The two approaches coexist because OpenVINO uses two different types of frontends for model conversion: new frontends (ONNX, PaddlePaddle) and legacy frontends (TensorFlow, Caffe, Kaldi, and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the `read_model` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings, depending on the framework frontend. 
If you are implementing extensions for ONNX or PaddlePaddle new frontends and plan to use Model Optimizer `--extension` option for model conversion, then the extensions should be diff --git a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md index 6bbcd8fe825..d3dbb2dfeeb 100644 --- a/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md +++ b/docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md @@ -104,11 +104,11 @@ mo --input_model unet.pdmodel --mean_values [123,117,104] --scale 255 ``` For more information, refer to the [Converting a PaddlePaddle Model](prepare_model/convert_model/Convert_Model_From_Paddle.md) guide. -4. Launch Model Optimizer for an MXNet SSD Inception V3 model and specify first-channel layout for the input: +4. Launch Model Optimizer for an Apache MXNet SSD Inception V3 model and specify first-channel layout for the input: ```sh mo --input_model ssd_inception_v3-0000.params --layout NCHW ``` -For more information, refer to the [Converting an MXNet Model](prepare_model/convert_model/Convert_Model_From_MxNet.md) guide. +For more information, refer to the [Converting an Apache MXNet Model](prepare_model/convert_model/Convert_Model_From_MxNet.md) guide. 5. Launch Model Optimizer for a Caffe AlexNet model with input channels in the RGB format which needs to be reversed: ```sh @@ -122,6 +122,6 @@ mo --input_model librispeech_nnet2.mdl --input_shape [1,140] ``` For more information, refer to the [Converting a Kaldi Model](prepare_model/convert_model/Convert_Model_From_Kaldi.md) guide. -- To get conversion recipes for specific TensorFlow, ONNX, PyTorch, MXNet, and Kaldi models, +- To get conversion recipes for specific TensorFlow, ONNX, PyTorch, Apache MXNet, and Kaldi models, refer to the [Model Conversion Tutorials](prepare_model/convert_model/Convert_Model_Tutorials.md). 
- For more information about IR, see [Deep Learning Network Intermediate Representation and Operation Sets in OpenVINO™](IR_and_opsets.md). diff --git a/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md b/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md index 4123da27eeb..456a6743c91 100644 --- a/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md +++ b/docs/MO_DG/prepare_model/Model_Optimizer_FAQ.md @@ -182,11 +182,11 @@ Your model contains a custom layer and you have correctly registered it with the #### 15. What does the message "Framework name can not be deduced from the given options. Use --framework to choose one of Caffe, TensorFlow, MXNet" mean? -You have run Model Optimizer without a flag `--framework caffe|tf|mxnet`. Model Optimizer tries to deduce the framework by the extension of input model file (`.pb` for TensorFlow, `.caffemodel` for Caffe, `.params` for MXNet). Your input model might have a different extension and you need to explicitly set the source framework. For example, use `--framework caffe`. +You have run Model Optimizer without a flag `--framework caffe|tf|mxnet`. Model Optimizer tries to deduce the framework by the extension of input model file (`.pb` for TensorFlow, `.caffemodel` for Caffe, `.params` for Apache MXNet). Your input model might have a different extension and you need to explicitly set the source framework. For example, use `--framework caffe`. #### 16. What does the message "Input shape is required to convert MXNet model. Please provide it with --input_shape" mean? -Input shape was not provided. That is mandatory for converting an MXNet model to the Intermediate Representation, because MXNet models do not contain information about input shapes. Use the `--input_shape` flag to specify it. For more information about using the `--input_shape`, refer to FAQ [#56](#question-56). +Input shape was not provided. 
That is mandatory for converting an Apache MXNet model to the OpenVINO Intermediate Representation, because Apache MXNet models do not contain information about input shapes. Use the `--input_shape` flag to specify it. For more information about using `--input_shape`, refer to FAQ [#56](#question-56). #### 17. What does the message "Both --mean_file and mean_values are specified. Specify either mean file or mean values" mean? @@ -326,9 +326,9 @@ Model Optimizer cannot convert the model to the specified data type. Currently, Model Optimizer tried to access a node that does not exist. This could happen if you have incorrectly specified placeholder, input or output node name. -#### 51. What does the message "Module mxnet was not found. Please install MXNet 1.0.0" mean? +#### 51. What does the message "Module mxnet was not found. Please install MXNet 1.0.0" mean? -To convert MXNet models with Model Optimizer, MXNet 1.0.0 must be installed. For more information about prerequisites, see the[Configuring Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide. +To convert Apache MXNet models with Model Optimizer, Apache MXNet 1.0.0 must be installed. For more information about prerequisites, see the [Configuring Model Optimizer](../Deep_Learning_Model_Optimizer_DevGuide.md) guide. #### 52. What does the message "The following error happened while loading MXNet model .." mean? @@ -480,12 +480,12 @@ For more information, refer to the [Converting a Model to Intermediate Represent #### 83. What does the message "Specified input json ... does not exist" mean? -Most likely, `.json` file does not exist or has a name that does not match the notation of MXNet. Make sure the file exists and has a correct name. +Most likely, the `.json` file does not exist or has a name that does not match the notation of Apache MXNet. Make sure the file exists and has a correct name. For more information, refer to the [Converting an MXNet Model](convert_model/Convert_Model_From_MxNet.md) guide. #### 84. 
What does the message "Unsupported Input model file type ... Model Optimizer support only .params and .nd files format" mean? -Model Optimizer for MXNet supports only `.params` and `.nd` files formats. Most likely, you specified an unsupported file format in `--input_model`. +Model Optimizer for Apache MXNet supports only `.params` and `.nd` file formats. Most likely, you specified an unsupported file format in `--input_model`. For more information, refer to [Converting an MXNet Model](convert_model/Convert_Model_From_MxNet.md). #### 85. What does the message "Operation ... not supported. Please register it as custom op" mean? @@ -569,9 +569,9 @@ the file is not available or does not exist. Refer to FAQ [#89](#question-89). #### 92. What does the message "For legacy MXNet models Model Optimizer does not support conversion of old MXNet models (trained with 1.0.0 version of MXNet and lower) with custom layers." mean? -This message means that if you have a model with custom layers and its JSON file has been generated with MXNet version +This message means that if you have a model with custom layers and its JSON file has been generated with Apache MXNet version lower than 1.0.0, Model Optimizer does not support such topologies. If you want to convert it, you have to rebuild -MXNet with unsupported layers or generate a new JSON file with MXNet version 1.0.0 or higher. You also need to implement +Apache MXNet with unsupported layers or generate a new JSON file with Apache MXNet version 1.0.0 or higher. You also need to implement OpenVINO extension to use custom layers. For more information, refer to the [OpenVINO™ Extensibility Mechanism](../../Extensibility_UG/Intro.md) guide. @@ -624,10 +624,10 @@ If a `*.caffemodel` file exists and is correct, the error occurred possibly beca #### 100. What does the message "SyntaxError: 'yield' inside list comprehension" during MxNet model conversion mean? 
-The issue "SyntaxError: `yield` inside list comprehension" might occur during converting MXNet models (mobilefacedet-v1-mxnet, brain-tumor-segmentation-0001) on Windows platform with Python 3.8 environment. This issue is caused by the API changes for `yield expression` in Python 3.8. +The issue "SyntaxError: `yield` inside list comprehension" might occur when converting Apache MXNet models (`mobilefacedet-v1-mxnet`, `brain-tumor-segmentation-0001`) on the Windows platform with a Python 3.8 environment. This issue is caused by the API changes for `yield expression` in Python 3.8. The following workarounds are suggested to resolve this issue: 1. Use Python 3.6/3.7 to convert MXNet models on Windows -2. Update MXNet by using `pip install mxnet=1.7.0.post2` +2. Update Apache MXNet by using `pip install mxnet==1.7.0.post2` Note that it might have conflicts with previously installed PyPI dependencies. #### 101. What does the message "The IR preparation was executed by the legacy MO path. ..." mean? diff --git a/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md b/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md index 744a1915738..d363cc81651 100644 --- a/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md +++ b/docs/MO_DG/prepare_model/Supported_Frameworks_Layers.md @@ -50,10 +50,10 @@ In this article, you can find lists of supported framework layers, divided by fr | Tile | | -## MXNet Supported Symbols +## Apache MXNet Supported Symbols -| Symbol Name in MXNet| Limitations| +| Symbol Name in Apache MXNet| Limitations| | :----------| :----------| | _Plus | | | _contrib_arange_like | | diff --git a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md index 10838034374..7f3952efc82 100644 --- a/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md +++ b/docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md @@ -22,17 +22,17 @@ MXNet-specific parameters: 
--save_params_from_nd Enable saving built parameters file from .nd files --legacy_mxnet_model - Enable MXNet loader to make a model compatible with the latest MXNet version. - Use only if your model was trained with MXNet version lower than 1.0.0 + Enable Apache MXNet loader to make a model compatible with the latest Apache MXNet version. + Use only if your model was trained with an Apache MXNet version lower than 1.0.0 --enable_ssd_gluoncv Enable transformation for converting the gluoncv ssd topologies. Use only if your topology is one of ssd gluoncv topologies ``` -> **NOTE**: By default, Model Optimizer does not use the MXNet loader. It transforms the topology to another format which is compatible with the latest -> version of MXNet. However, the MXNet loader is required for models trained with lower version of MXNet. If your model was trained with an MXNet version lower than 1.0.0, specify the -> `--legacy_mxnet_model` key to enable the MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually -> recompile MXNet with custom layers and install it in your environment. +> **NOTE**: By default, Model Optimizer does not use the Apache MXNet loader. It transforms the topology to another format which is compatible with the latest +> version of Apache MXNet. However, the Apache MXNet loader is required for models trained with a lower version of Apache MXNet. If your model was trained with an Apache MXNet version lower than 1.0.0, specify the +> `--legacy_mxnet_model` key to enable the Apache MXNet loader. Note that the loader does not support models with custom layers. In this case, you must manually +> recompile Apache MXNet with custom layers and install it in your environment. 
## Custom Layer Definition diff --git a/docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md b/docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md index 1337ef08d0d..a87026e109e 100644 --- a/docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md +++ b/docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md @@ -110,7 +110,7 @@ cp models/13_decoder_auxs.nd nst_model ``` > **NOTE**: Make sure that all the `.params` and `.json` files are in the same directory as the `.nd` files. Otherwise, the conversion process fails. -3. Run the Model Optimizer for MXNet. Use the `--nd_prefix_name` option to specify the decoder prefix and `--input_shape` to specify input shapes in [N,C,W,H] order. For example:
+3. Run the Model Optimizer for Apache MXNet. Use the `--nd_prefix_name` option to specify the decoder prefix and `--input_shape` to specify input shapes in [N,C,W,H] order. For example:
```sh mo --input_symbol /nst_vgg19-symbol.json --framework mxnet --output_dir --input_shape [1,3,224,224] --nd_prefix_name 13_decoder --pretrained_model /vgg19-0000.params ``` diff --git a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md index 5c6c1af4ba6..b92df4778d6 100644 --- a/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md +++ b/docs/MO_DG/prepare_model/customize_model_optimizer/Customize_Model_Optimizer.md @@ -736,7 +736,7 @@ sub-graph of the original graph isomorphic to the specified pattern. 2. [Specific Operation Front Phase Transformations](#specific-operation-front-phase-transformations) triggered for the node with a specific `op` attribute value. 3. [Generic Front Phase Transformations](#generic-front-phase-transformations). -4. Manually enabled transformation, defined with a JSON configuration file (for TensorFlow, ONNX, MXNet, and PaddlePaddle models), specified using the `--transformations_config` command-line parameter: +4. Manually enabled transformation, defined with a JSON configuration file (for TensorFlow, ONNX, Apache MXNet, and PaddlePaddle models), specified using the `--transformations_config` command-line parameter: 1. [Node Name Pattern Front Phase Transformations](#node-name-pattern-front-phase-transformation). 2. [Front Phase Transformations Using Start and End Points](#start-end-points-front-phase-transformations). 3. [Generic Front Phase Transformations Enabled with Transformations Configuration File](#generic-transformations-config-front-phase-transformations). 
diff --git a/docs/OV_Runtime_UG/network_state_intro.md b/docs/OV_Runtime_UG/network_state_intro.md index 96767566cab..85ada5e26a4 100644 --- a/docs/OV_Runtime_UG/network_state_intro.md +++ b/docs/OV_Runtime_UG/network_state_intro.md @@ -216,7 +216,7 @@ If the original framework does not have a special API for working with states, a **ONNX and frameworks supported via ONNX format:** *LSTM, RNN, GRU* original layers are converted to the TensorIterator operation. TensorIterator body contains LSTM/RNN/GRU Cell. Peepholes, InputForget modifications are not supported, sequence_lengths optional input is supported. *ONNX Loop* layer is converted to the OpenVINO Loop operation. -**MXNet:** *LSTM, RNN, GRU* original layers are converted to TensorIterator operation, TensorIterator body contains LSTM/RNN/GRU Cell operations. +**Apache MXNet:** *LSTM, RNN, GRU* original layers are converted to the TensorIterator operation. The TensorIterator body contains LSTM/RNN/GRU Cell operations. **TensorFlow:** *BlockLSTM* is converted to TensorIterator operation, TensorIterator body contains LSTM Cell operation, Peepholes, InputForget modifications are not supported. *While* layer is converted to TensorIterator, TensorIterator body can contain any supported operations, but dynamic cases, when count of iterations cannot be calculated in shape inference (ModelOptimizer conversion) time, are not supported. 
diff --git a/docs/OV_Runtime_UG/supported_plugins/VPU.md b/docs/OV_Runtime_UG/supported_plugins/VPU.md index 1e3b4448cf3..12a7d39b2c0 100644 --- a/docs/OV_Runtime_UG/supported_plugins/VPU.md +++ b/docs/OV_Runtime_UG/supported_plugins/VPU.md @@ -20,7 +20,7 @@ This chapter provides information on the OpenVINO Runtime plugins that enable in ## Supported Networks -**Caffe\***: +**Caffe**: * AlexNet * CaffeNet * GoogleNet (Inception) v1, v2, v4 @@ -32,7 +32,7 @@ This chapter provides information on the OpenVINO Runtime plugins that enable in * DenseNet family (121,161,169,201) * SSD-300, SSD-512, SSD-MobileNet, SSD-GoogleNet, SSD-SqueezeNet -**TensorFlow\***: +**TensorFlow**: * AlexNet * Inception v1, v2, v3, v4 * Inception ResNet v2 @@ -46,7 +46,7 @@ This chapter provides information on the OpenVINO Runtime plugins that enable in * ssd_mobilenet_v1 * DeepLab-v3+ -**MXNet\***: +**Apache MXNet**: * AlexNet and CaffeNet * DenseNet family (121,161,169,201) * SqueezeNet v1.1 diff --git a/docs/install_guides/installing-openvino-macos.md b/docs/install_guides/installing-openvino-macos.md index 81e4f01c25e..6581d4d890b 100644 --- a/docs/install_guides/installing-openvino-macos.md +++ b/docs/install_guides/installing-openvino-macos.md @@ -152,7 +152,7 @@ To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalli To learn more about converting models from specific frameworks, go to: * :ref:`Convert Your Caffe Model ` * :ref:`Convert Your TensorFlow Model ` - * :ref:`Convert Your MXNet Modele ` + * :ref:`Convert Your Apache MXNet Model ` * :ref:`Convert Your Kaldi Model ` * :ref:`Convert Your ONNX Model ` ---> diff --git a/docs/install_guides/installing-openvino-windows.md b/docs/install_guides/installing-openvino-windows.md index 2b28f06da02..fba252f4b90 100644 --- a/docs/install_guides/installing-openvino-windows.md +++ b/docs/install_guides/installing-openvino-windows.md @@ -189,7 +189,7 @@ To uninstall the toolkit, follow the steps on the [Uninstalling 
page](uninstalli To learn more about converting models from specific frameworks, go to: * :ref:`Convert Your Caffe Model ` * :ref:`Convert Your TensorFlow Model ` - * :ref:`Convert Your MXNet Modele ` + * :ref:`Convert Your Apache MXNet Model ` * :ref:`Convert Your Kaldi Model ` * :ref:`Convert Your ONNX Model ` ---> diff --git a/docs/install_guides/pypi-openvino-dev.md b/docs/install_guides/pypi-openvino-dev.md index d13c506cf30..8984f9289f4 100644 --- a/docs/install_guides/pypi-openvino-dev.md +++ b/docs/install_guides/pypi-openvino-dev.md @@ -87,7 +87,7 @@ pip install openvino-dev[extras] | tensorflow | [TensorFlow* 1.x](https://www.tensorflow.org/versions#tensorflow_1) | | tensorflow2 | [TensorFlow* 2.x](https://www.tensorflow.org/versions#tensorflow_2) | -For example, to install and configure the components for working with TensorFlow 2.x, MXNet and Caffe, use the following command: +For example, to install and configure the components for working with TensorFlow 2.x, Apache MXNet and Caffe, use the following command: ```sh pip install openvino-dev[tensorflow2,mxnet,caffe] ``` diff --git a/docs/ops/detection/DeformablePSROIPooling_1.md b/docs/ops/detection/DeformablePSROIPooling_1.md index d81290fd050..394d654612d 100644 --- a/docs/ops/detection/DeformablePSROIPooling_1.md +++ b/docs/ops/detection/DeformablePSROIPooling_1.md @@ -13,7 +13,7 @@ If only two inputs are provided, position sensitive pooling with regular ROI bin If third input is provided, each bin position is transformed by adding corresponding offset to the bin left top corner coordinates. Third input values are usually calculated by regular position sensitive pooling layer, so non-deformable mode (DeformablePSROIPooling with two inputs). The ROI coordinates are specified as five element tuples: `[batch_id, x_1, y_1, x_2, y_2]` in absolute values. 
-This operation is compatible with [MXNet DeformablePSROIPooling](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/contrib/symbol/index.html#mxnet.contrib.symbol.DeformablePSROIPooling) cases where `group_size` is equal to `pooled_size`. +This operation is compatible with [Apache MXNet DeformablePSROIPooling](https://mxnet.apache.org/versions/1.7.0/api/python/docs/api/contrib/symbol/index.html#mxnet.contrib.symbol.DeformablePSROIPooling) cases where `group_size` is equal to `pooled_size`. **Attributes** diff --git a/docs/ops/detection/DetectionOutput_1.md b/docs/ops/detection/DetectionOutput_1.md index 2278619ec15..c018b87c5fc 100644 --- a/docs/ops/detection/DetectionOutput_1.md +++ b/docs/ops/detection/DetectionOutput_1.md @@ -101,8 +101,8 @@ At each feature map cell, *DetectionOutput* predicts the offsets relative to the * **Description**: *decrease_label_id* flag that denotes how to perform NMS. * **Range of values**: - * false - perform NMS like in Caffe\*. - * true - perform NMS like in MxNet\*. + * false - perform NMS like in Caffe. + * true - perform NMS like in Apache MXNet. * **Type**: boolean * **Default value**: false * **Required**: *no* diff --git a/docs/ops/detection/DetectionOutput_8.md b/docs/ops/detection/DetectionOutput_8.md index 87c6dbba724..bfc5db90138 100644 --- a/docs/ops/detection/DetectionOutput_8.md +++ b/docs/ops/detection/DetectionOutput_8.md @@ -108,8 +108,8 @@ it is necessary to adjust the predicted offset accordingly. * **Description**: *decrease_label_id* flag that denotes how to perform NMS. * **Range of values**: - * false - perform NMS like in Caffe\*. - * true - perform NMS like in MxNet\*. + * false - perform NMS like in Caffe. + * true - perform NMS like in Apache MXNet. * **Type**: boolean * **Default value**: false * **Required**: *no*
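A rename of this size is easy to leave incomplete: a bare "MXNet" or a miscapitalized "MxNet" without the "Apache " prefix can survive in prose. The sketch below is a hypothetical review helper, not part of this patch or of any OpenVINO repository tooling, that flags such occurrences in a Markdown file; it skips inline code spans like `mxnet`, since the Python package name and `--framework mxnet` values stay lowercase:

```python
import re

def find_unprefixed_mxnet(text):
    """Return (line_number, line) pairs where MXNet/MxNet appears in prose
    without the 'Apache ' prefix. Inline code spans (`...`) are ignored, and
    identifiers joined with '_' (e.g. Convert_Model_From_MxNet.md) do not
    match, because \\b requires a non-word character before the name."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        prose = re.sub(r"`[^`]*`", "", line)  # drop inline code spans
        for match in re.finditer(r"\bM[xX]Net\b", prose):
            if prose[:match.start()].endswith("Apache "):
                continue  # already prefixed, nothing to report
            hits.append((lineno, line.strip()))
            break  # one report per line is enough
    return hits
```

Running it over each changed file, for example `find_unprefixed_mxnet(open(path).read())`, gives a short list of lines to re-check before merging.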