doc of paddle 2nd batch operations support (#7952)

* doc of paddle 2nd batch operations support

* Modified based on UX/DX Team feedback

* update the example command in Convert_Model_From_Paddle.m

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

Co-authored-by: meiyang-intel <yang.mei@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Bo Liu 2021-10-27 07:02:21 +08:00 committed by GitHub
parent 16cee44fbb
commit d5a5dc1aa1
3 changed files with 582 additions and 569 deletions


# Paddle Support in OpenVINO™ {#openvino_docs_IE_DG_Paddle_Support}
Starting from the 2022.1 release, OpenVINO™ supports reading native Paddle models.
The `Core::ReadNetwork()` method provides a uniform way to read models from either the Paddle format or IR; it is the recommended approach for reading models.
## Read Paddle Models from IR
A Paddle model can be read after it is [converted](../MO_DG/prepare_model/convert_model/Convert_Model_From_Paddle.md) to the [Intermediate Representation (IR)](../MO_DG/IR_and_opsets.md).
**C++ Example:**
```cpp
InferenceEngine::Core core;
auto network = core.ReadNetwork("model.xml");
```
**Python Example:**
```python
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network("model.xml")
```
## Read Paddle Models from the Paddle Format (Paddle `inference model` model type)
**C++ Example:**
```cpp
InferenceEngine::Core core;
auto network = core.ReadNetwork("model.pdmodel");
```
**Python Example:**
```python
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network("model.pdmodel")
```
**The Reshape feature:**
OpenVINO™ does not provide a mechanism to specify pre-processing, such as mean values subtraction or reverse input channels, for the Paddle format.
If a Paddle model contains dynamic shapes for input, use the `CNNNetwork::reshape` method for shape specialization.
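In the Python API, the counterpart is the `reshape` method of the network object. A minimal sketch, assuming the model has a single dynamic input named `image` (a hypothetical name; list `net.input_info` to find the real one):
```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network("model.pdmodel")

# "image" is a hypothetical input name; check net.input_info for yours.
net.reshape({"image": [1, 3, 608, 608]})

# The network is loaded with the specialized shape.
exec_net = ie.load_network(network=net, device_name="CPU")
```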
## NOTES
* The Paddle [`inference model`](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.1/doc/doc_en/inference_en.md) mainly consists of two kinds of files: `model.pdmodel` (the model file) and `model.pdiparams` (the parameters file), both of which are used for inference.
* The list of supported Paddle models and a description of how to export them can be found in [Convert a Paddle Model](../MO_DG/prepare_model/convert_model/Convert_Model_From_Paddle.md).
* For `Normalize` Paddle models, the input data should be in the FP32 format (see the sketch after this list).
* When reading Paddle models from the Paddle format, make sure that `model.pdmodel` and `model.pdiparams` are in the same directory.
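As noted above, `Normalize` models expect FP32 input. A minimal sketch of preparing such a blob, assuming a single NCHW image input; the image size is a placeholder:
```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# model.pdiparams is picked up from the same directory as model.pdmodel.
net = ie.read_network("model.pdmodel")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
# Placeholder HWC uint8 image; replace with real image data.
image = np.zeros((608, 608, 3), dtype=np.uint8)
# Convert to NCHW and cast to FP32, as Normalize models require.
blob = image.transpose(2, 0, 1)[np.newaxis, ...].astype(np.float32)
result = exec_net.infer({input_name: blob})
```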



# Converting a Paddle* Model {#openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Paddle}
A summary of the steps for optimizing and deploying a model trained with Paddle\*:
1. [Configure the Model Optimizer](../Config_Model_Optimizer.md) for Paddle\*.
2. [Convert a Paddle\* Model](#Convert_From_Paddle) to produce an optimized [Intermediate Representation (IR)](../../IR_and_opsets.md) of the model based on the trained network topology, weights, and biases.
3. Test the model in the Intermediate Representation format using the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) in the target environment via the provided Inference Engine [sample applications](../../../IE_DG/Samples_Overview.md).
4. [Integrate](../../../IE_DG/Samples_Overview.md) the [Inference Engine](../../../IE_DG/Deep_Learning_Inference_Engine_DevGuide.md) into your application to deploy the model in the target environment.
To convert a Paddle\* model, run the Model Optimizer:
```sh
python3 mo.py --input_model <INPUT_MODEL>.pdmodel --output_dir <OUTPUT_MODEL_DIR>
```
Parameters to convert your model:
* [Framework-agnostic parameters](Converting_Model_General.md): These parameters are used to convert a model trained with any supported framework.
> **NOTE:** `--scale`, `--scale_values`, `--mean_values`, `--mean_file` are not supported in the current version of mo_paddle.
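Because these options are not supported for Paddle models, mean/scale normalization has to be applied to the input data at inference time instead. A minimal numpy sketch, with placeholder constants that must be replaced by the values your model was trained with:
```python
import numpy as np

# Placeholder per-channel constants for an NCHW FP32 blob; use your model's values.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(1, 3, 1, 1)
SCALE = np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(1, 3, 1, 1)

def normalize(blob):
    """Subtract the mean and divide by the scale, as MO would otherwise bake in."""
    return (blob - MEAN) / SCALE
```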
### Example of Converting a Paddle* Model
Below is an example command that uses the Model Optimizer to convert a YOLOv3 Paddle\* network to an OpenVINO IR network.
```sh
python3 mo.py --model_name yolov3_darknet53_270e_coco --output_dir <OUTPUT_MODEL_DIR> --framework=paddle --data_type=FP32 --reverse_input_channels --input_shape=[1,3,608,608],[1,2],[1,2] --input=image,im_shape,scale_factor --output=save_infer_model/scale_0.tmp_1,save_infer_model/scale_1.tmp_1 --input_model=yolov3.pdmodel
```
## Supported Paddle\* Layers
Refer to [Supported Framework Layers](../Supported_Frameworks_Layers.md) for the list of supported standard layers.
## Frequently Asked Questions (FAQ)
When the Model Optimizer is unable to run to completion due to typographical errors, incorrectly used options, or other problems, it provides explanatory messages. They describe the potential cause of the problem and give a link to the [Model Optimizer FAQ](../Model_Optimizer_FAQ.md), which provides instructions on how to resolve most issues. The FAQ also includes links to relevant sections in the Model Optimizer Developer Guide to help you understand what went wrong.