openvino/docs/IE_DG/Paddle_Support.md
Bo Liu d520e5558f add hardware support description for Paddle models (#9731)
2022-01-18 18:40:03 +03:00


# Paddle Support in OpenVINO™

Starting from the 2022.1 release, OpenVINO™ supports reading native Paddle models. The `Core::ReadNetwork()` method provides a uniform way to read models in either the Paddle format or the Intermediate Representation (IR); converting Paddle models to IR remains the recommended approach.

## Read Paddle Models from IR

A Paddle model can be read after it has been converted to the Intermediate Representation (IR).

C++ Example:

```cpp
#include <inference_engine.hpp>

InferenceEngine::Core core;
auto network = core.ReadNetwork("model.xml");
```

Python Example:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network("model.xml")
```

## Read Paddle Models from the Paddle Format (Paddle `inference model` model type)

C++ Example:

```cpp
#include <inference_engine.hpp>

InferenceEngine::Core core;
auto network = core.ReadNetwork("model.pdmodel");
```

Python Example:

```python
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network("model.pdmodel")
```

## The Reshape Feature

OpenVINO™ does not provide a mechanism to specify pre-processing, such as mean value subtraction or reversing input channels, for the Paddle format. If a Paddle model contains dynamic input shapes, use the `CNNNetwork::reshape` method for shape specialization.
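Conceptually, shape specialization replaces any dynamic (-1) dimensions declared by the model with concrete values before inference. The sketch below illustrates only that idea with a plain helper function; `specialize_shape` and the shapes used are illustrative and not part of the OpenVINO API.

```python
def specialize_shape(declared, concrete):
    """Replace dynamic (-1) dimensions in a declared input shape
    with values taken from a concrete shape of the same rank."""
    if len(declared) != len(concrete):
        raise ValueError("rank mismatch between declared and concrete shapes")
    specialized = []
    for d, c in zip(declared, concrete):
        if d == -1:
            specialized.append(c)      # dynamic dimension: take the concrete value
        elif d != c:
            raise ValueError(f"static dimension mismatch: {d} vs {c}")
        else:
            specialized.append(d)      # static dimension: keep as declared
    return specialized

# A Paddle model declaring a dynamic batch dimension [-1, 3, 224, 224]
# is specialized to batch size 1 before inference:
print(specialize_shape([-1, 3, 224, 224], [1, 3, 224, 224]))  # [1, 3, 224, 224]
```

With the real API, the same specialization is performed by passing the concrete shapes to `CNNNetwork::reshape` (C++) or `net.reshape(...)` (Python) before loading the network to a device.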

## Notes

* The Paddle inference model mainly consists of two kinds of files, `model.pdmodel` (model file) and `model.pdiparams` (params file), which are used for inference.
* The list of supported Paddle models and a description of how to export them can be found in Convert a Paddle Model. The following Paddle models are supported by Intel CPU only: Fast-SCNN, Yolo v3, ppyolo, MobileNetv3-SSD, BERT.
* For `Normalize` Paddle models, the input data should be in FP32 format.
* When reading Paddle models from the Paddle format, make sure that `model.pdmodel` and `model.pdiparams` are in the same directory.
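Because `model.pdmodel` and `model.pdiparams` must sit side by side, a quick pre-flight check can surface a missing params file before `read_network` fails. A minimal sketch, assuming the default file naming; the `check_paddle_model` helper is illustrative and not part of OpenVINO:

```python
from pathlib import Path

def check_paddle_model(pdmodel_path):
    """Verify that the .pdiparams file accompanies the .pdmodel file
    in the same directory, as OpenVINO expects when reading Paddle models."""
    model = Path(pdmodel_path)
    params = model.with_suffix(".pdiparams")  # same directory, same base name
    if not model.is_file():
        raise FileNotFoundError(f"model file not found: {model}")
    if not params.is_file():
        raise FileNotFoundError(f"params file not found next to model: {params}")
    return model, params
```

If the check passes, the `.pdmodel` path can be handed to `ie.read_network(...)` as in the examples above.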