Paddle Support in OpenVINO™
Starting from the 2022.1 release, OpenVINO™ supports reading native Paddle models.
The Core::ReadNetwork() method provides a uniform way to read models from either the Paddle format or IR; using it is the recommended approach to reading models.
Read Paddle Models from IR
A Paddle model can be read after it has been converted to the Intermediate Representation (IR) with Model Optimizer.
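The conversion itself is done with the Model Optimizer tool. A minimal sketch, assuming the openvino-dev package (which provides the mo command) is installed; model.pdmodel is a placeholder file name:
mo --input_model model.pdmodel
This produces model.xml and model.bin, which can then be read as shown below.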
C++ Example:
InferenceEngine::Core core;
// Reads the IR; the weights file (model.bin) is expected next to model.xml
auto network = core.ReadNetwork("model.xml");
Python Example:
from openvino.inference_engine import IECore
ie = IECore()
# Reads the IR; the weights file (model.bin) is expected next to model.xml
net = ie.read_network("model.xml")
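Once read, the network can be compiled for a device and used for inference in the same way regardless of the source format. Below is a minimal Python sketch, assuming a CPU device and all-zeros placeholder input data:
from openvino.inference_engine import IECore
import numpy as np
ie = IECore()
net = ie.read_network("model.xml")
exec_net = ie.load_network(network=net, device_name="CPU")  # compile for CPU
input_name = next(iter(net.input_info))
# All-zeros placeholder shaped like the model input; real data would go here
data = np.zeros(net.input_info[input_name].input_data.shape, dtype=np.float32)
results = exec_net.infer(inputs={input_name: data})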
Read Paddle Models from the Paddle Format (Paddle "inference model" model type)
C++ Example:
InferenceEngine::Core core;
// Reads a native Paddle model; model.pdiparams is expected next to model.pdmodel
auto network = core.ReadNetwork("model.pdmodel");
Python Example:
from openvino.inference_engine import IECore
ie = IECore()
# Reads a native Paddle model; model.pdiparams is expected next to model.pdmodel
net = ie.read_network("model.pdmodel")
The Reshape Feature
OpenVINO™ does not provide a mechanism to specify pre-processing, such as mean value subtraction or reversing input channels, for the Paddle format.
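Such pre-processing therefore has to be applied to the input data in application code before inference. Below is a minimal Python sketch, assuming an NCHW FP32 input; the mean values and input shape are illustrative placeholders:
import numpy as np
image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in NCHW input
mean = np.array([123.68, 116.78, 103.94], dtype=np.float32).reshape(1, 3, 1, 1)
image = image - mean          # mean value subtraction
image = image[:, ::-1, :, :]  # reverse input channels (e.g. RGB to BGR)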
If a Paddle model contains dynamic input shapes, use the CNNNetwork::reshape method to specialize them, as shown below.
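In Python the same can be done with the IENetwork.reshape method. A minimal sketch; the input name "x" and the target shape are placeholders for the actual input of your model:
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network("model.pdmodel")
# "x" is a placeholder; the real input name can be found via next(iter(net.input_info))
net.reshape({"x": [1, 3, 224, 224]})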
NOTES
- The Paddle inference model mainly contains two kinds of files: model.pdmodel (model file) and model.pdiparams (params file), which are used for inference.
- The list of supported Paddle models and a description of how to export them can be found in Convert a Paddle Model. The following Paddle models are supported by Intel CPU only: Fast-SCNN, Yolo v3, ppyolo, MobileNetv3-SSD, BERT.
- For Normalize Paddle models, the input data should be in FP32 format.
- When reading Paddle models from the Paddle format, make sure that model.pdmodel and model.pdiparams are in the same directory.