# Paddle Support in OpenVINO™
Starting from the 2022.1 release, OpenVINO™ supports reading native Paddle models. The `Core::ReadNetwork()` method provides a uniform way to read models from either the IR or the Paddle format, and it is the recommended approach for reading models.
## Read Paddle Models from IR
After converting a Paddle model to the Intermediate Representation (IR), it can be read with the recommended `ReadNetwork()` method. Example:
```cpp
InferenceEngine::Core core;
auto network = core.ReadNetwork("model.xml");
```
## Read Paddle Models from Paddle Format (Paddle `inference model` type)

Example:
```cpp
InferenceEngine::Core core;
auto network = core.ReadNetwork("model.pdmodel");
```
## Reshape Feature
OpenVINO™ does not provide a mechanism to specify pre-processing for the Paddle format, such as mean value subtraction or reversing input channels. If a Paddle model contains dynamic input shapes, use the `CNNNetwork::reshape` method to specialize them.
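As a minimal sketch of shape specialization, the snippet below reads a Paddle model, queries its current input shapes, and fixes them to static values before compilation. The input name `"image"` and the `1x3x224x224` shape are assumptions for illustration; substitute the actual input tensor name and dimensions of your model. It cannot be run without an installed OpenVINO™ runtime.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // Read a Paddle model that declares dynamic input shapes.
    auto network = core.ReadNetwork("model.pdmodel");

    // Query the current input shapes; dynamic dimensions must be
    // specialized before the network is compiled for a device.
    auto shapes = network.getInputShapes();

    // "image" is a hypothetical input name; replace it with the real
    // input tensor name of your model. Shape order is N, C, H, W.
    shapes["image"] = {1, 3, 224, 224};
    network.reshape(shapes);
    return 0;
}
```

After `reshape()` succeeds, the network can be loaded to a device as usual with `Core::LoadNetwork()`.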
> **NOTE**:
>
> * A Paddle `inference model` mainly contains two kinds of files: `model.pdmodel` (model file) and `model.pdiparams` (params file), which are used for inference.
> * The list of supported Paddle models and how to export these models are described in Convert a Paddle Model.
> * For `Normalize` Paddle models, the input data should be in FP32 format.
> * When reading Paddle models from the Paddle format, make sure that `model.pdmodel` and `model.pdiparams` are in the same directory.