openvino/docs/IE_DG/Paddle_Support.md
Liu Bo 26c96a44cc add doc:'Paddle_Support.md' (#7122)
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2021-08-19 12:55:27 +03:00


Paddle Support in OpenVINO™

Starting from the 2022.1 release, OpenVINO™ supports reading native Paddle models. The Core::ReadNetwork() method provides a uniform way to read models from either the IR or the Paddle format; it is the recommended approach to reading models.

Read Paddle Models from IR

After converting a Paddle model to Intermediate Representation (IR), it can be read the same way as any other IR model. Example:

```cpp
InferenceEngine::Core core;
auto network = core.ReadNetwork("model.xml");
```

Read Paddle Models from Paddle Format (Paddle inference model type)

Example:

```cpp
InferenceEngine::Core core;
auto network = core.ReadNetwork("model.pdmodel");
```

Reshape feature:

OpenVINO™ does not provide a mechanism to specify pre-processing, such as mean value subtraction or reversing input channels, for the Paddle format. If a Paddle model contains dynamic input shapes, use the CNNNetwork::reshape method to specialize them.
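As a minimal sketch of shape specialization: the input name "x" and the shape {1, 3, 224, 224} below are illustrative placeholders, not taken from this document — substitute your model's actual input name and target shape.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    auto network = core.ReadNetwork("model.pdmodel");

    // Query the current input shapes, then overwrite the dynamic one
    // with a fully specified static shape before loading the network.
    auto shapes = network.getInputShapes();
    shapes["x"] = {1, 3, 224, 224};  // illustrative input name and shape
    network.reshape(shapes);
    return 0;
}
```

getInputShapes() returns a map from input name to shape, so only the inputs you need to specialize have to be touched; the rest keep their original shapes.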

NOTE

  • Paddle inference models mainly consist of two kinds of files: model.pdmodel (the model file) and model.pdiparams (the parameters file), both of which are used for inference.
  • The list of supported Paddle models and instructions on how to export them are described in Convert a Paddle Model.
  • For Normalize Paddle models, the input data should be in FP32 format.
  • When reading Paddle models from the Paddle format, make sure that model.pdmodel and model.pdiparams are in the same directory.
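The last point can be verified programmatically before calling ReadNetwork. This is a sketch in standard C++17; the helper names expectedParamsPath and paddlePairPresent are ours for illustration, not part of the OpenVINO API.

```cpp
#include <filesystem>

// Derive the params-file path expected to sit next to the model file:
// same directory, same stem, with a .pdiparams extension.
// (Helper name is illustrative, not an OpenVINO API.)
std::filesystem::path expectedParamsPath(std::filesystem::path modelPath) {
    modelPath.replace_extension(".pdiparams");
    return modelPath;
}

// Return true only if both model.pdmodel and its companion
// model.pdiparams exist in the same directory.
bool paddlePairPresent(const std::filesystem::path& modelPath) {
    return std::filesystem::exists(modelPath) &&
           std::filesystem::exists(expectedParamsPath(modelPath));
}
```

Calling this check before ReadNetwork gives a clearer error message than letting the reader fail on a missing weights file.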