[DEPRECATED] Migration from Inference Engine Plugin API to Core API
In the 2019 R2 release, the new Inference Engine Core API was introduced. This guide is updated to reflect the new API approach. The Inference Engine Plugin API is still supported, but will be deprecated in future releases.
This section provides common steps to migrate your application written using the Inference Engine Plugin API (InferenceEngine::InferencePlugin) to the Inference Engine Core API (InferenceEngine::Core).
To learn how to write a new application using the Inference Engine, refer to Integrate the Inference Engine Request API with Your Application and Inference Engine Samples Overview.
Inference Engine Core Class
The Inference Engine Core class is implemented on top of the existing Inference Engine Plugin API and handles plugins internally.
The main responsibility of the InferenceEngine::Core class is to hide plugin specifics and provide a new layer of abstraction that works with devices (see InferenceEngine::Core::GetAvailableDevices). Almost all methods of this class accept deviceName as an additional parameter that denotes the actual device you are working with. Plugins are listed in the plugins.xml file, which is loaded when an InferenceEngine::Core object is constructed:
```xml
<ie>
    <plugins>
        <plugin name="CPU" location="libMKLDNNPlugin.so">
        </plugin>
        ...
    </plugins>
</ie>
```
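For illustration, a minimal sketch of the device-centric workflow this enables (assuming the umbrella header inference_engine.hpp; the device names returned depend on which plugins are registered):

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    // Constructing Core loads plugins.xml and registers the plugins listed there
    InferenceEngine::Core core;

    // Each returned name (for example, "CPU") can be passed as deviceName
    // to other Core methods
    for (const std::string& device : core.GetAvailableDevices()) {
        std::cout << device << std::endl;
    }
    return 0;
}
```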
Migration Steps
The common migration process includes the following steps:
- Migrate from the InferenceEngine::InferencePlugin initialization:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part0
to the InferenceEngine::Core class initialization:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part1
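The referenced snippets are not reproduced here; as a rough sketch of this step (initialize is a hypothetical wrapper function, and "CPU" is only an example device name), the change looks approximately like this:

```cpp
#include <inference_engine.hpp>

void initialize() {
    // Old approach (deprecated): locate and load a device plugin explicitly
    InferenceEngine::InferencePlugin plugin =
        InferenceEngine::PluginDispatcher().getPluginByDevice("CPU");

    // New approach: a single Core object manages device plugins internally
    InferenceEngine::Core core;
}
```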
- Instead of using InferenceEngine::CNNNetReader to read IR:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part2
read networks using the Core class:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part3
The Core class also allows reading models in the ONNX format (see the ONNX format support documentation for details):
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part4
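A rough sketch of this step, assuming an IR pair model.xml/model.bin (readWithReader and readWithCore are hypothetical wrapper functions):

```cpp
#include <inference_engine.hpp>

InferenceEngine::CNNNetwork readWithReader() {
    // Old approach (deprecated): read the .xml topology and .bin weights separately
    InferenceEngine::CNNNetReader reader;
    reader.ReadNetwork("model.xml");
    reader.ReadWeights("model.bin");
    return reader.getNetwork();
}

InferenceEngine::CNNNetwork readWithCore(InferenceEngine::Core& core) {
    // New approach: a single call reads the IR; passing "model.onnx"
    // as the first argument reads an ONNX model instead
    return core.ReadNetwork("model.xml", "model.bin");
}
```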
- Instead of adding CPU device extensions to the plugin:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part5
add extensions to CPU device using the Core class:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part6
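Sketched below with a hypothetical addExtension wrapper; the extension pointer could come, for instance, from InferenceEngine::make_so_pointer, and "CPU" is only an example device name:

```cpp
#include <inference_engine.hpp>

void addExtension(InferenceEngine::InferencePlugin& plugin,
                  InferenceEngine::Core& core,
                  const InferenceEngine::IExtensionPtr& extension) {
    // Old approach (deprecated): the extension is bound to one plugin object
    plugin.AddExtension(extension);

    // New approach: name the target device explicitly
    core.AddExtension(extension, "CPU");
}
```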
- Instead of setting configuration keys to a particular plugin, set (key, value) pairs via InferenceEngine::Core::SetConfig:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part7
Note: If deviceName is omitted as the last argument, the configuration is set for all Inference Engine devices.
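For example, enabling performance counters for the CPU device only could look roughly like this (configureCpu is a hypothetical wrapper function):

```cpp
#include <inference_engine.hpp>

void configureCpu(InferenceEngine::Core& core) {
    // The (key, value) pair applies to the CPU device only; dropping the
    // "CPU" argument would apply it to all registered devices
    core.SetConfig({{InferenceEngine::PluginConfigParams::KEY_PERF_COUNT,
                     InferenceEngine::PluginConfigParams::YES}}, "CPU");
}
```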
- Migrate from loading the network to a particular plugin:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part8
to loading the network to a particular device with InferenceEngine::Core::LoadNetwork:
@snippet openvino/docs/snippets/Migration_CoreAPI.cpp part9
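Roughly, with "CPU" as an example device name and an empty configuration map (load is a hypothetical wrapper function):

```cpp
#include <inference_engine.hpp>

InferenceEngine::ExecutableNetwork load(InferenceEngine::Core& core,
                                        const InferenceEngine::CNNNetwork& network) {
    // Old approach (deprecated): the target device was implied by the plugin object
    //   InferenceEngine::ExecutableNetwork exec = plugin.LoadNetwork(network, {});

    // New approach: the device name is an explicit argument
    return core.LoadNetwork(network, "CPU", {});
}
```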
After you have an instance of InferenceEngine::ExecutableNetwork, all other steps are as usual.
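For instance, a sketch of the usual synchronous inference flow, which is unchanged by this migration (infer is a hypothetical wrapper; input and output blob handling is omitted):

```cpp
#include <inference_engine.hpp>

void infer(InferenceEngine::ExecutableNetwork& execNetwork) {
    // Create an inference request and run it synchronously, exactly as
    // with an ExecutableNetwork obtained from the deprecated plugin API
    InferenceEngine::InferRequest request = execNetwork.CreateInferRequest();
    request.Infer();
}
```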