OpenVINO™ Python API exclusives
The OpenVINO™ Runtime Python API exposes additional features and helpers to elevate the user experience. The main goal of the Python API is to provide a user-friendly and simple, yet powerful, tool for Python users.
Easier model compilation
A CompiledModel can easily be created with a helper method. It hides Core creation and applies the AUTO device by default.
@snippet docs/snippets/ov_python_exclusives.py auto_compilation
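For illustration, a minimal sketch of this shortcut (the model path is a placeholder for your own IR file):

```python
from openvino.runtime import compile_model

# A Core is created internally and the AUTO device is applied by default
compiled_model = compile_model("model.xml")  # placeholder path
```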
Model/CompiledModel inputs and outputs
Besides functions aligned with the C++ API, some of them have Pythonic counterparts or extensions. For example, Model and CompiledModel inputs/outputs can be accessed via properties.
@snippet docs/snippets/ov_python_exclusives.py properties_example
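A hedged sketch of property-based access (the model path and device name are placeholders):

```python
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")               # placeholder path
compiled_model = core.compile_model(model, "CPU")  # device is an assumption

# Properties instead of getter functions
print(model.inputs)              # all model inputs
print(compiled_model.outputs)    # all compiled model outputs
print(model.input().any_name)    # convenient single-input access
```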
Refer to the Python API documentation to see which helper functions or properties are available for each class.
Working with Tensor
The Python API allows passing data as tensors. The Tensor object holds a copy of the data from the given array. The dtype of numpy arrays is converted to OpenVINO™ types automatically.
@snippet docs/snippets/ov_python_exclusives.py tensor_basics
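A small sketch of the copying behavior (the array shape and dtype are arbitrary):

```python
import numpy as np
from openvino.runtime import Tensor

data = np.ones((1, 3, 32, 32), dtype=np.float32)
tensor = Tensor(data)        # holds a copy of the array's data
data[0, 0, 0, 0] = 42.0      # does not affect the tensor's copy
print(tensor.element_type)   # numpy float32 is mapped to an OpenVINO type
```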
Shared memory mode
Tensor objects can share the memory with numpy arrays. By specifying the shared_memory argument, a Tensor object does not perform a copy of data and has access to the memory of the numpy array.
@snippet docs/snippets/ov_python_exclusives.py tensor_shared_mode
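For contrast with the copying mode above, a minimal sketch of shared memory:

```python
import numpy as np
from openvino.runtime import Tensor

data = np.ones((1, 8), dtype=np.float32)
shared_tensor = Tensor(data, shared_memory=True)  # no copy is made
data[0, 0] = 42.0
# The tensor observes the change because both objects use the same memory
assert shared_tensor.data[0, 0] == 42.0
```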
Slices of array's memory
One of the Tensor class constructors allows sharing a slice of an array's memory. When a shape is specified in the constructor together with the numpy array as its first argument, the special shared memory mode is triggered.
@snippet docs/snippets/ov_python_exclusives.py tensor_slice_mode
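A sketch of this constructor (the array and slice shapes are arbitrary):

```python
import numpy as np
from openvino.runtime import Shape, Tensor

data = np.ones((2, 8), dtype=np.float32)
# Passing a shape along with the array shares only that slice of its memory
slice_tensor = Tensor(data, Shape([1, 8]))
data[0, 0] = 7.0
assert slice_tensor.data[0, 0] == 7.0  # the first row is shared, not copied
```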
Running inference
The Python API supports extra calling methods for running inference in synchronous and asynchronous modes.
All infer methods allow users to pass data as numpy arrays, gathered in either Python dicts or lists.
@snippet docs/snippets/ov_python_exclusives.py passing_numpy_array
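A hedged sketch of the accepted input formats (the model path and input shape are placeholders):

```python
import numpy as np
from openvino.runtime import compile_model

compiled_model = compile_model("model.xml")    # placeholder path
request = compiled_model.create_infer_request()

data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape assumed
results = request.infer({0: data})  # dict keyed by input index, name, or port
results = request.infer([data])     # or a plain list in input order
```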
Results from inference can be obtained in various ways:
@snippet docs/snippets/ov_python_exclusives.py getting_results
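For example, continuing the sketch above, output data can be read either from the dict returned by infer or directly from output tensors:

```python
# The dict returned by infer() is keyed by the model's output ports
first_output = compiled_model.output(0)
result = results[first_output]

# Alternatively, read from the request's output tensor
result = request.get_output_tensor(0).data
```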
Synchronous mode - extended
The Python API provides several synchronous calls for model inference, all of which block application execution. Additionally, these calls return the results of inference:
@snippet docs/snippets/ov_python_exclusives.py sync_infer
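A minimal sketch contrasting two blocking calls (assuming the compiled_model and data from the previous examples):

```python
# Classic, request-based synchronous call
request = compiled_model.create_infer_request()
results = request.infer({0: data})

# Extra helper: a one-shot call that creates a request internally
results = compiled_model.infer_new_request({0: data})
```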
AsyncInferQueue
Asynchronous mode pipelines can be supported with the AsyncInferQueue wrapper class. This class automatically spawns a pool of InferRequest objects (also called "jobs") and provides synchronization mechanisms to control the flow of the pipeline.
Each job is distinguished by a unique id in the range from 0 up to the number of jobs specified in the AsyncInferQueue constructor.
The start_async function call does not need to be synchronized; it waits for an available job if the queue is busy or overloaded. Every AsyncInferQueue code block should end with a call to wait_all, which provides "global" synchronization of all jobs in the pool and ensures that access to them is safe.
@snippet docs/snippets/ov_python_exclusives.py asyncinferqueue
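A hedged sketch of a typical queue-based pipeline (the job count, model path, and frames list are placeholders):

```python
import numpy as np
from openvino.runtime import AsyncInferQueue, compile_model

compiled_model = compile_model("model.xml")       # placeholder path
infer_queue = AsyncInferQueue(compiled_model, 4)  # pool of 4 jobs

frames = [np.random.rand(1, 3, 224, 224).astype(np.float32)
          for _ in range(8)]                      # placeholder input data
for i, frame in enumerate(frames):
    # Blocks only when all 4 jobs are busy
    infer_queue.start_async({0: frame}, userdata=i)
infer_queue.wait_all()  # global synchronization point
```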
Acquire results from requests
After the call to wait_all, jobs and their data can be safely accessed. Acquiring a specific job via [id] returns the InferRequest object, allowing seamless retrieval of the output data.
@snippet docs/snippets/ov_python_exclusives.py asyncinferqueue_access
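Continuing the queue sketch above, a specific job's request can be indexed once wait_all has returned:

```python
# Indexing the queue returns the InferRequest behind the given job id
request = infer_queue[0]
output = request.get_output_tensor(0).data
```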
Setting callbacks
Another feature of AsyncInferQueue is the ability to set callbacks. When a callback is set, any job that finishes inference calls the given Python function. The callback function must have two arguments: the first is the request that triggered the callback, which provides the InferRequest API; the second, called "userdata", makes it possible to pass runtime values of any Python type, which can later be used inside the callback function.
The callback of AsyncInferQueue is uniform for every job. When it is executed, the GIL is acquired to ensure safe manipulation of data inside the function.
@snippet docs/snippets/ov_python_exclusives.py asyncinferqueue_set_callback
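A sketch of a callback with the two required arguments, reusing the queue and frames from the pipeline sketch above (the print statement is illustrative only):

```python
def completion_callback(request, userdata):
    # request: the InferRequest that finished; userdata: any Python object
    print(f"Job {userdata} done, output shape: "
          f"{request.get_output_tensor(0).data.shape}")

infer_queue.set_callback(completion_callback)  # one callback for all jobs
for i, frame in enumerate(frames):
    infer_queue.start_async({0: frame}, userdata=i)
infer_queue.wait_all()
```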