Mikhail Nosov f57dc05c66 [OV20] Convert NV12 to RGB operation + preprocessing (#7508)
* # Conflicts:
#	docs/template_plugin/tests/functional/op_reference/convert_color_nv12.cpp
#	inference-engine/tests/functional/plugin/cpu/shared_tests_instances/single_layer_tests/convert_color_nv12.cpp
#	inference-engine/tests/functional/shared_test_classes/include/shared_test_classes/single_layer/convert_color_nv12.hpp
#	inference-engine/tests/functional/shared_test_classes/src/single_layer/convert_color_nv12.cpp
#	ngraph/core/include/openvino/core/preprocess/input_tensor_info.hpp
#	ngraph/core/include/openvino/core/preprocess/preprocess_steps.hpp
#	ngraph/core/include/openvino/op/nv12_to_bgr.hpp
#	ngraph/core/include/openvino/op/nv12_to_rgb.hpp
#	ngraph/core/src/op/nv12_to_bgr.cpp
#	ngraph/core/src/op/nv12_to_rgb.cpp
#	ngraph/core/src/preprocess/pre_post_process.cpp
#	ngraph/core/src/preprocess/preprocess_steps_impl.hpp
#	ngraph/test/CMakeLists.txt

* Added more tests to cover 100% of the code
Allow element type conversion for the 'multi-plane' color format (see the preprocessing sketch after this log entry)

* Inherit tensor names for 'convert_color'

* Clang

* Fix tests

* Disable 'int8' preprocessing resize test

* Fix review comments

* Add more restrictions and tests for planes sub-names

* 1) Added a check that the tensor names generated for nodes are unique
Raise an error if a user's plane sub-name conflicts with an existing node in the function
2) Added exception safety to the preprocess build. Previously, if input #2 failed, preprocessing had already been applied for input #1 and the function was left corrupted
An exception guard now restores the function to its original state if an exception occurs (see the guard sketch after this log entry)

* Fix clang-format
2021-10-06 15:22:05 +03:00
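
As a rough illustration of the change above, here is a minimal sketch of declaring a two-plane NV12 input with per-plane tensor sub-names and converting it to RGB during preprocessing. It is written against the later released ov::preprocess fluent API (the builder syntax at the time of this commit differed slightly), and the model path and the "y"/"uv" sub-names are placeholders.

```cpp
#include <memory>

#include <openvino/core/preprocess/pre_post_process.hpp>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Hypothetical model path; replace with a real IR file.
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");

    ov::preprocess::PrePostProcessor ppp(model);
    // Declare that the input arrives as two-plane NV12 (Y plane plus an
    // interleaved UV plane); the sub-names ("y", "uv") become suffixes of
    // the tensor names generated for the two planes.
    ppp.input()
        .tensor()
        .set_element_type(ov::element::u8)
        .set_color_format(ov::preprocess::ColorFormat::NV12_TWO_PLANES, {"y", "uv"});
    // Insert the NV12 -> RGB conversion in front of the model's original input.
    ppp.input().preprocess().convert_color(ov::preprocess::ColorFormat::RGB);
    model = ppp.build();
    return 0;
}
```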
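The exception-safety item above describes a guard that rolls the function back if the preprocess build fails partway through its inputs. Below is a generic sketch of that pattern, assuming a copy-based snapshot; the ExceptionGuard class, the std::vector standing in for the function, and the forced failure on input #2 are illustrative, not the actual implementation.

```cpp
#include <functional>
#include <iostream>
#include <stdexcept>
#include <utility>
#include <vector>

// Illustrative scope guard: runs the restore action unless disarmed.
class ExceptionGuard {
public:
    explicit ExceptionGuard(std::function<void()> restore) : m_restore(std::move(restore)) {}
    ~ExceptionGuard() {
        if (m_restore) m_restore();  // an exception escaped: roll back
    }
    void disarm() { m_restore = nullptr; }  // success: keep the changes
private:
    std::function<void()> m_restore;
};

// Hypothetical build step: mutates 'graph' once per input; any call may throw.
void apply_preprocessing(std::vector<int>& graph) {
    std::vector<int> backup = graph;  // snapshot of the original function
    ExceptionGuard guard([&graph, backup] { graph = backup; });
    for (int input = 0; input < 3; ++input) {
        graph.push_back(input);       // stand-in for one input's rewrite
        if (input == 2) throw std::runtime_error("input #2 failed");
    }
    guard.disarm();                   // never reached in this demo
}

int main() {
    std::vector<int> graph{42};
    try {
        apply_preprocessing(graph);
    } catch (const std::exception& e) {
        std::cerr << e.what() << '\n';
    }
    std::cout << graph.size() << '\n';  // prints 1: original state restored
    return 0;
}
```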

OpenVINO™ Toolkit


This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.

This open-source version includes several components, namely the Model Optimizer, nGraph, and the Inference Engine, as well as CPU, GPU, MYRIAD, multi-device, and heterogeneous plugins to accelerate deep learning inference on Intel® CPUs and Intel® Processor Graphics. It supports pre-trained models from the Open Model Zoo, along with 100+ open source and public models in popular formats such as Caffe*, TensorFlow*, MXNet*, and ONNX*.
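As a quick illustration of that high-level API, a minimal synchronous inference flow with InferenceEngine::Core looks roughly like this; the IR file names and the CPU device choice are placeholders.

```cpp
#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core core;
    // Hypothetical IR files produced by the Model Optimizer.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml", "model.bin");
    // Compile the network for a target device plugin (CPU here).
    InferenceEngine::ExecutableNetwork executable = core.LoadNetwork(network, "CPU");
    InferenceEngine::InferRequest request = executable.CreateInferRequest();
    request.Infer();  // one synchronous inference with default-allocated blobs
    return 0;
}
```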

Repository components:

* Model Optimizer
* nGraph
* Inference Engine

License

The Deep Learning Deployment Toolkit is licensed under the Apache License, Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Resources:

Support

Please report questions, issues and suggestions using:


* Other names and brands may be claimed as the property of others.
