* Added info on DockerHub CI Framework
* Feature/azaytsev/change layout (#3295)
* Changes according to feedback comments
* Replaced @ref's with html links
* Fixed links, added a title page for installing from repos and images, fixed formatting issues
* Added links
* minor fix
* Added DL Streamer to the list of components installed by default
* Link fixes
* Link fixes
* ovms doc fix (#2988)
* added OpenVINO Model Server
* ovms doc fixes
Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
* Updated openvino_docs.xml
* Updated the link to software license agreements
* Revert "Updated the link to software license agreements"
This reverts commit 706dac500e.
* Docs to Sphinx (#8151)
* docs to sphinx
* Update GPU.md
* Update CPU.md
* Update AUTO.md
* Update performance_int8_vs_fp32.md
* update
* update md
* updates
* disable doc ci
* disable ci
* fix index.rst
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
# Conflicts:
# .gitignore
# docs/CMakeLists.txt
# docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md
# docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
# docs/IE_DG/Extensibility_DG/VPU_Kernel.md
# docs/IE_DG/InferenceEngine_QueryAPI.md
# docs/IE_DG/Int8Inference.md
# docs/IE_DG/Integrate_with_customer_application_new_API.md
# docs/IE_DG/Model_caching_overview.md
# docs/IE_DG/supported_plugins/GPU_RemoteBlob_API.md
# docs/IE_DG/supported_plugins/HETERO.md
# docs/IE_DG/supported_plugins/MULTI.md
# docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
# docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
# docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
# docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md
# docs/MO_DG/prepare_model/convert_model/Converting_Model.md
# docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md
# docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
# docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md
# docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
# docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md
# docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
# docs/doxygen/Doxyfile.config
# docs/doxygen/ie_docs.xml
# docs/doxygen/ie_plugin_api.config
# docs/doxygen/ngraph_cpp_api.config
# docs/doxygen/openvino_docs.xml
# docs/get_started/get_started_macos.md
# docs/get_started/get_started_raspbian.md
# docs/get_started/get_started_windows.md
# docs/img/cpu_int8_flow.png
# docs/index.md
# docs/install_guides/VisionAcceleratorFPGA_Configure.md
# docs/install_guides/VisionAcceleratorFPGA_Configure_Windows.md
# docs/install_guides/deployment-manager-tool.md
# docs/install_guides/installing-openvino-linux.md
# docs/install_guides/installing-openvino-macos.md
# docs/install_guides/installing-openvino-windows.md
# docs/optimization_guide/dldt_optimization_guide.md
# inference-engine/ie_bridges/c/include/c_api/ie_c_api.h
# inference-engine/ie_bridges/python/docs/api_overview.md
# inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
# inference-engine/ie_bridges/python/sample/speech_sample/README.md
# inference-engine/ie_bridges/python/src/openvino/inference_engine/ie_api.pyx
# inference-engine/include/ie_api.h
# inference-engine/include/ie_core.hpp
# inference-engine/include/ie_version.hpp
# inference-engine/samples/benchmark_app/README.md
# inference-engine/samples/speech_sample/README.md
# inference-engine/src/plugin_api/exec_graph_info.hpp
# inference-engine/src/plugin_api/file_utils.h
# inference-engine/src/transformations/include/transformations_visibility.hpp
# inference-engine/tools/benchmark_tool/README.md
# ngraph/core/include/ngraph/ngraph.hpp
# ngraph/frontend/onnx_common/include/onnx_common/parser.hpp
# ngraph/python/src/ngraph/utils/node_factory.py
# openvino/itt/include/openvino/itt.hpp
# thirdparty/ade
# tools/benchmark/README.md
* Cherry-picked remove font-family (#8211)
* Cherry-picked: Update get_started_scripts.md (#8338)
* doc updates (#8268)
* Various doc changes
* theme changes
* remove font-family (#8211)
* fix css
* Update uninstalling-openvino.md
* fix css
* fix
* Fixes for Installation Guides
Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: kblaszczak-intel <karol.blaszczak@intel.com>
# Conflicts:
# docs/IE_DG/Bfloat16Inference.md
# docs/IE_DG/InferenceEngine_QueryAPI.md
# docs/IE_DG/OnnxImporterTutorial.md
# docs/IE_DG/supported_plugins/AUTO.md
# docs/IE_DG/supported_plugins/HETERO.md
# docs/IE_DG/supported_plugins/MULTI.md
# docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
# docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
# docs/install_guides/installing-openvino-macos.md
# docs/install_guides/installing-openvino-windows.md
# docs/ops/opset.md
# inference-engine/samples/benchmark_app/README.md
# inference-engine/tools/benchmark_tool/README.md
# thirdparty/ade
* Cherry-picked: doc script changes (#8568)
* fix openvino-sphinx-theme
* add linkcheck target
* fix
* change version
* add doxygen-xfail.txt
* fix
* AA
* fix
* fix
* fix
* fix
* fix
# Conflicts:
# thirdparty/ade
* Cherry-pick: Feature/azaytsev/doc updates gna 2021 4 2 (#8567)
* Various doc changes
* Reformatted C++/Python sections. Updated with info from PR8490
* additional fix
* Gemini Lake replaced with Elkhart Lake
* Fixed links in IGs, Added 12th Gen
# Conflicts:
# docs/IE_DG/supported_plugins/GNA.md
# thirdparty/ade
* Cherry-pick: Feature/azaytsev/doc fixes (#8897)
* Various doc changes
* Removed the empty Learning path topic
* Restored the Gemini Lake CPU list
# Conflicts:
# docs/IE_DG/supported_plugins/GNA.md
# thirdparty/ade
* Cherry-pick: sphinx copybutton doxyrest code blocks (#8992)
# Conflicts:
# thirdparty/ade
* Cherry-pick: iframe video enable fullscreen (#9041)
# Conflicts:
# thirdparty/ade
* Cherry-pick: fix untitled titles (#9213)
# Conflicts:
# thirdparty/ade
* Cherry-pick: perf bench graph animation (#9045)
* animation
* fix
# Conflicts:
# thirdparty/ade
* Cherry-pick: doc pytest (#8888)
* docs pytest
* fixes
# Conflicts:
# docs/doxygen/doxygen-ignore.txt
# docs/scripts/ie_docs.xml
# thirdparty/ade
* Cherry-pick: restore deleted files (#9215)
* Added new operations to the doc structure (from removed ie_docs.xml)
* Additional fixes
* Update docs/IE_DG/InferenceEngine_QueryAPI.md
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
* Update docs/IE_DG/Int8Inference.md
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
* Update Custom_Layers_Guide.md
* Changes according to review comments
* doc scripts fixes
* Update docs/IE_DG/Int8Inference.md
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
* Update Int8Inference.md
* update xfail
* clang format
* updated xfail
Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: kblaszczak-intel <karol.blaszczak@intel.com>
Co-authored-by: Yury Gorbachev <yury.gorbachev@intel.com>
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
Inference Engine Developer Guide
@sphinxdirective
.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_IE_DG_Integrate_with_customer_application_new_API
   openvino_docs_deployment_optimization_guide_dldt_optimization_guide
   openvino_docs_IE_DG_Device_Plugins
   Direct ONNX Format Support <openvino_docs_IE_DG_ONNX_Support>
   openvino_docs_IE_DG_Int8Inference
   openvino_docs_IE_DG_Bfloat16Inference
   openvino_docs_IE_DG_DynamicBatching
   openvino_docs_IE_DG_ShapeInference
   openvino_docs_IE_DG_Model_caching_overview
   openvino_docs_IE_DG_Extensibility_DG_Intro
   openvino_docs_IE_DG_Memory_primitives
   openvino_docs_IE_DG_network_state_intro
   openvino_docs_IE_DG_API_Changes
   openvino_docs_IE_DG_Known_Issues_Limitations
   openvino_docs_IE_DG_Glossary
@endsphinxdirective
Introduction
Inference Engine is a set of C++ libraries with C and Python bindings that provides a common API to deliver inference solutions on the platform of your choice. Use the Inference Engine API to read a model in Intermediate Representation (IR) or ONNX format and execute it on the target devices.
Inference Engine uses a plugin architecture. An Inference Engine plugin is a software component that contains a complete implementation for inference on a particular Intel® hardware device: CPU, GPU, VPU, and so on. Each plugin implements the unified API and provides additional hardware-specific APIs.
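For illustration, below is a minimal sketch of this workflow using the C++ API: read a model, load it to a device plugin, and run inference. The model file name and device name ("model.xml", "CPU") are placeholders, and filling the input blobs with data is omitted for brevity.

```cpp
#include <inference_engine.hpp>
#include <string>

int main() {
    // Create the Core object, which discovers the available device plugins
    InferenceEngine::Core core;

    // Read a model in IR (or ONNX) format; the path is a placeholder
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");

    // Load the network to a device plugin, e.g. the CPU plugin
    InferenceEngine::ExecutableNetwork exec_network = core.LoadNetwork(network, "CPU");

    // Create an inference request and run it synchronously
    InferenceEngine::InferRequest infer_request = exec_network.CreateInferRequest();
    infer_request.Infer();

    // Access the output blob by the name of the first network output
    std::string output_name = network.getOutputsInfo().begin()->first;
    InferenceEngine::Blob::Ptr output = infer_request.GetBlob(output_name);
    return 0;
}
```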
The scheme below illustrates the typical workflow for deploying a trained deep learning model:
\* nGraph is the internal graph representation in the OpenVINO™ toolkit. Use it to build a model from source code.
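For illustration only, the following minimal sketch builds a trivial model as an nGraph function in C++; the input shape, the single ReLU operation, and the function name are assumptions made up for this example.

```cpp
#include <ngraph/ngraph.hpp>
#include <ngraph/opsets/opset8.hpp>
#include <memory>

std::shared_ptr<ngraph::Function> create_simple_model() {
    // A single floating-point input of shape [1, 3, 224, 224]
    auto input = std::make_shared<ngraph::opset8::Parameter>(
        ngraph::element::f32, ngraph::Shape{1, 3, 224, 224});

    // One ReLU operation applied to the input
    auto relu = std::make_shared<ngraph::opset8::Relu>(input);

    // Wrap the graph into an nGraph Function (the model)
    return std::make_shared<ngraph::Function>(
        ngraph::OutputVector{relu}, ngraph::ParameterVector{input}, "simple_model");
}
```

The resulting function can be wrapped into an InferenceEngine::CNNNetwork and loaded to a device plugin in the same way as a model read from a file.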
Video
@sphinxdirective
.. list-table::

   * - .. raw:: html

          <iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="100%" src="https://www.youtube.com/embed/e6R13V8nbak"> </iframe>
   * - Inference Engine Concept. Duration: 3:43
@endsphinxdirective