OpenVINO™ Model Server
OpenVINO™ Model Server (OVMS) is a scalable, high-performance solution for serving machine learning models optimized for Intel® architectures. The server provides an inference service via gRPC or REST API, making it easy to deploy new algorithms and AI experiments using the same architecture as TensorFlow* Serving for any model trained in a framework supported by OpenVINO.
The server implements a gRPC and REST API framework, with data serialization and deserialization handled by the TensorFlow Serving API and OpenVINO™ acting as the inference execution provider. Model repositories may reside on a locally accessible file system (for example, NFS), Google Cloud Storage* (GCS), Amazon S3*, MinIO*, or Azure Blob Storage*.
OVMS is now implemented in C++ and provides much higher scalability than its Python predecessor. You can take full advantage of Xeon® CPU capabilities or AI accelerators and expose them over a network interface. Read the release notes to find out what's new in the C++ version.
Review the Architecture Concept document for more details.
A few key features:
- Support for multiple frameworks. Serve models trained in popular formats such as Caffe*, TensorFlow*, MXNet*, and ONNX*.
- Deploy new model versions without changing client code.
- Support for AI accelerators including Intel Movidius Myriad VPUs, GPU, and HDDL.
- The server can be run either on bare-metal hosts or in Docker* containers.
- Kubernetes deployments. The server can be deployed in a Kubernetes cluster allowing the inference service to scale horizontally and ensure high availability.
- Model reshaping. The server supports reshaping models in runtime.
- Model ensemble (preview). Connect multiple models to deploy complex processing solutions and reduce overhead of sending data back and forth.
Note: OVMS has been tested on CentOS* and Ubuntu*. Publicly released Docker images are based on CentOS.
Build OpenVINO Model Server
1. Go to the root directory of the repository.
2. Build the Docker image with the command below:

   make docker_build

The command generates:
- an image tagged as `openvino/model_server:latest`, with CPU, NCS, and HDDL support
- an image tagged as `openvino/model_server:latest-gpu`, with CPU, NCS, HDDL, and iGPU support
- a `.tar.gz` release package with the OVMS binary and necessary libraries, in the `./dist` directory

The release package is compatible with Linux machines on which the glibc version is greater than or equal to the version of the build image.
For debugging, the command also generates an image with the `-build` suffix, namely `openvino/model_server-build:latest`.
Note: Images include the OpenVINO 2021.1 release.
Run OpenVINO Model Server
Find a detailed description of how to use the OpenVINO Model Server in the OVMS Quick Start Guide.
For more detailed guides on using the Model Server in various scenarios, refer to the OVMS documentation.
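As a minimal illustration only (the Quick Start Guide is the authoritative path), the container could be started programmatically with the Docker SDK for Python; the image tag is from the build step above, while the model name ("resnet"), host model directory, and port are assumptions:

```python
# A minimal sketch: start OVMS in a container via the Docker SDK for Python.
# The model name ("resnet"), host model directory, and port are assumptions.
import docker

client = docker.from_env()
container = client.containers.run(
    "openvino/model_server:latest",
    "--model_path /models/resnet --model_name resnet --port 9000",
    ports={"9000/tcp": 9000},  # expose the gRPC port on the host
    volumes={"/opt/models/resnet": {"bind": "/models/resnet", "mode": "ro"}},
    detach=True,
)
print(container.short_id)
```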
API Documentation
GRPC
OpenVINO™ Model Server gRPC API is documented in the protocol buffer files in tensorflow_serving_api.
Note: The implementations for `Predict`, `GetModelMetadata`, and `GetModelStatus` function calls are currently available. These are the most generic function calls and should address most of the usage scenarios.
The Predict proto defines two message specifications, `PredictRequest` and `PredictResponse`, used when calling the Prediction endpoint:
- `PredictRequest` specifies information about the model spec (that is, name and version) and a map of input data serialized via TensorProto to a string format.
- `PredictResponse` includes a map of outputs serialized by TensorProto and information about the model spec that was used.
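For illustration, here is a minimal sketch of such a Predict call in Python with the tensorflow-serving-api package; the server address, model name ("resnet"), and input name ("input") are assumptions, not part of this document:

```python
# A minimal sketch of a Predict call over gRPC; localhost:9000, the model
# name "resnet", and the input name "input" are assumptions.
import grpc
import numpy as np
from tensorflow import make_ndarray, make_tensor_proto
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:9000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "resnet"
data = np.zeros((1, 3, 224, 224), dtype=np.float32)  # dummy input batch
request.inputs["input"].CopyFrom(make_tensor_proto(data, shape=data.shape))

response = stub.Predict(request, 10.0)  # 10-second timeout
# The response carries a map of output names to TensorProto messages.
for name, proto in response.outputs.items():
    print(name, make_ndarray(proto).shape)
```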
The Get Model Metadata proto defines three message definitions used when calling the Metadata endpoint: `SignatureDefMap`, `GetModelMetadataRequest`, and `GetModelMetadataResponse`.
The `GetModelMetadata` function call accepts model spec information as input and returns signature definition content in a format similar to TensorFlow Serving.
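A hedged sketch of such a metadata query, under the same assumed address and model name as above:

```python
# A minimal sketch of a GetModelMetadata call; the address and model name
# ("resnet") are assumptions.
import grpc
from tensorflow_serving.apis import get_model_metadata_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:9000")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = get_model_metadata_pb2.GetModelMetadataRequest()
request.model_spec.name = "resnet"
request.metadata_field.append("signature_def")  # signature definitions

response = stub.GetModelMetadata(request, 10.0)
# The response wraps a SignatureDefMap in a protobuf Any; unpack it.
signature_map = get_model_metadata_pb2.SignatureDefMap()
response.metadata["signature_def"].Unpack(signature_map)
print(signature_map)
```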
The Get Model Status proto defines three message definitions used when calling the Status endpoint: `GetModelStatusRequest`, `ModelVersionStatus`, and `GetModelStatusResponse`, which report all exposed versions including their state in the lifecycle.
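A hedged sketch of a status query, again assuming the address and model name used in the examples above:

```python
# A minimal sketch of a GetModelStatus call; the address and model name
# ("resnet") are assumptions.
import grpc
from tensorflow_serving.apis import get_model_status_pb2, model_service_pb2_grpc

channel = grpc.insecure_channel("localhost:9000")
stub = model_service_pb2_grpc.ModelServiceStub(channel)

request = get_model_status_pb2.GetModelStatusRequest()
request.model_spec.name = "resnet"  # omit the version to list all versions

response = stub.GetModelStatus(request, 10.0)
for vs in response.model_version_status:
    print(vs.version, vs.state)  # state is a ModelVersionStatus.State enum
```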
Refer to the example client code to learn how to use this API and submit the requests using the gRPC interface.
Using the gRPC interface is recommended for optimal performance due to its faster implementation of input data deserialization. It enables you to achieve lower latency, especially with larger input messages like images.
REST
OpenVINO™ Model Server RESTful API follows the documentation from the TensorFlow Serving REST API.
Both the row and column formats of requests are implemented.
Note: Just like with gRPC, only the implementations for `Predict`, `GetModelMetadata`, and `GetModelStatus` function calls are currently available.
Only numerical data types are supported.
Review the example clients below to find out more about how to connect and run inference requests.
The REST API is recommended when the primary goal is to reduce the number of client-side Python dependencies and simplify application code.
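For illustration, a minimal sketch of a row-format Predict request over REST; the REST port (8000) and model name ("resnet") are assumptions, while the URL shape follows the TensorFlow Serving REST API the document references:

```python
# A minimal sketch of a row-format REST Predict request; localhost:8000 and
# the model name "resnet" are assumptions.
import numpy as np
import requests

data = np.zeros((1, 3, 224, 224), dtype=np.float32)
payload = {"instances": data.tolist()}  # row format; column format uses {"inputs": ...}

response = requests.post("http://localhost:8000/v1/models/resnet:predict", json=payload)
response.raise_for_status()
predictions = response.json()["predictions"]  # row-format responses
print(len(predictions))
```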
Known Limitations
- Currently, `Predict`, `GetModelMetadata`, and `GetModelStatus` calls are implemented using the TensorFlow Serving API. `Classify`, `Regress`, and `MultiInference` are not included.
- `Output_filter` is not effective in the `Predict` call. All outputs defined in the model are returned to the clients.
OpenVINO Model Server Contribution Policy
- All contributed code must be compatible with the Apache 2 license.
- All changes have to pass linter, unit, and functional tests.
- All new features need to be covered by tests.
* Other names and brands may be claimed as the property of others.