[DOCS] Port workflow docs (#17761)
* [DOCS] Deploy and run documentation sections (#17708) * first draft * change name * restructure * workflow headers change * change note * remove deployment guide * change deployment description * fix conflicts * clean up conflict fixes
@@ -1,33 +0,0 @@
# Running and Deploying Inference {#openvino_docs_deployment_guide_introduction}

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   Run and Deploy Locally <openvino_deployment_guide>
   Deploy via Model Serving <ovms_what_is_openvino_model_server>


Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.

.. panels::

   :doc:`Deploy via OpenVINO Runtime <openvino_deployment_guide>`
   ^^^^^^^^^^^^^^

   Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
   It utilizes resources available to the system and provides the quickest way of launching inference.

   ---

   :doc:`Deploy via Model Server <ovms_what_is_openvino_model_server>`
   ^^^^^^^^^^^^^^

   Deployment via OpenVINO Model Server allows the application to connect to an inference server set up remotely.
   This way, inference can use external resources instead of those available to the application itself.


Apart from the default deployment options, you may also :doc:`deploy your application for the TensorFlow framework with OpenVINO Integration <ovtf_integration>`.

@endsphinxdirective
|
||||
@@ -7,7 +7,6 @@
   :hidden:

   ote_documentation
   ovtf_integration
   ovsa_get_started
   openvino_inference_engine_tools_compile_tool_README
   openvino_docs_tuning_utilities
@@ -15,8 +14,8 @@

OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.

Neural Network Compression Framework (NNCF)
###########################################

**Neural Network Compression Framework (NNCF)**

A suite of advanced algorithms for neural network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization, and sparsity algorithms to PyTorch and TensorFlow models during training.
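The training-time compression NNCF applies boils down to simulating reduced numeric precision for weights and activations. As a toy illustration of the core idea (a sketch only, not the NNCF API), 8-bit uniform quantization of a weight vector looks like this:

```python
# Toy 8-bit uniform symmetric quantization: map floats to integers in
# [-127, 127], then back, to see what an int8 kernel would compute with.
# (Illustrative sketch only; NNCF's real algorithms are far more involved.)
def quantize_dequantize(weights, num_bits=8):
    qmax = 2 ** (num_bits - 1) - 1               # 127 for 8-bit signed
    scale = max(abs(w) for w in weights) / qmax or 1.0
    quantized = [round(w / scale) for w in weights]
    return [q * scale for q in quantized]        # dequantized approximation
```

The returned values differ from the originals by at most half a quantization step, which is why well-calibrated quantization typically costs little accuracy.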
@@ -27,8 +26,7 @@ More resources:
* `PyPI <https://pypi.org/project/nncf/>`__


OpenVINO™ Training Extensions
#############################

**OpenVINO™ Training Extensions**

A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
@@ -38,8 +36,7 @@ More resources:
* `GitHub <https://github.com/openvinotoolkit/training_extensions>`__
* `Documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__


OpenVINO™ Security Add-on
#########################

**OpenVINO™ Security Add-on**

A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.
@@ -48,53 +45,7 @@ More resources:
* :doc:`Documentation <ovsa_get_started>`
* `GitHub <https://github.com/openvinotoolkit/security_addon>`__


OpenVINO™ integration with TensorFlow (OVTF)
############################################

A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.

More resources:

* `Documentation <https://github.com/openvinotoolkit/openvino_tensorflow>`__
* `PyPI <https://pypi.org/project/openvino-tensorflow/>`__
* `GitHub <https://github.com/openvinotoolkit/openvino_tensorflow>`__

DL Streamer
###########

A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.

More resources:

* `Documentation on GitHub <https://dlstreamer.github.io/index.html>`__
* `Installation Guide on GitHub <https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide>`__

DL Workbench
############

A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow: import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.

More resources:

* `Documentation <https://docs.openvino.ai/2023.0/workbench_docs_Workbench_DG_Introduction.html>`__
* `Docker Hub <https://hub.docker.com/r/openvino/workbench>`__
* `PyPI <https://pypi.org/project/openvino-workbench/>`__

Computer Vision Annotation Tool (CVAT)
######################################

An online, interactive video and image annotation tool for computer vision purposes.

More resources:

* `Documentation on GitHub <https://opencv.github.io/cvat/docs/>`__
* `Web application <https://www.cvat.ai/>`__
* `Docker Hub <https://hub.docker.com/r/openvino/cvat_server>`__
* `GitHub <https://github.com/openvinotoolkit/cvat>`__

Dataset Management Framework (Datumaro)
#######################################

**Dataset Management Framework (Datumaro)**

A framework and CLI tool to build, transform, and analyze datasets.
@@ -104,5 +55,35 @@ More resources:
* `PyPI <https://pypi.org/project/datumaro/>`__
* `GitHub <https://github.com/openvinotoolkit/datumaro>`__


**Compile Tool**

Compile tool is now deprecated. If you need to compile a model for inference on a specific device, use the following script:

.. tab:: python

   .. doxygensnippet:: docs/snippets/export_compiled_model.py
      :language: python
      :fragment: [export_compiled_model]

.. tab:: cpp

   .. doxygensnippet:: docs/snippets/export_compiled_model.cpp
      :language: cpp
      :fragment: [export_compiled_model]


To learn which devices support the import / export functionality, see the :doc:`feature support matrix <openvino_docs_OV_UG_Working_with_devices>`.

For more details on preprocessing steps, refer to :doc:`Optimize Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>`. To compile the model with advanced preprocessing capabilities, refer to :doc:`Use Case - Integrate and Save Preprocessing Steps Into OpenVINO IR <openvino_docs_OV_UG_Preprocess_Usecase_save>`, which shows how to have all the preprocessing in the compiled blob.
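The export / import flow above amounts to a compile-once, reuse-the-blob pattern: compile on the first run, export the resulting blob, and import it on later runs to skip compilation. A minimal sketch of that pattern (the `compile_model` stand-in and file names here are hypothetical placeholders for the real compile and export calls shown in the snippets above):

```python
from pathlib import Path

# Hypothetical stand-in for an expensive device-specific compilation step;
# in OpenVINO this would be the compile + export flow from the snippets above.
def compile_model(model_path: str) -> bytes:
    return f"blob-for-{model_path}".encode()

def load_or_compile(model_path: str, cache: str = "model.blob") -> bytes:
    blob = Path(cache)
    if blob.exists():                    # import: reuse the exported blob
        return blob.read_bytes()
    data = compile_model(model_path)     # compile on the first run only
    blob.write_bytes(data)               # export: cache for later runs
    return data
```

Whether the cached blob is actually usable depends on the target device supporting import / export, per the feature support matrix referenced above.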

**DL Workbench**

A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow: import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.

**OpenVINO™ integration with TensorFlow (OVTF)**

OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO without needing offline model conversions. :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`.

@endsphinxdirective

@@ -1,55 +0,0 @@
# OpenVINO™ integration with TensorFlow {#ovtf_integration}

@sphinxdirective

**OpenVINO™ integration with TensorFlow** is a solution for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. By adding just two lines of code, you can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a range of Intel® computation devices.

This is all you need:

.. code-block:: python

   import openvino_tensorflow
   openvino_tensorflow.set_backend('<backend_name>')

**OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel® technologies, such as:

* Intel® CPUs
* Intel® integrated GPUs

.. note::
   For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ solutions.

To find out more about the product itself, as well as learn how to use it in your project, check its dedicated `GitHub repository <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs>`__.

To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the `examples folder <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples>`__ in our GitHub repository.

Sample tutorials are also hosted on `Intel® DevCloud <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html>`__. The demo applications are implemented using Jupyter Notebooks. You can execute them interactively on Intel® DevCloud nodes and compare the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.

License
#######

**OpenVINO™ integration with TensorFlow** is licensed under `Apache License Version 2.0 <https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE>`__.
By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Support
#######

Submit your questions, feature requests, and bug reports via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.

How to Contribute
#################

We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for improvement:

* Share your proposal via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.
* Submit a `pull request <https://github.com/openvinotoolkit/openvino_tensorflow/pulls>`__.

We will review your contribution as soon as possible and, if any additional fixes or modifications are necessary, guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix or patch. If you want to introduce a large feature, create test cases for it. Once we verify that your pull request meets these requirements, we will merge it into the repository.

\* Other names and brands may be claimed as the property of others.

@endsphinxdirective

@@ -9,7 +9,9 @@
Model Preparation <openvino_docs_model_processing_introduction>
Model Optimization and Compression <openvino_docs_model_optimization_guide>
Running and Deploying Inference <openvino_docs_deployment_guide_introduction>
Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
Deployment on a Local System <openvino_deployment_guide>
Deployment on a Model Server <ovms_what_is_openvino_model_server>


| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`
@@ -18,7 +20,23 @@
| :doc:`Model Optimization and Compression <openvino_docs_model_optimization_guide>`
| In this section you will find out how to optimize a model to achieve better inference performance. It describes multiple optimization methods for both the training and post-training stages.

| :doc:`Deployment <openvino_docs_deployment_guide_introduction>`
| This section explains the process of deploying your own inference application using either OpenVINO Runtime or OpenVINO Model Server.
| :doc:`Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>`
| This section describes how to run inference, which is the most basic form of deployment and the quickest way of launching inference.


Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.

| :doc:`Option 1. Deployment via OpenVINO Runtime <openvino_deployment_guide>`
| Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
| It utilizes resources available to the system and provides the quickest way of launching inference.
| Deployment on a local system requires performing the steps from the running inference section.

| :doc:`Option 2. Deployment via Model Server <ovms_what_is_openvino_model_server>`
| Deployment via OpenVINO Model Server allows the application to connect to an inference server set up remotely.
| This way, inference can use external resources instead of those available to the application itself.
| Deployment on a model server can be done quickly and without performing any additional steps described in the running inference section.
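To give a feel for what "connecting to the inference server" means in practice, here is a sketch of building a request against the TensorFlow-Serving-style REST API that OpenVINO Model Server exposes. The model name, port, and input values are hypothetical; the request is only constructed, not sent:

```python
import json
from urllib.request import Request

# Hypothetical model "my_model" served on localhost:9000; OVMS accepts
# TF-Serving-style JSON predict requests at /v1/models/<name>:predict.
payload = json.dumps({"instances": [[0.1, 0.2, 0.3]]}).encode()
request = Request(
    "http://localhost:9000/v1/models/my_model:predict",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request) would send it to a running server.
```

Because the application only sends data over the network, the heavy lifting of inference happens on whatever hardware backs the server, not on the client.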

@endsphinxdirective

@@ -1,4 +1,4 @@
# Run and Deploy Locally {#openvino_deployment_guide}
# Deploy Locally {#openvino_deployment_guide}

@sphinxdirective
@@ -6,8 +6,6 @@
   :maxdepth: 1
   :hidden:

   Run Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
   Optimize Inference <openvino_docs_deployment_optimization_guide_dldt_optimization_guide>
   Deploy Application with Deployment Manager <openvino_docs_install_guides_deployment_manager_tool>
   Local Distribution Libraries <openvino_docs_deploy_local_distribution>
@@ -14,6 +14,7 @@
openvino_docs_OV_UG_ShapeInference
openvino_docs_OV_UG_DynamicShapes
openvino_docs_OV_UG_model_state_intro
Optimize Inference <openvino_docs_deployment_optimization_guide_dldt_optimization_guide>


OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, TensorFlow Lite, ONNX, or PaddlePaddle model and execute it on preferred devices.
@@ -17,7 +17,7 @@ before every release. These models are considered officially supported.
| If your model is not included but is similar to those that are, it is still very likely to work.
If your model fails to execute properly, there are a few options available:

* If the model originates from a framework like TensorFlow or PyTorch, OpenVINO™ offers a hybrid solution. The original model can be run without explicit conversion into the OpenVINO format. For more information, see :doc:`OpenVINO TensorFlow Integration <ovtf_integration>`.
* You can create a GitHub request for the operation(s) that are missing. These requests are reviewed regularly. You will be informed if and how the request will be accommodated. Additionally, your request may trigger a reply from someone in the community who can help.
* As OpenVINO™ is open source, you can enhance it with your own contribution to the GitHub repository. To learn more, see the articles on :doc:`OpenVINO Extensibility <openvino_docs_Extensibility_UG_Intro>`.