DOCS shift to rst - Tensorflow Frontend Capabilities and Limitations (#16392)
Commit c5f65eea73 (parent 083596e285), committed via GitHub.
@@ -10,15 +10,15 @@
openvino_docs_OV_UG_Running_on_multiple_devices
openvino_docs_OV_UG_Hetero_execution
openvino_docs_OV_UG_Automatic_Batching

@endsphinxdirective
OpenVINO Runtime offers multiple inference modes to allow optimum hardware utilization under different conditions. The most basic one is a single-device mode, which defines just one device responsible for the entire inference workload. It supports a range of Intel hardware by means of plugins embedded in the Runtime library, each set up to offer the best possible performance. For a complete list of supported devices and instructions on how to use them, refer to the :doc:`guide on inference devices <openvino_docs_OV_UG_Working_with_devices>`.
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:
* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
@endsphinxdirective
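The automated modes above are requested with plain device strings such as ``AUTO``, ``MULTI:CPU,GPU``, or ``HETERO:GPU,CPU``. A minimal sketch of how such strings are composed (the ``device_string`` helper is hypothetical, for illustration only; OpenVINO itself simply takes the finished string, e.g. as the second argument to ``compile_model``):

```python
def device_string(mode: str, devices=None) -> str:
    """Compose an OpenVINO-style device string for the automated modes.

    Hypothetical helper, not part of the OpenVINO API: the runtime
    accepts the finished string, e.g. "MULTI:CPU,GPU".
    """
    if not devices:
        return mode  # plain "AUTO" lets the runtime pick the device itself
    # composite modes list their candidate devices after a colon, in priority order
    return f"{mode}:{','.join(devices)}"

print(device_string("AUTO"))                    # AUTO
print(device_string("MULTI", ["CPU", "GPU"]))   # MULTI:CPU,GPU
print(device_string("HETERO", ["GPU", "CPU"]))  # HETERO:GPU,CPU
```

For HETERO the order of the listed devices matters: it is the fallback priority used when assigning subgraphs.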
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ccc7704d2a27f7491729767443f3d2bdd0ccc930f16fde631a7f9c67d158297a
size 71369
@@ -14,22 +14,21 @@
openvino_docs_OV_UG_ShapeInference
openvino_docs_OV_UG_DynamicShapes
openvino_docs_OV_UG_model_state_intro
@endsphinxdirective
OpenVINO Runtime is a set of C++ libraries with C and Python bindings providing a common API to deliver inference solutions on the platform of your choice. Use the OpenVINO Runtime API to read an Intermediate Representation (IR), TensorFlow, ONNX, or PaddlePaddle model and execute it on preferred devices.
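As a sketch of that flow, assuming the OpenVINO 2022.x Python API, an installed ``openvino`` package, and a local IR pair ``model.xml``/``model.bin`` (placeholder names, not shipped with this document):

```python
import numpy as np
from openvino.runtime import Core  # Python binding over the C++ runtime

core = Core()
# read_model() accepts IR (.xml), ONNX, PaddlePaddle, or TensorFlow models
model = core.read_model("model.xml")
# compile for a specific plugin ("CPU", "GPU", ...) or an automated mode ("AUTO")
compiled = core.compile_model(model, "CPU")
# dummy input; assumes the compiled model has a single static-shaped input
input_tensor = np.zeros(compiled.input(0).shape, dtype=np.float32)
results = compiled([input_tensor])  # results are keyed by output ports
```

This is illustrative only; real applications would load actual input data and query the model's input/output ports rather than hard-coding them.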
OpenVINO Runtime uses a plugin architecture. Its plugins are software components that contain complete implementation for inference on a particular Intel® hardware device: CPU, GPU, GNA, etc. Each plugin implements the unified API and provides additional hardware-specific APIs for configuring devices or API interoperability between OpenVINO Runtime and underlying plugin backend.
The scheme below illustrates the typical workflow for deploying a trained deep learning model:
.. image:: _static/images/BASIC_FLOW_IE_C.svg
Video
####################
@sphinxdirective
.. list-table::
@@ -39,5 +38,5 @@ The scheme below illustrates the typical workflow for deploying a trained deep l
src="https://www.youtube.com/embed/e6R13V8nbak">
</iframe>
* - **OpenVINO Runtime Concept**. Duration: 3:43
@endsphinxdirective
@@ -1,8 +1,10 @@
# OpenVINO TensorFlow Frontend Capabilities and Limitations {#openvino_docs_MO_DG_TensorFlow_Frontend}
@sphinxdirective
The TensorFlow Frontend is a C++-based frontend for converting TensorFlow models, available as a preview feature starting from the 2022.3 release.
This means you can start experimenting with the ``--use_new_frontend`` option passed to Model Optimizer to benefit from improved conversion time for a limited scope of models,
or load TensorFlow models directly through the ``read_model()`` method.
The current limitations:
@@ -10,4 +12,6 @@ The current limitations:
* There is no full parity yet between the legacy Model Optimizer TensorFlow frontend and the new TensorFlow Frontend, so the primary path for model conversion is still the legacy frontend.
* Model coverage and performance are continuously improving, so some conversion-phase failures and performance or accuracy issues might occur for models that are not yet covered.
  Known unsupported models: object detection models and all models with transformation configs, models with TF1/TF2 control flow, the Complex type, and training parts.
* The ``read_model()`` method supports only the ``*.pb`` format, while Model Optimizer (or a ``convert_model`` call) also accepts the other formats supported by the existing legacy frontend.
@endsphinxdirective
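The two experimental paths described above can be exercised as follows (a sketch only: ``model.pb`` is a placeholder, and the ``--use_new_frontend`` flag requires a 2022.3+ installation of the OpenVINO development tools):

```shell
# Path 1: offline conversion via Model Optimizer with the new TensorFlow Frontend
mo --input_model model.pb --use_new_frontend

# Path 2: load the frozen *.pb directly at runtime, with no offline conversion step
python -c "from openvino.runtime import Core; print(Core().read_model('model.pb'))"
```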