Docs menu recreate structure step2 port2master (#14552)

model optimization
deploy locally
run inference
remove OVSA from security (it was duplicated)
Karol Blaszczak 2022-12-11 14:30:14 +01:00 committed by GitHub
parent e9e05e508a
commit 3c89da1838
6 changed files with 28 additions and 37 deletions


@@ -1,11 +0,0 @@
-# Introduction to OpenVINO™ Deployment {#openvino_docs_deployment_guide_introduction}
-Once you have a model that meets both OpenVINO™ and your requirements, you can choose among several ways of deploying it with your application:
-* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
-* [Deploy your model with OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
-* [Deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).
-> **NOTE**: Note that [running inference in OpenVINO Runtime](../OV_Runtime_UG/openvino_intro.md) is the most basic form of deployment. Before moving forward, make sure you know how to create a proper Inference configuration.


@@ -1,4 +1,16 @@
-# Introduction to Model Processing {#openvino_docs_model_processing_introduction}
+# Model Preparation {#openvino_docs_model_processing_introduction}
+@sphinxdirective
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   Supported_Model_Formats
+   openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
+   omz_tools_downloader
+@endsphinxdirective
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
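The download step described above can be sketched with Open Model Zoo's `omz_downloader` tool (the `omz_tools_downloader` entry in the toctree). A minimal sketch, assuming the `openvino-dev` pip package, which provides the tool, is installed; the model name `resnet-50-tf` in the usage note is only an illustrative example:

```python
import subprocess

def download_omz_model(name: str, output_dir: str = "models") -> list[str]:
    """Download a pre-trained Open Model Zoo model by invoking omz_downloader.

    Builds the CLI command, runs it, and returns the argument list that was
    used (handy for logging). Raises CalledProcessError if the tool fails.
    """
    cmd = ["omz_downloader", "--name", name, "--output_dir", output_dir]
    subprocess.run(cmd, check=True)
    return cmd
```

For example, `download_omz_model("resnet-50-tf")` would fetch that network into `models/`; a framework-format model can then be converted to OpenVINO IR as described in the Model Optimizer guide.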


@@ -11,7 +11,11 @@
@endsphinxdirective
-Once the [OpenVINO™ application development](../integrate_with_your_application.md) has been finished, application developers usually need to deploy their applications to end users. There are several ways to achieve that:
+Once [OpenVINO™ application development](../integrate_with_your_application.md) has been finished, application developers usually need to deploy their applications to end users. There are several ways to achieve that. This section will explain how you can deploy locally, using OpenVINO Runtime.
+> **NOTE**: [Running inference in OpenVINO Runtime](../openvino_intro.md) is the most basic form of deployment. Before moving forward, make sure you know how to create a proper inference configuration.
## Local Deployment Options
- Set a dependency on the existing prebuilt packages, also called "centralized distribution":
- using Debian / RPM packages - a recommended way for Linux operating systems;


@@ -7,29 +7,22 @@
:hidden:
openvino_2_0_transition_guide
API Reference <api/api_reference>
Model Preparation <openvino_docs_model_processing_introduction>
Model Optimization and Compression <openvino_docs_model_optimization_guide>
Run Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
Deploy Locally <openvino_deployment_guide>
Tool Ecosystem <openvino_ecosystem>
OpenVINO Extensibility <openvino_docs_Extensibility_UG_Intro>
Media Processing and CV Libraries <media_processing_cv_libraries>
OpenVINO™ Security <openvino_docs_security_guide_introduction>
.. toctree::
:maxdepth: 1
:caption: Model preparation
:hidden:
openvino_docs_model_processing_introduction
Supported_Model_Formats
openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
omz_tools_downloader
.. toctree::
:maxdepth: 1
:caption: Running Inference
:hidden:
openvino_docs_OV_UG_OV_Runtime_User_Guide
openvino_inference_engine_tools_compile_tool_README
@@ -40,19 +33,10 @@
openvino_docs_optimization_guide_dldt_optimization_guide
openvino_docs_MO_DG_Getting_Performance_Numbers
openvino_docs_model_optimization_guide
openvino_docs_deployment_optimization_guide_dldt_optimization_guide
openvino_docs_tuning_utilities
openvino_docs_performance_benchmarks
.. toctree::
:maxdepth: 1
:caption: Deploying Inference
:hidden:
openvino_docs_deployment_guide_introduction
openvino_deployment_guide
@endsphinxdirective


@@ -11,6 +11,9 @@ OpenVINO™ Documentation
.. raw:: html
<div class="section" id="welcome-to-openvino-toolkit-s-documentation">
<link rel="stylesheet" type="text/css" href="_static/css/homepage_style.css">
@@ -120,7 +123,6 @@ OpenVINO™ Documentation
GET STARTED <get_started>
LEARN OPENVINO <learn_openvino>
DOCUMENTATION <documentation>
API REFERENCE <api/api_reference>
MODEL ZOO <model_zoo>
RESOURCES <resources>
RELEASE NOTES <https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html>


@@ -8,7 +8,6 @@
openvino_docs_security_guide_workbench
openvino_docs_OV_UG_protecting_model_guide
-ovsa_get_started
@endsphinxdirective
@@ -18,3 +17,4 @@ Trained models are often valuable intellectual property and you may choose to pr
Actual security and privacy requirements depend on your unique deployment scenario.
This section provides general guidance on using OpenVINO tools and libraries securely.
+The main security measure for OpenVINO is its [Security Add-on](../ovsa/ovsa_get_started.md). You can find its description in the Ecosystem section.