diff --git a/docs/Documentation/deployment_guide_introduction.md b/docs/Documentation/deployment_guide_introduction.md
deleted file mode 100644
index 69e5fb6202d..00000000000
--- a/docs/Documentation/deployment_guide_introduction.md
+++ /dev/null
@@ -1,11 +0,0 @@
-# Introduction to OpenVINO™ Deployment {#openvino_docs_deployment_guide_introduction}
-
-
-Once you have a model that meets both OpenVINO™ and your requirements, you can choose among several ways of deploying it with your application:
-
-* [Deploy your application locally](../OV_Runtime_UG/deployment/deployment_intro.md).
-* [Deploy your model with OpenVINO Model Server](@ref ovms_what_is_openvino_model_server).
-* [Deploy your application for the TensorFlow framework with OpenVINO Integration](./openvino_ecosystem_ovtf.md).
-
-
-> **NOTE**: Note that [running inference in OpenVINO Runtime](../OV_Runtime_UG/openvino_intro.md) is the most basic form of deployment. Before moving forward, make sure you know how to create a proper Inference configuration.
\ No newline at end of file
diff --git a/docs/Documentation/model_introduction.md b/docs/Documentation/model_introduction.md
index ac289cfad8a..c6f19ec10cd 100644
--- a/docs/Documentation/model_introduction.md
+++ b/docs/Documentation/model_introduction.md
@@ -1,4 +1,16 @@
-# Introduction to Model Processing {#openvino_docs_model_processing_introduction}
+# Model Preparation {#openvino_docs_model_processing_introduction}
+
+@sphinxdirective
+.. toctree::
+   :maxdepth: 1
+   :hidden:
+
+   Supported_Model_Formats
+   openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
+   omz_tools_downloader
+
+@endsphinxdirective
+
 Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's [Open Model Zoo](../model_zoo.md).
diff --git a/docs/OV_Runtime_UG/deployment/deployment_intro.md b/docs/OV_Runtime_UG/deployment/deployment_intro.md
index 89a170846d0..2e0bcfb8e34 100644
--- a/docs/OV_Runtime_UG/deployment/deployment_intro.md
+++ b/docs/OV_Runtime_UG/deployment/deployment_intro.md
@@ -11,7 +11,11 @@
 
 @endsphinxdirective
 
-Once the [OpenVINO™ application development](../integrate_with_your_application.md) has been finished, application developers usually need to deploy their applications to end users. There are several ways to achieve that:
+Once [OpenVINO™ application development](../integrate_with_your_application.md) is finished, application developers usually need to deploy their applications to end users. There are several ways to do that. This section explains how to deploy locally, using OpenVINO Runtime.
+
+> **NOTE**: [Running inference in OpenVINO Runtime](../openvino_intro.md) is the most basic form of deployment. Before moving forward, make sure you know how to create a proper inference configuration.
+
+## Local Deployment Options
 
 - Set a dependency on the existing prebuilt packages, also called "centralized distribution":
   - using Debian / RPM packages - a recommended way for Linux operating systems;
diff --git a/docs/documentation.md b/docs/documentation.md
index 722ebb532bf..c20d8411fa6 100644
--- a/docs/documentation.md
+++ b/docs/documentation.md
@@ -7,29 +7,22 @@
    :hidden:
 
    openvino_2_0_transition_guide
+   API Reference
+   Model Preparation
+   Model Optimization and Compression
+   Run Inference
+   Deploy Locally
    Tool Ecosystem
    OpenVINO Extensibility
    Media Processing and CV Libraries
    OpenVINO™ Security
 
-.. toctree::
-   :maxdepth: 1
-   :caption: Model preparation
-   :hidden:
-
-   openvino_docs_model_processing_introduction
-   Supported_Model_Formats
-   openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
-   omz_tools_downloader
-
-
 .. toctree::
    :maxdepth: 1
    :caption: Running Inference
    :hidden:
 
-   openvino_docs_OV_UG_OV_Runtime_User_Guide
    openvino_inference_engine_tools_compile_tool_README
@@ -40,19 +33,10 @@
    openvino_docs_optimization_guide_dldt_optimization_guide
    openvino_docs_MO_DG_Getting_Performance_Numbers
 
-   openvino_docs_model_optimization_guide
    openvino_docs_deployment_optimization_guide_dldt_optimization_guide
    openvino_docs_tuning_utilities
    openvino_docs_performance_benchmarks
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Deploying Inference
-   :hidden:
-
-   openvino_docs_deployment_guide_introduction
-   openvino_deployment_guide
+
 @endsphinxdirective
diff --git a/docs/home.rst b/docs/home.rst
index ba61d17e857..f5e358ad7a5 100644
--- a/docs/home.rst
+++ b/docs/home.rst
@@ -11,6 +11,9 @@ OpenVINO™ Documentation
 .. raw:: html
 
+
+
+
@@ -120,7 +123,6 @@ OpenVINO™ Documentation
    GET STARTED
    LEARN OPENVINO
    DOCUMENTATION
-   API REFERENCE
    MODEL ZOO
    RESOURCES
    RELEASE NOTES
diff --git a/docs/security_guide/introduction.md b/docs/security_guide/introduction.md
index b6dbae97d83..2d4557da3b0 100644
--- a/docs/security_guide/introduction.md
+++ b/docs/security_guide/introduction.md
@@ -8,7 +8,6 @@
 
    openvino_docs_security_guide_workbench
    openvino_docs_OV_UG_protecting_model_guide
-   ovsa_get_started
 
 @endsphinxdirective
 
@@ -18,3 +17,4 @@ Trained models are often valuable intellectual property and you may choose to pr
 
 Actual security and privacy requirements depend on your unique deployment scenario. This section provides general guidance on using OpenVINO tools and libraries securely.
+The main security measure for OpenVINO is its [Security Add-on](../ovsa/ovsa_get_started.md). You can find its description in the Ecosystem section.
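As a reviewer's aside, the "centralized distribution" route that the updated `deployment_intro.md` recommends for Linux (Debian packages) can be sketched as below. This is an illustrative sketch only: the repository URL, distribution codename, key file, and `openvino` package name are assumptions modeled on typical OpenVINO APT instructions and differ between releases, so check the official installation guide for the exact values.

```shell
# Illustrative sketch of the Debian-package deployment route (names and URLs
# are assumptions; they vary by OpenVINO release and Ubuntu version).

# 1. Fetch and register the signing key for the package repository.
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB

# 2. Add the OpenVINO repository to the APT sources (codename assumed "focal").
echo "deb https://apt.repos.intel.com/openvino/2022 focal main" | \
    sudo tee /etc/apt/sources.list.d/intel-openvino-2022.list

# 3. Install the runtime package; the deployed application then resolves its
#    OpenVINO dependency through the system package manager.
sudo apt update
sudo apt install openvino
```

With this route the application itself ships without OpenVINO libraries and declares the package as a dependency, which is why the diff calls it a "centralized distribution".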