[DOCS] Remove DL Workbench (#15733)

* remove dl wb docs

* text correction

* change ecosystem description

* replace link
Tatiana Savina 2023-02-21 14:16:02 +01:00 committed by GitHub
parent 0ddca519d6
commit f730edb084
7 changed files with 5 additions and 51 deletions

View File

@@ -1,15 +0,0 @@
# OpenVINO™ Deep Learning Workbench Overview {#workbench_docs_Workbench_DG_Introduction}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
workbench_docs_Workbench_DG_Install
workbench_docs_Workbench_DG_Work_with_Models_and_Sample_Datasets
Tutorials <workbench_docs_Workbench_DG_Tutorials>
User Guide <workbench_docs_Workbench_DG_User_Guide>
workbench_docs_Workbench_DG_Troubleshooting
@endsphinxdirective

View File

@@ -7,7 +7,6 @@
:hidden:
ovtf_integration
ote_documentation
ovsa_get_started
openvino_inference_engine_tools_compile_tool_README
openvino_docs_tuning_utilities
@@ -16,7 +15,6 @@
@endsphinxdirective
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
### Neural Network Compression Framework (NNCF)
@@ -51,14 +49,14 @@ More resources:
* [Installation Guide on GitHub](https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide)
### DL Workbench
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphics user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting [Intel® DevCloud for the Edge](https://software.intel.com/content/www/us/en/develop/tools/devcloud.html) and launching DL Workbench on-line.
A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting [Intel® Developer Cloud](https://software.intel.com/content/www/us/en/develop/tools/devcloud.html) and launching DL Workbench online.
More resources:
* [documentation](dl_workbench_overview.md)
* [Documentation](https://docs.openvino.ai/2022.3/workbench_docs_Workbench_DG_Introduction.html)
* [Docker Hub](https://hub.docker.com/r/openvino/workbench)
* [PyPI](https://pypi.org/project/openvino-workbench/)
### OpenVINO™ Training Extensions (OTE)
### OpenVINO™ Training Extensions (OTX)
A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.
More resources:

View File

@@ -46,7 +46,7 @@ Some of the OpenVINO Development Tools also support both OpenVINO IR v10 and v11
- Accuracy checker uses API 2.0 for model accuracy measurement by default. It also supports switching to the old API by using the `--use_new_api False` command-line parameter. Both launchers accept OpenVINO IR v10 and v11, but in some cases configuration files should be updated. For more details, see the [Accuracy Checker documentation](https://github.com/openvinotoolkit/open_model_zoo/blob/master/tools/accuracy_checker/openvino/tools/accuracy_checker/launcher/openvino_launcher_readme.md).
- [Compile tool](../../../tools/compile_tool/README.md) compiles the model to be used in API 2.0 by default. To use the resulting compiled blob under the Inference Engine API, the additional `ov_api_1_0` option should be passed.
However, Post-Training Optimization Tool and Deep Learning Workbench of OpenVINO 2022.1 do not support OpenVINO IR v10. They require the latest version of Model Optimizer to generate OpenVINO IR v11 files.
However, Post-Training Optimization Tool of OpenVINO 2022.1 does not support OpenVINO IR v10. It requires the latest version of Model Optimizer to generate OpenVINO IR v11 files.
> **NOTE**: To quantize your OpenVINO IR v10 models to run with OpenVINO 2022.1, download and use Post-Training Optimization Tool of OpenVINO 2021.4.
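For orientation, here is a minimal API 2.0 sketch in Python, assuming a static-shape fp32 IR file named `model.xml` (a placeholder) and the CPU device; it reads the model, compiles it, and runs one synchronous inference:

```python
import numpy as np
from openvino.runtime import Core  # API 2.0 entry point

core = Core()
model = core.read_model("model.xml")               # IR v10 and v11 files are both readable here
compiled_model = core.compile_model(model, "CPU")

# Build a dummy tensor matching the first model input and run one synchronous inference.
input_tensor = np.zeros(list(model.inputs[0].shape), dtype=np.float32)
request = compiled_model.create_infer_request()
request.infer({0: input_tensor})
output = request.get_output_tensor(0).data
```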

View File

@@ -30,7 +30,7 @@ Users in China might encounter errors while downloading sources via PIP during O
### <a name="proxy-issues"></a>Proxy Issues
If you met proxy issues during the installation with Docker, you need set up proxy settings for Docker. See the [Set Proxy section in DL Workbench Installation](https://docs.openvino.ai/latest/workbench_docs_Workbench_DG_Prerequisites.html#set-proxy) for more details.
If you encounter proxy issues during the installation with Docker, you need to set up proxy settings for Docker. See the [Docker guide](https://docs.docker.com/network/proxy/) for more details.
@anchor yocto-install-issues

View File

@@ -45,7 +45,6 @@ Similarly, different devices require a different number of execution streams to
In some cases, combination of streams and batching may be required to maximize the throughput.
One possible throughput optimization strategy is to **set an upper bound for latency and then increase the batch size and/or number of the streams until that tail latency is met (or the throughput is not growing anymore)**.
Consider [OpenVINO Deep Learning Workbench](@ref workbench_docs_Workbench_DG_Introduction) that builds handy latency vs throughput charts, iterating over possible values of the batch size and number of streams.
> **NOTE**: When playing with [dynamically-shaped inputs](../OV_Runtime_UG/ov_dynamic_shapes.md), use only the streams (no batching), as they tolerate individual requests having different shapes.
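One way to instrument that strategy with the Python API is sketched below; the model path, device, and request count are placeholders, and a real sweep would repeat this run while varying `NUM_STREAMS` (and, where applicable, the batch size) until the tail-latency bound is reached:

```python
import time
import numpy as np
from openvino.runtime import AsyncInferQueue, Core

core = Core()
model = core.read_model("model.xml")

# Let the THROUGHPUT hint pick the number of streams; a manual sweep would instead
# pass e.g. {"NUM_STREAMS": "4"} and record the tail latency for each value tried.
compiled_model = core.compile_model(model, "CPU", {"PERFORMANCE_HINT": "THROUGHPUT"})
n_jobs = compiled_model.get_property("OPTIMAL_NUMBER_OF_INFER_REQUESTS")

latencies_ms = []
queue = AsyncInferQueue(compiled_model, n_jobs)
queue.set_callback(lambda request, start: latencies_ms.append((time.perf_counter() - start) * 1000))

dummy = np.zeros(list(model.inputs[0].shape), dtype=np.float32)
for _ in range(100):
    queue.start_async({0: dummy}, userdata=time.perf_counter())
queue.wait_all()

print(f"p99 latency: {np.percentile(latencies_ms, 99):.1f} ms")
```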

View File

@@ -6,7 +6,6 @@
:maxdepth: 1
:hidden:
openvino_docs_security_guide_workbench
openvino_docs_OV_UG_protecting_model_guide
@endsphinxdirective

View File

@@ -1,27 +0,0 @@
# Deep Learning Workbench Security {#openvino_docs_security_guide_workbench}
Deep Learning Workbench (DL Workbench) is a web application running within a Docker\* container.
## Run DL Workbench
Unless necessary, limit the connections to the DL Workbench to `localhost` (127.0.0.1), so that it
is only accessible from the machine the Docker container is built on.
When using `docker run` to [start the DL Workbench from Docker Hub](@ref workbench_docs_Workbench_DG_Run_Locally), limit connections for the host IP 127.0.0.1.
For example, limit the connections for the host IP to the port `5665` with the `-p 127.0.0.1:5665:5665` option. Refer to [Container networking](https://docs.docker.com/config/containers/container-networking/#published-ports) for details.
## Authentication Security
DL Workbench uses [authentication tokens](@ref workbench_docs_Workbench_DG_Authentication) to access the
application. The script starting the DL Workbench creates an authentication token each time the DL
Workbench starts. Anyone who has the authentication token can use the DL Workbench.
When you finish working with the DL Workbench, log out to prevent the use of the DL Workbench from
the same browser session without authentication.
To invalidate the authentication token completely, [restart the DL Workbench](@ref workbench_docs_Workbench_DG_Docker_Container).
## Use TLS to Protect Communications
[Configure Transport Layer Security (TLS)](@ref workbench_docs_Workbench_DG_Configure_TLS) to keep the
authentication token encrypted.