From 127f931a5e27b7b8db56b5e2cc445a59079ebb52 Mon Sep 17 00:00:00 2001
From: Tatiana Savina
Date: Wed, 14 Apr 2021 22:33:04 +0300
Subject: [PATCH] Add MonoDepth Python Demo how-to (#5238)

* Add POT how-to

* added new how-to and updated the link
---
 docs/how_tos/MonoDepth_how_to.md   | 70 ++++++++++++++++++++++++++++++
 docs/how_tos/POT_how_to_example.md |  2 +-
 2 files changed, 71 insertions(+), 1 deletion(-)
 create mode 100644 docs/how_tos/MonoDepth_how_to.md

diff --git a/docs/how_tos/MonoDepth_how_to.md b/docs/how_tos/MonoDepth_how_to.md
new file mode 100644
index 00000000000..329eac9e063
--- /dev/null
+++ b/docs/how_tos/MonoDepth_how_to.md
@@ -0,0 +1,70 @@
+# OpenVINO™ MonoDepth Python Demo
+
+This tutorial describes the example from the following YouTube* video:
+///
+
+To learn more about how to run the MonoDepth Python* demo application, refer to the [documentation](https://docs.openvinotoolkit.org/latest/omz_demos_monodepth_demo_python.html).
+
+Tested on OpenVINO™ 2021, Ubuntu 18.04.
+
+## 1. Set Environment
+
+Define the OpenVINO™ install directory:
+```
+export OV=/opt/intel/openvino_2021/
+```
+Define the working directory. Make sure the directory exists:
+```
+export WD=~/MonoDepth_Python/
+```
+
+## 2. Install Prerequisites
+
+Initialize OpenVINO™:
+```
+source $OV/bin/setupvars.sh
+```
+
+Install the Model Optimizer prerequisites:
+```
+cd $OV/deployment_tools/model_optimizer/install_prerequisites/
+sudo ./install_prerequisites.sh
+```
+
+Install the Model Downloader prerequisites:
+
+```
+cd $OV/deployment_tools/tools/model_downloader/
+python3 -mpip install --user -r ./requirements.in
+sudo python3 -mpip install --user -r ./requirements-pytorch.in
+sudo python3 -mpip install --user -r ./requirements-caffe2.in
+```
+
+## 3. Download Models
+
+Download all models from the Demo Models list:
+```
+python3 $OV/deployment_tools/tools/model_downloader/downloader.py --list $OV/deployment_tools/inference_engine/demos/python_demos/monodepth_demo/models.lst -o $WD
+```
+
+## 4. Convert Models to Intermediate Representation (IR)
+
+Use the converter script to convert the models to ONNX*, and then to IR format:
+```
+cd $WD
+python3 $OV/deployment_tools/tools/model_downloader/converter.py --list $OV/deployment_tools/inference_engine/demos/python_demos/monodepth_demo/models.lst
+```
+
+## 5. Run Demo
+
+Install any missing Python modules, for example, kiwisolver or cycler, if you get a missing module error.
+
+Use your input image:
+```
+python3 $OV/inference_engine/demos/python_demos/monodepth_demo/monodepth_demo.py -m $WD/public/midasnet/FP32/midasnet.xml -i input-image.jpg
+```
+Check the resulting depth image:
+```
+eog disp.png &
+```
+You can also try another model. Note that the algorithm is the same, but the depth map will be different.
diff --git a/docs/how_tos/POT_how_to_example.md b/docs/how_tos/POT_how_to_example.md
index 571269a92ff..28adc19062b 100644
--- a/docs/how_tos/POT_how_to_example.md
+++ b/docs/how_tos/POT_how_to_example.md
@@ -1,8 +1,8 @@
 # Post-Training Optimization Tool - A real example
 
 This tutorial describes the example from the following YouTube* video:
+https://www.youtube.com/watch?v=cGQesbWuRhk&t=49s
-http://XXXXX
 
 Watch this video to learn the basics of Post-training Optimization Tool (POT):
 https://www.youtube.com/watch?v=SvkI25Ca_SQ
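
A note on the `disp.png` produced in step 5 of the new how-to: it is the network's disparity map rescaled to an 8-bit grayscale image. The sketch below illustrates that post-processing step only; the small array is a stand-in for the real MidasNet output blob (which the demo obtains from the Inference Engine after running `midasnet.xml` on the input image), and the min-max normalization mirrors what monodepth demos typically do rather than quoting the demo's exact code.

```python
import numpy as np

# Stand-in for the MidasNet output blob; the real demo reads this array
# from the Inference Engine after inference on the input image.
disparity = np.array([[0.2, 1.0, 2.6],
                      [0.4, 1.8, 3.0]], dtype=np.float32)

# Rescale to the 0..255 range so the map can be written out as an 8-bit
# grayscale image such as disp.png (nearer surfaces appear brighter).
d_min, d_max = disparity.min(), disparity.max()
disp_u8 = ((disparity - d_min) / (d_max - d_min) * 255.0).round().astype(np.uint8)

print(disp_u8)
```

Because the normalization is per-image, absolute depth is not preserved; this is why swapping in another model keeps the algorithm the same but changes the depth map, as the how-to notes.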