From 380c8656f3319fc0c1ba3ca30b33a04a861dcef6 Mon Sep 17 00:00:00 2001
From: Evan
Date: Wed, 8 Jun 2022 09:16:40 -0600
Subject: [PATCH] Docs: Add links to specific object detection examples
 (#11820)

* Docs: Add links to object detection examples

* Docs: Add links to specific examples

* Docs: Add links to specific examples

* Update docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md

Co-authored-by: Karol Blaszczak
---
 .../tf_specific/Convert_EfficientDet_Models.md  | 13 ++++++++++---
 .../Convert_Object_Detection_API_Models.md      |  4 ++++
 .../tf_specific/Convert_YOLO_From_Tensorflow.md |  7 +++++++
 3 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
index a0e6c2b6cf9..7f57895edbe 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
@@ -67,8 +67,15 @@ The attribute names are self-explanatory or match the name in the `hparams_confi

> **NOTE**: The color channel order (RGB or BGR) of an input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion specifying the command-line parameter: `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to **When to Reverse Input Channels** section of [Converting a Model to Intermediate Representation (IR)](../Converting_Model.md).

-OpenVINO™ toolkit provides samples that can be used to infer EfficientDet model. For more information, refer to
-[Open Model Zoo Demos](@ref omz_demos) and
+## OpenVINO™ Toolkit Samples and Open Model Zoo Demos
+OpenVINO™ toolkit provides samples that can be used to infer EfficientDet models.
For more information, refer to the following pages:
+* [OpenVINO Samples](../../../../OV_Runtime_UG/Samples_Overview.md)
+ * [Hello Reshape SSD - Python](../../../../../samples/python/hello_reshape_ssd/README.md)
+ * [Hello Reshape SSD - C++](../../../../../samples/cpp/hello_reshape_ssd/README.md)
+* [Open Model Zoo Demos](@ref omz_demos)
+ * [Object Detection Python Demo](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/object_detection_demo/python)
+ * [Object Detection C++ Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/object_detection_demo/cpp)
+* [Hello Object Detection Jupyter notebook](https://docs.openvino.ai/latest/notebooks/004-hello-detection-with-output.html)

## Interpreting Results of the TensorFlow Model and the IR

@@ -90,4 +97,4 @@ The output of the IR is a list of 7-element tuples: `[image_id, class_id, confid
* `x_max` -- normalized `x` coordinate of the upper right corner of the detected object.
* `y_max` -- normalized `y` coordinate of the upper right corner of the detected object.

-The first element with `image_id = -1` means end of data.
\ No newline at end of file
+The first element with `image_id = -1` means end of data.
diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md
index b8276191219..8174b13c390 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_Object_Detection_API_Models.md
@@ -64,7 +64,11 @@ Speech Recognition, Natural Language Processing and others.
Refer to the links below:
* [OpenVINO Samples](../../../../OV_Runtime_UG/Samples_Overview.md)
+ * [Hello Reshape SSD - Python](../../../../../samples/python/hello_reshape_ssd/README.md)
+ * [Hello Reshape SSD - C++](../../../../../samples/cpp/hello_reshape_ssd/README.md)
* [Open Model Zoo Demos](@ref omz_demos)
+ * [Object Detection Python Demo](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/object_detection_demo/python)
+ * [Object Detection C++ Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/object_detection_demo/cpp)

## Important Notes About Feeding Input Images to the Samples

diff --git a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
index 395745c26a9..e26515eca01 100644
--- a/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
+++ b/docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
@@ -229,3 +229,10 @@ The model was trained with input values in the range `[0,1]`. OpenVINO™ to
For other applicable parameters, refer to [Convert Model from TensorFlow](../Convert_Model_From_TensorFlow.md).

> **NOTE**: The color channel order (RGB or BGR) of an input data should match the channel order of the model training dataset. If they are different, perform the `RGB<->BGR` conversion specifying the command-line parameter: `--reverse_input_channels`. Otherwise, inference results may be incorrect. For more information about the parameter, refer to **When to Reverse Input Channels** section of [Converting a Model to Intermediate Representation (IR)](../Converting_Model.md).
+
+
+
+## YOLO Sample Application
+OpenVINO™ [Open Model Zoo Demos](@ref omz_demos) provide a sample application that shows how to run inference on a video input with object detection models.
The sample is compatible with YOLOv1, YOLOv2, YOLOv3, and YOLOv4, in both the full-size and tiny variants:
+* [Object Detection Python Demo](https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/object_detection_demo/python)
+* [Object Detection C++ Demo](https://github.com/openvinotoolkit/open_model_zoo/tree/master/demos/object_detection_demo/cpp)
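The EfficientDet doc touched by this patch describes the IR output as a list of 7-element tuples `[image_id, class_id, confidence, x_min, y_min, x_max, y_max]` with normalized coordinates and `image_id = -1` marking end of data. A short sketch of decoding that format; the function name, dict keys, and threshold are illustrative, not part of any OpenVINO API:

```python
def decode_detections(raw_output, image_width, image_height, conf_threshold=0.5):
    """Convert normalized 7-element detection tuples to pixel-space boxes."""
    detections = []
    for image_id, class_id, confidence, x_min, y_min, x_max, y_max in raw_output:
        if image_id == -1:          # sentinel: end of valid detections
            break
        if confidence < conf_threshold:
            continue                # drop low-confidence detections
        detections.append({
            "class_id": int(class_id),
            "confidence": confidence,
            # scale normalized coordinates to pixels
            "box": (round(x_min * image_width), round(y_min * image_height),
                    round(x_max * image_width), round(y_max * image_height)),
        })
    return detections
```

For example, on a 200x100 image, a tuple `(0, 1, 0.9, 0.25, 0.25, 0.5, 0.75)` decodes to class 1 with pixel box `(50, 25, 100, 75)`.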
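Both files touched by this patch carry the note about matching the `RGB<->BGR` channel order via the `--reverse_input_channels` Model Optimizer flag. The same effect can be achieved at preprocessing time; a minimal pure-Python sketch (the nested-list H x W x C image layout is an assumption for illustration — a real pipeline would use numpy or OpenCV arrays):

```python
def reverse_input_channels(image):
    """Swap the channel order (BGR <-> RGB) of an image given as
    nested lists in H x W x C layout. Equivalent in effect to
    converting the model with the --reverse_input_channels flag,
    but applied to the input data instead of the model."""
    return [[list(reversed(pixel)) for pixel in row] for row in image]
```

Doing the swap in the model (via the flag) avoids a per-frame preprocessing cost, which is why the docs recommend the flag when the mismatch is known at conversion time.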