Proposal

Versioned name: Proposal-4

Category: Object detection

Short description: Proposal operation filters bounding boxes and outputs only those with the highest prediction confidence.

Detailed description

Proposal has three inputs: a 4D tensor of shape [num_batches, 2*K, H, W] with probabilities of whether each bounding box corresponds to background or foreground, a 4D tensor of shape [num_batches, 4*K, H, W] with deltas for each bounding box, and a tensor with the input image size in the [image_height, image_width, scale_height_and_width] or [image_height, image_width, scale_height, scale_width] format. K is the number of anchors, and H and W are the height and width of the feature map. The operation produces two tensors: a mandatory tensor of shape [batch_size * post_nms_topn, 5] with proposed boxes and an optional tensor of shape [batch_size * post_nms_topn] with their probabilities (sometimes referred to as scores).
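
For illustration, the input shapes can be laid out as follows. This is a minimal NumPy sketch; the sizes num_batches, K, H, W and the image dimensions are arbitrary placeholders, not values prescribed by the specification.

import numpy as np

# Hypothetical sizes: 2 images, 3 anchors, a 14x14 feature map.
num_batches, K, H, W = 2, 3, 14, 14

class_probs = np.zeros((num_batches, 2 * K, H, W), dtype=np.float32)  # input 1
bbox_deltas = np.zeros((num_batches, 4 * K, H, W), dtype=np.float32)  # input 2

# Input 3 accepts either 3 or 4 elements:
image_info_3 = np.array([480.0, 640.0, 1.0], dtype=np.float32)        # [image_height, image_width, scale_height_and_width]
image_info_4 = np.array([480.0, 640.0, 1.0, 1.0], dtype=np.float32)   # [image_height, image_width, scale_height, scale_width]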

The Proposal layer does the following with the input tensors:

  1. Generates initial anchor boxes. The top-left corner of all boxes is at (0, 0). The width and height of the boxes are calculated from base_size with the scale and ratio attributes.
  2. For each point in the first input tensor:
    • pins anchor boxes to the image according to the second input tensor, which contains four deltas for each box: for the x and y of the center, for the width, and for the height
    • reads the corresponding score from the first input tensor
  3. Filters out boxes with a size less than min_size
  4. Sorts all proposals (box, score) by score from highest to lowest
  5. Takes the top pre_nms_topn proposals
  6. Calculates intersections between boxes and filters out all boxes with intersection/union > nms_thresh
  7. Takes the top post_nms_topn proposals
  8. Returns the results:
    • The top proposals. If there are not enough proposals to fill the whole output tensor, the list of valid proposals is terminated with a single -1.
    • Optionally, the probabilities for each proposal; these are not terminated by any special value.
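
A minimal NumPy sketch of these steps follows. It is illustrative only: the helper names, the exact box-decoding formula (the common exponential width/height convention), and the greedy NMS loop are assumptions chosen to mirror the description above, not the normative definition of the operation.

import numpy as np

def generate_anchors(base_size, ratios, scales):
    # Step 1: anchors with the top-left corner at (0, 0), sized from base_size, ratio and scale.
    anchors = []
    for ratio in ratios:
        for scale in scales:
            w = base_size * scale * np.sqrt(1.0 / ratio)
            h = base_size * scale * np.sqrt(ratio)
            anchors.append([0.0, 0.0, w, h])          # [x_min, y_min, x_max, y_max]
    return np.array(anchors, dtype=np.float32)

def decode(anchor, delta):
    # Step 2: shift the anchor by deltas for the center x, center y, width and height.
    x_min, y_min, x_max, y_max = anchor
    w, h = x_max - x_min, y_max - y_min
    cx, cy = x_min + 0.5 * w, y_min + 0.5 * h
    dx, dy, dw, dh = delta
    cx, cy = cx + dx * w, cy + dy * h
    w, h = w * np.exp(dw), h * np.exp(dh)
    return np.array([cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h])

def iou(a, b):
    # Intersection over union of two [x_min, y_min, x_max, y_max] boxes.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def propose(boxes, scores, min_size, pre_nms_topn, nms_thresh, post_nms_topn):
    # Step 3: drop boxes smaller than min_size.
    keep = [(b, s) for b, s in zip(boxes, scores)
            if b[2] - b[0] >= min_size and b[3] - b[1] >= min_size]
    # Steps 4-5: sort by score and keep the top pre_nms_topn.
    keep.sort(key=lambda bs: bs[1], reverse=True)
    keep = keep[:pre_nms_topn]
    # Step 6: greedy NMS with the intersection/union > nms_thresh criterion.
    selected = []
    for box, score in keep:
        if all(iou(box, other) <= nms_thresh for other, _ in selected):
            selected.append((box, score))
    # Step 7: keep the top post_nms_topn proposals.
    return selected[:post_nms_topn]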

Attributes:

  • base_size

    • Description: base_size is the size of the anchor to which the scale and ratio attributes are applied.
    • Range of values: a positive integer number
    • Type: int
    • Required: yes
  • pre_nms_topn

    • Description: pre_nms_topn is the number of bounding boxes before the NMS operation. For example, pre_nms_topn equal to 15 means to take the top 15 boxes with the highest scores.
    • Range of values: a positive integer number
    • Type: int
    • Required: yes
  • post_nms_topn

    • Description: post_nms_topn is the number of bounding boxes after the NMS operation. For example, post_nms_topn equal to 15 means to take the top 15 boxes with the highest scores after NMS.
    • Range of values: a positive integer number
    • Type: int
    • Required: yes
  • nms_thresh

    • Description: nms_thresh is the intersection-over-union (IoU) threshold used during non-maximum suppression. For example, nms_thresh equal to 0.5 means that a box is filtered out if its IoU with a higher-scored box exceeds 0.5.
    • Range of values: a positive floating-point number
    • Type: float
    • Required: yes
  • feat_stride

    • Description: feat_stride is the step size, in pixels, between anchor positions. For example, feat_stride equal to 16 means that anchor boxes are generated every 16 pixels.
    • Range of values: a positive integer
    • Type: int
    • Required: yes
  • min_size

    • Description: min_size is the minimum size of a box to be taken into consideration. For example, min_size equal to 35 means that all boxes with a size less than 35 are filtered out.
    • Range of values: a positive integer number
    • Type: int
    • Required: yes
  • ratio

    • Description: ratio specifies the aspect ratios used for anchor generation.
    • Range of values: a list of floating-point numbers
    • Type: float[]
    • Required: yes
  • scale

    • Description: scale specifies the scales used for anchor generation.
    • Range of values: a list of floating-point numbers
    • Type: float[]
    • Required: yes
  • clip_before_nms

    • Description: clip_before_nms is a flag that specifies whether to clip bounding boxes before non-maximum suppression.
    • Range of values: true or false
    • Type: boolean
    • Default value: true
    • Required: no
  • clip_after_nms

    • Description: clip_after_nms is a flag that specifies whether to clip bounding boxes after non-maximum suppression.
    • Range of values: true or false
    • Type: boolean
    • Default value: false
    • Required: no
  • normalize

    • Description: normalize is a flag that specifies whether to normalize output boxes to the [0, 1] interval.
    • Range of values: true or false
    • Type: boolean
    • Default value: false
    • Required: no
  • box_size_scale

    • Description: box_size_scale specifies the scale factor applied to box sizes before decoding.
    • Range of values: a positive floating-point number
    • Type: float
    • Default value: 1.0
    • Required: no
  • box_coordinate_scale

    • Description: box_coordinate_scale specifies the scale factor applied to box coordinates before decoding.
    • Range of values: a positive floating-point number
    • Type: float
    • Default value: 1.0
    • Required: no
  • framework

    • Description: framework specifies how the box coordinates are calculated.
    • Range of values:
      • "" (empty string) - calculate box coordinates like in Caffe*
      • tensorflow - calculate box coordinates like in the TensorFlow* Object Detection API models
    • Type: string
    • Default value: "" (empty string)
    • Required: no
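
Collected in one place, the attributes above might be configured as in the hypothetical Python dictionary below; the required values are arbitrary examples, while the optional entries repeat the documented defaults.

proposal_attrs = {
    # Required attributes (example values, not defaults).
    "base_size": 16,
    "pre_nms_topn": 6000,
    "post_nms_topn": 300,
    "nms_thresh": 0.7,
    "feat_stride": 16,
    "min_size": 16,
    "ratio": [0.5, 1.0, 2.0],
    "scale": [8.0, 16.0, 32.0],
    # Optional attributes shown with their documented defaults.
    "clip_before_nms": True,
    "clip_after_nms": False,
    "normalize": False,
    "box_size_scale": 1.0,
    "box_coordinate_scale": 1.0,
    "framework": "",
}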

Inputs:

  • 1: 4D tensor of type T and shape [batch_size, 2*K, H, W] with class prediction scores. Required.

  • 2: 4D tensor of type T and shape [batch_size, 4*K, H, W] with deltas for each bounding box. Required.

  • 3: 1D tensor of type T with 3 or 4 elements: [image_height, image_width, scale_height_and_width] or [image_height, image_width, scale_height, scale_width]. Required.

Outputs

  • 1: tensor of type T and shape [batch_size * post_nms_topn, 5] with proposed boxes.

  • 2: tensor of type T and shape [batch_size * post_nms_topn] with probabilities.

Types

  • T: floating-point type.

Example

<layer ... type="Proposal" ... >
    <data base_size="16" feat_stride="8" min_size="16" nms_thresh="1.0" normalize="0" post_nms_topn="1000" pre_nms_topn="1000" ratio="1" scale="1,2"/>
    <input>
        <port id="0">
            <dim>7</dim>
            <dim>4</dim>
            <dim>28</dim>
            <dim>28</dim>
        </port>
        <port id="1">
            <dim>7</dim>
            <dim>8</dim>
            <dim>28</dim>
            <dim>28</dim>
        </port>
        <port id="2">
            <dim>3</dim>
        </port>
    </input>
    <output>
        <port id="3" precision="FP32">
            <dim>7000</dim>
            <dim>5</dim>
        </port>
        <port id="4" precision="FP32">
            <dim>7000</dim>
        </port>
    </output>
</layer>
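
As a sanity check, the dimensions in this example fit together: with one ratio and two scales there are K = 2 anchors per position, so the inputs carry 2*K = 4 and 4*K = 8 channels, and both outputs have batch_size * post_nms_topn = 7 * 1000 = 7000 rows. A short sketch, assuming K equals the number of ratio/scale combinations, as the example suggests:

ratio = [1.0]
scale = [1.0, 2.0]
batch_size, post_nms_topn = 7, 1000

K = len(ratio) * len(scale)                # 2 anchors per position (assumed: one anchor per ratio/scale pair)
assert 2 * K == 4                          # channels of input port 0
assert 4 * K == 8                          # channels of input port 1
assert batch_size * post_nms_topn == 7000  # rows of output ports 3 and 4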