DetectionOutput

Versioned name: DetectionOutput-1

Category: Object detection

Short description: DetectionOutput performs non-maximum suppression to generate the detection output using information on location and confidence predictions.

Detailed description: see the reference article. The layer has 3 mandatory inputs: a tensor with box logits, a tensor with confidence predictions, and a tensor with box coordinates (proposals). It can have 2 additional inputs with extra confidence predictions and box coordinates, as described in the article. The output tensor contains filtered detections described by 7-element tuples: [batch_id, class_id, confidence, x_1, y_1, x_2, y_2]. The first tuple with batch_id equal to -1 marks the end of the output.

At each feature map cell, DetectionOutput predicts the offsets relative to the default box shapes in the cell, as well as the per-class scores that indicate the presence of a class instance in each of those boxes. Specifically, for each of the k boxes at a given location, DetectionOutput computes c class scores and the four offsets relative to the original default box shape. This results in a total of \f$(c + 4)k\f$ filters applied around each location in the feature map, yielding \f$(c + 4)kmn\f$ outputs for an \f$m \times n\f$ feature map.
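
For clarity, the sketch below shows how the 7-element tuples in the output tensor could be decoded in Python. It is a minimal, illustrative sketch: the numpy-based layout and the helper name parse_detections are assumptions, not part of the specification.

import numpy as np

def parse_detections(output: np.ndarray):
    """Decode a DetectionOutput result of shape [1, 1, M, 7] into a list of dicts.

    Each row is [batch_id, class_id, confidence, x_1, y_1, x_2, y_2];
    a row with batch_id == -1 marks the end of valid detections.
    Illustrative helper, not part of the operation specification.
    """
    detections = []
    for batch_id, class_id, confidence, x_1, y_1, x_2, y_2 in output.reshape(-1, 7):
        if batch_id == -1:  # end-of-output marker
            break
        detections.append({
            "batch_id": int(batch_id),
            "class_id": int(class_id),
            "confidence": float(confidence),
            "box": (float(x_1), float(y_1), float(x_2), float(y_2)),
        })
    return detections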

Attributes:

  • num_classes

    • Description: number of classes to be predicted
    • Range of values: positive integer number
    • Type: int
    • Required: yes
  • background_label_id

    • Description: background label id. If there is no background class, set it to -1.
    • Range of values: integer values
    • Type: int
    • Default value: 0
    • Required: no
  • top_k

    • Description: maximum number of results to be kept per batch after the NMS step; -1 means keeping all bounding boxes.
    • Range of values: integer values
    • Type: int
    • Default value: -1
    • Required: no
  • variance_encoded_in_target

    • Description: variance_encoded_in_target is a flag that denotes whether the variance is encoded in the target. If the flag is false, the predicted offsets must be adjusted accordingly.
    • Range of values: false or true
    • Type: boolean
    • Default value: false
    • Required: no
  • keep_top_k

    • Description: maximum number of bounding boxes per batch to be kept after the NMS step; -1 means keeping all bounding boxes.
    • Range of values: integer values
    • Type: int[]
    • Required: yes
  • code_type

    • Description: type of coding method for bounding boxes
    • Range of values: "caffe.PriorBoxParameter.CENTER_SIZE", "caffe.PriorBoxParameter.CORNER"
    • Type: string
    • Default value: "caffe.PriorBoxParameter.CORNER"
    • Required: no
  • share_location

    • Description: share_location is a flag that denotes whether bounding boxes are shared among different classes.
    • Range of values: false or true
    • Type: boolean
    • Default value: true
    • Required: no
  • nms_threshold

    • Description: threshold to be used in the NMS stage
    • Range of values: floating-point values
    • Type: float
    • Required: yes
  • confidence_threshold

    • Description: only detections with a confidence larger than this threshold are considered. If not provided, all boxes are considered.
    • Range of values: floating-point values
    • Type: float
    • Default value: 0
    • Required: no
  • clip_after_nms

    • Description: clip_after_nms is a flag that denotes whether to clip bounding boxes after the non-maximum suppression step.
    • Range of values: false or true
    • Type: boolean
    • Default value: false
    • Required: no
  • clip_before_nms

    • Description: clip_before_nms is a flag that denotes whether to clip bounding boxes before the non-maximum suppression step.
    • Range of values: false or true
    • Type: boolean
    • Default value: false
    • Required: no
  • decrease_label_id

    • Description: decrease_label_id is a flag that denotes how to perform NMS.
    • Range of values:
      • false - perform NMS like in Caffe*.
      • true - perform NMS like in MxNet*.
    • Type: boolean
    • Default value: false
    • Required: no
  • normalized

    • Description: normalized is a flag that denotes whether the input tensor with proposal boxes is normalized. If the tensor is not normalized, the input_height and input_width attributes are used to normalize the box coordinates.
    • Range of values: false or true
    • Type: boolean
    • Default value: false
    • Required: no
  • input_height (input_width)

    • Description: input image height (width). If normalized is set to 1, these attributes are not used.
    • Range of values: positive integer number
    • Type: int
    • Default value: 1
    • Required: no
  • objectness_score

    • Description: threshold used to filter out low-confidence predictions. Used only when the DetectionOutput layer has 5 inputs.
    • Range of values: non-negative float number
    • Type: float
    • Default value: 0
    • Required: no

Inputs

  • 1: 2D input tensor with box logits with shape [N, num_prior_boxes * num_loc_classes * 4] and type T. num_loc_classes is equal to num_classes when share_location is 0 and equal to 1 otherwise. Required.
  • 2: 2D input tensor with class predictions with shape [N, num_prior_boxes * num_classes] and type T. Required.
  • 3: 3D input tensor with proposals with shape [priors_batch_size, 1, num_prior_boxes * prior_box_size] or [priors_batch_size, 2, num_prior_boxes * prior_box_size]. priors_batch_size is either 1 or N. The size of the second dimension depends on variance_encoded_in_target. If variance_encoded_in_target is equal to 0, the second dimension is 2 and variance values are provided for each box coordinate. If variance_encoded_in_target is equal to 1, the second dimension is 1 and the tensor contains the proposal boxes only. prior_box_size is equal to 4 when normalized is set to 1 and equal to 5 otherwise (see the shape sketch after this list). Required.
  • 4: 2D input tensor with additional class prediction information, as described in the article. Its shape must be equal to [N, num_prior_boxes * 2]. Optional.
  • 5: 2D input tensor with additional box prediction information, as described in the article. Its shape must be equal to the shape of the first input tensor. Optional.
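
The following Python sketch summarizes how the expected shapes of the three mandatory inputs follow from the attributes. The helper name detection_output_input_shapes is illustrative, not part of any OpenVINO API.

def detection_output_input_shapes(N, num_prior_boxes, num_classes,
                                  share_location=True,
                                  variance_encoded_in_target=False,
                                  normalized=False,
                                  priors_batch_size=1):
    """Expected shapes of the three mandatory DetectionOutput inputs (illustrative)."""
    num_loc_classes = 1 if share_location else num_classes
    prior_box_size = 4 if normalized else 5
    box_logits = (N, num_prior_boxes * num_loc_classes * 4)      # input 1
    class_preds = (N, num_prior_boxes * num_classes)             # input 2
    proposals = (priors_batch_size,                              # input 3
                 1 if variance_encoded_in_target else 2,
                 num_prior_boxes * prior_box_size)
    return box_logits, class_preds, proposals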

Outputs

  • 1: 4D output tensor with type T. Its shape depends on whether keep_top_k or top_k is set. If keep_top_k[0] is greater than zero, the shape is [1, 1, N * keep_top_k[0], 7]. If keep_top_k[0] is set to -1 and top_k is greater than zero, the shape is [1, 1, N * top_k * num_classes, 7]. Otherwise, the output shape is [1, 1, N * num_classes * num_prior_boxes, 7] (see the sketch below).
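
A minimal Python sketch of this shape rule, with an illustrative helper name:

def detection_output_shape(N, num_classes, num_prior_boxes, keep_top_k, top_k=-1):
    """Output shape [1, 1, M, 7] as defined by the rule above (illustrative)."""
    if keep_top_k[0] > 0:
        M = N * keep_top_k[0]          # keep_top_k takes precedence
    elif top_k > 0:
        M = N * top_k * num_classes    # keep_top_k[0] == -1 and top_k set
    else:
        M = N * num_classes * num_prior_boxes
    return (1, 1, M, 7)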

Types

  • T: any supported floating-point type.

Example

<layer ... type="DetectionOutput" ... >
    <data background_label_id="1" code_type="caffe.PriorBoxParameter.CENTER_SIZE" confidence_threshold="0.019999999552965164" input_height="1" input_width="1" keep_top_k="200" nms_threshold="0.44999998807907104" normalized="true" num_classes="2" share_location="true" top_k="200" variance_encoded_in_target="false" clip_after_nms="false" clip_before_nms="false" objectness_score="0" decrease_label_id="false"/>
    <input>
        <port id="0">
            <dim>1</dim>
            <dim>5376</dim>
        </port>
        <port id="1">
            <dim>1</dim>
            <dim>2688</dim>
        </port>
        <port id="2">
            <dim>1</dim>
            <dim>2</dim>
            <dim>5376</dim>
        </port>
    </input>
    <output>
        <port id="3" precision="FP32">
            <dim>1</dim>
            <dim>1</dim>
            <dim>200</dim>
            <dim>7</dim>
        </port>
    </output>
</layer>
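
In this example share_location and normalized are true, so num_loc_classes = 1 and prior_box_size = 4. The first input therefore holds 5376 / 4 = 1344 prior boxes, the second input holds 1344 * 2 = 2688 class predictions, and the third input holds 1344 * 4 = 5376 proposal values with a second dimension of 2 because variance_encoded_in_target is false. Since keep_top_k[0] = 200 is positive, the output shape is [1, 1, 1 * 200, 7].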