Files
openvino/docs/MO_DG/prepare_model/Getting_performance_numbers.md
Sebastian Golebiewski b88eed7645 Proofreading MO Guide (#11605)
2022-06-02 17:05:14 +02:00


Getting Performance Numbers

This guide explains what to pay attention to when measuring performance and how to use the benchmark_app to obtain performance numbers. It also explains how those numbers are reflected in internal inference performance counters and execution graphs. The last section covers using ITT and Intel® VTune™ Profiler to gain performance insights.

Tip 1: Select Proper Set of Operations to Measure

When evaluating the performance of a model with OpenVINO Runtime, it is important to measure the proper set of operations. Remember the following tips:

  • Avoid including one-time costs such as model loading.

  • Track operations that occur outside OpenVINO Runtime (such as video decoding) separately.

Note: Some image pre-processing can be baked into OpenVINO IR and accelerated accordingly. For more information, refer to Embedding the Pre-processing and General Runtime Optimizations.

Tip 2: Try to Get Credible Data

Performance conclusions should be built upon reproducible data. Performance measurements should be done over a large number of invocations of the same routine. Since the first iteration is almost always significantly slower than the subsequent ones, an aggregated value can be used for the execution time in final projections:

  • If the warm-up run does not help or the execution time still varies, run a large number of iterations and average the results.
  • If the time values vary too widely, use the geometric mean.
  • Be aware of throttling and other power oddities. A device can exist in one of several power states. When optimizing your model, consider fixing the device frequency for better reproducibility of performance data. However, end-to-end (application) benchmarking should still be performed under real operational conditions.
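The aggregation tips above can be sketched as a small timing harness. This is a minimal illustration in plain Python; `run_inference` is a hypothetical stand-in for the actual inference call, not an OpenVINO API:

```python
import statistics
import time

def measure(run_inference, warmup=10, iterations=100):
    """Time a callable following the tips above: discard warm-up runs,
    then aggregate a large number of timed iterations."""
    for _ in range(warmup):  # the first runs are usually slower; do not count them
        run_inference()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_inference()
        samples.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(samples),
        "mean_s": statistics.fmean(samples),
        "geomean_s": statistics.geometric_mean(samples),  # useful when values range widely
    }
```

Comparing the median and the geometric mean against the plain mean is a quick way to spot throttling or outlier iterations.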

Using benchmark_app to Measure Reference Performance Numbers

To get performance numbers, use the dedicated OpenVINO Benchmark app sample, which is the recommended solution for producing performance reference numbers. It includes many device-specific knobs, but the basic usage is as simple as:

$ ./benchmark_app -d GPU -m <model> -i <input>

to measure the performance of the model on the GPU, or:

$ ./benchmark_app -d CPU -m <model> -i <input>

to execute on the CPU instead.

Each OpenVINO-supported device offers performance settings that have command-line equivalents in the Benchmark app. While these settings provide very low-level control and make it possible to leverage the optimal performance of a model on a specific device, it is recommended to always start the performance evaluation with the OpenVINO High-Level Performance Hints first:

  • benchmark_app -hint tput -d <device> -m <path_to_model>
  • benchmark_app -hint latency -d <device> -m <path_to_model>

Notes for Comparing Performance with Native/Framework Code

When comparing the OpenVINO Runtime performance with that of the framework or other reference code, make sure that both versions are as similar as possible:

  • Wrap the exact inference execution (refer to the Benchmark app for examples).
  • Do not include model loading time.
  • Ensure that the inputs are identical for OpenVINO Runtime and the framework. For example, watch out for random values that can be used to populate the inputs.
  • Track user-side pre-processing (for example, image pre-processing and conversion) separately, where applicable.
  • When applicable, leverage the Dynamic Shapes support.
  • If possible, use the same numerical precision. For example, TensorFlow allows FP16 execution, so when comparing to it, make sure to test OpenVINO Runtime with FP16 as well.
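One simple way to satisfy the identical-inputs requirement is to generate the test data deterministically and feed the same values to both runtimes. A plain-Python sketch (the shape convention and value range are arbitrary assumptions):

```python
import random

def make_input(shape, seed=42):
    """Produce a reproducible flat list of floats for a tensor of the given
    shape. Seeding the RNG guarantees that OpenVINO Runtime and the reference
    framework receive exactly the same data, avoiding skew from random inputs."""
    rng = random.Random(seed)
    count = 1
    for dim in shape:
        count *= dim
    return [rng.uniform(-1.0, 1.0) for _ in range(count)]
```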

Data from Internal Inference Performance Counters and Execution Graphs

More detailed insights into the inference performance breakdown can be achieved with device-specific performance counters and/or execution graphs. Both the C++ and Python versions of the benchmark_app support a -pc command-line parameter that outputs an internal execution breakdown.

For example, the table shown below is part of the performance counters for a quantized TensorFlow implementation of the ResNet-50 model inferred by the CPU plugin. Keep in mind that since the device is the CPU, the realTime (wall-clock) and cpuTime values of the layers are the same. Information about layer precision is also stored in the performance counters.

| layerName | execStatus | layerType | execType | realTime (ms) | cpuTime (ms) |
| --- | --- | --- | --- | --- | --- |
| resnet_model/batch_normalization_15/FusedBatchNorm/Add | EXECUTED | Convolution | jit_avx512_1x1_I8 | 0.377 | 0.377 |
| resnet_model/conv2d_16/Conv2D/fq_input_0 | NOT_RUN | FakeQuantize | undef | 0 | 0 |
| resnet_model/batch_normalization_16/FusedBatchNorm/Add | EXECUTED | Convolution | jit_avx512_I8 | 0.499 | 0.499 |
| resnet_model/conv2d_17/Conv2D/fq_input_0 | NOT_RUN | FakeQuantize | undef | 0 | 0 |
| resnet_model/batch_normalization_17/FusedBatchNorm/Add | EXECUTED | Convolution | jit_avx512_1x1_I8 | 0.399 | 0.399 |
| resnet_model/add_4/fq_input_0 | NOT_RUN | FakeQuantize | undef | 0 | 0 |
| resnet_model/add_4 | NOT_RUN | Eltwise | undef | 0 | 0 |
| resnet_model/add_5/fq_input_1 | NOT_RUN | FakeQuantize | undef | 0 | 0 |

The execStatus column of the table includes the following possible values:

  • EXECUTED - the layer was executed by a standalone primitive.
  • NOT_RUN - the layer was not executed by a standalone primitive or was fused with another operation and executed in another layer's primitive.

The execType column of the table includes inference primitives with specific suffixes. The layers can have the following marks:

  • The I8 suffix is for layers that had 8-bit data type input and were computed in 8-bit precision.
  • The FP32 suffix is for layers computed in 32-bit precision.

In this example, all Convolution layers are executed in int8 precision, while the rest of the layers are fused into Convolutions using post-operation optimization, as described in the CPU Device documentation. The table contains layer names (as seen in OpenVINO IR), the layer type, and execution statistics.
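As an illustration of how such `-pc` output can be post-processed, the rows above can be aggregated by precision suffix to see how much time each precision accounts for. A sketch with a few values transcribed from the table; the suffix parsing is a simplifying assumption:

```python
from collections import defaultdict

# (layerType, execType, realTime_ms) triples transcribed from the counters above
counters = [
    ("Convolution", "jit_avx512_1x1_I8", 0.377),
    ("FakeQuantize", "undef", 0.0),
    ("Convolution", "jit_avx512_I8", 0.499),
    ("Convolution", "jit_avx512_1x1_I8", 0.399),
]

def time_by_precision(rows):
    """Sum realTime per execType suffix (I8, FP32, ...) across layers."""
    totals = defaultdict(float)
    for _layer_type, exec_type, real_time_ms in rows:
        # layers with execType "undef" were fused or not run; bucket them separately
        key = exec_type.rsplit("_", 1)[-1] if exec_type != "undef" else "fused/not run"
        totals[key] += real_time_ms
    return dict(totals)
```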

Both benchmark_app versions also support the exec_graph_path command-line option, which makes OpenVINO output the same per-layer execution statistics, but in the form of a plugin-specific, Netron-viewable graph written to the specified file.

Especially when performance-debugging latency, note that the counters do not reflect the time spent in the plugin/device/driver queues. If the sum of the counters differs too much from the latency of an inference request, try testing with fewer inference requests. For example, running a single OpenVINO stream with multiple requests would produce nearly identical counters to running a single inference request, while the actual latency can be quite different.

Lastly, performance statistics from both the performance counters and the execution graphs are averaged, so data for inputs of dynamic shapes should be measured carefully, preferably by isolating the specific shape and executing it multiple times in a loop, to gather reliable data.
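This caveat can be handled by keeping the measurements grouped per input shape rather than pooling them into one average. A plain-Python sketch (the shapes and latencies below are made-up illustration values):

```python
import statistics
from collections import defaultdict

def latency_per_shape(measurements):
    """Average latency within each input shape only, so a fast small-shape
    run cannot mask a slow large-shape run in one pooled average."""
    by_shape = defaultdict(list)
    for shape, latency_ms in measurements:
        by_shape[shape].append(latency_ms)
    return {shape: statistics.fmean(values) for shape, values in by_shape.items()}

# hypothetical (shape, latency_ms) samples for a model run with two input shapes
samples = [((1, 3, 224, 224), 2.0), ((1, 3, 448, 448), 8.0), ((1, 3, 224, 224), 2.2)]
```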

Using ITT to Get Performance Insights

In general, OpenVINO and its individual plugins are heavily instrumented with Intel® Instrumentation and Tracing Technology (ITT). Therefore, you can also compile OpenVINO from the source code with ITT enabled and use tools like Intel® VTune™ Profiler to get a detailed inference performance breakdown and additional insights into application-level performance on the timeline view.