diff --git a/docs/articles_en/openvino_workflow/model_optimization_guide.rst b/docs/articles_en/openvino_workflow/model_optimization_guide.rst
index 60f861a8ec8..c1c2fa48bc0 100644
--- a/docs/articles_en/openvino_workflow/model_optimization_guide.rst
+++ b/docs/articles_en/openvino_workflow/model_optimization_guide.rst
@@ -13,7 +13,7 @@ Model Optimization Guide
    weight_compression
 
-Model optimization is an optional offline step of improving the final model performance and reducing the model size by applying special optimization methods, such as 8-bit quantization, pruning, etc. OpenVINO offers two optimization paths implemented in `Neural Network Compression Framework (NNCF) `__:
+Model optimization is an optional offline step of improving the final model performance and reducing the model size by applying special optimization methods, such as 8-bit quantization, pruning, etc. OpenVINO offers three optimization paths implemented in `Neural Network Compression Framework (NNCF) `__:
 
 - :doc:`Post-training Quantization ` is designed to optimize the inference of deep learning models by applying the post-training 8-bit integer quantization that does not require model retraining or fine-tuning.
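The 8-bit integer quantization mentioned in the changed paragraph can be illustrated with a minimal sketch. This is a generic symmetric per-tensor scheme written in plain NumPy for illustration only; it is not NNCF's actual implementation, and the function names here are hypothetical:

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    # Symmetric per-tensor quantization: one scale derived from the
    # maximum absolute weight maps floats onto the int8 range.
    # (Illustrative scheme, not NNCF's exact algorithm.)
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover a float approximation of the original weights.
    return q.astype(np.float32) * scale

w = np.array([0.02, -1.5, 0.75, 3.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

Post-training quantization tools apply a transformation of this kind (typically with per-channel scales and activation calibration) without retraining the model, which is why the paragraph describes it as not requiring fine-tuning.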