change wording (#21428)
This commit is contained in:
parent
bf760b663e
commit
ad12f114f4
@@ -13,7 +13,7 @@ Model Optimization Guide
weight_compression
-Model optimization is an optional offline step of improving the final model performance and reducing the model size by applying special optimization methods, such as 8-bit quantization, pruning, etc. OpenVINO offers two optimization paths implemented in `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__:
+Model optimization is an optional offline step of improving the final model performance and reducing the model size by applying special optimization methods, such as 8-bit quantization, pruning, etc. OpenVINO offers three optimization paths implemented in `Neural Network Compression Framework (NNCF) <https://github.com/openvinotoolkit/nncf>`__:
- :doc:`Post-training Quantization <ptq_introduction>` is designed to optimize the inference of deep learning models by applying the post-training 8-bit integer quantization that does not require model retraining or fine-tuning.