Replace 'quantized' with 'compressed' in MO help (#7607)

* Replace 'quantized' with 'compressed' in MO help

Signed-off-by: Andrei Kochin <andrei.kochin@intel.com>

* Add UG changes to reflect new help text

Signed-off-by: Andrei Kochin <andrei.kochin@intel.com>
This commit is contained in:
Andrei Kochin
2021-09-24 13:34:09 +03:00
committed by GitHub
parent ce9a229313
commit 0efc1a0763
2 changed files with 2 additions and 2 deletions


@@ -99,7 +99,7 @@ Framework-agnostic parameters:
--data_type {FP16,FP32,half,float}
Data type for all intermediate tensors and weights. If
original model is in FP32 and --data_type=FP16 is
- specified, all model weights and biases are quantized
+ specified, all model weights and biases are compressed
to FP16.
--disable_fusing Turn off fusing of linear operations to Convolution
--disable_resnet_optimization
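
The wording change matters: converting FP32 weights to FP16 keeps the values in floating point at reduced precision, which is better described as compression than quantization (quantization usually implies mapping to integers with a scale/zero-point). A minimal illustrative sketch of that distinction, using NumPy rather than any Model Optimizer internals:

```python
import numpy as np

# Hypothetical FP32 weight tensor (names here are illustrative only).
weights_fp32 = np.array([0.12345678, -3.1415927, 1e-5], dtype=np.float32)

# FP16 "compression": still floating point, just half precision.
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 12 bytes
print(weights_fp16.nbytes)  # 6 bytes: storage is halved
print(weights_fp16.dtype)   # float16, not an integer type
```

The values remain approximations of the originals in a float format, so no scale or zero-point metadata is needed, which is exactly why "quantized" was misleading in the help text.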