Replace 'quantized' with 'compressed' in MO help (#7607)

* Replace 'quantized' with 'compressed' in MO help

Signed-off-by: Andrei Kochin <andrei.kochin@intel.com>

* Add UG changes to reflect new help text

Signed-off-by: Andrei Kochin <andrei.kochin@intel.com>
This commit is contained in:
Andrei Kochin 2021-09-24 13:34:09 +03:00 committed by GitHub
parent ce9a229313
commit 0efc1a0763
2 changed files with 2 additions and 2 deletions


@@ -99,7 +99,7 @@ Framework-agnostic parameters:
   --data_type {FP16,FP32,half,float}
                         Data type for all intermediate tensors and weights. If
                         original model is in FP32 and --data_type=FP16 is
-                        specified, all model weights and biases are quantized
+                        specified, all model weights and biases are compressed
                         to FP16.
   --disable_fusing      Turn off fusing of linear operations to Convolution
   --disable_resnet_optimization


@@ -275,7 +275,7 @@ def get_common_cli_parser(parser: argparse.ArgumentParser = None):
     common_group.add_argument('--data_type',
                               help='Data type for all intermediate tensors and weights. ' +
                                    'If original model is in FP32 and --data_type=FP16 is specified, all model weights ' +
-                                   'and biases are quantized to FP16.',
+                                   'and biases are compressed to FP16.',
                               choices=["FP16", "FP32", "half", "float"],
                               default='float')
     common_group.add_argument('--transform',
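
For reference, below is a minimal standalone sketch (assumed parser and group names, not the actual MO cli_parser module) that reproduces the argparse pattern from the second hunk with the updated help wording; running it with --help prints the new "compressed to FP16" text.

# sketch.py -- illustrative only; mirrors the add_argument call shown in the diff
import argparse

parser = argparse.ArgumentParser(description='MO help excerpt (sketch)')
common_group = parser.add_argument_group('Framework-agnostic parameters')
common_group.add_argument('--data_type',
                          help='Data type for all intermediate tensors and weights. ' +
                               'If original model is in FP32 and --data_type=FP16 is specified, all model weights ' +
                               'and biases are compressed to FP16.',
                          choices=["FP16", "FP32", "half", "float"],
                          default='float')

if __name__ == '__main__':
    args = parser.parse_args()
    print(args.data_type)  # 'float' unless --data_type is passed on the command line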