Replace 'quantized' with 'compressed' in MO help (#7607)
* Replace 'quantized' with 'compressed' in MO help

Signed-off-by: Andrei Kochin <andrei.kochin@intel.com>

* Add UG changes to reflect new help text

Signed-off-by: Andrei Kochin <andrei.kochin@intel.com>
parent ce9a229313
commit 0efc1a0763
@@ -99,7 +99,7 @@ Framework-agnostic parameters:
   --data_type {FP16,FP32,half,float}
                         Data type for all intermediate tensors and weights. If
                         original model is in FP32 and --data_type=FP16 is
-                        specified, all model weights and biases are quantized
+                        specified, all model weights and biases are compressed
                         to FP16.
   --disable_fusing      Turn off fusing of linear operations to Convolution
   --disable_resnet_optimization
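For context on the wording change above, a minimal sketch of what --data_type=FP16 amounts to, with NumPy as an illustrative stand-in for the Model Optimizer's internals (the array shape and values are made up): the conversion is a plain precision cast that halves weight storage, which is why "compressed" fits better than "quantized".

# Illustrative sketch only, not Model Optimizer code: casting FP32
# weights to FP16 halves their storage size.
import numpy as np

weights_fp32 = np.ones((4, 4), dtype=np.float32)
weights_fp16 = weights_fp32.astype(np.float16)
print(weights_fp32.nbytes, weights_fp16.nbytes)  # 64 32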
@@ -275,7 +275,7 @@ def get_common_cli_parser(parser: argparse.ArgumentParser = None):
     common_group.add_argument('--data_type',
                               help='Data type for all intermediate tensors and weights. ' +
                                    'If original model is in FP32 and --data_type=FP16 is specified, all model weights ' +
-                                   'and biases are quantized to FP16.',
+                                   'and biases are compressed to FP16.',
                               choices=["FP16", "FP32", "half", "float"],
                               default='float')
     common_group.add_argument('--transform',
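A self-contained sketch of the argparse pattern the second hunk edits; only the --data_type argument comes from the diff, while the parser setup and group name are assumptions added so the snippet runs on its own.

import argparse

# Assumed scaffolding; the diff only shows the --data_type argument.
parser = argparse.ArgumentParser()
common_group = parser.add_argument_group('Framework-agnostic parameters')
common_group.add_argument('--data_type',
                          help='Data type for all intermediate tensors and weights. '
                               'If original model is in FP32 and --data_type=FP16 is '
                               'specified, all model weights and biases are compressed to FP16.',
                          choices=["FP16", "FP32", "half", "float"],
                          default='float')

args = parser.parse_args(['--data_type', 'FP16'])
print(args.data_type)  # FP16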