* compress_to_fp16=False by default

* Apply suggestions from code review

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>

* note about RAM consumption for FP16-compressed models

* detailed note about RAM usage

* update 'get_compression_message()'

* correct get_compression_message: remove info about RAM

* fix PyTorch convert layer tests

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>