diff --git a/docs/resources/prerelease_information.md b/docs/resources/prerelease_information.md
index 3ab620e023b..7d260e086b7 100644
--- a/docs/resources/prerelease_information.md
+++ b/docs/resources/prerelease_information.md
@@ -21,12 +21,12 @@ a general changelog and the schedule for all versions for the current year.
    :animate: fade-in-slide-down
    :color: primary
 
-   - Added support for PaddlePaddle Framework 2.4.
-   - Tensorflow Lite Frontend - load models directly via "read_model" or export to OpenVINO IR, using Model Optimizer or "convert_model".
-   - New option to control whether to use CPU to accelerate first inference latency for accelerator HW devices like GPU.
-   - New NNCF API call - "prepare_for_inference()" returns the compressed model in the source framework format. "export()" becomes optional.
-   - Security fix to protect OpenVINO against dll injection - search paths change to absolute paths to secure locations. OpenVINO installed into protected directories, ignoring the relative path and start up directory will be safe from this vulnerability.
-   - Added support for new model use cases and optimizing the existing support (better accuracy or performance).
-   - New FrontEndManager `register_front_end(name, lib_path)` interface added, to remove "OV_FRONTEND_PATH" env var (a way to load non-default frontends).
+   OpenVINO™ repository tag: `2023.0.0.dev20230217`__
+
+   * Enabled PaddlePaddle Framework 2.4 support.
+   * Preview of the TensorFlow Lite frontend: load models directly into OpenVINO Runtime via "read_model", or export them to OpenVINO IR format using Model Optimizer or "convert_model".
+   * Introduced the new option ov::auto::enable_startup_fallback / ENABLE_STARTUP_FALLBACK to control whether the CPU is used to accelerate first-inference latency on accelerator devices such as GPU.
+   * Added the new FrontEndManager register_front_end(name, lib_path) interface, removing the need for the "OV_FRONTEND_PATH" env var (a way to load non-default frontends).
+
 
 @endsphinxdirective
\ No newline at end of file
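
For context on the TensorFlow Lite bullet above, here is a minimal sketch of loading a `.tflite` model directly and saving it as OpenVINO IR. It assumes an OpenVINO 2023.0 pre-release wheel with the TF Lite frontend available; the file paths are hypothetical placeholders.

```python
# Minimal sketch: read a TF Lite model directly, no prior conversion step.
# Assumes an OpenVINO 2023.0 pre-release build; paths are placeholders.
from openvino.runtime import Core, serialize

core = Core()
model = core.read_model("model.tflite")      # TF Lite frontend picked up automatically
serialize(model, "model.xml", "model.bin")   # optional: persist as OpenVINO IR
compiled = core.compile_model(model, "CPU")  # ready for inference
```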
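The startup-fallback option from the list above could be exercised like this. This is a sketch assuming the string key "ENABLE_STARTUP_FALLBACK" is accepted in the compile-time config for the AUTO device, as the release note suggests; the model path is a placeholder.

```python
# Sketch: disable the CPU startup fallback on AUTO, so the first inference
# waits for the target accelerator (e.g. GPU) rather than running on CPU
# while the GPU compilation finishes. Key name assumed from the note above.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")  # hypothetical placeholder path
compiled = core.compile_model(
    model, "AUTO:GPU,CPU", {"ENABLE_STARTUP_FALLBACK": "NO"}
)
```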
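The new FrontEndManager interface from the last bullet might be used along these lines. This sketch assumes the Python binding mirrors the register_front_end(name, lib_path) signature quoted in the note; the frontend name and library path are hypothetical.

```python
# Sketch: register a non-default frontend by name and library path,
# replacing the removed "OV_FRONTEND_PATH" env var mechanism.
# The frontend name and .so path below are hypothetical.
from openvino.frontend import FrontEndManager

fem = FrontEndManager()
fem.register_front_end("my_framework", "/opt/plugins/libmy_frontend.so")
fe = fem.load_by_framework("my_framework")  # frontend is now discoverable
```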