# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

from openvino import Core, properties

from snippets import get_model

model = get_model()

device_name = "CPU"
core = Core()
core.set_property("CPU", properties.intel_cpu.sparse_weights_decompression_rate(0.8))

# ! [ov:intel_cpu:multi_threading:part0]
# Use one logical processor for inference
compiled_model_1 = core.compile_model(
    model=model,
    device_name=device_name,
    config={properties.inference_num_threads(): 1},
)

# Use logical processors of Efficient-cores for inference on a hybrid platform
compiled_model_2 = core.compile_model(
    model=model,
    device_name=device_name,
    config={
        properties.hint.scheduling_core_type(): properties.hint.SchedulingCoreType.ECORE_ONLY,
    },
)

# Use one logical processor per CPU core for inference when hyper-threading is on
compiled_model_3 = core.compile_model(
    model=model,
    device_name=device_name,
    config={properties.hint.enable_hyper_threading(): False},
)
# ! [ov:intel_cpu:multi_threading:part0]

# ! [ov:intel_cpu:multi_threading:part1]
# Disable CPU thread pinning for inference when the system supports it
compiled_model_4 = core.compile_model(
    model=model,
    device_name=device_name,
    config={properties.hint.enable_cpu_pinning(): False},
)
# ! [ov:intel_cpu:multi_threading:part1]

assert compiled_model_1
assert compiled_model_2
assert compiled_model_3
assert compiled_model_4
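For reference, the `properties.*` helpers used in the configs above resolve to plain string keys, so each `config` dict can equivalently be written with literal strings. This is a sketch only; the key names below are assumed from OpenVINO's property naming convention and should be checked against the installed release:

```python
# Equivalent configs written with literal string keys (assumed names;
# the properties.* helpers above generate these keys for you).
config_one_thread = {"INFERENCE_NUM_THREADS": 1}
config_ecore_only = {"SCHEDULING_CORE_TYPE": "ECORE_ONLY"}
config_no_ht = {"ENABLE_HYPER_THREADING": False}
config_no_pinning = {"ENABLE_CPU_PINNING": False}

# Either form can be passed as the `config` argument to core.compile_model().
print(sorted(config_one_thread) + sorted(config_ecore_only))
```

Using the helper functions is preferable in practice, since typos in literal keys are only caught at compile time on the device.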