[EISW-81712] Rename VPUX to VPU in openvino codebase (#17949)
* Change `VPUX` occurrences to `VPU`
* Switch `VPU` library name
* Change remaining references to `VPUX`
commit 5397e299fd (parent 04cb55c37d), committed via GitHub
@@ -86,7 +86,7 @@ Several execution modes are supported via the ``-d`` flag:

 - ``CPU`` - All calculations are performed on CPU device using CPU Plugin.
 - ``GPU`` - All calculations are performed on GPU device using GPU Plugin.
-- ``VPUX`` - All calculations are performed on VPUX device using VPUX Plugin.
+- ``VPU`` - All calculations are performed on VPU device using VPU Plugin.
 - ``GNA_AUTO`` - GNA hardware is used if available and the driver is installed. Otherwise, the GNA device is emulated in fast-but-not-bit-exact mode.
 - ``GNA_HW`` - GNA hardware is used if available and the driver is installed. Otherwise, an error will occur.
 - ``GNA_SW`` - Deprecated. The GNA device is emulated in fast-but-not-bit-exact mode.
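After this rename, the ``-d`` flag of the speech sample takes ``VPU`` where it previously took ``VPUX``. A hedged sketch of an invocation (the binary name follows the sample's convention; the model and input paths are placeholders, not taken from this diff):

```shell
# Placeholder paths; speech_sample is the sample binary built from main.cpp.
DEVICE="VPU"   # before this commit, the device string was "VPUX"
echo ./speech_sample -m model.xml -i input.ark -d "$DEVICE"
```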
@@ -134,7 +134,7 @@ Usage message:

 -i "<path>" Required. Path(s) to input file(s). Usage for a single file/layer: <input_file.ark> or <input_file.npz>. Example of usage for several files/layers: <layer1>:<port_num1>=<input_file1.ark>,<layer2>:<port_num2>=<input_file2.ark>.
 -m "<path>" Required. Path to an .xml file with a trained model (required if -rg is missing).
 -o "<path>" Optional. Output file name(s) to save scores (inference results). Example of usage for a single file/layer: <output_file.ark> or <output_file.npz>. Example of usage for several files/layers: <layer1>:<port_num1>=<output_file1.ark>,<layer2>:<port_num2>=<output_file2.ark>.
--d "<device>" Optional. Specify a target device to infer on. CPU, GPU, VPUX, GNA_AUTO, GNA_HW, GNA_HW_WITH_SW_FBACK, GNA_SW_FP32, GNA_SW_EXACT and HETERO with combination of GNA as the primary device and CPU as a secondary (e.g. HETERO:GNA,CPU) are supported. The sample will look for a suitable plugin for device specified.
+-d "<device>" Optional. Specify a target device to infer on. CPU, GPU, VPU, GNA_AUTO, GNA_HW, GNA_HW_WITH_SW_FBACK, GNA_SW_FP32, GNA_SW_EXACT and HETERO with combination of GNA as the primary device and CPU as a secondary (e.g. HETERO:GNA,CPU) are supported. The sample will look for a suitable plugin for device specified.
 -pc Optional. Enables per-layer performance report.
 -q "<mode>" Optional. Input quantization mode for GNA: static (default) or user defined (use with -sf).
 -qb "<integer>" Optional. Weight resolution in bits for GNA quantization: 8 or 16 (default)
@@ -152,7 +152,7 @@ Usage message:

 -compile_target "<string>" Optional. Specify GNA compile target generation. May be one of GNA_TARGET_2_0, GNA_TARGET_3_0. By default, generation corresponds to the GNA HW available in the system or the latest fully supported generation by the software. See the GNA Plugin's GNA_COMPILE_TARGET config option description.
 -memory_reuse_off Optional. Disables memory optimizations for compiled model.

-Available target devices: CPU GNA GPU VPUX
+Available target devices: CPU GNA GPU VPU


 .. _model-preparation-speech:

@@ -24,7 +24,7 @@ static const char model_message[] = "Required. Path to an .xml file with a train

 /// @brief message for assigning calculation to device
 static const char target_device_message[] =
-    "Optional. Specify a target device to infer on. CPU, GPU, VPUX, GNA_AUTO, GNA_HW, "
+    "Optional. Specify a target device to infer on. CPU, GPU, VPU, GNA_AUTO, GNA_HW, "
     "GNA_HW_WITH_SW_FBACK, GNA_SW_FP32, "
     "GNA_SW_EXACT and HETERO with combination of GNA as the primary device and CPU"
     " as a secondary (e.g. HETERO:GNA,CPU) are supported. "
@@ -274,7 +274,7 @@ bool parse_and_check_command_line(int argc, char* argv[]) {
         "HETERO:GNA_HW,CPU",
         "HETERO:GNA_SW_EXACT,CPU",
         "HETERO:GNA_SW_FP32,CPU",
-        "VPUX"};
+        "VPU"};

     if (std::find(supportedDevices.begin(), supportedDevices.end(), FLAGS_d) == supportedDevices.end()) {
         throw std::logic_error("Specified device is not supported.");