@@ -54,6 +54,7 @@ If not specified, throughput is used as the default. To set the hint explicitly,

.. note::

   It is up to the user to ensure the environment in which the benchmark is running is optimized for maximum performance. Otherwise, different results may occur when using the application in different environment settings (such as power optimization settings, processor overclocking, or thermal throttling).

   Stating flags that take only a single option, such as `-m`, multiple times, for example `./benchmark_app -m model.xml -m model2.xml`, results in only the first value being used.

Latency
--------------------
@@ -94,6 +94,8 @@ To run the sample, you need to specify a model and image:
- The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
- Stating flags that take only a single option, such as `-m`, multiple times, for example `./classification_sample_async -m model.xml -m model2.xml`, results in only the first value being used.

Example
+++++++
@@ -185,6 +185,8 @@ Here, the floating point Kaldi-generated reference neural network scores (``dev9
- The sample supports input and output in the NumPy file format (.npz).
- Stating flags that take only a single option, such as `-m`, multiple times, for example `./speech_sample -m model.xml -m model2.xml`, results in only the first value being used.
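
As a sketch of the .npz format mentioned above, NumPy can bundle named arrays into a single archive and read them back as a dict-like object. The file name, array name, and shape here are hypothetical illustrations, not the sample's required layout:

```python
import numpy as np

# Save one named array into an .npz archive (name and shape are hypothetical).
utterance = np.arange(160, dtype=np.float32).reshape(1, 160)
np.savez("input.npz", utterance=utterance)

# Load it back; the archive behaves like a dict keyed by the array names.
data = np.load("input.npz")
print(data["utterance"].shape)  # -> (1, 160)
```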

Sample Output
#############
@@ -82,6 +82,8 @@ To run the sample, you need to specify a model and image:
- The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
- Stating flags that take only a single option, such as `-m`, multiple times, for example `python classification_sample_async.py -m model.xml -m model2.xml`, results in only the last value being used.
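
A minimal sketch of why only the last value survives: Python's standard `argparse` (a common basis for such sample parsers, though not necessarily this sample's exact implementation) overwrites a single-value option each time it appears on the command line:

```python
import argparse

# A single-value option: each repetition overwrites the stored value,
# so only the last -m on the command line is kept.
parser = argparse.ArgumentParser()
parser.add_argument("-m", dest="model", type=str)

args = parser.parse_args(["-m", "model.xml", "-m", "model2.xml"])
print(args.model)  # -> model2.xml
```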

Example
+++++++
@@ -209,6 +209,8 @@ You can do inference on Intel® Processors with the GNA co-processor (or emulati
- Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using the :doc:`Model Optimizer tool <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
- The sample supports input and output in the NumPy file format (.npz).
- Stating flags that take only a single option, such as `-m`, multiple times, for example `python speech_sample.py -m model.xml -m model2.xml`, results in only the last value being used.

Sample Output
#############
@@ -50,6 +50,7 @@ If not specified, throughput is used as the default. To set the hint explicitly,

.. note::

   It is up to the user to ensure the environment in which the benchmark is running is optimized for maximum performance. Otherwise, different results may occur when using the application in different environment settings (such as power optimization settings, processor overclocking, or thermal throttling).

   Stating flags that take only a single option, such as `-m`, multiple times, for example `benchmark_app -m model.xml -m model2.xml`, results in only the last value being used.
Latency
@@ -70,6 +70,9 @@ For example, to compile a blob for inference on an Intel® Neural Compute Stick
./compile_tool -m <path_to_model>/model_name.xml -d CPU
Stating flags that take only a single option, such as `-m`, multiple times, for example `./compile_tool -m model.xml -m model2.xml`, results in only the first value being used.

Import a Compiled Blob File to Your Application
+++++++++++++++++++++++++++++++++++++++++++++++