@@ -82,6 +82,8 @@ To run the sample, you need to specify a model and image:
- The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
- Specifying a flag that accepts only a single value, such as `-m`, multiple times (for example, `python classification_sample_async.py -m model.xml -m model2.xml`) results in only the last value being used.
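The last-value-wins behavior described above follows from how `argparse` stores single-value options: each occurrence overwrites the previous one unless `action="append"` is requested. A minimal sketch (the parser below is illustrative, not the sample's actual argument parser):

```python
import argparse

# Hypothetical parser mirroring the sample's -m/--model flag.
parser = argparse.ArgumentParser()
parser.add_argument("-m", "--model")

# Repeating a single-value flag: argparse overwrites the stored
# value on each occurrence, so only the last one survives.
args = parser.parse_args(["-m", "model.xml", "-m", "model2.xml"])
print(args.model)  # model2.xml
```

If collecting every occurrence were desired, `add_argument("-m", action="append")` would accumulate the values into a list instead.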
Example
+++++++
@@ -209,6 +209,8 @@ You can do inference on Intel® Processors with the GNA co-processor (or emulati
- Before running the sample with a trained model, make sure the model is converted to the intermediate representation (IR) format (\*.xml + \*.bin) using the :doc:`Model Optimizer tool <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
- The sample supports input and output in NumPy file format (.npz).
- Specifying a flag that accepts only a single value, such as `-m`, multiple times (for example, `python classification_sample_async.py -m model.xml -m model2.xml`) results in only the last value being used.
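The .npz input/output mentioned in the list above can be produced and inspected with plain NumPy. A minimal sketch (the array name `input` and shape are illustrative, not taken from the sample):

```python
import numpy as np

# Pack an input tensor into a .npz archive under a named key.
data = np.zeros((1, 3, 227, 227), dtype=np.float32)
np.savez("input.npz", input=data)

# Read it back; np.load on a .npz returns a dict-like archive
# keyed by the names given to savez.
loaded = np.load("input.npz")
print(loaded["input"].shape)  # (1, 3, 227, 227)
```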
Sample Output
#############