Integrate UAT fixes (#5517)
* Added info on DockerHub CI Framework
* Feature/azaytsev/change layout (#3295)
* Changes according to feedback comments
* Replaced @ref's with html links
* Fixed links, added a title page for installing from repos and images, fixed formatting issues
* Added links
* minor fix
* Added DL Streamer to the list of components installed by default
* Link fixes
* ovms doc fix (#2988)
* added OpenVINO Model Server
* ovms doc fixes
* Updated openvino_docs.xml
* Edits to MO per findings spreadsheet
* macOS changes per issue spreadsheet
* Fixes from review spreadsheet (mostly IE_DG fixes)
* Consistency changes
* Make doc fixes from last round of review
* integrate changes from baychub/master
* Update Intro.md
* Update Cutting_Model.md
* Fixed link to Customize_Model_Optimizer.md

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
Co-authored-by: baychub <cbay@yahoo.com>
@@ -27,7 +27,7 @@ each sample step at [Integration Steps](../../../../../docs/IE_DG/Integrate_with
 
 ## Running
 
-Run the application with the <code>-h</code> option to see the usage message:
+Run the application with the `-h` option to see the usage message:
 
 ```sh
 python hello_classification.py -h
@@ -68,7 +68,7 @@ To run the sample, you need specify a model and image:
 
 >
 > - The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
 
-You can do inference of an image using a pre-trained model on a GPU using the following command:
+For example, to perform inference of an image using a pre-trained model on a GPU, run the following command:
 
 ```sh
 python hello_classification.py -m <path_to_model>/alexnet.xml -i <path_to_image>/cat.bmp -d GPU
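After running a command like the one above, a classification sample typically converts the network's raw output scores into ranked class probabilities. As a minimal, self-contained sketch of that postprocessing step (the function names and the toy scores below are illustrative, not taken from `hello_classification.py` itself):

```python
# Hypothetical sketch of the top-N postprocessing a classification sample
# performs on the model's output logits. Pure stdlib; no OpenVINO required.
import math


def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def top_n(logits, n=10):
    """Return (class_id, probability) pairs for the n highest-scoring classes."""
    probs = softmax(logits)
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    return ranked[:n]


if __name__ == "__main__":
    # Toy example: four fake class scores; class 3 has the highest logit.
    for class_id, prob in top_n([2.0, 1.0, 0.1, 3.0], n=2):
        print(f"class {class_id}: {prob:.4f}")
```

The real sample additionally maps class IDs to human-readable labels when a labels file is supplied alongside the model.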