change hello reshape ssd sample, port to 22.2 (#12657) (#13068)

* change hello reshape ssd sample (#12657)

ssdlite_mobilenet_v2 changed to mobilenet-ssd, as per J. Espinoza's request, to fix 84516

* one more correction of mobilenet
This commit is contained in:
Karol Blaszczak
2022-09-19 14:47:15 +02:00
committed by GitHub
parent 4451ef7d42
commit 48ea77df85


@@ -55,19 +55,19 @@ python -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
 2. Download a pre-trained model:
 ```
-omz_downloader --name ssdlite_mobilenet_v2
+omz_downloader --name mobilenet-ssd
 ```
 3. If a model is not in the IR or ONNX format, it must be converted. You can do this using the model converter:
 ```
-omz_converter --name ssdlite_mobilenet_v2
+omz_converter --name mobilenet-ssd
 ```
-4. Perform inference of `banana.jpg` using `ssdlite_mobilenet_v2` model on a `GPU`, for example:
+4. Perform inference of `banana.jpg` using `mobilenet-ssd` model on a `GPU`, for example:
 ```
-python hello_reshape_ssd.py ssdlite_mobilenet_v2.xml banana.jpg GPU
+python hello_reshape_ssd.py mobilenet-ssd.xml banana.jpg GPU
 ```
 ## Sample Output
@@ -76,7 +76,7 @@ The sample application logs each step in a standard output stream and creates an
 ```
 [ INFO ] Creating OpenVINO Runtime Core
-[ INFO ] Reading the model: C:/test_data/models/ssdlite_mobilenet_v2.xml
+[ INFO ] Reading the model: C:/test_data/models/mobilenet-ssd.xml
 [ INFO ] Reshaping the model to the height and width of the input image
 [ INFO ] Loading the model to the plugin
 [ INFO ] Starting inference in synchronous mode
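Taken together, the updated README steps amount to the workflow below. This is a minimal sketch: it only assembles and prints the three commands (so it runs without `openvino-dev`, a GPU, or downloaded weights); the model, image, and device names are exactly the ones used in the updated README.

```shell
#!/bin/sh
# Sketch of the updated README workflow. Commands are assembled and
# printed rather than executed, so this runs without openvino-dev,
# a GPU, or downloaded model weights.
MODEL="mobilenet-ssd"   # model name after this change
IMAGE="banana.jpg"
DEVICE="GPU"

DOWNLOAD_CMD="omz_downloader --name ${MODEL}"
CONVERT_CMD="omz_converter --name ${MODEL}"
INFER_CMD="python hello_reshape_ssd.py ${MODEL}.xml ${IMAGE} ${DEVICE}"

# Print each step; with openvino-dev installed, run the commands
# directly instead of echoing them.
echo "$DOWNLOAD_CMD"
echo "$CONVERT_CMD"
echo "$INFER_CMD"
```

Swapping the model is then a one-variable change: only `MODEL` differs between the old `ssdlite_mobilenet_v2` workflow and the new `mobilenet-ssd` one.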