Modified samples to support ONNX models (#1680)
* Modified samples to support ONNX models
  - Removed manual loading of IR .bin files (hello_classification, object_detection_sample_ssd)
  - Corrected comments
  - Added a note to the README files
@@ -52,6 +52,8 @@ Running the application with the empty list of options yields the usage message
To run the sample, use AlexNet, GoogLeNet, or any other public or pre-trained image classification model. To download the pre-trained models, use the OpenVINO [Model Downloader](@ref omz_tools_downloader_README) or go to [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
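The note above can be illustrated with a command sketch. All file names, paths, and the output directory below are placeholders, not part of this change; `mo.py` is the Model Optimizer entry point shipped with OpenVINO, and `hello_classification` takes a model path, an image path, and a device name:

```sh
# IR workflow: convert the original framework model to .xml + .bin first
python3 mo.py --input_model alexnet.caffemodel --output_dir ./ir
./hello_classification ./ir/alexnet.xml cat.jpg CPU

# ONNX workflow (new with this change): pass the .onnx file directly,
# no Model Optimizer conversion and no separate .bin file
./hello_classification alexnet.onnx cat.jpg CPU
```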
You can do inference of an image using a trained AlexNet network on FPGA with fallback to CPU using the following command:
```sh
```
@@ -90,7 +90,7 @@ int main(int argc, char *argv[]) {
std::cout << ie.GetVersions(FLAGS_d) << std::endl;
// -----------------------------------------------------------------------------------------------------
-        // --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
+        // --------------------------- 2. Read a model in OpenVINO Intermediate Representation (.xml and .bin files) or ONNX (.onnx file) format
slog::info << "Loading network files" << slog::endl;
/** Read network model **/