openvino/samples/cpp/classification_sample_async/classification_sample_async.h
Vladimir Dudnik 5b25dbee22 ov2.0 IE samples modification (#8340)
* ov2.0 IE samples modification

apply code style

turn off clang style check for headers order

unify samples a bit

add yuv nv12 reader to format_reader, hello_nv12 sample

hello_reshape_ssd ov2.0

* sync with PR 8629 preprocessing api changes

* fix for slog << vector<int>

* add operator<< for ov::Version from PR-8687

* Update samples/cpp/hello_nv12_input_classification/main.cpp

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

* apply code style

* change according to review comments

* add const qualifier

* apply code style

* std::ostream for old inference engine version to make VPU plugin tests happy

* apply code style

* revert changes in print version for old api samples

* keep inference_engine.hpp for not ov2.0 yet samples

* fix merge artifacts

* fix compilation

* apply code style

* Fixed classification sample test

* Revert changes in hello_reshape_ssd sample

* rebase to master, sync with PR-9054

* fix issues found by C++ tests

* rebased and sync with PR-9051

* fix test result parsers for classification tests (except unicode one)

* fix mismatches after merge

* rebase and sync with PR-9144

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>
Co-authored-by: antonrom23 <anton.romanov@intel.com>
2021-12-13 11:30:58 +03:00


// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#pragma once

#include <iostream>
#include <string>
#include <vector>

#include "gflags/gflags.h"

/// @brief message for help argument
static const char help_message[] = "Print a usage message.";
/// @brief message for model argument
static const char model_message[] = "Required. Path to an .xml file with a trained model.";
/// @brief message for images argument
static const char image_message[] =
    "Required. Path to a folder with images or path to image files: a .ubyte file for LeNet"
    " and a .bmp file for the other networks.";
/// @brief message for assigning cnn calculation to device
static const char target_device_message[] =
    "Optional. Specify the target device to infer on (the list of available devices is shown below). "
    "Default value is CPU. Use \"-d HETERO:<comma_separated_devices_list>\" format to specify the HETERO plugin. "
    "The sample will look for a suitable plugin for the specified device.";
/// @brief Define flag for showing help message <br>
DEFINE_bool(h, false, help_message);
/// @brief Define parameter for setting the image file path <br>
/// It is a required parameter
DEFINE_string(i, "", image_message);
/// @brief Define parameter for setting the model file path <br>
/// It is a required parameter
DEFINE_string(m, "", model_message);
/// @brief Define parameter for the target device to infer on <br>
/// It is an optional parameter
DEFINE_string(d, "CPU", target_device_message);
/**
 * @brief This function shows a help message
 */
static void showUsage() {
    std::cout << std::endl;
    std::cout << "classification_sample_async [OPTION]" << std::endl;
    std::cout << "Options:" << std::endl;
    std::cout << std::endl;
    std::cout << "    -h                      " << help_message << std::endl;
    std::cout << "    -m \"<path>\"             " << model_message << std::endl;
    std::cout << "    -i \"<path>\"             " << image_message << std::endl;
    std::cout << "    -d \"<device>\"           " << target_device_message << std::endl;
}
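
For context, below is a minimal, hypothetical sketch of how a sample's entry point could consume these flags: gflags populates FLAGS_h, FLAGS_m, FLAGS_i and FLAGS_d from the command line, and the required -m and -i arguments are checked before inference would start. The parse_and_check_command_line helper and the main() body are illustrative only and are not part of this header or of the sample's actual main.cpp.

// Hypothetical usage sketch, assuming this header is included so that the
// FLAGS_* variables and showUsage() are in scope.
#include <stdexcept>

static bool parse_and_check_command_line(int argc, char* argv[]) {
    // Let gflags fill FLAGS_* from argv; the last argument removes parsed flags from argv.
    gflags::ParseCommandLineNonHelpFlags(&argc, &argv, true);
    if (FLAGS_h) {
        showUsage();
        return false;
    }
    if (FLAGS_m.empty()) {
        showUsage();
        throw std::logic_error("Model is required but not set. Please set -m option.");
    }
    if (FLAGS_i.empty()) {
        showUsage();
        throw std::logic_error("Input is required but not set. Please set -i option.");
    }
    return true;
}

int main(int argc, char* argv[]) {
    try {
        if (!parse_and_check_command_line(argc, argv)) {
            return 0;  // -h was passed, usage has already been printed
        }
        std::cout << "Model: " << FLAGS_m << ", input: " << FLAGS_i << ", device: " << FLAGS_d << std::endl;
        // ... model loading and inference would follow here ...
    } catch (const std::exception& ex) {
        std::cerr << ex.what() << std::endl;
        return 1;
    }
    return 0;
}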