Removed obsolete code snippets (#11061)

* Removed obsolete code snippets

* NCC style

* Fixed NCC for BA
Ilya Lavrenov
2022-03-21 09:27:43 +03:00
committed by GitHub
parent c3b05978e2
commit cf8ccb590a
42 changed files with 75 additions and 397 deletions


@@ -23,7 +23,7 @@ execute_process(
ERROR_VARIABLE error_var)
if(NOT clang_find_result EQUAL "0")
message(WARNING "Please, install libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "Please, install clang-[N] libclang-[N]-dev package (required for ncc naming style check)")
message(WARNING "find_package(Clang) output: ${output_var}")
message(WARNING "find_package(Clang) error: ${error_var}")
set(ENABLE_NCC_STYLE OFF)
@@ -107,8 +107,11 @@ function(ov_ncc_naming_style)
list(APPEND NCC_STYLE_ADDITIONAL_INCLUDE_DIRECTORIES "${NCC_STYLE_SOURCE_DIRECTORY}")
# without it sources with same name from different directories will map to same .ncc_style target
file(RELATIVE_PATH source_dir_rel ${CMAKE_SOURCE_DIR} ${NCC_STYLE_SOURCE_DIRECTORY})
foreach(source IN LISTS sources)
set(output_file "${ncc_style_bin_dir}/${source}.ncc_style")
set(output_file "${ncc_style_bin_dir}/${source_dir_rel}/${source}.ncc_style")
set(full_source_path "${NCC_STYLE_SOURCE_DIRECTORY}/${source}")
add_custom_command(


@@ -116,5 +116,5 @@ After the build you can use path to your extension library to load your extensio
## See Also
* [OpenVINO Transformations](./ov_transformations.md)
* [Using Inference Engine Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Using OpenVINO Runtime Samples](../OV_Runtime_UG/Samples_Overview.md)
* [Hello Shape Infer SSD sample](../../samples/cpp/hello_reshape_ssd/README.md)


@@ -259,7 +259,7 @@ Result model depends on different factors:
Information about layer precision is stored in the performance counters that are
available from the Inference Engine API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on CPU Plugin looks as follows:
available from the OpenVINO Runtime API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on CPU Plugin looks as follows:
| layerName | execStatus | layerType | execType | realTime (ms) | cpuTime (ms) |


@@ -62,7 +62,7 @@ For 8-bit integer computations, a model must be quantized. Quantized models can
## Performance Counters
Information about layer precision is stored in the performance counters that are
available from the Inference Engine API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on [CPU Plugin](supported_plugins/CPU.md) looks as follows:
available from the OpenVINO Runtime API. For example, the part of performance counters table for quantized [TensorFlow* implementation of ResNet-50](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/resnet-50-tf) model inference on [CPU Plugin](supported_plugins/CPU.md) looks as follows:
| layerName | execStatus | layerType | execType | realTime (ms) | cpuTime (ms) |


@@ -1,14 +0,0 @@
# OpenVINO™ Python* Package
OpenVINO™ Python\* package includes types to measure model and calibrate to low precision.
The OpenVINO™ Python\* package available in the `<INSTALL_DIR>/python/python3.X` directory.
The OpenVINO™ Python\* package includes the following sub-packages:
- [openvino.inference_engine](../../src/bindings/python/docs/api_overview.md) - Python\* wrapper on OpenVINO™ Inference Engine.
- `openvino.tools.accuracy_checker` - Measure accuracy.
- `openvino.tools.benchmark` - Measure latency and throughput.
## See Also
* [Integrate with Customer Application New API](integrate_with_your_application.md)


@@ -2,7 +2,7 @@
@sphinxdirective
.. _deep learning inference engine:
.. _deep learning openvino runtime:
.. toctree::
:maxdepth: 1
@@ -45,6 +45,6 @@ The scheme below illustrates the typical workflow for deploying a trained deep l
<iframe allowfullscreen mozallowfullscreen msallowfullscreen oallowfullscreen webkitallowfullscreen height="315" width="100%"
src="https://www.youtube.com/embed/e6R13V8nbak">
</iframe>
* - **Inference Engine Concept**. Duration: 3:43
* - **OpenVINO Runtime Concept**. Duration: 3:43
@endsphinxdirective


@@ -73,7 +73,7 @@
openvino_docs_Extensibility_UG_Intro
openvino_docs_transformations
Inference Engine Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
groupie_dev_api
Plugin Transformation Pipeline <openvino_docs_IE_DG_plugin_transformation_pipeline>


@@ -219,7 +219,7 @@ To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalli
.. dropdown:: Additional Resources
* Converting models for use with OpenVINO™: :ref:`Model Optimizer Developer Guide <deep learning model optimizer>`
* Writing your own OpenVINO™ applications: :ref:`OpenVINO™ Runtime User Guide <deep learning inference engine>`
* Writing your own OpenVINO™ applications: :ref:`OpenVINO™ Runtime User Guide <deep learning openvino runtime>`
* Sample applications: :ref:`OpenVINO™ Toolkit Samples Overview <code samples>`
* Pre-trained deep learning models: :ref:`Overview of OpenVINO™ Toolkit Pre-Trained Models <model zoo>`
* IoT libraries and code samples in the GitHUB repository: `Intel® IoT Developer Kit`_


@@ -143,7 +143,7 @@ To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalli
.. dropdown:: Additional Resources
* Converting models for use with OpenVINO™: :ref:`Model Optimizer Developer Guide <deep learning model optimizer>`
* Writing your own OpenVINO™ applications: :ref:`OpenVINO™ Runtime User Guide <deep learning inference engine>`
* Writing your own OpenVINO™ applications: :ref:`OpenVINO™ Runtime User Guide <deep learning openvino runtime>`
* Sample applications: :ref:`OpenVINO™ Toolkit Samples Overview <code samples>`
* Pre-trained deep learning models: :ref:`Overview of OpenVINO™ Toolkit Pre-Trained Models <model zoo>`
* IoT libraries and code samples in the GitHUB repository: `Intel® IoT Developer Kit`_


@@ -180,7 +180,7 @@ To uninstall the toolkit, follow the steps on the [Uninstalling page](uninstalli
.. dropdown:: Additional Resources
* Converting models for use with OpenVINO™: :ref:`Model Optimizer Developer Guide <deep learning model optimizer>`
* Writing your own OpenVINO™ applications: :ref:`OpenVINO™ Runtime User Guide <deep learning inference engine>`
* Writing your own OpenVINO™ applications: :ref:`OpenVINO™ Runtime User Guide <deep learning openvino runtime>`
* Sample applications: :ref:`OpenVINO™ Toolkit Samples Overview <code samples>`
* Pre-trained deep learning models: :ref:`Overview of OpenVINO™ Toolkit Pre-Trained Models <model zoo>`
* IoT libraries and code samples in the GitHUB repository: `Intel® IoT Developer Kit`_


@@ -4,6 +4,13 @@
set(TARGET_NAME ie_docs_snippets)
if(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-Wno-unused-variable)
if(CMAKE_COMPILER_IS_GNUCXX)
ie_add_compiler_flags(-Wno-unused-variable -Wno-unused-but-set-variable)
endif()
endif()
file(GLOB SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/*.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/gpu/*.cpp")
@@ -57,9 +64,9 @@ endif()
# remove OpenCV related sources
if (ENABLE_OPENCV)
find_package(OpenCV QUIET)
find_package(OpenCV QUIET)
else()
set(OpenCV_FOUND FALSE)
set(OpenCV_FOUND OFF)
endif()
if(NOT OpenCV_FOUND)
@@ -102,30 +109,23 @@ if(ENABLE_OV_ONNX_FRONTEND)
target_link_libraries(${TARGET_NAME} PRIVATE openvino_onnx_frontend)
endif()
if(NOT MSVC)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-unused-variable)
if(CMAKE_COMPILER_IS_GNUCXX)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-unused-but-set-variable)
endif()
endif()
target_link_libraries(${TARGET_NAME} PRIVATE openvino::runtime openvino::runtime::dev)
ov_ncc_naming_style(FOR_TARGET "${TARGET_NAME}"
SOURCE_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}")
#
# Example
#
set(TARGET_NAME "ov_integration_snippet")
# [cmake:integration_example]
cmake_minimum_required(VERSION 3.10)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenVINO REQUIRED)
add_executable(${TARGET_NAME} src/main.cpp)
target_link_libraries(${TARGET_NAME} PRIVATE openvino::runtime)
# [cmake:integration_example]
if(NOT MSVC)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-unused-variable)
if(CMAKE_COMPILER_IS_GNUCXX)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-unused-but-set-variable)
endif()
endif()


@@ -1,13 +0,0 @@
#include <ie_core.hpp>
#include <openvino/core/model.hpp>
#include <openvino/pass/visualize_tree.hpp>
int main() {
using namespace InferenceEngine;
//! [part0]
std::shared_ptr<ov::Model> model;
// ...
ov::pass::VisualizeTree("after.png").run_on_model(model); // Visualize the nGraph function to an image
//! [part0]
return 0;
}


@@ -1,13 +0,0 @@
#include <ie_core.hpp>
#include <ngraph/function.hpp>
int main() {
using namespace InferenceEngine;
//! [part1]
std::shared_ptr<ngraph::Function> nGraph;
// ...
CNNNetwork network(nGraph);
network.serialize("test_ir.xml", "test_ir.bin");
//! [part1]
return 0;
}


@@ -1,10 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part0]
InferenceEngine::Core core;
std::vector<std::string> availableDevices = core.GetAvailableDevices();
//! [part0]
return 0;
}
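The snippet deleted above queried available devices through the legacy `InferenceEngine::Core` API. For reference, a minimal OpenVINO 2.0 counterpart — a sketch only, assuming the `openvino/runtime` headers from the 2022.1 API are installed — looks like:

```cpp
#include <openvino/runtime/core.hpp>

#include <iostream>
#include <string>
#include <vector>

int main() {
    // OpenVINO 2.0 replacement for InferenceEngine::Core::GetAvailableDevices()
    ov::Core core;
    const std::vector<std::string> devices = core.get_available_devices();
    for (const std::string& device : devices) {
        std::cout << device << std::endl;
    }
    return 0;
}
```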


@@ -1,10 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part1]
InferenceEngine::Core core;
bool dumpDotFile = core.GetConfig("HETERO", HETERO_CONFIG_KEY(DUMP_GRAPH_DOT)).as<bool>();
//! [part1]
return 0;
}


@@ -1,10 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part2]
InferenceEngine::Core core;
std::string cpuDeviceName = core.GetMetric("GPU", METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>();
//! [part2]
return 0;
}
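The legacy `GetMetric(METRIC_KEY(FULL_DEVICE_NAME))` call deleted above maps to the typed property API in OpenVINO 2.0. A sketch, assuming the 2022.1 `openvino/runtime` headers:

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/properties.hpp>

#include <string>

int main() {
    // OpenVINO 2.0 replacement for GetMetric(METRIC_KEY(FULL_DEVICE_NAME));
    // the typed property avoids the Parameter::as<T>() cast
    ov::Core core;
    std::string device_name = core.get_property("GPU", ov::device::full_name);
    return 0;
}
```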


@@ -1,12 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part3]
InferenceEngine::Core core;
auto network = core.ReadNetwork("sample.xml");
auto exeNetwork = core.LoadNetwork(network, "CPU");
auto nireq = exeNetwork.GetMetric(METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS)).as<unsigned int>();
//! [part3]
return 0;
}
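The obsolete snippet above read `OPTIMAL_NUMBER_OF_INFER_REQUESTS` from an `ExecutableNetwork`; in OpenVINO 2.0 the same value is a typed property of `ov::CompiledModel`. A sketch, reusing the hypothetical `sample.xml` model from the deleted code:

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/properties.hpp>

int main() {
    // OpenVINO 2.0 replacement for GetMetric(OPTIMAL_NUMBER_OF_INFER_REQUESTS)
    ov::Core core;
    ov::CompiledModel compiled_model = core.compile_model("sample.xml", "CPU");
    uint32_t nireq = compiled_model.get_property(ov::optimal_number_of_infer_requests);
    (void)nireq;
    return 0;
}
```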


@@ -1,12 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part4]
InferenceEngine::Core core;
auto network = core.ReadNetwork("sample.xml");
auto exeNetwork = core.LoadNetwork(network, "MYRIAD");
float temperature = exeNetwork.GetMetric(METRIC_KEY(DEVICE_THERMAL)).as<float>();
//! [part4]
return 0;
}


@@ -1,12 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part5]
InferenceEngine::Core core;
auto network = core.ReadNetwork("sample.xml");
auto exeNetwork = core.LoadNetwork(network, "CPU");
auto ncores = exeNetwork.GetConfig(PluginConfigParams::KEY_CPU_THREADS_NUM).as<std::string>();
//! [part5]
return 0;
}
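The deleted snippet above read `KEY_CPU_THREADS_NUM` as a string; the OpenVINO 2.0 property API returns a typed value instead. A sketch under the same assumptions (hypothetical `sample.xml`, 2022.1 headers):

```cpp
#include <openvino/runtime/core.hpp>
#include <openvino/runtime/properties.hpp>

int main() {
    // OpenVINO 2.0 replacement for GetConfig(KEY_CPU_THREADS_NUM)
    ov::Core core;
    ov::CompiledModel compiled_model = core.compile_model("sample.xml", "CPU");
    int32_t num_threads = compiled_model.get_property(ov::inference_num_threads);
    (void)num_threads;
    return 0;
}
```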


@@ -1,16 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part1]
Core ie;
auto netReader = ie.ReadNetwork("sample.xml");
InferenceEngine::InputsDataMap info(netReader.getInputsInfo());
auto& inputInfoFirst = info.begin()->second;
for (auto& it : info) {
it.second->setPrecision(Precision::U8);
}
//! [part1]
return 0;
}


@@ -1,14 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part2]
//Lock Intel MSS surface
mfxFrameSurface1 *frame_in; //Input MSS surface.
mfxFrameAllocator* pAlloc = &m_mfxCore.FrameAllocator();
pAlloc->Lock(pAlloc->pthis, frame_in->Data.MemId, &frame_in->Data);
//Inference Engine code
//! [part2]
return 0;
}


@@ -1,22 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part3]
InferenceEngine::SizeVector dims_src = {
1 /* batch, N*/,
(size_t) frame_in->Info.Height /* Height */,
(size_t) frame_in->Info.Width /* Width */,
3 /*Channels,*/,
};
InferenceEngine::TensorDesc desc(InferenceEngine::Precision::U8, dims_src, InferenceEngine::NHWC);
/* wrapping the surface data, as RGB is interleaved, need to pass only ptr to the R, notice that this wouldnt work with planar formats as these are 3 separate planes/pointers*/
InferenceEngine::TBlob<uint8_t>::Ptr p = InferenceEngine::make_shared_blob<uint8_t>( desc, (uint8_t*) frame_in->Data.R);
inferRequest.SetBlob("input", p);
inferRequest.Infer();
//Make sure to unlock the surface upon inference completion, to return the ownership back to the Intel MSS
pAlloc->Unlock(pAlloc->pthis, frame_in->Data.MemId, &frame_in->Data);
//! [part3]
return 0;
}


@@ -1,20 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part4]
InferenceEngine::SizeVector dims_src = {
1 /* batch, N*/,
3 /*Channels,*/,
(size_t) frame_in->Info.Height /* Height */,
(size_t) frame_in->Info.Width /* Width */,
};
TensorDesc desc(InferenceEngine::Precision::U8, dims_src, InferenceEngine::NCHW);
/* wrapping the RGBP surface data*/
InferenceEngine::TBlob<uint8_t>::Ptr p = InferenceEngine::make_shared_blob<uint8_t>( desc, (uint8_t*) frame_in->Data.R);
inferRequest.SetBlob("input", p);
// …
//! [part4]
return 0;
}


@@ -1,30 +0,0 @@
#include <opencv2/core/core.hpp>
#include <ie_core.hpp>
int main() {
InferenceEngine::InferRequest inferRequest;
//! [part5]
cv::Mat frame(cv::Size(100, 100), CV_8UC3); // regular CV_8UC3 image, interleaved
// creating blob that wraps the OpenCVs Mat
// (the data it points should persists until the blob is released):
InferenceEngine::SizeVector dims_src = {
1 /* batch, N*/,
(size_t)frame.rows /* Height */,
(size_t)frame.cols /* Width */,
(size_t)frame.channels() /*Channels,*/,
};
InferenceEngine::TensorDesc desc(InferenceEngine::Precision::U8, dims_src, InferenceEngine::NHWC);
InferenceEngine::TBlob<uint8_t>::Ptr p = InferenceEngine::make_shared_blob<uint8_t>( desc, (uint8_t*)frame.data, frame.step[0] * frame.rows);
inferRequest.SetBlob("input", p);
inferRequest.Infer();
// …
// similarly, you can wrap the output tensor (lets assume it is FP32)
// notice that the output should be also explicitly stated as NHWC with setLayout
auto output_blob = inferRequest.GetBlob("output");
const float* output_data = output_blob->buffer().as<float*>();
auto dims = output_blob->getTensorDesc().getDims();
cv::Mat res (dims[2], dims[3], CV_32FC3, (void *)output_data);
//! [part5]
return 0;
}
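The removed snippet above wrapped an OpenCV `cv::Mat` in an `InferenceEngine::TBlob`. The zero-copy pattern survives in OpenVINO 2.0 via the `ov::Tensor` host-pointer constructor; the following is a sketch, with the `"input"` tensor name and the request origin assumed rather than taken from a real model:

```cpp
#include <openvino/runtime/infer_request.hpp>
#include <openvino/runtime/tensor.hpp>
#include <opencv2/core/core.hpp>

int main() {
    ov::InferRequest infer_request;  // in real code, obtained from ov::CompiledModel
    cv::Mat frame(cv::Size(100, 100), CV_8UC3);  // regular interleaved image
    // Wrap the cv::Mat data as an NHWC u8 tensor without copying;
    // the Mat must stay alive for as long as the tensor is used.
    ov::Tensor input(ov::element::u8,
                     ov::Shape{1, (size_t)frame.rows, (size_t)frame.cols, (size_t)frame.channels()},
                     frame.data);
    infer_request.set_tensor("input", input);
    infer_request.infer();
    return 0;
}
```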


@@ -1,24 +0,0 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
//! [part6]
InferenceEngine::Core ie;
auto network = ie.ReadNetwork("Model.xml", "Model.bin");
InferenceEngine::InputsDataMap input_info(network.getInputsInfo());
auto executable_network = ie.LoadNetwork(network, "GPU");
auto infer_request = executable_network.CreateInferRequest();
for (auto & item : input_info) {
std::string input_name = item.first;
auto input = infer_request.GetBlob(input_name);
/** Lock/Fill input tensor with data **/
unsigned char* data = input->buffer().as<PrecisionTrait<Precision::U8>::value_type*>();
// ...
}
infer_request.Infer();
//! [part6]
return 0;
}


@@ -1,15 +0,0 @@
#include <ie_core.hpp>
int main() {
InferenceEngine::Core core;
auto network0 = core.ReadNetwork("sample.xml");
auto network1 = core.ReadNetwork("sample.xml");
//! [part7]
//these two networks go thru same plugin (aka device) and their requests will not overlap.
auto executable_network0 = core.LoadNetwork(network0, "CPU",
{{InferenceEngine::PluginConfigParams::KEY_EXCLUSIVE_ASYNC_REQUESTS, InferenceEngine::PluginConfigParams::YES}});
auto executable_network1 = core.LoadNetwork(network1, "GPU",
{{InferenceEngine::PluginConfigParams::KEY_EXCLUSIVE_ASYNC_REQUESTS, InferenceEngine::PluginConfigParams::YES}});
//! [part7]
return 0;
}


@@ -12,11 +12,11 @@ class AcceleratorSyncRequest : public IInferRequestInternal {
public:
using Ptr = std::shared_ptr<AcceleratorSyncRequest>;
void Preprocess();
void WriteToDevice();
void RunOnDevice();
void ReadFromDevice();
void PostProcess();
void preprocess();
void write_to_device();
void run_on_device();
void read_from_device();
void post_process();
};
// ! [async_infer_request:define_pipeline]
@@ -40,19 +40,19 @@ class AcceleratorAsyncInferRequest : public AsyncInferRequestThreadSafeDefault {
// Five pipeline stages of synchronous infer request are run by different executors
_pipeline = {
{ _preprocessExecutor , [this] {
_accSyncRequest->Preprocess();
_accSyncRequest->preprocess();
}},
{ _writeToDeviceExecutor , [this] {
_accSyncRequest->WriteToDevice();
_accSyncRequest->write_to_device();
}},
{ _runOnDeviceExecutor , [this] {
_accSyncRequest->RunOnDevice();
_accSyncRequest->run_on_device();
}},
{ _readFromDeviceExecutor , [this] {
_accSyncRequest->ReadFromDevice();
_accSyncRequest->read_from_device();
}},
{ _postProcessExecutor , [this] {
_accSyncRequest->PostProcess();
_accSyncRequest->post_process();
}},
};
}


@@ -1,36 +0,0 @@
#include <ie_core.hpp>
int main() {
InferenceEngine::Core core;
int numRequests = 42;
int i = 1;
auto network = core.ReadNetwork("sample.xml");
auto executable_network = core.LoadNetwork(network, "CPU");
//! [part0]
struct Request {
InferenceEngine::InferRequest inferRequest;
int frameidx;
};
//! [part0]
//! [part1]
// numRequests is the number of frames (max size, equal to the number of VPUs in use)
std::vector<Request> request(numRequests);
//! [part1]
//! [part2]
// initialize infer request pointer Consult IE API for more detail.
request[i].inferRequest = executable_network.CreateInferRequest();
//! [part2]
//! [part3]
// Run inference
request[i].inferRequest.StartAsync();
//! [part3]
//! [part4]
request[i].inferRequest.SetCompletionCallback([] () {});
//! [part4]
return 0;
}


@@ -1,38 +0,0 @@
#include <ie_core.hpp>
#include "ngraph/opsets/opset.hpp"
#include "ngraph/opsets/opset3.hpp"
int main() {
//! [part0]
using namespace std;
using namespace ngraph;
auto arg0 = make_shared<opset3::Parameter>(element::f32, Shape{7});
auto arg1 = make_shared<opset3::Parameter>(element::f32, Shape{7});
// Create an 'Add' operation with two inputs 'arg0' and 'arg1'
auto add0 = make_shared<opset3::Add>(arg0, arg1);
auto abs0 = make_shared<opset3::Abs>(add0);
// Create a node whose inputs/attributes will be specified later
auto acos0 = make_shared<opset3::Acos>();
// Create a node using opset factories
auto add1 = shared_ptr<Node>(get_opset3().create("Add"));
// Set inputs to nodes explicitly
acos0->set_argument(0, add0);
add1->set_argument(0, acos0);
add1->set_argument(1, abs0);
// Create a graph with one output (add1) and four inputs (arg0, arg1)
auto ng_function = make_shared<Function>(OutputVector{add1}, ParameterVector{arg0, arg1});
// Run shape inference on the nodes
ng_function->validate_nodes_and_infer_types();
//! [part0]
//! [part1]
InferenceEngine::CNNNetwork net (ng_function);
//! [part1]
return 0;
}
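The deleted graph-building snippet above used `ngraph::opset3` and wrapped the result in a `CNNNetwork`. Its OpenVINO 2.0 counterpart builds an `ov::Model` directly, which needs no wrapping before compilation. A sketch, assuming the 2022.1 `ov::opset8` headers:

```cpp
#include <openvino/core/model.hpp>
#include <openvino/opsets/opset8.hpp>

#include <memory>

int main() {
    // Build arg0 + arg1 followed by Abs, as in the deleted nGraph snippet
    auto arg0 = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::Shape{7});
    auto arg1 = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::Shape{7});
    auto add0 = std::make_shared<ov::opset8::Add>(arg0, arg1);
    auto abs0 = std::make_shared<ov::opset8::Abs>(add0);
    // ov::Model replaces both ngraph::Function and InferenceEngine::CNNNetwork
    auto model = std::make_shared<ov::Model>(ov::OutputVector{abs0},
                                             ov::ParameterVector{arg0, arg1});
    // Run shape inference on the nodes
    model->validate_nodes_and_infer_types();
    return 0;
}
```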


@@ -233,7 +233,7 @@ macro(ie_add_sample)
endif()
if(COMMAND ov_ncc_naming_style AND NOT c_sample)
ov_ncc_naming_style(FOR_TARGET "${IE_SAMPLE_NAME}"
SOURCE_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}")
SOURCE_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}")
endif()
endmacro()


@@ -10,11 +10,7 @@ Performance can be measured for two inference modes: latency- and throughput-ori
Upon start-up, the application reads command-line parameters and loads a network and inputs (images/binary files) to the specified device.
**NOTE**: By default, Inference Engine samples, tools and demos expect input with BGR channels order.
If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application
or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified.
For more information about the argument, refer to **When to Reverse Input Channels** section of
[Converting a Model](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model.md).
> **NOTE**: By default, OpenVINO Runtime samples, tools and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model](../../../docs/MO_DG/prepare_model/convert_model/Converting_Model.md).
Device-specific execution parameters (number of streams, threads, and so on) can be either explicitly specified through the command line
or left default. In the last case, the sample logic will select the values for the optimal throughput.
@@ -156,7 +152,7 @@ If a model has mixed input types, input folder should contain all required files
To run the tool, you can use [public](@ref omz_models_group_public) or [Intel's](@ref omz_models_group_intel) pre-trained models from the Open Model Zoo. The models can be downloaded using the [Model Downloader](@ref omz_tools_downloader).
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
> **NOTE**: Before running the tool with a trained model, make sure the model is converted to the OpenVINO IR (\*.xml + \*.bin) using the [Model Optimizer tool](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
>
> The sample accepts models in ONNX format (.onnx) that do not require preprocessing.
@@ -173,7 +169,7 @@ This section provides step-by-step instructions on how to run the Benchmark Tool
```sh
python3 downloader.py --name googlenet-v1 -o <models_dir>
```
2. Convert the model to the Inference Engine IR format. Run the Model Optimizer using the `mo` command with the path to the model, model format (which must be FP32 for CPU and FPG) and output directory to generate the IR files:
2. Convert the model to the OpenVINO IR format. Run the Model Optimizer using the `mo` command with the path to the model, model format (which must be FP32 for CPU and FPG) and output directory to generate the IR files:
```sh
mo --input_model <models_dir>/public/googlenet-v1/googlenet-v1.caffemodel --data_type FP32 --output_dir <ir_dir>
```
@@ -243,6 +239,6 @@ Below are fragments of sample output static and dynamic networks:
```
## See Also
* [Using Inference Engine Samples](../../../docs/OV_Runtime_UG/Samples_Overview.md)
* [Using OpenVINO Runtime Samples](../../../docs/OV_Runtime_UG/Samples_Overview.md)
* [Model Optimizer](../../../docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Model Downloader](@ref omz_tools_downloader)


@@ -32,7 +32,7 @@
static const size_t progressBarDefaultTotalCount = 1000;
bool ParseAndCheckCommandLine(int argc, char* argv[]) {
bool parse_and_check_command_line(int argc, char* argv[]) {
// ---------------------------Parsing and validating input
// arguments--------------------------------------
slog::info << "Parsing input parameters" << slog::endl;
@@ -88,7 +88,7 @@ static void next_step(const std::string additional_info = "") {
static size_t step_id = 0;
static const std::map<size_t, std::string> step_names = {
{1, "Parsing and validating input arguments"},
{2, "Loading Inference Engine"},
{2, "Loading OpenVINO Runtime"},
{3, "Setting device configuration"},
{4, "Reading network files"},
{5, "Resizing network to match image sizes and given batch"},
@@ -151,7 +151,7 @@ int main(int argc, char* argv[]) {
// -------------------------------------------------
next_step();
if (!ParseAndCheckCommandLine(argc, argv)) {
if (!parse_and_check_command_line(argc, argv)) {
return 0;
}
@@ -203,7 +203,7 @@ int main(int argc, char* argv[]) {
/** This vector stores paths to the processed images with input names**/
auto inputFiles = parse_input_arguments(gflags::GetArgvs());
// ----------------- 2. Loading the Inference Engine
// ----------------- 2. Loading the OpenVINO Runtime
// -----------------------------------------------------------
next_step();
@@ -1089,7 +1089,7 @@ int main(int argc, char* argv[]) {
if (!FLAGS_dump_config.empty()) {
dump_config(FLAGS_dump_config, config);
slog::info << "Inference Engine configuration settings were dumped to " << FLAGS_dump_config << slog::endl;
slog::info << "OpenVINO Runtime configuration settings were dumped to " << FLAGS_dump_config << slog::endl;
}
if (!FLAGS_exec_graph_path.empty()) {


@@ -4,7 +4,7 @@
# SPDX-License-Identifier: Apache-2.0
usage() {
echo "Build inference engine samples"
echo "Build OpenVINO Runtime samples"
echo
echo "Options:"
echo " -h Print the help message"
@@ -70,7 +70,7 @@ else
fi
if ! command -v cmake &>/dev/null; then
printf "\n\nCMAKE is not installed. It is required to build Inference Engine samples. Please install it. \n\n"
printf "\n\nCMAKE is not installed. It is required to build OpenVINO Runtime samples. Please install it. \n\n"
exit 1
fi


@@ -52,7 +52,7 @@ if exist "%SAMPLE_BUILD_DIR%\CMakeCache.txt" del "%SAMPLE_BUILD_DIR%\CMakeCache.
cd /d "%ROOT_DIR%" && cmake -E make_directory "%SAMPLE_BUILD_DIR%" && cd /d "%SAMPLE_BUILD_DIR%" && cmake -G "Visual Studio 16 2019" -A %PLATFORM% "%ROOT_DIR%"
echo.
echo ###############^|^| Build Inference Engine samples using MS Visual Studio (MSBuild.exe) ^|^|###############
echo ###############^|^| Build OpenVINO Runtime samples using MS Visual Studio (MSBuild.exe) ^|^|###############
echo.
echo cmake --build . --config Release
@@ -65,7 +65,7 @@ echo Done.
exit /b
:usage
echo Build inference engine samples
echo Build OpenVINO Runtime samples
echo.
echo Options:
echo -h Print the help message


@@ -36,7 +36,7 @@ int main(int argc, char* argv[]) {
const std::string device_name{argv[3]};
// -------------------------------------------------------------------
// Step 1. Initialize inference engine core
// Step 1. Initialize OpenVINO Runtime core
ov::Core core;
// -------------------------------------------------------------------


@@ -32,13 +32,13 @@
using namespace ov::preprocess;
/**
* @brief The entry point for inference engine automatic speech recognition sample
* @brief The entry point for OpenVINO Runtime automatic speech recognition sample
* @file speech_sample/main.cpp
* @example speech_sample/main.cpp
*/
int main(int argc, char* argv[]) {
try {
// ------------------------------ Get Inference Engine version ----------------------------------------------
// ------------------------------ Get OpenVINO Runtime version ----------------------------------------------
slog::info << "OpenVINO runtime: " << ov::get_openvino_version() << slog::endl;
// ------------------------------ Parsing and validation of input arguments ---------------------------------
@@ -79,7 +79,7 @@ int main(int argc, char* argv[]) {
}
size_t numInputFiles(inputFiles.size());
// --------------------------- Step 1. Initialize inference engine core and read model
// --------------------------- Step 1. Initialize OpenVINO Runtime core and read model
// -------------------------------------
ov::Core core;
slog::info << "Loading model files:" << slog::endl << FLAGS_m << slog::endl;


@@ -133,7 +133,7 @@ def main():
device_name = sys.argv[2]
labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
number_top = 1
# ---------------------------Step 1. Initialize inference engine core--------------------------------------------------
# ---------------------------Step 1. Initialize OpenVINO Runtime Core--------------------------------------------------
log.info('Creating OpenVINO Runtime Core')
core = Core()


@@ -39,12 +39,12 @@ The sample illustrates the general workflow of using the Intel(R) Deep Learning
- Downloads a public SqueezeNet model using the Model Downloader (extras\open_model_zoo\tools\downloader\downloader.py)
- Installs all prerequisites required for running the Model Optimizer using the scripts from the "tools\model_optimizer\install_prerequisites" folder
- Converts SqueezeNet to an IR using the Model Optimizer (tools\model_optimizer\mo.py) via the Model Converter (extras\open_model_zoo\tools\downloader\converter.py)
- Builds the Inference Engine classification_sample (samples\cpp\classification_sample)
- Builds the OpenVINO Runtime classification_sample (samples\cpp\classification_sample)
- Runs the sample with the car.png picture located in the demo folder
The sample application prints top-10 inference results for the picture.
For more information about the Inference Engine classification sample, refer to the documentation available in the sample folder.
For more information about the OpenVINO Runtime classification sample, refer to the documentation available in the sample folder.
Benchmark Sample Using SqueezeNet
===============================
@@ -56,9 +56,9 @@ The sample script does the following:
- Downloads a public SqueezeNet model using the Model Downloader (extras\open_model_zoo\tools\downloader\downloader.py)
- Installs all prerequisites required for running the Model Optimizer using the scripts from the "tools\model_optimizer\install_prerequisites" folder
- Converts SqueezeNet to an IR using the Model Optimizer (tools\model_optimizer\mo.py) via the Model Converter (extras\open_model_zoo\tools\downloader\converter.py)
- Builds the Inference Engine benchmark tool (samples\benchmark_app)
- Builds the OpenVINO Runtime benchmark tool (samples\benchmark_app)
- Runs the tool with the car.png picture located in the demo folder
The benchmark app prints performance counters, resulting latency, and throughput values.
For more information about the Inference Engine benchmark app, refer to the documentation available in the sample folder.
For more information about the OpenVINO Runtime benchmark app, refer to the documentation available in the sample folder.


@@ -155,7 +155,7 @@ CALL :delay 7
:buildSample
echo.
echo ###############^|^| Generate VS solution for Inference Engine samples using cmake ^|^|###############
echo ###############^|^| Generate VS solution for OpenVINO Runtime samples using cmake ^|^|###############
echo.
CALL :delay 3
@@ -173,7 +173,7 @@ if ERRORLEVEL 1 GOTO errorHandling
CALL :delay 7
echo.
echo ###############^|^| Build Inference Engine samples using cmake ^|^|###############
echo ###############^|^| Build OpenVINO Runtime samples using cmake ^|^|###############
echo.
CALL :delay 3
@@ -186,7 +186,7 @@ CALL :delay 7
:runSample
echo.
echo ###############^|^| Run Inference Engine benchmark app ^|^|###############
echo ###############^|^| Run OpenVINO Runtime benchmark app ^|^|###############
echo.
CALL :delay 3
copy /Y "%ROOT_DIR%%model_name%.labels" "%ir_dir%"
@@ -198,7 +198,7 @@ benchmark_app.exe -i "%target_image_path%" -m "%ir_dir%\%model_name%.xml" -pc -
if ERRORLEVEL 1 GOTO errorHandling
echo.
echo ###############^|^| Inference Engine benchmark app completed successfully ^|^|###############
echo ###############^|^| OpenVINO Runtime benchmark app completed successfully ^|^|###############
CALL :delay 10
cd /d "%ROOT_DIR%"


@@ -158,7 +158,7 @@ else
fi
# Step 3. Build samples
echo -ne "\n###############|| Build Inference Engine samples ||###############\n\n"
echo -ne "\n###############|| Build OpenVINO Runtime samples ||###############\n\n"
OS_PATH=$(uname -m)
NUM_THREADS="-j2"
@@ -181,7 +181,7 @@ cmake -DCMAKE_BUILD_TYPE=Release "$samples_path"
make $NUM_THREADS benchmark_app
# Step 4. Run samples
echo -ne "\n###############|| Run Inference Engine benchmark app ||###############\n\n"
echo -ne "\n###############|| Run OpenVINO Runtime benchmark app ||###############\n\n"
cd "$binaries_dir"
@@ -189,4 +189,4 @@ cp -f "$ROOT_DIR/${model_name}.labels" "${ir_dir}/"
print_and_run ./benchmark_app -d "$target" -i "$target_image_path" -m "${ir_dir}/${model_name}.xml" -pc "${sampleoptions[@]}"
echo -ne "\n###############|| Inference Engine benchmark app completed successfully ||###############\n\n"
echo -ne "\n###############|| OpenVINO Runtime benchmark app completed successfully ||###############\n\n"


@@ -151,7 +151,7 @@ CALL :delay 7
:buildSample
echo.
echo ###############^|^| Generate VS solution for Inference Engine samples using cmake ^|^|###############
echo ###############^|^| Generate VS solution for OpenVINO Runtime samples using cmake ^|^|###############
echo.
CALL :delay 3
@@ -169,7 +169,7 @@ if ERRORLEVEL 1 GOTO errorHandling
CALL :delay 7
echo.
echo ###############^|^| Build Inference Engine samples using cmake ^|^|###############
echo ###############^|^| Build OpenVINO Runtime samples using cmake ^|^|###############
echo.
CALL :delay 3
@@ -182,7 +182,7 @@ CALL :delay 7
:runSample
echo.
echo ###############^|^| Run Inference Engine classification sample ^|^|###############
echo ###############^|^| Run OpenVINO Runtime classification sample ^|^|###############
echo.
CALL :delay 3
copy /Y "%ROOT_DIR%%model_name%.labels" "%ir_dir%"


@@ -154,7 +154,7 @@ else
fi
# Step 3. Build samples
echo -ne "\n###############|| Build Inference Engine samples ||###############\n\n"
echo -ne "\n###############|| Build OpenVINO Runtime samples ||###############\n\n"
OS_PATH=$(uname -m)
NUM_THREADS="-j2"
@@ -178,7 +178,7 @@ cmake -DCMAKE_BUILD_TYPE=Release "$samples_path"
make $NUM_THREADS classification_sample_async
# Step 4. Run sample
echo -ne "\n###############|| Run Inference Engine classification sample ||###############\n\n"
echo -ne "\n###############|| Run OpenVINO Runtime classification sample ||###############\n\n"
cd "$binaries_dir"