samples overview & model protection: docs (#10596)
* Renamed hetero md
* Renamed some guides
* Updated OpenVINO_Runtime_User_Guide.md
* Updated plugin's page
* More updates
* Fixed links
* Updated link names
* Fixed links
* Fixed docs build
* Self-review
* Fixed issues in doc snippets
* Updated Samples_Overview.md
* Updated model protection guide
* Renamed ngraph_function creation samples
parent 37923a9183
commit 5b3b48aa17
@@ -7,4 +7,3 @@ The sections below contain detailed list of changes made to the OpenVINO™ Runt
### New API

* The OpenVINO™ 2.0 API was introduced.

@@ -1,4 +1,4 @@
# Inference Engine Samples {#openvino_docs_IE_DG_Samples_Overview}
# OpenVINO Samples {#openvino_docs_IE_DG_Samples_Overview}

@sphinxdirective

@@ -19,8 +19,8 @@
openvino_inference_engine_ie_bridges_c_samples_hello_nv12_input_classification_README
openvino_inference_engine_samples_hello_query_device_README
openvino_inference_engine_ie_bridges_python_sample_hello_query_device_README
openvino_inference_engine_samples_ngraph_function_creation_sample_README
openvino_inference_engine_ie_bridges_python_sample_ngraph_function_creation_sample_README
openvino_inference_engine_samples_model_creation_sample_README
openvino_inference_engine_ie_bridges_python_sample_model_creation_sample_README
openvino_inference_engine_samples_speech_sample_README
openvino_inference_engine_ie_bridges_python_sample_speech_sample_README
openvino_inference_engine_samples_benchmark_app_README
@@ -28,14 +28,14 @@

@endsphinxdirective

The Inference Engine sample applications are simple console applications that show how to utilize specific Inference Engine capabilities within an application, assist developers in executing specific tasks such as loading a model, running inference, querying specific device capabilities and etc.
The OpenVINO sample applications are simple console applications that show how to utilize specific OpenVINO API capabilities within an application and assist developers in executing specific tasks such as loading a model, running inference, and querying specific device capabilities.

After installation of Intel® Distribution of OpenVINO™ toolkit, C, C++ and Python* sample applications are available in the following directories, respectively:
* `<INSTALL_DIR>/samples/c`
* `<INSTALL_DIR>/samples/cpp`
* `<INSTALL_DIR>/samples/python`

Inference Engine sample applications include the following:
OpenVINO sample applications include the following:

- **Speech Sample** - Acoustic model inference based on Kaldi neural networks and speech feature vectors.
- [Automatic Speech Recognition C++ Sample](../../samples/cpp/speech_sample/README.md)
@@ -50,7 +50,7 @@ Inference Engine sample applications include the following:
- **Hello NV12 Input Classification Sample** – Input of any size and layout can be provided to an infer request. The sample transforms the input to the NV12 color format and pre-processes it automatically during inference. The sample supports only images as inputs.
- [Hello NV12 Input Classification C++ Sample](../../samples/cpp/hello_nv12_input_classification/README.md)
- [Hello NV12 Input Classification C Sample](../../samples/c/hello_nv12_input_classification/README.md)
- **Hello Query Device Sample** – Query of available Inference Engine devices and their metrics, configuration values.
- **Hello Query Device Sample** – Query of available OpenVINO devices and their metrics, configuration values.
- [Hello Query Device C++ Sample](../../samples/cpp/hello_query_device/README.md)
- [Hello Query Device Python* Sample](../../samples/python/hello_query_device/README.md)
- **Hello Reshape SSD Sample** – Inference of SSD networks resized by ShapeInfer API according to an input size.
@@ -59,10 +59,10 @@ Inference Engine sample applications include the following:
- **Image Classification Sample Async** – Inference of image classification networks like AlexNet and GoogLeNet using Asynchronous Inference Request API (the sample supports only images as inputs).
- [Image Classification Async C++ Sample](../../samples/cpp/classification_sample_async/README.md)
- [Image Classification Async Python* Sample](../../samples/python/classification_sample_async/README.md)
- **nGraph Function Creation Sample** – Construction of the LeNet network using the nGraph function creation sample.
- [nGraph Function Creation C++ Sample](../../samples/cpp/ngraph_function_creation_sample/README.md)
- [nGraph Function Creation Python Sample](../../samples/python/ngraph_function_creation_sample/README.md)

- **OpenVINO Model Creation Sample** – Construction of the LeNet model using the OpenVINO model creation sample.
- [OpenVINO Model Creation C++ Sample](../../samples/cpp/model_creation_sample/README.md)
- [OpenVINO Model Creation Python Sample](../../samples/python/model_creation_sample/README.md)

> **NOTE**: All C++ samples support input paths containing only ASCII characters, except the Hello Classification Sample, that supports Unicode.

## Media Files Available for Samples
@@ -79,8 +79,8 @@ To run the sample, you can use [public](@ref omz_models_group_public) or [Intel'

The officially supported Linux* build environment is the following:

* Ubuntu* 18.04 LTS 64-bit or CentOS* 7 64-bit
* GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 4.8.5 (for CentOS* 7.6)
* Ubuntu* 18.04 LTS 64-bit or Ubuntu* 20.04 LTS 64-bit
* GCC* 7.5.0 (for Ubuntu* 18.04) or GCC* 9.3.0 (for Ubuntu* 20.04)
* CMake* version 3.10 or higher

> **NOTE**: For building samples from the open-source version of OpenVINO™ toolkit, see the [build instructions on GitHub](https://github.com/openvinotoolkit/openvino/wiki/BuildingCode).
@@ -102,7 +102,7 @@ You can also build the sample applications manually:
```sh
mkdir build
```
> **NOTE**: If you ran the Image Classification verification script during the installation, the C++ samples build directory was already created in your home directory: `~/inference_engine_samples_build/`
> **NOTE**: If you ran the Image Classification verification script during the installation, the C++ samples build directory was already created in your home directory: `~/inference_engine_cpp_samples_build/`

2. Go to the created directory:
```sh
@@ -130,22 +130,17 @@ for the debug configuration — in `<path_to_build_directory>/intel64/Debug/`.

The recommended Windows* build environment is the following:
* Microsoft Windows* 10
* Microsoft Visual Studio* 2017, or 2019
* Microsoft Visual Studio* 2019
* CMake* version 3.10 or higher

> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14.
> **NOTE**: If you want to use Microsoft Visual Studio 2019, you are required to install CMake 3.14 or higher.

To build the C or C++ sample applications on Windows, go to the `<INSTALL_DIR>\samples\c` or `<INSTALL_DIR>\samples\cpp` directory, respectively, and run the `build_samples_msvc.bat` batch file:
```sh
build_samples_msvc.bat
```

By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build
a solution for a sample code. Optionally, you can also specify the preferred Microsoft Visual Studio version to be used by the script. Supported
versions are `VS2017` and `VS2019`. For example, to build the C++ samples using the Microsoft Visual Studio 2017, use the following command:
```sh
<INSTALL_DIR>\samples\cpp\build_samples_msvc.bat VS2017
```
By default, the script automatically detects the highest Microsoft Visual Studio version installed on the machine and uses it to create and build a solution for the sample code.

Once the build is completed, you can find sample binaries in the following folders:
* C samples: `C:\Users\<user>\Documents\Intel\OpenVINO\inference_engine_c_samples_build\intel64\Release`
@@ -159,7 +154,7 @@ directory.

The officially supported macOS* build environment is the following:

* macOS* 10.15 64-bit
* macOS* 10.15 64-bit or higher
* Clang* compiler from Xcode* 10.1 or higher
* CMake* version 3.13 or higher

@@ -180,7 +175,7 @@ You can also build the sample applications manually:

> **NOTE**: Before proceeding, make sure you have OpenVINO™ environment set correctly. This can be done manually by
```sh
cd <INSTALL_DIR>/bin
cd <INSTALL_DIR>/
source setupvars.sh
```

@@ -188,7 +183,7 @@ source setupvars.sh
```sh
mkdir build
```
> **NOTE**: If you ran the Image Classification verification script during the installation, the C++ samples build directory was already created in your home directory: `~/inference_engine_samples_build/`
> **NOTE**: If you ran the Image Classification verification script during the installation, the C++ samples build directory was already created in your home directory: `~/inference_engine_cpp_samples_build/`

2. Go to the created directory:
```sh
@@ -217,7 +212,7 @@ for the debug configuration — in `<path_to_build_directory>/intel64/Debug/`.
### Get Ready for Running the Sample Applications on Linux*

Before running compiled binary files, make sure your application can find the
Inference Engine and OpenCV libraries.
OpenVINO Runtime libraries.
Run the `setupvars` script to set all necessary environment variables:
```sh
source <INSTALL_DIR>/setupvars.sh
@@ -246,7 +241,7 @@ list above.
### Get Ready for Running the Sample Applications on Windows*

Before running compiled binary files, make sure your application can find the
Inference Engine and OpenCV libraries.
OpenVINO Runtime libraries.
Use the `setupvars` script, which sets all necessary environment variables:
```sh
<INSTALL_DIR>\setupvars.bat
@@ -255,13 +250,13 @@ Use the `setupvars` script, which sets all necessary environment variables:
To debug or run the samples on Windows in Microsoft Visual Studio, make sure you
have properly configured **Debugging** environment settings for the **Debug**
and **Release** configurations. Set correct paths to the OpenCV libraries, and
debug and release versions of the Inference Engine libraries.
debug and release versions of the OpenVINO Runtime libraries.
For example, for the **Debug** configuration, go to the project's
**Configuration Properties** to the **Debugging** category and set the `PATH`
variable in the **Environment** field to the following:

```sh
PATH=<INSTALL_DIR>\runtime\bin;<INSTALL_DIR>\opencv\bin;%PATH%
PATH=<INSTALL_DIR>\runtime\bin;%PATH%
```
where `<INSTALL_DIR>` is the directory in which the OpenVINO toolkit is installed.

@@ -166,7 +166,7 @@ To feed input data of a shape that is different from the model input shape, resh

Once the input shape of IENetwork is set, call the `IECore.load_network` method to get an ExecutableNetwork object for inference with updated shapes.

There are other approaches to reshape the model during the stage of IR generation or [nGraph function](https://docs.openvino.ai/latest/openvino_docs_nGraph_DG_PythonAPI.html#create_an_ngraph_function_from_a_graph) creation.
There are other approaches to reshape the model during the stage of IR generation or [OpenVINO model](https://docs.openvino.ai/latest/openvino_docs_nGraph_DG_PythonAPI.html#create_an_ngraph_function_from_a_graph) creation.

Practically, some models are not ready to be reshaped. In this case, a new input shape cannot be set with the Model Optimizer or the `IENetwork.reshape` method.
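
The passage above describes the pre-2022 Python API (`IENetwork`/`IECore`). For orientation only, here is a minimal sketch of the same reshape-then-compile flow expressed with the OpenVINO 2.0 C++ API; the model path and the input name `data` are placeholders, not values taken from this guide:

```cpp
#include <map>
#include <memory>
#include <string>

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    // Read the IR; "model.xml" is a placeholder path.
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");
    // Set the new input shape before compilation; "data" is an assumed input name.
    std::map<std::string, ov::PartialShape> new_shapes = {{"data", ov::PartialShape{1, 3, 448, 448}}};
    model->reshape(new_shapes);
    // Compile the reshaped model; inference then runs with the updated shapes.
    ov::CompiledModel compiled_model = core.compile_model(model, "CPU");
    return 0;
}
```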

@@ -16,22 +16,22 @@ This guide demonstrates how to use OpenVINO securely with protected models.

After a model is optimized by the OpenVINO Model Optimizer, it's deployed
to target devices in the Intermediate Representation (IR) format. An optimized
model is stored on an edge device and executed by the Inference Engine.
(ONNX and nGraph models can also be read natively by the Inference Engine.)
model is stored on an edge device and executed by the OpenVINO Runtime.
(ONNX, PDPD models can also be read natively by the OpenVINO Runtime.)

To protect deep-learning models, you can encrypt an optimized model before
deploying it to the edge device. The edge device should keep the stored model
protected at all times and have the model decrypted **in runtime only** for use
by the Inference Engine.
by the OpenVINO Runtime.



## Loading Encrypted Models

The OpenVINO Inference Engine requires model decryption before loading. Allocate
The OpenVINO Runtime requires model decryption before loading. Allocate
a temporary memory block for model decryption and use the
`InferenceEngine::Core::ReadNetwork` method to load the model from a memory buffer.
For more information, see the `InferenceEngine::Core` Class Reference Documentation.
`ov::Core::read_model` method to load the model from a memory buffer.
For more information, see the `ov::Core` Class Reference Documentation.

@snippet snippets/protecting_model_guide.cpp part0

@@ -40,12 +40,12 @@ Hardware-based protection such as Intel® Software Guard Extensions
bind them to a device. For more information, go to [Intel® Software Guard
Extensions](https://software.intel.com/en-us/sgx).

Use `InferenceEngine::Core::ReadNetwork()` to set model representations and
Use `ov::Core::read_model` to set model representations and
weights respectively.

Currently there is no way to read external weights from memory for ONNX models.
The `ReadNetwork(const std::string& model, const Blob::CPtr& weights)` function
should be called with `weights` passed as an empty `Blob`.
The `ov::Core::read_model(const std::string& model, const Tensor& weights)` method
should be called with `weights` passed as an empty `ov::Tensor`.
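
As a minimal illustration of the statement above, a hedged sketch follows; it assumes the decrypted ONNX file contents are already available in a `std::string`, and the function and variable names are illustrative, not part of the guide:

```cpp
#include <memory>
#include <string>

#include "openvino/runtime/core.hpp"

// Reads a decrypted ONNX model held in memory. External weights cannot be
// supplied from memory for ONNX, so an empty (default-constructed) ov::Tensor
// is passed as the `weights` argument.
std::shared_ptr<ov::Model> read_decrypted_onnx(const std::string& onnx_model_data) {
    ov::Core core;
    return core.read_model(onnx_model_data, ov::Tensor());
}
```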

@snippet snippets/protecting_model_guide.cpp part1

@@ -55,6 +55,6 @@ should be called with `weights` passed as an empty `Blob`.
- OpenVINO™ toolkit online documentation: [https://docs.openvino.ai](https://docs.openvino.ai)
- Model Optimizer Developer Guide: [Model Optimizer Developer Guide](../MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
- [OpenVINO™ Runtime User Guide](openvino_intro.md)
- For more information on Sample Applications, see the [Inference Engine Samples Overview](Samples_Overview.md)
- For more information on Sample Applications, see the [OpenVINO Samples Overview](Samples_Overview.md)
- For information on a set of pre-trained models, see the [Overview of OpenVINO™ Toolkit Pre-Trained Models](@ref omz_models_group_intel)
- For IoT Libraries and Code Samples see the [Intel® IoT Developer Kit](https://github.com/intel-iot-devkit).
@@ -1,7 +1,8 @@
#include <ie_core.hpp>
#include <fstream>
#include <vector>

#include "openvino/runtime/core.hpp"

void decrypt_file(std::ifstream & stream,
const std::string & pass,
std::vector<uint8_t> & result) {
@@ -9,24 +10,22 @@ void decrypt_file(std::ifstream & stream,

int main() {
//! [part0]
std::vector<uint8_t> model;
std::vector<uint8_t> weights;
std::vector<uint8_t> model_data, weights_data;

std::string password; // taken from a user
std::ifstream model_file("model.xml"), weights_file("model.bin");

// Read model files and decrypt them into temporary memory block
decrypt_file(model_file, password, model);
decrypt_file(weights_file, password, weights);
decrypt_file(model_file, password, model_data);
decrypt_file(weights_file, password, weights_data);
//! [part0]

//! [part1]
InferenceEngine::Core core;
ov::Core core;
// Load model from temporary memory block
std::string strModel(model.begin(), model.end());
InferenceEngine::CNNNetwork network = core.ReadNetwork(strModel,
InferenceEngine::make_shared_blob<uint8_t>({InferenceEngine::Precision::U8,
{weights.size()}, InferenceEngine::C}, weights.data()));
std::string str_model(model_data.begin(), model_data.end());
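// Create the model from the in-memory IR; the weights tensor wraps the decrypted buffer without copying it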
auto model = core.read_model(str_model,
ov::Tensor(ov::element::u8, {weights_data.size()}, weights_data.data()));
//! [part1]

return 0;
@@ -2,9 +2,9 @@
# SPDX-License-Identifier: Apache-2.0
#

set(TARGET_NAME "ngraph_function_creation_sample")
set(TARGET_NAME "model_creation_sample")

ie_add_sample(NAME ngraph_function_creation_sample
ie_add_sample(NAME model_creation_sample
SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/main.cpp"
HEADERS "${CMAKE_CURRENT_SOURCE_DIR}/ngraph_function_creation_sample.hpp"
HEADERS "${CMAKE_CURRENT_SOURCE_DIR}/model_creation_sample.hpp"
DEPENDENCIES format_reader ie_samples_utils)
@@ -1,8 +1,8 @@
# nGraph Function Creation C++ Sample {#openvino_inference_engine_samples_ngraph_function_creation_sample_README}
# Model Creation C++ Sample {#openvino_inference_engine_samples_model_creation_sample_README}

This sample demonstrates how to execute a synchronous inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks.

You do not need an XML file to create a model. The API of ngraph::Function allows creating a model on the fly from the source code.
You do not need an XML file to create a model. The API of ov::Model allows creating a model on the fly from the source code.
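
To make the idea concrete, below is a minimal, hedged sketch of building an `ov::Model` in source code; it is not the sample itself, and the input shape and layer choice are arbitrary placeholders:

```cpp
#include <memory>

#include "openvino/openvino.hpp"
#include "openvino/opsets/opset8.hpp"

std::shared_ptr<ov::Model> make_tiny_model() {
    // Input: a single-channel 28x28 image, as in LeNet-style digit classification.
    auto input = std::make_shared<ov::opset8::Parameter>(ov::element::f32, ov::Shape{1, 1, 28, 28});
    input->set_friendly_name("data");
    // A trivial body: one activation instead of the sample's convolutions and matmuls.
    auto relu = std::make_shared<ov::opset8::Relu>(input);
    auto result = std::make_shared<ov::opset8::Result>(relu);
    // Assemble the model directly from results and parameters; no XML/IR file is involved.
    return std::make_shared<ov::Model>(ov::ResultVector{result}, ov::ParameterVector{input});
}
```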

The following C++ API is used in the application:

@@ -13,7 +13,7 @@ The following C++ API is used in the application:
| Tensor Operations | `ov::Tensor::get_byte_size`, `ov::Tensor::data` | Get tensor byte size and its data |
| Model Operations | `ov::set_batch` | Operate with model batch size |
| Infer Request Operations | `ov::InferRequest::get_input_tensor` | Get an input tensor |
| nGraph Functions | `ov::opset8::Parameter`, `ov::Node::output`, `ov::opset8::Constant`, `ov::opset8::Convolution`, `ov::opset8::Add`, `ov::opset1::MaxPool`, `ov::opset8::Reshape`, `ov::opset8::MatMul`, `ov::opset8::Relu`, `ov::opset8::Softmax`, `ov::descriptor::Tensor::set_names`, `ov::opset8::Result`, `ov::Model`, `ov::ParameterVector::vector` | Used to construct an nGraph function |
| Model creation objects | `ov::opset8::Parameter`, `ov::Node::output`, `ov::opset8::Constant`, `ov::opset8::Convolution`, `ov::opset8::Add`, `ov::opset1::MaxPool`, `ov::opset8::Reshape`, `ov::opset8::MatMul`, `ov::opset8::Relu`, `ov::opset8::Softmax`, `ov::descriptor::Tensor::set_names`, `ov::opset8::Result`, `ov::Model`, `ov::ParameterVector::vector` | Used to construct an OpenVINO model |

Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](../hello_classification/README.md).

@@ -23,7 +23,7 @@ Basic OpenVINO™ Runtime API is covered by [Hello Classification C++ sample](..
| Model Format | model weights file (\*.bin) |
| Validated images | single-channel `MNIST ubyte` images |
| Supported devices | [All](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
| Other language realization | [Python](../../../samples/python/ngraph_function_creation_sample/README.md) |
| Other language realization | [Python](../../../samples/python/model_creation_sample/README.md) |

## How It Works

@@ -42,7 +42,7 @@ To build the sample, please use instructions available at [Build the Sample Appl
## Running

```
ngraph_function_creation_sample <path_to_lenet_weights> <device>
model_creation_sample <path_to_lenet_weights> <device>
```

> **NOTES**:
@@ -56,7 +56,7 @@ ngraph_function_creation_sample <path_to_lenet_weights> <device>
You can do inference of an image using a pre-trained model on a GPU using the following command:

```
ngraph_function_creation_sample lenet.bin GPU
model_creation_sample lenet.bin GPU
```

## Sample Output
@@ -176,10 +176,6 @@ classid probability label
</tr>
</table>

*Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through ONNX RT Execution Provider for nGraph have been merged with ONNX RT Execution Provider for OpenVINO™ toolkit.*

*Therefore, ONNX RT Execution Provider for nGraph will be deprecated starting June 1, 2020 and will be completely removed on December 1, 2020. Users are recommended to migrate to the ONNX RT Execution Provider for OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.*

## See Also

- [Integrate the OpenVINO™ Runtime with Your Application](../../../docs/OV_Runtime_UG/Integrate_with_customer_application_new_API.md)
@@ -21,7 +21,7 @@
#include "samples/classification_results.h"
#include "samples/slog.hpp"

#include "ngraph_function_creation_sample.hpp"
#include "model_creation_sample.hpp"
// clang-format on

constexpr auto N_TOP_RESULTS = 1;
@@ -214,10 +214,7 @@ std::shared_ptr<ov::Model> create_model(const std::string& path_to_weights) {
}

/**
* @brief The entry point for inference engine automatic ov::Model
* creation sample
* @file ngraph_function_creation_sample/main.cpp
* @example ngraph_function_creation_sample/main.cpp
* @brief The entry point for OpenVINO ov::Model creation sample
*/
int main(int argc, char* argv[]) {
try {
@@ -1,13 +1,13 @@
# nGraph Function Creation Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_ngraph_function_creation_sample_README}
# Model Creation Python* Sample {#openvino_inference_engine_ie_bridges_python_sample_model_creation_sample_README}

This sample demonstrates how to run inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file, the model is created from the source code on the fly.
This sample demonstrates how to run inference using a [model](../../../docs/OV_Runtime_UG/model_representation.md) built on the fly that uses weights from the LeNet classification model, which is known to work well on digit classification tasks. You do not need an XML file, the model is created from the source code on the fly.

The following Python API is used in the application:
The following OpenVINO Python API is used in the application:

| Feature | API | Description |
| :--------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | :------------------------------------------------------ |
| Model Operations | [openvino.runtime.Model], [openvino.runtime.set_batch], [openvino.runtime.Model.input] | Managing of model |
| nGraph Functions | [openvino.runtime.op.Parameter], [openvino.runtime.op.Constant], [openvino.runtime.opset8.convolution], [openvino.runtime.opset8.add], [openvino.runtime.opset1.max_pool], [openvino.runtime.opset8.reshape], [openvino.runtime.opset8.matmul], [openvino.runtime.opset8.relu], [openvino.runtime.opset8.softmax] | Description of a model topology using nGraph Python API |
| Opset operations | [openvino.runtime.op.Parameter], [openvino.runtime.op.Constant], [openvino.runtime.opset8.convolution], [openvino.runtime.opset8.add], [openvino.runtime.opset1.max_pool], [openvino.runtime.opset8.reshape], [openvino.runtime.opset8.matmul], [openvino.runtime.opset8.relu], [openvino.runtime.opset8.softmax] | Description of a model topology using OpenVINO Python API |

Basic OpenVINO™ Runtime API is covered by [Hello Classification Python* Sample](../hello_classification/README.md).

@@ -16,7 +16,7 @@ Basic OpenVINO™ Runtime API is covered by [Hello Classification Python* Sample
| Validated Models | LeNet |
| Model Format | Model weights file (\*.bin) |
| Supported devices | [All](../../../docs/OV_Runtime_UG/supported_plugins/Supported_Devices.md) |
| Other language realization | [C++](../../../samples/cpp/ngraph_function_creation_sample/README.md) |
| Other language realization | [C++](../../../samples/cpp/model_creation_sample/README.md) |

## How It Works

@@ -35,7 +35,7 @@ each sample step at [Integration Steps](../../../docs/OV_Runtime_UG/Integrate_wi
To run the sample, you need to specify model weights and device.

```
python ngraph_function_creation_sample.py <path_to_model> <device_name>
python model_creation_sample.py <path_to_model> <device_name>
```

> **NOTE**:
@@ -49,7 +49,7 @@ python ngraph_function_creation_sample.py <path_to_model> <device_name>
For example:

```
python ngraph_function_creation_sample.py lenet.bin GPU
python model_creation_sample.py lenet.bin GPU
```

## Sample Output
@@ -10,11 +10,11 @@
namespace ov {
namespace op {
namespace v0 {
/// \brief A function parameter.
/// \brief A model parameter.
///
/// Parameters are nodes that represent the arguments that will be passed to
/// user-defined functions. Function creation requires a sequence of parameters.
/// Basic graph operations do not need parameters attached to a function.
/// user-defined models. Model creation requires a sequence of parameters.
/// Basic graph operations do not need parameters attached to a model.
class OPENVINO_API Parameter : public op::Op {
public:
OPENVINO_OP("Parameter", "opset1");
@@ -187,7 +187,7 @@ private:
std::unordered_map<std::string, std::shared_ptr<ov::op::util::Variable>>& m_variables;

///
/// store information about parameters/results order during function creation
/// store information about parameters/results order during a model creation
/// it will be used during Inputs/Outputs Description creation in SubGraph processing
///
IoMap io_map;