Removed deprecated documentation (#6253)
* Removed deprecated documentation
* Removed snippets
parent 0361e7ca73
commit 6239ff8a1c
@ -1,70 +0,0 @@
[DEPRECATED] Migration from Inference Engine Plugin API to Core API {#openvino_docs_IE_DG_Migration_CoreAPI}
===============================

For the 2019 R2 release, the new Inference Engine Core API was introduced. This guide has been updated to reflect the new API approach. The Inference Engine Plugin API is still supported, but will be deprecated in future releases.

This section provides common steps to migrate an application written with the Inference Engine Plugin API (`InferenceEngine::InferencePlugin`) to the Inference Engine Core API (`InferenceEngine::Core`).

To learn how to write a new application using the Inference Engine, refer to [Integrate the Inference Engine Request API with Your Application](Integrate_with_customer_application_new_API.md) and [Inference Engine Samples Overview](Samples_Overview.md).

## Inference Engine Core Class

The Inference Engine Core class is implemented on top of the existing Inference Engine Plugin API and handles plugins internally.
The main responsibility of the `InferenceEngine::Core` class is to hide plugin specifics and provide a new layer of abstraction that works with devices (`InferenceEngine::Core::GetAvailableDevices`). Almost all methods of this class accept `deviceName` as an additional parameter that denotes the actual device you are working with. Plugins are listed in the `plugins.xml` file, which is loaded when an `InferenceEngine::Core` object is constructed:

```xml
<ie>
    <plugins>
        <plugin name="CPU" location="libMKLDNNPlugin.so">
        </plugin>
        ...
</ie>
```
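
Below is a minimal sketch (not part of the original guide) that illustrates this device-oriented view: constructing a `Core` object loads `plugins.xml`, and `GetAvailableDevices` returns the device names that can be passed to the other Core methods.

```cpp
#include <ie_core.hpp>
#include <iostream>
#include <string>

int main() {
    // Constructing the Core object loads plugins.xml and registers the listed plugins
    InferenceEngine::Core core;

    // Each returned name (for example, "CPU") can be used as the deviceName
    // argument of other Core methods
    for (const std::string& device : core.GetAvailableDevices()) {
        std::cout << device << std::endl;
    }
    return 0;
}
```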

## Migration Steps

The common migration process includes the following steps:

1. Migrate from the `InferenceEngine::InferencePlugin` initialization:

@snippet snippets/Migration_CoreAPI.cpp part0

to the `InferenceEngine::Core` class initialization:

@snippet snippets/Migration_CoreAPI.cpp part1

2. Instead of using `InferenceEngine::CNNNetReader` to read IR:

@snippet snippets/Migration_CoreAPI.cpp part2

read networks using the Core class:

@snippet snippets/Migration_CoreAPI.cpp part3

The Core class also allows reading models in the ONNX format (more information is [here](./ONNX_Support.md)):

@snippet snippets/Migration_CoreAPI.cpp part4

3. Instead of adding CPU device extensions to the plugin:

@snippet snippets/Migration_CoreAPI.cpp part5

add extensions to the CPU device using the Core class:

@snippet snippets/Migration_CoreAPI.cpp part6

4. Instead of setting configuration keys on a particular plugin, set (key, value) pairs via `InferenceEngine::Core::SetConfig`:

@snippet snippets/Migration_CoreAPI.cpp part7

> **NOTE**: If `deviceName` is omitted as the last argument, the configuration is set for all Inference Engine devices.

5. Migrate from loading the network to a particular plugin:

@snippet snippets/Migration_CoreAPI.cpp part8

to `InferenceEngine::Core::LoadNetwork` with a particular device name:

@snippet snippets/Migration_CoreAPI.cpp part9

After you have an instance of `InferenceEngine::ExecutableNetwork`, all other steps are as usual.
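
As a reference for those usual steps, here is a minimal, hedged sketch (an illustration only; `model.xml` and the use of the first input and output are placeholder assumptions): create an infer request from the executable network, fill the input blob, run inference, and read the output blob.

```cpp
#include <ie_core.hpp>
#include <string>

int main() {
    InferenceEngine::Core core;
    // "model.xml" is a placeholder IR path; the matching .bin file is expected next to it
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.xml");
    InferenceEngine::ExecutableNetwork executable_network = core.LoadNetwork(network, "CPU");

    // Create an inference request and run it synchronously
    InferenceEngine::InferRequest infer_request = executable_network.CreateInferRequest();

    // Use the first input and output for illustration
    const std::string input_name = network.getInputsInfo().begin()->first;
    const std::string output_name = network.getOutputsInfo().begin()->first;

    InferenceEngine::Blob::Ptr input_blob = infer_request.GetBlob(input_name);
    // ... fill input_blob with input data here ...

    infer_request.Infer();

    InferenceEngine::Blob::Ptr output_blob = infer_request.GetBlob(output_name);
    // ... read the results from output_blob here ...
    return 0;
}
```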
@ -1,67 +0,0 @@
# ONNX* Importer API Tutorial {#openvino_docs_IE_DG_OnnxImporterTutorial}

> **NOTE**: This tutorial is deprecated. Since OpenVINO™ 2020.4, the Inference Engine enables reading ONNX models via the Inference Engine Core API,
> so there is no longer a need to use the low-level ONNX\* Importer API directly.
> To read ONNX\* models, it is recommended to use the `Core::ReadNetwork()` method, which provides a uniform way to read models from IR or ONNX format.
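
For example, a minimal sketch of the recommended approach (the model path is a placeholder) looks like this:

```cpp
#include <ie_core.hpp>

int main() {
    InferenceEngine::Core core;
    // "model.onnx" is a placeholder path; no ONNX Importer calls are needed
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.onnx");
    return 0;
}
```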

This tutorial demonstrates how to use the ONNX\* Importer API.
This API makes it possible to create an nGraph `Function` object from an imported ONNX model.

All functions of the ONNX Importer API are in the [onnx.hpp][onnx_header] header file.

There are two categories of API functions:
* Helper functions that check which ONNX ops are supported in a given version of the ONNX Importer
* Functions that read ONNX models from a stream or file and produce an nGraph function, which can be executed using the Inference Engine

## Check Which ONNX Ops Are Supported

To list all ONNX ops supported in a specific version and domain, use the `get_supported_operators` function as shown in the example below:

@snippet snippets/OnnxImporterTutorial0.cpp part0

The code above prints all operators supported for the specified `version` and `domain`, producing output similar to this:
```
Abs
Acos
...
Xor
```

To determine whether a specific ONNX operator in a particular version and domain is supported by the importer, use the `is_operator_supported` function as shown in the example below:

@snippet snippets/OnnxImporterTutorial1.cpp part1

## Import ONNX Model

To import an ONNX model, use the `import_onnx_model` function.
The function has two overloads:
* <a href="#stream">`import_onnx_model` takes a stream as an input</a>, for example, a file stream or a memory stream
* <a href="#path">`import_onnx_model` takes a file path as an input</a>

Refer to the sections below for details.

> **NOTE**: The examples below use the ONNX ResNet50 model, which is available at the [ONNX Model Zoo][onnx_model_zoo]:
> ```bash
> $ wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz
> $ tar -xzvf resnet50.tar.gz
> ```

Once you create the `ng_function`, you can use it to run computation on the Inference Engine.
As shown in [Build a Model with nGraph Library](../nGraph_DG/build_function.md), a `std::shared_ptr<ngraph::Function>` can be transformed into a `CNNNetwork`.
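
A minimal sketch of that conversion is shown below (an illustration, not part of the original tutorial); it reuses the ResNet50 path from the examples that follow and relies on the `CNNNetwork` constructor that accepts an `ngraph::Function`:

```cpp
#include <ie_core.hpp>
#include <ngraph/ngraph.hpp>
#include "onnx_import/onnx.hpp"

int main() {
    // Import the model (the path matches the ResNet50 example below),
    // then wrap the resulting ngraph::Function into a CNNNetwork
    const std::shared_ptr<ngraph::Function> ng_function =
        ngraph::onnx_import::import_onnx_model("resnet50/model.onnx");
    InferenceEngine::CNNNetwork network(ng_function);

    // The CNNNetwork can then be loaded onto a device as usual
    InferenceEngine::Core core;
    InferenceEngine::ExecutableNetwork executable_network = core.LoadNetwork(network, "CPU");
    return 0;
}
```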

### <a name="stream">Stream as Input</a>

The code below shows how to convert the ONNX ResNet50 model to an nGraph function using `import_onnx_model` with a stream as an input:

@snippet snippets/OnnxImporterTutorial2.cpp part2

### <a name="path">Filepath as Input</a>

The code below shows how to convert the ONNX ResNet50 model to an nGraph function using `import_onnx_model` with a file path as an input:

@snippet snippets/OnnxImporterTutorial3.cpp part3

[onnx_header]: https://github.com/NervanaSystems/ngraph/blob/master/src/ngraph/frontend/onnx_import/onnx.hpp
[onnx_model_zoo]: https://github.com/onnx/models
@ -1,22 +0,0 @@
FPGA Plugin {#openvino_docs_IE_DG_supported_plugins_FPGA}
===========

## Product Change Notice
Intel® Distribution of OpenVINO™ toolkit for Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA

<table>
  <tr>
    <td><strong>Change Notice Begins</strong></td>
    <td>July 2020</td>
  </tr>
  <tr>
    <td><strong>Change Date</strong></td>
    <td>October 2020</td>
  </tr>
</table>

Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep learning. As part of this transition, future standard releases (that is, non-LTS releases) of the Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.

The Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, talk to your sales representative or contact us to get the latest FPGA updates.

For documentation on the FPGA plugin available in previous releases of the Intel® Distribution of OpenVINO™ toolkit with FPGA support, see the [2020.4 documentation](https://docs.openvinotoolkit.org/2020.4/openvino_docs_IE_DG_supported_plugins_FPGA.html) and earlier versions.
@ -22,6 +22,8 @@ inference-engine/include/vpu/vpu_config.hpp
inference-engine/include/vpu/vpu_plugin_config.hpp
openvino/docs/benchmarks/performance_int8_vs_fp32.md
openvino/docs/get_started/get_started_macos.md
openvino/docs/optimization_guide/dldt_optimization_guide.md
openvino/docs/IE_DG/ShapeInference.md
inference-engine/include/details/ie_so_pointer.hpp
inference-engine/include/ie_compound_blob.h
inference-engine/include/ie_data.h
@ -291,11 +291,9 @@ limitations under the License.
<tab type="user" title="Custom ONNX operators" url="@ref openvino_docs_IE_DG_Extensibility_DG_Custom_ONNX_Ops"/>
</tab>
<tab type="user" title="Integrate the Inference Engine with Your Application" url="@ref openvino_docs_IE_DG_Integrate_with_customer_application_new_API"/>
<tab type="user" title="[DEPRECATED] Migration from Inference Engine Plugin API to Core API" url="@ref openvino_docs_IE_DG_Migration_CoreAPI"/>
<tab type="user" title="Introduction to Performance Topics" url="@ref openvino_docs_IE_DG_Intro_to_Performance"/>
<tab type="user" title="Inference Engine Python* API Overview" url="@ref openvino_inference_engine_ie_bridges_python_docs_api_overview"/>
<tab type="user" title="Read an ONNX model" url="@ref openvino_docs_IE_DG_ONNX_Support"/>
<tab type="user" title="[DEPRECATED] Import an ONNX model" url="@ref openvino_docs_IE_DG_OnnxImporterTutorial"/>
<tab type="user" title="Using Dynamic Batching Feature" url="@ref openvino_docs_IE_DG_DynamicBatching"/>
<tab type="user" title="Using Static Shape Infer Feature" url="@ref openvino_docs_IE_DG_ShapeInference"/>
<tab type="usergroup" title="Using Bfloat16 Inference" url="@ref openvino_docs_IE_DG_Bfloat16Inference">
@ -311,7 +309,6 @@ limitations under the License.
<tab type="user" title="RemoteBlob API of GPU Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_GPU_RemoteBlob_API"/>
</tab>
<tab type="user" title="CPU Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_CPU"/>
<tab type="user" title="[DEPRECATED] FPGA Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_FPGA"/>
<tab type="usergroup" title="VPU Plugins" url="@ref openvino_docs_IE_DG_supported_plugins_VPU">
<tab type="user" title="MYRIAD Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_MYRIAD"/>
<tab type="user" title="HDDL Plugin" url="@ref openvino_docs_IE_DG_supported_plugins_HDDL"/>
@ -26,17 +26,6 @@ if(NOT OpenCV_FOUND)
"${CMAKE_CURRENT_SOURCE_DIR}/ShapeInference.cpp")
endif()

# ONNX importer related files
if(NOT NGRAPH_ONNX_IMPORT_ENABLE)
    list(REMOVE_ITEM SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/OnnxImporterTutorial0.cpp"
                             "${CMAKE_CURRENT_SOURCE_DIR}/OnnxImporterTutorial1.cpp"
                             "${CMAKE_CURRENT_SOURCE_DIR}/OnnxImporterTutorial2.cpp"
                             "${CMAKE_CURRENT_SOURCE_DIR}/OnnxImporterTutorial3.cpp")
endif()

# remove snippets for deprecated / removed API
list(REMOVE_ITEM SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/Migration_CoreAPI.cpp")

# requires mfxFrameSurface1 and MSS API
list(REMOVE_ITEM SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/dldt_optimization_guide2.cpp"
                         "${CMAKE_CURRENT_SOURCE_DIR}/dldt_optimization_guide3.cpp"
@ -1,48 +0,0 @@
#include <ie_core.hpp>

int main() {
    std::string deviceName = "Device name";
    //! [part0]
    InferenceEngine::InferencePlugin plugin = InferenceEngine::PluginDispatcher({ FLAGS_pp }).getPluginByDevice(FLAGS_d);
    //! [part0]

    //! [part1]
    InferenceEngine::Core core;
    //! [part1]

    //! [part2]
    InferenceEngine::CNNNetReader network_reader;
    network_reader.ReadNetwork(fileNameToString(input_model));
    network_reader.ReadWeights(fileNameToString(input_model).substr(0, input_model.size() - 4) + ".bin");
    InferenceEngine::CNNNetwork network = network_reader.getNetwork();
    //! [part2]

    //! [part3]
    InferenceEngine::CNNNetwork network = core.ReadNetwork(input_model);
    //! [part3]

    //! [part4]
    InferenceEngine::CNNNetwork network = core.ReadNetwork("model.onnx");
    //! [part4]

    //! [part5]
    plugin.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>());
    //! [part5]

    //! [part6]
    core.AddExtension(std::make_shared<Extensions::Cpu::CpuExtensions>(), "CPU");
    //! [part6]

    //! [part7]
    core.SetConfig({{PluginConfigParams::KEY_CONFIG_FILE, FLAGS_c}}, "GPU");
    //! [part7]

    //! [part8]
    auto execNetwork = plugin.LoadNetwork(network, { });
    //! [part8]

    //! [part9]
    auto execNetwork = core.LoadNetwork(network, deviceName, { });
    //! [part9]
    return 0;
}
@ -1,19 +0,0 @@
#include <ie_core.hpp>
#include <ngraph/ngraph.hpp>
#include "onnx_import/onnx.hpp"
#include <iostream>
#include <set>

int main() {
    //! [part0]
    const std::int64_t version = 12;
    const std::string domain = "ai.onnx";
    const std::set<std::string> supported_ops = ngraph::onnx_import::get_supported_operators(version, domain);

    for(const auto& op : supported_ops)
    {
        std::cout << op << std::endl;
    }
    //! [part0]
    return 0;
}
@ -1,15 +0,0 @@
#include <ie_core.hpp>
#include <ngraph/ngraph.hpp>
#include "onnx_import/onnx.hpp"
#include <iostream>

int main() {
    //! [part1]
    const std::string op_name = "Abs";
    const std::int64_t version = 12;
    const std::string domain = "ai.onnx";
    const bool is_abs_op_supported = ngraph::onnx_import::is_operator_supported(op_name, version, domain);

    std::cout << "Abs in version 12, domain `ai.onnx` is supported: " << (is_abs_op_supported ? "true" : "false") << std::endl;
    //! [part1]
    return 0;
}
@ -1,29 +0,0 @@
#include <ie_core.hpp>
#include <ngraph/ngraph.hpp>
#include "onnx_import/onnx.hpp"
#include <iostream>
#include <fstream>

int main() {
    //! [part2]
    const char * resnet50_path = "resnet50/model.onnx";
    std::ifstream resnet50_stream(resnet50_path);
    if (resnet50_stream.is_open())
    {
        try
        {
            const std::shared_ptr<ngraph::Function> ng_function = ngraph::onnx_import::import_onnx_model(resnet50_stream);

            // Check shape of the first output, for example
            std::cout << ng_function->get_output_shape(0) << std::endl;
            // The output is Shape{1, 1000}
        }
        catch (const ngraph::ngraph_error& error)
        {
            std::cout << "Error when importing ONNX model: " << error.what() << std::endl;
        }
    }
    resnet50_stream.close();
    //! [part2]
    return 0;
}
@ -1,12 +0,0 @@
#include <ie_core.hpp>
#include <ngraph/ngraph.hpp>
#include "onnx_import/onnx.hpp"
#include <iostream>

int main() {
    //! [part3]
    const char * resnet50_path = "resnet50/model.onnx";
    const std::shared_ptr<ngraph::Function> ng_function = ngraph::onnx_import::import_onnx_model(resnet50_path);
    //! [part3]
    return 0;
}