diff --git a/docs/IE_DG/Migration_CoreAPI.md b/docs/IE_DG/Migration_CoreAPI.md
deleted file mode 100644
index d49bd425bc8..00000000000
--- a/docs/IE_DG/Migration_CoreAPI.md
+++ /dev/null
@@ -1,70 +0,0 @@
-[DEPRECATED] Migration from Inference Engine Plugin API to Core API {#openvino_docs_IE_DG_Migration_CoreAPI}
-===============================
-
-Starting with the 2019 R2 release, the new Inference Engine Core API is introduced and this guide is updated to reflect the new API approach. The Inference Engine Plugin API is still supported, but will be deprecated in future releases.
-
-This section provides common steps to migrate an application written with the Inference Engine Plugin API (`InferenceEngine::InferencePlugin`) to the Inference Engine Core API (`InferenceEngine::Core`).
-
-To learn how to write a new application using the Inference Engine, refer to [Integrate the Inference Engine Request API with Your Application](Integrate_with_customer_application_new_API.md) and [Inference Engine Samples Overview](Samples_Overview.md).
-
-## Inference Engine Core Class
-
-The Inference Engine Core class is implemented on top of the existing Inference Engine Plugin API and handles plugins internally.
-The main responsibility of the `InferenceEngine::Core` class is to hide plugin specifics and provide a new layer of abstraction that works with devices (`InferenceEngine::Core::GetAvailableDevices`). Almost all methods of this class accept `deviceName` as an additional parameter that denotes the actual device you are working with. Plugins are listed in the `plugins.xml` file, which is loaded when an `InferenceEngine::Core` object is constructed:
-
-```xml
-<ie>
-    <plugins>
-        <plugin name="" location="">
-        </plugin>
-        ...
-    </plugins>
-</ie>
-```
-
-## Migration Steps
-
-A common migration process includes the following steps:
-
-1. Migrate from the `InferenceEngine::InferencePlugin` initialization:
-
-@snippet snippets/Migration_CoreAPI.cpp part0
-
-to the `InferenceEngine::Core` class initialization:
-
-@snippet snippets/Migration_CoreAPI.cpp part1
-
-2. Instead of using `InferenceEngine::CNNNetReader` to read IR:
-
-@snippet snippets/Migration_CoreAPI.cpp part2
-
-read networks using the Core class:
-
-@snippet snippets/Migration_CoreAPI.cpp part3
-
-The Core class also allows reading models in the ONNX format (see [ONNX format support](./ONNX_Support.md) for details):
-
-@snippet snippets/Migration_CoreAPI.cpp part4
-
-3. Instead of adding CPU device extensions to the plugin:
-
-@snippet snippets/Migration_CoreAPI.cpp part5
-
-add extensions to the CPU device using the Core class:
-
-@snippet snippets/Migration_CoreAPI.cpp part6
-
-4. Instead of setting configuration keys on a particular plugin, set (key, value) pairs via `InferenceEngine::Core::SetConfig`:
-
-@snippet snippets/Migration_CoreAPI.cpp part7
-
-> **NOTE**: If `deviceName` is omitted as the last argument, the configuration is set for all Inference Engine devices.
-
-5. Migrate from loading the network to a particular plugin:
-
-@snippet snippets/Migration_CoreAPI.cpp part8
-
-to loading the network to a particular device via `InferenceEngine::Core::LoadNetwork`:
-
-@snippet snippets/Migration_CoreAPI.cpp part9
-
-After you have an instance of `InferenceEngine::ExecutableNetwork`, all other steps are as usual.
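For reference while reviewing this removal, the Core-API side of the five migration steps in the deleted guide condenses into a single listing. The sketch below is illustrative only: it assumes a CPU target, an IR model path in `input_model`, and the performance-counters key as an example `(key, value)` pair; `CustomExtension` is a hypothetical `IExtension` implementation, so the corresponding call is left commented out.

```cpp
#include <inference_engine.hpp>
#include <string>

int main() {
    // Step 1: a single Core object replaces per-device InferencePlugin instances.
    InferenceEngine::Core core;

    // Step 2: read an IR (or ONNX) model directly through the Core object.
    std::string input_model = "model.xml";
    InferenceEngine::CNNNetwork network = core.ReadNetwork(input_model);

    // Step 3: extensions are registered per device name instead of per plugin.
    // core.AddExtension(std::make_shared<CustomExtension>(), "CPU");  // CustomExtension is hypothetical

    // Step 4: configuration is set per device name as (key, value) pairs.
    core.SetConfig({{InferenceEngine::PluginConfigParams::KEY_PERF_COUNT,
                     InferenceEngine::PluginConfigParams::YES}}, "CPU");

    // Step 5: the network is loaded to a device name, not to a plugin object.
    InferenceEngine::ExecutableNetwork exec_network = core.LoadNetwork(network, "CPU");
    return 0;
}
```

Everything after `LoadNetwork` (creating infer requests, setting blobs, running inference) is unchanged by the migration.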
diff --git a/docs/IE_DG/OnnxImporterTutorial.md b/docs/IE_DG/OnnxImporterTutorial.md
deleted file mode 100644
index f4538633a7e..00000000000
--- a/docs/IE_DG/OnnxImporterTutorial.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# ONNX* Importer API Tutorial {#openvino_docs_IE_DG_OnnxImporterTutorial}
-
-> **NOTE**: This tutorial is deprecated. Since the OpenVINO™ 2020.4 release, the Inference Engine can read ONNX models via the Inference Engine Core API,
-> so there is no need to use the low-level ONNX* Importer API directly anymore.
-> To read ONNX* models, use the `Core::ReadNetwork()` method, which provides a uniform way to read models in either the IR or ONNX format.
-
-This tutorial demonstrates how to use the ONNX* Importer API.
-This API makes it possible to create an nGraph `Function` object from an imported ONNX model.
-
-All functions of the ONNX Importer API are in the [onnx.hpp][onnx_header] header file.
-
-The API falls into two categories:
-* Helper functions that check which ONNX ops are supported in the current version of the ONNX Importer
-* Functions that read an ONNX model from a stream or file and produce an nGraph function, which can be executed using the Inference Engine
-
-## Check Which ONNX Ops Are Supported
-
-To list all ONNX ops supported in a specific version and domain, use the `get_supported_operators` function
-as shown in the example below:
-
-@snippet snippets/OnnxImporterTutorial0.cpp part0
-
-The code above prints all operators supported for the specified `version` and `domain`, producing output similar to this:
-```cpp
-Abs
-Acos
-...
-Xor
-```
-
-To determine whether a specific ONNX operator in a particular version and domain is supported by the importer, use the `is_operator_supported` function as shown in the example below:
-
-@snippet snippets/OnnxImporterTutorial1.cpp part1
-
-## Import ONNX Model
-
-To import an ONNX model, use the `import_onnx_model` function, which has two overloads:
-* `import_onnx_model` that takes a stream as an input, for example, a file stream or a memory stream
-* `import_onnx_model` that takes a file path as an input
-
-Refer to the sections below for details.
-
-> **NOTE**: The examples below use the ONNX ResNet50 model, which is available at the [ONNX Model Zoo][onnx_model_zoo]:
-> ```bash
-> $ wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz
-> $ tar -xzvf resnet50.tar.gz
-> ```
-
-Once you create the `ng_function`, you can use it to run computation on the Inference Engine.
-As shown in [Build a Model with nGraph Library](../nGraph_DG/build_function.md), an `std::shared_ptr<ngraph::Function>` can be transformed into a `CNNNetwork`.
-
-### Stream as Input
-
-The code below shows how to convert the ONNX ResNet50 model to an nGraph function using `import_onnx_model` with a stream as an input:
-
-@snippet snippets/OnnxImporterTutorial2.cpp part2
-
-### Filepath as Input
-
-The code below shows how to convert the ONNX ResNet50 model to an nGraph function using `import_onnx_model` with a file path as an input:
-
-@snippet snippets/OnnxImporterTutorial3.cpp part3
-
-[onnx_header]: https://github.com/NervanaSystems/ngraph/blob/master/src/ngraph/frontend/onnx_import/onnx.hpp
-[onnx_model_zoo]: https://github.com/onnx/models
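The deprecation note in the removed tutorial points to `Core::ReadNetwork()` as the replacement for the importer API. A minimal sketch of that path, assuming a CPU target and the same `resnet50/model.onnx` file used in the deleted examples:

```cpp
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core core;

    // Read the ONNX file directly; no separate importer call or weights file is needed.
    InferenceEngine::CNNNetwork network = core.ReadNetwork("resnet50/model.onnx");

    // From here on, the CNNNetwork behaves exactly like one read from IR.
    std::cout << "Inputs: " << network.getInputsInfo().size()
              << ", outputs: " << network.getOutputsInfo().size() << std::endl;
    InferenceEngine::ExecutableNetwork exec_network = core.LoadNetwork(network, "CPU");
    return 0;
}
```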
diff --git a/docs/IE_DG/supported_plugins/FPGA.md b/docs/IE_DG/supported_plugins/FPGA.md
deleted file mode 100644
index 63ae6e62ed7..00000000000
--- a/docs/IE_DG/supported_plugins/FPGA.md
+++ /dev/null
@@ -1,22 +0,0 @@
-FPGA Plugin {#openvino_docs_IE_DG_supported_plugins_FPGA}
-===========
-
-## Product Change Notice
-Intel® Distribution of OpenVINO™ toolkit for Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA
-
-<table>
-  <tr>
-    <td><strong>Change Notice Begins</strong></td>
-    <td>July 2020</td>
-  </tr>
-  <tr>
-    <td><strong>Change Date</strong></td>
-    <td>October 2020</td>
-  </tr>
-</table>
-
-Intel will be transitioning to the next-generation programmable deep-learning solution based on FPGAs in order to increase the level of customization possible in FPGA deep learning. As part of this transition, future standard releases (i.e., non-LTS releases) of Intel® Distribution of OpenVINO™ toolkit will no longer include the Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA.
-
-Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS release will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, please talk to your sales representative or contact us to get the latest FPGA updates.
-
-For documentation on the FPGA plugin available in previous releases of Intel® Distribution of OpenVINO™ toolkit with FPGA support, see the documentation for the [2020.4 version](https://docs.openvinotoolkit.org/2020.4/openvino_docs_IE_DG_supported_plugins_FPGA.html) and lower.
\ No newline at end of file
diff --git a/docs/doxygen/doxygen-ignore.txt b/docs/doxygen/doxygen-ignore.txt
index 0be7a70dc06..7f963ac63e7 100644
--- a/docs/doxygen/doxygen-ignore.txt
+++ b/docs/doxygen/doxygen-ignore.txt
@@ -22,6 +22,8 @@ inference-engine/include/vpu/vpu_config.hpp
 inference-engine/include/vpu/vpu_plugin_config.hpp
 openvino/docs/benchmarks/performance_int8_vs_fp32.md
 openvino/docs/get_started/get_started_macos.md
+openvino/docs/optimization_guide/dldt_optimization_guide.md
+openvino/docs/IE_DG/ShapeInference.md
 inference-engine/include/details/ie_so_pointer.hpp
 inference-engine/include/ie_compound_blob.h
 inference-engine/include/ie_data.h
diff --git a/docs/doxygen/ie_docs.xml b/docs/doxygen/ie_docs.xml
index 120492baef0..b8581444627 100644
--- a/docs/doxygen/ie_docs.xml
+++ b/docs/doxygen/ie_docs.xml
@@ -291,11 +291,9 @@ limitations under the License.
-                <tab type="user" title="[DEPRECATED] Migration from Inference Engine Plugin API to Core API" url="@ref openvino_docs_IE_DG_Migration_CoreAPI"/>
-                <tab type="user" title="ONNX* Importer API Tutorial" url="@ref openvino_docs_IE_DG_OnnxImporterTutorial"/>
@@ -311,7 +309,6 @@ limitations under the License.
-                <tab type="user" title="FPGA" url="@ref openvino_docs_IE_DG_supported_plugins_FPGA"/>
diff --git a/docs/snippets/CMakeLists.txt b/docs/snippets/CMakeLists.txt
index 1d2a20eea0a..48edae1e832 100644
--- a/docs/snippets/CMakeLists.txt
+++ b/docs/snippets/CMakeLists.txt
@@ -26,17 +26,6 @@ if(NOT OpenCV_FOUND)
                               "${CMAKE_CURRENT_SOURCE_DIR}/ShapeInference.cpp")
 endif()
 
-# ONNX importer related files
-if(NOT NGRAPH_ONNX_IMPORT_ENABLE)
-    list(REMOVE_ITEM SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/OnnxImporterTutorial0.cpp"
-                             "${CMAKE_CURRENT_SOURCE_DIR}/OnnxImporterTutorial1.cpp"
-                             "${CMAKE_CURRENT_SOURCE_DIR}/OnnxImporterTutorial2.cpp"
-                             "${CMAKE_CURRENT_SOURCE_DIR}/OnnxImporterTutorial3.cpp")
-endif()
-
-# remove snippets for deprecated / removed API
-list(REMOVE_ITEM SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/Migration_CoreAPI.cpp")
-
 # requires mfxFrameSurface1 and MSS API
 list(REMOVE_ITEM SOURCES "${CMAKE_CURRENT_SOURCE_DIR}/dldt_optimization_guide2.cpp"
                          "${CMAKE_CURRENT_SOURCE_DIR}/dldt_optimization_guide3.cpp"
diff --git a/docs/snippets/Migration_CoreAPI.cpp b/docs/snippets/Migration_CoreAPI.cpp
deleted file mode 100644
index fd89803093b..00000000000
--- a/docs/snippets/Migration_CoreAPI.cpp
+++ /dev/null
@@ -1,48 +0,0 @@
-#include <inference_engine.hpp>
-
-int main() {
-std::string deviceName = "Device name";
-//! [part0]
-InferenceEngine::InferencePlugin plugin = InferenceEngine::PluginDispatcher({ FLAGS_pp }).getPluginByDevice(FLAGS_d);
-//! [part0]
-
-//! [part1]
-InferenceEngine::Core core;
-//! [part1]
-
-//! [part2]
-InferenceEngine::CNNNetReader network_reader;
-network_reader.ReadNetwork(fileNameToString(input_model));
-network_reader.ReadWeights(fileNameToString(input_model).substr(0, input_model.size() - 4) + ".bin");
-InferenceEngine::CNNNetwork network = network_reader.getNetwork();
-//! [part2]
-
-//! [part3]
-InferenceEngine::CNNNetwork network = core.ReadNetwork(input_model);
-//! [part3]
-
-//! [part4]
-InferenceEngine::CNNNetwork network = core.ReadNetwork("model.onnx");
-//! [part4]
-
-//! [part5]
-// CustomExtension stands in for a user-provided InferenceEngine::IExtension implementation.
-plugin.AddExtension(std::make_shared<CustomExtension>());
-//! [part5]
-
-//! [part6]
-core.AddExtension(std::make_shared<CustomExtension>(), "CPU");
-//! [part6]
-
-//! [part7]
-core.SetConfig({{PluginConfigParams::KEY_CONFIG_FILE, FLAGS_c}}, "GPU");
-//! [part7]
-
-//! [part8]
-auto execNetwork = plugin.LoadNetwork(network, { });
-//! [part8]
-
-//! [part9]
-auto execNetwork = core.LoadNetwork(network, deviceName, { });
-//! [part9]
-return 0;
-}
diff --git a/docs/snippets/OnnxImporterTutorial0.cpp b/docs/snippets/OnnxImporterTutorial0.cpp
deleted file mode 100644
index cf434622cb9..00000000000
--- a/docs/snippets/OnnxImporterTutorial0.cpp
+++ /dev/null
@@ -1,19 +0,0 @@
-#include <cstdint>
-#include <iostream>
-#include "onnx_import/onnx.hpp"
-#include <set>
-#include <string>
-
-int main() {
-//! [part0]
-const std::int64_t version = 12;
-const std::string domain = "ai.onnx";
-const std::set<std::string> supported_ops = ngraph::onnx_import::get_supported_operators(version, domain);
-
-for(const auto& op : supported_ops)
-{
-    std::cout << op << std::endl;
-}
-//! [part0]
-return 0;
-}
diff --git a/docs/snippets/OnnxImporterTutorial1.cpp b/docs/snippets/OnnxImporterTutorial1.cpp
deleted file mode 100644
index 60122f1a1ea..00000000000
--- a/docs/snippets/OnnxImporterTutorial1.cpp
+++ /dev/null
@@ -1,15 +0,0 @@
-#include <cstdint>
-#include <iostream>
-#include "onnx_import/onnx.hpp"
-
-int main() {
-//! [part1]
-const std::string op_name = "Abs";
-const std::int64_t version = 12;
-const std::string domain = "ai.onnx";
-const bool is_abs_op_supported = ngraph::onnx_import::is_operator_supported(op_name, version, domain);
-
-std::cout << "Abs in version 12, domain `ai.onnx` is supported: " << (is_abs_op_supported ? "true" : "false") << std::endl;
-//! [part1]
-return 0;
-}
diff --git a/docs/snippets/OnnxImporterTutorial2.cpp b/docs/snippets/OnnxImporterTutorial2.cpp
deleted file mode 100644
index 00ce2949a1d..00000000000
--- a/docs/snippets/OnnxImporterTutorial2.cpp
+++ /dev/null
@@ -1,29 +0,0 @@
-#include <fstream>
-#include <iostream>
-#include "onnx_import/onnx.hpp"
-#include <memory>
-#include <string>
-
-int main() {
-//! [part2]
-    const char * resnet50_path = "resnet50/model.onnx";
-    std::ifstream resnet50_stream(resnet50_path);
-    if (resnet50_stream.is_open())
-    {
-        try
-        {
-            const std::shared_ptr<ngraph::Function> ng_function = ngraph::onnx_import::import_onnx_model(resnet50_stream);
-
-            // Check the shape of the first output, for example
-            std::cout << ng_function->get_output_shape(0) << std::endl;
-            // The output is Shape{1, 1000}
-        }
-        catch (const ngraph::ngraph_error& error)
-        {
-            std::cout << "Error when importing ONNX model: " << error.what() << std::endl;
-        }
-    }
-    resnet50_stream.close();
-//! [part2]
-return 0;
-}
diff --git a/docs/snippets/OnnxImporterTutorial3.cpp b/docs/snippets/OnnxImporterTutorial3.cpp
deleted file mode 100644
index 6fc1e1b59de..00000000000
--- a/docs/snippets/OnnxImporterTutorial3.cpp
+++ /dev/null
@@ -1,12 +0,0 @@
-#include <memory>
-#include <string>
-#include "onnx_import/onnx.hpp"
-
-int main() {
-//! [part3]
-const char * resnet50_path = "resnet50/model.onnx";
-const std::shared_ptr<ngraph::Function> ng_function = ngraph::onnx_import::import_onnx_model(resnet50_path);
-//! [part3]
-return 0;
-}
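The deleted stream-based snippet (OnnxImporterTutorial2.cpp) has no single-call equivalent in the Core API, but a comparable flow is to read the ONNX content into memory and hand it to `Core::ReadNetwork`. The sketch below is an assumption-laden illustration: it presumes the in-memory overload `ReadNetwork(const std::string&, const Blob::CPtr&)` accepts ONNX content with a null weights blob, and it reuses the `resnet50/model.onnx` path from the deleted examples.

```cpp
#include <inference_engine.hpp>
#include <fstream>
#include <sstream>
#include <string>

int main() {
    // Load the whole ONNX file into a string, as it might arrive from an archive or over a network.
    std::ifstream model_file("resnet50/model.onnx", std::ios::binary);
    std::stringstream buffer;
    buffer << model_file.rdbuf();
    const std::string model_data = buffer.str();

    // ONNX reading does not use a separate weights blob, so a null Blob::CPtr is passed
    // (assumption about the behavior of the in-memory ReadNetwork overload).
    InferenceEngine::Core core;
    InferenceEngine::CNNNetwork network = core.ReadNetwork(model_data, InferenceEngine::Blob::CPtr());
    return 0;
}
```

When the model is already on disk, the single-argument `core.ReadNetwork("resnet50/model.onnx")` shown earlier is the simpler choice.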