[DOC] Model caching feature overview (#5519)

* Docs: Model caching feature overview

* Update docs/IE_DG/Intro_to_Performance.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Apply suggestions from code review

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

* Review comments
- Moved code examples to snippets
- Added a link to the Model Caching overview from the "Inference Engine Developer Guide"
- A few minor changes

* Update docs/IE_DG/Intro_to_Performance.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
Commit 4a4c3e8ec9 by Mikhail Nosov, 2021-06-23 09:33:50 +03:00, committed by GitHub
Parent: 861d89c988
9 changed files with 142 additions and 0 deletions

@@ -0,0 +1,13 @@
#include <ie_core.hpp>
int main() {
using namespace InferenceEngine;
std::string modelPath = "/tmp/myModel.xml";
std::string device = "GNA";
std::map<std::string, std::string> deviceConfig;
//! [part1]
InferenceEngine::Core ie;                        // Step 1: create an Inference Engine Core object
ie.LoadNetwork(modelPath, device, deviceConfig); // Step 2: load the network directly from the model file path
//! [part1]
return 0;
}
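The snippet above loads a model by file path; the model caching feature this commit documents builds on that by letting the Core cache compiled networks on disk. A minimal sketch in the same snippet style, assuming the `CACHE_DIR` config key described in the Model Caching overview (the cache folder path and device are illustrative):

```cpp
#include <ie_core.hpp>
#include <ie_plugin_config.hpp>
int main() {
    using namespace InferenceEngine;
    std::string modelPath = "/tmp/myModel.xml";
    std::string device = "GNA";
    std::map<std::string, std::string> deviceConfig;
    InferenceEngine::Core ie;
    // Enable model caching: compiled network blobs are stored in this folder
    // and reused on later runs, for devices that support import/export
    // ("myCacheFolder" is an illustrative path)
    ie.SetConfig({{CONFIG_KEY(CACHE_DIR), "myCacheFolder"}});
    // With caching enabled, repeated LoadNetwork calls for the same
    // model/device/config skip recompilation and load from the cache
    ie.LoadNetwork(modelPath, device, deviceConfig);
    return 0;
}
```

Passing the model file path (rather than a pre-read `CNNNetwork`) lets the runtime hash the file to build the cache key, which is why the path-based `LoadNetwork` overload shown in the diff pairs naturally with caching.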