[DOC] Model caching feature overview (#5519)
* Docs: Model caching feature overview
* Update docs/IE_DG/Intro_to_Performance.md
* Apply suggestions from code review
* Review comments:
  - Moved code examples to snippets
  - Added link to Model Caching overview from "Inference Engine Developer Guide"
  - Few minor changes
* Update docs/IE_DG/Intro_to_Performance.md

Co-authored-by: Anastasiya Ageeva <anastasiya.ageeva@intel.com>
docs/snippets/InferenceEngine_Caching1.cpp (new file, 13 lines)
@@ -0,0 +1,13 @@
#include <ie_core.hpp>

int main() {
    using namespace InferenceEngine;
    std::string modelPath = "/tmp/myModel.xml";
    std::string device = "GNA";
    std::map<std::string, std::string> deviceConfig;
//! [part1]
    InferenceEngine::Core ie;                        // Step 1: create Inference Engine object
    ie.LoadNetwork(modelPath, device, deviceConfig); // Step 2: LoadNetwork by model file path
//! [part1]
    return 0;
}