#include <ie_core.hpp>

#include <map>
#include <string>

int main() {
    using namespace InferenceEngine;
    std::string modelPath = "/tmp/myModel.xml";
    std::string device = "GNA";
    std::map<std::string, std::string> deviceConfig;
//! [part2]
    InferenceEngine::Core ie;                                  // Step 1: create Inference Engine object
    ie.SetConfig({{CONFIG_KEY(CACHE_DIR), "myCacheFolder"}});  // Step 1b: enable model caching via the CACHE_DIR config key
    ie.LoadNetwork(modelPath, device, deviceConfig);           // Step 2: LoadNetwork by model file path
//! [part2]
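    // Note: with CACHE_DIR set, the first LoadNetwork() call compiles the
    // network and, for devices that support import/export (such as GNA),
    // stores the compiled blob in "myCacheFolder". Subsequent runs with the
    // same model, device, and config should import the cached blob instead
    // of recompiling.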
    return 0;
}