Compare commits

...

11 Commits

Author SHA1 Message Date
Alexey Suhov
1c794d971c Merge pull request #278 from asuhov/2019-r3
Publishing 2019 R3 content
2019-10-04 19:54:45 +03:00
Alexey Suhov
0923303e02 Publishing 2019 R3 content 2019-10-04 19:26:43 +03:00
Alexey Suhov
ba6e22b1b5 Publishing 2019 R2 content (#223) 2019-08-09 19:02:42 +03:00
Alexey Suhov
c585b530c1 replaced recommended stackoverflow dldt tag with openvino; removed outdated setup.py for python api (#195) 2019-06-27 21:01:22 +03:00
Alexey Suhov
693ab4e79a updated license headers in movidius sources (#163) 2019-05-28 15:40:53 +03:00
Alexey Suhov
0ef92871b6 Publishing 2019 R1.1 content and Myriad plugin sources (#162)
* Publishing 2019 R1.1 content and Myriad plugin sources
2019-05-27 21:18:32 +03:00
Alexey Suhov
e206d06f18 Publishing 2019 R1.0.1 content 2019-04-30 18:55:07 +03:00
Viacheslav Matveichev
b235c73481 Merge pull request #129 from asuhov/2019-r1
Publishing 2019 R1 content
2019-04-13 01:02:28 +03:00
Alexey Suhov
72660e9a4d Publishing 2019 R1 content 2019-04-12 18:25:53 +03:00
Dmitry Kurtaev
669bee86e5 Add a section of how to link IE with CMake project (#99) 2019-03-14 13:13:27 +03:00
Alexey Suhov
17e66dc5a6 Added unit tests and readme for model optimizer (#79)
* added unit tests
* added readme for model optimizer
* added a list of supported IE plugins
2019-01-23 20:23:27 +03:00
4904 changed files with 568919 additions and 150432 deletions

.gitmodules

@@ -1,3 +1,8 @@
[submodule "inference-engine/thirdparty/ade"]
path = inference-engine/thirdparty/ade
url = https://github.com/opencv/ade.git
ignore = dirty
[submodule "inference-engine/thirdparty/ngraph"]
path = inference-engine/thirdparty/ngraph
url = https://github.com/NervanaSystems/ngraph.git
ignore = dirty

README.md

@@ -1,5 +1,5 @@
# [OpenVINO™ Toolkit](https://01.org/openvinotoolkit) - Deep Learning Deployment Toolkit repository
[![Stable release](https://img.shields.io/badge/version-2018.R5-green.svg)](https://github.com/opencv/dldt/releases/tag/2018_R5)
[![Stable release](https://img.shields.io/badge/version-2019.R3-green.svg)](https://github.com/opencv/dldt/releases/tag/2019_R3)
[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)
This toolkit allows developers to deploy pre-trained deep learning models through a high-level C++ Inference Engine API integrated with application logic.
@@ -15,7 +15,12 @@ Deep Learning Deployment Toolkit is licensed under [Apache License Version 2.0](
## Documentation
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* Inference Engine [build instructions](inference-engine/README.md)
* [Inference Engine build instructions](inference-engine/README.md)
* [Get Started with Deep Learning Deployment Toolkit on Linux*](get-started-linux.md)
* [Introduction to Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
## How to Contribute
We welcome community contributions to the Deep Learning Deployment Toolkit repository. If you have an idea how to improve the product, please share it with us doing the following steps:
@@ -29,7 +34,7 @@ Deep Learning Deployment Toolkit is licensed under Apache License, Version 2.0.
## Support
Please report questions, issues and suggestions using:
* [\#dldt](https://stackoverflow.com/search?q=%23dldt) tag on StackOverflow*
* [\#openvino](https://stackoverflow.com/search?q=%23openvino) tag on StackOverflow*
* [GitHub* Issues](https://github.com/opencv/dldt/issues)
* [Forum](https://software.intel.com/en-us/forums/computer-vision)

get-started-linux.md (new file)

@@ -0,0 +1,203 @@
# Get Started with OpenVINO™ Deep Learning Deployment Toolkit (DLDT) on Linux*
This guide helps you get started using the DLDT on Linux*. With this guide you will learn how to:
1. [Configure the Model Optimizer](#configure-the-model-optimizer)
2. [Prepare a model for sample inference:](#prepare-a-model-for-sample-inference)
1. [Download a pre-trained model](#download-a-trained-model)
2. [Convert the model to an Intermediate Representation (IR) with the Model Optimizer](#convert-the-model-to-an-intermediate-representation-with-the-model-optimizer)
3. [Run the Image Classification Sample Application with the model](#run-the-image-classification-sample-application)
## Prerequisites
1. This guide assumes that you have already cloned the `dldt` repo and successfully built the Inference Engine and Samples using the [build instructions](inference-engine/README.md).
2. The original structure of the repository directories is kept unchanged.
> **NOTE**: Below, the directory to which the `dldt` repository is cloned is referred to as `<DLDT_DIR>`.
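For convenience, you can capture this path in an environment variable and reuse it in the commands below (the clone location here is only an example):
```sh
# Hypothetical clone location; adjust to wherever you cloned the dldt repository.
export DLDT_DIR="$HOME/dldt"
echo "Using DLDT sources from: $DLDT_DIR"
```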
## Configure the Model Optimizer
The Model Optimizer is a Python\*-based command line tool for importing trained models from popular deep learning frameworks such as Caffe\*, TensorFlow\*, Apache MXNet\*, ONNX\* and Kaldi\*.
You cannot perform inference on your trained model without running the model through the Model Optimizer. When you run a pre-trained model through the Model Optimizer, your output is an Intermediate Representation (IR) of the network. The Intermediate Representation is a pair of files that describe the whole model:
- `.xml`: Describes the network topology
- `.bin`: Contains the weights and biases binary data
For more information about the Model Optimizer, refer to the [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html). 
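Since an IR is usable only when both files are present, a quick completeness check can be scripted; a minimal sketch (the directory and model name below are created purely for illustration):
```sh
# Sketch: check that every .xml in an IR directory has its matching .bin.
# ir_dir and the squeezenet1.1 file names are illustrative only.
ir_dir=$(mktemp -d)
touch "$ir_dir/squeezenet1.1.xml" "$ir_dir/squeezenet1.1.bin"
for xml in "$ir_dir"/*.xml; do
  base="${xml%.xml}"
  if [ -f "$base.bin" ]; then
    echo "complete IR: $(basename "$base")"
  else
    echo "missing weights for $(basename "$base")" >&2
  fi
done
```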
### Model Optimizer Configuration Steps
You can choose to either configure all supported frameworks at once **OR** configure one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you installed all dependencies.
> **NOTE**: Since the TensorFlow framework is not officially supported on CentOS*, the Model Optimizer for TensorFlow cannot be configured and run on those systems.
> **IMPORTANT**: Internet access is required to execute the following steps successfully. If you can only access the Internet through a proxy server, make sure that it is configured in your OS environment.
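If you are behind a proxy, exporting the standard proxy variables in your shell is usually sufficient; a sketch (the proxy address is a placeholder, not a real host):
```sh
# Placeholder proxy address; replace with your actual proxy host and port.
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
# Some tools read the uppercase variants instead:
export HTTP_PROXY="$http_proxy"
export HTTPS_PROXY="$https_proxy"
```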
**Option 1: Configure all supported frameworks at the same time**
1. Go to the Model Optimizer prerequisites directory:
```sh
cd <DLDT_DIR>/model_optimizer/install_prerequisites
```
2. Run the script to configure the Model Optimizer for Caffe,
TensorFlow, MXNet, Kaldi\*, and ONNX:
```sh
sudo ./install_prerequisites.sh
```
**Option 2: Configure each framework separately**
Configure individual frameworks separately **ONLY** if you did not select **Option 1** above.
1. Go to the Model Optimizer prerequisites directory:
```sh
cd <DLDT_DIR>/model_optimizer/install_prerequisites
```
2. Run the script for your model framework. You can run more than one script:
- For **Caffe**:
```sh
sudo ./install_prerequisites_caffe.sh
```
- For **TensorFlow**:
```sh
sudo ./install_prerequisites_tf.sh
```
- For **MXNet**:
```sh
sudo ./install_prerequisites_mxnet.sh
```
- For **ONNX**:
```sh
sudo ./install_prerequisites_onnx.sh
```
- For **Kaldi**:
```sh
sudo ./install_prerequisites_kaldi.sh
```
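If you plan to run several of the scripts above, you can assemble and review the exact commands before executing anything with `sudo`; a sketch (the framework list is an example):
```sh
# Sketch: build the list of per-framework commands to review before running with sudo.
cmds=""
for fw in caffe tf mxnet onnx kaldi; do
  cmds="${cmds}sudo ./install_prerequisites_${fw}.sh
"
done
printf "%s" "$cmds"
```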
The Model Optimizer is configured for one or more frameworks. Continue to the next section to download and prepare a model for running a sample inference.
## Prepare a Model for Sample Inference
This section describes the steps to get a pre-trained model for sample inference and to prepare the model's optimized Intermediate Representation (IR) that the Inference Engine uses.
### Download a Trained Model
To run the Image Classification Sample, you need a pre-trained model to run the inference on. This guide uses the public SqueezeNet 1.1 Caffe* model. You can find and download this model manually or use the OpenVINO™ [Model Downloader](https://github.com/opencv/open_model_zoo/tree/master/model_downloader).
With the Model Downloader, you can download other popular public deep learning topologies and the [OpenVINO™ pre-trained models](https://github.com/opencv/open_model_zoo/tree/master/intel_models), which are prepared for a wide range of inference scenarios: object detection, object recognition, object re-identification, human pose estimation, action recognition, and others.
To download the SqueezeNet 1.1 Caffe* model to a models folder with the Model Downloader:
1. Install the [prerequisites](https://github.com/opencv/open_model_zoo/tree/master/model_downloader#prerequisites).
2. Run `downloader.py`, specifying the topology name and a `<models_dir>` path. For example, to download the model to the `~/public_models` directory:
```sh
./downloader.py --name squeezenet1.1 --output_dir ~/public_models
```
When the model files are successfully downloaded, output similar to the following is printed:
```sh
###############|| Downloading topologies ||###############
========= Downloading /home/username/public_models/classification/squeezenet/1.1/caffe/squeezenet1.1.prototxt
========= Downloading /home/username/public_models/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel
... 100%, 4834 KB, 3157 KB/s, 1 seconds passed
###############|| Post processing ||###############
========= Changing input dimensions in squeezenet1.1.prototxt =========
```
### Convert the model to an Intermediate Representation with the Model Optimizer
> **NOTE**: This section assumes that you have configured the Model Optimizer using the instructions from the [Configure the Model Optimizer](#configure-the-model-optimizer) section.
1. Create a `<ir_dir>` directory that will contain the Intermediate Representation (IR) of the model.
2. Inference Engine can perform inference on a [list of supported devices](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html) using specific device plugins. Different plugins support models of [different precision formats](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html#supported_model_formats), such as FP32, FP16, INT8. To prepare an IR to run inference on a particular hardware, run the Model Optimizer with the appropriate `--data_type` options:
**For CPU (FP32):**
```sh
python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP32 --output_dir <ir_dir>
```
**For GPU and MYRIAD (FP16):**
```sh
python3 <DLDT_DIR>/model_optimizer/mo.py --input_model <models_dir>/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel --data_type FP16 --output_dir <ir_dir>
```
After the Model Optimizer script completes, the produced IR files (`squeezenet1.1.xml`, `squeezenet1.1.bin`) are in the specified `<ir_dir>` directory.
3. Copy the `squeezenet1.1.labels` file from the `<DLDT_DIR>/inference-engine/samples/sample_data/` directory to the model IR directory. This file contains the ImageNet class names, so the inference results show text labels instead of class numbers:
```sh
cp <DLDT_DIR>/inference-engine/samples/sample_data/squeezenet1.1.labels <ir_dir>
```
Now you are ready to run the Image Classification Sample Application.
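The `--data_type` choice in step 2 above follows from the target device; a small sketch of that mapping (device names match the plugin names used by the samples, and the mapping reflects the FP32/FP16 guidance above):
```sh
# Sketch: map a target device to the Model Optimizer --data_type option
# (FP32 for CPU, FP16 for GPU and MYRIAD, per the section above).
device="MYRIAD"   # try CPU, GPU, or MYRIAD
case "$device" in
  CPU)        data_type="FP32" ;;
  GPU|MYRIAD) data_type="FP16" ;;
  *)          echo "unknown device: $device" >&2; exit 1 ;;
esac
echo "Use --data_type $data_type for $device"
```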
## Run the Image Classification Sample Application
The Inference Engine sample applications were compiled automatically when you built the Inference Engine using the [build instructions](inference-engine/README.md). The binary files are located in the `<DLDT_DIR>/inference-engine/bin/intel64/Release` directory.
Follow the steps below to run the Image Classification sample application on the prepared IR and with an input image:
1. Go to the samples build directory:
```sh
cd <DLDT_DIR>/inference-engine/bin/intel64/Release
```
2. Run the sample executable, specifying the `car.png` file from the `<DLDT_DIR>/inference-engine/samples/sample_data/` directory as the input image, the IR of your model, and the target device to perform inference on:
**For CPU:**
```sh
./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d CPU
```
**For GPU:**
```sh
./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d GPU
```
**For MYRIAD:**
>**NOTE**: Running inference on VPU devices (Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2) with the MYRIAD plugin requires performing [additional hardware configuration steps](inference-engine/README.md#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2).
```sh
./classification_sample -i <DLDT_DIR>/inference-engine/samples/sample_data/car.png -m <ir_dir>/squeezenet1.1.xml -d MYRIAD
```
When the sample application completes, the label and confidence for the top 10 categories are printed on the screen. Below is a sample output with inference results on CPU:
```sh
Top 10 results:
Image /home/user/dldt/inference-engine/samples/sample_data/car.png
classid probability label
------- ----------- -----
817 0.8363345 sports car, sport car
511 0.0946488 convertible
479 0.0419131 car wheel
751 0.0091071 racer, race car, racing car
436 0.0068161 beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon
656 0.0037564 minivan
586 0.0025741 half track
717 0.0016069 pickup, pickup truck
864 0.0012027 tow truck, tow car, wrecker
581 0.0005882 grille, radiator grille
total inference time: 2.6642941
Average running time of one iteration: 2.6642941 ms
Throughput: 375.3339402 FPS
[ INFO ] Execution successful
```
## Additional Resources
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Inference Engine build instructions](inference-engine/README.md)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html).

inference-engine/CMakeLists.txt

@@ -1,157 +1,68 @@
# Copyright (C) 2018 Intel Corporation
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 3.3)
if (APPLE)
# due to https://cmake.org/cmake/help/v3.12/policy/CMP0068.html
cmake_minimum_required(VERSION 3.9 FATAL_ERROR)
else()
cmake_minimum_required(VERSION 3.7.2 FATAL_ERROR)
endif()
project(InferenceEngine)
set(DEV_BUILD TRUE)
set(CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
set(IE_MAIN_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR})
## WA for problem with gtest submodule. It cannot detect uint32 type.
## remove Gtest submodule and this two lines together
include (CheckTypeSize)
check_type_size (uint32_t uint32_t LANGUAGE CXX)
include(CTest)
include(features)
if (UNIX AND NOT APPLE)
set(LINUX TRUE)
endif()
# include developer package
include(developer_package)
option (OS_FOLDER "create OS dedicated folder in output" OFF)
# These options are shared with 3rdparty plugins
# by means of developer package
include(check_features)
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "armv7l")
set (ARCH_FOLDER armv7l)
elseif("${CMAKE_SIZEOF_VOID_P}" EQUAL "8")
set (ARCH_FOLDER intel64)
else()
set (ARCH_FOLDER ia32)
endif()
if (OS_FOLDER)
message ("**** OS FOLDER IS: [${OS_FOLDER}]")
if ("${OS_FOLDER}" STREQUAL "ON")
message ("**** USING OS FOLDER: [${CMAKE_SYSTEM_NAME}]")
set (BIN_FOLDER bin/${CMAKE_SYSTEM_NAME}/${ARCH_FOLDER})
else()
set (BIN_FOLDER bin/${OS_FOLDER}/${ARCH_FOLDER})
endif()
else()
set (BIN_FOLDER bin/${ARCH_FOLDER})
endif()
set (IE_MAIN_SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR})
set (CMAKE_MODULE_PATH "${IE_MAIN_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})
#printing debug messages
include (debug)
if("${CMAKE_BUILD_TYPE}" STREQUAL "")
debug_message(STATUS "CMAKE_BUILD_TYPE not defined, 'Release' will be used")
set(CMAKE_BUILD_TYPE "Release")
endif()
message(STATUS "BUILD_CONFIGURATION: ${CMAKE_BUILD_TYPE}")
if(COVERAGE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fprofile-arcs -ftest-coverage -O0")
endif()
if (UNIX)
SET(LIB_DL ${CMAKE_DL_LIBS})
endif()
set (OUTPUT_ROOT ${IE_MAIN_SOURCE_DIR})
include(os_flags)
#resolving dependencies for the project
include (dependencies)
set(CMAKE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX})
set(CMAKE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX})
if (WIN32)
# Support CMake multiconfiguration for Visual Studio build
set(IE_BUILD_POSTFIX $<$<CONFIG:Debug>:${IE_DEBUG_POSTFIX}>$<$<CONFIG:Release>:${IE_RELEASE_POSTFIX}>)
set(IE_BUILD_CONFIGURATION $<CONFIG>)
else ()
if (${CMAKE_BUILD_TYPE} STREQUAL "Debug" )
set(IE_BUILD_POSTFIX ${IE_DEBUG_POSTFIX})
else()
set(IE_BUILD_POSTFIX ${IE_RELEASE_POSTFIX})
endif()
set(IE_BUILD_CONFIGURATION ${CMAKE_BUILD_TYPE})
endif()
add_definitions(-DIE_BUILD_POSTFIX=\"${IE_BUILD_POSTFIX}\")
if(NOT(UNIX))
if (WIN32)
#set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /MT")
#set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /MTd")
endif()
set (CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set (CMAKE_LIBRARY_PATH ${OUTPUT_ROOT}/${BIN_FOLDER})
set (CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set (CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set (CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set (CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set (LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set (LIBRARY_OUTPUT_PATH ${LIBRARY_OUTPUT_DIRECTORY}) # compatibility issue: linux uses LIBRARY_OUTPUT_PATH, windows uses LIBRARY_OUTPUT_DIRECTORY
else ()
set (CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${IE_BUILD_CONFIGURATION}/lib)
set (CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${IE_BUILD_CONFIGURATION}/lib)
set (CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${IE_BUILD_CONFIGURATION})
set (CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${IE_BUILD_CONFIGURATION})
set (CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${IE_BUILD_CONFIGURATION})
set (LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${IE_BUILD_CONFIGURATION}/lib)
set (LIBRARY_OUTPUT_PATH ${LIBRARY_OUTPUT_DIRECTORY}/lib)
endif()
if (APPLE)
set(CMAKE_MACOSX_RPATH 1)
endif(APPLE)
#rpath fully disabled
if (NOT ENABLE_PLUGIN_RPATH)
SET (CMAKE_SKIP_RPATH TRUE)
endif()
#Use solution folders.
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
#message("=====================> ${CMAKE_BUILD_TYPE} <=====================")
# resolving dependencies for the project
include(dependencies)
message (STATUS "PROJECT ............................... " ${PROJECT_NAME})
message (STATUS "CMAKE_BINARY_DIR ...................... " ${CMAKE_BINARY_DIR})
message (STATUS "IE_MAIN_SOURCE_DIR .................... " ${IE_MAIN_SOURCE_DIR})
message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
message (STATUS "CMAKE_C_COMPILER_ID ................... " ${CMAKE_C_COMPILER_ID})
message (STATUS "CMAKE_BUILD_TYPE ...................... " ${CMAKE_BUILD_TYPE})
include(sdl)
set (CMAKE_POSITION_INDEPENDENT_CODE ON)
include (sanitizer)
include(CheckCXXCompilerFlag)
if(UNIX)
CHECK_CXX_COMPILER_FLAG("-fvisibility=hidden" COMPILER_SUPPORTS_VISIBILITY)
if (COMPILER_SUPPORTS_VISIBILITY)
#add_definitions(-fvisibility=hidden) todo: should be hidden? if so define default visibiliti explicite for each funtion
add_definitions(-fvisibility=default)
endif(COMPILER_SUPPORTS_VISIBILITY)
endif(UNIX)
# remove file with exported developer targets to force its regeneration
file(REMOVE "${CMAKE_BINARY_DIR}/targets_developer.cmake")
add_subdirectory(src)
add_subdirectory(tests)
add_subdirectory(thirdparty)
if (ENABLE_SAMPLES_CORE)
set(InferenceEngine_DIR "${CMAKE_BINARY_DIR}")
#to be able to link
set (LIB_FOLDER ${IE_MAIN_SOURCE_DIR}/${BIN_FOLDER}/${IE_BUILD_CONFIGURATION}/lib)
add_subdirectory(samples)
if(ENABLE_TESTS)
add_subdirectory(tests)
endif()
add_subdirectory(thirdparty)
add_subdirectory(tools)
if (ENABLE_SAMPLES)
# hint for find_package(InferenceEngine in the samples folder)
set(InferenceEngine_DIR "${CMAKE_BINARY_DIR}")
endif()
# gflags and format_reader targets are kept inside of samples directory and
# they must be built even if samples build is disabled (required for tests and tools).
add_subdirectory(samples)
file(GLOB_RECURSE SAMPLES_SOURCES samples/*.cpp samples/*.hpp samples/*.h)
add_cpplint_target(sample_cpplint
FOR_SOURCES ${SAMPLES_SOURCES}
EXCLUDE_PATTERNS "thirdparty/*" "pugixml/*")
if (ENABLE_PYTHON)
add_subdirectory(ie_bridges/python)
endif()
endif()
add_cpplint_report_target()

inference-engine/README.md

@@ -1,52 +1,295 @@
## Build on Linux\* Systems
# Build Inference Engine
## Contents
- [Introduction](#introduction)
- [Build on Linux* Systems](#build-on-linux-systems)
- [Software Requirements](#software-requirements)
- [Build Steps](#build-steps)
- [Additional Build Options](#additional-build-options)
- [Build for Raspbian* Stretch OS](#build-for-raspbian-stretch-os)
- [Hardware Requirements](#hardware-requirements)
- [Native Compilation](#native-compilation)
- [Cross Compilation Using Docker*](#cross-compilation-using-docker)
- [Additional Build Options](#additional-build-options-1)
- [Build on Windows* Systems](#build-on-windows-systems)
- [Software Requirements](#software-requirements-1)
- [Build Steps](#build-steps-1)
- [Additional Build Options](#additional-build-options-2)
- [Building Inference Engine with Ninja* Build System](#building-inference-engine-with-ninja-build-system)
- [Build on macOS* Systems](#build-on-macos-systems)
- [Software Requirements](#software-requirements-2)
- [Build Steps](#build-steps-2)
- [Additional Build Options](#additional-build-options-3)
- [Use Custom OpenCV Builds for Inference Engine](#use-custom-opencv-builds-for-inference-engine)
- [(Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2](#optional-additional-installation-steps-for-the-intel-movidius-neural-compute-stick-and-neural-compute-stick-2)
- [For Linux, Raspbian Stretch* OS](#for-linux-raspbian-stretch-os)
- [For Windows](#for-windows-1)
- [Next Steps](#next-steps)
- [Additional Resources](#additional-resources)
## Introduction
The Inference Engine can infer models in different formats with various input and output configurations.
The open source version of Inference Engine includes the following plugins:
| PLUGIN | DEVICE TYPES |
| ---------------------| -------------|
| CPU plugin | Intel® Xeon® with Intel® AVX2 and AVX512, Intel® Core™ Processors with Intel® AVX2, Intel® Atom® Processors with Intel® SSE |
| GPU plugin | Intel® Processor Graphics, including Intel® HD Graphics and Intel® Iris® Graphics |
| GNA plugin | Intel® Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel® Pentium® Silver processor J5005, Intel® Celeron® processor J4005, Intel® Core™ i3-8121U processor |
| MYRIAD plugin | Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2, Intel® Neural Compute Stick 2 powered by the Intel® Movidius™ Myriad™ X |
| Heterogeneous plugin | Heterogeneous plugin enables computing for inference on one network on several Intel® devices. |
Inference Engine plugin for Intel® FPGA is distributed only in a binary form as a part of [Intel® Distribution of OpenVINO™](https://software.intel.com/en-us/openvino-toolkit).
## Build on Linux* Systems
The software was validated on:
- Ubuntu\* 16.04 with default GCC\* 5.4.0
- CentOS\* 7.4 with default GCC\* 4.8.5
- [Intel® Graphics Compute Runtime for OpenCL™ Driver package 18.28.11080](https://github.com/intel/compute-runtime/releases/tag/18.28.11080).
- Ubuntu\* 16.04 (64-bit) with default GCC\* 5.4.0
- CentOS\* 7.4 (64-bit) with default GCC\* 4.8.5
### Software Requirements
- [CMake\*](https://cmake.org/download/) 3.9 or higher
- [CMake\*](https://cmake.org/download/) 3.5 or higher
- GCC\* 4.8 or higher to build the Inference Engine
- Python 2.7 or higher for Inference Engine Python API wrapper
- (Optional) [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.04.12237](https://github.com/intel/compute-runtime/releases/tag/19.04.12237).
### Build Steps
1. Clone submodules:
```sh
cd dldt/inference-engine
git submodule init
git submodule update --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder.
3. Create a build folder:
3. By default, the build enables the Inference Engine GPU plugin to infer models on your Intel® Processor Graphics. This requires you to [Install Intel® Graphics Compute Runtime for OpenCL™ Driver package 19.04.12237](https://github.com/intel/compute-runtime/releases/tag/19.04.12237) before running the build. If you don't want to use the GPU plugin, use the `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the Intel® Graphics Compute Runtime for OpenCL™ Driver.
4. Create a build folder:
```sh
mkdir build
mkdir build && cd build
```
4. Inference Engine uses a CMake-based build system. In the created `build` directory, run `cmake` to fetch project dependencies and create Unix makefiles, then run `make` to build the project:
5. Inference Engine uses a CMake-based build system. In the created `build` directory, run `cmake` to fetch project dependencies and create Unix makefiles, then run `make` to build the project:
```sh
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j16
make --jobs=$(nproc --all)
```
### Additional Build Options
You can use the following additional build options:
- Internal JIT GEMM implementation is used by default.
- To switch to OpenBLAS\* implementation, use `GEMM=OPENBLAS` option and `BLAS_INCLUDE_DIRS` and `BLAS_LIBRARIES` cmake options to specify path to OpenBLAS headers and library, for example use the following options on CentOS\*: `-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`
- To switch to optimized MKL-ML\* GEMM implementation, use `GEMM=MKL` and `MKLROOT` cmake options to specify path to unpacked MKL-ML with `include` and `lib` folders, for example use the following options: `-DGEMM=MKL -DMKLROOT=<path_to_MKL>`. MKL-ML\* package can be downloaded [here](https://github.com/intel/mkl-dnn/releases/download/v0.17/mklml_lnx_2019.0.1.20180928.tgz)
- OpenMP threading is used by default. To build Inference Engine with TBB threading, set `-DTHREADING=TBB` option.
- To switch to OpenBLAS\* implementation, use the `GEMM=OPENBLAS` option and `BLAS_INCLUDE_DIRS` and `BLAS_LIBRARIES` CMake options to specify path to the OpenBLAS headers and library. For example use the following options on CentOS\*: `-DGEMM=OPENBLAS -DBLAS_INCLUDE_DIRS=/usr/include/openblas -DBLAS_LIBRARIES=/usr/lib64/libopenblas.so.0`.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options: `` -DPYTHON_EXECUTABLE=`which python3.6` -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so -DPYTHON_INCLUDE_DIR=/usr/include/python3.6 ``
- To switch to the optimized MKL-ML\* GEMM implementation, use `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked MKL-ML with the `include` and `lib` folders. MKL-ML\* package can be downloaded from the [MKL-DNN repository](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_lnx_2019.0.5.20190502.tgz).
- To switch on/off the CPU and GPU plugins, use `cmake` options `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF`.
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
## Build on Windows\* Systems:
- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but you already have installed TBB or OpenCV packages configured in your environment, you may need to clean the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command, otherwise they won't be downloaded and the build may fail if incompatible versions were installed.
- If the CMake-based build script can not find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
```sh
-DPYTHON_EXECUTABLE=`which python3.7` \
-DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.7m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.7
```
- To switch off/on the CPU and GPU plugins, use the `cmake` options `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
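Several of these options are typically combined in a single `cmake` invocation; a sketch that assembles and prints the command line for review before running it in your build directory (the option values are examples only):
```sh
# Sketch: compose a cmake command line from the options discussed above.
opts="-DCMAKE_BUILD_TYPE=Release -DTHREADING=OMP -DENABLE_CLDNN=OFF -DENABLE_PYTHON=ON"
cmd="cmake $opts .."
echo "$cmd"
```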
5. Adding to your project
For CMake projects, set an environment variable `InferenceEngine_DIR`:
```sh
export InferenceEngine_DIR=/path/to/dldt/inference-engine/build/
```
Then you can locate the Inference Engine with `find_package`:
```cmake
find_package(InferenceEngine)
include_directories(${InferenceEngine_INCLUDE_DIRS})
target_link_libraries(${PROJECT_NAME} ${InferenceEngine_LIBRARIES} dl)
```
## Build for Raspbian Stretch* OS
> **NOTE**: Only the MYRIAD plugin is supported.
### Hardware Requirements
* Raspberry Pi\* 2 or 3 with Raspbian\* Stretch OS (32-bit). Check that its CPU supports the ARMv7 instruction set (the `uname -m` command returns `armv7l`).
> **NOTE**: Although the Raspberry Pi\* CPU is ARMv8, the 32-bit OS reports the ARMv7 CPU instruction set. The default `gcc` compiler applies the ARMv6 architecture flag for compatibility with older boards. For more information, run the `gcc -Q --help=target` command and refer to the description of the `-march=` option.
You can compile the Inference Engine for Raspberry Pi\* in one of the two ways:
* [Native Compilation](#native-compilation), which is the simplest way, but time-consuming
* [Cross Compilation Using Docker*](#cross-compilation-using-docker), which is the recommended way
### Native Compilation
Native compilation of the Inference Engine is the most straightforward solution. However, it might take at least one hour to complete on Raspberry Pi\* 3.
1. Install dependencies:
```bash
sudo apt-get update
sudo apt-get install -y git cmake libusb-1.0-0-dev
```
2. Go to the `inference-engine` directory of the cloned `dldt` repository:
```bash
cd dldt/inference-engine
```
3. Initialize submodules:
```bash
git submodule init
git submodule update --recursive
```
4. Create a build folder:
```bash
mkdir build && cd build
```
5. Build the Inference Engine:
```bash
cmake -DCMAKE_BUILD_TYPE=Release \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_GNA=OFF .. && make
```
### Cross Compilation Using Docker*
This compilation was tested on the following configuration:
* Host: Ubuntu\* 16.04 (64-bit, Intel® Core™ i7-6700K CPU @ 4.00GHz × 8)
* Target: Raspbian\* Stretch (32-bit, ARMv7, Raspberry Pi\* 3)
1. Install Docker\*:
```bash
sudo apt-get install -y docker.io
```
2. Add the current user to the `docker` group:
```bash
sudo usermod -a -G docker $USER
```
Log out and log in for this to take effect.
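After logging back in, you can verify that the group membership took effect; a minimal check:
```sh
# Sketch: report whether the current user's groups now include "docker".
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group membership is active"
else
  echo "not in the docker group yet; log out and back in"
fi
```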
3. Create a directory named `ie_cross_armhf` and add a text file named `Dockerfile`
with the following content:
```docker
FROM debian:stretch
USER root
RUN dpkg --add-architecture armhf && \
apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
crossbuild-essential-armhf \
git \
wget \
libusb-1.0-0-dev:armhf \
libgtk-3-dev:armhf \
libavcodec-dev:armhf \
libavformat-dev:armhf \
libswscale-dev:armhf \
libgstreamer1.0-dev:armhf \
libgstreamer-plugins-base1.0-dev:armhf \
libpython3-dev:armhf \
python3-pip
RUN wget https://www.cmake.org/files/v3.14/cmake-3.14.3.tar.gz && \
tar xf cmake-3.14.3.tar.gz && \
(cd cmake-3.14.3 && ./bootstrap --parallel=$(nproc --all) && make --jobs=$(nproc --all) && make install) && \
rm -rf cmake-3.14.3 cmake-3.14.3.tar.gz
```
It uses the Debian\* Stretch (Debian 9) OS for compilation because it is the base of Raspbian\* Stretch.
4. Build a Docker\* image:
```bash
docker image build -t ie_cross_armhf ie_cross_armhf
```
5. Run Docker\* container with mounted source code folder from host:
```bash
docker run -it -v /absolute/path/to/dldt:/dldt ie_cross_armhf /bin/bash
```
6. While in the container:
1. Go to the `inference-engine` directory of the cloned `dldt` repository:
```bash
cd dldt/inference-engine
```
2. Create a build folder:
```bash
mkdir build && cd build
```
3. Build the Inference Engine:
```bash
cmake -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_TOOLCHAIN_FILE="../cmake/arm.toolchain.cmake" \
-DTHREADS_PTHREAD_ARG="-pthread" \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
-DENABLE_GNA=OFF .. && make --jobs=$(nproc --all)
```
7. Press "Ctrl"+"D" to exit Docker\*. You can find the resulting binaries in the `dldt/inference-engine/bin/armv7l/` directory and the OpenCV* installation in the `dldt/inference-engine/temp` directory.
>**NOTE**: Native applications that link to cross-compiled Inference Engine library require an extra compilation flag `-march=armv7-a`.
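To illustrate the note above, a native build on the board would pass the flag roughly as follows. The sketch only composes and prints the command; the compiler name, include path, and library path are assumptions, not part of the official instructions:

```bash
# Compose (but do not execute) a compile command that links against the
# cross-compiled Inference Engine with the required architecture flag.
IE_ROOT="dldt/inference-engine"             # assumed checkout location
IE_LIBS="$IE_ROOT/bin/armv7l/Release/lib"   # assumed output layout
CMD="g++ -march=armv7-a app.cpp -I $IE_ROOT/include -L $IE_LIBS -linference_engine -o app"
echo "$CMD"
```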
### Additional Build Options
You can use the following additional build options:
- Required versions of OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but already have OpenCV packages configured in your environment, clear the `OpenCV_DIR` environment variable before running the `cmake` command; otherwise the packages are not downloaded, and the build may fail if incompatible versions are installed.
- If the CMake-based build script can not find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper, install the `libpython3-dev:armhf` and `python3-pip` packages using `apt-get`, then install the `numpy` and `cython` Python modules using the `pip3` command and add the following CMake options:
```sh
-DENABLE_PYTHON=ON \
-DPYTHON_EXECUTABLE=/usr/bin/python3.5 \
-DPYTHON_LIBRARY=/usr/lib/arm-linux-gnueabihf/libpython3.5m.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.5
```
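Instead of hard-coding the 3.5 paths shown above, the values can be derived from whichever `python3` is on `PATH` (a sketch using the standard `sysconfig` module; the exact paths printed depend on your installation):

```sh
# Derive values for the Python-related CMake options from the active python3.
PY=$(command -v python3)
PY_INC=$("$PY" -c 'import sysconfig; print(sysconfig.get_paths()["include"])')
echo "-DPYTHON_EXECUTABLE=$PY"
echo "-DPYTHON_INCLUDE_DIR=$PY_INC"
```

The shared-library path for `-DPYTHON_LIBRARY` can be located in the same way from `sysconfig.get_config_var('LIBDIR')`.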
## Build on Windows* Systems
The software was validated on:
- Microsoft\* Windows\* 10 (64-bit) with Visual Studio 2017 and Intel® C++ Compiler 2018 Update 3
### Software Requirements
- [CMake\*](https://cmake.org/download/) 3.5 or higher
- [OpenBLAS\*](https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download) and [mingw64\* runtime dependencies](https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download).
- [Intel® C++ Compiler](https://software.intel.com/en-us/intel-parallel-studio-xe) 18.0 to build the Inference Engine on Windows.
- (Optional) [Intel® Graphics Driver for Windows* [25.20] driver package](https://downloadcenter.intel.com/download/28646/Intel-Graphics-Windows-10-DCH-Drivers?product=80939).
- Python 3.4 or higher for the Inference Engine Python API wrapper
### Build Steps
3. Install OpenBLAS:
1. Download [OpenBLAS\*](https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download)
2. Unzip the downloaded package to a directory on your machine. In this document, this directory is referred to as `<OPENBLAS_DIR>`.
4. By default, the build enables the Inference Engine GPU plugin to infer models on your Intel® Processor Graphics. This requires you to [download and install the Intel® Graphics Driver for Windows* [25.20] driver package](https://downloadcenter.intel.com/download/28646/Intel-Graphics-Windows-10-DCH-Drivers?product=80939) before running the build. If you don't want to use the GPU plugin, use the `-DENABLE_CLDNN=OFF` CMake build option and skip the installation of the Intel® Graphics Driver.
5. Create build directory:
```sh
mkdir build
```
6. In the `build` directory, run `cmake` to fetch project dependencies and generate a Visual Studio solution:
```sh
cd build
cmake -G "Visual Studio 15 2017 Win64" -T "Intel C++ Compiler 18.0" ^
-DICCLIB="C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\compiler\lib" ..
```
7. Build generated solution in Visual Studio 2017 or run `cmake --build . --config Release` to build from the command line.
8. Before running the samples, add paths to TBB and OpenCV binaries used for the build to the `%PATH%` environment variable. By default, TBB binaries are downloaded by the CMake-based script to the `<dldt_repo>/inference-engine/temp/tbb/lib` folder, OpenCV binaries - to the `<dldt_repo>/inference-engine/temp/opencv_4.1.0/bin` folder.
### Additional Build Options
- Internal JIT GEMM implementation is used by default.
- To switch to OpenBLAS GEMM implementation, use the `-DGEMM=OPENBLAS` CMake option and specify path to OpenBLAS using the `-DBLAS_INCLUDE_DIRS=<OPENBLAS_DIR>\include` and `-DBLAS_LIBRARIES=<OPENBLAS_DIR>\lib\libopenblas.dll.a` options. Prebuilt OpenBLAS\* package can be downloaded [here](https://sourceforge.net/projects/openblas/files/v0.2.14/OpenBLAS-v0.2.14-Win64-int64.zip/download). mingw64* runtime dependencies can be downloaded [here](https://sourceforge.net/projects/openblas/files/v0.2.14/mingw64_dll.zip/download).
- To switch to the optimized MKL-ML\* GEMM implementation, use the `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` CMake options to specify a path to unpacked MKL-ML with the `include` and `lib` folders. MKL-ML\* package can be downloaded from the [MKL-DNN repository](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_win_2019.0.5.20190502.zip).
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but already have TBB or OpenCV packages configured in your environment, clear the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise the packages are not downloaded, and the build may fail if incompatible versions are installed.
- If the CMake-based build script can not find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To switch off/on the CPU and GPU plugins, use the `cmake` options `-DENABLE_MKL_DNN=ON/OFF` and `-DENABLE_CLDNN=ON/OFF` respectively.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
```sh
-DPYTHON_EXECUTABLE="C:\Program Files\Python37\python.exe" ^
-DPYTHON_LIBRARY="C:\Program Files\Python37\libs\python37.lib" ^
-DPYTHON_INCLUDE_DIR="C:\Program Files\Python37\include"
```
### Building Inference Engine with Ninja* Build System
```sh
call "C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\bin\ipsxe-comp-vars.bat" intel64 vs2017
set CXX=icl
set CC=icl
:: clean TBBROOT value set by ipsxe-comp-vars.bat, required TBB package will be downloaded by dldt cmake script
set TBBROOT=
cmake -G Ninja -Wno-dev -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --config Release
```
## Build on macOS* Systems
> **NOTE**: The current version of the OpenVINO™ toolkit for macOS* supports inference on Intel CPUs only.
The software was validated on:
- macOS\* 10.14, 64-bit
### Software Requirements
- [CMake\*](https://cmake.org/download/) 3.5 or higher
- Clang\* compiler from Xcode\* 10.1
- Python\* 3.4 or higher for the Inference Engine Python API wrapper
### Build Steps
1. Clone submodules:
```sh
cd dldt/inference-engine
git submodule init
git submodule update --recursive
```
2. Install build dependencies using the `install_dependencies.sh` script in the project root folder.
3. Create a build folder:
```sh
mkdir build
```
4. Inference Engine uses a CMake-based build system. In the created `build` directory, run `cmake` to fetch project dependencies and create Unix makefiles, then run `make` to build the project:
```sh
cmake -DCMAKE_BUILD_TYPE=Release ..
make --jobs=$(nproc --all)
```
### Additional Build Options
You can use the following additional build options:
- Internal JIT GEMM implementation is used by default.
- To switch to the optimized MKL-ML\* GEMM implementation, use `-DGEMM=MKL` and `-DMKLROOT=<path_to_MKL>` cmake options to specify a path to unpacked MKL-ML with the `include` and `lib` folders. MKL-ML\* package can be downloaded [here](https://github.com/intel/mkl-dnn/releases/download/v0.19/mklml_mac_2019.0.5.20190502.tgz)
- Threading Building Blocks (TBB) is used by default. To build the Inference Engine with OpenMP* threading, set the `-DTHREADING=OMP` option.
- Required versions of TBB and OpenCV packages are downloaded automatically by the CMake-based script. If you want to use the automatically downloaded packages but already have TBB or OpenCV packages configured in your environment, clear the `TBBROOT` and `OpenCV_DIR` environment variables before running the `cmake` command; otherwise the packages are not downloaded, and the build may fail if incompatible versions are installed.
- If the CMake-based build script can not find and download the OpenCV package that is supported on your platform, or if you want to use a custom build of the OpenCV library, refer to the [Use Custom OpenCV Builds](#use-custom-opencv-builds-for-inference-engine) section for details.
- To build the Python API wrapper, use the `-DENABLE_PYTHON=ON` option. To specify an exact Python version, use the following options:
```sh
-DPYTHON_EXECUTABLE=/Library/Frameworks/Python.framework/Versions/3.7/bin/python3.7 \
-DPYTHON_LIBRARY=/Library/Frameworks/Python.framework/Versions/3.7/lib/libpython3.7m.dylib \
-DPYTHON_INCLUDE_DIR=/Library/Frameworks/Python.framework/Versions/3.7/include/python3.7m
```
## Use Custom OpenCV Builds for Inference Engine
> **NOTE**: The recommended and tested version of OpenCV is 4.1. The minimum supported version is 3.4.0.
Required versions of OpenCV packages are downloaded automatically during the Inference Engine build. If the build script cannot find and download an OpenCV package supported on your platform, you can use one of the following options:
* Download the most suitable version from the list of available pre-build packages from [https://download.01.org/opencv/2019/openvinotoolkit](https://download.01.org/opencv/2019/openvinotoolkit) from the `<release_version>/inference_engine` directory.
* Use a system-provided OpenCV package (for example, by running the `apt install libopencv-dev` command). The following modules must be enabled: `imgcodecs`, `videoio`, `highgui`.
* Get the OpenCV package using a package manager: pip, Conda, Conan, and so on. The package must include the development components (header files and CMake scripts).
* Build OpenCV from source using the [build instructions](https://docs.opencv.org/master/df/d65/tutorial_table_of_content_introduction.html) on the OpenCV site.
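When relying on a system-provided package, `pkg-config` offers a quick presence check (a sketch; the `.pc` file name differs between OpenCV 3.x and 4.x and by distribution):

```sh
# Look for OpenCV development files under either of its common pkg-config names.
if pkg-config --exists opencv4 2>/dev/null || pkg-config --exists opencv 2>/dev/null; then
  echo "OpenCV development package found"
else
  echo "no OpenCV development package detected"
fi
```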
After you get the custom OpenCV build, perform the following preparation steps before running the Inference Engine build:
1. Set the `OpenCV_DIR` environment variable to the directory where the `OpenCVConfig.cmake` file of your custom OpenCV build is located.
2. Disable automatic package downloading by passing the `-DENABLE_OPENCV=OFF` option to the CMake-based build script for the Inference Engine.
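The two preparation steps can be sketched as follows; the install prefix `~/opencv_install` is an assumption for illustration:

```sh
# 1. Point CMake at the directory containing OpenCVConfig.cmake of the custom build.
export OpenCV_DIR="$HOME/opencv_install/cmake"
# 2. Configure the Inference Engine with automatic OpenCV download disabled
#    (printed here rather than executed).
echo cmake -DENABLE_OPENCV=OFF -DCMAKE_BUILD_TYPE=Release ..
```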
## (Optional) Additional Installation Steps for the Intel® Movidius™ Neural Compute Stick and Neural Compute Stick 2
> **NOTE**: These steps are only required if you want to perform inference on the Intel® Movidius™ Neural Compute Stick or the Intel® Neural Compute Stick 2 using the Inference Engine MYRIAD Plugin. See also [Intel® Neural Compute Stick 2 Get Started](https://software.intel.com/en-us/neural-compute-stick/get-started).
### For Linux, Raspbian\* Stretch OS
1. Add the current Linux user to the `users` group:
```sh
sudo usermod -a -G users "$(whoami)"
```
Log out and log in for it to take effect.
2. To perform inference on Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, install the USB rules as follows:
```sh
cat <<EOF > 97-myriad-usbboot.rules
SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
EOF
```
```sh
sudo cp 97-myriad-usbboot.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger
sudo ldconfig
rm 97-myriad-usbboot.rules
```
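A quick sanity check of the rules file contents: each of the three product IDs above should be tied to the Movidius vendor ID `03e7`. The sketch below recreates the file under `/tmp` so nothing is installed:

```sh
# Recreate the udev rules in a temporary location and count the vendor-ID entries.
cat <<'EOF' > /tmp/97-myriad-usbboot.rules
SUBSYSTEM=="usb", ATTRS{idProduct}=="2150", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="2485", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
SUBSYSTEM=="usb", ATTRS{idProduct}=="f63b", ATTRS{idVendor}=="03e7", GROUP="users", MODE="0666", ENV{ID_MM_DEVICE_IGNORE}="1"
EOF
grep -c 'idVendor}=="03e7"' /tmp/97-myriad-usbboot.rules   # expected: 3
```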
### For Windows
For Intel® Movidius™ Neural Compute Stick and Intel® Neural Compute Stick 2, install the Movidius™ VSC driver:
1. Go to the `<DLDT_ROOT_DIR>/inference-engine/thirdparty/movidius/MovidiusDriver` directory, where the `DLDT_ROOT_DIR` is the directory to which the DLDT repository was cloned.
2. Right-click the `Movidius_VSC_Device.inf` file and choose **Install** from the pop-up menu.
You have installed the driver for your Intel® Movidius™ Neural Compute Stick or Intel® Neural Compute Stick 2.
## Next Steps
Congratulations, you have built the Inference Engine. To get started with the OpenVINO™ DLDT, proceed to the Get Started guides:
* [Get Started with Deep Learning Deployment Toolkit on Linux*](../get-started-linux.md)
## Additional Resources
* [OpenVINO™ Release Notes](https://software.intel.com/en-us/articles/OpenVINO-RelNotes)
* [Introduction to Intel® Deep Learning Deployment Toolkit](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Introduction.html)
* [Inference Engine Samples Overview](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Samples_Overview.html)
* [Inference Engine Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_IE_DG_Deep_Learning_Inference_Engine_DevGuide.html)
* [Model Optimizer Developer Guide](https://docs.openvinotoolkit.org/latest/_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html)
---
\* Other names and brands may be claimed as the property of others.
@@ -1,5 +1,4 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -11,13 +10,15 @@ if(NOT DEFINED INTEL_VTUNE_DIR AND DEFINED ENV{INTEL_VTUNE_DIR})
endif()
if(NOT DEFINED INTEL_VTUNE_DIR)
if(EXISTS "/opt/intel/vtune_amplifier_xe/include")
set(INTEL_VTUNE_DIR "/opt/intel/vtune_amplifier_xe")
set(INTEL_VTUNE_DIR "/opt/intel/vtune_amplifier_xe")
elseif(EXISTS "/opt/intel/vtune_amplifier/include")
set(INTEL_VTUNE_DIR "/opt/intel/vtune_amplifier")
elseif (EXISTS "C:/Program Files (x86)/IntelSWTools/VTune Amplifier XE")
set(INTEL_VTUNE_DIR "C:/Program Files (x86)/IntelSWTools/VTune Amplifier XE")
elseif (EXISTS "C:/Program Files (x86)/IntelSWTools/VTune Amplifier")
set(INTEL_VTUNE_DIR "C:/Program Files (x86)/IntelSWTools/VTune Amplifier")
elseif (EXISTS "$ENV{HOME}/intel/vtune_amplifier_2019")
set(INTEL_VTUNE_DIR "$ENV{HOME}/intel/vtune_amplifier_2019")
endif()
endif()
@@ -33,7 +34,7 @@ if(DEFINED INTEL_VTUNE_DIR)
"libittnotify${CMAKE_STATIC_LIBRARY_SUFFIX}"
PATHS ${INTEL_VTUNE_DIR}/lib64)
set(Located_ITT_LIBS ${ITT_LIB} ${CMAKE_DL_LIBS})
set(Located_ITT_LIBS ${ITT_LIB})
set(Located_ITT_INCLUDE_DIRS ${ITT_INCLUDE_DIR})
else()
message(STATUS "INTEL_VTUNE_DIR is not defined")
@@ -46,17 +47,11 @@ find_package_handle_standard_args(INTEL_ITT
Located_ITT_INCLUDE_DIRS
Located_ITT_LIBS)
if(ENABLE_PROFILING_ITT AND INTEL_ITT_FOUND)
add_definitions(-DENABLE_PROFILING_ITT=1)
if(INTEL_ITT_FOUND)
add_library(ittnotify STATIC IMPORTED GLOBAL)
set_target_properties(ittnotify PROPERTIES IMPORTED_LOCATION "${Located_ITT_LIBS}"
INTERFACE_INCLUDE_DIRECTORIES ${Located_ITT_INCLUDE_DIRS}
INTERFACE_COMPILE_DEFINITIONS ENABLE_PROFILING_ITT)
set(INTEL_ITT_LIBS ${Located_ITT_LIBS})
set(INTEL_ITT_INCLUDE_DIRS ${Located_ITT_INCLUDE_DIRS})
message(STATUS "INTEL_ITT_INCLUDE_DIRS: ${INTEL_ITT_INCLUDE_DIRS}")
include_directories(${INTEL_ITT_INCLUDE_DIRS})
message(STATUS "INTEL_ITT_LIBS: ${INTEL_ITT_LIBS}")
else()
add_definitions(-DENABLE_PROFILING_ITT=0)
message(STATUS "INTEL_ITT is disabled")
set(INTEL_ITT_LIBS ittnotify ${CMAKE_DL_LIBS})
endif()
@@ -1,39 +1,80 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
#module to locate GNA libraries
cmake_minimum_required(VERSION 2.8)
# module to locate GNA libraries
if (WIN32)
set(GNA_PLATFORM_DIR win64)
set(GNA_LIB_DIR x64)
set(GNA_LIB gna)
elseif (UNIX)
set(GNA_PLATFORM_DIR linux)
set(GNA_LIB_DIR lib)
set(GNA_LIB gna_api)
set(GNA_KERNEL_LIB gna_kernel)
else ()
message(FATAL_ERROR "GNA not supported on this platform, only linux, and windows")
endif ()
find_library(GNA_API_LIBRARY
${GNA_LIB}
set(libGNA_FOUND TRUE)
set(GNA_KERNEL_LIB_NAME gna)
set(GNA_LIBS_LIST
"libGNA::API"
"libGNA::KERNEL")
if (GNA_LIBRARY_VERSION STREQUAL "GNA1")
# use old version of GNA Library from gna_20181120
if (WIN32)
set(GNA_LIB_DIR x64)
else ()
list(APPEND GNA_LIBS_LIST
"libGNA::OLD_API_LIB")
set(GNA_LIB_DIR lib)
set(GNA_KERNEL_LIB_NAME gna_kernel)
endif()
set(libGNA_INCLUDE_DIRS "${GNA}/${GNA_PLATFORM_DIR}/include")
else()
# use current version of GNA library
set(GNA_LIB_DIR x64)
set(libGNA_INCLUDE_DIRS "${GNA}/include")
endif()
set(libGNA_LIBRARIES_BASE_PATH ${GNA}/${GNA_PLATFORM_DIR}/${GNA_LIB_DIR})
add_library(libGNA::KERNEL SHARED IMPORTED)
find_library(GNA_KERNEL_LIBRARY
${GNA_KERNEL_LIB_NAME}
HINTS
${GNA}/${GNA_PLATFORM_DIR}/${GNA_LIB_DIR})
${libGNA_LIBRARIES_BASE_PATH})
set_target_properties(libGNA::KERNEL PROPERTIES IMPORTED_LOCATION ${GNA_KERNEL_LIBRARY})
set(libGNA_INCLUDE_DIRS ${GNA}/${GNA_PLATFORM_DIR}/include)
set(libGNA_LIBRARY ${GNA_API_LIBRARY})
if (UNIX)
#message("Searching for libgna_kernel.so in: ${GNA}/${GNA_PLATFORM_DIR}/${GNA_KERNEL_LIB}")
find_library(GNA_KERNEL_LIBRARY
${GNA_KERNEL_LIB}
if ((GNA_LIBRARY_VERSION STREQUAL "GNA1") AND (NOT WIN32))
add_library(libGNA::OLD_API_LIB SHARED IMPORTED)
find_library(GNA_API_LIBRARY
gna_api
HINTS
${GNA}/${GNA_PLATFORM_DIR}/${GNA_LIB_DIR})
endif ()
${libGNA_LIBRARIES_BASE_PATH})
set_target_properties(libGNA::OLD_API_LIB PROPERTIES IMPORTED_LOCATION ${GNA_API_LIBRARY})
target_link_libraries(libGNA::OLD_API_LIB INTERFACE libGNA::KERNEL)
set_target_properties(libGNA::OLD_API_LIB PROPERTIES IMPORTED_NO_SONAME TRUE)
set_target_properties(libGNA::KERNEL PROPERTIES IMPORTED_NO_SONAME TRUE)
endif()
set(libGNA_LIBRARIES ${libGNA_LIBRARY} ${GNA_KERNEL_LIBRARY})
add_library(libGNA::API INTERFACE IMPORTED)
set_property(TARGET libGNA::API PROPERTY INTERFACE_INCLUDE_DIRECTORIES ${libGNA_INCLUDE_DIRS})
add_library(libGNA INTERFACE IMPORTED)
foreach(_lib_name ${GNA_LIBS_LIST})
set_property(TARGET libGNA APPEND PROPERTY INTERFACE_LINK_LIBRARIES ${_lib_name})
get_target_property(_target_type ${_lib_name} TYPE)
if (${_target_type} STREQUAL "INTERFACE_LIBRARY")
get_target_property(_target_location ${_lib_name} INTERFACE_INCLUDE_DIRECTORIES)
else()
get_target_property(_target_location ${_lib_name} IMPORTED_LOCATION)
endif ()
message(STATUS "${_lib_name} ${_target_type} : ${_target_location}")
endforeach(_lib_name)
if (WIN32)
set_target_properties(libGNA::KERNEL PROPERTIES
IMPORTED_IMPLIB ${GNA_KERNEL_LIBRARY})
elseif(NOT GNA_LIBRARY_VERSION STREQUAL "GNA1")
set_target_properties(libGNA PROPERTIES INTERFACE_LINK_OPTIONS "-Wl,-rpath-link,${libGNA_LIBRARIES_BASE_PATH}")
endif ()
@@ -1,3 +1,7 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR armv7l)
@@ -0,0 +1,14 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)
@@ -1,12 +1,7 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include("features")
include("mode")
include("itt")
#64 bits platform
if ("${CMAKE_SIZEOF_VOID_P}" EQUAL "8")
message(STATUS "Detected 64 bit architecture")
@@ -18,8 +13,7 @@ else()
SET(ARCH_32 ON)
endif()
if (ARCH_64)
else()
if (NOT ARCH_64)
if (UNIX OR APPLE)
SET(ENABLE_CLDNN OFF)
endif()
@@ -41,58 +35,68 @@ if (WIN32)
endif()
endif()
# Linux specific - not all OS'es are supported
if (LINUX)
include("linux_name")
get_linux_name(LINUX_OS_NAME)
if (LINUX_OS_NAME)
if (NOT(
${LINUX_OS_NAME} STREQUAL "Ubuntu 14.04" OR
${LINUX_OS_NAME} STREQUAL "Ubuntu 16.04" OR
${LINUX_OS_NAME} STREQUAL "CentOS 7"))
endif()
else ()
message(WARNING "Cannot detect Linux OS via reading /etc/*-release:\n ${release_data}")
endif ()
endif ()
if (NOT ENABLE_MKL_DNN)
set(ENABLE_MKL OFF)
endif()
if (NOT ENABLE_VPU)
set(ENABLE_MYRIAD OFF)
endif()
#next section set defines to be accesible in c++/c code for certain feature
if (ENABLE_PROFILING_RAW)
add_definitions(-DENABLE_PROFILING_RAW=1)
endif()
if (ENABLE_GTEST_PATCHES)
add_definitions(-DENABLE_GTEST_PATCHES=1)
endif()
if (ENABLE_CLDNN)
add_definitions(-DENABLE_CLDNN=1)
endif()
if (ENABLE_MYRIAD)
add_definitions(-DENABLE_MYRIAD=1)
endif()
if (ENABLE_MYRIAD_NO_BOOT AND ENABLE_MYRIAD )
add_definitions(-DENABLE_MYRIAD_NO_BOOT=1)
endif()
if (ENABLE_MKL_DNN)
add_definitions(-DENABLE_MKL_DNN=1)
endif()
if (ENABLE_STRESS_UNIT_TESTS)
add_definitions(-DENABLE_STRESS_UNIT_TESTS=1)
endif()
if (ENABLE_SEGMENTATION_TESTS)
add_definitions(-DENABLE_SEGMENTATION_TESTS=1)
endif()
if (ENABLE_OBJECT_DETECTION_TESTS)
add_definitions(-DENABLE_OBJECT_DETECTION_TESTS=1)
endif()
if (ENABLE_GNA)
add_definitions(-DENABLE_GNA)
set (DEFAULT_GNA_LIB GNA1_1401)
# "GNA library version: GNA1|GNA1_1401|GNA2" - default is 1401
if (NOT GNA_LIBRARY_VERSION STREQUAL "GNA1"
AND NOT GNA_LIBRARY_VERSION STREQUAL "GNA1_1401"
AND NOT GNA_LIBRARY_VERSION STREQUAL "GNA2")
set (GNA_LIBRARY_VERSION ${DEFAULT_GNA_LIB})
message(STATUS "GNA_LIBRARY_VERSION not set. Can be GNA1, GNA1_1401 or GNA2. Default is ${GNA_LIBRARY_VERSION}")
endif()
if (GNA_LIBRARY_VERSION STREQUAL "GNA2")
message(WARNING "GNA2 is not currently supported. Fallback to ${DEFAULT_GNA_LIB}")
set(GNA_LIBRARY_VERSION ${DEFAULT_GNA_LIB})
endif()
if (UNIX AND NOT APPLE AND CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.4)
message(WARNING "${GNA_LIBRARY_VERSION} no supported on GCC version ${CMAKE_CXX_COMPILER_VERSION}. Fallback to GNA1")
set(GNA_LIBRARY_VERSION GNA1)
endif()
set(GNA_LIBRARY_VERSION "${GNA_LIBRARY_VERSION}" CACHE STRING "GNAVersion" FORCE)
list (APPEND IE_OPTIONS GNA_LIBRARY_VERSION)
endif()
if (ENABLE_SAMPLES)
set (ENABLE_SAMPLES_CORE ON)
endif()
#models dependend tests
if (DEVELOPMENT_PLUGIN_MODE)
message (STATUS "Enabled development plugin mode")
@@ -108,9 +112,18 @@ if (DEVELOPMENT_PLUGIN_MODE)
endif()
endif()
if (NOT ENABLE_TESTS)
set(ENABLE_GNA_MODELS OFF)
endif ()
if (VERBOSE_BUILD)
set(CMAKE_VERBOSE_MAKEFILE ON)
endif()
if(ENABLE_DUMP)
add_definitions(-DDEBUG_DUMP)
endif()
print_enabled_features()
@@ -1,5 +1,4 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -8,6 +7,9 @@ if(DEFINED IE_MAIN_SOURCE_DIR AND TARGET inference_engine)
set(InferenceEngine_LIBRARIES inference_engine)
else()
include("${CMAKE_CURRENT_LIST_DIR}/targets.cmake")
if(NOT WIN32)
set_target_properties(IE::inference_engine PROPERTIES INTERFACE_COMPILE_OPTIONS "-Wno-error=deprecated-declarations")
endif()
get_target_property(InferenceEngine_INCLUDE_DIRS IE::inference_engine INTERFACE_INCLUDE_DIRECTORIES)
set(InferenceEngine_LIBRARIES IE::inference_engine)
endif()
@@ -0,0 +1,28 @@
# Copyright (C) 2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(ENABLE_CPPCHECK)
find_program(CPPCHECK_EXECUTABLE cppcheck)
if(NOT CPPCHECK_EXECUTABLE)
message(WARNING "cppcheck was not found : disable static analysis")
set(ENABLE_CPPCHECK OFF)
endif()
endif()
function(add_cppcheck)
if(NOT ENABLE_CPPCHECK)
return()
endif()
set_property(
TARGET ${ARGN}
PROPERTY CXX_CPPCHECK
${CPPCHECK_EXECUTABLE}
"--suppress=*:*/temp/*"
"--suppress=*:*/thirdparty/*"
"--error-exitcode=1"
"--template={file}:{line}: error: [cppcheck:{severity}] {message}"
"--quiet")
endfunction()
@@ -0,0 +1,161 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(ENABLE_CPPLINT)
find_package(PythonInterp 2.7 EXACT)
if(NOT PYTHONINTERP_FOUND OR NOT PYTHON_VERSION_MAJOR EQUAL 2)
message(WARNING "Python 2.7 was not found (required for cpplint check)")
set(ENABLE_CPPLINT OFF)
endif()
endif()
if(ENABLE_CPPLINT)
add_custom_target(cpplint_all ALL)
set(CPPLINT_ALL_OUTPUT_FILES "" CACHE INTERNAL "All cpplint output files")
endif()
function(add_cpplint_target TARGET_NAME)
if(NOT ENABLE_CPPLINT)
return()
endif()
set(options "")
set(oneValueArgs "")
set(multiValueArgs "FOR_TARGETS" "FOR_SOURCES" "EXCLUDE_PATTERNS")
cmake_parse_arguments(CPPLINT "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
foreach(target IN LISTS CPPLINT_FOR_TARGETS)
get_target_property(target_sources "${target}" SOURCES)
list(APPEND CPPLINT_FOR_SOURCES ${target_sources})
endforeach()
list(REMOVE_DUPLICATES CPPLINT_FOR_SOURCES)
set(all_output_files "")
foreach(source_file IN LISTS CPPLINT_FOR_SOURCES)
set(exclude FALSE)
foreach(pattern IN LISTS CPPLINT_EXCLUDE_PATTERNS)
if(source_file MATCHES "${pattern}")
set(exclude TRUE)
break()
endif()
endforeach()
if(exclude)
continue()
endif()
file(RELATIVE_PATH source_file_relative "${CMAKE_CURRENT_SOURCE_DIR}" "${source_file}")
set(output_file "${CMAKE_CURRENT_BINARY_DIR}/cpplint/${source_file_relative}.cpplint")
string(REPLACE ".." "__" output_file "${output_file}")
get_filename_component(output_dir "${output_file}" DIRECTORY)
file(MAKE_DIRECTORY "${output_dir}")
add_custom_command(
OUTPUT
"${output_file}"
COMMAND
"${CMAKE_COMMAND}"
-D "PYTHON_EXECUTABLE=${PYTHON_EXECUTABLE}"
-D "CPPLINT_SCRIPT=${IE_MAIN_SOURCE_DIR}/scripts/cpplint.py"
-D "INPUT_FILE=${source_file}"
-D "OUTPUT_FILE=${output_file}"
-D "WORKING_DIRECTORY=${CMAKE_CURRENT_SOURCE_DIR}"
-D "SKIP_RETURN_CODE=${ENABLE_CPPLINT_REPORT}"
-P "${IE_MAIN_SOURCE_DIR}/cmake/cpplint_run.cmake"
DEPENDS
"${source_file}"
"${IE_MAIN_SOURCE_DIR}/scripts/cpplint.py"
"${IE_MAIN_SOURCE_DIR}/cmake/cpplint_run.cmake"
COMMENT
"[cpplint] ${source_file}"
VERBATIM)
list(APPEND all_output_files "${output_file}")
endforeach()
set(CPPLINT_ALL_OUTPUT_FILES
${CPPLINT_ALL_OUTPUT_FILES} ${all_output_files}
CACHE INTERNAL
"All cpplint output files")
add_custom_target(${TARGET_NAME} ALL
DEPENDS ${all_output_files}
COMMENT "[cpplint] ${TARGET_NAME}")
if(CPPLINT_FOR_TARGETS)
foreach(target IN LISTS CPPLINT_FOR_TARGETS)
add_dependencies(${target} ${TARGET_NAME})
endforeach()
endif()
add_dependencies(cpplint_all ${TARGET_NAME})
endfunction()
function(add_cpplint_report_target)
if(NOT ENABLE_CPPLINT OR NOT ENABLE_CPPLINT_REPORT)
return()
endif()
set(cpplint_output_file "${CMAKE_BINARY_DIR}/cpplint/final_output.cpplint")
add_custom_command(
OUTPUT
"${cpplint_output_file}"
COMMAND
"${CMAKE_COMMAND}"
-D "FINAL_OUTPUT_FILE=${cpplint_output_file}"
-D "OUTPUT_FILES=${CPPLINT_ALL_OUTPUT_FILES}"
-P "${IE_MAIN_SOURCE_DIR}/cmake/cpplint_merge.cmake"
DEPENDS
${CPPLINT_ALL_OUTPUT_FILES}
"${IE_MAIN_SOURCE_DIR}/cmake/cpplint_merge.cmake"
COMMENT
"[cpplint] Merge all output files"
VERBATIM)
set(cppcheck_output_file "${CMAKE_BINARY_DIR}/cpplint/cpplint-cppcheck-result.xml")
add_custom_command(
OUTPUT
"${cppcheck_output_file}"
COMMAND
"${CMAKE_COMMAND}"
-D "PYTHON_EXECUTABLE=${PYTHON_EXECUTABLE}"
-D "CONVERT_SCRIPT=${IE_MAIN_SOURCE_DIR}/scripts/cpplint_to_cppcheckxml.py"
-D "INPUT_FILE=${cpplint_output_file}"
-D "OUTPUT_FILE=${cppcheck_output_file}"
-P "${IE_MAIN_SOURCE_DIR}/cmake/cpplint_to_cppcheck_xml.cmake"
DEPENDS
"${cpplint_output_file}"
"${IE_MAIN_SOURCE_DIR}/scripts/cpplint_to_cppcheckxml.py"
"${IE_MAIN_SOURCE_DIR}/cmake/cpplint_to_cppcheck_xml.cmake"
COMMENT
"[cpplint] Convert to cppcheck XML format"
VERBATIM)
set(report_dir "${IE_MAIN_SOURCE_DIR}/report/cpplint")
set(html_output_file "${report_dir}/index.html")
add_custom_command(
OUTPUT
"${html_output_file}"
COMMAND
"${CMAKE_COMMAND}"
-D "PYTHON_EXECUTABLE=${PYTHON_EXECUTABLE}"
-D "CONVERT_SCRIPT=${IE_MAIN_SOURCE_DIR}/scripts/cppcheck-htmlreport.py"
-D "INPUT_FILE=${cppcheck_output_file}"
-D "REPORT_DIR=${report_dir}"
-D "SOURCE_DIR=${IE_MAIN_SOURCE_DIR}"
-D "TITLE=${CMAKE_PROJECT_NAME}"
-P "${IE_MAIN_SOURCE_DIR}/cmake/cpplint_html.cmake"
DEPENDS
"${cppcheck_output_file}"
"${IE_MAIN_SOURCE_DIR}/scripts/cppcheck-htmlreport.py"
"${IE_MAIN_SOURCE_DIR}/cmake/cpplint_html.cmake"
COMMENT
"[cpplint] Generate HTML report"
VERBATIM)
add_custom_target(cpplint_report
DEPENDS "${html_output_file}"
COMMENT "[cpplint] Generate report")
endfunction()
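For orientation, a consumer `CMakeLists.txt` would wire these helpers up roughly as below. This is a sketch only: the `add_cpplint_target` name and its `FOR_TARGETS`/`FOR_SOURCES` keywords are assumed from the `CPPLINT_FOR_TARGETS` handling above, not taken from the real call sites.

```cmake
# Hypothetical call site: register a style-check target for my_plugin and
# hook up the aggregate HTML report target (a no-op unless both
# ENABLE_CPPLINT and ENABLE_CPPLINT_REPORT are enabled).
add_cpplint_target(my_plugin_cpplint
    FOR_TARGETS my_plugin
    FOR_SOURCES ${MY_PLUGIN_SOURCES})
add_cpplint_report_target()
```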

View File

@@ -0,0 +1,29 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(EXISTS "${REPORT_DIR}")
file(REMOVE_RECURSE "${REPORT_DIR}")
endif()
file(MAKE_DIRECTORY "${REPORT_DIR}")
execute_process(
COMMAND
"${PYTHON_EXECUTABLE}"
"${CONVERT_SCRIPT}"
"--file=${INPUT_FILE}"
"--report-dir=${REPORT_DIR}"
"--source-dir=${SOURCE_DIR}"
"--title=${TITLE}")
# Rebrand the generated cppcheck report as a cpplint report
file(READ "${REPORT_DIR}/index.html" cur_file_content)
string(REPLACE "Cppcheck" "cpplint" cur_file_content "${cur_file_content}")
string(REPLACE "a tool for static C/C++ code analysis" "an open source lint-like tool from Google" cur_file_content "${cur_file_content}")
string(REPLACE "http://cppcheck.sourceforge.net" "http://google-styleguide.googlecode.com/svn/trunk/cpplint/cpplint.py" cur_file_content "${cur_file_content}")
string(REPLACE "IRC: <a href=\"irc://irc.freenode.net/cppcheck\">irc://irc.freenode.net/cppcheck</a>" " " cur_file_content "${cur_file_content}")
file(WRITE "${REPORT_DIR}/index.html" "${cur_file_content}")

View File

@@ -0,0 +1,10 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
file(WRITE "${FINAL_OUTPUT_FILE}" "")
foreach(output_file IN LISTS OUTPUT_FILES)
file(READ "${output_file}" cur_file_content)
file(APPEND "${FINAL_OUTPUT_FILE}" "${cur_file_content}\n")
endforeach()

View File

@@ -0,0 +1,36 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
file(REMOVE "${OUTPUT_FILE}")
execute_process(
COMMAND
"${PYTHON_EXECUTABLE}"
"${CPPLINT_SCRIPT}"
"--linelength=160"
"--counting=detailed"
"--filter=-readability/fn_size"
"${INPUT_FILE}"
WORKING_DIRECTORY "${WORKING_DIRECTORY}"
RESULT_VARIABLE result
OUTPUT_VARIABLE output
ERROR_VARIABLE output)
# Display the cpplint output in the console (so it can be parsed from an IDE)
message("${output}")
# Store the cpplint output to a file; '&' is replaced first so that the
# entities introduced by the later replacements are not double-escaped
string(REPLACE "&" "&amp\;" output "${output}")
string(REPLACE "\"" "&quot\;" output "${output}")
string(REPLACE "<" "&lt\;" output "${output}")
string(REPLACE ">" "&gt\;" output "${output}")
string(REPLACE "'" "&apos\;" output "${output}")
file(WRITE "${OUTPUT_FILE}" "${output}")
if(NOT SKIP_RETURN_CODE)
# Pass through the cpplint return code
if(NOT result EQUAL 0)
message(FATAL_ERROR "[cpplint] Code style check failed for: ${INPUT_FILE}")
endif()
endif()
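One detail worth calling out in the replacement sequence above: the `&` substitution has to run before the others, otherwise the ampersands introduced by `&quot;`, `&lt;`, etc. would be escaped a second time. A standalone Python illustration of the two orderings (not part of the build):

```python
def escape(text: str) -> str:
    # '&' is handled first, so entities added by later replacements survive.
    for src, dst in [("&", "&amp;"), ('"', "&quot;"), ("<", "&lt;"),
                     (">", "&gt;"), ("'", "&apos;")]:
        text = text.replace(src, dst)
    return text

def escape_wrong(text: str) -> str:
    # Same replacements with '&' last: earlier entities get double-escaped.
    for src, dst in [('"', "&quot;"), ("<", "&lt;"), (">", "&gt;"),
                     ("'", "&apos;"), ("&", "&amp;")]:
        text = text.replace(src, dst)
    return text

print(escape('a < "b"'))        # a &lt; &quot;b&quot;
print(escape_wrong('a < "b"'))  # a &amp;lt; &amp;quot;b&amp;quot;
```

The same reasoning applies to any sequential entity-escaping pass, regardless of language.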

View File

@@ -0,0 +1,11 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
execute_process(
COMMAND
"${PYTHON_EXECUTABLE}"
"${CONVERT_SCRIPT}"
INPUT_FILE "${INPUT_FILE}"
OUTPUT_FILE "${OUTPUT_FILE}"
ERROR_FILE "${OUTPUT_FILE}")

View File

@@ -1,17 +1,13 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 2.8)
function (debug_message)
if (VERBOSE_BUILD)
message(${ARGV})
endif()
endfunction()
function(clean_message type)
string (REPLACE ";" "" output_string "${ARGN}")
execute_process(COMMAND ${CMAKE_COMMAND} -E echo "${output_string}")
@@ -49,7 +45,12 @@ function (log_rpath_remove_top component component_remove_top lib lib_remove_top
# debug_message(STATUS "LIB-OUT=${lib_dir}")
# debug_message(STATUS "TOPLIB-OUT=${top_lib_dir}")
if (WIN32)
string (TOLOWER "${top_lib_dir}" top_lib_dir)
string (TOLOWER "${lib_dir}" lib_dir)
endif()
string (REPLACE "${top_lib_dir}" "" component_dir "${lib_dir}")
set(RPATH_INFO "${component}=${component_dir}")
@@ -58,9 +59,7 @@ function (log_rpath_remove_top component component_remove_top lib lib_remove_top
endfunction()
function (log_rpath_from_dir component lib_dir)
if(NOT APPLE)
log_rpath_remove_top("${component}" TRUE "${lib_dir}" FALSE)
endif()
log_rpath_remove_top("${component}" TRUE "${lib_dir}" FALSE)
endfunction()
function (log_rpath component lib_path)

View File

@@ -1,57 +1,36 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required(VERSION 2.8)
cmake_policy(SET CMP0054 NEW)
#features trigger supported by build system
include(check_features)
include(debug)
#we have number of dependencies stored on ftp
include(dependency_solver)
#prepare temporary folder
if (DEFINED ENV{${DL_SDK_TEMP}})
if (WIN32)
string(REPLACE "\\" "\\\\" TEMP $ENV{${DL_SDK_TEMP}})
else(WIN32)
set(TEMP $ENV{${DL_SDK_TEMP}})
endif(WIN32)
if (ENABLE_ALTERNATIVE_TEMP)
set(ALTERNATIVE_PATH ${IE_MAIN_SOURCE_DIR}/temp)
endif()
else ()
message(STATUS "DL_SDK_TEMP environment variable not set")
set(TEMP ${IE_MAIN_SOURCE_DIR}/temp)
endif ()
set_temp_directory(TEMP "${IE_MAIN_SOURCE_DIR}")
include(ExternalProject)
if (ENABLE_SAME_BRANCH_FOR_MODELS)
branchName(MODELS_BRANCH)
else()
set(MODELS_BRANCH "master")
include(linux_name)
if(COMMAND get_linux_name)
get_linux_name(LINUX_OS_NAME)
endif()
set(MODELS_PATH "${TEMP}/models")
debug_message(STATUS "MODELS_PATH=" ${MODELS_PATH})
if (ENABLE_MYRIAD)
include(vpu_dependencies)
endif()
## enable cblas_gemm from OpenBLAS package
if (GEMM STREQUAL "OPENBLAS")
if(NOT BLAS_LIBRARIES OR NOT BLAS_INCLUDE_DIRS)
find_package(BLAS REQUIRED)
if(BLAS_FOUND)
find_path(BLAS_INCLUDE_DIRS cblas.h)
else()
message(ERROR "OpenBLAS not found: install OpenBLAS or set -DBLAS_INCLUDE_DIRS=<path to dir with cblas.h> and -DBLAS_LIBRARIES=<path to libopenblas.so or openblas.lib>")
if(NOT BLAS_LIBRARIES OR NOT BLAS_INCLUDE_DIRS)
find_package(BLAS REQUIRED)
if(BLAS_FOUND)
find_path(BLAS_INCLUDE_DIRS cblas.h)
else()
message(ERROR "OpenBLAS not found: install OpenBLAS or set -DBLAS_INCLUDE_DIRS=<path to dir with cblas.h> and -DBLAS_LIBRARIES=<path to libopenblas.so or openblas.lib>")
endif()
endif()
endif()
debug_message(STATUS "openblas=" ${BLAS_LIBRARIES})
debug_message(STATUS "openblas=" ${BLAS_LIBRARIES})
endif ()
#MKL-ml package
@@ -65,99 +44,129 @@ endif ()
## Intel OMP package
if (THREADING STREQUAL "OMP")
if (WIN32)
RESOLVE_DEPENDENCY(OMP
ARCHIVE_WIN "iomp.zip"
TARGET_PATH "${TEMP}/omp"
ENVIRONMENT "OMP"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*")
elseif(LINUX)
RESOLVE_DEPENDENCY(OMP
ARCHIVE_LIN "iomp.tgz"
TARGET_PATH "${TEMP}/omp"
ENVIRONMENT "OMP"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*")
endif()
log_rpath_from_dir(OMP "${OMP}/lib")
debug_message(STATUS "intel_omp=" ${OMP})
if (WIN32)
RESOLVE_DEPENDENCY(OMP
ARCHIVE_WIN "iomp.zip"
TARGET_PATH "${TEMP}/omp"
ENVIRONMENT "OMP"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*")
elseif(LINUX)
RESOLVE_DEPENDENCY(OMP
ARCHIVE_LIN "iomp.tgz"
TARGET_PATH "${TEMP}/omp"
ENVIRONMENT "OMP"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*")
else(APPLE)
RESOLVE_DEPENDENCY(OMP
ARCHIVE_MAC "iomp_20190130_mac.tgz"
TARGET_PATH "${TEMP}/omp"
ENVIRONMENT "OMP"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*")
endif()
log_rpath_from_dir(OMP "${OMP}/lib")
debug_message(STATUS "intel_omp=" ${OMP})
endif ()
## TBB package
if (THREADING STREQUAL "TBB")
if (WIN32)
#TODO: add target_path to be platform specific as well, to avoid following if
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "tbb2019_20181010_win.zip" #TODO: windows zip archive created incorrectly using old name for folder
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*")
elseif(LINUX)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "tbb2019_20181010_lin.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT")
endif()
log_rpath_from_dir(TBB "${TBB}/lib")
debug_message(STATUS "tbb=" ${TBB})
if (THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
if (WIN32)
#TODO: add target_path to be platform specific as well, to avoid following if
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "tbb2019_20181010_win.zip" #TODO: windows zip archive created incorrectly using old name for folder
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*")
elseif(LINUX)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "tbb2019_20181010_lin.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT")
else(APPLE)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_MAC "tbb2019_20190414_v1_mac.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
VERSION_REGEX ".*_([a-z]*_([a-z0-9]+\\.)*[0-9]+).*")
endif()
log_rpath_from_dir(TBB "${TBB}/lib")
debug_message(STATUS "tbb=" ${TBB})
endif ()
if (ENABLE_OPENCV)
if (WIN32)
RESOLVE_DEPENDENCY(OPENCV
ARCHIVE_WIN "opencv_4.0.1-0353.zip"
TARGET_PATH "${TEMP}/opencv_4.0.0"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*")
log_rpath_from_dir(OPENCV "\\opencv_4.0.0\\bin")
set( ENV{OpenCV_DIR} ${OPENCV}/cmake )
elseif(LINUX)
if (${LINUX_OS_NAME} STREQUAL "Ubuntu 16.04")
RESOLVE_DEPENDENCY(OPENCV
ARCHIVE_LIN "opencv_4.0.0-0305_ubuntu16.tgz"
TARGET_PATH "${TEMP}/opencv_4.0.0_ubuntu"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*")
log_rpath_from_dir(OPENCV "opencv_4.0.0_ubuntu/lib")
elseif (${LINUX_OS_NAME} STREQUAL "Ubuntu 18.04")
RESOLVE_DEPENDENCY(OPENCV
ARCHIVE_LIN "opencv_4.0.0-0305_ubuntu18.tgz"
TARGET_PATH "${TEMP}/opencv_4.0.0_ubuntu18"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*")
log_rpath_from_dir(OPENCV "opencv_4.0.0_ubuntu/lib")
elseif (${LINUX_OS_NAME} STREQUAL "CentOS 7")
RESOLVE_DEPENDENCY(OPENCV
ARCHIVE_LIN "opencv_4.0.0-0305_centos.tgz"
TARGET_PATH "${TEMP}/opencv_4.0.0_centos"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*")
log_rpath_from_dir(OPENCV "opencv_4.0.0_centos/lib")
endif()
set( ENV{OpenCV_DIR} ${OPENCV}/cmake )
endif()
debug_message(STATUS "opencv=" ${OPENCV})
endif()
set(OPENCV_VERSION "4.1.2")
set(OPENCV_BUILD "624")
set(OPENCV_SUFFIX "")
if (WIN32)
RESOLVE_DEPENDENCY(OPENCV
ARCHIVE_WIN "opencv_${OPENCV_VERSION}-${OPENCV_BUILD}.zip"
TARGET_PATH "${TEMP}/opencv_${OPENCV_VERSION}"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*")
log_rpath_from_dir(OPENCV "\\opencv_${OPENCV_VERSION}\\bin")
elseif(APPLE)
RESOLVE_DEPENDENCY(OPENCV
ARCHIVE_MAC "opencv_${OPENCV_VERSION}-${OPENCV_BUILD}_osx.tar.xz"
TARGET_PATH "${TEMP}/opencv_${OPENCV_VERSION}_osx"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*")
log_rpath_from_dir(OPENCV "opencv_${OPENCV_VERSION}_osx/lib")
elseif(LINUX)
if (${CMAKE_SYSTEM_PROCESSOR} STREQUAL "armv7l")
set(OPENCV_SUFFIX "debian9arm")
elseif (${LINUX_OS_NAME} STREQUAL "Ubuntu 16.04")
set(OPENCV_SUFFIX "ubuntu16")
elseif (${LINUX_OS_NAME} STREQUAL "Ubuntu 18.04")
set(OPENCV_SUFFIX "ubuntu18")
elseif (${LINUX_OS_NAME} STREQUAL "CentOS 7")
set(OPENCV_SUFFIX "centos7")
endif()
endif()
if (OPENCV_SUFFIX)
RESOLVE_DEPENDENCY(OPENCV
ARCHIVE_LIN "opencv_${OPENCV_VERSION}-${OPENCV_BUILD}_${OPENCV_SUFFIX}.tar.xz"
TARGET_PATH "${TEMP}/opencv_${OPENCV_VERSION}_${OPENCV_SUFFIX}"
ENVIRONMENT "OpenCV_DIR"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+).*")
log_rpath_from_dir(OPENCV "opencv_${OPENCV_VERSION}_${OPENCV_SUFFIX}/lib")
endif()
debug_message(STATUS "opencv=" ${OPENCV})
# OpenCV_DIR should point to the cmake folder within the specified OpenCV binary package.
# It's required to successfully find OpenCV libs using the find_package(OpenCV ...) command.
# So, the cached OpenCV_DIR variable should be updated if a custom value wasn't previously set here.
if (NOT DEFINED ENV{OpenCV_DIR})
set(OpenCV_DIR "${OPENCV}/cmake" CACHE PATH "Path to OpenCV in temp directory")
endif()
endif()
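As the comment above says, `OpenCV_DIR` must point at the `cmake/` folder inside the binary package for `find_package` to succeed. A downstream project would then pick OpenCV up along these lines (a sketch; the target name `my_app` is hypothetical):

```cmake
# OpenCV_DIR comes either from the environment or from the cache entry
# populated by the dependency resolution above.
find_package(OpenCV REQUIRED COMPONENTS core imgcodecs)
target_include_directories(my_app PRIVATE ${OpenCV_INCLUDE_DIRS})
target_link_libraries(my_app PRIVATE ${OpenCV_LIBS})
```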
include(ie_parallel)
if (ENABLE_GNA)
RESOLVE_DEPENDENCY(GNA
ARCHIVE_UNIFIED "gna_20181120.zip"
TARGET_PATH "${TEMP}/gna")
if (GNA_LIBRARY_VERSION STREQUAL "GNA1")
RESOLVE_DEPENDENCY(GNA
ARCHIVE_UNIFIED "gna_20181120.zip"
TARGET_PATH "${TEMP}/gna")
elseif(GNA_LIBRARY_VERSION STREQUAL "GNA1_1401")
set(GNA_VERSION "01.00.00.1401")
RESOLVE_DEPENDENCY(GNA
ARCHIVE_UNIFIED "GNA_${GNA_VERSION}.zip"
TARGET_PATH "${TEMP}/gna_${GNA_VERSION}"
VERSION_REGEX ".*_([0-9]+.[0-9]+.[0-9]+.[0-9]+).*")
endif()
debug_message(STATUS "gna=" ${GNA})
endif()
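The `VERSION_REGEX` strings above are plain greedy regular expressions matched against the archive name; the first capture group becomes the component version. The GNA pattern can be sanity-checked outside CMake, for example in Python (note the unescaped dots match any character, which happens to be harmless here):

```python
import re

# Same pattern as the GNA VERSION_REGEX above.
pattern = r".*_([0-9]+.[0-9]+.[0-9]+.[0-9]+).*"
m = re.match(pattern, "GNA_01.00.00.1401.zip")
print(m.group(1))  # 01.00.00.1401
```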
configure_file(
"${CMAKE_SOURCE_DIR}/cmake/share/InferenceEngineConfig.cmake.in"
"${PROJECT_SOURCE_DIR}/cmake/share/InferenceEngineConfig.cmake.in"
"${CMAKE_BINARY_DIR}/share/InferenceEngineConfig.cmake"
@ONLY)
configure_file(
"${CMAKE_SOURCE_DIR}/cmake/share/InferenceEngineConfig-version.cmake.in"
"${PROJECT_SOURCE_DIR}/cmake/share/InferenceEngineConfig-version.cmake.in"
"${CMAKE_BINARY_DIR}/share/InferenceEngineConfig-version.cmake"
COPYONLY)
configure_file(
"${CMAKE_SOURCE_DIR}/cmake/ie_parallel.cmake"
"${PROJECT_SOURCE_DIR}/cmake/ie_parallel.cmake"
"${CMAKE_BINARY_DIR}/share/ie_parallel.cmake"
COPYONLY)

View File

@@ -1,10 +1,7 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 2.8)
include ("download")
function (resolve_archive_dependency VAR COMPONENT ARCHIVE ARCHIVE_UNIFIED ARCHIVE_WIN ARCHIVE_LIN ARCHIVE_MAC TARGET_PATH FOLDER ENVIRONMENT)
@@ -15,7 +12,7 @@ function (resolve_archive_dependency VAR COMPONENT ARCHIVE ARCHIVE_UNIFIED ARCHI
if (NOT DEFINED HAS_ENV)
if (ARCHIVE)
#TODO: check wether this is platform specific binary with same name per or it is in common folder
#TODO: check whether this is platform specific binary with same name per or it is in common folder
DownloadAndExtract(${COMPONENT} ${ARCHIVE} ${TARGET_PATH} result_path ${FOLDER})
else()
DownloadAndExtractPlatformSpecific(${COMPONENT} ${ARCHIVE_UNIFIED} ${ARCHIVE_WIN} ${ARCHIVE_LIN} ${ARCHIVE_MAC} ${TARGET_PATH} result_path ${FOLDER})
@@ -108,6 +105,8 @@ function (RESOLVE_DEPENDENCY NAME_OF_CMAKE_VAR)
set (FOLDER FALSE)
endif()
#for each dependency type have to do separate things
if (ARCHIVE_WIN OR ARCHIVE_LIN OR ARCHIVE_MAC OR ARCHIVE OR ARCHIVE_UNIFIED)
if (NOT DEFINED TARGET_PATH)
@@ -130,11 +129,3 @@ function (RESOLVE_DEPENDENCY NAME_OF_CMAKE_VAR)
endif()
endfunction(RESOLVE_DEPENDENCY)
function (resolve_model_dependency network archive network_model_path)
RESOLVE_DEPENDENCY(${network_model_path}
ARCHIVE "models_archives/${archive}"
TARGET_PATH "${MODELS_PATH}/${network}")
string (REPLACE ${MODELS_PATH} "" relative_path ${${network_model_path}})
set(${network_model_path} ".${relative_path}" PARENT_SCOPE)
endfunction()

View File

@@ -0,0 +1,159 @@
# Copyright (C) 2018 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# printing debug messages
include(debug)
if (UNIX AND NOT APPLE)
set(LINUX ON)
endif()
string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} ARCH_FOLDER)
if(ARCH_FOLDER STREQUAL "x86_64" OR ARCH_FOLDER STREQUAL "amd64") # Windows detects Intel's 64-bit CPU as AMD64
set(ARCH_FOLDER intel64)
elseif(ARCH_FOLDER STREQUAL "i386")
set(ARCH_FOLDER ia32)
endif()
if(OS_FOLDER)
message ("**** OS FOLDER IS: [${OS_FOLDER}]")
if("${OS_FOLDER}" STREQUAL "ON")
message ("**** USING OS FOLDER: [${CMAKE_SYSTEM_NAME}]")
set(BIN_FOLDER "bin/${CMAKE_SYSTEM_NAME}/${ARCH_FOLDER}")
else()
set(BIN_FOLDER "bin/${OS_FOLDER}/${ARCH_FOLDER}")
endif()
else()
set(BIN_FOLDER "bin/${ARCH_FOLDER}")
endif()
if("${CMAKE_BUILD_TYPE}" STREQUAL "")
debug_message(STATUS "CMAKE_BUILD_TYPE not defined, 'Release' will be used")
set(CMAKE_BUILD_TYPE "Release")
endif()
if(COVERAGE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fprofile-arcs -ftest-coverage -O0")
endif()
if(UNIX)
SET(LIB_DL ${CMAKE_DL_LIBS})
endif()
set(OUTPUT_ROOT ${IE_MAIN_SOURCE_DIR})
# Enable postfixes for Debug/Release builds
set(IE_DEBUG_POSTFIX_WIN "d")
set(IE_RELEASE_POSTFIX_WIN "")
set(IE_DEBUG_POSTFIX_LIN "")
set(IE_RELEASE_POSTFIX_LIN "")
set(IE_DEBUG_POSTFIX_MAC "d")
set(IE_RELEASE_POSTFIX_MAC "")
if(WIN32)
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_WIN})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_WIN})
elseif(APPLE)
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_MAC})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_MAC})
else()
set(IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_LIN})
set(IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_LIN})
endif()
set(CMAKE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX})
set(CMAKE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX})
if (WIN32)
# Support CMake multiconfiguration for Visual Studio build
set(IE_BUILD_POSTFIX $<$<CONFIG:Debug>:${IE_DEBUG_POSTFIX}>$<$<CONFIG:Release>:${IE_RELEASE_POSTFIX}>)
else ()
if (${CMAKE_BUILD_TYPE} STREQUAL "Debug" )
set(IE_BUILD_POSTFIX ${IE_DEBUG_POSTFIX})
else()
set(IE_BUILD_POSTFIX ${IE_RELEASE_POSTFIX})
endif()
endif()
message(STATUS "CMAKE_BUILD_TYPE: ${CMAKE_BUILD_TYPE}")
add_definitions(-DIE_BUILD_POSTFIX=\"${IE_BUILD_POSTFIX}\")
if(NOT UNIX)
if (WIN32)
# set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /MT")
# set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /MTd")
endif()
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_LIBRARY_PATH ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER})
set(LIBRARY_OUTPUT_PATH ${LIBRARY_OUTPUT_DIRECTORY}) # compatibility issue: linux uses LIBRARY_OUTPUT_PATH, windows uses LIBRARY_OUTPUT_DIRECTORY
else()
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE}/lib)
set(CMAKE_COMPILE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
set(CMAKE_PDB_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE})
set(LIBRARY_OUTPUT_DIRECTORY ${OUTPUT_ROOT}/${BIN_FOLDER}/${CMAKE_BUILD_TYPE}/lib)
set(LIBRARY_OUTPUT_PATH ${LIBRARY_OUTPUT_DIRECTORY}/lib)
endif()
if(APPLE)
set(CMAKE_MACOSX_RPATH 1)
endif(APPLE)
# rpath fully disabled
if (NOT ENABLE_PLUGIN_RPATH)
set(CMAKE_SKIP_RPATH TRUE)
endif()
# prepare temporary folder
function(set_temp_directory temp_variable source_tree_dir)
if (DEFINED ENV{${DL_SDK_TEMP}} AND NOT $ENV{${DL_SDK_TEMP}} STREQUAL "")
if (WIN32)
string(REPLACE "\\" "\\\\" temp $ENV{${DL_SDK_TEMP}})
else(WIN32)
set(temp $ENV{${DL_SDK_TEMP}})
endif(WIN32)
if (ENABLE_ALTERNATIVE_TEMP)
set(ALTERNATIVE_PATH ${source_tree_dir}/temp)
endif()
else ()
message(STATUS "DL_SDK_TEMP environment variable not set")
set(temp ${source_tree_dir}/temp)
endif()
set("${temp_variable}" "${temp}" PARENT_SCOPE)
if(ALTERNATIVE_PATH)
set(ALTERNATIVE_PATH "${ALTERNATIVE_PATH}" PARENT_SCOPE)
endif()
endfunction()
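This is the helper that replaced the inline `DL_SDK_TEMP` handling in the dependencies script above; its call shape, as used there (the trailing `message` is illustrative only):

```cmake
# Resolves TEMP from the DL_SDK_TEMP environment variable when it is set,
# falling back to <source tree>/temp otherwise.
set_temp_directory(TEMP "${IE_MAIN_SOURCE_DIR}")
message(STATUS "Dependency download directory: ${TEMP}")
```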
# Use solution folders
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
include(os_flags)
include(sdl)
include(sanitizer)
include(cpplint)
include(cppcheck)
function(set_ci_build_number)
set(IE_MAIN_SOURCE_DIR "${CMAKE_SOURCE_DIR}")
include(version)
set(CI_BUILD_NUMBER "${CI_BUILD_NUMBER}" PARENT_SCOPE)
endfunction()
set_ci_build_number()
if(ENABLE_PROFILING_ITT)
find_package(ITT REQUIRED)
endif()
include(plugins/plugins)

View File

@@ -0,0 +1,48 @@
# Copyright (C) 2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(IE_MAIN_SOURCE_DIR "@CMAKE_SOURCE_DIR@")
file(TO_CMAKE_PATH "${CMAKE_CURRENT_LIST_DIR}" cache_path)
# inherit OpenCV from main IE project
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)
find_package(OpenCV COMPONENTS imgcodecs)
# Targets
include("${CMAKE_CURRENT_LIST_DIR}/targets_developer.cmake")
# add additional interface include directories needed for plugin development
if(NOT TARGET IE::inference_engine)
message(FATAL_ERROR "The target IE::inference_engine does not exist")
endif()
set(ie_plugin_headers "${IE_MAIN_SOURCE_DIR}/src/inference_engine")
set_property(TARGET IE::inference_engine APPEND PROPERTY INTERFACE_INCLUDE_DIRECTORIES "${ie_plugin_headers}")
set_property(TARGET IE::inference_engine PROPERTY IMPORTED_GLOBAL TRUE)
get_target_property(InferenceEngine_INCLUDE_DIRS IE::inference_engine INTERFACE_INCLUDE_DIRECTORIES)
set(InferenceEngine_LIBRARIES IE::inference_engine)
# Variables to export in plugin's projects
set(ie_options "@IE_OPTIONS@;CMAKE_BUILD_TYPE")
load_cache("${cache_path}" READ_WITH_PREFIX "" ${ie_options})
message(STATUS "The following CMake options are exported from Inference Engine Developer package")
message("")
foreach(option IN LISTS ie_options)
message(" ${option}: ${${option}}")
endforeach()
message("")
#
# Common cmake includes
#
list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake;${IE_MAIN_SOURCE_DIR}/cmake")
# generic stuff from developer package
include(developer_package)

View File

@@ -1,10 +1,7 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 2.8)
function (Download from to fatal result output)
if((NOT EXISTS "${to}"))

View File

@@ -1,10 +1,7 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 2.8)
function (DownloadAndApply URL apply_to)
if (EXISTS ${apply_to})

View File

@@ -1,23 +1,21 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 2.8)
include (FindWget)
function (DownloadAndCheck from to fatal result)
set(status_res "ON")
set(output 1)
set(status_res "ON")
set(output 1)
get_filename_component(download_dir ${to} DIRECTORY)
if (NOT EXISTS ${download_dir})
file(MAKE_DIRECTORY ${download_dir})
endif()
get_filename_component(download_dir ${to} DIRECTORY)
if (NOT EXISTS ${download_dir})
file(MAKE_DIRECTORY ${download_dir})
endif()
if(NOT EXISTS "${to}")
if(NOT EXISTS "${to}")
if (${from} MATCHES "(http:)|(https:)|(ftp:)")
message(STATUS "Downloading from ${from} to ${to} ...")
find_program(aria2c "aria2c")
if (${aria2c} STREQUAL "aria2c-NOTFOUND")
if (NOT ${WGET_FOUND})
@@ -48,9 +46,13 @@ function (DownloadAndCheck from to fatal result)
status_code: ${status_code}")
endif()
endif()
else()
message(STATUS "Copying from local folder ${from} to ${to} ... ")
file(COPY ${from} DESTINATION ${download_dir})
endif()
endif()
file(REMOVE ${to}.md5)
set(${result} "${status_res}" PARENT_SCOPE)
endfunction(DownloadAndCheck)
endfunction(DownloadAndCheck)

View File

@@ -1,9 +1,7 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 2.8)
include ("extract")
include ("download_and_check")
@@ -120,12 +118,12 @@ function (DownloadOrExtractInternal URL archive_path unpacked_path folder fattal
if (ENABLE_UNSAFE_LOCATIONS)
ExtractWithVersion(${URL} ${archive_path} ${unpacked_path} ${folder} result)
if(NOT ${result})
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
endif()
else()
debug_message("archive found on FS: ${archive_path}, however we cannot verify its checksum, so it is treated as invalid")
file(REMOVE_RECURSE "${archive_path}")
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
DownloadAndExtractInternal(${URL} ${archive_path} ${unpacked_path} ${folder} ${fattal} result)
endif()
@@ -144,7 +142,11 @@ function (CheckOrDownloadAndExtract component RELATIVE_URL archive_name unpacked
set (status "ON")
set (on_master FALSE)
set (URL "https://download.01.org/openvinotoolkit/2018_R5/dldt/inference_engine/${RELATIVE_URL}")
if(DEFINED ENV{IE_PATH_TO_DEPS})
set(URL "$ENV{IE_PATH_TO_DEPS}/${RELATIVE_URL}")
else()
set(URL "https://download.01.org/opencv/2019/openvinotoolkit/R3/inference_engine/${RELATIVE_URL}")
endif()
#no message on recursive calls
if (${use_alternatives})

View File

@@ -1,17 +1,14 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 2.8)
function (extract archive_path unpacked_path folder result)
# Slurped from a generated extract-TARGET.cmake file.
if (NOT EXISTS ${unpacked_path})
get_filename_component(unpacked_dir ${unpacked_path} DIRECTORY)
file(MAKE_DIRECTORY ${unpacked_path})
message(STATUS "extracting...
src='${archive_path}'
dst='${unpacked_path}'")

View File

@@ -1,28 +1,27 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required (VERSION 2.8)
include (options)
include ("options")
#this options are aimed to optimize build time on development system
#these options are aimed to optimize build time on development system
#backed targets
ie_option (ENABLE_GNA "GNA support for inference engine" ON)
ie_option (ENABLE_ROCKHOPER "use Rockhopper decoder for converting / output scores" ON)
ie_option (ENABLE_MKL_DNN "MKL-DNN plugin for inference engine" ON)
ie_option (ENABLE_CLDNN "clDnn based plugin for inference engine" ON)
ie_option (ENABLE_CLDNN_TESTS "Enable clDNN unit tests" OFF)
ie_option (ENABLE_CLDNN_BUILD "build clDnn from sources" OFF)
ie_option (ENABLE_PROFILING_ITT "ITT tracing of IE and plugins internals" ON)
ie_option (ENABLE_PROFILING_RAW "Raw counters profiling (just values, no start/stop time or timeline)" OFF)
#
# "MKL-DNN library might use MKL-ML or OpenBLAS for gemm tasks: MKL|OPENBLAS|JIT"
if (NOT GEMM STREQUAL "MKL"
AND NOT GEMM STREQUAL "OPENBLAS"
@@ -30,43 +29,42 @@ if (NOT GEMM STREQUAL "MKL"
set (GEMM "JIT")
message(STATUS "GEMM should be set to MKL, OPENBLAS or JIT. Default option is " ${GEMM})
endif()
set(GEMM "${GEMM}" CACHE STRING "Gemm implementation" FORCE)
list (APPEND IE_OPTIONS GEMM)
# "MKL-DNN library based on OMP or TBB or Sequential implementation: TBB|OMP|SEQ"
if (NOT THREADING STREQUAL "TBB"
AND NOT THREADING STREQUAL "TBB_AUTO"
AND NOT THREADING STREQUAL "OMP"
AND NOT THREADING STREQUAL "SEQ")
set (THREADING "OMP")
message(STATUS "THREADING should be set to TBB, OMP or SEQ. Default option is " ${THREADING})
set (THREADING "TBB")
message(STATUS "THREADING should be set to TBB, TBB_AUTO, OMP or SEQ. Default option is " ${THREADING})
endif()
set(THREADING "${THREADING}" CACHE STRING "Threading" FORCE)
list (APPEND IE_OPTIONS THREADING)
# Enable postfixes for Debug/Release builds
set (IE_DEBUG_POSTFIX_WIN "d")
set (IE_RELEASE_POSTFIX_WIN "")
set (IE_DEBUG_POSTFIX_LIN "")
set (IE_RELEASE_POSTFIX_LIN "")
if (WIN32)
set (IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_WIN})
set (IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_WIN})
else()
set (IE_DEBUG_POSTFIX ${IE_DEBUG_POSTFIX_LIN})
set (IE_RELEASE_POSTFIX ${IE_RELEASE_POSTFIX_LIN})
endif()
list (APPEND IE_OPTIONS IE_DEBUG_POSTFIX)
list (APPEND IE_OPTIONS IE_RELEASE_POSTFIX)
ie_option (ENABLE_VPU "vpu targeted plugins for inference engine" ON)
ie_option (ENABLE_MYRIAD "myriad targeted plugin for inference engine" ON)
ie_option (ENABLE_MYRIAD_NO_BOOT "myriad plugin will skip device boot" OFF)
ie_option (ENABLE_TESTS "unit and functional tests" OFF)
ie_option (ENABLE_GAPI_TESTS "unit tests for GAPI kernels" OFF)
ie_option (ENABLE_GAPI_TESTS "tests for GAPI kernels" OFF)
ie_option (GAPI_TEST_PERF "if GAPI unit tests should examine performance" OFF)
ie_option (ENABLE_MYRIAD_MVNC_TESTS "functional and behavior tests for mvnc api" OFF)
ie_option (ENABLE_SAMPLES "console samples are part of inference engine package" ON)
ie_option (ENABLE_SAMPLES_CORE "console samples core library" ON)
ie_option (ENABLE_SANITIZER "enable checking memory errors via AddressSanitizer" OFF)
ie_option (ENABLE_FUZZING "instrument build for fuzzing" OFF)
ie_option (COVERAGE "enable code coverage" OFF)
ie_option (ENABLE_STRESS_UNIT_TESTS "stress unit tests" OFF)
@@ -93,6 +91,39 @@ ie_option (ENABLE_DEBUG_SYMBOLS "generates symbols for debugging" OFF)
ie_option (ENABLE_PYTHON "enables ie python bridge build" OFF)
ie_option (DEVELOPMENT_PLUGIN_MODE "Disabled build of all plugins" OFF)
ie_option (TREAT_WARNING_AS_ERROR "Treat build warnings as errors" ON)
ie_option (ENABLE_CPP_CCT "enables C++ version of Cross Check Tool" OFF)
ie_option (ENABLE_UNICODE_PATH_SUPPORT "Enable loading models from Unicode paths" ON)
ie_option (ENABLE_LTO "Enable Link Time Optimization" OFF)
# FIXME: there are compiler failures with LTO and Cross-Compile toolchains. Disabling for now, but
# this must be addressed in a proper way
if(CMAKE_CROSSCOMPILING OR NOT (UNIX AND NOT APPLE))
set(ENABLE_LTO OFF)
endif()
if (UNIX AND NOT APPLE AND CMAKE_COMPILER_IS_GNUCC AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 5.3)
set(ENABLE_UNICODE_PATH_SUPPORT OFF)
endif()
if (UNIX AND NOT APPLE)
ie_option(ENABLE_CPPLINT "Enable cpplint checks during the build" ON)
ie_option(ENABLE_CPPLINT_REPORT "Build cpplint report instead of failing the build" OFF)
else()
set(ENABLE_CPPLINT OFF)
endif()
if (UNIX AND NOT APPLE AND CMAKE_VERSION VERSION_GREATER_EQUAL 3.10)
ie_option(ENABLE_CPPCHECK "Enable cppcheck during the build" ON)
else()
set(ENABLE_CPPCHECK OFF)
endif()
#environment variables used
#name of environment variable stored path to temp directory"

View File

@@ -0,0 +1,30 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
function(enable_fuzzing)
# Enable (libFuzzer)[https://llvm.org/docs/LibFuzzer.html] if supported.
if(CMAKE_CXX_COMPILER_ID MATCHES "Clang" AND NOT WIN32)
# Communicate libfuzzer is enabled
set(WITH_LIBFUZZER ON PARENT_SCOPE)
add_compile_definitions(WITH_LIBFUZZER)
# Enable libfuzzer and code coverage
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${FUZZING_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${FUZZING_COMPILER_FLAGS}" PARENT_SCOPE)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${FUZZING_LINKER_FLAGS}" PARENT_SCOPE)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${FUZZING_LINKER_FLAGS}")
endif()
endfunction(enable_fuzzing)
function(add_fuzzer FUZZER_EXE_NAME FUZZER_SOURCES)
add_executable(${FUZZER_EXE_NAME} ${FUZZER_SOURCES})
if(WITH_LIBFUZZER)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fsanitize=fuzzer" PARENT_SCOPE)
endif()
target_link_libraries(${FUZZER_EXE_NAME} PRIVATE fuzz-testhelper)
endfunction(add_fuzzer)
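A fuzz target's `CMakeLists.txt` would then use the pair roughly as follows (a sketch; the target and source file names are hypothetical):

```cmake
# enable_fuzzing() must run before the target is created so that the
# sanitizer and coverage flags it sets are applied; add_fuzzer() then
# builds the harness and links the fuzz-testhelper support library.
enable_fuzzing()
add_fuzzer(ir_parser_fuzzer fuzz_ir_parser.cpp)
```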

View File

@@ -1,52 +1,82 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
function(set_ie_threading_interface_for TARGET_NAME)
set(IE_THREAD_DEFINE "IE_THREAD_SEQ")
if (THREADING STREQUAL "TBB")
if (NOT (IE_MAIN_SOURCE_DIR))
set(incl_path ${IE_EXTERNAL_DIR}/tbb/include)
if (WIN32)
set(lib_rel_path ${IE_LIB_REL_DIR})
set(lib_dbg_path ${IE_LIB_DBG_DIR})
if (THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
if (DEFINED ENV{TBBROOT})
# Check TBB package in case if custom TBBROOT path configured
find_package(TBB QUIET PATHS "$ENV{TBBROOT}/cmake")
if (TBB_FOUND)
set(IE_THREAD_DEFINE "IE_THREAD_TBB")
if (WIN32)
target_link_libraries(${TARGET_NAME} PUBLIC "-nodefaultlib:vcomp")
endif ()
target_link_libraries(${TARGET_NAME} PUBLIC ${TBB_IMPORTED_TARGETS})
else ()
set(lib_rel_path ${IE_EXTERNAL_DIR}/tbb/lib)
# TBB was not found by the configured TBBROOT path, SEQ method will be used
ext_message(WARNING "TBB not found by the configured TBBROOT path $ENV{TBBROOT}")
endif ()
else()
if (NOT (IE_MAIN_SOURCE_DIR))
set(incl_path ${IE_EXTERNAL_DIR}/tbb/include)
if (WIN32)
set(lib_rel_path ${IE_LIB_REL_DIR})
set(lib_dbg_path ${IE_LIB_DBG_DIR})
else ()
set(lib_rel_path ${IE_EXTERNAL_DIR}/tbb/lib)
set(lib_dbg_path ${lib_rel_path})
endif ()
else ()
set(incl_path ${TBB}/include)
set(lib_rel_path ${TBB}/lib)
set(lib_dbg_path ${lib_rel_path})
endif ()
else ()
set(incl_path ${TBB}/include)
set(lib_rel_path ${TBB}/lib)
set(lib_dbg_path ${lib_rel_path})
endif ()
if (NOT TBB_INCLUDE_DIRS OR NOT TBB_LIBRARIES_RELEASE OR NOT TBB_LIBRARIES_DEBUG)
find_path(TBB_INCLUDE_DIRS tbb/tbb.h ${incl_path} NO_DEFAULT_PATH)
find_library(TBB_LIBRARIES_RELEASE tbb ${lib_rel_path} NO_DEFAULT_PATH)
find_library(TBB_LIBRARIES_DEBUG tbb_debug ${lib_dbg_path} NO_DEFAULT_PATH)
ext_message(STATUS "TBB include: ${TBB_INCLUDE_DIRS}")
ext_message(STATUS "TBB Release lib: ${TBB_LIBRARIES_RELEASE}")
ext_message(STATUS "TBB Debug lib: ${TBB_LIBRARIES_DEBUG}")
endif ()
if (NOT TBB_INCLUDE_DIRS OR NOT TBB_LIBRARIES_RELEASE)
find_path(TBB_INCLUDE_DIRS tbb/tbb.h ${incl_path} NO_DEFAULT_PATH)
find_library(TBB_LIBRARIES_RELEASE tbb ${lib_rel_path} NO_DEFAULT_PATH)
ext_message(STATUS "TBB include: ${TBB_INCLUDE_DIRS}")
ext_message(STATUS "TBB Release lib: ${TBB_LIBRARIES_RELEASE}")
if (NOT LINUX)
find_library(TBB_LIBRARIES_DEBUG tbb_debug ${lib_dbg_path} NO_DEFAULT_PATH)
if (TBB_LIBRARIES_DEBUG)
ext_message(STATUS "TBB Debug lib: ${TBB_LIBRARIES_DEBUG}")
else ()
ext_message(WARNING "TBB Debug binaries are missing.")
endif ()
endif ()
endif ()
if (NOT TBB_INCLUDE_DIRS OR NOT TBB_LIBRARIES_RELEASE OR NOT TBB_LIBRARIES_DEBUG)
ext_message(WARNING "TBB not found. TBB support will be disabled. ${IE_THREAD_DEFINE} is defined")
else ()
set(IE_THREAD_DEFINE "IE_THREAD_TBB")
target_include_directories(${TARGET_NAME} PUBLIC ${TBB_INCLUDE_DIRS})
if (WIN32)
target_link_libraries(${TARGET_NAME} PUBLIC "-nodefaultlib:vcomp")
target_link_libraries(${TARGET_NAME} PUBLIC "$<$<CONFIG:DEBUG>:${TBB_LIBRARIES_DEBUG}>;$<$<NOT:$<CONFIG:DEBUG>>:${TBB_LIBRARIES_RELEASE}>")
else()
if ("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
target_link_libraries(${TARGET_NAME} PUBLIC ${TBB_LIBRARIES_DEBUG})
else()
if (NOT TBB_INCLUDE_DIRS OR NOT TBB_LIBRARIES_RELEASE)
ext_message(WARNING "TBB not found. TBB support will be disabled. ${IE_THREAD_DEFINE} is defined")
else ()
set(IE_THREAD_DEFINE "IE_THREAD_TBB")
target_include_directories(${TARGET_NAME} PUBLIC ${TBB_INCLUDE_DIRS})
if (WIN32)
target_link_libraries(${TARGET_NAME} PUBLIC "-nodefaultlib:vcomp")
endif ()
# Debug binaries are optional.
if (TBB_LIBRARIES_DEBUG AND NOT LINUX)
if (WIN32)
target_link_libraries(${TARGET_NAME} PUBLIC "$<$<CONFIG:DEBUG>:${TBB_LIBRARIES_DEBUG}>;$<$<NOT:$<CONFIG:DEBUG>>:${TBB_LIBRARIES_RELEASE}>")
else ()
if ("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
target_link_libraries(${TARGET_NAME} PUBLIC ${TBB_LIBRARIES_DEBUG})
else()
target_link_libraries(${TARGET_NAME} PUBLIC ${TBB_LIBRARIES_RELEASE})
endif ()
endif ()
else ()
# Link Release library to all configurations.
target_link_libraries(${TARGET_NAME} PUBLIC ${TBB_LIBRARIES_RELEASE})
endif ()
endif ()
endif ()
endif()
elseif (THREADING STREQUAL "OMP")
if (WIN32)
set(omp_lib_name libiomp5md)
@@ -67,34 +97,55 @@ function(set_ie_threading_interface_for TARGET_NAME)
set(lib_dbg_path ${lib_rel_path})
endif ()
if (NOT OMP_LIBRARIES_RELEASE OR NOT OMP_LIBRARIES_DEBUG)
if (NOT OMP_LIBRARIES_RELEASE)
find_library(OMP_LIBRARIES_RELEASE ${omp_lib_name} ${lib_rel_path} NO_DEFAULT_PATH)
find_library(OMP_LIBRARIES_DEBUG ${omp_lib_name} ${lib_dbg_path} NO_DEFAULT_PATH)
ext_message(STATUS "OMP Release lib: ${OMP_LIBRARIES_RELEASE}")
ext_message(STATUS "OMP Debug lib: ${OMP_LIBRARIES_DEBUG}")
endif ()
if (NOT OMP_LIBRARIES_RELEASE OR NOT OMP_LIBRARIES_DEBUG)
ext_message(WARNING "Intel OpenMP not found. Intel OpenMP support will be disabled. ${IE_THREAD_DEFINE} is defined")
else ()
set(IE_THREAD_DEFINE "IE_THREAD_OMP")
if (WIN32)
target_compile_options(${TARGET_NAME} PUBLIC ${OpenMP_CXX_FLAGS} /openmp)
target_compile_options(${TARGET_NAME} PUBLIC ${OpenMP_CXX_FLAGS} /Qopenmp)
target_link_libraries(${TARGET_NAME} PUBLIC "-nodefaultlib:vcomp")
target_link_libraries(${TARGET_NAME} PUBLIC "$<$<CONFIG:DEBUG>:${OMP_LIBRARIES_DEBUG}>;$<$<NOT:$<CONFIG:DEBUG>>:${OMP_LIBRARIES_RELEASE}>")
else()
target_compile_options(${TARGET_NAME} PUBLIC ${OpenMP_CXX_FLAGS} -fopenmp)
if ("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
target_link_libraries(${TARGET_NAME} PUBLIC ${OMP_LIBRARIES_DEBUG})
else()
target_link_libraries(${TARGET_NAME} PUBLIC ${OMP_LIBRARIES_RELEASE})
if (NOT LINUX)
find_library(OMP_LIBRARIES_DEBUG ${omp_lib_name} ${lib_dbg_path} NO_DEFAULT_PATH)
if (OMP_LIBRARIES_DEBUG)
ext_message(STATUS "OMP Debug lib: ${OMP_LIBRARIES_DEBUG}")
else ()
ext_message(WARNING "OMP Debug binaries are missing.")
endif ()
endif ()
endif ()
if (NOT OMP_LIBRARIES_RELEASE)
ext_message(WARNING "Intel OpenMP not found. Intel OpenMP support will be disabled. ${IE_THREAD_DEFINE} is defined")
else ()
set(IE_THREAD_DEFINE "IE_THREAD_OMP")
if (WIN32)
target_compile_options(${TARGET_NAME} PUBLIC ${OpenMP_CXX_FLAGS} /openmp)
target_compile_options(${TARGET_NAME} PUBLIC ${OpenMP_CXX_FLAGS} /Qopenmp)
target_link_libraries(${TARGET_NAME} PUBLIC "-nodefaultlib:vcomp")
else()
target_compile_options(${TARGET_NAME} PUBLIC ${OpenMP_CXX_FLAGS} -fopenmp)
endif ()
# Debug binaries are optional.
if (OMP_LIBRARIES_DEBUG AND NOT LINUX)
if (WIN32)
target_link_libraries(${TARGET_NAME} PUBLIC "$<$<CONFIG:DEBUG>:${OMP_LIBRARIES_DEBUG}>;$<$<NOT:$<CONFIG:DEBUG>>:${OMP_LIBRARIES_RELEASE}>")
else()
if ("${CMAKE_BUILD_TYPE}" STREQUAL "Debug")
target_link_libraries(${TARGET_NAME} PUBLIC ${OMP_LIBRARIES_DEBUG})
else()
target_link_libraries(${TARGET_NAME} PUBLIC ${OMP_LIBRARIES_RELEASE})
endif ()
endif ()
else ()
# Link Release library to all configurations.
target_link_libraries(${TARGET_NAME} PUBLIC ${OMP_LIBRARIES_RELEASE})
endif ()
endif ()
endif ()
target_compile_definitions(${TARGET_NAME} PUBLIC -DIE_THREAD=${IE_THREAD_DEFINE})
if (NOT THREADING STREQUAL "SEQ")
find_package(Threads REQUIRED)
target_link_libraries(${TARGET_NAME} PUBLIC ${CMAKE_THREAD_LIBS_INIT})
endif()
endfunction(set_ie_threading_interface_for)
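
The function above can be applied to any target that needs the selected threading backend. A minimal sketch, assuming a hypothetical target `my_plugin` (not a name from this repository):

```cmake
# Create a placeholder library target, then let the helper attach the
# TBB/OMP/SEQ include paths, link libraries, and IE_THREAD define to it.
add_library(my_plugin SHARED plugin.cpp)
set_ie_threading_interface_for(my_plugin)
```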


@@ -1,11 +1,8 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required(VERSION 2.8)
if (UNIX)
if (LINUX)
function(get_linux_name res_var)
if (NOT EXISTS "/etc/lsb-release")
execute_process(COMMAND find -L /etc/ -maxdepth 1 -type f -name *-release -exec cat {} \;


@@ -1,6 +0,0 @@
# Copyright (C) 2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
option(DEVELOPMENT_PLUGIN_MODE "Disabled build of all plugins" OFF)


@@ -1,21 +1,27 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
# Usage: ie_option(<option_variable> "description" <initial value or boolean expression> [IF <condition>])
function (ie_option variable description value)
option(${variable} "${description}" ${value})
list (APPEND IE_OPTIONS "${variable}")
list(FIND IE_OPTIONS "${variable}" result)
set (IE_OPTIONS "${IE_OPTIONS}" PARENT_SCOPE)
if(${result} EQUAL -1)
option(${variable} "${description}" ${value})
list (APPEND IE_OPTIONS "${variable}")
set (IE_OPTIONS "${IE_OPTIONS}" PARENT_SCOPE)
endif()
endfunction()
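
A sketch of how `ie_option` might be used; the option name and subdirectory are illustrative assumptions, not taken from this repository:

```cmake
# Declare an option once; ie_option() also records it in IE_OPTIONS so
# print_enabled_features() can later report its value.
ie_option(ENABLE_MY_FEATURE "Build the hypothetical extra feature" ON)
if(ENABLE_MY_FEATURE)
    add_subdirectory(my_feature)
endif()
```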
include(version)
function (print_enabled_features)
message(STATUS "CI_BUILD_NUMBER: ${CI_BUILD_NUMBER}")
message(STATUS "Inference Engine enabled features: ")
message("")
message(" CI_BUILD_NUMBER: ${CI_BUILD_NUMBER}")
foreach(_var ${IE_OPTIONS})
message(STATUS "${_var} = ${${_var}}")
message(" ${_var} = ${${_var}}")
endforeach()
message("")
endfunction()


@@ -1,20 +1,43 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
macro(disable_deprecated_warnings)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID MATCHES Intel)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Qdiag-warning:1478")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL MSVC)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4996") # disable warning on deprecated API
endif()
else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-deprecated-declarations")
endif()
endmacro()
if (WIN32)
set_property(DIRECTORY APPEND PROPERTY COMPILE_DEFINITIONS _CRT_SECURE_NO_WARNINGS)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -D_SCL_SECURE_NO_WARNINGS")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc") # no asynchronous structured exception handling
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /LARGEADDRESSAWARE")
if (TREAT_WARNING_AS_ERROR)
if(CMAKE_CXX_COMPILER_ID MATCHES Intel)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /WX")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Qdiag-warning:2586,177,3180,1740,1786,47,161")
elseif (CMAKE_CXX_COMPILER_ID MATCHES MSVC)
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /WX") # Too many warnings
endif()
endif()
set(CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG} /Z7")
set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} /Z7")
if(ENABLE_DEBUG_SYMBOLS)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Zi")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /Zi")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /Z7")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} /Z7")
set(DEBUG_SYMBOLS_LINKER_FLAGS "/DEBUG")
if ("${CMAKE_BUILD_TYPE}" STREQUAL "Release")
if (CMAKE_BUILD_TYPE STREQUAL "Release")
# Keep default /OPT values. See /DEBUG reference for details.
set(DEBUG_SYMBOLS_LINKER_FLAGS "${DEBUG_SYMBOLS_LINKER_FLAGS} /OPT:REF /OPT:ICF")
endif()
@@ -23,17 +46,32 @@ if (WIN32)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} ${DEBUG_SYMBOLS_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} ${DEBUG_SYMBOLS_LINKER_FLAGS}")
endif()
else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Werror=return-type ")
if (APPLE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=unused-command-line-argument")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-unused-function")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-unused-variable")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-unused-private-field")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-reorder")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wswitch")
elseif(UNIX)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wuninitialized -Winit-self")
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
if(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-switch")
else()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wmaybe-uninitialized")
endif()
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -diag-disable=remark")
endif()
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fvisibility-inlines-hidden")
if(LINUX)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -ffunction-sections -fdata-sections")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,--gc-sections -Wl,--exclude-libs,ALL")
endif()
endif()


@@ -0,0 +1,27 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(newContent " <plugin name=\"${IE_DEVICE_NAME}\" location=\"${IE_PLUGIN_LIBRARY_NAME}\">")
if(IE_PLUGIN_PROPERTIES)
set(newContent "${newContent}
<properties>")
foreach(props IN LISTS IE_PLUGIN_PROPERTIES)
string(REPLACE "," ";" props "${props}")
list(GET props 0 key)
list(GET props 1 value)
set(newContent "${newContent}
<property key=\"${key}\" value=\"${value}\"/>")
endforeach()
set(newContent "${newContent}
</properties>")
endif()
set(newContent "${newContent}
</plugin>")
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -0,0 +1,132 @@
# Copyright (C) 2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
include(CMakeParseArguments)
set(PLUGIN_FILES "" CACHE INTERNAL "")
function(get_shared_library_name target_name library_name)
set(LIB_PREFIX "${CMAKE_SHARED_LIBRARY_PREFIX}")
set(LIB_SUFFIX "${IE_BUILD_POSTFIX}${CMAKE_SHARED_LIBRARY_SUFFIX}")
set("${library_name}" "${LIB_PREFIX}${target_name}${LIB_SUFFIX}" PARENT_SCOPE)
endfunction()
if(NOT TARGET ie_plugins)
add_custom_target(ie_plugins)
endif()
#
# ie_add_plugin(NAME <targetName>
# DEVICE_NAME <deviceName>
# SOURCES <sources>
# VERSION_DEFINES_FOR <source>
# )
#
function(ie_add_plugin)
set(options)
set(oneValueArgs NAME DEVICE_NAME VERSION_DEFINES_FOR)
set(multiValueArgs SOURCES)
cmake_parse_arguments(IE_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT IE_PLUGIN_NAME)
message(FATAL_ERROR "Please specify the plugin target name")
endif()
if(NOT IE_PLUGIN_DEVICE_NAME)
message(FATAL_ERROR "Please specify the device name for ${IE_PLUGIN_NAME}")
endif()
# create and configure target
if(IE_PLUGIN_VERSION_DEFINES_FOR)
addVersionDefines(${IE_PLUGIN_VERSION_DEFINES_FOR} CI_BUILD_NUMBER)
endif()
add_library(${IE_PLUGIN_NAME} SHARED ${IE_PLUGIN_SOURCES})
target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE IMPLEMENT_INFERENCE_ENGINE_PLUGIN)
if(WIN32)
set_target_properties(${IE_PLUGIN_NAME} PROPERTIES COMPILE_PDB_NAME ${TARGET_NAME})
endif()
add_cpplint_target(${IE_PLUGIN_NAME}_cpplint FOR_TARGETS ${IE_PLUGIN_NAME})
# append plugin to the list to register
list(APPEND PLUGIN_FILES "${IE_PLUGIN_DEVICE_NAME}:${IE_PLUGIN_NAME}")
list(REMOVE_DUPLICATES PLUGIN_FILES)
set(PLUGIN_FILES "${PLUGIN_FILES}" CACHE INTERNAL "" FORCE)
add_dependencies(ie_plugins ${IE_PLUGIN_NAME})
endfunction()
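
Following the usage signature above, a sketch of a call to `ie_add_plugin`; the target name, device name, and source file are hypothetical examples, not plugins from this repository:

```cmake
# Declares a shared-library plugin target, tags it with a device name,
# and queues it for registration in plugins.xml.
ie_add_plugin(NAME myDevicePlugin
              DEVICE_NAME MYDEVICE
              SOURCES src/plugin_impl.cpp
              VERSION_DEFINES_FOR src/plugin_impl.cpp)
```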
#
# ie_register_plugins(MAIN_TARGET <main target name>
# POSSIBLE_PLUGINS <list of plugins which can be built by this repo>)
#
macro(ie_register_plugins)
set(options)
set(oneValueArgs MAIN_TARGET)
set(multiValueArgs POSSIBLE_PLUGINS)
cmake_parse_arguments(IE_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
if(NOT IE_REGISTER_MAIN_TARGET)
message(FATAL_ERROR "Please, define MAIN_TARGET")
endif()
set(plugins_to_remove ${IE_REGISTER_POSSIBLE_PLUGINS})
set(plugin_files_local)
set(config_output_file "$<TARGET_FILE_DIR:${IE_REGISTER_MAIN_TARGET}>/plugins.xml")
foreach(plugin IN LISTS plugins_to_remove)
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "IE_PLUGIN_NAME=${plugin}"
-D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-P "${IE_MAIN_SOURCE_DIR}/cmake/plugins/unregister_plugin_cmake.cmake"
COMMENT
"Remove ${plugin} from the plugins.xml file"
VERBATIM)
endforeach()
foreach(name IN LISTS PLUGIN_FILES)
string(REPLACE ":" ";" name "${name}")
list(LENGTH name length)
if(NOT ${length} EQUAL 2)
message(FATAL_ERROR "Unexpected error; please contact the developer of this script")
endif()
list(GET name 0 device_name)
list(GET name 1 name)
# create plugin file
set(config_file_name "${CMAKE_BINARY_DIR}/plugins/${name}.xml")
get_shared_library_name(${name} library_name)
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "IE_CONFIG_OUTPUT_FILE=${config_file_name}"
-D "IE_DEVICE_NAME=${device_name}"
-D "IE_PLUGIN_LIBRARY_NAME=${library_name}"
-P "${IE_MAIN_SOURCE_DIR}/cmake/plugins/create_plugin_file.cmake"
COMMENT "Register ${name} plugin"
VERBATIM)
list(APPEND plugin_files_local "${config_file_name}")
endforeach()
add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
COMMAND
"${CMAKE_COMMAND}"
-D "CMAKE_SHARED_LIBRARY_PREFIX=${CMAKE_SHARED_LIBRARY_PREFIX}"
-D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
-D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-P "${IE_MAIN_SOURCE_DIR}/cmake/plugins/register_plugin_cmake.cmake"
COMMENT
"Registering plugins to plugins.xml config file"
VERBATIM)
endmacro()
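
A sketch of invoking the macro above; the main target and plugin names are illustrative assumptions, not necessarily the ones used in this repository:

```cmake
# After all ie_add_plugin() calls, register the built plugins into the
# plugins.xml file next to the main Inference Engine library.
ie_register_plugins(MAIN_TARGET inference_engine
                    POSSIBLE_PLUGINS myDevicePlugin otherDevicePlugin)
```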


@@ -0,0 +1,65 @@
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(file_content
"<ie>
<plugins>
</plugins>
</ie>")
if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${file_content}")
endif()
# get list of plugin files
file(GLOB plugin_files "${IE_CONFIGS_DIR}/*.xml")
function(check_plugin_exists plugin_name outvar)
set(${outvar} OFF PARENT_SCOPE)
# check if config file already has this plugin
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content REGEX "plugin .*=\"")
foreach(line IN LISTS content)
string(REGEX MATCH "location=\"([^\"]*)\"" location "${line}")
get_filename_component(location "${CMAKE_MATCH_1}" NAME_WE)
if("${CMAKE_SHARED_LIBRARY_PREFIX}${plugin_name}" MATCHES "${location}")
# plugin is already registered
set(${outvar} ON PARENT_SCOPE)
endif()
endforeach()
endfunction()
set(plugin_files_to_add)
foreach(plugin_file IN LISTS plugin_files)
get_filename_component(plugin_name "${plugin_file}" NAME_WE)
check_plugin_exists("${plugin_name}" exists)
if(NOT exists)
list(APPEND plugin_files_to_add "${plugin_file}")
endif()
endforeach()
# add plugin
set(newContent "")
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
foreach(line IN LISTS content)
if("${line}" MATCHES "</plugins>")
foreach(plugin_file IN LISTS plugin_files_to_add)
file(READ "${plugin_file}" content)
set(newContent "${newContent}
${content}")
endforeach()
endif()
if(newContent)
set(newContent "${newContent}\n${line}")
else()
set(newContent "${line}")
endif()
endforeach()
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -0,0 +1,35 @@
# Copyright (C) 2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
return()
endif()
# remove plugin file
file(REMOVE "${IE_CONFIGS_DIR}/${IE_PLUGIN_NAME}.xml")
# remove plugin
set(newContent "")
file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
set(skip_plugin OFF)
foreach(line IN LISTS content)
if("${line}" MATCHES "${IE_PLUGIN_NAME}")
set(skip_plugin ON)
endif()
if(NOT skip_plugin)
if(newContent)
set(newContent "${newContent}\n${line}")
else()
set(newContent "${line}")
endif()
endif()
if("${line}" MATCHES "</plugin>")
set(skip_plugin OFF)
endif()
endforeach()
file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")


@@ -1,5 +1,4 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
@@ -11,7 +10,11 @@ if (ENABLE_SANITIZER)
if (SANITIZE_RECOVER_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=address")
endif()
set(SANITIZER_LINKER_FLAGS "-fsanitize=address -fuse-ld=gold")
set(SANITIZER_LINKER_FLAGS "-fsanitize=address")
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=gold")
endif()
set(CMAKE_CC_FLAGS "${CMAKE_CC_FLAGS} ${SANITIZER_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SANITIZER_COMPILER_FLAGS}")


@@ -1,14 +1,17 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
if (UNIX OR APPLE AND ${CMAKE_BUILD_TYPE} STREQUAL "Release")
if (UNIX OR APPLE AND CMAKE_BUILD_TYPE STREQUAL "Release")
set(CMAKE_CCXX_FLAGS "${CMAKE_CCXX_FLAGS} -fPIE -fPIC -Wformat -Wformat-security")
# TODO: double check that it's OK
if(CMAKE_CXX_COMPILER_ID MATCHES Intel)
string(REPLACE "-fPIE" "" CMAKE_CCXX_FLAGS "${CMAKE_CCXX_FLAGS}")
endif()
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -D_FORTIFY_SOURCE=2")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -D_FORTIFY_SOURCE=2")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -pie")
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -z noexecstack -z relro -z now")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -z noexecstack -z relro -z now")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)
@@ -18,12 +21,12 @@ if (UNIX OR APPLE AND ${CMAKE_BUILD_TYPE} STREQUAL "Release")
endif()
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -s -fvisibility=hidden")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -s -fvisibility=hidden")
elseif("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
set(CMAKE_CCXX_FLAGS "${CMAKE_CCXX_FLAGS} -fstack-protector-all")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -fvisibility=hidden")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -fvisibility=hidden")
elseif("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Intel")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fstack-protector")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fstack-protector-strong")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -z noexecstack -z relro -z now")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -z noexecstack -z relro -z now")
set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -Wl,--strip-all -fvisibility=hidden")
@@ -33,7 +36,7 @@ if (UNIX OR APPLE AND ${CMAKE_BUILD_TYPE} STREQUAL "Release")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${CMAKE_CCXX_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${CMAKE_CCXX_FLAGS}")
elseif (WIN32)
if (${CMAKE_CXX_COMPILER_ID} STREQUAL MSVC)
if (CMAKE_CXX_COMPILER_ID STREQUAL MSVC)
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} /MP /sdl")
endif()
endif()


@@ -1,9 +1,8 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(InferenceEngine_VERSION 1.5.0)
set(InferenceEngine_VERSION 2.1.0)
set(PACKAGE_VERSION ${InferenceEngine_VERSION})
set(PACKAGE_VERSION_EXACT False)


@@ -1,5 +1,4 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
#
@@ -42,11 +41,10 @@ else()
if (WIN32)
set(_ARCH intel64)
else()
if(CMAKE_SYSTEM_PROCESSOR STREQUAL "armv7l")
set(_ARCH armv7l)
elseif(${CMAKE_SYSTEM_PROCESSOR} STREQUAL "x86_64")
string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} _ARCH)
if(_ARCH STREQUAL "x86_64" OR _ARCH STREQUAL "amd64") # Windows detects Intel's 64-bit CPU as AMD64
set(_ARCH intel64)
elseif(${CMAKE_SYSTEM_PROCESSOR} STREQUAL "i386")
elseif(_ARCH STREQUAL "i386")
set(_ARCH ia32)
endif()
endif()
@@ -54,112 +52,48 @@ else()
set(THREADING "@THREADING@")
# check whether setvars.sh is sourced
if(NOT IE_ROOT_DIR AND (DEFINED ENV{InferenceEngine_DIR} OR InferenceEngine_DIR OR DEFINED ENV{INTEL_CVSDK_DIR}))
if(NOT IE_ROOT_DIR AND (DEFINED ENV{InferenceEngine_DIR} OR InferenceEngine_DIR OR DEFINED ENV{INTEL_OPENVINO_DIR}))
if (EXISTS "${InferenceEngine_DIR}")
# InferenceEngine_DIR manually set via command line params
set(IE_ROOT_DIR "${InferenceEngine_DIR}/..")
elseif (EXISTS "$ENV{InferenceEngine_DIR}")
# InferenceEngine_DIR manually set via env
set(IE_ROOT_DIR "$ENV{InferenceEngine_DIR}/..")
elseif (EXISTS "$ENV{INTEL_CVSDK_DIR}/inference_engine")
elseif (EXISTS "$ENV{INTEL_OPENVINO_DIR}/inference_engine")
# if we installed DL SDK
set(IE_ROOT_DIR "$ENV{INTEL_CVSDK_DIR}/inference_engine")
elseif (EXISTS "$ENV{INTEL_CVSDK_DIR}/deployment_tools/inference_engine")
set(IE_ROOT_DIR "$ENV{INTEL_OPENVINO_DIR}/inference_engine")
elseif (EXISTS "$ENV{INTEL_OPENVINO_DIR}/deployment_tools/inference_engine")
# CV SDK is installed
set(IE_ROOT_DIR "$ENV{INTEL_CVSDK_DIR}/deployment_tools/inference_engine")
set(IE_ROOT_DIR "$ENV{INTEL_OPENVINO_DIR}/deployment_tools/inference_engine")
endif()
endif()
if(IE_ROOT_DIR)
if (WIN32)
set(_OS_PATH "")
else()
if (NOT EXISTS "/etc/lsb-release")
execute_process(COMMAND find -L /etc/ -maxdepth 1 -type f -name *-release -exec cat {} \;
OUTPUT_VARIABLE release_data RESULT_VARIABLE result)
set(name_regex "NAME=\"([^ \"\n]*).*\"\n")
set(version_regex "VERSION=\"([0-9]+(\\.[0-9]+)?)[^\n]*\"")
else()
# Linux version detection by reading /etc/lsb-release
file(READ "/etc/lsb-release" release_data)
set(name_regex "DISTRIB_ID=([^ \n]*)\n")
set(version_regex "DISTRIB_RELEASE=([0-9]+(\\.[0-9]+)?)")
endif()
string(REGEX MATCH ${name_regex} name ${release_data})
set(os_name ${CMAKE_MATCH_1})
string(REGEX MATCH ${version_regex} version ${release_data})
set(os_name "${os_name} ${CMAKE_MATCH_1}")
if (NOT os_name)
ext_message(FATAL_ERROR "Cannot detect OS via reading /etc/*-release:\n ${release_data}")
endif()
if (NOT InferenceEngine_FIND_QUIETLY)
message (STATUS "/etc/*-release distrib: ${os_name}")
endif()
if (${os_name} STREQUAL "Ubuntu 14.04")
set(_OS_PATH "ubuntu_14.04/")
elseif (${os_name} STREQUAL "Ubuntu 16.04")
set(_OS_PATH "ubuntu_16.04/")
elseif (${os_name} STREQUAL "Ubuntu 18.04")
set(_OS_PATH "ubuntu_18.04/")
elseif (${os_name} STREQUAL "CentOS 7")
set(_OS_PATH "centos_7.4/")
elseif (${os_name} STREQUAL "poky 2.0")
set(_OS_PATH "ubuntu_16.04/")
elseif (${os_name} STREQUAL "poky 2.5")
set(_OS_PATH "ubuntu_18.04/")
elseif (${os_name} STREQUAL "Raspbian 9")
set(_OS_PATH "raspbian_9/")
else()
ext_message(FATAL_ERROR "${os_name} is not supported. List of supported OS: Ubuntu 14.04, Ubuntu 16.04, Ubuntu 18.04, CentOS 7, poky 2.0, poky 2.5, Raspbian 9")
endif()
endif()
if(NOT IE_ROOT_DIR)
ext_message(FATAL_ERROR "inference_engine root directory is not found")
endif()
if(IE_INCLUDE_DIR AND NOT "${IE_ROOT_DIR}/include" EQUAL "${IE_INCLUDE_DIR}")
unset(IE_INCLUDE_DIR CACHE)
endif()
find_path(IE_INCLUDE_DIR inference_engine.hpp "${IE_ROOT_DIR}/include" NO_DEFAULT_PATH)
find_path(IE_SRC_DIR extension "${IE_ROOT_DIR}/src" NO_DEFAULT_PATH)
if(IE_SRC_DIR AND NOT "${IE_ROOT_DIR}/src" EQUAL "${IE_SRC_DIR}")
unset(IE_SRC_DIR CACHE)
endif()
if(IE_LIBRARY AND NOT "${IE_ROOT_DIR}/lib/${_OS_PATH}/${_ARCH}" EQUAL "${IE_LIBRARY}")
unset(IE_LIBRARY CACHE)
endif()
set(_IE_ROOT_INCLUDE_DIR "${IE_ROOT_DIR}/include")
set(_IE_ROOT_SRC_DIR "${IE_ROOT_DIR}/src")
set(_IE_ROOT_LIBRARY "${IE_ROOT_DIR}/lib/${_OS_PATH}/${_ARCH}")
find_path(IE_INCLUDE_DIR inference_engine.hpp "${_IE_ROOT_INCLUDE_DIR}")
find_path(IE_SRC_DIR extension "${_IE_ROOT_SRC_DIR}")
set(IE_LIB_DIR "${_IE_ROOT_LIBRARY}")
set(IE_LIB_DIR "${IE_ROOT_DIR}/lib/${_ARCH}")
set(IE_LIB_REL_DIR "${IE_LIB_DIR}/Release")
set(IE_LIB_DBG_DIR "${IE_LIB_DIR}/Debug")
set(IE_EXTERNAL_DIR "${IE_ROOT_DIR}/external")
include(FindPackageHandleStandardArgs)
if (WIN32)
find_library(IE_RELEASE_LIBRARY inference_engine@IE_RELEASE_POSTFIX_WIN@ "${IE_LIB_REL_DIR}")
find_library(IE_DEBUG_LIBRARY inference_engine@IE_DEBUG_POSTFIX_WIN@ "${IE_LIB_DBG_DIR}")
find_package_handle_standard_args( InferenceEngine
FOUND_VAR INFERENCEENGINE_FOUND
REQUIRED_VARS IE_RELEASE_LIBRARY IE_DEBUG_LIBRARY IE_INCLUDE_DIR
FAIL_MESSAGE "Inference Engine cannot be found at ${_IE_ROOT_LIBRARY}. Please consult the InferenceEngineConfig.cmake module's help page.")
if(WIN32)
find_library(IE_RELEASE_LIBRARY inference_engine@IE_RELEASE_POSTFIX_WIN@ "${IE_LIB_REL_DIR}" NO_DEFAULT_PATH)
elseif(APPLE)
find_library(IE_RELEASE_LIBRARY inference_engine@IE_RELEASE_POSTFIX_MAC@ "${IE_LIB_DIR}" NO_DEFAULT_PATH)
else()
find_library(IE_LIBRARY inference_engine@IE_RELEASE_POSTFIX_LIN@ "${IE_LIB_DIR}")
find_package_handle_standard_args( InferenceEngine
FOUND_VAR INFERENCEENGINE_FOUND
REQUIRED_VARS IE_LIBRARY IE_INCLUDE_DIR
FAIL_MESSAGE "Inference Engine cannot be found at ${_IE_ROOT_LIBRARY}. Please consult the InferenceEngineConfig.cmake module's help page.")
find_library(IE_RELEASE_LIBRARY inference_engine@IE_RELEASE_POSTFIX_LIN@ "${IE_LIB_DIR}" NO_DEFAULT_PATH)
endif()
find_package_handle_standard_args( InferenceEngine
FOUND_VAR INFERENCEENGINE_FOUND
REQUIRED_VARS IE_RELEASE_LIBRARY IE_INCLUDE_DIR
FAIL_MESSAGE "Some mandatory Inference Engine components were not found. Please consult the InferenceEngineConfig.cmake module's help page.")
if(INFERENCEENGINE_FOUND)
# keep this line for successful execution in CMake 2.8
set(InferenceEngine_FOUND TRUE)
@@ -167,26 +101,52 @@ else()
add_library(IE::inference_engine SHARED IMPORTED GLOBAL)
if (WIN32)
set_property(TARGET IE::inference_engine APPEND PROPERTY IMPORTED_CONFIGURATIONS DEBUG)
set_property(TARGET IE::inference_engine APPEND PROPERTY IMPORTED_CONFIGURATIONS RELEASE)
set_target_properties(IE::inference_engine PROPERTIES
IMPORTED_CONFIGURATIONS RELEASE
IMPORTED_IMPLIB_RELEASE "${IE_RELEASE_LIBRARY}"
IMPORTED_IMPLIB_DEBUG "${IE_DEBUG_LIBRARY}"
MAP_IMPORTED_CONFIG_DEBUG Debug
MAP_IMPORTED_CONFIG_RELEASE Release
MAP_IMPORTED_CONFIG_RELWITHDEBINFO Release
INTERFACE_INCLUDE_DIRECTORIES "${IE_INCLUDE_DIR}")
else()
# Debug binaries are optional
find_library(IE_DEBUG_LIBRARY inference_engine@IE_DEBUG_POSTFIX_WIN@ "${IE_LIB_DBG_DIR}" NO_DEFAULT_PATH)
if (IE_DEBUG_LIBRARY)
set_property(TARGET IE::inference_engine APPEND PROPERTY IMPORTED_CONFIGURATIONS DEBUG)
set_target_properties(IE::inference_engine PROPERTIES
IMPORTED_IMPLIB_DEBUG "${IE_DEBUG_LIBRARY}"
MAP_IMPORTED_CONFIG_DEBUG Debug)
else()
ext_message(WARNING "Inference Engine DEBUG binaries are missing.")
endif()
elseif (APPLE)
set_target_properties(IE::inference_engine PROPERTIES
IMPORTED_LOCATION "${IE_LIBRARY}"
INTERFACE_INCLUDE_DIRECTORIES "${IE_INCLUDE_DIR}")
IMPORTED_LOCATION_RELEASE "${IE_RELEASE_LIBRARY}"
INTERFACE_INCLUDE_DIRECTORIES "${IE_INCLUDE_DIR}"
INTERFACE_COMPILE_OPTIONS "-Wno-error=deprecated-declarations")
# Debug binaries are optional
find_library(IE_DEBUG_LIBRARY inference_engine@IE_DEBUG_POSTFIX_MAC@ "${IE_LIB_DIR}" NO_DEFAULT_PATH)
if (IE_DEBUG_LIBRARY)
set_target_properties(IE::inference_engine PROPERTIES
IMPORTED_LOCATION_DEBUG "${IE_DEBUG_LIBRARY}")
else()
ext_message(WARNING "Inference Engine DEBUG binaries are missing")
endif()
target_link_libraries(IE::inference_engine INTERFACE ${CMAKE_DL_LIBS})
else()
# Only Release binaries are distributed for Linux systems
set_target_properties(IE::inference_engine PROPERTIES
IMPORTED_LOCATION "${IE_RELEASE_LIBRARY}"
INTERFACE_INCLUDE_DIRECTORIES "${IE_INCLUDE_DIR}"
INTERFACE_COMPILE_OPTIONS "-Wno-error=deprecated-declarations")
target_link_libraries(IE::inference_engine INTERFACE ${CMAKE_DL_LIBS})
endif()
set(InferenceEngine_INCLUDE_DIRS ${IE_INCLUDE_DIR})
set(InferenceEngine_LIBRARIES IE::inference_engine)
set(IE_EXTERNAL_DIR "${IE_ROOT_DIR}/external")
include("${IE_ROOT_DIR}/share/ie_parallel.cmake")
add_subdirectory(${IE_SRC_DIR}/extension EXCLUDE_FROM_ALL ie_cpu_extension)


@@ -1,10 +1,7 @@
# Copyright (C) 2018 Intel Corporation
#
# Copyright (C) 2018-2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
cmake_minimum_required(VERSION 2.8)
function (branchName VAR)
execute_process(
COMMAND git rev-parse --abbrev-ref HEAD


@@ -0,0 +1,68 @@
# Copyright (C) 2019 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#
set(VPU_SUPPORTED_SOC ma2450 ma2x8x mv0262)
#
# Default firmware packages
#
RESOLVE_DEPENDENCY(VPU_FIRMWARE_MA2450
ARCHIVE_UNIFIED firmware_ma2450_759W.zip
TARGET_PATH "${TEMP}/vpu/firmware/ma2450"
ENVIRONMENT "VPU_FIRMWARE_MA2450"
FOLDER)
debug_message(STATUS "ma2450=" ${VPU_FIRMWARE_MA2450})
RESOLVE_DEPENDENCY(VPU_FIRMWARE_MV0262
ARCHIVE_UNIFIED firmware_mv0262_mdk_R9.8.zip
TARGET_PATH "${TEMP}/vpu/firmware/mv0262"
ENVIRONMENT "VPU_FIRMWARE_MV0262"
FOLDER)
debug_message(STATUS "mv0262=" ${VPU_FIRMWARE_MV0262})
RESOLVE_DEPENDENCY(VPU_FIRMWARE_MA2X8X
ARCHIVE_UNIFIED firmware_ma2x8x_mdk_R9.8.zip
TARGET_PATH "${TEMP}/vpu/firmware/ma2x8x"
ENVIRONMENT "VPU_FIRMWARE_MA2X8X"
FOLDER)
debug_message(STATUS "ma2x8x=" ${VPU_FIRMWARE_MA2X8X})
#
# CMake variables to override default firmware files
#
foreach(soc IN LISTS VPU_SUPPORTED_SOC)
string(TOUPPER "${soc}" soc_upper)
set(var_name VPU_FIRMWARE_${soc_upper}_FILE)
find_file(${var_name} MvNCAPI-${soc}.mvcmd "${VPU_FIRMWARE_${soc_upper}}/mvnc")
if(NOT ${var_name})
message(FATAL_ERROR "[VPU] Missing ${soc} firmware")
endif()
endforeach()
#
# `vpu_copy_firmware` CMake target
#
foreach(soc IN LISTS VPU_SUPPORTED_SOC)
string(TOUPPER "${soc}" soc_upper)
set(var_name VPU_FIRMWARE_${soc_upper}_FILE)
set(firmware_out_file "${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/MvNCAPI-${soc}.mvcmd")
list(APPEND all_firmware_files ${firmware_out_file})
add_custom_command(
OUTPUT ${firmware_out_file}
COMMAND
${CMAKE_COMMAND} -E copy ${${var_name}} ${firmware_out_file}
MAIN_DEPENDENCY ${${var_name}}
COMMENT "[VPU] Copy ${${var_name}} to ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}"
VERBATIM)
endforeach()
add_custom_target(vpu_copy_firmware
DEPENDS ${all_firmware_files}
COMMENT "[VPU] Copy firmware files")


@@ -5,17 +5,15 @@ cmake_minimum_required (VERSION 3.3)
project (ie_python_api)
set (CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_CURRENT_LIST_DIR}/cmake)
if (CMAKE_SYSTEM_PROCESSOR STREQUAL "armv7l")
set (ARCH armv7l)
elseif ("${CMAKE_SIZEOF_VOID_P}" EQUAL "8")
set (ARCH intel64)
else()
set (ARCH ia32)
string(TOLOWER ${CMAKE_SYSTEM_PROCESSOR} ARCH)
if(ARCH STREQUAL "x86_64" OR ARCH STREQUAL "amd64") # Windows detects Intel's 64-bit CPU as AMD64
set(ARCH intel64)
elseif(ARCH STREQUAL "i386")
set(ARCH ia32)
endif()
# in case of independent python api build (out of Inference Engine root Cmake)
if (NOT(IE_MAIN_SOURCE_DIR))
if (NOT DEFINED IE_MAIN_SOURCE_DIR)
if("${CMAKE_BUILD_TYPE}" STREQUAL "")
message(STATUS "CMAKE_BUILD_TYPE not defined, 'Release' will be used")
set(CMAKE_BUILD_TYPE "Release")
@@ -26,6 +24,11 @@ if (NOT(IE_MAIN_SOURCE_DIR))
if(NOT(WIN32))
set (CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/${CMAKE_BUILD_TYPE})
endif()
else()
if (UNIX OR APPLE)
# cython generated files require public visibility; force default visibility
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -fvisibility=default")
endif()
endif()
include (UseCython)
@@ -42,8 +45,12 @@ else()
set (PYTHON_BRIDGE_OUTPUT_DIRECTORY ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/python_api/${PYTHON_VERSION}/openvino)
endif()
find_package (InferenceEngine REQUIRED)
if(DEFINED IE_MAIN_SOURCE_DIR)
find_package(InferenceEngine REQUIRED)
else()
find_package(InferenceEngineDeveloperPackage REQUIRED)
endif()
set (PYTHON_BRIDGE_SRC_ROOT ${CMAKE_CURRENT_SOURCE_DIR})
add_subdirectory (src/openvino/inference_engine)
add_subdirectory (src/openvino/inference_engine/dnn_builder)
add_subdirectory (src/openvino/tools/statistics_collector)


@@ -1,4 +1,4 @@
# Copyright (c) 2016 Intel Corporation
# Copyright (C) 2018-2019 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.


@@ -46,7 +46,7 @@
#
# See also FindCython.cmake
# Copyright (c) 2016 Intel Corporation
# Copyright (C) 2018-2019 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

File diff suppressed because it is too large


@@ -1,3 +1,4 @@
opencv-python
numpy
cython
cython
progress


@@ -1,81 +0,0 @@
# Benchmark Application Demo
This topic demonstrates how to run the Benchmark Application demo, which performs inference using convolutional networks.
## How It Works
> **NOTE:** To achieve benchmark results similar to the official published results, set CPU frequency to 2.9GHz and GPU frequency to 1GHz.
Upon start-up, the application reads command-line parameters and loads a network and images to the Inference Engine plugin. The number of infer requests and the execution approach depend on the mode defined with the `-api` command-line parameter.
### Synchronous API
For synchronous mode, the primary metric is latency. The application creates one infer request and executes the `Infer` method. The number of executions is defined by one of two values:
* Number of iterations defined with the `-niter` command-line argument
* A predefined duration if `-niter` is skipped; the predefined duration depends on the device.
During the execution, the application collects two types of metrics:
* Latency for each infer request executed with `Infer` method
* Duration of all executions
The reported latency is calculated as the median of all collected latencies. The reported throughput is derived from the latency and additionally depends on the batch size.
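The synchronous measurement loop described above can be sketched as follows. This is a minimal illustration, not the application's actual code: `infer` is a stand-in callable for a real Inference Engine synchronous call such as `exe_network.infer(...)`.

```python
from statistics import median
from time import perf_counter


def measure_sync(infer, n_iter, batch_size=1):
    """Run `infer` n_iter times; return (latency in seconds, throughput in FPS).

    Latency is the median of per-call durations; throughput is derived
    from the latency and the batch size.
    """
    times = []
    for _ in range(n_iter):
        t0 = perf_counter()
        infer()
        times.append(perf_counter() - t0)
    latency = median(times)
    return latency, batch_size / latency
```

With a real model, `infer` would be something like `lambda: exe_network.infer(input_images)`.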
### Asynchronous API
For asynchronous mode, the primary metric is throughput in frames per second (FPS). The application creates a certain number of infer requests and executes the `StartAsync` method. The number of infer requests is specified with the `-nireq` command-line parameter. The number of executions is defined by one of two values:
* Number of iterations defined with the `-niter` command-line argument
* A predefined duration if `-niter` is skipped; the predefined duration depends on the device.
The infer requests are executed asynchronously. The `Wait` method is used to wait for a previous execution to complete. The application measures all infer request executions and reports the throughput metric based on the batch size and the total execution duration.
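The rotation of in-flight infer requests described above can be sketched as an index schedule. This is a minimal illustration under stated assumptions: `nireq` plays the role of the `-nireq` value, and the returned pairs say which request index is started and which earlier one is waited on at each step (`None` while the pipeline is still filling).

```python
def async_schedule(nireq, steps):
    """Return (start_idx, wait_idx) pairs for a pipelined request rotation.

    On each step, request `current` is started and the request started
    `nireq` steps earlier is waited on; the wait index is None until
    nireq requests are in flight.
    """
    pairs = []
    current = 0
    previous = 1 - nireq  # negative until the pipeline is full
    for _ in range(steps):
        pairs.append((current, previous if previous >= 0 else None))
        current = (current + 1) % nireq
        previous += 1
        if previous >= nireq:
            previous = 0
    return pairs
```

Throughput is then `batch_size * steps / total_duration`, measured over the whole schedule rather than per request.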
## Running
Running the application with the `-h` or `--help` option yields the following usage message:
```
python3 benchmark_app.py -h
benchmark_app [OPTION]
Options:
-h, --help Print a usage message
-i, --path_to_images "<path>" Required. Path to a folder with images or to image files.
-m, --path_to_model "<path>" Required. Path to an .xml file with a trained model.
-pp "<path>" Path to a plugin folder.
-api, --api_type "<sync/async>" Required. Enable using sync/async API.
-d, --target_device "<device>" Specify a target device to infer on: CPU, GPU, FPGA or MYRIAD. Use "-d HETERO:<comma separated devices list>" format to specify HETERO plugin. The application looks for a suitable plugin for the specified device.
-niter, --number_iterations "<integer>" Optional. Number of iterations. If not specified, the number of iterations is calculated depending on a device.
-nireq, --number_infer_requests "<integer>" Optional. Number of infer requests (default value is 2).
-l, --path_to_extension "<absolute_path>" Required for CPU custom layers. Absolute path to a shared library with the kernels implementations.
Or
-c, --path_to_cldnn_config "<absolute_path>" Required for GPU custom kernels. Absolute path to an .xml file with the kernels description.
-b, --batch_size "<integer>" Optional. Batch size value. If not specified, the batch size value is determined from IR.
-nthreads, --number_threads "<integer>" Number of threads to use for inference on the CPU (including Hetero cases).
-pin {YES,NO}, --infer_threads_pinning {YES,NO} Optional. Enable ("YES" is default value) or disable ("NO") CPU threads pinning for CPU-involved inference.
```
Running the application with the empty list of options yields the usage message given above and an error message.
To run the demo, you can use one-layer public models or one-layer pre-trained and optimized models delivered with the package that support images as input.
For example, to do inference on an image using a trained network with multiple outputs on CPU, run the following command:
```
python3 benchmark_app.py -i <path_to_image>/inputImage.bmp -m <path_to_model>/multiple-output.xml -d CPU
```
> **NOTE**: Public models should be first converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
## Demo Output
The application output depends on the API used. For synchronous API, the application outputs latency and throughput:
```
[ INFO ] Start inference synchronously (10 s duration)
[BENCHMARK RESULT] Latency is 15.5520 msec
[BENCHMARK RESULT] Throughput is 1286.0082 FPS
```
For asynchronous API, the application outputs only throughput:
```
[ INFO ] Start inference asynchronously (10 s duration, 8 inference requests in parallel)
[BENCHMARK RESULT] Throughput is 1444.2591 FPS
```
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)


@@ -1,204 +0,0 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from statistics import median
from openvino.inference_engine import IENetwork, IEPlugin
from utils.benchmark_utils import *
def main(args=None):
try:
if args is None:
args = parse_args()
validate_args(args)
# --------------------------------- 1. Load Plugin for inference engine ---------------------------------
logging.info("Loading plugin")
plugin = IEPlugin(args.target_device)
config = dict()
if CPU_DEVICE_NAME in args.target_device:
if args.path_to_extension:
plugin.add_cpu_extension(args.path_to_extension)
# limit threading for CPU portion of inference
if args.number_threads is not None:
config.update({'CPU_THREADS_NUM': str(args.number_threads)})
# pin threads for CPU portion of inference
config.update({'CPU_BIND_THREAD': args.infer_threads_pinning})
# for pure CPU execution, more throughput-oriented execution via streams
if args.api_type == 'async' and CPU_DEVICE_NAME in args.target_device:
config.update({'CPU_THROUGHPUT_STREAMS': str(args.number_infer_requests)})
elif GPU_DEVICE_NAME in args.target_device:
if args.path_to_cldnn_config:
config.update({'CONFIG_FILE': args.path_to_cldnn_config})
logger.info("GPU extensions are loaded {}".format(args.path_to_cldnn_config))
elif MYRIAD_DEVICE_NAME in args.target_device:
config.update({'LOG_LEVEL': 'LOG_INFO'})
config.update({'VPU_LOG_LEVEL': 'LOG_INFO'})
plugin.set_config(config)
logger.info("Device is {}".format(plugin.device))
logger.info("Plugin version is {}".format(plugin.version))
# --------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ---------------------
logger.info("Loading network files")
xml_filename = os.path.abspath(args.path_to_model)
head, tail = os.path.splitext(xml_filename)
bin_filename = os.path.abspath(head + BIN_EXTENSION)
ie_network = IENetwork(xml_filename, bin_filename)
input_info = ie_network.inputs
if len(input_info) == 0:
raise AttributeError('No inputs info is provided')
elif len(input_info) != 1:
raise AttributeError("Only networks with one input layer are supported")
# -------------------------------------- 3. Change network batch_size -------------------------------------
batch_size = ie_network.batch_size
key = list(input_info.keys()).pop()
precision = input_info[key].precision
if args.batch_size and args.batch_size != ie_network.batch_size:
# deepcopy input_info
shape = input_info[key].shape
# We support models having only one input layers
if input_info[key].layout != LAYOUT_TYPE:
raise Exception('Unsupported model for batch size changing in automatic mode')
shape[BATCH_SIZE_ELEM] = args.batch_size
ie_network.reshape({key: shape})
input_info = ie_network.inputs
batch_size = args.batch_size
logger_message = "Network batch size was changed to: " if args.batch_size is not None else "Network batch size: "
logger_message += " {}, precision: {}".format(batch_size, precision)
logger.info(logger_message)
# ------------------------------------- 4. Loading model to the plugin -------------------------------------
logger.info("Loading model to the plugin")
exe_network = plugin.load(ie_network, args.number_infer_requests)
# ------------------------------------ 5. Performance measurements stuff -----------------------------------
inputs = get_images(os.path.abspath(args.path_to_images), batch_size)
if batch_size < len(inputs):
logger.warn("Network batch size {} is less than images count {}"
", some input files will be ignored".format(batch_size, len(inputs)))
input_images = {key: fill_blob_with_image(inputs, input_info[key].shape)}
times = list()
duration = 0
if args.number_iterations is None:
duration = get_duration_in_secs(args.target_device)
if args.api_type == 'sync':
# warming up - out of scope
exe_network.infer(input_images)
if args.number_iterations is not None:
logger.info(
"Start inference synchronously ({} sync inference executions)".format(args.number_iterations))
for iteration in range(args.number_iterations):
sync_infer_request(exe_network, times, input_images)
else:
logger.info("Start inference synchronously ({} s duration)".format(duration))
start_time = datetime.now()
current_time = start_time
while (current_time - start_time).total_seconds() < duration:
current_time = sync_infer_request(exe_network, times, input_images)
times.sort()
latency = median(times)
fps = batch_size / latency
print("[BENCHMARK RESULT] Latency is {:.4f} msec".format(latency * 1e3))
print("[BENCHMARK RESULT] Throughput is {:.4f} FPS".format(fps))
else:
infer_requests = exe_network.requests
if args.number_iterations is not None:
logger.info("Start inference asynchronously ({} async inference executions, "
"{} inference requests in parallel)".format(args.number_iterations,
args.number_infer_requests))
else:
logger.info("Start inference asynchronously ({} s duration, "
"{} inference requests in parallel)".format(duration, args.number_infer_requests))
current_inference = 0
required_inference_requests_were_executed = False
previous_inference = 1 - args.number_infer_requests
step = 0
steps_count = args.number_infer_requests - 1
if args.number_iterations is not None:
steps_count += args.number_iterations
# warming up - out of scope
infer_requests[0].async_infer(input_images)
infer_requests[0].wait()
start_time = datetime.now()
while not required_inference_requests_were_executed or step < steps_count or \
args.number_iterations is None and (datetime.now() - start_time).total_seconds() < duration:
exe_network.start_async(current_inference, input_images)
if previous_inference >= 0:
status = infer_requests[previous_inference].wait()
if status != 0:
raise Exception("Infer request not completed successfully")
current_inference += 1
if current_inference >= args.number_infer_requests:
current_inference = 0
required_inference_requests_were_executed = True
previous_inference += 1
if previous_inference >= args.number_infer_requests:
previous_inference = 0
step += 1
# wait the latest inference executions
for not_completed_index in range(args.number_infer_requests):
if infer_requests[not_completed_index].wait(0) != 0:
infer_requests[not_completed_index].wait()
total_duration = (datetime.now() - start_time).total_seconds()
fps = batch_size * step / total_duration
print("[BENCHMARK RESULT] Throughput is {:.4f} FPS".format(fps))
del exe_network
del plugin
except Exception as e:
logging.exception(e)
if __name__ == "__main__":
main()


@@ -1,122 +0,0 @@
"""
Copyright (c) 2018 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import logging
import argparse
import os
import cv2
import numpy as np
import sys
from glob import glob
from random import choice
from datetime import datetime
from fnmatch import fnmatch
from . constants import *
logging.basicConfig(format="[ %(levelname)s ] %(message)s", level=logging.INFO, stream=sys.stdout)
logger = logging.getLogger('BenchmarkApp')
def validate_args(args):
if args.number_iterations is not None and args.number_iterations < 0:
raise Exception("Number of iterations should be positive (invalid -niter option value)")
if args.number_infer_requests < 0:
raise Exception("Number of inference requests should be positive (invalid -nireq option value)")
if not fnmatch(args.path_to_model, XML_EXTENSION_PATTERN):
raise Exception('Path {} is not an xml file.'.format(args.path_to_model))
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--path_to_images', type=str, required=True, help=HELP_MESSAGES['IMAGE_MESSAGE'])
parser.add_argument('-m', '--path_to_model', type=str, required=True, help=HELP_MESSAGES['MODEL_MESSAGE'])
parser.add_argument('-c', '--path_to_cldnn_config', type=str, required=False,
help=HELP_MESSAGES['CUSTOM_GPU_LIBRARY_MESSAGE'])
parser.add_argument('-l', '--path_to_extension', type=str, required=False, default=None,
help=HELP_MESSAGES['CUSTOM_CPU_LIBRARY_MESSAGE'])
parser.add_argument('-api', '--api_type', type=str, required=False, default='async', choices=['sync', 'async'],
help=HELP_MESSAGES['API_MESSAGE'])
parser.add_argument('-d', '--target_device', type=str, required=False, default="CPU",
help=HELP_MESSAGES['TARGET_DEVICE_MESSAGE'])
parser.add_argument('-niter', '--number_iterations', type=int, required=False, default=None,
help=HELP_MESSAGES['ITERATIONS_COUNT_MESSAGE'])
parser.add_argument('-nireq', '--number_infer_requests', type=int, required=False, default=2,
help=HELP_MESSAGES['INFER_REQUESTS_COUNT_MESSAGE'])
parser.add_argument('-nthreads', '--number_threads', type=int, required=False, default=None,
help=HELP_MESSAGES['INFER_NUM_THREADS_MESSAGE'])
parser.add_argument('-b', '--batch_size', type=int, required=False, default=None,
help=HELP_MESSAGES['BATCH_SIZE_MESSAGE'])
parser.add_argument('-pin', '--infer_threads_pinning', type=str, required=False, default='YES',
choices=['YES', 'NO'], help=HELP_MESSAGES['INFER_THREADS_PINNING_MESSAGE'])
return parser.parse_args()
def get_images(path_to_images, batch_size):
images = list()
if os.path.isfile(path_to_images):
while len(images) != batch_size:
images.append(path_to_images)
else:
path = os.path.join(path_to_images, '*')
files = glob(path, recursive=True)
for file in files:
file_extension = file.rsplit('.').pop().upper()
if file_extension in IMAGE_EXTENSIONS:
images.append(file)
if len(images) == 0:
raise Exception("No images found in {}".format(path_to_images))
if len(images) < batch_size:
while len(images) != batch_size:
images.append(choice(images))
return images
def get_duration_in_secs(target_device):
duration = 0
for device in DEVICE_DURATION_IN_SECS:
if device in target_device:
duration = max(duration, DEVICE_DURATION_IN_SECS[device])
if duration == 0:
duration = DEVICE_DURATION_IN_SECS[UNKNOWN_DEVICE_TYPE]
logger.warn("Default duration {} seconds for unknown device {} is used".format(duration, target_device))
return duration
def fill_blob_with_image(images_path, shape):
images = np.ndarray(shape)
for item in range(shape[0]):
image = cv2.imread(images_path[item])
new_im_size = tuple(shape[2:])
if image.shape[:-1] != new_im_size:
logger.warn("Image {} is resized from ({}) to ({})".format(images_path[item], image.shape[:-1], new_im_size))
image = cv2.resize(image, new_im_size)
image = image.transpose((2, 0, 1))
images[item] = image
return images
def sync_infer_request(exe_network, times, images):
iteration_start_time = datetime.now()
exe_network.infer(images)
current_time = datetime.now()
times.append((current_time - iteration_start_time).total_seconds())
return current_time


@@ -1,63 +0,0 @@
"""
Copyright (c) 2018 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
HELP_MESSAGES = {
'IMAGE_MESSAGE': "Path to a folder with images or to image files.",
'MULTI_INPUT_MESSAGE': "Path to a multi-input file.",
'MODEL_MESSAGE': "Path to an .xml file with a trained model.",
'PLUGIN_PATH_MESSAGE': "Path to a plugin folder.",
'API_MESSAGE': "Enable using sync/async API. Default value is async",
'TARGET_DEVICE_MESSAGE': "Specify a target device to infer on: CPU, GPU, FPGA or MYRIAD. "
"Use \"-d HETERO:<comma separated devices list>\" format to specify HETERO plugin. "
"The application looks for a suitable plugin for the specified device.",
'ITERATIONS_COUNT_MESSAGE': "Number of iterations. "
"If not specified, the number of iterations is calculated depending on a device.",
'INFER_REQUESTS_COUNT_MESSAGE': "Number of infer requests (default value is 2).",
'INFER_NUM_THREADS_MESSAGE': "Number of threads to use for inference on the CPU "
"(including Hetero cases).",
'CUSTOM_CPU_LIBRARY_MESSAGE': "Required for CPU custom layers. "
"Absolute path to a shared library with the kernels implementations.",
'CUSTOM_GPU_LIBRARY_MESSAGE': "Required for GPU custom kernels. Absolute path to an .xml file with the kernels description.",
'BATCH_SIZE_MESSAGE': "Optional. Batch size value. If not specified, the batch size value is determined from IR",
'INFER_THREADS_PINNING_MESSAGE': "Optional. Enable (\"YES\" is default value) or disable (\"NO\") "
"CPU threads pinning for CPU-involved inference."
}
DEVICE_DURATION_IN_SECS = {
"CPU": 60,
"GPU": 60,
"VPU": 60,
"MYRIAD": 60,
"FPGA": 120,
"HDDL": 60,
"UNKNOWN": 120
}
IMAGE_EXTENSIONS = ['JPEG', 'JPG', 'PNG', 'BMP']
MYRIAD_DEVICE_NAME = "MYRIAD"
CPU_DEVICE_NAME = "CPU"
GPU_DEVICE_NAME = "GPU"
UNKNOWN_DEVICE_TYPE = "UNKNOWN"
BATCH_SIZE_ELEM = 0
LAYOUT_TYPE = 'NCHW'
XML_EXTENSION = ".xml"
BIN_EXTENSION = ".bin"
XML_EXTENSION_PATTERN = '*' + XML_EXTENSION


@@ -0,0 +1,71 @@
# Image Classification Python* Sample
This topic demonstrates how to run the Image Classification sample application, which performs
inference using image classification networks such as AlexNet and GoogLeNet.
## How It Works
Upon start-up, the sample application reads command-line parameters and loads a network and an image to the Inference
Engine plugin. When inference is done, the application outputs the top inference results to the standard output stream.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
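Instead of reconverting the model, the channel rearrangement mentioned in the note can also be done on the input at runtime. A minimal sketch with NumPy follows; `reverse_input_channels` is a hypothetical helper name, not part of the sample.

```python
import numpy as np


def reverse_input_channels(image):
    """Swap the channel order of an HWC image (BGR <-> RGB).

    Runtime equivalent of rearranging channels for a model that was
    trained with the opposite channel order.
    """
    return image[:, :, ::-1].copy()
```

For an image loaded with `cv2.imread` (BGR), this yields the RGB layout an RGB-trained model expects.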
## Running
Running the application with the `-h` option yields the usage message:
```
python3 classification_sample.py -h
```
The command yields the following usage message:
```
usage: classification_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION]
[-d DEVICE] [--labels LABELS] [-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml file with a trained model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to a folder with images or path to an
image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. MKLDNN (CPU)-targeted custom layers.
Absolute path to a shared library with the kernels
implementations.
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU
--labels LABELS Optional. Path to a labels mapping file
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results
```
Running the application with the empty list of options yields the usage message given above.
To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download the pre-trained models with the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
For example, to perform inference of an AlexNet model (previously converted to the Inference Engine format) on CPU, use the following command:
```
python3 classification_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml
```
## Sample Output
By default, the application outputs the top-10 inference results.
Add the `-nt` option to the previous command to modify the number of top output results.
For example, to get the top-5 results on GPU, run the following command:
```
python3 classification_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d GPU
```
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)
* [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md)
* [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader)


@@ -1,6 +1,6 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,31 +17,34 @@
from __future__ import print_function
import sys
import os
from argparse import ArgumentParser
from argparse import ArgumentParser, SUPPRESS
import cv2
import numpy as np
import logging as log
from time import time
from openvino.inference_engine import IENetwork, IEPlugin
from openvino.inference_engine import IENetwork, IECore
def build_argparser():
parser = ArgumentParser()
parser.add_argument("-m", "--model", help="Path to an .xml file with a trained model.", required=True, type=str)
parser.add_argument("-i", "--input", help="Path to a folder with images or path to an image files", required=True,
type=str, nargs="+")
parser.add_argument("-l", "--cpu_extension",
help="MKLDNN (CPU)-targeted custom layers.Absolute path to a shared library with the kernels "
"impl.", type=str, default=None)
parser.add_argument("-pp", "--plugin_dir", help="Path to a plugin folder", type=str, default=None)
parser.add_argument("-d", "--device",
help="Specify the target device to infer on; CPU, GPU, FPGA or MYRIAD is acceptable. Sample "
"will look for a suitable plugin for device specified (CPU by default)", default="CPU",
type=str)
parser.add_argument("--labels", help="Labels mapping file", default=None, type=str)
parser.add_argument("-nt", "--number_top", help="Number of top results", default=10, type=int)
parser.add_argument("-ni", "--number_iter", help="Number of inference iterations", default=1, type=int)
parser.add_argument("-pc", "--perf_counts", help="Report performance counters", default=False, action="store_true")
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.", required=True,
type=str)
args.add_argument("-i", "--input", help="Required. Path to a folder with images or path to an image files",
required=True,
type=str, nargs="+")
args.add_argument("-l", "--cpu_extension",
help="Optional. Required for CPU custom layers. "
"MKLDNN (CPU)-targeted custom layers. Absolute path to a shared library with the"
" kernels implementations.", type=str, default=None)
args.add_argument("-d", "--device",
help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL, MYRIAD or HETERO: is "
"acceptable. The sample will look for a suitable plugin for device specified. Default "
"value is CPU",
default="CPU", type=str)
args.add_argument("--labels", help="Optional. Path to a labels mapping file", default=None, type=str)
args.add_argument("-nt", "--number_top", help="Optional. Number of top results", default=10, type=int)
return parser
@@ -53,19 +56,20 @@ def main():
model_bin = os.path.splitext(model_xml)[0] + ".bin"
# Plugin initialization for specified device and load extensions library if specified
plugin = IEPlugin(device=args.device, plugin_dirs=args.plugin_dir)
log.info("Creating Inference Engine")
ie = IECore()
if args.cpu_extension and 'CPU' in args.device:
plugin.add_cpu_extension(args.cpu_extension)
ie.add_extension(args.cpu_extension, "CPU")
# Read IR
log.info("Loading network files:\n\t{}\n\t{}".format(model_xml, model_bin))
net = IENetwork(model=model_xml, weights=model_bin)
if plugin.device == "CPU":
supported_layers = plugin.get_supported_layers(net)
if "CPU" in args.device:
supported_layers = ie.query_network(net, "CPU")
not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
if len(not_supported_layers) != 0:
log.error("Following layers are not supported by the plugin for specified device {}:\n {}".
format(plugin.device, ', '.join(not_supported_layers)))
format(args.device, ', '.join(not_supported_layers)))
log.error("Please try to specify cpu extensions library path in sample's command line parameters using -l "
"or --cpu_extension command line argument")
sys.exit(1)
@@ -92,24 +96,11 @@ def main():
# Loading model to the plugin
log.info("Loading model to the plugin")
exec_net = plugin.load(network=net)
del net
exec_net = ie.load_network(network=net, device_name=args.device)
# Start sync inference
log.info("Starting inference ({} iterations)".format(args.number_iter))
infer_time = []
for i in range(args.number_iter):
t0 = time()
res = exec_net.infer(inputs={input_blob: images})
infer_time.append((time()-t0)*1000)
log.info("Average running time of one iteration: {} ms".format(np.average(np.asarray(infer_time))))
if args.perf_counts:
perf_counts = exec_net.requests[0].get_perf_counts()
log.info("Performance counters:")
print("{:<70} {:<15} {:<15} {:<15} {:<10}".format('name', 'layer_type', 'exet_type', 'status', 'real_time, us'))
for layer, stats in perf_counts.items():
print("{:<70} {:<15} {:<15} {:<15} {:<10}".format(layer, stats['layer_type'], stats['exec_type'],
stats['status'], stats['real_time']))
log.info("Starting inference in synchronous mode")
res = exec_net.infer(inputs={input_blob: images})
# Processing output blob
log.info("Processing output blob")
@@ -120,18 +111,25 @@ def main():
labels_map = [x.split(sep=' ', maxsplit=1)[-1].strip() for x in f]
else:
labels_map = None
classid_str = "classid"
probability_str = "probability"
for i, probs in enumerate(res):
probs = np.squeeze(probs)
top_ind = np.argsort(probs)[-args.number_top:][::-1]
print("Image {}\n".format(args.input[i]))
print(classid_str, probability_str)
print("{} {}".format('-' * len(classid_str), '-' * len(probability_str)))
for id in top_ind:
det_label = labels_map[id] if labels_map else "#{}".format(id)
print("{:.7f} label {}".format(probs[id], det_label))
det_label = labels_map[id] if labels_map else "{}".format(id)
label_length = len(det_label)
space_num_before = (len(classid_str) - label_length) // 2
space_num_after = len(classid_str) - (space_num_before + label_length) + 2
space_num_before_prob = (len(probability_str) - len(str(probs[id]))) // 2
print("{}{}{}{}{:.7f}".format(' ' * space_num_before, det_label,
' ' * space_num_after, ' ' * space_num_before_prob,
probs[id]))
print("\n")
del exec_net
del plugin
log.info("This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n")
if __name__ == '__main__':
sys.exit(main() or 0)


@@ -1,136 +0,0 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import print_function
import sys
import os
from argparse import ArgumentParser
import cv2
import numpy as np
import logging as log
from time import time
from openvino.inference_engine import IENetwork, IEPlugin
def build_argparser():
parser = ArgumentParser()
parser.add_argument("-m", "--model", help="Path to an .xml file with a trained model.", required=True, type=str)
parser.add_argument("-i", "--input", help="Path to a folder with images or paths to image files", required=True,
type=str, nargs="+")
parser.add_argument("-l", "--cpu_extension",
help="MKLDNN (CPU)-targeted custom layers. Absolute path to a shared library with the kernels "
"impl.", type=str, default=None)
parser.add_argument("-pp", "--plugin_dir", help="Path to a plugin folder", type=str, default=None)
parser.add_argument("-d", "--device",
help="Specify the target device to infer on; CPU, GPU, FPGA or MYRIAD is acceptable. Sample "
"will look for a suitable plugin for device specified (CPU by default)", default="CPU",
type=str)
parser.add_argument("--labels", help="Labels mapping file", default=None, type=str)
parser.add_argument("-nt", "--number_top", help="Number of top results", default=10, type=int)
parser.add_argument("-ni", "--number_iter", help="Number of inference iterations", default=1, type=int)
parser.add_argument("-pc", "--perf_counts", help="Report performance counters", default=False, action="store_true")
return parser
def main():
log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=sys.stdout)
args = build_argparser().parse_args()
model_xml = args.model
model_bin = os.path.splitext(model_xml)[0] + ".bin"
# Plugin initialization for specified device and load extensions library if specified
plugin = IEPlugin(device=args.device, plugin_dirs=args.plugin_dir)
if args.cpu_extension and 'CPU' in args.device:
plugin.add_cpu_extension(args.cpu_extension)
# Read IR
log.info("Loading network files:\n\t{}\n\t{}".format(model_xml, model_bin))
net = IENetwork(model=model_xml, weights=model_bin)
if plugin.device == "CPU":
supported_layers = plugin.get_supported_layers(net)
not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
if len(not_supported_layers) != 0:
log.error("Following layers are not supported by the plugin for specified device {}:\n {}".
format(plugin.device, ', '.join(not_supported_layers)))
log.error("Please try to specify cpu extensions library path in sample's command line parameters using -l "
"or --cpu_extension command line argument")
sys.exit(1)
assert len(net.inputs.keys()) == 1, "Sample supports only single input topologies"
assert len(net.outputs) == 1, "Sample supports only single output topologies"
log.info("Preparing input blobs")
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
net.batch_size = len(args.input)
# Read and pre-process input images
n, c, h, w = net.inputs[input_blob].shape
images = np.ndarray(shape=(n, c, h, w))
for i in range(n):
image = cv2.imread(args.input[i])
if image.shape[:-1] != (h, w):
log.warning("Image {} is resized from {} to {}".format(args.input[i], image.shape[:-1], (h, w)))
image = cv2.resize(image, (w, h))
image = image.transpose((2, 0, 1)) # Change data layout from HWC to CHW
images[i] = image
log.info("Batch size is {}".format(n))
# Loading model to the plugin
log.info("Loading model to the plugin")
exec_net = plugin.load(network=net)
del net
# Start sync inference
log.info("Starting inference ({} iterations)".format(args.number_iter))
infer_time = []
for i in range(args.number_iter):
t0 = time()
infer_request_handle = exec_net.start_async(request_id=0, inputs={input_blob: images})
infer_request_handle.wait()
infer_time.append((time() - t0) * 1000)
log.info("Average running time of one iteration: {} ms".format(np.average(np.asarray(infer_time))))
if args.perf_counts:
perf_counts = infer_request_handle.get_perf_counts()
log.info("Performance counters:")
print("{:<70} {:<15} {:<15} {:<15} {:<10}".format('name', 'layer_type', 'exec_type', 'status', 'real_time, us'))
for layer, stats in perf_counts.items():
print("{:<70} {:<15} {:<15} {:<15} {:<10}".format(layer, stats['layer_type'], stats['exec_type'],
stats['status'], stats['real_time']))
# Processing output blob
log.info("Processing output blob")
res = infer_request_handle.outputs[out_blob]
log.info("Top {} results: ".format(args.number_top))
if args.labels:
with open(args.labels, 'r') as f:
labels_map = [x.split(sep=' ', maxsplit=1)[-1].strip() for x in f]
else:
labels_map = None
for i, probs in enumerate(res):
probs = np.squeeze(probs)
top_ind = np.argsort(probs)[-args.number_top:][::-1]
print("Image {}\n".format(args.input[i]))
for id in top_ind:
det_label = labels_map[id] if labels_map else "#{}".format(id)
print("{:.7f} {}".format(probs[id], det_label))
print("\n")
del exec_net
del plugin
if __name__ == '__main__':
sys.exit(main() or 0)


@@ -0,0 +1,78 @@
# Image Classification Python* Sample Async
This sample demonstrates how to run the Image Classification sample application with inference executed in the asynchronous mode.
The sample demonstrates how to use the new Infer Request API of Inference Engine in applications.
Refer to [Integrate the Inference Engine New Request API with Your Application](./docs/IE_DG/Integrate_with_customer_application_new_API.md) for details.
The sample demonstrates how to build and execute an inference request 10 times in asynchronous mode, using classification networks as an example.
Asynchronous mode can increase the overall image throughput.
Batch mode is independent of asynchronous mode: asynchronous execution works efficiently with any batch size.
## How It Works
Upon start-up, the sample application reads command-line parameters and loads the specified network and input images (or a
folder with images) to the Inference Engine plugin. The batch size of the network is set according to the number of images read.
Then, the sample creates an inference request object and assigns a completion callback to it. Within the completion callback
handler, the inference request is executed again.
After that, the application starts inference for the first infer request and waits for the 10th inference request execution to complete.
When inference is done, the application outputs data to the standard output stream.
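The callback-driven loop described above can be sketched without the Inference Engine API at all. The `RepeatingJob` class below is a hypothetical stand-in in which a plain Python thread plays the role of the device: the callback re-submits the work until the last iteration, while the main thread blocks on a condition variable.

```python
import threading

class RepeatingJob:
    """Re-submits itself from its completion callback, num_iter times."""
    def __init__(self, num_iter):
        self.num_iter = num_iter
        self.cur_iter = 0
        self.cv = threading.Condition()

    def _submit(self):
        # Stand-in for an asynchronous infer call; here the "device" is a thread
        threading.Thread(target=self._callback).start()

    def _callback(self):
        with self.cv:
            self.cur_iter += 1
            if self.cur_iter < self.num_iter:
                self._submit()    # repeat the request from within the callback
            else:
                self.cv.notify()  # last iteration completed: wake the waiter

    def run(self):
        with self.cv:
            self._submit()
            self.cv.wait()        # block until the last callback fires
        return self.cur_iter

print(RepeatingJob(10).run())  # → 10
```

The sample's `InferReqWrap` helper follows the same wait/notify structure, with the real `request.async_infer()` call in place of the thread.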
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
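If reconverting the model is not an option, the channel order can also be rearranged in the application itself. A minimal NumPy sketch (the image values are illustrative):

```python
import numpy as np

# A dummy 2x2 image in BGR channel order (HWC layout), as produced by cv2.imread
bgr_image = np.array([[[255, 0, 0], [0, 255, 0]],
                      [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Reversing the last (channel) axis swaps B and R, giving RGB order
rgb_image = bgr_image[:, :, ::-1]

print(rgb_image[0, 0])  # B and R of the first pixel are now swapped
```

This is a view, not a copy; call `.copy()` if a contiguous buffer is required downstream.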
## Running
To see the usage message, run the application with the <code>-h</code> option:
```
python3 classification_sample_async.py -h
```
The command yields the following usage message:
```
usage: classification_sample_async.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION]
[-d DEVICE] [--labels LABELS]
[-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Required. Path to an .xml file with a trained model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
Required. Path to a folder with images or paths to
image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. Absolute
path to a shared library with the kernels
implementations.
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
will look for a suitable plugin for device specified.
Default value is CPU
--labels LABELS Optional. Labels mapping file
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results
```
Running the application with the empty list of options yields the usage message given above and an error message.
To run the sample, you can use AlexNet and GoogLeNet or other image classification models. You can download the pre-trained models with the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
You can run inference on an image using a trained AlexNet network on FPGA with a fallback to CPU using the following command:
```
python3 classification_sample_async.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d HETERO:FPGA,CPU
```
## Sample Output
By default, the application outputs top-10 inference results for each infer request.
It also reports a throughput value measured in frames per second.
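The reported throughput can be reproduced from per-iteration latencies. A small sketch with illustrative numbers (not actual sample output):

```python
# Hypothetical per-iteration latencies in milliseconds for a batch of 4 images
latencies_ms = [52.0, 48.0, 50.0, 51.0, 49.0]
batch_size = 4

avg_latency_ms = sum(latencies_ms) / len(latencies_ms)  # mean iteration time
throughput_fps = batch_size * 1000.0 / avg_latency_ms   # frames per second

print(avg_latency_ms, throughput_fps)
```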
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)


@@ -0,0 +1,184 @@
#!/usr/bin/env python
"""
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import print_function
import sys
import os
from argparse import ArgumentParser, SUPPRESS
import cv2
import numpy as np
import logging as log
from time import time
from openvino.inference_engine import IENetwork, IECore
import threading
class InferReqWrap:
def __init__(self, request, id, num_iter):
self.id = id
self.request = request
self.num_iter = num_iter
self.cur_iter = 0
self.cv = threading.Condition()
self.request.set_completion_callback(self.callback, self.id)
def callback(self, statusCode, userdata):
if (userdata != self.id):
log.error("Request ID {} does not correspond to user data {}".format(self.id, userdata))
elif statusCode != 0:
log.error("Request {} failed with status code {}".format(self.id, statusCode))
self.cur_iter += 1
log.info("Completed {} Async request execution".format(self.cur_iter))
if self.cur_iter < self.num_iter:
# here a user can read output containing inference results and put new input
# to repeat async request again
self.request.async_infer(self.input)
else:
# continue sample execution after last Asynchronous inference request execution
self.cv.acquire()
self.cv.notify()
self.cv.release()
def execute(self, mode, input_data):
if (mode == "async"):
log.info("Start inference ({} Asynchronous executions)".format(self.num_iter))
self.input = input_data
# Start async request for the first time. Wait all repetitions of the async request
self.request.async_infer(input_data)
self.cv.acquire()
self.cv.wait()
self.cv.release()
elif (mode == "sync"):
log.info("Start inference ({} Synchronous executions)".format(self.num_iter))
for self.cur_iter in range(self.num_iter):
# here we start inference synchronously and wait for
# last inference request execution
self.request.infer(input_data)
log.info("Completed {} Sync request execution".format(self.cur_iter + 1))
else:
log.error("Wrong inference mode is chosen. Please use \"sync\" or \"async\" mode")
sys.exit(1)
def build_argparser():
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.",
required=True, type=str)
args.add_argument("-i", "--input", help="Required. Path to a folder with images or paths to image files",
required=True, type=str, nargs="+")
args.add_argument("-l", "--cpu_extension",
help="Optional. Required for CPU custom layers. Absolute path to a shared library with the"
" kernels implementations.", type=str, default=None)
args.add_argument("-d", "--device",
help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL or MYRIAD is "
"acceptable. The sample will look for a suitable plugin for device specified. Default value is CPU",
default="CPU", type=str)
args.add_argument("--labels", help="Optional. Labels mapping file", default=None, type=str)
args.add_argument("-nt", "--number_top", help="Optional. Number of top results", default=10, type=int)
return parser
def main():
log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=sys.stdout)
args = build_argparser().parse_args()
model_xml = args.model
model_bin = os.path.splitext(model_xml)[0] + ".bin"
# Plugin initialization for specified device and load extensions library if specified
log.info("Creating Inference Engine")
ie = IECore()
if args.cpu_extension and 'CPU' in args.device:
ie.add_extension(args.cpu_extension, "CPU")
# Read IR
log.info("Loading network files:\n\t{}\n\t{}".format(model_xml, model_bin))
net = IENetwork(model=model_xml, weights=model_bin)
if "CPU" in args.device:
supported_layers = ie.query_network(net, "CPU")
not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
if len(not_supported_layers) != 0:
log.error("Following layers are not supported by the plugin for specified device {}:\n {}".
format(args.device, ', '.join(not_supported_layers)))
log.error("Please try to specify cpu extensions library path in sample's command line parameters using -l "
"or --cpu_extension command line argument")
sys.exit(1)
assert len(net.inputs.keys()) == 1, "Sample supports only single input topologies"
assert len(net.outputs) == 1, "Sample supports only single output topologies"
log.info("Preparing input blobs")
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
net.batch_size = len(args.input)
# Read and pre-process input images
n, c, h, w = net.inputs[input_blob].shape
images = np.ndarray(shape=(n, c, h, w))
for i in range(n):
image = cv2.imread(args.input[i])
if image.shape[:-1] != (h, w):
log.warning("Image {} is resized from {} to {}".format(args.input[i], image.shape[:-1], (h, w)))
image = cv2.resize(image, (w, h))
image = image.transpose((2, 0, 1)) # Change data layout from HWC to CHW
images[i] = image
log.info("Batch size is {}".format(n))
# Loading model to the plugin
log.info("Loading model to the plugin")
exec_net = ie.load_network(network=net, device_name=args.device)
# create one inference request for asynchronous execution
request_id = 0
infer_request = exec_net.requests[request_id]
num_iter = 10
request_wrap = InferReqWrap(infer_request, request_id, num_iter)
# Start inference request execution. Wait for last execution being completed
request_wrap.execute("sync", {input_blob: images})
# Processing output blob
log.info("Processing output blob")
res = infer_request.outputs[out_blob]
log.info("Top {} results: ".format(args.number_top))
if args.labels:
with open(args.labels, 'r') as f:
labels_map = [x.split(sep=' ', maxsplit=1)[-1].strip() for x in f]
else:
labels_map = None
classid_str = "classid"
probability_str = "probability"
for i, probs in enumerate(res):
probs = np.squeeze(probs)
top_ind = np.argsort(probs)[-args.number_top:][::-1]
print("Image {}\n".format(args.input[i]))
print(classid_str, probability_str)
print("{} {}".format('-' * len(classid_str), '-' * len(probability_str)))
for id in top_ind:
det_label = labels_map[id] if labels_map else "{}".format(id)
label_length = len(det_label)
space_num_before = (7 - label_length) // 2
space_num_after = 7 - (space_num_before + label_length) + 2
space_num_before_prob = (11 - len(str(probs[id]))) // 2
print("{}{}{}{}{:.7f}".format(' ' * space_num_before, det_label,
' ' * space_num_after, ' ' * space_num_before_prob,
probs[id]))
print("\n")
log.info("This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n")
if __name__ == '__main__':
sys.exit(main() or 0)


@@ -1,49 +0,0 @@
# This README demonstrates use of all GreenGrass samples
# GreenGrass Classification Sample
This topic demonstrates how to build and run the GreenGrass Image Classification sample application, which does inference using image classification networks like AlexNet and GoogLeNet on Intel® Processors, Intel® HD Graphics and Intel® FPGA.
## Running
1. Modify the "accelerator" parameter inside the sample to deploy the sample on any accelerator option of your choice (CPU/GPU/FPGA)
For CPU, please specify "CPU"
For GPU, please specify "GPU"
For FPGA, please specify "HETERO:FPGA,CPU"
2. Enable the option(s) on how output is displayed/consumed
3. Now follow the instructions listed in the Greengrass-FaaS-User-Guide.pdf to create the lambda and deploy on edge device using Greengrass
### Outputs
The application publishes top-10 results on AWS IoT Cloud every second by default. For other output consumption options, please refer to Greengrass-FaaS-User-Guide.pdf
### How it works
Upon deployment, the sample application loads a network and an image to the Inference Engine plugin. When inference is done, the application publishes results to AWS IoT Cloud
=====================================================================================================
# GreenGrass Object Detection Sample SSD
This topic demonstrates how to run the GreenGrass Object Detection SSD sample application, which does inference using object detection networks like Squeezenet-SSD on Intel® Processors, Intel® HD Graphics and Intel® FPGA.
## Running
1. Modify the "accelerator" parameter inside the sample to deploy the sample on any accelerator option of your choice (CPU/GPU/FPGA)
For CPU, please specify "CPU"
For GPU, please specify "GPU"
For FPGA, please specify "HETERO:FPGA,CPU"
2. Enable the option(s) on how output is displayed/consumed
3. Set the variable is_async_mode to 'True' for Asynchronous execution and 'False' for Synchronous execution
4. Now follow the instructions listed in the Greengrass-FaaS-User-Guide.pdf to create the lambda and deploy on edge device using Greengrass
### Outputs
The application publishes detection outputs such as class label, class confidence, and bounding box coordinates on AWS IoT Cloud every second. For other output consumption options, please refer to Greengrass-FaaS-User-Guide.pdf
### How it works
Upon deployment, the sample application loads a network and an image to the Inference Engine plugin. When inference is done, the application publishes results to AWS IoT Cloud


@@ -1,180 +0,0 @@
"""
BSD 3-clause "New" or "Revised" license
Copyright (C) 2018 Intel Corporation.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import sys
import os
import cv2
import numpy as np
import greengrasssdk
import boto3
import timeit
import datetime
import json
from collections import OrderedDict
from openvino.inference_engine import IENetwork, IEPlugin
# Specify the delta in seconds between each report
reporting_interval = 1.0
# Parameters for IoT Cloud
enable_iot_cloud_output = True
# Parameters for Kinesis
enable_kinesis_output = False
kinesis_stream_name = ""
kinesis_partition_key = ""
kinesis_region = ""
# Parameters for S3
enable_s3_jpeg_output = False
s3_bucket_name = ""
# Parameters for jpeg output on local disk
enable_local_jpeg_output = False
# Create a Greengrass Core SDK client for publishing messages to AWS Cloud
client = greengrasssdk.client("iot-data")
# Create an S3 client for uploading files to S3
if enable_s3_jpeg_output:
s3_client = boto3.client("s3")
# Create a Kinesis client for putting records to streams
if enable_kinesis_output:
kinesis_client = boto3.client("kinesis", "us-west-2")
# Read environment variables set by Lambda function configuration
PARAM_MODEL_XML = os.environ.get("PARAM_MODEL_XML")
PARAM_INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE")
PARAM_DEVICE = os.environ.get("PARAM_DEVICE")
PARAM_OUTPUT_DIRECTORY = os.environ.get("PARAM_OUTPUT_DIRECTORY")
PARAM_CPU_EXTENSION_PATH = os.environ.get("PARAM_CPU_EXTENSION_PATH")
PARAM_LABELMAP_FILE = os.environ.get("PARAM_LABELMAP_FILE")
PARAM_TOPIC_NAME = os.environ.get("PARAM_TOPIC_NAME", "intel/faas/classification")
PARAM_NUM_TOP_RESULTS = int(os.environ.get("PARAM_NUM_TOP_RESULTS", "10"))
def report(res_json, frame):
now = datetime.datetime.now()
date_prefix = str(now).replace(" ", "_")
if enable_iot_cloud_output:
data = json.dumps(res_json)
client.publish(topic=PARAM_TOPIC_NAME, payload=data)
if enable_kinesis_output:
kinesis_client.put_record(StreamName=kinesis_stream_name, Data=json.dumps(res_json),
PartitionKey=kinesis_partition_key)
if enable_s3_jpeg_output:
temp_image = os.path.join(PARAM_OUTPUT_DIRECTORY, "inference_result.jpeg")
cv2.imwrite(temp_image, frame)
with open(temp_image, "rb") as file:
image_contents = file.read()
s3_client.put_object(Body=image_contents, Bucket=s3_bucket_name, Key=date_prefix + ".jpeg")
if enable_local_jpeg_output:
cv2.imwrite(os.path.join(PARAM_OUTPUT_DIRECTORY, date_prefix + ".jpeg"), frame)
def greengrass_classification_sample_run():
client.publish(topic=PARAM_TOPIC_NAME, payload="OpenVINO: Initializing...")
model_bin = os.path.splitext(PARAM_MODEL_XML)[0] + ".bin"
# Plugin initialization for specified device and load extensions library if specified
plugin = IEPlugin(device=PARAM_DEVICE, plugin_dirs="")
if "CPU" in PARAM_DEVICE:
plugin.add_cpu_extension(PARAM_CPU_EXTENSION_PATH)
# Read IR
net = IENetwork(model=PARAM_MODEL_XML, weights=model_bin)
assert len(net.inputs.keys()) == 1, "Sample supports only single input topologies"
assert len(net.outputs) == 1, "Sample supports only single output topologies"
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
# Read and pre-process input image
n, c, h, w = net.inputs[input_blob]
cap = cv2.VideoCapture(PARAM_INPUT_SOURCE)
exec_net = plugin.load(network=net)
del net
client.publish(topic=PARAM_TOPIC_NAME, payload="Starting inference on %s" % PARAM_INPUT_SOURCE)
start_time = timeit.default_timer()
inf_seconds = 0.0
frame_count = 0
res_json = []
labeldata = None
if PARAM_LABELMAP_FILE is not None:
with open(PARAM_LABELMAP_FILE) as labelmap_file:
labeldata = json.load(labelmap_file)
while (cap.isOpened()):
ret, frame = cap.read()
if not ret:
break
frameid = cap.get(cv2.CAP_PROP_POS_FRAMES)
initial_w = cap.get(3)
initial_h = cap.get(4)
in_frame = cv2.resize(frame, (w, h))
in_frame = in_frame.transpose((2, 0, 1)) # Change data layout from HWC to CHW
in_frame = in_frame.reshape((n, c, h, w))
# Start synchronous inference
inf_start_time = timeit.default_timer()
res = exec_net.infer(inputs={input_blob: in_frame})
inf_seconds += timeit.default_timer() - inf_start_time
top_ind = np.argsort(res[out_blob], axis=1)[0, -PARAM_NUM_TOP_RESULTS:][::-1]
# Parse detection results of the current request
res_json = OrderedDict()
res_json["Candidates"] = OrderedDict()
frame_timestamp = datetime.datetime.now()
for i in top_ind:
classlabel = labeldata[str(i)] if labeldata else str(i)
res_json["Candidates"][classlabel] = round(res[out_blob][0, i], 2)
frame_count += 1
# Measure elapsed seconds since the last report
seconds_elapsed = timeit.default_timer() - start_time
if seconds_elapsed >= reporting_interval:
res_json["timestamp"] = frame_timestamp.isoformat()
res_json["frame_id"] = int(frameid)
res_json["inference_fps"] = frame_count / inf_seconds
start_time = timeit.default_timer()
report(res_json, frame)
frame_count = 0
inf_seconds = 0.0
client.publish(topic=PARAM_TOPIC_NAME, payload="End of the input, exiting...")
del exec_net
del plugin
greengrass_classification_sample_run()
def function_handler(event, context):
client.publish(topic=PARAM_TOPIC_NAME, payload='HANDLER_CALLED!')
return


@@ -1,184 +0,0 @@
"""
BSD 3-clause "New" or "Revised" license
Copyright (C) 2018 Intel Corporation.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import sys
import os
import cv2
import numpy as np
import greengrasssdk
import boto3
import timeit
import datetime
import json
from collections import OrderedDict
from openvino.inference_engine import IENetwork, IEPlugin
# Specify the delta in seconds between each report
reporting_interval = 1.0
# Parameters for IoT Cloud
enable_iot_cloud_output = True
# Parameters for Kinesis
enable_kinesis_output = False
kinesis_stream_name = ""
kinesis_partition_key = ""
kinesis_region = ""
# Parameters for S3
enable_s3_jpeg_output = False
s3_bucket_name = "ssd_test"
# Parameters for jpeg output on local disk
enable_local_jpeg_output = False
# Create a Greengrass Core SDK client for publishing messages to AWS Cloud
client = greengrasssdk.client("iot-data")
# Create an S3 client for uploading files to S3
if enable_s3_jpeg_output:
s3_client = boto3.client("s3")
# Create a Kinesis client for putting records to streams
if enable_kinesis_output:
kinesis_client = boto3.client("kinesis", "us-west-2")
# Read environment variables set by Lambda function configuration
PARAM_MODEL_XML = os.environ.get("PARAM_MODEL_XML")
PARAM_INPUT_SOURCE = os.environ.get("PARAM_INPUT_SOURCE")
PARAM_DEVICE = os.environ.get("PARAM_DEVICE")
PARAM_OUTPUT_DIRECTORY = os.environ.get("PARAM_OUTPUT_DIRECTORY")
PARAM_CPU_EXTENSION_PATH = os.environ.get("PARAM_CPU_EXTENSION_PATH")
PARAM_LABELMAP_FILE = os.environ.get("PARAM_LABELMAP_FILE")
PARAM_TOPIC_NAME = os.environ.get("PARAM_TOPIC_NAME", "intel/faas/ssd")
def report(res_json, frame):
now = datetime.datetime.now()
date_prefix = str(now).replace(" ", "_")
if enable_iot_cloud_output:
data = json.dumps(res_json)
client.publish(topic=PARAM_TOPIC_NAME, payload=data)
if enable_kinesis_output:
kinesis_client.put_record(StreamName=kinesis_stream_name, Data=json.dumps(res_json),
PartitionKey=kinesis_partition_key)
if enable_s3_jpeg_output:
temp_image = os.path.join(PARAM_OUTPUT_DIRECTORY, "inference_result.jpeg")
cv2.imwrite(temp_image, frame)
with open(temp_image, "rb") as file:
image_contents = file.read()
s3_client.put_object(Body=image_contents, Bucket=s3_bucket_name, Key=date_prefix + ".jpeg")
if enable_local_jpeg_output:
cv2.imwrite(os.path.join(PARAM_OUTPUT_DIRECTORY, date_prefix + ".jpeg"), frame)
def greengrass_object_detection_sample_ssd_run():
client.publish(topic=PARAM_TOPIC_NAME, payload="OpenVINO: Initializing...")
model_bin = os.path.splitext(PARAM_MODEL_XML)[0] + ".bin"
# Plugin initialization for specified device and load extensions library if specified
plugin = IEPlugin(device=PARAM_DEVICE, plugin_dirs="")
if "CPU" in PARAM_DEVICE:
plugin.add_cpu_extension(PARAM_CPU_EXTENSION_PATH)
# Read IR
net = IENetwork(model=PARAM_MODEL_XML, weights=model_bin)
assert len(net.inputs.keys()) == 1, "Sample supports only single input topologies"
assert len(net.outputs) == 1, "Sample supports only single output topologies"
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
# Read and pre-process input image
n, c, h, w = net.inputs[input_blob]
cap = cv2.VideoCapture(PARAM_INPUT_SOURCE)
exec_net = plugin.load(network=net)
del net
client.publish(topic=PARAM_TOPIC_NAME, payload="Starting inference on %s" % PARAM_INPUT_SOURCE)
start_time = timeit.default_timer()
inf_seconds = 0.0
frame_count = 0
labeldata = None
if PARAM_LABELMAP_FILE is not None:
with open(PARAM_LABELMAP_FILE) as labelmap_file:
labeldata = json.load(labelmap_file)
while (cap.isOpened()):
ret, frame = cap.read()
if not ret:
break
frameid = cap.get(cv2.CAP_PROP_POS_FRAMES)
initial_w = cap.get(3)
initial_h = cap.get(4)
in_frame = cv2.resize(frame, (w, h))
in_frame = in_frame.transpose((2, 0, 1)) # Change data layout from HWC to CHW
in_frame = in_frame.reshape((n, c, h, w))
# Start synchronous inference
inf_start_time = timeit.default_timer()
res = exec_net.infer(inputs={input_blob: in_frame})
inf_seconds += timeit.default_timer() - inf_start_time
# Parse detection results of the current request
res_json = OrderedDict()
frame_timestamp = datetime.datetime.now()
object_id = 0
for obj in res[out_blob][0][0]:
if obj[2] > 0.5:
xmin = int(obj[3] * initial_w)
ymin = int(obj[4] * initial_h)
xmax = int(obj[5] * initial_w)
ymax = int(obj[6] * initial_h)
cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (255, 165, 20), 4)
obj_id = "Object" + str(object_id)
classlabel = labeldata[str(int(obj[1]))] if labeldata else ""
res_json[obj_id] = {"label": int(obj[1]), "class": classlabel, "confidence": round(obj[2], 2), "xmin": round(
obj[3], 2), "ymin": round(obj[4], 2), "xmax": round(obj[5], 2), "ymax": round(obj[6], 2)}
object_id += 1
frame_count += 1
# Measure elapsed seconds since the last report
seconds_elapsed = timeit.default_timer() - start_time
if seconds_elapsed >= reporting_interval:
res_json["timestamp"] = frame_timestamp.isoformat()
res_json["frame_id"] = int(frameid)
res_json["inference_fps"] = frame_count / inf_seconds
start_time = timeit.default_timer()
report(res_json, frame)
frame_count = 0
inf_seconds = 0.0
client.publish(topic=PARAM_TOPIC_NAME, payload="End of the input, exiting...")
del exec_net
del plugin
greengrass_object_detection_sample_ssd_run()
def function_handler(event, context):
client.publish(topic=PARAM_TOPIC_NAME, payload='HANDLER_CALLED!')
return


@@ -0,0 +1,50 @@
# Hello Query Device Python* Sample
This topic demonstrates how to run the Hello Query Device sample application, which queries Inference Engine
devices and prints their metrics and default configuration values. The sample shows
how to use the Query Device API feature.
## How It Works
The sample queries all available Inference Engine devices and prints their supported metrics and plugin configuration parameters.
## Running
The sample has no command-line parameters. To see the report, run the following command:
```
python3 hello_query_device.py
```
## Sample Output
The application prints all available devices with their supported metrics and default values for configuration parameters. For example:
```
Available devices:
Device: CPU
Metrics:
AVAILABLE_DEVICES: 0
SUPPORTED_METRICS: AVAILABLE_DEVICES, SUPPORTED_METRICS, FULL_DEVICE_NAME, OPTIMIZATION_CAPABILITIES, SUPPORTED_CONFIG_KEYS, RANGE_FOR_ASYNC_INFER_REQUESTS, RANGE_FOR_STREAMS
FULL_DEVICE_NAME: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
OPTIMIZATION_CAPABILITIES: WINOGRAD, FP32, INT8, BIN
SUPPORTED_CONFIG_KEYS: CPU_BIND_THREAD, CPU_THREADS_NUM, CPU_THROUGHPUT_STREAMS, DUMP_EXEC_GRAPH_AS_DOT, DYN_BATCH_ENABLED, DYN_BATCH_LIMIT, EXCLUSIVE_ASYNC_REQUESTS, PERF_COUNT, RANGE_FOR_ASYNC_INFER_REQUESTS, RANGE_FOR_STREAMS
RANGE_FOR_ASYNC_INFER_REQUESTS: 0, 6, 1
RANGE_FOR_STREAMS: 1, 12
Default values for device configuration keys:
CPU_BIND_THREAD: YES
CPU_THREADS_NUM: 0
CPU_THROUGHPUT_STREAMS: 1
DUMP_EXEC_GRAPH_AS_DOT:
DYN_BATCH_ENABLED: NO
DYN_BATCH_LIMIT: 0
EXCLUSIVE_ASYNC_REQUESTS: NO
PERF_COUNT: NO
RANGE_FOR_ASYNC_INFER_REQUESTS: 1
RANGE_FOR_STREAMS: 6
```
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)


@@ -0,0 +1,40 @@
import sys
from openvino.inference_engine import IECore
def param_to_string(metric):
if isinstance(metric, (list, tuple)):
return ", ".join([str(val) for val in metric])
elif isinstance(metric, dict):
str_param_repr = ""
for k, v in metric.items():
str_param_repr += "{}: {}\n".format(k, v)
return str_param_repr
else:
return str(metric)
def main():
ie = IECore()
print("Available devices:")
for device in ie.available_devices:
print("\tDevice: {}".format(device))
print("\tMetrics:")
for metric in ie.get_metric(device, "SUPPORTED_METRICS"):
try:
metric_val = ie.get_metric(device, metric)
print("\t\t{}: {}".format(metric, param_to_string(metric_val)))
except TypeError:
print("\t\t{}: UNSUPPORTED TYPE".format(metric))
print("\n\tDefault values for device configuration keys:")
for cfg in ie.get_metric(device, "SUPPORTED_CONFIG_KEYS"):
try:
cfg_val = ie.get_config(device, cfg)
print("\t\t{}: {}".format(cfg, param_to_string(cfg_val)))
except TypeError:
print("\t\t{}: UNSUPPORTED TYPE".format(cfg))
if __name__ == '__main__':
sys.exit(main() or 0)

File diff suppressed because it is too large


@@ -1,463 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This notebook demonstrates the workflow of a simple image classification task.\n",
"We will go through all the pipeline steps: downloading the model, generating the Intermediate Representation (IR) using the Model Optimizer, running inference in Python, and parsing and interpreting the output results.\n",
"\n",
"To demonstrate the scenario, we will use the pre-trained SqueezeNet V1.1 Caffe\\* model. SqueezeNet is an accurate and at the same time lightweight network. For more information about the model, visit the <a href=\"https://github.com/DeepScale/SqueezeNet/\">GitHub</a> page and refer to the original <a href=\"https://arxiv.org/abs/1602.07360\">SqueezeNet paper</a>.\n",
"\n",
"Follow the steps to perform image classification with the SqueezeNet V1.1 model:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**1. Download the model files:** "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"echo \"Downloading deploy.prototxt ...\"\n",
"if [ -f deploy.prototxt ]; then \n",
" echo \"deploy.prototxt file already exists. Download skipped\"\n",
"else\n",
" wget https://raw.githubusercontent.com/DeepScale/SqueezeNet/a47b6f13d30985279789d08053d37013d67d131b/SqueezeNet_v1.1/deploy.prototxt -q\n",
" echo \"Finished!\"\n",
"fi"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"echo \"Downloading squeezenet_v1.1.caffemodel ...\"\n",
"if [ -f squeezenet_v1.1.caffemodel ]; then\n",
" echo \"squeezenet_v1.1.caffemodel file already exists. Download skipped\"\n",
"else\n",
" wget https://github.com/DeepScale/SqueezeNet/raw/a47b6f13d30985279789d08053d37013d67d131b/SqueezeNet_v1.1/squeezenet_v1.1.caffemodel -q\n",
" echo \"Finished!\"\n",
"fi"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Run the following command to see the model files:**"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls -la"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"* `deploy.prototxt` contains the network topology description in text format. \n",
"* `squeezenet_v1.1.caffemodel` contains the weights for all network layers."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**2. Optimize and convert the model from the initial Caffe representation to the IR, which is required for scoring the model using the Inference Engine. To convert and optimize the model, use the Model Optimizer command line tool.**\n",
"\n",
"To locate Model Optimizer scripts, specify the path to the Model Optimizer root directory in the `MO_ROOT` variable in the cell below and then run it (if you use the installed OpenVINO&trade; package, you can find the Model Optimizer in `<INSTALLATION_ROOT_DIR>/deployment_tools/model_optimizer`)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%%bash\n",
"MO_ROOT=/localdisk/repos/model-optimizer-tensorflow/\n",
"echo $MO_ROOT\n",
"python3 $MO_ROOT/mo.py --input_model squeezenet_v1.1.caffemodel --input_proto deploy.prototxt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**3. Now, you have the SqueezeNet model converted to the IR, and you can infer it.**\n",
"\n",
"a. First, import required modules:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from openvino.inference_engine import IENetwork, IEPlugin\n",
"import numpy as np\n",
"import cv2\n",
"import logging as log\n",
"from time import time\n",
"import sys\n",
"import glob\n",
"import os\n",
"from matplotlib import pyplot as plt\n",
"%matplotlib inline"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"b. Initialize required constants:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Configure logging format\n",
"log.basicConfig(format=\"[ %(levelname)s ] %(message)s\", level=log.INFO, stream=sys.stdout)\n",
"\n",
"# Path to IR model files\n",
"MODEL_XML = \"./squeezenet_v1.1.xml\"\n",
"MODEL_BIN = \"./squeezenet_v1.1.bin\"\n",
"\n",
"# Target device to run inference\n",
"TARGET_DEVICE = \"CPU\"\n",
"\n",
"# Folder with input images for the model\n",
"IMAGES_FOLDER = \"./images\"\n",
"\n",
"# File containing information about classes names \n",
"LABELS_FILE = \"./image_net_synset.txt\"\n",
"\n",
"# Number of top prediction results to parse\n",
"NTOP = 5\n",
"\n",
"# Required batch size - number of images which will be processed in parallel\n",
"BATCH = 4"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"c. Create a plugin instance for the specified target device \n",
"d. Read the IR files and create an `IENetwork` instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plugin = IEPlugin(TARGET_DEVICE)\n",
"net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"e. Set the network batch size to the constant specified above. \n",
"\n",
"Batch size is the amount of input data that will be inferred in parallel. In this case, it is the number of images that will be classified in parallel. \n",
"\n",
"You can set the network batch size using one of the following options:\n",
"1. On the IR generation stage, run the Model Optimizer with `-b` command line option. For example, to generate the IR with batch size equal to 4, add `-b 4` to Model Optimizer command line options. By default, it takes the batch size from the original network in framework representation (usually, it is equal to 1, but in this case, the original Caffe model is provided with the batch size equal to 10). \n",
"2. Use Inference Engine after reading IR. We will use this option.\n",
"\n",
"To set the batch size with the Inference Engine:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"log.info(\"Current network batch size is {}, will be changed to {}\".format(net.batch_size, BATCH))\n",
"net.batch_size = BATCH"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"f. After setting the batch size, you can get the required information about the network input layers.\n",
"To preprocess input images, you need to know the input layer shape.\n",
"\n",
"The `inputs` property of `IENetwork` returns a dictionary with input layer names and `InputInfo` objects, which contain information about an input layer, including its shape.\n",
"\n",
"SqueezeNet is a single-input topology, so to get the input layer name and its shape, you can take the first item from the `inputs` dictionary:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_layer = next(iter(net.inputs))\n",
"n,c,h,w = net.inputs[input_layer].shape\n",
"layout = net.inputs[input_layer].layout\n",
"log.info(\"Network input layer {} has shape {} and layout {}\".format(input_layer, (n,c,h,w), layout))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"So what do the shape and layout mean? \n",
"The layout helps to interpret the meaning of the shape dimensions. \n",
"\n",
"The `NCHW` input layer layout means:\n",
"* the first dimension of the input data is a batch of **N** images processed in parallel \n",
"* the second dimension is the number of **C**hannels expected in the input images\n",
"* the third and the fourth are spatial dimensions - the **H**eight and **W**idth of an input image\n",
"\n",
"Our shape means that the network expects four 3-channel images of size 227x227 processed in parallel."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"g. Read and preprocess input images.\n",
"\n",
"To do this, go to `IMAGES_FOLDER`, find all `.bmp` files, and take the first four images for inference:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"search_pattern = os.path.join(IMAGES_FOLDER, \"*.bmp\")\n",
"images = glob.glob(search_pattern)[:BATCH]\n",
"log.info(\"Input images:\\n {}\".format(\"\\n\".join(images)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now you can read and preprocess the image files and create an array with input blob data.\n",
"\n",
"For preprocessing, you must do the following:\n",
"1. Resize the images to fit the HxW input dimensions.\n",
"2. Transpose the data from the HWC layout to CHW.\n",
"\n",
"Transposing is tricky and not really obvious.\n",
"As you already saw above, the network has the `NCHW` layout, so each input image should be in the `CHW` format. But by default, OpenCV\\* reads images in the `HWC` format. That is why you have to swap the axes using the `numpy.transpose()` function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_data = np.ndarray(shape=(n, c, h, w))\n",
"orig_images = [] # Will be used to show image in notebook\n",
"for i, img in enumerate(images):\n",
" image = cv2.imread(img)\n",
" orig_images.append(image)\n",
" if image.shape[:-1] != (h, w):\n",
" log.warning(\"Image {} is resized from {} to {}\".format(img, image.shape[:-1], (h, w)))\n",
" image = cv2.resize(image, (w, h))\n",
" image = image.transpose((2, 0, 1)) # Change data layout from HWC to CHW\n",
" input_data[i] = image"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"i. Infer the model to classify input images:\n",
"\n",
"1. Load the `IENetwork` object to the plugin to create an `ExecutableNetwork` object. \n",
"2. Start inference using the `infer()` function, specifying a dictionary with the input layer name and the prepared data as an argument. \n",
"3. Measure the inference time in milliseconds and calculate the throughput metric in frames per second (FPS)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"exec_net = plugin.load(net)\n",
"t0 = time()\n",
"res_map = exec_net.infer({input_layer: input_data})\n",
"inf_time = (time() - t0) * 1000 \n",
"fps = BATCH * 1000 / inf_time\n",
"log.info(\"Inference time: {} ms.\".format(inf_time))\n",
"log.info(\"Throughput: {} fps.\".format(fps))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**4. After the inference, you need to parse and interpret the inference results.**\n",
"\n",
"First, you need to see the shape of the network output layer. It can be done in a similar way as for the inputs, but here you need to use the `outputs` property of the `IENetwork` object:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"output_layer = next(iter(net.outputs))\n",
"n,c,h,w = net.outputs[output_layer].shape\n",
"layout = net.outputs[output_layer].layout\n",
"log.info(\"Network output layer {} has shape {} and layout {}\".format(output_layer, (n,c,h,w), layout))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"It is not common for classification networks to have an output layer with the *NCHW* layout. Usually, it is just *NC*. However, in this case, the last two dimensions are just a feature of the network and do not carry much meaning. Ignore them, as you will remove them at the final parsing stage. \n",
"\n",
"What are the first and second dimensions of the output layer? \n",
"* The first dimension is a batch. We processed four images, and the prediction result for a particular image is stored along the first dimension of the output array. For example, the prediction results for the third image are in `res[2]` (since numbering starts from 0).\n",
"* The second dimension is an array with normalized probabilities (from 0 to 1) for each class. This network is trained on the <a href=\"http://image-net.org/index\">ImageNet</a> dataset with 1000 classes. The `n`-th value in the output data for a certain image represents the probability of the image belonging to the `n`-th class. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To parse the output results:\n",
"\n",
"a. Read the `LABELS_FILE`, which maps the class ID to human-readable class names:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open(LABELS_FILE, 'r') as f:\n",
" labels_map = [x.split(sep=' ', maxsplit=1)[-1].strip() for x in f]\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"b. Parse the output array with prediction results. The parsing algorithm is the following:\n",
"1. Squeeze the last two \"extra\" dimensions of the output data.\n",
"2. Iterate over all batches.\n",
"3. Sort the probabilities vector in descending order to get the `NTOP` classes with the highest probabilities (by default, `numpy.argsort` sorts the data in ascending order, but using the array slicing `[::-1]`, you can reverse the order).\n",
"4. Map the `NTOP` probabilities to the corresponding labels in `labels_map`.\n",
"\n",
"For the visualization, you also need to store the top-1 class and probability."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"top1_res = [] # will be used for the visualization\n",
"res = np.squeeze(res_map[output_layer])\n",
"log.info(\"Top {} results: \".format(NTOP))\n",
"for i, probs in enumerate(res):\n",
" top_ind = np.argsort(probs)[-NTOP:][::-1]\n",
" print(\"Image {}\".format(images[i]))\n",
" top1_ind = top_ind[0]\n",
" top1_res.append((labels_map[top1_ind], probs[top1_ind]))\n",
" for id in top_ind:\n",
" print(\"label: {} probability: {:.2f}% \".format(labels_map[id], probs[id] * 100))\n",
" print(\"\\n\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code above prints the results as plain text. \n",
"You can also use OpenCV\\* to visualize the results using the `orig_images` and `top1_res` variables, which you created during images reading and results parsing:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.clf()\n",
"for i, img in enumerate(orig_images):\n",
" label_str = \"{}\".format(top1_res[i][0].split(',')[0])\n",
" prob_str = \"{:.2f}%\".format(top1_res[i][1])\n",
" cv2.putText(img, label_str, (5, 15), cv2.FONT_HERSHEY_COMPLEX, 0.6, (220,100,10), 1)\n",
" cv2.putText(img, prob_str, (5, 35), cv2.FONT_HERSHEY_COMPLEX, 0.6, (220,100,10), 1)\n",
" plt.figure()\n",
" plt.axis(\"off\")\n",
" \n",
" # We have to convert colors, because matplotlib expects an image in RGB color format \n",
"    # but by default, OpenCV reads images in BGR format\n",
" im_to_show = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
" plt.imshow(im_to_show)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
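The top-`NTOP` parsing step in the notebook above hinges on `numpy.argsort` returning indices in ascending order; a standalone sketch with a hypothetical 5-class probability vector:

```python
import numpy as np

# Hypothetical probability vector for a 5-class toy model.
probs = np.array([0.10, 0.50, 0.20, 0.15, 0.05])
NTOP = 3
# argsort is ascending, so take the last NTOP indices and reverse them.
top_ind = np.argsort(probs)[-NTOP:][::-1]
print(top_ind.tolist())  # [1, 2, 3]
```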


@@ -0,0 +1,73 @@
# Object Detection Python* Sample SSD
This sample demonstrates how to run the Object Detection sample application, which uses the new Infer Request API of the Inference Engine.
Refer to [Integrate the Inference Engine New Request API with Your Application](./docs/IE_DG/Integrate_with_customer_application_new_API.md) for details.
The sample shows how to build and execute an inference request, using object detection networks as an example.
Due to the properties of SSD networks, this sample works correctly only with a batch size of 1. For a greater number of images in a batch, a network reshape is required.
## How It Works
Upon start-up, the sample application reads the command line parameters and loads the specified network and input images (or a
folder with images) to the Inference Engine plugin.
Then, the sample creates an inference request object and executes inference on it.
When inference is done, the application outputs data to the standard output stream and creates an output image with bounding boxes drawn atop the initial image.
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
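A minimal sketch of what that channel reordering amounts to on the data side, assuming a NumPy array in HWC layout (the values here are hypothetical, not the sample's actual input):

```python
import numpy as np

# Hypothetical 1x1 "image" with distinct B, G, R values (OpenCV-style BGR, HWC layout).
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)
rgb = bgr[:, :, ::-1]  # reverse the channel axis: BGR -> RGB
print(rgb[0, 0].tolist())  # [30, 20, 10]
```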
## Running
Running the application with the <code>-h</code> option yields the following usage message:
```
python3 object_detection_sample_ssd.py -h
```
The command yields the following usage message:
```
usage: object_detection_sample_ssd.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION]
[-d DEVICE] [--labels LABELS]
[-nt NUMBER_TOP]
Options:
-h, --help Show this help message and exit
-m MODEL, --model MODEL
Required. Path to an .xml file with a trained model
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
                        Required. Path to a folder with images or paths to
                        image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
Optional. Required for CPU custom layers. Absolute
path to a shared library with the kernels
implementations
-d DEVICE, --device DEVICE
Optional. Specify the target device to infer on; CPU,
GPU, FPGA, HDDL or MYRIAD is acceptable. The sample
will look for a suitable plugin for device specified
Default value is CPU
--labels LABELS Optional. Labels mapping file
-nt NUMBER_TOP, --number_top NUMBER_TOP
Optional. Number of top results
```
Running the application with the empty list of options yields the usage message given above and an error message.
To run the sample, you can use RMNet_SSD or other object-detection models. You can download the pre-trained models with the OpenVINO [Model Downloader](https://github.com/opencv/open_model_zoo/tree/2018/model_downloader) or from [https://download.01.org/opencv/](https://download.01.org/opencv/).
> **NOTE**: Before running the sample with a trained model, make sure the model is converted to the Inference Engine format (\*.xml + \*.bin) using the [Model Optimizer tool](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md).
You can do inference of an image using a trained RMNet_SSD network on FPGA with fallback to CPU using the following command:
```
python3 object_detection_sample_ssd.py -i <path_to_image>/cat.bmp -m <path_to_model>/alexnet_fp32.xml -nt 5 -d HETERO:FPGA,CPU
```
## Sample Output
By default, the application outputs all inference results and draws bounding boxes for inference results with an over 50% confidence.
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)
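The SSD `DetectionOutput` blob that this sample parses stores one 7-value row per detection: `[image_id, label, confidence, xmin, ymin, xmax, ymax]`, with box coordinates normalized to `[0, 1]`. A hedged sketch of decoding a single row (the row values and image size below are made up for illustration):

```python
import numpy as np

# Hypothetical DetectionOutput row: [image_id, label, confidence, xmin, ymin, xmax, ymax].
row = np.array([0.0, 15.0, 0.83, 0.1, 0.2, 0.5, 0.6])
iw, ih = 640, 480  # hypothetical original image size
if row[2] > 0.5:   # the sample draws boxes above 50% confidence
    # scale the normalized coordinates back to pixel space
    xmin, ymin = int(row[3] * iw), int(row[4] * ih)
    xmax, ymax = int(row[5] * iw), int(row[6] * ih)
    print(xmin, ymin, xmax, ymax)  # 64 96 320 288
```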


@@ -0,0 +1,189 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import print_function
import sys
import os
from argparse import ArgumentParser, SUPPRESS
import cv2
import numpy as np
import logging as log
from time import time
from openvino.inference_engine import IENetwork, IECore
def build_argparser():
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group("Options")
args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.",
required=True, type=str)
args.add_argument("-i", "--input", help="Required. Path to image file.",
required=True, type=str, nargs="+")
args.add_argument("-l", "--cpu_extension",
help="Optional. Required for CPU custom layers. Absolute path to a shared library with the kernels implementations.",
type=str, default=None)
args.add_argument("-d", "--device",
help="Optional. Specify the target device to infer on; CPU, GPU, FPGA or MYRIAD is acceptable. Sample will look for a suitable plugin for device specified (CPU by default)",
default="CPU", type=str)
args.add_argument("--labels", help="Optional. Labels mapping file", default=None, type=str)
args.add_argument("-nt", "--number_top", help="Optional. Number of top results", default=10, type=int)
return parser
def main():
log.basicConfig(format="[ %(levelname)s ] %(message)s", level=log.INFO, stream=sys.stdout)
args = build_argparser().parse_args()
# --------------------------- 1. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
model_xml = args.model
model_bin = os.path.splitext(model_xml)[0] + ".bin"
log.info("Loading network files:\n\t{}\n\t{}".format(model_xml, model_bin))
net = IENetwork(model=model_xml, weights=model_bin)
# -----------------------------------------------------------------------------------------------------
# ------------- 2. Load Plugin for inference engine and extensions library if specified --------------
log.info("Loading Inference Engine")
ie = IECore()
log.info("Device info:")
versions = ie.get_versions(args.device)
print("{}{}".format(" "*8, args.device))
print("{}MKLDNNPlugin version ......... {}.{}".format(" "*8, versions[args.device].major, versions[args.device].minor))
print("{}Build ........... {}".format(" "*8, versions[args.device].build_number))
if args.cpu_extension and "CPU" in args.device:
ie.add_extension(args.cpu_extension, "CPU")
log.info("CPU extension loaded: {}".format(args.cpu_extension))
if "CPU" in args.device:
supported_layers = ie.query_network(net, "CPU")
not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
if len(not_supported_layers) != 0:
log.error("Following layers are not supported by the plugin for specified device {}:\n {}".
format(args.device, ', '.join(not_supported_layers)))
log.error("Please try to specify cpu extensions library path in sample's command line parameters using -l "
"or --cpu_extension command line argument")
sys.exit(1)
# -----------------------------------------------------------------------------------------------------
# --------------------------- 3. Read and preprocess input --------------------------------------------
input_blob = next(iter(net.inputs))
n, c, h, w = net.inputs[input_blob].shape
images = np.ndarray(shape=(n, c, h, w))
images_hw = []
for i in range(n):
image = cv2.imread(args.input[i])
ih, iw = image.shape[:-1]
images_hw.append((ih, iw))
log.info("File was added: ")
log.info(" {}".format(args.input[i]))
if (ih, iw) != (h, w):
image = cv2.resize(image, (w, h))
log.warning("Image {} is resized from {} to {}".format(args.input[i], image.shape[:-1], (h, w)))
image = image.transpose((2, 0, 1)) # Change data layout from HWC to CHW
images[i] = image
# -----------------------------------------------------------------------------------------------------
# --------------------------- 4. Configure input & output ---------------------------------------------
# --------------------------- Prepare input blobs -----------------------------------------------------
log.info("Preparing input blobs")
assert (len(net.inputs.keys()) == 1 or len(net.inputs.keys()) == 2), "Sample supports topologies only with 1 or 2 inputs"
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
input_name, input_info_name = "", ""
for input_key in net.inputs:
if len(net.inputs[input_key].layout) == 4:
input_name = input_key
log.info("Batch size is {}".format(net.batch_size))
net.inputs[input_key].precision = 'U8'
elif len(net.inputs[input_key].layout) == 2:
input_info_name = input_key
net.inputs[input_key].precision = 'FP32'
if net.inputs[input_key].shape[1] != 3 and net.inputs[input_key].shape[1] != 6 or net.inputs[input_key].shape[0] != 1:
log.error('Invalid input info. Should be 3 or 6 values long.')
# --------------------------- Prepare output blobs ----------------------------------------------------
log.info('Preparing output blobs')
output_name, output_info = "", net.outputs[next(iter(net.outputs.keys()))]
for output_key in net.outputs:
if net.layers[output_key].type == "DetectionOutput":
output_name, output_info = output_key, net.outputs[output_key]
if output_name == "":
log.error("Can't find a DetectionOutput layer in the topology")
output_dims = output_info.shape
if len(output_dims) != 4:
log.error("Incorrect output dimensions for SSD model")
max_proposal_count, object_size = output_dims[2], output_dims[3]
if object_size != 7:
log.error("Output item should have 7 as a last dimension")
output_info.precision = "FP32"
# -----------------------------------------------------------------------------------------------------
# --------------------------- Performing inference ----------------------------------------------------
log.info("Loading model to the device")
exec_net = ie.load_network(network=net, device_name=args.device)
log.info("Creating infer request and starting inference")
res = exec_net.infer(inputs={input_blob: images})
# -----------------------------------------------------------------------------------------------------
# --------------------------- Read and postprocess output ---------------------------------------------
log.info("Processing output blobs")
res = res[out_blob]
boxes, classes = {}, {}
data = res[0][0]
for number, proposal in enumerate(data):
if proposal[2] > 0:
imid = np.int(proposal[0])
ih, iw = images_hw[imid]
label = np.int(proposal[1])
confidence = proposal[2]
xmin = np.int(iw * proposal[3])
ymin = np.int(ih * proposal[4])
xmax = np.int(iw * proposal[5])
ymax = np.int(ih * proposal[6])
print("[{},{}] element, prob = {:.6} ({},{})-({},{}) batch id : {}"\
.format(number, label, confidence, xmin, ymin, xmax, ymax, imid), end="")
if proposal[2] > 0.5:
print(" WILL BE PRINTED!")
if not imid in boxes.keys():
boxes[imid] = []
boxes[imid].append([xmin, ymin, xmax, ymax])
if not imid in classes.keys():
classes[imid] = []
classes[imid].append(label)
else:
print()
for imid in classes:
tmp_image = cv2.imread(args.input[imid])
for box in boxes[imid]:
cv2.rectangle(tmp_image, (box[0], box[1]), (box[2], box[3]), (232, 35, 244), 2)
cv2.imwrite("out.bmp", tmp_image)
log.info("Image out.bmp created!")
# -----------------------------------------------------------------------------------------------------
log.info("Execution successful\n")
log.info("This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool")
if __name__ == '__main__':
sys.exit(main() or 0)


@@ -0,0 +1,68 @@
# Neural Style Transfer Python* Sample
This topic demonstrates how to run the Neural Style Transfer sample application, which performs
inference of style transfer models.
> **NOTE**: The OpenVINO™ toolkit does not include a pre-trained model to run the Neural Style Transfer sample. A public model from the [Zhaw's Neural Style Transfer repository](https://github.com/zhaw/neural_style) can be used. Read the [Converting a Style Transfer Model from MXNet*](./docs/MO_DG/prepare_model/convert_model/mxnet_specific/Convert_Style_Transfer_From_MXNet.md) topic from the [Model Optimizer Developer Guide](./docs/MO_DG/Deep_Learning_Model_Optimizer_DevGuide.md) to learn about how to get the trained model and how to convert it to the Inference Engine format (\*.xml + \*.bin).
## How It Works
> **NOTE**: By default, Inference Engine samples and demos expect input with BGR channels order. If you trained your model to work with RGB order, you need to manually rearrange the default channels order in the sample or demo application or reconvert your model using the Model Optimizer tool with `--reverse_input_channels` argument specified. For more information about the argument, refer to **When to Reverse Input Channels** section of [Converting a Model Using General Conversion Parameters](./docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md).
## Running
Running the application with the <code>-h</code> option yields the following usage message:
```
python3 style_transfer_sample.py --help
```
The command yields the following usage message:
```
usage: style_transfer_sample.py [-h] -m MODEL -i INPUT [INPUT ...]
[-l CPU_EXTENSION] [-d DEVICE]
[-nt NUMBER_TOP]
[--mean_val_r MEAN_VAL_R]
[--mean_val_g MEAN_VAL_G]
[--mean_val_b MEAN_VAL_B]
Options:
-h, --help Show this help message and exit.
-m MODEL, --model MODEL
Path to an .xml file with a trained model.
-i INPUT [INPUT ...], --input INPUT [INPUT ...]
                        Path to a folder with images or paths to image files
-l CPU_EXTENSION, --cpu_extension CPU_EXTENSION
                        Optional. Required for CPU custom layers. Absolute
                        path to a shared library with the kernels
                        implementations
-d DEVICE, --device DEVICE
Specify the target device to infer on; CPU, GPU, FPGA,
HDDL or MYRIAD is acceptable. Sample will look for a
suitable plugin for device specified. Default value is CPU
-nt NUMBER_TOP, --number_top NUMBER_TOP
Number of top results
--mean_val_r MEAN_VAL_R, -mean_val_r MEAN_VAL_R
Mean value of red chanel for mean value subtraction in
postprocessing
--mean_val_g MEAN_VAL_G, -mean_val_g MEAN_VAL_G
Mean value of green chanel for mean value subtraction
in postprocessing
--mean_val_b MEAN_VAL_B, -mean_val_b MEAN_VAL_B
Mean value of blue chanel for mean value subtraction
in postprocessing
```
Running the application with an empty list of options yields the usage message given above and an error message.
To perform inference of an image using a trained model of NST network on Intel® CPUs, use the following command:
```
python3 style_transfer_sample.py -i <path_to_image>/cat.bmp -m <path_to_model>/1_decoder_FP32.xml
```
### Demo Output
The application outputs an image (`out1.bmp`) or a sequence of images (`out1.bmp`, ..., `out<N>.bmp`) redrawn in the style of the style transfer model used for the sample.
## See Also
* [Using Inference Engine Samples](./docs/IE_DG/Samples_Overview.md)

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python
"""
Copyright (c) 2018 Intel Corporation
Copyright (C) 2018-2019 Intel Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -17,40 +17,39 @@
from __future__ import print_function
import sys
import os
from argparse import ArgumentParser
from argparse import ArgumentParser, SUPPRESS
import cv2
import numpy as np
import logging as log
from time import time
from openvino.inference_engine import IENetwork, IEPlugin
from openvino.inference_engine import IENetwork, IECore
def build_argparser():
parser = ArgumentParser()
parser.add_argument("-m", "--model", help="Path to an .xml file with a trained model.", required=True, type=str)
parser.add_argument("-i", "--input", help="Path to a folder with images or path to an image files", required=True,
type=str, nargs="+")
parser.add_argument("-l", "--cpu_extension",
help="MKLDNN (CPU)-targeted custom layers.Absolute path to a shared library with the kernels "
"impl.", type=str, default=None)
parser.add_argument("-pp", "--plugin_dir", help="Path to a plugin folder", type=str, default=None)
parser.add_argument("-d", "--device",
help="Specify the target device to infer on; CPU, GPU, FPGA or MYRIAD is acceptable. Sample "
"will look for a suitable plugin for device specified (CPU by default)", default="CPU",
type=str)
parser.add_argument("-nt", "--number_top", help="Number of top results", default=10, type=int)
parser.add_argument("-ni", "--number_iter", help="Number of inference iterations", default=1, type=int)
parser.add_argument("--mean_val_r", "-mean_val_r",
help="Mean value of red chanel for mean value subtraction in postprocessing ", default=0,
type=float)
parser.add_argument("--mean_val_g", "-mean_val_g",
help="Mean value of green chanel for mean value subtraction in postprocessing ", default=0,
type=float)
parser.add_argument("--mean_val_b", "-mean_val_b",
help="Mean value of blue chanel for mean value subtraction in postprocessing ", default=0,
type=float)
parser.add_argument("-pc", "--perf_counts", help="Report performance counters", default=False, action="store_true")
parser = ArgumentParser(add_help=False)
args = parser.add_argument_group('Options')
args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
args.add_argument("-m", "--model", help="Path to an .xml file with a trained model.", required=True, type=str)
args.add_argument("-i", "--input", help="Path to a folder with images or path to image files", required=True,
type=str, nargs="+")
args.add_argument("-l", "--cpu_extension",
help="Optional. Required for CPU custom layers. "
"Absolute path to a shared library with the MKLDNN (CPU)-targeted custom "
"kernels implementations", type=str, default=None)
args.add_argument("-d", "--device",
help="Specify the target device to infer on; CPU, GPU, FPGA, HDDL or MYRIAD is acceptable. Sample "
"will look for a suitable plugin for device specified. Default value is CPU", default="CPU",
type=str)
args.add_argument("-nt", "--number_top", help="Number of top results", default=10, type=int)
args.add_argument("--mean_val_r", "-mean_val_r",
help="Mean value of red channel for mean value subtraction in postprocessing", default=0,
type=float)
args.add_argument("--mean_val_g", "-mean_val_g",
help="Mean value of green channel for mean value subtraction in postprocessing", default=0,
type=float)
args.add_argument("--mean_val_b", "-mean_val_b",
help="Mean value of blue channel for mean value subtraction in postprocessing", default=0,
type=float)
return parser
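The reworked `build_argparser` disables argparse's automatic help and re-registers `-h/--help` inside a named group, so every option renders under a single `Options:` heading. The pattern in isolation (the `-m` argument here mirrors the sample; other details are illustrative):

```python
from argparse import ArgumentParser, SUPPRESS

# Pattern used by the updated sample: suppress the built-in help option,
# then add -h/--help manually inside an argument group named "Options".
parser = ArgumentParser(add_help=False)
group = parser.add_argument_group('Options')
group.add_argument('-h', '--help', action='help', default=SUPPRESS,
                   help='Show this help message and exit.')
group.add_argument('-m', '--model', required=True, type=str,
                   help='Path to an .xml file with a trained model.')

args = parser.parse_args(['-m', 'model.xml'])
print(args.model)  # model.xml
```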
@@ -61,19 +60,20 @@ def main():
model_bin = os.path.splitext(model_xml)[0] + ".bin"
# Plugin initialization for specified device and load extensions library if specified
plugin = IEPlugin(device=args.device, plugin_dirs=args.plugin_dir)
log.info("Creating Inference Engine")
ie = IECore()
if args.cpu_extension and 'CPU' in args.device:
plugin.add_cpu_extension(args.cpu_extension)
ie.add_extension(args.cpu_extension, "CPU")
# Read IR
log.info("Loading network files:\n\t{}\n\t{}".format(model_xml, model_bin))
net = IENetwork(model=model_xml, weights=model_bin)
if plugin.device == "CPU":
supported_layers = plugin.get_supported_layers(net)
if "CPU" in args.device:
supported_layers = ie.query_network(net, "CPU")
not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
if len(not_supported_layers) != 0:
log.error("Following layers are not supported by the plugin for specified device {}:\n {}".
format(plugin.device, ', '.join(not_supported_layers)))
format(args.device, ', '.join(not_supported_layers)))
log.error("Please try to specify cpu extensions library path in sample's command line parameters using -l "
"or --cpu_extension command line argument")
sys.exit(1)
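The device-support check above is plain set arithmetic over layer names: `ie.query_network(net, "CPU")` reports what the plugin can run, and anything left over needs a CPU extension. A standalone sketch with hypothetical layer names:

```python
# Sketch of the supported-layers check: whatever query_network does not
# report back must be provided by a custom extension (-l / --cpu_extension).
net_layers = {"conv1", "relu1", "fancy_custom_op"}   # hypothetical network
supported_layers = {"conv1", "relu1"}                # as the plugin would report
not_supported = [l for l in net_layers if l not in supported_layers]
print(sorted(not_supported))  # ['fancy_custom_op']
```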
@@ -100,24 +100,12 @@ def main():
# Loading model to the plugin
log.info("Loading model to the plugin")
exec_net = plugin.load(network=net)
del net
exec_net = ie.load_network(network=net, device_name=args.device)
# Start sync inference
log.info("Starting inference ({} iterations)".format(args.number_iter))
infer_time = []
for i in range(args.number_iter):
t0 = time()
res = exec_net.infer(inputs={input_blob: images})
infer_time.append((time() - t0) * 1000)
log.info("Average running time of one iteration: {} ms".format(np.average(np.asarray(infer_time))))
if args.perf_counts:
perf_counts = exec_net.requests[0].get_perf_counts()
log.info("Performance counters:")
print("{:<70} {:<15} {:<15} {:<15} {:<10}".format('name', 'layer_type', 'exet_type', 'status', 'real_time, us'))
for layer, stats in perf_counts.items():
print("{:<70} {:<15} {:<15} {:<15} {:<10}".format(layer, stats['layer_type'], stats['exec_type'],
stats['status'], stats['real_time']))
log.info("Starting inference")
res = exec_net.infer(inputs={input_blob: images})
# Processing output blob
log.info("Processing output blob")
res = res[out_blob]
@@ -133,8 +121,7 @@ def main():
out_img = os.path.join(os.path.dirname(__file__), "out_{}.bmp".format(batch))
cv2.imwrite(out_img, data)
log.info("Result image was saved to {}".format(out_img))
del exec_net
del plugin
log.info("This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n")
if __name__ == '__main__':
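The removed perf-counter report laid out its table with fixed-width `str.format` fields. That layout trick, sketched with the same column widths as the deleted code (row values hypothetical):

```python
# Fixed-width table rows via str.format, as the removed perf-counter
# report printed them: each column is left-justified to a fixed width.
fmt = "{:<70} {:<15} {:<15} {:<15} {:<10}"
header = fmt.format('name', 'layer_type', 'exec_type', 'status', 'real_time, us')
row = fmt.format('conv1', 'Convolution', 'jit_avx2_FP32', 'EXECUTED', 142)
print(header)
print(row)
```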

View File

@@ -1,21 +0,0 @@
background
aeroplane
bicycle
bird
boat
bottle
bus
car
cat
chair
cow
diningtable
dog
horse
motorbike
person
pottedplant
sheep
sofa
train
tvmonitor

View File

@@ -1,200 +0,0 @@
import subprocess
from pathlib import Path
import platform
import sys
from itertools import chain
from distutils.command.build_py import build_py as _build_py
from distutils.command.clean import clean as _clean
import shutil
from setuptools import setup, Extension, find_packages
from setuptools.command.build_ext import build_ext as _build_ext
from setuptools.command.install import install as _install
IS_WINDOWS = (platform.system() == 'Windows')
IS_DARWIN = (platform.system() == 'Darwin')
IS_LINUX = (platform.system() == 'Linux')
REQUIREMENTS_FILE = 'requirements.txt'
PACKAGE_NAME = 'inference_engine'
PACKAGE = Path(PACKAGE_NAME)
C_LIB_NAME = '{}._C'.format(PACKAGE_NAME)
_build_cmd = ['cmake', '--build', '.']
INFERENCE_ENGINE_DIR = None
BUNDLE_INFERENCE_ENGINE = False
def parse_command_line_options(cls):
"""Propagates command line options to sub-commands.
Allows to run install command with build_ext options"""
base_user_options = getattr(cls, 'user_options', [])
base_boolean_options = getattr(cls, 'boolean_options', [])
base_run = cls.run
base_init_options = cls.initialize_options
cls.user_options = base_user_options + [
('copy-ie-libs', None, 'Copy Inference Engine Libraries to package directory'),
('inference-engine-dir=', None, 'Path to Inference Engine directory')
]
cls.boolean_options = base_boolean_options + [
'copy-ie-libs'
]
def initialize_options(self):
self.copy_ie_libs = False
self.inference_engine_dir = None
base_init_options(self)
def run(self):
global INFERENCE_ENGINE_DIR
global BUNDLE_INFERENCE_ENGINE
if self.copy_ie_libs:
BUNDLE_INFERENCE_ENGINE = True
if self.inference_engine_dir:
INFERENCE_ENGINE_DIR = self.inference_engine_dir
base_run(self)
cls.initialize_options = initialize_options
cls.run = run
return cls
@parse_command_line_options
class install(_install):
pass
@parse_command_line_options
class build_py(_build_py):
pass
@parse_command_line_options
class build_ext(_build_ext):
def run(self):
if not self.extensions:
return
for i, ext in enumerate(self.extensions):
if ext.name == C_LIB_NAME:
self._build_cmake()
self.extensions.pop(i)
break
super().run()
def _build_cmake(self):
print("Building C++ extension")
if Path.cwd().joinpath("Makefile").is_file():
# in build directory, run make only
subprocess.call(_build_cmd)
else:
# compile extension library and
self.build_cmake_lib()
print("Built C++ extension")
def build_cmake_lib(self):
def save_call(*args, error_msg=None, **kwargs):
if subprocess.call(*args, **kwargs) != 0:
if error_msg:
print(error_msg)
shutil.rmtree(tmp_build_dir.as_posix(), ignore_errors=True)
sys.exit(1)
tmp_build_dir = Path("tmp_build")
destination = Path(self.build_lib) / PACKAGE_NAME if not self.inplace else Path(PACKAGE_NAME)
tmp_build_dir.mkdir(exist_ok=False)
_python_executable_opt = ['-DPYTHON_EXECUTABLE={}'.format(sys.executable)]
_build_type_opt = ['-DCMAKE_BUILD_TYPE=Release']
_generator_opt = ['-G', 'NMake Makefiles' if IS_WINDOWS else "Unix Makefiles"]
_optional = []
if BUNDLE_INFERENCE_ENGINE:
_optional.append('-DCOPY_IE_LIBS=ON')
if INFERENCE_ENGINE_DIR:
_optional.append('-DInferenceEngine_DIR={}'.format(INFERENCE_ENGINE_DIR))
_cmake_cmd = list(chain(['cmake'], _generator_opt, _build_type_opt, _python_executable_opt, _optional, ['..']))
save_call(_cmake_cmd, cwd=tmp_build_dir.as_posix(), error_msg="Cmake generator failed")
save_call(_build_cmd, cwd=tmp_build_dir.as_posix(), error_msg="Build command failed")
build_ext.copy_compiled_libs(tmp_build_dir / PACKAGE_NAME, destination)
shutil.rmtree(tmp_build_dir.as_posix(), ignore_errors=False)
@staticmethod
def copy_compiled_libs(source_dir, destination):
extensions = ['so', 'dll', 'pyd']
for path in chain.from_iterable(source_dir.glob("*.%s" % ext) for ext in extensions):
shutil.copy(path.as_posix(), destination.as_posix())
class clean(_clean):
def run(self):
shutil.rmtree("tmp_build", ignore_errors=True)
extensions = ['so', 'dll', 'pyd']
for path in chain.from_iterable(PACKAGE.glob("*.%s" % ext) for ext in extensions):
path.unlink()
super().run()
def paths_to_str(paths):
return [p.as_posix() for p in paths]
with open(REQUIREMENTS_FILE) as reqs:
requirements = set(reqs.read().splitlines())
# do not spoil pre-installed opencv (in case it was built from source)
_opencv_package = "opencv-python"
try:
import cv2
if _opencv_package in requirements:
requirements.remove(_opencv_package)
except ImportError:
requirements.add(_opencv_package)
c_sources = [
PACKAGE / 'ie_driver.cpp',
PACKAGE / 'ie_driver.hpp',
PACKAGE / 'c_ie_driver.pxd',
PACKAGE / 'ie_driver.pyx',
PACKAGE / 'ie_driver.pxd',
]
extensions = [
Extension(C_LIB_NAME, paths_to_str(c_sources))
]
cmdclass = {
'build_ext': build_ext,
'build_py': build_py,
'clean': clean,
'install': install,
}
setup(
name="src",
version='1.0',
description='Python inference for Inference Engine',
packages=find_packages(exclude=['tests']),
package_data={PACKAGE_NAME: ['*.so', '*.dll', '*dylib*', '*.pyd']},
include_package_data=True,
ext_modules=extensions,
cmdclass=cmdclass,
install_requires=list(requirements),
zip_safe=False,
)
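The deleted `setup.py` avoided clobbering a pre-installed OpenCV (e.g. one built from source) by probing for `cv2` before pinning the `opencv-python` wheel. The same probe can be written without a bare `import` using `importlib` (requirement names here are illustrative):

```python
import importlib.util

# Sketch of the deleted setup.py logic: only keep the opencv-python wheel
# requirement when no cv2 module is already importable in the environment.
requirements = {"numpy", "opencv-python"}
if importlib.util.find_spec("cv2") is not None:
    requirements.discard("opencv-python")
print(sorted(requirements))
```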

View File

@@ -5,24 +5,20 @@ set (TARGET_NAME "ie_api")
set (CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PYTHON_BRIDGE_OUTPUT_DIRECTORY}/inference_engine)
set (CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})
set_source_files_properties(
ie_api_impl_defs.pxd
ie_api_impl.hpp
ie_api_impl.cpp
ie_api.pyx
ie_api.pxd
file(GLOB SOURCE
${CMAKE_CURRENT_SOURCE_DIR}/*.pyx
${CMAKE_CURRENT_SOURCE_DIR}/*.cpp
)
PROPERTIES CYTHON_IS_CXX TRUE
set_source_files_properties(${SOURCE} PROPERTIES CYTHON_IS_CXX TRUE
)
cython_add_module (
${TARGET_NAME}
## Compatibility with python 2.7 which has deprecated "register" specifier
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
add_definitions("-Wno-register")
endif()
ie_api_impl_defs.pxd
ie_api_impl.hpp
ie_api_impl.cpp
ie_api.pyx
)
cython_add_module (${TARGET_NAME} ${SOURCE})
set_target_properties (${TARGET_NAME} PROPERTIES CXX_STANDARD 11 LINKER_LANGUAGE CXX)
target_link_libraries (${TARGET_NAME} PRIVATE ${InferenceEngine_LIBRARIES})

View File

@@ -1,3 +1,4 @@
from .ie_api import *
__all__ = ['IENetwork', "IEPlugin", "IECore", "get_version"]
__version__ = get_version()
__all__ = ['IENetwork', "IEPlugin", "IENetReader"]

View File

@@ -1,37 +0,0 @@
# If the pyx file is a C++ file, we should specify that here.
set(CMAKE_INCLUDE_CURRENT_DIR ON)
set(TARGET_NAME "dnn_builder")
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PYTHON_BRIDGE_OUTPUT_DIRECTORY}/inference_engine/${TARGET_NAME})
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})
set_source_files_properties(
dnn_builder_defs.pxd
dnn_builder_impl.hpp
dnn_builder_impl.cpp
dnn_builder.pyx
dnn_builder.pxd
PROPERTIES CYTHON_IS_CXX TRUE
)
cython_add_module(
${TARGET_NAME}
dnn_builder_impl_defs.pxd
dnn_builder_impl.hpp
dnn_builder_impl.cpp
dnn_builder.pyx
)
set_target_properties (${TARGET_NAME} PROPERTIES CXX_STANDARD 11 LINKER_LANGUAGE CXX)
add_dependencies (${TARGET_NAME} ie_api)
target_include_directories (${TARGET_NAME} PRIVATE ${PYTHON_BRIDGE_SRC_ROOT}/src/openvino/inference_engine )
target_link_libraries (${TARGET_NAME} PRIVATE ${InferenceEngine_LIBRARIES})
# perform copy
ADD_CUSTOM_COMMAND (TARGET ${TARGET_NAME}
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${PYTHON_BRIDGE_SRC_ROOT}/src/openvino/inference_engine/${TARGET_NAME}/__init__.py ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}
)

View File

@@ -1,2 +0,0 @@
from .dnn_builder import *
__all__ = ["NetworkBuilder", "LayerBuilder"]

View File

@@ -1,26 +0,0 @@
from .cimport dnn_builder_impl_defs as C
from libcpp.memory cimport shared_ptr
cdef class NetworkBuilder:
cdef C.NetworkBuilder impl
cdef class INetwork:
cdef C.INetwork impl
cdef class ILayer:
cdef C.ILayer impl
cdef class Port:
cdef C.Port impl
cdef class PortInfo:
cdef C.PortInfo impl
cdef class Connection:
cdef C.Connection impl
cdef class LayerBuilder:
cdef C.LayerBuilder impl
cdef class LayerConstantData(dict):
cdef shared_ptr[C.LayerBuilder] impl

View File

@@ -1,423 +0,0 @@
# #distutils: language=c++
#from cython.operator cimport dereference as deref
from libcpp.vector cimport vector
from libcpp.map cimport map
from libcpp.string cimport string
from ..ie_api cimport IENetwork, BlobBuffer
from .cimport dnn_builder_impl_defs as C
from .dnn_builder_impl_defs cimport Blob
import numpy as np
np_precision_map = {
"float32": "FP32",
"float16": "FP16",
"int32": "I32",
"int16": "I16",
"uint16": "U16",
"int8": "I8",
"uint8": "U8",
}
cdef class NetworkBuilder:
def __cinit__(self, name=None, IENetwork ie_net=None):
if name is not None and ie_net is not None:
raise AttributeError("Both name and ie_net arguments are defined")
elif name is not None:
self.impl = C.NetworkBuilder(name.encode())
elif ie_net is not None:
self.impl = C.NetworkBuilder().from_ie_network(ie_net.impl)
def build(self):
cdef INetwork i_net = INetwork()
i_net.impl = self.impl.build()
return i_net
def get_layer(self, id: int):
cdef LayerBuilder py_layer = LayerBuilder()
py_layer.impl = self.impl.getLayer(id)
return py_layer
@property
def layers(self):
cdef vector[C.LayerBuilder] c_layers = self.impl.getLayers()
cdef LayerBuilder py_layer
py_layers = {}
for l in c_layers:
py_layer = LayerBuilder()
py_layer.impl = l
py_layers[l.getName().decode()] = py_layer
return py_layers
def remove_layer(self, LayerBuilder layer):
self.impl.removeLayer(layer.impl)
def get_layer_connection(self, LayerBuilder layer):
cdef vector[C.Connection] c_connections = self.impl.getLayerConnections(layer.impl)
cdef Connection connection
connections = []
for con in c_connections:
connection = Connection()
connection.impl = con
connections.append(connection)
return connections
def disconnect(self, Connection connection):
self.impl.disconnect(connection.impl)
def connect(self, PortInfo input, PortInfo output):
self.impl.connect(input.impl, output.impl)
def add_layer(self, LayerBuilder layer, input_ports: list = None):
cdef vector[C.PortInfo] c_ports
cdef PortInfo c_port
if not input_ports:
return self.impl.addLayer(layer.impl)
else:
for p in input_ports:
c_port = PortInfo(p.layer_id, p.port_id)
c_ports.push_back(c_port.impl)
return self.impl.addAndConnectLayer(c_ports, layer.impl)
cdef class INetwork:
def __iter__(self):
cdef ILayer layer
layers = []
cdef vector[C.ILayer] c_layers = self.impl.layers
for l in c_layers:
layer = ILayer()
layer.impl = l
layers.append(layer)
return iter(layers)
@property
def layers(self):
cdef ILayer layer
layers = {}
cdef vector[C.ILayer] c_layers = self.impl.layers
for l in c_layers:
layer = ILayer()
layer.impl = l
layers[l.name.decode()] = layer
return layers
@property
def inputs(self):
cdef ILayer layer
layers = {}
cdef vector[C.ILayer] c_layers = self.impl.inputs
for l in c_layers:
layer = ILayer()
layer.impl = l
layers[l.name.decode()] = layer
return layers
@property
def outputs(self):
cdef ILayer layer
layers = {}
cdef vector[C.ILayer] c_layers = self.impl.outputs
for l in c_layers:
layer = ILayer()
layer.impl = l
layers[l.name.decode()] = layer
return layers
@property
def name(self):
return self.impl.name.decode()
@property
def size(self):
return self.impl.size
def get_layer_connection(self, layer: ILayer):
cdef Connection connection
connections = []
cdef vector[C.Connection] c_connections = self.impl.getLayerConnections(layer.id)
for con in c_connections:
connection = Connection()
connection.impl = con
connections.append(connection)
return connections
def to_ie_network(self):
cdef IENetwork net = IENetwork()
net.impl = self.impl.to_ie_network()
return net
cdef class ILayer:
@property
def name(self):
return self.impl.name.decode()
@property
def id(self):
return self.impl.id
@property
def type(self):
return self.impl.type.decode()
@property
def params(self):
return {k.decode(): v.decode() for k, v in self.impl.parameters}
@property
def input_ports(self):
cdef Port port
cdef vector[C.Port] c_ports = self.impl.in_ports
ports = []
for p in c_ports:
port = Port()
port.impl = p
ports.append(port)
return ports
@property
def output_ports(self):
cdef Port port
cdef vector[C.Port] c_ports = self.impl.out_ports
ports = []
for p in c_ports:
port = Port()
port.impl = p
ports.append(port)
return ports
@property
def constant_data(self):
cdef map[string, Blob.Ptr] c_constant_data
c_constant_data = self.impl.constant_data
constant_data = {}
cdef BlobBuffer weights_buffer
for weights in c_constant_data:
weights_buffer = BlobBuffer()
weights_buffer.reset(weights.second)
constant_data[weights.first.decode()] = weights_buffer.to_numpy()
return constant_data
cdef class Port:
def __cinit__(self, shape: list=[]):
cdef vector[size_t] c_shape
for d in shape:
c_shape.push_back(d)
self.impl = C.Port(c_shape)
@property
def shape(self):
return self.impl.shape
cdef class PortInfo:
def __cinit__(self, layer_id: int = -1, port_id: int = -1):
if layer_id != -1 and port_id != -1:
self.impl = C.PortInfo(layer_id, port_id)
else:
self.impl = C.PortInfo()
@property
def layer_id(self):
return self.impl.layer_id
@property
def port_id(self):
return self.impl.port_id
def __eq__(self, other):
return self.layer_id == other.layer_id and self.port_id == other.port_id
def __ne__(self, other):
return self.layer_id != other.layer_id and self.port_id != other.port_id
cdef class Connection:
def __cinit__(self, PortInfo input = None, PortInfo output = None):
if input and output:
self.impl = C.Connection(input.impl, output.impl)
else:
self.impl = C.Connection()
@property
def _from(self):
cdef PortInfo port_info = PortInfo()
port_info.impl = self.impl._from
return port_info
@property
def to(self):
cdef PortInfo port_info = PortInfo()
port_info.impl = self.impl.to
return port_info
def __eq__(self, other):
return self._from == other._from and self.to == other.to
def __ne__(self, other):
return self._from != other._from and self.to != other.to
def check_constant_data(data):
for k, v in data.items():
if not all([isinstance(x, type(v[0])) for x in v]):
raise TypeError("Elements of list for key {} have different data types! "
"Please specify list of 'int' or 'float' values.".format(k))
if isinstance(v, list):
if isinstance(v[0], float):
dtype = np.float32
elif isinstance(v[0], int):
dtype = np.int32
else:
raise TypeError("Unsupported precision of the data for key {}! Given {} but 'float or 'int' precision expected".
format(k, str(v.dtype)))
data[k] = np.asanyarray(v, dtype=dtype)
elif isinstance(v, np.ndarray):
pass
else:
raise TypeError("Unsupported data type for key '{}'. {} given but 'list' or 'numpy.ndarray' expected".
format(k, type(v)))
return data
# TODO: Fix LAyerBuilder object copying - pass by reference
# cdef class LayerConstantData(dict):
# def update(self, other=None, **kwargs):
# if other:
# other = check_constant_data(other)
# cdef vector[size_t] dims
# cdef Blob.Ptr blob_ptr
# cdef BlobBuffer buffer
# for k, v in other.items():
# if k in self.keys() and (v.shape == self[k].shape and v.dtype == self[k].dtype):
# print("Reuse blob for {}\n".format(k))
# self[k][:] = v
# else:
# for dim in v.shape:
# dims.push_back(dim)
# ie_precision = np_precision_map.get(str(v.dtype), None)
# if not ie_precision:
# raise BufferError("Unsupported precision of the data for key {}! Given {} but one of the {} precisions expected".
# format(k, str(v.dtype), ", ".join(np_precision_map.keys())))
# blob_ptr = deref(self.impl).allocateBlob(dims, ie_precision.encode())
# buffer = BlobBuffer()
# buffer.reset(blob_ptr)
# np_buffer = buffer.to_numpy()
# np_buffer[:] = v
# deref(self.impl).addConstantData(k.encode(), blob_ptr)
cdef class LayerBuilder:
def __cinit__(self, type: str=None, name: str=None):
if name and type:
self.impl = C.LayerBuilder(name.encode(), type.encode())
else:
self.impl = C.LayerBuilder()
@property
def id(self):
return self.impl.id
@property
def name(self):
return self.impl.getName().decode()
@name.setter
def name(self, name: str):
self.impl.setName(name.encode())
@property
def type(self):
return self.impl.getType().decode()
@type.setter
def type(self, type: str):
self.impl.setType(type.encode())
@property
def input_ports(self):
cdef Port port
cdef vector[C.Port] c_ports = self.impl.getInputPorts()
py_ports = []
for p in c_ports:
port = Port()
port.impl = p
py_ports.append(port)
return py_ports
@input_ports.setter
def input_ports(self, ports: list):
cdef vector[C.Port] c_ports
cdef Port c_port
for p in ports:
c_port = Port(p.shape)
c_ports.push_back(c_port.impl)
self.impl.setInputPorts(c_ports)
@property
def output_ports(self):
cdef Port port
cdef vector[C.Port] c_ports = self.impl.getOutputPorts()
py_ports = []
for p in c_ports:
port = Port()
port.impl = p
py_ports.append(port)
return py_ports
@output_ports.setter
def output_ports(self, ports: list):
cdef vector[C.Port] c_ports
cdef Port c_port
for p in ports:
c_port = Port(p.shape)
c_ports.push_back(c_port.impl)
self.impl.setOutputPorts(c_ports)
@property
def params(self):
return {k.decode(): v.decode() for k, v in self.impl.getParameters()}
@params.setter
def params(self, params_map: dict):
cdef map[string, string] c_params_map
for k, v in params_map.items():
c_params_map[k.encode()] = str(v).encode()
self.impl.setParameters(c_params_map)
def build(self):
cdef ILayer layer = ILayer()
layer.impl = self.impl.build()
return layer
@property
def constant_data(self):
cdef map[string, Blob.Ptr] c_constant_data
c_constant_data = self.impl.getConstantData()
constant_data = {}
# TODO: Fix LAyerBuilder object copying - pass by reference
# constant_data = LayerConstantData()
# constant_data.impl = make_shared[C.LayerBuilder](self.impl)
cdef BlobBuffer weights_buffer
for weights in c_constant_data:
weights_buffer = BlobBuffer()
weights_buffer.reset(weights.second)
constant_data[weights.first.decode()] = weights_buffer.to_numpy()
return constant_data
@constant_data.setter
def constant_data(self, data: dict):
cdef vector[size_t] dims
cdef map[string, Blob.Ptr] c_constant_data
cdef Blob.Ptr blob_ptr
cdef BlobBuffer buffer
data = check_constant_data(data)
for k, v in data.items():
for dim in v.shape:
dims.push_back(dim)
ie_precision = np_precision_map.get(str(v.dtype), None)
if not ie_precision:
raise BufferError("Unsupported precision of the data for key {}! Given {} but one of the {} precisions expected".
format(k, str(v.dtype), ", ".join(np_precision_map.keys())))
blob_ptr = self.impl.allocateBlob(dims, ie_precision.encode())
buffer = BlobBuffer()
buffer.reset(blob_ptr)
np_buffer = buffer.to_numpy()
np_buffer[:] = v
c_constant_data[k.encode()] = blob_ptr
self.impl.setConstantData(c_constant_data)
# TODO: Implement get\setGraph when will be supported

View File

@@ -1,330 +0,0 @@
// Copyright (c) 2018 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "dnn_builder_impl.hpp"
// using namespace InferenceEnginePython;
// using namespace std;
std::map<std::string, InferenceEngine::Precision> precision_map = {{"FP32", InferenceEngine::Precision::FP32},
{"FP16", InferenceEngine::Precision::FP16},
{"Q78", InferenceEngine::Precision::Q78},
{"I32", InferenceEngine::Precision::I32},
{"I16", InferenceEngine::Precision::I16},
{"I8", InferenceEngine::Precision::I8},
{"U16", InferenceEngine::Precision::U16},
{"U8", InferenceEngine::Precision::U8}};
InferenceEnginePython::ILayer buildILayer(InferenceEngine::ILayer::CPtr it) {
std::vector<InferenceEnginePython::Port> in_ports;
std::vector<InferenceEnginePython::Port> out_ports;
for (const auto &port : it->getInputPorts()) {
in_ports.push_back(InferenceEnginePython::Port(port.shape()));
}
for (const auto &port : it->getOutputPorts()) {
out_ports.push_back(InferenceEnginePython::Port(port.shape()));
}
std::map<std::string, std::string> params_map;
for (const auto &params : it->getParameters()->getParameters()) {
params_map.emplace(params.first, params.second);
}
std::map<std::string, InferenceEngine::Blob::Ptr> data_map;
for (const auto &data : it->getParameters()->getConstantData()) {
data_map.emplace(data.first, std::const_pointer_cast<InferenceEngine::Blob>(data.second));
}
return {it,
it->getName(),
it->getId(),
it->getType(),
params_map,
data_map,
in_ports,
out_ports,
};
}
// NetworkBuilder
InferenceEnginePython::NetworkBuilder::NetworkBuilder(const std::string &name) {
// TODO( ): std::move or instance in heap? Please check in other places.
InferenceEngine::Builder::Network network(name);
network_ptr = std::make_shared<InferenceEngine::Builder::Network>(network);
}
InferenceEnginePython::NetworkBuilder InferenceEnginePython::NetworkBuilder::from_ie_network(
const InferenceEnginePython::IENetwork &icnn_net) {
InferenceEngine::Builder::Network network((InferenceEngine::ICNNNetwork &) icnn_net.actual);
NetworkBuilder net_builder = NetworkBuilder();
net_builder.network_ptr = std::make_shared<InferenceEngine::Builder::Network>(network);
return net_builder;
}
InferenceEnginePython::INetwork InferenceEnginePython::NetworkBuilder::build() {
InferenceEngine::INetwork::Ptr i_net = network_ptr->build();
std::vector<ILayer> layers;
for (const auto &it : *i_net) {
layers.push_back(buildILayer(it));
}
std::vector<ILayer> inputs;
for (const auto &it : i_net->getInputs()) {
inputs.push_back(buildILayer(it));
}
std::vector<ILayer> outputs;
for (const auto &it : i_net->getInputs()) {
outputs.push_back(buildILayer(it));
}
return {i_net, // INetwork ptr
i_net->getName(), // name
i_net->size(), // Number of layers
layers,
inputs,
outputs
};
}
std::vector<InferenceEnginePython::LayerBuilder> InferenceEnginePython::NetworkBuilder::getLayers() {
std::vector<LayerBuilder> layers;
for (const auto &it : network_ptr->getLayers()) {
LayerBuilder layer;
layer.actual = it;
layer.id = it.getId();
layers.push_back(layer);
}
return layers;
}
InferenceEnginePython::LayerBuilder InferenceEnginePython::NetworkBuilder::getLayer(size_t layer_id) {
LayerBuilder layer;
InferenceEngine::Builder::Layer ie_layer = network_ptr->getLayer(layer_id);
layer.actual = ie_layer;
layer.id = ie_layer.getId();
return layer;
}
void InferenceEnginePython::NetworkBuilder::removeLayer(const LayerBuilder &layer) {
network_ptr->removeLayer(layer.id);
}
const std::vector<InferenceEnginePython::Connection> InferenceEnginePython::NetworkBuilder::getLayerConnections(
const LayerBuilder &layer) {
std::vector<InferenceEngine::Connection> ie_connections = network_ptr->getLayerConnections(layer.id);
std::vector<Connection> connections;
for (auto const &it : ie_connections) {
PortInfo input(it.from().layerId(), it.from().portId());
PortInfo output(it.to().layerId(), it.to().portId());
connections.push_back(Connection(input, output));
}
return connections;
}
void InferenceEnginePython::NetworkBuilder::disconnect(const Connection &connection) {
network_ptr->disconnect(connection.actual);
}
void InferenceEnginePython::NetworkBuilder::connect(const PortInfo &input, const PortInfo &output) {
network_ptr->connect(input.actual, output.actual);
}
size_t InferenceEnginePython::NetworkBuilder::addLayer(const LayerBuilder &layer) {
return network_ptr->addLayer(layer.actual);
}
size_t InferenceEnginePython::NetworkBuilder::addAndConnectLayer(const std::vector<PortInfo> &input,
const LayerBuilder &layer) {
std::vector<InferenceEngine::PortInfo> ie_ports;
for (const auto &it : input) {
ie_ports.push_back(it.actual);
}
return network_ptr->addLayer(ie_ports, layer.actual);
}
// NetworkBuilder end
// Port
InferenceEnginePython::Port::Port(const std::vector<size_t> &shapes) {
actual = InferenceEngine::Port(shapes);
shape = actual.shape();
}
InferenceEnginePython::PortInfo::PortInfo(size_t layer_id, size_t port_id) : PortInfo() {
this->actual = InferenceEngine::PortInfo(layer_id, port_id);
this->layer_id = layer_id;
this->port_id = port_id;
}
// Port end
// INetwork
std::vector<InferenceEnginePython::Connection> InferenceEnginePython::INetwork::getLayerConnections(size_t layer_id) {
std::vector<Connection> connections;
for (const auto &it : actual->getLayerConnections(layer_id)) {
PortInfo input = PortInfo(it.from().layerId(), it.from().portId());
PortInfo output = PortInfo(it.to().layerId(), it.to().portId());
connections.push_back(Connection(input, output));
}
return connections;
}
InferenceEnginePython::IENetwork InferenceEnginePython::INetwork::to_ie_network() {
std::shared_ptr<InferenceEngine::ICNNNetwork> icnn_net = InferenceEngine::Builder::convertToICNNNetwork(actual);
InferenceEngine::CNNNetwork cnn_net(icnn_net);
IENetwork ie_net = IENetwork();
ie_net.actual = cnn_net;
ie_net.name = name;
ie_net.batch_size = cnn_net.getBatchSize();
return ie_net;
}
// INetwork end
// Connection
InferenceEnginePython::Connection::Connection(PortInfo input, PortInfo output) : Connection() {
this->actual = InferenceEngine::Connection(InferenceEngine::PortInfo(input.layer_id, input.port_id),
InferenceEngine::PortInfo(output.layer_id, output.port_id));
this->_from = PortInfo(actual.from().layerId(), actual.from().portId());
this->to = PortInfo(actual.to().layerId(), actual.to().portId());
}
// Connection end
// LayerBuilder
InferenceEnginePython::LayerBuilder::LayerBuilder(const std::string &type, const std::string &name) : LayerBuilder() {
InferenceEngine::Builder::Layer layer(type, name);
this->actual = layer;
this->id = layer.getId();
}
const std::string &InferenceEnginePython::LayerBuilder::getName() {
return actual.getName();
}
const std::string &InferenceEnginePython::LayerBuilder::getType() {
return actual.getType();
}
std::vector<InferenceEnginePython::Port> InferenceEnginePython::LayerBuilder::getInputPorts() {
std::vector<Port> ports;
for (const auto &it : actual.getInputPorts()) {
ports.push_back(Port(it.shape()));
}
return ports;
}
std::vector<InferenceEnginePython::Port> InferenceEnginePython::LayerBuilder::getOutputPorts() {
std::vector<Port> ports;
for (const auto &it : actual.getOutputPorts()) {
ports.push_back(Port(it.shape()));
}
return ports;
}
std::map<std::string, std::string> InferenceEnginePython::LayerBuilder::getParameters() {
std::map<std::string, std::string> params_map;
for (const auto &it : actual.getParameters()) {
params_map.emplace(it.first, it.second);
}
return params_map;
}
void InferenceEnginePython::LayerBuilder::setParameters(std::map<std::string, std::string> params_map) {
std::map<std::string, InferenceEngine::Parameter> ie_params_map;
for (const auto &it : params_map) {
InferenceEngine::Parameter ie_param((it.second));
ie_params_map.emplace(it.first, ie_param);
}
actual = actual.setParameters(ie_params_map);
}
void InferenceEnginePython::LayerBuilder::setName(const std::string &name) {
actual = actual.setName(name);
}
void InferenceEnginePython::LayerBuilder::setType(const std::string &type) {
actual = actual.setType(type);
}
void InferenceEnginePython::LayerBuilder::setInputPorts(const std::vector<Port> ports) {
std::vector<InferenceEngine::Port> ie_ports;
for (const auto &it : ports) {
ie_ports.push_back(it.actual);
}
actual = actual.setInputPorts(ie_ports);
}
void InferenceEnginePython::LayerBuilder::setOutputPorts(const std::vector<Port> ports) {
std::vector<InferenceEngine::Port> ie_ports;
for (const auto &it : ports) {
ie_ports.push_back(it.actual);
}
actual = actual.setOutputPorts(ie_ports);
}
InferenceEnginePython::ILayer InferenceEnginePython::LayerBuilder::build() {
return buildILayer(actual.build());
}
std::map<std::string, InferenceEngine::Blob::Ptr> InferenceEnginePython::LayerBuilder::getConstantData() {
std::map<std::string, InferenceEngine::Blob::Ptr> data_map;
for (const auto &it : actual.getConstantData()) {
data_map.emplace(it.first, std::const_pointer_cast<InferenceEngine::Blob>(it.second));
}
return data_map;
}
InferenceEngine::Blob::Ptr InferenceEnginePython::LayerBuilder::allocateBlob(std::vector<size_t> dims,
const std::string &precision) {
InferenceEngine::Layout ie_layout;
ie_layout = InferenceEngine::TensorDesc::getLayoutByDims(dims);
InferenceEngine::Precision ie_precision = precision_map.at(precision);
const InferenceEngine::TensorDesc &tdesc = InferenceEngine::TensorDesc(ie_precision, dims, ie_layout);
InferenceEngine::Blob::Ptr blob;
switch (ie_precision) {
case InferenceEngine::Precision::FP32:
blob = InferenceEngine::make_shared_blob<float>(tdesc);
break;
case InferenceEngine::Precision::FP16:
// 16-bit precisions need 2-byte elements; <int> would over-allocate 4 bytes per element
blob = InferenceEngine::make_shared_blob<short>(tdesc);
break;
case InferenceEngine::Precision::I16:
blob = InferenceEngine::make_shared_blob<short>(tdesc);
break;
case InferenceEngine::Precision::U16:
blob = InferenceEngine::make_shared_blob<unsigned short>(tdesc);
break;
case InferenceEngine::Precision::U8:
blob = InferenceEngine::make_shared_blob<unsigned char>(tdesc);
break;
case InferenceEngine::Precision::I8:
blob = InferenceEngine::make_shared_blob<signed char>(tdesc);
break;
case InferenceEngine::Precision::I32:
blob = InferenceEngine::make_shared_blob<signed int>(tdesc);
break;
default:
blob = InferenceEngine::make_shared_blob<float>(tdesc);
break;
}
blob->allocate();
return blob;
}
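The switch above picks the blob's element type from the precision string and then allocates the product of the dims times the element size. A pure-Python sketch of that bookkeeping (the `ELEMENT_SIZE` table and `blob_byte_size` helper are illustrative, using nominal per-precision element sizes, and are not part of the API):

```python
from functools import reduce

# Nominal bytes per element for each precision name (illustrative subset).
ELEMENT_SIZE = {"FP32": 4, "FP16": 2, "I16": 2, "U16": 2, "U8": 1, "I8": 1, "I32": 4}

def blob_byte_size(dims, precision):
    """Dense blob size: product of dims times bytes per element (FP32-sized default)."""
    count = reduce(lambda a, b: a * b, dims, 1)
    return count * ELEMENT_SIZE.get(precision, 4)

print(blob_byte_size([1, 3, 224, 224], "FP32"))  # 602112
```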
void InferenceEnginePython::LayerBuilder::setConstantData(const std::map<std::string,
InferenceEngine::Blob::Ptr> &const_data) {
actual.setConstantData(const_data);
}
// TODO( ): Fix LayerBuilder object copying - pass by reference
// void LayerBuilder::addConstantData(const std::string & name, InferenceEngine::Blob::Ptr data){
// InferenceEngine::Blob::CPtr c_data = const_pointer_cast<const InferenceEngine::Blob>(data);
// actual.addConstantData(name, c_data);
// }
// LayerBuilder end


@@ -1,161 +0,0 @@
// Copyright (c) 2018 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <ie_blob.h>
#include <iterator>
#include <string>
#include <iostream>
#include <algorithm>
#include <vector>
#include <map>
#include <sstream>
#include <ie_builders.hpp>
#include <inference_engine.hpp>
#include <ie_api_impl.hpp>
// namespace IE Python
namespace InferenceEnginePython {
struct LayerBuilder;
struct Port {
Port() = default;
explicit Port(const std::vector<size_t> &shapes);
InferenceEngine::Port actual;
std::vector<size_t> shape;
};
struct ILayer {
InferenceEngine::ILayer::CPtr layer_ptr;
std::string name;
size_t id;
std::string type;
std::map<std::string, std::string> parameters;
std::map<std::string, InferenceEngine::Blob::Ptr> constant_data;
std::vector<Port> in_ports;
std::vector<Port> out_ports;
};
struct PortInfo {
PortInfo(size_t layer_id, size_t port_id);
PortInfo() : actual(0, 0) {}
InferenceEngine::PortInfo actual;
size_t layer_id;
size_t port_id;
};
struct Connection {
Connection() : actual(InferenceEngine::PortInfo(0), InferenceEngine::PortInfo(0)) {}
Connection(PortInfo input, PortInfo output);
InferenceEngine::Connection actual;
PortInfo _from;
PortInfo to;
};
struct INetwork {
InferenceEngine::INetwork::Ptr actual;
std::string name;
size_t size;
std::vector<ILayer> layers;
std::vector<ILayer> inputs;
std::vector<ILayer> outputs;
std::vector<Connection> getLayerConnections(size_t layer_id);
IENetwork to_ie_network();
};
struct NetworkBuilder {
InferenceEngine::Builder::Network::Ptr network_ptr;
explicit NetworkBuilder(const std::string &name);
NetworkBuilder() = default;
NetworkBuilder from_ie_network(const InferenceEnginePython::IENetwork &icnn_net);
INetwork build();
std::vector<LayerBuilder> getLayers();
LayerBuilder getLayer(size_t layer_id);
void removeLayer(const LayerBuilder &layer);
size_t addLayer(const LayerBuilder &layer);
size_t addAndConnectLayer(const std::vector<PortInfo> &input, const LayerBuilder &layer);
const std::vector<Connection> getLayerConnections(const LayerBuilder &layer);
void disconnect(const Connection &connection);
void connect(const PortInfo &input, const PortInfo &output);
};
struct LayerBuilder {
InferenceEngine::Builder::Layer actual;
size_t id;
LayerBuilder(const std::string &type, const std::string &name);
LayerBuilder() : actual("", "") {}
LayerBuilder from_ilayer(const ILayer &ilayer);
const std::string &getName();
void setName(const std::string &name);
const std::string &getType();
void setType(const std::string &type);
std::vector<Port> getInputPorts();
void setInputPorts(const std::vector<Port> ports);
std::vector<Port> getOutputPorts();
void setOutputPorts(const std::vector<Port> ports);
std::map<std::string, std::string> getParameters();
void setParameters(std::map<std::string, std::string> params_map);
ILayer build();
std::map<std::string, InferenceEngine::Blob::Ptr> getConstantData();
InferenceEngine::Blob::Ptr allocateBlob(std::vector<size_t> dims, const std::string &precision);
void setConstantData(const std::map<std::string, InferenceEngine::Blob::Ptr> &const_data);
// TODO( ): Fix LayerBuilder object copying - pass by reference
// void addConstantData(const std::string & name, InferenceEngine::Blob::Ptr data);
};
} // namespace InferenceEnginePython


@@ -1,97 +0,0 @@
from libcpp.string cimport string
from libcpp.vector cimport vector
from libc.stddef cimport size_t
from libcpp.memory cimport shared_ptr
from libcpp.map cimport map
from ..ie_api_impl_defs cimport IENetwork
cdef extern from "<inference_engine.hpp>" namespace "InferenceEngine":
ctypedef vector[size_t] SizeVector
cdef cppclass TensorDesc:
SizeVector& getDims()
const Precision& getPrecision() const
cdef cppclass Blob:
ctypedef shared_ptr[Blob] Ptr
const TensorDesc& getTensorDesc() const
size_t element_size() const
cdef cppclass Precision:
const char*name() const
cdef extern from "dnn_builder_impl.hpp" namespace "InferenceEnginePython":
cdef cppclass ILayer:
const string name
size_t id
string type
map[string, string] parameters
vector[Port] in_ports
vector[Port] out_ports
map[string, Blob.Ptr] constant_data;
cdef cppclass INetwork:
string name
size_t size
vector[ILayer] layers
vector[ILayer] inputs
vector[ILayer] outputs
vector[Port] in_ports;
vector[Port] out_ports;
vector[Connection] getLayerConnections(size_t layer_id);
IENetwork to_ie_network();
cdef cppclass NetworkBuilder:
NetworkBuilder() except +
NetworkBuilder(string name) except +
NetworkBuilder from_ie_network(IENetwork &icnn_net) except +
INetwork build() except +
vector[LayerBuilder] getLayers() except +
LayerBuilder getLayer(size_t layer_id) except +
void removeLayer(const LayerBuilder& layer) except +
const vector[Connection] getLayerConnections(const LayerBuilder& layer) except +
void disconnect(const Connection& connection) except +
void connect(const PortInfo& input, const PortInfo& output) except +
size_t addLayer(const LayerBuilder& layer) except +
size_t addAndConnectLayer(const vector[PortInfo]& input, const LayerBuilder& layer);
cdef cppclass Port:
Port() except +
Port(const vector[size_t] & shapes) except +
const vector[size_t] shape
cdef cppclass PortInfo:
PortInfo(size_t layer_id, size_t port_id) except +
PortInfo() except +
size_t layer_id
size_t port_id
cdef cppclass Connection:
Connection(PortInfo input, PortInfo output) except +
Connection() except +
PortInfo _from
PortInfo to
cdef cppclass LayerBuilder:
LayerBuilder()
LayerBuilder(const string& type, const string& name ) except +
size_t id
LayerBuilder from_ilayer(const ILayer& ilayer) except +
string getName() except +
string getType() except +
vector[Port] getInputPorts() except +
vector[Port] getOutputPorts() except +
map[string, string] getParameters() except +
void setParameters(map[string, string] params_map) except +
void setName(const string & name) except +
void setType(const string & type) except +
void setInputPorts(const vector[Port] ports) except +
void setOutputPorts(const vector[Port] ports) except +
ILayer build() except +
map[string, Blob.Ptr] getConstantData()
void setConstantData(map[string, Blob.Ptr] &const_data)
# TODO: Fix LayerBuilder object copying - pass by reference
# void addConstantData(const string & name, Blob.Ptr data)
Blob.Ptr allocateBlob(vector[size_t] dims, const string & precision)


@@ -25,16 +25,19 @@ cdef class InferRequest:
cpdef async_infer(self, inputs = ?)
cpdef wait(self, timeout = ?)
cpdef get_perf_counts(self)
cdef void user_callback(self, int status) with gil
cdef public:
_inputs_list, _outputs_list
_inputs_list, _outputs_list, _py_callback, _py_data, _py_callback_used, _py_callback_called
cdef class IENetwork:
cdef C.IENetwork impl
cdef class ExecutableNetwork:
cdef unique_ptr[C.IEExecNetwork] impl
cdef C.IEPlugin plugin_impl
cdef C.IECore ie_core_impl
cdef public:
_requests, inputs, outputs
_requests, _infer_requests, inputs, outputs
cdef class IEPlugin:
cdef C.IEPlugin impl
@@ -55,3 +58,7 @@ cdef class OutputInfo:
cdef class LayersStatsMap(dict):
cdef C.IENetwork net_impl
cdef class IECore:
cdef C.IECore impl
cpdef ExecutableNetwork load_network(self, IENetwork network, str device_name, config = ?, int num_requests = ?)


@@ -7,12 +7,16 @@ from libcpp.vector cimport vector
from libcpp.pair cimport pair
from libcpp.map cimport map
from libcpp.memory cimport unique_ptr
from libc.stdint cimport int64_t
from libc.stdlib cimport malloc, free
from libc.stdint cimport int64_t, uint8_t
from libc.string cimport memcpy, strcpy
import os
import numpy as np
from copy import deepcopy
import warnings
from collections import OrderedDict, namedtuple
from collections import OrderedDict
import threading
cdef extern from "<utility>" namespace "std" nogil:
cdef unique_ptr[C.IEExecNetwork] move(unique_ptr[C.IEExecNetwork])
@@ -31,13 +35,103 @@ cdef dict_to_c_map(py_dict):
c_map[k.encode()] = v.encode()
return c_map
supported_precisions = ["FP32", "FP16", "Q78", "I32", "I16", "I8", "U32", "U16"]
supported_layouts = ["NCHW", "NHWC", "OIHW", "C", "CHW", "HW", "NC", "CN", "BLOCKED"]
known_plugins = ['CPU', 'GPU', 'FPGA', 'MYRIAD', 'HETERO', 'HDDL']
cdef c_map_to_dict(map[string, string] c_map):
py_dict = {}
for v in c_map:
py_dict[v.first.decode()] = v.second.decode()
return py_dict
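`dict_to_c_map` and `c_map_to_dict` bridge Python dicts and `std::map[string, string]` by encoding str keys and values to bytes on the way in and decoding on the way out. The round-trip, sketched in plain Python standing in for the Cython conversion:

```python
def dict_to_c_map(py_dict):
    # std::map keys/values are byte strings on the C++ side, hence .encode()
    return {k.encode(): v.encode() for k, v in py_dict.items()}

def c_map_to_dict(c_map):
    return {k.decode(): v.decode() for k, v in c_map.items()}

config = {"PERF_COUNT": "YES"}
assert c_map_to_dict(dict_to_c_map(config)) == config
```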
supported_precisions = ["FP32", "FP16", "Q78", "I32", "I16", "I8", "U32", "U16", "U8"]
supported_layouts = ["NCHW", "NHWC", "OIHW", "C", "CHW", "HW", "NC", "CN", "BLOCKED", "NCDHW"]
known_plugins = ['CPU', 'GPU', 'FPGA', 'MYRIAD', 'HETERO', 'HDDL', 'MULTI']
ctypedef enum StatusCode:
OK = 0
GENERAL_ERROR = -1
NOT_IMPLEMENTED = -2
NETWORK_NOT_LOADED = -3
PARAMETER_MISMATCH = -4
NOT_FOUND = -5
OUT_OF_BOUNDS = -6
UNEXPECTED = -7
REQUEST_BUSY = -8
RESULT_NOT_READY = -9
NOT_ALLOCATED = -10
INFER_NOT_STARTED = -11
NETWORK_NOT_READ = -12
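These StatusCode values mirror InferenceEngine's C++ status codes and are what `InferRequest.wait()` returns. A sketch with `enum.IntEnum` showing how a raw return value maps to a readable name (the class here is illustrative, not the binding's own type):

```python
from enum import IntEnum

class StatusCode(IntEnum):
    OK = 0
    GENERAL_ERROR = -1
    NOT_IMPLEMENTED = -2
    NETWORK_NOT_LOADED = -3
    PARAMETER_MISMATCH = -4
    NOT_FOUND = -5
    OUT_OF_BOUNDS = -6
    UNEXPECTED = -7
    REQUEST_BUSY = -8
    RESULT_NOT_READY = -9
    NOT_ALLOCATED = -10
    INFER_NOT_STARTED = -11
    NETWORK_NOT_READ = -12

status = -8  # e.g. a wait() return value
assert StatusCode(status) is StatusCode.REQUEST_BUSY
```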
def get_version():
return C.get_version().decode()
cdef class IECore:
def __cinit__(self, xml_config_file: str = ""):
self.impl = C.IECore(xml_config_file.encode())
def get_versions(self, device_name: str):
cdef map[string, C.Version] versions_
versions_ = self.impl.getVersions(device_name.encode())
versions = {}
for v in versions_:
device = v.first.decode()
ver = v.second
Versions = namedtuple("Versions", ["major", "minor", "build_number", "description"])
versions[device] = Versions(major=ver.apiVersion.major, minor=ver.apiVersion.minor, build_number=ver.buildNumber.decode(), description=ver.description.decode())
return versions
cpdef ExecutableNetwork load_network(self, IENetwork network, str device_name, config=None, int num_requests=1):
cdef ExecutableNetwork exec_net = ExecutableNetwork()
cdef map[string, string] c_config
if config:
c_config = dict_to_c_map(config)
exec_net.ie_core_impl = self.impl
exec_net.impl = move(self.impl.loadNetwork(network.impl, device_name.encode(), c_config, num_requests))
exec_net.inputs = network.inputs.keys()
exec_net.outputs = list(network.outputs.keys())
return exec_net
def query_network(self, IENetwork network, str device_name, config=None):
cdef map[string, string] c_config
if config:
c_config = dict_to_c_map(config)
res = self.impl.queryNetwork(network.impl, device_name.encode(), c_config)
return c_map_to_dict(res)
def set_config(self, config: dict, device_name: str):
cdef map[string, string] c_config = dict_to_c_map(config)
self.impl.setConfig(c_config, device_name.encode())
def register_plugin(self, plugin_name: str, device_name: str = ""):
self.impl.registerPlugin(plugin_name.encode(), device_name.encode())
def register_plugins(self, xml_config_file: str):
self.impl.registerPlugins(xml_config_file.encode())
def unregister_plugin(self, device_name: str):
self.impl.unregisterPlugin(device_name.encode())
def add_extension(self, extension_path: str, device_name: str):
self.impl.addExtension(extension_path.encode(), device_name.encode())
def get_metric(self, device_name: str, metric_name: str):
return self.impl.getMetric(device_name.encode(), metric_name.encode())
def get_config(self, device_name: str, config_name: str):
return self.impl.getConfig(device_name.encode(), config_name.encode())
@property
def available_devices(self):
cdef vector[string] c_devices = self.impl.getAvailableDevices()
return [d.decode() for d in c_devices]
# TODO: Add import network functionality
# TODO: Extend API for query config and attributes when it will be merged in C++ API
cdef class IENetLayer:
@property
def name(self):
@@ -137,6 +231,7 @@ cdef class OutputInfo:
cdef class ExecutableNetwork:
def __init__(self):
self._infer_requests = []
self._requests = []
self.inputs = []
self.outputs = []
@@ -155,19 +250,53 @@ cdef class ExecutableNetwork:
@property
def requests(self):
requests = []
for i in range(deref(self.impl).infer_requests.size()):
infer_request = InferRequest()
infer_request.impl = &(deref(self.impl).infer_requests[i])
infer_request._inputs_list = self.inputs
infer_request._outputs_list = self.outputs
requests.append(infer_request)
return requests
if (len(self._infer_requests) == 0):
for i in range(deref(self.impl).infer_requests.size()):
infer_request = InferRequest()
infer_request.impl = &(deref(self.impl).infer_requests[i])
self._infer_requests.append(infer_request)
if (len(self._infer_requests) != deref(self.impl).infer_requests.size()):
raise Exception("Mismatch of infer requests number!")
for i in range(len(self._infer_requests)):
self._infer_requests[i]._inputs_list = self.inputs
self._infer_requests[i]._outputs_list = self.outputs
return self._infer_requests
def get_exec_graph_info(self):
ie_network = IENetwork()
ie_network.impl = deref(self.impl).GetExecGraphInfo()
return ie_network
def get_metric(self, metric_name: str):
return deref(self.impl).getMetric(metric_name.encode())
def get_config(self, config_name: str):
return deref(self.impl).getConfig(config_name.encode())
ctypedef extern void (*cb_type)(void*, int) with gil
cdef class InferRequest:
def __init__(self):
self._inputs_list = []
self._outputs_list = []
self._py_callback = lambda *args, **kwargs: None
self._py_callback_used = False
self._py_callback_called = threading.Event()
self._py_data = None
cdef void user_callback(self, int status) with gil:
if self._py_callback:
self._py_callback(status, self._py_data)
self._py_callback_called.set()
def set_completion_callback(self, py_callback, py_data = None):
self._py_callback = py_callback
self._py_data = py_data
self._py_callback_used = True
deref(self.impl).setCyCallback(<cb_type>self.user_callback, <void *>self)
cpdef BlobBuffer _get_blob_buffer(self, const string & blob_name):
cdef BlobBuffer buffer = BlobBuffer()
@@ -185,13 +314,19 @@ cdef class InferRequest:
cpdef async_infer(self, inputs=None):
if inputs is not None:
self._fill_inputs(inputs)
self._py_callback_called.clear()
deref(self.impl).infer_async()
cpdef wait(self, timeout=None):
if timeout is None:
timeout = -1
return deref(self.impl).wait(<int64_t> timeout)
if self._py_callback_used:
while not self._py_callback_called.is_set():
if not self._py_callback_called.wait(timeout):
return StatusCode.REQUEST_BUSY
return StatusCode.OK
else:
if timeout is None:
timeout = -1
return deref(self.impl).wait(<int64_t> timeout)
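With a completion callback registered, `wait()` no longer calls into the C++ request: it blocks on the `threading.Event` that `user_callback` sets, and reports REQUEST_BUSY if the timeout expires first. The synchronization pattern in isolation (constants and timings are illustrative):

```python
import threading

OK, REQUEST_BUSY = 0, -8
done = threading.Event()  # set by the completion callback

def wait(timeout=None):
    # Event.wait returns False only when the timeout expired before set()
    if not done.wait(timeout):
        return REQUEST_BUSY
    return OK

assert wait(0.01) == REQUEST_BUSY  # callback has not fired yet
done.set()                         # what user_callback does on completion
assert wait(0.01) == OK
```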
cpdef get_perf_counts(self):
cdef map[string, C.ProfileInfo] c_profile = deref(self.impl).getPerformanceCounts()
@@ -201,7 +336,7 @@ cdef class InferRequest:
# TODO: add execution index. Check if unsigned int is properly converted to int in python.
profile[l.first.decode()] = {"status": info.status.decode(), "exec_type": info.exec_type.decode(),
"layer_type": info.layer_type.decode(), "real_time": info.real_time,
"cpu_time": info.cpu_time}
"cpu_time": info.cpu_time, "execution_index": info.execution_index}
return profile
@property
@@ -218,6 +353,10 @@ cdef class InferRequest:
outputs[output] = self._get_blob_buffer(output.encode()).to_numpy()
return deepcopy(outputs)
@property
def latency(self):
return self.impl.exec_time
def set_batch(self, size):
if size <= 0:
raise ValueError("Batch size should be positive integer number but {} specified".format(size))
@@ -225,6 +364,7 @@ cdef class InferRequest:
def _fill_inputs(self, inputs):
for k, v in inputs.items():
assert k in self._inputs_list, "No input with name {} found in network".format(k)
self.inputs[k][:] = v
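`_fill_inputs` uses slice assignment, `self.inputs[k][:] = v`, so the data is copied into the blob's existing memory rather than rebinding the dict entry to a new object. A minimal illustration with a plain list standing in for the numpy view over blob memory:

```python
buf = [0, 0, 0, 0]           # stands in for the blob's backing storage
view = buf
view[:] = [1, 2, 3, 4]       # slice assignment writes into the same storage
assert buf == [1, 2, 3, 4]

view = [9, 9, 9, 9]          # plain assignment only rebinds the local name
assert buf == [1, 2, 3, 4]   # the original storage is left untouched
```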
@@ -253,19 +393,30 @@ cdef class LayersStatsMap(dict):
self.net_impl.setStats(c_stats_map)
cdef class IENetwork:
def __cinit__(self, model: str="", weights: str=""):
def __cinit__(self, model: [str, bytes] ="", weights: [str, bytes] ="", init_from_buffer: bool=False,
ngraph_compatibility: bool = False):
cdef char* xml_buffer = <char*>malloc(len(model) + 1)  # +1: strcpy writes a trailing NUL past len(model)
cdef uint8_t* bin_buffer = <uint8_t *>malloc(len(weights))
cdef string model_
cdef string weights_
if model and weights:
if not os.path.isfile(model):
raise Exception("Path to the model {} doesn't exist or is a directory".format(model))
if not os.path.isfile(weights):
raise Exception("Path to the weights {} doesn't exist or is a directory".format(weights))
model_ = model.encode()
weights_ = weights.encode()
self.impl = C.IENetwork(model_, weights_)
else:
if init_from_buffer:
strcpy(xml_buffer, model)
memcpy(bin_buffer, <uint8_t *>weights, len(weights))
self.impl = C.IENetwork()
self.impl.load_from_buffer(xml_buffer, len(model), bin_buffer, len(weights))
else:
if model and weights:
if not os.path.isfile(model):
raise Exception("Path to the model {} doesn't exist or is a directory".format(model))
if not os.path.isfile(weights):
raise Exception("Path to the weights {} doesn't exist or is a directory".format(weights))
model_ = model.encode()
weights_ = weights.encode()
self.impl = C.IENetwork(model_, weights_, ngraph_compatibility)
else:
self.impl = C.IENetwork()
free(xml_buffer)
free(bin_buffer)
@property
def name(self):
name = bytes(self.impl.name)
@@ -297,6 +448,10 @@ cdef class IENetwork:
def batch_size(self):
return self.impl.batch_size
@property
def precision(self):
return self.impl.precision.decode()
@batch_size.setter
def batch_size(self, batch: int):
if batch <= 0:
@@ -338,27 +493,30 @@ cdef class IENetwork:
cdef IENetwork net = IENetwork(model, weights)
return net
# TODO: Use enum with precision type instead of string parameter when python2 support is no longer required.
def add_outputs(self, outputs, precision="FP32"):
if precision.upper() not in supported_precisions:
raise AttributeError(
"Unsupported precision {}! List of supported precisions: {}".format(precision, supported_precisions))
def add_outputs(self, outputs):
if not isinstance(outputs, list):
outputs = [outputs]
cdef vector[string] _outputs
for l in outputs:
_outputs.push_back(l.encode())
self.impl.addOutputs(_outputs, precision.upper().encode())
for i, l in enumerate(outputs):
if isinstance(l, str):
self.impl.addOutput(l.encode(), 0)
elif isinstance(l, tuple) and len(l) == 2:
self.impl.addOutput(l[0].encode(), l[1])
else:
raise TypeError("Incorrect type {type} for layer to add at index {ind}. "
"Expected string with layer name or tuple with two elements: layer name as "
"first element and port id as second".format(type=type(l), ind=i))
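`add_outputs` now accepts either a layer name (port 0 implied) or a `(name, port_id)` tuple. The dispatch can be exercised in isolation with a small pure-Python mirror (the `normalize_outputs` helper is hypothetical):

```python
def normalize_outputs(outputs):
    """Normalize add_outputs-style arguments to (layer_name, port_id) pairs."""
    if not isinstance(outputs, list):
        outputs = [outputs]
    result = []
    for i, l in enumerate(outputs):
        if isinstance(l, str):
            result.append((l, 0))         # bare name: default to port 0
        elif isinstance(l, tuple) and len(l) == 2:
            result.append((l[0], l[1]))   # explicit (name, port_id)
        else:
            raise TypeError("Incorrect type {} for layer to add at index {}".format(type(l), i))
    return result

assert normalize_outputs("conv5_1") == [("conv5_1", 0)]
assert normalize_outputs([("fc8", 1), "prob"]) == [("fc8", 1), ("prob", 0)]
```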
def serialize(self, path_to_xml, path_to_bin):
def serialize(self, path_to_xml, path_to_bin: str = ""):
self.impl.serialize(path_to_xml.encode(), path_to_bin.encode())
def reshape(self, input_shapes: dict):
cdef map[string, vector[size_t]] c_input_shapes;
cdef vector[size_t] c_shape
net_inputs = self.inputs
for input, shape in input_shapes.items():
c_shape = []
if input not in net_inputs:
raise AttributeError("Specified {} layer not in network inputs {}! ".format(input, net_inputs))
raise AttributeError("Specified '{}' layer not in network inputs '{}'! ".format(input, net_inputs))
for v in shape:
c_shape.push_back(v)
c_input_shapes[input.encode()] = c_shape
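`reshape` rejects any name that is not a network input before copying the shapes into the `std::map[string, vector[size_t]]` keyed by the encoded layer name. The same validation in plain Python (the `to_shape_map` helper is hypothetical):

```python
def to_shape_map(input_shapes, net_inputs):
    for name in input_shapes:
        if name not in net_inputs:
            raise AttributeError(
                "Specified '{}' layer not in network inputs '{}'! ".format(name, net_inputs))
    # Keys become byte strings, shapes become size_t vectors on the C++ side
    return {name.encode(): [int(v) for v in shape] for name, shape in input_shapes.items()}

assert to_shape_map({"data": (1, 3, 227, 227)}, ["data"]) == {b"data": [1, 3, 227, 227]}
```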
@@ -387,16 +545,15 @@ cdef class IEPlugin:
self.impl = C.IEPlugin(device_, dirs_)
cpdef ExecutableNetwork load(self, IENetwork network, int num_requests=1, config=None):
if num_requests <= 0:
raise ValueError(
"Incorrect number of requests specified: {}. Expected positive integer number.".format(num_requests))
cdef ExecutableNetwork exec_net = ExecutableNetwork()
cdef map[string, string] c_config
if num_requests < 0:
raise ValueError("Incorrect number of requests specified: {}. Expected positive integer number "
"or zero for auto detection".format(num_requests))
if config:
for k, v in config.items():
c_config[to_std_string(k)] = to_std_string(v)
exec_net.plugin_impl = self.impl
exec_net.impl = move(self.impl.load(network.impl, num_requests, c_config))
exec_net.inputs = network.inputs.keys()
exec_net.outputs = list(network.outputs.keys())
@@ -432,6 +589,7 @@ cdef class IEPlugin:
c_config[to_std_string(k)] = to_std_string(v)
self.impl.setConfig(c_config)
# TODO: Add export compiled network functionality
cdef class BlobBuffer:
"""Copy-less accessor for Inference Engine Blob"""


@@ -1,16 +1,6 @@
// Copyright (c) 2018 Intel Corporation
// Copyright (C) 2018-2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "ie_api_impl.hpp"
#include "hetero/hetero_plugin_config.hpp"
@@ -35,6 +25,7 @@ std::map<std::string, InferenceEngine::Layout> layout_map = {{"ANY", Inferen
{"HW", InferenceEngine::Layout::HW},
{"NC", InferenceEngine::Layout::NC},
{"CN", InferenceEngine::Layout::CN},
{"NCDHW", InferenceEngine::Layout::NCDHW},
{"BLOCKED", InferenceEngine::Layout::BLOCKED}};
#define stringify(name) # name
#define IE_CHECK_CALL(expr) { \
@@ -44,14 +35,165 @@ std::map<std::string, InferenceEngine::Layout> layout_map = {{"ANY", Inferen
} \
} \
uint32_t getOptimalNumberOfRequests(const InferenceEngine::IExecutableNetwork::Ptr actual) {
try {
InferenceEngine::ResponseDesc response;
InferenceEngine::Parameter parameter_value;
IE_CHECK_CALL(actual->GetMetric(METRIC_KEY(SUPPORTED_METRICS), parameter_value, &response));
auto supported_metrics = parameter_value.as<std::vector<std::string>>();
std::string key = METRIC_KEY(OPTIMAL_NUMBER_OF_INFER_REQUESTS);
if (std::find(supported_metrics.begin(), supported_metrics.end(), key) != supported_metrics.end()) {
IE_CHECK_CALL(actual->GetMetric(key, parameter_value, &response));
if (parameter_value.is<unsigned int>())
return parameter_value.as<unsigned int>();
else
THROW_IE_EXCEPTION << "Unsupported format for " << key << "!"
<< " Please specify number of infer requests directly!";
} else {
THROW_IE_EXCEPTION << "Can't load network: " << key << " is not supported!"
<< " Please specify number of infer requests directly!";
}
} catch (const std::exception& ex) {
THROW_IE_EXCEPTION << "Can't load network: " << ex.what()
<< " Please specify number of infer requests directly!";
}
}
InferenceEnginePython::IENetwork::IENetwork(const std::string &model, const std::string &weights) {
PyObject* parse_parameter(const InferenceEngine::Parameter & param){
// Check for std::string
if (param.is<std::string>()){
return PyUnicode_FromString(param.as<std::string>().c_str());
}
// Check for int
else if (param.is<int>()) {
auto val = param.as<int>();
return PyLong_FromLong((long)val);
}
// Check for unsigned int
else if (param.is<unsigned int>()) {
auto val = param.as<unsigned int>();
return PyLong_FromUnsignedLong((unsigned long)val);
}
// Check for float
else if (param.is<float>()) {
auto val = param.as<float>();
return PyFloat_FromDouble((double)val);
}
// Check for bool
else if (param.is<bool>()) {
auto val = param.as<bool>();
return PyBool_FromLong(val);  // new reference; bare Py_True/Py_False would need Py_INCREF
}
// Check for std::vector<std::string>
else if (param.is<std::vector<std::string>>()) {
auto val = param.as<std::vector<std::string>>();
PyObject *list = PyList_New(0);
for (const auto & it : val){
PyObject *str_val = PyUnicode_FromString(it.c_str());
PyList_Append(list, str_val);
Py_DECREF(str_val);  // PyList_Append takes its own reference
}
return list;
}
// Check for std::vector<int>
else if (param.is<std::vector<int>>()){
auto val = param.as<std::vector<int>>();
PyObject *list = PyList_New(0);
for (const auto & it : val){
PyList_Append(list, PyLong_FromLong(it));
}
return list;
}
// Check for std::vector<unsigned int>
else if (param.is<std::vector<unsigned int>>()){
auto val = param.as<std::vector<unsigned int>>();
PyObject *list = PyList_New(0);
for (const auto & it : val){
PyList_Append(list, PyLong_FromLong(it));
}
return list;
}
// Check for std::vector<float>
else if (param.is<std::vector<float>>()){
auto val = param.as<std::vector<float>>();
PyObject *list = PyList_New(0);
for (const auto & it : val){
PyList_Append(list, PyFloat_FromDouble((double)it));
}
return list;
}
// Check for std::tuple<unsigned int, unsigned int>
else if (param.is<std::tuple<unsigned int, unsigned int >>()) {
auto val = param.as<std::tuple<unsigned int, unsigned int >>();
PyObject *tuple = PyTuple_New(2);
PyTuple_SetItem(tuple, 0, PyLong_FromUnsignedLong((unsigned long)std::get<0>(val)));
PyTuple_SetItem(tuple, 1, PyLong_FromUnsignedLong((unsigned long)std::get<1>(val)));
return tuple;
}
// Check for std::tuple<unsigned int, unsigned int, unsigned int>
else if (param.is<std::tuple<unsigned int, unsigned int, unsigned int >>()) {
auto val = param.as<std::tuple<unsigned int, unsigned int, unsigned int >>();
PyObject *tuple = PyTuple_New(3);
PyTuple_SetItem(tuple, 0, PyLong_FromUnsignedLong((unsigned long)std::get<0>(val)));
PyTuple_SetItem(tuple, 1, PyLong_FromUnsignedLong((unsigned long)std::get<1>(val)));
PyTuple_SetItem(tuple, 2, PyLong_FromUnsignedLong((unsigned long)std::get<2>(val)));
return tuple;
}
// Check for std::map<std::string, std::string>
else if (param.is<std::map<std::string, std::string>>()) {
auto val = param.as<std::map<std::string, std::string>>();
PyObject *dict = PyDict_New();
for (const auto &it : val){
PyDict_SetItemString(dict, it.first.c_str(), PyUnicode_FromString(it.second.c_str()));
}
return dict;
}
// Check for std::map<std::string, int>
else if (param.is<std::map<std::string, int>>()) {
auto val = param.as<std::map<std::string, int>>();
PyObject *dict = PyDict_New();
for (const auto &it : val){
PyDict_SetItemString(dict, it.first.c_str(), PyLong_FromLong((long)it.second));
}
return dict;
}
else {
PyErr_SetString(PyExc_TypeError, "Failed to convert parameter to Python representation!");
return (PyObject *) NULL;
}
}
InferenceEnginePython::IENetwork::IENetwork(const std::string &model, const std::string &weights, bool ngraph_compatibility = false) {
if (ngraph_compatibility){
InferenceEngine::IRReader ir_reader;
auto ngraph_function = ir_reader.read(model, weights);
actual = InferenceEngine::CNNNetwork(InferenceEngine::convertFunctionToICNNNetwork(ngraph_function));
} else {
InferenceEngine::CNNNetReader net_reader;
net_reader.ReadNetwork(model);
net_reader.ReadWeights(weights);
actual = net_reader.getNetwork();
}
name = actual.getName();
batch_size = actual.getBatchSize();
precision = actual.getPrecision().name();
}
InferenceEnginePython::IENetwork::IENetwork(const InferenceEngine::CNNNetwork& cnn_network)
: actual(cnn_network) {
name = actual.getName();
batch_size = actual.getBatchSize();
precision = actual.getPrecision().name();
}
void InferenceEnginePython::IENetwork::load_from_buffer(const char *xml, size_t xml_size, uint8_t *bin, size_t bin_size) {
InferenceEngine::CNNNetReader net_reader;
net_reader.ReadNetwork(xml, xml_size);
InferenceEngine::TensorDesc tensorDesc(InferenceEngine::Precision::U8, {bin_size}, InferenceEngine::Layout::C);
auto weights_blob = InferenceEngine::make_shared_blob<uint8_t>(tensorDesc, bin, bin_size);
net_reader.SetWeights(weights_blob);
name = net_reader.getName();
actual = net_reader.getNetwork();
batch_size = actual.getBatchSize();
precision = actual.getPrecision().name();
}
void InferenceEnginePython::IENetwork::serialize(const std::string &path_to_xml, const std::string &path_to_bin) {
@@ -86,7 +228,7 @@ InferenceEnginePython::IENetwork::getLayers() {
for (auto layer_iter : inputTo) {
InferenceEngine::CNNLayerPtr layer_in_data = layer_iter.second;
if (!layer_in_data) {
THROW_IE_EXCEPTION << "Layer which takes data " << data->name << " is nullptr";
THROW_IE_EXCEPTION << "Layer which takes data " << data->getName() << " is nullptr";
}
children.emplace_back(layer_in_data->name);
}
@@ -115,7 +257,7 @@ const std::map<std::string, InferenceEnginePython::InputInfo> InferenceEnginePyt
const InferenceEngine::InputsDataMap &inputsInfo = actual.getInputsInfo();
for (auto &in : inputsInfo) {
InferenceEnginePython::InputInfo info;
info.actual = *in.second;
info.actual = in.second;
const InferenceEngine::TensorDesc &inputTensorDesc = in.second->getTensorDesc();
info.dims = inputTensorDesc.getDims();
for (auto it : precision_map)
@@ -149,23 +291,8 @@ const std::map<std::string, InferenceEnginePython::OutputInfo> InferenceEnginePy
}
void
InferenceEnginePython::IENetwork::addOutputs(const std::vector<std::string> &out_layers, const std::string &precision) {
for (auto &&l : out_layers) {
InferenceEngine::OutputsDataMap outputsDataMap = actual.getOutputsInfo();
if (outputsDataMap.find(l) != outputsDataMap.end()) {
continue;
}
InferenceEngine::CNNLayerPtr cnnLayer = actual.getLayerByName(l.c_str());
std::vector<InferenceEngine::DataPtr> outData = cnnLayer->outData;
if (outData.size() != 1) {
std::cout << "Layer " << l << " has " << outData.size() << " output blobs and can not be set as output."
<< std::endl;
continue;
}
actual.addOutput(l);
InferenceEngine::OutputsDataMap outputsDataMapUpd = actual.getOutputsInfo();
outputsDataMapUpd[l]->setPrecision(precision_map[precision]);
}
InferenceEnginePython::IENetwork::addOutput(const std::string &out_layer, size_t port_id) {
actual.addOutput(out_layer, port_id);
}
void InferenceEnginePython::IENetwork::setBatch(const size_t size) {
@@ -191,9 +318,8 @@ const std::map<std::string, std::map<std::string, std::vector<float>>> Inference
return map;
}
void
InferenceEnginePython::IENetwork::setStats(
const std::map<std::string, std::map<std::string, std::vector<float>>> &stats) {
void InferenceEnginePython::IENetwork::setStats(const std::map<std::string, std::map<std::string,
std::vector<float>>> &stats) {
InferenceEngine::ICNNNetworkStats *pstats = nullptr;
InferenceEngine::ResponseDesc response;
IE_CHECK_CALL(((InferenceEngine::ICNNNetwork &) actual).getStats(&pstats, &response));
@@ -209,11 +335,11 @@ InferenceEnginePython::IENetwork::setStats(
}
void InferenceEnginePython::InputInfo::setPrecision(std::string precision) {
actual.setPrecision(precision_map[precision]);
actual->setPrecision(precision_map[precision]);
}
void InferenceEnginePython::InputInfo::setLayout(std::string layout) {
actual.setLayout(layout_map[layout]);
actual->setLayout(layout_map[layout]);
}
void InferenceEnginePython::OutputInfo::setPrecision(std::string precision) {
@@ -221,10 +347,11 @@ void InferenceEnginePython::OutputInfo::setPrecision(std::string precision) {
}
InferenceEnginePython::IEPlugin::IEPlugin(const std::string &device, const std::vector<std::string> &plugin_dirs) {
IE_SUPPRESS_DEPRECATED_START
InferenceEngine::PluginDispatcher dispatcher{plugin_dirs};
actual = dispatcher.getPluginByDevice(device);
const InferenceEngine::Version *pluginVersion;
actual->GetVersion(pluginVersion);
IE_SUPPRESS_DEPRECATED_END
auto pluginVersion = actual.GetVersion();
version = std::to_string(pluginVersion->apiVersion.major) + ".";
version += std::to_string(pluginVersion->apiVersion.minor) + ".";
version += pluginVersion->buildNumber;
@@ -232,17 +359,32 @@ InferenceEnginePython::IEPlugin::IEPlugin(const std::string &device, const std::
}
void InferenceEnginePython::IEPlugin::setInitialAffinity(const InferenceEnginePython::IENetwork &net) {
InferenceEngine::HeteroPluginPtr hetero_plugin(actual);
InferenceEngine::ResponseDesc response;
InferenceEngine::InferenceEnginePluginPtr hetero_plugin(actual);
InferenceEngine::QueryNetworkResult queryRes;
auto &network = net.actual;
IE_CHECK_CALL(hetero_plugin->SetAffinity(network, {}, &response));
hetero_plugin->QueryNetwork(network, {}, queryRes);
if (queryRes.rc != InferenceEngine::StatusCode::OK) {
THROW_IE_EXCEPTION << queryRes.resp.msg;
}
for (auto && layer : queryRes.supportedLayersMap) {
network.getLayerByName(layer.first.c_str())->affinity = layer.second;
}
}
std::set<std::string> InferenceEnginePython::IEPlugin::queryNetwork(const InferenceEnginePython::IENetwork &net) {
const InferenceEngine::CNNNetwork &network = net.actual;
InferenceEngine::QueryNetworkResult queryRes;
actual->QueryNetwork(network, queryRes);
return queryRes.supportedLayers;
actual.QueryNetwork(network, {}, queryRes);
std::set<std::string> supportedLayers;
for (auto && layer : queryRes.supportedLayersMap) {
supportedLayers.insert(layer.first);
}
return supportedLayers;
}
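The rewritten `queryNetwork` above flattens `QueryNetworkResult::supportedLayersMap` (layer name → device) into a plain set of layer names for the Python side. That reduction can be sketched in isolation; the map contents below are made-up placeholders, not output from a real plugin:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Collapse a {layer -> device} affinity map into the set of supported
// layer names, mirroring what queryNetwork() hands back to Python.
std::set<std::string> supported_layer_names(
        const std::map<std::string, std::string> &supportedLayersMap) {
    std::set<std::string> supportedLayers;
    for (const auto &layer : supportedLayersMap) {
        supportedLayers.insert(layer.first);
    }
    return supportedLayers;
}
```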
@@ -288,10 +430,9 @@ void InferenceEnginePython::IENetLayer::setPrecision(std::string precision) {
}
void InferenceEnginePython::IEPlugin::addCpuExtension(const std::string &extension_path) {
InferenceEngine::ResponseDesc response;
auto extension_ptr = InferenceEngine::make_so_pointer<InferenceEngine::IExtension>(extension_path);
auto extension = std::dynamic_pointer_cast<InferenceEngine::IExtension>(extension_ptr);
IE_CHECK_CALL(actual->AddExtension(extension, &response))
actual.AddExtension(extension);
}
std::unique_ptr<InferenceEnginePython::IEExecNetwork>
@@ -301,8 +442,12 @@ InferenceEnginePython::IEPlugin::load(const InferenceEnginePython::IENetwork &ne
InferenceEngine::ResponseDesc response;
auto exec_network = InferenceEnginePython::make_unique<InferenceEnginePython::IEExecNetwork>(net.name,
num_requests);
exec_network->actual = actual.LoadNetwork(net.actual, config);
IE_CHECK_CALL(actual->LoadNetwork(exec_network->actual, net.actual, config, &response))
if (0 == num_requests) {
num_requests = getOptimalNumberOfRequests(exec_network->actual);
exec_network->infer_requests.resize(num_requests);
}
for (size_t i = 0; i < num_requests; ++i) {
InferRequestWrap &infer_request = exec_network->infer_requests[i];
@@ -313,8 +458,7 @@ InferenceEnginePython::IEPlugin::load(const InferenceEnginePython::IENetwork &ne
}
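Both `IEPlugin::load` and the new `IECore::loadNetwork` treat `num_requests == 0` as "ask the device", falling back to `getOptimalNumberOfRequests`. A minimal sketch of that selection rule, with the optimal-count query stubbed out as a plain parameter (the real call needs an `ExecutableNetwork`):

```cpp
#include <cassert>
#include <cstddef>

// 0 means "let the plugin decide"; any positive value is taken verbatim.
// `optimal` stands in for getOptimalNumberOfRequests(exec_network->actual).
std::size_t resolve_num_requests(std::size_t requested, std::size_t optimal) {
    return requested == 0 ? optimal : requested;
}
```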
void InferenceEnginePython::IEPlugin::setConfig(const std::map<std::string, std::string> &config) {
InferenceEngine::ResponseDesc response;
IE_CHECK_CALL(actual->SetConfig(config, &response))
actual.SetConfig(config);
}
InferenceEnginePython::IEExecNetwork::IEExecNetwork(const std::string &name, size_t num_requests) :
@@ -322,14 +466,33 @@ InferenceEnginePython::IEExecNetwork::IEExecNetwork(const std::string &name, siz
}
void InferenceEnginePython::IEExecNetwork::infer() {
InferenceEngine::ResponseDesc response;
InferRequestWrap &request = infer_requests[0];
request.request_ptr->Infer(&response);
request.infer();
}
InferenceEnginePython::IENetwork InferenceEnginePython::IEExecNetwork::GetExecGraphInfo() {
InferenceEngine::ResponseDesc response;
InferenceEngine::ICNNNetwork::Ptr graph;
IE_CHECK_CALL(actual->GetExecGraphInfo(graph, &response));
return IENetwork(InferenceEngine::CNNNetwork(graph));
}
void InferenceEnginePython::InferRequestWrap::getBlobPtr(const std::string &blob_name, InferenceEngine::Blob::Ptr &blob_ptr)
{
PyObject* InferenceEnginePython::IEExecNetwork::getMetric(const std::string &metric_name) {
InferenceEngine::Parameter parameter;
InferenceEngine::ResponseDesc response;
IE_CHECK_CALL(actual->GetMetric(metric_name, parameter, &response));
return parse_parameter(parameter);
}
PyObject* InferenceEnginePython::IEExecNetwork::getConfig(const std::string &metric_name) {
InferenceEngine::Parameter parameter;
InferenceEngine::ResponseDesc response;
    IE_CHECK_CALL(actual->GetConfig(metric_name, parameter, &response));
return parse_parameter(parameter);
}
void InferenceEnginePython::InferRequestWrap::getBlobPtr(const std::string &blob_name,
InferenceEngine::Blob::Ptr &blob_ptr) {
InferenceEngine::ResponseDesc response;
IE_CHECK_CALL(request_ptr->GetBlob(blob_name.c_str(), blob_ptr, &response));
}
@@ -340,13 +503,41 @@ void InferenceEnginePython::InferRequestWrap::setBatch(int size) {
IE_CHECK_CALL(request_ptr->SetBatch(size, &response));
}
void latency_callback(InferenceEngine::IInferRequest::Ptr request, InferenceEngine::StatusCode code) {
if (code != InferenceEngine::StatusCode::OK) {
THROW_IE_EXCEPTION << "Async Infer Request failed with status code " << code;
}
InferenceEnginePython::InferRequestWrap *requestWrap;
InferenceEngine::ResponseDesc dsc;
request->GetUserData(reinterpret_cast<void **>(&requestWrap), &dsc);
auto end_time = Time::now();
auto execTime = std::chrono::duration_cast<ns>(end_time - requestWrap->start_time);
requestWrap->exec_time = static_cast<double>(execTime.count()) * 0.000001;
if (requestWrap->user_callback) {
requestWrap->user_callback(requestWrap->user_data, code);
}
}
void InferenceEnginePython::InferRequestWrap::setCyCallback(cy_callback callback, void *data) {
user_callback = callback;
user_data = data;
}
void InferenceEnginePython::InferRequestWrap::infer() {
InferenceEngine::ResponseDesc response;
start_time = Time::now();
IE_CHECK_CALL(request_ptr->Infer(&response));
auto end_time = Time::now();
auto execTime = std::chrono::duration_cast<ns>(end_time - start_time);
exec_time = static_cast<double>(execTime.count()) * 0.000001;
}
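`infer()` and the async completion callback derive `exec_time` the same way: a `Time::time_point` captured before the call, a nanosecond difference after it, scaled by `0.000001` into milliseconds. The pattern is easy to verify standalone (here timing a sleep rather than an inference request):

```cpp
#include <chrono>
#include <thread>

typedef std::chrono::high_resolution_clock Time;
typedef std::chrono::nanoseconds ns;

// Time an arbitrary operation the same way InferRequestWrap does and
// return the elapsed wall-clock time in milliseconds.
double timed_sleep_ms(int sleep_ms) {
    Time::time_point start_time = Time::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(sleep_ms));
    auto execTime = std::chrono::duration_cast<ns>(Time::now() - start_time);
    return static_cast<double>(execTime.count()) * 0.000001;
}
```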
void InferenceEnginePython::InferRequestWrap::infer_async() {
InferenceEngine::ResponseDesc response;
start_time = Time::now();
IE_CHECK_CALL(request_ptr->SetUserData(this, &response));
request_ptr->SetCompletionCallback(latency_callback);
IE_CHECK_CALL(request_ptr->StartAsync(&response));
}
@@ -382,6 +573,7 @@ InferenceEnginePython::InferRequestWrap::getPerformanceCounts() {
profile_info.layer_type = it.second.layer_type;
profile_info.cpu_time = it.second.cpu_uSec;
profile_info.real_time = it.second.realTime_uSec;
profile_info.execution_index = it.second.execution_index;
perf_map[it.first] = profile_info;
}
return perf_map;
@@ -394,3 +586,77 @@ std::string InferenceEnginePython::get_version() {
version_str += version->buildNumber;
return version_str;
}
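`get_version()` above assembles its result as `major.minor.buildNumber` joined with dots. The same assembly, with placeholder inputs rather than a real `InferenceEngine::Version`, looks like:

```cpp
#include <string>

// Build "major.minor.buildNumber" the way get_version() does.
// The argument values used in tests are placeholders, not a real IE version.
std::string make_version_string(int api_major, int api_minor, const char *buildNumber) {
    std::string version_str = std::to_string(api_major) + ".";
    version_str += std::to_string(api_minor) + ".";
    version_str += buildNumber;
    return version_str;
}
```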
InferenceEnginePython::IECore::IECore(const std::string & xmlConfigFile) {
actual = InferenceEngine::Core(xmlConfigFile);
}
std::map<std::string, InferenceEngine::Version> InferenceEnginePython::IECore::getVersions(const std::string &deviceName) {
return actual.GetVersions(deviceName);
}
std::unique_ptr<InferenceEnginePython::IEExecNetwork> InferenceEnginePython::IECore::loadNetwork(IENetwork network,
const std::string & deviceName, const std::map<std::string, std::string> & config, int num_requests){
InferenceEngine::ResponseDesc response;
auto exec_network = InferenceEnginePython::make_unique<InferenceEnginePython::IEExecNetwork>(network.name,
num_requests);
exec_network->actual = actual.LoadNetwork(network.actual, deviceName, config);
if (0 == num_requests) {
num_requests = getOptimalNumberOfRequests(exec_network->actual);
exec_network->infer_requests.resize(num_requests);
}
for (size_t i = 0; i < num_requests; ++i) {
InferRequestWrap &infer_request = exec_network->infer_requests[i];
IE_CHECK_CALL(exec_network->actual->CreateInferRequest(infer_request.request_ptr, &response))
}
return exec_network;
}
std::map<std::string, std::string> InferenceEnginePython::IECore::queryNetwork(InferenceEnginePython::IENetwork network,
const std::string &deviceName,
const std::map<std::string, std::string> &config) {
auto res = actual.QueryNetwork(network.actual, deviceName, config);
return res.supportedLayersMap;
}
void InferenceEnginePython::IECore::setConfig(const std::map<std::string, std::string> &config,
const std::string &deviceName) {
actual.SetConfig(config, deviceName);
}
void InferenceEnginePython::IECore::registerPlugin(const std::string & pluginName, const std::string &deviceName) {
actual.RegisterPlugin(pluginName, deviceName);
}
void InferenceEnginePython::IECore::unregisterPlugin(const std::string & deviceName){
actual.UnregisterPlugin(deviceName);
}
void InferenceEnginePython::IECore::registerPlugins(const std::string & xmlConfigFile){
actual.RegisterPlugins(xmlConfigFile);
}
void InferenceEnginePython::IECore::addExtension(const std::string & ext_lib_path, const std::string &deviceName) {
auto extension_ptr = InferenceEngine::make_so_pointer<InferenceEngine::IExtension>(ext_lib_path);
auto extension = std::dynamic_pointer_cast<InferenceEngine::IExtension>(extension_ptr);
actual.AddExtension(extension, deviceName);
}
std::vector<std::string> InferenceEnginePython::IECore::getAvailableDevices() {
return actual.GetAvailableDevices();
}
PyObject* InferenceEnginePython::IECore::getMetric(const std::string &deviceName, const std::string &name) {
InferenceEngine::Parameter param = actual.GetMetric(deviceName, name);
return parse_parameter(param);
}
PyObject* InferenceEnginePython::IECore::getConfig(const std::string &deviceName, const std::string &name) {
InferenceEngine::Parameter param = actual.GetConfig(deviceName, name);
return parse_parameter(param);
}


@@ -1,33 +1,29 @@
// Copyright (c) 2018 Intel Corporation
// Copyright (C) 2018-2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#pragma once
#include <ie_extension.h>
#include <iterator>
#include "Python.h"
#include <iterator>
#include <string>
#include <utility>
#include <map>
#include <vector>
#include <set>
#include <iostream>
#include <algorithm>
#include <sstream>
#include <inference_engine.hpp>
#include <chrono>
#include <ie_extension.h>
#include "inference_engine.hpp"
#include "../../../../../src/inference_engine/ie_ir_reader.hpp"
typedef std::chrono::high_resolution_clock Time;
typedef std::chrono::nanoseconds ns;
namespace InferenceEnginePython {
struct IENetLayer {
@@ -53,7 +49,7 @@ struct IENetLayer {
};
struct InputInfo {
InferenceEngine::InputInfo actual;
InferenceEngine::InputInfo::Ptr actual;
std::vector<size_t> dims;
std::string precision;
std::string layout;
@@ -85,10 +81,11 @@ struct IENetwork {
InferenceEngine::CNNNetwork actual;
std::string name;
std::size_t batch_size;
std::string precision;
void setBatch(const size_t size);
void addOutputs(const std::vector<std::string> &out_layers, const std::string &precision);
void addOutput(const std::string &out_layer, size_t port_id);
const std::vector<std::pair<std::string, InferenceEnginePython::IENetLayer>> getLayers();
@@ -104,13 +101,24 @@ struct IENetwork {
const std::map<std::string, std::map<std::string, std::vector<float>>> getStats();
IENetwork(const std::string &model, const std::string &weights);
void load_from_buffer(const char* xml, size_t xml_size, uint8_t* bin, size_t bin_size);
IENetwork(const std::string &model, const std::string &weights, bool ngraph_compatibility);
IENetwork(const InferenceEngine::CNNNetwork& cnn_network);
IENetwork() = default;
};
struct InferRequestWrap {
using cy_callback = void (*)(void*, int);
InferenceEngine::IInferRequest::Ptr request_ptr;
Time::time_point start_time;
double exec_time;
cy_callback user_callback;
void *user_data;
int status;
void infer();
@@ -118,6 +126,8 @@ struct InferRequestWrap {
int wait(int64_t timeout);
void setCyCallback(cy_callback callback, void *data);
void getBlobPtr(const std::string &blob_name, InferenceEngine::Blob::Ptr &blob_ptr);
void setBatch(int size);
@@ -133,7 +143,12 @@ struct IEExecNetwork {
IEExecNetwork(const std::string &name, size_t num_requests);
IENetwork GetExecGraphInfo();
void infer();
PyObject* getMetric(const std::string & metric_name);
PyObject* getConfig(const std::string & metric_name);
};
@@ -157,7 +172,25 @@ struct IEPlugin {
std::set<std::string> queryNetwork(const InferenceEnginePython::IENetwork &net);
InferenceEngine::InferenceEnginePluginPtr actual;
InferenceEngine::InferencePlugin actual;
};
struct IECore {
InferenceEngine::Core actual;
explicit IECore(const std::string & xmlConfigFile = std::string());
std::map<std::string, InferenceEngine::Version> getVersions(const std::string & deviceName);
std::unique_ptr<InferenceEnginePython::IEExecNetwork> loadNetwork(IENetwork network, const std::string & deviceName,
const std::map<std::string, std::string> & config, int num_requests);
std::map<std::string, std::string> queryNetwork(IENetwork network, const std::string & deviceName,
const std::map<std::string, std::string> & config);
void setConfig(const std::map<std::string, std::string> &config, const std::string & deviceName = std::string());
void registerPlugin(const std::string & pluginName, const std::string & deviceName);
void unregisterPlugin(const std::string & deviceName);
void registerPlugins(const std::string & xmlConfigFile);
void addExtension(const std::string & ext_lib_path, const std::string & deviceName);
std::vector<std::string> getAvailableDevices();
PyObject* getMetric(const std::string & deviceName, const std::string & name);
PyObject* getConfig(const std::string & deviceName, const std::string & name);
};
template<class T>


@@ -1,12 +1,12 @@
from libc.stddef cimport size_t
from libcpp cimport bool
from libcpp.string cimport string
from libcpp.vector cimport vector
from libcpp.map cimport map
from libcpp.set cimport set
from libcpp.pair cimport pair
from libcpp.memory cimport unique_ptr, shared_ptr
from libc.stdint cimport int64_t
from libc.stdint cimport int64_t, uint8_t
cdef extern from "<inference_engine.hpp>" namespace "InferenceEngine":
@@ -24,6 +24,14 @@ cdef extern from "<inference_engine.hpp>" namespace "InferenceEngine":
cdef cppclass Precision:
const char*name() const
cdef struct apiVersion:
int minor
int major
cdef cppclass Version:
const char *buildNumber
const char *description
apiVersion apiVersion
cdef extern from "ie_api_impl.hpp" namespace "InferenceEnginePython":
cdef cppclass IENetLayer:
@@ -45,14 +53,14 @@ cdef extern from "ie_api_impl.hpp" namespace "InferenceEnginePython":
vector[size_t] dims
string precision
string layout
void setPrecision(string precision)
void setLayout(string layout)
void setPrecision(string precision) except +
void setLayout(string layout) except +
cdef cppclass OutputInfo:
vector[size_t] dims
string precision
string layout
void setPrecision(string precision)
void setPrecision(string precision) except +
cdef cppclass ProfileInfo:
string status
@@ -69,17 +77,21 @@ cdef extern from "ie_api_impl.hpp" namespace "InferenceEnginePython":
cdef cppclass IEExecNetwork:
vector[InferRequestWrap] infer_requests
IENetwork GetExecGraphInfo() except +
object getMetric(const string & metric_name)
object getConfig(const string & metric_name)
cdef cppclass IENetwork:
IENetwork() except +
IENetwork(const string &, const string &) except +
IENetwork(const string &, const string &, bool ngraph_compatibility) except +
string name
size_t batch_size
string precision
map[string, vector[size_t]] inputs
const vector[pair[string, IENetLayer]] getLayers() except +
map[string, InputInfo] getInputs() except +
map[string, OutputInfo] getOutputs() except +
void addOutputs(vector[string] &, string &) except +
void addOutput(string &, size_t) except +
void setAffinity(map[string, string] & types_affinity_map, map[string, string] & layers_affinity_map) except +
void setBatch(size_t size) except +
void setLayerParams(map[string, map[string, string]] params_map) except +
@@ -87,6 +99,7 @@ cdef extern from "ie_api_impl.hpp" namespace "InferenceEnginePython":
void reshape(map[string, vector[size_t]] input_shapes) except +
void setStats(map[string, map[string, vector[float]]] & stats) except +
map[string, map[string, vector[float]]] getStats() except +
void load_from_buffer(const char*xml, size_t xml_size, uint8_t*bin, size_t bin_size) except +
cdef cppclass IEPlugin:
IEPlugin() except +
@@ -100,12 +113,31 @@ cdef extern from "ie_api_impl.hpp" namespace "InferenceEnginePython":
string version
cdef cppclass InferRequestWrap:
void getBlobPtr(const string &blob_name, Blob.Ptr &blob_ptr)
double exec_time;
void getBlobPtr(const string & blob_name, Blob.Ptr & blob_ptr) except +
map[string, ProfileInfo] getPerformanceCounts() except +
void infer() except +
void infer_async() except +
int wait(int64_t timeout) except +
void setBatch(int size) except +
void setCyCallback(void (*)(void*, int), void *) except +
cdef cppclass IECore:
IECore() except +
IECore(const string & xml_config_file) except +
map[string, Version] getVersions(const string & deviceName) except +
unique_ptr[IEExecNetwork] loadNetwork(IENetwork network, const string deviceName,
const map[string, string] & config, int num_requests) except +
map[string, string] queryNetwork(IENetwork network, const string deviceName,
const map[string, string] & config) except +
void setConfig(const map[string, string] & config, const string & deviceName) except +
void registerPlugin(const string & pluginName, const string & deviceName) except +
void unregisterPlugin(const string & deviceName) except +
void registerPlugins(const string & xmlConfigFile) except +
void addExtension(const string & ext_lib_path, const string & deviceName) except +
vector[string] getAvailableDevices() except +
object getMetric(const string & deviceName, const string & name) except +
object getConfig(const string & deviceName, const string & name) except +
cdef T*get_buffer[T](Blob &)


@@ -0,0 +1,39 @@
# If the pyx file is a C++ file, we should specify that here.
set (CMAKE_INCLUDE_CURRENT_DIR ON)
set (TARGET_NAME "statistics_collector_api")
set (CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PYTHON_BRIDGE_OUTPUT_DIRECTORY}/tools/statistics_collector)
set (CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})
file(GLOB SOURCE
${CMAKE_CURRENT_SOURCE_DIR}/*.pyx
)
set_source_files_properties(${SOURCE} PROPERTIES CYTHON_IS_CXX TRUE
)
include_directories (
${CMAKE_SOURCE_DIR}/samples/common
)
## Compatibility with python 2.7 which has deprecated "register" specifier
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
add_definitions("-Wno-register")
endif()
cython_add_module (${TARGET_NAME} ${SOURCE})
set_target_properties (${TARGET_NAME} PROPERTIES CXX_STANDARD 11 LINKER_LANGUAGE CXX)
target_link_libraries (${TARGET_NAME} PRIVATE ${InferenceEngine_LIBRARIES})
if(TARGET IE::statistics_collector_s)
target_link_libraries(${TARGET_NAME} PRIVATE IE::statistics_collector_s)
else()
target_link_libraries(${TARGET_NAME} PRIVATE statistics_collector_s)
endif()
# perform copy
ADD_CUSTOM_COMMAND (TARGET ${TARGET_NAME}
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy ${PYTHON_BRIDGE_SRC_ROOT}/src/openvino/tools/__init__.py ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/../__init__.py
COMMAND ${CMAKE_COMMAND} -E copy ${PYTHON_BRIDGE_SRC_ROOT}/src/openvino/tools/statistics_collector/__init__.py ${CMAKE_LIBRARY_OUTPUT_DIRECTORY}/__init__.py
)


@@ -0,0 +1,2 @@
from .statistics_collector_api import *
__all__ = ['StatisticsCollector']


@@ -0,0 +1,8 @@
from .cimport statistics_collector_c as C
from libcpp.string cimport string
cdef class StatisticsCollector:
cdef C.StatisticsCollector* _impl
cdef C.ct_preprocessingOptions ppOptions
cpdef void collectStatisticsToIR(self, str outModelName, str output_precision)


@@ -0,0 +1,25 @@
#distutils: language=c++
from .cimport statistics_collector_c as C
cdef class StatisticsCollector:
def __cinit__(self,
deviceName: [str, bytes],
custom_cpu_library: [str, bytes],
custom_cldnn: [str, bytes],
modelFilePath: [str, bytes],
imagesPath: [str, bytes],
img_number: int,
batch: int,
progress: [str, bytes]):
self.ppOptions._pp_size = 0
self.ppOptions._pp_width = 0
self.ppOptions._pp_height = 0
self._impl = new C.StatisticsCollector(deviceName.encode(), custom_cpu_library.encode(), custom_cldnn.encode(), modelFilePath.encode(), imagesPath.encode(), img_number, batch, self.ppOptions, progress.encode())
cpdef void collectStatisticsToIR(self, str outModelName, str output_precision):
self._impl.collectStatisticsToIR(outModelName.encode(), output_precision.encode())
def __dealloc__(self):
if self._impl is not NULL:
del self._impl


@@ -0,0 +1,24 @@
from libc.stddef cimport size_t
from libcpp.string cimport string
cdef extern from "<statistics_processor.hpp>":
cdef struct ct_preprocessingOptions:
string _pp_type
size_t _pp_size
size_t _pp_width
size_t _pp_height
cdef cppclass StatisticsCollector:
StatisticsCollector(const string& deviceName,
const string& custom_cpu_library,
const string& custom_cldnn,
const string& modelFilePath,
const string& imagesPath,
size_t img_number,
size_t batch,
const ct_preprocessingOptions& preprocessingOptions,
const string& progress) except +
void collectStatisticsToIR(const string& outModelName, const string& output_precision)
ct_preprocessingOptions ppOptions


@@ -1,11 +1,11 @@
// Copyright (C) 2018 Intel Corporation
// Copyright (C) 2018-2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <builders/ie_layer_fragment.hpp>
#include <ie_inetwork.hpp>
#include <builders/ie_layer_decorator.hpp>
#include <ie_network.hpp>
#include <string>
namespace InferenceEngine {
@@ -14,7 +14,7 @@ namespace Builder {
/**
* @brief The class represents a builder for ArgMax layer
*/
class INFERENCE_ENGINE_API_CLASS(ArgMaxLayer): public LayerFragment {
class INFERENCE_ENGINE_API_CLASS(ArgMaxLayer): public LayerDecorator {
public:
/**
* @brief The constructor creates a builder with the name
@@ -23,9 +23,14 @@ public:
explicit ArgMaxLayer(const std::string& name = "");
/**
* @brief The constructor creates a builder from generic builder
* @param genLayer generic builder
* @param layer pointer to generic builder
*/
explicit ArgMaxLayer(Layer& genLayer);
explicit ArgMaxLayer(const Layer::Ptr& layer);
/**
* @brief The constructor creates a builder from generic builder
* @param layer constant pointer to generic builder
*/
explicit ArgMaxLayer(const Layer::CPtr& layer);
/**
* @brief Sets the name for the layer
* @param name Layer name


@@ -1,11 +1,11 @@
// Copyright (C) 2018 Intel Corporation
// Copyright (C) 2018-2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <builders/ie_layer_fragment.hpp>
#include <ie_inetwork.hpp>
#include <builders/ie_layer_decorator.hpp>
#include <ie_network.hpp>
#include <string>
namespace InferenceEngine {
@@ -14,7 +14,7 @@ namespace Builder {
/**
* @brief The class represents a builder for BatchNormalization layer
*/
class INFERENCE_ENGINE_API_CLASS(BatchNormalizationLayer): public LayerFragment {
class INFERENCE_ENGINE_API_CLASS(BatchNormalizationLayer): public LayerDecorator {
public:
/**
* @brief The constructor creates a builder with the name
@@ -23,9 +23,14 @@ public:
explicit BatchNormalizationLayer(const std::string& name = "");
/**
* @brief The constructor creates a builder from generic builder
* @param genLayer generic builder
* @param layer pointer to generic builder
*/
explicit BatchNormalizationLayer(Layer& genLayer);
explicit BatchNormalizationLayer(const Layer::Ptr& layer);
/**
* @brief The constructor creates a builder from generic builder
* @param layer constant pointer to generic builder
*/
explicit BatchNormalizationLayer(const Layer::CPtr& layer);
/**
* @brief Sets the name for the layer
* @param name Layer name
@@ -45,19 +50,6 @@ public:
*/
BatchNormalizationLayer& setPort(const Port &port);
/**
* @brief Sets weights for layer
* @param weights Constant blob with weights
* @return reference to layer builder
*/
BatchNormalizationLayer& setWeights(const Blob::CPtr& weights);
/**
* @brief Sets biases for layer
* @param biases Constant blob with biases
* @return reference to layer builder
*/
BatchNormalizationLayer& setBiases(const Blob::CPtr& biases);
/**
* @brief Returns epsilon
* @return Epsilon
@@ -69,12 +61,6 @@ public:
* @return reference to layer builder
*/
BatchNormalizationLayer& setEpsilon(float eps);
/**
* @brief Validates layer before creation
* @param layer generic layer builder
*/
static void validate(const Layer& layer);
};
} // namespace Builder


@@ -1,11 +1,11 @@
// Copyright (C) 2018 Intel Corporation
// Copyright (C) 2018-2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <builders/ie_layer_fragment.hpp>
#include <ie_inetwork.hpp>
#include <builders/ie_layer_decorator.hpp>
#include <ie_network.hpp>
#include <string>
namespace InferenceEngine {
@@ -14,7 +14,7 @@ namespace Builder {
/**
* @brief The class represents a builder for Clamp layer
*/
class INFERENCE_ENGINE_API_CLASS(ClampLayer): public LayerFragment {
class INFERENCE_ENGINE_API_CLASS(ClampLayer): public LayerDecorator {
public:
/**
* @brief The constructor creates a builder with the name
@@ -23,9 +23,14 @@ public:
explicit ClampLayer(const std::string& name = "");
/**
* @brief The constructor creates a builder from generic builder
* @param genLayer generic builder
* @param layer pointer to generic builder
*/
explicit ClampLayer(Layer& genLayer);
explicit ClampLayer(const Layer::Ptr& layer);
/**
* @brief The constructor creates a builder from generic builder
* @param layer constant pointer to generic builder
*/
explicit ClampLayer(const Layer::CPtr& layer);
/**
* @brief Sets the name for the layer
* @param name Layer name


@@ -1,11 +1,11 @@
// Copyright (C) 2018 Intel Corporation
// Copyright (C) 2018-2019 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//
#pragma once
#include <builders/ie_layer_fragment.hpp>
#include <ie_inetwork.hpp>
#include <builders/ie_layer_decorator.hpp>
#include <ie_network.hpp>
#include <string>
#include <vector>
@@ -15,7 +15,7 @@ namespace Builder {
/**
* @brief The class represents a builder for Concat layer
*/
class INFERENCE_ENGINE_API_CLASS(ConcatLayer): public LayerFragment {
class INFERENCE_ENGINE_API_CLASS(ConcatLayer): public LayerDecorator {
public:
/**
* @brief The constructor creates a builder with the name
@@ -24,9 +24,14 @@ public:
explicit ConcatLayer(const std::string& name = "");
/**
* @brief The constructor creates a builder from generic builder
* @param genLayer generic builder
* @param layer pointer to generic builder
*/
explicit ConcatLayer(Layer& genLayer);
explicit ConcatLayer(const Layer::Ptr& layer);
/**
* @brief The constructor creates a builder from generic builder
* @param layer constant pointer to generic builder
*/
explicit ConcatLayer(const Layer::CPtr& layer);
/**
* @brief Sets the name for the layer
* @param name Layer name
@@ -67,9 +72,6 @@ public:
* @return reference to layer builder
*/
ConcatLayer& setAxis(size_t axis);
private:
size_t axis;
};
} // namespace Builder


@@ -1,11 +1,11 @@
-// Copyright (C) 2018 Intel Corporation
+// Copyright (C) 2018-2019 Intel Corporation
 // SPDX-License-Identifier: Apache-2.0
 //
 #pragma once
-#include <builders/ie_layer_fragment.hpp>
-#include <ie_inetwork.hpp>
+#include <builders/ie_layer_decorator.hpp>
+#include <ie_network.hpp>
 #include <string>
 namespace InferenceEngine {
@@ -14,7 +14,7 @@ namespace Builder {
 /**
  * @brief The class represents a builder for Const layer
  */
-class INFERENCE_ENGINE_API_CLASS(ConstLayer): public LayerFragment {
+class INFERENCE_ENGINE_API_CLASS(ConstLayer): public LayerDecorator {
 public:
     /**
      * @brief The constructor creates a builder with the name
@@ -23,9 +23,14 @@ public:
     explicit ConstLayer(const std::string& name = "");
     /**
      * @brief The constructor creates a builder from generic builder
-     * @param genLayer generic builder
+     * @param layer pointer to generic builder
      */
-    explicit ConstLayer(Layer& genLayer);
+    explicit ConstLayer(const Layer::Ptr& layer);
+    /**
+     * @brief The constructor creates a builder from generic builder
+     * @param layer constant pointer to generic builder
+     */
+    explicit ConstLayer(const Layer::CPtr& layer);
     /**
      * @brief Sets the name for the layer
      * @param name Layer name
@@ -51,6 +56,12 @@ public:
      * @return reference to layer builder
      */
     ConstLayer& setData(const Blob::CPtr& data);
+    /**
+     * @brief Returns constant data
+     * @return constant blob with data
+     */
+    const Blob::CPtr& getData() const;
 };
 }  // namespace Builder


@@ -1,11 +1,11 @@
-// Copyright (C) 2018 Intel Corporation
+// Copyright (C) 2018-2019 Intel Corporation
 // SPDX-License-Identifier: Apache-2.0
 //
 #pragma once
-#include <builders/ie_layer_fragment.hpp>
-#include <ie_inetwork.hpp>
+#include <builders/ie_layer_decorator.hpp>
+#include <ie_network.hpp>
 #include <vector>
 #include <string>
@@ -15,7 +15,7 @@ namespace Builder {
 /**
  * @brief The class represents a builder for ArgMax layer
  */
-class INFERENCE_ENGINE_API_CLASS(ConvolutionLayer): public LayerFragment {
+class INFERENCE_ENGINE_API_CLASS(ConvolutionLayer): public LayerDecorator {
 public:
     /**
      * @brief The constructor creates a builder with the name
@@ -24,14 +24,14 @@ public:
     explicit ConvolutionLayer(const std::string& name = "");
     /**
      * @brief The constructor creates a builder from generic builder
-     * @param genLayer generic builder
+     * @param layer pointer to generic builder
      */
-    explicit ConvolutionLayer(Layer& genLayer);
+    explicit ConvolutionLayer(const Layer::Ptr& layer);
     /**
-     * @brief Operator creates generic layer builder
-     * @return Generic layer builder
+     * @brief The constructor creates a builder from generic builder
+     * @param layer constant pointer to generic builder
      */
-    operator Layer() const override;
+    explicit ConvolutionLayer(const Layer::CPtr& layer);
     /**
      * @brief Sets the name for the layer
@@ -39,19 +39,6 @@ public:
      */
     ConvolutionLayer& setName(const std::string& name);
-    /**
-     * @brief Sets weights for layer
-     * @param weights Constant blob with weights
-     * @return reference to layer builder
-     */
-    ConvolutionLayer& setWeights(const Blob::CPtr& weights);
-    /**
-     * @brief Sets biases for layer
-     * @param biases Constant blob with biases
-     * @return reference to layer builder
-     */
-    ConvolutionLayer& setBiases(const Blob::CPtr& biases);
     /**
      * @brief Returns input port
      * @return Input port
@@ -151,12 +138,6 @@ public:
      * @return reference to layer builder
      */
     ConvolutionLayer& setOutDepth(size_t outDepth);
-    /**
-     * @brief Validates layer before creation
-     * @param layer generic layer builder
-     */
-    static void validate(const Layer& layer);
 };
 }  // namespace Builder


@@ -1,11 +1,11 @@
-// Copyright (C) 2018 Intel Corporation
+// Copyright (C) 2018-2019 Intel Corporation
 // SPDX-License-Identifier: Apache-2.0
 //
 #pragma once
-#include <builders/ie_layer_fragment.hpp>
-#include <ie_inetwork.hpp>
+#include <builders/ie_layer_decorator.hpp>
+#include <ie_network.hpp>
 #include <string>
 #include <vector>
@@ -15,7 +15,7 @@ namespace Builder {
 /**
  * @brief The class represents a builder for Crop layer
  */
-class INFERENCE_ENGINE_API_CLASS(CropLayer): public LayerFragment {
+class INFERENCE_ENGINE_API_CLASS(CropLayer): public LayerDecorator {
 public:
     /**
      * @brief The constructor creates a builder with the name
@@ -24,9 +24,14 @@ public:
     explicit CropLayer(const std::string& name = "");
     /**
      * @brief The constructor creates a builder from generic builder
-     * @param genLayer generic builder
+     * @param layer pointer to generic builder
      */
-    explicit CropLayer(Layer& genLayer);
+    explicit CropLayer(const Layer::Ptr& layer);
+    /**
+     * @brief The constructor creates a builder from generic builder
+     * @param layer constant pointer to generic builder
+     */
+    explicit CropLayer(const Layer::CPtr& layer);
     /**
      * @brief Sets the name for the layer
      * @param name Layer name
@@ -78,12 +83,6 @@ public:
      * @return reference to layer builder
      */
     CropLayer& setOffset(const std::vector<size_t>& offsets);
-    /**
-     * @brief Validates layer before creation
-     * @param layer generic layer builder
-     */
-    static void validate(const Layer& layer);
 };
 }  // namespace Builder
