Files
openvino/samples/python/model_creation_sample/model_creation_sample.py
Ilya Lavrenov a883dc0b85 DOCS: ported changes from 2022.1 release branch (#11206)
2022-03-24 22:27:29 +03:00

212 lines · 9.0 KiB · Python · Executable File

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright (C) 2018-2022 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
import logging as log
import sys
import typing
from functools import reduce

import numpy as np
from openvino.preprocess import PrePostProcessor
from openvino.runtime import (Core, Layout, Model, Shape, Type, op, opset1,
                              opset8, set_batch)

from data import digits

def create_ngraph_function(model_path: str) -> Model:
    """Create a model on the fly from the source code using ngraph"""
    def shape_and_length(shape: list) -> typing.Tuple[list, int]:
        length = reduce(lambda x, y: x * y, shape)
        return shape, length

    weights = np.fromfile(model_path, dtype=np.float32)
    weights_offset = 0
    padding_begin = padding_end = [0, 0]

    # input
    input_shape = [64, 1, 28, 28]
    param_node = op.Parameter(Type.f32, Shape(input_shape))

    # convolution 1
    conv_1_kernel_shape, conv_1_kernel_length = shape_and_length([20, 1, 5, 5])
    conv_1_kernel = op.Constant(Type.f32, Shape(conv_1_kernel_shape), weights[0:conv_1_kernel_length].tolist())
    weights_offset += conv_1_kernel_length
    conv_1_node = opset8.convolution(param_node, conv_1_kernel, [1, 1], padding_begin, padding_end, [1, 1])

    # add 1
    add_1_kernel_shape, add_1_kernel_length = shape_and_length([1, 20, 1, 1])
    add_1_kernel = op.Constant(Type.f32, Shape(add_1_kernel_shape),
                               weights[weights_offset : weights_offset + add_1_kernel_length])
    weights_offset += add_1_kernel_length
    add_1_node = opset8.add(conv_1_node, add_1_kernel)

    # maxpool 1
    maxpool_1_node = opset1.max_pool(add_1_node, [2, 2], padding_begin, padding_end, [2, 2], 'ceil')

    # convolution 2
    conv_2_kernel_shape, conv_2_kernel_length = shape_and_length([50, 20, 5, 5])
    conv_2_kernel = op.Constant(Type.f32, Shape(conv_2_kernel_shape),
                                weights[weights_offset : weights_offset + conv_2_kernel_length],
                                )
    weights_offset += conv_2_kernel_length
    conv_2_node = opset8.convolution(maxpool_1_node, conv_2_kernel, [1, 1], padding_begin, padding_end, [1, 1])

    # add 2
    add_2_kernel_shape, add_2_kernel_length = shape_and_length([1, 50, 1, 1])
    add_2_kernel = op.Constant(Type.f32, Shape(add_2_kernel_shape),
                               weights[weights_offset : weights_offset + add_2_kernel_length],
                               )
    weights_offset += add_2_kernel_length
    add_2_node = opset8.add(conv_2_node, add_2_kernel)

    # maxpool 2
    maxpool_2_node = opset1.max_pool(add_2_node, [2, 2], padding_begin, padding_end, [2, 2], 'ceil')

    # reshape 1
    reshape_1_dims, reshape_1_length = shape_and_length([2])
    # workaround to get int64 weights from float32 ndarray w/o unnecessary copying
    dtype_weights = np.frombuffer(
        weights[weights_offset : weights_offset + 2 * reshape_1_length],
        dtype=np.int64,
    )
    reshape_1_kernel = op.Constant(Type.i64, Shape(list(dtype_weights.shape)), dtype_weights)
    weights_offset += 2 * reshape_1_length
    reshape_1_node = opset8.reshape(maxpool_2_node, reshape_1_kernel, True)

    # matmul 1
    matmul_1_kernel_shape, matmul_1_kernel_length = shape_and_length([500, 800])
    matmul_1_kernel = op.Constant(Type.f32, Shape(matmul_1_kernel_shape),
                                  weights[weights_offset : weights_offset + matmul_1_kernel_length],
                                  )
    weights_offset += matmul_1_kernel_length
    matmul_1_node = opset8.matmul(reshape_1_node, matmul_1_kernel, False, True)

    # add 3
    add_3_kernel_shape, add_3_kernel_length = shape_and_length([1, 500])
    add_3_kernel = op.Constant(Type.f32, Shape(add_3_kernel_shape),
                               weights[weights_offset : weights_offset + add_3_kernel_length],
                               )
    weights_offset += add_3_kernel_length
    add_3_node = opset8.add(matmul_1_node, add_3_kernel)

    # ReLU
    relu_node = opset8.relu(add_3_node)

    # reshape 2
    reshape_2_kernel = op.Constant(Type.i64, Shape(list(dtype_weights.shape)), dtype_weights)
    reshape_2_node = opset8.reshape(relu_node, reshape_2_kernel, True)

    # matmul 2
    matmul_2_kernel_shape, matmul_2_kernel_length = shape_and_length([10, 500])
    matmul_2_kernel = op.Constant(Type.f32, Shape(matmul_2_kernel_shape),
                                  weights[weights_offset : weights_offset + matmul_2_kernel_length],
                                  )
    weights_offset += matmul_2_kernel_length
    matmul_2_node = opset8.matmul(reshape_2_node, matmul_2_kernel, False, True)

    # add 4
    add_4_kernel_shape, add_4_kernel_length = shape_and_length([1, 10])
    add_4_kernel = op.Constant(Type.f32, Shape(add_4_kernel_shape),
                               weights[weights_offset : weights_offset + add_4_kernel_length],
                               )
    weights_offset += add_4_kernel_length
    add_4_node = opset8.add(matmul_2_node, add_4_kernel)

    # softmax
    softmax_axis = 1
    softmax_node = opset8.softmax(add_4_node, softmax_axis)

    return Model(softmax_node, [param_node], 'lenet')

def main():
    log.basicConfig(format='[ %(levelname)s ] %(message)s', level=log.INFO, stream=sys.stdout)

    # Parsing and validation of input arguments
    if len(sys.argv) != 3:
        log.info(f'Usage: {sys.argv[0]} <path_to_model> <device_name>')
        return 1

    model_path = sys.argv[1]
    device_name = sys.argv[2]

    labels = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
    number_top = 1

    # ---------------------------Step 1. Initialize OpenVINO Runtime Core--------------------------------------------------
    log.info('Creating OpenVINO Runtime Core')
    core = Core()

    # ---------------------------Step 2. Read a model in OpenVINO Intermediate Representation------------------------------
    log.info(f'Loading the model using ngraph function with weights from {model_path}')
    model = create_ngraph_function(model_path)

    # ---------------------------Step 3. Apply preprocessing----------------------------------------------------------
    ppp = PrePostProcessor(model)
    # 1) Set input tensor information:
    # - input() provides information about a single model input
    # - precision of tensor is supposed to be 'u8'
    # - layout of data is 'NHWC'
    ppp.input().tensor() \
        .set_element_type(Type.u8) \
        .set_layout(Layout('NHWC'))  # noqa: N400
    # 2) Here we suppose the model has 'NCHW' layout for input
    ppp.input().model().set_layout(Layout('NCHW'))
    # 3) Set output tensor information:
    # - precision of tensor is supposed to be 'f32'
    ppp.output().tensor().set_element_type(Type.f32)
    # 4) Apply preprocessing, modifying the original 'model'
    model = ppp.build()

    # Set a batch size equal to the number of input images
    set_batch(model, digits.shape[0])

    # ---------------------------Step 4. Loading model to the device-------------------------------------------------------
    log.info('Loading the model to the plugin')
    compiled_model = core.compile_model(model, device_name)

    # ---------------------------Step 5. Prepare input---------------------------------------------------------------------
    n, c, h, w = model.input().shape
    input_data = np.ndarray(shape=(n, c, h, w))
    for i in range(n):
        image = digits[i].reshape(28, 28)
        image = image[:, :, np.newaxis]
        input_data[i] = image

    # ---------------------------Step 6. Do inference----------------------------------------------------------------------
    log.info('Starting inference in synchronous mode')
    results = compiled_model.infer_new_request({0: input_data})

    # ---------------------------Step 7. Process output--------------------------------------------------------------------
    predictions = next(iter(results.values()))
    log.info(f'Top {number_top} results:')
    for i in range(n):
        probs = predictions[i]
        # Get an array of number_top class IDs in descending order of probability
        top_n_indexes = np.argsort(probs)[-number_top:][::-1]

        header = 'classid probability'
        header = header + ' label' if labels else header

        log.info(f'Image {i}')
        log.info('')
        log.info(header)
        log.info('-' * len(header))
        for class_id in top_n_indexes:
            probability_indent = ' ' * (len('classid') - len(str(class_id)) + 1)
            label_indent = ' ' * (len('probability') - 8) if labels else ''
            label = labels[class_id] if labels else ''
            log.info(f'{class_id}{probability_indent}{probs[class_id]:.7f}{label_indent}{label}')
        log.info('')

    # ----------------------------------------------------------------------------------------------------------------------
    log.info('This sample is an API example, for any performance measurements please use the dedicated benchmark_app tool\n')
    return 0


if __name__ == '__main__':
    sys.exit(main())
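The `create_ngraph_function` above consumes a single flat `float32` weights file, slicing each kernel out at a running offset and tracking that offset manually. A minimal standalone sketch of that pattern (using a hypothetical two-tensor layout and a made-up `take` helper, not the actual LeNet file format) could look like:

```python
import numpy as np
from functools import reduce


def take(weights, offset, shape):
    """Slice the next product(shape) values at `offset`, reshape them,
    and return the chunk together with the advanced offset."""
    length = reduce(lambda x, y: x * y, shape)
    chunk = weights[offset:offset + length].reshape(shape)
    return chunk, offset + length


# Hypothetical flat buffer holding a 2x3 kernel followed by a 3-element bias
flat = np.arange(9, dtype=np.float32)

offset = 0
kernel, offset = take(flat, offset, [2, 3])   # consumes the first 6 values
bias, offset = take(flat, offset, [3])        # consumes the next 3 values

print(kernel.shape, bias.tolist(), offset)    # (2, 3) [6.0, 7.0, 8.0] 9
```

Returning the advanced offset from the helper avoids the repeated `weights_offset += ...` bookkeeping the sample does by hand; the sample keeps the explicit form so each layer's slice arithmetic stays visible.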