Commit Graph

75 Commits

Author SHA1 Message Date
Mingyu Kim
de4bd44a38 [benchmark_app] Fix spacing (#9759) 2022-01-19 12:16:40 +03:00
Maxim Gordeev
ec3283ebe1 [IE SAMPLES] activated NCC tool for c++ samples (#9600)
* [IE SAMPLES] activated NCC tool for c++ samples

* exclude ov_ncc_naming_style for tests

* fixed NCC hit

* Added support for source files in samples

* changed style of methods for benchmark

* changed style for speech sample

* changed code style

* changed code style for shared_tensor

* benchmark changes

* changed remote_tensors_filling

* fixed notes

* rebase of branch
2022-01-19 01:08:07 +03:00
Alexey Suhov
a79830cb55 Update year to 2022 in copyright notice (#9755) 2022-01-19 01:07:49 +03:00
Dmitry Pigasin
071dc5aef6 [IE Python Speech Sample] Fix problem with different utterance names in input files (#9678)
* Fix problem with different utterance names in input files

* Update file utils

* Change cw arg errors to pass tests

* refactor variable names
2022-01-18 23:21:25 +03:00
Alexey Lebedev
9a59e871eb [PYTHON API] fix hash operator for ports (#9673)
* Fix get_node call

* Add operators and define hash

* rename infer request property

* add new line

* remove unused var

* Move tensor getters to InferRequestWrapper

* check node hash

* add new line

* fix samples

Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2022-01-18 21:09:03 +03:00
Nadezhda Ageeva
d09c09d9fd [GNA] Deprecate GNA_SW mode (#9738) 2022-01-18 18:26:55 +03:00
Ivan Vikhrev
41aadfd116 remove ifdef opencv related to load & dump configs (#9736) 2022-01-18 14:08:45 +03:00
Fedor Zharinov
14f8614da6 Benchmarkapp: Added processing of inputs/outputs by index (#9703)
* Added processing of inputs/outputs by index

* fix

* All tensors' get_friendly_name calls are replaced with get_any_name

* stylefix
2022-01-18 13:40:54 +03:00
Ivan Vikhrev
a2cf98bebb [IE Samples] json configuration reader and dumper for benchmark_app (#9648)
* added load_config and dump_config functions implemented with json library

* add warning, upd readme

* Update samples/cpp/benchmark_app/README.md

Co-authored-by: Fedor Zharinov <fedor.zharinov@intel.com>

Co-authored-by: Fedor Zharinov <fedor.zharinov@intel.com>
2022-01-18 11:22:47 +03:00
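
For illustration, a minimal sketch of how such load_config/dump_config helpers could be built on nlohmann::json. The function names come from the commit message above; the file layout (a device-to-options map) and the exact types are assumptions, not code from this PR.

```cpp
#include <fstream>
#include <map>
#include <string>

#include <nlohmann/json.hpp>

// Assumed layout: {"DEVICE": {"KEY": "VALUE", ...}, ...}
using Config = std::map<std::string, std::map<std::string, std::string>>;

// Read a per-device configuration map from a JSON file.
Config load_config(const std::string& filename) {
    nlohmann::json j;
    std::ifstream in(filename);
    in >> j;
    Config config;
    for (const auto& device : j.items()) {
        for (const auto& option : device.value().items()) {
            config[device.key()][option.key()] = option.value().get<std::string>();
        }
    }
    return config;
}

// Write the effective configuration back to disk in the same layout.
void dump_config(const std::string& filename, const Config& config) {
    nlohmann::json j = config;  // nested std::map converts to a JSON object directly
    std::ofstream out(filename);
    out << j.dump(4) << "\n";
}
```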
Sergey Lyubimtsev
5ebbad9bcf Fixes for scripts (#9640)
* fixes for scripts

* reduce python bitness check to a warning
2022-01-17 21:54:05 +03:00
Krzysztof Bruniecki
97df59a4ab [GNA] Restore deprecated options for GNA Plugin (#9611)
* Restore deprecated options for GNA Plugin

* Apply review: use INFERENCE_ENGINE_DEPRECATED macro

* Deprecate PWL_UNIFORM_DESIGN and PWL_MAX_ERROR_PERCENT

* Add doxygen deprecation message

* Use string for deprecated config keys to avoid compile time errors

* Use IE_SUPPRESS_DEPRECATED_START

* Fixup lint

* Fix indentation

* Use future release instead of 2022.2 in deprecated message, suppress deprecation error in tests

* Fix test
2022-01-17 10:32:02 +03:00
Nadezhda Ageeva
12ab842970 [GNA] Deprecate GNA_LIB_N_THREADS and SINGLE_THREAD parameters + sample update (#9637)
* Deprecate GNA_LIB_N_THREADS and SINGLE_THREAD parameters

* Remove deprecated options from sample

* Fix speech sample test

* Adds doxy deprecated comment
2022-01-14 12:20:36 +03:00
Fedor Zharinov
6c69535d6c Benchmark_app batch calculation fix (#9554)
* BenchmarkApp - batch size calculation fix

* stylefix

* -ip/op fix

* stylefix
2022-01-13 23:34:38 +03:00
Fedor Zharinov
6a126ac6bb avg statistics calculation fix (#9612) 2022-01-13 22:57:20 +03:00
Vladimir Dudnik
28fb55dffe [IE Samples][OV2.0] final clean up of old API headers (#9494)
* final clean up of old API headers, compile_tool separated from samples

* make cpplint happy
2022-01-13 11:12:20 +03:00
Mikhail Znamenskiy
5b40f381cb Fix for benchmark_app: set model stream flags (#9609) 2022-01-13 10:15:33 +03:00
Wang, Yang
4546df5091 Enable THROUGHPUT by default for all the devices. (#9107)
* Set THROUGHPUT as the default configuration for all the plugins and display the config of the plugin.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* updated format.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Update benchmark python API.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Replace str 'THROUGHPUT' with CONFIG_VALUE(THROUGHPUT).

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Use CONFIG_VALUE(THROUGHPUT) to replace the 'THROUGHPUT' string.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* update code style.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>

* Move the setting output code into the try block.

Signed-off-by: Wang, Yang <yang4.wang@intel.com>
2022-01-12 11:09:54 +03:00
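
As a rough illustration of what that default corresponds to, this is how an application could request the throughput hint explicitly through the classic Inference Engine API; plain string keys are used here to avoid assuming particular CONFIG_KEY/CONFIG_VALUE macros, and the model path is a placeholder.

```cpp
#include <map>
#include <string>

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("model.xml");  // placeholder path
    // Ask the device to pick streams/threads suited for throughput.
    std::map<std::string, std::string> config = {{"PERFORMANCE_HINT", "THROUGHPUT"}};
    InferenceEngine::ExecutableNetwork exec_network = ie.LoadNetwork(network, "CPU", config);
    InferenceEngine::InferRequest request = exec_network.CreateInferRequest();
    (void)request;
    return 0;
}
```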
Fedor Zharinov
fc4185e92a Compiled network loading is fixed (#9547)
* compiled network loading is fixed

* StyleFix
2022-01-10 23:37:46 +03:00
Fedor Zharinov
4dbc9ae2e7 benchmark_app with dynamic reshapes and API 2.0 (#8609)
* API 2.0 changes

* stylefix

* Update samples/cpp/benchmark_app/main.cpp

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>

* Update samples/cpp/benchmark_app/infer_request_wrap.hpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* Update samples/cpp/benchmark_app/utils.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* fixes

* fix for: gpu headers are moved to another folder... yet again

* fix for mac build paranoia

* function, class and file renames; change logic to work with inputs()

* stylefix

* 2nd portion of fixes

* stylefix

* Batch warnings

Co-authored-by: Nadezhda Ageeva <nkogteva@gmail.com>
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
2021-12-30 19:09:12 +03:00
Maxim Gordeev
b144089ef7 [IE_Samples] Updating information about methods in README.md according new API 2.0 (#9477) 2021-12-29 23:50:19 +03:00
Maxim Gordeev
39a1b98799 changed C++ samples due to OpenVINO style (#9463)
I'll merge this. We need to talk with someone who knows the openvino build system better than we do to "use cmake function ov_ncc_style_check to perform such check automatically". As far as I can see, it is currently used only in one place, so it is not a common approach for openvino components.
2021-12-29 15:36:33 +03:00
Vladimir Dudnik
04bb8bb9bb [IE Samples] fix hello classification cpp (#9450)
* fix image file read error message when sample built w/o opencv

* code style and use model inputs/outputs instead of parameters and results
2021-12-28 15:58:09 +03:00
Dmitry Pigasin
d6dcf58846 [IE Python Speech Sample] Migrate to OV 2.0 API (#9348)
* Create mvp

* Implement new API & Refactoring

* Fix -oname for models whose output layer name contains a port number

* Fix step numbers

* Create utils.py

* Remove shebang from utils.py

* Fix `-iname` option
2021-12-27 09:22:15 +03:00
Vladimir Dudnik
a9cee5f101 [IE Samples] OV2.0 API python ngraph_function_creation_sample (#9440)
* [IE Python Speech Sample] Migrate to OV 2.0 API

* improvements

* flake notes

* improved code style to match the C++ sample

* linters changes

* changed data.py

* sync output with C++ sample

Co-authored-by: Maxim Gordeev <maxim.gordeev@intel.com>
2021-12-27 09:19:18 +03:00
Maxim Shevtsov
49b5e5728b Auto Batching impl (#7883)
* auto-batching POC squashed (all commits from auto-batch-2021.3 branch)

(cherry picked from commit d7742f2c747bc514a126cc9a4d5b99f0ff5cbbc7)

* applying/accommodating the API changes after rebase to the master

* replaying modified version of actual batch selection

* early experiments with model mem footprint

* changes from rebasing to the latest master

* experimenting with DG1 on the batch size selection, also collecting the mem footprint

* WIP: moving the auto-batching to the icore to let MULTI/AUTO support that, ALLOW_AUTO_BATCHING as a conventional config key; still fails hot device swap

* quick-n-dirty batch footprint vs device total mem

* code style

* testing which models perform badly due to kernels and NOT (batched) footprint

* stub pipeline task to communicate the readiness rather than promise/future

* quick-n-dirty timeout impl

* explicit _completionTasks, reverting BA to use the timeout

* inputs outputs copies, works with AUTO and demo now

* accommodate the config per device-id, after rebase to the latest master

* allowing the auto-batching only with tput hint to let more conventional tests pass

* fix the premature timeout restarting via waiting for batch1 requests completion

* moved the batched request starting (along with input copies) to the dedicated thread

* [IE CLDNN] Disable bs_fs_yx_bsv16_fsv16 format for int8 convolution

* code style

* increasing the timeout to test the ssd_* models perf (timeout?) issues

* reducing number of output stuff in BA to avoid bloating the logs in experiments

* more aggressive batching for experiments, not limited to 32 and also 4 as a min

* more accurate timeout debugging info

* getting the reqs limitation from the plugin SetConfig as well

* refactor the reshape logic a bit to accommodate CPU for batching, also added remote context

* let the benchmark_app consume specific batch values for the auto-batching such as BATCH:GPU(4)

* auto-batching functional test (with results check vs ref) and GPU instance for that

* fixed arithmetic on blob ptrs

* clang

* handling possible batched network failure

* BATCH as the constants device name in test

* ENABLE_BATCH

* func tests for CPU, also DetectionOutput hetero tests (CPU and GPU)

* DetectionOutput hetero test for the CPU

* reenabling the Auto-Batching in the AUTO

* auto-batching device enabled in the test

* fixed the DO test

* improve the loading loop logic

* brushed the config keys

* allow hetero code-path for explicit device name like BATCH:GPU(4), used in the hetero code-path tests

* fix the test after refactoring

* clang

* moving ThreadSafeQueue to the ie_parallel, as it is re-used in the AUTO/MULTI and BATCH now

* auto-batching hetero test (subgraph with DetectionOutput)

* fixed minor changes that were result of experiments with impl

* code-style

* brushing, disabling CPU's HETERO tests until planned activity for 22.2

* removing home-baked MAX_BATCH_SIZE and switching to the official impl by GPU team

* remote blobs tests for the auto-batching (old API)

* brushed names a bit

* CreateContext and LoadNetwork with context for the Auto-Batching plus remote-blobs tests

* fixed the ieUnitTests with adding CreateContext stub to the MockICore

* clang

* improved remote-blobs tests

* revert back the BA from experiments with AB + device_use_mem

* conformance tests for BATCH, also batch size 1 is default for BATCH:DEVICE

* remote blobs 2.0 tests, issue with context having the orig device name

* debugging DG1 perf drop (presumably due to not fitting the device mem)

* disabling WA with batch/=2 for excessive mem footprint, leaving only streams 2

* remote blobs 2.0 tests for different tensor sharing types

* converting assert to throw to accommodate legacy API where lock() could be called

* revert the timeout back to avoid mixing the studies, fixed the footprint calc

* reverting to estimating the max batch by extrapolating from batch1 size

* more conservative footprint estimation (with batch1), graceful batch 1 handling without duplication

* even graceful batch 1 handling without duplication

* WA for MAX_BATCH_SIZE failure, removing batch4 as a min for the auto-batching

* AutoBatchPlugin -> ov_auto_batch_plugin

* WA for gcc 4.8

* clang

* fix misprint

* fixed errors resulted from recent OV's Variant to Any transition

* skip auto-batching for already-batched networks

* AUTO_BATCH_TIMEOUT and tests

* GPU-specific L3

* switched to pure config, also improved ALLOW_AUTO_BATCHING config key handling logic

* debugging device info

* enabling the config tests for the GPU and fixing the Auto-batching tests to pass

* making the default cache size (when the driver is not recognized) more aggressive, to accommodate recent HW with old drivers

* skip auto-batching for RNNs and alikes (e.g. single CHW input)

* fixed fallback to batch1 and moved HETERO path under condition to avoid bloating

* brushing

* Auto plugin GetMetric support gpu auto-batch

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add test case

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* add comments on test

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* brushing the var names, also adding the exception handling

* disabling the auto-batching for networks with non-batched outputs and faster-rcnn and the likes (CVS-74085) to minimize the number of failures

* add try catch

Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>

* brushing the code changed in the GPU plugin

* Auto-Batch requests tests

* brushed variables a bit (ref)

* cleaned debug output from the ie_core

* cleaned cmake for the Auto-Batch

* removed batchN estimation from batch1

* cleaned from debug printf

* comments, cleanup

* WA the mock test errors introduced with merging the https://github.com/myshevts/openvino/pull/13

* Adding back removed batchN estimation from batch1 to debug degradations on DG1 (resulting from too optimistic MAX_BATCH_SIZE?). This partially reverts commit e8f1738ac1.

* brushing ie_core.cpp

* fix 32bit compilation

* Code review: ENABLE_AUTO_BATCH

* consolidate the auto-batching logic in ie_core.cpp into a single ApplyAutoBatching

* renamed/brushed the OPTIMAL_BATCH (now with _SIZE) and mimics the MAX_BATCH_SIZE wrt MODEL_PTR

* default value for the OPTIMAL_BATCH_SIZE

* clang

* accommodate new func tests location

* fix shuffle of headers after clang + copyrights

* fixed misprint made during code refactoring

* moving the common thread-safe containers (like ThreadSafeQueue) to the dedicated dev_api header

* switch from the device name to the OPTIMAL_BATCH_SIZE metric presence as a condition to consider Auto-Batching

* switching from the unsafe size() and minimizing time under lock

* code style

* brushed the ApplyAutoBatching

* brushed the metric/config names and descriptions

* completed the core integration tests for the auto-batching

* ExecGraphInfo and check for incorrect cfg

* removed explicit dependencies from cmake file of the plugin

* disabling Auto-Batching through the tput hint (to preserve current product default), only explicit like BATCH:GPU used in the tests

Co-authored-by: Roman Lyamin <roman.lyamin@intel.com>
Co-authored-by: Hu, Yuan2 <yuan2.hu@intel.com>
2021-12-24 12:55:22 +03:00
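
The explicit auto-batching syntax referenced in the bullets above (BATCH:GPU(4), AUTO_BATCH_TIMEOUT) could be exercised roughly as follows. This is an illustrative sketch based only on the commit messages, not code from the PR; the config key spelling and timeout value are assumptions.

```cpp
#include <map>
#include <string>

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;
    InferenceEngine::CNNNetwork network = ie.ReadNetwork("model.xml");  // placeholder path
    // "BATCH:GPU(4)" = virtual batching device wrapping GPU with an explicit batch of 4;
    // the timeout (in ms) bounds how long requests are collected before a batch is flushed.
    // Key/value spellings follow the commit messages and may differ in the release.
    std::map<std::string, std::string> config = {{"AUTO_BATCH_TIMEOUT", "100"}};
    auto exec_network = ie.LoadNetwork(network, "BATCH:GPU(4)", config);
    // Requests created from exec_network are transparently grouped into batches.
    auto request = exec_network.CreateInferRequest();
    (void)request;
    return 0;
}
```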
Evgenya Stepyreva
41ace9d4e6 Use opsets in sample of function creation (#7792) 2021-12-23 13:41:27 +03:00
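
A minimal, self-contained sketch in the spirit of that sample: building a tiny model directly from opset8 operations. The actual sample constructs a full network from a weights file; the ops, shapes and names below are placeholders.

```cpp
#include <memory>

#include <openvino/openvino.hpp>
#include <openvino/opsets/opset8.hpp>

// Build input -> Relu -> result; illustrative only.
std::shared_ptr<ov::Model> create_simple_model() {
    using namespace ov::opset8;
    auto input = std::make_shared<Parameter>(ov::element::f32, ov::Shape{1, 16});
    auto relu = std::make_shared<Relu>(input);
    auto result = std::make_shared<Result>(relu);
    return std::make_shared<ov::Model>(ov::ResultVector{result},
                                       ov::ParameterVector{input},
                                       "simple_relu_model");
}
```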
Ilya Churaev
b241d5227e Moved compile_tool to new API (#8501)
* Moved compile_tool to new API

* Fixed comments and added new tests

* Fixed comments

* Fixed build

* Fixed comments

* Fixed unit tests

* Fixed compilation

* Fixed legacy message

* Fixed readme

* Fixed comments

* Fixed build

* Fixed build

* Fixed tests

* Applied comments

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2021-12-23 12:45:02 +03:00
Elizaveta Lobanova
c4ce6c5430 [IE SAMPLE] Fixed inputs and outputs element type initialization (#9375) 2021-12-23 12:06:23 +03:00
Vladislav Volkov
60a11a6348 [CPU] Renamed CPU plugin to ov_intel_cpu_plugin (#9342) 2021-12-23 11:49:25 +03:00
Andrey Zaytsev
4ae6258bed Feature/azaytsev/from 2021 4 (#9247)
* Added info on DockerHub CI Framework

* Feature/azaytsev/change layout (#3295)

* Changes according to feedback comments

* Replaced @ref's with html links

* Fixed links, added a title page for installing from repos and images, fixed formatting issues

* Added links

* minor fix

* Added DL Streamer to the list of components installed by default

* Link fixes

* Link fixes

* ovms doc fix (#2988)

* added OpenVINO Model Server

* ovms doc fixes

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>

* Updated openvino_docs.xml

* Updated the link to software license agreements

* Revert "Updated the link to software license agreements"

This reverts commit 706dac500e.

* Docs to Sphinx (#8151)

* docs to sphinx

* Update GPU.md

* Update CPU.md

* Update AUTO.md

* Update performance_int8_vs_fp32.md

* update

* update md

* updates

* disable doc ci

* disable ci

* fix index.rst

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
# Conflicts:
#	.gitignore
#	docs/CMakeLists.txt
#	docs/IE_DG/Deep_Learning_Inference_Engine_DevGuide.md
#	docs/IE_DG/Extensibility_DG/Custom_ONNX_Ops.md
#	docs/IE_DG/Extensibility_DG/VPU_Kernel.md
#	docs/IE_DG/InferenceEngine_QueryAPI.md
#	docs/IE_DG/Int8Inference.md
#	docs/IE_DG/Integrate_with_customer_application_new_API.md
#	docs/IE_DG/Model_caching_overview.md
#	docs/IE_DG/supported_plugins/GPU_RemoteBlob_API.md
#	docs/IE_DG/supported_plugins/HETERO.md
#	docs/IE_DG/supported_plugins/MULTI.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Caffe.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_MxNet.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_ONNX.md
#	docs/MO_DG/prepare_model/convert_model/Converting_Model.md
#	docs/MO_DG/prepare_model/convert_model/Converting_Model_General.md
#	docs/MO_DG/prepare_model/convert_model/Cutting_Model.md
#	docs/MO_DG/prepare_model/convert_model/pytorch_specific/Convert_RNNT.md
#	docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_EfficientDet_Models.md
#	docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_WideAndDeep_Family_Models.md
#	docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
#	docs/doxygen/Doxyfile.config
#	docs/doxygen/ie_docs.xml
#	docs/doxygen/ie_plugin_api.config
#	docs/doxygen/ngraph_cpp_api.config
#	docs/doxygen/openvino_docs.xml
#	docs/get_started/get_started_macos.md
#	docs/get_started/get_started_raspbian.md
#	docs/get_started/get_started_windows.md
#	docs/img/cpu_int8_flow.png
#	docs/index.md
#	docs/install_guides/VisionAcceleratorFPGA_Configure.md
#	docs/install_guides/VisionAcceleratorFPGA_Configure_Windows.md
#	docs/install_guides/deployment-manager-tool.md
#	docs/install_guides/installing-openvino-linux.md
#	docs/install_guides/installing-openvino-macos.md
#	docs/install_guides/installing-openvino-windows.md
#	docs/optimization_guide/dldt_optimization_guide.md
#	inference-engine/ie_bridges/c/include/c_api/ie_c_api.h
#	inference-engine/ie_bridges/python/docs/api_overview.md
#	inference-engine/ie_bridges/python/sample/ngraph_function_creation_sample/README.md
#	inference-engine/ie_bridges/python/sample/speech_sample/README.md
#	inference-engine/ie_bridges/python/src/openvino/inference_engine/ie_api.pyx
#	inference-engine/include/ie_api.h
#	inference-engine/include/ie_core.hpp
#	inference-engine/include/ie_version.hpp
#	inference-engine/samples/benchmark_app/README.md
#	inference-engine/samples/speech_sample/README.md
#	inference-engine/src/plugin_api/exec_graph_info.hpp
#	inference-engine/src/plugin_api/file_utils.h
#	inference-engine/src/transformations/include/transformations_visibility.hpp
#	inference-engine/tools/benchmark_tool/README.md
#	ngraph/core/include/ngraph/ngraph.hpp
#	ngraph/frontend/onnx_common/include/onnx_common/parser.hpp
#	ngraph/python/src/ngraph/utils/node_factory.py
#	openvino/itt/include/openvino/itt.hpp
#	thirdparty/ade
#	tools/benchmark/README.md

* Cherry-picked remove font-family (#8211)

* Cherry-picked: Update get_started_scripts.md (#8338)

* doc updates (#8268)

* Various doc changes

* theme changes

* remove font-family (#8211)

* fix  css

* Update uninstalling-openvino.md

* fix css

* fix

* Fixes for Installation Guides

Co-authored-by: Andrey Zaytsev <andrey.zaytsev@intel.com>
Co-authored-by: kblaszczak-intel <karol.blaszczak@intel.com>
# Conflicts:
#	docs/IE_DG/Bfloat16Inference.md
#	docs/IE_DG/InferenceEngine_QueryAPI.md
#	docs/IE_DG/OnnxImporterTutorial.md
#	docs/IE_DG/supported_plugins/AUTO.md
#	docs/IE_DG/supported_plugins/HETERO.md
#	docs/IE_DG/supported_plugins/MULTI.md
#	docs/MO_DG/prepare_model/convert_model/Convert_Model_From_Kaldi.md
#	docs/MO_DG/prepare_model/convert_model/tf_specific/Convert_YOLO_From_Tensorflow.md
#	docs/install_guides/installing-openvino-macos.md
#	docs/install_guides/installing-openvino-windows.md
#	docs/ops/opset.md
#	inference-engine/samples/benchmark_app/README.md
#	inference-engine/tools/benchmark_tool/README.md
#	thirdparty/ade

* Cherry-picked: doc script changes (#8568)

* fix openvino-sphinx-theme

* add linkcheck target

* fix

* change version

* add doxygen-xfail.txt

* fix

* AA

* fix

* fix

* fix

* fix

* fix
# Conflicts:
#	thirdparty/ade

* Cherry-pick: Feature/azaytsev/doc updates gna 2021 4 2 (#8567)

* Various doc changes

* Reformatted C++/Python sections. Updated with info from PR8490

* additional fix

* Gemini Lake replaced with Elkhart Lake

* Fixed links in IGs, Added 12th Gen
# Conflicts:
#	docs/IE_DG/supported_plugins/GNA.md
#	thirdparty/ade

* Cherry-pick: Feature/azaytsev/doc fixes (#8897)

* Various doc changes

* Removed the empty Learning path topic

* Restored the Gemini Lake CPU list
# Conflicts:
#	docs/IE_DG/supported_plugins/GNA.md
#	thirdparty/ade

* Cherry-pick: sphinx copybutton doxyrest code blocks (#8992)

# Conflicts:
#	thirdparty/ade

* Cherry-pick: iframe video enable fullscreen (#9041)

# Conflicts:
#	thirdparty/ade

* Cherry-pick: fix untitled titles (#9213)

# Conflicts:
#	thirdparty/ade

* Cherry-pick: perf bench graph animation (#9045)

* animation

* fix
# Conflicts:
#	thirdparty/ade

* Cherry-pick: doc pytest (#8888)

* docs pytest

* fixes
# Conflicts:
#	docs/doxygen/doxygen-ignore.txt
#	docs/scripts/ie_docs.xml
#	thirdparty/ade

* Cherry-pick: restore deleted files (#9215)

* Added new operations to the doc structure (from removed ie_docs.xml)

* Additional fixes

* Update docs/IE_DG/InferenceEngine_QueryAPI.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update Custom_Layers_Guide.md

* Changes according to review  comments

* doc scripts fixes

* Update docs/IE_DG/Int8Inference.md

Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>

* Update Int8Inference.md

* update xfail

* clang format

* updated xfail

Co-authored-by: Trawinski, Dariusz <dariusz.trawinski@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
Co-authored-by: kblaszczak-intel <karol.blaszczak@intel.com>
Co-authored-by: Yury Gorbachev <yury.gorbachev@intel.com>
Co-authored-by: Helena Kloosterman <helena.kloosterman@intel.com>
2021-12-21 20:26:37 +03:00
Krzysztof Bruniecki
6c92ce48c1 Remove non official code (#9315)
- the feature to export for embedded GNA3, which is not in fact supported.
- SW emulation modes like SSE etc., which are not useful and are undocumented.
2021-12-21 09:44:06 +03:00
Maxim Gordeev
3c93c3e766 Hello classification, classification_async, hello_reshape_ssd python samples with API 2.0 (#9091)
* Hello classification python with API 2.0

* Update samples/python/hello_classification/hello_classification.py

Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>

* Update samples/python/hello_classification/hello_classification.py

Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>

* changed linters processing

* Update samples/python/hello_classification/hello_classification.py

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>

* Update samples/python/hello_classification/hello_classification.py

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>

* Update samples/python/hello_classification/hello_classification.py

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>

* Update samples/python/hello_classification/hello_classification.py

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>

* Update samples/python/hello_classification/hello_classification.py

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>

* updated import

* Moved classification_sample_async to new API 2.0

* moved hello_reshape_ssd sample to new API 2.0

* [classification_sample_async] refactoring

* [hello_classification] refactoring

* [hello_reshape_ssd] refactoring

Co-authored-by: Ekaterina Aidova <ekaterina.aidova@intel.com>
Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>
Co-authored-by: Dmitry Pigasin <dmitry.pigasin@intel.com>
2021-12-21 01:33:12 +03:00
Maxim Gordeev
abcd7486a9 Moving of C++ speech sample to OpenVINO API 2.0 (#9027)
* Moved Speech sample to OpenVINO 2.0

* improved the sample's score

* changed code style

* added GNA configs

* renamed function due to new API

* added dynamic batch

* reordered includes

* speech_sample has more than 1 input

* added oname processing

* added multi input

* fixed notes

* removed getFullDeviceName with old api

* getFullDeviceName for benchmark
2021-12-19 20:56:40 +03:00
Fedor Zharinov
e9874ec1d4 Dynamic reshapes (#7788)
* Merged and compiling

* Fix for dynamic shape type

* review fixes

* renamed blob shape to tensor shape, small improvements

* fix code style

* added parsing of multiple shapes

* store latency per group, add isIdleRequestAvailable() to Infer Queue

* added cached random inputs

* redesign pipeline, added new metrics(avg, max, min), added metrics per groups

* fixed code style

* small improvements

* modified tensor parameters parsing

* modified -i parameter parsing: added possibility to specify input names

* implemented image caching

* added cached blobs creation

* added -pcseq flag, modified batch filling, changed fps formula

* improvements

* code formatting

* code formatting2

* apply suggestions from review

* replaced Buffer class with InferenceEngine Blobs

* use batch size in blobs filling

* added shared blob allocator to handle blob's data

* fixed warnings & code style

* allocate blobs

* fix for networks with image info input

* added comments & fixed codestyle

* clear data in free() in SharedBlobAllocator

* remove unnecessary check

* Delimiter is changed to ::

* stylefix

* added layout from string function, small improvements

* modified parsing to enable : in input parameters

* small fixes

* small fixes

* added missed blob allocation, fixes

* [TEST]added support for remote blobs

* fix remote blobs

* new inputs/files output format

* removed vectors resize which caused bugs

* made cl::Buffer type under ifdef, fix inputs filling

* changed batch() function to not throwing exceptions

* removed unused var

* fix code style

* replace empty name in input files with name from net input

* restored old behaviour for static models

* fix code style

* fix warning - made const iterator

* fix warning - remove reference in loop variable

* added random and image_info input types to -i, fix problem with layout

* replaced batch() with getBatchSize() in main

* fix layout, shape, tensor shape parameters parsing

* upd help messages for input, tensor shape and pcseq command

* added buffer for cl output blobs, small fixes

Signed-off-by: ivikhrev <ivan.vikhrev@intel.com>

* added legacy mode

* restore setBlob

* code style formatting

* move collecting latency for groups under flag

* removed not applicable layouts

* added hint to error message when wrong input name in -tensor_shape was specified

* added new metrics to statistics report

* Apply suggestions from code review

* fix binary blobs filling when layout is CN

* apply suggestions

* moved file in the right place after rebase

* improved -pcseq output

* updated args and readme

* removed TEMPLATE plugin registration

* fix -shape arg description

* enable providing several -i args as input

* renamed legacy_mode to inference_only and made it default for static models, renamed tensor_shape to data_shape

* upd readme

* use getBlob() in inference only mode

* fix old input type for static case

* fix typo

* upd readme

* move log about benchmark mode to the measuring performance step

* added class for latency metrics

* upd readme, fix typos, renamed funcs

* fix warning and upd parsing to avoid error with : in file paths

* fix error on centos: error: use of deleted function ‘std::basic_stringstream<char>::basic_stringstream(const std::basic_stringstream<char>&)’

* added check for key in inputs

* renamed input to inputs

* adjust batch size for binary blobs

* replaced warning with exception in bench mode defining

* align measurement cycle with master

Co-authored-by: ivikhrev <ivan.vikhrev@intel.com>
2021-12-17 12:20:43 +03:00
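
Behind options such as -shape and -data_shape, the 2.0 flow is roughly: reshape the model to a partially dynamic shape, then submit tensors with concrete shapes per inference. A hedged sketch under those assumptions; the shapes, device and model path are placeholders.

```cpp
#include <map>

#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");  // placeholder path

    // Make the batch dimension dynamic, akin to benchmark_app's -shape option.
    std::map<ov::Output<ov::Node>, ov::PartialShape> shapes = {
        {model->input(), ov::PartialShape{-1, 3, 224, 224}}};
    model->reshape(shapes);

    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // Feed a different concrete shape per iteration, akin to -data_shape.
    for (size_t batch : {1, 2, 4}) {
        ov::Tensor input(ov::element::f32, ov::Shape{batch, 3, 224, 224});
        request.set_input_tensor(input);
        request.infer();
    }
    return 0;
}
```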
Ilya Lavrenov
e6d08aef5b Don't use EXCLUDE_FROM_ALL with samples targets (#9237) 2021-12-15 21:21:46 +03:00
Vladimir Dudnik
aa457268d4 [IE Samples] make coverity happy (#9203)
* make coverity happy

* apply code style
2021-12-15 17:58:06 +03:00
Pavel Zamelin
a023c588ba Fix parseArgMap for layer names with : (#8826) 2021-12-14 12:36:14 +03:00
Ivan Vikhrev
b6176fa768 [IE Samples] restored support for multiple -i args (#9190)
* restored support for multiple -i args

* replaced return with break, moved return out of cycle
2021-12-14 11:37:50 +03:00
Vladimir Dudnik
5b25dbee22 ov2.0 IE samples modification (#8340)
* ov2.0 IE samples modification

apply code style

turn off clang style check for headers order

unify samples a bit

add yuv nv12 reader to format_reader, hello_nv12 sample

hello_reshape_ssd ov2.0

* sync with PR 8629 preprocessing api changes

* fix for slog << vector<int>

* add operator<< for ov::Version from PR-8687

* Update samples/cpp/hello_nv12_input_classification/main.cpp

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>

* apply code style

* change according to review comments

* add const qualifier

* apply code style

* std::ostream for old inference engine version to make VPU plugin tests happy

* apply code style

* revert changes in print version for old api samples

* keep inference_engine.hpp for samples not yet on ov2.0

* fix merge artifacts

* fix compilation

* apply code style

* Fixed classification sample test

* Revert changes in hello_reshape_ssd sample

* rebase to master, sync with PR-9054

* fix issues found by C++ tests

* rebased and sync with PR-9051

* fix test result parsers for classification tests (except unicode one)

* fix mismatches after merge

* rebase and sync with PR-9144

Co-authored-by: Mikhail Nosov <mikhail.nosov@intel.com>
Co-authored-by: antonrom23 <anton.romanov@intel.com>
2021-12-13 11:30:58 +03:00
Ilya Lavrenov
9e519946f0 Integrate JSON libs (#9145)
* Add nlohmann json (Release 3.10.4) as submodule

* Move nlohmann_json lib to json folder, add json_schema validator lib as submodule

* Move BUILD_SHARED_LIBS flag to a separate scope

* Add export of nlohmann_json_schema_validator

* Fix build

* set folder thirdparty

* link lib to offline_transformations and benchmark_app

* suppress shadowing names warning in nlohmann_json lib

* fix include in benchmark_app

* Resolve review comments: add json subdirs to samples cmake

* Fix static build

* Proper json integration

* removed cpp_samples_deps component

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
2021-12-12 20:40:41 +03:00
Ilya Churaev
37b0b6f7c8 Renamed ExecutableNetwork to CompiledModel (#9144)
* Renamed ExecutableNetwork to CompiledModel

* Fixed python

* Fixed comments

* Fixed build

* Fixed code style
2021-12-11 16:11:15 +03:00
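
In user code the rename shows up roughly like this; a hedged sketch of the 2.0 flow, with the model path as a placeholder.

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;                                                     // was InferenceEngine::Core
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");   // was ReadNetwork(); placeholder path
    // ov::CompiledModel is the renamed counterpart of ExecutableNetwork.
    ov::CompiledModel compiled_model = core.compile_model(model, "CPU");  // was LoadNetwork()
    ov::InferRequest request = compiled_model.create_infer_request();     // was CreateInferRequest()
    request.infer();
    return 0;
}
```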
Vladimir Dudnik
c85fb74efc ov2.0 cpp hello reshape ssd (#8874)
* OV2.0 API C++ hello_reshape_ssd sample

* clean header

* fix test for changed sample cmd line

* adopt to PR-8898

* sync with PR-9054, simplify code

* apply code_style.diff

* sync with PR-9051
2021-12-10 15:58:23 +03:00
Vladimir Dudnik
96fd5dce0b remove fast sample scripts (#9140) 2021-12-10 13:52:35 +03:00
Dmitry Pigasin
3f96a1bccd [IE C Samples] Implement bmp reader (#8848)
* Implement bmp reader

* Use non-OS-specific functions

* Fix code style

* Move `i` declaration from `for` loop

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>

Co-authored-by: Vladimir Dudnik <vladimir.dudnik@intel.com>
2021-12-10 13:19:28 +03:00
Ilya Churaev
ec6f57872f Renamed ov::Function to ov::Model (#9051)
* Renamed ov::Function to ov::Model

* Fixed all for macos

* Fixed build

* Fixed build

* Revert changes in GPU plugin

* Fixed ngraphFunctions

* Fixed all for mac

* Fixed new test

* Fixed if for Windows

* Fixed unit tests and renamed Function in python API

* Fixed code style

* Fixed import

* Fixed conflict

* Fixed merge issues
2021-12-10 13:08:38 +03:00
Szymon Irzabek
1c6c7bac2d [GNA] Detect unsupported concat layers (#7599)
* [GNA] Detect unsupported concat layers

* [GNA] Add support for 3D transposes around convolutions and replace exception with user warning
2021-12-10 11:17:29 +03:00
Dmitry Pigasin
c41acdeaf3 [IE Python Sample] Migrate hello_query_device to OV2.0 API (#9029) 2021-12-09 23:01:45 +03:00
Ilya Lavrenov
64367fbca2 Export frontend_common as dev target (#9003) 2021-12-08 17:18:44 +03:00
Dmitry Pigasin
7fa6c42a8c [IE Python Speech Sample] Enable -we option (#8750)
* Enable `-we` option

* Update readme
2021-12-08 13:05:22 +03:00
Mikhail Nosov
20bf5fcc4a Rename "network" to "model" in preprocessing API (#9054) 2021-12-07 19:26:27 +03:00
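
A hedged sketch of what the rename means for user code: the model-side accessors of the PrePostProcessor are now spelled model() instead of network(). The path, layouts and element type below are placeholders.

```cpp
#include <openvino/core/preprocess/pre_post_process.hpp>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    std::shared_ptr<ov::Model> model = core.read_model("model.xml");  // placeholder path

    ov::preprocess::PrePostProcessor ppp(model);
    // Describe the user-provided data...
    ppp.input().tensor().set_element_type(ov::element::u8).set_layout("NHWC");
    // ...and the layout the model expects; this accessor was formerly input().network().
    ppp.input().model().set_layout("NCHW");
    model = ppp.build();

    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    (void)compiled;
    return 0;
}
```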