* use ports instead of parameters and results
* Fix element_type if preprocessing is skipped
* rename function to model
* rename exe_network to compiled_model
* Renamed ov::Function to ov::Model
* Fixed all for macOS
* Fixed build
* Fixed build
* Revert changes in GPU plugin
* Fixed ngraphFunctions
* Fixed all for mac
* Fixed new test
* Fixed if for Windows
* Fixed unit tests and renamed Function in python API
* Fixed code style
* Fixed import
* Fixed conflict
* Fixed merge issues
* Preprocessing API - base classes
Includes API definition for trivial mean/scale operations (which don't require layout).
Mean/scale with 'layout' support will be done under a separate task together
with Layout.
Current test code coverage: 100%
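The trivial mean/scale step mentioned above can be sketched in plain Python (a hypothetical stand-in for illustration; the real OpenVINO preprocessing inserts subtract/divide nodes into the model graph rather than touching data directly):

```python
def apply_mean_scale(values, mean=0.0, scale=1.0):
    """Trivial per-element preprocessing: (x - mean) / scale.

    A layout-free sketch of the mean/scale operation; hypothetical
    helper, not the actual ov::preprocess implementation.
    """
    return [(x - mean) / scale for x in values]

# Example: normalize raw 8-bit pixel values to roughly [-1, 1]
print(apply_mean_scale([0.0, 127.5, 255.0], mean=127.5, scale=127.5))
```

This is why the operation "doesn't require layout": it is applied uniformly per element, with no notion of channel ordering.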
* Python bindings for base preprocessing API
* remove pre_post_process directory from ngraph/core
* remove files from ngraph/python dir
* move pyngraph pre_post_process files from ngraph/python to runtime
* remove pre_post_process test from CMakeList
* move include to the header
* update include path for pre_post_process
* style fix
* bind InputTensorInfo::set_layout
* cleaned test_preprocess
* fix test expected output
* remove duplicate test
* update description of set_element_type
* fix style
* move preprocess from pyngraph to pyopenvino/graph
* update test_preprocess imports and remove unnecessary test
* remove duplicate import
* update custom method
* update test
* update test
* create decorator that changes Node into Output<Node>
* create function that casts Node to Output<Node>
* update test_preprocess to use decorator for custom function
* change _cast_to_output -> _from_node
* style fix
* add tests for scale and mean with vector input
* style fix
* add docstring for custom_preprocess_function
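The decorator described in the commits above (changing `Node` into `Output<Node>` for user callbacks) might look roughly like this; the `Node`/`Output` classes here are minimal stand-ins for illustration, not the actual binding internals:

```python
class Node:
    """Stand-in for an ov Node (hypothetical, for illustration)."""
    def output(self, index):
        return Output(self, index)

class Output:
    """Stand-in for ov Output<Node>."""
    def __init__(self, node, index):
        self.node = node
        self.index = index

def custom_preprocess_function(user_fn):
    """Decorator: hand the user's callback an Output instead of a raw Node."""
    def wrapper(node):
        # Cast Node -> Output<Node> (the role of _from_node in the commits)
        arg = node.output(0) if isinstance(node, Node) else node
        return user_fn(arg)
    return wrapper

@custom_preprocess_function
def my_step(output):
    # The callback always sees an Output, never a bare Node
    return (type(output).__name__, output.index)

print(my_step(Node()))
```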
* bind InputInfo network method
* style fix
* bind OutputInfo
* fix description of preprocess submodule
* fix style
* update copyright year
* bind OutputTensorInfo
* bind OutputNetworkInfo and InputNetworkInfo
* Bind exec core ov (#50)
* Output const node python tests (#52)
* add python bindings tests for Output<const ov::Node>
* add proper tests
* add new line
* rename ie_version to version
* Pszmel/bind infer request (#51)
* remove set_batch, get_blob and set_blob
* update InferRequest class
* change InferenceEngine::InferRequest to ov::runtime::InferRequest
* update set_callback body
* update bindings to reflect ov::runtime::InferRequest
* bind set_input_tensor and get_input_tensor
* style fix
* clean ie_infer_queue.cpp
* Bind exec core ov (#50)
* bind core, exec_net classes
* rm unused function
* add new line
* rename ie_infer_request -> infer_request
* update imports
* update __init__.py
* update ie_api.py
* Replace old containers with the new ones
* create impl for create_infer_request
* comment out infer_queue to avoid errors with old infer_request
* update infer_request bind to reflect new infer_request api
* comment out input_info from ie_network to avoid errors with old containers
* Register new containers and comment out InferQueue
* update infer request tests
* style fix
* remove unused imports
* remove unused imports and 2 methods
* add tests to cover all new methods from infer_request
* style fix
* add test
* remove registration of InferResults
* update name of exception_ptr parameter
* update the loops that iterate through inputs and outputs
* clean setCustomCallbacks
* style fix
* add Tensor import
* style fix
* update infer and normalize_inputs
* style fix
* rename startTime and endTime
* Create test for mixed keys as infer arguments
* update infer function
* update return type of infer
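The "mixed keys as infer arguments" test above implies that an infer call can accept inputs keyed by name or by port index in the same dict. A sketch of such normalization (hypothetical helper names, not the exact binding code):

```python
def normalize_inputs(inputs, input_names):
    """Map a dict with mixed str/int keys to a dict keyed by port index.

    Sketch of how infer({'data': t0, 1: t1}) could be normalized;
    'normalize_inputs' here is illustrative, not the binding internals.
    """
    normalized = {}
    for key, tensor in inputs.items():
        index = input_names.index(key) if isinstance(key, str) else key
        normalized[index] = tensor
    return normalized

# 'data' resolves to port 0 by name; 1 is already an index
print(normalize_inputs({"data": "t0", 1: "t1"}, ["data", "indices"]))
```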
Co-authored-by: Bartek Szmelczynski <bartosz.szmelczynski@intel.com>
* fix get_version
* fix opaque issue
* some cosmetic changes
* fix codestyle in tests
* make tests green
* Extend python InferRequest
* Extend python Function
* Change return value of infer call
* Fix missing precisions conversions in CPU plugin
* Rework of runtime for new tests
* Fixed onnx reading in python tests
* Edit compatibility tests
* Edit tests
* Add FLOAT_LIKE xfails
* bind ColorFormat and ResizeAlgorithm
* clean imports
* fix typo
* [Python API] bind ProfilingInfo (#55)
* bind ProfilingInfo
* Add tests
* Fix code style
* Add property
* fix codestyle
* Infer new request method (#56)
* fix conflicts, add infer_new_request function
* remove redundant functions, fix style
* revert the unwanted changes
* revert removal of the Blob
* revert removal of isTblob
* add add_extension from path
* codestyle
* add PostProcessSteps to init
* bind PreProcessSteps
* create additional tests
* fix win build
* add inputs-outputs to function
* update infer queue
* fix code style
* Hot-fix CPU plugin with precision
* fix start_async
* add performance hint to time infer (#8480)
* Updated common migration pipeline (#8176)
* Updated common migration pipeline
* Fixed merge issue
* Added new model and extended example
* Fixed typo
* Added v10-v11 comparison
* Avoid redundant graph nodes scans (#8415)
* Refactor work with env variables (#8208)
* del MO_ROOT
* del MO_ROOT from common_utils.py
* add MO_PATH to common_utils.py
* change mo_path
* [IE Sample Scripts] Use cmake to build samples (#8442)
* Use cmake to build samples
* Add the option to set custom build output folder
* Remove opset8 from compatibility ngraph python API (#8452)
* [GPU] OneDNN gpu submodule update to version 2.5 (#8449)
* [GPU] OneDNN gpu submodule update to version 2.5
* [GPU] Updated onednn submodule and added layout optimizer fix
* Install rules for static libraries case (#8384)
* Proper cmake install for static libraries case
* Added an ability to skip template plugin
* Added install rules for VPU / GPU
* Install more libraries
* Fixed absolute TBB include paths
* Disable GNA
* Fixed issue with linker
* Some fixes
* Fixed linkage issues in tests
* Disabled some tests
* Updated CI pipelines
* Fixed Windows linkage
* Fixed custom_opset test for static case
* Fixed CVS-70313
* Continue on error
* Fixed clang-format
* Try to fix Windows linker
* Fixed compilation
* Disable samples
* Fixed samples build with THREADING=SEQ
* Fixed link error on Windows
* Fixed ieFuncTests
* Added static Azure CI
* Revert "Fixed link error on Windows"
This reverts commit 78cca36fd2.
* Merge static and dynamic linux pipelines
* Fixed Azure
* fix codestyle
* rename all methods in this class to snake_case
* some updates
* code style
* fix code style in tests
* update statistics reporting
* update filling inputs
* change ngraph.Type to ov.Type
* fix typo
* save work
* save work
* save work
* compute latency in callback
* save work
* Fix get_idle_request
* save work
* fix latency
* Fix code style
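Computing latency inside the completion callback (per the commits above) keeps the measurement scoped to exactly one request's lifetime. A minimal sketch, assuming a hypothetical `Request` class rather than the real async infer request:

```python
import time

class Request:
    """Minimal stand-in for an async infer request (hypothetical)."""
    def __init__(self):
        self.latencies = []
        self._start = None

    def start_async(self):
        self._start = time.perf_counter()
        # ... real inference would be kicked off here ...
        self._on_done()

    def _on_done(self):
        # Latency is computed in the completion callback, so it
        # covers this request only, regardless of queue interleaving.
        self.latencies.append(time.perf_counter() - self._start)

req = Request()
req.start_async()
print(len(req.latencies), req.latencies[0] >= 0.0)
```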
* update AppInputInfo
* add iteration to PartialShape
* fix rebasing
* bind result::get_layout()
* correct mistakes
* fix setup
* use parameters/results instead of inputs/outputs
* move _from_node to node_output.hpp
* add read_model from buffer
* update imports
* revert package struct
* add new line
* remove bad quotes
* update imports
* style fix
* add new line
* Fix preprocessing
* rename function args
* set NCHW layout to image as default
* Fix input fillings
* remove Type import
* update tests
* style fix
* test clean
* remove blank line
* Add tensor_shape
* fix comments
* update PrePostProcessor init and build methods
* create test with model, update tests with new PrePostProcessor init and build
* Change filling inputs
* fix preprocessing
* basic support dynamic shapes
* fix legacy mode
* rename ie to core
* fix cpp code style
* fix input files parsing
* fix binary filling
* support dynamic batch size
* process images with original shapes if no tensor shapes were given
* fix fps and number of iterations
* Add new metrics
* support passing a path to a folder in input mapping
* add pcseq flag
* fix resolving conflicts
* dump statistic per group
* check for compatibility with partial shape
* revert statistic report names
* code refactoring
* update parameters
* enable legacy_mode if data size less than nireq
* add serialize to offline_transformations
* Fix preprocessing import
* change log output due to ci parsing
* fix layout
* allow to pass batch size with undefined layout
* add serializer
* fix comments from jiwaszki
* Fix latency parsing for ci
* code style
* rename tensor_shape to data_shape
* add message if image is processed with original shape
* fix syntax warning
* remove default legacy_mode if requests cover all data
* rewrite all file parsing
* fix preprocessing
* Fix preprocessing #2
* Use layout instead of str
* Fix file extensions
* Fix image sizes filling
* sort input files
* [Python API] quick fix of packaging
* update tests
* fix setup.py
* small fix
* small fixes according to comments
* skip mo frontend tests
* full mode is default for dynamic models only
* backward compatibility
* Fix package
* set layout in runtime
* static mode for dynamic models with all equal data shapes
* use get_tensor instead of set_tensor in legacy mode
* benchmarking dynamic model available in full mode only
* fix layout detection
* use batch_size * iteration instead of processed_frames in legacy mode
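Deriving the frame count from batch size and iteration count (instead of a separate processed-frames counter) makes the legacy-mode throughput computation trivial; a sketch with an illustrative helper:

```python
def fps(batch_size, iterations, total_duration_s):
    """Throughput as (batch_size * iterations) / duration,
    mirroring the legacy-mode counting described above.
    Hypothetical helper, not the actual benchmark_app code."""
    return batch_size * iterations / total_duration_s

print(fps(batch_size=4, iterations=100, total_duration_s=2.0))  # 200.0
```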
* fix tensor naming
* represent --inference_only
* refactoring main loop
* Fix number of iterations for full mode
Co-authored-by: Michael Nosov <mikhail.nosov@intel.com>
Co-authored-by: pszmel <piotr.szmelczynski@intel.com>
Co-authored-by: Bartek Szmelczynski <bartosz.szmelczynski@intel.com>
Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
Co-authored-by: jiwaszki <jan.iwaszkiewicz@intel.com>
Co-authored-by: Victor Kuznetsov <victor.kuznetsov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
Co-authored-by: Dmitry Pigasin <dmitry.pigasin@intel.com>
Co-authored-by: Artur Kulikowski <artur.kulikowski@intel.com>
Co-authored-by: Ilya Znamenskiy <ilya.znamenskiy@intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
* Add inputs-to-files path to benchmark_tool
* Add inputs-to-files path to benchmark_tool
* Add inputs-to-files path to benchmark_tool
* Remove `<>`
* Fix binary input fill
* Remove redundant checks
Change `:` delimiter to `::`
* Delete print
* Change delimiter to `:`
Add support for list of files
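The `:` delimiter with a comma-separated list of files (as settled on in the commits above) might be parsed like this; `parse_input_mapping` is a hypothetical helper for illustration, not the actual benchmark_app code:

```python
def parse_input_mapping(spec):
    """Parse 'input_name:file1,file2' into (name, [files]).

    Illustrates the ':' delimiter between input name and its
    comma-separated file list, as described above.
    """
    name, _, files = spec.partition(":")
    return name, [f for f in files.split(",") if f]

print(parse_input_mapping("data:cat.bmp,dog.bmp"))
```

Using `partition` rather than `split` keeps any further `:` characters inside file paths attached to the file-list part.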
* Add warning if some inputs filled with random values
* rebasing the perf-modes-2021.3 to the 2021.4
Caveats:
the (explicit) setting of #streams is not disabled (as it was before for experiments with DLBenchmark), and the logic slightly differs (streamsSet)
(cherry picked from commit 1ae1edc0ed)
* overriding streams (to force the TPUT mode to the DLBenchmark)
(cherry picked from commit 7f506cda31)
* disabling the reduction of #streams to fully mimic the baseline c4df94d42d of 2021.3 (before experiments)
(cherry picked from commit 85073dd1dd)
* clang/indentation
(cherry picked from commit 050a4155a9)
* splitting the Transformation into general and CPU-specific parts.
Now hopefully, this fully mimics the baseline c4df94d42d of 2021.3 (before experiments), as the reduction of the number of streams (as well as the early exit on GRU/LSTM/TensorIterator) is disabled
(cherry picked from commit e98b2c1a67)
* disabling GRU/LSTM/TI + reducing of streams + 5D considered compute-limited only for int8
(cherry picked from commit 32b8d80dee)
* refactored to avoid compute_limited_ratio, reverted the reducing #streams, removed LSTM from limitations
(cherry picked from commit f2b972171b)
* isa-based threshold logic
(cherry picked from commit b218457e1a)
* mode->hint
(cherry picked from commit ec20aa8eca)
* optional PERFORMANCE_HINT_NUM_REQUESTS
(cherry picked from commit 5a3883e3f3)
* moving the perfHints to the common OV config class + initial tests (CPU only, as the actual AUTO/MULTI should be accommodated on the master)
(cherry picked from commit (then fixed) 45bafe7d527f466507dea0693aeed51be4ebf776)
* AUTO support for PerfHints
* MULTI support for PerfHints
* Enabling Perf hints for the GPU plugin
* brushing settings output a bit
* disabling "throughput" perf hint being default (until OV 2.0)
* uncommenting the logic which was disabled to force the DLBenchmark to use the throughput mode by default
* removing dead and experimental code, and debug printfs
* clang/code-style
* code-review remarks
* Moved the output of the actual params that the hint produced to the right place
* aligning MULTI's GetConfig behavior to HETERO's as captured in the preso (CVS-59960) ratified with the ArchForum
* clang
* benchmark_app brushing
* Update inference-engine/samples/benchmark_app/README.md
* propagating the perf hints through one more scenario in the merged AUTO-MULTI
* fixed misprint
* Python benchmark_app update for perf hints
* addressing reviewers' comments on the python benchmark_app
* simplifying/brushing logic a bit
* refactor the heuristic to the separate file (to be shared with iGPU soon)
* refactor conversion of modes to the specific GPU config per feedback from Vladimir
* add missed __init__.py files
* Update __init__.py
empty line
* Merge inference_engine/tools/benchmark_tool with tools/benchmark_tool
* Update MD links
* remove benchmark_tool from package_BOM.txt
* add tools folder to the list of Doxygen files
* fix relative paths
* Update index.md
remove extra line
* Add input image scale flag in benchmark app.
- user sets the input image scale with -iscale;
the input is divided by the scale.
Signed-off-by: hyunback <hyunback.kim@intel.com>
* Apply image scale and mean parameters in benchmark app
Mean and scale values are per channel
Signed-off-by: hyunback <hyunback.kim@intel.com>
* Fix clang-format
Signed-off-by: hyunback <hyunback.kim@intel.com>
* fix second clang-format issue.
Signed-off-by: hyunback <hyunback.kim@intel.com>
* Update benchmark tool to align the format of mean and scale values with MO arguments.
Signed-off-by: hyunback <hyunback.kim@intel.com>
* Remove debug print.
Signed-off-by: hyunback <hyunback.kim@intel.com>