* Add conversion of padded convolution to valid convolution without changing other parameters
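As an illustration of the underlying idea (not the GNA pass itself): a padded convolution is equivalent to explicitly padding the input and then running a valid convolution with otherwise unchanged parameters. A minimal 1D NumPy sketch:

    import numpy as np

    x = np.arange(6, dtype=float)      # input signal
    k = np.array([1.0, -2.0, 1.0])     # kernel
    pads = (1, 1)                      # pads_begin, pads_end

    # padded ("same") convolution expressed as explicit padding + valid convolution
    via_valid = np.convolve(np.pad(x, pads), k, mode="valid")
    same = np.convolve(x, k, mode="same")
    assert np.allclose(via_valid, same)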
* [GNA] Fix graph loop when multiple connections exist from single layer to concat
* [GNA] Add 1D and 2D convolution test cases
Add models covering all transform scenarios.
Add test cases covering 1D and 2D convolutions.
Update the transform with the latest code.
Add minor fixes in the transform and elsewhere.
* [GNA] Remove debug code
* [GNA] Fixes after review
* [GNA] Fix failing tests
Co-authored-by: prozen <piotr.rozen@intel.com>
* Implement nGraph transformation to decompose Einsum-7 operation
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Use MatMul instead of Eltwise-multiplication and ReduceSum
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
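For context on why MatMul is preferred: a pairwise Einsum contraction over a shared axis collapses into a single (batched) MatMul instead of an elementwise multiply followed by ReduceSum. Illustrative NumPy sketch, not the transformation code itself:

    import numpy as np

    a = np.random.rand(2, 3, 4)   # (B, I, K)
    b = np.random.rand(2, 4, 5)   # (B, K, J)

    # contraction over the shared axis K ...
    ein = np.einsum("bik,bkj->bij", a, b)
    # ... is a single batched MatMul
    mm = np.matmul(a, b)
    assert np.allclose(ein, mm)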
* Add description for new methods
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix code style
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix code style #2
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Remove unused variables.py
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Apply feedback after review: fix comments, use register_new_node
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Add Reshape if needed and apply code-review feedback
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Fix code-style
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Remove unused variable
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>
* Remove extern template from headers for RTTI classes
* Move instantiation out of the namespace
* Use __ANDROID__ conditional compilation for TBlob
* One more attempt
* Reference implementation for memory
* memory reference implementation tests, fixes
* Add a new VariableContext class
* fix ngraph code style
* add new evaluate method to ngraph::function
* unordered_set instead of set in find_variables method
* Added a new Memory base class; automatic memory allocation for VariableContext in Assign ops; refactoring
* ngraph codestyle
* ngraph codestyle
* temporarily disable the check of variables in ngraph::function
* fix for the 'evaluate hides overloaded virtual function' warning
* ngraph codestyle
* uncomment a check in validate_and_infer method
* Removing a check (not backward compatible); adding docs
* Auto detect Parameters/Variables, new constructors in ngraph::function
* use zero initial values in ReadValue v6 evaluate
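Conceptual sketch of the state handling described above (hypothetical Python, not the nGraph API): the variable context maps variable IDs to buffers, ReadValue returns the stored buffer (zeros on first use), and Assign overwrites it.

    import numpy as np

    class VariableContext:                      # illustrative stand-in
        def __init__(self):
            self.states = {}

        def read_value(self, var_id, shape, dtype=np.float32):
            # zero initial value on the first read, as in ReadValue v6 evaluate
            return self.states.setdefault(var_id, np.zeros(shape, dtype))

        def assign(self, var_id, value):
            self.states[var_id] = np.array(value, dtype=np.float32, copy=True)

    ctx = VariableContext()
    print(ctx.read_value("v0", (2,)))   # [0. 0.]
    ctx.assign("v0", [1.0, 2.0])
    print(ctx.read_value("v0", (2,)))   # [1. 2.]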
* fix unit tests
* fix codestyle
* fix build (werror)
* ngraph codestyle
* update unit tests, refactoring
* ngraph codestyle
* refactoring, docs, new unit tests
* Resolve review remarks
* rt_attributes-like approach in EvaluationContext, codestyle
* fix build and unit tests
* resolve review comments
* resolve review comments
* codestyle
* export public API
* Add axis support
* Update dequant extractor
* Update qdeq ops
* Refactoring quantize
* Update dequantize resolver
* Update dequantize op
* Refactoring dequantize
* Some fixes for quantize and dequantize
* Update unit tests
* Refactoring quantize/dequantize axis support
* Move quantize/dequantize resolvers to middle
* hot fix
* Fix unit tests
* Fix unit tests
* Update quantize resolver comment
* Refactoring code according to code review
* Fix according to review
* Change order of transforms in the quantize pipeline
* Python API for LoadNetwork by model file name
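A hedged usage sketch, assuming load_network now accepts a model file path directly after this change (file name and device are illustrative):

    from openvino.inference_engine import IECore

    ie = IECore()
    # load directly from the model file name, without an explicit read_network call
    exec_net = ie.load_network(network="model.xml", device_name="CPU")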
* BenchmarkApp: Add caching and LoadNetworkFromFile support
Two new options are introduced:
- cache_dir <dir> - enables model caching
- load_from_file - performs "LoadNetwork" by model file name
Using both options gives the best read/load network performance on startup.
Tests:
1) Run "benchmark_app -h". The help will display the 2 new options; after the available devices, a list of devices with cache support is shown.
2) ./benchmark_app -d CPU -m <model.xml> -load_from_file
Verify that some test steps are skipped (related to ReadNetwork, reshaping, etc.)
3) Prerequisite: caching support shall be enabled for the Template plugin
./benchmark_app -d TEMPLATE -m <model.onnx> -load_from_file -cache_dir someDir
Verify that "someDir" is created and the generated blob is available
Run again and verify that loading works as well (it should be faster, as the ONNX model will not be loaded)
4) Run the same test as (3), but without the -load_from_file option. Verify that the cache is properly created
For some devices, LoadNetwork time shall improve when the cache is available
* Removed additional timing prints
* Correction from old code
* Revert "Removed additional timing prints"
Additional change: when a .blob is chosen instead of an .xml, it takes priority over the caching flags
* Removed new time printings
As discussed, time measurements such as 'total first inference time' will be available in the 'timeTests' scripts
* Fix clang-format issues
* Store weights range in meta info instead of cloning whole constant
* Add command line option for constants size serialization threshold
* Update IR Runner to handle OP meta information
* Fix interpolate type_prop tests
* Skip new failures in SLT
* Fix models count
* Add dynamism elimination option.
* TopK shape propagation changed
* Fix type_prop tests for TopK
* Update specification for ConvolutionBackpropData.
* Add backticks to attribute types, changed layout description for input, filter and output.
* Correct xml example.
* Add new examples.
* Add link with convolution backprop description.
* Replace additional link with a link to the arXiv website.
* Insert enumeration for examples.
* Fix example with output_shape input.
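For reference when checking the examples: when the output_shape input is not provided, the output spatial size follows the usual transposed-convolution arithmetic. A small sketch of that formula (assumed here; see the specification itself for the authoritative definition):

    def deconv_out_dim(in_dim, kernel, stride, dilation, pad_begin, pad_end, output_padding=0):
        # standard transposed-convolution output size for explicit padding
        return stride * (in_dim - 1) + (kernel - 1) * dilation + 1 - pad_begin - pad_end + output_padding

    print(deconv_out_dim(in_dim=4, kernel=3, stride=2, dilation=1, pad_begin=1, pad_end=1))  # 7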
* add Gather-1 -> Gather-7 upgrade transformation
* added G1->G7 to the common_optimizations list, but it is turned off by default
* fixed a couple of typos in comments
* corrected validation error messages for GatherBase
* removed redundant comments
* clang format fix
* Converts v1::Gather into v7::Gather
* added explicit batch_dims = 0, corrected axis bound check for dynamic data_rank
* Add visitor test for convolution_backprop.
* Add test to CMakeLists, corrected input shapes.
* Add checking attributes count.
* Extend test for all autoPad types.
* Support old TBBs
* Don't reset environment
* Removed useless NO_CMAKE_FIND_ROOT_PATH
* Fixed build with old TBB for Windows
* Fixed ngraph code style
* [LPT] [CPU] Convert dequantization shift in low precision before FP32 conversion in CPU
* [LPT] Avoid unnecessary conversion to FP32
* [LPT] Split workaround: replace_node manual handling
* [nGraph] [LPT] Q/DQ representation on weights extension: update transformation for conversion to old format
* review notes fix
* [LPT] checkZeroPoint reuse
* Moved telemetry to repo root directory from MO
* New telemetry package in the "openvino" sub-directory
* Removed telemetry from the MO BOM
* Updated MO BOM and added stub file for telemetry
* Fixed license header
* Fixed license headers and cleaned up the telemetry setup.py
* Fixed import
* Added temporary dependency for openvino-telemetry
* Added ignore for pylint issues
* Fixed import statements
* Updated imports in the telemetry library
* Removed telemetry library. Added link to another private repo
* Removed redundant start_session event for the MO
* Changed approach to import the telemetry library
* Minor code refactoring
* Updated MO telemetry events sending messages
* Refactor sending events for the IE runtime check
* Disable forcing warnings for deprecated methods
* Removed changes from the requirements.txt to install telemetry library to avoid merge conflicts
* Update copyright in the model-optimizer/mo/utils/telemetry_stub.py
Co-authored-by: Gleb Kazantaev <gleb.nnstu@gmail.com>
* Minimized legacy usage in tests
* Use legacy only for specific files
* Fixed code style
* Fixed linkage
* Removed old CPU / GPU tests binaries
* Test
* Disabled IB console
* Disabled test for AUTO QueryNetwork from parallel threads
* Remove temporary cmake solution and use updated tbbbind_2_4 package
* Remove testing version of dependency
* Add versioning
Co-authored-by: Kochin, Ivan <ivan.kochin@intel.com>
* align pypi deps of benchmark, cross check tool, python API
* move cython from python API requirements to requirements-dev
* change requirements to >= for most packages
* update requirements
* set pinned numpy major version in wheel requirements
* set stricter pip requirements-dev in wheel
* change scikit-image version to 0.17