* initial split of the on_adapter function in order to check whether all checks pass
* remove blank line and change if statements for on_adapter for void type
* add type checking instead of name checking
* add brackets to two if/else if statements and move helper functions to private
* fix style
* revert removing ngraph check for empty params and result
* move map_type_from_body method to private, add const& function arguments
The optimization has no use case at the moment,
since in the scope of greedy mode we removed the insertion of
convert/reorder after the input layer.
This logic will be re-implemented after the ngraph migration,
and then we can bring back the optimization together with
some tests.
* do not convert Sequences to TensorIterator when plugin supports Sequence primitive
* fix reference implementations for Sequences, handle the seq_len == 0 case
* Adding new mode for LSTMSequence single layer tests
* update single layer tests
* fix failed unit tests, update single layer tests for RNN/GRU sequences
* fix failed unit tests
* fix single layer tests
* ignore failed single layer tests on gpu (known issue), fix review remarks
* Move Mul node before Reshape in GroupConvolutionMultiplyFusion
In the following scenario:
+-----------+
| weights |
+-----------+
|
v
+-----------+
| Reshape |
.... +-----------+
\ /
\ /
v v
+--------------------+
| GroupConvolution |
+--------------------+
|
v
+-----------+ +----------+
| Multiply | <--- | Constant |
+-----------+ +----------+
if the second input (weights) to GroupConvolution is a Reshape,
we should apply the Multiply node directly to the weights rather than to the Reshape node:
+-----------+
| weights |
+-----------+
|
v
+-----------+ +----------+
| Multiply | <--- | Constant |
+-----------+ +----------+
|
v
+-----------+
| Reshape |
.... +-----------+
\ /
\ /
v v
+--------------------+
| GroupConvolution |
+--------------------+
|
v
That approach has no side effects in the usual scenario when weights are constant,
but it's necessary for models with FakeQuantize, since it makes it possible to perform
the GroupConvolutionTransformation from LPT transformations.
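The Multiply constant can be pushed through the Reshape onto the weights because Reshape only regroups elements and the per-output-channel scale touches the same elements either way. A minimal numpy sketch of that commutation (all shapes, names and the scale layout here are illustrative, not the actual transformation code):

```python
import numpy as np

# Illustrative shapes: 2 groups, 3 output channels per group,
# 4 input channels per group, 3x3 kernels.
G, O, I, K = 2, 3, 4, 3
rng = np.random.default_rng(0)

w = rng.standard_normal((G * O, I, K, K))  # weights before Reshape
s = rng.standard_normal(G * O)             # per-output-channel scale

# Multiply applied to the flat weights first, then Reshape (the moved form):
fused = (w * s.reshape(G * O, 1, 1, 1)).reshape(G, O, I, K, K)

# Reshape first, then Multiply with a compatibly reshaped constant
# (models scaling the GroupConvolution weights per output channel):
reference = w.reshape(G, O, I, K, K) * s.reshape(G, O, 1, 1, 1)

# Both orders produce identical grouped weights.
assert np.allclose(fused, reference)
```

Since convolution is linear in its weights, scaling the weights per output channel is equivalent to scaling the convolution output per channel, which is what lets the Multiply be folded before the Reshape.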
* remove unnecessary new lines
* simplify code
* check if input to reshape has shape that starts with G * O
* add pattern::has_static_dims({0, 1}) to weights
* Use K = 3 in WeightsWithFakeQuantizeAndReshape tests
* Add LPT tests for groupconvolution_qdq_transformation
* fix PullReshapeThroughDequantizationTransformation tests
* Small shape propagation & evaluators update
* minor style changes
* Selu operation shape inference
* Experimental tests for activation operations in the IE test infrastructure
* minor style fixes
* prelu fix
* Add ScatterElements value propagation
* Add condition for input nodes
* Add asserts
* Refactoring scatter according to review
* Add unit tests for 1d axis tensor
* Refactoring according to review
* refactoring unit test
* Refactoring according to review
* Update unit test
* Update unit test
* [Plugins test] Add functional test for correct import/export support by plugins
The test suite creates a number of models with various precisions.
For each model:
- If the plugin doesn't support the IMPORT_EXPORT_SUPPORT metric - skip the test
- If LoadNetwork without cache enabled fails - skip the test
- Do one inference
If load and inference succeed and the plugin has the import metric:
- Load the network with cache enabled, run inference and compare outputs with the original inference
- Import the network, perform inference and compare outputs with the previous ones
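The flow above can be sketched in Python with stub objects standing in for the real plugin API (StubPlugin, load_network, import_network and the metric name are simplified stand-ins, not the actual Inference Engine classes):

```python
# Stub plugin: "inference" just returns the model payload, so cached,
# imported and baseline runs can be compared for equality.
class StubPlugin:
    metrics = {"IMPORT_EXPORT_SUPPORT": True}

    def load_network(self, model, cache_enabled=False):
        return lambda: model["payload"]

    def import_network(self, blob):
        return lambda: blob["payload"]


def run_case(plugin, model):
    if not plugin.metrics.get("IMPORT_EXPORT_SUPPORT"):
        return "skipped"                    # plugin cannot import/export
    try:
        infer = plugin.load_network(model)  # no cache: baseline load
    except RuntimeError:
        return "skipped"                    # LoadNetwork failed
    baseline = infer()                      # one inference

    # Load with cache enabled, then import; both must match the baseline.
    cached = plugin.load_network(model, cache_enabled=True)()
    imported = plugin.import_network(model)()
    assert cached == baseline and imported == baseline
    return "passed"


result = run_case(StubPlugin(), {"payload": 42})
```

The skip-versus-fail distinction matters here: a plugin that simply lacks the metric should not count as a failure.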
* Fix Centos build warnings
Myriad: reset executableNetwork before next load
Myriad: Reduced time consumption for Myriad tests
* Caching test suite - batch size parameter support
* Add F3Net to list of supported models
* Add instruction for F3Net model pytorch->onnx conversion
* Add new file to ie_docs.xml
* Update instruction
* Apply comments
* Apply comments
* Apply review comments
* Build python wheel w/o strict dependency to _pyngraph
* Exclude extra components which are not needed for IE python wheel
- myriad_compile
- myriad_perfcheck
- compile_tool
- inference_engine_c
* Added add_clang_format commands
* Added clang_format targets as dependencies for original targets
* Update CI scripts
* Fixed code-style for nGraph unit tests
* Change job name
* Disable clang-format for Windows
* Fixed nGraph python build
* Fixed comments and enabled code style for ONNX Editor
* Disable code style check on compilation step
* Some additional checks in the MO transformation UnsqueezeTileReshapeBlockToInterpolate.
* Refactored transformation UnsqueezeTileReshapeBlockToInterpolate: checks of applicability were moved into a separate function.
* Rewritten the MO transformation UnsqueezeTileReshapeBlockToInterpolate.
* Now we replace the whole Unsqueeze+Tile+Reshape block by Interpolate-4.
* Fixed comment.
* Added comments about applicability criteria.
* Some fixes.
* Small fix.
* Deleted redundant type casts.
* Added an example into comment.
* Now an interpolated axis length is calculated using the function node_to_get_shape_value_of_indices.
* Fixed tests.
* Optimized imports.
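The reason an Unsqueeze+Tile+Reshape block can be replaced by Interpolate-4 is that the block implements nearest-neighbor upsampling along one axis. A numpy sketch under illustrative axis and scale values (np.repeat models the nearest-mode Interpolate here):

```python
import numpy as np

# Illustrative input: N=1, C=2, H=3, W=4; upsample H by a factor of 2.
x = np.arange(24).reshape(1, 2, 3, 4)
axis, scale = 2, 2

# Unsqueeze + Tile + Reshape block:
y = np.expand_dims(x, axis + 1)      # (1, 2, 3, 1, 4)
reps = [1] * y.ndim
reps[axis + 1] = scale
y = np.tile(y, reps)                 # (1, 2, 3, 2, 4)
new_shape = list(x.shape)
new_shape[axis] *= scale
y = y.reshape(new_shape)             # (1, 2, 6, 4)

# Nearest-neighbor upsampling along the same axis gives the same result:
assert np.array_equal(y, np.repeat(x, scale, axis=axis))
```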
* [GNA] Add gna model dumping
* [GNA] Modify debug layout, add timestamp, fix compound bias dump
* [GNA] Move data dump to another file
* [GNA] Create model data dump file only when needed
* Refactor specification
* Complete detail description section
* Rewrite mathematical formula
* Fix range of values for min and max attributes
* Add note for conversion policy between float and integral type of input tensor
* Address review comments
* Fix typo in max attribute
* Remove redundant examples
* Reshape interval propagation tests with scalar
* Set or_tensor result shape to log_or output
* Create result HostTensor without shape
* Add tests for scalar broadcast in binary ops
* Add tests for 1D inputs
Test:
Use benchmark_app from this PR #4814
- Run ./benchmark_app -h
- Verify that the template plugin is listed among the devices that support caching
- Run ./benchmark_app -m <model.onnx> -d TEMPLATE -i <file> -cache tmpCache
- Verify that tmpCache is created and the network is exported to a blob
- Verify that for an ONNX model (e.g. ResNet50) loading of the network is faster when the cache is available
- Run several times: ./benchmark_app -m <model.onnx> -d TEMPLATE -i <file> -cache tmpCache -single_load
- Verify that loading is again faster when the cache is available, and that in this mode loading is faster than without the "-single_load" option
* Subgraph extraction in ONNX models
* Windows compilation error fix + docs update
* Code cleanup after the first round of reviews
* CI compilation error fix
* Even more CI compilation error fixes
* Proper usage of ADL in generic code
* ONNX shape inference related code cleanup
* Disable the onnx test utils when pb-lite is used
* PB dependency removal from UT, strong types for input and output edges, more verbose errors in tests
* Fix for the protobuf descriptor database corruption
* testing visibility changes
* Revert the changes that didn't work
* Make tests green again?
* Make the current tests pass
* Remove the ONNX header from editor's tests
* Switch from stable_partition to remove_if because of compiler bugs
* Obsolete test removal and cmakelists cleanup in tests
* Macos failed, reverting some changes
* Handle the multiple output consumers UC
* Keep the tensor name when replacing an initializer
* Cutting a graph with multiple consumers of inputs and initializers
* Subgraph extraction with multiple initializer consumers
* Add CMakeList, move files
* fixed headers locations
* fixed calling tests
* apply code styles
* change namespace to onnx_editor
* removed unneeded dependencies on onnx_editor
* Remove linking libonnx from unit-test
* Consider all flavors of protobuf libraries
* disable lto for onnx and onnx_proto
* revert set ONNX_INCLUDE_DIR change
* create new target onnx_common
* cmake dependencies clean-up
* Added onnx_common visibility
* pass ModelProto as reference + remove redundant dependencies
* expose onnx symbols by onnx_common
* remove -fno-lto, move InputEdge and OutputEdge to other header
* configure onnx linking for MSVC and APPLE
* remove unused comment
* fixed linking configuration for msvc and apple/clang
* clean cmakes
* added dependencies
* remove onnx and onnx_proto from NGRAPH_EXPORT_TARGETS_ENABLE
* make onnx and protobuf dynamic
* cmake dependency clean-up
* added onnx patch - make only onnx_proto shared
* protobuf usage should depend on BUILD_SHARED_LIBS
* fixed patch
* fixed install targets
* add if to install protobuf
* fixed onnx flag
* move protobuf install
* cmake dependencies
* added protobuf header include dir + clean-up ng cmake
* added protobuf to ng targets
* fix apple build
* dependencies clean-up
* make onnx_editor static lib
* remove onnx editor visibility
* fixed protobuf export enablement
* fix apple build
* test build protobuf by fetch content
* fix apple protobuf build
* Docs update
* Producer name update in test onnx models
* More comments in the subgraph extraction code
* Merge remote-tracking branch 'upstream/master' into mbencer/MoveEditorToSeparateLib
* add dependency to onnx_proto for test utils
* dependency to ngraph_test_util
* ngraph_test_util dependency. part. 2
* conditional dependency of onnx to ngraph_test_util
* Apply suggestions from code review
Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
* review remarks
* remove onnx_proto default visibility
* cmake remarks, not exporting onnx_common and onnx_editor to the target set
* [test fix macos CI] Revert "remove onnx_proto default visibility"
* set onnx_proto visibility only for macos
* [PyPI] remove rpath for dylib
* Corrected cmd
* fix protobuf-lite linking
Co-authored-by: tomdol <tomasz.dolbniak@intel.com>
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
Co-authored-by: mryzhov <mikhail.ryzhov@intel.com>