The optimization does not have a use case at the moment,
since in the scope of greedy mode we removed the insertion of
convert/reorder after the input layer.
This logic will be re-implemented after the nGraph migration,
and then we can bring the optimization back together with
some tests.
* do not convert Sequences to TensorIterator when the plugin supports the Sequence primitive
* fix reference implementations for Sequences, handling the seq_len == 0 case
* Adding new mode for LSTMSequence single layer tests
* update single layer tests
* fix failed unit tests, update single layer tests for RNN/GRU sequences
* fix failed unit tests
* fix single layer tests
* ignore failing single layer tests on GPU (known issue), fix review remarks
* Move Mul node before Reshape in GroupConvolutionMultiplyFusion
In the following scenario:
+-----------+
| weights |
+-----------+
|
v
+-----------+
| Reshape |
.... +-----------+
\ /
\ /
v v
+--------------------+
| GroupConvolution |
+--------------------+
|
v
+-----------+ +----------+
| Multiply | <--- | Constant |
+-----------+ +----------+
if the second input (weights) of GroupConvolution is a Reshape,
we should apply the Multiply node directly to the weights rather than to the Reshape node:
+-----------+
| weights |
+-----------+
|
v
+-----------+ +----------+
| Multiply | <--- | Constant |
+-----------+ +----------+
|
v
+-----------+
| Reshape |
.... +-----------+
\ /
\ /
v v
+--------------------+
| GroupConvolution |
+--------------------+
|
v
This approach has no side effects in the usual scenario where the weights are constant,
but it is necessary for models with FakeQuantize, since it makes it possible to perform
GroupConvolutionTransformation from the LPT transformations.
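For illustration, here is a minimal numpy sketch (not the actual nGraph pass; the shapes and the 1x1 kernel are made up) of why the per-output-channel Multiply constant can be folded onto the weights before the Reshape:

```python
import numpy as np

G, O, I, H, W = 2, 3, 4, 5, 5              # groups, outputs/inputs per group, spatial size
weights = np.random.rand(G * O, I, 1, 1)   # flat weights; Reshape to [G, O, I, 1, 1] is implicit below
scale = np.random.rand(G * O, 1, 1)        # per-output-channel Multiply constant
x = np.random.rand(1, G * I, H, W)

def group_conv_1x1(data, w_flat):
    # naive grouped 1x1 convolution: each group only sees its own input channels
    y = np.zeros((1, G * O, H, W))
    for g in range(G):
        xg = data[:, g * I:(g + 1) * I]               # inputs of this group
        wg = w_flat[g * O:(g + 1) * O, :, 0, 0]       # [O, I]
        y[:, g * O:(g + 1) * O] = np.einsum('oi,bihw->bohw', wg, xg)
    return y

# original graph: GroupConvolution(weights), then Multiply by the constant on the output
y_multiply_after = group_conv_1x1(x, weights) * scale

# transformed graph: Multiply folded onto the weights before the (implicit) Reshape
y_multiply_before = group_conv_1x1(x, weights * scale.reshape(G * O, 1, 1, 1))

assert np.allclose(y_multiply_after, y_multiply_before)
```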
* remove unnecessary new lines
* simplify code
* check if the input to Reshape has a shape that starts with G * O
* add pattern::has_static_dims({0, 1}) to weights
* Use K = 3 in WeightsWithFakeQuantizeAndReshape tests
* Add LPT tests for groupconvolution_qdq_transformation
* fix PullReshapeThroughDequantizationTransformation tests
* Small shape propagation & evaluators update
* minor style changes
* Selu operation shape inference
* Experimental tests for activation operations in the IE test infrastructure
* minor style fixes
* prelu fix
* Add ScatterElements value propagation
* Add condition for input nodes
* Add asserts
* Refactoring scatter according to review
* Add unit tests for 1d axis tensor
* Refactoring according to review
* refactoring unit test
* Refactoring according to review
* Update unit test
* Update unit test
* [Plugins test] Add functional test for correct import/export support by plugins
The test suite creates a number of models with various precisions.
For each model:
- If the plugin doesn't support the IMPORT_EXPORT_SUPPORT metric - skip the test
- Try LoadNetwork without cache enabled; if it fails - skip the test
- Do one inference
If loading and the infer request succeed and the plugin has the import metric:
- Load the network with cache enabled. Run an infer request and compare the outputs with the original inference
- Import the network and perform inference. Compare the outputs with the previous ones
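For illustration, a rough Python sketch of this flow using the legacy openvino.inference_engine API (the real suite is a C++ gtest suite; the device name, model path, input handling and CACHE_DIR usage below are assumptions):

```python
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
device = "CPU"                                   # hypothetical target device
net = ie.read_network(model="model.xml")         # hypothetical model path

# Skip the test if the plugin does not report import/export support
if "IMPORT_EXPORT_SUPPORT" not in ie.get_metric(device, "SUPPORTED_METRICS"):
    raise RuntimeError("skip: plugin has no IMPORT_EXPORT_SUPPORT metric")

input_name = next(iter(net.input_info))
dummy = np.random.rand(*net.input_info[input_name].input_data.shape).astype(np.float32)

# Reference run without caching
exec_net = ie.load_network(network=net, device_name=device)
baseline = exec_net.infer({input_name: dummy})

# Reload with caching enabled and compare the outputs with the baseline
ie.set_config({"CACHE_DIR": "tmpCache"}, device)
cached = ie.load_network(network=net, device_name=device).infer({input_name: dummy})
for name in baseline:
    assert np.allclose(baseline[name], cached[name])
# an explicit ie.import_network(...) round-trip would cover the import path; omitted here
```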
* Fix CentOS build warnings
Myriad: reset executableNetwork before next load
Myriad: Reduced time consumption for Myriad tests
* Caching test suite - batch size parameter support
* Add F3Net to list of supported models
* Add instructions for F3Net model PyTorch->ONNX conversion
* Add new file to ie_docs.xml
* Update instruction
* Apply comments
* Apply comments
* Apply review comments
* Build Python wheel without a strict dependency on _pyngraph
* Exclude extra components which are not needed for the IE Python wheel
- myriad_compile
- myriad_perfcheck
- compile_tool
- inference_engine_c
* Added add_clang_format commands
* Added clang_format targets as dependencies for original targets
* Update CI scripts
* Fixed code-style for nGraph unit tests
* Change job name
* Disable clang-format for Windows
* Fixed nGraph python build
* Fixed comments and enabled code style for ONNX Editor
* Disable code style check on compilation step
* Some additional checks in the MO transformation UnsqueezeTileReshapeBlockToInterpolate.
* Refactored transformation UnsqueezeTileReshapeBlockToInterpolate: applicability checks were moved into a separate function.
* Rewritten the MO transformation UnsqueezeTileReshapeBlockToInterpolate.
* Now we replace the whole Unsqueeze+Tile+Reshape block with Interpolate-4 (see the sketch after this list of changes).
* Fixed comment.
* Added comments about applicability criteria.
* Some fixes.
* Small fix.
* Deleted redundant type casts.
* Added an example into comment.
* Now an interpolated axis length is calculated using the function node_to_get_shape_value_of_indices.
* Fixed tests.
* Optimized imports.
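For reference, a tiny numpy sketch (illustrative only; the shapes and scale factor are made up) of why an Unsqueeze+Tile+Reshape block along one axis matches nearest-neighbor interpolation along that axis:

```python
import numpy as np

x = np.arange(6, dtype=np.float32).reshape(1, 1, 3, 2)   # [N, C, H, W]
k = 2                                                     # upscale factor along H

# Unsqueeze (new axis after H) + Tile (repeat k times) + Reshape (merge back into H)
block = np.tile(np.expand_dims(x, axis=3), (1, 1, 1, k, 1)).reshape(1, 1, 3 * k, 2)

# Nearest-neighbor Interpolate along H with scale factor k
interp = np.repeat(x, k, axis=2)

assert np.array_equal(block, interp)
```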
* [GNA] Add gna model dumping
* [GNA] Modify debug layout, add timestamp, fix compound bias dump
* [GNA] Move data dump to another file
* [GNA] Create model data dump file only when needed
* Refactor specification
* Complete the detailed description section
* Rewrite mathematical formula
* Fix range of values for min and max attributes
* Add a note about the conversion policy between float and integral input tensor types
* Address review comments
* Fix typo in max attribute
* Remove redundant examples
* Reshape interval propagation tests with scalar
* Set or_tensor result shape to log_or output
* Create result HostTensor without shape
* Add tests for scalar broadcast in binary ops
* Add tests for 1D inputs
Test:
Use benchmark_app from this PR (#4814).
Run ./benchmark_app -h
Verify that the template plugin is listed among the devices that support caching.
Run ./benchmark_app -m <model.onnx> -d TEMPLATE -i <file> -cache tmpCache
Verify that tmpCache is created and the network is exported to a blob.
Verify that if the model is ONNX (e.g. ResNet50), loading of the network is faster when the cache is available.
Run several times: ./benchmark_app -m <model.onnx> -d TEMPLATE -i <file> -cache tmpCache -single_load
Verify again that loading of the network is faster when the cache is available, and that in this mode it is faster than without the "-single_load" option.
* Subgraph extraction in ONNX models
* Windows compilation error fix + docs update
* Code cleanup after the first round of reviews
* CI compilation error fix
* Even more CI compilation error fixes
* Proper usage of ADL in generic code
* ONNX shape inference related code cleanup
* Disable the onnx test utils when pb-lite is used
* PB dependency removal from UT, strong types for input and output edges, more verbose errors in tests
* Fix for the protobuf descriptor database corruption
* testing visibility changes
* Revert the changes that didn't work
* Make tests green again?
* Make the current tests pass
* Remove the ONNX header from editor's tests
* Switch from stable_partition to remove_if because of compiler bugs
* Obsolete test removal and cmakelists cleanup in tests
* macOS build failed, reverting some changes
* Handle the multiple output consumers use case
* Keep the tensor name when replacing an initializer
* Cutting a graph with multiple consumers of inputs and initializers
* Subgraph extraction with multiple initializer consumers
* Add CMakeList, move files
* fixed header locations
* fixed calling tests
* apply code style
* change namespace to onnx_editor
* removed unneeded dependencies from onnx_editor
* Remove linking libonnx from unit-test
* Consider all flavors of protobuf libraries
* disable lto for onnx and onnx_proto
* revert set ONNX_INCLUDE_DIR change
* create new target onnx_common
* cmake dependencies clean-up
* Added onnx_common visibility
* pass ModelProto by reference + remove redundant dependencies
* expose onnx symbols by onnx_common
* remove -fno-lto, move InputEdge and OutputEdge to other header
* configure onnx linking for MSVC and APPLE
* remove unused comment
* fixed linking configuration for msvc and apple/clang
* clean cmakes
* added dependencies
* remove onnx and onnx_proto from NGRAPH_EXPORT_TARGETS_ENABLE
* make onnx and protobuf dynamic
* cmake dependency clean-up
* added onnx patch - make only onnx_proto shared
* make protobuf usage depend on BUILD_SHARED_LIBS
* fixed patch
* fixed install targets
* add if to install protobuf
* fixed onnx flag
* move protobuf install
* cmake dependencies
* added protobuf header include dir + clean-up ng cmake
* added protobuf to ng targets
* fix apple build
* dependencies clean-up
* make onnx_editor static lib
* remove onnx editor visibility
* fixed protobuf export enablement
* fix apple build
* test building protobuf via FetchContent
* fix apple protobuf build
* Docs update
* Producer name update in test onnx models
* More comments in the subgraph extraction code
* Merge remote-tracking branch 'upstream/master' into mbencer/MoveEditorToSeparateLib
* add dependency to onnx_proto for test utils
* dependency to ngraph_test_util
* ngraph_test_util dependency, part 2
* conditional dependency of onnx to ngraph_test_util
* Apply suggestions from code review
Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
* review remarks
* remove onnx_proto default visibility
* cmake remarks, do not export onnx_common and onnx_editor to the target set
* [test fix macos CI] Revert "remove onnx_proto default visibility"
* set onnx_proto visibility only for macos
* [PyPI] remove rpath for dylib
* Corrected cmd
* fix protobuf-lite linking
Co-authored-by: tomdol <tomasz.dolbniak@intel.com>
Co-authored-by: Tomasz Jankowski <tomasz1.jankowski@intel.com>
Co-authored-by: Mateusz Tabaka <mateusz.tabaka@intel.com>
Co-authored-by: Tomasz Socha <tomasz.socha@intel.com>
Co-authored-by: mryzhov <mikhail.ryzhov@intel.com>
* Recently we merged the WA so that the MyriadPlugin is not unloaded at runtime on Linux systems.
It has been found that this linker option causes problems on CentOS, leading to a segmentation fault on the second dlopen() call on the myriadPlugin.
I tried updating the glibc version on CentOS (to 2.28) and the problem disappeared.
Both Ubuntu 18 and 20 work fine, and since Ubuntu was the platform requested for the initial problem, I've decided to limit the solution to Ubuntu only.
* Created opset7.md to add the specification of the operation FFT.
* Started to write the specification of the operation FFT.
* Added link to the specification of the operation FFT.
* Continued to write the specification of the FFT.
* Written about inputs and outputs of FFT.
* Started to write examples.
* Added example when there is no input 'signal_size'.
* Added more examples.
* Small fixes.
* Small fix.
* Renamed FFT to DFFT.
* Small fix.
* Small fix.
* Started to write the algorithm of FFT.
* Added asserts.
* Started to write the __call__ method of the DFFT calculation class.
* Fixed category.
* Continued to write the algorithm of the FFT calculation.
* Continued to write the algorithm of the FFT calculation.
* Continued to write the algorithm of the FFT calculation.
* Continued to write the algorithm of the calculation of DFFT.
* Written the algorithm of the calculation of DFFT.
* Small fix.
* Renamed operation.
* Added examples of 3D input tensors.
* Covered complex number representation.
* Written formulas for FFT.
* Written about start point of trimming.
* Small fixes.
* Small fixes.
* Some fixes.
* Added some note.
* Added a description of the calculation of the output shape.
* Added examples with unsorted axes and with (-1) in signal_size.
* Fixed range of axes indices.
* Small fix.
* Small change.
* Added T_SIZE type.
* Added negative axes support.
* Some fixes.
* Some fixes.
* Written the draft of the specification of the operation IFFT.
* Small fix.
* Renamed the operation IFFT into DIFFT. Deleted attribute.
* Renamed operation to IDFT.
* Deleted int8 type.
* Added examples of 3D input tensors.
* Added formulas and text.
* Fixed ie_docs.xml.
* Fixed the sign in the IFFT formula (the standard DFT/IDFT pair is given for reference at the end of this list).
* Some fixes.
* Added examples with unsorted axes and with (-1) in signal_size.
* Some fixes.
* Small fix.
* Small fixes.
* Added type T_SIZE.
* Deleted redundant sentence.
* Added support for negative axes.
* Some changes.
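For reference, the standard one-dimensional DFT/IDFT pair (the spec's exact notation, normalization, and multi-dimensional generalization may differ):

$$X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i \frac{kn}{N}}, \qquad x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \, e^{+2\pi i \frac{kn}{N}}$$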