Implemented three operations: EmbeddingBagPackedSum,
EmbeddingBagOffsetsSum and EmbeddingSegmentsSum. These operations
perform the same computation but accept their inputs in different formats.
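For orientation, a minimal sketch of the shared semantics using the offsets variant (simplified: no default_index or per-sample weights; names are illustrative, not the plugin code). The packed variant instead takes indices shaped [num_bags, bag_size]; the segments variant takes flat indices plus a segment_ids tensor of the same length.

    #include <cstddef>
    #include <vector>

    // EmbeddingBagOffsetsSum semantics, simplified: bag i sums the emb_table
    // rows selected by indices[offsets[i] .. offsets[i+1]).
    std::vector<std::vector<float>> embedding_bag_offsets_sum(
            const std::vector<std::vector<float>>& emb_table,
            const std::vector<std::size_t>& indices,
            const std::vector<std::size_t>& offsets) {
        const std::size_t dim = emb_table.front().size();
        std::vector<std::vector<float>> out(offsets.size(),
                                            std::vector<float>(dim, 0.f));
        for (std::size_t bag = 0; bag < offsets.size(); ++bag) {
            const std::size_t end =
                (bag + 1 < offsets.size()) ? offsets[bag + 1] : indices.size();
            for (std::size_t j = offsets[bag]; j < end; ++j)
                for (std::size_t d = 0; d < dim; ++d)
                    out[bag][d] += emb_table[indices[j]][d];  // gather + accumulate
        }
        return out;
    }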
- change repo name to openvino
- update driver version
- fix path to samples data
- remove section about Movidius driver installation
- change latest release to 2020.3
- merge fixes in install_dependencies.sh from 2020 branch
adds fusing support to all available pooling kernels
tests all possible input type/output type configurations
fixes minor bug in max pooling in pooling_gpu_test.cpp
fixes minor bug with yxbf format in pooling_gpu_ref and pooling_gpu_int8_ref kernels
fixes bug with b_fs_yx_fsv32 format in pooling_gpu kernel
resolves bug with max pooling accuracy mismatch in case of non-zero pad-end layer parameter
resolves average pooling accuracy mismatch in case of non-zero pad-end layer parameter
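One common source of such pad-related mismatches, sketched here as a hedged illustration (the actual kernel fix may differ): when a window overlaps the pad-end region, an average pool must divide by the number of real elements it covered, not by the full kernel size.

    #include <vector>

    // 1D average pooling sketch with explicit padding handling. Padded
    // positions add nothing to the sum and are excluded from the divisor.
    std::vector<float> avg_pool_1d(const std::vector<float>& in, int kernel,
                                   int stride, int pad_begin, int pad_end) {
        std::vector<float> out;
        const int size = static_cast<int>(in.size());
        for (int start = -pad_begin; start + kernel <= size + pad_end;
             start += stride) {
            float sum = 0.f;
            int count = 0;  // real (non-pad) elements in this window
            for (int k = 0; k < kernel; ++k) {
                const int idx = start + k;
                if (idx >= 0 && idx < size) { sum += in[idx]; ++count; }
            }
            out.push_back(count > 0 ? sum / count : 0.f);
        }
        return out;
    }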
The problem behind this error was in program_impl::init_graph(), where calculate_prior_boxes tried to calculate the output layout of the entire network recursively, causing a stack overflow. Calculating output layouts beforehand in processing order fixes this issue.
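A minimal sketch of the shape of that fix (types are illustrative, not the clDNN sources): with nodes visited in processing order, every producer's layout is already known, so no recursion is needed.

    #include <vector>

    struct Node {
        std::vector<Node*> dependencies;  // producers; earlier in the order
        bool layout_computed = false;     // stands in for a cached layout
    };

    // Processing order is topological, so one forward pass replaces the
    // producer-chasing recursion that overflowed the stack on deep networks.
    void calc_output_layouts(const std::vector<Node*>& processing_order) {
        for (Node* n : processing_order) {
            // every n->dependencies[i]->layout_computed is already true here
            n->layout_computed = true;
        }
    }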
fix the following compile error:
inference-engine/src/mkldnn_plugin/mkldnn_memory_solver.hpp:60:9: error: 'int64_t' does not name a type
   60 |         int64_t size;
      |         ^~~~~~~
include stdint.h to fix this.
Signed-off-by: Liwei Song <liwei.song@windriver.com>
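The change amounts to making the header self-sufficient instead of relying on a transitive include; a reduced illustration (struct and field names here are an illustrative reduction, not the full header):

    #include <stdint.h>  // the added include; <cstdint> is the C++ spelling

    // Without the include above, some toolchains never see the int64_t
    // typedef and reject the member on line 60.
    struct Box {
        int64_t size;
    };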
* Create generic RecurrentSequenceDirection enum.
* Helper class RecurrentSequenceOp.
* Add ONNX GRU & RNN operators.
* Use OutputVector.
* Update doc.
* Add UTs for GRU and skip them on IE_CPU
* Add UT for bidirectional mode and fix it.
* Normalize activation function name case.
* Add unit-tests for RNN operator.
* UT for GRU with linear_before_reset set to true.
* Fix ONNX GRU for linear_before_reset case (see the sketch after this list).
* Remove unnecessary symbol export macro.
* Fix CentOS error.
* Update UTs.
- Update accuracy tolerance for a few tests
- Update rnn_fwd_activations with new reference values and model.
* Review comment: add check for static shape
* Add UT for RNN with constant inputs W, R.
* Skip UT with const W,R on IE_CPU
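On the linear_before_reset fix above: per the ONNX spec, the attribute moves the recurrent bias (and, at tensor rank, the recurrent matmul) relative to the reset gate when forming the candidate hidden state. A scalar sketch of the two modes (variable names are illustrative):

    #include <cmath>

    // x_part = Wh*x + Wbh, rh = Rh*h_prev, r = reset gate, Rbh = recurrent
    // bias. At tensor rank the default mode gates h_prev *before* the
    // recurrent matmul; the scalar view reduces that to bias placement.
    float gru_candidate(float x_part, float rh, float r, float Rbh,
                        bool linear_before_reset) {
        if (linear_before_reset)
            return std::tanh(x_part + r * (rh + Rbh));  // f(Wx + r.*(Rh*h + Rbh))
        return std::tanh(x_part + r * rh + Rbh);        // f(Wx + Rh*(r.*h) + Rbh)
    }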
* [IE][VPU]: Enables pass for propagating dynamism to network outputs
If a network had a dynamic output and the Myriad front-end then inserted
a Convert stage at the end (to convert FP16 -> FP32, the output precision),
dynamism was not propagated: the Convert stage ended up with a dynamic
input but a static output. As a result, we hit a run-time error in the
Convert kernel: input and output shapes do not match.
At the moment the pass supports only the Convert stage as the output stage
over which dynamism is propagated to the outputs.
Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
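An illustrative model of what the pass does (data structures are hypothetical, not the Myriad plugin's): an output stage whose input is dynamic but whose output is not gets a shape object attached to its output as well.

    #include <memory>
    #include <string>

    struct Data {
        std::string name;
        std::shared_ptr<Data> shape;  // non-null => the data object is dynamic
    };

    struct Stage {
        std::string type;
        std::shared_ptr<Data> input, output;
    };

    // Hypothetical pass body: only Convert output stages are handled for now.
    void propagateDynamismToOutput(Stage& stage) {
        if (stage.type != "Convert") return;
        if (stage.input->shape && !stage.output->shape) {
            auto shape = std::make_shared<Data>();
            shape->name = stage.output->name + "@shape";  // naming contract (below)
            stage.output->shape = shape;
        }
    }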
* [IE][VPU]: Fixes parse DSR in case of output data
Replacing the stage output must be done after re-parenting the data
object to the shape's parent, because the latter step may access the
original parent's producer, and once the stage output has been replaced
the parent no longer has one.
Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
* [IE][VPU]: Fixes MacOS build
* [IE][VPU]: Fixes shape data naming convention
The plugin part assumes that if there is a dynamic data object
represented as two different data objects (data and shape), then the
shape data object's name = data object name + "@shape" suffix.
A pass that creates a new dynamic data object should respect that
assumption.
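Stated as code (helper name is hypothetical):

    #include <string>

    // Both the pass that creates a shape object and the plugin code that
    // later pairs it with its data object must derive the name this way.
    std::string shapeNameOf(const std::string& dataName) {
        return dataName + "@shape";
    }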
* [IE][VPU]: Fixes misalignment in names of data objects representing a dynamic data object
MyriadInferRequest::GetResult assumes that in case of a dynamic data object
the "data" data object and the "shape" data object have aligned names:
"shape" name = "data" name + "@shape" suffix.
To meet that expectation, the dynamism-propagating pass must use the output
data object name as the prefix. Additionally, the propagating pass must be
applied before the shape-notation-converting pass so that the output shape
is in IE notation, not MDK, as MyriadInferRequest::GetResult expects.
Signed-off-by: Gladilov, Gleb <gleb.gladilov@intel.com>
* Update activation layer test
Signed-off-by: Mikhail Treskin <mikhail.treskin@intel.com>
* Get rid of LayerTestsCommonDeprecated class
Signed-off-by: Mikhail Treskin <mikhail.treskin@intel.com>
* Fix activation tests instantiations for gpu and myriad plugins
* Remove leaking inferWithInterp function
The WhereDecomposition transform is applied to the Where operation in the garbage sub-graph left over after the SparseWeightedSum transform.
Signed-off-by: Roman Kazantsev <roman.kazantsev@intel.com>