* dft with single layer test
* idft with single layer test
* fix output param usage in dft
* update dft according to the clang-format
* move output layout setup to calc_output_layout
* add support for other dimensions
* add clDNN unit test for DFT/IDFT
* remove unnecessary original rank
* use defined formats in kernel
* fix dft docs
* changes after review
* Revert "fix dft docs"
This reverts commit 45b05172dfd161d92dae6d26e0f1b74748e56fd5.
Co-authored-by: Serhii Pavlovskyi <spavlovskyi@lohika.com>
Co-authored-by: Mykhailo Hnap <mhnap@lohika.com>
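For context, the DFT/IDFT primitives added above compute the standard discrete Fourier transform and its inverse along each selected axis; a minimal per-axis statement, assuming the usual 1/N normalization on the inverse (given here as background, not taken from the PR itself):
```
X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}               (DFT)
x_n = \frac{1}{N} \sum_{k=0}^{N-1} X_k \, e^{+2\pi i k n / N}   (IDFT)
```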
With the new networkx release (2.8.1) some of the MO tests started to fail
with the following error:
```
def __setstate__(self, state):
    self._graph = G = state["_graph"]
    self._adjdict = G._pred if hasattr(G, "pred") else G._adj
AttributeError: 'Graph' object has no attribute '_adj'
```
This looks like a regression introduced in f50fc70b8c.
convolution_gpu_yxfb_yxio_b16 for fp16 hardcodes reqd_work_group_size to
(16, 1, 1). On devices where CL_DEVICE_MAX_WORK_GROUP_SIZE is 512,
GetOptimalLocalWorkGroupSizes picks (16, 2, 1) for the LWS. That causes
clEnqueueNDRangeKernel to fail, since the LWS doesn't match the
reqd_work_group_size declared in the kernel.
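As a rough illustration of the failure mode (not clDNN code; the standalone helper and its name are assumptions), the host side can detect the mismatch up front by comparing the chosen LWS against the reqd_work_group_size compiled into the kernel:
```cpp
#include <CL/cl.h>
#include <cstddef>

// Hypothetical helper, not part of clDNN: returns true when the proposed local
// work-group size matches the reqd_work_group_size attribute compiled into the
// kernel. If a kernel declares __attribute__((reqd_work_group_size(16, 1, 1)))
// and the host still enqueues with LWS = {16, 2, 1}, clEnqueueNDRangeKernel
// fails with CL_INVALID_WORK_GROUP_SIZE, which is the failure described above.
bool lws_matches_reqd_size(cl_kernel kernel, cl_device_id device, const size_t lws[3]) {
    size_t reqd[3] = {0, 0, 0};
    cl_int err = clGetKernelWorkGroupInfo(kernel,
                                          device,
                                          CL_KERNEL_COMPILE_WORK_GROUP_SIZE,
                                          sizeof(reqd),
                                          reqd,
                                          nullptr);
    if (err != CL_SUCCESS) {
        return false;
    }
    // All zeros means the kernel declares no reqd_work_group_size,
    // so any legal local size is acceptable.
    if (reqd[0] == 0 && reqd[1] == 0 && reqd[2] == 0) {
        return true;
    }
    return lws[0] == reqd[0] && lws[1] == reqd[1] && lws[2] == reqd[2];
}
```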
* Add single layer tests for GPU
* Add GPU primitive for ExperimentalDetectronGenerateProposalsSingleImage
* Add kernel for ExperimentalDetectronGenerateProposalsSingleImage
* Add unit test
* rename abbreviation edgpsi to the full name experimental_detectron_generate_proposal_single_image
* Add f16 support to operation
* Add f16 support to the unit test
* Add notification about the second output in primitive
Co-authored-by: Oleksii Khovan <okhovan@lohika.com>
* Added shell for Eye-9
* Updated spec for Eye-9
* Added reference for Eye-9
* eye cpu
* Added op impl check for Eye-9
* Fix disallowed dynamic-to-static dim conversion in eye shape_infer
* Add template plugin tests for dynamic shapes
* Add template plugin tests for dynamic shapes batch input
* Enable dynamic rank for the batch shape input
* Uncomment 3D batch cpu Eye tests
* Update assertions and messages
* use ov::element type
* Remove redundant evaluate from eval map
* Style fix
* Add static_cast<T>(1) to cpu eye
* Add defaults to eye cpu class members
* Reuse out_ptr and checks
* Return if onesPerBatchNum == 0
* Add Eye CPU Dynamic shape tests with 2D batch
* Additional test cases for CPU and reference
* Disable 3D batch eye cpu tests
* Fix CPU implementation for matrices whose cols and rows are not equal (see the sketch below)
* Update CPU test name
* Disable CPU Eye 3D batch static shapes tests
Co-authored-by: Alexandra Sidorova <alexandra.sidorova@intel.com>
Co-authored-by: Yury Gaydaychuk <yury.gaydaychuk@intel.com>
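As a reference for the semantics the CPU fix above targets (not the actual OpenVINO kernel; the function name and layout here are illustrative), Eye fills a rows x cols matrix with ones on the diagonal shifted by diagonal_index and zeros elsewhere, which also has to hold when rows != cols:
```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative sketch of Eye semantics: fill a row-major rows x cols matrix
// with ones on the diagonal shifted by k (k > 0 moves the diagonal above the
// main one, k < 0 below), zeros elsewhere. Handles non-square shapes.
template <typename T>
std::vector<T> make_eye(std::int64_t rows, std::int64_t cols, std::int64_t k) {
    std::vector<T> out(static_cast<std::size_t>(rows * cols), static_cast<T>(0));
    for (std::int64_t r = 0; r < rows; ++r) {
        const std::int64_t c = r + k;  // column index of the shifted diagonal
        if (c >= 0 && c < cols) {
            out[static_cast<std::size_t>(r * cols + c)] = static_cast<T>(1);
        }
    }
    return out;
}

// Example: make_eye<float>(3, 4, 1) yields
// 0 1 0 0
// 0 0 1 0
// 0 0 0 1
```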
* Update oneDNN to rls-v2.6
* Support weight tag for oneDNN v2.6
* Fix first conv selection issue in oneDNN
* oneDNN v2.6 requires specific tags to run jit:ir primitives.
* any_tag (format_tag::any) can find optimized primitives in oneDNN (see the sketch after this list).
* Enable aBcd2b src tag for oneDNN v2.6
* Add create_memory_desc from format string.
* Make grouped depthwise-separable conv use jit:ir in oneDNN v2.6
* Use byxf format.
* Use only the acdb format in shallow group conv
* Fix refconv selection in shallow conv with post operations.
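A minimal sketch, outside the GPU plugin, of the two descriptor styles those bullets refer to: an explicit tag such as acdb (i.e. NHWC for a 4-D tensor) versus format_tag::any, which lets oneDNN choose the layout its optimized (e.g. jit:ir) implementation prefers:
```cpp
#include <oneapi/dnnl/dnnl.hpp>

int main() {
    const dnnl::memory::dims shape = {1, 32, 56, 56};  // N, C, H, W

    // Explicit layout: data is stored as acdb / NHWC.
    dnnl::memory::desc explicit_md(shape,
                                   dnnl::memory::data_type::f16,
                                   dnnl::memory::format_tag::acdb);

    // Let the library choose: the primitive descriptor created from this md
    // reports the optimized layout, and the caller reorders into it.
    dnnl::memory::desc any_md(shape,
                              dnnl::memory::data_type::f16,
                              dnnl::memory::format_tag::any);

    (void)explicit_md;
    (void)any_md;
    return 0;
}
```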
* Enable reshape int8
* Fixed quantize fusing through reorder+reshape: fixed the condition to check per_tensor_input_shift only when need_input_shift is true
* minor change
* Allow FP quant to be fused to FC/gemm
* Disable reshape transform for onednn until onednn FC is optimized
* [GPU] Support implicit crop in input transposition.
+ Make the crop in front of quantize implicit by changing output format to bfyx.
+ Use implicit concat after quantize nodes.
* Add unit test for implicit crop and concat.
+ Remove unnecessary code.
+ Modified jitter Load for planar input of fused eltwise
+ Bugfix in jitter if planar input has LT_ALIGNED_READ
Signed-off-by: Min, Byungil <byungil.min@intel.com>
Update the branch to be used for 2022.1 and remove the reference to the
-staticdev package, which isn't generated anymore.
Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
2. Adds # to links that are broken in openvino_docs_get_started_get_started_demos.htm
Signed-off-by: intelkevinputnam <intelkevinputnam@github.com>
Co-authored-by: intelkevinputnam <intelkevinputnam@github.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* roi_align_9: ov_core, transformations, template_plugin
* roi_align_9: CPU Plugin
* keep only the constructor with enums, which is aligned with the spec (see the sketch after this list)
* remove evaluate function for ROIAlign_9
* Add op check test for operation ROIAlign-9
* Apply suggestions from code review
* fix version name from 'v0' to 'v3' in transform part
* use common shape_infer function for v3 and v9
* remove the 'tf_' prefix from ROIAlign::AlignedMode to avoid misleading users with models from different platforms
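A hedged sketch of constructing the new ROIAlign-9 node with the enum-only constructor kept above; the exact parameter order is assumed from the opset-9 spec and may differ from the actual header:
```cpp
#include <memory>

#include <openvino/opsets/opset9.hpp>

// Assumed constructor order: pooled_h, pooled_w, sampling_ratio, spatial_scale,
// pooling mode, aligned mode. The shapes below are arbitrary examples.
std::shared_ptr<ov::opset9::ROIAlign> make_roi_align_example() {
    auto input = std::make_shared<ov::opset9::Parameter>(ov::element::f32, ov::Shape{2, 3, 32, 32});
    auto rois = std::make_shared<ov::opset9::Parameter>(ov::element::f32, ov::Shape{4, 4});
    auto batch_indices = std::make_shared<ov::opset9::Parameter>(ov::element::i32, ov::Shape{4});

    return std::make_shared<ov::opset9::ROIAlign>(input,
                                                  rois,
                                                  batch_indices,
                                                  /*pooled_h=*/7,
                                                  /*pooled_w=*/7,
                                                  /*sampling_ratio=*/2,
                                                  /*spatial_scale=*/1.0f,
                                                  ov::opset9::ROIAlign::PoolingMode::AVG,
                                                  ov::opset9::ROIAlign::AlignedMode::HALF_PIXEL_FOR_NN);
}
```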
* Update Convert_Model_From_TensorFlow.md (#11425)
* Apply suggestions by Yuan
The changes are made in the port PR, so they will be published with the 22.2 version.
Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
* Docs: Add links to specific examples (#11618)
* Update docs/OV_Runtime_UG/integrate_with_your_application.md
* Add links to specific examples
This edit adds links to more example applications, making it easier for users to discover how to build an OpenVINO application around their specific model.
* Add links to MO installation and ONNX examples (#11617)
These edits help make it easier for a new user to find more information on how to convert ONNX models.
* Apply suggestions by Yuan
The changes are made in the port PR, so they will be published with the 22.2 version.
Co-authored-by: Evan <evan.juras@gmail.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>