* Enable TransposeSinking in MOC
* replace TransposeSinking in TF Frontend
* fix TS for concat op
* Fix TS for Binary/Concat ops: broadcast transposed input
* Fix transpose sinking for Pad op
* fix pad tests
* fix dynamic case for Concat/Binary ops
* codestyle
* fix TransposeSinking for Split/Concat ops
* fix split
* Resolve review comments
* enable new method to generate CPU information and CPU map
* fix code style issue
* fix initialization issue of variable-sized object
* fix dependency issue
* add sample of CPU map
* add description and sample for CPU map
* fix code style issue
* fix code style issue
* add comments on using second processor as physical core
* enable new method to generate CPU information and CPU map on windows
* remove debug output
* add description for CPU map table
* remove changes for linux
* update description for better understanding
* update CPU mapping table on Windows
* fix precision issue of log2()
* fix memory leak
* use shared_ptr to manage memory life cycle
* Wrap parser for Windows into a separate function for mock testing later
* Revert "Wrap parser for Windows into a separate function for mock testing later"
This reverts commit 614ad718c2.
* add core type table for each socket on windows
* separate CPU map parser on Windows for validation
* fix core type table definition
* fix DWORD issue in header file
* update parser interface for validation
* fix socket count
* update processor count for XEON
* add description and example for processor type table
* remove conflicts
* fix merge conflicts
* fix document issue
* oneDNN only supports 2D/3D GEMM, but the OpenVINO GPU plugin policy enforces 4D~6D.
This API mismatch causes problems with the post-op axis and would require massive code changes.
Therefore we decided to insert code that throws for now and fix this issue later
if some models require post-ops other than per-tensor/full-tensor ones.
* Specifically, the per-channel (=f) axis in this test case becomes the y-axis
because oneDNN GEMM merges the b and f axes into one batch axis.
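To make the axis remapping concrete, here is a minimal, purely illustrative C++ sketch (not the plugin code; shapes and names are assumed) of how a 4D plugin shape [b, f, y, x] collapses into a 3D GEMM view once b and f are merged, so an axis index that meant f in 4D ends up addressing y in 3D:

```cpp
// Illustrative only: 4D plugin shape [b, f, y, x] viewed as a 3D GEMM shape
// after merging b and f into one batch axis. Axis index 1, which meant the
// per-channel f axis in 4D, now points at the y axis of the 3D view.
#include <array>
#include <cstdint>
#include <iostream>

int main() {
    const std::array<int64_t, 4> plugin_shape{2, 8, 16, 32};  // b, f, y, x (assumed sizes)
    const std::array<int64_t, 3> gemm_shape{
        plugin_shape[0] * plugin_shape[1],  // batch = b * f
        plugin_shape[2],                    // M = y
        plugin_shape[3]};                   // N = x
    std::cout << "4D axis 1 = f (size " << plugin_shape[1] << "), "
              << "3D axis 1 = y (size " << gemm_shape[1] << ")\n";
    return 0;
}
```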
* update linux CPU map parser and add unit test
* add one more test data
* fix clang issue
* update test case by using TEST_P
* fix code style issue
* add one more test data with hyper threading off
* remove duplicated test data
* fix issue for Windows build
* fix issue for Windows build
* add description for test data
* add core type table for each socket
* fix code style issue
* fix code style issue
* remove redundant content
* remove parse_processor_info_linux() from INFERENCE_ENGINE_API_CPP
* fix code style issue
* update example of core type table
* fix code style issue
* [TF FE] Support TensorList operations and RNN layers
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* Remove TensorList operations from the fallback
* Fix computation of dummy tensor size
---------
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
* serialization of read_value and assign primitives
* lines should be <= 160 characters long
* added unit tests for read_value and assign
* updated to store is_output_event in primitive_inst
* removing _is_output_event in typed_primitive_impl_ocl
* added comments for mem_allocated and is_output_null
* [C API] remote tensor support
Provide C interface for remote context and remote tensor:
1. OCL and VA context and buffer support
2. unit tests for remote context and remote tensor
Change-Id: I2c449aef21cbe928ca470b4e3bcf1e03a1d1ca43
* Fix clang issue
Change-Id: I83c9592d21ff9cb8aeb85148277d96db74b455c7
* [CAPI] Add ocl nv12 input inference test case
1. Add full NV12 input (2 OCL remote tensors) with preprocessing doing CSC + resize, then run inference
2. Add get_device_name for remote tensor
3. Add test case for preprocess to set mem type
Change-Id: Ieaab50c8de20e5c7258697030672e0b010627a81
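For context, a minimal C++ sketch of the equivalent NV12 two-plane preprocessing setup (color conversion plus resize); this uses the public PrePostProcessor API rather than the new C API from this change, and the model path and input sizes are placeholders:

```cpp
#include <openvino/openvino.hpp>
#include <openvino/core/preprocess/pre_post_process.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder model path

    ov::preprocess::PrePostProcessor ppp(model);
    ppp.input().tensor()
        .set_element_type(ov::element::u8)
        .set_color_format(ov::preprocess::ColorFormat::NV12_TWO_PLANES, {"y", "uv"})
        .set_spatial_static_shape(720, 1280);  // assumed input height/width
    ppp.input().preprocess()
        .convert_color(ov::preprocess::ColorFormat::BGR)          // CSC
        .resize(ov::preprocess::ResizeAlgorithm::RESIZE_LINEAR);  // resize to model size
    ppp.input().model().set_layout("NCHW");
    model = ppp.build();

    auto compiled = core.compile_model(model, "GPU");
    auto request = compiled.create_infer_request();
    // The two NV12 planes (Y and UV) are then set as the two inputs,
    // e.g. as GPU remote tensors created from OCL images, before infer().
    return 0;
}
```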
* Update documentation
Change-Id: Ia7dbaea48d38f5534aba60fbb25cd0a1f2f9eab0
* Remove debug code
Change-Id: Ic5c5a24d3c40bb258b7007dcea44594af2d92344
* Fix issues brought by rebase
Change-Id: I2520c5ccf3620349e202ea40c08bb1c437d5af88
* Resolve document issue
Change-Id: Ia14500f8623147f481dda286a0afaa8ecfffa7c9
* Resolve some comments
1. Add specific header file for gpu plugin
2. clang-format issue
3. interface compatibility issue
Change-Id: Icc4723af071af30f0422ac9a107e57ddeec94aac
* fix clang issue
Change-Id: I46e1fed3dd9a4e51260b695dc3fb194b9571ed58
* Add gpu header file directory
Change-Id: I8c15d9da58a46c070dcc68530cb2beea8cd4bba9
* Remove HAVE_OCL_SUPPORT macro
Change-Id: I10093a99c1858649f1c5502248729704fcec34ef
* Address some comments
Change-Id: I72830288d063623641e8946c8470631e81fdeb34
* Print ov::AnyMap with the help of ov::Any
Change-Id: I8abd3a8d94ba8116974c59a489cda2af15f225d7
* 1. Correct the device list to be ordered by priority from high to low.
2. Remove GNA, CUDA, HPU, HDDL, NVIDIA from the device list supported by AUTO/MULTI.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Filter out supported devices when the candidate devices are not specified for the AUTO plugin.
* Add Debug MSG
* Update.
* Update AUTO mock test cases.
* Update.
* Update.
* Update code style.
---------
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>