* [GPU] Support batch-32 deconvolution with oneDNN
oneDNN rls-v2.6-pc2 supports deconvolution at batch 32,
so the batch size limitation is removed.
Signed-off-by: hyunback <hyunback.kim@intel.com>
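A hypothetical sketch of the kind of gating check this change removes; the real predicate and its fields in the GPU plugin differ, everything below is illustrative only.

```cpp
// Illustrative stand-in for the oneDNN eligibility check in the GPU plugin.
// The point of the change: once oneDNN rls-v2.6-pc2 handles deconvolution at
// batch 32, the batch-size guard can simply be dropped.
#include <cstdint>

struct deconv_info {
    int64_t batch;
    bool has_unsupported_post_ops;  // placeholder for the remaining real constraints
};

bool can_use_onednn_deconv(const deconv_info& d) {
    // if (d.batch > 16) return false;  // former batch-size limitation, now removed
    return !d.has_unsupported_post_ops;
}
```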
* Merge the duplicated oneDNN condition checks in deconvolution.
Signed-off-by: hyunback <hyunback.kim@intel.com>
* Use the is_node_for_onednn function in get_preferred_impl_type
Signed-off-by: hyunback <hyunback.kim@intel.com>
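A minimal sketch of the consolidation pattern; the function names mirror the commit messages, while the types and bodies below are simplified stand-ins for the real GPU plugin code.

```cpp
// Simplified stand-ins: the real node type and checks live in the GPU plugin.
enum class impl_types { ocl, onednn };

struct deconv_node {
    bool onednn_supported;  // placeholder for the actual per-node conditions
};

// One shared predicate instead of the same condition chain duplicated
// across call sites.
bool is_node_for_onednn(const deconv_node& node) {
    return node.onednn_supported;
}

impl_types get_preferred_impl_type(const deconv_node& node) {
    return is_node_for_onednn(node) ? impl_types::onednn : impl_types::ocl;
}
```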
* Drop the _extension suffix from the extension file names
* Add an extension.hpp header that includes all extensions
* codestyle
* fixed perf-counters
* explicit auto-batching params that should guarantee auto-batching is triggered (to avoid a fallback to no batching when the selected batch size is just 1); see the sketch below
* added makeConvPoolReluNoReshapes and used it wherever applicable to guarantee that auto-batching is required (not important for plugin/executable-network config tests, but important for the inference-request tests)
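A minimal sketch of forcing auto-batching explicitly, assuming the OpenVINO 2.0 API; the model path, the batch size of 4, and the exact parameters used in these tests are placeholders.

```cpp
// Compiling on the explicit "BATCH:GPU(4)" virtual device pins the batch size,
// so the test cannot silently fall back to batch 1 (which would be equivalent
// to running without batching at all). "model.xml" is a placeholder path.
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");
    auto compiled = core.compile_model(model, "BATCH:GPU(4)");
    auto request = compiled.create_infer_request();
    request.infer();
    return 0;
}
```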
* moved getDefaultNGraphFunctionForTheDevice to ov_behavior_test_utils.hpp
* fixed version comparison: the extracted commit hashes are used for the comparison
* adjusted the hash length from 7 to 11 to match the current version format from the nightly builds
* corrected the regex and added comparison by the minimal hash length
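A self-contained sketch of the comparison approach described above; the version-string format and the regex are illustrative, not the exact ones used in the test.

```cpp
// Pull the commit hash out of each version string and compare only the first
// min(len_a, len_b) characters, so a 7-character and an 11-character hash of
// the same commit still match.
#include <algorithm>
#include <iostream>
#include <regex>
#include <string>

std::string extract_hash(const std::string& version) {
    // e.g. "2022.1.0-7019-cdb9bec7210-releases/2022/1" -> "cdb9bec7210"
    static const std::regex re(R"(-([0-9a-f]{7,40})-)");
    std::smatch m;
    return std::regex_search(version, m, re) ? m[1].str() : std::string{};
}

bool same_build(const std::string& a, const std::string& b) {
    const std::string ha = extract_hash(a), hb = extract_hash(b);
    if (ha.empty() || hb.empty()) return false;
    const auto n = std::min(ha.size(), hb.size());
    return ha.compare(0, n, hb.substr(0, n)) == 0;
}

int main() {
    std::cout << std::boolalpha
              << same_build("2022.1.0-7019-cdb9bec7210-releases/2022/1",
                            "2022.1.0-7019-cdb9bec-custom")
              << "\n";  // true: the shorter hash is a prefix of the longer one
}
```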
* remove formatTimeMilli from time_utils.cpp
* add a traceCallStacks test case in format_test.cpp
* add the "test" parameter to the TraceCallStacks() call
* catch the exception thrown by checkFormat
* fix the spacing of the try/catch block
* roll back time_utils.cpp, time_utils.hpp and log_utils_format_test.cpp
* modify the test case for log.hpp
* rename the test case from format_s to format_s_d_ld_u_lu2
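A hypothetical sketch of the kind of test case these commits describe; TraceCallStacks, checkFormat, and the format string come from the commit messages, while the stub bodies below exist only so the example compiles on its own.

```cpp
#include <gtest/gtest.h>
#include <cstdio>
#include <stdexcept>
#include <string>

// Stand-in for the real format checker: throws when the string is not a format string.
void checkFormat(const std::string& fmt) {
    if (fmt.find('%') == std::string::npos)
        throw std::runtime_error("not a format string");
}

// Stand-in for the real helper that logs a message together with a call stack.
void TraceCallStacks(const std::string& msg) {
    std::printf("trace: %s\n", msg.c_str());
}

TEST(LogUtilsFormatTest, format_s_d_ld_u_lu2) {
    // Catch checkFormat failures instead of letting the exception abort the run.
    try {
        checkFormat("%s %d %ld %u %lu");
    } catch (const std::exception& e) {
        FAIL() << "unexpected format error: " << e.what();
    }
    EXPECT_NO_THROW(TraceCallStacks("test"));
}
```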
* fix canConvolutionBeTransformed arguments
* fix isAsymmetricOnWeights in the GPU plugin
* added defaultPrecisions to TestTransformationParams
* set new default attribute precisions
* try to set const default precisions in network_helper.cpp
* apply precision_set
* [LPT] Default precisions
* rebase
* remove extra const
* used defaultPrecisions in the tests
* fixed the SimpleLowPrecisionTransformer default argument
* fixed the AttributeParameters default argument
* added defaultPrecisions to the functions
* fix the assign_and_read_value_transformation tests
* fixed a wrong defaultPrecisions definition
* fixed the ConcatWithNeighborsWithConvolutionTransformation tests
* remove getDefaultPrecisions
* rebase
* remove getDefaultPrecisions from the GPU plugin
* remove getDefaultPrecisions from lpt_mkldnn_plugin.cpp
* use the predefined member
* update mkldnn_plugin.cpp & lpt_mkldnn_plugin.cpp
* resolved review conversations
* make all lambda captures by reference
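A generic sketch of the pattern behind these commits: the default precision list becomes an explicit, defaulted parameter backed by a predefined member (playing the role of the precision_set mentioned above) instead of a global getDefaultPrecisions() lookup. The function name mirrors the commits, but the types and body are simplified placeholders, not the real LPT code.

```cpp
#include <iostream>
#include <string>
#include <vector>

using precisions = std::vector<std::string>;

// Predefined member standing in for the LPT precision set.
static const precisions int8_support = {"u8", "i8"};

// Callers that need non-default precisions pass them explicitly; every other
// caller keeps the old behaviour through the defaulted argument.
bool isAsymmetricOnWeights(const std::string& layer,
                           const precisions& defaultPrecisions = int8_support) {
    for (const auto& p : defaultPrecisions)
        if (layer.find(p) != std::string::npos) return true;
    return false;
}

int main() {
    std::cout << isAsymmetricOnWeights("conv_u8") << "\n";           // default set
    std::cout << isAsymmetricOnWeights("conv_f16", {"f16"}) << "\n"; // explicit override
}
```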
* Used new config for streams and threads
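A minimal sketch of the new typed configuration, assuming the OpenVINO 2.0 property API (ov::num_streams and ov::inference_num_threads); the exact properties touched by these commits may differ.

```cpp
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    // Typed properties instead of string key/value config entries.
    core.set_property("CPU", ov::num_streams(4), ov::inference_num_threads(8));

    // Reading the value back yields a typed ov::streams::Num rather than a string.
    ov::streams::Num streams = core.get_property("CPU", ov::num_streams);
    std::cout << "streams: " << streams.num << "\n";
    return 0;
}
```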
* Fixed review comments in ba
* format fix
* fixed hello_query_device
* Added STL string io
* fixed tests
* Fixed test
* Fixed build
* fixed format
* Fixed build
* try to fix the Windows build
* other Any I/O specializations
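A sketch of how the streamable values show up in a sample like hello_query_device, assuming ov::Any::as<std::string>() is backed by the STL string I/O and Any specializations added above; details of the final API may differ.

```cpp
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;
    for (const auto& device : core.get_available_devices()) {
        // Each property value comes back wrapped in ov::Any.
        auto props = core.get_property(device, ov::supported_properties);
        for (const auto& prop : props) {
            ov::Any value = core.get_property(device, prop);
            std::cout << device << " / " << prop << " : "
                      << (value.empty() ? "EMPTY" : value.as<std::string>()) << "\n";
        }
    }
    return 0;
}
```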
* Fixed after merge
* renamed streams
* build fixed
* fixed build
* fixed format
* fix for the old macOS build
* Fixed type of exception
* test fix