* rename inference_engine to OpenVINO
* correct exception for batch
* check all inputs to find batch dimension before throwing exception
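The "check all inputs first" idea above can be sketched as follows. This is a hypothetical helper, not the benchmark_app API: layouts are represented as plain strings, and the batch dimension is the `'N'` position.

```python
def find_batch_dim(input_layouts):
    """Scan every input's layout for a batch ('N') dimension.

    Returns (input_name, index_of_N) for the first input that has one,
    or None if no input has a batch dimension.
    """
    for name, layout in input_layouts.items():
        if "N" in layout:
            return name, layout.index("N")
    return None


def get_batch_dim(input_layouts):
    # Only after checking *all* inputs do we give up and raise.
    found = find_batch_dim(input_layouts)
    if found is None:
        raise RuntimeError("Cannot set batch size: no input has an 'N' dimension")
    return found
```

The point of the fix is the ordering: the exception is thrown only once every input has been examined, not on the first input that lacks a batch dimension.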
* correct warning about batch
* avoid set_shape in static case
* refactoring latency output
* message about benchmarking mode
* use new precision naming
* use pass manager instead of offline_transformations
* Move 'NV12toRGB/BGR' reference evaluates to template plugin
CPU doesn't need this fallback, so the implementation can be moved to reduce the core binary size
* Moved evaluate_nv12 to 'runtime::reference'
* Fix arm build
* ShutdownProtobufLibrary when unloading the paddle frontend dynamic library to fix a protobuf memory leak
* ShutdownProtobufLibrary if the frontend libraries use protobuf
* make shutdown_protobuf a library
* Set THROUGHPUT as the default configuration for all plugins and display the plugin's config.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
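The default-config behavior described above can be sketched like this. The dictionary shape and the `effective_config` name are hypothetical illustrations, not the benchmark_app implementation; `CONFIG_VALUE(THROUGHPUT)` from the later commits simply expands to the string `"THROUGHPUT"`.

```python
CONFIG_VALUE_THROUGHPUT = "THROUGHPUT"  # stands in for CONFIG_VALUE(THROUGHPUT)

def effective_config(user_config=None):
    # THROUGHPUT is the default for every plugin unless the user overrides it.
    config = {"PERFORMANCE_HINT": CONFIG_VALUE_THROUGHPUT}
    if user_config:
        config.update(user_config)
    # Display the effective plugin config, as the commit describes.
    for key, value in sorted(config.items()):
        print(f"{key}: {value}")
    return config
```

A user-supplied `{"PERFORMANCE_HINT": "LATENCY"}` would override the default, while an empty config keeps THROUGHPUT.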
* updated format.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Update benchmark python API.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Replace str 'THROUGHPUT' with CONFIG_VALUE(THROUGHPUT).
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Use CONFIG_VALUE(THROUGHPUT) to replace the 'THROUGHPUT' string.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* update code style.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Move the setting output code into the try block.
Signed-off-by: Wang, Yang <yang4.wang@intel.com>
* Calculate model layout based on 'tensor' layout and convert steps
Previously, 'model layout' was set to '...' by default,
so no shape conversion happened when the tensor layout was set to 'NHWC'; an explicit convert_layout "NCHW" step was then required.
Now "model layout" is calculated based on the tensor layout and the conversion steps:
Examples:
1) Tensor: NHWC, Convert: NCHW. Result: NCHW
2) Tensor: NHWC, Convert: 0312. Result: NCHW
* Fix for set_shape + resize case
* Implement the batch to space shape infer
* Implement the space_to_batch shape inference.
* Implement shape infer of space_to_depth and depth_to_space OPs
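The shape arithmetic behind the space_to_depth / depth_to_space inference can be sketched as below. This is only the static-shape math for an NC{spatial} layout, not the actual OpenVINO shape_infer helpers; a block of size `b` trades each spatial dimension by a factor of `b` against channels by `b^k`.

```python
def space_to_depth_shape(shape, block_size):
    # shape = [N, C, D1, ..., Dk]; every spatial dim must divide evenly.
    n, c, *spatial = shape
    k = len(spatial)
    assert all(d % block_size == 0 for d in spatial), "spatial dims must be divisible by block_size"
    return [n, c * block_size ** k] + [d // block_size for d in spatial]

def depth_to_space_shape(shape, block_size):
    # Inverse transform: channels shrink by block_size^k, spatial dims grow.
    n, c, *spatial = shape
    k = len(spatial)
    assert c % block_size ** k == 0, "channels must be divisible by block_size^k"
    return [n, c // block_size ** k] + [d * block_size for d in spatial]
```

For example, a [1, 3, 4, 4] input with block size 2 becomes [1, 12, 2, 2] under space_to_depth, and depth_to_space maps it back.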
* Fix Azure build issue.
* Add namespace for the shape_infer function.
* Avoid using friend declaration for shape infer.
* Fix coding style issue
* Update based on review comments
* Apply review comments
* Add test cases.
* Update the shape infer flow.
* Fix the bug in the previous test case.
* Update coding style.
* Fix the code bug caused by the DepthToSpace check fix.
* update coding style.
* Implement the Dimension/StaticDimension division operator by a value
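Division of an interval dimension by a scalar can be sketched as follows. This assumes a dimension modeled as a (min, max) pair with -1 meaning "unbounded", which is an illustration rather than the exact ov::Dimension semantics; note the later commit that negative divisors are not supported.

```python
def dim_divide(lo, hi, divisor):
    """Divide a dimension interval [lo, hi] by a positive integer divisor.

    hi == -1 represents an unbounded upper limit.
    """
    if divisor <= 0:
        raise ValueError("negative (or zero) divisors are not supported")
    new_lo = (lo + divisor - 1) // divisor          # ceil for the lower bound
    new_hi = hi // divisor if hi != -1 else -1      # floor for the upper bound
    return new_lo, new_hi
```

For example, dividing the interval [3, 10] by 2 yields [2, 5], and an unbounded upper limit stays unbounded.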
* Refine the code.
* Fix the issue where T implicitly constructs StaticShape from PartialShape when comparing
* Fix the CI issue.
* Move the shape_infer helper into src folder.
* Apply the review comments.
* Coding style fix.
* Remove the ngraph folder
* Applied review comments
* Fix CI Windows build issue
* Move test into new folder.
* Do not support negative divisors.
* Apply review comments.
* Fix CI issues
* Apply review comments.
* Update
Co-authored-by: Evgenya Stepyreva <evgenya.stepyreva@intel.com>
* Q/DQ + multichannel support
backup
fix interval
mfk_function.cpp
WIP moveDequantizationBefore
add moveDequantizationBefore function
add cpu and gpu tests
attribute cmp false
attribute cmp false
rm temp line
mkl-dnn update
concat with multichannels for mOve_fake_quantize_function, bad runtime info for Q/DQ
rm extra qualification
fix runtime info for Q/DQ
add support of multichannel FakeQuantize, bad test for it
working tests for multichannel FQ
rm workaround
cpplint fix
cpplint fix
Variadic split doesn't work
ieFuncTest work
cpuFuncTest work
Fix benchmark_app build (#7577)
[GPU] Added onednn dependency. (#6564)
cpp lint
cpplint
fix get_shape
fix fq constants
cpp lint
some fix in mfk.cpp
resolve review conversations, add spil_nodes function
add new tests for multichannels, rename NetworkHelper::split_consts_before_concat()
fix get fq constants
* add new multichannel test and use constant_fold to split the constant
* remove extra spaces
fix namespace termination
fix namespace termination
* Updated requirements for MO and POT with telemetry.
* Added mock telemetry common class for unit tests.
* Used mock telemetry in preprocessing unit tests.
* Small correction.
* Fix in the transformation PreserveRuntimeInfo: now Transpose is inserted only before input port 0 of Result, not after the data node of the layer preceding the Result layer.
* Deleted commented code.
* Added more tests for the MO transformation PreserveRuntimeInfo.
* Use fp16-int8 mixed precision, instead of fp32-int8 mixed precision for onednn
* Allow quantization fusion into bsv32_fsv16 conv
* For conv, do not select bsv16_fsv16. Select bsv32_fsv16 for mixed-layout
* depthwise conv is supported even though it is not fp16
* Allow resample kernel to work as cross-layout
* test case for cross-layout of resample_opt kernel
* Select onednn-friendly format from cldnn conv
* Optimization for fp16 mixed precision
* Choose mixed layout in case of mixed precision from reorder_inputs
* Support for mixed precision from depth_to_space
* Do not convert first conv format
* Use onednn for FC output of fp16
* Choose bsv8_fsv4 from quantization even when conv kernel size is not 7
* Select cldnn for first conv when input feature depth is 1
* For first conv, use onednn only when kernel size is 7x7
* Use short variable name and added is_i8_u8 helper function
Co-authored-by: Kim,SungEun <sungeun.kim@intel.com>
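The first-conv implementation choice and the `is_i8_u8` helper mentioned above can be consolidated into a sketch. The rule ordering (feature depth 1 wins over the 7x7 kernel check) is an assumption, and the function names are hypothetical, not the actual GPU plugin code:

```python
def is_i8_u8(dtype):
    # Helper named in the notes above: true for 8-bit integer precisions.
    return dtype in ("i8", "u8")

def first_conv_impl(input_feature_depth, kernel_size):
    """Pick the implementation for the first convolution.

    Assumed rule order: cldnn when input feature depth is 1,
    onednn only when the kernel is 7x7, otherwise cldnn.
    """
    if input_feature_depth == 1:
        return "cldnn"
    if kernel_size == (7, 7):
        return "onednn"
    return "cldnn"
```

Under these assumptions, a 3-channel 7x7 first conv goes to onednn, while a 1-channel input or a smaller kernel stays on cldnn.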
* [LPT] Documentation
* 1) ToC was removed 2) SVG => PNG temporary conversion
* [LPT] Refactoring + developer guide
* [LPT] attribute doxygen documentation was added
* [LPT] Developer Guide to Reference API links were added
* [LPT] comments fixes
* [LPT] Reference API to Developer Guide links were added
* [LPT] titles were changed
* [LPT] comments fixes#2
* [LPT] root document was moved to Plugin DG
* [LPT] Documentation: image link quick fix
* [LPT] Documentation: PrecisionsAttribute description quick fix
* fix comments from Karol
* fixes
* movement
* directive was added
* movement #2
* LPT reference in Executable Network rollback
* snippets were updated in accordance with the new API
* Handle names collisions for old IR with new API
* Fixed load model
* Try to fix tests
* Try to fix tests
* Try to fix build
* Try to fix tests
* Fixed tests
* Revert "Fixed tests"
This reverts commit 35da307210.
* Refactoring
* Fixed functional test
* Try to fix CPU tests
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>