* don't check dynamic shape when there is only one device
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* remove redundant if
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* modify docs/_static/images/dataset.png and docs/_static/images/inputs.png
* add new hint cumulative_throughput
* clang format properties.hpp
* add set and get properties test cases for CUMULATIVE_THROUGHPUT
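A minimal sketch of how the new hint is intended to be used, assuming the public `ov::hint::performance_mode` property and an AUTO-style multi-device target; the model path is a placeholder:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder model path

    // Compile with the new hint: the device can then schedule inference
    // requests across all selected devices to maximize aggregate throughput.
    auto compiled = core.compile_model(
        model,
        "AUTO",
        ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));

    // The hint can be read back, which is what the set/get properties tests cover.
    auto mode = compiled.get_property(ov::hint::performance_mode);
    (void)mode;
    return 0;
}
```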
* reset docs/_static/images/dataset.png and docs/_static/images/inputs.png
* reset dataset.png and inputs.png
* remove the test value cumulative_throughput from the GPU plugin and CPU plugin test cases
* roll back dataset.png and inputs.png to 41818a377
* add fps log
add the '%lf' format to the log output
add INFO_RUN and DEBUG_RUN so the wrapped code runs only when the configured log level is high enough
add an fps log per device
print device config info under DEBUG_RUN
add mock tests for DEBUG_RUN and INFO_RUN
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
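The macro names come from the commit message above, but the definitions below are only a hedged sketch of the idea (log-level-gated execution, so expensive logging costs nothing when disabled); the log-level enum and the `g_log_level` variable are hypothetical:

```cpp
#include <cstdio>

// Hypothetical log levels; the real plugin defines its own enum and config.
enum LogLevel { LOG_NONE = 0, LOG_INFO = 2, LOG_DEBUG = 4 };
static LogLevel g_log_level = LOG_INFO;

// Run the wrapped code only when the configured level is high enough,
// so building debug output (e.g. a device config dump) is skipped otherwise.
#define INFO_RUN(block)  do { if (g_log_level >= LOG_INFO)  { block; } } while (0)
#define DEBUG_RUN(block) do { if (g_log_level >= LOG_DEBUG) { block; } } while (0)

int main() {
    double fps = 123.4;
    INFO_RUN({ std::printf("fps: %lf\n", fps); });           // printed at INFO and above
    DEBUG_RUN({ std::printf("device config dump...\n"); });  // skipped at INFO level
    return 0;
}
```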
* use n / (end - start) instead of (n - 1) / ((nth start) - (1st start))
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
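The old formula measured only the spans between iteration starts and dropped the duration of the last iteration; counting all n iterations over the full start-to-end interval avoids that. Expressed as a small illustrative helper (names are not from the actual code):

```cpp
#include <chrono>

// FPS over the whole run: n iterations divided by (end - start),
// instead of (n - 1) / ((nth start) - (1st start)).
double fps(std::size_t n,
           std::chrono::steady_clock::time_point start,
           std::chrono::steady_clock::time_point end) {
    const std::chrono::duration<double> elapsed = end - start;
    return static_cast<double>(n) / elapsed.count();
}
```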
* Mark `get_type_info_static()` as hidden
Each plugin linked with the openvino library contains `type_info_static` symbols. If one of these libraries is unloaded and the app then tries to get the opset, it leads to a segfault. So mark `get_type_info_static()` as hidden, so that only the single implementation from the openvino library is used
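A hedged illustration of the mechanism, not the actual OpenVINO macros: with GCC/Clang, hidden visibility keeps the symbol out of the shared object's exported dynamic symbol table, so other modules cannot bind to a copy that disappears when that library is unloaded.

```cpp
// Illustrative only; OpenVINO uses its own visibility/export macros.
#if defined(__GNUC__) || defined(__clang__)
#    define HIDDEN_SYMBOL __attribute__((visibility("hidden")))
#else
#    define HIDDEN_SYMBOL
#endif

class MyOp {
public:
    // Hidden: the symbol is not exported from the shared object, so each
    // binary resolves the accessor to the definition it was built with.
    HIDDEN_SYMBOL static const void* get_type_info_static() {
        static const int type_info = 0;  // stand-in for the real type info object
        return &type_info;
    }
};
```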
* Fix "'visibility' attribute ignored" issue by moving `TestPass` out of test scope
* Fix clang format
* Small update of `If` op
* Revert "fix 79520 (#10449)" to correctly compare DiscreteTypeInfo via `==`
This reverts commit 29883a152a.
The change fixes FQ fusions for subgraphs like 'Const weights'->FQ->Transpose->Multiply.
After the PullTransposeThroughFQUp transformation, we end up with the following:
'Const weights'->Transpose->FQ->Multiply. Because of the Transpose on the first
FakeQuantize input, Multiply could not be fused, since FakeQuantizeMulFusion
expects the weights to be a Constant node.
Ticket: 77785
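For reference, a minimal sketch of the subgraph described above, built with opset8; shapes and quantization parameters are arbitrary placeholders:

```cpp
#include <openvino/openvino.hpp>
#include <openvino/opsets/opset8.hpp>

using namespace ov;
using namespace ov::opset8;

// 'Const weights' -> Transpose -> FQ -> Multiply, i.e. the shape the graph takes
// after PullTransposeThroughFQUp has moved the Transpose above FakeQuantize.
std::shared_ptr<Model> make_subgraph() {
    auto weights = Constant::create(element::f32, Shape{2, 3}, std::vector<float>(6, 1.0f));
    auto order = Constant::create(element::i64, Shape{2}, {1, 0});
    auto transpose = std::make_shared<Transpose>(weights, order);

    auto in_lo = Constant::create(element::f32, Shape{}, {0.0f});
    auto in_hi = Constant::create(element::f32, Shape{}, {10.0f});
    auto out_lo = Constant::create(element::f32, Shape{}, {0.0f});
    auto out_hi = Constant::create(element::f32, Shape{}, {10.0f});
    auto fq = std::make_shared<FakeQuantize>(transpose, in_lo, in_hi, out_lo, out_hi, 256);

    auto scale = Constant::create(element::f32, Shape{}, {0.5f});
    auto mul = std::make_shared<Multiply>(fq, scale);
    return std::make_shared<Model>(OutputVector{mul}, ParameterVector{});
}
```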
* Performance improvement for constant creation
The issue is that 'are_all_data_elements_bitwise_identical()' is called every time in the Constant constructor, and it potentially scans the entire buffer, which is O(N).
It is only needed, however, when a client calls 'get_all_data_elements_bitwise_identical'.
Solution:
- Defer the calculation until the first call of 'get_all_data_elements_bitwise_identical'
- Store the calculated value in a mutable class member and reuse it on subsequent calls of 'get_all_data_elements_bitwise_identical'
The test verifies both cases:
a) constant creation with shared memory data (now O(1)) is significantly faster than creation plus the O(N) bitwise check
b) once calculated, the value is taken from the cache, which is significantly faster than recalculating it
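A minimal sketch of the deferred-and-cached pattern described above; the class and member names are illustrative, not the actual ov::op::v0::Constant code:

```cpp
#include <cstddef>
#include <vector>

// The O(N) scan runs only on the first call to the getter, and the result is
// cached in mutable members so the constructor and later calls stay O(1).
class ConstantLike {
public:
    explicit ConstantLike(std::vector<char> data) : m_data(std::move(data)) {}

    bool get_all_data_elements_bitwise_identical() const {
        if (!m_checked) {                        // first call: perform the O(N) scan
            m_identical = compute_identical();
            m_checked = true;                    // cache the result for later calls
        }
        return m_identical;
    }

private:
    bool compute_identical() const {
        for (std::size_t i = 1; i < m_data.size(); ++i)
            if (m_data[i] != m_data[0])
                return false;
        return true;
    }

    std::vector<char> m_data;
    mutable bool m_checked = false;    // whether the scan has been performed
    mutable bool m_identical = false;  // cached scan result
};
```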
* fix clang-format
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
* Update Binder URL on the tutorials landing page
The Binder URL was linking to a file; it should point to an actual Binder tutorial.
(Replaces https://github.com/openvinotoolkit/openvino/pull/10747)
* binder logo
* fixes
Co-authored-by: CCR\ntyukaev <nikolay.tyukaev@intel.com>
* InputTensorInfo::from implementation
If the user's application already has an `ov::runtime::Tensor` object created,
the basic input characteristics (shape, precision) can be reused from that tensor using the InputTensorInfo::from method
* Rename 'from' to 'set_from', since in Python the 'from' keyword is used for importing modules
Python bindings: accept both an ov.Tensor and a numpy array
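A hedged C++ sketch of the intended usage with the renamed method, assuming the final `set_from` API on `InputTensorInfo`; the model path and tensor shape are placeholders:

```cpp
#include <openvino/openvino.hpp>
#include <openvino/core/preprocess/pre_post_process.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder model path

    // A tensor the application already has; its shape and element type
    // should drive the input preprocessing configuration.
    ov::Tensor input_tensor(ov::element::u8, ov::Shape{1, 480, 640, 3});

    ov::preprocess::PrePostProcessor ppp(model);
    ppp.input().tensor().set_from(input_tensor);  // reuse shape/element type from the tensor
    model = ppp.build();
    return 0;
}
```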
* Style fix (quotes)
* Apply suggestions from code review
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
* Fix code style
* Use set_from in hello_classification CPP sample
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
* add placeholder for python version of first snippet
* fix problem with placeholder
* fix wrong file name
* fix fragment name
* update python snippets
* move imports to the top of the code fragments