* Added migration for deployment (#10800)
* Added migration for deployment
* Addressed comments
* more info added after the What's New session questions (#10803)
* more info added after the What's New session questions
* generalizing the optimal_batch_size vs explicit value message
* Update docs/OV_Runtime_UG/automatic_batching.md
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Update docs/OV_Runtime_UG/automatic_batching.md
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Update docs/OV_Runtime_UG/automatic_batching.md
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Update docs/OV_Runtime_UG/automatic_batching.md
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Update docs/OV_Runtime_UG/automatic_batching.md
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Update docs/OV_Runtime_UG/automatic_batching.md
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* Perf Hints docs and General Opt Guide refactoring (#10815)
* Brushed the general optimization page
* Opt GUIDE, WIP
* perf hints doc placeholder
* WIP
* WIP2
* WIP 3
* added streams and few other details
* fixed titles, misprints, etc.
* Perf hints
* moving the runtime optimizations intro
* fixed link
* Apply suggestions from code review
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
* some details on first-inference latency (FIL) and other factors when pure inference time is not the only consideration
* shuffled according to general->use-case->device-specifics flow, minor brushing
* next iter
* section on optimizing for tput and latency (hint usage is sketched after this PR's entries)
* couple of links to the features support matrix
* Links, brushing, dedicated subsections for Latency/FIL/Tput
* had to make the link less specific (otherwise the docs compilation fails)
* removing the "Temp/Should be moved to the Opt Guide" section
* shuffled the tput/latency/etc. info into separate documents; also moved the following docs from the temp section into the specific feature, general product description, or corresponding plugin pages:
- openvino_docs_IE_DG_Model_caching_overview
- openvino_docs_IE_DG_Int8Inference
- openvino_docs_IE_DG_Bfloat16Inference
- openvino_docs_OV_UG_NoDynamicShapes
* fixed toc for ov_dynamic_shapes.md
* referring to openvino_docs_IE_DG_Bfloat16Inference to avoid docs compilation errors
* fixed main product TOC, removed ref from the second-level items
* reviewers remarks
* reverted the openvino_docs_OV_UG_NoDynamicShapes
* reverting openvino_docs_IE_DG_Bfloat16Inference and openvino_docs_IE_DG_Int8Inference
* "No dynamic shapes" to the "Dynamic shapes" as TOC
* removed duplication
* minor brushing
* Caching to the next level in TOC
* brushing
* more on the perf counters (for latency and dynamic cases)
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
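For context on the hints the refactored guide describes, a minimal sketch of choosing a latency- or throughput-oriented configuration via `ov::hint::performance_mode`; the model path and the CPU device are placeholders, not taken from the docs above:

```cpp
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder model path

    // Latency-oriented deployment: the device picks minimal-latency settings itself.
    auto latency_compiled = core.compile_model(
        model, "CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::LATENCY));

    // Throughput-oriented deployment: the device configures streams/queues itself.
    auto tput_compiled = core.compile_model(
        model, "CPU", ov::hint::performance_mode(ov::hint::PerformanceMode::THROUGHPUT));

    // The hint translates into a device-specific number of requests to keep in flight.
    std::cout << "optimal number of infer requests: "
              << tput_compiled.get_property(ov::optimal_number_of_infer_requests) << std::endl;
    return 0;
}
```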
* Updated common IE pipeline infer-request section (#10844)
* Updated common IE pipeline infer-request section (pipeline sketched after this PR's entries)
* Update ov_infer_request.md
* Apply suggestions from code review
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
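A minimal sketch of the common pipeline the updated section covers (read model, compile, create an infer request, run, read the output), assuming the OpenVINO 2.0 C++ API; the model path and the zero-filled input are placeholders:

```cpp
#include <cstring>
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");          // placeholder model path
    auto compiled = core.compile_model(model, "CPU");

    ov::InferRequest request = compiled.create_infer_request();

    // Fill the first input with zeros just to keep the example self-contained.
    ov::Tensor input = request.get_input_tensor();
    std::memset(input.data(), 0, input.get_byte_size());

    request.infer();                                     // synchronous inference

    ov::Tensor output = request.get_output_tensor();
    return output.get_size() > 0 ? 0 : 1;
}
```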
* DOCS: Removed useless 4 spaces in snippets (#10870)
* Updated snippets
* Added link to encryption
* [DOCS] ARM CPU plugin docs (#10885)
* initial commit
ARM_CPU.md added
ARM CPU is added to the list of supported devices
* Update the list of supported properties
* Update Device_Plugins.md
* Update CODEOWNERS
* Removed quotes in limitations section
* NVIDIA and Android are added to the list of supported devices
* Added See Also section and reg sign to arm
* Added Preprocessing acceleration section
* Update the list of supported layers
* updated list of supported layers
* fix typos
* Added support disclaimer
* update trade and reg symbols
* fixed typos
* fix typos
* reg fix
* add reg symbol back
Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>
* Try to fix visualization (#10896)
* Try to fix visualization
* New try
* Update Install&Deployment for migration guide to 22/1 (#10933)
* updates
* update
* Getting started improvements (#10948)
* Onnx updates (#10962)
* onnx changes
* onnx updates
* onnx updates
* fix broken anchors api reference (#10976)
* add ote repo (#10979)
* DOCS: Increase content width (#10995)
* fixes
* fix
* Fixed compilation
Co-authored-by: Maxim Shevtsov <maxim.y.shevtsov@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Aleksandr Voron <aleksandr.voron@intel.com>
Co-authored-by: Vitaly Tuzov <vitaly.tuzov@intel.com>
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
Co-authored-by: Yuan Xu <yuan1.xu@intel.com>
Co-authored-by: Victoria Yashina <victoria.yashina@intel.com>
Co-authored-by: Nikolay Tyukaev <nikolay.tyukaev@intel.com>
* CPU device documentation refresh
* Bfloat16 inference page aligned with the new API
* Bfloat16 inference section moved to CPU main
* First review comments applied
* Second review step comments applied
* OneDNN reference changed to the GitHub page
* AvgPool added to the oneDNN ops list
* Add ReadValue, Assign to template plugin test (a stateful ReadValue/Assign sketch follows this group of entries)
* Fix clang error
* Fix clang error
* Remove unnecessary comment
* Fix type-casting error
* Fix ci issue regarding const value
* Change Function to Model
* Fix op scope
* Change way to get variable
* Fix type-casting error
* Set variable id to const
* Fix side-effect in ieFuncTests
* Implement Assign-3, ReadValue-3 in evaluates_map
* Correct setting attribute
* Correct setting attribute
* Remove unnecessarily added method
* Roll back v6
* Use member variable for variable_id in assign-3, read_value-3
* Get data pointer from host tensor
* Remove visitor API test for ReadValue-6, Assign-6
* Implement visitor api test for read_value-6, assign-6
* Fix clang error
* Split read_value and assign into each file for visitor test
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
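As an illustration of the ReadValue/Assign pair exercised by the entries above, a hedged sketch of a tiny stateful subgraph built around `ov::op::util::Variable`; the shapes, variable id, and model name are made up for the example:

```cpp
#include <memory>

#include <openvino/core/model.hpp>
#include <openvino/op/ops.hpp>
#include <openvino/op/util/variable.hpp>

std::shared_ptr<ov::Model> make_stateful_model() {
    using namespace ov;
    auto input = std::make_shared<op::v0::Parameter>(element::f32, Shape{1, 3});

    // The variable holds the state shared by the ReadValue/Assign pair.
    auto variable = std::make_shared<op::util::Variable>(
        op::util::VariableInfo{PartialShape{1, 3}, element::f32, "state_1"});

    // ReadValue returns the stored state (initialized from `input` on the first run).
    auto read = std::make_shared<op::v6::ReadValue>(input, variable);
    auto add = std::make_shared<op::v1::Add>(read, input);
    // Assign writes the updated value back into the same variable for the next run.
    auto assign = std::make_shared<op::v6::Assign>(add, variable);

    return std::make_shared<Model>(ResultVector{std::make_shared<op::v0::Result>(add)},
                                   SinkVector{assign},
                                   ParameterVector{input},
                                   "stateful_sketch");
}
```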
This behavior is already used by default because ONNX is enabled by default and thirdparty/onnx/onnx/CMakeLists.txt forces CMAKE_BUILD_TYPE to Release if it is not set.
It fixes the following issues:
- When the ONNX frontend is disabled, the source is built for Debug, which is very unexpected compared to Release with the ONNX frontend enabled
- When the ONNX frontend is disabled, even libopenvino.so could not be built due to issues with the generated makefiles
It is set to 'Release' (not 'Debug') to comply with the default behavior when ONNX is enabled (this is the default option that works for most users)
* Performance improvement for constant creation
The issue is that 'are_all_data_elements_bitwise_identical()' is called every time in the Constant constructor, and it potentially checks the whole buffer, which is O(N) complexity,
while it is needed only if the client calls 'get_all_data_elements_bitwise_identical'.
Solution:
- Defer the calculation until the first call of 'get_all_data_elements_bitwise_identical'
- Store the calculated value in a mutable class member to reuse it on subsequent calls of 'get_all_data_elements_bitwise_identical' (the pattern is sketched below)
The test verifies both cases:
a) that constant creation with shared memory data (now O(1)) is significantly faster than creation + bitwise check, which is O(N)
b) that once calculated, the value is taken from the cache, which is significantly faster than recalculation
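A minimal sketch of the defer-and-cache pattern described above, with illustrative names rather than the actual `ov::op::v0::Constant` code:

```cpp
#include <cstddef>
#include <optional>
#include <vector>

// Illustrative stand-in for the Constant class; only the caching pattern matters here.
class ConstantLike {
public:
    explicit ConstantLike(std::vector<char> data) : m_data(std::move(data)) {}

    // O(1) on construction; the O(N) scan runs only on the first call and is then cached.
    bool get_all_data_elements_bitwise_identical() const {
        if (!m_all_identical.has_value()) {
            m_all_identical = compute_all_identical();
        }
        return *m_all_identical;
    }

private:
    bool compute_all_identical() const {
        for (std::size_t i = 1; i < m_data.size(); ++i) {
            if (m_data[i] != m_data[0])
                return false;
        }
        return true;
    }

    std::vector<char> m_data;
    // mutable so the cached result can be filled in from a const getter.
    mutable std::optional<bool> m_all_identical;
};
```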
* fix clang-format
* Stash - Linux implementation
* Windows mmap implementation + unicode
* Clang for windows
* removed debug print
* Add handling of empty bin file
* fix windows includes
* Fix python test
* Unit tests
Fix for Constant with size > 4GB
* Fix review comments
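A minimal POSIX-only sketch of the mmap-based weight loading the entries above refer to (the Windows path would use CreateFileMapping/MapViewOfFile instead); the function name and error handling are illustrative, not the actual OpenVINO code:

```cpp
#include <cstddef>
#include <stdexcept>
#include <utility>

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Maps a .bin weights file read-only and returns the pointer/size pair.
std::pair<const void*, std::size_t> map_weights(const char* path) {
    int fd = ::open(path, O_RDONLY);
    if (fd < 0)
        throw std::runtime_error("cannot open file");

    struct stat st {};
    if (::fstat(fd, &st) != 0 || st.st_size == 0) {
        ::close(fd);
        // An empty .bin file is a legal case (model without weights): map nothing.
        return {nullptr, 0};
    }

    void* data = ::mmap(nullptr, static_cast<std::size_t>(st.st_size), PROT_READ, MAP_PRIVATE, fd, 0);
    ::close(fd);  // the mapping stays valid after closing the descriptor
    if (data == MAP_FAILED)
        throw std::runtime_error("mmap failed");

    return {data, static_cast<std::size_t>(st.st_size)};
}
```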
* refactoring: get bias shape in bc and fbc algorithms
* use scipy to take most frequent shape
* pylint
* update reference
* pylint
* Update test_sanity.py
* update test_sanity.py
* Update test_sanity.py
* [GNA] Added SW_FP32 mode w/o SF for BasicLSTM
* deleted additional test
added sw_fp32 mode for existing test
changed reference output for new mode
* [GNA] Fixed according to review
* [GNA] Parametrized weights range
* fixed after review
Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
* Written header files for the nGraph operations RDFT and IRDFT.
* Written nGraph shell for the operation RDFT.
* Added missed include.
* Added RDFT to opset9 table.
* Code style fixes.
* Written the nGraph shell of the operation IRDFT.
* Added IRDFT to opset9 table.
* Started to write shape infer tests for RDFT.
* Refactoring: shape infer functions of RDFT and IRDFT moved into separate files.
* Written shape infer tests for RDFT.
* Written shape infer tests for IRDFT operation.
* Fixed code style.
* Fixes in the shape infer function of RDFT.
* Fixes in the shape infer function of RDFT.
* Fixes in the shape infer function of IRDFT.
* Deleted redundant includes in include/ngraph/op/irdft.hpp and include/ngraph/op/rdft.hpp
* Deleted redundant includes in include/openvino/op/rdft.hpp and include/openvino/op/irdft.hpp.
* Deleted redundant includes in cpp-files of nGraph shells of operations IRDFT and RDFT.
* Code style fixes.
* Shape inference functions of operations RDFT and IRDFT moved to the namespace ov::op::util.
* Deleted RDFT and IRDFT from docs/template_plugin/backend/opset_int_tbl.hpp.
* Deleted 'using namespace ngraph' from cpp-files of nGraph shells of operations RDFT and IRDFT.
* Fixed typos.
* Merged some loops in shape inference functions of RDFT and IRDFT.
* Written visitor tests for RDFT and IRDFT.
* Small change.
* Common part of RDFT and IRDFT shape validation moved into a separate file (an illustrative output-shape example follows this group of entries).
Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
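As a hedged illustration of what the RDFT shape inference above computes, assuming static shapes and no signal_size, and based on the usual real-to-complex FFT output layout rather than the shape_infer sources themselves:

```cpp
#include <cstdint>
#include <vector>

// Illustrative RDFT output-shape calculation (static shapes only, no signal_size).
// The innermost transformed axis becomes floor(d/2) + 1 and a trailing dimension
// of size 2 is appended to hold the (re, im) pairs.
std::vector<int64_t> rdft_output_shape(std::vector<int64_t> shape, const std::vector<int64_t>& axes) {
    const int64_t last_axis = axes.back();
    shape[last_axis] = shape[last_axis] / 2 + 1;
    shape.push_back(2);  // complex pairs
    return shape;
}

// Example: input [1, 120, 64] with axes {1, 2} -> output [1, 120, 33, 2].
```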
* don't check dynamic shape when there is only one device
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* remove redundant if
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* modified docs/_static/images/dataset.png and docs/_static/images/inputs.png
* add new hint cumulative_throughput
* clang format properties.hpp
* add set-properties and get-properties test cases for CUMULATIVE_THROUGHPUT (usage sketched after this group of entries)
* reset docs/_static/images/dataset.png and docs/_static/images/inputs.png
* reset docs/_static/images/dataset.png and docs/_static/images/inputs.png
* reset dataset.png and inputs.png
* reset dataset.png and inputs.png
* remove the test value cumulative_throughput from the GPU plugin and CPU plugin test cases
* rollback dataset.png and inputs.png to 41818a377
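A hedged sketch of setting and reading back the new hint, mirroring the set/get property test cases mentioned above; the model path and the AUTO device list are placeholders:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;
    auto model = core.read_model("model.xml");  // placeholder model path

    // CUMULATIVE_THROUGHPUT asks AUTO to run the model on all listed devices at once.
    auto compiled = core.compile_model(
        model, "AUTO:GPU,CPU",
        ov::hint::performance_mode(ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT));

    // Read the property back, as the set/get test cases above do.
    auto mode = compiled.get_property(ov::hint::performance_mode);
    return mode == ov::hint::PerformanceMode::CUMULATIVE_THROUGHPUT ? 0 : 1;
}
```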
* add fps log
add format '%lf' for log
add INFO_RUN and DEBUG_RUN; the wrapped code runs only when the log level is greater than the specified level
add fps log for device
print device config info with DEBUG_RUN
add mock test for DEBUG_RUN and INFO_RUN
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
* use n / (end - start) instead of (n - 1) / ((n-th start) - (1st start)) (both formulas are sketched below)
Signed-off-by: Hu, Yuan2 <yuan2.hu@intel.com>
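Both FPS formulas from the last entry, as a small illustrative sketch (the timing variable names are made up):

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

using Clock = std::chrono::steady_clock;

// New formula: n / (end - start), measured over the whole run.
double fps_total(std::size_t n, Clock::time_point run_start, Clock::time_point run_end) {
    const double seconds = std::chrono::duration<double>(run_end - run_start).count();
    return seconds > 0.0 ? static_cast<double>(n) / seconds : 0.0;
}

// Old formula: (n - 1) / ((start of the n-th request) - (start of the 1st request)),
// which ignores the duration of the last request.
double fps_between_starts(const std::vector<Clock::time_point>& request_starts) {
    const double seconds =
        std::chrono::duration<double>(request_starts.back() - request_starts.front()).count();
    return seconds > 0.0 ? static_cast<double>(request_starts.size() - 1) / seconds : 0.0;
}
```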