Compare commits

...

417 Commits

Author SHA1 Message Date
Pawel Raasz
e7c1344d3c [core] Api 2.0/migrate Add operator to new API (#19984)
* Migrate Add operator to new API

* Remove `visit_attributes` as it calls base impl

* Use shape inference to calculate broadcast shape
2023-09-22 11:57:02 +04:00
Pawel Raasz
aa293c09ac [core] Api 2.0/migrate ceiling op to new API (#19909)
* Migrate Ceiling op to new API

* Remove f16 precision from evaluate

* Correct ceiling reference doxy comments
- correct functor arg to be const
2023-09-22 10:53:41 +04:00
Ekaterina Aidova
8d59fcd34f [PT FE]: extend logical operations support (#19981)
* [PT FE]: extend logical operations support

* tests

* more tests
2023-09-22 10:11:36 +04:00
Egor Duplenskii
3f3d89678e [CPU][ARM] Reorder FullyConnected weights in scope of compile_model (#19634) 2023-09-22 09:33:33 +04:00
Egor Duplenskii
6de8579b1d [CPU][ARM] Choose eltwise layout based on model type (#19234)
The order of the supported primitive descriptors for Eltwise affects the
performance of the model.
Often only one of the port descriptors matches the layout
of the parent descriptors, e.g. when two parent ports have mixed
layouts "nchw nhwc".
So either the nchw or the nhwc layout will be used for the eltwise node,
and a reorder will be inserted on one of the ports.
The shapes of the ports can also differ (when one of the inputs is
broadcast), so reorders on different ports have a different
performance impact.
The layout of the eltwise node's child affects performance
as well, since it may or may not require a reorder on its input.
2023-09-22 09:33:16 +04:00
Egor Duplenskii
aec4c6c843 [CPU][ARM][FP16] Use Transpose executor for layout Reorder (#19227) 2023-09-22 09:32:12 +04:00
Ekaterina Aidova
fde054e4a6 [PT FE]: support aten::minimum aten::maximum (#19996) 2023-09-22 09:30:57 +04:00
Taylor Yeonbok Lee
f1b8abe55a [GPU] Optimization for gemm & fc in iGPU. (#19780)
* Optimization for gemm & fc in iGPU.
FC: fake alignment to 16 is better on iGPU.
Gemm: permute + gemm_tiled_opt is better than the transposed_input + gemm_ref kernel for shapes not aligned to 16. Note that this is a temporary optimization and will be removed once the final solution (i.e., support for unaligned transposed input shapes in the gemm_tiled_opt kernel) is available.

* Fix unittest

* Fix for model_cache

* Fix unittest
2023-09-21 22:07:53 -07:00
Pavel Esir
efe54362fd fix f16/f32 el type mismatch for shape subgraphs: Parameter type should not be fused for precision sensitive nodes (#19959)
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-09-22 02:06:39 +04:00
Georgy Krivoruchko
10c3b60aac Updated model_creation_sample.py (#19989)
Removed usage of ngraph in the sample
2023-09-22 02:05:24 +04:00
Maxim Vafin
058b45e608 [PT FE] Fix aten::repeat regression (#19991)
* Revert "[PT FE] Simplify repeat operation (#19926)"

This reverts commit f926e0e392.

* Fix aten::repeat regression

* Simplify

* Update src/frontends/pytorch/src/op_table.cpp

* Add impacted model
2023-09-21 23:58:09 +02:00
Oleg Pipikin
b1bf16c7cf Refactor ConversionLayerTest, ConcatLayerTest, ConstantLayerTest, ConvertColorI420LayerTest, ConvertColorNV12LayerTest (#19777)
* Refactor ConversionLayerTest

* Refactor ConcatLayerTest

* Refactor ConstantLayerTest

* Refactor ConvertColorI420LayerTest

* Refactor ConvertColorNV12LayerTest
2023-09-21 22:56:14 +02:00
Zlobin Vladimir
e068cfc5a3 benchmark: drop python3.7 (#19972)
Ticket 118797
2023-09-21 22:03:09 +04:00
Vladislav Golubev
0e0e1b0ee6 SmartReshape: ReshapeMatMul transformations are fixed (#19987)
* SmartReshape: ReshapeMatMul transformations are fixed

* clang-format fixes
2023-09-21 19:49:07 +02:00
Maxim Vafin
d0ef28e541 Reverse infer deduce second Squeeze input (#19923)
* Reverse infer deduce second Squeeze input

* Fix build

* Fix tf tests

* Small optimization

* Fix type

* Fix tests
2023-09-21 17:37:33 +02:00
Maxim Vafin
02f5b7a7e3 [PT FE] Add huggingface hub tests (#19858)
* [PT FE] Add huggingface hub tests

* Update requirements.txt

* natten can't be installed

* Remove librosa

* Add xfail and skip marks

* Remove t5

* Free up space

* Apply suggestions from code review

* Report free space

* Clean pip cache

* Temporarily disable det2 models

* More du info

* Remove tmp info

* More du stats

* Fix du report

* Optimize du

* Ignore error

* Finalize changes

* Update tests/model_hub_tests/torch_tests/test_transformers.py
2023-09-21 17:30:17 +02:00
Vitaliy Urusovskij
b00fbd04c5 Fix StressMemLeaksTests with several models (#19986)
* Fix `StressMemLeaksTests` with several models

* Fix OMZ branch name in `get_testdata.py`
2023-09-21 15:59:18 +02:00
Maxim Vafin
37d54bcb42 [PT FE] Support prim::TupleIndex operation (#19978)
* [PT FE] Support prim::TupleIndex

* Update src/frontends/pytorch/src/op/tuple_index.cpp

* Update src/frontends/pytorch/src/op/tuple_index.cpp
2023-09-21 15:57:42 +02:00
Maciej Smyk
f790a3b4f2 [DOCS] Python 3.7 support removal from docs for master (#19670)
* Python update

* update

* fix

* m suffix removal
2023-09-21 15:54:10 +02:00
Vitaliy Urusovskij
318106d17d StridedSliceLayerTest and SqueezeUnsqueezeLayerTest to API2.0 (#19955)
* `StridedSliceLayerTest` to API2.0

* `SqueezeUnsqueezeLayerTest` to API2.0

* Fix CppLint
2023-09-21 15:08:51 +02:00
Tomasz Jankowski
c4adf80ec6 [Core] Move SlicePlan to Dev API (#19971)
* Move SlicePlan to Dev API

* Recover legacy API

---------

Co-authored-by: Evgenya Nugmanova <evgeniia.nugmanova@intel.com>
2023-09-21 14:41:01 +02:00
Vitaliy Urusovskij
e5acf880ad Slice8, ShuffleChannels, ShapeOf layer tests to API2.0 (#19975)
* `Slice8LayerTest` to API2.0

* `ShuffleChannelsLayerTest` to API2.0

* `ShapeOfLayerTest` to API2.0
2023-09-21 14:36:26 +02:00
Vitaliy Urusovskij
fc9dedfd6a Split, SpaceToDepth, SpaceToBatch layer tests to API2.0 (#19973)
* `SplitLayerTest` to API2.0

* `SpaceToDepthLayerTest` to API2.0

* `SpaceToBatchLayerTest` to API2.0
2023-09-21 12:29:30 +00:00
Ilya Churaev
e277d48535 Move unsqueeze evaluate to new API (#19948)
* Move unsqueeze evaluate to new API

* Add new header

* Fixed comments

* Fixed build

* Revert redundant changes

* Fixed comments
2023-09-21 16:07:10 +04:00
Vitaliy Urusovskij
2489356121 Skip failed TensorIteratorTest and CTCGreedy* CPU f16 tests on Mac (#19969)
* Skip failed `TensorIteratorTest` CPU f16 tests

* Skip `CTCGreedyDecoder` on MacOS CPU ARM f16
2023-09-21 15:44:37 +04:00
Siddhant Chauhan
d6d4e3deeb Align openvino.compile_model and openvino.Core.compile_model functions (#19778)
Co-Authored-By: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2023-09-21 11:15:30 +00:00
Nesterov Alexander
194b9f5c38 [ARM CPU] Migration Arm Compute Library to 23.08 version (#19523) 2023-09-21 14:30:38 +04:00
Pawel Raasz
5bf5e488b7 Migrate LogicalXor to new API (#19913) 2023-09-21 12:09:49 +02:00
Pavel Durandin
d90667c190 [GPU] Coverity fix (#19968) 2023-09-21 12:35:57 +04:00
Katarzyna Mitrus
7204b0b78d [Spec][Opset13] NMSRotated-13 specification (#19574)
* Init rotated non-max suppression spec

* Add opset13 docs

* Apply minor refactor from review

* Update boxes definition

* Update example format from cpp to xml

* Add version in op list

* Add clockwise attr to the example

* Align indent

* Remove redundant input from example

* Add steps of iou_rotated

* Add default values for attributes

* Drop box encoding attribute

* Rephrase input description

* Apply grammatical suggestions
2023-09-21 09:22:22 +02:00
Wilson Seok
c5843cf5d6 [GPU] use input memory buffer as output memory when input1/2 are empty (#19786)
* use input memory buffer as output memory when input1/2 are empty

* fix wrong rebase

* add func test

* implement in on_execute()

* remove deleted function definition

* remove unused header files

* fix include error

* update condition of empty input check
2023-09-20 22:59:11 -07:00
Luwei Zhou
9dcd66c695 [CPU] Unify oc block to optimize peak memory. (#19575) 2023-09-21 09:54:55 +04:00
Roman Lyamin
a800d3e4f4 [GPU] Update weights reorder output shape for fully_connected (#19925) 2023-09-21 09:26:42 +04:00
Wanglei Shen
1786e366a2 Support new format for CPU information on Win7 (#19759) 2023-09-21 09:17:47 +04:00
Ilya Churaev
4a0e3fc77f Remove ngraph from some sources (#19966) 2023-09-21 06:27:30 +04:00
Oleg Pipikin
69e9124eb7 Refactor ConvolutionBackpropDataLayerTest, ConvolutionLayerTest, DeformableConvolutionLayerTest (#19810)
* Refactor ConvolutionBackpropDataLayerTest

* Refactor ConvolutionLayerTest

* Refactor DeformableConvolutionLayerTest

* Apply comments

* Apply comments

* Fix
2023-09-21 00:50:52 +04:00
Evgenya Nugmanova
c1a8380052 Symbolic shape inference and graph optimizations (#19392)
* Symbolic shape inference and graph optimizations
- Prepares a place in CommonOptimizations pipeline for symbolic optimizations
- Introduces symbolic propagation and symbolic optimizations for ChainedMaximum, NopBroadcast and shape sub-graph optimization
- Introduces utility runtime info for TableOfEquivalence passing and disabling of value invalidation during shape inference

* Executes NgramFusion in a symbolic environment. Relaxes Ngram fusion pattern utilizing symbolic knowledge

* Remove debug model visualization

* rt_info copying to new Add operation

* Fix visualization and place validation in nicer place in symbolic transformation

* Fix Slice operation not to propagate labels if input and output dimension is fully dynamic

* Covering Vladislav comments

* Replace value invalidation followed by validation with revalidation, since it does the same thing

* Adding back invalidation of cached values to Symbolic Propagation pass

* Fix StridedSlice label propagation. Code style

* Update src/common/transformations/tests/symbolic_transformations/nop_broadcast.cpp
2023-09-20 18:00:07 +04:00
Aleksandr Voron
8558476047 [CPU] [ARM] Enable SoftMax SLT tests on ARM (#19823) 2023-09-20 17:57:34 +04:00
Fang Xu
228ea44743 [CPU] Use omp section instead of task in UpdateNodes logic (#19831) 2023-09-20 17:47:48 +04:00
Ilya Lavrenov
f53d880b2c Updated dependabot config for GHA updates (#19970) 2023-09-20 13:52:05 +04:00
Ilya Lavrenov
604aed1384 Updated Windows build docs (#17631) 2023-09-20 12:24:40 +04:00
Ekaterina Aidova
2c88fbf798 [PT FE]: support mixed precision in aten::min/max (#19936)
* [PT FE]: support mixed precision in aten::min/max

* fix eltwise dtype alignment for float16
2023-09-20 11:26:18 +04:00
Ilya Lavrenov
c67c0663fc Use target python packages (#19928) 2023-09-20 11:16:15 +04:00
Zhang Yi
7fe195a459 [Doc]Update cmake option for MLAS (#19963)
* [Doc]Update cmake option for MLAS

* Update docs/dev/cmake_options_for_custom_compilation.md

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-20 10:35:22 +04:00
dependabot[bot]
a558ebd4bc Bump actions/checkout from 3 to 4 (#19964)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-20 10:31:34 +04:00
dependabot[bot]
5d8687ae30 Bump SimenB/github-actions-cpu-cores from 1 to 2 (#19965)
Bumps [SimenB/github-actions-cpu-cores](https://github.com/simenb/github-actions-cpu-cores) from 1 to 2.
- [Release notes](https://github.com/simenb/github-actions-cpu-cores/releases)
- [Commits](https://github.com/simenb/github-actions-cpu-cores/compare/v1...v2)

---
updated-dependencies:
- dependency-name: SimenB/github-actions-cpu-cores
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-20 10:31:01 +04:00
Sofya Balandina
0f18d2a0ea [template] Add device ro props and fix compile_model props test (#19886) 2023-09-20 06:05:01 +04:00
Andrew Kwangwoong Park
394e58fafb [GPU] Fix canonicalization for fused dep's shape (#19667)
* [GPU] Fix canonicalization for fused dep's shape

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Update TC to be reproducible on the latest master

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix custom canonicalize shapes for Gather

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-09-19 16:57:10 -07:00
Ilya Churaev
631d6d3980 Fixed leftovers after migration to new API (#19941)
* Fixed leftovers after migration to new API

* Fixed tests

* Fixed clang format

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-20 01:36:44 +02:00
Anastasia Kuporosova
b3ee79520f [PyOV][Docs] Add docs for if-op (#19899)
* [PyOV][Docs] Add docs for if-op

* code style

* add rtypes

* add dots
2023-09-19 21:35:38 +00:00
Ilya Churaev
52e57e1777 Disable clang-format for legacy C API (#19944) 2023-09-19 22:26:57 +04:00
Evgenya Nugmanova
00bc4436a9 Allow avoiding Constant casting / printing VisualizeTree (#19845)
* Avoid Constant casting / printing when OV_VISUALIZE_TREE_CONST_MAX_ELEMENTS==0
Cast only the requested number of elements in Constant::cast_vector<>

* Refactor

* Revert style back

* Fix signed/unsigned comparison

* test

* Style

* Style
2023-09-19 20:13:19 +02:00
Anastasiia Pnevskaia
215a2f435b tf.Graph decoder accuracy fixes (#19903)
* Fixed outputs passing from decoder.

* Fixed get_variable method.

* Code style.

* Removed xfails from precommit models.

* Minor fix.

* Comment fixed.

* Added test.
2023-09-19 19:04:24 +02:00
Ilya Churaev
b6e0706961 Enable clang format for template plugin tests (#19942) 2023-09-19 18:20:30 +04:00
dependabot[bot]
472ad39a9d Update numpy requirement from <1.26,>=1.16.6 to >=1.16.6,<1.27 in /tests (#19938)
Updates the requirements on [numpy](https://github.com/numpy/numpy) to permit the latest version.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
- [Commits](https://github.com/numpy/numpy/compare/v1.16.6...v1.26.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-19 18:18:42 +04:00
Andrey Kashchikhin
233ae78ff4 [CI] [GHA] Introduce GHA Linux ONNX Runtime Pipeline (#19883)
* add pipeline

* address comments

* rm triggers
2023-09-19 16:06:47 +02:00
PiotrekCzajkowski
67a41b28ec Add file via upload (#19605)
* Add files via upload

add new words to dict

* Add files via upload

Fix typos and add new words to the cspell.json file.
2023-09-19 17:16:16 +04:00
Ilya Lavrenov
e5a3500174 Fixed compilation on macOS 14 with new core development tools (#19946) 2023-09-19 17:14:28 +04:00
Oleg Pipikin
57df7a44b7 Refactor CTCGreedyDecoderSeqLenLayerTest, GreedyDecoderLayerTest, CTCLossLayerTest (#19842)
* Refactor CTCGreedyDecoderSeqLenLayerTest

* Refactor GreedyDecoderLayerTest

* Refactor CTCLossLayerTest
2023-09-19 15:03:06 +02:00
Maxim Vafin
f926e0e392 [PT FE] Simplify repeat operation (#19926) 2023-09-19 16:02:46 +04:00
Vitaliy Urusovskij
ca344aea54 TensorIteratorTest to API2.0 (#19869)
* `TensorIteratorTest` to API2.0

* Port `TensorIteratorBody` to ov::test::utils
2023-09-19 15:46:13 +04:00
Roman Kazantsev
a139fb48b7 [TF FE][JAX] Add upper bound for jax dependencies (#19943)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-19 14:00:54 +04:00
Sofya Balandina
2de9df8832 [apiConformance] Remove bf16 precision as unsupported (#19761) 2023-09-19 14:00:14 +04:00
Evgeny Kotov
35d0d92ef7 fix TSGatherForward transformation (#19821)
* fix ts_gather_forward

* remove unneeded header
2023-09-19 13:55:43 +04:00
Ilya Churaev
9c908f5245 Migrate template backend to new api (#19843)
* Added set_element_type for tensor

* Moved template backend to new API

* Revert "Added set_element_type for tensor"

This reverts commit 27608d2ea0.

* Fixed build

* Fixed more errors

* Fixed loop implementation

* Fixed ONNX tests

* Small change

* Fixed set_shape for host tensor

* Fixed ReadValue Assign tests

* Fixed template tests

* Fixed build

* Fix Windows build

* Fixed more errors

* Fixed CTCLoss

* Fixed comments

* Removed all comments

* Fixed tensors update

* Try to fix tests
2023-09-19 11:46:11 +04:00
Oleg Pipikin
068cd4473d Refactor AdaPool, BatchToSpace, and BatchNorm shared tests (#19597)
* Refactor AdaPool, BatchToSpace, and BatchNorm shared tests
2023-09-19 09:40:29 +02:00
Vitaliy Urusovskij
6b5a22a656 Add bert-base-ner in MemLeak tests (#19817)
* Add `bert-base-ner` in MemLeak tests

* Fix segfault caused by `fillTensorRandom()`
2023-09-19 11:27:23 +04:00
dependabot[bot]
475ad32cc4 Update numpy requirement in /src/bindings/python (#19897)
Updates the requirements on [numpy](https://github.com/numpy/numpy) to permit the latest version.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
- [Commits](https://github.com/numpy/numpy/compare/v1.16.6...v1.26.0)

---
updated-dependencies:
- dependency-name: numpy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-19 11:19:35 +04:00
River Li
e34f3a1fee Fix warning in template plugin (#19932) 2023-09-19 11:13:57 +04:00
Karol Blaszczak
933d9c1c0a [DOCS] OVC docs adjustments (#19918) (#19924)
authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-09-19 08:56:45 +02:00
Chenhu Wang
e961ce307b [CPU] Eliminate dependency for emitters tails process (#19527) 2023-09-19 10:41:38 +04:00
Zhang Yi
9c4bc1cc2b [CPU] Fix OMP thread count for MLAS ThreadPool initialization (#19731) 2023-09-19 10:30:33 +04:00
Sofya Balandina
691f5938d2 [template] Add faked hint property (#19887)
* [template] Add faked hint property

* fix comments
2023-09-19 03:51:17 +02:00
Paul Youngsoo Ahn
03918c2cac bug fix update (#19568)
* [GPU] Fix gpu functional test failures
* set m_max_batch to 1
* add debug log for condition operation

* Add debug logs for condition and constant

* To fix the zero-byte allocation issue, convert a zero dimension to a 1 dimension in constant

* Add code to check if the output shape is dynamic in split offset calculation, and check allow_new_shape_infer in program_builder

* Add unit test for fix checking output shape

* Add test case for zero-dimension allocation and debug message

* Fix build failure for condition unit test

* Follow up code review
2023-09-18 14:13:38 -07:00
Przemyslaw Wysocki
e34c5a09c6 [PyOV] Add an __init__.py alignment check in CMake (#19882)
* Add init check

* Apply CR
2023-09-19 00:36:13 +04:00
Roman Kazantsev
6556d07c32 [TF FE][Tests] Fix sporadic failure in CTCLoss test (#19920) 2023-09-18 20:25:44 +00:00
Ilya Lavrenov
c7850276dd Check only build requirements for Python API (#19919) 2023-09-19 00:11:05 +04:00
Ilya Lavrenov
54609e7b72 Removed FixRtInfo pass (#16870)
* Removed FixRtInfo pass

* Removed FixRtInfo pass

* Fixed macOS compilation
2023-09-19 00:07:59 +04:00
Mikhail Ryzhov
10dc2d8b9b [GHA] Improvement of test execution time cache (#19881)
* renamed cache

* disabled PR cache

* corrected save condition

* removed id

* fixed path in save cache action

* corrected if condition
2023-09-18 19:29:01 +04:00
Maxim Vafin
c10b45fe9e [PT FE] Fix issue with http error when using torch.hub (#19901)
* [PT FE] Fix issue with http error when using torch.hub

* Mark failing models as xfail

* Remove incorrect model names
2023-09-18 17:04:39 +02:00
Karol Blaszczak
dbab89f047 [DOCS] minor post release tweaks (#19914) 2023-09-18 16:24:26 +02:00
Artyom Anokhov
6df420ed67 [Azure] Fix linux_debian.yml (#19911)
* Update linux_debian.yml

Fixed apt-get update

* More fixes

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-18 17:36:55 +04:00
Roman Kazantsev
d90ceb93d1 [TF Hub][TF FE] Fix 5D case for FusedBatchNorm (#19904)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-18 12:51:33 +00:00
Roman Kazantsev
df19699e3a [TF Hub API][TF FE] Support TF Keras Model OOB without example_input (#19892)
* [TF Hub] Cover TF Hub use cases with adoption to OpenVINO

This is necessary to demonstrate support for models programmed with the TF Hub API
through OV notebooks.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Preserve original keras input and output tensor names

* Add tests with TF Hub API models

* No KerasLayer handling

* Handle specific signature

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-18 12:18:05 +00:00
Maxim Vafin
a4cbac3dee [PT FE] Add tests for detectron2 models (#19888)
* [PT FE] Add tests for detectron2 models

* Fix names of tests

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Create secondary requirements file

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-09-18 13:40:40 +02:00
Karol Blaszczak
49df8fa45e [DOCS] updated archives for 2023.1 port master (#19896) (#19902)
port: https://github.com/openvinotoolkit/openvino/pull/19896
2023-09-18 10:22:54 +00:00
Ilya Lavrenov
253ca8c746 Fixed GPU plugin static build with oneDNN (#19811)
* Fixed GPU plugin static build with oneDNN

* Fixed issue with absolute paths inside installed OpenVINOConfig.cmake

* Fixed absolute paths in installed OpenVINOConfig.cmake

* Changed components for installation
2023-09-18 13:58:20 +04:00
mei, yang
e9aaf9aa1b [CPU] Fix Interpolate impl bug and add related test case (#19783) 2023-09-18 12:39:11 +04:00
Przemyslaw Wysocki
09aece638d Fix typo in Good First Issue template (#19898)
* Add gfi

* Minor change

* Fix linter

* fix typo

* Fix typo'
2023-09-18 11:53:45 +04:00
dependabot[bot]
b7dcae3ab6 Bump actions/checkout from 3 to 4 (#19893)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-18 10:42:07 +04:00
dependabot[bot]
5fd16dc92e Bump SimenB/github-actions-cpu-cores from 1 to 2 (#19894)
Bumps [SimenB/github-actions-cpu-cores](https://github.com/simenb/github-actions-cpu-cores) from 1 to 2.
- [Release notes](https://github.com/simenb/github-actions-cpu-cores/releases)
- [Commits](https://github.com/simenb/github-actions-cpu-cores/compare/v1...v2)

---
updated-dependencies:
- dependency-name: SimenB/github-actions-cpu-cores
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-18 10:41:41 +04:00
yanlan song
b8942b6dd6 Ensure so is there for lifecycle (#19510)
* ensure so is there for lifecycle

Signed-off-by: fishbell <bell.song@intel.com>

* batch plugin + batch not triggered case

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* fix settensor failure

Signed-off-by: fishbell <bell.song@intel.com>

* fix model info mismatch when load with 1.0 API with ppp info

Signed-off-by: fishbell <bell.song@intel.com>

* remove unnecessary ppp code

Signed-off-by: fishbell <bell.song@intel.com>

* Update src/plugins/auto_batch/src/compiled_model.cpp

Co-authored-by: Ilya Churaev <ilyachur@gmail.com>

* enable the meta holder cases

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
Co-authored-by: Ilya Churaev <ilyachur@gmail.com>
2023-09-18 14:13:38 +08:00
Ilya Lavrenov
0b8237f508 Removed testdata repo usage (#19890) 2023-09-18 06:59:56 +04:00
Przemyslaw Wysocki
11016a9357 Add Good First Issue template (#19871)
* Add gfi

* Minor change

* Fix linter

* fix typo
2023-09-16 23:29:46 +04:00
Przemyslaw Wysocki
0247f4a9ab Clean up issue templates (#19874)
* Cleanup

* minor changes

* performance.yml
2023-09-16 23:29:33 +04:00
Maciej Smyk
0edd62b96a [DOCS] Notebooks iframe update for 23.1 2023-09-15 15:58:37 +02:00
Maciej Smyk
bcb469ab19 19680 & 19849 (#19879) 2023-09-15 14:55:32 +02:00
Andrey Kashchikhin
2cd1308104 [CI] [GHA] Introduce GHA Windows Conditional Compilation Pipeline (#19343)
* introduce win cc pipeline

* disable some triggers

* do not install unnecessary dependencies

* clone models

* rm triggers

---------

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2023-09-15 16:25:32 +04:00
Andrey Kashchikhin
e090e37f6f [CI] [GHA] Introduce GHA Linux Conditional Compilation Pipeline (#19341)
* introduce linux cc

* disable some triggers

* check dirs

* use another model

* return model

* rm triggers

---------

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2023-09-15 16:24:44 +04:00
Anastasia Kuporosova
68ba8873a2 [Docs] Update python snippets with new properties imports (#19872) 2023-09-15 16:19:44 +04:00
Karol Blaszczak
2ec80439d7 [DOCS] benchmark update 23.1 (#19683) 2023-09-15 11:50:21 +02:00
dependabot[bot]
5b6043069d Bump codecov/codecov-action from 3 to 4 (#19863)
Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 3 to 4.
- [Release notes](https://github.com/codecov/codecov-action/releases)
- [Changelog](https://github.com/codecov/codecov-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/codecov/codecov-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: codecov/codecov-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-15 08:49:24 +00:00
Evgenya Nugmanova
2205f48c71 Squeeze w/o axes [1, -1] -> dyn_rank (#19593)
Removes PDPD logic from core code, keeps PDPD specifics in the translator
2023-09-15 12:27:12 +04:00
Anastasia Kuporosova
d62348337f [PyOV] Import properties from openvino (#19815) 2023-09-15 10:17:44 +02:00
Ilya Lavrenov
619c4bfce1 Enable tests for Conan ARM build (#19860) 2023-09-15 10:19:34 +04:00
Zhang Yi
ceab8059d1 [CPU]apply sdl requirement (#19438) 2023-09-15 10:17:45 +04:00
Karol Blaszczak
545779f99f [DOCS] troubleshooting article 2023-09-15 08:07:53 +02:00
Ilya Lavrenov
db395155b3 Removed warning suppressions for extra modules (#16479) 2023-09-15 02:53:32 +00:00
Ilya Lavrenov
ba67db66ae Properly enable CMAKE_COMPILE_WARNING_AS_ERROR (#19828)
* Properly enable CMAKE_COMPILE_WARNING_AS_ERROR_DEFAULT

* Properly enable CMAKE_COMPILE_WARNING_AS_ERROR
2023-09-15 01:20:00 +04:00
Anastasiia Pnevskaia
18f29c02d2 Fixed scalar creating in tf.Graph decoder (#19735)
* Fixed scalar logic in tf.Graph decoder.

* Passed memory sharing flag to all Tensor constructors.

* Small correction.

* Test correction.
2023-09-14 23:36:22 +04:00
Egor Duplenskii
73d8843da8 [CPU] Correct ConvertFqRnnToQuantizeRnn transformation (#19850)
by ensuring convert is placed after u8/i8 output
2023-09-14 18:04:58 +00:00
Maciej Smyk
b3bfcfc399 [DOCS] Notebooks Tutorials Page Update for master (#19852) 2023-09-14 17:52:24 +02:00
Irina Efode
c979ece791 [CONFORMANCE][IE TESTS] Remove w/a with using Convert (#19748) 2023-09-14 17:01:17 +02:00
Karol Blaszczak
7445a9c77b [DOCS] release adjustments pass 3 - conversion port to master (#19846)
authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-09-14 16:13:21 +02:00
Vitaliy Urusovskij
53fef5f558 Fix incorrect use of ASSERT (#19838) 2023-09-14 13:34:14 +00:00
Ilya Lavrenov
35a0706dff Replaced several cmake utilities with new ov_ prefix (#19819)
* Replaced several cmake utilities with new ov_ prefix

* Replaced several cmake utilities with new ov_ prefix
2023-09-14 16:22:50 +04:00
Maciej Smyk
d6ef6e253c [DOCS] Notebooks update for master (#19822)
* notebooks-update

* notebooks-update

* fix

* Update 121-convert-to-openvino-with-output.rst

* Update 121-convert-to-openvino-with-output.rst

* fix

* table of content fix

* fix

* fix

* fix

* fix

* Update tutorials.md

* fix

* fix

* Update 115-async-api-with-output.rst

* Update 227-whisper-subtitles-generation-with-output.rst
2023-09-14 14:21:12 +02:00
Przemyslaw Wysocki
b1b3343ffc [PyOV] Bump scipy to a secure version and bump OMZ (#19458)
* Bump omz and scipy

* Separate versions per python version

* Change scipy in pot
2023-09-14 13:41:27 +02:00
Ilya Churaev
fa667156cb Check HolderTests under the proxy (#19785)
* Skip only virtual device tests

* Fixed proxy life time

* Fixed compiled model get property

* Fixed code style

* Try to fix LTO
2023-09-14 15:11:26 +04:00
Maxim Vafin
1a950f9e8d [PT FE] Torchvision NMS can accept negative scores (#19826) 2023-09-14 11:07:24 +02:00
Edward Shogulin
16adb01810 [LPT] SpaceToBatch & BatchToSpace implementation (#19660)
* [LPT] SpaceToBatch & BatchToSpace implementation

* Update docs/IE_PLUGIN_DG/plugin_transformation_pipeline/low_precision_transformations/pipeline/step3_main.md

* comments: fixes & refactoring

* rebase fix

* Update docs/IE_PLUGIN_DG/plugin_transformation_pipeline/low_precision_transformations/pipeline/step3_main.md

* rebase fix

---------

Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-09-14 08:16:04 +01:00
Ilya Churaev
4df4ea9b31 Move ngraph function to new api (#19728)
* Moved ngraphFunctions to new API

* Fixed code style

* Fixed build function

* Fixed cpu unit tests

* Fixed code style

* Fixed transformation tests

* Fixed code style

* Fixed build

* Fixed LP tests

* Fixed build all for macOS

* Fixed more issues

* Fixed some func tests

* Try to fix CPU tests

* Revert incorrect change

* Try to fix tests

* Fixed merge conflicts

* Remove redundant headers

* Update src/tests/ngraph_helpers/ngraph_functions/src/non_max_suppression.cpp

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>

---------

Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-14 10:57:23 +04:00
Roman Lyamin
5ba60f845e [GPU] Added zero input support for Pad (#19720) 2023-09-14 09:59:53 +04:00
Przemyslaw Wysocki
66dd347d38 remove upper bound (#19802) 2023-09-14 07:53:00 +02:00
Mingyu Kim
b044757d8c [GPU] doc update for broken links (#19829) 2023-09-14 13:57:42 +09:00
Siddhant Chauhan
4ca3d51a40 Add an error message when creating an empty Constant (#19674)
Co-authored-by: Anastasia Kuporosova <anastasia.kuporosova@intel.com>
2023-09-14 00:11:32 +02:00
Anastasia Kuporosova
2bf8d910f6 [Docs][PyOV] update python snippets (#19367)
* [Docs][PyOV] update python snippets

* first snippet

* Fix samples debug

* Fix linter

* part1

* Fix speech sample

* update model state snippet

* add serialize

* add temp dir

* CPU snippets update (#134)

* snippets CPU 1/6

* snippets CPU 2/6

* snippets CPU 3/6

* snippets CPU 4/6

* snippets CPU 5/6

* snippets CPU 6/6

* make module TODO: REMEMBER ABOUT EXPORTING PYTHONPATH ON CIs ETC

* Add static model creation in snippets for CPU

* export_comp_model done

* leftovers

* apply comments

* apply comments -- properties

* small fixes

* remove debug info

* return IENetwork instead of Function

* apply comments

* revert precision change in common snippets

* update opset

* [PyOV] Edit docs for the rest of plugins (#136)

* modify main.py

* GNA snippets

* GPU snippets

* AUTO snippets

* MULTI snippets

* HETERO snippets

* Added properties

* update gna

* more samples

* Update docs/OV_Runtime_UG/model_state_intro.md

* Update docs/OV_Runtime_UG/model_state_intro.md

* attempt1 fix ci

* new approach to test

* temporary remove some files from run

* revert cmake changes

* fix ci

* fix snippet

* fix py_exclusive snippet

* fix preprocessing snippet

* clean-up main

* remove numpy installation in gha

* check for GPU

* add logger

* exclude main

* main update

* temp

* Temp2

* Temp2

* temp

* Revert temp

* add property execution devices

* hide output from samples

---------

Co-authored-by: p-wysocki <przemyslaw.wysocki@intel.com>
Co-authored-by: Jan Iwaszkiewicz <jan.iwaszkiewicz@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-09-13 21:05:24 +02:00
Maxim Vafin
4f92676c85 [PT FE] Support aten::one_hot (#19779)
* [PT FE] Support aten::one_hot

* Apply code style
2023-09-13 20:37:47 +02:00
Oleg Pipikin
f744869551 Refactor BroadcastLayerTest and GRUSequenceTest (#19486)
* Refactor GRUSequenceTest

* Refactor BroadcastLayerTest

* Temporarily disable GRUSequenceTest
2023-09-13 19:18:56 +02:00
Mikhail Ryzhov
0234357869 [GHA] Changed ubuntu build runner to 20.04 (#19790)
* changed OS build runner to 20.04

* fixed PR

* changed version for test jobs

* Updated title

* Update .github/workflows/linux.yml

* Update linux.yml for PT models

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-09-13 18:46:50 +04:00
Ivan Tikhonov
d0213301a5 Transformations: API 2.0 transition part 3 for LPT transformations (#19610)
* lpt transformations: transition to api 2.0, ngraph -> openvino

* use ov namespace for lpt transformations

* fix low_precision usings

* includes refactoring

* delete RecurrentGraphRewrite and RecurrentMatcher as unused classes

* use ov header for itt; delete the disabled test

* delete the unused function

* suppress doxygen warning

* fix link in the documentation
2023-09-13 12:30:31 +00:00
Nadezhda Ageeva
3454139931 [HETERO] Hetero refactor subgraph collector. Adds unit tests. (#19656)
* [HETERO] Refactor subgraph collector. Add unit tests.

* [HETERO] Adds ov_hetero_unit_tests to azure

* [HETERO] Adds ov_hetero_unit_tests to github workflows

* Small updates

* Set CI_BUILD_NUMBER

* Fix cmake

* Fix cpplint

* STATIC -> OBJECT

* Fix .github/workflows/linux.yml

* Fix cmake

* Fix .github/workflows/linux_debian.yml

* Fix win build: separate version file
2023-09-13 12:17:32 +00:00
Vitaliy Urusovskij
972bb73298 TileLayerTest to API2.0 (#19770)
* `TileLayerTest` to API2.0

* Remove `ngraph::` use

* Fix cpplint
2023-09-13 12:10:02 +00:00
Karol Blaszczak
7445f5bea6 [DOCS] release adjustments pass 2 (#19805)
port: https://github.com/openvinotoolkit/openvino/pull/19796
2023-09-13 13:47:20 +02:00
Georgy Krivoruchko
5eff59a2d0 [ONNX] Switched ONNX to 1.14.1 (#18359)
* Switched ONNX to 1.14

* Updated IR_VERSION in the tests

* Assigned an extended tests to issues

* Switched ONNX to 1.14.1

* Slightly relaxed requirements for ONNX 1.14.1 and updated conan.lock
2023-09-13 15:40:08 +04:00
Maxim Vafin
3c762240f3 [PT FE] Add tests for torchvision models (#19702)
* [PT FE] Add tests for torchvision models

* Update tests/model_hub_tests/torch_tests/requirements.txt

* Apply review comments

* Clean tmp directory and make this separate job

* Update .github/workflows/linux.yml

* Update tests/model_hub_tests/torch_tests/test_torchvision_models.py
2023-09-13 15:24:09 +04:00
bstankix
4061b960f4 [DOCS] Add units to benchmark graphs (#19799) 2023-09-13 09:39:18 +00:00
Karol Blaszczak
dcbfbf8411 [DOCS] legacy adjustments pass 1 (#19787) 2023-09-13 11:20:25 +02:00
Andrey Kashchikhin
1f0c98ed8c [CI] [GHA] Create only build directory for conan build (#19789)
* create only build dir

* rm triggers
2023-09-13 13:08:04 +04:00
Ilya Lavrenov
8cff0697a7 Ability to use RapidJSON as find_package() (#19762)
* Ability to use RapidJSON as find_package()

* Use default features in vcpkg.json
2023-09-13 12:46:40 +04:00
Andrey Kashchikhin
e9f4e4db65 create dirs (#19769) 2023-09-13 00:37:22 +04:00
Ilya Lavrenov
08fb0a2722 Removed CMAKE_INSTALL_LIBDIR from oneDNN GPU configuration (#19716) (#19771) 2023-09-13 00:31:14 +04:00
Vladimir Paramuzov
541f2dc62f [GPU] Fixed static init order for serialization (#19768) 2023-09-13 00:31:05 +04:00
Roman Kazantsev
d1a8c8f914 [TF Hub] Set seed for input data generation and fix integer input data (#19765)
* [TF Hub] Set seed for input data generation and fix integer input data

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Clean-up workflow

* Update precommit model scope

* Avoid legacy generator

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-13 00:30:39 +04:00
Pratham Ingawale
9250d17e01 pytest: extractor_test.py (#19487) 2023-09-12 22:23:09 +04:00
Pawel Raasz
e3f1ff7f2a Migrate mod op evaluate (#19687) 2023-09-12 15:27:51 +04:00
Pawel Raasz
693c6d7a11 Migrate the Abs operator to new API (#19763) 2023-09-12 15:26:45 +04:00
Pawel Raasz
f3d4665f7b Api 2.0/migrate shape inference test to new api (#19665)
* Migrate static shape inference test to new API

* Use new API in CPU custom shape inference tests

* Rename range shape inference test file
2023-09-12 15:15:04 +04:00
Pawel Raasz
4af1fd087c [core] Migrate the Assign operator to new API (#19664)
* Migrate the Assign operator to new API

* Use memcpy instead of tensor copy_to
2023-09-12 15:10:23 +04:00
Sergey Lyalin
adf7a24ec0 [DOCS] OVC/convert_model Documentation (#19555)
* Added OVC and ov.convert_model() description.

* Minor corrections.

* Small correction.

* Include page to toctree.

* WIP: Model Preparation

* Forked OVC/ov.convert_model documentation sub-directory; reworked model_introduction.md

* Reverted ovc-related changes in old MO_DG documentation

* State explicitly that MO is considered legacy API

* Reduced ovc description in model preparation part; added TF Hub example (via file)

* Grammar check; removed obsolete parts not relevant to ovc; better wording

* Removed a duplicate of mo-to-ovc transition

* Fixed links and some other errors found in documentation build

* Resolved XYZ placeholder to the transition guide

* Fixed technical issues with links

* Up-to-date link to PTQ chapter (instead of obsolete POT)

* Fixed strong text ending

* Update docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>

* Update docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>

* Update docs/OV_Converter_UG/prepare_model/convert_model/MO_OVC_transition.md

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>

* Renamed Legacy conversion guides

* Fixed links and styles for inlined code

* Fixed style for code references

* Fixing technical syntax errors in docs

* Another attempt to fix docs

* Removed all unreferenced images

* Better content for Additional Resources in model preparation introduction

* MO to OVC transition guide. (#127)

* Examples code correction.

* Change format of example.

* Conflict fix.

* Remove wrong change.

* Added input_shapes example.

* batch example.

* Examples format changed.

* List item removed.

* Remove list for all examples.

* Corrected batch example.

* Transform example.

* Text corrections.

* Text correction.

* Example correction.

* Small correction.

* Small correction.

* Small correction.

* Small correction.

* Text corrections.

* Links corrected.

* Text corrections (#128)

* Text corrections.

* Example corrected.

* Update docs/install_guides/pypi-openvino-dev.md

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

---------

Co-authored-by: Sergey Lyalin <sergey.lyalin@intel.com>

* Many technical fixes, description of recursive flattening of list and tuples

* Reorganized structure of Model Conversion toc tree. Removed fp16 dedicated page, merged to Conversion Parameters.

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_TensorFlow.md

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update docs/Documentation/model_introduction.md

Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>

* Fixed example from tf hub. Removed input_shape references

* Update docs/Documentation/model_introduction.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/Documentation/model_introduction.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/Documentation/model_introduction.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Removed

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/OV_Converter_UG/prepare_model/convert_model/Convert_Model_From_ONNX.md

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Fixed links

* Removed TODO for model flow

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Restored lost code-blocks that led to wrong rendering of the code snippets in some places

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

* Update docs/Documentation/model_introduction.md

* Fixed links to notebooks

* Apply suggestions from code review

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
Co-authored-by: Maciej Smyk <maciejx.smyk@intel.com>
Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-09-12 14:31:54 +04:00
Oleg Pipikin
0675d9fd8b Refactor ComparisonLayerTest, ClampLayerTest (#19681)
* Refactor ClampLayerTest

* Refactor ComparisonLayerTest
2023-09-12 11:46:30 +02:00
Vladislav Golubev
faa6b77247 [Snippets] LIR serialization: additional connections between LoopBegin and LoopEnd nodes (#19630)
* [Snippets] LIR serialization improvements

* Minor correction

* Review comments
2023-09-12 11:20:18 +02:00
Anastasia Kuporosova
3c1b384694 [PyOV] Expose missed properties (#19678) 2023-09-12 10:45:10 +02:00
Maxim Vafin
a6bc78dd0f [PT FE] Separate tracing and scripting modes (#19676)
* [PT FE] Separate scripting and tracing in decoder

* Fix convert_model to accept decoder

* Some fixes

* Fix code style

* Fix preprocessor tests

* Fix tests

* Fix tests

* Fix more tests

* Fix ovc tests
2023-09-12 12:40:20 +04:00
HenryLin-png
514f9864af CVS-98205 and CVS-114018 (#18592)
* Changed ls calls to /bin/ls, unset python_version before parsing cmd line

* Update setupvars.sh

Unset all temporary variables

---------

Co-authored-by: henry1.lin <linhenry@ttoycron01u.tor.intel.com>
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
2023-09-12 12:22:24 +04:00
Ilya Lavrenov
58546b2ecb Use ov_mark_target_as_cc in CPU oneDNN (#19766) 2023-09-12 12:20:49 +04:00
Vitaliy Urusovskij
7bb22b43b3 TopKLayerTest to API2.0 (#19738) 2023-09-12 12:02:01 +04:00
Vladimir Paramuzov
47fe50ca35 [GPU] 2.0 plugin api impl (#18920) 2023-09-12 11:13:59 +04:00
Oleksii Khovan
8e0d8dd36b [GPU] Pad-12 (#19083)
* GPU primitive and kernel changes to support Pad-12

* Exclude Pad-12 from GPU transformations pipeline

* add unit tests

* add single-layer test for Pad-12
2023-09-12 10:18:04 +04:00
Fang Xu
016c7dea8a update oneTBB with https://github.com/oneapi-src/oneTBB/releases/tag/v2021.2.3 (#19639) 2023-09-12 13:53:23 +08:00
Ilya Lavrenov
6a1d680f90 Partially fixed github issue 18274 (#19758) 2023-09-12 07:34:45 +04:00
Ilya Churaev
3be8b58d2a Update classes func tests (#19663)
* Remove legacy classes from functional_test_utils

* Fixed code style

* Fixed build all for macOS

* Suppress warning

* Revert old functions for internal plugins
2023-09-12 07:09:45 +04:00
Ilya Churaev
7becaf8494 Remove legacy API from common test utils (#19647)
* Remove legacy API from common test utils

* Fixed code style

* Fixed build

* Try to fix Windows build

* Fixed GNA build
2023-09-12 07:09:12 +04:00
Nikolay Shchegolev
497f42bd82 Post commit fix for #19521. (#19741) 2023-09-12 01:31:57 +04:00
Roman Kazantsev
fc5696321a [TF Hub][GA] Separate Workflow for TF Hub Tests Validation (#19754)
* [TF Hub][GA] Use Ubuntu 20.04 for TensorFlow Hub Models validation and Separate job

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Apply review comments: ubuntu-20.04 use and install deps

* Simplify validation pipeline for TF Hub Models

* Remove extra deps installations

* Remove not needed code

* Try to fix

* Try 22.04

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-12 01:05:50 +04:00
Andrey Kashchikhin
3e95e48309 [CI] [GHA] Remove unnecessary steps and cmake options, use proper # of CPU cores for ARM64 pipeline (#19746)
* address comments

* rm

* use machine's # of cpu core
2023-09-11 22:37:42 +04:00
Bartlomiej Bielawa
2320329a51 [DOCS] Modify dropdowns css 2023-09-11 17:07:28 +02:00
Maciej Smyk
e614b8f69a [DOCS] Update of model_conversion_diagram.svg for master (#19737)
* Update model_conversion_diagram.svg

* Update model_conversion_diagram.svg

* Update model_conversion_diagram.svg
2023-09-11 18:49:31 +04:00
Mateusz Tabaka
d0dda74fc2 Handle negative values in GroupedSliceToVSplitOptimization (#19495)
* Handle negative values in GroupedSliceToVSplitOptimization

CVS-118897

* change the way of getting slice inputs

* clamp value

---------

Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
2023-09-11 18:31:39 +04:00
Vladimir Paramuzov
7e3e1e2480 [GPU] Support of int8 compressed weights for matmul (#19548) 2023-09-11 18:11:34 +04:00
Ilya Lavrenov
a1cc5e6692 Resolve ARM CPU plugin illegal instruction on older Linux systems (like Ubuntu 18.04) (#19717) 2023-09-11 17:15:42 +04:00
Vitaliy Urusovskij
847eb3f4f1 TransposeLayerTest to API2.0 (#19671) 2023-09-11 16:15:53 +04:00
Vitaliy Urusovskij
9f4e918ee2 Gracefully fail if test models weren't generated (#19705)
* Gracefully fail if test models weren't generated

* Add assert instead of return `nullptr`
2023-09-11 15:18:45 +04:00
Sebastian Golebiewski
3d872f14e4 [DOCS] Remove index file from notebooks (#19619) 2023-09-11 10:56:08 +00:00
Vitaliy Urusovskij
fb59d0eb36 VariadicSplitLayerTest refactoring to API2.0 (#19648) 2023-09-11 14:55:36 +04:00
Karol Blaszczak
da79964bd3 [DOCS] banner what's new text (#19730) 2023-09-11 12:15:47 +02:00
Ilya Lavrenov
f519f2990d Added python3 vcpkg port dependencies to GHA workflow (#19718) 2023-09-11 10:37:46 +01:00
Maciej Smyk
86c3184e2f Update installing-openvino-pip.md (#19726) 2023-09-11 11:02:26 +02:00
dependabot[bot]
a1a56a3d29 Bump SimenB/github-actions-cpu-cores from 1 to 2 (#19724)
Bumps [SimenB/github-actions-cpu-cores](https://github.com/simenb/github-actions-cpu-cores) from 1 to 2.
- [Release notes](https://github.com/simenb/github-actions-cpu-cores/releases)
- [Commits](https://github.com/simenb/github-actions-cpu-cores/compare/v1...v2)

---
updated-dependencies:
- dependency-name: SimenB/github-actions-cpu-cores
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-11 12:51:33 +04:00
dependabot[bot]
f3de5a2fba Bump actions/checkout from 3 to 4 (#19725)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-11 12:51:09 +04:00
Andrew Kwangwoong Park
5604566795 [GPU] Minor fix to get correct input layout for dump layer (#19686)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-09-11 12:21:09 +04:00
Andrew Kwangwoong Park
161ba14796 [GPU] Fix GatherND shape agnostic ref kernel (#19706)
Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-09-11 12:20:10 +04:00
Roman Kazantsev
530da61a4e [TF Hub] Fix compute output size issue in test (#19719)
It helps 4 new models pass; 2 other models still fail with accuracy issues.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-11 12:14:09 +04:00
Irina Efode
1d0709f533 [CONFORMANCE] Enable QM + IE tests in Opset Conformance (#19693) 2023-09-11 11:10:47 +04:00
Kelvin Choi
2f4f76070f [GPU] Update strided_slice for partially dynamic shape case (#19467) 2023-09-10 22:44:13 -07:00
Pawel Raasz
5833e7d55d Migrate ReduceL1, ReduceL2 to new API (#19622)
* Migrate ReduceL1, ReduceL2 to new API
- add some new utils which are deprecated

* Add missing include

* Remove debug message

* Hide helper functions from public API
2023-09-11 07:17:57 +04:00
Ilya Lavrenov
51d77cb59f Migrate to ade v0.1.2c (#19714) 2023-09-11 07:15:59 +04:00
Roman Kazantsev
37f61551a3 [TF FE][TF Hub] Use ConcreteFunc input and output signatures (#19690)
* [TF Hub][TF FE] Preserve outputs of ConcreteFunction from signature and their names

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix naming and complete TODO

* Apply code-review: extra assert to check input_signature

* Fix inputs for fw

* Fix input data preparation and import convert_model

* Correct variable detection among all inputs

* Handle special input and output signature

* Fix adjust_saved_model_names

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-10 03:46:01 +00:00
Karol Blaszczak
932ba63744 [DOCS] feature transition section (#19506)
* [DOCS] legacy features section

* pass 2 of extensions

* Apply suggestions from code review

---------

Co-authored-by: Tatiana Savina <tatiana.savina@intel.com>
2023-09-09 20:30:51 +02:00
Ilya Lavrenov
8eb165021c Fixed compilation with C++17 on Windows (#19682) 2023-09-09 01:53:20 +04:00
Ilya Lavrenov
ed230cd879 Fixed build with oneDNN GPU in some Conan scenarios (#19711) 2023-09-09 01:52:58 +04:00
Ilya Lavrenov
3823360238 Try to use conan.lock file (#19709) 2023-09-09 01:52:25 +04:00
Ilya Lavrenov
888d4e9633 Updated vcpkg.json config with newer TBB version with default dependencies (#19708) 2023-09-09 01:52:07 +04:00
Maxim Vafin
e83697ded4 [PT FE] Implement override_all_inputs, override_all_outputs (#19642)
* Implement override_all_inputs, override_all_outputs

* Apply suggestions from code review

Co-authored-by: Piotr Krzemiński <piotrkrzeminski1234@gmail.com>

* Update src/frontends/pytorch/src/input_model.cpp

* Update src/frontends/pytorch/src/input_model.cpp

* Resolve problem with self input

* Update place.cpp

* Update place.cpp

* Fix build

---------

Co-authored-by: Piotr Krzemiński <piotrkrzeminski1234@gmail.com>
2023-09-08 22:31:49 +02:00
Alexander Suvorov
815ce1c595 Add selector tool 2023.1 2023-09-08 21:22:16 +02:00
Nikolay Shchegolev
f0421d94a6 [CPU] Scalar is passed as a tensor with shape [1] in custom op evaluate. (#19521)
[CPU] Scalar is passed as a tensor with shape [1] in custom op evaluate.
2023-09-08 19:40:44 +04:00
Andrey Kashchikhin
77d11f7dc8 [CI] [GHA] Introduce GHA Linux ARM64 Pipeline (#19230)
* add linux arm 64

* add comment

* prevent from scheduling on forks

* remove triggers

* rm unused exits

---------

Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2023-09-08 15:30:54 +00:00
Andrey Kashchikhin
d5f684d934 [CI] [GHA] Introduce GHA Linux Android ARM64 Pipeline (#19246)
* add android arm64 pipeline

* rm triggers

* rm unnecessary exits
2023-09-08 15:26:52 +00:00
Ilya Lavrenov
1ef9cc70b5 Fixed compilation with gcc-13.2 (#19689) 2023-09-08 18:39:24 +04:00
bstankix
8f73cb19b1 [DOCS] Integrate coveo search engine (#19703) 2023-09-08 14:07:27 +00:00
Bartlomiej Bielawa
7b59190521 [DOCS] Move menu arrows to the left side (#19677) 2023-09-08 14:01:10 +00:00
Maciej Smyk
79ac4a5763 img-fix (#19699) 2023-09-08 15:16:03 +02:00
Maciej Smyk
d9805a8871 [DOCS] ShapeOf-3 & Supported Model Formats fix for master (#19694)
* fix

* Update supported_model_formats.md
2023-09-08 13:33:38 +02:00
Ivan Novoselov
8124f5c435 Snippets shape inference infrastructure (#18887) 2023-09-08 12:58:21 +04:00
Przemyslaw Wysocki
25b1b4e26c Add upper bound for setuptools (#19672) 2023-09-08 11:56:39 +04:00
dependabot[bot]
dd05c42951 Bump SimenB/github-actions-cpu-cores from 1 to 2 (#19685)
Bumps [SimenB/github-actions-cpu-cores](https://github.com/simenb/github-actions-cpu-cores) from 1 to 2.
- [Release notes](https://github.com/simenb/github-actions-cpu-cores/releases)
- [Commits](https://github.com/simenb/github-actions-cpu-cores/compare/v1...v2)

---
updated-dependencies:
- dependency-name: SimenB/github-actions-cpu-cores
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-08 11:56:18 +04:00
Mateusz Tabaka
a55b5381d3 Move BroadcastTransition to MOC (#19543)
* Move BroadcastTransition to MOC

Broadcasts that could be eliminated by BroadcastElementwiseFusion are moved down the graph
(by BroadcastTransition registered in the plugins), which prevents BroadcastElementwiseFusion
from eliminating them.

Ticket: CVS-118899

* don't count const layers

* remove virtual inheritance
2023-09-08 11:05:54 +04:00
Jade Cho
e2b553302b [GPU] Use preferred output format if node impl type is onednn (#19601)
+ Changes to use preferred format in both cldnn and onednn for gemm and
FC when shape inferencing.
2023-09-08 15:56:40 +09:00
Artyom Anokhov
f9560518e3 [DOC] Add hints for debug build (#19675)
* Added HINTS for generating PDB files and debugging

* Fixed upper case

* Keep hint only for Windows OS

* build_raspbian.md: Added empty line
2023-09-07 16:39:02 +02:00
Sergey Shlyapnikov
4eb9c57424 [GPU] Add input feature leftovers processing for fully_connected_gpu_bs_f_bsv16_af8_vload kernel (#19650) 2023-09-07 13:20:11 +04:00
Taylor Yeonbok Lee
4124851d2b Revert "[GPU] Fixed reordered memory cache not to contain original weight memory (#19465)" (#19659)
This reverts commit e8f1df495c.
2023-09-06 22:35:00 -07:00
River Li
252afa3b6c [CPU] Fix incorrect output for float to bf16 in avx2 isa (#19358) 2023-09-07 09:09:16 +04:00
yanlan song
14e0b1fd2c Do not clean batch setting if proxy plugin (#19508)
* do not clean batch setting if proxy plugin

Signed-off-by: fishbell <bell.song@intel.com>

* add tests

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
2023-09-07 07:47:45 +04:00
Roman Kazantsev
63a6d4c41e [TF Hub][GA] Set correct scheduler and model scope for regular validation (#19658)
* [TF Hub][GA] Set correct scheduler and model scope for regular validation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Remove empty line

* Update .github/workflows/linux.yml

Co-authored-by: Andrey Kashchikhin <andrey.kashchikhin@intel.com>

* Correct a path to output html report

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
Co-authored-by: Andrey Kashchikhin <andrey.kashchikhin@intel.com>
2023-09-06 22:51:31 +02:00
David Nam
cb479f4a5d [GPU] No need to add reorder after strided_slice (#19411) 2023-09-06 11:43:51 -07:00
Maxim Vafin
bacb83f8a2 Support aten::tile op (#19645) 2023-09-06 17:28:22 +00:00
Karol Blaszczak
c123843d0d [DOCS] installation guide restructuring 23.1 master (#19241) 2023-09-06 17:46:55 +02:00
Andrey Kashchikhin
5644ca40f6 [GA] ccache: specify key (#19519)
Co-authored-by: Mikhail Ryzhov <mikhail.ryzhov@intel.com>
2023-09-06 15:45:00 +00:00
Roman Kazantsev
4f7ac430fc [TF Hub][TF FE][GA] Establish regular validation for all TF Hub models (#19649)
* [TF Hub][TF FE] Establish regular validation for all TF Hub models

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Correct names of reports

* Simplify configuration

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-06 17:02:59 +02:00
Roman Kazantsev
023a2f462a [TF Hub][Notebook] Secure future notebook models in the precommit (#19652)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-06 18:43:30 +04:00
Ilya Churaev
d9952b2455 Enabled clang format for unit test utils (#19653) 2023-09-06 14:37:12 +00:00
Vladislav Golubev
94bdaea965 [Snippets] ov::Node replaced with lowered::Expression in emitter constructors (#19481) 2023-09-06 16:45:50 +04:00
Maksim Kutakov
45cc4fdb33 [CPU] Fix input output tensor binding (#19589)
* Fix input output tensor binding plus test

* Clean up code
2023-09-06 17:04:38 +08:00
Ilya Churaev
7a4e765600 Removed legacy API from common frontend and shared tests (#19583)
* Removed legacy API from common frontend and shared tests

* Fixed build
2023-09-06 12:24:18 +04:00
Katarzyna Mitrus
bd66971dd6 [ShapeInfer] FFTBase shape infer improvement - preserve input sizes bounds and labels (#19463)
* Reduce number of rank checks

* Preserve data shape if signal_size input is not provided

* Add bounds propagation on fft input

* Improved preserving bounds on fft input

* Remove size_t rank cast and have_axes variable

* Check refactor

* Use ge helper for rank comparison

* Make bounds constexpr

* Pass raw pointer instead of unique_ptr ref

* Use normalize_axes helper

* Ensure to call set label if it's not zero
2023-09-06 12:20:58 +04:00
Anastasia Kuporosova
8723b5dd6d resolve gil (#19631) 2023-09-06 10:04:52 +02:00
Ilya Lavrenov
46d05cc820 Fixed CPU plugin compilation (#19629) 2023-09-06 10:48:26 +04:00
Ilya Lavrenov
a322b8256d Unlock custom creation of PLATFORM_TAG (#19609) 2023-09-06 10:46:37 +04:00
Maciej Smyk
4598da7a55 [DOCS][PT FE] Update pytorch conversion docs for master 2023-09-06 08:15:35 +02:00
Yury Gaydaychuk
8d6083e08e [Commit slider] Skipping for commits (#18966) 2023-09-06 09:12:17 +04:00
Sungeun Kim
b7758b0504 [GPU] update the data-type of primitive from ops (#19302)
* set the data-type of transpose by dt of ops.
* set output_data in calc_output_layouts
2023-09-06 13:30:53 +09:00
dependabot[bot]
be23ac7c26 Bump actions/checkout from 3 to 4 (#19602)
Bumps [actions/checkout](https://github.com/actions/checkout) from 3 to 4.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v3...v4)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-06 07:29:13 +04:00
HARI CHAND BALASUBRAMANIAM
954536d2d6 Create documentation.yml (#18926)
This is for customers to report any issues regarding documentation.

Co-authored-by: Ilya Churaev <ilya.churaev@intel.com>
2023-09-06 05:27:12 +02:00
Ilya Churaev
dc1339d8e3 Remove legacy API from samples (#19608) 2023-09-05 22:16:18 +04:00
Ilya Churaev
1d62f0141d Rename cmake ie_ macros and ie_parallel script (#19606)
* Rename cmake ie_ macros and ie_parallel script

* Add warning messages
2023-09-05 19:31:52 +02:00
Maciej Smyk
8f6b30a8f9 link fix (#19624) 2023-09-05 14:59:06 +00:00
Mikhail Ryzhov
bf5690fa7d [GA] Parallel tests (#18773)
* test job

* added script to the test package

* test call fix

* switched test to large runner

* Added option to split tests by suites

* extended logs

* enabled test cache

* fixed workload

* optimized splitting mode

* excluded disabled suites

* temporarily removed parallel logs

* added failed logs

* fixed empty name in suites

* test on 4 cores

* make step optional

* fixed param

* test

* grouping suites

* set suite arg

* increase test timeout

* test commit

* test pip deps

* include requirements.txt to the test package

* fixed deps step order

* fixed test counter

* fixed smart filter for suites

* clean up

* disabled repeat failed tests

* review comments

* use runtime execution time for skipped tests

* removed disabled suites

* reduced command lines

* enabled tests results

* fixed typo

* removed unused argument pp

* Log improvements

* merge cached and runtime filters

* fixed order

* fixed init list error

* fixed cache writing

* enable windows pipeline

* changed runner for windows

* optimized balancing using heap

* Fixed test counter

* fixed windows pipeline

* extended logging

* changed pipelines

* added logs on Windows

* fixed pipelines

* debug

* removed os specific code

* fixed "#"

* fixed test results

* fixed win pipeline

* cleanup debug

* rebase fixes

* windows pip requirements

* aligned run_conformance.py

* Apply suggestions from code review

Co-authored-by: Andrey Kashchikhin <andrey.kashchikhin@intel.com>

* reverted windows changes

* reverted build runner

* fixed review comments

* minor review fixes

* make help func static

* renamed test runner

* fixed merge issue

* removed unused log

* reduced command line

* fixed issue with conformance run

* fixed typo

* set tests as default split unit

* fixed tasks queue  with time -1

* fixed test result calculation

* reverted wrong fix

* reverted changes

* set time limitation

* reverted unused change

* fix win command lines

* reuse env variables in pipeline

* fixed install files permissions

* fixed pipeline syntax

* reset validation schema

* fixed env names

* reverted initial setting of env

* increased test runner

* fixed paths

* reuse env path

* reset validation schema

* Revert "reuse env path"

This reverts commit 97422ac595.

* Revert "increased test runner"

This reverts commit 010aa31641.

* revert command line reduction

* made if condition clearer

---------

Co-authored-by: Andrey Kashchikhin <andrey.kashchikhin@intel.com>
2023-09-05 14:23:17 +00:00
Roman Kazantsev
188d53d813 [TF Hub][TF FE] Clean-up all downloaded files for TF Hub models validation (#19612)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-05 13:07:13 +00:00
Vladimir Paramuzov
77600c7701 [GPU] Add FullyConnected custom op (#19539) 2023-09-05 16:47:05 +04:00
Mingyu Kim
3d679edf18 [GPU] Remove propagate_constants pass at pre_optimize_graph stage (#19323)
Co-authored-by: Lyamin-Roman <roman.lyamin@intel.com>
2023-09-05 16:37:31 +04:00
Ilya Churaev
5fd327ae30 Remove legacy API from IR frontend (#19582)
* Remove legacy API from IR frontend

* Remove dependency from inference dev API
2023-09-05 15:41:57 +04:00
Sofya Balandina
198da893d4 [analyzeConfFails] Fix arm device and exclude analyzing some files (#19591) 2023-09-05 12:28:45 +04:00
Alexander Suvorov
53414832eb add 2023.0.2 selector tool (#19598) 2023-09-05 09:51:04 +02:00
Xiuchuan Zhai
1b5f428752 eliminate broadcast node in masked_fill (#19595) 2023-09-05 10:36:30 +04:00
Roman Kazantsev
4eadef9e61 [TF Hub][TF FE] Make TF Hub validation more robust and add convenient xfail test marking (#19596)
* [TF Hub][TF FE] Use multiprocessing based tests for TF Hub validation

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix import and initialization

* [TF Hub][TF FE] Make TF Hub validation more robust and add convenient marking for failing cases

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-05 09:41:01 +04:00
Ilya Lavrenov
c3b0c531f7 Added tflite to 'predefined_frontends' list (#19599) 2023-09-05 09:33:35 +04:00
Ilya Churaev
75b6a24787 Remove ICore legacy mock object (#19573) 2023-09-05 00:42:42 +04:00
Ilya Churaev
509e00da60 Move Inference Functional Caching Tests to new API (#19513)
* Moved inference unit tests to new API

* Added infer request and variable state

* Try to fix LTO

* Try to avoid warning from gmock

* Try to fix azure build

* Try to fix Windows build

* Comment all variable_state_test file for future investigation

* Start migration of caching tests to new API

* Removed legacy API from mock_plugin

* Fixed more tests

* Remove redundant code

* Enable more tests

* Move all tests, need to reenable tests

* Revert incorrect change

* Cosmetic changes

* Fixed AUTO BATCH tests and disabled hetero tests

* Fixed crash in HETERO tests
2023-09-05 00:42:11 +04:00
Evgenya Stepyreva
fef4d4d641 Auto batch lost label fix (#19535)
* Restored opset1::Reshape label propagation for -1 special value

* Lets opset1::Reshape keep same shape infer. Makes FindBatch transformation keep labels in output shapes of Result node

* uses Parameter from correct namespace
2023-09-04 17:11:09 +02:00
Ilya Churaev
15685e0141 Remove legacy API from snippets tests (#19577)
* Remove legacy API from snippets tests

* Fixed comment
2023-09-04 14:09:26 +00:00
Maciej Smyk
3677dda457 [DOCS] 23.0 to 23.1 link update for master (#19584)
* 2023.1 link fix

* 2023.1 link fix

* 2023.1 link fix

* 2023.1 link fix

* 2023.1 link fix
2023-09-04 15:40:02 +02:00
Vladimir Paramuzov
2f782b2131 [GPU] Add permute primitive instead of manual copy for deconv weights (#19516) 2023-09-04 17:03:49 +04:00
Sebastian Golebiewski
2d760ba1bf Adding Quantizing with Accuracy Control using NNCF notebook (#19585) 2023-09-04 14:57:35 +02:00
Sofya Balandina
8f4d72826a [apiConformance] Fix double numbers in results after merge xml (#19564) 2023-09-04 16:52:23 +04:00
Mateusz Tabaka
bd0c156a70 PullReshapeThroughReduce - skip transformation if Reshape doesn't unsqueeze input (#19477)
Ticket: CVS-118905
2023-09-04 13:58:53 +02:00
Mateusz Mikolajczyk
c46f6bf115 [PT FE] Add aten::swapaxes (#19483)
* Add aten::swapaxes

* Add comment

* Improve swapaxes tests
2023-09-04 13:04:28 +02:00
Maciej Smyk
511f06f9ba [DOCS] Fix for Install from Docker Image for master (#19505)
* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-09-04 11:35:07 +02:00
bstankix
9297bc5128 [DOCS] Extend sphinx_sitemap to add custom metadata (#19579) 2023-09-04 08:37:30 +00:00
Ekaterina Aidova
90ef7096b9 [PT FE]: support PReLU (#19515)
* [PT FE]: support PReLU

* Update tests/layer_tests/pytorch_tests/test_prelu.py

* Apply suggestions from code review

Co-authored-by: Piotr Krzemiński <piotrkrzeminski1234@gmail.com>

---------

Co-authored-by: Piotr Krzemiński <piotrkrzeminski1234@gmail.com>
2023-09-04 07:36:42 +00:00
Ilya Lavrenov
3bc38695c5 A set of fixes for Conan C++ package manager (#19552) 2023-09-04 11:32:39 +04:00
Sofya Balandina
f42a036157 [apiConformance] Fix some test results are not in report (#19522) 2023-09-04 10:52:31 +04:00
Karol Blaszczak
a84eee9127 [DOCS] pytorch usage adjustment - model formats master (#19558) 2023-09-04 08:41:32 +02:00
Maciej Smyk
3daf4cb3b5 [DOCS] Torch.compile() documentation for master (#19447)
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
2023-09-04 08:39:50 +02:00
Nesterov Alexander
1b9b9bdc1f [CPU ARM] Fix subgraph test instances (#19524) 2023-09-04 10:39:20 +04:00
Roman Kazantsev
44657183f3 [TF Hub][TF FE] Fix TF Hub models validation and extend precommit (#19567)
* Add all scope of models for testing

* Fix signature in test

* Fix workaround with output tensor names

* Fix unknown rank input resource case

* [TF Hub][TF FE] Fix TF Hub model validation and extend precommit

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Remove unneeded comments

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-04 07:41:42 +02:00
Ilya Lavrenov
00b06adee4 Fixed order of dependent cmake options (#19551)
* Fixed order of dependent cmake options

* Update .ci/azure/linux_cuda.yml

fixed typo in option name
2023-09-04 08:32:01 +04:00
Ilya Lavrenov
7173f530cc Fixed static build from build tree (#19556) 2023-09-04 08:08:50 +04:00
Tomasz Jankowski
51df17912b [Ref_Impl] Change namespace from nG to OV (#19363)
* Use quotes for openvino includes

* Drop runtime from openvino::reference

* Drop runtime::reference

* Replace ngraph::reference with ov::reference - defs

* Replace ngraph::reference with ov::reference - uses

* Drop redundant nesting

* Fix non arch64 builds

* Move coordinate*pp files under openvino

* Move Coordinate... helpers under ov:: namespace

* Revert not needed changes

* Fix missing namespace scope

* Fix compilation

* Fix code style

* Use ov suppress deprecated macro instead of ngraph

---------

Co-authored-by: Raasz, Pawel <pawel.raasz@intel.com>
2023-09-04 08:07:06 +04:00
Andrew Kwangwoong Park
92c6316e8e [GPU] Fix input feature map indexing with pad and batch indices for ROIAlign (#19511)
* [GPU] Fix input feature map indexing with pad and batch indices for ROIAlign

* Fix failed TCs for ov_gpu_func_tests

Signed-off-by: Andrew Park <andrew.park@intel.com>

* Fix to do batch interpretation for inconsistency between ROIAlign input and const 1D tensor

Signed-off-by: Andrew Park <andrew.park@intel.com>

---------

Signed-off-by: Andrew Park <andrew.park@intel.com>
2023-09-01 12:43:23 -07:00
Mateusz Tabaka
441adcc122 SplitSqueezeConcatFusion - handle Squeeze nodes without second input (#19512)
Ticket: CVS-119330
2023-09-01 23:35:17 +04:00
Nikita Malinin
a4b0fe51af [POT] Update for CI (#19067)
* Update references
2023-09-01 18:12:24 +02:00
Ilya Churaev
3c7ea04c69 Enable variable state unit tests (#19529)
* Enable compilation variable state unit tests

* Enable all tests
2023-09-01 16:33:31 +04:00
Nadezhda Ageeva
b81cad6ae5 Update ConvertReduceToReshape transformation to support ReduceProd (#19532) 2023-09-01 10:52:05 +00:00
Ilya Lavrenov
7d718fbff2 Robust detection of Cython version (#19537) 2023-09-01 14:45:51 +04:00
Ilya Lavrenov
936dc051ff Aligned protobuf version in conanfile.txt with onnx recipe (#19526) (#19533) 2023-09-01 13:37:37 +04:00
Pavel Esir
b8226db465 [OVC] Fix output parsing (#19425)
* fix parsing output

* use always coma backspace to separate outputs ', '

* update docstring

* call parser only for ovc cli tool

* docstring correction

* separate docs for cli tool and convert_model; other minor changes

* drop redundant arg from cli_parser_test.py

* more solid test cases added

* remove redundant argv.framework from cli_parser_test.py

* shape correction in concat

* Apply suggestions from code review

fix: coma -> comma

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* apply review suggestions

* remove extra ')'

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-09-01 11:20:22 +02:00
Roman Kazantsev
2cf8f2bc1f [TF FE][GitHub issue] Support Selu operation and add test (#19528)
* [TF FE] Support Selu operation and add test

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix layer test

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-09-01 09:09:58 +00:00
Karol Blaszczak
9715ccd992 [DOCS] adjustment to supported devices
adjustments will continue in following PRs
2023-09-01 10:46:04 +02:00
Ivan Novoselov
b0d917f0cb Snippets pass manager (#18846) 2023-09-01 12:31:42 +04:00
Vladimir Paramuzov
38cad619af [GPU] Allow simple attached mem as input memory for network (#19419) 2023-09-01 09:54:30 +04:00
Pawel Raasz
dd258f9607 Improve acos ref accuracy for integral types (#19470) 2023-09-01 07:34:43 +02:00
Ilya Churaev
7c751883fc Moved functional tests to new API (#19488)
* Moved functional tests to new API

* Renamed legacy tests

* Fixed test

* Fixed test
2023-09-01 06:58:03 +04:00
Steve Yoo
05a24b1776 [GPU] Try to use softmax_ref when types are mismatched (#19209)
* Remove support key for UINT8 and INT8
2023-08-31 16:39:36 -07:00
Ilya Lavrenov
f617cc338e Aligned protobuf version in conanfile.txt with onnx recipe (#19525) 2023-08-31 23:48:32 +04:00
Wanglei Shen
0651d57c33 Add smoke test for CPU map on MacOS (#18753) 2023-08-31 19:31:21 +04:00
Przemyslaw Wysocki
d3ac00c6d2 Amend cython changes (#19514) 2023-08-31 14:43:28 +00:00
Ilya Lavrenov
ebe1d52e31 Update PePy OpenVINO badge (#19520) 2023-08-31 16:31:53 +02:00
Yury Gaydaychuk
2d3aab33b0 [CPU] Deformable Convolution minor fixes (#19415) 2023-08-31 16:13:39 +02:00
Ilya Churaev
2d2977ff4a Moved inference unit tests to new API (#19452)
* Moved inference unit tests to new API

* Added infer request and variable state

* Try to fix LTO

* Try to avoid warning from gmock

* Try to fix azure build

* Try to fix Windows build

* Comment all variable_state_test file for future investigation
2023-08-31 16:39:46 +04:00
Sebastian Golebiewski
1cf3fe96af [DOCS] Improve NNCF workflow images (#19040)
* Update DEVELOPMENT_FLOW_V3_crunch.svg

* Update DEVELOPMENT_FLOW_V3_crunch.svg

* update

* Update DEVELOPMENT_FLOW_V3_crunch.svg

* Update DEVELOPMENT_FLOW_V3_crunch.svg

* Update DEVELOPMENT_FLOW_V3_crunch.svg

* Update docs/optimization_guide/model_optimization_guide.md
2023-08-31 14:00:31 +02:00
Mateusz Tabaka
120a81ff5e Disallow LeakyReluFusion when alpha is greater than one (#19446)
Tickets: CVS-118898, CVS-82454
2023-08-31 14:41:46 +04:00
Pawel Raasz
463ae19207 Fix padding calculation if interval value is inf (#19383) 2023-08-31 11:57:09 +04:00
Vitaliy Urusovskij
81a02c5586 Fix stoi out_of_range issue (#19455)
* Fix `stoi` out_of_range issue

* Handle incorrect behavior and throw exception

* Remove use of `getVmSizeInKB()` in `TestsCommon()`
2023-08-31 04:19:13 +04:00
Mateusz Tabaka
6ad11108b5 Fix include path in TS ShapeOf tests (#19507) 2023-08-30 17:00:03 +02:00
Pavel Esir
9a1726a419 [ovc] check if input is correct in split_inputs (#19350) 2023-08-30 17:13:43 +04:00
Mateusz Tabaka
02d6c1cb5d TransposeSinking - add support for ShapeOf (#19471)
* TransposeSinking - add support for ShapeOf

Models often contain a transpose with two consumers: a ShapeOf and
another transpose; were the ShapeOf absent, the transpose pair could be eliminated.
This patch's approach is to propagate a transpose through ShapeOf by creating
a "ShapeOf->Gather" subgraph and replacing ShapeOf's input with the transpose's input.

Ticket: CVS-118896

* enhance docs
2023-08-30 12:45:48 +00:00
Ivan Tikhonov
38cf4764cb Transformations: API 2.0 transition part 2 (#19475)
* Transformation component API 2.0:part 2

* Refactoring

* fix build
2023-08-30 14:28:20 +02:00
Katarzyna Mitrus
9c61e0c4dd [Docs] Remove IR version and nGraph name from the Opset doc (#19503)
* Remove IR version and nGraph name from the Opset12 doc

* Propagate the changes to each opsetX.md file
2023-08-30 11:51:01 +00:00
Sebastian Golebiewski
8aec490128 add-253 (#19500) 2023-08-30 13:46:27 +02:00
Anastasia Kuporosova
23cad1770e [PyOV] clean up in tests (#19091)
* [PyOV] clean up in tests

* use generated model in tests

* fix ci

* return back fild

* fix ci 1

* fix ci2

* update

* move models

* return back deleted test

* move model creation from conftest

* fix ci

* fix ci
2023-08-30 13:40:14 +02:00
Liubov Talamanova
b790458da6 [POT] Fix bug in classification sample (#19490)
* Fix bug in classification sample

* fix readme
2023-08-30 13:18:44 +02:00
Nadezhda Ageeva
d84fa07841 [HETERO] Save plugin so in tensor (#19489) 2023-08-30 10:23:04 +00:00
Przemyslaw Wysocki
9af0f1eaae [PyOV] Remove constraints link from torchvision preprocessor converter requirements (#19459) 2023-08-30 11:57:26 +02:00
Sebastian Golebiewski
87f6e34a56 [DOCS] Improving code snippets for quantization (#19479)
* improve-snippets

* Apply suggestions from code review

Co-authored-by: Alexander Suslov <alexander.suslov@intel.com>

* Update docs/optimization_guide/nncf/ptq/code/ptq_tensorflow.py

Co-authored-by: Alexander Suslov <alexander.suslov@intel.com>

* update-path

* Update docs/optimization_guide/nncf/ptq/code/ptq_torch.py

---------

Co-authored-by: Alexander Suslov <alexander.suslov@intel.com>
2023-08-30 11:52:36 +02:00
Mateusz Tabaka
3e8c0fac1b Remove useless Slices (#19451)
Adjust UselessStridedSliceEraser to work with Slice nodes.

Ticket: CVS-118895
2023-08-30 11:28:33 +02:00
Nesterov Alexander
f2167a9545 [ARM CPU] Remove configure from exec func in eltwise, reduce and pooling (#19071) 2023-08-30 13:12:25 +04:00
Bo Liu
6b57360c55 Fix warnings for Paddle Frontend (#19476)
* Fixed warnings for Paddle Frontend

* fix CI linux-gnu-9 build fail issue
2023-08-30 12:08:08 +04:00
Aleksandr Voron
9b10ef6f6f [CPU][ARM] Fix inference precision for behaviour tests (#19485) 2023-08-30 09:38:30 +04:00
Taylor Yeonbok Lee
e8f1df495c [GPU] Fixed reordered memory cache not to contain original weight memory (#19465)
* Fixed reordered memory cache not to contain original weight memory

* Applied review comment

* Applied review comment
2023-08-29 21:54:32 -07:00
Xiuchuan Zhai
36b9de1f25 enable sin/cos && fix top_k_v2 (#17525) 2023-08-30 08:55:57 +08:00
Anton Voronov
49c4c922ff [CPU][OneDNN] Fix zero pad perf issues (#19417) 2023-08-29 16:55:45 +00:00
Oleg Pipikin
e3e1e8c811 Template plugin tests refactoring (#19397)
* Template plugin tests refactoring

* Apply comments
2023-08-29 13:48:53 +00:00
Aleksandr Voron
84fc6fb626 [CPU][ARM] Disable default fp16 inference precision (#19445) 2023-08-29 17:08:46 +04:00
Andrey Kashchikhin
f6ab1e4833 [CI] [GHA] Introduce GHA Linux Debian Pipeline (#19225)
* add linux debian pipeline

* remove push and pr triggers; prevent the workflow from scheduling of forks

* enclose cron

* rm unnecessary
2023-08-29 12:13:13 +01:00
Roman Kazantsev
928c75623b [JAX][TF Hub][TF FE] Support XlaConvV2 operation and add JAX test (#19466)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-08-29 12:28:12 +04:00
Ilya Churaev
f30afa9ad3 Finalization of migration core unit tests (#19468) 2023-08-29 11:58:54 +04:00
Maciej Smyk
e6f09ac197 [DOCS] Docker Guide Update for master (#19410)
* docker-update

* id fix

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md

* Update installing-openvino-docker-linux.md
2023-08-29 08:45:14 +02:00
Pawel Raasz
f9aa624099 [Core] Use API 2.0 in evaluate for trigonometric operators (#19414)
* Use API 2.0 in operators evaluate
- Drop ngraph namespace in ops
- Refactor reference implementation for modified ops

* Apply code style

* Fix build issue in reference impl

* Fix code style

* Fix compile warnings

* Add inputs check and set output shape in evaluates
2023-08-29 10:16:07 +04:00
Yuan Hu
915de21626 [CPU] Adopt the static shape inference interface developed by the ngraph (#17719) 2023-08-29 10:15:18 +04:00
Gorokhov Dmitriy
d32b6904bd [CPU] Fixed has_subnormals behavior for negative zero values (#19360) 2023-08-29 13:53:05 +08:00
Pratham Ingawale
82afb47e36 generator functionality to pytest (#19402)
* trying with pytest

* update as per suggested

* pytest testing on compress_quantized_weights_test

* resolved warning of assert on tuple

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-08-29 03:01:49 +04:00
Pawel Raasz
c0fb4fabce [ShapeInfer] Remove old API from shape inference sources (#19435)
* Remove old API from shape inference sources

* Fix build issues
2023-08-28 22:03:35 +02:00
Przemyslaw Wysocki
b7d73cbbe3 [PyOV] Limit cython version (#19443) 2023-08-28 17:36:51 +02:00
Sebastian Golebiewski
f306007e59 update-notebooks (#19450)
Add notebook 252-fastcomposer-image-generation. Fix indentation, admonitions, broken links and images.
2023-08-28 14:23:23 +02:00
Wilson Seok
94c21b53b3 fix build error by removing makeDynamicParam (#19431) 2023-08-28 12:10:05 +02:00
Pawel Raasz
2e78eec502 Fix boxes dim calculation when scores dynamic rank (#19097)
* Fix boxes dim calculation when scores dynamic rank

* NMS shape infer improve upper bound calculation

* Calculate boxes if required shapes has static rank

* Optimize shape_infer for NMS v4

* Reorder checks in nms v4 for selected boxes
2023-08-28 13:27:09 +04:00
Ilya Churaev
29ad3e8c92 Moved eval tests to new API (#19364)
* Moved eval tests to new API

* Fixed build

* Fixed eval tests
2023-08-28 10:22:14 +04:00
HARI CHAND BALASUBRAMANIAM
b87709a8a7 Create performance.yml (#18929)
* Create performance.yml

create a new performance issue template for customers to report issues related to performance

* Update performance.yml

Amend based on latest comment

* Update performance.yml

* Update performance.yml

amend based on suggestion
2023-08-28 07:29:04 +04:00
Ivan Tikhonov
0f9734aaa7 Transformation tests API 2.0: part 1 (#19366)
* API 2.0 replace ngraph headers and namespaces in CPU, LPT etc transformations tests

* revert the incorrect renaming

* function -> model in arm transformation tests
2023-08-27 18:52:03 +02:00
Wilson Seok
293c672064 add sqrt activation support in cpu_impl (#19421) 2023-08-25 12:10:01 -07:00
Wilson Seok
f6dca869b2 fix reduce preferred format selection and layout for partial shape (#19319) 2023-08-25 12:09:25 -07:00
Min, Byungil
bcedb0bb9b [GPU] Resolve accuracy issue from clamp fused prims (#19409)
+ Added condition when clamp activation is added to fused-ops for fp16 overflow
+ Added test-cases

Signed-off-by: Min, Byungil <byungil.min@intel.com>
2023-08-25 11:21:09 -07:00
Irina Efode
cddcec8ba8 [CONFORMANCE] Fix for Eye-9 op in the Opset Conformance report (#19404) 2023-08-25 17:22:18 +02:00
Wilson Seok
f962511a84 [GPU] add check condition of input dynamic shape in conv fusing (#19219) 2023-08-25 16:13:53 +04:00
Ilya Churaev
39b75fd213 Moved core tests from root folder to new API (#19381) 2023-08-25 15:23:41 +04:00
Vladimir Paramuzov
a45e5e03c5 [GPU] Added some formats for pvc (#19388) 2023-08-25 15:09:42 +04:00
Xiuchuan Zhai
1bdf4f0ab9 [CPU] Build fix: remove usage of makeDynamicParams from tests (#19412) 2023-08-25 10:29:51 +00:00
Katarzyna Mitrus
decc2e31f3 [ShapeInference] FFT based - revision and tests (part 1) (#19070)
* Reuse common shape validation for fft base

* Align helper names

* Common test class for fft ops

* Move all (I)DFT test cases to the common test class

* More test cases for param axes

* Init labels validation

* More label tests

* Labels validation for non const signal size

* Init label tests for IRDFT

* More label test for irdft

* Labels tests for RDFT

* Remove duplicated tests

* Rename common validation file

* Rename shape infer tests file

* Use node shape infer check

* Headers order alignment

* Add const to the test params vector

* Use this make_op

* Use OV_EXPECT_THROW in common fft tests

* Use OV_EXPECT_THROW iin rdft an irdft tests

* Pass input shapes and use SHAPE_INFER_CHECK

* Shorter error messages

* Update to use ov namespace in typeprop tests
2023-08-25 12:27:18 +02:00
Karol Blaszczak
06003f18d5 [DOCS] speech sample deprecation (#19228) 2023-08-25 12:26:44 +02:00
Ilya Churaev
679369c707 Move Visitor tests to new api (#19379)
* Moved visitor tests to new API

* Fixed build for Windows
2023-08-25 10:50:59 +04:00
Xiuchuan Zhai
350c4d2363 [CPU] Disable convert to BF16 in convert-range pattern (#18971) 2023-08-25 09:00:24 +04:00
Tomasz Jankowski
bcad953f5f [Ref_Impl] Rename file paths to openvino relative (#19284)
* Move files to new directories

* Use quotes for openvino includes

* Provide proxy calls for transition

of dependent components.

* Correct includes style

* Redo proxies

* Fix deprecated

* Move aliases to proxy files

* Apply code style
2023-08-25 06:43:06 +04:00
Sofya Balandina
20e4c629e9 [conformance] Add shape mode and graph conv logic to test name (#19403) 2023-08-25 01:03:13 +02:00
Maxim Vafin
c5b64e458b [PT FE] Align bool types and same bit int types (#19399)
* [PT FE] Align bool types and same bit int types

* Fix max value
2023-08-24 22:30:59 +02:00
Artyom Anokhov
eef6b35bef [packaging] APT/YUM: Added conflicts for 2023.0.2 (#19398) 2023-08-24 19:42:15 +02:00
Chenhu Wang
28a5bf7b04 [CPU][Snippets] Dynamism via recompilation and cache (#15430) 2023-08-24 21:31:42 +04:00
Roman Kazantsev
8df85badf8 [TF Hub][TF FE] Support TensorListLength and TensorListResize operations (#19390)
* [TF Hub][TF FE] Support TensorListLength and TensorListResize operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Add test with empty tensor list

* remove assert

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-08-24 17:54:13 +02:00
Irina Efode
ba6cca8740 [CONFORMANCE] Add comparator accuracy vs conformance (#19374) 2023-08-24 16:22:13 +01:00
Sofya Balandina
b607c00c95 [conformance] Enable local cache using in SubgraphsDumper tool (#18850)
* [conformance] Fix same name for models

* add model name

* Update hash with input_info and graph_priority

* Fix double cache

* Read meta from file

* temp to check

* Move loop with caches and add attr to rename

* [conformance] Enable local cache using in SubgraphsDumper tool
2023-08-24 17:05:37 +02:00
Irina Efode
e0a75b78d1 [CONFORMANCE] Update runner readme (#19393) 2023-08-24 18:41:42 +04:00
Irina Efode
07ca0cd426 [CONFORMANCE] Remove debug code from Conformance runner (#19389) 2023-08-24 16:13:04 +02:00
Pavel Esir
eaeeea9c54 [tests] save into different file in compression_test.py (#19356)
* save into different file in compression_test.py

* reuse existing mechanism for tmp_file
2023-08-24 13:58:14 +02:00
yanlan song
498731f8fd [AUTO] Fix static code scan issue (#19295)
* fix scan issue

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

* clang

Signed-off-by: fishbell <bell.song@intel.com>

---------

Signed-off-by: fishbell <bell.song@intel.com>
Co-authored-by: Chen Peter <peter.chen@intel.com>
2023-08-24 10:01:11 +00:00
Ilya Churaev
6deca48413 Moved type prop tests to new API from g to z (#19353)
* Moved type prop tests to new API from g to z

* Fixed build
2023-08-24 12:59:49 +04:00
Kelvin Choi
ce47522165 [GPU] Memory reuse false for dynamic and null impl case (#19354) 2023-08-24 17:32:07 +09:00
Kelvin Choi
c89b9edfe7 [GPU] 7-dimension only supports plain format (#19039)
* Skip concat_input_order opt in case dependency is dynamic

* Add plain 7d 8d case for jitter pitch size
2023-08-23 20:00:02 -07:00
Ilya Churaev
475ce744af Remove legacy headers and namespaces from C-F type prop tests (#19332) 2023-08-24 06:56:53 +04:00
Ilya Churaev
b77e47970d Removed legacy headers from some core tests (#19328)
* Removed legacy headers from some core tests

* Fixed build
2023-08-24 06:55:21 +04:00
Paul Youngsoo Ahn
99cc3624b7 [GPU] Fix accuracy issue (#19351)
- [scatter_update] Use input index for input buffer instead of output index
- [concat cpu impl] Sync input layout and mem_ptr when creating input host tensor
- Add unit tests for scatter_update and concat cpu impl
2023-08-23 17:57:18 -07:00
Maxim Vafin
e11e8ede1b [MO] Fix issue in nncf version verification (#19347)
* Return deleted nncf import

* Remove try-except, it hides exception

* Get version without importing nncf module
2023-08-23 21:16:26 +02:00
Roman Kazantsev
1d0d00bf22 [TF Hub][GitHub Actions][TF FE] Introduce TF Hub Models Validation in GitHub Actions (#19368) 2023-08-23 18:40:31 +00:00
Oleg Pipikin
ab900606cd Remove makeDynamicParams (#19226)
* Remove makeDynamicParams

* Apply comments

* Fix1

* Fix2

* Fix3
2023-08-23 18:57:29 +02:00
Sebastian Golebiewski
22fe12fe9b [DOCS] Updating MO documentation (#18757)
* restructure-mo-docs

* apply-commits-18214

Applying commits from:

https://github.com/openvinotoolkit/openvino/pull/18214

* update

* Apply suggestions from code review

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>

* Apply suggestions from code review

* Update model_introduction.md

* Update docs/resources/tensorflow_frontend.md

* Create MO_Python_API.md

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* revert

* Update Cutting_Model.md

* serialize

* serialize-in-image

* Update Deep_Learning_Model_Optimizer_DevGuide.md

* Apply suggestions from code review

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>

* Update model_conversion_diagram.svg

---------

Co-authored-by: Anastasiia Pnevskaia <anastasiia.pnevskaia@intel.com>
Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-08-23 18:53:27 +02:00
Sebastian Golebiewski
6d3726024d [DOCS] Updating Supported Model Formats article (#18495)
* supported_model_formats

* add-method

* apply-commits-18214

Applying commits from:
https://github.com/openvinotoolkit/openvino/pull/18214

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* Update docs/MO_DG/prepare_model/convert_model/supported_model_formats.md

* review-suggestions

* Update supported_model_formats.md
2023-08-23 18:26:07 +02:00
Maksim Kutakov
6a628f7056 [CPU] Fix deconvolution default primitive search algo (#19261)
* Fix deconvolution default primitive search

* Add dedicated test
2023-08-23 16:58:38 +02:00
Maksim Kutakov
c6a02b76be [CPU] Fix convolution plus sum layout alignment (#19279) 2023-08-23 16:29:26 +04:00
Vladislav Golubev
982d0f43c4 [CPU] Optimal number of streams calculation moved after LPT (#19313) 2023-08-23 16:28:42 +04:00
Marcin Kusmierski
25e89a754d [GNA] Fix memory leak in insert_copy_layer.cpp (#19266)
* Added cleanup transformation for insert copy layer transformations
2023-08-23 13:11:16 +01:00
Anton Voronov
59d58b2296 [CPU][ONEDNN] jit_uni_dw_conv_row_f32: fixed post ops start index (#19126) 2023-08-23 15:52:09 +04:00
Ilya Churaev
bc868a8873 Enable clang format for itt headers (#19326) 2023-08-23 15:13:59 +04:00
Ilya Churaev
dcfb6bb042 Remove some legacy headers from inference component (#19325)
* Remove some legacy headers from inference component

* Fixed code style
2023-08-23 15:13:30 +04:00
Mustafa Cavus
aa53394c07 TorchFX bugfix missing core object in get_device() (#19255) 2023-08-23 14:42:38 +04:00
Vladimir Paramuzov
3b2e263879 [GPU] Fix reshape optimization (#19270) 2023-08-23 10:25:04 +00:00
Ekaterina Aidova
80b8b6fff1 [PT FE]: allow example input list with single tensor (#19308) 2023-08-23 12:08:39 +04:00
Ivan Tikhonov
128ec5452e Transformations API 2.0: replace ngraph headers and namespaces with openvino (#19304)
* switch to OV headers and namespaces

* resolve review comments

* fix precomp header

* refactoring, add missing include
2023-08-23 11:45:34 +04:00
Oleg Pipikin
7aa51d6775 Remove makeParams (#19306) 2023-08-23 11:39:05 +04:00
Evgeny Kotov
f4cc3bf7d3 add callback check (#18397)
Co-authored-by: Ivan Tikhonov <ivan.tikhonov@intel.com>
2023-08-23 11:01:11 +04:00
Xuejun Zhai
50214511e7 Avoid creating new threads when converting legacy inference request to API 2.0 (#19342)
* Fix error in CVS-115961, caused by the wrapper converting a 1.0 request to a 2.0 request creating 2 more threads

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

* Enable the test of compareAutoBatchingToSingleBatch with batch size 4 & num req 64, after fixing issue 115961

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>

---------

Signed-off-by: Zhai, Xuejun <xuejun.zhai@intel.com>
2023-08-23 10:25:47 +04:00
Alexandra Sidorova
27bf0f9e1e [Snippets] Fixed memory leak in LinearIR (#19316) 2023-08-23 09:18:04 +04:00
Maxim Vafin
4882ccde03 [PT FE] Fix issue when FakeQuantize is not inserted after regular operations (#19314) 2023-08-22 17:18:23 +02:00
Karol Blaszczak
6eee51a6ef [DOCS] including NPU documents (#19340) 2023-08-22 17:17:37 +02:00
Pratham Ingawale
9a76daf94b generator to pytest (#19298)
* trying with pytest

* update as per suggested

---------

Co-authored-by: Roman Kazantsev <roman.kazantsev@intel.com>
2023-08-22 17:42:56 +04:00
Sebastian Golebiewski
6cd70af7c2 update-notebooks (#19337) 2023-08-22 15:37:37 +02:00
Oleg Pipikin
de65abc6b3 Remove WA for vpu repo with CommonTestUtils namespace (#19275) 2023-08-22 13:03:45 +00:00
Zhang Yi
7e724565b5 [CPU] Use parallel_nt_static for MLAS threading (#19297) 2023-08-22 10:38:07 +00:00
Sergey Shlyapnikov
7df8d1ca2d [GPU] Add per iteration performance profiling mode (#18637) 2023-08-22 12:37:36 +02:00
Surya Siddharth Pemmaraju
acbac2f560 Added openvino torch backend import statement in init files (#19208)
* Added openvino torch backend import statement in init files

* Added openvino/torch folder for simplyfing the import

* Removed imports from all the other init.py files

* Fixed flake8 issues

* Removed requirements.txt

* Removed redundant functions
2023-08-22 14:24:53 +04:00
Irina Efode
551cb7ab1a [CONFORMANCE] Extend conformance runner to use in GA with expected_failures filelist (#19285)
* [CONFORMANCE] Extend conformance runner to use in GA with expected_failures filelist

* fix

* exclude failed tests from run in case without update

* Small refactoring
2023-08-22 13:32:34 +04:00
Sofya Balandina
0cc3044764 Add attr info to rename_conformance_ir hash (#19277) 2023-08-22 13:30:24 +04:00
Sebastian Golebiewski
4703196f5c link-to-frontend (#19178) 2023-08-22 11:21:56 +02:00
Zhang Yi
53c47aaa91 [CPU]Fix mlas threadpool of MlasExecuteThreaded (#19292) 2023-08-22 12:49:38 +04:00
Sebastian Golebiewski
20bf7aec13 [DOCS] Update tutorials for master (#19307)
* update-160823

* fixes

* fix-toc-headings

* fix-headings

* fix

* fix-headings

* fix

* fix-headings

* fixes

* Update 220-cross-lingual-books-alignment-with-output.rst

* fixes

* fix

* fix-toc-headings

* fix-headings

* fix toc

* fix toc

* fix toc

* add-missing-301-nncf

* Update 301-tensorflow-training-openvino-nncf-with-output.rst

* fix toc

* fixes
2023-08-21 16:47:14 +02:00
Wanglei Shen
7c273dc2c5 fix SDL issue (CID 1518459) (#19287) 2023-08-21 20:24:59 +08:00
Pavel Esir
90f6500871 [tests] switch layer tests to FP16 on pre-commit (#19090)
* switch to FP16 on layer tests on precommit; add Pytorch layer tests for precision sensitive subgraph

* remove redundant changes

* moved precision sensitive tests into test_mo_convert_pytorch.py

* remove redundant dumping

* skip layer tests with chaotic output

* add copy() to avoid side effects
2023-08-21 16:03:47 +04:00
Roman Kazantsev
5539d052b0 [JAX][TF Hub][TF FE] Introduce JAX layer tests and support of XLA operations (#19269)
* [JAX][TF Hub][TF FE] Introduce JAX layer tests and support of XLA operations

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix JAX layer tests infra

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Extend run for JAX layer tests

* Use ovc convert_model

* Fix translator and extend layer test cases

* Exclude jax testing on Windows

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-08-21 16:01:48 +04:00
Ilya Churaev
cbe744b717 Introduce model reader which works only with new API (#19077)
* Introduce model reader which works only with new API

* Fix GNA compilation

* Removed old code

* Fixed Windows build

* Remove legacy headers from core_impl

* Fixed caching tests if plugin on legacy API call ReadNetwork
2023-08-21 15:42:36 +04:00
Roman Kazantsev
19ff7fba3d [TF FE] Fix support of CTCLoss and add tests to pre-commit (#19291)
Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-08-21 14:26:36 +04:00
Wanglei Shen
61fcf3855a fix SDL issue (CID 1518457) (#19289)
* fix SDL issue (CID 1518457)

* update for comments
2023-08-21 17:43:09 +08:00
Maxim Vafin
d55e45f677 [PT FE] Support non boolean inputs for __or__ and __and__ operations (#19268)
* [PT FE] Support non boolean inputs for __or__ and __and__ operations

* Add test for __or__
2023-08-21 10:55:30 +02:00
Zlobin Vladimir
3813b0bc55 classification_sample_async: state that the samples support NCHW model layout only (#19259)
Ticket 107409
2023-08-21 09:02:29 +04:00
Karol Blaszczak
601cfadabf Update prerelease_information.md (#19282) 2023-08-18 20:00:54 +02:00
Karol Blaszczak
3e6a3eee6d [DOCS] contributing guidelines (#19218)
changes to the contribution guide
2023-08-18 17:59:31 +02:00
Karol Blaszczak
7635f89141 [DOCS] pre-releasenotes 23.1 Aug port master (#19273) 2023-08-18 17:58:01 +02:00
Vladimir Paramuzov
526d76c81f [GPU] New headers and namespaces in some parts (#19229) 2023-08-18 15:57:15 +04:00
Anton Voronov
4f29e60742 Fixed is_on_constant_path() usage in all places (#19239)
* Fixed matmul weights check in snippets_mark_skipped

* fix

* ConvertMatMulToFC: is_on_constant_path fix

* [TESTS] added SplitMatMulConcat subgraph test

* MarkDequantizationSubgraph: is_on_constant_path fix
2023-08-18 14:01:07 +04:00
Roman Kazantsev
24ddf1b274 [TF FE] Use regular Convolution in case dynamic input channels (#19253)
* [TF FE] Use regular Convolution in case dynamic input channels

This solution is aligned with the legacy frontend, but it has a limitation.
This is a temporary solution until the core obtains a ShapeOf evaluator.

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Remove unused variable from the test

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>

* Fix unit-test

* Update mo unit-test

---------

Signed-off-by: Kazantsev, Roman <roman.kazantsev@intel.com>
2023-08-18 13:39:59 +04:00
Vitaliy Urusovskij
ef33c2b3fd Fix uninit members in default GroupNormalization() (#19244) 2023-08-18 12:16:55 +04:00
Irina Efode
e9fdf0cac4 [CONFORMANCE] Add progress bar to Subgraphs Dumper (#19235) 2023-08-18 11:59:22 +04:00
Ivan Tikhonov
dc6f04f475 Handle unspecified type in wrap_tensor function; unit tests (#19220) 2023-08-18 10:25:09 +04:00
Ivan Tikhonov
20bb450af2 Fix TransposeSinking when the data input is broadcasted (#19233) 2023-08-18 10:23:59 +04:00
Georgy Krivoruchko
e4bed7a31c [ONNX] Fixed issue with missing sort when wstring path (#19250)
* Fixed issue with missing sort when wstring path

* Fixed CI linux builds
2023-08-18 01:24:59 +04:00
Anastasia Kuporosova
f51f0c7a6a try to remove redundant functions (#18761)
* try to remove redundant functions

* remove redundant imports

---------

Co-authored-by: Michal Lukaszewski <michal.lukaszewski@intel.com>
2023-08-17 16:11:39 +00:00
Edward Shogulin
318009f8d5 [LPT] MoveFakeQuantize: Q/DQ pattern identification generalization (#18945)
* [LPT] MoveFakeQuantize Q/D pattern dequantization generalization

* [LPT] MoveFakeQuantize Q/D pattern dequantization generalization: quantize op convert
2023-08-17 17:07:51 +01:00
Vitaliy Urusovskij
dea2310153 Add recreate_and_infer_in_thread memleak test (#19078)
* Add `recreate_and_infer_in_thread` memleak test

* Add custom threshold for memleak test

* Update tests/stress_tests/common/ie_pipelines/pipelines.cpp
2023-08-17 17:13:07 +04:00
Alina Kladieva
a3393e535b Increment OV version to 2023.2.0 (#19248) 2023-08-17 15:03:14 +02:00
Ilya Lavrenov
e49b208393 Enabled debug build for Python wheels (#19197) 2023-08-17 16:45:18 +04:00
Ilya Lavrenov
75b48e9cdc Added OpenCV minimal versions (#19231) 2023-08-17 16:45:01 +04:00
5183 changed files with 179052 additions and 126075 deletions

View File

@@ -134,6 +134,8 @@ jobs:
python3 -m pip install -U pip cmake
# vcpkg's tool dependencies
sudo -E apt --assume-yes install curl zip unzip tar
# vcpkg 'python3' port dependencies
sudo -E apt --assume-yes install autoconf libtool autoconf-archive
# vcpkg tree of dependencies require extra packages
sudo -E apt --assume-yes install pkg-config linux-libc-dev
# Install Android SDK, NDK and Tools

View File

@@ -34,12 +34,6 @@ resources:
name: openvinotoolkit/openvino_contrib
ref: master
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
variables:
- group: github
@@ -83,7 +77,6 @@ jobs:
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
BUILD_SAMPLES_DIR: $(WORK_DIR)/build_samples
@@ -216,13 +209,6 @@ jobs:
echo SourceBranch: $(Build.SourceBranch)
displayName: 'System info'
# Should be after 'Install dependencies' because Git lfs is not installed
- checkout: testdata
clean: 'true'
lfs: 'true'
path: testdata
- task: CMake@1
inputs:
# CMake must get Python 3.x version by default
@@ -368,6 +354,9 @@ jobs:
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVProxyTests.xml
displayName: 'OV Proxy Plugin Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_hetero_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroUnitTests.xml
displayName: 'OV Hetero Unit Tests'
- script: $(RUN_PREFIX) $(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'
@@ -452,7 +441,7 @@ jobs:
--junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
--ignore=$(INSTALL_TEST_DIR)/pyopenvino/tests/test_utils/test_utils.py
displayName: 'Python API 2.0 Tests'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
- script: |
python3 -m pytest -sv $(REPO_DIR)/src/frontends/onnx/tests $(PYTHON_STATIC_ARGS) \
@@ -525,6 +514,15 @@ jobs:
TEST_DEVICE: CPU
displayName: 'TensorFlow 2 Layer Tests - TF FE'
- script: |
set -e
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt
$(RUN_PREFIX) python3 -m pytest $(LAYER_TESTS_DIR)/jax_tests/ -m precommit --junitxml=$(INSTALL_TEST_DIR)/TEST-jax.xml
env:
PYTHONPATH: $(LAYER_TESTS_DIR)
TEST_DEVICE: CPU
displayName: 'JAX Layer Tests - TF FE'
- script: |
set -e
python3 -m pip install -r $(LAYER_TESTS_DIR)/requirements.txt

View File

@@ -183,7 +183,6 @@ jobs:
- script: |
set -e
source $(BUILD_OPENVINO)/dependencies/conanbuild.sh
# TODO: return tests building once GPU plugin migrates to Plugin API 2.0
cmake \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DBUILD_SHARED_LIBS=OFF \
@@ -196,7 +195,7 @@ jobs:
-DPYTHON_MODULE_EXTENSION=$(aarch64-linux-gnu-python3-config --extension-suffix) \
-DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libc-2.31.so \
-DPYTHON_INCLUDE_DIR=$(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/include/python$(OV_PYTHON_VERSION_MAJOR_MINOR) \
-DENABLE_DATA=OFF \
-DENABLE_TESTS=ON \
-DENABLE_SYSTEM_TBB=ON \
-DENABLE_SYSTEM_PROTOBUF=ON \
-DENABLE_SYSTEM_SNAPPY=ON \

View File

@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
variables:
- group: github

View File

@@ -33,20 +33,11 @@ pr:
resources:
repositories:
- repository: openvino
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino
- repository: openvino_contrib
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/openvino_contrib
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
jobs:
- job: CUDAPlugin_Lin
@@ -63,7 +54,6 @@ jobs:
HOME_DIR: $(Agent.HomeDirectory)
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_REPO_DIR: $(REPO_DIR)/../openvino
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
BIN_DIR: $(OPENVINO_REPO_DIR)/bin/intel64/$(BUILD_TYPE)

View File

@@ -28,14 +28,6 @@ pr:
- '*/conformance/*'
- 'tests/layer_tests/*'
resources:
repositories:
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
jobs:
- job: Lin_Debian
# About 150% of total time
@@ -50,7 +42,6 @@ jobs:
VSTS_HTTP_TIMEOUT: 200
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
BUILD_SAMPLES_DIR: $(WORK_DIR)/build_samples
@@ -149,12 +140,6 @@ jobs:
git clone https://github.com/google/gtest-parallel.git
displayName: 'Install build dependencies'
# Should be after 'Install dependencies' because Git lfs is not installed
- checkout: testdata
clean: 'true'
lfs: 'true'
path: testdata
- task: CMake@1
inputs:
# CMake must get Python 3.x version by default
@@ -243,7 +228,7 @@ jobs:
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
echo "deb https://apt.repos.intel.com/openvino/2023 ubuntu20 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get update
sudo apt-get install openvino -y
# install our local one and make sure the conflicts are resolved
sudo apt-get install --no-install-recommends dpkg-dev -y
@@ -284,6 +269,12 @@ jobs:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'OV Proxy Tests'
- script: |
$(INSTALL_TEST_DIR)/ov_hetero_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroUnitTests.xml
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
displayName: 'OV Hetero Unit Tests'
- script: |
$(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
env:
@@ -332,15 +323,9 @@ jobs:
displayName: 'TEMPLATE FuncTests'
- script: $(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineCAPITests.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
displayName: 'IE CAPITests'
- script: $(INSTALL_TEST_DIR)/ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
displayName: 'OV CAPITests'
# Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
@@ -373,7 +358,7 @@ jobs:
env:
LD_LIBRARY_PATH: $(INSTALL_TEST_DIR)
PYTHONPATH: $(INSTALL_TEST_DIR)
displayName: 'ONNX Frontend Python Tests'
displayName: 'ONNX Frontend Python Tests'
- script: |
set -e

View File

@@ -20,7 +20,6 @@ jobs:
# VSTS_HTTP_TIMEOUT: 200
# BUILD_TYPE: Release
# REPO_DIR: $(Build.Repository.LocalPath)
# MODELS_PATH: $(REPO_DIR)/../testdata
# WORK_DIR: $(Pipeline.Workspace)/_w
# BUILD_DIR: $(WORK_DIR)/build
@@ -38,13 +37,6 @@ jobs:
- script: git -C ~/work/openvino checkout -m $(Build.SourceVersion) && git -C ~/work/openvino submodule update --init --recursive
displayName: checkout
# Should be after 'Install dependencies' because Git lfs is not installed
# - checkout: testdata
# clean: 'true'
# submodules: 'true'
# lfs: 'true'
# path: testdata
- script: env -C ~/work ./configreleasenolto.sh
displayName: CMake

View File

@@ -37,12 +37,6 @@ resources:
name: openvinotoolkit/openvino_contrib
ref: master
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
variables:
- group: github
@@ -61,7 +55,6 @@ jobs:
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)/../openvino_contrib
MODELS_PATH: $(REPO_DIR)/../testdata
WORK_DIR: $(Pipeline.Workspace)/_w
BUILD_DIR: $(WORK_DIR)/build
INSTALL_DIR: $(WORK_DIR)/install_pkg
@@ -109,11 +102,6 @@ jobs:
submodules: 'true'
path: openvino_contrib
- checkout: testdata
clean: 'true'
lfs: 'true'
path: testdata
- script: |
set -e
brew install cython automake
@@ -189,6 +177,10 @@ jobs:
displayName: 'OV Proxy Plugin Tests'
enabled: 'false'
- script: $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_hetero_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroUnitTests.xml
displayName: 'OV Hetero Unit Tests'
enabled: 'false'
- script: $(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'
enabled: 'false'
@@ -215,17 +207,11 @@ jobs:
- script: |
$(SETUPVARS) && $(INSTALL_TEST_DIR)/InferenceEngineCAPITests --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-InferenceEngineCAPITests.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
displayName: 'IE CAPITests'
enabled: 'false'
- script: |
$(SETUPVARS) && $(INSTALL_TEST_DIR)/ov_capi_test --gtest_output=xml:$(INSTALL_TEST_DIR)/TEST-ov_capi_test.xml
env:
DATA_PATH: $(MODELS_PATH)
MODELS_PATH: $(MODELS_PATH)
displayName: 'IE CAPITests'
enabled: 'false'

View File

@@ -34,12 +34,6 @@ resources:
name: openvinotoolkit/openvino_contrib
ref: master
- repository: testdata
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
jobs:
- job: Win
strategy:
@@ -63,7 +57,6 @@ jobs:
BUILD_TYPE: Release
REPO_DIR: $(Build.Repository.LocalPath)
OPENVINO_CONTRIB_REPO_DIR: $(REPO_DIR)\..\openvino_contrib
MODELS_PATH: $(REPO_DIR)\..\testdata
WORK_DIR: $(Pipeline.Workspace)\_w
BUILD_DIR: $(WORK_DIR)\build
BUILD_SAMPLES_DIR: $(WORK_DIR)\build_samples
@@ -130,11 +123,6 @@ jobs:
submodules: 'true'
path: openvino_contrib
- checkout: testdata
clean: 'true'
lfs: 'true'
path: testdata
- script: |
python -m pip install --upgrade pip
rem For running Python API tests
@@ -264,6 +252,9 @@ jobs:
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-OVProxyTests.xml
displayName: 'OV Proxy Plugin Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_hetero_unit_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-OVHeteroUnitTests.xml
displayName: 'OV Hetero Unit Tests'
- script: call $(SETUPVARS) && $(INSTALL_TEST_DIR)\ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:$(INSTALL_TEST_DIR)\TEST-OVHeteroFuncTests.xml
displayName: 'OV Hetero Func Tests'

View File

@@ -35,6 +35,7 @@ resources:
type: github
endpoint: openvinotoolkit
name: openvinotoolkit/testdata
ref: master
variables:
- group: github

.github/CODEOWNERS vendored (2 changes)
View File

@@ -92,6 +92,8 @@
/tests/layer_tests/ @openvinotoolkit/openvino-tests-maintainers @openvinotoolkit/openvino-mo-maintainers
/tests/layer_tests/pytorch_tests/ @openvinotoolkit/openvino-pytorch-frontend-maintainers
/tests/layer_tests/tensorflow_tests @openvinotoolkit/openvino-tf-frontend-maintainers
/tests/layer_tests/jax_tests @openvinotoolkit/openvino-tf-frontend-maintainers
/tests/model_hub_tests @openvinotoolkit/openvino-tf-frontend-maintainers
# Tools:
/tools/ @openvinotoolkit/openvino-tools-maintainers

View File

@@ -1,5 +1,5 @@
name: Bug Report
description: Create a report to help us improve
name: Bug report
description: Help us improve OpenVINO.
title: "[Bug]: "
labels: ["bug", "support_request"]
body:
@@ -53,7 +53,7 @@ body:
id: framework
attributes:
label: Framework
description: Framework used in model optimization
description: Framework used for model optimization
options:
- TensorFlow 1
- Keras (TensorFlow 2)
@@ -68,7 +68,7 @@ body:
id: model_name
attributes:
label: Model used
description: Please provide us the link to your model in the description
description: Link to the model
placeholder: ResNet50 / YOLOv4
validations:
required: false
@@ -77,8 +77,7 @@ body:
attributes:
label: Issue description
description: What issue are you having, and what did you expect to happen instead?
placeholder: Please provide a detailed description of what happened
value: "Error when performing model optimization on yolov4 model."
placeholder: "Error when performing model optimization on yolov4 model."
validations:
required: true
- type: textarea
@@ -101,9 +100,9 @@ body:
label: Issue submission checklist
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/intel/intel-one-mono/blob/main/CODE_OF_CONDUCT.md)
options:
- label: I report the issue. It's not a question
- label: I'm reporting an issue. It's not a question.
required: true
- label: I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found the solution
- label: I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
required: true
- label: There is reproducer code and related data files such as images, videos, models, etc.
required: true

View File

@@ -1,5 +1,5 @@
name: Build Issue Report
description: This report is for the build/installation issue
name: Build Issue report
description: Report a build or installation issue.
title: "[Build]: "
labels: ["build", "support_request"]
body:
@@ -89,7 +89,7 @@ body:
label: Issue submission checklist
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/intel/intel-one-mono/blob/main/CODE_OF_CONDUCT.md)
options:
- label: I report the issue. It's not a question
- label: I'm reporting an issue. It's not a question.
required: true
- label: I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found the solution
- label: I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
required: true

View File

@@ -0,0 +1,32 @@
name: Documentation issue report
description: Report an issue with Documentation.
title: "[Docs]: "
labels: ["docs", "support_request"]
body:
- type: markdown
attributes:
value: |
Please provide all the necessary information to expedite the response.
- type: input
id: doc_link
attributes:
label: Documentation link
description: Please provide the link for the documentation issue
placeholder: e.g. intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html
validations:
required: true
- type: textarea
id: description
attributes:
label: Description
description: Provide a description of the issue you noticed.
validations:
required: true
- type: checkboxes
id: terms
attributes:
label: Issue submission checklist
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/intel/intel-one-mono/blob/main/CODE_OF_CONDUCT.md)
options:
- label: I'm reporting a documentation issue. It's not a question.
required: true

View File

@@ -1,5 +1,5 @@
name: Feature request
description: Suggest a feature or improvement for the OpenVINO toolkit
description: Suggest a feature or improvement for the OpenVINO toolkit.
title: "[Feature Request]: "
labels: ["enhancement", "feature"]
assignees:
@@ -9,9 +9,8 @@ body:
id: request_description
attributes:
label: Request Description
description: What is the request you would like us to improve on?
placeholder: Please provide a detailed description of your request
value: "To have OpenVINO support yolov8 model (with description)"
description: What would you like us to improve on?
placeholder: Please provide a detailed description of your request.
validations:
required: true
- type: textarea
@@ -19,8 +18,7 @@ body:
attributes:
label: Feature Use Case
description: What is the use case of the feature you are proposing?
placeholder: Please provide the use case where this will be useful
value: "Recent autonomous vehicles have been using the yolov8 model to perform object segmentation."
placeholder: What is the new feature use case? How will it be useful?
validations:
required: false
- type: checkboxes

View File

@@ -0,0 +1,59 @@
name: Good First Issue
description: Create a Good First Issue for new contributors.
title: "[Good First Issue]: "
labels: ["good first issue", "no_stale"]
body:
- type: textarea
id: context
attributes:
label: Context
description: |
Let the contributors know what your component is responsible for,
what's the importance of the change and why it's needed.
Keep in mind the Good First Issue is for new contributors.
placeholder: What is it and why is it important?
validations:
required: true
- type: textarea
id: todo_list
attributes:
label: What needs to be done?
description: |
Be as verbose as possible, provide a TODO list if viable.
validations:
required: true
- type: textarea
id: example_prs
attributes:
label: Example Pull Requests
description: |
Provide example Pull requests, if there are any.
validations:
required: false
- type: textarea
id: resources
attributes:
label: Resources
description: |
Any materials related to the task, such as operator specifications,
discussions, guides.
value: |
- [What is OpenVINO?](https://github.com/openvinotoolkit/openvino#what-is-openvino-toolkit)
- [Contribution guide](https://github.com/openvinotoolkit/openvino/blob/master/CONTRIBUTING.md)
- [Blog post on contributing to OpenVINO](https://github.com/openvinotoolkit/openvino/blob/master/CONTRIBUTING.md)
- [User documentation](https://docs.openvino.ai/)
validations:
required: true
- type: textarea
id: contact_points
attributes:
label: Contact points
description: |
People who can be asked questions about the task.
placeholder: GitHub users
validations:
required: true

.github/ISSUE_TEMPLATE/performance.yml vendored Normal file (146 changes)
View File

@@ -0,0 +1,146 @@
name: Performance Issue Report
description: This report is for the performance-related issue
title: "[Performance]: "
labels: ["performance", "support_request"]
body:
- type: markdown
attributes:
value: |
Please provide all the necessary information to expedite the response.
- type: input
id: ov_version
attributes:
label: OpenVINO Version
description: OpenVINO version, branch, or tag in OpenVINO GitHub
placeholder: 2021.4.0 LTS / Master Branch / tag 2022.3.0
validations:
required: false
- type: dropdown
id: os
attributes:
label: Operating System
description: What OS are you using?
options:
- Ubuntu 18.04 (LTS)
- Ubuntu 20.04 (LTS)
- Ubuntu 22.04 (LTS)
- Windows System
- Red Hat Enterprise Linux 8
- OpenSUSE
- Android System
- Raspbian Stretch OS
- macOS Systems for Intel CPU
- macOS Systems for Apple Silicon
- WebAssembly
- WSL2 on Windows
- Other (Please specify in description)
validations:
required: true
- type: dropdown
id: device_use
attributes:
label: Device used for inference
description: What hardware are you using for inference?
options:
- CPU
- iGPU
- dGPU
- NPU
validations:
required: false
- type: dropdown
id: openvino_installation
attributes:
label: OpenVINO installation
description: How do you install OpenVINO on your system?
options:
- PyPi
- Docker
- Build from source
- Other virtual machines
validations:
required: true
- type: dropdown
id: openvino_api
attributes:
label: Programming Language
description: What is the programming language you use in your performance test?
options:
- Python
- C++
- Other
validations:
required: true
- type: dropdown
id: architecture
attributes:
label: Hardware Architecture
description: What is your hardware architecture used in this test?
options:
- x86 (64 bits)
- x86 (32 bits)
- ARM (64 bits)
- ARM (32 bits)
- RISC-V
- Other (please specify in the description)
validations:
required: true
- type: input
id: model_name
attributes:
label: Model used
description: Link to the model
placeholder: ResNet50 / YOLOv4
validations:
required: true
- type: dropdown
id: model_quantized
attributes:
label: Model quantization
description: Is your model quantized?
options:
- 'Yes'
- 'No'
validations:
required: true
- type: textarea
id: target_platform
attributes:
label: Target Platform
description: |
You can also provide us full system log with the following command
Windows cmd - "systeminfo"
Linux terminal - "lscpu" and "lscpu -e"
placeholder: Paste your full platform/system information here
validations:
required: false
- type: textarea
id: performance_description
attributes:
label: Performance issue description
description: What issue are you having, and what did you expect to happen instead?
placeholder: |
Please provide a detailed description of what happened.
Can the issue be reproduced using benchmark_app?
validations:
required: true
- type: textarea
id: step_by_step
attributes:
label: Step-by-step reproduction
description: How can we reproduce your issue?
placeholder: Please provide detailed instructions on how to reproduce the issue
validations:
required: false
- type: checkboxes
id: terms
attributes:
label: Issue submission checklist
description: By submitting this issue, you agree to follow our [Code of Conduct](https://github.com/intel/intel-one-mono/blob/main/CODE_OF_CONDUCT.md)
options:
- label: I'm reporting a performance issue. It's not a question.
required: true
- label: I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
required: true
- label: There is reproducer code and related data files such as images, videos, models, etc.
required: true

View File

@@ -16,16 +16,15 @@ body:
attributes:
label: Pre-release feedback
description: What is the issue or feedback on the pre-release?
placeholder: Please describe the issue and/or feedback
value: "Inference performance drop in OpenVINO 2022.4."
placeholder: There is an inference performance drop in OpenVINO 2022.4.
validations:
required: true
- type: textarea
id: thoughts
attributes:
label: New Feature Feedback?
label: New Feature Feedback
description: Do you have any feedback on the new features released in the pre-release?
placeholder: Any thoughts on the new feature are welcome
placeholder: Any thoughts on the new features are welcome.
validations:
required: false
- type: markdown

View File

@@ -154,6 +154,7 @@ updates:
time: "09:00"
timezone: "Asia/Dubai"
assignees:
- "ilyachur"
- "akashchi"
- "mryzhov"
- "ilya-lavrenov"
open-pull-requests-limit: 3

.github/labeler.yml vendored (3 changes)
View File

@@ -17,6 +17,9 @@
- '.ci/**/*'
- 'Jenkinsfile'
'github_actions':
- '.github/workflows/*'
'category: Core':
- 'src/core/**/*'
- 'src/common/itt/**/*'

View File

@@ -19,7 +19,7 @@ jobs:
runs-on: ubuntu-20.04
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
submodules: true
lfs: true
@@ -66,7 +66,7 @@ jobs:
key: sphinx-docs-cache
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: Build docs

View File

@@ -6,7 +6,7 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Install dependencies
run: python3 -m pip install -r ./.github/github_org_control/requirements.txt

View File

@@ -25,10 +25,9 @@ jobs:
runs-on: ${{ matrix.os }}
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
submodules: recursive
lfs: true
submodules: 'true'
- name: Install OpenCL
uses: awalsh128/cache-apt-pkgs-action@v1.3.0
@@ -38,10 +37,10 @@ jobs:
version: 3.0
- name: CMake configure
run: cmake -DCMAKE_BUILD_TYPE=Release -B build
run: cmake -DCMAKE_BUILD_TYPE=Release -DTHREADING=SEQ -B build
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: Build snippets

View File

@@ -11,9 +11,9 @@ jobs:
permissions:
pull-requests: write
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
submodules: 'true'
- name: Install clang-format-9
run: |
@@ -47,9 +47,9 @@ jobs:
permissions:
pull-requests: write
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
submodules: 'true'
- name: Install ShellCheck
run: |
@@ -78,9 +78,9 @@ jobs:
NamingConventionCheck:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
submodules: recursive
submodules: 'true'
- name: Install Clang dependency
run: |

View File

@@ -28,9 +28,9 @@ jobs:
max-size: 50G
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
submodules: recursive
submodules: 'true'
- name: Install dependencies
run: |
@@ -58,7 +58,7 @@ jobs:
python3 -m pip install -r ${{ github.workspace }}/tools/mo/requirements_dev.txt
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: Build OpenVINO with CMake
@@ -108,7 +108,7 @@ jobs:
run: ${{ github.workspace }}/bin/intel64/Release/ov_hetero_func_tests
- name: Run IR frontend tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_ir_frontend_tests # --gtest_print_time=1 --gtest_output=xml:${{ github.workspace }}/testdata/TEST-IRFrontend.xml
run: ${{ github.workspace }}/bin/intel64/Release/ov_ir_frontend_tests
- name: Run ONNX frontend tests
run: ${{ github.workspace }}/bin/intel64/Release/ov_onnx_frontend_tests --gtest_filter=-*IE_GPU*
@@ -144,6 +144,6 @@ jobs:
lcov --capture --directory ${{ github.workspace }}/. --output-file coverage.info
genhtml coverage.info --output-directory coverage-report
- name: Collect coverage
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
verbose: true

View File

@@ -9,7 +9,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
- name: Dependency Review
uses: actions/dependency-review-action@v3

View File

@@ -9,7 +9,7 @@ jobs:
Check_Files_Size:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: git ls-tree
run: git ls-tree -r -t -l --full-name HEAD | sort -n -r -k 4
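The `git ls-tree -r -t -l --full-name HEAD | sort -n -r -k 4` step above lists every tracked object with its size and sorts the largest files to the top, which is how the workflow spots oversized files. A minimal Python sketch of the same parsing and sorting; the sample hashes and paths are made up for illustration only:

```python
# Rough equivalent of: git ls-tree -r -t -l --full-name HEAD | sort -n -r -k 4
# `ls-tree -l` prints "<mode> <type> <sha> <size>\t<path>"; trees show "-" for size.
# SAMPLE_LS_TREE is fabricated example output, not real repository data.
SAMPLE_LS_TREE = """\
100644 blob 8f2a1c3db1b2 1048576\tdocs/model.bin
100644 blob 9ab44de01f77 2048\tREADME.md
040000 tree deadbeef0000 -\tsrc"""

def largest_first(ls_tree_output):
    """Return (size, path) pairs for blobs, biggest first."""
    rows = []
    for line in ls_tree_output.splitlines():
        meta, path = line.split("\t", 1)
        mode, obj_type, sha, size = meta.split()
        if size != "-":  # skip trees, which carry no size
            rows.append((int(size), path))
    return sorted(rows, reverse=True)

if __name__ == "__main__":
    for size, path in largest_first(SAMPLE_LS_TREE):
        print(f"{size:>10} {path}")
```

The `sort -n -r -k 4` flags in the workflow correspond to the numeric, descending sort on the size field here.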

View File

@@ -1,5 +1,8 @@
name: Tests on Linux (Ubuntu 22.04, Python 3.11)
name: Tests on Linux (Ubuntu 20.04, Python 3.11)
on:
schedule:
# at 00:00 on Wednesday and Saturday
- cron: '0 0 * * 3,6'
workflow_dispatch:
pull_request:
paths-ignore:
@@ -29,7 +32,7 @@ jobs:
defaults:
run:
shell: bash
runs-on: ubuntu-latest-8-cores
runs-on: ubuntu-20.04-8-cores
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
@@ -41,32 +44,22 @@ jobs:
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
SAMPLES_INSTALL_DIR: ${{ github.workspace }}/install/samples
LAYER_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/layer_tests
MODEL_HUB_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/model_hub_tests
BUILD_DIR: ${{ github.workspace }}/build
DATA_PATH: ${{ github.workspace }}/testdata
MODELS_PATH: ${{ github.workspace }}/testdata
OV_TEMP: ${{ github.workspace }}/openvino_temp
PYTHON_STATIC_ARGS: -m "not dynamic_library"
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: 'openvino'
submodules: 'recursive'
submodules: 'true'
- name: Clone OpenVINO Contrib
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: 'openvinotoolkit/openvino_contrib'
path: 'openvino_contrib'
submodules: 'recursive'
- name: Clone testdata for C API tests
uses: actions/checkout@v3
with:
repository: 'openvinotoolkit/testdata'
path: 'testdata'
submodules: 'recursive'
lfs: 'true'
#
# Dependencies
@@ -128,19 +121,12 @@ jobs:
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
verbose: 2
key: ${{ github.job }}-linux
key: linux-ubuntu
restore-keys: |
${{ github.job }}-linux
- name: Get tools versions
run: |
ninja --version || exit 1
ccache --version || exit 1
python3 --version || exit 1
cmake --version || exit 1
linux-ubuntu
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: CMake configure
@@ -160,8 +146,6 @@ jobs:
-DENABLE_STRICT_DEPENDENCIES=OFF \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_LINKER_LAUNCHER=ccache \
-DCMAKE_C_LINKER_LAUNCHER=ccache \
-DENABLE_SYSTEM_SNAPPY=ON \
-DENABLE_SYSTEM_TBB=ON \
-DBUILD_nvidia_plugin=OFF \
@@ -183,15 +167,24 @@ jobs:
- name: Cmake Layer Tests
run: cmake -GNinja -S ${{ env.OPENVINO_REPO }}/tests/layer_tests -B ${{ env.BUILD_DIR }}/layer_tests
- name: Cmake Model Hub Tests
run: cmake -GNinja -S ${{ env.OPENVINO_REPO }}/tests/model_hub_tests -B ${{ env.BUILD_DIR }}/model_hub_tests
- name: Build Layer Tests
run: cmake --build ${{ env.BUILD_DIR }}/layer_tests --parallel --config Release
+- name: Build Model Hub Tests
+run: cmake --build ${{ env.BUILD_DIR }}/model_hub_tests --parallel --config Release
- name: Install wheel packages
run: cmake -DCOMPONENT=python_wheels -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: Install Layer Tests
run: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/layer_tests/cmake_install.cmake
+- name: Install Model Hub Tests
+run: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/model_hub_tests/cmake_install.cmake
- name: Install python wheels
run: python3 -m pip install openvino-dev --find-links=${{ env.INSTALL_DIR }}/tools
@@ -278,7 +271,7 @@ jobs:
defaults:
run:
shell: bash
-runs-on: ubuntu-22.04
+runs-on: ubuntu-20.04
env:
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
@@ -456,6 +449,11 @@ jobs:
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.INSTALL_TEST_DIR }}/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVProxyTests.xml
+- name: Hetero Unit Tests
+run: |
+source ${{ env.INSTALL_DIR }}/setupvars.sh
+${{ env.INSTALL_TEST_DIR }}/ov_hetero_unit_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVHeteroUnitTests.xml
- name: Hetero Func Tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
@@ -474,7 +472,7 @@ jobs:
defaults:
run:
shell: bash
-runs-on: ubuntu-22.04
+runs-on: ubuntu-20.04
env:
OPENVINO_REPO: ${{ github.workspace }}/openvino
OPENVINO_CONTRIB_REPO: ${{ github.workspace }}/openvino_contrib
@@ -482,9 +480,8 @@ jobs:
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
SAMPLES_INSTALL_DIR: ${{ github.workspace }}/install/samples
LAYER_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/layer_tests
-MODEL_HUB_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/model_hub_tests
BUILD_DIR: ${{ github.workspace }}/build
DATA_PATH: ${{ github.workspace }}/testdata
MODELS_PATH: ${{ github.workspace }}/testdata
OV_TEMP: ${{ github.workspace }}/openvino_temp
PYTHON_STATIC_ARGS: -m "not dynamic_library"
@@ -494,10 +491,9 @@ jobs:
mkdir -p ${{ env.INSTALL_DIR }} ${{ env.INSTALL_TEST_DIR }}
- name: Clone OpenVINO
-uses: actions/checkout@v3
+uses: actions/checkout@v4
with:
path: 'openvino'
-submodules: 'recursive'
#
# Dependencies
@@ -590,6 +586,14 @@ jobs:
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_onnx/test_zoo_models.py \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_onnx/test_backend.py
+- name: Python API snippets
+run: |
+source ${{ env.INSTALL_DIR }}/setupvars.sh
+export PYTHONPATH=${{ env.INSTALL_TEST_DIR }}:${{ github.workspace }}/openvino/docs/:$PYTHONPATH
+export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
+python3 ${{ github.workspace }}/openvino/docs/snippets/main.py
- name: Model Optimizer UT
run: |
@@ -617,6 +621,7 @@ jobs:
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/pytorch_tests -m precommit --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-pytorch.xml
env:
TEST_DEVICE: CPU
+TEST_PRECISION: FP16
- name: TensorFlow 1 Layer Tests - TF FE
run: |
@@ -629,6 +634,7 @@ jobs:
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_tests/ --use_new_frontend -m precommit_tf_fe --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf_fe.xml
env:
TEST_DEVICE: CPU
+TEST_PRECISION: FP16
- name: TensorFlow 2 Layer Tests - TF FE
run: |
@@ -640,6 +646,18 @@ jobs:
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow2_keras_tests/ --use_new_frontend -m precommit_tf_fe --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf2_fe.xml
env:
TEST_DEVICE: CPU
+TEST_PRECISION: FP16
+- name: JAX Layer Tests - TF FE
+run: |
+python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
+export PYTHONPATH=${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
+source ${{ env.INSTALL_DIR }}/setupvars.sh
+python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/jax_tests/ -m precommit --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-jax.xml
+env:
+TEST_DEVICE: CPU
- name: TensorFlow 1 Layer Tests - Legacy FE
run: |
@@ -659,6 +677,7 @@ jobs:
--ir_version=11 --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf2_Activation.xml -k "sigmoid"
env:
TEST_DEVICE: CPU
+TEST_PRECISION: FP16
- name: TensorFlow Lite Layer Tests - TFL FE
run: |
@@ -669,6 +688,7 @@ jobs:
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_lite_tests/ --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tfl_fe.xml
env:
TEST_DEVICE: CPU
+TEST_PRECISION: FP16
- name: MO Python API Tests
run: |
@@ -679,6 +699,7 @@ jobs:
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/mo_python_api_tests --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-test_mo_convert.xml
env:
TEST_DEVICE: CPU
+TEST_PRECISION: FP16
- name: Python Frontend tests
run: |
@@ -701,7 +722,9 @@ jobs:
if: ${{ always() }}
with:
name: test-results-python
-path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
+path: |
+${{ env.INSTALL_TEST_DIR }}/TEST*.html
+${{ env.INSTALL_TEST_DIR }}/TEST*.xml
if-no-files-found: 'error'
CPU_Functional_Tests:
@@ -709,10 +732,12 @@ jobs:
defaults:
run:
shell: bash
-runs-on: ubuntu-22.04
+runs-on: ubuntu-20.04-4-cores
env:
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
+PARALLEL_TEST_SCRIPT: ${{ github.workspace }}/install/tests/functional_test_utils/run_parallel.py
+PARALLEL_TEST_CACHE: ${{ github.workspace }}/install/tests/test_cache.lst
steps:
- name: Create Directories
@@ -744,15 +769,188 @@ jobs:
tar -xzf openvino_tests.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_tests.tar.gz || exit 1
popd
-- name: Intel CPU plugin func tests
+- name: Install python dependencies
+run: |
+python3 -m pip install --upgrade pip
+python3 -m pip install -r ${{ env.INSTALL_TEST_DIR }}/functional_test_utils/requirements.txt
+- name: Restore tests execution time
+uses: actions/cache/restore@v3
+with:
+path: ${{ env.PARALLEL_TEST_CACHE }}
+key: ${{ runner.os }}-tests-functional-cpu-stamp-${{ github.sha }}
+restore-keys: |
+${{ runner.os }}-tests-functional-cpu-stamp
+- name: Intel CPU plugin func tests (parallel)
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
-${{ env.INSTALL_TEST_DIR }}/ov_cpu_func_tests --gtest_print_time=1 --gtest_filter=*smoke* --gtest_output=xml:"${{ env.INSTALL_TEST_DIR }}/TEST-CPUFuncTests.xml"
+python3 ${{ env.PARALLEL_TEST_SCRIPT }} -e ${{ env.INSTALL_TEST_DIR }}/ov_cpu_func_tests -c ${{ env.PARALLEL_TEST_CACHE }} -w ${{ env.INSTALL_TEST_DIR }} -s suite -rf 0 -- --gtest_print_time=1 --gtest_filter=*smoke*
+timeout-minutes: 25
+- name: Save tests execution time
+uses: actions/cache/save@v3
+if: github.ref_name == 'master'
+with:
+path: ${{ env.PARALLEL_TEST_CACHE }}
+key: ${{ runner.os }}-tests-functional-cpu-stamp-${{ github.sha }}
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-functional-cpu
-path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
+path: |
+${{ env.INSTALL_TEST_DIR }}/TEST*.xml
+${{ env.INSTALL_TEST_DIR }}/logs/failed/*.log
+${{ env.INSTALL_TEST_DIR }}/logs/crashed/*.log
+${{ env.INSTALL_TEST_DIR }}/logs/hanged/*.log
+${{ env.INSTALL_TEST_DIR }}/logs/interapted/*.log
+${{ env.INSTALL_TEST_DIR }}/logs/disabled_tests.log
if-no-files-found: 'error'
TensorFlow_Hub_Models_Tests:
needs: Build
defaults:
run:
shell: bash
runs-on: ${{ github.event_name == 'schedule' && 'ubuntu-20.04-8-cores' || 'ubuntu-20.04'}}
env:
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
MODEL_HUB_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/model_hub_tests
steps:
- name: Create Directories
run: |
mkdir -p ${{ env.INSTALL_DIR }} ${{ env.INSTALL_TEST_DIR }}
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Download OpenVINO package
uses: actions/download-artifact@v3
with:
name: openvino_package
path: ${{ env.INSTALL_DIR }}
- name: Download OpenVINO tests package
uses: actions/download-artifact@v3
with:
name: openvino_tests
path: ${{ env.INSTALL_TEST_DIR }}
- name: Extract OpenVINO packages
run: |
pushd ${{ env.INSTALL_DIR }}
tar -xzf openvino_package.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_package.tar.gz || exit 1
popd
pushd ${{ env.INSTALL_TEST_DIR }}
tar -xzf openvino_tests.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_tests.tar.gz || exit 1
popd
- name: Install Python wheels
run: |
python3 -m pip install openvino --find-links=${{ env.INSTALL_DIR }}/tools
- name: TensorFlow Hub Tests - TF FE
run: |
python3 -m pip install -r ${{ env.MODEL_HUB_TESTS_INSTALL_DIR }}/tf_hub_tests/requirements.txt
export PYTHONPATH=${{ env.MODEL_HUB_TESTS_INSTALL_DIR }}:$PYTHONPATH
python3 -m pytest ${{ env.MODEL_HUB_TESTS_INSTALL_DIR }}/tf_hub_tests/ -m ${{ env.TYPE }} --html=${{ env.INSTALL_TEST_DIR }}/TEST-tf_hub_tf_fe.html --self-contained-html
env:
TYPE: ${{ github.event_name == 'schedule' && 'nightly' || 'precommit'}}
TEST_DEVICE: CPU
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-tensorflow-hub-models
path: |
${{ env.INSTALL_TEST_DIR }}/TEST*.html
if-no-files-found: 'error'
PyTorch_Models_Tests:
needs: Build
defaults:
run:
shell: bash
runs-on: ${{ github.event_name == 'schedule' && 'ubuntu-20.04-8-cores' || 'ubuntu-20.04'}}
env:
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
MODEL_HUB_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/model_hub_tests
steps:
- name: Maximize build space
run: |
sudo rm -rf /usr/local/lib/android # will release about 10 GB if you don't need Android
sudo rm -rf /usr/share/dotnet # will release about 20GB if you don't need .NET
sudo rm -rf /opt/ghc
echo "Available storage:"
df -h
- name: Create Directories
run: |
mkdir -p ${{ env.INSTALL_DIR }} ${{ env.INSTALL_TEST_DIR }}
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Download OpenVINO package
uses: actions/download-artifact@v3
with:
name: openvino_package
path: ${{ env.INSTALL_DIR }}
- name: Download OpenVINO tests package
uses: actions/download-artifact@v3
with:
name: openvino_tests
path: ${{ env.INSTALL_TEST_DIR }}
- name: Extract OpenVINO packages
run: |
pushd ${{ env.INSTALL_DIR }}
tar -xzf openvino_package.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_package.tar.gz || exit 1
popd
pushd ${{ env.INSTALL_TEST_DIR }}
tar -xzf openvino_tests.tar.gz -C ${{ env.INSTALL_DIR }} && rm openvino_tests.tar.gz || exit 1
popd
- name: Install Python wheels
run: |
python3 -m pip install openvino --find-links=${{ env.INSTALL_DIR }}/tools
- name: Install requirements
run: |
python3 -m pip install -r ${{ env.MODEL_HUB_TESTS_INSTALL_DIR }}/torch_tests/requirements.txt
python3 -m pip install -r ${{ env.MODEL_HUB_TESTS_INSTALL_DIR }}/torch_tests/requirements_secondary.txt
python3 -m pip cache purge
echo "Available storage:"
df -h
du -h -d0 ~/.cache ~/*
- name: PyTorch Models Tests
run: |
export PYTHONPATH=${{ env.MODEL_HUB_TESTS_INSTALL_DIR }}:$PYTHONPATH
python3 -m pytest ${{ env.MODEL_HUB_TESTS_INSTALL_DIR }}/torch_tests/ -m ${{ env.TYPE }} --html=${{ env.INSTALL_TEST_DIR }}/TEST-torch_model_tests.html --self-contained-html -v
env:
TYPE: ${{ github.event_name == 'schedule' && 'nightly' || 'precommit'}}
TEST_DEVICE: CPU
- name: Available storage after tests
run: |
echo "Available storage:"
df -h
du -h -d0 ~/.cache ~/*
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results-torch-models
path: |
${{ env.INSTALL_TEST_DIR }}/TEST*.html
if-no-files-found: 'error'
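Note on the parallel CPU functional tests introduced above: `run_parallel.py` (invoked with `-s suite`) partitions gtest suites across workers. A toy sketch of that idea, for illustration only — `make_filters` is a hypothetical helper, and the real script additionally weights suites by the execution times restored from `actions/cache`:

```shell
# Toy sketch of suite-level gtest parallelism (illustrative; not run_parallel.py).
# Splits suite names round-robin into per-worker --gtest_filter strings.
make_filters() {
  # $1: worker count; remaining args: gtest suite names
  workers=$1; shift
  i=0
  for s in "$@"; do
    w=$(( i % workers ))
    # append "<suite>.*" to worker w's filter, ':' separated
    eval "f$w=\"\${f$w:+\${f$w}:}$s.*\""
    i=$(( i + 1 ))
  done
}

make_filters 2 smoke_Conv smoke_Eltwise smoke_MatMul smoke_Pad
# f0 = "smoke_Conv.*:smoke_MatMul.*", f1 = "smoke_Eltwise.*:smoke_Pad.*"
# each worker would then run: ov_cpu_func_tests --gtest_filter="$f0" (resp. "$f1")
```

Splitting at suite granularity keeps each gtest fixture's setup/teardown within one process, which is why the workflow caches per-suite timings rather than per-test ones.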


@@ -0,0 +1,173 @@
name: Linux Android ARM64 (Ubuntu 20.04, Python 3.11)
on:
schedule:
# run daily at 00:00
- cron: '0 0 * * *'
workflow_dispatch:
# pull_request:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# push:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# branches:
# - master
concurrency:
group: ${{ github.head_ref || github.run_id }}-linux-android-arm64
cancel-in-progress: true
jobs:
Build:
# TODO: remove. Temporary measure to prevent the workflow from scheduling on forks.
if: ${{ github.repository_owner == 'openvinotoolkit' }}
defaults:
run:
shell: bash
runs-on: ubuntu-20.04-8-cores
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
CMAKE_CXX_COMPILER_LAUNCHER: ccache
CMAKE_C_COMPILER_LAUNCHER: ccache
BUILD_TYPE: Debug
OPENVINO_REPO: ${{ github.workspace }}/openvino
VCPKG_ROOT: ${{ github.workspace }}/vcpkg
BUILD_DIR: ${{ github.workspace }}/build
INSTALL_DIR: ${{ github.workspace }}/install
OV_TEMP: ${{ github.workspace }}/openvino_temp
ANDROID_TOOLS: ${{ github.workspace }}/android_tools
ANDROID_NDK_HOME: ${{ github.workspace }}/android_tools/ndk-bundle
ANDROID_SDK_VERSION: 29
ANDROID_ABI_CONFIG: arm64-v8a
steps:
- name: Clone OpenVINO
uses: actions/checkout@v4
with:
path: 'openvino'
- name: Init submodules for non vcpkg dependencies
run: |
pushd ${{ env.OPENVINO_REPO }}
git submodule update --init -- ${{ env.OPENVINO_REPO }}/src/plugins
git submodule update --init -- ${{ env.OPENVINO_REPO }}/thirdparty/gtest
git submodule update --init -- ${{ env.OPENVINO_REPO }}/thirdparty/open_model_zoo
popd
- name: Clone VCPKG
uses: actions/checkout@v4
with:
repository: 'microsoft/vcpkg'
path: 'vcpkg'
fetch-depth: '0'
- name: Setup Python 3.11
uses: actions/setup-python@v4
with:
python-version: '3.11'
#
# Dependencies
#
- name: Install dependencies
run: |
# generic dependencies
sudo -E apt update
sudo -E apt --assume-yes install ccache scons default-jdk python3-pip ninja-build build-essential
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
ln -s /usr/local/bin/ninja /usr/local/bin/ninja-build
# vcpkg's tool dependencies
sudo -E apt --assume-yes install curl zip unzip tar
# vcpkg 'python3' port dependencies
sudo -E apt --assume-yes install autoconf libtool autoconf-archive
# vcpkg tree of dependencies require extra packages
sudo -E apt --assume-yes install pkg-config linux-libc-dev
# Install Android SDK, NDK and Tools
sudo apt -y --no-install-recommends install unzip
wget https://dl.google.com/android/repository/commandlinetools-linux-7583922_latest.zip
unzip commandlinetools-linux-7583922_latest.zip
echo "yes" | ./cmdline-tools/bin/sdkmanager --sdk_root=${{ env.ANDROID_TOOLS }} --install "ndk-bundle" "platform-tools" "platforms;android-${{ env.ANDROID_SDK_VERSION }}"
- name: Setup ccache
uses: hendrikmuhs/ccache-action@v1.2
with:
max-size: "2000M"
# Should save cache only if run in the master branch of the base repo
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
verbose: 2
key: ${{ github.job }}-linux-android-arm64
restore-keys: |
${{ github.job }}-linux-android-arm64
#
# Build
#
- name: Build vcpkg
run: |
${{ env.VCPKG_ROOT }}/bootstrap-vcpkg.sh --disableMetrics
# patch vcpkg default (community) toolchain to build only Release configuration
echo "set(VCPKG_BUILD_TYPE release)" >> ${{ env.VCPKG_ROOT }}/triplets/community/arm64-android.cmake
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: CMake configure
run: |
cmake \
-G Ninja \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DCMAKE_BUILD_TYPE=${{ env.BUILD_TYPE }} \
-DVCPKG_TARGET_TRIPLET=arm64-android \
-DVCPKG_HOST_TRIPLET=x64-linux-release \
-DCMAKE_TOOLCHAIN_FILE=${{ env.VCPKG_ROOT }}/scripts/buildsystems/vcpkg.cmake \
-DVCPKG_CHAINLOAD_TOOLCHAIN_FILE=${{ env.ANDROID_NDK_HOME }}/build/cmake/android.toolchain.cmake \
-DCMAKE_COMPILE_WARNING_AS_ERROR=ON \
-DANDROID_ABI=${{ env.ANDROID_ABI_CONFIG }} \
-DANDROID_PLATFORM=${{ env.ANDROID_SDK_VERSION }} \
-DENABLE_PYTHON=OFF \
-DENABLE_SYSTEM_OPENCL=ON \
-DENABLE_SYSTEM_PROTOBUF=ON \
-DENABLE_SYSTEM_PUGIXML=ON \
-DENABLE_SYSTEM_SNAPPY=ON \
-DENABLE_SYSTEM_TBB=ON \
-DENABLE_SYSTEM_FLATBUFFERS=ON \
-DENABLE_INTEL_GPU=ON \
-DENABLE_TESTS=ON \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-S ${{ env.OPENVINO_REPO }} \
-B ${{ env.BUILD_DIR }}
- name: Clean ccache stats
run: ccache --zero-stats --show-config
- name: Build Android ARM64
run: cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config ${{ env.BUILD_TYPE }}
- name: Show ccache stats
run: ccache --show-stats
- name: List binary files
run: ls -alR ${{ env.OPENVINO_REPO }}/bin/
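For reference on the `arm64-v8a` ABI configured above: the NDK's CMake toolchain file maps each `ANDROID_ABI` to a clang target triple (with the `ANDROID_PLATFORM` API level appended). A small lookup helper sketching that mapping — the function itself is hypothetical, the triples are the NDK's:

```shell
# Map an Android ABI (as passed via ANDROID_ABI) to the NDK clang target triple.
abi_to_triple() {
  case "$1" in
    arm64-v8a)   echo aarch64-linux-android ;;
    armeabi-v7a) echo armv7a-linux-androideabi ;;
    x86_64)      echo x86_64-linux-android ;;
    x86)         echo i686-linux-android ;;
    *) echo "unknown ABI: $1" >&2; return 1 ;;
  esac
}

abi_to_triple arm64-v8a   # prints: aarch64-linux-android
```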

.github/workflows/linux_arm64.yml vendored Normal file

@@ -0,0 +1,200 @@
name: Linux ARM64 with Conan (Ubuntu 20.04, Python 3.11)
on:
schedule:
# run daily at 00:00
- cron: '0 0 * * *'
workflow_dispatch:
# pull_request:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# push:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# branches:
# - master
concurrency:
group: ${{ github.head_ref || github.run_id }}-linux-arm64
cancel-in-progress: true
jobs:
Build:
# TODO: remove. Temporary measure to prevent the workflow from scheduling on forks.
if: ${{ github.repository_owner == 'openvinotoolkit' }}
defaults:
run:
shell: bash
runs-on: ubuntu-20.04-8-cores
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
CMAKE_CXX_COMPILER_LAUNCHER: ccache
CMAKE_C_COMPILER_LAUNCHER: ccache
BUILD_TYPE: Release
OPENVINO_REPO: ${{ github.workspace }}/openvino
BUILD_DIR: ${{ github.workspace }}/build
INSTALL_DIR: ${{ github.workspace }}/install
OV_TEMP: ${{ github.workspace }}/openvino_temp
steps:
- name: Clone OpenVINO
uses: actions/checkout@v4
with:
path: 'openvino'
- name: Init submodules for non Conan dependencies
run: |
pushd ${{ env.OPENVINO_REPO }}
git submodule update --init -- ${{ env.OPENVINO_REPO }}/src/plugins
git submodule update --init -- ${{ env.OPENVINO_REPO }}/thirdparty/gtest
git submodule update --init -- ${{ env.OPENVINO_REPO }}/thirdparty/open_model_zoo
popd
- name: Setup Python 3.11
uses: actions/setup-python@v4
with:
python-version: '3.11'
#
# Dependencies
#
- name: Install build dependencies
run: |
sudo -E apt update
# install dependencies needed to build CPU plugin for ARM
sudo -E apt --assume-yes install scons gcc-10-aarch64-linux-gnu g++-10-aarch64-linux-gnu
# generic dependencies
sudo -E apt --assume-yes install cmake ccache ninja-build unzip fdupes
- name: Install python dependencies
run: |
python3 -m pip install --upgrade pip
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
- name: Install arm64 libraries
run: |
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal main restricted > arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal-updates main restricted >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal universe >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal-updates universe >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal multiverse >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal-updates multiverse >> arm64-sources.list
echo deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ focal-backports main restricted universe multiverse >> arm64-sources.list
echo deb [arch=amd64] http://security.ubuntu.com/ubuntu/ focal-security main restricted >> arm64-sources.list
echo deb [arch=amd64] http://security.ubuntu.com/ubuntu/ focal-security universe >> arm64-sources.list
echo deb [arch=amd64] http://security.ubuntu.com/ubuntu/ focal-security multiverse >> arm64-sources.list
echo deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal main >> arm64-sources.list
echo deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal universe >> arm64-sources.list
echo deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal-updates main >> arm64-sources.list
echo deb [arch=arm64] http://ports.ubuntu.com/ubuntu-ports/ focal-security main >> arm64-sources.list
sudo mv arm64-sources.list /etc/apt/sources.list.d/
sudo -E dpkg --add-architecture arm64
sudo -E apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/arm64-sources.list
sudo -E apt-get install -y --no-install-recommends libpython3-dev:arm64
- name: Setup ccache
uses: hendrikmuhs/ccache-action@v1.2
with:
max-size: "2000M"
# Should save cache only if run in the master branch of the base repo
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
verbose: 2
key: ${{ github.job }}-linux-arm64
restore-keys: |
${{ github.job }}-linux-arm64
- name: Install conan and dependencies
run: |
# create build directory
mkdir -p ${{ env.BUILD_DIR }}
python3 -m pip install conan
# install build profile compilers
sudo -E apt --assume-yes install gcc g++
# generate build profile
conan profile detect
# generate host profile for linux_arm64
echo "include(default)" > ${{ env.BUILD_DIR }}/linux_arm64
echo "[buildenv]" >> ${{ env.BUILD_DIR }}/linux_arm64
echo "CC=aarch64-linux-gnu-gcc-10" >> ${{ env.BUILD_DIR }}/linux_arm64
echo "CXX=aarch64-linux-gnu-g++-10" >> ${{ env.BUILD_DIR }}/linux_arm64
# install OpenVINO dependencies
conan install ${{ env.OPENVINO_REPO }}/conanfile.txt \
-pr:h ${{ env.BUILD_DIR }}/linux_arm64 \
-s:h arch=armv8 \
-of ${{ env.BUILD_DIR }}/dependencies \
-b missing
#
# Build
#
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: CMake configure
run: |
source ${{ env.BUILD_DIR }}/dependencies/conanbuild.sh
cmake \
-G Ninja \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DBUILD_SHARED_LIBS=OFF \
-DCMAKE_COMPILE_WARNING_AS_ERROR=OFF \
-DENABLE_CPPLINT=ON \
-DENABLE_INTEL_GPU=ON \
-DENABLE_PYTHON=ON \
-DENABLE_WHEEL=ON \
-DPYBIND11_PYTHONLIBS_OVERWRITE=OFF \
-DPYTHON_MODULE_EXTENSION=$(aarch64-linux-gnu-python3-config --extension-suffix) \
-DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libc-2.31.so \
-DPYTHON_INCLUDE_DIR=$(python3 -c "import sysconfig; print(sysconfig.get_path('include'))") \
-DENABLE_TESTS=ON \
-DENABLE_SYSTEM_TBB=ON \
-DENABLE_SYSTEM_PROTOBUF=ON \
-DENABLE_SYSTEM_SNAPPY=ON \
-DENABLE_SYSTEM_PUGIXML=ON \
-DCMAKE_TOOLCHAIN_FILE=${{ env.BUILD_DIR }}/dependencies/conan_toolchain.cmake \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DARM_COMPUTE_SCONS_JOBS=${{ steps.cpu-cores.outputs.count }} \
-DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} \
-DCMAKE_BUILD_TYPE=${{ env.BUILD_TYPE }} \
-DENABLE_PYTHON_PACKAGING=ON \
-S ${{ env.OPENVINO_REPO }} \
-B ${{ env.BUILD_DIR }}
source ${{ env.BUILD_DIR }}/dependencies/deactivate_conanbuild.sh
- name: Clean ccache stats
run: ccache --zero-stats --show-config
- name: Build OpenVINO Runtime
run: cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config ${{ env.BUILD_TYPE }}
- name: Show ccache stats
run: ccache --show-stats
- name: Install OpenVINO Runtime
run: cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config ${{ env.BUILD_TYPE }} --target install
- name: Build OpenVINO C++ samples
run: |
source ${{ env.BUILD_DIR }}/dependencies/conanbuild.sh
${{ env.INSTALL_DIR }}/samples/cpp/build_samples.sh
source ${{ env.BUILD_DIR }}/dependencies/deactivate_conanbuild.sh
env:
CMAKE_TOOLCHAIN_FILE: ${{ env.BUILD_DIR }}/dependencies/conan_toolchain.cmake
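The inline `echo` lines in the "Install conan and dependencies" step above build up a Conan host profile that pins the aarch64 cross compilers. The same generation, pulled out as a standalone sketch (file location via `mktemp` is illustrative):

```shell
# Generate a minimal Conan host profile for aarch64 cross builds,
# mirroring the inline echo lines in the workflow above.
profile="$(mktemp -d)/linux_arm64"
{
  echo "include(default)"
  echo "[buildenv]"
  echo "CC=aarch64-linux-gnu-gcc-10"
  echo "CXX=aarch64-linux-gnu-g++-10"
} > "$profile"
# consumed as: conan install conanfile.txt -pr:h "$profile" -s:h arch=armv8 -b missing
```

Keeping the compilers in `[buildenv]` (rather than only in CMake flags) lets Conan build missing aarch64 dependency packages with the same cross toolchain.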


@@ -0,0 +1,140 @@
name: Linux Conditional Compilation (Ubuntu 22.04, Python 3.11)
on:
workflow_dispatch:
schedule:
# run daily at 00:00
- cron: '0 0 * * *'
# pull_request:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# push:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# branches:
# - master
concurrency:
group: ${{ github.head_ref || github.run_id }}-linux-cc
cancel-in-progress: true
jobs:
Build:
# TODO: remove. Temporary measure to prevent the workflow from scheduling on forks.
if: ${{ github.repository_owner == 'openvinotoolkit' }}
defaults:
run:
shell: bash
runs-on: ubuntu-latest-8-cores
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
CMAKE_CXX_COMPILER_LAUNCHER: ccache
CMAKE_C_COMPILER_LAUNCHER: ccache
OPENVINO_REPO: ${{ github.workspace }}/openvino
OPENVINO_CONTRIB_REPO: ${{ github.workspace }}/openvino_contrib
BUILD_DIR: ${{ github.workspace }}/build
MODELS_PATH: ${{ github.workspace }}/testdata
OV_TEMP: ${{ github.workspace }}/openvino_temp
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
steps:
- name: Clone OpenVINO
uses: actions/checkout@v4
with:
path: 'openvino'
submodules: 'true'
- name: Clone test models
uses: actions/checkout@v4
with:
repository: 'openvinotoolkit/testdata'
path: 'testdata'
lfs: 'true'
#
# Dependencies
#
- name: Install build dependencies
run: |
sudo -E ${{ env.OPENVINO_REPO }}/install_build_dependencies.sh
sudo -E apt update
sudo -E apt --assume-yes install openjdk-11-jdk libbz2-dev clang unzip libpugixml-dev libtbb-dev intel-opencl-icd ocl-icd-opencl-dev opencl-headers
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
- uses: actions/setup-python@v4
with:
python-version: '3.11'
#
# Build
#
- name: Setup ccache
uses: hendrikmuhs/ccache-action@v1.2
with:
max-size: "2000M"
# Should save cache only if run in the master branch of the base repo
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
# save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
verbose: 2
key: linux-cc
restore-keys: |
linux-cc
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: CMake configure CC COLLECT
run: |
cmake \
-G "Ninja Multi-Config" \
-DENABLE_CPPLINT=OFF \
-DENABLE_GAPI_PREPROCESSING=OFF \
-DCMAKE_VERBOSE_MAKEFILE=ON \
-DCMAKE_COMPILE_WARNING_AS_ERROR=OFF \
-DENABLE_FASTER_BUILD=ON \
-DENABLE_PROFILING_ITT=ON \
-DSELECTIVE_BUILD=COLLECT \
-S ${{ env.OPENVINO_REPO }} \
-B ${{ env.BUILD_DIR }}
- name: Clean ccache stats
run: ccache --zero-stats --show-config
- name: Build CC COLLECT
run: cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config Release --target openvino_intel_cpu_plugin openvino_ir_frontend benchmark_app sea_itt_lib
- name: Show ccache stats
run: ccache --show-stats
- name: Code usage analysis
run: |
python3 ${{ env.OPENVINO_REPO }}/thirdparty/itt_collector/runtool/sea_runtool.py \
--bindir ${{ env.OPENVINO_REPO }}/bin/intel64/Release -o ${{ env.BUILD_DIR }}/itt_stat ! \
${{ env.OPENVINO_REPO }}/bin/intel64/Release/benchmark_app -niter 1 -nireq 1 \
-m ${{ env.MODELS_PATH }}/models/test_model/test_model_fp32.xml -d CPU
- name: CMake configure with CC ON
run: cmake -DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=${{ env.BUILD_DIR }}/*.csv -S ${{ env.OPENVINO_REPO }} -B ${{ env.BUILD_DIR }}
- name: Build with CC ON
run: cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config Release --target openvino_intel_cpu_plugin openvino_ir_frontend
- name: Use OpenVINO after CC
run: |
${{ env.OPENVINO_REPO }}/bin/intel64/Release/benchmark_app -niter 1 -nireq 1 \
-m ${{ env.MODELS_PATH }}/models/test_model/test_model_fp32.xml -d CPU
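The job above is a two-phase conditional-compilation build: phase 1 compiles with `SELECTIVE_BUILD=COLLECT` plus ITT profiling, a `benchmark_app` run under `sea_runtool.py` emits CSV usage statistics, and phase 2 reconfigures with `SELECTIVE_BUILD=ON` pointing at those CSVs so unexercised code paths are compiled out. A tiny helper summarizing the per-phase CMake flags — the function is hypothetical, the flags are the ones used above (`<build_dir>` stands in for the workflow's `BUILD_DIR`):

```shell
# Return the CMake flags that distinguish the two conditional-compilation phases.
phase_flags() {
  case "$1" in
    collect) echo "-DSELECTIVE_BUILD=COLLECT -DENABLE_PROFILING_ITT=ON" ;;
    apply)   echo "-DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=<build_dir>/*.csv" ;;
  esac
}
```

The final `benchmark_app` run then acts as a smoke test that the trimmed binary still executes the profiled workload.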

.github/workflows/linux_debian.yml vendored Normal file

@@ -0,0 +1,433 @@
name: Linux Debian (Ubuntu 20.04, Python 3.11)
on:
schedule:
# run daily at 00:00
- cron: '0 0 * * *'
workflow_dispatch:
# pull_request:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# push:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# branches:
# - master
concurrency:
group: ${{ github.head_ref || github.run_id }}-linux-debian
cancel-in-progress: true
jobs:
Build:
# TODO: remove. Temporary measure to prevent the workflow from scheduling on forks.
if: ${{ github.repository_owner == 'openvinotoolkit' }}
defaults:
run:
shell: bash
runs-on: ubuntu-20.04-8-cores
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
CMAKE_CXX_COMPILER_LAUNCHER: ccache
CMAKE_C_COMPILER_LAUNCHER: ccache
CMAKE_CXX_LINKER_LAUNCHER: ccache
CMAKE_C_LINKER_LAUNCHER: ccache
BUILD_TYPE: Release
OPENVINO_REPO: ${{ github.workspace }}/openvino
BUILD_DIR: ${{ github.workspace }}/build
INSTALL_DIR: ${{ github.workspace }}/install
INSTALL_TEST_DIR: ${{ github.workspace }}/install/tests
LAYER_TESTS_INSTALL_DIR: ${{ github.workspace }}/install/tests/layer_tests
OV_TEMP: ${{ github.workspace }}/openvino_temp
SAMPLES_INSTALL_DIR: /usr/share/openvino/samples
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
steps:
- name: Clone OpenVINO
uses: actions/checkout@v4
with:
path: 'openvino'
submodules: 'true'
- name: Create Directories
run: |
mkdir -p ${{ env.BUILD_DIR }}
mkdir -p ${{ env.INSTALL_DIR }}
- name: Setup Python 3.11
uses: actions/setup-python@v4
with:
python-version: '3.11'
#
# Dependencies
#
- name: Install build dependencies
run: |
sudo -E apt update
sudo -E ${{ env.OPENVINO_REPO }}/install_build_dependencies.sh
# 'clang' is used as a default compiler
sudo apt --assume-yes install clang
sudo apt --assume-yes install --no-install-recommends libopencv-imgproc-dev libopencv-imgcodecs-dev
# Speed up build
sudo apt -y --no-install-recommends install unzip
wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
unzip ninja-linux.zip
sudo cp -v ninja /usr/local/bin/
# Speed up tests
git clone https://github.com/google/gtest-parallel.git
- name: Install python dependencies
run: |
python3 -m pip install --upgrade pip
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/wheel/requirements-dev.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements.txt
# For running Python API tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
# For running Paddle frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/paddle/tests/requirements.txt
# For running ONNX frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/onnx/tests/requirements.txt
# For running TensorFlow frontend unit tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/tensorflow/tests/requirements.txt
# For MO unit tests
python3 -m pip install -U pip
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/frontends/paddle/tests/requirements.txt
# for Python API tests
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/src/bindings/python/requirements_test.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements.txt
- name: Setup ccache
uses: hendrikmuhs/ccache-action@v1.2
with:
max-size: "2000M"
# Should save cache only if run in the master branch of the base repo
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
verbose: 2
key: ${{ github.job }}-linux-debian
restore-keys: |
${{ github.job }}-linux-debian
- name: Get tools versions
run: |
ninja --version
ccache --version
python3 --version
cmake --version
#
# Build
#
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: CMake configure
run: |
cmake \
-GNinja \
-DENABLE_CPPLINT=OFF \
-DCMAKE_BUILD_TYPE=${{ env.BUILD_TYPE }} \
-DCMAKE_COMPILE_WARNING_AS_ERROR=OFF \
-DENABLE_PYTHON=ON \
-DENABLE_INTEL_GNA=OFF \
-DENABLE_TESTS=ON \
-DENABLE_FASTER_BUILD=ON \
-DENABLE_STRICT_DEPENDENCIES=OFF \
-DENABLE_SYSTEM_SNAPPY=ON \
-DENABLE_PYTHON_PACKAGING=ON \
-DCPACK_GENERATOR=DEB \
-S ${{ env.OPENVINO_REPO }} \
-B ${{ env.BUILD_DIR }}
- name: Clean ccache stats
run: ccache --zero-stats --show-config
- name: Build
run: cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config ${{ env.BUILD_TYPE }}
- name: Show ccache stats
run: ccache --show-stats
- name: CMake Layer Tests
run: cmake -GNinja -S ${{ env.OPENVINO_REPO }}/tests/layer_tests -B ${{ env.BUILD_DIR }}/layer_tests
- name: Build Layer Tests
run: cmake --build ${{ env.BUILD_DIR }}/layer_tests --parallel --config ${{ env.BUILD_TYPE }}
# to check that wheel packages tested later contain all the dependencies like TBB or pugixml
- name: Remove debian dependencies
run: sudo apt-get remove libtbb2 libpugixml1v5 -y
- name: Install wheel packages
run: cmake -DCOMPONENT=python_wheels -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: Install Python Samples
run: cmake -DCOMPONENT=python_samples -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: Install Layer Tests
run: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/layer_tests/cmake_install.cmake
- name: Install tests
run: cmake -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -DCOMPONENT=tests -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: List install test files
run: ls -alR ${{ env.INSTALL_DIR }}
- name: Install python wheels
run: python3 -m pip install openvino-dev --find-links=${{ env.INSTALL_DIR }}/tools
- name: Build Debian packages
run: |
sudo apt-get install libtbb-dev libpugixml-dev -y
cmake --build ${{ env.BUILD_DIR }} --config ${{ env.BUILD_TYPE }} --target package --parallel
- name: Install Debian packages
run: |
pushd ${{ env.BUILD_DIR }}
# install debian packages from previous release
sudo apt-get -y update
sudo apt-get install --no-install-recommends gnupg wget -y
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
echo "deb https://apt.repos.intel.com/openvino/2023 ubuntu20 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2023.list
sudo apt-get update
sudo apt-get install openvino -y
# install our local one and make sure the conflicts are resolved
sudo apt-get install --no-install-recommends dpkg-dev -y
rm -r _CPack_Packages
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
echo "deb [trusted=yes] file:${{ env.BUILD_DIR }} ./" | sudo tee /etc/apt/sources.list.d/openvino-local.list
sudo apt-get update
sudo apt-get install openvino -y
popd
- name: List install files
run: ls -alR ${{ env.INSTALL_DIR }}
- name: Build cpp samples - gcc
run: ${{ env.SAMPLES_INSTALL_DIR }}/cpp/build_samples.sh -i ${{ env.INSTALL_DIR }}
- name: Build c samples
run: ${{ env.SAMPLES_INSTALL_DIR }}/c/build_samples.sh -i ${{ env.INSTALL_DIR }}
- name: OpenVINO Core Unit Tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_core_unit_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU* \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVCoreUT.xml
- name: Proxy Plugin Tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVProxyTests.xml
- name: Hetero Unit Tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_hetero_unit_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVHeteroUnitTests.xml
- name: Hetero Func Tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_hetero_func_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVHeteroFuncTests.xml
- name: ONNX frontend tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_onnx_frontend_tests --gtest_print_time=1 --gtest_filter=-*IE_GPU*:*FrontEndLoadFromTest.testLoadFromTwoStreams*:*FrontEndLoadFromTest.testLoadFromTwoFiles* \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ONNXFrontend.xml
- name: TensorFlow frontend tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_frontend_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowFrontend.xml
# Disabled in Azure: https://github.com/openvinotoolkit/openvino/blob/master/.ci/azure/linux.yml#L403
# - name: PaddlePaddle frontend tests
# run: |
# ${{ env.INSTALL_TEST_DIR }}/paddle_tests --gtest_print_time=1 --gtest_filter=*smoke* \
# --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-PaddleTests.xml
- name: TensorFlow Common tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_common_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowCommonFrontend.xml
- name: TensorFlow Lite frontend tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_tensorflow_lite_frontend_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TensorFlowLiteFrontend.xml
- name: Snippets func tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_snippets_func_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-SnippetsFuncTests.xml
- name: CPU plugin unit tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_cpu_unit_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-CPUUnitTests.xml
- name: AUTO UT
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_auto_unit_tests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ov_auto_unit_tests.xml
- name: Template plugin tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_template_func_tests --gtest_print_time=1 \
--gtest_filter=*smoke* \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-TemplateFuncTests.xml
- name: Inference Engine C API tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/InferenceEngineCAPITests --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-InferenceEngineCAPITests.xml
- name: OpenVINO C API tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
${{ env.INSTALL_TEST_DIR }}/ov_capi_test --gtest_print_time=1 \
--gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OpenVINOCAPITests.xml
- name: nGraph and IE Python Bindings Tests
run: |
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
python3 -m pytest -s ${{ env.INSTALL_TEST_DIR }}/pyngraph ${{ env.PYTHON_STATIC_ARGS }} \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-Pyngraph.xml \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyngraph/tests/test_onnx/test_zoo_models.py \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyngraph/tests/test_onnx/test_backend.py
- name: Python API 2.0 Tests
run: |
# For python imports to import pybind_mock_frontend
export PYTHONPATH=${{ env.INSTALL_TEST_DIR }}:${{ env.OPENVINO_REPO }}/tools/mo:$PYTHONPATH
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
python3 -m pytest -sv ${{ env.INSTALL_TEST_DIR }}/pyopenvino \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-Pyopenvino.xml \
--ignore=${{ env.INSTALL_TEST_DIR }}/pyopenvino/tests/test_utils/test_utils.py
- name: ONNX Frontend Python Tests
run: |
# For python imports to import pybind_mock_frontend
export PYTHONPATH=${{ env.INSTALL_TEST_DIR }}:${{ env.OPENVINO_REPO }}/tools/mo:$PYTHONPATH
export LD_LIBRARY_PATH=${{ env.INSTALL_TEST_DIR }}:$LD_LIBRARY_PATH
python3 -m pytest -sv ${{ env.OPENVINO_REPO }}/src/frontends/onnx/tests \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-ONNX-FE-PYTHON.xml \
--ignore=${{ env.OPENVINO_REPO }}/src/frontends/onnx/tests/test_python/test_zoo_models.py \
--ignore=${{ env.OPENVINO_REPO }}/src/frontends/onnx/tests/test_python/test_backend.py
- name: Model Optimizer UT
run: |
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.OPENVINO_REPO }}/tools/ovc/:${{ env.LAYER_TESTS_INSTALL_DIR }}:${{ env.INSTALL_TEST_DIR }}:${{ env.INSTALL_DIR }}/python/python3.11:$PYTHONPATH
# Need to be reinstalled to have correct numpy version
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
python3 -m pytest -s ${{ env.INSTALL_TEST_DIR }}/mo/unit_tests \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-ModelOptimizer.xml
# run only a subset of tests, not the full smoke filter, to save time in post-commit
- name: CPU FuncTests
run: ${{ env.INSTALL_TEST_DIR }}/ov_cpu_func_tests --gtest_filter=*OVCLass*:*CoreThreadingTests* --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-ov_cpu_func_tests.xml
- name: CMake Samples Tests
run: cmake -GNinja -S ${{ env.OPENVINO_REPO }}/tests/samples_tests -B ${{ env.BUILD_DIR }}/samples_tests
- name: Install Samples Tests
run: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/samples_tests/cmake_install.cmake
- name: Samples Smoke Tests
run: |
python3 -m pip install --ignore-installed PyYAML -r ${{ env.INSTALL_TEST_DIR }}/smoke_tests/requirements.txt
export LD_LIBRARY_PATH=${{ env.IE_APP_PATH }}:$LD_LIBRARY_PATH
python3 -m pytest -sv ${{ env.INSTALL_TEST_DIR }}/smoke_tests -k "not GNA" \
--env_conf ${{ env.INSTALL_TEST_DIR }}/smoke_tests/env_config.yml \
--junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-SamplesSmokeTests.xml
env:
IE_APP_PATH: ${{ env.INSTALL_DIR }}/samples_bin
IE_APP_PYTHON_PATH: ${{ env.INSTALL_DIR }}/share/openvino/samples/python
LD_LIBRARY_PATH: ${{ env.INSTALL_DIR }}/samples_bin
SHARE: ${{ env.INSTALL_TEST_DIR }}/smoke_tests/samples_smoke_tests_data
WORKSPACE: ${{ env.INSTALL_DIR }}
- name: TensorFlow 1 Layer Tests - Legacy FE
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_tests/test_tf_Roll.py --ir_version=10 --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tf_Roll.xml
- name: TensorFlow Lite Layer Tests - TFL FE
run: |
python3 -m pip install -r ${{ env.LAYER_TESTS_INSTALL_DIR }}/requirements.txt
export PYTHONPATH=${{ env.OPENVINO_REPO }}/tools/mo/:${{ env.LAYER_TESTS_INSTALL_DIR }}:$PYTHONPATH
# Need to be reinstalled to have correct numpy version
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_caffe.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_kaldi.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_onnx.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_tf2.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_dev.txt
python3 -m pip install -r ${{ env.OPENVINO_REPO }}/tools/mo/requirements_mxnet.txt
python3 -m pytest ${{ env.LAYER_TESTS_INSTALL_DIR }}/tensorflow_lite_tests/ --junitxml=${{ env.INSTALL_TEST_DIR }}/TEST-tfl_fe.xml
env:
TEST_DEVICE: CPU
- name: Upload Test Results
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: test-results
path: ${{ env.INSTALL_TEST_DIR }}/TEST*.xml
if-no-files-found: 'error'
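The unit-test steps above all repeat the same invocation shape: prepend the install test directory to the library path, run a gtest binary, and write a JUnit XML report. As a sketch, that pattern can be factored into a helper (`run_suite` is a name of my own, not part of the workflow; `INSTALL_TEST_DIR` mirrors the env var):

```shell
# Generic form of the gtest steps above. run_suite is a hypothetical helper.
run_suite() {
  local suite="$1" label="$2"
  LD_LIBRARY_PATH="$INSTALL_TEST_DIR:$LD_LIBRARY_PATH" \
    "$INSTALL_TEST_DIR/$suite" --gtest_print_time=1 \
    --gtest_output="xml:$INSTALL_TEST_DIR/TEST-$label.xml"
}
# Example: run_suite ov_core_unit_tests OVCoreUT
```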

.github/workflows/linux_onnxruntime.yml

@@ -0,0 +1,182 @@
name: Linux ONNX Runtime (Ubuntu 20.04, Python 3.11)
on:
workflow_dispatch:
schedule:
# run daily at 00:00
- cron: '0 0 * * *'
# pull_request:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# push:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# branches:
# - master
concurrency:
group: ${{ github.head_ref || github.run_id }}-linux-onnx-runtime
cancel-in-progress: true
jobs:
Build:
# TODO: remove. Temporary measure to prevent the workflow from scheduling on forks.
if: ${{ github.repository_owner == 'openvinotoolkit' }}
defaults:
run:
shell: bash
runs-on: ubuntu-20.04-8-cores
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
CMAKE_CXX_COMPILER_LAUNCHER: ccache
CMAKE_C_COMPILER_LAUNCHER: ccache
CMAKE_CXX_LINKER_LAUNCHER: ccache
CMAKE_C_LINKER_LAUNCHER: ccache
BUILD_TYPE: Release
OPENVINO_REPO: ${{ github.workspace }}/openvino
ONNX_RUNTIME_REPO: ${{ github.workspace }}/onnxruntime
ONNX_RUNTIME_UTILS: ${{ github.workspace }}/openvino/.ci/azure/ci_utils/onnxruntime
ONNX_RUNTIME_BUILD_DIR: ${{ github.workspace }}/onnxruntime/build
BUILD_DIR: ${{ github.workspace }}/build
INSTALL_DIR: ${{ github.workspace }}/install/openvino
steps:
- name: Clone OpenVINO
uses: actions/checkout@v4
with:
path: 'openvino'
submodules: 'true'
- name: Clone ONNX Runtime
run: |
branch=`tr -s '\n ' < ${{ env.ONNX_RUNTIME_UTILS }}/version`
git clone --branch $branch --single-branch --recursive https://github.com/microsoft/onnxruntime.git ${{ env.ONNX_RUNTIME_REPO }}
- name: Create Directories
run: |
mkdir -p ${{ env.BUILD_DIR }}
mkdir -p ${{ env.INSTALL_DIR }}
- name: Setup Python 3.11
uses: actions/setup-python@v4
with:
python-version: '3.11'
#
# Dependencies
#
- name: Install build dependencies
run: |
sudo -E ${{ env.OPENVINO_REPO }}/install_build_dependencies.sh
- name: Setup ccache
uses: hendrikmuhs/ccache-action@v1.2
with:
max-size: "2000M"
# Should save cache only if run in the master branch of the base repo
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
verbose: 2
key: ${{ github.job }}-linux-onnx-runtime
restore-keys: |
${{ github.job }}-linux-onnx-runtime
#
# Build
#
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- name: CMake configure
run: |
cmake \
-GNinja \
-DCMAKE_BUILD_TYPE=${{ env.BUILD_TYPE }} \
-DCMAKE_COMPILE_WARNING_AS_ERROR=OFF \
-DENABLE_INTEL_GNA=OFF \
-DENABLE_INTEL_GPU=OFF \
-DENABLE_CPPLINT=OFF \
-DENABLE_PROFILING_ITT=OFF \
-DENABLE_SAMPLES=OFF \
-DENABLE_OV_TF_FRONTEND=OFF \
-DENABLE_OV_TF_LITE_FRONTEND=OFF \
-DENABLE_OV_PADDLE_FRONTEND=OFF \
-DENABLE_OV_PYTORCH_FRONTEND=OFF \
-S ${{ env.OPENVINO_REPO }} \
-B ${{ env.BUILD_DIR }}
- name: Clean ccache stats
run: ccache --zero-stats --show-config
- name: Build
run: cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config ${{ env.BUILD_TYPE }}
- name: Show ccache stats
run: ccache --show-stats
- name: Install OpenVINO
run: cmake -DCMAKE_INSTALL_PREFIX=${{ env.INSTALL_DIR }} -P ${{ env.BUILD_DIR }}/cmake_install.cmake
- name: Build Lin ONNX Runtime
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
${{ env.ONNX_RUNTIME_REPO }}/build.sh \
--config RelWithDebInfo \
--use_openvino CPU_FP32 \
--build_shared_lib \
--parallel \
--skip_tests \
--compile_no_warning_as_error \
--build_dir ${{ env.ONNX_RUNTIME_BUILD_DIR }}
env:
CXXFLAGS: "-Wno-error=deprecated-declarations"
- name: Run onnxruntime_test_all
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
skip_tests=$(tr -s '\n ' ':' < ${{ env.ONNX_RUNTIME_UTILS }}/skip_tests)
./onnxruntime_test_all --gtest_filter=-$skip_tests
working-directory: ${{ env.ONNX_RUNTIME_BUILD_DIR }}/RelWithDebInfo
- name: Run onnxruntime_shared_lib_test
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
./onnxruntime_shared_lib_test --gtest_filter=-CApiTest.test_custom_op_openvino_wrapper_library
working-directory: ${{ env.ONNX_RUNTIME_BUILD_DIR }}/RelWithDebInfo
- name: Run onnxruntime_global_thread_pools_test
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
./onnxruntime_global_thread_pools_test
working-directory: ${{ env.ONNX_RUNTIME_BUILD_DIR }}/RelWithDebInfo
- name: Run onnxruntime_api_tests_without_env
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
./onnxruntime_api_tests_without_env
working-directory: ${{ env.ONNX_RUNTIME_BUILD_DIR }}/RelWithDebInfo
- name: Run pytorch-converted tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
./onnx_test_runner "${{ env.ONNX_RUNTIME_REPO }}/cmake/external/onnx/onnx/backend/test/data/pytorch-converted"
working-directory: ${{ env.ONNX_RUNTIME_BUILD_DIR }}/RelWithDebInfo
- name: Run pytorch-operator tests
run: |
source ${{ env.INSTALL_DIR }}/setupvars.sh
./onnx_test_runner "${{ env.ONNX_RUNTIME_REPO }}/cmake/external/onnx/onnx/backend/test/data/pytorch-operator"
working-directory: ${{ env.ONNX_RUNTIME_BUILD_DIR }}/RelWithDebInfo
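The `skip_tests` substitution in the `onnxruntime_test_all` step above turns a newline-separated skip-list file into a single colon-joined negative `--gtest_filter`. A self-contained sketch of the same `tr` trick (the test names and the `/tmp` path are made up for illustration):

```shell
# Build a gtest negative filter from a one-test-per-line skip list,
# the way the workflow does with ONNX_RUNTIME_UTILS/skip_tests.
printf 'TestA.case1\nTestB.case2\n' > /tmp/skip_tests   # illustrative names
skip_tests=$(tr -s '\n ' ':' < /tmp/skip_tests)         # newlines/spaces -> ':'
skip_tests=${skip_tests%:}                              # drop trailing ':'
echo "--gtest_filter=-$skip_tests"
```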


@@ -20,9 +20,8 @@ jobs:
Pylint-UT:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
with:
submodules: recursive
- name: Clone OpenVINO
- uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4


@@ -24,10 +24,8 @@ jobs:
linters:
runs-on: ubuntu-20.04
steps:
- name: Code checkout
uses: actions/checkout@v3
with:
submodules: recursive
- name: Clone OpenVINO
uses: actions/checkout@v4
- name: Setup Python
uses: actions/setup-python@v4


@@ -36,8 +36,6 @@ env:
SAMPLES_INSTALL_DIR: "${{ github.workspace }}\\install\\samples"
LAYER_TESTS_INSTALL_DIR: "${{ github.workspace }}\\install\\tests\\layer_tests"
BUILD_DIR: "${{ github.workspace }}\\build"
DATA_PATH: "${{ github.workspace }}\\testdata"
MODELS_PATH: "${{ github.workspace }}\\testdata"
OV_TEMP: "${{ github.workspace }}\\openvino_temp"
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
VCVARSPATH: "C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Auxiliary\\Build\\vcvarsall.bat"
@@ -50,25 +48,16 @@ jobs:
runs-on: windows-latest-8-cores
steps:
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: 'openvino'
submodules: 'recursive'
submodules: 'true'
- name: Clone OpenVINO Contrib
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: 'openvinotoolkit/openvino_contrib'
path: 'openvino_contrib'
submodules: 'recursive'
- name: Clone testdata for C API tests
uses: actions/checkout@v3
with:
repository: 'openvinotoolkit/testdata'
path: 'testdata'
submodules: 'recursive'
lfs: 'true'
#
# Dependencies
@@ -122,7 +111,7 @@ jobs:
#
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v1
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- uses: ilammy/msvc-dev-cmd@v1
@@ -265,8 +254,6 @@ jobs:
SAMPLES_INSTALL_DIR: "${{ github.workspace }}\\install\\samples"
LAYER_TESTS_INSTALL_DIR: "${{ github.workspace }}\\install\\tests\\layer_tests"
BUILD_DIR: "${{ github.workspace }}\\build"
DATA_PATH: "${{ github.workspace }}\\testdata"
MODELS_PATH: "${{ github.workspace }}\\testdata"
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
steps:
@@ -303,17 +290,9 @@ jobs:
ls "${{ env.INSTALL_TEST_DIR }}"
- name: Clone OpenVINO
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: 'openvino'
submodules: 'recursive'
- name: Clone OpenVINO Contrib
uses: actions/checkout@v3
with:
repository: 'openvinotoolkit/openvino_contrib'
path: 'openvino_contrib'
submodules: 'recursive'
- uses: actions/setup-python@v4
with:
@@ -645,6 +624,11 @@ jobs:
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_proxy_plugin_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVProxyTests.xml
- name: Hetero Unit Tests
shell: cmd
run: |
call "${{ env.INSTALL_DIR }}\\setupvars.bat" && ${{ env.INSTALL_TEST_DIR }}/ov_hetero_unit_tests --gtest_print_time=1 --gtest_output=xml:${{ env.INSTALL_TEST_DIR }}/TEST-OVHeteroUnitTests.xml
- name: Hetero Func Tests
shell: cmd
run: |


@@ -0,0 +1,168 @@
name: Tests on Windows Conditional Compilation (VS 2022, Python 3.11)
on:
workflow_dispatch:
schedule:
# run daily at 00:00
- cron: '0 0 * * *'
# pull_request:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# push:
# paths-ignore:
# - '**/docs/**'
# - 'docs/**'
# - '**/**.md'
# - '**.md'
# - '**/layer_tests_summary/**'
# - '**/conformance/**'
# branches:
# - master
concurrency:
group: ${{ github.head_ref || github.run_id }}-windows-cc
cancel-in-progress: true
env:
CMAKE_BUILD_TYPE: 'Release'
CMAKE_GENERATOR: 'Ninja'
CMAKE_CXX_COMPILER_LAUNCHER: sccache
CMAKE_C_COMPILER_LAUNCHER: sccache
OPENVINO_REPO: "${{ github.workspace }}\\openvino"
OPENVINO_CONTRIB_REPO: "${{ github.workspace }}\\openvino_contrib"
INSTALL_DIR: "${{ github.workspace }}\\install_pkg"
INSTALL_TEST_DIR: "${{ github.workspace }}\\install\\tests"
SAMPLES_INSTALL_DIR: "${{ github.workspace }}\\install\\samples"
LAYER_TESTS_INSTALL_DIR: "${{ github.workspace }}\\install\\tests\\layer_tests"
BUILD_DIR: "${{ github.workspace }}\\build"
BUILD_DIR_2: "${{ github.workspace }}\\build_s"
MODELS_PATH: "${{ github.workspace }}\\testdata"
OV_TEMP: "${{ github.workspace }}\\openvino_temp"
BUILD_TYPE: "Release"
PYTHON_STATIC_ARGS: -m "not dynamic_library and not template_plugin"
VCVARSPATH: "C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Auxiliary\\Build\\vcvarsall.bat"
jobs:
Build:
# TODO: remove. Temporary measure to prevent the workflow from scheduling on forks.
if: ${{ github.repository_owner == 'openvinotoolkit' }}
defaults:
run:
shell: pwsh
runs-on: windows-latest-8-cores
steps:
- name: Clone OpenVINO
uses: actions/checkout@v4
with:
path: 'openvino'
submodules: 'true'
- name: Clone test models
uses: actions/checkout@v4
with:
repository: 'openvinotoolkit/testdata'
path: 'testdata'
lfs: 'true'
#
# Dependencies
#
- uses: actions/setup-python@v4
with:
python-version: '3.11'
- name: Install build dependencies
run: |
choco install --no-progress ninja
#
# Build
#
- name: Get number of CPU cores
uses: SimenB/github-actions-cpu-cores@v2
id: cpu-cores
- uses: ilammy/msvc-dev-cmd@v1
- name: Setup sccache
uses: hendrikmuhs/ccache-action@v1.2
with:
variant: sccache
max-size: "2000M"
# Should save cache only if run in the master branch of the base repo
# github.ref_name is 'ref/PR_#' in case of the PR, and 'branch_name' when executed on push
save: ${{ github.ref_name == 'master' && 'true' || 'false' }}
key: ${{ github.job }}-windows-cc
restore-keys: |
${{ github.job }}-windows-cc
- name: CMake CC COLLECT
run: |
& "${{ env.VCVARSPATH }}" x64 && cmake -G Ninja `
-DENABLE_CPPLINT=OFF `
-DENABLE_GAPI_PREPROCESSING=OFF `
-DENABLE_PLUGINS_XML=ON `
-DCMAKE_COMPILE_WARNING_AS_ERROR=OFF `
-DCMAKE_BUILD_TYPE=${{ env.BUILD_TYPE }} `
-DENABLE_PROFILING_ITT=ON `
-DSELECTIVE_BUILD=COLLECT `
-S ${{ env.OPENVINO_REPO }} `
-B ${{ env.BUILD_DIR }}
- name: Build CC COLLECT
run: |
& "${{ env.VCVARSPATH }}" x64 && cmake --build ${{ env.BUILD_DIR }} --parallel ${{ steps.cpu-cores.outputs.count }} --config ${{ env.BUILD_TYPE }} `
--target openvino_intel_cpu_plugin openvino_ir_frontend benchmark_app sea_itt_lib
- name: List bin files
shell: cmd
run: dir ${{ env.OPENVINO_REPO }}\bin\ /s
- name: Code usage analysis
shell: cmd
working-directory: ${{ env.OPENVINO_REPO }}
run: |
set path=%path%;${{ env.OPENVINO_REPO }}\temp\tbb\bin
call "${{ env.VCVARSPATH }}" && python thirdparty\itt_collector\runtool\sea_runtool.py --bindir ${{ env.OPENVINO_REPO }}\bin\intel64\${{ env.BUILD_TYPE }} -o ${{ env.BUILD_DIR }}\itt_stat ! ${{ env.OPENVINO_REPO }}\bin\intel64\${{ env.BUILD_TYPE }}\benchmark_app.exe -niter 1 -nireq 1 -m ${{ env.MODELS_PATH }}\models\test_model\test_model_fp32.xml -d CPU
- name: List csv files
shell: cmd
run: dir ${{ env.BUILD_DIR }}\*.csv /s /p
- name: CMake CC ON
run: |
& "${{ env.VCVARSPATH }}" x64 && cmake -G "Visual Studio 17 2022" `
-DVERBOSE_BUILD=ON `
-DENABLE_CPPLINT=OFF `
-DENABLE_GAPI_PREPROCESSING=OFF `
-DENABLE_PROFILING_ITT=OFF `
-DSELECTIVE_BUILD=ON `
-DCMAKE_COMPILE_WARNING_AS_ERROR=OFF `
-DSELECTIVE_BUILD_STAT=${{ env.BUILD_DIR }}\*.csv `
-S ${{ env.OPENVINO_REPO }} `
-B ${{ env.BUILD_DIR_2 }}
- name: Build CC ON
run: |
& "${{ env.VCVARSPATH }}" x64 && cmake --build ${{ env.BUILD_DIR_2 }} --parallel ${{ steps.cpu-cores.outputs.count }} --config ${{ env.BUILD_TYPE }} `
--target openvino_intel_cpu_plugin openvino_ir_frontend benchmark_app
- name: List bin files ON
shell: cmd
run: dir ${{ env.OPENVINO_REPO }}\bin\ /s
- name: Check conditional_compilation_gen.h header
shell: cmd
run: type ${{ env.BUILD_DIR_2 }}\src\common\conditional_compilation\conditional_compilation_gen.h
- name: Use OpenVINO after CC
shell: cmd
run: |
set path=%path%;${{ env.OPENVINO_REPO }}\temp\tbb\bin
${{ env.OPENVINO_REPO }}\bin\intel64\${{ env.BUILD_TYPE }}\benchmark_app.exe -niter 1 -nireq 1 -m ${{ env.MODELS_PATH }}\models\test_model\test_model_fp32.xml -d CPU


@@ -47,6 +47,7 @@ message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
message (STATUS "CPACK_GENERATOR ....................... " ${CPACK_GENERATOR})
message (STATUS "CMAKE_C_COMPILER_ID ................... " ${CMAKE_C_COMPILER_ID})
message (STATUS "CMAKE_CXX_COMPILER_ID ................. " ${CMAKE_CXX_COMPILER_ID})
message (STATUS "CMAKE_CXX_STANDARD .................... " ${CMAKE_CXX_STANDARD})
if(OV_GENERATOR_MULTI_CONFIG)
string(REPLACE ";" " " config_types "${CMAKE_CONFIGURATION_TYPES}")
message (STATUS "CMAKE_CONFIGURATION_TYPES ............. " ${config_types})
@@ -105,8 +106,6 @@ function(openvino_developer_export_targets)
if(TARGET "${target_name}")
get_target_property(original_name ${target_name} ALIASED_TARGET)
if(TARGET "${original_name}")
message(STATUS "The name ${target_name} is an ALIAS for ${original_name}. "
"It will be exported to the OpenVINODeveloperPackage with the original name.")
list(REMOVE_ITEM ${EXPORT_COMPONENT} ${target_name})
list(APPEND ${EXPORT_COMPONENT} ${original_name})
endif()


@@ -1,53 +1,88 @@
# How to contribute to the OpenVINO repository
# Contributing to OpenVINO
We welcome community contributions to OpenVINO™. Please read the following guide to learn how to find ideas for contribution, follow best practices for pull requests, and test your changes with our established checks.
## How to contribute to the OpenVINO project
OpenVINO™ is always looking for opportunities to improve and your contributions
play a big role in this process. There are several ways you can make the
product better:
## Before you start contributing you should
### Provide Feedback
- Make sure you agree to contribute your code under [OpenVINO™ (Apache 2.0) license](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE).
- Decide what you're going to contribute. If you are not sure what you want to work on, check out [Contributions Welcome](https://github.com/openvinotoolkit/openvino/issues/17502). Check whether anyone is already working on the subject you choose; if so, you may still contribute by providing support and suggestions for the given issue or pull request.
- If you are going to fix a bug, check if it still exists. You can do it by building the latest master branch and making sure that the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases like 2020.2, for example (see more details about our [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
* **Report bugs / issues**
If you experience faulty behavior in OpenVINO or its components, you can
[create a new issue](https://github.com/openvinotoolkit/openvino/issues)
in the GitHub issue tracker.
* **Propose new features / improvements**
If you have a suggestion for improving OpenVINO or want to share your ideas, you can open a new
[GitHub Discussion](https://github.com/openvinotoolkit/openvino/discussions).
If your idea is already well defined, you can also create a
[Feature Request Issue](https://github.com/openvinotoolkit/openvino/issues/new?assignees=octocat&labels=enhancement%2Cfeature&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
In both cases, provide a detailed description, including use cases, benefits, and potential challenges.
If your points are especially well aligned with the product vision, they will be included in the
[development roadmap](./ROADMAP.md).
User feedback is crucial for OpenVINO development and even if your input is not immediately prioritized,
it may be used at a later time or undertaken by the community, regardless of the official roadmap.
### Contribute Code Changes
* **Fix Bugs or Develop New Features**
If you want to help improve OpenVINO, choose one of the issues reported in
[GitHub Issue Tracker](https://github.com/openvinotoolkit/openvino/issues) and
[create a Pull Request](./CONTRIBUTING_PR.md) addressing it. Consider one of the
tasks listed as [first-time contributions](https://github.com/openvinotoolkit/openvino/issues/17502).
If the feature you want to develop is more complex or not well defined by the reporter,
it is always a good idea to [discuss it](https://github.com/openvinotoolkit/openvino/discussions)
with OpenVINO developers first. Before creating a new PR, check if nobody is already
working on it. In that case, you may still help, after aligning with the other developer.
Importantly, always check if the change hasn't been implemented before you start working on it!
You can build OpenVINO using the latest master branch and make sure that it still needs your
changes. Also, do not address issues that only affect older non-LTS releases, like 2022.2.
* **Develop a New Device Plugin**
Since the market of computing devices is constantly evolving, OpenVINO is always open to extending
its support for new hardware. If you want to run inference on a device that is currently not supported,
you can see how to develop a new plugin for it in the
[Plugin Developer Guide](https://docs.openvino.ai/canonical/openvino_docs_ie_plugin_dg_overview.html).
### Improve documentation
* **OpenVINO developer documentation** is contained entirely in this repository, under the
[./docs/dev](https://github.com/openvinotoolkit/openvino/tree/master/docs/dev) folder.
* **User documentation** is built from several sources and published at
[docs.openvino.ai](https://docs.openvino.ai), which is the recommended place for reading
these documents. Use the files maintained in this repository only for editing purposes.
* The easiest way to help with documentation is to review it and provide feedback on the
existing articles. Whether you notice a mistake, see the possibility of improving the text,
or think more information should be added, you can reach out to any of the documentation
contributors to discuss the potential changes.
You can also create a Pull Request directly, following the [editor's guide](./docs/CONTRIBUTING_DOCS.md).
## "Fork & Pull Request model" for code contribution
### Promote and Support OpenVINO
### The instruction in brief
* **Popularize OpenVINO**
Articles, tutorials, blog posts, demos, videos, and any other involvement
in the OpenVINO community is always a welcome contribution. If you discuss
or present OpenVINO on various social platforms, you are raising awareness
of the product among A.I. enthusiasts and enabling other people to discover
the toolkit. Feel free to reach out to OpenVINO developers if you need help
with making such community-based content.
- Register at GitHub. Create your fork of the OpenVINO™ repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
- Install Git.
- Set your user name and email address in Git configuration according to the GitHub account (see [First-Time-Git-Setup](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup) for details).
- Choose a task for yourself. It may be a bugfix or an entirely new piece of code.
- Choose a base branch for your work. More details about branches and policies are here: [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)
- Clone your fork to your computer.
- Create a new branch (give it a meaningful name) from the base branch of your choice.
- Modify / add the code, following our [Coding Style Guide](./docs/dev/coding_style.md).
- If you want to add a new sample, have a look at the [Guide for contributing to C++/C/Python IE samples](https://github.com/openvinotoolkit/openvino/wiki/SampleContribute).
- If you want to contribute a new documentation guide, follow the [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation).
- Run testsuite locally:
- execute each test binary from the artifacts directory, e.g. `<source dir>/bin/intel64/Release/ieFuncTests`
- When you are done, make sure that your branch is up to date with the latest state of the branch you want to contribute to (e.g. `git fetch upstream && git merge upstream/master`). Then push your branch to your GitHub fork and create a pull request from your branch to the base branch (see [using-pull-requests](https://help.github.com/articles/using-pull-requests) for details).
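The branching steps above can be sketched as a shell session. The branch and commit names below are hypothetical examples, and a throwaway local repository stands in for your clone of the fork:

```shell
set -e
# Throwaway local repo; in practice you would clone your fork of openvino.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.name "Your Name"              # match your GitHub account
git config user.email "you@example.com"
git commit -q --allow-empty -m "stand-in for upstream/master"
# Create a new branch with a meaningful name from the base branch.
git checkout -q -b docs/fix-contributing-typo
# ...edit files here, following the Coding Style Guide...
git commit -q --allow-empty -m "[DOCS] Fix a typo in CONTRIBUTING.md"
git log --oneline -1
```

After this, you would push the branch to your fork and open a pull request against the chosen base branch.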
## Making a good pull request
Following these guidelines will increase the likelihood of your pull request being accepted:
- One PR should address one issue.
- Make sure your changes build cleanly on your local system.
- Choose the right base branch, based on our [Branch Guidelines](https://github.com/openvinotoolkit/openvino/wiki/Branches).
- Follow the [Coding Style Guide](./docs/dev/coding_style.md) for your code.
- Document your contribution, if you decide it may benefit OpenVINO users. You may do it yourself by editing the files in the "docs" directory or contact someone working with documentation to provide them with the right information.
- Cover your changes with tests.
- Add the license statement at the top of new files [C++ example](https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/classification_sample_async/main.cpp#L1-L2), [Python example](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/hello_classification.py#L3-L4).
- Add proper information to the PR: a meaningful title, the reason why you made the commit, and a link to the issue page, if it exists.
- Remove changes unrelated to the PR.
- If it is still WIP and you want to check CI test results early, use a _Draft_ PR.
- Submit your PR and become an OpenVINO™ contributor!
* **Help Other Community Members**
If you are an experienced OpenVINO user and want to help, you can always
share your expertise with the community. Check GitHub Discussions and
Issues to see if you can help someone.
## Testing and merging pull requests
Your pull request will be automatically tested by OpenVINO™'s precommit (testing statuses are automatically reported as "green" or "red" circles in precommit steps on the PR page). If any builders fail, you need to fix the issues before the PR can be merged. If you push any changes to your branch on GitHub, the tests will re-run automatically; there is no need to close the pull request and open a new one.
When an assigned reviewer accepts the pull request and the pre-commit is "green", the review status is set to "Approved", which informs OpenVINO™ maintainers that they can merge your pull request.
## License
By contributing to the OpenVINO project, you agree that your contributions will be
licensed under the terms stated in the [LICENSE](./LICENSE.md) file.

CONTRIBUTING_DOCS.md Normal file
@@ -0,0 +1,111 @@
# OpenVINO Documentation Guide
## Basic article structure
OpenVINO documentation is built using Sphinx and reStructuredText, so the basic
reStructuredText formatting rules apply:
### White Spaces
OpenVINO documentation is developed to be easily readable in both html and
reStructuredText. Here are some suggestions on how to make it render nicely
and improve document clarity.
### Headings (including the article title)
They are made by "underscoring" text with punctuation marks (at least as
many marks as letters in the underscored header). We use the following convention:
```
H1
====================
H2
####################
H3
++++++++++++++++++++
H4
--------------------
H5
....................
```
### Line length
In programming, a limit of 80 characters per line is a common best practice. It also
applies fairly well to reading natural language. For this reason, we aim for lines of
around 70 to 100 characters. The limit is not a strict rule but rather a guideline to
follow in most cases. The line breaks will not translate to html, and rightly so, but
they will make reading and editing documents in GitHub or an editor much easier.
### Tables
Tables may be difficult to implement well on websites. For example, longer portions
of text, like descriptions, may make them difficult to read (e.g. due to improper cell
widths or heights). Complex tables may also be difficult to read in source files.
To prevent that, check the [table directive documentation](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#table-directives)
and see our custom directives. Use the following guidelines for easier editing:
* For very big and complex data sets: use a list instead of a table or remove
the problematic content from the table and implement it differently.
* For very big and complex data sets that need to use tables: use an external
file (e.g. PDF) and link to it.
* For medium tables that look bad in source (e.g. due to long lines of text),
use the reStructuredText list table format.
* For medium and small tables, use the reStructuredText grid or simple table formats.
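For instance, a medium table with longer descriptions can be written as a reStructuredText list table (the data below is a made-up example; see the Sphinx table directives for all options):

```rst
.. list-table:: Example: supported archive formats
   :widths: 20 80
   :header-rows: 1

   * - Format
     - Description
   * - ``.tar.gz``
     - A gzip-compressed tar archive, commonly used on Linux.
   * - ``.zip``
     - A zip archive, commonly used on Windows.
```

The list-table form keeps long cell text readable in source, since each cell sits on its own line.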
## Cross-linking
There are several directives Sphinx uses for linking, each has its purpose and format.
Follow these guidelines for consistent results:
* Avoid absolute references to internal documents as much as possible (link to the source file, not the html page).
* Note that Sphinx uses the "back-tick" character and not the "inverted comma" => ` vs. '
* When a file path starts at the current directory, begin it with "./".
* Always add a space before the opening angle bracket ("<") of the target file.
Use the following formatting for different links:
* link to an external page / file
* `` `text <url>`__ ``
* use a double underscore for consistency
* link to an internal documentation page / file
* `` :doc:`a docs page <relative file path>` ``
* Link to an rst or md file within our documentation, so that it renders properly in html
* link to a header on the same page
* `` `a header in the same article <this-is-section-header-title>`__ ``
* anchors are created automatically for all existing headers
* such anchor looks like the header, with minor adjustments:
* all letters are lower case,
* remove all special glyphs, like brackets,
* replace spaces with hyphens
* Create an anchor in an article
* `` .. _anchor-in-the-target-article: ``
* put it before the header to which you want to link
* See the rules for naming anchors / labels at the bottom of this article
* link to an anchor on a different page in our documentation
* `` :ref:`the created anchor <anchor-in-the-target-article>` ``
* link to the anchor using just its name
* anchors / labels
Sphinx uses labels to create html anchors, which can be linked to from anywhere in documentation.
Although they may be put at the top of any article to make linking to it very easy, we do not use
this approach. Every label definition starts with an underscore, but the underscore is
not used in links. Most importantly, every label needs to be globally unique, so it is
good practice to start labels with a clear identifier of the article they reside in.
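Putting the label rules together, here is a minimal sketch of defining an anchor and linking to it from another article (the label and header names are hypothetical):

```rst
.. _troubleshooting-gpu-driver-setup:

GPU Driver Setup
--------------------

...

See :ref:`the GPU driver setup steps <troubleshooting-gpu-driver-setup>` for details.
```

The label is placed directly before the header it targets, and the `:ref:` role links to it by name from anywhere in the documentation.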

CONTRIBUTING_PR.md Normal file
@@ -0,0 +1,63 @@
# How to Prepare a Good PR
OpenVINO is an open-source project and you can contribute to its code directly.
To do so, follow these guidelines for creating Pull Requests, so that your
changes get the highest chance of being merged.
## General Rules of a Good Pull Request
* Create your own fork of the repository and use it to create PRs.
Avoid creating change branches in the main repository.
* Choose a proper branch for your work and create your own branch based on it.
* Give your branches, commits, and Pull Requests meaningful names and descriptions.
It helps to track changes later. If your changes cover a particular component,
you can indicate it in the PR name as a prefix, for example: ``[DOCS] PR name``.
* Follow the [OpenVINO code style guide](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/coding_style.md).
* Make your PRs small - each PR should address one issue. Remove all changes
unrelated to the PR.
* Document your contribution! If your changes may impact how the user works with
OpenVINO, provide the information in proper articles. You can do it yourself,
or contact one of OpenVINO documentation contributors to work together on
developing the right content.
* For Work In Progress, or checking test results early, use a Draft PR.
## Ensure Change Quality
Your pull request will be automatically tested by OpenVINO™'s pre-commit and marked
as "green" if it is ready for merging. If any builders fail, the status is "red" and
you need to fix the issues listed in the console logs. Any change to the PR branch will
automatically trigger the checks, so you don't need to recreate the PR; just wait
for the updated results.
Regardless of the automated tests, you should ensure the quality of your changes:
* Test your changes locally:
* Make sure to double-check your code.
* Run tests locally to identify and fix potential issues (execute test binaries
from the artifacts directory, e.g. ``<source dir>/bin/intel64/Release/ieFuncTests``)
* Before creating a PR, make sure that your branch is up to date with the latest
  state of the branch you want to contribute to
  (e.g. ``git fetch upstream && git merge upstream/master``).
## Branching Policy
* The "master" branch is used for development and constitutes the base for each new release.
* Each OpenVINO release has its own branch: ``releases/<year>/<release number>``.
* The final release each year is considered a Long Term Support version,
which means it remains active.
* Contributions are accepted only to active branches, which are:
* the "master" branch for future releases,
* the most recently published version for fixes,
* LTS versions (for two years from their release dates).
## Need Additional Help? Check these Articles
* [How to create a fork](https://help.github.com/articles/fork-a-repo)
* [Install Git](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup)
* If you want to add a new sample, have a look at the Guide for contributing
  to C++/C/Python IE samples, and add the license statement at the top of new
  files (see the C++ and Python examples).


@@ -5,7 +5,7 @@
[![Anaconda Status](https://anaconda.org/conda-forge/openvino/badges/version.svg)](https://anaconda.org/conda-forge/openvino)
[![brew Status](https://img.shields.io/homebrew/v/openvino)](https://formulae.brew.sh/formula/openvino)
[![PyPI Downloads](https://pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
[![PyPI Downloads](https://static.pepy.tech/badge/openvino)](https://pepy.tech/project/openvino)
[![Anaconda Downloads](https://anaconda.org/conda-forge/libopenvino/badges/downloads.svg)](https://anaconda.org/conda-forge/openvino/files)
[![brew Downloads](https://img.shields.io/homebrew/installs/dy/openvino)](https://formulae.brew.sh/formula/openvino)
</div>
@@ -68,24 +68,24 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html">ARM CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_CPU.html">ARM CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
@@ -103,22 +103,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><a href="https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
@@ -155,10 +155,9 @@ The list of OpenVINO tutorials:
## System requirements
The system requirements vary depending on platform and are available on dedicated pages:
- [Linux](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_raspbian.html)
- [Linux](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2023.1/openvino_docs_install_guides_installing_openvino_macos_header.html)
## How to build
@@ -196,7 +195,7 @@ Report questions, issues and suggestions, using:
\* Other names and brands may be claimed as the property of others.
[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.0/pot_introduction.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.1/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2023.1/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.1/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples


@@ -7,22 +7,6 @@ cmake_policy(SET CMP0054 NEW)
# TODO: fix it, outside of source dir MO cannot find TBB dependency
set_temp_directory(TEMP "${CMAKE_SOURCE_DIR}")
if(ENABLE_SAME_BRANCH_FOR_MODELS)
branchName(MODELS_BRANCH)
else()
set(MODELS_BRANCH "master")
endif()
if(ENABLE_DATA)
add_models_repo(${ENABLE_DATA} "data:https://github.com/openvinotoolkit/testdata.git")
set(MODELS_PATH "${TEMP}/models/src/data")
set(DATA_PATH "${MODELS_PATH}")
endif()
message(STATUS "MODELS_PATH=" ${MODELS_PATH})
fetch_models_and_validation_set()
## Intel OMP package
if(THREADING STREQUAL "OMP")
reset_deps_cache(OMP)
@@ -116,10 +100,10 @@ function(ov_download_tbb)
elseif(LINUX AND X86_64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin-canary.tgz"
ARCHIVE_LIN "oneapi-tbb-2021.2.3-lin.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "3a2c2ec79b3cce7e6a2484754ba6f029fa968db2eefc6659540792b7db8fea0c"
SHA256 "f3f2edd8e7875b02220f11ab5b201411d5af6822e525e8da5444b4a666514e8b"
USE_NEW_LOCATION TRUE)
elseif(YOCTO_AARCH64)
RESOLVE_DEPENDENCY(TBB


@@ -255,7 +255,7 @@ get_linux_name(LINUX_OS_NAME)
# macro to mark target as conditionally compiled
function(ie_mark_target_as_cc TARGET_NAME)
function(ov_mark_target_as_cc TARGET_NAME)
set(cc_library openvino::conditional_compilation)
if(TARGET IE::conditional_compilation)
set(cc_library IE::conditional_compilation)
@@ -275,8 +275,9 @@ function(ie_mark_target_as_cc TARGET_NAME)
add_dependencies(${TARGET_NAME} conditional_compilation_gen)
endfunction()
function(ov_mark_target_as_cc)
ie_mark_target_as_cc(${ARGN})
function(ie_mark_target_as_cc TARGET_NAME)
message(WARNING "This function is deprecated. Please use ov_mark_target_as_cc(TARGET_NAME) instead.")
ov_mark_target_as_cc(${TARGET_NAME})
endfunction()
include(python_requirements)


@@ -120,7 +120,7 @@ function(addIeTarget)
endif()
if (ARG_ADD_CLANG_FORMAT)
# code style
add_clang_format_target(${ARG_NAME}_clang FOR_TARGETS ${ARG_NAME})
ov_add_clang_format_target(${ARG_NAME}_clang FOR_TARGETS ${ARG_NAME})
endif()
if (ARG_DEVELOPER_PACKAGE)
# developer package


@@ -193,6 +193,11 @@ endfunction()
#
# ie_add_api_validator_post_build_step(TARGET <name>)
#
macro(ie_add_api_validator_post_build_step)
macro(ov_add_api_validator_post_build_step)
_ov_add_api_validator_post_build_step(${ARGV})
endmacro()
macro(ie_add_api_validator_post_build_step)
message(WARNING "ie_add_api_validator_post_build_step is deprecated, use ov_add_api_validator_post_build_step instead")
_ov_add_api_validator_post_build_step(${ARGV})
endmacro()


@@ -32,10 +32,10 @@ if(ENABLE_CLANG_FORMAT AND NOT TARGET clang_format_check_all)
endif()
#
# add_clang_format_target(FOR_TARGETS <target1 target2 ...> | FOR_SOURCES <source1 source2 ...>
# [EXCLUDE_PATTERNS <pattern1 pattern2 ...>])
# ov_add_clang_format_target(FOR_TARGETS <target1 target2 ...> | FOR_SOURCES <source1 source2 ...>
# [EXCLUDE_PATTERNS <pattern1 pattern2 ...>])
#
function(add_clang_format_target TARGET_NAME)
function(ov_add_clang_format_target TARGET_NAME)
if(NOT ENABLE_CLANG_FORMAT)
return()
endif()
@@ -130,3 +130,8 @@ function(add_clang_format_target TARGET_NAME)
add_dependencies(clang_format_check_all ${TARGET_NAME})
add_dependencies(clang_format_fix_all ${TARGET_NAME}_fix)
endfunction()
function(add_clang_format_target)
message(WARNING "add_clang_format_target is deprecated, use ov_add_clang_format_target instead")
ov_add_clang_format_target(${ARGV})
endfunction()


@@ -33,6 +33,7 @@ macro(ov_disable_deprecated_warnings)
endmacro()
macro(disable_deprecated_warnings)
message(WARNING "disable_deprecated_warnings is deprecated, use ov_disable_deprecated_warnings instead")
ov_disable_deprecated_warnings()
endmacro()
@@ -218,24 +219,26 @@ endfunction()
# Enables Link Time Optimization compilation
#
macro(ie_enable_lto)
message(WARNING "ie_add_compiler_flags is deprecated, set INTERPROCEDURAL_OPTIMIZATION_RELEASE target property instead")
set(CMAKE_INTERPROCEDURAL_OPTIMIZATION_RELEASE ON)
endmacro()
#
# ie_add_compiler_flags(<flag1 [flag2 flag3 ...>])
# ov_add_compiler_flags(<flag1 [flag2 flag3 ...>])
#
# Adds compiler flags to C / C++ sources
#
macro(ie_add_compiler_flags)
macro(ov_add_compiler_flags)
foreach(flag ${ARGN})
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${flag}")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${flag}")
endforeach()
endmacro()
function(ov_add_compiler_flags)
ie_add_compiler_flags(${ARGN})
endfunction()
macro(ie_add_compiler_flags)
message(WARNING "ie_add_compiler_flags is deprecated, use ov_add_compiler_flags instead")
ov_add_compiler_flags(${ARGN})
endmacro()
#
# ov_force_include(<target> <PUBLIC | PRIVATE | INTERFACE> <header file>)
@@ -267,11 +270,11 @@ function(ov_abi_free_target target)
endfunction()
#
# ie_python_minimal_api(<target>)
# ov_python_minimal_api(<target>)
#
# Set options to use only Python Limited API
#
function(ie_python_minimal_api target)
function(ov_python_minimal_api target)
# pybind11 uses a lot of API which is not a part of minimal python API subset
# Ref 1: https://docs.python.org/3.11/c-api/stable.html
# Ref 2: https://github.com/pybind/pybind11/issues/1755
@@ -301,7 +304,7 @@ if(NOT DEFINED CMAKE_CXX_STANDARD)
endif()
if(ENABLE_COVERAGE)
ie_add_compiler_flags(--coverage)
ov_add_compiler_flags(--coverage)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --coverage")
endif()
@@ -313,9 +316,9 @@ set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)
if(CMAKE_CL_64)
# Default char Type Is unsigned
# ie_add_compiler_flags(/J)
# ov_add_compiler_flags(/J)
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-fsigned-char)
ov_add_compiler_flags(-fsigned-char)
endif()
file(RELATIVE_PATH OV_RELATIVE_BIN_PATH ${CMAKE_CURRENT_BINARY_DIR} ${CMAKE_SOURCE_DIR})
@@ -335,22 +338,22 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Common options / warnings enabled
#
ie_add_compiler_flags(/D_CRT_SECURE_NO_WARNINGS /D_SCL_SECURE_NO_WARNINGS)
ov_add_compiler_flags(/D_CRT_SECURE_NO_WARNINGS /D_SCL_SECURE_NO_WARNINGS)
# no asynchronous structured exception handling
ie_add_compiler_flags(/EHsc)
ov_add_compiler_flags(/EHsc)
# Allows the compiler to package individual functions in the form of packaged functions (COMDATs).
ie_add_compiler_flags(/Gy)
ov_add_compiler_flags(/Gy)
# This option helps ensure the fewest possible hard-to-find code defects. Similar to -Wall on GNU / Clang
ie_add_compiler_flags(/W3)
ov_add_compiler_flags(/W3)
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
ov_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)
ov_add_compiler_flags(/MP)
if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
ov_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()
# Handle Large Addresses
@@ -362,7 +365,7 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
if(CMAKE_COMPILE_WARNING_AS_ERROR)
if(CMAKE_VERSION VERSION_LESS 3.24)
ie_add_compiler_flags(/WX)
ov_add_compiler_flags(/WX)
endif()
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /WX")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /WX")
@@ -374,9 +377,9 @@ if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
#
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
ov_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
ov_add_compiler_flags(/wd4275)
# Enable __FILE__ trim, use path with forward and backward slash as directory separator
add_compile_options(
@@ -400,7 +403,7 @@ elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND WIN32)
#
if(CMAKE_COMPILE_WARNING_AS_ERROR AND CMAKE_VERSION VERSION_LESS 3.24)
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
ov_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()
#
@@ -408,40 +411,40 @@ elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND WIN32)
#
# 161: unrecognized pragma
ie_add_compiler_flags(/Qdiag-disable:161)
ov_add_compiler_flags(/Qdiag-disable:161)
# 177: variable was declared but never referenced
ie_add_compiler_flags(/Qdiag-disable:177)
ov_add_compiler_flags(/Qdiag-disable:177)
# 556: not matched type of assigned function pointer
ie_add_compiler_flags(/Qdiag-disable:556)
ov_add_compiler_flags(/Qdiag-disable:556)
# 1744: field of class type without a DLL interface used in a class with a DLL interface
ie_add_compiler_flags(/Qdiag-disable:1744)
ov_add_compiler_flags(/Qdiag-disable:1744)
# 1879: unimplemented pragma ignored
ie_add_compiler_flags(/Qdiag-disable:1879)
ov_add_compiler_flags(/Qdiag-disable:1879)
# 2586: decorated name length exceeded, name was truncated
ie_add_compiler_flags(/Qdiag-disable:2586)
ov_add_compiler_flags(/Qdiag-disable:2586)
# 2651: attribute does not apply to any entity
ie_add_compiler_flags(/Qdiag-disable:2651)
ov_add_compiler_flags(/Qdiag-disable:2651)
# 3180: unrecognized OpenMP pragma
ie_add_compiler_flags(/Qdiag-disable:3180)
ov_add_compiler_flags(/Qdiag-disable:3180)
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
ie_add_compiler_flags(/Qdiag-disable:11075)
ov_add_compiler_flags(/Qdiag-disable:11075)
# 15335: was not vectorized: vectorization possible but seems inefficient.
# Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:15335)
ov_add_compiler_flags(/Qdiag-disable:15335)
else()
#
# Common enabled warnings
#
# allow linker eliminating the unused code and data from the final executable
ie_add_compiler_flags(-ffunction-sections -fdata-sections)
ov_add_compiler_flags(-ffunction-sections -fdata-sections)
# emits text showing the command-line option controlling a diagnostic
ie_add_compiler_flags(-fdiagnostics-show-option)
ov_add_compiler_flags(-fdiagnostics-show-option)
# This enables all the warnings about constructions that some users consider questionable, and that are easy to avoid
ie_add_compiler_flags(-Wall)
ov_add_compiler_flags(-Wall)
# Warn if an undefined identifier is evaluated in an #if directive. Such identifiers are replaced with zero.
ie_add_compiler_flags(-Wundef)
ov_add_compiler_flags(-Wundef)
# To guarantee OpenVINO can be used with gcc versions 7 through 12
# - https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html
@@ -468,7 +471,7 @@ else()
#
if(CMAKE_COMPILE_WARNING_AS_ERROR AND CMAKE_VERSION VERSION_LESS 3.24)
ie_add_compiler_flags(-Werror)
ov_add_compiler_flags(-Werror)
endif()
#
@@ -477,7 +480,7 @@ else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
# 177: function "XXX" was declared but never referenced
ie_add_compiler_flags(-diag-disable=remark,177,2196)
ov_add_compiler_flags(-diag-disable=remark,177,2196)
endif()
#
@@ -493,8 +496,8 @@ else()
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -s ERROR_ON_MISSING_LIBRARIES=1 -s ERROR_ON_UNDEFINED_SYMBOLS=1")
# set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -s USE_PTHREADS=1 -s PTHREAD_POOL_SIZE=4")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -s ALLOW_MEMORY_GROWTH=1")
ie_add_compiler_flags(-sDISABLE_EXCEPTION_CATCHING=0)
# ie_add_compiler_flags(-sUSE_PTHREADS=1)
ov_add_compiler_flags(-sDISABLE_EXCEPTION_CATCHING=0)
# ov_add_compiler_flags(-sUSE_PTHREADS=1)
else()
set(exclude_libs "-Wl,--exclude-libs,ALL")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,--gc-sections ${exclude_libs}")
@@ -507,7 +510,6 @@ else()
endif()
add_compile_definitions(
# Defines to trim check of __FILE__ macro in case if not done by compiler.
OV_NATIVE_PARENT_PROJECT_ROOT_DIR="${OV_NATIVE_PARENT_PROJECT_ROOT_DIR}")
@@ -519,11 +521,11 @@ endif()
check_cxx_compiler_flag("-Wunused-but-set-variable" UNUSED_BUT_SET_VARIABLE_SUPPORTED)
#
# link_system_libraries(target <PUBLIC | PRIVATE | INTERFACE> <lib1 [lib2 lib3 ...]>)
# ov_link_system_libraries(target <PUBLIC | PRIVATE | INTERFACE> <lib1 [lib2 lib3 ...]>)
#
# Links provided libraries and include their INTERFACE_INCLUDE_DIRECTORIES as SYSTEM
#
function(link_system_libraries TARGET_NAME)
function(ov_link_system_libraries TARGET_NAME)
set(MODE PRIVATE)
foreach(arg IN LISTS ARGN)


@@ -136,6 +136,8 @@ function (RESOLVE_DEPENDENCY NAME_OF_CMAKE_VAR)
endfunction(RESOLVE_DEPENDENCY)
function (resolve_model_dependency network archive network_model_path)
message(WARNING "DEPRECATED: 'resolve_model_dependency' must not be used")
RESOLVE_DEPENDENCY(${network_model_path}
ARCHIVE "models_archives/${archive}"
TARGET_PATH "${MODELS_PATH}/${network}")


@@ -4,23 +4,23 @@
include(CMakeParseArguments)
function(ie_faster_build TARGET_NAME)
function(ov_build_target_faster TARGET_NAME)
if(NOT ENABLE_FASTER_BUILD)
return()
endif()
cmake_parse_arguments(IE_FASTER_BUILD "UNITY" "" "PCH" ${ARGN})
cmake_parse_arguments(FASTER_BUILD "UNITY" "" "PCH" ${ARGN})
if(IE_FASTER_BUILD_UNITY)
set_target_properties(${TARGET_NAME}
PROPERTIES
UNITY_BUILD ON
)
if(FASTER_BUILD_UNITY)
set_target_properties(${TARGET_NAME} PROPERTIES UNITY_BUILD ON)
endif()
if(IE_FASTER_BUILD_PCH)
target_precompile_headers(${TARGET_NAME}
${IE_FASTER_BUILD_PCH}
)
if(FASTER_BUILD_PCH)
target_precompile_headers(${TARGET_NAME} ${FASTER_BUILD_PCH})
endif()
endfunction()
function(ie_faster_build)
message(WARNING "ie_faster_build is deprecated, use ov_build_target_faster instead")
ov_build_target_faster(${ARGV})
endfunction()


@@ -18,7 +18,7 @@ else()
ie_option(USE_BUILD_TYPE_SUBFOLDER "Create dedicated sub-folder per build type for output binaries" ON)
endif()
if(CI_BUILD_NUMBER)
if(DEFINED ENV{CI_BUILD_NUMBER} AND NOT (WIN32 OR CMAKE_CROSSCOMPILING))
set(CMAKE_COMPILE_WARNING_AS_ERROR_DEFAULT ON)
else()
set(CMAKE_COMPILE_WARNING_AS_ERROR_DEFAULT OFF)


@@ -183,6 +183,11 @@ macro(ov_add_frontend)
"-Dget_api_version=get_api_version_${OV_FRONTEND_NAME}")
endif()
# remove -Wmissing-declarations warning, because of frontends implementation specific
if(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-missing-declarations)
endif()
target_include_directories(${TARGET_NAME}
PUBLIC
$<BUILD_INTERFACE:${${TARGET_NAME}_INCLUDE_DIR}>
@@ -219,11 +224,11 @@ macro(ov_add_frontend)
set(protobuf_target_name "protobuf::${protobuf_target_name}")
endif()
link_system_libraries(${TARGET_NAME} PRIVATE ${protobuf_target_name})
ov_link_system_libraries(${TARGET_NAME} PRIVATE ${protobuf_target_name})
# protobuf generated code emits -Wsuggest-override error
if(SUGGEST_OVERRIDE_SUPPORTED)
target_compile_options(${TARGET_NAME} PRIVATE -Wno-suggest-override)
target_compile_options(${TARGET_NAME} PRIVATE $<$<COMPILE_LANGUAGE:CXX>:-Wno-suggest-override>)
endif()
# install protobuf if it is not installed yet
@@ -242,8 +247,8 @@ macro(ov_add_frontend)
target_include_directories(${TARGET_NAME} SYSTEM PRIVATE ${flatbuffers_INCLUDE_DIRECTORIES})
endif()
add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME}
EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS} ${proto_files} ${flatbuffers_schema_files})
ov_add_clang_format_target(${TARGET_NAME}_clang FOR_TARGETS ${TARGET_NAME}
EXCLUDE_PATTERNS ${PROTO_SRCS} ${PROTO_HDRS} ${proto_files} ${flatbuffers_schema_files})
# enable LTO
set_target_properties(${TARGET_NAME} PROPERTIES
@@ -263,7 +268,7 @@ macro(ov_add_frontend)
add_dependencies(ov_frontends ${TARGET_NAME})
# must be called after all target_link_libraries
ie_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
ov_add_api_validator_post_build_step(TARGET ${TARGET_NAME})
# since frontends are user-facing component which can be linked against,
# then we need to mark it to be CXX ABI free
@@ -284,8 +289,7 @@ macro(ov_add_frontend)
if(OV_FRONTEND_LINKABLE_FRONTEND)
set(export_set EXPORT OpenVINOTargets)
set(archive_dest ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR}
COMPONENT ${lib_component})
set(archive_dest ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${lib_component})
set(namelink NAMELINK_COMPONENT ${dev_component})
else()
set(namelink NAMELINK_SKIP)
@@ -295,6 +299,12 @@ macro(ov_add_frontend)
${archive_dest}
LIBRARY DESTINATION ${OV_CPACK_LIBRARYDIR} COMPONENT ${lib_component}
${namelink})
# export to build tree
if(OV_FRONTEND_LINKABLE_FRONTEND)
export(TARGETS ${TARGET_NAME} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
endif()
else()
ov_install_static_lib(${TARGET_NAME} ${OV_CPACK_COMP_CORE})
endif()
@@ -306,9 +316,8 @@ macro(ov_add_frontend)
COMPONENT ${dev_component}
FILES_MATCHING PATTERN "*.hpp")
# public target name
set_target_properties(${TARGET_NAME} PROPERTIES EXPORT_NAME frontend::${OV_FRONTEND_NAME})
export(TARGETS ${TARGET_NAME} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
endif()
else()
# skipped frontend has to be installed in static libraries case


@@ -2,14 +2,12 @@
# SPDX-License-Identifier: Apache-2.0
#
if(ENABLE_DATA)
find_package(Git REQUIRED)
endif()
set(MODELS_LST "")
set(MODELS_LST_TO_FETCH "")
function (add_models_repo add_to_fetcher model_name)
message(WARNING "DEPRECATED: 'add_models_repo' must not be used")
list(LENGTH ARGV add_models_args)
if (add_models_args EQUAL 3)
list(GET ARGV 2 branch_name)
@@ -28,6 +26,8 @@ function (add_models_repo add_to_fetcher model_name)
endfunction()
function(add_lfs_repo name prefix url tag)
message(WARNING "DEPRECATED: 'add_lfs_repo' must not be used")
if(TARGET ${name})
return()
endif()
@@ -44,6 +44,8 @@ function(add_lfs_repo name prefix url tag)
INSTALL_COMMAND ""
LOG_DOWNLOAD ON)
find_package(Git REQUIRED)
execute_process(
COMMAND ${GIT_EXECUTABLE} lfs install --local --force
WORKING_DIRECTORY ${prefix}/src/${name}
@@ -59,6 +61,8 @@ function(add_lfs_repo name prefix url tag)
endfunction()
function (fetch_models_and_validation_set)
message(WARNING "DEPRECATED: 'fetch_models_and_validation_set' must not be used")
foreach(loop_var ${MODELS_LST_TO_FETCH})
string(REPLACE ":" ";" MODEL_CONFIG_LST ${loop_var})


@@ -24,21 +24,24 @@ macro(ov_archive_cpack_set_dirs)
set(OV_CPACK_DEVREQDIR tools)
set(OV_CPACK_PYTHONDIR python)
if(USE_BUILD_TYPE_SUBFOLDER)
set(build_type ${CMAKE_BUILD_TYPE})
else()
set(build_type $<CONFIG>)
endif()
if(WIN32)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_CPACK_RUNTIMEDIR runtime/bin/${ARCH_FOLDER}/$<CONFIG>)
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_WHEEL_RUNTIMEDIR runtime/bin/${ARCH_FOLDER}/Release)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/${build_type})
set(OV_CPACK_RUNTIMEDIR runtime/bin/${ARCH_FOLDER}/${build_type})
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER}/${build_type})
elseif(APPLE)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_CPACK_RUNTIMEDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_WHEEL_RUNTIMEDIR runtime/lib/${ARCH_FOLDER}/Release)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/${build_type})
set(OV_CPACK_RUNTIMEDIR runtime/lib/${ARCH_FOLDER}/${build_type})
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER}/${build_type})
else()
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER})
set(OV_CPACK_RUNTIMEDIR runtime/lib/${ARCH_FOLDER})
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER})
set(OV_WHEEL_RUNTIMEDIR ${OV_CPACK_RUNTIMEDIR})
endif()
set(OV_CPACK_PLUGINSDIR ${OV_CPACK_RUNTIMEDIR})


@@ -19,7 +19,6 @@ macro(ov_common_libraries_cpack_set_dirs)
else()
set(OV_CPACK_RUNTIMEDIR ${CMAKE_INSTALL_LIBDIR})
endif()
set(OV_WHEEL_RUNTIMEDIR ${OV_CPACK_RUNTIMEDIR})
set(OV_CPACK_ARCHIVEDIR ${CMAKE_INSTALL_LIBDIR})
if(CPACK_GENERATOR MATCHES "^(CONAN|VCPKG)$")
set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_DATADIR}/openvino)
@@ -84,7 +83,11 @@ macro(ov_define_component_include_rules)
unset(OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL)
set(OV_CPACK_COMP_CORE_C_DEV_EXCLUDE_ALL ${OV_CPACK_COMP_CORE_DEV_EXCLUDE_ALL})
# licensing
set(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL EXCLUDE_FROM_ALL)
if(CPACK_GENERATOR STREQUAL "CONAN")
unset(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL)
else()
set(OV_CPACK_COMP_LICENSING_EXCLUDE_ALL EXCLUDE_FROM_ALL)
endif()
# samples
set(OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL EXCLUDE_FROM_ALL)
set(OV_CPACK_COMP_C_SAMPLES_EXCLUDE_ALL ${OV_CPACK_COMP_CPP_SAMPLES_EXCLUDE_ALL})


@@ -24,7 +24,6 @@ macro(ov_debian_cpack_set_dirs)
endif()
endif()
set(OV_CPACK_LIBRARYDIR ${OV_CPACK_RUNTIMEDIR})
set(OV_WHEEL_RUNTIMEDIR ${OV_CPACK_RUNTIMEDIR})
set(OV_CPACK_ARCHIVEDIR ${OV_CPACK_RUNTIMEDIR})
set(OV_CPACK_PLUGINSDIR ${OV_CPACK_RUNTIMEDIR}/openvino-${OpenVINO_VERSION})
set(OV_CPACK_IE_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/inferenceengine${OpenVINO_VERSION})


@@ -63,21 +63,24 @@ macro(ov_archive_cpack_set_dirs)
set(OV_CPACK_DEVREQDIR tools)
set(OV_CPACK_PYTHONDIR python)
if(USE_BUILD_TYPE_SUBFOLDER)
set(build_type ${CMAKE_BUILD_TYPE})
else()
set(build_type $<CONFIG>)
endif()
if(WIN32)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_CPACK_RUNTIMEDIR runtime/bin/${ARCH_FOLDER}/$<CONFIG>)
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_WHEEL_RUNTIMEDIR runtime/bin/${ARCH_FOLDER}/Release)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/${build_type})
set(OV_CPACK_RUNTIMEDIR runtime/bin/${ARCH_FOLDER}/${build_type})
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER}/${build_type})
elseif(APPLE)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_CPACK_RUNTIMEDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER}/$<CONFIG>)
set(OV_WHEEL_RUNTIMEDIR runtime/lib/${ARCH_FOLDER}/Release)
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER}/${build_type})
set(OV_CPACK_RUNTIMEDIR runtime/lib/${ARCH_FOLDER}/${build_type})
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER}/${build_type})
else()
set(OV_CPACK_LIBRARYDIR runtime/lib/${ARCH_FOLDER})
set(OV_CPACK_RUNTIMEDIR runtime/lib/${ARCH_FOLDER})
set(OV_CPACK_ARCHIVEDIR runtime/lib/${ARCH_FOLDER})
set(OV_WHEEL_RUNTIMEDIR ${OV_CPACK_RUNTIMEDIR})
endif()
set(OV_CPACK_PLUGINSDIR ${OV_CPACK_RUNTIMEDIR})


@@ -24,6 +24,10 @@ macro(ov_install_static_lib target comp)
install(TARGETS ${target} EXPORT OpenVINOTargets
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${comp} ${ARGN})
# export to local tree to build against static build tree
export(TARGETS ${target} NAMESPACE openvino::
APPEND FILE "${CMAKE_BINARY_DIR}/OpenVINOTargets.cmake")
endif()
endmacro()


@@ -15,7 +15,6 @@ macro(ov_rpm_cpack_set_dirs)
set(OV_CPACK_INCLUDEDIR ${CMAKE_INSTALL_INCLUDEDIR})
set(OV_CPACK_LIBRARYDIR ${CMAKE_INSTALL_LIBDIR})
set(OV_CPACK_RUNTIMEDIR ${CMAKE_INSTALL_LIBDIR})
set(OV_WHEEL_RUNTIMEDIR ${OV_CPACK_RUNTIMEDIR})
set(OV_CPACK_ARCHIVEDIR ${CMAKE_INSTALL_LIBDIR})
set(OV_CPACK_PLUGINSDIR ${CMAKE_INSTALL_LIBDIR}/openvino-${OpenVINO_VERSION})
set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/inferenceengine${OpenVINO_VERSION})


@@ -103,7 +103,7 @@ function(ov_add_plugin)
endforeach()
if (OV_PLUGIN_ADD_CLANG_FORMAT)
add_clang_format_target(${OV_PLUGIN_NAME}_clang FOR_SOURCES ${OV_PLUGIN_SOURCES})
ov_add_clang_format_target(${OV_PLUGIN_NAME}_clang FOR_SOURCES ${OV_PLUGIN_SOURCES})
else()
add_cpplint_target(${OV_PLUGIN_NAME}_cpplint FOR_TARGETS ${OV_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
endif()
@@ -117,6 +117,10 @@ function(ov_add_plugin)
# install rules
if(NOT OV_PLUGIN_SKIP_INSTALL OR NOT BUILD_SHARED_LIBS)
string(TOLOWER "${OV_PLUGIN_DEVICE_NAME}" install_component)
if(NOT BUILD_SHARED_LIBS)
# in case of static libs everything is installed to 'core'
set(install_component ${OV_CPACK_COMP_CORE})
endif()
if(OV_PLUGIN_PSEUDO_DEVICE)
set(plugin_hidden HIDDEN)
@@ -358,7 +362,7 @@ function(ov_generate_plugins_hpp)
"${plugins_hpp_in}"
"${IEDevScripts_DIR}/plugins/create_plugins_hpp.cmake"
COMMENT
"Generate ov_plugins.hpp for build"
"Generate ov_plugins.hpp"
VERBATIM)
# for some reason dependency on source files does not work


@@ -39,7 +39,6 @@ function(ie_shellcheck_process)
continue()
endif()
get_filename_component(dir_name "${script}" DIRECTORY)
string(REPLACE "${IE_SHELLCHECK_DIRECTORY}" "${CMAKE_BINARY_DIR}/shellcheck" output_file ${script})
set(output_file "${output_file}.txt")
get_filename_component(script_name "${script}" NAME)


@@ -5,6 +5,9 @@
function(ie_generate_dev_package_config)
# dummy check that OpenCV is here
find_package(OpenCV QUIET)
if(OpenCV_VERSION VERSION_LESS 3.0)
set(OpenCV_FOUND OFF)
endif()
foreach(component IN LISTS openvino_export_components)
# export all targets with prefix and use them during extra modules build
@@ -37,6 +40,9 @@ endfunction()
function(ov_generate_dev_package_config)
# dummy check that OpenCV is here
find_package(OpenCV QUIET)
if(OpenCV_VERSION VERSION_LESS 3.0)
set(OpenCV_FOUND OFF)
endif()
foreach(component IN LISTS openvino_export_components)
# filter out targets which are installed by OpenVINOConfig.cmake static build case
@@ -126,12 +132,6 @@ endif()\n")
ov_dev_package_no_errors()
ov_deprecated_no_errors()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# 'argument': conversion from 'size_t' to 'int', possible loss of data
ie_add_compiler_flags(/wd4267)
ie_add_compiler_flags(/wd4244)
endif()
# add each extra module
foreach(module_path IN LISTS extra_modules)
if(module_path)


@@ -23,7 +23,7 @@ endif()
ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
if (ANDROID OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0) OR NOT BUILD_SHARED_LIBS)
if (ANDROID OR MINGW OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0) OR (NOT BUILD_SHARED_LIBS AND ENABLE_INTEL_CPU))
# oneDNN doesn't support old compilers and android builds for now, so we'll build GPU plugin without oneDNN
# also, in case of static build CPU's and GPU's oneDNNs will conflict, so we are disabling GPU's one in this case
set(ENABLE_ONEDNN_FOR_GPU_DEFAULT OFF)
@@ -84,6 +84,7 @@ else()
endif()
ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in OpenVINO runtime" ${ENABLE_TBBBIND_2_5_DEFAULT} "THREADING MATCHES TBB; NOT APPLE" OFF)
ie_dependent_option (ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are linked to the OpenVINO Runtime binaries" ON "THREADING MATCHES TBB;LINUX" OFF)
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for OpenVINO Runtime" ON
"NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)
@@ -104,16 +105,12 @@ ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file
ie_dependent_option (GAPI_TEST_PERF "if GAPI unit tests should examine performance" OFF "ENABLE_TESTS;ENABLE_GAPI_PREPROCESSING" OFF)
ie_dependent_option (ENABLE_DATA "fetch models from testdata repo" ON "ENABLE_FUNCTIONAL_TESTS;NOT ANDROID" OFF)
ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS" OFF)
ie_option (ENABLE_SAMPLES "console samples are part of OpenVINO Runtime package" ON)
set(OPENVINO_EXTRA_MODULES "" CACHE STRING "Extra paths for extra modules to include into OpenVINO build")
ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are linked to the OpenVINO Runtime binaries" ON "THREADING MATCHES TBB;LINUX" OFF)
find_host_package(PythonInterp 3 QUIET)
ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
@@ -176,7 +173,6 @@ ie_dependent_option (ENABLE_SYSTEM_PROTOBUF "Enables use of system Protobuf" OFF
ie_dependent_option (ENABLE_SYSTEM_SNAPPY "Enables use of system version of Snappy" OFF
"ENABLE_SNAPPY_COMPRESSION" OFF)
# temporary option until we enable this by default when review python API distribution
ie_dependent_option (ENABLE_PYTHON_PACKAGING "Enables packaging of Python API in APT / YUM" OFF
"ENABLE_PYTHON;UNIX" OFF)


@@ -90,7 +90,8 @@ macro(ov_cpack_settings)
# - 2022.1.1, 2022.2 do not have debian packages enabled, distributed only as archives
# - 2022.3 is the first release where Debian updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2023.0.0 2023.0.1
2023.0.0 2023.0.1 2023.0.2 2023.0.3
2023.1.0
)
#


@@ -76,7 +76,8 @@ macro(ov_cpack_settings)
# - 2022.1.1, 2022.2 do not have rpm packages enabled, distributed only as archives
# - 2022.3 is the first release where RPM updated packages are introduced, others 2022.3.X are LTS
2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
2023.0.0 2023.0.1
2023.0.0 2023.0.1 2023.0.2 2023.0.3
2023.1.0
)
find_host_program(rpmlint_PROGRAM NAMES rpmlint DOC "Path to rpmlint")


@@ -49,10 +49,8 @@ if(ENABLE_SAMPLES)
set_and_check(gflags_DIR "@gflags_BINARY_DIR@")
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Disable warning as error for private components
set(CMAKE_COMPILE_WARNING_AS_ERROR OFF)
endif()
# Disable warning as error for private components
set(CMAKE_COMPILE_WARNING_AS_ERROR OFF)
#
# Content


@@ -223,6 +223,10 @@ macro(_ov_find_tbb)
PATHS ${_tbb_bind_dir}
NO_CMAKE_FIND_ROOT_PATH
NO_DEFAULT_PATH)
if(TARGET TBBbind::tbbbind_2_5)
# To solve https://cmake.org/cmake/help/latest/policy/CMP0111.html warnings
set_property(TARGET TBBbind::tbbbind_2_5 PROPERTY IMPORTED_CONFIGURATIONS RELEASE DEBUG)
endif()
unset(_tbb_bind_dir)
endif()
unset(install_tbbbind)
@@ -343,11 +347,15 @@ endmacro()
macro(_ov_find_intel_cpu_dependencies)
set(_OV_ENABLE_CPU_ACL "@DNNL_USE_ACL@")
if(_OV_ENABLE_CPU_ACL)
set(_ov_in_install_tree "@PACKAGE_ARM_COMPUTE_LIB_DIR@")
set(_ov_in_install_tree "@PACKAGE_OPENVINO_LIB_DIR@")
if(_ov_in_install_tree)
set_and_check(ARM_COMPUTE_LIB_DIR "@PACKAGE_ARM_COMPUTE_LIB_DIR@")
set_and_check(ARM_COMPUTE_LIB_DIR "@PACKAGE_OPENVINO_LIB_DIR@")
set(ACL_DIR "${CMAKE_CURRENT_LIST_DIR}")
else()
if(NOT TARGET arm_compute::arm_compute)
# for case when build tree is used separately, e.g. OpenVINODeveloperPackageConfig.cmake
set_and_check(ARM_COMPUTE_LIB_DIR "@PACKAGE_CMAKE_ARCHIVE_OUTPUT_DIRECTORY@")
endif()
set_and_check(ACL_DIR "@PACKAGE_FIND_ACL_PATH@")
endif()
@@ -363,16 +371,50 @@ macro(_ov_find_intel_gpu_dependencies)
set(_OV_ENABLE_INTEL_GPU "@ENABLE_INTEL_GPU@")
set(_OV_ENABLE_SYSTEM_OPENCL "@ENABLE_SYSTEM_OPENCL@")
if(_OV_ENABLE_INTEL_GPU AND _OV_ENABLE_SYSTEM_OPENCL)
set(_OV_OpenCLICDLoader_FOUND "@OpenCLICDLoader_FOUND@")
if(_OV_OpenCLICDLoader_FOUND)
_ov_find_dependency(OpenCLICDLoader)
else()
_ov_find_dependency(OpenCL)
endif()
unset(_OV_OpenCLICDLoader_FOUND)
_ov_find_dependency(OpenCL)
endif()
unset(_OV_ENABLE_INTEL_GPU)
unset(_OV_ENABLE_SYSTEM_OPENCL)
set(_OV_ENABLE_ONEDNN_FOR_GPU "@ENABLE_ONEDNN_FOR_GPU@")
if(_OV_ENABLE_ONEDNN_FOR_GPU AND NOT TARGET onednn_gpu_tgt)
set(_OV_DNNL_GPU_LIBRARY_NAME "@DNNL_GPU_LIBRARY_NAME@")
set(_ov_in_install_tree "@PACKAGE_OPENVINO_LIB_DIR@")
if(_ov_in_install_tree)
set(onednn_gpu_lib "${CMAKE_STATIC_LIBRARY_PREFIX}${_OV_DNNL_GPU_LIBRARY_NAME}${CMAKE_STATIC_LIBRARY_SUFFIX}")
set_and_check(onednn_gpu_lib_root "@PACKAGE_OPENVINO_LIB_DIR@")
if(WIN32)
if(OV_GENERATOR_MULTI_CONFIG)
set(extra_args PATH_SUFFIXES ${CMAKE_CONFIGURATION_TYPES})
else()
set(extra_args PATH_SUFFIXES ${CMAKE_BUILD_TYPE})
endif()
endif()
find_library(onednn_gpu_lib_path
NAMES ${_OV_DNNL_GPU_LIBRARY_NAME}
PATHS ${onednn_gpu_lib_root}
${extra_args})
if(NOT onednn_gpu_lib_path)
message(FATAL_ERROR "Internal error: failed to find '${_OV_DNNL_GPU_LIBRARY_NAME}' in '${onednn_gpu_lib_root}'")
endif()
unset(extra_args)
unset(onednn_gpu_lib)
else()
set_and_check(onednn_gpu_lib_path "@PACKAGE_ONEDNN_GPU_LIB_PATH@")
endif()
set_target_properties(openvino::onednn_gpu_tgt PROPERTIES
INTERFACE_LINK_LIBRARIES "${onednn_gpu_lib_path}")
unset(onednn_gpu_lib_path)
unset(_ov_in_install_tree)
unset(_OV_DNNL_GPU_LIBRARY_NAME)
endif()
unset(_OV_ENABLE_ONEDNN_FOR_GPU)
endmacro()
macro(_ov_find_intel_gna_dependencies)
@@ -455,6 +497,7 @@ set(_OV_ENABLE_OPENVINO_BUILD_SHARED "@BUILD_SHARED_LIBS@")
if(NOT TARGET openvino)
set(_ov_as_external_package ON)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
endif()
if(NOT _OV_ENABLE_OPENVINO_BUILD_SHARED)
@@ -487,8 +530,6 @@ set(_ov_imported_libs openvino::runtime openvino::runtime::c
openvino::frontend::pytorch openvino::frontend::tensorflow_lite)
if(_ov_as_external_package)
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
foreach(target IN LISTS _ov_imported_libs)
if(TARGET ${target})
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)


@@ -54,10 +54,8 @@ if(ENABLE_SAMPLES)
endif()
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Disable warning as error for private components
set(CMAKE_COMPILE_WARNING_AS_ERROR OFF)
endif()
# Disable warning as error for private components
set(CMAKE_COMPILE_WARNING_AS_ERROR OFF)
#
# Content


@@ -5,7 +5,7 @@
pc_path=${pcfiledir}
prefix=${pc_path}/@PKGCONFIG_OpenVINO_PREFIX@
exec_prefix=${prefix}/@OV_WHEEL_RUNTIMEDIR@
exec_prefix=${prefix}/@OV_CPACK_RUNTIMEDIR@
libdir=${exec_prefix}
include_prefix=${prefix}/@OV_CPACK_INCLUDEDIR@


@@ -8,7 +8,7 @@ if(ENABLE_OV_ONNX_FRONTEND)
# if requirements are not installed automatically, we need to checks whether they are here
ov_check_pip_packages(REQUIREMENTS_FILE "${OpenVINO_SOURCE_DIR}/src/frontends/onnx/tests/requirements.txt"
RESULT_VAR onnx_FOUND
WARNING_MESSAGE "ONNX frontend tests will be skipped"
WARNING_MESSAGE "ONNX testing models weren't generated, some tests will fail due .onnx models not found"
MESSAGE_MODE WARNING)
endif()


@@ -37,6 +37,8 @@ macro(ov_set_msvc_runtime var value)
endif()
endmacro()
# ade
ov_set_msvc_runtime(BUILD_WITH_STATIC_CRT ${use_static_runtime})
# static TBBBind_2_5 is built with dynamic CRT runtime
ov_set_msvc_runtime(ENABLE_TBBBIND_2_5 ${use_dynamic_runtime})
# ONNX

conan.lock Normal file

@@ -0,0 +1,40 @@
{
"version": "0.5",
"requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"xbyak/6.73#250bc3bc73379f90f255876c1c00a4cd%1691853024.351",
"snappy/1.1.10#916523630083f6d855cb2977de8eefb6%1689780661.062",
"rapidjson/cci.20220822#8ca51918340f3a21127822258e95ec0f%1663194355.698",
"pybind11/2.10.4#dd44c80a5ed6a2ef11194380daae1248%1682692198.909",
"pugixml/1.13#f615c1fcec55122b2e177d17061276e7%1691917296.869",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"opencl-icd-loader/2023.04.17#5f73dd9f0c023d416a7f162e320b9c77%1692732261.088",
"opencl-headers/2023.04.17#3d98f2d12a67c2400de6f11d5335b5a6%1683936272.16",
"opencl-clhpp-headers/2023.04.17#7c62fcc7ac2559d4839150d2ebaac5c8%1685450803.672",
"onnx/1.14.1#d95f4e64bedf3dc6898253847ac69005%1693130309.828",
"onetbb/2021.10.0#cbb2fc43088070b48f6e4339bc8fa0e1%1693812561.235",
"nlohmann_json/3.11.2#a35423bb6e1eb8f931423557e282c7ed%1666619820.488",
"ittapi/3.24.0#9246125f13e7686dee2b0c992b71db94%1682969872.743",
"hwloc/2.9.2#1c63e2eccac57048ae226e6c946ebf0e%1688677682.002",
"gflags/2.2.2#48d1262ffac8d30c3224befb8275a533%1676224985.343",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"ade/0.1.2c#8c03c130df6dc35186b38ba73a40a71d%1694253992.577"
],
"build_requires": [
"zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
"protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
"protobuf/3.21.9#515ceb0a1653cf84363d9968b812d6be%1678364058.993",
"pkgconf/1.9.5#743ca0d41d35a84b1f89af337ddaa1a0%1688570267.802",
"patchelf/0.13#0eaada8970834919c3ce14355afe7fac%1680534241.341",
"ninja/1.11.1#77587f8c8318662ac8e5a7867eb4be21%1684431244.21",
"meson/1.0.0#15586c0ac6f682805875ef903dbe7ee2%1673885561.647",
"m4/1.4.19#c1c4b1ee919e34630bb9b50046253d3c%1676610086.39",
"libtool/2.4.6#9ee8efc04c2e106e7fba13bb1e477617%1677509454.345",
"gnu-config/cci.20210814#15c3bf7dfdb743977b84d0321534ad90%1681250000.747",
"flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
"cmake/3.27.4#a7e78418b024dccacccc887f049f47ed%1693515860.005",
"automake/1.16.5#058bda3e21c36c9aa8425daf3c1faf50%1688481772.751",
"autoconf/2.71#53be95d228b2dcb30dc199cb84262d8f%1693395343.513"
],
"python_requires": []
}


@@ -1,29 +1,30 @@
[requires]
ade/0.1.2a
ade/0.1.2c
onetbb/[>=2021.2.1]
pugixml/[>=1.10]
protobuf/3.21.9
protobuf/3.21.12
ittapi/[>=3.23.0]
zlib/[>=1.2.8]
opencl-icd-loader/2023.04.17
opencl-clhpp-headers/2023.04.17
opencl-headers/2023.04.17
opencl-icd-loader/[>=2023.04.17]
rapidjson/[>=1.1.0]
xbyak/[>=6.62]
snappy/[>=1.1.7]
gflags/2.2.2
onnx/1.13.1
onnx/1.14.1
nlohmann_json/[>=3.1.1]
pybind11/[>=2.10.1]
flatbuffers/[>=22.9.24]
[tool_requires]
cmake/[>=3.15]
cmake/[>=3.20]
pkgconf/1.9.5
patchelf/[>=0.12]
protobuf/3.21.9
flatbuffers/[>=22.9.24]
[options]
protobuf/*:lite=True
protobuf/*:shared=False
flatbuffers/*:header_only=True
[generators]


@@ -4,80 +4,407 @@
"dictionaryDefinitions": [],
"dictionaries": [],
"words": [
"aarch64",
"acdadcfa",
"acea",
"abmrd",
"acfb",
"acosh",
"Acosh",
"adfcd",
"addcmul",
"addif",
"addmm",
"aeaa",
"agem",
"agew",
"armeabi",
"armhf",
"artefacts",
"ARTEFACTS",
"Asinh",
"asynch",
"Atanh",
"autodoc",
"Autograd",
"autoplugin",
"AUTOPLUGIN",
"autoremove",
"autosummary",
"bace",
"Backprop",
"bblayers",
"Beautif",
"Bilat",
"bindir",
"bitbake",
"BFYX",
"BFXY",
"bkgr",
"brctl",
"Bucketize",
"BUILDDIR",
"buildtools",
"buildsystems",
"BYXF",
"bvalue",
"bvlc",
"caffe",
"caffemodel",
"camvid",
"cbba",
"cbcd",
"cdad",
"cdrom",
"chrpath",
"classov",
"cldnn",
"clumber",
"codepath",
"codepaths",
"coeffs",
"concat",
"Concat",
"Conts",
"constexpr",
"consts",
"Consts",
"conv",
"Convolutional",
"CPPLINT",
"cpplint",
"crbegin",
"crend",
"ctest",
"ctput",
"CVAT",
"cython",
"dadb",
"DANDROID",
"DARM",
"Datumaro",
"datumaro",
"DBUILD",
"DCMAKE",
"ddepth",
"Depthwise",
"dearmor",
"devicesupport",
"dequantization",
"Dequantization",
"deeplabv",
"deeced",
"DENABLE",
"delif",
"denormal",
"DENORMAL",
"denormalized",
"Detectron",
"Dequantize",
"devel",
"devtoolset",
"dgpu",
"diffstat",
"dldt",
"dlstreamer",
"dkms",
"Dockerfiles",
"DOPENVINO",
"downscript",
"doxid",
"doxygen",
"Doxygen",
"doxygensnippet",
"DTHREADING",
"dpkg",
"DPYTHON",
"DSELECTIVE",
"dylib",
"DWORD",
"efficientdet",
"Efficientdet",
"Einsum",
"Elems",
"Elementwise",
"elementwise",
"Eltwise",
"endsphinxdirective",
"enumov",
"emcmake",
"emmake",
"emod",
"emom",
"emow",
"Emscripten",
"emscripten",
"emsdk",
"epel",
"ERRORLEVEL",
"evolutionally",
"executionpolicy",
"fafe",
"fdupes",
"flatbuffers",
"FLATBUFFERS",
"frontends",
"Frontends",
"FYXB",
"gaddb",
"GAPI",
"gapi",
"Gaussed",
"gcompoundkernel",
"gcomputation",
"GCPU",
"gcpukernel",
"Gelu",
"GELU",
"Geti",
"getitem",
"gimg",
"gitee",
"gflags",
"globbing",
"gmmlib",
"GNAs",
"gmock",
"gnueabihf",
"googlenet",
"gpgcheck",
"gpgkey",
"graphviz",
"Graphviz",
"groupov",
"gtest",
"hardtanh",
"hashfile",
"HDDL",
"HKLM",
"HOSTTOOLS",
"Hotspots",
"hotspots",
"hostnet",
"hwloc",
"hwquote",
"idbf",
"IDFT",
"iigd",
"ifdef",
"ifdown",
"ifup",
"imgproc",
"imshow",
"inet",
"INTEGRITYCHECK",
"ILSVRC",
"inferenced",
"Informations",
"insmod",
"intelocl",
"INTERPROCEDURAL",
"INSTALLDIR",
"IRDFT",
"jemalloc",
"kaldi",
"Keras",
"keypress",
"keyrings",
"Khronos",
"KROIs",
"Landm",
"landm",
"Latency",
"Lcov",
"ldconfig",
"libc",
"libopencl",
"libopencv",
"libpython",
"libtbb",
"libtbbbind",
"libtpm",
"libvirtd",
"linmac",
"Liskov",
"lowlatency",
"LTSC",
"LSTM",
"makefiles",
"malloc",
"memleaks",
"manylinux",
"maxdepth",
"miktext",
"Mish",
"mklink",
"mmap",
"mobilenet",
"Mobilenet",
"monodepth",
"mozallowfullscreen",
"msallowfullscreen",
"MSVC",
"msvc",
"Multiclass",
"muxed",
"mxnet",
"namespaceov",
"NCHW",
"ncpu",
"netdev",
"netplan",
"ngraph",
"nireq",
"NNCF",
"nncf",
"nocache",
"noglob",
"nohup",
"nlohmann",
"norestart",
"noqueue",
"nproc",
"NUMA",
"numpy",
"Numpy",
"oallowfullscreen",
"ocloc",
"OCSP",
"oneapi",
"onetbb",
"onnx",
"opencl",
"openembedded",
"openvino",
"Opset",
"opset",
"opsets",
"OVMS",
"ovms",
"ovsa",
"OVSA",
"ovsatool",
"OVTF",
"PACKAGECONFIG",
"paddlepaddle",
"parameterizable",
"partitioner",
"patchelf",
"passpattern",
"Pexels",
"pdmodel",
"PDPD",
"pkgdata",
"pkgs",
"pkill",
"polylines",
"postproc",
"postprocess",
"preprocess",
"Preprocess",
"protobuf",
"Protobuf",
"PROTOBUF",
"prototxt",
"PSROI",
"Pugi",
"pugixml",
"PUGIXML",
"pypi",
"PYTHONPATH",
"pzstd",
"qcow",
"qlen",
"QSPECTRE",
"Qspectre",
"quantizer",
"Rects",
"Relu",
"relu",
"rcnn",
"RCNN",
"RDFT",
"Redistributable",
"remotesigned",
"repolist",
"reproject",
"reshapable",
"Requantize",
"retval",
"RHODS",
"rmmod",
"runtool",
"scons",
"SCONS",
"segm",
"Selu",
"servercore",
"setuptools",
"setupvars",
"SETX",
"SIMD",
"Softmax",
"skylake",
"sphinxdirective",
"Strided",
"squeezenet",
"SWTPM",
"swtpm",
"TBBBIND",
"TBBROOT",
"Tensro",
"texlive",
"textrm",
"tflite",
"thirdparty",
"Thresholded",
"toctree",
"toolset",
"Torchvision",
"tpmrm",
"tpmstate",
"tput",
"Tunables",
"unet",
"Uninstallation",
"unixio",
"unsharp",
"Unsharp",
"Unsh",
"Unsqueeze",
"Usecase",
"usecases",
"USERPROFILE",
"userspace",
"VAAPI",
"valgrind",
"vcpkg",
"vcvars",
"venv",
"virbr",
"virsh",
"virt",
"virtio",
"VMHWM",
"VMRSS",
"VNNI",
"vtune",
"vtunesummary",
"vtunebottonup",
"WHOLEARCHIVE",
"WDDM",
"WORKDIR",
"WORKSIZE",
"xbyak",
"Xbyak",
"xdot",
"xvfz",
"yocto",
"yolo",
"YOLO",
"yolov",
"Yolov",
"YXFB",
"zstd"
],
"ignoreWords": [],


@@ -76,7 +76,7 @@ function(build_docs)
# build with openvino notebooks
if(ENABLE_OPENVINO_NOTEBOOKS)
set(NBDOC_SCRIPT "${DOCS_SOURCE_DIR}/nbdoc/nbdoc.py")
list(APPEND commands
list(PREPEND commands
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${DOCS_SOURCE_DIR}/notebooks" "${RST_OUTPUT}/notebooks"
)
endif()


@@ -14,7 +14,6 @@
Interactive Tutorials (Python) <tutorials>
Sample Applications (Python & C++) <openvino_docs_OV_UG_Samples_Overview>
OpenVINO API 2.0 Transition <openvino_2_0_transition_guide>
This section will help you get a hands-on experience with OpenVINO even if you are just starting


@@ -3,60 +3,256 @@
@sphinxdirective
.. meta::
:description: Preparing models for OpenVINO Runtime. Learn how to convert and compile models from different frameworks or read them directly.
:description: Preparing models for OpenVINO Runtime. Learn about the methods
used to read, convert and compile models from different frameworks.
.. toctree::
:maxdepth: 1
:hidden:
Supported_Model_Formats
openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide
omz_tools_downloader
openvino_docs_OV_Converter_UG_Conversion_Options
openvino_docs_OV_Converter_UG_prepare_model_convert_model_Converting_Model
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, `Torchvision models <https://pytorch.org/hub/>`__.
Every deep learning workflow begins with obtaining a model. You can choose to prepare
a custom one, use a ready-made solution and adjust it to your needs, or even download
and run a pre-trained network from an online database, such as
`TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__,
or `Torchvision models <https://pytorch.org/hub/>`__.
:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows converting them to its own, `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ (`ov.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__), providing a tool dedicated to this task.
If your selected model is in one of the :doc:`OpenVINO™ supported model formats <Supported_Model_Formats>`,
you can use it directly, without the need to save as the OpenVINO IR
(`openvino.Model <api/ie_python_api/_autosummary/openvino.Model.html>`__ -
`ov.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__).
For this purpose, you can use ``openvino.Core.read_model`` and ``openvino.Core.compile_model``
methods, so that conversion is performed automatically before inference, for
maximum convenience (note that working with PyTorch differs slightly, the Python API
being the only option, while TensorFlow may present additional considerations
:doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`).
For better performance and more optimization options, OpenVINO offers a conversion
API with two possible approaches: the Python API functions (``openvino.convert_model``
and ``openvino.save_model``) and the ``ovc`` command line tool, which are described in detail in this article.
.. note::
Model conversion API prior to OpenVINO 2023.1 is considered deprecated.
Both existing and new projects are recommended to transition to the new
solutions, keeping in mind that they are not fully backwards compatible
with ``openvino.tools.mo.convert_model`` or the ``mo`` CLI tool.
For more details, see the :doc:`Model Conversion API Transition Guide <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`.
Convert a Model in Python: ``convert_model``
##############################################
You can use the Model conversion API in Python with the ``openvino.convert_model`` function. This function converts a model from its original framework representation, for example PyTorch or TensorFlow, to the object of type ``openvino.Model``. The resulting ``openvino.Model`` can be inferred in the same application (Python script or Jupyter Notebook) or saved into a file using ``openvino.save_model`` for future use. Below are examples of how to use ``openvino.convert_model`` with models from popular public repositories:
.. tab-set::
.. tab-item:: Torchvision
.. code-block:: py
:force:
import openvino as ov
import torch
from torchvision.models import resnet50
model = resnet50(pretrained=True)
# prepare input_data
input_data = torch.rand(1, 3, 224, 224)
ov_model = ov.convert_model(model, example_input=input_data)
###### Option 1: Save to OpenVINO IR:
# save model to OpenVINO IR for later use
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
# compile model
compiled_model = ov.compile_model(ov_model)
# run the inference
result = compiled_model(input_data)
.. tab-item:: Hugging Face Transformers
.. code-block:: py
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
import openvino as ov
ov_model = ov.convert_model(model, example_input={**encoded_input})
###### Option 1: Save to OpenVINO IR:
# save model to OpenVINO IR for later use
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
# compile model
compiled_model = ov.compile_model(ov_model)
# prepare input_data using HF tokenizer or your own tokenizer
# encoded_input is reused here for simplicity
# run inference
result = compiled_model({**encoded_input})
.. tab-item:: Keras Applications
.. code-block:: py
import tensorflow as tf
import openvino as ov
tf_model = tf.keras.applications.ResNet50(weights="imagenet")
ov_model = ov.convert_model(tf_model)
###### Option 1: Save to OpenVINO IR:
# save model to OpenVINO IR for later use
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
# compile model
compiled_model = ov.compile_model(ov_model)
# prepare input_data
import numpy as np
input_data = np.random.rand(1, 224, 224, 3)
# run inference
result = compiled_model(input_data)
.. tab-item:: TensorFlow Hub
.. code-block:: py
import tensorflow as tf
import tensorflow_hub as hub
import openvino as ov
model = tf.keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5")
])
# Check model page for information about input shape: https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/5
model.build([None, 224, 224, 3])
model.save('mobilenet_v1_100_224') # use a temporary directory
ov_model = ov.convert_model('mobilenet_v1_100_224')
###### Option 1: Save to OpenVINO IR:
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
compiled_model = ov.compile_model(ov_model)
# prepare input_data
import numpy as np
input_data = np.random.rand(1, 224, 224, 3)
# run inference
result = compiled_model(input_data)
.. tab-item:: ONNX Model Hub
.. code-block:: py
import onnx
model = onnx.hub.load("resnet50")
onnx.save(model, 'resnet50.onnx') # use a temporary file for model
import openvino as ov
ov_model = ov.convert_model('resnet50.onnx')
###### Option 1: Save to OpenVINO IR:
# save model to OpenVINO IR for later use
ov.save_model(ov_model, 'model.xml')
###### Option 2: Compile and infer with OpenVINO:
# compile model
compiled_model = ov.compile_model(ov_model)
# prepare input_data
import numpy as np
input_data = np.random.rand(1, 3, 224, 224)
# run inference
result = compiled_model(input_data)
In Option 1, where the ``openvino.save_model`` function is used, an OpenVINO model is serialized in the file system as two files with ``.xml`` and ``.bin`` extensions. This pair of files is called the OpenVINO Intermediate Representation format (OpenVINO IR, or just IR) and is useful for efficient model deployment. OpenVINO IR can be loaded into another application for inference using the ``openvino.Core.read_model`` function. For more details, refer to the :doc:`OpenVINO™ Runtime documentation <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
Option 2, where ``openvino.compile_model`` is used, provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your existing Python inference application. In this case, the converted model is not saved to IR. Instead, the model is compiled and used for inference within the same application.
Option 1 separates model conversion and model inference into two different applications. This approach is useful for deployment scenarios requiring fewer extra dependencies and faster model loading in the end inference application.
For example, converting a PyTorch model to OpenVINO usually demands the ``torch`` Python module and a Python environment. This process can take extra time and memory, but after the converted model is saved as OpenVINO IR with ``openvino.save_model``, it can be loaded in a separate application without the ``torch`` dependency and without the time-consuming conversion. The inference application can be written in other languages supported by OpenVINO, for example in C++, and does not require a Python installation to run.
Before saving the model to OpenVINO IR, consider applying :doc:`Post-training Optimization <ptq_introduction>` to enable more efficient inference and smaller model size.
The figure below illustrates the typical workflow for deploying a trained deep-learning model.
.. image:: ./_static/images/model_conversion_diagram.svg
:alt: model conversion diagram
Convert a Model in CLI: ``ovc``
###############################
Another option for model conversion is to use the ``ovc`` command-line tool, which stands for OpenVINO Model Converter. The tool combines both ``openvino.convert_model`` and ``openvino.save_model`` functionalities. It is convenient to use when the original model is ready for inference and is in one of the supported file formats: ONNX, TensorFlow, TensorFlow Lite, or PaddlePaddle. As a result, ``ovc`` produces an OpenVINO IR, consisting of ``.xml`` and ``.bin`` files, which needs to be read with the ``openvino.Core.read_model`` method. You can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.
.. note::
PyTorch models cannot be converted with ``ovc``; use ``openvino.convert_model`` instead.
The results of both ``ovc`` and ``openvino.convert_model``/``openvino.save_model`` conversion methods are the same. You can choose either of them based on your convenience. Note that there should not be any differences in the results of model conversion if the same set of parameters is used and the model is saved into OpenVINO IR.
Additional Resources
####################
This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:
* :doc:`Convert different model formats to the ov.Model format <Supported_Model_Formats>`.
* :doc:`Review all available conversion parameters <openvino_docs_OV_Converter_UG_Conversion_Options>`.
To achieve the best model inference performance and a more compact OpenVINO IR representation, follow:
* :doc:`Post-training optimization <ptq_introduction>`
* :doc:`Model inference in OpenVINO Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`
If you are using legacy conversion API (``mo`` or ``openvino.tools.mo.convert_model``), please refer to the following materials:
* :doc:`Transition from legacy mo and ov.tools.mo.convert_model <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`
* :doc:`Legacy Model Conversion API <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`
.. api/ie_python_api/_autosummary/openvino.Model.html is a broken link for some reason - need to investigate python api article generation
@endsphinxdirective

View File

@@ -3,7 +3,7 @@
@sphinxdirective
.. meta::
:description: OpenVINO™ ecosystem offers various resources for developing deep learning solutions.
.. toctree::
@@ -13,7 +13,6 @@
ote_documentation
datumaro_documentation
ovsa_get_started
OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.
@@ -28,6 +27,7 @@ More resources:
* :doc:`Documentation <tmo_introduction>`
* `GitHub <https://github.com/openvinotoolkit/nncf>`__
* `PyPI <https://pypi.org/project/nncf/>`__
* `Conda Forge <https://anaconda.org/conda-forge/nncf/>`__
**OpenVINO™ Training Extensions**
@@ -60,39 +60,6 @@ More resources:
* `GitHub <https://github.com/openvinotoolkit/datumaro>`__
* `Documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__
@endsphinxdirective

View File

@@ -0,0 +1,141 @@
# Legacy Features and Components {#openvino_legacy_features}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
OpenVINO Development Tools package <openvino_docs_install_guides_install_dev_tools>
Model Optimizer / Conversion API <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>
OpenVINO API 2.0 transition <openvino_2_0_transition_guide>
Open Model ZOO <model_zoo>
Apache MXNet, Caffe, and Kaldi <mxnet_caffe_kaldi>
Post-training Optimization Tool <pot_introduction>
Since OpenVINO has grown very rapidly in recent years, some of its features
and components have been replaced by other solutions. Some of them are still
supported to ensure OpenVINO users are given enough time to adjust their projects
before the features are fully discontinued.
This section will give you an overview of these major changes and tell you how
you can proceed to get the best experience and results with the current OpenVINO
offering.
| **OpenVINO Development Tools Package**
| *New solution:* OpenVINO Runtime includes all supported components
| *Old solution:* discontinuation planned for OpenVINO 2025.0
|
| OpenVINO Development Tools used to be the OpenVINO package with tools for
advanced operations on models, such as Model conversion API, Benchmark Tool,
Accuracy Checker, Annotation Converter, Post-Training Optimization Tool,
and Open Model Zoo tools. Most of these tools have been either removed,
replaced by other solutions, or moved to the OpenVINO Runtime package.
| :doc:`See how to install Development Tools <openvino_docs_install_guides_install_dev_tools>`
| **Model Optimizer / Conversion API**
| *New solution:* Direct model support and OpenVINO Converter (OVC)
| *Old solution:* Legacy Conversion API discontinuation planned for OpenVINO 2025.0
|
| The role of Model Optimizer and later the Conversion API was largely reduced
when all major model frameworks became supported directly. For converting model
files explicitly, it has been replaced with a more light-weight and efficient
solution, the OpenVINO Converter (launched with OpenVINO 2023.1).
| :doc:`See how to use OVC <openvino_docs_model_processing_introduction>`
| :doc:`See how to transition from the legacy solution <openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition>`
| **Open Model ZOO**
| *New solution:* users are encouraged to use public model repositories
| *Old solution:* discontinuation planned for OpenVINO 2024.0
|
| Open Model ZOO provided a collection of models prepared for use with OpenVINO,
and a small set of tools enabling a level of automation for the process.
Since the tools have been mostly replaced by other solutions and several
other model repositories have recently grown in size and popularity,
Open Model ZOO will no longer be maintained. You may still use its resources
until they are fully removed.
| :doc:`See the Open Model ZOO documentation <model_zoo>`
| `Check the OMZ GitHub project <https://github.com/openvinotoolkit/open_model_zoo>`__
| **Apache MXNet, Caffe, and Kaldi model formats**
| *New solution:* conversion to ONNX via external tools
| *Old solution:* model support will be discontinued with OpenVINO 2024.0
|
| Since these three model formats proved to be far less popular among OpenVINO users
than the remaining ones, their support has been discontinued. Converting them to the
ONNX format is a possible way of retaining them in the OpenVINO-based pipeline.
| :doc:`See the previous conversion instructions <mxnet_caffe_kaldi>`
| :doc:`See the currently supported frameworks <Supported_Model_Formats>`
| **Post-training Optimization Tool (POT)**
| *New solution:* NNCF extended in OpenVINO 2023.0
| *Old solution:* POT discontinuation planned for 2024
|
| Neural Network Compression Framework (NNCF) now offers the same functionality as POT,
in addition to its original feature set. It is currently the default tool for performing
both post-training and training-time optimizations, while POT is considered deprecated.
| :doc:`See the deprecated POT documentation <pot_introduction>`
| :doc:`See how to use NNCF for model optimization <openvino_docs_model_optimization_guide>`
| `Check the NNCF GitHub project, including documentation <https://github.com/openvinotoolkit/nncf>`__
| **Old Inference API 1.0**
| *New solution:* API 2.0 launched in OpenVINO 2022.1
| *Old solution:* discontinuation planned for OpenVINO 2024.0
|
| API 1.0 (Inference Engine and nGraph) is now deprecated. It can still be
used but is not recommended. Its discontinuation is planned for 2024.
| :doc:`See how to transition to API 2.0 <openvino_2_0_transition_guide>`
| **Compile tool**
| *New solution:* the tool is no longer needed
| *Old solution:* deprecated in OpenVINO 2023.0
|
| Compile tool is now deprecated. If you need to compile a model for inference on
a specific device, use the following script:
.. tab-set::
.. tab-item:: Python
:sync: py
.. doxygensnippet:: docs/snippets/export_compiled_model.py
:language: python
:fragment: [export_compiled_model]
.. tab-item:: C++
:sync: cpp
.. doxygensnippet:: docs/snippets/export_compiled_model.cpp
:language: cpp
:fragment: [export_compiled_model]
| :doc:`see which devices support import / export <openvino_docs_OV_UG_Working_with_devices>`
| :doc:`Learn more on preprocessing steps <openvino_docs_OV_UG_Preprocessing_Overview>`
| :doc:`See how to integrate and save preprocessing steps into OpenVINO IR <openvino_docs_OV_UG_Preprocess_Usecase_save>`
| **DL Workbench**
| *New solution:* DevCloud version
| *Old solution:* local distribution discontinued in OpenVINO 2022.3
|
| The stand-alone version of DL Workbench, a GUI tool for previewing and benchmarking
deep learning models, has been discontinued. You can use its cloud version:
| `Intel® Developer Cloud for the Edge <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/overview.html>`__.
| **OpenVINO™ integration with TensorFlow (OVTF)**
| *New solution:* Direct model support and OpenVINO Converter (OVC)
| *Old solution:* discontinued in OpenVINO 2023.0
|
| OpenVINO™ Integration with TensorFlow is no longer supported, as OpenVINO now features
native TensorFlow support, significantly enhancing user experience with no need for
explicit model conversion.
| :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`
@endsphinxdirective

View File

@@ -0,0 +1,31 @@
# Apache MXNet, Caffe, and Kaldi model formats {#mxnet_caffe_kaldi}
@sphinxdirective
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe
openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi
openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models
openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet
openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model
The following articles present the deprecated conversion method for Apache MXNet, Caffe,
and Kaldi model formats.
:doc:`Apache MXNet conversion <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_MxNet>`
:doc:`Caffe conversion <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Caffe>`
:doc:`Kaldi conversion <openvino_docs_MO_DG_prepare_model_convert_model_Convert_Model_From_Kaldi>`
Here are three examples of conversion for particular models.
:doc:`MXNet GluonCV conversion <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_GluonCV_Models>`
:doc:`MXNet Style Transfer Model conversion <openvino_docs_MO_DG_prepare_model_convert_model_mxnet_specific_Convert_Style_Transfer_From_MXNet>`
:doc:`Kaldi ASpIRE Chain TDNN Model conversion <openvino_docs_MO_DG_prepare_model_convert_model_kaldi_specific_Aspire_Tdnn_Model>`
@endsphinxdirective

View File

@@ -17,6 +17,7 @@
Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
Deployment on a Local System <openvino_deployment_guide>
Deployment on a Model Server <ovms_what_is_openvino_model_server>
pytorch_2_0_torch_compile
| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`

View File

@@ -0,0 +1,157 @@
# PyTorch Deployment via "torch.compile" {#pytorch_2_0_torch_compile}
@sphinxdirective
The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
It speeds up PyTorch code by JIT-compiling it into optimized kernels.
By default, Torch code runs in eager-mode, but with the use of ``torch.compile`` it goes through the following steps:
1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:
* compiled by TorchDynamo and "flattened",
* falling back to eager mode, due to unsupported Python constructs (like control-flow code).
2. **Graph lowering** - all PyTorch operations are decomposed into their constituent kernels specific to the chosen backend.
3. **Graph compilation** - the kernels call their corresponding low-level device-specific operations.
How to Use
#################
To use ``torch.compile``, you need to add an import statement and define one of the two available backends:
| ``openvino``
| With this backend, Torch FX subgraphs are directly converted to OpenVINO representation without any additional PyTorch-based tracing/scripting.
| ``openvino_ts``
| With this backend, Torch FX subgraphs are first traced/scripted with PyTorch Torchscript, and then converted to OpenVINO representation.
.. tab-set::
.. tab-item:: openvino
:sync: backend-openvino
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino.svg
:width: 992px
:height: 720px
:scale: 60%
:align: center
.. tab-item:: openvino_ts
:sync: backend-openvino-ts
.. code-block:: python
import openvino.torch
...
model = torch.compile(model, backend='openvino_ts')
Execution diagram:
.. image:: _static/images/torch_compile_backend_openvino_ts.svg
:width: 1088px
:height: 720px
:scale: 60%
:align: center
Environment Variables
+++++++++++++++++++++++++++
* **OPENVINO_TORCH_BACKEND_DEVICE**: enables selecting a specific hardware device to run the application.
By default, the OpenVINO backend for ``torch.compile`` runs PyTorch applications using the CPU. Setting
this variable to GPU.0, for example, will make the application use the integrated graphics processor instead.
* **OPENVINO_TORCH_MODEL_CACHING**: enables saving the optimized model files to a hard drive, after the first application run.
This makes them available for the following application executions, reducing the first-inference latency.
By default, this variable is set to ``False``. Setting it to ``True`` enables caching.
* **OPENVINO_TORCH_CACHE_DIR**: enables defining a custom directory for the model files (if model caching is set to ``True``).
By default, the OpenVINO IR is saved in the ``cache`` sub-directory, created in the application's root directory.
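The same settings can also be applied from inside the script, before the first compiled call (a sketch; the device name, flag values, and cache path below are illustrative):

```python
import os

# Must be set before the first torch.compile(..., backend="openvino") call,
# since the OpenVINO backend reads these variables at compilation time.
os.environ["OPENVINO_TORCH_BACKEND_DEVICE"] = "GPU.0"  # default: CPU
os.environ["OPENVINO_TORCH_MODEL_CACHING"] = "True"    # default: "False"
os.environ["OPENVINO_TORCH_CACHE_DIR"] = "./ov_cache"  # default: "cache"

# then, as shown above:
# import openvino.torch
# model = torch.compile(model, backend="openvino")
```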
Windows support
++++++++++++++++++++++++++
Currently, PyTorch does not officially support the ``torch.compile`` feature on Windows. However, it can be enabled by following
the instructions below:
1. Install the PyTorch nightly wheel file - `2.1.0.dev20230713 <https://download.pytorch.org/whl/nightly/cpu/torch-2.1.0.dev20230713%2Bcpu-cp38-cp38-win_amd64.whl>`__,
2. Update the file at ``<python_env_root>/Lib/site-packages/torch/_dynamo/eval_frames.py``
3. Find the function called ``check_if_dynamo_supported()``:
.. code-block:: python
def check_if_dynamo_supported():
if sys.platform == "win32":
raise RuntimeError("Windows not yet supported for torch.compile")
if sys.version_info >= (3, 11):
raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
4. Comment out the first two lines in this function, so it looks like this:
.. code-block:: python
def check_if_dynamo_supported():
#if sys.platform == "win32":
# raise RuntimeError("Windows not yet supported for torch.compile")
if sys.version_info >= (3, 11):
raise RuntimeError("Python 3.11+ not yet supported for torch.compile")
Support for Automatic1111 Stable Diffusion WebUI
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Automatic1111 Stable Diffusion WebUI is an open-source repository that hosts a browser-based interface for the Stable Diffusion
based image generation. It allows users to create realistic and creative images from text prompts.
Stable Diffusion WebUI is supported on Intel CPUs, Intel integrated GPUs, and Intel discrete GPUs by leveraging OpenVINO
``torch.compile`` capability. Detailed instructions are available in
the `Stable Diffusion WebUI repository <https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon>`__.
Architecture
#################
The ``torch.compile`` feature is part of PyTorch 2.0, and is based on:
* **TorchDynamo** - a Python-level JIT that hooks into the frame evaluation API in CPython
(PEP 523) to dynamically modify Python bytecode right before it is executed (PyTorch operators
that cannot be extracted to an FX graph are executed in the native Python environment).
It maintains the eager-mode capabilities using
`Guards <https://pytorch.org/docs/stable/dynamo/guards-overview.html>`__ to ensure the
generated graphs are valid.
* **AOTAutograd** - generates the backward graph corresponding to the forward graph captured by TorchDynamo.
* **PrimTorch** - decomposes complicated PyTorch operations into simpler and more elementary ops.
* **TorchInductor** - a deep learning compiler that generates fast code for multiple accelerators and backends.
When the PyTorch module is wrapped with ``torch.compile``, TorchDynamo traces the module and
rewrites Python bytecode to extract sequences of PyTorch operations into an FX Graph,
which can be optimized by the OpenVINO backend. The Torch FX graphs are first converted to
inlined FX graphs, and the graph partitioning module traverses the inlined FX graph to identify
operators supported by OpenVINO.
All the supported operators are clustered into OpenVINO submodules, converted to the OpenVINO
graph using OpenVINO's PyTorch decoder, and executed in an optimized manner using OpenVINO runtime.
All unsupported operators fall back to the native PyTorch runtime on CPU. If the subgraph
fails during OpenVINO conversion, the subgraph falls back to PyTorch's default inductor backend.
Additional Resources
############################
* `PyTorch 2.0 documentation <https://pytorch.org/docs/stable/index.html>`_
@endsphinxdirective

View File

@@ -22,11 +22,6 @@
openvino_docs_transformations
OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>
.. toctree::
:maxdepth: 1
:hidden:
openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer
The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle (OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently
@@ -62,7 +57,7 @@ Mapping from Framework Operation
Mapping of custom operation is implemented differently, depending on model format used for import. You may choose one of the following:
1. If a model is represented in the ONNX (including models exported from PyTorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
2. If a model is represented in the Caffe, Kaldi or MXNet formats (as legacy frontends), then :doc:`[Legacy] Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
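The frontend-based mapping in option 1 boils down to a registry from framework operation names to converter functions. The sketch below is a hypothetical Python illustration of that idea only; the names are illustrative and are not the actual Frontend Extension API classes:

```python
# Hypothetical converter registry, mirroring how a frontend maps a framework
# op name to a conversion routine (toy data, not real OpenVINO classes).
def convert_custom_relu(inputs):
    # Toy converter: elementwise ReLU over a list of floats.
    return [max(x, 0.0) for x in inputs]

CONVERTERS = {"CustomReLU": convert_custom_relu}  # framework op name -> converter

def convert(op_name, inputs):
    """Dispatch a framework node to its registered converter."""
    if op_name not in CONVERTERS:
        raise KeyError(f"no converter registered for {op_name!r}")
    return CONVERTERS[op_name](inputs)
```

In the real API the registry is populated via extension classes passed to ``read_model`` or through the ``--extensions`` option, as described above.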


@@ -301,11 +301,19 @@ This mapping also specifies the input name "X" and output name "Out".
The last step is to register this custom operation as follows:
.. doxygensnippet:: docs/snippets/ov_extensions.cpp
:language: cpp
:fragment: [frontend_extension_framework_map_macro_add_extension]
.. important::
To map an operation on a specific framework, you have to link against the respective
frontend (``openvino::frontend::onnx``, ``openvino::frontend::tensorflow``, ``openvino::frontend::paddle``) in the ``CMakeLists.txt`` file:
.. code-block:: sh
target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)
Mapping to Multiple Operations with ConversionExtension
#######################################################


@@ -94,7 +94,7 @@ Detailed Guides
API References
##############
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.1/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.1/groupie_transformation_api.html>`__
@endsphinxdirective


@@ -15,7 +15,7 @@
The guides below provide extra API references needed for OpenVINO plugin development:
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.1/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.1/groupie_transformation_api.html>`__
@endsphinxdirective


@@ -37,6 +37,7 @@
<tab type="user" title="Step 3. Main transformations" url="@ref openvino_docs_OV_UG_lpt_step3_main">
<tab type="user" title="AddTransformation" url="@ref openvino_docs_OV_UG_lpt_AddTransformation"/>
<tab type="user" title="AvgPoolTransformation" url="@ref openvino_docs_OV_UG_lpt_AvgPoolTransformation"/>
<tab type="user" title="BatchToSpaceTransformation" url="@ref openvino_docs_OV_UG_lpt_BatchToSpaceTransformation"/>
<tab type="user" title="ClampTransformation" url="@ref openvino_docs_OV_UG_lpt_ClampTransformation"/>
<tab type="user" title="ConcatTransformation" url="@ref openvino_docs_OV_UG_lpt_ConcatTransformation"/>
<tab type="user" title="ConvolutionTransformation" url="@ref openvino_docs_OV_UG_lpt_ConvolutionTransformation"/>
@@ -62,6 +63,7 @@
<tab type="user" title="ReshapeTransformation" url="@ref openvino_docs_OV_UG_lpt_ReshapeTransformation"/>
<tab type="user" title="SqueezeTransformation" url="@ref openvino_docs_OV_UG_lpt_SqueezeTransformation"/>
<tab type="user" title="ShuffleChannelsTransformation" url="@ref openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation"/>
<tab type="user" title="SpaceToBatchTransformation" url="@ref openvino_docs_OV_UG_lpt_SpaceToBatchTransformation"/>
<tab type="user" title="SplitTransformation" url="@ref openvino_docs_OV_UG_lpt_SplitTransformation"/>
<tab type="user" title="StridedSliceTransformation" url="@ref openvino_docs_OV_UG_lpt_StridedSliceTransformation"/>
<tab type="user" title="TransposeTransformation" url="@ref openvino_docs_OV_UG_lpt_TransposeTransformation"/>


@@ -188,6 +188,7 @@ Transformations:
* :doc:`AddTransformation <openvino_docs_OV_UG_lpt_AddTransformation>`
* :doc:`AvgPoolTransformation <openvino_docs_OV_UG_lpt_AvgPoolTransformation>`
* :doc:`ClampTransformation <openvino_docs_OV_UG_lpt_ClampTransformation>`
* :doc:`BatchToSpaceTransformation <openvino_docs_OV_UG_lpt_BatchToSpaceTransformation>`
* :doc:`ConcatTransformation <openvino_docs_OV_UG_lpt_ConcatTransformation>`
* :doc:`ConvolutionTransformation <openvino_docs_OV_UG_lpt_ConvolutionTransformation>`
* :doc:`ConvolutionBackpropDataTransformation <openvino_docs_OV_UG_lpt_ConvolutionBackpropDataTransformation>`
@@ -211,6 +212,7 @@ Transformations:
* :doc:`ReshapeTransformation <openvino_docs_OV_UG_lpt_ReshapeTransformation>`
* :doc:`SqueezeTransformation <openvino_docs_OV_UG_lpt_SqueezeTransformation>`
* :doc:`ShuffleChannelsTransformation <openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation>`
* :doc:`SpaceToBatchTransformation <openvino_docs_OV_UG_lpt_SpaceToBatchTransformation>`
* :doc:`SplitTransformation <openvino_docs_OV_UG_lpt_SplitTransformation>`
* :doc:`StridedSliceTransformation <openvino_docs_OV_UG_lpt_StridedSliceTransformation>`
* :doc:`TransposeTransformation <openvino_docs_OV_UG_lpt_TransposeTransformation>`


@@ -105,7 +105,7 @@ Model display features (here and below):
The transformation is required and includes two tasks:
1. Mark operation input ports (create ``Precision`` attribute instance) by provided restrictions: input port index and required precisions. Restrictions are provided as input argument in ``:ref:`ngraph::pass::low_precision::LowPrecision <doxid-classngraph_1_1pass_1_1low__precision_1_1_low_precision>``` constructor.
1. Mark operation input ports (create ``Precision`` attribute instance) by provided restrictions: input port index and required precisions. Restrictions are provided as input argument in ``:ref:`ov::pass::low_precision::LowPrecision <doxid-classov_1_1pass_1_1low__precision_1_1_low_precision>``` constructor.
2. Mark precision-preserved operations.
No attributes are required before the transformation. Changes in the example model after ``MarkupPrecisions`` transformation:
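The two MarkupPrecisions tasks above can be sketched as a table lookup: a restriction map from operation type and input port index to the required precisions, plus a set of precision-preserved operation types. Everything below is an illustrative sketch, not the real LPT data structures:

```python
# Illustrative MarkupPrecisions sketch (toy names, not the actual LPT classes).
# Restrictions: op type -> {input port index -> required precisions}.
restrictions = {"Convolution": {0: ["u8"], 1: ["i8"]}}  # assumption: example map
precision_preserved_ops = {"MaxPool", "Reshape"}        # assumption: example set

def markup(node_type, input_ports):
    """Attach a 'Precision' attribute to restricted ports and report whether
    the operation is precision-preserved."""
    attrs = {}
    for port, _ in enumerate(input_ports):
        required = restrictions.get(node_type, {}).get(port)
        if required:
            attrs[port] = {"Precision": required}
    return attrs, node_type in precision_preserved_ops
```

In the real pass the restrictions are supplied as a constructor argument of the ``LowPrecision`` transformation, as noted above.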


@@ -10,8 +10,17 @@
Main transformations constitute the majority of low-precision transformations. They operate on dequantization operations. Main transformations include:
.. toctree::
:maxdepth: 1
:hidden:
BatchToSpaceTransformation <openvino_docs_OV_UG_lpt_BatchToSpaceTransformation>
SpaceToBatchTransformation <openvino_docs_OV_UG_lpt_SpaceToBatchTransformation>
* :doc:`AddTransformation <openvino_docs_OV_UG_lpt_AddTransformation>`
* :doc:`AvgPoolTransformation <openvino_docs_OV_UG_lpt_AvgPoolTransformation>`
* :doc:`BatchToSpaceTransformation <openvino_docs_OV_UG_lpt_BatchToSpaceTransformation>`
* :doc:`ClampTransformation <openvino_docs_OV_UG_lpt_ClampTransformation>`
* :doc:`ConcatTransformation <openvino_docs_OV_UG_lpt_ConcatTransformation>`
* :doc:`ConvolutionTransformation <openvino_docs_OV_UG_lpt_ConvolutionTransformation>`
@@ -34,6 +43,7 @@ Main transformations are the majority of low precision transformations. Transfor
* :doc:`ReduceSumTransformation <openvino_docs_OV_UG_lpt_ReduceSumTransformation>`
* :doc:`ReluTransformation <openvino_docs_OV_UG_lpt_ReluTransformation>`
* :doc:`ReshapeTransformation <openvino_docs_OV_UG_lpt_ReshapeTransformation>`
* :doc:`SpaceToBatchTransformation <openvino_docs_OV_UG_lpt_SpaceToBatchTransformation>`
* :doc:`SqueezeTransformation <openvino_docs_OV_UG_lpt_SqueezeTransformation>`
* :doc:`ShuffleChannelsTransformation <openvino_docs_OV_UG_lpt_ShuffleChannelsTransformation>`
* :doc:`SplitTransformation <openvino_docs_OV_UG_lpt_SplitTransformation>`


@@ -1,3 +1,3 @@
# ConvertSubtractConstant transformation {#openvino_docs_OV_UG_lpt_ConvertSubtractConstant}
ngraph::pass::low_precision::ConvertSubtractConstant class represents the `ConvertSubtractConstant` transformation.
ov::pass::low_precision::ConvertSubtractConstant class represents the `ConvertSubtractConstant` transformation.


@@ -1,3 +1,3 @@
# PullReshapeThroughDequantization transformation {#openvino_docs_OV_UG_lpt_PullReshapeThroughDequantization}
ngraph::pass::low_precision::PullReshapeThroughDequantization class represents the `PullReshapeThroughDequantization` transformation.
ov::pass::low_precision::PullReshapeThroughDequantization class represents the `PullReshapeThroughDequantization` transformation.


@@ -1,3 +1,3 @@
# PullTransposeThroughDequantization transformation {#openvino_docs_OV_UG_lpt_PullTransposeThroughDequantization}
ngraph::pass::low_precision::PullTransposeThroughDequantization class represents the `PullTransposeThroughDequantization` transformation.
ov::pass::low_precision::PullTransposeThroughDequantization class represents the `PullTransposeThroughDequantization` transformation.


@@ -1,3 +1,3 @@
# AlignQuantizationIntervals transformation {#openvino_docs_OV_UG_lpt_AlignQuantizationIntervals}
ngraph::pass::low_precision::AlignQuantizationIntervals class represents the `AlignQuantizationIntervals` transformation.
ov::pass::low_precision::AlignQuantizationIntervals class represents the `AlignQuantizationIntervals` transformation.
