Compare commits: 2023.0.0.d ... releases/2

610 commits
@@ -31,14 +31,6 @@ pr:
     - 'tools/*'
     - 'tests/layer_tests/*'
 
-resources:
-  repositories:
-  - repository: openvino_contrib
-    type: github
-    endpoint: openvinotoolkit
-    name: openvinotoolkit/openvino_contrib
-    ref: master
-
 variables:
   - group: github
 
@@ -56,7 +48,6 @@ jobs:
     VSTS_HTTP_TIMEOUT: 200
     BUILD_TYPE: Release
     OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
-    OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
     WORK_DIR: $(Pipeline.Workspace)/_w
     BUILD_DIR: $(WORK_DIR)/build
     ANDROID_TOOLS: $(WORK_DIR)/android_tools
@@ -66,7 +57,7 @@ jobs:
     SHARE_DIR: /mount/cinfsshare/onnxtestdata
     CCACHE_DIR: $(SHARE_DIR)/ccache/master/android_arm64
     LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
-    OV_PYTHON_VERSION: 3.10.10 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
+    OV_PYTHON_VERSION: 3.11.2 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
 
   steps:
   - task: UsePythonVersion@0
@@ -76,7 +67,7 @@ jobs:
       disableDownloadFromRegistry: false
       architecture: 'x64'
       githubToken: $(auth_token)
-    displayName: Setup Python 3.10
+    displayName: Setup Python 3.11
    name: setupPython
  - bash: |
      #!/bin/bash
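A note on the version-bump pattern above: the Azure agent's tool cache stores each interpreter under a directory named for its full x.y.z version, which is why OV_PYTHON_VERSION must carry the exact patch release for LD_LIBRARY_PATH to resolve. A minimal shell sketch of that path arithmetic, assuming the standard agent layout (AGENT_TOOLSDIRECTORY is the script-side form of $(Agent.ToolsDirectory)):

    # Sketch only: the tool cache lays Python out as .../Python/<full x.y.z>/x64,
    # so a bare "3.11" would not name a real directory for LD_LIBRARY_PATH.
    OV_PYTHON_VERSION=3.11.2
    export LD_LIBRARY_PATH="$AGENT_TOOLSDIRECTORY/Python/$OV_PYTHON_VERSION/x64/lib:$LD_LIBRARY_PATH"
    ls "$AGENT_TOOLSDIRECTORY/Python/$OV_PYTHON_VERSION/x64/lib"   # e.g. libpython3.11.so
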
@@ -121,11 +112,6 @@ jobs:
      submodules: 'true'
      path: openvino
 
-  - checkout: openvino_contrib
-    clean: 'true'
-    submodules: 'true'
-    path: openvino_contrib
-
  - script: |
      set -e
      sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
@@ -147,20 +133,14 @@ jobs:
  - task: CMake@1
    inputs:
      cmakeArgs: >
-        -GNinja
+        -G "Ninja Multi-Config"
        -DCMAKE_VERBOSE_MAKEFILE=ON
        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
        -DCMAKE_TOOLCHAIN_FILE=$(ANDROID_TOOLS)/ndk-bundle/build/cmake/android.toolchain.cmake
        -DCMAKE_COMPILE_WARNING_AS_ERROR=ON
        -DANDROID_ABI=$(ANDROID_ABI_CONFIG)
        -DANDROID_STL=c++_shared
        -DANDROID_PLATFORM=$(ANDROID_SDK_VERSION)
        -DENABLE_TESTS=ON
-        -DBUILD_java_api=ON
-        -DBUILD_nvidia_plugin=OFF
-        -DBUILD_custom_operations=OFF
-        -DENABLE_INTEL_GPU=ON
-        -DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules
        -DCMAKE_CXX_LINKER_LAUNCHER=ccache
        -DCMAKE_C_LINKER_LAUNCHER=ccache
        -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
@@ -32,13 +32,13 @@ resources:
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/openvino_contrib
-      ref: master
+      ref: releases/2023/0
 
  - repository: testdata
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/testdata
-      ref: master
+      ref: releases/2023/0
 
 variables:
   - group: github
@@ -100,17 +100,17 @@ jobs:
    BUILD_PYTHON: $(WORK_DIR)/build_python
    INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
    LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
-    OV_PYTHON_VERSION: 3.10.10 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
+    OV_PYTHON_VERSION: 3.11.2 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
 
  steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '$(OV_PYTHON_VERSION)' # Setting only the major & minor version will download the latest release from the GH repo, e.g. 3.10 will resolve to 3.10.10.
      addToPath: true
      disableDownloadFromRegistry: false
      architecture: 'x64'
      githubToken: $(auth_token)
-    displayName: Setup Python 3.10
+    displayName: Setup Python 3.11
    name: setupPython
  - bash: |
      #!/bin/bash
@@ -172,7 +172,8 @@ jobs:
      # For running Python API tests
      python3 -m pip install -r $(REPO_DIR)/src/bindings/python/src/compatibility/openvino/requirements-dev.txt
      # For running Paddle frontend unit tests
-      python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
+      # TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
+      #python3 -m pip install -r $(REPO_DIR)/src/frontends/paddle/tests/requirements.txt
      # For running ONNX frontend unit tests
      python3 -m pip install -r $(REPO_DIR)/src/frontends/onnx/tests/requirements.txt
      # For running TensorFlow frontend unit tests
@@ -244,6 +245,7 @@ jobs:
      -DCMAKE_CXX_COMPILER=clang++
      -DCMAKE_C_COMPILER=clang
+      -DENABLE_SYSTEM_SNAPPY=ON
      -DENABLE_SYSTEM_TBB=ON
      -DCPACK_GENERATOR=$(CMAKE_CPACK_GENERATOR)
      -DBUILD_nvidia_plugin=OFF
      -S $(REPO_DIR)
@@ -291,7 +293,10 @@ jobs:
  - script: cmake -DCOMPONENT=tests -DCMAKE_INSTALL_PREFIX=$(INSTALL_DIR) -P $(BUILD_LAYER_TESTS_DIR)/cmake_install.cmake
    displayName: 'Install Layer Tests'
 
-  - script: python3 -m pip install openvino-dev --find-links=$(INSTALL_DIR)/tools
+  - script: |
+      set -e
+      python3 -m pip install $(INSTALL_DIR)/tools/openvino-*
+      python3 -m pip install $(INSTALL_DIR)/tools/openvino_dev-*
    displayName: 'Install python wheels'
 
  - script: |
@@ -304,7 +309,7 @@ jobs:
 
    # Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
  - script: |
-      export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
+      export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
      python3 -m pytest -s $(INSTALL_TEST_DIR)/pyngraph $(PYTHON_STATIC_ARGS) \
        --junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
        --ignore=$(INSTALL_TEST_DIR)/pyngraph/tests/test_onnx/test_zoo_models.py \
@@ -314,7 +319,7 @@ jobs:
    # Skip test_onnx/test_zoo_models and test_onnx/test_backend due to long execution time
  - script: |
      # For python imports to import pybind_mock_frontend
-      export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
+      export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
      export PYTHONPATH=$(INSTALL_TEST_DIR):$(INSTALL_DIR)/python/python3.8:$PYTHONPATH
      python3 -m pytest -sv $(INSTALL_TEST_DIR)/pyopenvino $(PYTHON_STATIC_ARGS) \
        --junitxml=$(INSTALL_TEST_DIR)/TEST-Pyngraph.xml \
@@ -324,7 +329,7 @@ jobs:
    displayName: 'Python API 2.0 Tests'
 
  - script: |
-      export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.1906/linux/x64:$(LD_LIBRARY_PATH)
+      export LD_LIBRARY_PATH=$(REPO_DIR)/temp/gna_03.05.00.2116/linux/x64:$(LD_LIBRARY_PATH)
      python3 -m pytest -s $(INSTALL_TEST_DIR)/mo/unit_tests --junitxml=$(INSTALL_TEST_DIR)/TEST-ModelOptimizer.xml
    displayName: 'Model Optimizer UT'
 
@@ -365,7 +370,7 @@ jobs:
    displayName: 'Build cpp samples - gcc'
 
  - script: $(SAMPLES_INSTALL_DIR)/cpp/build_samples.sh -b $(BUILD_DIR)/cpp_samples_clang
    env:
      CC: clang
      CXX: clang++
    displayName: 'Build cpp samples - clang'
@@ -31,14 +31,6 @@ pr:
     - 'tools/*'
     - 'tests/layer_tests/*'
 
-resources:
-  repositories:
-  - repository: openvino_contrib
-    type: github
-    endpoint: openvinotoolkit
-    name: openvinotoolkit/openvino_contrib
-    ref: master
-
 variables:
   - group: github
 
@@ -54,34 +46,18 @@ jobs:
    system.debug: true
    VSTS_HTTP_RETRY: 5
    VSTS_HTTP_TIMEOUT: 200
-    PYTHON_ARM_VERSION: "3.10.6"
-    PYTHON_EXEC: "python3.10"
-    OPENVINO_ARCH: 'aarch64'
    NUM_PROC: 1
    BUILD_TYPE: Release
    OPENVINO_REPO_DIR: $(Build.Repository.LocalPath)
-    OPENVINO_CONTRIB_REPO_DIR: $(OPENVINO_REPO_DIR)/../openvino_contrib
-    OPENCV_REPO_DIR: $(OPENVINO_REPO_DIR)/../opencv
-    ONETBB_REPO_DIR: $(OPENVINO_CONTRIB_REPO_DIR)/../oneTBB
-    BUILD_PYTHON: $(WORK_DIR)/build_python
-    BUILD_OPENCV: $(WORK_DIR)/build_opencv
-    BUILD_ONETBB: $(WORK_DIR)/build_onetbb
    BUILD_OPENVINO: $(WORK_DIR)/build
-    BUILD_OPENVINO_PYTHON: $(WORK_DIR)/build_python
-    CROSSENV_DIR: $(WORK_DIR)/cross_env
    INSTALL_OPENVINO: $(WORK_DIR)/install_openvino
-    INSTALL_PYTHON: $(INSTALL_OPENVINO)/extras/python
-    INSTALL_ONETBB: $(WORK_DIR)/build/extras/oneTBB
-    INSTALL_ONETBB_PACKAGE: $(INSTALL_OPENVINO)/extras/oneTBB
-    INSTALL_OPENCV: $(INSTALL_OPENVINO)/extras/opencv
    WORK_DIR: $(Pipeline.Workspace)/_w
    SHARE_DIR: /mount/cinfsshare/onnxtestdata
    TMP_DIR: /mnt/tmp
    OPENVINO_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64
-    OPENCV_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64_opencv
-    ONETBB_CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_arm64_onetbb
    LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
-    OV_PYTHON_VERSION: 3.10.10 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
+    OV_PYTHON_VERSION: 3.11.2 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
 
  steps:
  - task: UsePythonVersion@0
@@ -91,7 +67,7 @@ jobs:
      disableDownloadFromRegistry: false
      architecture: 'x64'
      githubToken: $(auth_token)
-    displayName: Setup Python 3.10
+    displayName: Setup Python 3.11
    name: setupPython
  - bash: |
      #!/bin/bash
@@ -121,93 +97,89 @@ jobs:
 
  - script: |
      rm -rf $(WORK_DIR) ; mkdir $(WORK_DIR)
-      mkdir -p $(BUILD_ONETBB) $(BUILD_OPENCV) $(BUILD_OPENVINO) $(BUILD_OPENVINO_PYTHON) $(BUILD_PYTHON)
-      mkdir -p $(INSTALL_ONETBB) $(INSTALL_ONETBB_PACKAGE) $(INSTALL_OPENVINO) $(INSTALL_PYTHON) $(INSTALL_OPENCV)
+      mkdir -p $(BUILD_OPENVINO)
+      mkdir -p $(INSTALL_OPENVINO)
      sudo rm -rf $(TMP_DIR) ; sudo mkdir $(TMP_DIR) ; sudo chmod 777 -R $(TMP_DIR)
      sudo mkdir -p $(SHARE_DIR)
      sudo apt --assume-yes update && sudo apt --assume-yes install nfs-common
      sudo mount -vvv -t nfs cinfsshare.file.core.windows.net:/cinfsshare/onnxtestdata $(SHARE_DIR) -o vers=4,minorversion=1,sec=sys
      mkdir -p $(OPENVINO_CCACHE_DIR)
-      mkdir -p $(OPENCV_CCACHE_DIR)
-      mkdir -p $(ONETBB_CCACHE_DIR)
    displayName: 'Make directories'
 
  - checkout: self
    clean: 'true'
    submodules: 'true'
    path: openvino
 
-  - checkout: openvino_contrib
-    clean: 'true'
-    submodules: 'true'
-    path: openvino_contrib
-
  - script: |
      set -e
      sudo -E $(OPENVINO_REPO_DIR)/install_build_dependencies.sh
-      $(OPENVINO_CONTRIB_REPO_DIR)/modules/arm_plugin/scripts/install_build_dependencies.sh
-      python3 -m pip install --upgrade pip
-      python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/requirements.txt
-      python3 -m pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
-    env:
-      CCACHE_TEMPDIR: $(TMP_DIR)/ccache
-      CCACHE_BASEDIR: $(Pipeline.Workspace)
-      CCACHE_MAXSIZE: 50G
-      USE_CCACHE: 1
-      OPENCV_CCACHE_DIR: $(OPENCV_CCACHE_DIR)
-      ONETBB_CCACHE_DIR: $(ONETBB_CCACHE_DIR)
-      PYTHON_ARM_VERSION: $(PYTHON_ARM_VERSION)
-      NUM_PROC: $(NUM_PROC)
-      BUILD_PYTHON: $(BUILD_PYTHON)
-      WORK_DIR: $(WORK_DIR)
-      INSTALL_PYTHON: $(INSTALL_PYTHON)
-      BUILD_TYPE: $(BUILD_TYPE)
-      OPENVINO_REPO_DIR: $(OPENVINO_REPO_DIR)
-      BUILD_ONETBB: $(BUILD_ONETBB)
-      INSTALL_ONETBB: $(INSTALL_ONETBB)
-      INSTALL_OPENCV: $(INSTALL_OPENCV)
-      PYTHON_EXEC: $(PYTHON_EXEC)
-      ONETBB_REPO_DIR: $(ONETBB_REPO_DIR)
-      OPENCV_REPO_DIR: $(OPENCV_REPO_DIR)
-      BUILD_OPENCV: $(BUILD_OPENCV)
-      INSTALL_OPENVINO: $(INSTALL_OPENVINO)
+      # install dependencies needed to build CPU plugin for ARM
+      sudo -E apt --assume-yes install scons crossbuild-essential-arm64
+      # generic dependencies
+      sudo -E apt --assume-yes install cmake ccache
+      # Speed up build
+      sudo -E apt -y --no-install-recommends install unzip
+      wget https://github.com/ninja-build/ninja/releases/download/v1.10.2/ninja-linux.zip
+      unzip ninja-linux.zip
+      sudo cp -v ninja /usr/local/bin/
    displayName: 'Install dependencies'
 
  - script: |
-      set -e
-      /usr/local/bin/$(PYTHON_EXEC) -m pip install -U pip
-      /usr/local/bin/$(PYTHON_EXEC) -m pip install crossenv
-      /usr/local/bin/$(PYTHON_EXEC) -m crossenv $(INSTALL_PYTHON)/bin/$(PYTHON_EXEC) $(CROSSENV_DIR)
-      source $(CROSSENV_DIR)/bin/activate
-      build-pip3 install -U pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
-      cross-pip3 install -U pip install -r $(OPENVINO_REPO_DIR)/src/bindings/python/wheel/requirements-dev.txt
-    displayName: 'Create crossenv'
+      git submodule update --init -- $(OPENVINO_REPO_DIR)/src/plugins
+      git submodule update --init -- $(OPENVINO_REPO_DIR)/thirdparty/gtest
+    displayName: 'Init submodules for non Conan dependencies'
 
-  - task: CMake@1
-    inputs:
-      cmakeArgs: >
-        -GNinja
-        -DCMAKE_VERBOSE_MAKEFILE=ON
-        -DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-        -DOpenCV_DIR=$(INSTALL_OPENCV)/cmake
-        -DENABLE_PYTHON=OFF
-        -DENABLE_TESTS=ON
-        -DENABLE_DATA=OFF
-        -DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake
-        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
-        -DTHREADING=TBB
-        -DTBB_DIR=$(INSTALL_ONETBB)/lib/cmake/TBB
-        -DCMAKE_VERBOSE_MAKEFILE=ON
-        -DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)/modules/arm_plugin
-        -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
-        -DCMAKE_C_COMPILER_LAUNCHER=ccache
-        -DCMAKE_CXX_LINKER_LAUNCHER=ccache
-        -DCMAKE_C_LINKER_LAUNCHER=ccache
-        -DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC)
-        -DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO)
-        -S $(OPENVINO_REPO_DIR)
+  - script: |
+      python3 -m pip install conan
+      # generate build profile
+      conan profile detect
+      # generate host profile for linux_arm64
+      echo "include(default)" > $(BUILD_OPENVINO)/linux_arm64
+      echo "[buildenv]" >> $(BUILD_OPENVINO)/linux_arm64
+      echo "CC=aarch64-linux-gnu-gcc" >> $(BUILD_OPENVINO)/linux_arm64
+      echo "CXX=aarch64-linux-gnu-g++" >> $(BUILD_OPENVINO)/linux_arm64
+      # install OpenVINO dependencies
+      export CMAKE_CXX_COMPILER_LAUNCHER=ccache
+      export CMAKE_C_COMPILER_LAUNCHER=ccache
+      conan install $(OPENVINO_REPO_DIR)/conanfile.txt \
+        -pr:h $(BUILD_OPENVINO)/linux_arm64 \
+        -s:h arch=armv8 \
+        -of $(BUILD_OPENVINO) \
+        -b missing
+    env:
+      CCACHE_DIR: $(OPENVINO_CCACHE_DIR)
+      CCACHE_TEMPDIR: $(TMP_DIR)/ccache
+      CCACHE_BASEDIR: $(Pipeline.Workspace)
+      CCACHE_MAXSIZE: 50G
+    displayName: 'Install conan and dependencies'
 
+  - script: |
+      source $(BUILD_OPENVINO)/conanbuild.sh
+      cmake \
+        -G Ninja \
+        -DCMAKE_VERBOSE_MAKEFILE=ON \
+        -DBUILD_SHARED_LIBS=ON \
+        -DCMAKE_COMPILE_WARNING_AS_ERROR=ON \
+        -DENABLE_CPPLINT=OFF \
+        -DENABLE_PYTHON=OFF \
+        -DENABLE_TESTS=ON \
+        -DENABLE_DATA=OFF \
+        -DENABLE_SYSTEM_TBB=ON \
+        -DENABLE_SYSTEM_PROTOBUF=ON \
+        -DENABLE_SYSTEM_SNAPPY=ON \
+        -DENABLE_SYSTEM_PUGIXML=ON \
+        -DCMAKE_TOOLCHAIN_FILE=$(BUILD_OPENVINO)/conan_toolchain.cmake \
+        -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+        -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+        -DARM_COMPUTE_SCONS_JOBS=$(NUM_PROC) \
+        -DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO) \
+        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
+        -S $(OPENVINO_REPO_DIR) \
+        -B $(BUILD_OPENVINO)
-    displayName: 'CMake OpenVINO ARM plugin'
+      source $(BUILD_OPENVINO)/deactivate_conanbuild.sh
+    displayName: 'CMake configure'
 
  - script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE)
    env:
@@ -215,38 +187,13 @@ jobs:
      CCACHE_TEMPDIR: $(TMP_DIR)/ccache
      CCACHE_BASEDIR: $(Pipeline.Workspace)
      CCACHE_MAXSIZE: 50G
-    displayName: 'Build OpenVINO ARM plugin'
+    displayName: 'Build OpenVINO Runtime'
 
  - script: cmake --build $(BUILD_OPENVINO) --parallel --config $(BUILD_TYPE) --target install
-    displayName: 'Install OpenVINO ARM plugin'
-
-  - script: |
-      source $(CROSSENV_DIR)/bin/activate
-      cmake \
-        -GNinja \
-        -DENABLE_PYTHON=ON \
-        -DENABLE_WHEEL=ON \
-        -DCMAKE_TOOLCHAIN_FILE=$(OPENVINO_REPO_DIR)/cmake/arm64.toolchain.cmake \
-        -DOpenVINODeveloperPackage_DIR=$(BUILD_OPENVINO) \
-        -DCMAKE_INSTALL_PREFIX=$(INSTALL_OPENVINO) \
-        -S $(OPENVINO_REPO_DIR)/src/bindings/python \
-        -B $(BUILD_OPENVINO_PYTHON)
-      deactivate
-    displayName: 'CMake OpenVINO python binding'
-
-  - script: cmake --build $(BUILD_OPENVINO_PYTHON) --parallel --config $(BUILD_TYPE)
-    env:
-      CCACHE_DIR: $(OPENVINO_CCACHE_DIR)
-      CCACHE_TEMPDIR: $(TMP_DIR)/ccache
-      CCACHE_BASEDIR: $(Pipeline.Workspace)
-      CCACHE_MAXSIZE: 50G
-    displayName: 'Build OpenVINO python binding'
-
-  - script: cmake --build $(BUILD_OPENVINO_PYTHON) --parallel --target install
-    displayName: 'Install OpenVINO python binding'
+    displayName: 'Install OpenVINO Runtime'
 
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: $(Build.ArtifactStagingDirectory)
      ArtifactName: 'openvino_aarch64_linux'
-    displayName: 'Publish OpenVINO AArch64 linux package'
+    displayName: 'Publish OpenVINO Runtime for ARM'
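The conan steps above are the heart of the migrated cross-build: one profile describes the x86_64 machine doing the compiling, a second profile describes the aarch64 target, and conan resolves the dependency graph against the host profile before CMake picks up the generated toolchain file. A condensed sketch of the same flow, runnable outside the pipeline (the file and directory names here are placeholders, and conan 2.x CLI semantics are assumed):

    # Hypothetical standalone repro of the pipeline's conan cross-install step.
    python3 -m pip install conan
    conan profile detect                          # build profile: the machine compiling
    echo "include(default)"          >  linux_arm64   # host profile: the aarch64 target
    echo "[buildenv]"                >> linux_arm64
    echo "CC=aarch64-linux-gnu-gcc"  >> linux_arm64
    echo "CXX=aarch64-linux-gnu-g++" >> linux_arm64
    conan install ./conanfile.txt \
      -pr:h ./linux_arm64 \                       # host profile created above
      -s:h arch=armv8 \                           # override the target architecture setting
      -of ./build \                               # output folder for generated files
      -b missing                                  # build whatever has no prebuilt binary
    # afterwards: source ./build/conanbuild.sh and configure CMake with
    # -DCMAKE_TOOLCHAIN_FILE=./build/conan_toolchain.cmake, as the pipeline does
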
@@ -35,6 +35,7 @@ resources:
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/testdata
+      ref: releases/2023/0
 
 variables:
   - group: github
@@ -59,7 +60,7 @@ jobs:
    INSTALL_DIR: $(WORK_DIR)/install_pkg
    SETUPVARS: $(INSTALL_DIR)/setupvars.sh
    LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
-    OV_PYTHON_VERSION: 3.10.10 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
+    OV_PYTHON_VERSION: 3.11.2 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
 
  steps:
  - task: UsePythonVersion@0
@@ -69,7 +70,7 @@ jobs:
      disableDownloadFromRegistry: false
      architecture: 'x64'
      githubToken: $(auth_token)
-    displayName: Setup Python 3.10
+    displayName: Setup Python 3.11
    name: setupPython
  - bash: |
      #!/bin/bash
@@ -123,12 +124,11 @@ jobs:
  - task: CMake@1
    inputs:
      cmakeArgs: >
-        -GNinja
+        -G "Ninja Multi-Config"
        -DENABLE_CPPLINT=OFF
        -DENABLE_GAPI_PREPROCESSING=OFF
        -DCMAKE_VERBOSE_MAKEFILE=ON
        -DCMAKE_COMPILE_WARNING_AS_ERROR=ON
-        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
        -DENABLE_FASTER_BUILD=ON
        -DENABLE_PROFILING_ITT=ON
        -DSELECTIVE_BUILD=COLLECT
@@ -152,11 +152,10 @@ jobs:
  - task: CMake@1
    inputs:
      cmakeArgs: >
        -GNinja
        -DSELECTIVE_BUILD=ON
        -DSELECTIVE_BUILD_STAT=$(BUILD_DIR)/*.csv
-        -S $(REPO_DIR)
        -B $(BUILD_DIR)
        -S $(REPO_DIR)
    displayName: 'CMake CC ON'
 
  - script: cmake --build $(BUILD_DIR) --parallel --config $(BUILD_TYPE) --target openvino_intel_cpu_plugin openvino_ir_frontend
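The two CMake invocations in this pipeline implement OpenVINO's two-stage conditional compilation: the first configure uses SELECTIVE_BUILD=COLLECT with ITT profiling so instrumented runs emit per-code-region statistics, and the second re-configures with SELECTIVE_BUILD=ON pointing at the collected CSV files so only the exercised code paths are compiled in. A compressed sketch of the same flow (directory names and the workload step are placeholders):

    # Hypothetical condensed version of the two-stage selective build above.
    cmake -G Ninja -DSELECTIVE_BUILD=COLLECT -DENABLE_PROFILING_ITT=ON -S . -B build
    cmake --build build --parallel                 # stage 1: instrumented binaries
    # ...run representative inference workloads; they produce *.csv statistics...
    cmake -G Ninja -DSELECTIVE_BUILD=ON -DSELECTIVE_BUILD_STAT=$PWD/build/*.csv -S . -B build
    cmake --build build --parallel --target openvino_intel_cpu_plugin openvino_ir_frontend
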
@@ -4,7 +4,7 @@ resources:
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/openvino_contrib
-      ref: master
+      ref: releases/2023/0
 
 variables:
   - group: github
@@ -33,7 +33,7 @@ jobs:
    SHARE_DIR: /mount/cinfsshare/onnxtestdata
    CCACHE_DIR: $(SHARE_DIR)/ccache/master/linux_coverity
    LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
-    OV_PYTHON_VERSION: 3.10.10 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
+    OV_PYTHON_VERSION: 3.11.2 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
 
  steps:
  - task: UsePythonVersion@0
@@ -43,7 +43,7 @@ jobs:
      disableDownloadFromRegistry: false
      architecture: 'x64'
      githubToken: $(auth_token)
-    displayName: Setup Python 3.10
+    displayName: Setup Python 3.11
    name: setupPython
  - bash: |
      #!/bin/bash
@@ -106,10 +106,9 @@ jobs:
    inputs:
      # Coverity has too many PARSE_ERROR errors with ENABLE_FASTER_BUILD=ON. Disabling FASTER_BUILD.
      cmakeArgs: >
-        -GNinja
+        -G "Ninja Multi-Config"
        -DENABLE_CPPLINT=OFF
        -DCMAKE_VERBOSE_MAKEFILE=ON
-        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE)
        -DENABLE_FASTER_BUILD=OFF
        -DENABLE_STRICT_DEPENDENCIES=OFF
        -DBUILD_nvidia_plugin=OFF
@@ -42,11 +42,13 @@ resources:
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/openvino_contrib
+      ref: releases/2023/0
 
  - repository: testdata
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/testdata
+      ref: releases/2023/0
 
 jobs:
 - job: CUDAPlugin_Lin
@@ -127,7 +129,7 @@ jobs:
      python3 -m pip install -r /root/repos/openvino/src/bindings/python/requirements.txt &&
      cmake -GNinja \
        -DCMAKE_VERBOSE_MAKEFILE=ON \
-        -DDENABLE_CPPLINT=OFF \
+        -DENABLE_CPPLINT=OFF \
        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
        -DOPENVINO_EXTRA_MODULES=/root/repos/openvino_contrib/modules/nvidia_plugin \
        -DENABLE_INTEL_CPU=OFF \
@@ -34,7 +34,7 @@ resources:
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/testdata
-      ref: master
+      ref: releases/2023/0
 
 jobs:
 - job: Lin_Debian
@@ -262,9 +262,9 @@ jobs:
      sudo apt-get install --no-install-recommends gnupg wget -y
      wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
      sudo apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB
-      echo "deb https://apt.repos.intel.com/openvino/2022 focal main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2022.list
-      sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2022.list
-      sudo apt-get install openvino -y || exit 1
+      echo "deb https://apt.repos.intel.com/openvino/2023 ubuntu20 main" | sudo tee /etc/apt/sources.list.d/intel-openvino-2023.list
+      sudo apt-get update -o Dir::Etc::sourcelist=/etc/apt/sources.list.d/intel-openvino-2023.list
+      sudo apt-get install openvino-2023.0.1 -y || exit 1
      # install our local one and make sure the conflicts are resolved
      sudo apt-get install --no-install-recommends dpkg-dev -y
      rm -r _CPack_Packages
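The hunk above moves the Debian test from the 2022 apt suite to the 2023 one and pins an explicit package version, which makes the conflict check against the locally built .deb deterministic. A hedged spot-check of the switch (paths mirror the hunk; the exact apt-cache output is an assumption):

    # Hypothetical verification of the repository switch performed above.
    cat /etc/apt/sources.list.d/intel-openvino-2023.list   # deb https://apt.repos.intel.com/openvino/2023 ubuntu20 main
    apt-cache policy openvino-2023.0.1                     # should resolve from apt.repos.intel.com
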
@@ -4,7 +4,7 @@
 #     type: github
 #     endpoint: openvinotoolkit
 #     name: openvinotoolkit/testdata
-#     ref: master
+#     ref: releases/2023/0
 
 jobs:
 - job: Lin_lohika
@@ -56,7 +56,7 @@ jobs:
    ONNXRUNTIME_UTILS: $(REPO_DIR)/.ci/azure/ci_utils/onnxruntime
    ONNXRUNTIME_BUILD_DIR: $(ONNXRUNTIME_REPO_DIR)/build
    LD_LIBRARY_PATH: $(Agent.ToolsDirectory)/Python/$(OV_PYTHON_VERSION)/x64/lib
-    OV_PYTHON_VERSION: 3.10.10 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
+    OV_PYTHON_VERSION: 3.11.2 # The full version of Python is required for LD_LIBRARY_PATH. More details: https://github.com/microsoft/azure-pipelines-tool-lib/blob/master/docs/overview.md#tool-cache
 
  steps:
  - task: UsePythonVersion@0
@@ -66,7 +66,7 @@ jobs:
      disableDownloadFromRegistry: false
      architecture: 'x64'
      githubToken: $(auth_token)
-    displayName: Setup Python 3.10
+    displayName: Setup Python 3.11
    name: setupPython
  - bash: |
      #!/bin/bash
@@ -35,13 +35,13 @@ resources:
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/openvino_contrib
-      ref: master
+      ref: releases/2023/0
 
  - repository: testdata
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/testdata
-      ref: master
+      ref: releases/2023/0
 
 variables:
   - group: github
@@ -73,11 +73,11 @@ jobs:
  steps:
  - task: UsePythonVersion@0
    inputs:
-      versionSpec: '3.10'
+      versionSpec: '3.11.2'
      addToPath: true
      architecture: 'x64'
      githubToken: $(auth_token)
-    displayName: Setup Python 3.10
+    displayName: Setup Python 3.11
    name: setupPython
 
  - script: |
@@ -113,10 +113,6 @@ jobs:
      lfs: 'true'
      path: testdata
 
-  - task: UsePythonVersion@0
-    inputs:
-      versionSpec: '3.10'
-
  - script: |
      brew install cython
      brew install automake
@@ -127,7 +123,8 @@ jobs:
 
  - script: |
      export PATH="/usr/local/opt/cython/bin:$PATH"
-      cmake -GNinja \
+      cmake \
+        -G Ninja \
        -DENABLE_CPPLINT=OFF \
        -DCMAKE_VERBOSE_MAKEFILE=ON \
        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) \
@@ -32,13 +32,13 @@ resources:
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/openvino_contrib
-      ref: master
+      ref: releases/2023/0
 
  - repository: testdata
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/testdata
-      ref: master
+      ref: releases/2023/0
 
 jobs:
 - job: Win
@@ -73,7 +73,7 @@ jobs:
    INSTALL_DIR: $(WORK_DIR)\install_pkg
    INSTALL_TEST_DIR: $(INSTALL_DIR)\tests
    SETUPVARS: $(INSTALL_DIR)\setupvars.bat
-    PYTHON_DIR: C:\hostedtoolcache\windows\Python\3.10.7\x64
+    PYTHON_DIR: C:\hostedtoolcache\windows\Python\3.11.2\x64
    CMAKE_VERSION: 3.24.0
    CMAKE_CMD: $(WORK_DIR)\cmake-$(CMAKE_VERSION)-windows-x86_64\cmake-$(CMAKE_VERSION)-windows-x86_64\bin\cmake.exe
    OV_CMAKE_TOOLCHAIN_FILE: $(REPO_DIR)\cmake\toolchains\mt.runtime.win32.toolchain.cmake
@@ -84,26 +84,26 @@ jobs:
  - script: |
      rd /Q /S $(WORK_DIR) & mkdir $(WORK_DIR)
      rd /Q /S $(BUILD_DIR) & mkdir $(BUILD_DIR)
-      rd /Q /S $(WORK_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.10.7
-      rd /Q /S $(BUILD_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.10.7\x64
+      rd /Q /S $(WORK_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.11.2
+      rd /Q /S $(BUILD_DIR) & mkdir C:\hostedtoolcache\windows\Python\3.11.2\x64
      rd /Q /S $(BUILD_SAMPLES_DIR) & mkdir $(BUILD_SAMPLES_DIR)
      rd /Q /S $(BUILD_SAMPLES_TESTS_DIR) & mkdir $(BUILD_SAMPLES_TESTS_DIR)
    displayName: 'Make dir'
 
-  - script: curl -O https://www.python.org/ftp/python/3.10.7/python-3.10.7-amd64.exe
+  - script: curl -O https://www.python.org/ftp/python/3.11.2/python-3.11.2-amd64.exe
    displayName: 'Download Python'
    workingDirectory: $(WORK_DIR)
 
  - script: |
-      python-3.10.7-amd64.exe /passive InstallAllUsers=0 Include_launcher=0 TargetDir=C:\hostedtoolcache\windows\Python\3.10.7\x64
-      cp C:\hostedtoolcache\windows\Python\3.8.2\x64.complete C:\hostedtoolcache\windows\Python\3.10.7\x64.complete
+      python-3.11.2-amd64.exe /passive InstallAllUsers=0 Include_launcher=0 TargetDir=C:\hostedtoolcache\windows\Python\3.11.2\x64
+      cp C:\hostedtoolcache\windows\Python\3.8.2\x64.complete C:\hostedtoolcache\windows\Python\3.11.2\x64.complete
    displayName: 'Install Python'
    workingDirectory: $(WORK_DIR)
 
  - task: UsePythonVersion@0
    displayName: 'Use Python'
    inputs:
-      versionSpec: '3.10'
+      versionSpec: '3.11.2'
      disableDownloadFromRegistry: true
 
  - script: |
@@ -142,7 +142,8 @@ jobs:
      python -m pip install -r $(REPO_DIR)\src\bindings\python\wheel\requirements-dev.txt
      python -m pip install -r $(REPO_DIR)\src\bindings\python\requirements.txt
      rem For running Paddle frontend unit tests
-      python -m pip install -r $(REPO_DIR)\src\frontends\paddle\tests\requirements.txt
+      # TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
+      #python -m pip install -r $(REPO_DIR)\src\frontends\paddle\tests\requirements.txt
      rem For running ONNX frontend unit tests
      python -m pip install -r $(REPO_DIR)\src\frontends\onnx\tests\requirements.txt
      rem For running TensorFlow frontend unit tests
@@ -165,21 +166,21 @@ jobs:
 
  - script: |
      set PATH=$(WORK_DIR)\ninja-win;%PATH%
-      call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) -G "Ninja Multi-Config" ^
+      call "$(MSVS_VARS_PATH)" && $(CMAKE_CMD) ^
+        -G "Ninja Multi-Config" ^
        -DENABLE_CPPLINT=OFF ^
        -DENABLE_ONEDNN_FOR_GPU=$(CMAKE_BUILD_SHARED_LIBS) ^
        -DBUILD_SHARED_LIBS=$(CMAKE_BUILD_SHARED_LIBS) ^
        -DENABLE_FASTER_BUILD=ON ^
        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
        -DENABLE_TESTS=ON ^
        -DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^
        -DENABLE_STRICT_DEPENDENCIES=OFF ^
        -DENABLE_PYTHON=ON ^
        -DBUILD_nvidia_plugin=OFF ^
        -DCUSTOM_OPERATIONS="calculate_grid;complex_mul;fft;grid_sample;sparse_conv;sparse_conv_transpose" ^
-        -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.10.7\x64\python.exe" ^
-        -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.10.7\x64\include" ^
-        -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.10.7\x64\libs\python310.lib" ^
+        -DPYTHON_EXECUTABLE="C:\hostedtoolcache\windows\Python\3.11.2\x64\python.exe" ^
+        -DPYTHON_INCLUDE_DIR="C:\hostedtoolcache\windows\Python\3.11.2\x64\include" ^
+        -DPYTHON_LIBRARY="C:\hostedtoolcache\windows\Python\3.11.2\x64\libs\python311.lib" ^
        -DOPENVINO_EXTRA_MODULES=$(OPENVINO_CONTRIB_REPO_DIR)\modules ^
        -DCMAKE_C_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
        -DCMAKE_CXX_COMPILER:PATH="$(MSVC_COMPILER_PATH)" ^
@@ -35,6 +35,7 @@ resources:
      type: github
      endpoint: openvinotoolkit
      name: openvinotoolkit/testdata
+      ref: releases/2023/0
 
 variables:
   - group: github
@@ -65,11 +66,11 @@ jobs:
  steps:
  - task: UsePythonVersion@0
    inputs:
-      versionSpec: '3.10'
+      versionSpec: '3.11.2'
      addToPath: true
      architecture: 'x64'
      githubToken: $(auth_token)
-    displayName: Setup Python 3.10
+    displayName: Setup Python 3.11
    name: setupPython
 
  - script: |
@@ -78,6 +79,8 @@ jobs:
      python --version
      where java
      java -version
+      where cmake
+      cmake --version
      wmic computersystem get TotalPhysicalMemory
      wmic cpu list
      wmic logicaldisk get description,name
@@ -110,10 +113,11 @@ jobs:
 
  - script: |
      set PATH=$(WORK_DIR)\ninja-win;%PATH%
-      call "$(MSVS_VARS_PATH)" && cmake -GNinja ^
+      call "$(MSVS_VARS_PATH)" && cmake ^
+        -G Ninja ^
        -DENABLE_CPPLINT=OFF ^
        -DENABLE_GAPI_PREPROCESSING=OFF ^
        -DENABLE_FASTER_BUILD=ON ^
+        -DENABLE_PLUGINS_XML=ON ^
        -DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^
        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
        -DENABLE_PROFILING_ITT=ON ^
@@ -145,12 +149,11 @@ jobs:
    displayName: 'List csv files'
 
  - script: |
-      call "$(MSVS_VARS_PATH)" && cmake -G"Visual Studio 16 2019" ^
+      call "$(MSVS_VARS_PATH)" && cmake ^
+        -G "Visual Studio 16 2019" ^
        -DVERBOSE_BUILD=ON ^
        -DENABLE_CPPLINT=OFF ^
        -DENABLE_GAPI_PREPROCESSING=OFF ^
        -DENABLE_FASTER_BUILD=ON ^
        -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) ^
        -DENABLE_PROFILING_ITT=OFF ^
        -DSELECTIVE_BUILD=ON ^
        -DCMAKE_COMPILE_WARNING_AS_ERROR=ON ^
@@ -1,4 +1,4 @@
-FROM ubuntu:22.04
+FROM ubuntu:23.04
 
 LABEL version=2021.03.30.1
 
@@ -38,6 +38,7 @@ RUN apt-get update && apt-get -y --no-install-recommends install \
        python3 \
        python3-pip \
        python3-dev \
+        pybind11-dev \
        python3-virtualenv \
        cython3 \
        tox && \
@@ -71,5 +72,5 @@ RUN ninja install
 WORKDIR /openvino/src/bindings/python
 ENV OpenVINO_DIR=/openvino/dist/runtime/cmake
 ENV LD_LIBRARY_PATH=/openvino/dist/runtime/lib/intel64:/openvino/dist/runtime/3rdparty/tbb/lib
-ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.10:${PYTHONPATH}
+ENV PYTHONPATH=/openvino/bin/intel64/${BUILD_TYPE}/python_api/python3.11:${PYTHONPATH}
 CMD tox
25 changes: .github/ISSUE_TEMPLATE/bug.md (vendored)

@@ -1,5 +1,5 @@
 ---
 name: Bug
 about: Create a report to help us improve
 title: "[Bug]"
 labels: bug, support_request
@@ -8,19 +8,28 @@ assignees: ''
 ---
 
 ##### System information (version)
-<!-- Example
-- OpenVINO => 2020.4
-- Operating System / Platform => Windows 64 Bit
-- Compiler => Visual Studio 2017
-- Problem classification: Model Conversion
+<!-- Please use this template to submit a new issue and provide all the necessary information to expedite the response.
+Example
+- OpenVINO Source => Runtime / pip install / GitHub
+- OpenVINO Version => Version 2022.3 / GitHub master branch / tag 2023.0
+- Operating System / Platform => Windows 64 Bit / Ubuntu 20
+- Compiler => Visual Studio 2017 / CMake
+- Problem classification: Model Conversion / Accuracy / TensorFlow FE
 - Device use: CPU / GPU / HDDL
 - Framework: TensorFlow (if applicable)
-- Model name: ResNet50 (if applicable)
+- Model name: ResNet50 and a link to the pre-trained model (if applicable)
+Please provide us with the link to your model or attach a .zip file.
 
 -->
 
-- OpenVINO => :grey_question:
+- OpenVINO Source => :grey_question:
+- OpenVINO Version => :grey_question:
 - Operating System / Platform => :grey_question:
 - Compiler => :grey_question:
 - Problem classification => :grey_question:
 - Device use: => :grey_question:
 - Framework => :grey_question:
 - Model name => :grey_question:
 
 ##### Detailed description
 <!-- your description -->
4 changes: .github/workflows/code_style.yml (vendored)

@@ -85,8 +85,8 @@ jobs:
       - name: Install Clang dependency
         run: |
           sudo apt update
-          sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13
-          sudo apt --assume-yes install libclang-14-dev
+          sudo apt --assume-yes remove clang-7 clang-8 clang-9 clang-10 clang-11 clang-12 clang-13 clang-15
+          sudo apt --assume-yes install clang-14 libclang-14-dev
 
       - name: Install Python-based dependencies
         run: python3 -m pip install -r cmake/developer_package/ncc_naming_style/requirements_dev.txt
1 change: .gitignore (vendored)

@@ -26,6 +26,7 @@ temp/
 .repo/
 CMakeLists.txt.user
 docs/IE_PLUGIN_DG/html/
+CMakeUserPresets.json
 
 *.project
 *.cproject
3 changes: .gitmodules (vendored)

@@ -69,3 +69,6 @@
 [submodule "thirdparty/snappy"]
 	path = thirdparty/snappy
 	url = https://github.com/google/snappy.git
+[submodule "ARMComputeLibrary"]
+	path = src/plugins/intel_cpu/thirdparty/ComputeLibrary
+	url = https://github.com/ARM-software/ComputeLibrary.git
@@ -40,8 +40,6 @@ endif()
 
 # resolving dependencies for the project
 message (STATUS "CMAKE_VERSION ......................... " ${CMAKE_VERSION})
-message (STATUS "CMAKE_BINARY_DIR ...................... " ${CMAKE_BINARY_DIR})
-message (STATUS "CMAKE_SOURCE_DIR ...................... " ${CMAKE_SOURCE_DIR})
 message (STATUS "OpenVINO_SOURCE_DIR ................... " ${OpenVINO_SOURCE_DIR})
 message (STATUS "OpenVINO_BINARY_DIR ................... " ${OpenVINO_BINARY_DIR})
 message (STATUS "CMAKE_GENERATOR ....................... " ${CMAKE_GENERATOR})
@@ -66,7 +64,7 @@ endif()
 if(CMAKE_TOOLCHAIN_FILE)
   message (STATUS "CMAKE_TOOLCHAIN_FILE .................. " ${CMAKE_TOOLCHAIN_FILE})
 endif()
-if(OV_GLIBC_VERSION)
+if(NOT OV_GLIBC_VERSION VERSION_EQUAL 0.0)
   message (STATUS "GLIBC_VERSION ......................... " ${OV_GLIBC_VERSION})
 endif()
121
CONTRIBUTING.md
121
CONTRIBUTING.md
@@ -1,55 +1,88 @@
|
||||
# How to contribute to the OpenVINO repository
|
||||
# Contributing to OpenVINO
|
||||
|
||||
We welcome community contributions to OpenVINO™. Please read the following guide to learn how to find ideas for contribution, practices for good pull requests, checking your changes with our tests and more.
|
||||
## How to contribute to the OpenVINO project
|
||||
|
||||
OpenVINO™ is always looking for opportunities to improve and your contributions
|
||||
play a big role in this process. There are several ways you can make the
|
||||
product better:
|
||||
|
||||
|
||||
## Before you start contributing you should
|
||||
### Provide Feedback
|
||||
|
||||
- Make sure you agree to contribute your code under [OpenVINO™ (Apache 2.0)](https://github.com/openvinotoolkit/openvino/blob/master/LICENSE) license.
|
||||
- Figure out what you’re going to contribute. If you don’t know what you are going to work on, navigate to the [Github "Issues" tab](https://github.com/openvinotoolkit/openvino/issues). Make sure that there isn't someone working on it. In the latter case you might provide support or suggestion in the issue or in the linked pull request.
|
||||
- If you are going to fix a bug, check that it's still exists in the latest release. This can be done by building the latest master branch, and make sure that the error is still reproducible there. We do not fix bugs that only affect older non-LTS releases like 2020.2 for example (more details about [branching strategy](https://github.com/openvinotoolkit/openvino/wiki/Branches)).
|
||||
* **Report bugs / issues**
|
||||
If you experience faulty behavior in OpenVINO or its components, you can
|
||||
[create a new issue](https://github.com/openvinotoolkit/openvino/issues)
|
||||
in the GitHub issue tracker.
|
||||
|
||||
* **Propose new features / improvements**
|
||||
If you have a suggestion for improving OpenVINO or want to share your ideas, you can open a new
|
||||
[GitHub Discussion](https://github.com/openvinotoolkit/openvino/discussions).
|
||||
If your idea is already well defined, you can also create a
|
||||
[Feature Request Issue](https://github.com/openvinotoolkit/openvino/issues/new?assignees=octocat&labels=enhancement%2Cfeature&projects=&template=feature_request.yml&title=%5BFeature+Request%5D%3A+)
|
||||
In both cases, provide a detailed description, including use cases, benefits, and potential challenges.
|
||||
If your points are especially well aligned with the product vision, they will be included in the
|
||||
[development roadmap](./ROADMAP.md).
|
||||
User feedback is crucial for OpenVINO development and even if your input is not immediately prioritized,
|
||||
it may be used at a later time or undertaken by the community, regardless of the official roadmap.
|
||||
|
||||
|
||||
### Contribute Code Changes
|
||||
|
||||
* **Fix Bugs or Develop New Features**
|
||||
If you want to help improving OpenVINO, choose one of the issues reported in
|
||||
[GitHub Issue Tracker](https://github.com/openvinotoolkit/openvino/issues) and
|
||||
[create a Pull Request](./CONTRIBUTING_PR.md) addressing it. Consider one of the
|
||||
tasks listed as [first-time contributions](https://github.com/openvinotoolkit/openvino/issues/17502).
|
||||
If the feature you want to develop is more complex or not well defined by the reporter,
|
||||
it is always a good idea to [discuss it](https://github.com/openvinotoolkit/openvino/discussions)
|
||||
with OpenVINO developers first. Before creating a new PR, check if nobody is already
|
||||
working on it. In such a case, you may still help, having aligned with the other developer.
|
||||
|
||||
Importantly, always check if the change hasn't been implemented before you start working on it!
|
||||
You can build OpenVINO using the latest master branch and make sure that it still needs your
|
||||
changes. Also, do not address issues that only affect older non-LTS releases, like 2022.2.
|
||||
|
||||
* **Develop a New Device Plugin**
|
||||
Since the market of computing devices is constantly evolving, OpenVINO is always open to extending
|
||||
its support for new hardware. If you want to run inference on a device that is currently not supported,
|
||||
you can see how to develop a new plugin for it in the
|
||||
[Plugin Developer Guide](https://docs.openvino.ai/canonical/openvino_docs_ie_plugin_dg_overview.html).
|
||||
|
||||
|
||||
### Improve documentation
|
||||
|
||||
* **OpenVINO developer documentation** is contained entirely in this repository, under the
|
||||
[./docs/dev](https://github.com/openvinotoolkit/openvino/tree/master/docs/dev) folder.
|
||||
|
||||
* **User documentation** is built from several sources and published at
|
||||
[docs.openvino.ai](docs.openvino.ai), which is the recommended place for reading
|
||||
these documents. Use the files maintained in this repository only for editing purposes.
|
||||
|
||||
* The easiest way to help with documentation is to review it and provide feedback on the
|
||||
existing articles. Whether you notice a mistake, see the possibility of improving the text,
|
||||
or think more information should be added, you can reach out to any of the documentation
|
||||
contributors to discuss the potential changes.
|
||||
|
||||
You can also create a Pull Request directly, following the [editor's guide](./docs/CONTRIBUTING_DOCS.md).
|
||||
|
||||
|
||||
## "Fork & Pull Request model" for code contribution
|
||||
### Promote and Support OpenVINO
|
||||
|
||||
### [](https://github.com/openvinotoolkit/openvino/blob/master/CONTRIBUTING.md#the-instruction-in-brief)The instruction in brief
|
||||
* **Popularize OpenVINO**
|
||||
Articles, tutorials, blog posts, demos, videos, and any other involvement
|
||||
in the OpenVINO community is always a welcome contribution. If you discuss
|
||||
or present OpenVINO on various social platforms, you are raising awareness
|
||||
of the product among A.I. enthusiasts and enabling other people to discover
|
||||
the toolkit. Feel free to reach out to OpenVINO developers if you need help
|
||||
with making such community-based content.
|
||||
|
||||
- Register at GitHub. Create your fork of OpenVINO™ repository [https://github.com/openvinotoolkit/openvino](https://github.com/openvinotoolkit/openvino) (see [https://help.github.com/articles/fork-a-repo](https://help.github.com/articles/fork-a-repo) for details).
|
||||
- Install Git.
|
||||
- Set your user name and email address in a Git configuration according to GitHub account (see [https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup) for details).
|
||||
- Choose a task for yourself. It could be a bugfix or some new code.
|
||||
- Choose a base branch for your work. More details about branches and policies are here: [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches)
|
||||
- Clone your fork to your computer.
|
||||
- Create a new branch (with a meaningful name) from the base branch you chose.
|
||||
- Modify / add the code following our [Coding Style Guide](./docs/dev/coding_style.md).
|
||||
- If you want to add a new sample, please look at this [Guide for contributing to C++/C/Python IE samples](https://github.com/openvinotoolkit/openvino/wiki/SampleContribute)
|
||||
- If you want to contribute to the documentation and want to add a new guide, follow that instruction [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation)
|
||||
- Run testsuite locally:
|
||||
- execute each test binary from the artifacts directory, e.g. `<source dir>/bin/intel64/Release/ieFuncTests`
|
||||
- When you are done, make sure that your branch is to date with latest state of the branch you want to contribute to (e.g. `git fetch upstream && git merge upstream/master`), push your branch to your GitHub fork; then create a pull request from your branch to the base branch (see [https://help.github.com/articles/using-pull-requests](https://help.github.com/articles/using-pull-requests) for details).
|
||||
|
||||
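A minimal command-line sketch of the fork-and-branch workflow above. The fork URL, the `upstream` remote name, and the branch name are placeholders, not prescribed values:

```
# clone your fork and enter it
git clone https://github.com/<your-username>/openvino.git
cd openvino

# track the main repository as "upstream"
git remote add upstream https://github.com/openvinotoolkit/openvino.git

# create a work branch from the base branch (here: master)
git fetch upstream
git checkout -b my-meaningful-branch-name upstream/master

# ... modify the code and commit ...

# update the branch and push it to your fork before opening the PR
git fetch upstream && git merge upstream/master
git push origin my-meaningful-branch-name
```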
## Making a good pull request

Following these guidelines will increase the likelihood of your pull request being accepted:

- One PR – one issue.
- Make sure your change builds cleanly on your local system.
- Choose the right base branch: [Branches](https://github.com/openvinotoolkit/openvino/wiki/Branches).
- Follow the [Coding Style Guide](./docs/dev/coding_style.md) for your code.
- Update documentation using the [Documentation guidelines](https://github.com/openvinotoolkit/openvino/wiki/CodingStyleGuideLinesDocumentation) if needed.
- Cover your changes with tests.
- Add a license header at the top of new files: [C++ example](https://github.com/openvinotoolkit/openvino/blob/master/samples/cpp/classification_sample_async/main.cpp#L1-L2), [Python example](https://github.com/openvinotoolkit/openvino/blob/master/samples/python/hello_classification/hello_classification.py#L3-L4) (a sketch of such a header is shown after this list).
- Add enough information: a meaningful title, the reason why you made the commit, and a link to the issue page if one exists.
- Remove changes unrelated to the PR.
- If the work is still in progress and you want to check CI test results early, use a _Draft_ PR.
- Submit your PR and become an OpenVINO™ contributor!
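A minimal sketch of such a license header, following the pattern of the linked Python example; the copyright year range shown here is an assumption and may differ for your file:

```
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
```

In C++ sources, the same two lines are used with `//` comments instead of `#`.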
## Testing and merging pull requests

Your pull request will be automatically tested by OpenVINO™'s precommit (the testing status is reported as "green" or "red" circles in the precommit steps on the PR page). If any builders fail, you need to fix the issue. To rerun the automatic builds, just push changes to your branch on GitHub. There is no need to close the pull request and open a new one!

## Merging PR

When the reviewer accepts the pull request and the precommit shows a "green" status, the review status is set to "Approved", which signals to the OpenVINO™ maintainers that they can merge your pull request.

## License

By contributing to the OpenVINO project, you agree that your contributions will be
licensed under the terms stated in the [LICENSE](./LICENSE.md) file.
CONTRIBUTING_DOCS.md (new file, 111 lines)
@@ -0,0 +1,111 @@
# OpenVINO Documentation Guide

## Basic article structure

OpenVINO documentation is built with Sphinx, using the reStructuredText format,
which means its basic formatting rules need to be used:

### White Spaces

OpenVINO documentation is developed to be easily readable in both html and
reStructuredText. Here are some suggestions on how to make it render nicely
and improve document clarity.

### Headings (including the article title)

They are made by "underlining" text with punctuation marks (at least as
many marks as characters in the heading). We use the following convention:

```
H1
====================

H2
####################

H3
++++++++++++++++++++

H4
--------------------

H5
....................
```

### Line length

In programming, a limit of 80 characters per line is a common BKM (best known method).
It also applies to reading natural languages fairly well. For this reason, we aim at
lines of around 70 to 100 characters. The limit is not a strict rule but rather a
guideline to follow in most cases. The line breaks will not translate to html, and
rightly so, but they will make reading and editing documents in GitHub or an editor
much easier.

### Tables

Tables may be difficult to implement well in websites. For example, longer portions
of text, like descriptions, may make them difficult to read (e.g. improper cell
widths or heights). Complex tables may also be difficult to read in source files.
To prevent that, check the [table directive documentation](https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#table-directives)
and see our custom directives. Use the following guidelines for easier editing:

* For very big and complex data sets: use a list instead of a table, or remove
  the problematic content from the table and implement it differently.
* For very big and complex data sets that need to use tables: use an external
  file (e.g. PDF) and link to it.
* For medium tables that look bad in source (e.g. due to long lines of text),
  use the reStructuredText list-table format, as sketched below.
* For medium and small tables, use the reStructuredText grid or simple table formats.
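A minimal sketch of the list-table format recommended above; the table title, header, and cell values are hypothetical:

```
.. list-table:: Supported formats
   :header-rows: 1

   * - Format
     - Description
   * - reStructuredText
     - The main format of our documentation.
   * - Markdown
     - Used for developer documentation.
```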
## Cross-linking

There are several directives Sphinx uses for linking; each has its purpose and format.
Follow these guidelines for consistent results:

* Avoid absolute references to internal documents as much as possible (link to the source, not the html).
* Note that Sphinx uses the "back-tick" character and not the "inverted comma" => ` vs. '
* When a file path starting at the current directory is used, put "./" at its beginning.
* Always add a space before the opening angle bracket ("<") for target files.

Use the following formatting for different links:

* link to an external page / file
  * `` `text <url>`__ ``
  * use a double underscore (an anonymous hyperlink) for consistency

* link to an internal documentation page / file
  * `` :doc:`a docs page <relative file path>` ``
  * link to an rst or md file within our documentation, so that it renders properly in html

* link to a header on the same page
  * `` `a header in the same article <this-is-section-header-title>`__ ``
  * anchors are created automatically for all existing headers
  * such an anchor looks like the header, with minor adjustments:
    * all letters are lower case,
    * all special glyphs, like brackets, are removed,
    * spaces are replaced with hyphens
  * for example, the header "Basic article structure" gets the anchor "basic-article-structure"

* create an anchor in an article
  * `` .. _anchor-in-the-target-article: ``
  * put it before the header to which you want to link
  * see the rules for naming anchors / labels at the bottom of this article

* link to an anchor on a different page in our documentation
  * `` :ref:`the created anchor <anchor-in-the-target-article>` ``
  * link to the anchor using just its name

* anchors / labels

  Sphinx uses labels to create html anchors, which can be linked to from anywhere in the documentation.
  Although they may be put at the top of any article to make linking to it very easy, we do not use
  this approach. Every label definition starts with an underscore; the underscore is not used in links.

  Most importantly, every label needs to be globally unique, so it is always good
  practice to start a label with a clear identifier of the article it resides in.
  A minimal sketch of a label definition and a matching link is shown below.
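A minimal sketch of the label mechanism described above; the article topic, header, and label name are hypothetical:

```
.. _plugin-dev-build-steps:

Build Steps
####################

...

From any other article: see :ref:`the plugin build steps <plugin-dev-build-steps>`.
```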
CONTRIBUTING_PR.md (new file, 63 lines)
@@ -0,0 +1,63 @@
# How to Prepare a Good PR

OpenVINO is an open-source project and you can contribute to its code directly.
To do so, follow these guidelines for creating Pull Requests, so that your
changes get the highest chance of being merged.

## General Rules of a Good Pull Request

* Create your own fork of the repository and use it to create PRs.
  Avoid creating change branches in the main repository.
* Choose a proper base branch for your work and create your own branch based on it.
* Give your branches, commits, and Pull Requests meaningful names and descriptions.
  It helps to track changes later. If your changes cover a particular component,
  you can indicate it in the PR name as a prefix, for example: ``[DOCS] PR name``.
* Follow the [OpenVINO code style guide](https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/coding_style.md).
* Make your PRs small - each PR should address one issue. Remove all changes
  unrelated to the PR.
* Document your contribution! If your changes may impact how the user works with
  OpenVINO, provide the information in proper articles. You can do it yourself,
  or contact one of the OpenVINO documentation contributors to work together on
  developing the right content.
* For work in progress, or to check test results early, use a Draft PR.

## Ensure Change Quality

Your pull request will be automatically tested by OpenVINO™'s pre-commit and marked
as "green" if it is ready for merging. If any builders fail, the status is "red" and
you need to fix the issues listed in the console logs. Any change to the PR branch will
automatically trigger the checks, so you don't need to recreate the PR; just wait
for the updated results.

Regardless of the automated tests, you should ensure the quality of your changes:

* Test your changes locally:
  * Make sure to double-check your code.
  * Run tests locally to identify and fix potential issues (execute test binaries
    from the artifacts directory, e.g. ``<source dir>/bin/intel64/Release/ieFuncTests``).
* Before creating a PR, make sure that your branch is up to date with the latest
  state of the branch you want to contribute to (e.g. ``git fetch upstream &&
  git merge upstream/master``).

## Branching Policy

* The "master" branch is used for development and constitutes the base for each new release.
* Each OpenVINO release has its own branch: ``releases/<year>/<release number>``,
  for example ``releases/2023/0`` for the 2023.0 release.
* The final release each year is considered a Long Term Support (LTS) version,
  which means it remains active.
* Contributions are accepted only into active branches, which are:
  * the "master" branch for future releases,
  * the most recently published version for fixes,
  * LTS versions (for two years from their release dates).

A sketch of starting work against a release branch is shown at the end of this article.

## Need Additional Help? Check these Articles

* [How to create a fork](https://help.github.com/articles/fork-a-repo)
* [Install Git](https://git-scm.com/book/en/v2/Getting-Started-First-Time-Git-Setup)
* If you want to add a new sample, please have a look at the Guide for contributing
  to C++/C/Python IE samples, and add the license statement at the top of new files
  (see the C++ and Python examples linked from the main contribution guide).
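A minimal sketch of contributing a fix to an active release branch under the policy above. The `upstream` remote and the `releases/2023/0` branch name (derived from the stated pattern) are assumptions:

```
git fetch upstream
git checkout -b my-bugfix upstream/releases/2023/0
# ... commit the fix, push to your fork, and open a PR targeting releases/2023/0 ...
git push origin my-bugfix
```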
README.md (42 changed lines)
@@ -2,13 +2,14 @@
<img src="docs/img/openvino-logo-purple-black.png" width="400px">

[](https://github.com/openvinotoolkit/openvino/releases/tag/2022.2.0)
[](LICENSE)


[](https://badge.fury.io/py/openvino)
[](https://anaconda.org/conda-forge/openvino)
[](https://formulae.brew.sh/formula/openvino)

[](https://pepy.tech/project/openvino)

[](https://anaconda.org/conda-forge/openvino/files)
[](https://formulae.brew.sh/formula/openvino)

</div>

## Contents:
@@ -69,24 +70,24 @@ The OpenVINO™ Runtime can infer models on different hardware devices. This sec
<tbody>
<tr>
<td rowspan=2>CPU</td>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-c-p-u">Intel CPU</a></tb>
<td><b><i><a href="./src/plugins/intel_cpu">openvino_intel_cpu_plugin</a></i></b></td>
<td>Intel Xeon with Intel® Advanced Vector Extensions 2 (Intel® AVX2), Intel® Advanced Vector Extensions 512 (Intel® AVX-512), and AVX512_BF16, Intel Core Processors with Intel AVX2, Intel Atom Processors with Intel® Streaming SIMD Extensions (Intel® SSE)</td>
</tr>
<tr>
<td> <a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_ARM_CPU.html">ARM CPU</a></tb>
<td> <a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_CPU.html">ARM CPU</a></tb>
<td><b><i><a href="https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/arm_plugin">openvino_arm_cpu_plugin</a></i></b></td>
<td>Raspberry Pi™ 4 Model B, Apple® Mac mini with M1 chip, NVIDIA® Jetson Nano™, Android™ devices
</tr>
<tr>
<td>GPU</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-p-u">Intel GPU</a></td>
<td><b><i><a href="./src/plugins/intel_gpu">openvino_intel_gpu_plugin</a></i></b></td>
<td>Intel Processor Graphics, including Intel HD Graphics and Intel Iris Graphics</td>
</tr>
<tr>
<td>GNA</td>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GNA.html#doxid-openvino-docs-o-v-u-g-supported-plugins-g-n-a">Intel GNA</a></td>
<td><b><i><a href="./src/plugins/intel_gna">openvino_intel_gna_plugin</a></i></b></td>
<td>Intel Speech Enabling Developer Kit, Amazon Alexa* Premium Far-Field Developer Kit, Intel Pentium Silver J5005 Processor, Intel Pentium Silver N5000 Processor, Intel Celeron J4005 Processor, Intel Celeron J4105 Processor, Intel Celeron Processor N4100, Intel Celeron Processor N4000, Intel Core i3-8121U Processor, Intel Core i7-1065G7 Processor, Intel Core i7-1060G7 Processor, Intel Core i5-1035G4 Processor, Intel Core i5-1035G7 Processor, Intel Core i5-1035G1 Processor, Intel Core i5-1030G7 Processor, Intel Core i5-1030G4 Processor, Intel Core i3-1005G1 Processor, Intel Core i3-1000G1 Processor, Intel Core i3-1000G4 Processor</td>
</tr>
@@ -104,22 +105,22 @@ OpenVINO™ Toolkit also contains several plugins which simplify loading models
</thead>
<tbody>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_IE_DG_supported_plugins_AUTO.html#doxid-openvino-docs-i-e-d-g-supported-plugins-a-u-t-o">Auto</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_AUTO.html">Auto</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Auto plugin enables selecting Intel device for inference automatically</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Automatic_Batching.html">Auto Batch</a></td>
<td><b><i><a href="./src/plugins/auto_batch">openvino_auto_batch_plugin</a></i></b></td>
<td>Auto batch plugin performs on-the-fly automatic batching (i.e. grouping inference requests together) to improve device utilization, with no programming effort from the user</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Hetero_execution.html#doxid-openvino-docs-o-v-u-g-hetero-execution">Hetero</a></td>
<td><b><i><a href="./src/plugins/hetero">openvino_hetero_plugin</a></i></b></td>
<td>Heterogeneous execution enables automatic inference splitting between several devices</td>
</tr>
<tr>
<td><a href="https://docs.openvino.ai/nightly/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><a href="https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Running_on_multiple_devices.html#doxid-openvino-docs-o-v-u-g-running-on-multiple-devices">Multi</a></td>
<td><b><i><a href="./src/plugins/auto">openvino_auto_plugin</a></i></b></td>
<td>Multi plugin enables simultaneous inference of the same model on several devices in parallel</td>
</tr>
@@ -156,10 +157,10 @@ The list of OpenVINO tutorials:
## System requirements

The system requirements vary depending on platform and are available on dedicated pages:
- [Linux](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/nightly/openvino_docs_install_guides_installing_openvino_raspbian.html)
- [Linux](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_linux_header.html)
- [Windows](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_windows_header.html)
- [macOS](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_macos_header.html)
- [Raspbian](https://docs.openvino.ai/2023.0/openvino_docs_install_guides_installing_openvino_raspbian.html)

## How to build

@@ -188,7 +189,6 @@ Report questions, issues and suggestions, using:
* [Neural Network Compression Framework (NNCF)](https://github.com/openvinotoolkit/nncf) - a suite of advanced algorithms for model inference optimization including quantization, filter pruning, binarization and sparsity
* [OpenVINO™ Training Extensions (OTE)](https://github.com/openvinotoolkit/training_extensions) - convenient environment to train Deep Learning models and convert them using OpenVINO for optimized inference.
* [OpenVINO™ Model Server (OVMS)](https://github.com/openvinotoolkit/model_server) - a scalable, high-performance solution for serving deep learning models optimized for Intel architectures
* [DL Workbench](https://docs.openvino.ai/nightly/workbench_docs_Workbench_DG_Introduction.html) - an alternative, web-based version of OpenVINO designed to facilitate optimization and compression of pre-trained deep learning models.
* [Computer Vision Annotation Tool (CVAT)](https://github.com/opencv/cvat) - an online, interactive video and image annotation tool for computer vision purposes.
* [Dataset Management Framework (Datumaro)](https://github.com/openvinotoolkit/datumaro) - a framework and CLI tool to build, transform, and analyze datasets.

@@ -196,7 +196,7 @@ Report questions, issues and suggestions, using:
\* Other names and brands may be claimed as the property of others.

[Open Model Zoo]:https://github.com/openvinotoolkit/open_model_zoo
[OpenVINO™ Runtime]:https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/nightly/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/nightly/pot_introduction.html
[OpenVINO™ Runtime]:https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_OV_Runtime_User_Guide.html
[Model Optimizer]:https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide.html
[Post-Training Optimization Tool]:https://docs.openvino.ai/2023.0/pot_introduction.html
[Samples]:https://github.com/openvinotoolkit/openvino/tree/master/samples
@@ -53,7 +53,7 @@ if(THREADING STREQUAL "OMP")
update_deps_cache(OMP "${OMP}" "Path to OMP root folder")
debug_message(STATUS "intel_omp=" ${OMP})

ie_cpack_add_component(omp HIDDEN)
ov_cpack_add_component(omp HIDDEN)
file(GLOB_RECURSE source_list "${OMP}/*${CMAKE_SHARED_LIBRARY_SUFFIX}*")
install(FILES ${source_list}
DESTINATION ${OV_CPACK_RUNTIMEDIR}

@@ -96,11 +96,12 @@ function(ov_download_tbb)

if(WIN32 AND X86_64)
# TODO: add target_path to be platform specific as well, to avoid following if
# build oneTBB 2021.2.1 with Visual Studio 2019 (MSVC 14.21)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "oneapi-tbb-2021.2.1-win.zip"
ARCHIVE_WIN "oneapi-tbb-2021.2.2-win.zip"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "d81591673bd7d3d9454054642f8ef799e1fdddc7b4cee810a95e6130eb7323d4"
SHA256 "103b19a8af288c6a7d83ed3f0d2239c4afd0dd189fc12aad1d34b3c9e78df94b"
USE_NEW_LOCATION TRUE)
elseif(ANDROID AND X86_64)
RESOLVE_DEPENDENCY(TBB
@@ -108,7 +109,8 @@ function(ov_download_tbb)
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "f42d084224cc2d643314bd483ad180b081774608844000f132859fca3e9bf0ce")
elseif(LINUX AND X86_64)
elseif(LINUX AND X86_64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin.tgz"
TARGET_PATH "${TEMP}/tbb"
@@ -122,12 +124,37 @@ function(ov_download_tbb)
ENVIRONMENT "TBBROOT"
SHA256 "321261ff2eda6d4568a473cb883262bce77a93dac599f7bd65d2918bdee4d75b")
elseif(APPLE AND X86_64)
# build oneTBB 2021.2.1 with OS version 11.4
RESOLVE_DEPENDENCY(TBB
ARCHIVE_MAC "oneapi-tbb-2021.2.1-mac.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "c57ce4b97116cd3093c33e6dcc147fb1bbb9678d0ee6c61a506b2bfe773232cb"
USE_NEW_LOCATION TRUE)
elseif(WIN32 AND AARCH64)
# build oneTBB 2021.2.1 with Visual Studio 2022 (MSVC 14.35)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_WIN "oneapi-tbb-2021.2.1-win-arm64.zip"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "09fe7f5e7be589aa34ccd20fdfd7cad9e0afa89d1e74ecdb008a75d0af71d6e1"
USE_NEW_LOCATION TRUE)
elseif(LINUX AND AARCH64 AND OV_GLIBC_VERSION VERSION_GREATER_EQUAL 2.17)
# build oneTBB 2021.2.1 with gcc 4.8 (glibc 2.17)
RESOLVE_DEPENDENCY(TBB
ARCHIVE_LIN "oneapi-tbb-2021.2.1-lin-arm64.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "6b87194a845aa9314f3785d842e250d934e545eccc4636655c7b27c98c302c0c"
USE_NEW_LOCATION TRUE)
elseif(APPLE AND AARCH64)
# build oneTBB 2021.2.1 with export MACOSX_DEPLOYMENT_TARGET=11.0
RESOLVE_DEPENDENCY(TBB
ARCHIVE_MAC "oneapi-tbb-2021.2.1-mac-arm64.tgz"
TARGET_PATH "${TEMP}/tbb"
ENVIRONMENT "TBBROOT"
SHA256 "15d46ef19501e4315a5498af59af873dbf8180e9a3ea55253ccf7f0c0bb6f940"
USE_NEW_LOCATION TRUE)
else()
message(WARNING "Prebuilt TBB is not available on current platform")
endif()
@@ -300,8 +327,8 @@ if(ENABLE_INTEL_GNA)
GNA_LIB_DIR
libGNA_INCLUDE_DIRS
libGNA_LIBRARIES_BASE_PATH)
set(GNA_VERSION "03.05.00.1906")
set(GNA_HASH "4a5be86d9c026b0e10afac2a57fc7c99d762b30e3d506abb3a3380fbcfe2726e")
set(GNA_VERSION "03.05.00.2116")
set(GNA_HASH "960350567702bda17276ac4c060d7524fb7ce7ced785004bd861c81ff2bfe2c5")

set(FILES_TO_EXTRACT_LIST gna_${GNA_VERSION}/include)
if(WIN32)

@@ -111,8 +111,8 @@ else()
set(BIN_FOLDER "bin/${ARCH_FOLDER}")
endif()

if(CMAKE_GENERATOR MATCHES "^Ninja Multi-Config$")
# Ninja-Multi specific, see:
if(CMAKE_GENERATOR STREQUAL "Ninja Multi-Config")
# 'Ninja Multi-Config' specific, see:
# https://cmake.org/cmake/help/latest/variable/CMAKE_DEFAULT_BUILD_TYPE.html
set(CMAKE_DEFAULT_BUILD_TYPE "Release" CACHE STRING "CMake default build type")
elseif(NOT OV_GENERATOR_MULTI_CONFIG)
@@ -240,7 +240,7 @@ if(ENABLE_LTO)
LANGUAGES C CXX)

if(NOT IPO_SUPPORTED)
set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optmization" FORCE)
set(ENABLE_LTO "OFF" CACHE STRING "Enable Link Time Optimization" FORCE)
message(WARNING "IPO / LTO is not supported: ${OUTPUT_MESSAGE}")
endif()
endif()
@@ -250,8 +250,8 @@ endif()
macro(ov_install_static_lib target comp)
if(NOT BUILD_SHARED_LIBS)
get_target_property(target_type ${target} TYPE)
if(${target_type} STREQUAL "STATIC_LIBRARY")
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL FALSE)
if(target_type STREQUAL "STATIC_LIBRARY")
set_target_properties(${target} PROPERTIES EXCLUDE_FROM_ALL OFF)
endif()
install(TARGETS ${target} EXPORT OpenVINOTargets
ARCHIVE DESTINATION ${OV_CPACK_ARCHIVEDIR} COMPONENT ${comp} ${ARGN})

@@ -4,23 +4,28 @@

if(WIN32)
set(PROGRAMFILES_ENV "ProgramFiles(X86)")
file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)

set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
"${PROGRAMFILES}/Windows Kits/10/bin/x64")
# check that PROGRAMFILES_ENV is defined, because in case of cross-compilation for Windows
# we don't have such variable
if(DEFINED ENV{PROGRAMFILES_ENV})
file(TO_CMAKE_PATH $ENV{${PROGRAMFILES_ENV}} PROGRAMFILES)

message(STATUS "Trying to find apivalidator in: ")
foreach(wdk_path IN LISTS WDK_PATHS)
message(" * ${wdk_path}")
endforeach()
set(WDK_PATHS "${PROGRAMFILES}/Windows Kits/10/bin/${CMAKE_VS_WINDOWS_TARGET_PLATFORM_VERSION}/x64"
"${PROGRAMFILES}/Windows Kits/10/bin/x64")

find_host_program(ONECORE_API_VALIDATOR
NAMES apivalidator
PATHS ${WDK_PATHS}
DOC "ApiValidator for OneCore compliance")
message(STATUS "Trying to find apivalidator in: ")
foreach(wdk_path IN LISTS WDK_PATHS)
message(" * ${wdk_path}")
endforeach()

if(ONECORE_API_VALIDATOR)
message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
find_host_program(ONECORE_API_VALIDATOR
NAMES apivalidator
PATHS ${WDK_PATHS}
DOC "ApiValidator for OneCore compliance")

if(ONECORE_API_VALIDATOR)
message(STATUS "Found apivalidator: ${ONECORE_API_VALIDATOR}")
endif()
endif()
endif()


@@ -4,8 +4,13 @@

macro(enable_fuzzing)
# Enable (libFuzzer)[https://llvm.org/docs/LibFuzzer.html] if supported.
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# see https://learn.microsoft.com/en-us/cpp/build/reference/fsanitize?view=msvc-160#remarks
set(FUZZING_COMPILER_FLAGS "/fsanitize=fuzzer")
elseif(OV_COMPILER_IS_CLANG)
set(FUZZING_COMPILER_FLAGS "-fsanitize=fuzzer-no-link -fprofile-instr-generate -fcoverage-mapping")
set(FUZZING_LINKER_FLAGS "-fsanitize-coverage=trace-pc-guard -fprofile-instr-generate")
endif()

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${FUZZING_COMPILER_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${FUZZING_COMPILER_FLAGS}")
@@ -20,6 +25,10 @@ function(add_fuzzer FUZZER_EXE_NAME FUZZER_SOURCES)
add_executable(${FUZZER_EXE_NAME} ${FUZZER_SOURCES})
target_link_libraries(${FUZZER_EXE_NAME} PRIVATE fuzz-testhelper)
if(ENABLE_FUZZING)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# no extra flags are required
elseif(OV_COMPILER_IS_CLANG)
set_target_properties(${FUZZER_EXE_NAME} PROPERTIES LINK_FLAGS "-fsanitize=fuzzer")
endif()
endif()
endfunction(add_fuzzer)

@@ -12,23 +12,17 @@ include(CheckCXXCompilerFlag)
# Defines ie_c_cxx_deprecated varaible which contains C / C++ compiler flags
#
macro(ov_disable_deprecated_warnings)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(ie_c_cxx_deprecated "/Qdiag-disable:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(ie_c_cxx_deprecated "/wd4996")
elseif(OV_COMPILER_IS_CLANG)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
else()
set(ie_c_cxx_deprecated "-diag-disable=1478,1786")
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
endif()
endif()

if(NOT ie_c_cxx_deprecated)
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated "-Wno-deprecated-declarations")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()

@@ -49,24 +43,18 @@ endmacro()
# Defines ie_c_cxx_deprecated_no_errors varaible which contains C / C++ compiler flags
#
macro(ov_deprecated_no_errors)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# show 4996 only for /w4
set(ie_c_cxx_deprecated_no_errors "/wd4996")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(ie_c_cxx_deprecated_no_errors "/Qdiag-warning:1478,1786")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# show 4996 only for /w4
set(ie_c_cxx_deprecated_no_errors "/wd4996")
elseif(OV_COMPILER_IS_CLANG)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
else()
set(ie_c_cxx_deprecated_no_errors "-diag-warning=1478,1786")
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
endif()
endif()

if(NOT ie_c_cxx_deprecated_no_errors)
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(ie_c_cxx_deprecated_no_errors "-Wno-error=deprecated-declarations")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()

@@ -101,23 +89,21 @@ endmacro()
# Provides SSE4.2 compilation flags depending on an OS and a compiler
#
macro(ie_sse42_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# No such option for MSVC 2019
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxSSE4.2)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
set(${flags} -xSSE4.2)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -msse4.2)
if(EMSCRIPTEN)
list(APPEND ${flags} -msimd128)
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xSSE4.2)
else()
set(${flags} -msse4.2)
if(EMSCRIPTEN)
list(APPEND ${flags} -msimd128)
endif()
endif()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()

@@ -127,20 +113,18 @@ endmacro()
# Provides AVX2 compilation flags depending on an OS and a compiler
#
macro(ie_avx2_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX2)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxCORE-AVX2)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX2)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xCORE-AVX2)
else()
set(${flags} -mavx2 -mfma)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -mavx2 -mfma)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()

@@ -151,24 +135,18 @@ endmacro()
# depending on an OS and a compiler
#
macro(ie_avx512_optimization_flags flags)
if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX512)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if(WIN32)
set(${flags} /QxCOMMON-AVX512)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(${flags} /arch:AVX512)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
set(${flags} -xCOMMON-AVX512)
endif()
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(${flags} -mavx512f -mfma)
endif()
if(CMAKE_CXX_COMPILER_ID MATCHES "^(Clang|AppleClang)$")
set(${flags} -mavx512f -mfma)
endif()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
set(${flags} -mavx512f -mfma)
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endmacro()

@@ -265,8 +243,10 @@ endfunction()
function(ov_force_include target scope header_file)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
target_compile_options(${target} ${scope} /FI"${header_file}")
else()
elseif(OV_COMPILER_IS_CLANG OR CMAKE_COMPILER_IS_GNUCXX)
target_compile_options(${target} ${scope} -include "${header_file}")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endfunction()

@@ -318,11 +298,11 @@ set(CMAKE_VISIBILITY_INLINES_HIDDEN ON)
if(CMAKE_CL_64)
# Default char Type Is unsigned
# ie_add_compiler_flags(/J)
else()
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
ie_add_compiler_flags(-fsigned-char)
endif()

if(WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
#
# Common options / warnings enabled
#
@@ -335,16 +315,14 @@ if(WIN32)
# This option helps ensure the fewest possible hard-to-find code defects. Similar to -Wall on GNU / Clang
ie_add_compiler_flags(/W3)

if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)
# Increase Number of Sections in .Obj file
ie_add_compiler_flags(/bigobj)
# Build with multiple processes
ie_add_compiler_flags(/MP)

if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()
if(AARCH64 AND NOT MSVC_VERSION LESS 1930)
# otherwise, _ARM64_EXTENDED_INTRINSICS is defined, which defines 'mvn' macro
ie_add_compiler_flags(/D_ARM64_DISTINCT_NEON_TYPES)
endif()

# Handle Large Addresses
@@ -361,42 +339,62 @@ if(WIN32)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} /WX")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} /WX")
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /WX")
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()
endif()

#
# Disable noisy warnings
#

if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
# 161: unrecognized pragma
# 177: variable was declared but never referenced
# 556: not matched type of assigned function pointer
# 1744: field of class type without a DLL interface used in a class with a DLL interface
# 1879: unimplemented pragma ignored
# 2586: decorated name length exceeded, name was truncated
# 2651: attribute does not apply to any entity
# 3180: unrecognized OpenMP pragma
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
# 15335: was not vectorized: vectorization possible but seems inefficient. Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:161,177,556,1744,1879,2586,2651,3180,11075,15335)
endif()
# C4251 needs to have dll-interface to be used by clients of class
ie_add_compiler_flags(/wd4251)
# C4275 non dll-interface class used as base for dll-interface class
ie_add_compiler_flags(/wd4275)

#
# Debug information flags, by default CMake adds /Zi option
# but provides no way to specify CMAKE_COMPILE_PDB_NAME on root level
# In order to avoid issues with ninja we are replacing default flag instead of having two of them
# and observing warning D9025 about flag override
#

string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_DEBUG "${CMAKE_C_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG}")
string(REPLACE "/Zi" "/Z7" CMAKE_C_FLAGS_RELWITHDEBINFO "${CMAKE_C_FLAGS_RELWITHDEBINFO}")
string(REPLACE "/Zi" "/Z7" CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO}")
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel" AND WIN32)
#
# Warnings as errors
#

if(CMAKE_COMPILE_WARNING_AS_ERROR AND CMAKE_VERSION VERSION_LESS 3.24)
ie_add_compiler_flags(/Qdiag-warning:47,1740,1786)
endif()

#
# Disable noisy warnings
#

# 161: unrecognized pragma
ie_add_compiler_flags(/Qdiag-disable:161)
# 177: variable was declared but never referenced
ie_add_compiler_flags(/Qdiag-disable:177)
# 556: not matched type of assigned function pointer
ie_add_compiler_flags(/Qdiag-disable:556)
# 1744: field of class type without a DLL interface used in a class with a DLL interface
ie_add_compiler_flags(/Qdiag-disable:1744)
# 1879: unimplemented pragma ignored
ie_add_compiler_flags(/Qdiag-disable:1879)
# 2586: decorated name length exceeded, name was truncated
ie_add_compiler_flags(/Qdiag-disable:2586)
# 2651: attribute does not apply to any entity
ie_add_compiler_flags(/Qdiag-disable:2651)
# 3180: unrecognized OpenMP pragma
ie_add_compiler_flags(/Qdiag-disable:3180)
# 11075: To get full report use -Qopt-report:4 -Qopt-report-phase ipo
ie_add_compiler_flags(/Qdiag-disable:11075)
# 15335: was not vectorized: vectorization possible but seems inefficient.
# Use vector always directive or /Qvec-threshold0 to override
ie_add_compiler_flags(/Qdiag-disable:15335)
else()
#
# Common enabled warnings

@@ -5,7 +5,9 @@
include(CheckCXXCompilerFlag)

if (ENABLE_SANITIZER)
if (WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
# the flag is available since MSVC 2019 16.9
# see https://learn.microsoft.com/en-us/cpp/build/reference/fsanitize?view=msvc-160
check_cxx_compiler_flag("/fsanitize=address" SANITIZE_ADDRESS_SUPPORTED)
if (SANITIZE_ADDRESS_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /fsanitize=address")
@@ -14,21 +16,23 @@ if (ENABLE_SANITIZER)
"Please, check requirements:\n"
"https://github.com/openvinotoolkit/openvino/wiki/AddressSanitizer-and-LeakSanitizer")
endif()
else()
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=address")
check_cxx_compiler_flag("-fsanitize-recover=address" SANITIZE_RECOVER_ADDRESS_SUPPORTED)
if (SANITIZE_RECOVER_ADDRESS_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=address")
endif()
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=address")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()

if (ENABLE_UB_SANITIZER)
if (WIN32)
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows")
if(ENABLE_UB_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
message(FATAL_ERROR "UndefinedBehavior sanitizer is not supported in Windows with MSVC compiler. Please, use clang-cl or mingw")
endif()


# TODO: Remove -fno-sanitize=null as thirdparty/ocl/clhpp_headers UBSAN compatibility resolved:
# https://github.com/KhronosGroup/OpenCL-CLHPP/issues/17
# Mute -fsanitize=function Indirect call of a function through a function pointer of the wrong type.
@@ -48,43 +52,50 @@ if (ENABLE_UB_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-sanitize=function")
endif()

if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 fix
if(CMAKE_COMPILER_IS_GNUCXX)
# TODO: Remove -Wno-maybe-uninitialized after CVS-61143 is fixed
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -Wno-maybe-uninitialized")
endif()
check_cxx_compiler_flag("-fsanitize-recover=undefined" SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
if (SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
if(SANITIZE_RECOVER_UNDEFINED_SUPPORTED)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize-recover=undefined")
endif()

set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=undefined")
endif()

if (ENABLE_THREAD_SANITIZER)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=thread")
if(ENABLE_THREAD_SANITIZER)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
message(FATAL_ERROR "Thread sanitizer is not supported in Windows with MSVC compiler. Please, use clang-cl or mingw")
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fsanitize=thread")
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fsanitize=thread")
else()
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()
endif()

# common sanitizer options
if (DEFINED SANITIZER_COMPILER_FLAGS)
if(DEFINED SANITIZER_COMPILER_FLAGS)
# ensure symbols are present
if (NOT WIN32)
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /Oy-")
elseif(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG)
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -g -fno-omit-frame-pointer")
if(NOT OV_COMPILER_IS_CLANG)
if(CMAKE_COMPILER_IS_GNUCXX)
# GPU plugin tests compilation is slow with -fvar-tracking-assignments on GCC.
# Clang has no var-tracking-assignments.
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} -fno-var-tracking-assignments")
endif()
# prevent unloading libraries at runtime, so sanitizer can resolve their symbols
if (NOT CMAKE_CXX_COMPILER_ID STREQUAL "AppleClang")
if(NOT OV_COMPILER_IS_APPLECLANG)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -Wl,-z,nodelete")
if(OV_COMPILER_IS_CLANG AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 8.0)
set(SANITIZER_LINKER_FLAGS "${SANITIZER_LINKER_FLAGS} -fuse-ld=lld")
endif()
endif()

else()
set(SANITIZER_COMPILER_FLAGS "${SANITIZER_COMPILER_FLAGS} /Oy-")
message(WARNING "Unsupported CXX compiler ${CMAKE_CXX_COMPILER_ID}")
endif()

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SANITIZER_COMPILER_FLAGS}")

@@ -2,61 +2,68 @@
# SPDX-License-Identifier: Apache-2.0
#

if(UNIX)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wformat -Wformat-security")
if(CMAKE_COMPILER_IS_GNUCXX OR OV_COMPILER_IS_CLANG OR
(UNIX AND CMAKE_CXX_COMPILER_ID STREQUAL "Intel"))
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -Wformat -Wformat-security")

if (NOT ENABLE_SANITIZER)
if(EMSCRIPTEN)
# emcc does not support fortification, see:
# https://stackoverflow.com/questions/58854858/undefined-symbol-stack-chk-guard-in-libopenh264-so-when-building-ffmpeg-wit
else()
# ASan does not support fortification https://github.com/google/sanitizers/issues/247
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -D_FORTIFY_SOURCE=2")
endif()
endif()
if(NOT APPLE)
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} -pie")
endif()

if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")
if(CMAKE_COMPILER_IS_GNUCXX)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fno-strict-overflow -fno-delete-null-pointer-checks -fwrapv")
if(CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.9)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-all")
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-strong")
endif()
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -s")
# Remove all symbol table and relocation information from the executable
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -s")
endif()
if(NOT MINGW)
set(OV_LINKER_FLAGS "${OV_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
elseif(OV_COMPILER_IS_CLANG)
if(EMSCRIPTEN)
# emcc does not support fortification
# https://stackoverflow.com/questions/58854858/undefined-symbol-stack-chk-guard-in-libopenh264-so-when-building-ffmpeg-wit
else()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-all")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
if (NOT ENABLE_SANITIZER)
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -Wl,--strip-all")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -Wl,--strip-all")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} -fstack-protector-strong")
set(IE_LINKER_FLAGS "${IE_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
else()
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /sdl")
endif()
set(IE_C_CXX_FLAGS "${IE_C_CXX_FLAGS} /guard:cf")
if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
endif()
if(ENABLE_QSPECTRE)
ie_add_compiler_flags(/Qspectre)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} -fstack-protector-strong")
set(OV_LINKER_FLAGS "${OV_LINKER_FLAGS} -z noexecstack -z relro -z now")
endif()
elseif(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC")
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} /sdl /guard:cf")
endif()

set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} ${IE_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${IE_LINKER_FLAGS}")
if(ENABLE_QSPECTRE)
set(OV_C_CXX_FLAGS "${OV_C_CXX_FLAGS} /Qspectre")
endif()

if(ENABLE_INTEGRITYCHECK)
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} /INTEGRITYCHECK")
endif()

set(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} ${OV_C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} ${OV_C_CXX_FLAGS}")
set(CMAKE_SHARED_LINKER_FLAGS_RELEASE "${CMAKE_SHARED_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
set(CMAKE_MODULE_LINKER_FLAGS_RELEASE "${CMAKE_MODULE_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")
set(CMAKE_EXE_LINKER_FLAGS_RELEASE "${CMAKE_EXE_LINKER_FLAGS_RELEASE} ${OV_LINKER_FLAGS}")

unset(OV_C_CXX_FLAGS)
unset(OV_LINKER_FLAGS)
cmake/developer_package/cpplint/cpplint.py (vendored, 8 changed lines)
@@ -641,7 +641,7 @@ _repository = None
|
||||
# Files to exclude from linting. This is set by the --exclude flag.
|
||||
_excludes = None
|
||||
|
||||
# Whether to supress PrintInfo messages
|
||||
# Whether to suppress PrintInfo messages
|
||||
_quiet = False
|
||||
|
||||
# The allowed line length of files.
|
||||
@@ -752,7 +752,7 @@ def ParseNolintSuppressions(filename, raw_line, linenum, error):
|
||||
'Unknown NOLINT error category: %s' % category)
|
||||
|
||||
|
||||
def ProcessGlobalSuppresions(lines):
|
||||
def ProcessGlobalSuppressions(lines):
|
||||
"""Updates the list of global error suppressions.
|
||||
|
||||
Parses any lint directives in the file that have global effect.
|
||||
@@ -780,7 +780,7 @@ def IsErrorSuppressedByNolint(category, linenum):
|
||||
"""Returns true if the specified error category is suppressed on this line.
|
||||
|
||||
Consults the global error_suppressions map populated by
|
||||
ParseNolintSuppressions/ProcessGlobalSuppresions/ResetNolintSuppressions.
|
||||
ParseNolintSuppressions/ProcessGlobalSuppressions/ResetNolintSuppressions.
|
||||
|
||||
Args:
|
||||
category: str, the category of the error.
|
||||
@@ -6203,7 +6203,7 @@ def ProcessFileData(filename, file_extension, lines, error,
|
||||
ResetNolintSuppressions()
|
||||
|
||||
CheckForCopyright(filename, lines, error)
|
||||
ProcessGlobalSuppresions(lines)
|
||||
ProcessGlobalSuppressions(lines)
|
||||
RemoveMultiLineComments(filename, lines, error)
|
||||
clean_lines = CleansedLines(lines)
|
||||
|
||||
|
||||
@@ -74,7 +74,12 @@ ie_option (VERBOSE_BUILD "shows extra information about build" OFF)
|
||||
|
||||
ie_option (ENABLE_UNSAFE_LOCATIONS "skip check for MD5 for dependency" OFF)
|
||||
|
||||
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG;NOT WIN32" OFF)
|
||||
if(CMAKE_CXX_COMPILER_ID STREQUAL "MSVC" AND MSVC_VERSION GREATER_EQUAL 1930)
|
||||
# Visual Studio 2022: 1930-1939 = VS 17.0 (v143 toolset)
|
||||
set(_msvc_version_2022 ON)
|
||||
endif()
|
||||
|
||||
ie_dependent_option (ENABLE_FUZZING "instrument build for fuzzing" OFF "OV_COMPILER_IS_CLANG OR _msvc_version_2022" OFF)
|
||||
|
||||
#
|
||||
# Check features
|
||||
|
||||
@@ -171,7 +171,7 @@ macro(ov_add_frontend)
    endforeach()

    # Disable all warnings for generated code
-    set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED TRUE)
+    set_source_files_properties(${PROTO_SRCS} ${PROTO_HDRS} PROPERTIES COMPILE_OPTIONS -w GENERATED ON)

    # Create library
    add_library(${TARGET_NAME} ${LIBRARY_SRC} ${LIBRARY_HEADERS} ${LIBRARY_PUBLIC_HEADERS}

@@ -201,11 +201,10 @@ macro(ov_add_frontend)
        ${frontend_root_dir}/src
        ${CMAKE_CURRENT_BINARY_DIR})

-    ie_add_vs_version_file(NAME ${TARGET_NAME}
+    ov_add_vs_version_file(NAME ${TARGET_NAME}
                           FILEDESCRIPTION ${OV_FRONTEND_FILEDESCRIPTION})

-    target_link_libraries(${TARGET_NAME} PUBLIC openvino::runtime)
-    target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES})
+    target_link_libraries(${TARGET_NAME} PRIVATE ${OV_FRONTEND_LINK_LIBRARIES} PUBLIC openvino::runtime)
    ov_add_library_version(${TARGET_NAME})

    # WA for TF frontends which always require protobuf (not protobuf-lite)

@@ -216,23 +215,34 @@ macro(ov_add_frontend)

    if(proto_files)
        if(OV_FRONTEND_PROTOBUF_LITE)
-            if(NOT protobuf_lite_installed)
-                ov_install_static_lib(${Protobuf_LITE_LIBRARIES} ${OV_CPACK_COMP_CORE})
-                set(protobuf_lite_installed ON CACHE INTERNAL "" FORCE)
-            endif()
-            link_system_libraries(${TARGET_NAME} PRIVATE ${Protobuf_LITE_LIBRARIES})
+            set(protobuf_target_name libprotobuf-lite)
+            set(protobuf_install_name "protobuf_lite_installed")
        else()
-            if(NOT protobuf_installed)
-                ov_install_static_lib(${Protobuf_LIBRARIES} ${OV_CPACK_COMP_CORE})
-                set(protobuf_installed ON CACHE INTERNAL "" FORCE)
-            endif()
-            link_system_libraries(${TARGET_NAME} PRIVATE ${Protobuf_LIBRARIES})
+            set(protobuf_target_name libprotobuf)
+            set(protobuf_install_name "protobuf_installed")
        endif()
+        if(ENABLE_SYSTEM_PROTOBUF)
+            # use imported target name with namespace
+            set(protobuf_target_name "protobuf::${protobuf_target_name}")
+        endif()

-        # prptobuf generated code emits -Wsuggest-override error
+        link_system_libraries(${TARGET_NAME} PRIVATE ${protobuf_target_name})
+
+        # protobuf generated code emits -Wsuggest-override error
        if(SUGGEST_OVERRIDE_SUPPORTED)
            target_compile_options(${TARGET_NAME} PRIVATE -Wno-suggest-override)
        endif()
+
+        # install protobuf if it is not installed yet
+        if(NOT ${protobuf_install_name})
+            if(ENABLE_SYSTEM_PROTOBUF)
+                # we have to add find_package(Protobuf) to the OpenVINOConfig.cmake for static build
+                # no needs to install protobuf
+            else()
+                ov_install_static_lib(${protobuf_target_name} ${OV_CPACK_COMP_CORE})
+                set("${protobuf_install_name}" ON CACHE INTERNAL "" FORCE)
+            endif()
+        endif()
    endif()

    if(flatbuffers_schema_files)

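The diff above folds the two per-flavor install blocks into a single guard keyed by a cache variable. A minimal standalone sketch of that install-once pattern, under hypothetical names (my_static_lib, my_lib_installed):

    # Sketch only: install a static library target exactly once even if
    # the enclosing macro is expanded for several frontends; the cache
    # variable survives across expansions within one configure run.
    if(NOT my_lib_installed)
        install(TARGETS my_static_lib ARCHIVE DESTINATION lib)
        set(my_lib_installed ON CACHE INTERNAL "" FORCE)
    endif()
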
@@ -273,7 +283,7 @@ macro(ov_add_frontend)
        set(dev_component "${OV_CPACK_COMP_CORE_DEV}")

        # TODO: whether we need to do it configuralbe on Windows installer?
-        ie_cpack_add_component(${lib_component} HIDDEN)
+        ov_cpack_add_component(${lib_component} HIDDEN)

        if(OV_FRONTEND_LINKABLE_FRONTEND)
            set(export_set EXPORT OpenVINOTargets)

@@ -2,41 +2,6 @@
# SPDX-License-Identifier: Apache-2.0
#

-include(target_flags)
-
-# cmake needs to look at /etc files only when we build for Linux on Linux
-if(CMAKE_HOST_LINUX AND LINUX)
-    function(get_linux_name res_var)
-        if(EXISTS "/etc/lsb-release")
-            # linux version detection using cat /etc/lsb-release
-            file(READ "/etc/lsb-release" release_data)
-            set(name_regex "DISTRIB_ID=([^ \n]*)\n")
-            set(version_regex "DISTRIB_RELEASE=([0-9]+(\\.[0-9]+)?)")
-        else()
-            execute_process(COMMAND find -L /etc/ -maxdepth 1 -type f -name *-release -exec cat {} \;
-                            OUTPUT_VARIABLE release_data
-                            RESULT_VARIABLE result)
-            string(REPLACE "Red Hat" "CentOS" release_data "${release_data}")
-            set(name_regex "NAME=\"([^ \"\n]*).*\"\n")
-            set(version_regex "VERSION=\"([0-9]+(\\.[0-9]+)?)[^\n]*\"")
-        endif()
-
-        string(REGEX MATCH ${name_regex} name ${release_data})
-        set(os_name ${CMAKE_MATCH_1})
-
-        string(REGEX MATCH ${version_regex} version ${release_data})
-        set(os_name "${os_name} ${CMAKE_MATCH_1}")
-
-        if(os_name)
-            set(${res_var} ${os_name} PARENT_SCOPE)
-        else ()
-            set(${res_var} NOTFOUND PARENT_SCOPE)
-        endif ()
-    endfunction()
-else()
-    function(get_linux_name res_var)
-        set(${res_var} NOTFOUND PARENT_SCOPE)
-    endfunction()
-endif ()
+# TODO: remove this function: we must not have conditions for particular OS names or versions
+function(get_linux_name res_var)
+    set(${res_var} NOTFOUND PARENT_SCOPE)
+endfunction()

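After this change get_linux_name() unconditionally reports NOTFOUND, so any caller has to tolerate that. A hypothetical caller sketch (LINUX_OS_NAME is an illustrative variable name):

    # Sketch only: callers must handle the NOTFOUND result.
    get_linux_name(LINUX_OS_NAME)
    if(LINUX_OS_NAME)
        message(STATUS "Detected distribution: ${LINUX_OS_NAME}")
    else()
        message(STATUS "Linux distribution is not detected")
    endif()
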
@@ -18,7 +18,7 @@ function(ov_native_compile_external_project)
    set(multiValueArgs CMAKE_ARGS NATIVE_TARGETS)
    cmake_parse_arguments(ARG "" "${oneValueRequiredArgs};${oneValueOptionalArgs}" "${multiValueArgs}" ${ARGN})

-    if(YOCTO_AARCH64)
+    if(YOCTO_AARCH64 OR EMSCRIPTEN)
        # need to unset several variables which can set env to cross-environment
        foreach(var SDKTARGETSYSROOT CONFIG_SITE OECORE_NATIVE_SYSROOT OECORE_TARGET_SYSROOT
                    OECORE_ACLOCAL_OPTS OECORE_BASELIB OECORE_TARGET_ARCH OECORE_TARGET_OS CC CXX

@@ -31,10 +31,17 @@ function(ov_native_compile_external_project)
            endif()
        endforeach()

+        # set root path
+        if(YOCTO_AARCH64)
+            set(root_path "$ENV{OECORE_NATIVE_SYSROOT}")
+        elseif(EMSCRIPTEN)
+            set(root_path "$ENV{EMSDK}")
+        endif()
+
        # filter out PATH from yocto locations
        string(REPLACE ":" ";" custom_path "$ENV{PATH}")
        foreach(path IN LISTS custom_path)
-            if(NOT path MATCHES "^$ENV{OECORE_NATIVE_SYSROOT}")
+            if(DEFINED root_path AND NOT path MATCHES "^${root_path}")
                list(APPEND clean_path "${path}")
            endif()
        endforeach()

@@ -81,6 +88,21 @@ function(ov_native_compile_external_project)
        endif()
    endif()

+    if(compile_flags)
+        list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CXX_FLAGS=${compile_flags}" "-DCMAKE_C_FLAGS=${compile_flags}")
+    endif()
+
+    if(DEFINED CMAKE_CXX_COMPILER_LAUNCHER)
+        list(APPEND ARG_CMAKE_ARGS "-DCMAKE_CXX_COMPILER_LAUNCHER=${CMAKE_CXX_COMPILER_LAUNCHER}")
+    endif()
+    if(DEFINED CMAKE_C_COMPILER_LAUNCHER)
+        list(APPEND ARG_CMAKE_ARGS "-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}")
+    endif()
+
+    if(DEFINED CMAKE_MAKE_PROGRAM)
+        list(APPEND ARG_CMAKE_ARGS "-DCMAKE_MAKE_PROGRAM=${CMAKE_MAKE_PROGRAM}")
+    endif()
+
    ExternalProject_Add(${ARG_TARGET_NAME}
        # Directory Options
        SOURCE_DIR "${CMAKE_CURRENT_SOURCE_DIR}"

@@ -89,12 +111,9 @@ function(ov_native_compile_external_project)
        INSTALL_DIR "${ARG_NATIVE_INSTALL_DIR}"
        # Configure Step Options:
        CMAKE_COMMAND
-            ${NATIVE_CMAKE_COMMAND}
+            "${NATIVE_CMAKE_COMMAND}" -E env ${cmake_env}
+                "${NATIVE_CMAKE_COMMAND}"
        CMAKE_ARGS
-            "-DCMAKE_CXX_COMPILER_LAUNCHER=${CMAKE_CXX_COMPILER_LAUNCHER}"
-            "-DCMAKE_C_COMPILER_LAUNCHER=${CMAKE_C_COMPILER_LAUNCHER}"
-            "-DCMAKE_CXX_FLAGS=${compile_flags}"
-            "-DCMAKE_C_FLAGS=${compile_flags}"
            "-DCMAKE_POLICY_DEFAULT_CMP0069=NEW"
            "-DCMAKE_INSTALL_PREFIX=${ARG_NATIVE_INSTALL_DIR}"
            ${ARG_CMAKE_ARGS}

@@ -102,7 +121,7 @@ function(ov_native_compile_external_project)
        ${ARG_NATIVE_SOURCE_SUBDIR}
        # Build Step Options:
        BUILD_COMMAND
-            ${NATIVE_CMAKE_COMMAND}
+            "${NATIVE_CMAKE_COMMAND}"
            --build "${CMAKE_CURRENT_BINARY_DIR}/build"
            --config Release
            --parallel

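A hypothetical invocation of the helper, based on the argument names parsed above (the host protoc scenario is an assumption, not taken from this diff):

    # Sketch only: cross-compile scenario where a host-native protoc is
    # built once with the host toolchain and reused by the main build.
    ov_native_compile_external_project(
        TARGET_NAME host_protoc
        NATIVE_INSTALL_DIR "${CMAKE_CURRENT_BINARY_DIR}/host_protoc/install"
        CMAKE_ARGS "-Dprotobuf_BUILD_TESTS=OFF"
        NATIVE_TARGETS protoc)
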
@@ -27,6 +27,8 @@ elseif(PYTHON_VERSION_MINOR EQUAL 9)
    set(clang_version 12)
elseif(PYTHON_VERSION_MINOR EQUAL 10)
    set(clang_version 14)
+elseif(PYTHON_VERSION_MINOR EQUAL 11)
+    set(clang_version 14)
else()
    message(WARNING "Cannot suggest clang package for python ${PYTHON_VERSION_MAJOR}.${PYTHON_VERSION_MINOR}")
endif()

@@ -25,7 +25,7 @@ macro(ov_common_libraries_cpack_set_dirs)
    set(OV_CPACK_IE_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/inferenceengine${OpenVINO_VERSION})
    set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
    set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
    set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
+    set(OV_CPACK_LICENSESDIR licenses)

    ov_get_pyversion(pyversion)
    if(pyversion)

@@ -31,6 +31,7 @@ macro(ov_debian_cpack_set_dirs)
    set(OV_CPACK_NGRAPH_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/ngraph${OpenVINO_VERSION})
    set(OV_CPACK_OPENVINO_CMAKEDIR ${OV_CPACK_RUNTIMEDIR}/cmake/openvino${OpenVINO_VERSION})
    set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
+    set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
    set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)

    # non-native stuff

@@ -29,6 +29,7 @@ macro(ov_cpack_set_dirs)
    set(OV_CPACK_NGRAPH_CMAKEDIR runtime/cmake)
    set(OV_CPACK_OPENVINO_CMAKEDIR runtime/cmake)
    set(OV_CPACK_DOCDIR docs)
+    set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)
    set(OV_CPACK_SAMPLESDIR samples)
    set(OV_CPACK_WHEELSDIR tools)
    set(OV_CPACK_TOOLSDIR tools)

@@ -66,11 +67,11 @@ endmacro()
ov_cpack_set_dirs()

#
-# ie_cpack_add_component(NAME ...)
+# ov_cpack_add_component(NAME ...)
#
# Wraps original `cpack_add_component` and adds component to internal IE list
#
-function(ie_cpack_add_component name)
+function(ov_cpack_add_component name)
    if(NOT ${name} IN_LIST IE_CPACK_COMPONENTS_ALL)
        cpack_add_component(${name} ${ARGN})

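A hypothetical call sketch for the renamed wrapper (component name and display text are made up):

    # Sketch only: registers a CPack component once and records it in the
    # internal IE_CPACK_COMPONENTS_ALL list.
    ov_cpack_add_component(my_component
                           DISPLAY_NAME "My component"
                           DEPENDS ${OV_CPACK_COMP_CORE})
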
@@ -99,10 +100,10 @@ endif()
# if <FILE> is a symlink, we resolve it, but install file with a name of symlink
#
function(ov_install_with_name file component)
-    if((APPLE AND file MATCHES "^[^\.]+\.[0-9]+${CMAKE_SHARED_LIBRARY_SUFFIX}$") OR
-       (file MATCHES "^.*\.${CMAKE_SHARED_LIBRARY_SUFFIX}\.[0-9]+$"))
+    get_filename_component(actual_name "${file}" NAME)
+    if((APPLE AND actual_name MATCHES "^[^\.]+\.[0-9]+${CMAKE_SHARED_LIBRARY_SUFFIX}$") OR
+       (actual_name MATCHES "^.*\.${CMAKE_SHARED_LIBRARY_SUFFIX}\.[0-9]+$"))
        if(IS_SYMLINK "${file}")
-            get_filename_component(actual_name "${file}" NAME)
            get_filename_component(file "${file}" REALPATH)
            set(install_rename RENAME "${actual_name}")
        endif()

@@ -162,7 +163,7 @@ elseif(CPACK_GENERATOR STREQUAL "RPM")
    include(packaging/rpm/rpm)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
    include(packaging/nsis)
-elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW)$")
+elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
    include(packaging/common-libraries)
endif()

@@ -22,6 +22,11 @@ macro(ov_rpm_cpack_set_dirs)
    set(OV_CPACK_NGRAPH_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/ngraph${OpenVINO_VERSION})
    set(OV_CPACK_OPENVINO_CMAKEDIR ${CMAKE_INSTALL_LIBDIR}/cmake/openvino${OpenVINO_VERSION})
    set(OV_CPACK_DOCDIR ${CMAKE_INSTALL_DATADIR}/doc/openvino-${OpenVINO_VERSION})
+    set(OV_CPACK_LICENSESDIR ${OV_CPACK_DOCDIR}/licenses)

+    # TODO:
+    # 1. define python installation directories for RPM packages
+    # 2. make sure only a single version of python API can be installed at the same time (define conflicts section)
+    # set(OV_CPACK_PYTHONDIR lib/python3/dist-packages)

    ov_get_pyversion(pyversion)

@@ -4,13 +4,13 @@

cmake_policy(SET CMP0007 NEW)

-set(newContent "    <plugin name=\"${IE_DEVICE_NAME}\" location=\"${IE_PLUGIN_LIBRARY_NAME}\">")
+set(newContent "    <plugin name=\"${OV_DEVICE_NAME}\" location=\"${OV_PLUGIN_LIBRARY_NAME}\">")

-if(IE_PLUGIN_PROPERTIES)
+if(OV_PLUGIN_PROPERTIES)
    set(newContent "${newContent}
        <properties>")

-    foreach(props IN LISTS IE_PLUGIN_PROPERTIES)
+    foreach(props IN LISTS OV_PLUGIN_PROPERTIES)
        string(REPLACE ":" ";" props "${props}")

        list(GET props 0 key)

@@ -27,4 +27,4 @@ endif()
set(newContent "${newContent}
    </plugin>")

-file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
+file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")

@@ -6,11 +6,15 @@ include(CMakeParseArguments)

set(PLUGIN_FILES "" CACHE INTERNAL "")

-function(ie_plugin_get_file_name target_name library_name)
+function(ov_plugin_get_file_name target_name library_name)
    set(LIB_PREFIX "${CMAKE_SHARED_MODULE_PREFIX}")
    set(LIB_SUFFIX "${IE_BUILD_POSTFIX}${CMAKE_SHARED_MODULE_SUFFIX}")

-    set("${library_name}" "${LIB_PREFIX}${target_name}${LIB_SUFFIX}" PARENT_SCOPE)
+    get_target_property(LIB_NAME ${target_name} OUTPUT_NAME)
+    if (LIB_NAME STREQUAL "LIB_NAME-NOTFOUND")
+        set(LIB_NAME ${target_name})
+    endif()
+    set("${library_name}" "${LIB_PREFIX}${LIB_NAME}${LIB_SUFFIX}" PARENT_SCOPE)
endfunction()

if(NOT TARGET ov_plugins)

@@ -18,7 +22,7 @@ if(NOT TARGET ov_plugins)
endif()

#
-# ie_add_plugin(NAME <targetName>
+# ov_add_plugin(NAME <targetName>
#               DEVICE_NAME <deviceName>
#               [PSEUDO_DEVICE]
#               [PSEUDO_PLUGIN_FOR <actual_device>]

@@ -32,25 +36,25 @@ endif()
#               [ADD_CLANG_FORMAT]
#               )
#
-function(ie_add_plugin)
+function(ov_add_plugin)
    set(options SKIP_INSTALL PSEUDO_DEVICE ADD_CLANG_FORMAT AS_EXTENSION SKIP_REGISTRATION)
    set(oneValueArgs NAME DEVICE_NAME VERSION_DEFINES_FOR PSEUDO_PLUGIN_FOR)
    set(multiValueArgs DEFAULT_CONFIG SOURCES OBJECT_LIBRARIES CPPLINT_FILTERS)
-    cmake_parse_arguments(IE_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
+    cmake_parse_arguments(OV_PLUGIN "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})

-    if(NOT IE_PLUGIN_NAME)
+    if(NOT OV_PLUGIN_NAME)
        message(FATAL_ERROR "Please, specify plugin target name")
    endif()

-    if(NOT IE_PLUGIN_DEVICE_NAME)
-        message(FATAL_ERROR "Please, specify device name for ${IE_PLUGIN_NAME}")
+    if(NOT OV_PLUGIN_DEVICE_NAME)
+        message(FATAL_ERROR "Please, specify device name for ${OV_PLUGIN_NAME}")
    endif()

    # create and configure target

-    if(NOT IE_PLUGIN_PSEUDO_PLUGIN_FOR)
-        set(input_files ${IE_PLUGIN_SOURCES})
-        foreach(obj_lib IN LISTS IE_PLUGIN_OBJECT_LIBRARIES)
+    if(NOT OV_PLUGIN_PSEUDO_PLUGIN_FOR)
+        set(input_files ${OV_PLUGIN_SOURCES})
+        foreach(obj_lib IN LISTS OV_PLUGIN_OBJECT_LIBRARIES)
            list(APPEND input_files $<TARGET_OBJECTS:${obj_lib}>)
            add_cpplint_target(${obj_lib}_cpplint FOR_TARGETS ${obj_lib})
        endforeach()

@@ -61,120 +65,122 @@ function(ie_add_plugin)
            set(library_type STATIC)
        endif()

-        add_library(${IE_PLUGIN_NAME} ${library_type} ${input_files})
+        add_library(${OV_PLUGIN_NAME} ${library_type} ${input_files})

-        if(IE_PLUGIN_VERSION_DEFINES_FOR)
-            ov_add_version_defines(${IE_PLUGIN_VERSION_DEFINES_FOR} ${IE_PLUGIN_NAME})
+        if(OV_PLUGIN_VERSION_DEFINES_FOR)
+            ov_add_version_defines(${OV_PLUGIN_VERSION_DEFINES_FOR} ${OV_PLUGIN_NAME})
        endif()

-        target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE IMPLEMENT_INFERENCE_ENGINE_PLUGIN)
+        target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE IMPLEMENT_INFERENCE_ENGINE_PLUGIN)
        if(NOT BUILD_SHARED_LIBS)
            # to distinguish functions creating plugin objects
-            target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE
-                IE_CREATE_PLUGIN=CreatePluginEngine${IE_PLUGIN_DEVICE_NAME}
-                OV_CREATE_PLUGIN=CreatePluginEngine${IE_PLUGIN_DEVICE_NAME})
-            if(IE_PLUGIN_AS_EXTENSION)
+            target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE
+                IE_CREATE_PLUGIN=CreatePluginEngine${OV_PLUGIN_DEVICE_NAME}
+                OV_CREATE_PLUGIN=CreatePluginEngine${OV_PLUGIN_DEVICE_NAME})
+            if(OV_PLUGIN_AS_EXTENSION)
                # to distinguish functions creating extensions objects
-                target_compile_definitions(${IE_PLUGIN_NAME} PRIVATE
-                    IE_CREATE_EXTENSION=CreateExtensionShared${IE_PLUGIN_DEVICE_NAME})
+                target_compile_definitions(${OV_PLUGIN_NAME} PRIVATE
+                    IE_CREATE_EXTENSION=CreateExtensionShared${OV_PLUGIN_DEVICE_NAME})
            endif()
        endif()

-        ie_add_vs_version_file(NAME ${IE_PLUGIN_NAME}
-            FILEDESCRIPTION "OpenVINO Runtime ${IE_PLUGIN_DEVICE_NAME} device plugin library")
+        ov_add_vs_version_file(NAME ${OV_PLUGIN_NAME}
+            FILEDESCRIPTION "OpenVINO Runtime ${OV_PLUGIN_DEVICE_NAME} device plugin library")

-        target_link_libraries(${IE_PLUGIN_NAME} PRIVATE openvino::runtime openvino::runtime::dev)
+        target_link_libraries(${OV_PLUGIN_NAME} PRIVATE openvino::runtime openvino::runtime::dev)

        if(WIN32)
-            set_target_properties(${IE_PLUGIN_NAME} PROPERTIES COMPILE_PDB_NAME ${IE_PLUGIN_NAME})
+            set_target_properties(${OV_PLUGIN_NAME} PROPERTIES COMPILE_PDB_NAME ${OV_PLUGIN_NAME})
        endif()

        if(CMAKE_COMPILER_IS_GNUCXX AND NOT CMAKE_CROSSCOMPILING)
-            target_link_options(${IE_PLUGIN_NAME} PRIVATE -Wl,--unresolved-symbols=ignore-in-shared-libs)
+            target_link_options(${OV_PLUGIN_NAME} PRIVATE -Wl,--unresolved-symbols=ignore-in-shared-libs)
        endif()

        set(custom_filter "")
-        foreach(filter IN LISTS IE_PLUGIN_CPPLINT_FILTERS)
+        foreach(filter IN LISTS OV_PLUGIN_CPPLINT_FILTERS)
            string(CONCAT custom_filter "${custom_filter}" "," "${filter}")
        endforeach()

-        if (IE_PLUGIN_ADD_CLANG_FORMAT)
-            add_clang_format_target(${IE_PLUGIN_NAME}_clang FOR_TARGETS ${IE_PLUGIN_NAME})
+        if (OV_PLUGIN_ADD_CLANG_FORMAT)
+            add_clang_format_target(${OV_PLUGIN_NAME}_clang FOR_TARGETS ${OV_PLUGIN_NAME})
        else()
-            add_cpplint_target(${IE_PLUGIN_NAME}_cpplint FOR_TARGETS ${IE_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
+            add_cpplint_target(${OV_PLUGIN_NAME}_cpplint FOR_TARGETS ${OV_PLUGIN_NAME} CUSTOM_FILTERS ${custom_filter})
        endif()

-        add_dependencies(ov_plugins ${IE_PLUGIN_NAME})
+        add_dependencies(ov_plugins ${OV_PLUGIN_NAME})

        # install rules
-        if(NOT IE_PLUGIN_SKIP_INSTALL OR NOT BUILD_SHARED_LIBS)
-            string(TOLOWER "${IE_PLUGIN_DEVICE_NAME}" install_component)
+        if(NOT OV_PLUGIN_SKIP_INSTALL OR NOT BUILD_SHARED_LIBS)
+            string(TOLOWER "${OV_PLUGIN_DEVICE_NAME}" install_component)

-            if(IE_PLUGIN_PSEUDO_DEVICE)
+            if(OV_PLUGIN_PSEUDO_DEVICE)
                set(plugin_hidden HIDDEN)
            endif()
-            ie_cpack_add_component(${install_component}
-                                   DISPLAY_NAME "${IE_PLUGIN_DEVICE_NAME} runtime"
-                                   DESCRIPTION "${IE_PLUGIN_DEVICE_NAME} runtime"
+            ov_cpack_add_component(${install_component}
+                                   DISPLAY_NAME "${OV_PLUGIN_DEVICE_NAME} runtime"
+                                   DESCRIPTION "${OV_PLUGIN_DEVICE_NAME} runtime"
                                   ${plugin_hidden}
                                   DEPENDS ${OV_CPACK_COMP_CORE})

            if(BUILD_SHARED_LIBS)
-                install(TARGETS ${IE_PLUGIN_NAME}
+                install(TARGETS ${OV_PLUGIN_NAME}
                        LIBRARY DESTINATION ${OV_CPACK_PLUGINSDIR}
                        COMPONENT ${install_component})
            else()
-                ov_install_static_lib(${IE_PLUGIN_NAME} ${install_component})
+                ov_install_static_lib(${OV_PLUGIN_NAME} ${install_component})
            endif()
        endif()
    endif()

    # Enable for static build to generate correct plugins.hpp
-    if(NOT IE_PLUGIN_SKIP_REGISTRATION OR NOT BUILD_SHARED_LIBS)
+    if(NOT OV_PLUGIN_SKIP_REGISTRATION OR NOT BUILD_SHARED_LIBS)
        # check that plugin with such name is not registered
        foreach(plugin_entry IN LISTS PLUGIN_FILES)
            string(REPLACE ":" ";" plugin_entry "${plugin_entry}")
            list(GET plugin_entry -1 library_name)
            list(GET plugin_entry 0 plugin_name)
-            if(plugin_name STREQUAL "${IE_PLUGIN_DEVICE_NAME}" AND
-               NOT library_name STREQUAL ${IE_PLUGIN_NAME})
-                message(FATAL_ERROR "${IE_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
+            if(plugin_name STREQUAL "${OV_PLUGIN_DEVICE_NAME}" AND
+               NOT library_name STREQUAL ${OV_PLUGIN_NAME})
+                message(FATAL_ERROR "${OV_PLUGIN_NAME} and ${library_name} are both registered as ${plugin_name}")
            endif()
        endforeach()

        # append plugin to the list to register

-        list(APPEND PLUGIN_FILES "${IE_PLUGIN_DEVICE_NAME}:${IE_PLUGIN_NAME}")
+        list(APPEND PLUGIN_FILES "${OV_PLUGIN_DEVICE_NAME}:${OV_PLUGIN_NAME}")
        set(PLUGIN_FILES "${PLUGIN_FILES}" CACHE INTERNAL "" FORCE)
-        set(${IE_PLUGIN_DEVICE_NAME}_CONFIG "${IE_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
-        set(${IE_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${IE_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
-        set(${IE_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${IE_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
+        set(${OV_PLUGIN_DEVICE_NAME}_CONFIG "${OV_PLUGIN_DEFAULT_CONFIG}" CACHE INTERNAL "" FORCE)
+        set(${OV_PLUGIN_DEVICE_NAME}_PSEUDO_PLUGIN_FOR "${OV_PLUGIN_PSEUDO_PLUGIN_FOR}" CACHE INTERNAL "" FORCE)
+        set(${OV_PLUGIN_DEVICE_NAME}_AS_EXTENSION "${OV_PLUGIN_AS_EXTENSION}" CACHE INTERNAL "" FORCE)
    endif()
endfunction()

-function(ov_add_plugin)
-    ie_add_plugin(${ARGN})
+function(ie_add_plugin)
+    ov_add_plugin(${ARGN})
endfunction()

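A hypothetical registration sketch using the renamed entry point (target, device, and source names are made up):

    # Sketch only: declares a device plugin target, wires clang-format,
    # and records the device in the PLUGIN_FILES cache list.
    ov_add_plugin(NAME openvino_my_device_plugin
                  DEVICE_NAME MY_DEVICE
                  SOURCES src/plugin.cpp
                  VERSION_DEFINES_FOR src/plugin.cpp
                  ADD_CLANG_FORMAT)
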
#
-# ie_register_plugins_dynamic(MAIN_TARGET <main target name>)
+# ov_register_in_plugins_xml(MAIN_TARGET <main target name>)
#
-macro(ie_register_plugins_dynamic)
+# Registers plugins in plugins.xml files for dynamic plugins build
+#
+macro(ov_register_in_plugins_xml)
    set(options)
    set(oneValueArgs MAIN_TARGET)
    set(multiValueArgs)
-    cmake_parse_arguments(IE_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})
+    cmake_parse_arguments(OV_REGISTER "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN})

-    if(NOT IE_REGISTER_MAIN_TARGET)
+    if(NOT OV_REGISTER_MAIN_TARGET)
        message(FATAL_ERROR "Please, define MAIN_TARGET")
    endif()

    # Unregister <device_name>.xml files for plugins from current build tree

-    set(config_output_file "$<TARGET_FILE_DIR:${IE_REGISTER_MAIN_TARGET}>/plugins.xml")
+    set(config_output_file "$<TARGET_FILE_DIR:${OV_REGISTER_MAIN_TARGET}>/plugins.xml")

    foreach(name IN LISTS PLUGIN_FILES)
        string(REPLACE ":" ";" name "${name}")

@@ -183,12 +189,12 @@ macro(ie_register_plugins_dynamic)
            message(FATAL_ERROR "Unexpected error, please, contact developer of this script")
        endif()
        list(GET name 0 device_name)
-        add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
+        add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
            COMMAND
                "${CMAKE_COMMAND}"
-                -D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
-                -D "IE_PLUGIN_NAME=${device_name}"
-                -D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
+                -D "OV_CONFIG_OUTPUT_FILE=${config_output_file}"
+                -D "OV_PLUGIN_NAME=${device_name}"
+                -D "OV_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
                -P "${IEDevScripts_DIR}/plugins/unregister_plugin_cmake.cmake"
            COMMENT
                "Remove ${device_name} from the plugins.xml file"

@@ -209,15 +215,15 @@ macro(ie_register_plugins_dynamic)

        # create plugin file
        set(config_file_name "${CMAKE_BINARY_DIR}/plugins/${device_name}.xml")
-        ie_plugin_get_file_name(${name} library_name)
+        ov_plugin_get_file_name(${name} library_name)

-        add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
+        add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
            COMMAND
                "${CMAKE_COMMAND}"
-                -D "IE_CONFIG_OUTPUT_FILE=${config_file_name}"
-                -D "IE_DEVICE_NAME=${device_name}"
-                -D "IE_PLUGIN_PROPERTIES=${${device_name}_CONFIG}"
-                -D "IE_PLUGIN_LIBRARY_NAME=${library_name}"
+                -D "OV_CONFIG_OUTPUT_FILE=${config_file_name}"
+                -D "OV_DEVICE_NAME=${device_name}"
+                -D "OV_PLUGIN_PROPERTIES=${${device_name}_CONFIG}"
+                -D "OV_PLUGIN_LIBRARY_NAME=${library_name}"
                -P "${IEDevScripts_DIR}/plugins/create_plugin_file.cmake"
            COMMENT "Register ${device_name} device as ${library_name}"
            VERBATIM)

@@ -227,17 +233,24 @@ macro(ie_register_plugins_dynamic)

    # Combine all <device_name>.xml files into plugins.xml

-    add_custom_command(TARGET ${IE_REGISTER_MAIN_TARGET} POST_BUILD
-        COMMAND
-            "${CMAKE_COMMAND}"
-            -D "CMAKE_SHARED_MODULE_PREFIX=${CMAKE_SHARED_MODULE_PREFIX}"
-            -D "IE_CONFIG_OUTPUT_FILE=${config_output_file}"
-            -D "IE_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
-            -P "${IEDevScripts_DIR}/plugins/register_plugin_cmake.cmake"
-        COMMENT
-            "Registering plugins to plugins.xml config file"
-        VERBATIM)
+    if(ENABLE_PLUGINS_XML)
+        add_custom_command(TARGET ${OV_REGISTER_MAIN_TARGET} POST_BUILD
+            COMMAND
+                "${CMAKE_COMMAND}"
+                -D "CMAKE_SHARED_MODULE_PREFIX=${CMAKE_SHARED_MODULE_PREFIX}"
+                -D "OV_CONFIG_OUTPUT_FILE=${config_output_file}"
+                -D "OV_CONFIGS_DIR=${CMAKE_BINARY_DIR}/plugins"
+                -P "${IEDevScripts_DIR}/plugins/register_plugin_cmake.cmake"
+            COMMENT
+                "Registering plugins to plugins.xml config file"
+            VERBATIM)
+    endif()
endmacro()

+#
+# ov_register_plugins()
+#
+macro(ov_register_plugins)
+    if(BUILD_SHARED_LIBS AND ENABLE_PLUGINS_XML)
+        ov_register_in_plugins_xml(${ARGN})
+    endif()
+endmacro()

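A hypothetical call sketch: the wrapper above only emits the plugins.xml POST_BUILD steps when both shared libraries and the XML file are enabled. Using openvino as MAIN_TARGET is an assumption; adjust to the actual runtime target.

    # Sketch only: drive the registration macro from the main target.
    ov_register_plugins(MAIN_TARGET openvino)
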
@@ -245,24 +258,13 @@ endmacro()
# ie_register_plugins()
#
macro(ie_register_plugins)
-    if(BUILD_SHARED_LIBS)
-        ie_register_plugins_dynamic(${ARGN})
-    endif()
+    ov_register_plugins(${ARGN})
endmacro()

#
-# ov_register_plugins()
+# ov_target_link_plugins(<TARGET_NAME>)
#
-macro(ov_register_plugins)
-    if(BUILD_SHARED_LIBS)
-        ie_register_plugins_dynamic(${ARGN})
-    endif()
-endmacro()
-
-#
-# ie_target_link_plugins(<TARGET_NAME>)
-#
-function(ie_target_link_plugins TARGET_NAME)
+function(ov_target_link_plugins TARGET_NAME)
    if(BUILD_SHARED_LIBS)
        return()
    endif()

@@ -283,6 +285,10 @@ endfunction()
#
# ov_generate_plugins_hpp()
#
+# Generates plugins.hpp file for:
+#  - static plugins build
+#  - cases when plugins.xml file is disabled
+#
function(ov_generate_plugins_hpp)
    set(device_mapping)
    set(device_configs)

@@ -298,7 +304,7 @@ function(ov_generate_plugins_hpp)
        list(GET name 0 device_name)
        if(BUILD_SHARED_LIBS)
            list(GET name 1 library_name)
-            ie_plugin_get_file_name(${library_name} library_name)
+            ov_plugin_get_file_name(${library_name} library_name)
            list(APPEND device_mapping "${device_name}:${library_name}")
        else()
            if(${device_name}_PSEUDO_PLUGIN_FOR)

@@ -322,12 +328,16 @@ function(ov_generate_plugins_hpp)
    endforeach()

    # add plugins to libraries including ov_plugins.hpp
-    ie_target_link_plugins(openvino)
+    ov_target_link_plugins(openvino)
    if(TARGET inference_engine_s)
-        ie_target_link_plugins(inference_engine_s)
+        ov_target_link_plugins(inference_engine_s)
    endif()

-    set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/ov_plugins.hpp")
+    if(OV_GENERATOR_MULTI_CONFIG AND CMAKE_VERSION VERSION_GREATER_EQUAL 3.20)
+        set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/$<CONFIG>/ov_plugins.hpp")
+    else()
+        set(ov_plugins_hpp "${CMAKE_BINARY_DIR}/src/inference/ov_plugins.hpp")
+    endif()
    set(plugins_hpp_in "${IEDevScripts_DIR}/plugins/plugins.hpp.in")

    add_custom_command(OUTPUT "${ov_plugins_hpp}"

@@ -348,7 +358,7 @@ function(ov_generate_plugins_hpp)
        VERBATIM)

    # for some reason dependency on source files does not work
-    # so, we have to use explicit target and make it dependency for inference_engine
+    # so, we have to use explicit target and make it dependency for inference_engine_obj
    add_custom_target(_ov_plugins_hpp DEPENDS ${ov_plugins_hpp})
    add_dependencies(inference_engine_obj _ov_plugins_hpp)
endfunction()

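The generator-expression branch above is the standard way to keep per-configuration outputs apart under multi-config generators. A self-contained sketch under assumed names (gen_config_hpp, config.hpp):

    # Sketch only: with Visual Studio/Xcode generators the generated file
    # lands in a per-configuration directory; generator expressions in
    # add_custom_command(OUTPUT) require CMake 3.20 or newer.
    if(CMAKE_CONFIGURATION_TYPES)
        set(generated_hpp "${CMAKE_BINARY_DIR}/gen/$<CONFIG>/config.hpp")
    else()
        set(generated_hpp "${CMAKE_BINARY_DIR}/gen/config.hpp")
    endif()
    add_custom_command(OUTPUT "${generated_hpp}"
                       COMMAND "${CMAKE_COMMAND}" -E touch "${generated_hpp}"
                       VERBATIM)
    add_custom_target(gen_config_hpp DEPENDS "${generated_hpp}")
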
@@ -8,18 +8,18 @@ set(file_content
</plugins>
</ie>")

-if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
-    file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${file_content}")
+if(NOT EXISTS "${OV_CONFIG_OUTPUT_FILE}")
+    file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${file_content}")
endif()

# get list of plugin files
-file(GLOB plugin_files "${IE_CONFIGS_DIR}/*.xml")
+file(GLOB plugin_files "${OV_CONFIGS_DIR}/*.xml")

function(check_plugin_exists plugin_name outvar)
    set(${outvar} OFF PARENT_SCOPE)

    # check if config file already has this plugin
-    file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content REGEX "plugin .*=\"")
+    file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content REGEX "plugin .*=\"")

    foreach(line IN LISTS content)
        string(REGEX MATCH "location=\"([^\"]*)\"" location "${line}")

@@ -44,7 +44,7 @@ endforeach()

# add plugin
set(newContent "")
-file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
+file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content)

set(already_exists_in_xml OFF)
foreach(line IN LISTS content)

@@ -77,4 +77,4 @@ ${content}")
    endif()
endforeach()

-file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
+file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")

@@ -2,16 +2,16 @@
# SPDX-License-Identifier: Apache-2.0
#

-if(NOT EXISTS "${IE_CONFIG_OUTPUT_FILE}")
+if(NOT EXISTS "${OV_CONFIG_OUTPUT_FILE}")
    return()
endif()

# remove plugin file
-file(REMOVE "${IE_CONFIGS_DIR}/${IE_PLUGIN_NAME}.xml")
+file(REMOVE "${OV_CONFIGS_DIR}/${IE_PLUGIN_NAME}.xml")

# remove plugin
set(newContent "")
-file(STRINGS "${IE_CONFIG_OUTPUT_FILE}" content)
+file(STRINGS "${OV_CONFIG_OUTPUT_FILE}" content)

set(skip_plugin OFF)
foreach(line IN LISTS content)

@@ -32,4 +32,4 @@ foreach(line IN LISTS content)
    endif()
endforeach()

-file(WRITE "${IE_CONFIG_OUTPUT_FILE}" "${newContent}")
+file(WRITE "${OV_CONFIG_OUTPUT_FILE}" "${newContent}")

@@ -17,20 +17,44 @@ if(WIN32 AND CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
endif()

if(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
-    set(arch_flag X86_64)
+    set(host_arch_flag X86_64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
-    set(arch_flag X86)
+    set(host_arch_flag X86)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
-    set(arch_flag AARCH64)
+    set(host_arch_flag AARCH64)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
-    set(arch_flag ARM)
+    set(host_arch_flag ARM)
elseif(CMAKE_HOST_SYSTEM_PROCESSOR MATCHES "^riscv64$")
-    set(arch_flag RISCV64)
+    set(host_arch_flag RISCV64)
endif()

-set(HOST_${arch_flag} ON)
+set(HOST_${host_arch_flag} ON)

-macro(_ie_process_msvc_generator_platform arch_flag)
+macro(_ov_detect_arch_by_processor_type)
+    if(CMAKE_OSX_ARCHITECTURES AND APPLE)
+        if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
+            set(AARCH64 ON)
+        elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
+            set(X86_64 ON)
+        elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
+            set(UNIVERSAL2 ON)
+        else()
+            message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
+        endif()
+    elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
+        set(X86_64 ON)
+    elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*|wasm")
+        set(X86 ON)
+    elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*|armv8)")
+        set(AARCH64 ON)
+    elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
+        set(ARM ON)
+    elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
+        set(RISCV64 ON)
+    endif()
+endmacro()
+
+macro(_ov_process_msvc_generator_platform)
    # if cmake -A <ARM|ARM64|x64|Win32> is passed
    if(CMAKE_GENERATOR_PLATFORM STREQUAL "ARM64")
        set(AARCH64 ON)

@@ -41,45 +65,30 @@ macro(_ie_process_msvc_generator_platform arch_flag)
    elseif(CMAKE_GENERATOR_PLATFORM STREQUAL "Win32")
        set(X86 ON)
    else()
-        set(${arch_flag} ON)
+        _ov_detect_arch_by_processor_type()
    endif()
endmacro()

# TODO: why OpenCV is found by cmake
if(MSVC64 OR MINGW64)
-    _ie_process_msvc_generator_platform(${arch_flag})
+    _ov_process_msvc_generator_platform()
elseif(MINGW OR (MSVC AND NOT CMAKE_CROSSCOMPILING))
-    _ie_process_msvc_generator_platform(${arch_flag})
-elseif(CMAKE_OSX_ARCHITECTURES AND APPLE)
-    if(CMAKE_OSX_ARCHITECTURES STREQUAL "arm64")
-        set(AARCH64 ON)
-    elseif(CMAKE_OSX_ARCHITECTURES STREQUAL "x86_64")
-        set(X86_64 ON)
-    elseif(CMAKE_OSX_ARCHITECTURES MATCHES ".*x86_64.*" AND CMAKE_OSX_ARCHITECTURES MATCHES ".*arm64.*")
-        set(UNIVERSAL2 ON)
-    else()
-        message(FATAL_ERROR "Unsupported value: CMAKE_OSX_ARCHITECTURES = ${CMAKE_OSX_ARCHITECTURES}")
-    endif()
-elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "amd64.*|x86_64.*|AMD64.*")
-    set(X86_64 ON)
-elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "i686.*|i386.*|x86.*|amd64.*|AMD64.*")
-    set(X86 ON)
-elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm64.*|aarch64.*|AARCH64.*|ARM64.*)")
-    set(AARCH64 ON)
-elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^(arm.*|ARM.*)")
-    set(ARM ON)
-elseif(CMAKE_SYSTEM_PROCESSOR MATCHES "^riscv64$")
-    set(RISCV64 ON)
+    _ov_process_msvc_generator_platform()
+else()
+    _ov_detect_arch_by_processor_type()
endif()

if(CMAKE_SYSTEM_NAME STREQUAL "Emscripten")
    set(EMSCRIPTEN ON)
endif()

-if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN))
+if(UNIX AND NOT (APPLE OR ANDROID OR EMSCRIPTEN OR CYGWIN))
    set(LINUX ON)
endif()

-if(NOT DEFINED CMAKE_HOST_LINUX AND CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
+if(CMAKE_VERSION VERSION_LESS 3.25 AND CMAKE_HOST_SYSTEM_NAME STREQUAL "Linux")
    # the variable is available since 3.25
    # https://cmake.org/cmake/help/latest/variable/CMAKE_HOST_LINUX.html
    set(CMAKE_HOST_LINUX ON)
endif()

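Downstream scripts consume these one-hot architecture flags directly; a hypothetical sketch (the compile definitions are made-up names):

    # Sketch only: branch on the flags set by the detection macros above.
    if(X86_64)
        add_compile_definitions(MY_ARCH_X86_64)
    elseif(AARCH64)
        add_compile_definitions(MY_ARCH_ARM64)
    endif()
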
@@ -2,18 +2,18 @@
# SPDX-License-Identifier: Apache-2.0
#

-set(IE_VS_VER_FILEVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
-set(IE_VS_VER_PRODUCTVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
-set(IE_VS_VER_FILEVERSION_STR "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}.${OpenVINO_VERSION_BUILD}")
+set(OV_VS_VER_FILEVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
+set(OV_VS_VER_PRODUCTVERSION_QUAD "${OpenVINO_VERSION_MAJOR},${OpenVINO_VERSION_MINOR},${OpenVINO_VERSION_PATCH},${OpenVINO_VERSION_BUILD}")
+set(OV_VS_VER_FILEVERSION_STR "${OpenVINO_VERSION_MAJOR}.${OpenVINO_VERSION_MINOR}.${OpenVINO_VERSION_PATCH}.${OpenVINO_VERSION_BUILD}")

-set(IE_VS_VER_COMPANY_NAME_STR "Intel Corporation")
-set(IE_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")
-set(IE_VS_VER_PRODUCTNAME_STR "OpenVINO toolkit")
-set(IE_VS_VER_COPYRIGHT_STR "Copyright (C) 2018-2021, Intel Corporation")
-set(IE_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")
+set(OV_VS_VER_COMPANY_NAME_STR "Intel Corporation")
+set(OV_VS_VER_PRODUCTVERSION_STR "${CI_BUILD_NUMBER}")
+set(OV_VS_VER_PRODUCTNAME_STR "OpenVINO toolkit")
+set(OV_VS_VER_COPYRIGHT_STR "Copyright (C) 2018-2021, Intel Corporation")
+set(OV_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")

#
-# ie_add_vs_version_file(NAME <name>
+# ov_add_vs_version_file(NAME <name>
#                        FILEDESCRIPTION <file description>
#                        [COMPANY_NAME <company name>]
#                        [FILEVERSION <file version>]

@@ -25,7 +25,7 @@ set(IE_VS_VER_COMMENTS_STR "https://docs.openvino.ai/")
#                        [FILEVERSION_QUAD <name>]
#                        [PRODUCTVERSION_QUAD <name>])
#
-function(ie_add_vs_version_file)
+function(ov_add_vs_version_file)
    if(NOT WIN32 OR NOT BUILD_SHARED_LIBS)
        return()
    endif()

@@ -38,14 +38,14 @@ function(ie_add_vs_version_file)

    get_target_property(target_type ${VS_VER_NAME} TYPE)
    if(NOT target_type MATCHES "^(SHARED|MODULE)_LIBRARY$")
-        message(FATAL_ERROR "ie_add_vs_version_file can work only with dynamic libraries")
+        message(FATAL_ERROR "ov_add_vs_version_file can work only with dynamic libraries")
    endif()

    macro(_vs_ver_update_variable name)
-        if(VS_VER_NAME AND DEFINED IE_${VS_VER_NAME}_VS_VER_${name})
-            set(IE_VS_VER_${name} "${IE_${VS_VER_NAME}_VS_VER_${name}}")
+        if(VS_VER_NAME AND DEFINED OV_${VS_VER_NAME}_VS_VER_${name})
+            set(OV_VS_VER_${name} "${OV_${VS_VER_NAME}_VS_VER_${name}}")
        elseif(VS_VER_${name})
-            set(IE_VS_VER_${name} "${VS_VER_${name}}")
+            set(OV_VS_VER_${name} "${VS_VER_${name}}")
        endif()
    endmacro()

@@ -53,10 +53,10 @@ function(ie_add_vs_version_file)
    _vs_ver_update_variable(PRODUCTVERSION_QUAD)

    macro(_vs_ver_update_str_variable name)
-        if(VS_VER_NAME AND DEFINED IE_${VS_VER_NAME}_VS_VER_${name})
-            set(IE_VS_VER_${name}_STR "${IE_${VS_VER_NAME}_VS_VER_${name}}")
+        if(VS_VER_NAME AND DEFINED OV_${VS_VER_NAME}_VS_VER_${name})
+            set(OV_VS_VER_${name}_STR "${OV_${VS_VER_NAME}_VS_VER_${name}}")
        elseif(VS_VER_${name})
-            set(IE_VS_VER_${name}_STR "${VS_VER_${name}}")
+            set(OV_VS_VER_${name}_STR "${VS_VER_${name}}")
        endif()
    endmacro()

@@ -69,8 +69,8 @@ function(ie_add_vs_version_file)
    _vs_ver_update_str_variable(PRODUCTVERSION)
    _vs_ver_update_str_variable(COMMENTS)

-    set(IE_VS_VER_ORIGINALFILENAME_STR "${CMAKE_SHARED_LIBRARY_PREFIX}${VS_VER_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
-    set(IE_VS_VER_INTERNALNAME_STR ${VS_VER_NAME})
+    set(OV_VS_VER_ORIGINALFILENAME_STR "${CMAKE_SHARED_LIBRARY_PREFIX}${VS_VER_NAME}${CMAKE_SHARED_LIBRARY_SUFFIX}")
+    set(OV_VS_VER_INTERNALNAME_STR ${VS_VER_NAME})

    set(vs_version_output "${CMAKE_CURRENT_BINARY_DIR}/vs_version.rc")
    configure_file("${IEDevScripts_DIR}/vs_version/vs_version.rc.in" "${vs_version_output}" @ONLY)

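A hypothetical call sketch for the renamed helper (my_library is an illustrative target name):

    # Sketch only: attaches a generated vs_version.rc to a shared library
    # target on Windows; a no-op for static builds and other platforms.
    ov_add_vs_version_file(NAME my_library
                           FILEDESCRIPTION "OpenVINO sample library")
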
@@ -1,8 +1,8 @@
#include <winver.h>

VS_VERSION_INFO VERSIONINFO
-FILEVERSION @IE_VS_VER_FILEVERSION_QUAD@
-PRODUCTVERSION @IE_VS_VER_PRODUCTVERSION_QUAD@
+FILEVERSION @OV_VS_VER_FILEVERSION_QUAD@
+PRODUCTVERSION @OV_VS_VER_PRODUCTVERSION_QUAD@
FILEFLAGSMASK VS_FFI_FILEFLAGSMASK
#ifdef _DEBUG
FILEFLAGS 1

@@ -17,15 +17,15 @@ BEGIN
    BEGIN
        BLOCK "040904E4"
        BEGIN
-            VALUE "CompanyName", "@IE_VS_VER_COMPANY_NAME_STR@\0"
-            VALUE "FileDescription", "@IE_VS_VER_FILEDESCRIPTION_STR@\0"
-            VALUE "FileVersion", "@IE_VS_VER_FILEVERSION_STR@\0"
-            VALUE "InternalName", "@IE_VS_VER_INTERNALNAME_STR@\0"
-            VALUE "LegalCopyright", "@IE_VS_VER_COPYRIGHT_STR@\0"
-            VALUE "OriginalFilename", "@IE_VS_VER_ORIGINALFILENAME_STR@\0"
-            VALUE "ProductName", "@IE_VS_VER_PRODUCTNAME_STR@\0"
-            VALUE "ProductVersion", "@IE_VS_VER_PRODUCTVERSION_STR@\0"
-            VALUE "Comments", "@IE_VS_VER_COMMENTS_STR@\0"
+            VALUE "CompanyName", "@OV_VS_VER_COMPANY_NAME_STR@\0"
+            VALUE "FileDescription", "@OV_VS_VER_FILEDESCRIPTION_STR@\0"
+            VALUE "FileVersion", "@OV_VS_VER_FILEVERSION_STR@\0"
+            VALUE "InternalName", "@OV_VS_VER_INTERNALNAME_STR@\0"
+            VALUE "LegalCopyright", "@OV_VS_VER_COPYRIGHT_STR@\0"
+            VALUE "OriginalFilename", "@OV_VS_VER_ORIGINALFILENAME_STR@\0"
+            VALUE "ProductName", "@OV_VS_VER_PRODUCTNAME_STR@\0"
+            VALUE "ProductVersion", "@OV_VS_VER_PRODUCTVERSION_STR@\0"
+            VALUE "Comments", "@OV_VS_VER_COMMENTS_STR@\0"
        END
    END
    BLOCK "VarFileInfo"

@@ -40,6 +40,7 @@ function(ieTargetLinkWholeArchive targetName)
            "-Wl,-noall_load"
        )
    else()
+        # non-Apple Clang and GCC / MinGW
        list(APPEND libs
            "-Wl,--whole-archive"
            ${staticLib}

@@ -6,7 +6,9 @@
# Common cmake options
#

-ie_dependent_option (ENABLE_INTEL_CPU "CPU plugin for OpenVINO Runtime" ON "RISCV64 OR X86 OR X86_64" OFF)
+ie_dependent_option (ENABLE_INTEL_CPU "CPU plugin for OpenVINO Runtime" ON "RISCV64 OR X86 OR X86_64 OR AARCH64 OR ARM" OFF)
+
+ie_dependent_option (ENABLE_ARM_COMPUTE_CMAKE "Enable ARM Compute build via cmake" OFF "ENABLE_INTEL_CPU" OFF)

ie_option (ENABLE_TESTS "unit, behavior and functional tests" OFF)

@@ -20,7 +22,7 @@ else()
    set(ENABLE_INTEL_GPU_DEFAULT OFF)
endif()

-ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT MINGW;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)
+ie_dependent_option (ENABLE_INTEL_GPU "GPU OpenCL-based plugin for OpenVINO Runtime" ${ENABLE_INTEL_GPU_DEFAULT} "X86_64 OR AARCH64;NOT APPLE;NOT WINDOWS_STORE;NOT WINDOWS_PHONE" OFF)

if (ANDROID OR (CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_LESS 7.0))
    # oneDNN doesn't support old compilers and android builds for now, so we'll

@@ -32,6 +34,10 @@ endif()

ie_dependent_option (ENABLE_ONEDNN_FOR_GPU "Enable oneDNN with GPU support" ${ENABLE_ONEDNN_FOR_GPU_DEFAULT} "ENABLE_INTEL_GPU" OFF)

+ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
+ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_CPU" OFF)
+ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS;ENABLE_INTEL_GPU" OFF)
+
ie_option (ENABLE_PROFILING_ITT "Build with ITT tracing. Optionally configure pre-built ittnotify library though INTEL_VTUNE_DIR variable." OFF)

ie_option_enum(ENABLE_PROFILING_FILTER "Enable or disable ITT counter groups.\

@@ -79,41 +85,45 @@ ie_dependent_option (ENABLE_TBBBIND_2_5 "Enable TBBBind_2_5 static usage in Open
ie_dependent_option (ENABLE_INTEL_GNA "GNA support for OpenVINO Runtime" ON
        "NOT APPLE;NOT ANDROID;X86_64;CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 5.4" OFF)

-ie_option (ENABLE_INTEL_GNA_DEBUG "GNA debug build" OFF)
-
+ie_dependent_option (ENABLE_INTEL_GNA_DEBUG "GNA debug build" OFF "ENABLE_INTEL_GNA" OFF)
+ie_dependent_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF "ENABLE_INTEL_GNA" OFF)
ie_dependent_option (ENABLE_IR_V7_READER "Enables IR v7 reader" ${BUILD_SHARED_LIBS} "ENABLE_TESTS;ENABLE_INTEL_GNA" OFF)

-ie_option (ENABLE_GAPI_PREPROCESSING "Enables G-API preprocessing" ON)
+ie_dependent_option (ENABLE_GAPI_PREPROCESSING "Enables G-API preprocessing" ON "NOT MINGW64" OFF)

ie_option (ENABLE_MULTI "Enables MULTI Device Plugin" ON)
ie_option (ENABLE_AUTO "Enables AUTO Device Plugin" ON)

ie_option (ENABLE_AUTO_BATCH "Enables Auto-Batching Plugin" ON)

ie_option (ENABLE_HETERO "Enables Hetero Device Plugin" ON)

ie_option (ENABLE_TEMPLATE "Enable template plugin" ON)

-ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file or not" OFF "NOT BUILD_SHARED_LIBS" OFF)
+ie_dependent_option (ENABLE_PLUGINS_XML "Generate plugins.xml configuration file or not" OFF "BUILD_SHARED_LIBS" OFF)

ie_dependent_option (GAPI_TEST_PERF "if GAPI unit tests should examine performance" OFF "ENABLE_TESTS;ENABLE_GAPI_PREPROCESSING" OFF)

ie_dependent_option (ENABLE_DATA "fetch models from testdata repo" ON "ENABLE_FUNCTIONAL_TESTS;NOT ANDROID" OFF)

ie_dependent_option (ENABLE_BEH_TESTS "tests oriented to check OpenVINO Runtime API correctness" ON "ENABLE_TESTS" OFF)

ie_dependent_option (ENABLE_FUNCTIONAL_TESTS "functional tests" ON "ENABLE_TESTS" OFF)

ie_option (ENABLE_SAMPLES "console samples are part of OpenVINO Runtime package" ON)

ie_option (ENABLE_OPENCV "enables custom OpenCV download" OFF)

-ie_option (ENABLE_V7_SERIALIZE "enables serialization to IR v7" OFF)
-
set(OPENVINO_EXTRA_MODULES "" CACHE STRING "Extra paths for extra modules to include into OpenVINO build")

ie_dependent_option(ENABLE_TBB_RELEASE_ONLY "Only Release TBB libraries are linked to the OpenVINO Runtime binaries" ON "THREADING MATCHES TBB;LINUX" OFF)

+find_host_package(PythonInterp 3 QUIET)
+ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
+ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
+ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
+ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
+ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
+ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
+ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
+    "ENABLE_OV_TF_FRONTEND" ON)

if(CMAKE_HOST_LINUX AND LINUX)
    # Debian packages are enabled on Ubuntu systems
    # so, system TBB / pugixml / OpenCL can be tried for usage

@@ -129,40 +139,37 @@ else()
    set(ENABLE_SYSTEM_TBB_DEFAULT ${ENABLE_SYSTEM_LIBS_DEFAULT})
endif()

+if(BUILD_SHARED_LIBS)
+    set(ENABLE_SYSTEM_PUGIXML_DEFAULT ${ENABLE_SYSTEM_LIBS_DEFAULT})
+else()
+    # for static libraries case libpugixml.a must be compiled with -fPIC
+    # but we still need an ability to compile with system PugiXML and BUILD_SHARED_LIBS
+    # for Conan case where everything is compiled statically
+    set(ENABLE_SYSTEM_PUGIXML_DEFAULT OFF)
+endif()
+
# users wants to use his own TBB version, specific either via env vars or cmake options
if(DEFINED ENV{TBBROOT} OR DEFINED ENV{TBB_DIR} OR DEFINED TBB_DIR OR DEFINED TBBROOT)
    set(ENABLE_SYSTEM_TBB_DEFAULT OFF)
endif()

-# for static libraries case libpugixml.a must be compiled with -fPIC
-ie_dependent_option (ENABLE_SYSTEM_PUGIXML "use the system copy of pugixml" ${ENABLE_SYSTEM_LIBS_DEFAULT} "BUILD_SHARED_LIBS" OFF)
-
-ie_dependent_option (ENABLE_SYSTEM_TBB "use the system version of TBB" ${ENABLE_SYSTEM_TBB_DEFAULT} "THREADING MATCHES TBB" OFF)
-
-ie_dependent_option (ENABLE_SYSTEM_OPENCL "Use the system version of OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT} "BUILD_SHARED_LIBS;ENABLE_INTEL_GPU" OFF)
-
-ie_option (ENABLE_DEBUG_CAPS "enable OpenVINO debug capabilities at runtime" OFF)
-
-ie_dependent_option (ENABLE_GPU_DEBUG_CAPS "enable GPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)
-
-ie_dependent_option (ENABLE_CPU_DEBUG_CAPS "enable CPU debug capabilities at runtime" ON "ENABLE_DEBUG_CAPS" OFF)
-
-find_host_package(PythonInterp 3 QUIET)
-ie_option(ENABLE_OV_ONNX_FRONTEND "Enable ONNX FrontEnd" ${PYTHONINTERP_FOUND})
-ie_option(ENABLE_OV_PADDLE_FRONTEND "Enable PaddlePaddle FrontEnd" ON)
-ie_option(ENABLE_OV_IR_FRONTEND "Enable IR FrontEnd" ON)
-ie_option(ENABLE_OV_PYTORCH_FRONTEND "Enable PyTorch FrontEnd" ON)
-ie_option(ENABLE_OV_TF_FRONTEND "Enable TensorFlow FrontEnd" ON)
-ie_option(ENABLE_OV_TF_LITE_FRONTEND "Enable TensorFlow Lite FrontEnd" ON)
-
-ie_dependent_option(ENABLE_SNAPPY_COMPRESSION "Enables compression support for TF FE" ON
-    "ENABLE_OV_TF_FRONTEND" ON)
-ie_dependent_option(ENABLE_SYSTEM_PROTOBUF "Enables use of system protobuf" OFF
-    "ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND;BUILD_SHARED_LIBS" OFF)
-ie_dependent_option(ENABLE_SYSTEM_SNAPPY "Enables use of system version of snappy" OFF "ENABLE_SNAPPY_COMPRESSION;BUILD_SHARED_LIBS" OFF)
+ie_dependent_option (ENABLE_SYSTEM_TBB "Enables use of system TBB" ${ENABLE_SYSTEM_TBB_DEFAULT}
+    "THREADING MATCHES TBB" OFF)
+# TODO: turn it off by default during the work on cross-os distribution, because pugixml is not
+# available out of box on all systems (like RHEL, UBI)
+ie_option (ENABLE_SYSTEM_PUGIXML "Enables use of system PugiXML" ${ENABLE_SYSTEM_PUGIXML_DEFAULT})
+# the option is on by default, because we use only flatc compiler and don't use any libraries
+ie_dependent_option(ENABLE_SYSTEM_FLATBUFFERS "Enables use of system flatbuffers" ON
+    "ENABLE_OV_TF_LITE_FRONTEND" OFF)
+ie_dependent_option (ENABLE_SYSTEM_OPENCL "Enables use of system OpenCL" ${ENABLE_SYSTEM_LIBS_DEFAULT}
+    "ENABLE_INTEL_GPU" OFF)
+# the option is turned off by default, because we compile our own static version of protobuf
+# with LTO and -fPIC options, while system one does not have such flags
+ie_dependent_option (ENABLE_SYSTEM_PROTOBUF "Enables use of system Protobuf" OFF
+    "ENABLE_OV_ONNX_FRONTEND OR ENABLE_OV_PADDLE_FRONTEND OR ENABLE_OV_TF_FRONTEND" OFF)
+# the option is turned off by default, because we don't want to have a dependency on libsnappy.so
+ie_dependent_option (ENABLE_SYSTEM_SNAPPY "Enables use of system version of Snappy" OFF
+    "ENABLE_SNAPPY_COMPRESSION" OFF)

ie_option(ENABLE_OPENVINO_DEBUG "Enable output for OPENVINO_DEBUG statements" OFF)

@@ -10,8 +10,8 @@ macro(ov_cpack_settings)
    set(cpack_components_all ${CPACK_COMPONENTS_ALL})
    unset(CPACK_COMPONENTS_ALL)
    foreach(item IN LISTS cpack_components_all)
-        # filter out some components, which are not needed to be wrapped to conda-forge | brew
-        if(# python is not a part of conda | brew
+        # filter out some components, which are not needed to be wrapped to conda-forge | brew | conan
+        if(# python is not a part of conda | brew | conan
            NOT item MATCHES "^${OV_CPACK_COMP_PYTHON_OPENVINO}_python.*" AND
            # python wheels are not needed to be wrapped by conda | brew packages
            NOT item STREQUAL OV_CPACK_COMP_PYTHON_WHEELS AND

@@ -93,7 +93,7 @@ macro(ov_cpack_settings)
    # - 2022.1.0 is the last public release with debian packages from Intel install team
    # - 2022.1.1, 2022.2 do not have debian packages enabled, distributed only as archives
    # - 2022.3 is the first release where Debian updated packages are introduced, others 2022.3.X are LTS
-    2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
+    2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0 2023.0.1
    )

#

@@ -156,17 +156,20 @@ macro(ov_cpack_settings)
        set(auto_copyright "generic")
    endif()

-    # intel-cpu
-    if(ENABLE_INTEL_CPU OR DEFINED openvino_arm_cpu_plugin_SOURCE_DIR)
-        if(ENABLE_INTEL_CPU)
+    # cpu
+    if(ENABLE_INTEL_CPU)
+        if(ARM OR AARCH64)
+            set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-arm-cpu-plugin-${cpack_name_ver}")
+            set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU plugin")
+            set(cpu_copyright "arm_cpu")
+        elseif(X86 OR X86_64)
+            set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
+            set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU plugin")
+            set(cpu_copyright "generic")
        else()
-            set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM CPU")
-            set(cpu_copyright "arm_cpu")
+            message(FATAL_ERROR "Unsupported CPU architecture: ${CMAKE_SYSTEM_PROCESSOR}")
        endif()
        set(CPACK_COMPONENT_CPU_DEPENDS "${OV_CPACK_COMP_CORE}")
-        set(CPACK_DEBIAN_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
        set(CPACK_DEBIAN_CPU_PACKAGE_CONTROL_EXTRA "${def_postinst};${def_postrm}")
        _ov_add_plugin(cpu OFF)
    endif()

@@ -6,7 +6,7 @@ if(CPACK_GENERATOR STREQUAL "DEB")
    include(cmake/packaging/debian.cmake)
elseif(CPACK_GENERATOR STREQUAL "RPM")
    include(cmake/packaging/rpm.cmake)
-elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW)$")
+elseif(CPACK_GENERATOR MATCHES "^(CONDA-FORGE|BREW|CONAN)$")
    include(cmake/packaging/common-libraries.cmake)
elseif(CPACK_GENERATOR STREQUAL "NSIS")
    include(cmake/packaging/nsis.cmake)

@@ -79,7 +79,7 @@ macro(ov_cpack_settings)
    # - 2022.1.0 is the last public release with rpm packages from Intel install team
    # - 2022.1.1, 2022.2 do not have rpm packages enabled, distributed only as archives
    # - 2022.3 is the first release where RPM updated packages are introduced, others 2022.3.X are LTS
-    2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5
+    2022.3.0 2022.3.1 2022.3.2 2022.3.3 2022.3.4 2022.3.5 2023.0.0 2023.0.1
    )

find_host_program(rpmlint_PROGRAM NAMES rpmlint DOC "Path to rpmlint")

@@ -156,17 +156,20 @@ macro(ov_cpack_settings)
        set(auto_copyright "generic")
    endif()

-    # intel-cpu
-    if(ENABLE_INTEL_CPU OR DEFINED openvino_arm_cpu_plugin_SOURCE_DIR)
-        if(ENABLE_INTEL_CPU)
+    # cpu
+    if(ENABLE_INTEL_CPU)
+        if(ARM OR AARCH64)
+            set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-arm-cpu-plugin-${cpack_name_ver}")
+            set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM® CPU plugin")
+            set(cpu_copyright "arm_cpu")
+        elseif(X86 OR X86_64)
+            set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
+            set(CPACK_COMPONENT_CPU_DESCRIPTION "Intel® CPU")
+            set(cpu_copyright "generic")
        else()
-            set(CPACK_COMPONENT_CPU_DESCRIPTION "ARM CPU")
-            set(cpu_copyright "arm_cpu")
+            message(FATAL_ERROR "Unsupported CPU architecture: ${CMAKE_SYSTEM_PROCESSOR}")
        endif()
        set(CPACK_RPM_CPU_PACKAGE_REQUIRES "${core_package}")
-        set(CPACK_RPM_CPU_PACKAGE_NAME "libopenvino-intel-cpu-plugin-${cpack_name_ver}")
        _ov_add_package(plugin_packages cpu)
    endif()

@@ -142,6 +142,14 @@ if(ENABLE_SYSTEM_PUGIXML)
    endif()
endif()

+set(_IE_nlohmann_json_FOUND "@nlohmann_json_FOUND@")
+if(_IE_nlohmann_json_FOUND)
+    find_dependency(nlohmann_json)
+    set_target_properties(nlohmann_json::nlohmann_json PROPERTIES IMPORTED_GLOBAL ON)
+    add_library(IE::nlohmann_json ALIAS nlohmann_json::nlohmann_json)
+endif()
+unset(_IE_nlohmann_json_FOUND)
+
# inherit OpenCV from main IE project if enabled
if ("@OpenCV_FOUND@")
    load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)

@@ -85,9 +85,9 @@
#
# `OpenVINO_VERSION_MAJOR`
#  Major version component
#
#
# `OpenVINO_VERSION_MINOR`
-#  minor version component
+#  Minor version component
#
# `OpenVINO_VERSION_PATCH`
#  Patch version component

@@ -138,7 +138,7 @@ endmacro()

macro(_ov_find_tbb)
    set(THREADING "@THREADING@")
-    if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND NOT TBB_FOUND)
+    if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
        set(enable_pkgconfig_tbb "@tbb_FOUND@")

        # try tbb.pc

@@ -153,10 +153,10 @@ macro(_ov_find_tbb)
        endif()

        pkg_search_module(tbb
-                        ${pkg_config_quiet_arg}
-                        ${pkg_config_required_arg}
-                        IMPORTED_TARGET
-                        tbb)
+                          ${pkg_config_quiet_arg}
+                          ${pkg_config_required_arg}
+                          IMPORTED_TARGET
+                          tbb)
        unset(pkg_config_quiet_arg)
        unset(pkg_config_required_arg)

@@ -223,28 +223,185 @@ macro(_ov_find_tbb)
                    PATHS ${_tbb_bind_dir}
                    NO_CMAKE_FIND_ROOT_PATH
                    NO_DEFAULT_PATH)
+            set_target_properties(${TBBBIND_2_5_IMPORTED_TARGETS} PROPERTIES IMPORTED_GLOBAL ON)
            unset(_tbb_bind_dir)
        endif()
+        unset(install_tbbbind)
    endif()
endmacro()

+macro(_ov_find_pugixml)
+    set(_OV_ENABLE_SYSTEM_PUGIXML "@ENABLE_SYSTEM_PUGIXML@")
+    if(_OV_ENABLE_SYSTEM_PUGIXML)
+        set(_ov_pugixml_pkgconfig_interface "@pugixml_FOUND@")
+        set(_ov_pugixml_cmake_interface "@PugiXML_FOUND@")
+
+        if(_ov_pugixml_pkgconfig_interface AND NOT ANDROID)
+            _ov_find_dependency(PkgConfig)
+        elseif(_ov_pugixml_cmake_interface)
+            _ov_find_dependency(PugiXML REQUIRED)
+        endif()
+
+        if(PugiXML_FOUND)
+            if(TARGET pugixml)
+                set(_ov_pugixml_target pugixml)
+            elseif(TARGET pugixml::pugixml)
+                set(_ov_pugixml_target pugixml::pugixml)
+            endif()
+            if(OpenVINODeveloperPackage_DIR)
+                set_property(TARGET ${_ov_pugixml_target} PROPERTY IMPORTED_GLOBAL ON)
+                # align with build tree
+                add_library(openvino::pugixml ALIAS ${_ov_pugixml_target})
+            endif()
+            unset(_ov_pugixml_target)
+        elseif(PkgConfig_FOUND)
+            if(${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
+                set(pkg_config_quiet_arg QUIET)
+            endif()
+            if(${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED)
+                set(pkg_config_required_arg REQUIRED)
+            endif()
+
+            pkg_search_module(pugixml
+                              ${pkg_config_quiet_arg}
+                              ${pkg_config_required_arg}
+                              IMPORTED_TARGET
+                              GLOBAL
+                              pugixml)
+
+            unset(pkg_config_quiet_arg)
+            unset(pkg_config_required_arg)
+
+            if(pugixml_FOUND)
+                if(OpenVINODeveloperPackage_DIR)
+                    add_library(openvino::pugixml ALIAS PkgConfig::pugixml)
+                endif()
+
+                # PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
+                get_target_property(interface_include_dir PkgConfig::pugixml INTERFACE_INCLUDE_DIRECTORIES)
+                if(interface_include_dir AND NOT EXISTS "${interface_include_dir}")
+                    set_target_properties(PkgConfig::pugixml PROPERTIES
+                        INTERFACE_INCLUDE_DIRECTORIES "")
+                endif()
+            endif()
+        endif()
+
+        # debian 9 case: no cmake, no pkg-config files
+        if(NOT TARGET openvino::pugixml)
+            find_library(PUGIXML_LIBRARY NAMES pugixml DOC "Path to pugixml library")
+            if(PUGIXML_LIBRARY)
+                add_library(openvino::pugixml INTERFACE IMPORTED)
+                set_target_properties(openvino::pugixml PROPERTIES INTERFACE_LINK_LIBRARIES "${PUGIXML_LIBRARY}")
+            else()
+                message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
+            endif()
+        endif()
+    endif()
+endmacro()

macro(_ov_find_itt)
|
||||
set(_ENABLE_PROFILING_ITT "@ENABLE_PROFILING_ITT@")
|
||||
# whether 'ittapi' is found via find_package
|
||||
set(_ENABLE_SYSTEM_ITTAPI "@ittapi_FOUND@")
|
||||
if(_ENABLE_PROFILING_ITT AND _ENABLE_SYSTEM_ITTAPI)
|
||||
_ov_find_dependency(ittapi)
|
||||
endif()
|
||||
unset(_ENABLE_PROFILING_ITT)
|
||||
unset(_ENABLE_SYSTEM_ITTAPI)
|
||||
endmacro()
|
||||
|
||||
macro(_ov_find_ade)
|
||||
set(_OV_ENABLE_GAPI_PREPROCESSING "@ENABLE_GAPI_PREPROCESSING@")
|
||||
# whether 'ade' is found via find_package
|
||||
set(_ENABLE_SYSTEM_ADE "@ade_FOUND@")
|
||||
if(_OV_ENABLE_GAPI_PREPROCESSING AND _ENABLE_SYSTEM_ADE)
|
||||
_ov_find_dependency(ade 0.1.2)
|
||||
endif()
|
||||
unset(_OV_ENABLE_GAPI_PREPROCESSING)
|
||||
unset(_ENABLE_SYSTEM_ADE)
|
||||
endmacro()
|
||||
|
||||
macro(_ov_find_intel_cpu_dependencies)
|
||||
set(_OV_ENABLE_CPU_ACL "@DNNL_USE_ACL@")
|
||||
if(_OV_ENABLE_CPU_ACL)
|
||||
if(_ov_as_external_package)
|
||||
set_and_check(ARM_COMPUTE_LIB_DIR "@PACKAGE_ARM_COMPUTE_LIB_DIR@")
|
||||
set(_ov_find_acl_options NO_DEFAULT_PATH)
|
||||
set(_ov_find_acl_path "${CMAKE_CURRENT_LIST_DIR}")
|
||||
else()
|
||||
set_and_check(_ov_find_acl_path "@PACKAGE_FIND_ACL_PATH@")
|
||||
endif()
|
||||
|
||||
_ov_find_dependency(ACL
|
||||
NO_MODULE
|
||||
PATHS "${_ov_find_acl_path}"
|
||||
${_ov_find_acl_options})
|
||||
|
||||
unset(ARM_COMPUTE_LIB_DIR)
|
||||
unset(_ov_find_acl_path)
|
||||
unset(_ov_find_acl_options)
|
||||
endif()
|
||||
unset(_OV_ENABLE_CPU_ACL)
|
||||
endmacro()
|
||||
|
||||
macro(_ov_find_intel_gpu_dependencies)
|
||||
set(_OV_ENABLE_INTEL_GPU "@ENABLE_INTEL_GPU@")
|
||||
set(_OV_ENABLE_SYSTEM_OPENCL "@ENABLE_SYSTEM_OPENCL@")
|
||||
if(_OV_ENABLE_INTEL_GPU AND _OV_ENABLE_SYSTEM_OPENCL)
|
||||
set(_OV_OpenCLICDLoader_FOUND "@OpenCLICDLoader_FOUND@")
|
||||
if(_OV_OpenCLICDLoader_FOUND)
|
||||
_ov_find_dependency(OpenCLICDLoader)
|
||||
else()
|
||||
_ov_find_dependency(OpenCL)
|
||||
endif()
|
||||
unset(_OV_OpenCLICDLoader_FOUND)
|
||||
endif()
|
||||
unset(_OV_ENABLE_INTEL_GPU)
|
||||
unset(_OV_ENABLE_SYSTEM_OPENCL)
|
||||
endmacro()
|
||||
|
||||
macro(_ov_find_intel_gna_dependencies)
|
||||
set(_OV_ENABLE_INTEL_GNA "@ENABLE_INTEL_GNA@")
|
||||
if(_OV_ENABLE_INTEL_GNA AND NOT libGNA_FOUND)
|
||||
if(_OV_ENABLE_INTEL_GNA)
|
||||
set_and_check(GNA_PATH "@PACKAGE_GNA_PATH@")
|
||||
_ov_find_dependency(libGNA
|
||||
COMPONENTS KERNEL
|
||||
CONFIG
|
||||
PATHS "${CMAKE_CURRENT_LIST_DIR}"
|
||||
NO_CMAKE_FIND_ROOT_PATH
|
||||
NO_DEFAULT_PATH)
|
||||
unset(GNA_PATH)
|
||||
endif()
|
||||
unset(_OV_ENABLE_INTEL_GNA)
|
||||
endmacro()
|
||||
|
||||
macro(_ov_find_protobuf_frontend_dependency)
|
||||
set(_OV_ENABLE_SYSTEM_PROTOBUF "@ENABLE_SYSTEM_PROTOBUF@")
|
||||
# TODO: remove check for target existence
|
||||
if(_OV_ENABLE_SYSTEM_PROTOBUF AND NOT TARGET protobuf::libprotobuf)
|
||||
_ov_find_dependency(Protobuf @Protobuf_VERSION@ EXACT)
|
||||
endif()
|
||||
unset(_OV_ENABLE_SYSTEM_PROTOBUF)
|
||||
endmacro()
|
||||
|
||||
macro(_ov_find_tensorflow_frontend_dependencies)
|
||||
set(_OV_ENABLE_SYSTEM_SNAPPY "@ENABLE_SYSTEM_SNAPPY@")
|
||||
set(_ov_snappy_lib "@ov_snappy_lib@")
|
||||
# TODO: remove check for target existence
|
||||
if(_OV_ENABLE_SYSTEM_SNAPPY AND NOT TARGET ${_ov_snappy_lib})
|
||||
_ov_find_dependency(Snappy @Snappy_VERSION@ EXACT)
|
||||
endif()
|
||||
unset(_OV_ENABLE_SYSTEM_SNAPPY)
|
||||
unset(_ov_snappy_lib)
|
||||
set(PACKAGE_PREFIX_DIR ${_ov_package_prefix_dir})
|
||||
endmacro()
|
||||
|
||||
macro(_ov_find_onnx_frontend_dependencies)
|
||||
set(_OV_ENABLE_SYSTEM_ONNX "@ENABLE_SYSTEM_ONNX@")
|
||||
if(_OV_ENABLE_SYSTEM_ONNX)
|
||||
_ov_find_dependency(ONNX @ONNX_VERSION@ EXACT)
|
||||
endif()
|
||||
unset(_OV_ENABLE_SYSTEM_ONNX)
|
||||
endmacro()
|
||||
|
||||
function(_ov_target_no_deprecation_error)
|
||||
if(NOT MSVC)
|
||||
if(CMAKE_CXX_COMPILER_ID STREQUAL "Intel")
|
||||
@@ -265,13 +422,41 @@ endfunction()
|
||||
# OpenVINO config
|
||||
#
|
||||
|
||||
cmake_policy(PUSH)
|
||||
# we need CMP0057 to allow IN_LIST in if() command
|
||||
if(POLICY CMP0057)
|
||||
cmake_policy(SET CMP0057 NEW)
|
||||
else()
|
||||
message(FATAL_ERROR "OpenVINO requires CMake 3.3 or newer")
|
||||
endif()
|
||||
|
||||
# need to store current PACKAGE_PREFIX_DIR, because it's overwritten by sub-package one
|
||||
set(_ov_package_prefix_dir "${PACKAGE_PREFIX_DIR}")
|
||||
|
||||
set(_OV_ENABLE_OPENVINO_BUILD_SHARED "@BUILD_SHARED_LIBS@")
|
||||
|
||||
if(NOT TARGET openvino)
|
||||
set(_ov_as_external_package ON)
|
||||
endif()
|
||||
|
||||
if(NOT _OV_ENABLE_OPENVINO_BUILD_SHARED)
|
||||
# common openvino dependencies
|
||||
_ov_find_tbb()
|
||||
|
||||
_ov_find_itt()
|
||||
_ov_find_pugixml()
|
||||
|
||||
# preprocessing dependencies
|
||||
_ov_find_ade()
|
||||
|
||||
# frontend dependencies
|
||||
_ov_find_protobuf_frontend_dependency()
|
||||
_ov_find_tensorflow_frontend_dependencies()
|
||||
_ov_find_onnx_frontend_dependencies()
|
||||
|
||||
# plugin dependencies
|
||||
_ov_find_intel_cpu_dependencies()
|
||||
_ov_find_intel_gpu_dependencies()
|
||||
_ov_find_intel_gna_dependencies()
|
||||
endif()
|
||||
|
||||
@@ -279,13 +464,26 @@ _ov_find_dependency(Threads)
|
||||
|
||||
unset(_OV_ENABLE_OPENVINO_BUILD_SHARED)
|
||||
|
||||
if(NOT TARGET openvino)
|
||||
set(_ov_as_external_package ON)
|
||||
set(_ov_imported_libs openvino::runtime openvino::runtime::c
|
||||
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow
|
||||
openvino::frontend::pytorch openvino::frontend::tensorflow_lite)
|
||||
|
||||
if(_ov_as_external_package)
|
||||
include("${CMAKE_CURRENT_LIST_DIR}/OpenVINOTargets.cmake")
|
||||
|
||||
foreach(target IN LISTS _ov_imported_libs)
|
||||
if(TARGET ${target})
|
||||
get_target_property(imported_configs ${target} IMPORTED_CONFIGURATIONS)
|
||||
if(NOT RELWITHDEBINFO IN_LIST imported_configs)
|
||||
set_property(TARGET ${target} PROPERTY MAP_IMPORTED_CONFIG_RELWITHDEBINFO RELEASE)
|
||||
endif()
|
||||
unset(imported_configs)
|
||||
endif()
|
||||
endforeach()
|
||||
|
||||
# WA for cmake version < 3.16 which does not export
|
||||
# IMPORTED_LINK_DEPENDENT_LIBRARIES_** properties if no PUBLIC dependencies for the library
|
||||
if((THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO") AND TBB_FOUND)
|
||||
if(THREADING STREQUAL "TBB" OR THREADING STREQUAL "TBB_AUTO")
|
||||
foreach(type RELEASE DEBUG RELWITHDEBINFO MINSIZEREL)
|
||||
foreach(tbb_target TBB::tbb TBB::tbbmalloc PkgConfig::tbb)
|
||||
if(TARGET ${tbb_target})
|
||||
@@ -326,12 +524,12 @@ endif()
|
||||
# Apply common functions
|
||||
#
|
||||
|
||||
foreach(target openvino::runtime openvino::runtime::c
|
||||
openvino::frontend::onnx openvino::frontend::paddle openvino::frontend::tensorflow)
|
||||
foreach(target IN LISTS _ov_imported_libs)
|
||||
if(TARGET ${target} AND _ov_as_external_package)
|
||||
_ov_target_no_deprecation_error(${target})
|
||||
endif()
|
||||
endforeach()
|
||||
unset(_ov_imported_libs)
|
||||
unset(_ov_as_external_package)
|
||||
|
||||
# restore PACKAGE_PREFIX_DIR
|
||||
@@ -349,3 +547,7 @@ unset(${CMAKE_FIND_PACKAGE_NAME}_IR_FOUND)
|
||||
unset(${CMAKE_FIND_PACKAGE_NAME}_Paddle_FOUND)
|
||||
unset(${CMAKE_FIND_PACKAGE_NAME}_ONNX_FOUND)
|
||||
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlow_FOUND)
|
||||
unset(${CMAKE_FIND_PACKAGE_NAME}_TensorFlowLite_FOUND)
|
||||
unset(${CMAKE_FIND_PACKAGE_NAME}_PyTorch_FOUND)
|
||||
|
||||
cmake_policy(POP)
|
||||
|
@@ -56,6 +56,7 @@ find_dependency(OpenVINO
NO_DEFAULT_PATH)

_ov_find_tbb()
_ov_find_pugixml()

foreach(component @openvino_export_components@)
# TODO: remove legacy targets from some tests
@@ -65,58 +66,6 @@ foreach(component @openvino_export_components@)
# endif()
endforeach()

if(ENABLE_SYSTEM_PUGIXML)
set(_ov_pugixml_pkgconfig_interface "@pugixml_FOUND@")
set(_ov_pugixml_cmake_interface "@PugiXML_FOUND@")
if(_ov_pugixml_pkgconfig_interface)
find_dependency(PkgConfig)
elseif(_ov_pugixml_cmake_interface)
find_dependency(PugiXML)
endif()
if(PugiXML_FOUND)
set_property(TARGET pugixml PROPERTY IMPORTED_GLOBAL TRUE)
add_library(openvino::pugixml ALIAS pugixml)
elseif(PkgConfig_FOUND)
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_QUIETLY)
set(pkg_config_quiet_arg QUIET)
endif()
if(${CMAKE_FIND_PACKAGE_NAME}_FIND_REQUIRED)
set(pkg_config_required_arg REQUIRED)
endif()

pkg_search_module(pugixml
${pkg_config_quiet_arg}
${pkg_config_required_arg}
IMPORTED_TARGET GLOBAL
pugixml)

unset(pkg_config_quiet_arg)
unset(pkg_config_required_arg)

if(pugixml_FOUND)
add_library(openvino::pugixml ALIAS PkgConfig::pugixml)

# PATCH: on Ubuntu 18.04 pugixml.pc contains incorrect include directories
get_target_property(interface_include_dir PkgConfig::pugixml INTERFACE_INCLUDE_DIRECTORIES)
if(interface_include_dir AND NOT EXISTS "${interface_include_dir}")
set_target_properties(PkgConfig::pugixml PROPERTIES
INTERFACE_INCLUDE_DIRECTORIES "")
endif()
endif()
endif()

# debian 9 case: no cmake, no pkg-config files
if(NOT TARGET openvino::pugixml)
find_library(PUGIXML_LIBRARY NAMES pugixml DOC "Path to pugixml library")
if(PUGIXML_LIBRARY)
add_library(openvino::pugixml INTERFACE IMPORTED GLOBAL)
set_target_properties(openvino::pugixml PROPERTIES INTERFACE_LINK_LIBRARIES "${PUGIXML_LIBRARY}")
else()
message(FATAL_ERROR "Failed to find system pugixml in OpenVINO Developer Package")
endif()
endif()
endif()

# inherit OpenCV from main OpenVINO project if enabled
if ("@OpenCV_FOUND@")
load_cache("${cache_path}" READ_WITH_PREFIX "" OpenCV_DIR)
@@ -42,11 +42,12 @@ function(ov_model_convert SRC DST OUT)
endif()

set(full_out_name "${DST}/${rel_out_name}")
file(MAKE_DIRECTORY "${DST}/${rel_dir}")

if(ext STREQUAL ".prototxt")
# convert .prototxt models to .onnx binary
add_custom_command(OUTPUT ${full_out_name}
COMMAND ${CMAKE_COMMAND} -E make_directory
"${DST}/${rel_dir}"
COMMAND ${PYTHON_EXECUTABLE} ${onnx_gen_script}
"${SRC}/${in_file}" ${full_out_name}
DEPENDS ${onnx_gen_script} "${SRC}/${in_file}"
@@ -55,6 +56,8 @@ function(ov_model_convert SRC DST OUT)
WORKING_DIRECTORY "${model_source_dir}")
else()
add_custom_command(OUTPUT ${full_out_name}
COMMAND ${CMAKE_COMMAND} -E make_directory
"${DST}/${rel_dir}"
COMMAND "${CMAKE_COMMAND}" -E copy_if_different
"${SRC}/${in_file}" ${full_out_name}
DEPENDS ${onnx_gen_script} "${SRC}/${in_file}"
@@ -68,18 +71,24 @@ function(ov_model_convert SRC DST OUT)
set(${OUT} ${files} PARENT_SCOPE)
endfunction()

if(OV_GENERATOR_MULTI_CONFIG AND CMAKE_VERSION VERSION_GREATER_EQUAL 3.20)
set(test_model_zoo_output_dir "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/$<CONFIG>/test_model_zoo")
else()
set(test_model_zoo_output_dir "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo")
endif()

ov_model_convert("${CMAKE_CURRENT_SOURCE_DIR}/src/core/tests"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/core"
"${test_model_zoo_output_dir}/core"
core_tests_out_files)

set(rel_path "src/tests/functional/plugin/shared/models")
ov_model_convert("${OpenVINO_SOURCE_DIR}/${rel_path}"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/func_tests/models"
"${test_model_zoo_output_dir}/func_tests/models"
ft_out_files)

set(rel_path "src/frontends/onnx/tests/models")
ov_model_convert("${OpenVINO_SOURCE_DIR}/${rel_path}"
"${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo/onnx"
"${test_model_zoo_output_dir}/onnx"
onnx_fe_out_files)

if(ENABLE_TESTS)
@@ -87,11 +96,12 @@ if(ENABLE_TESTS)
${ft_out_files}
${onnx_fe_out_files})

if (ENABLE_OV_PADDLE_FRONTEND)
add_dependencies(test_model_zoo paddle_test_models)
endif()
# TODO Reenable PDPD after paddlepaddle==2.5.0 with compliant protobuf is released (ticket 95904)
#if (ENABLE_OV_PADDLE_FRONTEND)
# add_dependencies(test_model_zoo paddle_test_models)
#endif()

install(DIRECTORY "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/test_model_zoo"
install(DIRECTORY "${test_model_zoo_output_dir}"
DESTINATION tests COMPONENT tests EXCLUDE_FROM_ALL)

set(TEST_MODEL_ZOO "./test_model_zoo" CACHE PATH "Path to test model zoo")
cmake/toolchains/mingw-w64.toolchain.cmake (new file, 95 lines)
@@ -0,0 +1,95 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

# Prerequisites:
#
# Build platform: Ubuntu
# apt-get install mingw-w64 mingw-w64-tools g++-mingw-w64-x86-64 gcc-mingw-w64-x86-64
#
# Build platform: macOS
# brew install mingw-w64
#

set(CMAKE_SYSTEM_NAME Windows)
set(CMAKE_SYSTEM_PROCESSOR x86_64)

set(CMAKE_C_COMPILER x86_64-w64-mingw32-gcc-posix)
set(CMAKE_CXX_COMPILER x86_64-w64-mingw32-g++-posix)
set(PKG_CONFIG_EXECUTABLE x86_64-w64-mingw32-pkg-config CACHE PATH "Path to Windows x86_64 pkg-config")

set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)

macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()

macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()


# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
SET(WIN32)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_program(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()

# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
SET(APPLE)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(WIN32)
SET(UNIX)
elseif(CMAKE_HOST_UNIX)
SET(UNIX 1)
SET(WIN32)
SET(APPLE)
endif()
find_package(${ARGN})
SET(WIN32 1)
SET(APPLE)
SET(UNIX)
__cmake_find_root_restore()
endmacro()
@@ -24,7 +24,7 @@ set(CMAKE_LINKER ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-ld)
set(CMAKE_OBJCOPY ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-objcopy)
set(CMAKE_OBJDUMP ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-objdump)
set(CMAKE_READELF ${RISCV_TOOLCHAIN_ROOT}/bin/riscv64-unknown-linux-gnu-readelf)
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to ARM64 pkg-config")
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to RISC-V pkg-config")

# Don't run the linker on compiler check
set(CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY)
cmake/toolchains/x86_64.linux.toolchain.cmake (new file, 75 lines)
@@ -0,0 +1,75 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
#

set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR amd64)

set(CMAKE_C_COMPILER x86_64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER x86_64-linux-gnu-g++)
set(CMAKE_STRIP x86_64-linux-gnu-strip)
set(PKG_CONFIG_EXECUTABLE "NOT-FOUND" CACHE PATH "Path to amd64 pkg-config")

set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_PACKAGE ONLY)

macro(__cmake_find_root_save_and_reset)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(__save_${v} ${${v}})
set(${v} NEVER)
endforeach()
endmacro()

macro(__cmake_find_root_restore)
foreach(v
CMAKE_FIND_ROOT_PATH_MODE_LIBRARY
CMAKE_FIND_ROOT_PATH_MODE_INCLUDE
CMAKE_FIND_ROOT_PATH_MODE_PACKAGE
CMAKE_FIND_ROOT_PATH_MODE_PROGRAM
)
set(${v} ${__save_${v}})
unset(__save_${v})
endforeach()
endmacro()


# macro to find programs on the host OS
macro(find_host_program)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_program(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()

# macro to find packages on the host OS
macro(find_host_package)
__cmake_find_root_save_and_reset()
if(CMAKE_HOST_WIN32)
SET(WIN32 1)
SET(UNIX)
elseif(CMAKE_HOST_APPLE)
SET(APPLE 1)
SET(UNIX)
endif()
find_package(${ARGN})
SET(WIN32)
SET(APPLE)
SET(UNIX 1)
__cmake_find_root_restore()
endmacro()
conan.lock (new file, 36 lines)
@@ -0,0 +1,36 @@
{
  "version": "0.5",
  "requires": [
    "zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
    "xbyak/6.73#250bc3bc73379f90f255876c1c00a4cd%1691853024.351",
    "snappy/1.1.10#916523630083f6d855cb2977de8eefb6%1689780661.062",
    "pybind11/2.10.4#dd44c80a5ed6a2ef11194380daae1248%1682692198.909",
    "pugixml/1.13#f615c1fcec55122b2e177d17061276e7%1691917296.869",
    "protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
    "opencl-icd-loader/2023.04.17#5f73dd9f0c023d416a7f162e320b9c77%1692732261.088",
    "opencl-headers/2023.04.17#3d98f2d12a67c2400de6f11d5335b5a6%1683936272.16",
    "opencl-clhpp-headers/2023.04.17#7c62fcc7ac2559d4839150d2ebaac5c8%1685450803.672",
    "onnx/1.13.1#f11071c8aba52731a5205b028945acbb%1693130310.715",
    "onetbb/2021.10.0#cbb2fc43088070b48f6e4339bc8fa0e1%1693812561.235",
    "nlohmann_json/3.11.2#a35423bb6e1eb8f931423557e282c7ed%1666619820.488",
    "ittapi/3.24.0#9246125f13e7686dee2b0c992b71db94%1682969872.743",
    "hwloc/2.9.2#1c63e2eccac57048ae226e6c946ebf0e%1688677682.002",
    "gflags/2.2.2#48d1262ffac8d30c3224befb8275a533%1676224985.343",
    "flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
    "ade/0.1.2a#b569ff943843abd004e65536e265a445%1688125447.482"
  ],
  "build_requires": [
    "zlib/1.2.13#97d5730b529b4224045fe7090592d4c1%1692672717.049",
    "protobuf/3.21.12#d9f5f4e3b86552552dda4c0a2e928eeb%1685218275.69",
    "protobuf/3.21.9#515ceb0a1653cf84363d9968b812d6be%1678364058.993",
    "patchelf/0.13#0eaada8970834919c3ce14355afe7fac%1680534241.341",
    "m4/1.4.19#c1c4b1ee919e34630bb9b50046253d3c%1676610086.39",
    "libtool/2.4.6#9ee8efc04c2e106e7fba13bb1e477617%1677509454.345",
    "gnu-config/cci.20210814#15c3bf7dfdb743977b84d0321534ad90%1681250000.747",
    "flatbuffers/23.5.26#b153646f6546daab4c7326970b6cd89c%1685838458.449",
    "cmake/3.27.4#a7e78418b024dccacccc887f049f47ed%1693515860.005",
    "automake/1.16.5#058bda3e21c36c9aa8425daf3c1faf50%1688481772.751",
    "autoconf/2.71#53be95d228b2dcb30dc199cb84262d8f%1693395343.513"
  ],
  "python_requires": []
}
conanfile.txt (new file, 33 lines)
@@ -0,0 +1,33 @@
[requires]
ade/0.1.2a
onetbb/[>=2021.2.1]
pugixml/[>=1.10]
protobuf/3.21.12
ittapi/[>=3.23.0]
zlib/[>=1.2.8]
opencl-icd-loader/[>=2022.09.30]
# opencl-clhpp-headers/[>=2022.09.30]
opencl-headers/[>=2022.09.30]
xbyak/[>=6.62]
snappy/[>=1.1.7]
gflags/2.2.2
onnx/1.13.1
nlohmann_json/[>=3.1.1]
pybind11/[>=2.10.1]
flatbuffers/[>=22.9.24]

[tool_requires]
cmake/[>=3.15]
patchelf/[>=0.12]
protobuf/3.21.9
flatbuffers/[>=22.9.24]

[options]
protobuf/*:lite=True
onetbb/*:tbbmalloc=True
onetbb/*:tbbproxy=True
flatbuffers/*:header_only=True

[generators]
CMakeDeps
CMakeToolchain
@@ -77,7 +77,7 @@ function(build_docs)
if(ENABLE_OPENVINO_NOTEBOOKS)
set(NBDOC_SCRIPT "${DOCS_SOURCE_DIR}/nbdoc/nbdoc.py")
list(APPEND commands
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${RST_OUTPUT}/notebooks"
COMMAND ${PYTHON_EXECUTABLE} "${NBDOC_SCRIPT}" "${DOCS_SOURCE_DIR}/notebooks" "${RST_OUTPUT}/notebooks"
)
endif()
docs/Documentation/datumaro.md (new file, 76 lines)
@@ -0,0 +1,76 @@
# Datumaro {#datumaro_documentation}

@sphinxdirective

.. meta::
   :description: Start working with Datumaro, which offers functionalities for basic data
                 import/export, validation, correction, filtration and transformations.


Datumaro provides basic data import/export (IE) for more than 35 public vision data
formats, as well as manipulation functionalities such as validation, correction, filtration, and some
transformations. To enable web-scale training, it further aims to merge multiple
heterogeneous datasets through its comparator and merger. Datumaro is integrated into Geti™, OpenVINO™
Training Extensions, and CVAT for ease of data preparation. Datumaro is open-sourced and
available on `GitHub <https://github.com/openvinotoolkit/datumaro>`__.
Refer to the official `documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__ to learn more,
and explore the `Jupyter notebooks <https://github.com/openvinotoolkit/datumaro/tree/develop/notebooks>`__ for hands-on Datumaro practice.

Detailed Workflow
#################

.. image:: ./_static/images/datumaro.png

1. To start working with Datumaro, download public datasets or prepare your own annotated dataset.

.. note::
   Datumaro provides a CLI `datum download` for downloading `TensorFlow Datasets <https://www.tensorflow.org/datasets>`__.

2. Import data into Datumaro and improve the dataset's quality using `Validator`, `Corrector`, and `Filter`.

3. Compare two datasets and transform the label schemas (category information) before merging them.

4. Merge two datasets into a large-scale dataset.

.. note::
   There are several choices of merger, i.e., `ExactMerger`, `IntersectMerger`, and `UnionMerger`.

5. Split the unified dataset into subsets, e.g., `train`, `valid`, and `test`, through `Splitter`.

.. note::
   You can split data with a given subset ratio according to either the number of samples or the number of annotations. See `SplitTask` for the task-specific split.

6. Export the cleaned and unified dataset for follow-up workflows such as model training.
Go to :doc:`OpenVINO™ Training Extensions <ote_documentation>`.

If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation. A minimal Python sketch of this workflow follows.
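The sketch below illustrates steps 2-6 with the Datumaro Python API. It is illustrative only; the dataset paths, the ``coco``/``voc`` format names, and the split ratios are assumptions, not part of this document:

.. code-block:: python

   # Hypothetical sketch of import -> merge -> split -> export; paths are placeholders.
   import datumaro as dm

   ds_a = dm.Dataset.import_from("./coco_dataset", format="coco")  # step 2: import
   ds_b = dm.Dataset.import_from("./voc_dataset", format="voc")

   merged = dm.HLOps.merge(ds_a, ds_b)                             # step 4: merge
   merged = merged.transform("random_split",                       # step 5: split
                             splits=[("train", 0.8), ("valid", 0.1), ("test", 0.1)])
   merged.export("./unified_dataset", format="datumaro")           # step 6: export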
Datumaro Components
###################

* `Datumaro CLIs <https://openvinotoolkit.github.io/datumaro/stable/docs/command-reference/overview.html>`__
* `Datumaro APIs <https://openvinotoolkit.github.io/datumaro/stable/docs/reference/datumaro_module.html>`__
* `Datumaro data format <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/datumaro_format.html>`__
* `Supported data formats <https://openvinotoolkit.github.io/datumaro/stable/docs/data-formats/formats/index.html>`__

Tutorials
#########

* `Basic skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/basic_skills/index.html>`__
* `Intermediate skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/intermediate_skills/index.html>`__
* `Advanced skills <https://openvinotoolkit.github.io/datumaro/stable/docs/level-up/advanced_skills/index.html>`__

Python Hands-on Examples
########################

* `Data IE <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/dataset_IO.html>`__
* `Data manipulation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/manipulate.html>`__
* `Data exploration <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/explore.html>`__
* `Data refinement <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/refine.html>`__
* `Data transformation <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/transform.html>`__
* `Deep learning end-to-end use-cases <https://openvinotoolkit.github.io/datumaro/stable/docs/jupyter_notebook_examples/e2e_example.html>`__


@endsphinxdirective
@@ -1,33 +0,0 @@
# Running and Deploying Inference {#openvino_docs_deployment_guide_introduction}

@sphinxdirective

.. toctree::
   :maxdepth: 1
   :hidden:

   Run and Deploy Locally <openvino_deployment_guide>
   Deploy via Model Serving <ovms_what_is_openvino_model_server>


Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.

.. panels::

   :doc:`Deploy via OpenVINO Runtime <openvino_deployment_guide>`
   ^^^^^^^^^^^^^^

   Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
   It utilizes resources available to the system and provides the quickest way of launching inference.
   ---

   :doc:`Deploy via Model Server <ovms_what_is_openvino_model_server>`
   ^^^^^^^^^^^^^^

   Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
   This way inference can use external resources instead of those available to the application itself.


Apart from the default deployment options, you may also :doc:`deploy your application for the TensorFlow framework with OpenVINO Integration <ovtf_integration>`

@endsphinxdirective
@@ -17,7 +17,7 @@ OpenVINO Runtime offers multiple inference modes to allow optimum hardware utili
The remaining modes assume certain levels of automation in selecting devices for inference. Using them in the deployed solution may potentially increase its performance and portability. The automated modes are:

* :doc:`Automatic Device Selection (AUTO) <openvino_docs_OV_UG_supported_plugins_AUTO>`
* :doc:``Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Multi-Device Execution (MULTI) <openvino_docs_OV_UG_Running_on_multiple_devices>`
* :doc:`Heterogeneous Execution (HETERO) <openvino_docs_OV_UG_Hetero_execution>`
* :doc:`Automatic Batching Execution (Auto-batching) <openvino_docs_OV_UG_Automatic_Batching>`
@@ -2,6 +2,10 @@

@sphinxdirective

.. meta::
   :description: Explore OpenCV Graph API and other media processing frameworks
                 used for development of computer vision solutions.

.. toctree::
   :maxdepth: 1
@@ -1,6 +1,12 @@
# Model Preparation {#openvino_docs_model_processing_introduction}

@sphinxdirective

.. meta::
   :description: Preparing models for OpenVINO Runtime. Learn about the methods
                 used to read, convert and compile models from different frameworks.


.. toctree::
   :maxdepth: 1
   :hidden:
@@ -10,22 +16,52 @@
   omz_tools_downloader


Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as OpenVINO's :doc:`Open Model Zoo <model_zoo>`.
Every deep learning workflow begins with obtaining a model. You can choose to prepare a custom one, use a ready-made solution and adjust it to your needs, or even download and run a pre-trained network from an online database, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, or `Torchvision models <https://pytorch.org/hub/>`__.

:doc:`OpenVINO™ supports several model formats <Supported_Model_Formats>` and allows you to convert them to its own format, OpenVINO IR, providing a tool dedicated to this task.
Import a model using ``read_model()``
#################################################

:doc:`Model Optimizer <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>` reads the original model and creates the OpenVINO IR model (.xml and .bin files) so that inference can ultimately be performed without delays due to format conversion. Optionally, Model Optimizer can adjust the model to be more suitable for inference, for example, by :doc:`alternating input shapes <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`embedding preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>` and :doc:`cutting training parts off <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`.
Model files (not Python objects) from :doc:`ONNX, PaddlePaddle, TensorFlow and TensorFlow Lite <Supported_Model_Formats>` (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`) do not require a separate step for model conversion, that is ``mo.convert_model``.

The approach to fully convert a model is considered the default choice, as it allows the full extent of OpenVINO features. The OpenVINO IR model format is used by other conversion and preparation tools, such as the Post-Training Optimization Tool, for further optimization of the converted model.
The ``read_model()`` method reads a model from a file and produces `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__. If the file is in one of the supported original framework :doc:`file formats <Supported_Model_Formats>`, the method runs internal conversion to an OpenVINO model format. If the file is already in the :doc:`OpenVINO IR format <openvino_ir>`, it is read "as-is", without any conversion involved.

Conversion is not required for ONNX, PaddlePaddle, and TensorFlow models (check :doc:`TensorFlow Frontend Capabilities and Limitations <openvino_docs_MO_DG_TensorFlow_Frontend>`), as OpenVINO provides C++ and Python APIs for importing them to OpenVINO Runtime directly. It provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.
You can also convert a model from an original framework to `openvino.runtime.Model <api/ie_python_api/_autosummary/openvino.runtime.Model.html>`__ using the ``convert_model()`` method. More details about ``convert_model()`` are provided in the :doc:`model conversion guide <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.

``ov.Model`` can be serialized to IR using the ``ov.serialize()`` method. The serialized IR can be further optimized using the :doc:`Neural Network Compression Framework (NNCF) <ptq_introduction>`, which applies post-training quantization methods.

.. note::

   ``convert_model()`` also allows you to perform input/output cut, add pre-processing, or add custom Python conversion extensions.
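As a minimal sketch of the read-and-serialize flow above (the ``model.onnx`` and ``model.xml`` paths are placeholders, not taken from this document):

.. code-block:: python

   # Read a model file (converted on the fly if needed), then serialize it to IR.
   from openvino.runtime import Core, serialize

   core = Core()
   model = core.read_model("model.onnx")  # hypothetical input path
   serialize(model, "model.xml")          # writes model.xml + model.bin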
Convert a model with Python using ``mo.convert_model()``
###########################################################

Model conversion API, specifically, the ``mo.convert_model()`` method, converts a model from an original framework to ``ov.Model``. ``mo.convert_model()`` returns an ``ov.Model`` object in memory, so the ``read_model()`` method is not required. The resulting ``ov.Model`` can be inferred in the same training environment (Python script or Jupyter Notebook). ``mo.convert_model()`` provides a convenient way to quickly switch from framework-based code to OpenVINO-based code in your inference application.

In addition to model files, ``mo.convert_model()`` can take OpenVINO extension objects constructed directly in Python for easier conversion of operations that are not supported in OpenVINO. The ``mo.convert_model()`` method also has a set of parameters to :doc:`cut the model <openvino_docs_MO_DG_prepare_model_convert_model_Cutting_Model>`, :doc:`set input shapes or layout <openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model>`, :doc:`add preprocessing <openvino_docs_MO_DG_Additional_Optimization_Use_Cases>`, etc.
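A possible sketch of such a conversion, assuming an ONNX file on disk (the path and the ``input_shape`` value are purely illustrative):

.. code-block:: python

   # Convert a framework model in memory, then optionally save it as IR.
   from openvino.tools import mo
   from openvino.runtime import serialize

   ov_model = mo.convert_model("model.onnx", input_shape=[1, 3, 224, 224])
   serialize(ov_model, "model.xml")  # optional: persist the converted model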
The figure below illustrates the typical workflow for deploying a trained deep learning model, where IR is a pair of files describing the model:

* ``.xml`` - Describes the network topology.
* ``.bin`` - Contains the weights and biases binary data.

.. image:: _static/images/model_conversion_diagram.svg
   :alt: model conversion diagram


Convert a model using the ``mo`` command-line tool
#################################################

Another option to convert a model is to use the ``mo`` command-line tool. ``mo`` is a cross-platform tool that facilitates the transition between training and deployment environments, performs static model analysis, and adjusts deep learning models for optimal execution on end-point target devices in the same measure as the ``mo.convert_model()`` method.

``mo`` requires the use of a pre-trained deep learning model in one of the supported formats: TensorFlow, TensorFlow Lite, PaddlePaddle, or ONNX. ``mo`` converts the model to the OpenVINO Intermediate Representation format (IR), which needs to be read with the ``ov.read_model()`` method. Then, you can compile and infer the ``ov.Model`` later with :doc:`OpenVINO™ Runtime <openvino_docs_OV_UG_OV_Runtime_User_Guide>`.

The results of both the ``mo`` and ``mo.convert_model()`` conversion methods described above are the same. You can choose either of them, depending on what is most convenient for you. Keep in mind that there should not be any differences in the results of model conversion if the same set of parameters is used.

This section describes how to obtain and prepare your model for work with OpenVINO to get the best inference results:

* :doc:`See the supported formats and how to use them in your project <Supported_Model_Formats>`.
* :doc:`Convert different model formats to the OpenVINO IR format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.
* `Automate model-related tasks with Model Downloader and additional OMZ Tools <https://docs.openvino.ai/latest/omz_tools_downloader.html>`__.
* :doc:`Convert different model formats to the ov.Model format <openvino_docs_MO_DG_Deep_Learning_Model_Optimizer_DevGuide>`.

To begin with, you may want to :doc:`browse a database of models for use in your projects <model_zoo>`.

@endsphinxdirective
@@ -2,21 +2,25 @@

@sphinxdirective

.. meta::
   :description: OpenVINO™ is an ecosystem of utilities that have advanced capabilities, which help develop deep learning solutions.


.. toctree::
   :maxdepth: 1
   :hidden:

   ote_documentation
   ovtf_integration
   datumaro_documentation
   ovsa_get_started
   openvino_inference_engine_tools_compile_tool_README
   openvino_docs_tuning_utilities


OpenVINO™ is not just one tool. It is an expansive ecosystem of utilities, providing a comprehensive workflow for deep learning solution development. Learn more about each of them to reach the full potential of OpenVINO™ Toolkit.

Neural Network Compression Framework (NNCF)
###########################################


**Neural Network Compression Framework (NNCF)**

A suite of advanced algorithms for Neural Network inference optimization with minimal accuracy drop. NNCF applies quantization, filter pruning, binarization and sparsity algorithms to PyTorch and TensorFlow models during training.

@@ -27,8 +31,7 @@ More resources:
* `PyPI <https://pypi.org/project/nncf/>`__


OpenVINO™ Training Extensions
#############################
**OpenVINO™ Training Extensions**

A convenient environment to train Deep Learning models and convert them using the OpenVINO™ toolkit for optimized inference.

@@ -38,71 +41,60 @@ More resources:
* `GitHub <https://github.com/openvinotoolkit/training_extensions>`__
* `Documentation <https://openvinotoolkit.github.io/training_extensions/stable/guide/get_started/introduction.html>`__

OpenVINO™ Security Add-on
#########################

**OpenVINO™ Security Add-on**

A solution for Model Developers and Independent Software Vendors to use secure packaging and secure model execution.

More resources:

* `Documentation <https://docs.openvino.ai/latest/ovsa_get_started.html>`__
* :doc:`Documentation <ovsa_get_started>`
* `GitHub <https://github.com/openvinotoolkit/security_addon>`__


OpenVINO™ integration with TensorFlow (OVTF)
############################################

A solution empowering TensorFlow developers with OpenVINO's optimization capabilities. With just two lines of code in your application, you can offload inference to OpenVINO, while keeping the TensorFlow API.

More resources:

* `Documentation <https://github.com/openvinotoolkit/openvino_tensorflow>`__
* `PyPI <https://pypi.org/project/openvino-tensorflow/>`__
* `GitHub <https://github.com/openvinotoolkit/openvino_tensorflow>`__

DL Streamer
###########

A streaming media analytics framework, based on the GStreamer multimedia framework, for creating complex media analytics pipelines.

More resources:

* `Documentation on GitHub <https://dlstreamer.github.io/index.html>`__
* `Installation Guide on GitHub <https://github.com/openvinotoolkit/dlstreamer_gst/wiki/Install-Guide>`__

DL Workbench
############

A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.

More resources:

* `Documentation <https://docs.openvino.ai/2022.3/workbench_docs_Workbench_DG_Introduction.html>`__
* `Docker Hub <https://hub.docker.com/r/openvino/workbench>`__
* `PyPI <https://pypi.org/project/openvino-workbench/>`__

Computer Vision Annotation Tool (CVAT)
######################################

An online, interactive video and image annotation tool for computer vision purposes.

More resources:

* `Documentation on GitHub <https://opencv.github.io/cvat/docs/>`__
* `Web application <https://www.cvat.ai/>`__
* `Docker Hub <https://hub.docker.com/r/openvino/cvat_server>`__
* `GitHub <https://github.com/openvinotoolkit/cvat>`__

Dataset Management Framework (Datumaro)
#######################################
**Dataset Management Framework (Datumaro)**

A framework and CLI tool to build, transform, and analyze datasets.

More resources:

* `Documentation on GitHub <https://openvinotoolkit.github.io/datumaro/docs/>`__

* :doc:`Overview <datumaro_documentation>`
* `PyPI <https://pypi.org/project/datumaro/>`__
* `GitHub <https://github.com/openvinotoolkit/datumaro>`__
* `Documentation <https://openvinotoolkit.github.io/datumaro/stable/docs/get-started/introduction.html>`__

**Compile Tool**


Compile tool is now deprecated. If you need to compile a model for inference on a specific device, use the following script:

.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/export_compiled_model.py
         :language: python
         :fragment: [export_compiled_model]

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/export_compiled_model.cpp
         :language: cpp
         :fragment: [export_compiled_model]


To learn which device supports the import / export functionality, see the :doc:`feature support matrix <openvino_docs_OV_UG_Working_with_devices>`.

For more details on preprocessing steps, refer to :doc:`Optimize Preprocessing <openvino_docs_OV_UG_Preprocessing_Overview>`. To compile the model with advanced preprocessing capabilities, refer to the :doc:`Use Case - Integrate and Save Preprocessing Steps Into OpenVINO IR <openvino_docs_OV_UG_Preprocess_Usecase_save>`, which shows how to have all the preprocessing in the compiled blob.
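For reference, a minimal sketch of what such an export script might look like in Python (``model.xml``, ``model.blob``, and the ``CPU`` device are placeholder choices, not the contents of the snippet referenced above):

.. code-block:: python

   # Compile a model, then export the compiled blob for later import.
   from openvino.runtime import Core

   core = Core()
   compiled_model = core.compile_model(core.read_model("model.xml"), "CPU")
   with open("model.blob", "wb") as f:
       f.write(compiled_model.export_model())  # loadable later via core.import_model()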
**DL Workbench**

A web-based tool for deploying deep learning models. Built on the core of OpenVINO and equipped with a graphical user interface, DL Workbench is a great way to explore the possibilities of the OpenVINO workflow, import, analyze, optimize, and build your pre-trained models. You can do all that by visiting `Intel® Developer Cloud <https://software.intel.com/content/www/us/en/develop/tools/devcloud.html>`__ and launching DL Workbench online.

**OpenVINO™ integration with TensorFlow (OVTF)**

OpenVINO™ Integration with TensorFlow will no longer be supported as of OpenVINO release 2023.0. As part of the 2023.0 release, OpenVINO will feature a significantly enhanced TensorFlow user experience within native OpenVINO without needing offline model conversions. :doc:`Learn more <openvino_docs_MO_DG_TensorFlow_Frontend>`.

@endsphinxdirective
@@ -1,55 +0,0 @@
# OpenVINO™ integration with TensorFlow {#ovtf_integration}

@sphinxdirective

**OpenVINO™ integration with TensorFlow** is a solution for TensorFlow developers who want to get started with OpenVINO™ in their inferencing applications. By adding just two lines of code you can now take advantage of OpenVINO™ toolkit optimizations with TensorFlow inference applications across a range of Intel® computation devices.

This is all you need:

.. code-block:: python

   import openvino_tensorflow
   openvino_tensorflow.set_backend('<backend_name>')


**OpenVINO™ integration with TensorFlow** accelerates inference across many AI models on a variety of Intel® technologies, such as:

* Intel® CPUs
* Intel® integrated GPUs

.. note::
   For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ solutions.

To find out more about the product itself, as well as learn how to use it in your project, check its dedicated `GitHub repository <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/docs>`__.


To see what you can do with **OpenVINO™ integration with TensorFlow**, explore the demos located in the `examples folder <https://github.com/openvinotoolkit/openvino_tensorflow/tree/master/examples>`__ in our GitHub repository.

Sample tutorials are also hosted on `Intel® DevCloud <https://www.intel.com/content/www/us/en/developer/tools/devcloud/edge/build/ovtfoverview.html>`__. The demo applications are implemented using Jupyter Notebooks. You can interactively execute them on Intel® DevCloud nodes, comparing the results of **OpenVINO™ integration with TensorFlow**, native TensorFlow, and OpenVINO™.

License
#######

**OpenVINO™ integration with TensorFlow** is licensed under `Apache License Version 2.0 <https://github.com/openvinotoolkit/openvino_tensorflow/blob/master/LICENSE>`__.
By contributing to the project, you agree to the license and copyright terms therein
and release your contribution under these terms.

Support
#######

Submit your questions, feature requests and bug reports via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.

How to Contribute
#################

We welcome community contributions to **OpenVINO™ integration with TensorFlow**. If you have an idea for improvement:

* Share your proposal via `GitHub issues <https://github.com/openvinotoolkit/openvino_tensorflow/issues>`__.
* Submit a `pull request <https://github.com/openvinotoolkit/openvino_tensorflow/pulls>`__.

We will review your contribution as soon as possible. If any additional fixes or modifications are necessary, we will guide you and provide feedback. Before you make your contribution, make sure you can build **OpenVINO™ integration with TensorFlow** and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for your feature. Upon our verification of your pull request, we will merge it to the repository provided that the pull request has met the above-mentioned requirements and proved acceptable.

\* Other names and brands may be claimed as the property of others.

@endsphinxdirective
@@ -1,6 +1,12 @@
# OpenVINO™ Training Extensions {#ote_documentation}

@sphinxdirective

.. meta::
   :description: OpenVINO™ Training Extensions include advanced algorithms used
                 to create, train and convert deep learning models with OpenVINO
                 Toolkit for optimized inference.


OpenVINO™ Training Extensions provide a suite of advanced algorithms to train
Deep Learning models and convert them using the `OpenVINO™
@@ -19,21 +25,22 @@ Detailed Workflow
.. note::
   Prepare a separate dataset or split the dataset you have for more accurate quality evaluation.

3. Having received successful evaluation results, you have an opportunity to deploy your model or continue optimizing it, using NNCF and POT. For more information about these frameworks, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.
3. Having received successful evaluation results, you have an opportunity to deploy your model or continue optimizing it, using NNCF. For more information about these frameworks, go to :doc:`Optimization Guide <openvino_docs_model_optimization_guide>`.

If the results are unsatisfactory, add datasets and perform the same steps, starting with dataset annotation.

OpenVINO Training Extensions Components
#######################################

- `OpenVINO Training Extensions SDK <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_sdk>`__
- `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/master/ote_cli>`__
- `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/master/external>`__
* `OpenVINO Training Extensions API <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/api>`__
* `OpenVINO Training Extensions CLI <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/cli>`__
* `OpenVINO Training Extensions Algorithms <https://github.com/openvinotoolkit/training_extensions/tree/develop/otx/algorithms>`__

Tutorials
#########

`Object Detection <https://github.com/openvinotoolkit/training_extensions/blob/master/ote_cli/notebooks/train.ipynb>`__
* `Base tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/base/index.html>`__
* `Advanced tutorial <https://openvinotoolkit.github.io/training_extensions/stable/guide/tutorials/advanced/index.html>`__

@endsphinxdirective
@@ -3,22 +3,46 @@

@sphinxdirective

.. meta::
   :description: OpenVINO toolkit workflow usually involves preparation,
                 optimization, and compression of models, running inference and
                 deploying deep learning applications.

.. toctree::
   :maxdepth: 1
   :hidden:

   Model Preparation <openvino_docs_model_processing_introduction>
   Model Optimization and Compression <openvino_docs_model_optimization_guide>
   Running and Deploying Inference <openvino_docs_deployment_guide_introduction>
   Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>
   Deployment on a Local System <openvino_deployment_guide>
   Deployment on a Model Server <ovms_what_is_openvino_model_server>
   pytorch_2_0_torch_compile


| :doc:`Model Preparation <openvino_docs_model_processing_introduction>`
| With Model Downloader and Model Optimizer guides, you will learn to download pre-trained models and convert them for use with OpenVINO™. You can use your own models or choose some from a broad selection provided in the Open Model Zoo.
| With the model conversion API guide, you will learn to convert pre-trained models for use with OpenVINO™. You can use your own models or choose some from a broad selection in online databases, such as `TensorFlow Hub <https://tfhub.dev/>`__, `Hugging Face <https://huggingface.co/>`__, or `Torchvision models <https://pytorch.org/hub/>`__.

| :doc:`Model Optimization and Compression <openvino_docs_model_optimization_guide>`
| In this section you will find out how to optimize a model to achieve better inference performance. It describes multiple optimization methods for both the training and post-training stages.

| :doc:`Deployment <openvino_docs_deployment_guide_introduction>`
| This section explains the process of deploying your own inference application using either OpenVINO Runtime or OpenVINO Model Server. It describes how to run inference, which is the most basic form of deployment and the quickest way of launching inference.
| :doc:`Running Inference <openvino_docs_OV_UG_OV_Runtime_User_Guide>`
| This section describes how to run inference, which is the most basic form of deployment and the quickest way of launching inference.


Once you have a model that meets both OpenVINO™ and your requirements, you can choose how to deploy it with your application.


| :doc:`Option 1. Deployment via OpenVINO Runtime <openvino_deployment_guide>`
| Local deployment uses OpenVINO Runtime that is called from, and linked to, the application directly.
| It utilizes resources available to the system and provides the quickest way of launching inference.
| Deployment on a local system requires performing the steps from the running inference section.


| :doc:`Option 2. Deployment via Model Server <ovms_what_is_openvino_model_server>`
| Deployment via OpenVINO Model Server allows the application to connect to the inference server set up remotely.
| This way inference can use external resources instead of those available to the application itself.
| Deployment on a model server can be done quickly and without performing any additional steps described in the running inference section.


@endsphinxdirective
157
docs/Documentation/torch_compile.md
Normal file
157
docs/Documentation/torch_compile.md
Normal file
@@ -0,0 +1,157 @@
|
||||
# PyTorch Deployment via "torch.compile" {#pytorch_2_0_torch_compile}

@sphinxdirective


The ``torch.compile`` feature enables you to use OpenVINO for PyTorch-native applications.
It speeds up PyTorch code by JIT-compiling it into optimized kernels.
By default, Torch code runs in eager mode, but with the use of ``torch.compile`` it goes through the following steps:

1. **Graph acquisition** - the model is rewritten as blocks of subgraphs that are either:

   * compiled by TorchDynamo and "flattened",
   * falling back to eager mode, due to unsupported Python constructs (like control-flow code).

2. **Graph lowering** - all PyTorch operations are decomposed into their constituent kernels specific to the chosen backend.
3. **Graph compilation** - the kernels call their corresponding low-level device-specific operations.


How to Use
#################

To use ``torch.compile``, you need to add an import statement and define one of the two available backends:

| ``openvino``
| With this backend, Torch FX subgraphs are directly converted to OpenVINO representation without any additional PyTorch-based tracing/scripting.

| ``openvino_ts``
| With this backend, Torch FX subgraphs are first traced/scripted with PyTorch TorchScript, and then converted to OpenVINO representation.


.. tab-set::

   .. tab-item:: openvino
      :sync: backend-openvino

      .. code-block:: python

         import openvino.torch
         ...
         model = torch.compile(model, backend='openvino')

      Execution diagram:

      .. image:: _static/images/torch_compile_backend_openvino.svg
         :width: 992px
         :height: 720px
         :scale: 60%
         :align: center

   .. tab-item:: openvino_ts
      :sync: backend-openvino-ts

      .. code-block:: python

         import openvino.torch
         ...
         model = torch.compile(model, backend='openvino_ts')

      Execution diagram:

      .. image:: _static/images/torch_compile_backend_openvino_ts.svg
         :width: 1088px
         :height: 720px
         :scale: 60%
         :align: center

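For context, a hedged end-to-end sketch of how a compiled model is then used - plain PyTorch inference, with OpenVINO working behind ``torch.compile`` (``resnet18`` is only an illustrative model choice):

.. code-block:: python

   import torch
   import torchvision.models as models
   import openvino.torch  # registers the OpenVINO backends

   model = models.resnet18(weights=None).eval()
   model = torch.compile(model, backend='openvino')

   # The first call triggers graph capture and OpenVINO compilation;
   # subsequent calls reuse the optimized kernels.
   with torch.no_grad():
       output = model(torch.randn(1, 3, 224, 224))
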
Environment Variables
+++++++++++++++++++++++++++

* **OPENVINO_TORCH_BACKEND_DEVICE**: enables selecting a specific hardware device to run the application.
  By default, the OpenVINO backend for ``torch.compile`` runs PyTorch applications using the CPU. Setting
  this variable to ``GPU.0``, for example, will make the application use the integrated graphics processor instead.
* **OPENVINO_TORCH_MODEL_CACHING**: enables saving the optimized model files to a hard drive, after the first application run.
  This makes them available for the following application executions, reducing the first-inference latency.
  By default, this variable is set to ``False``. Setting it to ``True`` enables caching.
* **OPENVINO_TORCH_CACHE_DIR**: enables defining a custom directory for the model files (if model caching is set to ``True``).
  By default, the OpenVINO IR is saved in the ``cache`` sub-directory, created in the application's root directory.

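A short sketch of how these variables can be set from inside the application before the first compiled call (assuming the backend reads them at compile time; the values shown are illustrative):

.. code-block:: python

   import os

   # Select the device, enable caching, and point the cache at a custom folder.
   os.environ["OPENVINO_TORCH_BACKEND_DEVICE"] = "GPU.0"
   os.environ["OPENVINO_TORCH_MODEL_CACHING"] = "True"
   os.environ["OPENVINO_TORCH_CACHE_DIR"] = "./ov_cache"

   model = torch.compile(model, backend='openvino')
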

Windows support
++++++++++++++++++++++++++

Currently, PyTorch does not officially support the ``torch.compile`` feature on Windows. However, it can be enabled by following
the instructions below:

1. Install the PyTorch nightly wheel file - `2.1.0.dev20230713 <https://download.pytorch.org/whl/nightly/cpu/torch-2.1.0.dev20230713%2Bcpu-cp38-cp38-win_amd64.whl>`__ ,
2. Update the file at ``<python_env_root>/Lib/site-packages/torch/_dynamo/eval_frames.py``
3. Find the function called ``check_if_dynamo_supported()``:

   .. code-block:: python

      def check_if_dynamo_supported():
          if sys.platform == "win32":
              raise RuntimeError("Windows not yet supported for torch.compile")
          if sys.version_info >= (3, 11):
              raise RuntimeError("Python 3.11+ not yet supported for torch.compile")

4. Comment out the first two lines in this function, so it looks like this:

   .. code-block:: python

      def check_if_dynamo_supported():
          # if sys.platform == "win32":
          #     raise RuntimeError("Windows not yet supported for torch.compile")
          if sys.version_info >= (3, 11):
              raise RuntimeError("Python 3.11+ not yet supported for torch.compile")


Support for Automatic1111 Stable Diffusion WebUI
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Automatic1111 Stable Diffusion WebUI is an open-source repository that hosts a browser-based interface for Stable Diffusion
based image generation. It allows users to create realistic and creative images from text prompts.
Stable Diffusion WebUI is supported on Intel CPUs, Intel integrated GPUs, and Intel discrete GPUs by leveraging the OpenVINO
``torch.compile`` capability. Detailed instructions are available in the
`Stable Diffusion WebUI repository <https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon>`__.

Architecture
#################

The ``torch.compile`` feature is part of PyTorch 2.0, and is based on:

* **TorchDynamo** - a Python-level JIT that hooks into the frame evaluation API in CPython
  (PEP 523) to dynamically modify Python bytecode right before it is executed (PyTorch operators
  that cannot be extracted to an FX graph are executed in the native Python environment).
  It maintains the eager-mode capabilities using
  `Guards <https://pytorch.org/docs/stable/dynamo/guards-overview.html>`__ to ensure the
  generated graphs are valid.

* **AOTAutograd** - generates the backward graph corresponding to the forward graph captured by TorchDynamo.
* **PrimTorch** - decomposes complicated PyTorch operations into simpler and more elementary ops.
* **TorchInductor** - a deep learning compiler that generates fast code for multiple accelerators and backends.


When the PyTorch module is wrapped with ``torch.compile``, TorchDynamo traces the module and
rewrites Python bytecode to extract sequences of PyTorch operations into an FX Graph,
which can be optimized by the OpenVINO backend. The Torch FX graphs are first converted to
inlined FX graphs, and the graph partitioning module traverses the inlined FX graph to identify
operators supported by OpenVINO.

All the supported operators are clustered into OpenVINO submodules, converted to the OpenVINO
graph using OpenVINO's PyTorch decoder, and executed in an optimized manner using OpenVINO runtime.
All unsupported operators fall back to the native PyTorch runtime on CPU. If a subgraph
fails during OpenVINO conversion, it falls back to PyTorch's default inductor backend.


Additional Resources
############################

* `PyTorch 2.0 documentation <https://pytorch.org/docs/stable/index.html>`_

@endsphinxdirective
@@ -2,6 +2,11 @@

@sphinxdirective

.. meta::
   :description: Learn the details of custom kernel support for the GPU device to
                 enable operations not supported by OpenVINO.


To enable operations not supported by OpenVINO™ out of the box, you may need an extension for the OpenVINO operation set, and a custom kernel for the device you will target. This article describes custom kernel support for the GPU device.

The GPU codepath abstracts many details about OpenCL. You need to provide the kernel code in OpenCL C and an XML configuration file that connects the kernel and its parameters to the parameters of the operation.
@@ -13,18 +18,20 @@ There are two options for using the custom operation configuration file:

.. tab-set::

   .. tab-item:: C++

      .. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.cpp
         :language: cpp
         :fragment: [part0]

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.py
         :language: python
         :fragment: [part0]

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/gpu/custom_kernels_api.cpp
         :language: cpp
         :fragment: [part0]


All OpenVINO samples, except the trivial ``hello_classification``, and most Open Model Zoo demos
feature a dedicated command-line option ``-c`` to load custom kernels. For example, to load custom operations for the classification sample, run the command below:
@@ -235,7 +242,8 @@ Example Configuration File

The following code sample provides an example configuration file in XML
format. For information on the configuration file structure, see the `Configuration File Format <#config-file-format>`__.

.. code-block:: cpp
.. code-block:: xml
   :force:

   <CustomLayer name="ReLU" type="SimpleGPU" version="1">
       <Kernel entry="example_relu_kernel">

@@ -2,6 +2,11 @@

@sphinxdirective

.. meta::
   :description: Explore OpenVINO™ Extensibility API, which allows adding
                 support for models with custom operations and their further implementation
                 in applications.

.. toctree::
   :maxdepth: 1
   :hidden:
@@ -9,7 +14,6 @@

   openvino_docs_Extensibility_UG_add_openvino_ops
   openvino_docs_Extensibility_UG_Frontend_Extensions
   openvino_docs_Extensibility_UG_GPU
   openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer

.. toctree::
   :maxdepth: 1
@@ -18,14 +22,20 @@

   openvino_docs_transformations
   OpenVINO Plugin Developer Guide <openvino_docs_ie_plugin_dg_overview>

The Intel® Distribution of OpenVINO™ toolkit supports neural network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, PaddlePaddle, Apache MXNet, Caffe, and Kaldi. The list of supported operations is different for
each of the supported frameworks. To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_docs_MO_DG_prepare_model_Supported_Frameworks_Layers>`.

.. toctree::
   :maxdepth: 1
   :hidden:

   openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer

The Intel® Distribution of OpenVINO™ toolkit supports neural-network models trained with various frameworks, including
TensorFlow, PyTorch, ONNX, TensorFlow Lite, and PaddlePaddle (OpenVINO support for Apache MXNet, Caffe, and Kaldi is currently
being deprecated and will be removed entirely in the future). The list of supported operations is different for each of the supported frameworks.
To see the operations supported by your framework, refer to :doc:`Supported Framework Operations <openvino_resources_supported_operations_frontend>`.

Custom operations, which are not included in the list, are not recognized by OpenVINO out-of-the-box. The need for a custom operation may appear in two cases:

1. A new or rarely used regular framework operation is not supported in OpenVINO yet.

2. A new user operation that was created for some specific model topology by the author of the model using framework extension capabilities.

Importing models with such operations requires additional steps. This guide illustrates the workflow for running inference on models featuring custom operations, allowing you to plug in your own implementation for them. OpenVINO Extensibility API enables adding support for those custom operations and using one implementation for Model Optimizer and OpenVINO Runtime.

@@ -52,13 +62,13 @@ Mapping from Framework Operation

Mapping of custom operations is implemented differently, depending on the model format used for import. You may choose one of the following:

1. If a model is represented in the ONNX (including models exported from PyTorch in ONNX), PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.
1. If a model is represented in the ONNX (including models exported from PyTorch in ONNX), TensorFlow Lite, PaddlePaddle or TensorFlow formats, then one of the classes from :doc:`Frontend Extension API <openvino_docs_Extensibility_UG_Frontend_Extensions>` should be used. It consists of several classes available in C++ which can be used with the ``--extensions`` option in Model Optimizer or when a model is imported directly to OpenVINO runtime using the ``read_model`` method. Python API is also available for runtime model import.

2. If a model is represented in the Caffe, Kaldi or MXNet formats, then :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.
2. If a model is represented in the Caffe, Kaldi or MXNet formats (as legacy frontends), then :doc:`[Legacy] Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` should be used. This approach is available for model conversion in Model Optimizer only.

The existence of the two approaches side by side is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle and TensorFlow) and legacy frontends (Caffe, Kaldi and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the ``read_model`` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on the framework frontend.
The existence of the two approaches side by side is explained by the two different types of frontends used for model conversion in OpenVINO: new frontends (ONNX, PaddlePaddle, TensorFlow Lite, and TensorFlow) and legacy frontends (Caffe, Kaldi, and Apache MXNet). Model Optimizer can use both frontends, in contrast to the direct import of a model with the ``read_model`` method, which can use new frontends only. Follow one of the appropriate guides referenced above to implement mappings depending on the framework frontend.

If you are implementing extensions for the new ONNX, PaddlePaddle or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:
If you are implementing extensions for the new ONNX, PaddlePaddle, TensorFlow Lite or TensorFlow frontends and plan to use the ``--extensions`` option in Model Optimizer for model conversion, then the extensions should be:

1. Implemented in C++ only.

@@ -85,6 +95,13 @@ Extensions can be loaded from a code with the ``:ref:`ov::Core::add_extension <

.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [add_extension]

   .. tab-item:: C++
      :sync: cpp

@@ -92,18 +109,18 @@ Extensions can be loaded from a code with the ``:ref:`ov::Core::add_extension <
         :language: cpp
         :fragment: [add_extension]

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [add_extension]


The ``Identity`` is a custom operation class defined in :doc:`Custom Operation Guide <openvino_docs_Extensibility_UG_add_openvino_ops>`. This is sufficient to enable reading OpenVINO IR which uses the ``Identity`` extension operation emitted by Model Optimizer. In order to load the original model directly to the runtime, add a mapping extension:

.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [add_frontend_extension]

   .. tab-item:: C++
      :sync: cpp

@@ -111,16 +128,11 @@ The ``Identity`` is a custom operation class defined in :doc:`Custom Operation G
         :language: cpp
         :fragment: [add_frontend_extension]

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [add_frontend_extension]

When the Python API is used, there is no way to implement a custom OpenVINO operation. Even if a custom OpenVINO operation is implemented in C++ and loaded into the runtime by a shared library, there is still no way to add a frontend mapping extension that refers to this custom operation. In this case, use the C++ shared library approach to implement both the operation semantics and the framework mapping.

Python can still be used to map and decompose operations when only operations from the standard OpenVINO operation set are used.

.. _create_a_library_with_extensions:

Create a Library with Extensions
++++++++++++++++++++++++++++++++

@@ -165,13 +177,6 @@ This CMake script finds OpenVINO, using the ``find_package`` CMake command.

.. tab-set::

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_extensions.cpp
         :language: cpp
         :fragment: [add_extension_lib]

   .. tab-item:: Python
      :sync: py

@@ -179,6 +184,13 @@ This CMake script finds OpenVINO, using the ``find_package`` CMake command.
         :language: python
         :fragment: [add_extension_lib]

   .. tab-item:: C++
      :sync: cpp

      .. doxygensnippet:: docs/snippets/ov_extensions.cpp
         :language: cpp
         :fragment: [add_extension_lib]


See Also
########

@@ -187,4 +199,4 @@ See Also

* :doc:`Using OpenVINO Runtime Samples <openvino_docs_OV_UG_Samples_Overview>`
* :doc:`Hello Shape Infer SSD sample <openvino_inference_engine_samples_hello_reshape_ssd_README>`

@endsphinxdirective
@endsphinxdirective
@@ -2,6 +2,11 @@

@sphinxdirective

.. meta::
   :description: Explore OpenVINO™ Extension API which enables registering
                 custom operations to support models with operations
                 not supported by OpenVINO.

OpenVINO™ Extension API allows you to register custom operations to support models with operations which OpenVINO™ does not support out-of-the-box. This capability requires writing code in C++, so if you are using Python to develop your application, you need to build a separate shared library implemented in C++ first and load it in Python using the ``add_extension`` API. Please refer to :ref:`Create a library with extensions <create_a_library_with_extensions>` for more details on library creation and usage. The remaining part of this document describes how to implement an operation class.

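A hedged sketch of that Python flow, assuming the shared library has already been built (the library and model file names are hypothetical):

.. code-block:: python

   from openvino.runtime import Core

   core = Core()
   # Load the C++ shared library that implements the custom operation.
   core.add_extension("libcustom_ops.so")

   # A model using the custom operation can now be read and compiled as usual.
   model = core.read_model("model_with_custom_op.xml")
   compiled = core.compile_model(model, "CPU")
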
Operation Class

@@ -2,32 +2,58 @@

@sphinxdirective

The goal of this chapter is to explain how to use Frontend extension classes to facilitate mapping of custom operations from framework model representation to OpenVINO representation. Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to understand the entire flow.
.. meta::
   :description: Learn how to use frontend extension classes to facilitate the mapping
                 of custom operations from the framework model representation to the OpenVINO
                 representation.

This API is applicable for new frontends only, which exist for ONNX, PaddlePaddle and TensorFlow. If a different model format is used, follow the legacy :doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>` guide.

The goal of this chapter is to explain how to use Frontend extension classes to facilitate
mapping of custom operations from framework model representation to OpenVINO representation.
Refer to :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>` to
understand the entire flow.

This API is applicable to new frontends only, which exist for ONNX, TensorFlow Lite, PaddlePaddle, and TensorFlow.
If a different model format is used, follow the legacy
:doc:`Model Optimizer Extensions <openvino_docs_MO_DG_prepare_model_customize_model_optimizer_Customize_Model_Optimizer>`
guide.

.. note::
   This documentation is written based on the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__, which demonstrates extension development details based on the minimalistic ``Identity`` operation that is a placeholder for your real custom operation. You can review the complete code, which is fully compilable, to see how it works.

Single Operation Mapping with OpExtension
#########################################
This documentation is written based on the `Template extension <https://github.com/openvinotoolkit/openvino/tree/master/src/core/template_extension/new>`__,
which demonstrates extension development details based on the minimalistic ``Identity``
operation that is a placeholder for your real custom operation. You can review the complete code,
which is fully compilable, to see how it works.

This section covers the case when a single operation in framework representation is mapped to a single operation in OpenVINO representation. This is called *one-to-one mapping*. There is an ``OpExtension`` class that works well if all the following conditions are satisfied:

1. Number of inputs to operation in the framework representation is the same as in the OpenVINO representation.

2. Number of outputs is also the same in both representations.

3. Inputs can be indexed and are mapped in order correspondingly, e.g. input with index 0 in framework representation maps to input with index 0 in OpenVINO representation and so on.

4. The same for outputs.

5. Each attribute in OpenVINO operation can be initialized from one of the attributes of the original operation or by some predefined constant value. Values of copied attributes cannot contain expressions; a value is accepted as-is, so the type of a value should be compatible.

.. note::
   The ``OpExtension`` class is currently available for ONNX and TensorFlow frontends. PaddlePaddle frontend has named inputs and outputs for operation (not indexed), therefore OpExtension mapping is not applicable for this case.
   You can find more examples of extensions in the `openvino_contrib repository <https://github.com/openvinotoolkit/openvino_contrib/tree/master/modules/custom_operations>`_.

The next example maps ONNX operation with type `Identity <https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity>`__ to OpenVINO template extension ``Identity`` class.

Single Operation Mapping with OpExtension
#########################################

This section covers the case when a single operation in framework representation is mapped to a single
operation in OpenVINO representation. This is called *one-to-one mapping*. There is an ``OpExtension``
class that works well if all the following conditions are satisfied:

1. Number of inputs to operation in the framework representation is the same as in the OpenVINO representation.
2. Number of outputs is also the same in both representations.
3. Inputs can be indexed and are mapped in order correspondingly, e.g.
   input with index 0 in framework representation maps to input with index 0 in OpenVINO representation and so on.
4. The same for outputs.
5. Each attribute in OpenVINO operation can be initialized from one of the attributes of the original operation or by
   some predefined constant value. Values of copied attributes cannot contain expressions; a value is accepted as-is,
   so the type of a value should be compatible.

.. note::

   The ``OpExtension`` class is currently available for ONNX and TensorFlow frontends.
   PaddlePaddle frontend has named inputs and outputs for operation (not indexed),
   therefore OpExtension mapping is not applicable for this case.

The following example maps ONNX operation with the type of `Identity <https://github.com/onnx/onnx/blob/main/docs/Operators.md#Identity>`__
to OpenVINO template extension ``Identity`` class.

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
@@ -39,25 +65,42 @@ The next example maps ONNX operation with type `Identity <https://github.com/onn

The mapping doesn't involve any attributes, as the operation Identity doesn't have them.

Extension objects, like the just constructed ``extension``, can be added to the OpenVINO runtime just before loading a model that contains custom operations:
Extension objects, like the just constructed ``extension``, can be added to the
OpenVINO runtime just before loading a model that contains custom operations:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_read_model]

Or extensions can be constructed in a separately compiled shared library. A separately compiled library can be used in Model Optimizer or ``benchmark_app``. Read about how to build and load such a library in the chapter "Create a library with extensions" in :doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>`.
Or extensions can be constructed in a separately compiled shared library.
A separately compiled library can be used in Model Optimizer or ``benchmark_app``.
Read about how to build and load such a library in the chapter "Create a library with extensions" in
:doc:`Introduction to OpenVINO Extension <openvino_docs_Extensibility_UG_Intro>`.

If an operation has multiple inputs and/or outputs, they will be mapped in order. The type of elements in input/output tensors should match the expected types in the surrounding operations. For example, if a custom operation produces the ``f32`` data type, the operation that consumes this output should also support ``f32``. Otherwise, model conversion fails with an error, as no automatic type conversion is performed.
If an operation has multiple inputs and/or outputs, they will be mapped in order.
The type of elements in input/output tensors should match the expected types in the surrounding operations.
For example, if a custom operation produces the ``f32`` data type, the operation that consumes this output
should also support ``f32``. Otherwise, model conversion fails with an error, as no automatic type conversion is performed.

Converting to Standard OpenVINO Operation
+++++++++++++++++++++++++++++++++++++++++

The ``OpExtension`` class can be used when mapping to one of the operations from the standard OpenVINO operation set is what you need and there is no class like ``TemplateExtension::Identity`` implemented.
The ``OpExtension`` class can be used when mapping to one of the operations from the standard OpenVINO
operation set is what you need and there is no class like ``TemplateExtension::Identity`` implemented.

Here is an example for a custom framework operation "MyRelu". Suppose it is mathematically equivalent to the standard ``Relu`` that exists in the OpenVINO operation set, but for some reason has the type name "MyRelu". In this case you can directly say that "MyRelu" -> ``Relu`` mapping should be used:
Here is an example of a custom framework operation 'MyRelu'. Assume it is mathematically equivalent
to the standard ``Relu`` that exists in the OpenVINO operation set, but for some reason has the type name of 'MyRelu'.
In this case, you can directly say that 'MyRelu' -> ``Relu`` mapping should be used:

.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [py_frontend_extension_MyRelu]

   .. tab-item:: C++
      :sync: cpp

@@ -65,34 +108,47 @@ Here is an example for a custom framework operation “MyRelu”. Suppose it is
         :language: cpp
         :fragment: [frontend_extension_MyRelu]

   .. tab-item:: Python
      :sync: python

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [py_frontend_extension_MyRelu]

In the resulting converted OpenVINO model, the "MyRelu" operation will be replaced by the standard operation
``Relu`` from the latest available OpenVINO operation set. Notice that when a standard operation is used,
it can be specified using just a type string ("Relu") instead of using the ``ov::opset8::Relu`` class name as a
template parameter for ``OpExtension``. This method is available for operations from the standard operation set only.
For a user custom OpenVINO operation, the corresponding class should always be specified as a template parameter,
as was demonstrated with ``TemplateExtension::Identity``.

In the resulting converted OpenVINO model, the "MyRelu" operation will be replaced by the standard operation ``Relu`` from the latest available OpenVINO operation set. Notice that when a standard operation is used, it can be specified using just a type string ("Relu") instead of using the ``ov::opset8::Relu`` class name as a template parameter for ``OpExtension``. This method is available for operations from the standard operation set only. For a user custom OpenVINO operation, the corresponding class should always be specified as a template parameter, as was demonstrated with ``TemplateExtension::Identity``.

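For reference, a minimal Python sketch of this mapping, assuming the ``OpExtension`` class exposed by ``openvino.frontend`` (the model file name is hypothetical):

.. code-block:: python

   from openvino.frontend import OpExtension
   from openvino.runtime import Core

   core = Core()
   # "Relu" is the OpenVINO operation type, "MyRelu" the framework type name.
   core.add_extension(OpExtension("Relu", "MyRelu"))
   model = core.read_model("model_with_myrelu.onnx")
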
Attributes Mapping
Attribute Mapping
++++++++++++++++++

As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant. If the set of attributes in framework representation and OpenVINO representation completely match by their names and types, nothing should be specified in OpExtension constructor parameters. The attributes are discovered and mapped automatically based on the ``visit_attributes`` method that should be defined for any OpenVINO operation.
As described above, ``OpExtension`` is useful when attributes can be mapped one by one or initialized by a constant.
Attributes in OpenVINO operators are identified by their names, so for frameworks that also have named attributes (like TensorFlow, PaddlePaddle, ONNX),
you can specify a name-to-name mapping. For frameworks where an OpenVINO operator's attributes can be mapped to one of the framework
operator inputs (like PyTorch), there is a name-to-input-index mapping.

Imagine you have a CustomOperation class implementation that has two attributes with names ``attr1`` and ``attr2``:

Named attributes mapping
^^^^^^^^^^^^^^^^^^^^^^^^

If the set of attributes in framework representation and OpenVINO representation completely match by their names and types,
no attribute mapping has to be specified in OpExtension constructor parameters. The attributes are discovered and mapped automatically
based on the ``visit_attributes`` method that should be defined for any OpenVINO operation.

Imagine you have a CustomOperation class implementation that has two attributes with names: ``attr1`` and ``attr2``.

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation]

And the original model in framework representation also has an operation with the name ``CustomOperation`` with the same ``attr1`` and ``attr2`` attributes. Then with the following code:
And the original model in framework representation also has an operation with the name ``CustomOperation`` with the same
``attr1`` and ``attr2`` attributes. Then with the following code:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation_as_is]

both ``attr1`` and ``attr2`` are copied from framework representation to OpenVINO representation automatically. If for some reason the names of attributes are different but values still can be copied "as-is", you can pass an attribute names mapping in the ``OpExtension`` constructor:
Both ``attr1`` and ``attr2`` are copied from framework representation to OpenVINO representation automatically.

If for some reason the names of attributes are different but values still can be copied "as-is", you can pass an attribute
names mapping in the ``OpExtension`` constructor:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
@@ -100,65 +156,200 @@ both ``attr1`` and ``attr2`` are copied from framework representation to OpenVIN

Where ``fw_attr1`` and ``fw_attr2`` are the names of the corresponding attributes in the framework operation representation.

If copying of an attribute is not what you need, ``OpExtension`` can also set an attribute to a predefined constant value. For the same ``CustomOperation``, imagine you want to set ``attr2`` to the value 5 instead of copying it from ``fw_attr2``; to achieve that, do the following:
If copying of an attribute is not what you need, ``OpExtension`` can also set an attribute to a predefined constant value.
For the same ``CustomOperation``, imagine you want to set ``attr2`` to the value 5 instead of copying it from ``fw_attr2``;
to achieve that, do the following:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation_rename_set]


So the conclusion is that each attribute of the target OpenVINO operation should be initialized in one of three ways:

1. Set automatically due to name matching

2. Mapped by attribute name

3. Set to a constant value

This is achieved by specifying maps as arguments for the `OpExtension` constructor.
This is achieved by specifying maps as arguments for the ``OpExtension`` constructor, as shown in the sketch below.

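A hedged Python sketch of those maps, assuming the Python ``OpExtension`` binding mirrors the C++ constructor and accepts ``attr_names_map`` and ``attr_values_map`` arguments (the parameter names are taken from the C++ API and may differ):

.. code-block:: python

   from openvino.frontend import OpExtension

   # Rename-only mapping: OpenVINO attribute name -> framework attribute name.
   core.add_extension(OpExtension(
       "CustomOperation",
       attr_names_map={"attr1": "fw_attr1", "attr2": "fw_attr2"}))

   # Rename one attribute and pin the other to a constant value.
   core.add_extension(OpExtension(
       "CustomOperation",
       attr_names_map={"attr1": "fw_attr1"},
       attr_values_map={"attr2": 5}))
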
### Mapping custom operations to frontends with OPENVINO_FRAMEWORK_MAP macro

> **NOTE**: Below solution works only for ONNX and Tensorflow frontends.

Attribute mapping with named inputs and outputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

`OPENVINO_FRAMEWORK_MAP` is a macro that should be used inside OpenVINO operation's class definition and that lets you specify the mapping between this operation and a frontend operation.

Mappings in the previous examples assume that inputs and outputs of an operator in framework model representation come
in a particular order, so you can directly map framework operation input ``0`` to OpenVINO operation input ``0`` and so on.
That's not always the case; for frameworks like PaddlePaddle, operation inputs and outputs are identified by their names
and may be defined in any order. So to map them to OpenVINO operation inputs and outputs, you have to specify that order yourself.
This can be done by creating two vectors of strings, one for inputs and one for outputs, where the framework operation
input name at position ``i`` maps to the OpenVINO operation input at position ``i`` (and similarly for outputs).

Let's consider the following example. Imagine you have an ONNX model with `CustomOp` operation (and this operation has `mode` attribute) and a Tensorflow model with `CustomOpV3` operation (this operation has `axis` attribute) and both of them can be implemented with a single OpenVINO operation `CustomOp` like follows:

@snippet ov_extensions.cpp frontend_extension_framework_map_macro_headers
@snippet ov_extensions.cpp frontend_extension_framework_map_macro_CustomOp

Let's see the following example. Like previously, we'd like to map ``CustomOperation`` in the original model
to OpenVINO ``CustomOperation`` as is (so their names and attribute names match). This time, the framework operation's
inputs and outputs are not strictly ordered and can be identified by their names ``A``, ``B``, ``C`` for inputs
and ``X``, ``Y`` for outputs. Those inputs and outputs can be mapped to the OpenVINO operation, such that inputs
``A``, ``B``, ``C`` map to the OpenVINO ``CustomOperation`` first, second and third input, and the ``X`` and ``Y``
outputs map to the OpenVINO ``CustomOperation`` first and second output respectively.

Let's take a closer look at the parameters this macro takes:
```cpp
OPENVINO_FRAMEWORK_MAP(framework, name, attributes_map, attributes_values)
```
- `framework` - framework name.
- `name` - the framework operation name. It's optional if the OpenVINO custom operation name (that is the name that is passed as the first parameter to `OPENVINO_OP` macro) is the same as the framework operation name and both `attributes_map` and `attributes_values` are not provided.
- `attributes_map` - used to provide a mapping between OpenVINO operation attribute and framework operation attribute. Contains key-value pairs, where key is an OpenVINO operation attribute name and value is its corresponding framework operation attribute name. This parameter is optional if the number of OpenVINO operation attributes and their names match one-to-one with framework operation attributes.
- `attributes_values` - used to provide default values for OpenVINO operation attributes that are not specified in `attributes_map`. Contains key-value pairs, where key is an OpenVINO operation attribute name and the value is this attribute value. This parameter cannot be provided if `attributes_map` contains all of OpenVINO operation attributes or if `attributes_map` is not provided.

Given that, such custom operation can be registered by the following:

In the example above, `OPENVINO_FRAMEWORK_MAP` is used twice.
First, OpenVINO `CustomOp` is mapped to ONNX `CustomOp` operation, `m_mode` attribute is mapped to `mode` attribute, while `m_axis` attribute gets the default value `-1`.
Secondly, OpenVINO `CustomOp` is mapped to Tensorflow `CustomOpV3` operation, `m_axis` attribute is mapped to `axis` attribute, while `m_mode` attribute gets the default value `"linear"`.

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation_as_is_paddle]


The second example shows how to map the operation with named inputs and outputs, but when the names of attributes are different:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation_rename_paddle]


And the last one shows how to map the operation with named inputs and outputs, but when (in order to correctly map the framework
operation to the OpenVINO operation) one of the attributes has to be set to a predefined value:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_CustomOperation_rename_set_paddle]

Mapping attributes from operation inputs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For models (like PyTorch models) where operations have attributes on the input list, you can specify a name-to-input-index mapping.
For example, imagine you have created a custom OpenVINO operation that implements a variant of the ELU activation function
with two attributes, ``alpha`` and ``beta``:

.. math::

   CustomElu(x)=\left\lbrace
   \begin{array}{ll}
   \beta \cdot x & \textrm{if } x > 0 \newline
   \alpha \cdot (\exp(x) - 1) & \textrm{otherwise}
   \end{array}
   \right.

Below is a snippet of the ``CustomElu`` class showing how to define its attributes:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_framework_map_CustomElu]

Let's see an example of how you can map ``CustomElu`` to PyTorch `aten::elu <https://pytorch.org/docs/stable/generated/torch.nn.functional.elu.html>`_
(note that if ``beta`` is equal to ``1``, ``CustomElu`` works the same as ``aten::elu``).
``aten::elu`` has the ``alpha`` attribute second on its input list, but it doesn't have ``beta``.
So in order to map it to ``CustomElu``, you can use the following:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_framework_map_CustomElu_mapping]

This will map ``alpha`` to the second input and map the ``beta`` attribute to the constant value ``1.0f``.

An extension created this way can be used, e.g., in a dynamic library; please refer to :ref:`Create a library with extensions <create_a_library_with_extensions>`.

Mapping custom operations to frontends with OPENVINO_FRAMEWORK_MAP macro
########################################################################

``OPENVINO_FRAMEWORK_MAP`` is a macro that should be used inside an OpenVINO operation's class definition and that lets you specify
the mapping between this operation and a frontend operation.

Let's consider the following example. Imagine you have an ONNX model with ``CustomOp`` operation (and this operation has ``mode`` attribute),
a TensorFlow model with ``CustomOpV3`` operation (this operation has ``axis`` attribute) and a PaddlePaddle model with ``CustomOp`` (with ``mode`` attribute)
that has input named "X" and output named "Out", and all of them can be implemented with a single OpenVINO operation ``CustomOp``, as follows:

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_framework_map_macro_headers]

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_framework_map_macro_CustomOp]

Let's take a closer look at the parameters this macro takes (note that there are two flavors - the second one is for mapping
PaddlePaddle operations, where input and output names have to be specified).

.. code-block:: cpp

   OPENVINO_FRAMEWORK_MAP(framework, name, attributes_map, attributes_values)
   OPENVINO_FRAMEWORK_MAP(framework, input_names, output_names, name, attributes_map, attributes_values)

- ``framework`` - framework name.
- ``name`` - the framework operation name. It's optional if the OpenVINO custom operation name
  (that is the name that is passed as the first parameter to the ``OPENVINO_OP`` macro) is the same
  as the framework operation name and both ``attributes_map`` and ``attributes_values`` are not provided.
- ``input_names`` - vector of strings that specify the names of inputs (needed to map PaddlePaddle to OpenVINO operations),
- ``output_names`` - vector of strings that specify the names of outputs (needed to map PaddlePaddle to OpenVINO operations),
- ``attributes_map`` - used to provide a mapping between an OpenVINO operation attribute and a
  framework operation attribute. Contains key-value pairs, where the key is an OpenVINO operation
  attribute name and the value is its corresponding framework operation attribute name.
  This parameter is optional if the number of OpenVINO operation attributes and their names
  match one-to-one with framework operation attributes.
- ``attributes_values`` - used to provide default values for OpenVINO operation attributes
  that are not specified in ``attributes_map``. Contains key-value pairs, where the key is an OpenVINO
  operation attribute name and the value is this attribute's value. This parameter cannot be provided
  if ``attributes_map`` contains all of the OpenVINO operation attributes or if ``attributes_map`` is not provided.

In the example above, ``OPENVINO_FRAMEWORK_MAP`` is used three times.
First, OpenVINO ``CustomOp`` is mapped to ONNX ``CustomOp`` operation, the ``m_mode`` attribute is mapped to the ``mode``
attribute, while the ``m_axis`` attribute gets the default value ``-1``. Secondly, OpenVINO ``CustomOp`` is mapped
to TensorFlow ``CustomOpV3`` operation, the ``m_axis`` attribute is mapped to the ``axis`` attribute, while the ``m_mode``
attribute gets the default value ``"linear"``. Thirdly, OpenVINO ``CustomOp`` is mapped to PaddlePaddle ``CustomOp`` operation,
the ``m_mode`` attribute is mapped to the ``mode`` attribute, while the ``m_axis`` attribute gets the default value ``-1``.
This mapping also specifies the input name "X" and output name "Out".

The last step is to register this custom operation as follows:
@snippet ov_extensions.cpp frontend_extension_framework_map_macro_add_extension

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_framework_map_macro_add_extension]

.. important::

   To map an operation on a specific framework, you have to link to the respective
   frontend (``openvino::frontend::onnx``, ``openvino::frontend::tensorflow``, ``openvino::frontend::paddle``) in the ``CMakeLists.txt`` file:

   .. code-block:: sh

      target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)

Mapping to Multiple Operations with ConversionExtension
#######################################################

Previous sections cover the case when a single operation is mapped to a single operation with optional adjustment in names and attribute values. That is likely enough for your own custom operation with an existing C++ kernel implementation. In this case your framework representation and OpenVINO representation for the operation are under your control and inputs/outputs/attributes can be aligned to make ``OpExtension`` usable.
Previous sections cover the case when a single operation is mapped to a single operation with optional
adjustment in names and attribute values. That is likely enough for your own custom operation with an existing
C++ kernel implementation. In this case your framework representation and OpenVINO representation for the
operation are under your control and inputs/outputs/attributes can be aligned to make ``OpExtension`` usable.

If one-to-one mapping is not possible, *decomposition to multiple operations* should be considered. It is achieved by using the more verbose and less automated ``ConversionExtension`` class. It enables writing arbitrary code to replace a single framework operation by multiple connected OpenVINO operations, constructing a dependency graph of any complexity.
If one-to-one mapping is not possible, *decomposition to multiple operations* should be considered.
It is achieved by using the more verbose and less automated ``ConversionExtension`` class.
It enables writing arbitrary code to replace a single framework operation by multiple connected OpenVINO
operations, constructing a dependency graph of any complexity.

``ConversionExtension`` maps a single operation to a function which builds a graph using OpenVINO operation classes. Follow the chapter :ref:`Build a Model in OpenVINO Runtime <ov_ug_build_model>` to learn how to use OpenVINO operation classes to build a fragment of model for replacement.
``ConversionExtension`` maps a single operation to a function which builds a graph using OpenVINO
operation classes. Follow the chapter :ref:`Build a Model in OpenVINO Runtime <ov_ug_build_model>` to
learn how to use OpenVINO operation classes to build a fragment of model for replacement.

The next example illustrates using ``ConversionExtension`` for conversion of "ThresholdedRelu" from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))``.
The example below illustrates using ``ConversionExtension`` for conversion of "ThresholdedRelu"
from ONNX according to the formula: ``ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), type=float))``.

.. note::
   ``ThresholdedRelu`` is one of the standard ONNX operators which is supported by the ONNX frontend natively out-of-the-box. Here we are re-implementing it to illustrate how you can add similar support for your custom operation instead of ``ThresholdedRelu``.

   ``ThresholdedRelu`` is one of the standard ONNX operators which is supported by the ONNX frontend
   natively out-of-the-box. Here we are re-implementing it to illustrate how you can add similar
   support for your custom operation instead of ``ThresholdedRelu``.

.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [py_frontend_extension_ThresholdedReLU_header]

   .. tab-item:: C++
      :sync: cpp

@@ -166,15 +357,14 @@ The next example illustrates using ``ConversionExtension`` for conversion of “
         :language: cpp
         :fragment: [frontend_extension_ThresholdedReLU_header]

   .. tab-item:: Python
      :sync: python
.. tab-set::

   .. tab-item:: Python
      :sync: py

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [py_frontend_extension_ThresholdedReLU_header]


.. tab-set::
         :fragment: [py_frontend_extension_ThresholdedReLU]

   .. tab-item:: C++
      :sync: cpp

@@ -183,21 +373,46 @@ The next example illustrates using ``ConversionExtension`` for conversion of “
         :language: cpp
         :fragment: [frontend_extension_ThresholdedReLU]

   .. tab-item:: Python
      :sync: python

      .. doxygensnippet:: docs/snippets/ov_extensions.py
         :language: python
         :fragment: [py_frontend_extension_ThresholdedReLU]

The next example shows how to use ``ConversionExtension`` to convert PyTorch
`aten::hardtanh <https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh.html>`_,
to demonstrate how to use the ``get_values_from_const_input`` function to fetch an attribute value from an input:


To access the original framework operation attribute value and connect to inputs, a ``node`` object of type ``NodeContext`` is used. It has two main methods:
.. doxygensnippet:: docs/snippets/ov_extensions.py
   :language: python
   :fragment: [py_frontend_extension_aten_hardtanh]


To access the original framework operation attribute value and connect to inputs, a ``node`` object of type ``NodeContext`` is used. It has three main methods:

* ``NodeContext::get_input`` to get an input with a given index,

* ``NodeContext::get_attribute`` to get an attribute value with a given name.
* ``NodeContext::get_attribute`` to get an attribute value with a given name,

The conversion function should return a vector of node outputs that are mapped to corresponding outputs of the original framework operation in the same order.
* ``NodeContext::get_values_from_const_input`` to get an attribute value from a constant input with a given index.

The conversion function should return a vector of node outputs that are mapped to
corresponding outputs of the original framework operation in the same order, as illustrated by the sketch below.

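The following is a minimal sketch of such a conversion function for the "ThresholdedRelu" decomposition above, assuming the Python frontend API (``ConversionExtension``, ``NodeContext``) and the ``opset8`` operation factory; treat it as an illustration rather than the exact snippet referenced above:

.. code-block:: python

   import numpy as np
   import openvino.runtime.opset8 as ops
   from openvino.frontend import ConversionExtension, NodeContext
   from openvino.runtime import Core

   def conversion(node: NodeContext):
       x = node.get_input(0)
       alpha = node.get_attribute("alpha")
       # ThresholdedRelu(x, alpha) -> Multiply(x, Convert(Greater(x, alpha), f32))
       mask = ops.greater(x, ops.constant(np.array([alpha], dtype=np.float32)))
       mask_f32 = ops.convert(mask, "f32")
       # Outputs are returned in the same order as the original operation's outputs.
       return ops.multiply(x, mask_f32).outputs()

   core = Core()
   core.add_extension(ConversionExtension("ThresholdedRelu", conversion))
   model = core.read_model("model_with_thresholded_relu.onnx")
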
Some frameworks require output names of the operation to be provided during conversion.
For PaddlePaddle operations, it is generally necessary to provide names for all outputs using the ``NamedOutputs`` container.
Usually those names can be found in the source code of the individual operation in the PaddlePaddle code base.
The next example shows such a conversion for the ``top_k_v2`` operation.

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_paddle_TopK]

For the TensorFlow framework, if an operation has more than one output, it is recommended to assign names to
those outputs using the ``NamedOutputVector`` structure, which allows both indexed and named output access.
For a description of TensorFlow operations, including the names of their outputs, refer to the
`tf.raw_ops <https://www.tensorflow.org/api_docs/python/tf/raw_ops/>`__ documentation page.
The next example shows such a conversion for the ``TopKV2`` operation.

.. doxygensnippet:: docs/snippets/ov_extensions.cpp
   :language: cpp
   :fragment: [frontend_extension_tf_TopK]

@endsphinxdirective


@@ -2,6 +2,11 @@

@sphinxdirective

.. meta::
   :description: Get to know how Graph Rewrite handles running multiple matcher passes on
                 ov::Model in a single graph traversal.


``:ref:`ov::pass::GraphRewrite <doxid-classov_1_1pass_1_1_graph_rewrite>``` serves for running multiple matcher passes on ``:ref:`ov::Model <doxid-classov_1_1_model>``` in a single graph traversal.
Example:

@@ -2,6 +2,11 @@

@sphinxdirective

.. meta::
   :description: Learn how to create a pattern, implement a callback, register
                 the pattern and Matcher to execute MatcherPass transformation
                 on a model.

``:ref:`ov::pass::MatcherPass <doxid-classov_1_1pass_1_1_matcher_pass>``` is used for pattern-based transformations.

Template for MatcherPass transformation class

@@ -2,6 +2,11 @@

@sphinxdirective

.. meta::
   :description: Learn how to use Model Pass transformation class to take entire
                 ov::Model as input and process it.


``:ref:`ov::pass::ModelPass <doxid-classov_1_1pass_1_1_model_pass>``` is used for transformations that take entire ``:ref:`ov::Model <doxid-classov_1_1_model>``` as an input and process it.

Template for ModelPass transformation class

@@ -2,6 +2,11 @@

@sphinxdirective

.. meta::
   :description: Learn how to apply additional model optimizations or transform
                 unsupported subgraphs and operations, using OpenVINO™ Transformations API.


.. toctree::
   :maxdepth: 1
   :hidden:

@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f7c8ab4f15874d235968471bcf876c89c795d601e69891208107b8b72aa58eb1
size 70014
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3d5ccf51fe1babb93d96d042494695a6a6e055d1f8ebf7eef5083d54d8987a23
size 58789
@@ -1,40 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [complex:transformation]

from openvino.tools.mo.front.common.replacement import FrontReplacementSubgraph
from openvino.tools.mo.graph.graph import Graph


class Complex(FrontReplacementSubgraph):
    enabled = True

    def pattern(self):
        return dict(
            nodes=[
                ('strided_slice_real', dict(op='StridedSlice')),
                ('strided_slice_imag', dict(op='StridedSlice')),
                ('complex', dict(op='Complex')),
            ],
            edges=[
                ('strided_slice_real', 'complex', {'in': 0}),
                ('strided_slice_imag', 'complex', {'in': 1}),
            ])

    @staticmethod
    def replace_sub_graph(graph: Graph, match: dict):
        strided_slice_real = match['strided_slice_real']
        strided_slice_imag = match['strided_slice_imag']
        complex_node = match['complex']

        # make sure that both strided slice operations get the same data as input
        assert strided_slice_real.in_port(0).get_source() == strided_slice_imag.in_port(0).get_source()

        # identify the output port of the operation producing data for the strided slice nodes
        input_node_output_port = strided_slice_real.in_port(0).get_source()
        input_node_output_port.disconnect()

        # change the connection so now all consumers of "complex_node" get data from the input node of the strided slice nodes
        complex_node.out_port(0).get_connection().set_source(input_node_output_port)
#! [complex:transformation]
@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [complex_abs:transformation]
import numpy as np

from openvino.tools.mo.ops.elementwise import Pow
from openvino.tools.mo.ops.ReduceOps import ReduceSum
from openvino.tools.mo.front.common.replacement import FrontReplacementOp
from openvino.tools.mo.graph.graph import Graph, Node
from openvino.tools.mo.ops.const import Const


class ComplexAbs(FrontReplacementOp):
    op = "ComplexAbs"
    enabled = True

    def replace_op(self, graph: Graph, node: Node):
        pow_2 = Const(graph, {'value': np.float32(2.0)}).create_node()
        reduce_axis = Const(graph, {'value': np.int32(-1)}).create_node()
        pow_0_5 = Const(graph, {'value': np.float32(0.5)}).create_node()

        sq = Pow(graph, dict(name=node.in_node(0).name + '/sq', power=2.0)).create_node([node.in_node(0), pow_2])
        sum = ReduceSum(graph, dict(name=sq.name + '/sum')).create_node([sq, reduce_axis])
        sqrt = Pow(graph, dict(name=sum.name + '/sqrt', power=0.5)).create_node([sum, pow_0_5])
        return [sqrt.id]
#! [complex_abs:transformation]
@@ -1,33 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

# ! [fft_ext:extractor]
from ...ops.FFT import FFT
from openvino.tools.mo.front.extractor import FrontExtractorOp


class FFT2DFrontExtractor(FrontExtractorOp):
    op = 'FFT2D'
    enabled = True

    @classmethod
    def extract(cls, node):
        attrs = {
            'inverse': 0
        }
        FFT.update_node_stat(node, attrs)
        return cls.enabled


class IFFT2DFrontExtractor(FrontExtractorOp):
    op = 'IFFT2D'
    enabled = True

    @classmethod
    def extract(cls, node):
        attrs = {
            'inverse': 1
        }
        FFT.update_node_stat(node, attrs)
        return cls.enabled
# ! [fft_ext:extractor]
@@ -1,27 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [fft:operation]
from openvino.tools.mo.front.common.partial_infer.elemental import copy_shape_infer
from openvino.tools.mo.graph.graph import Graph
from openvino.tools.mo.ops.op import Op


class FFT(Op):
    op = 'FFT'
    enabled = False

    def __init__(self, graph: Graph, attrs: dict):
        super().__init__(graph, {
            'type': self.op,
            'op': self.op,
            'version': 'custom_opset',
            'inverse': None,
            'in_ports_count': 1,
            'out_ports_count': 1,
            'infer': copy_shape_infer
        }, attrs)

    def backend_attrs(self):
        return ['inverse']
#! [fft:operation]
@@ -1,106 +0,0 @@
# Copyright (C) 2018-2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

#! [mri_demo:demo]
import numpy as np
import cv2 as cv
import argparse
import time
from openvino.inference_engine import IECore


def kspace_to_image(kspace):
    assert(len(kspace.shape) == 3 and kspace.shape[-1] == 2)
    fft = cv.idft(kspace, flags=cv.DFT_SCALE)
    img = cv.magnitude(fft[:,:,0], fft[:,:,1])
    return cv.normalize(img, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='MRI reconstruction demo for network from https://github.com/rmsouza01/Hybrid-CS-Model-MRI (https://arxiv.org/abs/1810.12473)')
    parser.add_argument('-i', '--input', dest='input', help='Path to input .npy file with MRI scan data.')
    parser.add_argument('-p', '--pattern', dest='pattern', help='Path to sampling mask in .npy format.')
    parser.add_argument('-m', '--model', dest='model', help='Path to .xml file of OpenVINO IR.')
    parser.add_argument('-l', '--cpu_extension', dest='cpu_extension', help='Path to extensions library with FFT implementation.')
    parser.add_argument('-d', '--device', dest='device', default='CPU',
                        help='Optional. Specify the target device to infer on; CPU, '
                             'GPU, GNA is acceptable. For non-CPU targets, '
                             'HETERO plugin is used with CPU fallbacks to FFT implementation. '
                             'Default value is CPU')
    args = parser.parse_args()

    xml_path = args.model
    assert(xml_path.endswith('.xml'))
    bin_path = xml_path[:xml_path.rfind('.xml')] + '.bin'

    ie = IECore()
    ie.add_extension(args.cpu_extension, "CPU")

    net = ie.read_network(xml_path, bin_path)

    device = 'CPU' if args.device == 'CPU' else ('HETERO:' + args.device + ',CPU')
    exec_net = ie.load_network(net, device)

    # Hybrid-CS-Model-MRI/Data/stats_fs_unet_norm_20.npy
    stats = np.array([2.20295299e-01, 1.11048916e+03, 4.16997984e+00, 4.71741395e+00], dtype=np.float32)
    # Hybrid-CS-Model-MRI/Data/sampling_mask_20perc.npy
    var_sampling_mask = np.load(args.pattern)  # TODO: can we generate it in runtime?
    print('Sampling ratio:', 1.0 - var_sampling_mask.sum() / var_sampling_mask.size)

    data = np.load(args.input)
    num_slices, height, width = data.shape[0], data.shape[1], data.shape[2]
    pred = np.zeros((num_slices, height, width), dtype=np.uint8)
    data /= np.sqrt(height * width)

    print('Compute...')
    start = time.time()
    for slice_id, kspace in enumerate(data):
        kspace = kspace.copy()

        # Apply sampling
        kspace[var_sampling_mask] = 0
        kspace = (kspace - stats[0]) / stats[1]

        # Forward through network
        input = np.expand_dims(kspace.transpose(2, 0, 1), axis=0)
        outputs = exec_net.infer(inputs={'input_1': input})
        output = next(iter(outputs.values()))
        output = output.reshape(height, width)

        # Save predictions
        pred[slice_id] = cv.normalize(output, dst=None, alpha=255, beta=0, norm_type=cv.NORM_MINMAX, dtype=cv.CV_8U)

    print('Elapsed time: %.1f seconds' % (time.time() - start))

    WIN_NAME = 'MRI reconstruction with OpenVINO'

    slice_id = 0
    def callback(pos):
        global slice_id
        slice_id = pos

        kspace = data[slice_id]
        img = kspace_to_image(kspace)

        kspace[var_sampling_mask] = 0
        masked = kspace_to_image(kspace)

        rec = pred[slice_id]

        # Add a header
        border_size = 20
        render = cv.hconcat((img, masked, rec))
        render = cv.copyMakeBorder(render, border_size, 0, 0, 0, cv.BORDER_CONSTANT, value=255)
        cv.putText(render, 'Original', (0, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
        cv.putText(render, 'Sampled (PSNR %.1f)' % cv.PSNR(img, masked), (width, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)
        cv.putText(render, 'Reconstructed (PSNR %.1f)' % cv.PSNR(img, rec), (width*2, 15), cv.FONT_HERSHEY_SIMPLEX, 0.5, color=0)

        cv.imshow(WIN_NAME, render)
        cv.waitKey(1)

    cv.namedWindow(WIN_NAME, cv.WINDOW_NORMAL)
    print(num_slices)
    cv.createTrackbar('Slice', WIN_NAME, num_slices // 2, num_slices - 1, callback)
    callback(num_slices // 2)  # Trigger initial visualization
    cv.waitKey()
#! [mri_demo:demo]
@@ -1,45 +1,77 @@
# Asynchronous Inference Request {#openvino_docs_ov_plugin_dg_async_infer_request}

@sphinxdirective

.. meta::
   :description: Use the base ov::IAsyncInferRequest class to implement a custom asynchronous inference request in OpenVINO.

Asynchronous Inference Request runs an inference pipeline asynchronously in one or several task executors depending on a device pipeline structure.
OpenVINO Runtime Plugin API provides the base ov::IAsyncInferRequest class:

- The class has the `m_pipeline` field of `std::vector<std::pair<std::shared_ptr<ov::threading::ITaskExecutor>, ov::threading::Task> >`, which contains pairs of an executor and executed task.
- All executors are passed as arguments to a class constructor and they are in the running state and ready to run tasks.
- The class has the ov::IAsyncInferRequest::stop_and_wait method, which waits for `m_pipeline` to finish in a class destructor. The method does not stop task executors and they are still in the running stage, because they belong to the compiled model instance and are not destroyed.
* The class has the ``m_pipeline`` field of ``std::vector<std::pair<std::shared_ptr<ov::threading::ITaskExecutor>, ov::threading::Task> >``, which contains pairs of an executor and executed task.
* All executors are passed as arguments to a class constructor and they are in the running state and ready to run tasks.
* The class has the ov::IAsyncInferRequest::stop_and_wait method, which waits for ``m_pipeline`` to finish in a class destructor. The method does not stop task executors and they are still in the running stage, because they belong to the compiled model instance and are not destroyed.
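
To make the pipeline structure concrete, the sketch below shows how such a vector of executor/task pairs could be composed; the stage lambdas and the executor members other than ``m_pipeline`` are illustrative, not the Template plugin code:

.. code-block:: cpp

   // each pair binds a pipeline stage to the executor that runs it
   m_pipeline = {
       {m_request_executor, [this] { /* preprocess inputs, submit work to the device */ }},
       {m_wait_executor,    [this] { /* block until the device reports completion */ }},
       {m_request_executor, [this] { /* postprocess outputs on the CPU */ }}
   };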

AsyncInferRequest Class
------------------------
#######################

OpenVINO Runtime Plugin API provides the base ov::IAsyncInferRequest class for a custom asynchronous inference request implementation:

@snippet src/async_infer_request.hpp async_infer_request:header
.. doxygensnippet:: src/plugins/template/src/async_infer_request.hpp
   :language: cpp
   :fragment: [async_infer_request:header]

### Class Fields
Class Fields
++++++++++++

- `m_wait_executor` - a task executor that waits for a response from a device about device tasks completion
* ``m_cancel_callback`` - a callback which allows to interrupt the execution
* ``m_wait_executor`` - a task executor that waits for a response from a device about device tasks completion

> **NOTE**: If a plugin can work with several instances of a device, `m_wait_executor` must be device-specific. Otherwise, having a single task executor for several devices does not allow them to work in parallel.
.. note::

   If a plugin can work with several instances of a device, ``m_wait_executor`` must be device-specific. Otherwise, having a single task executor for several devices does not allow them to work in parallel.

### AsyncInferRequest()
AsyncInferRequest()
+++++++++++++++++++

The main goal of the `AsyncInferRequest` constructor is to define a device pipeline `m_pipeline`. The example below demonstrates `m_pipeline` creation with the following stages:
The main goal of the ``AsyncInferRequest`` constructor is to define a device pipeline ``m_pipeline``. The example below demonstrates ``m_pipeline`` creation with the following stages:

- `infer_preprocess_and_start_pipeline` is a CPU lightweight task to submit tasks to a remote device.
- `wait_pipeline` is a CPU non-compute task that waits for a response from a remote device.
- `infer_postprocess` is a CPU compute task.
* ``infer_preprocess_and_start_pipeline`` is a CPU lightweight task to submit tasks to a remote device.
* ``wait_pipeline`` is a CPU non-compute task that waits for a response from a remote device.
* ``infer_postprocess`` is a CPU compute task.

.. doxygensnippet:: src/plugins/template/src/async_infer_request.cpp
   :language: cpp
   :fragment: [async_infer_request:ctor]

@snippet src/async_infer_request.cpp async_infer_request:ctor

The stages are distributed among two task executors in the following way:

- `infer_preprocess_and_start_pipeline` prepares input tensors and runs on `m_request_executor`, which computes CPU tasks.
- You need at least two executors to overlap compute tasks of a CPU and a remote device the plugin works with. Otherwise, CPU and device tasks are executed serially one by one.
- `wait_pipeline` is sent to `m_wait_executor`, which works with the device.
* ``infer_preprocess_and_start_pipeline`` prepares input tensors and runs on ``m_request_executor``, which computes CPU tasks.
* You need at least two executors to overlap compute tasks of a CPU and a remote device the plugin works with. Otherwise, CPU and device tasks are executed serially one by one.
* ``wait_pipeline`` is sent to ``m_wait_executor``, which works with the device.

> **NOTE**: `m_callback_executor` is also passed to the constructor and it is used in the base ov::IAsyncInferRequest class, which adds a pair of `callback_executor` and a callback function set by the user to the end of the pipeline.
.. note::

   ``m_callback_executor`` is also passed to the constructor and it is used in the base ov::IAsyncInferRequest class, which adds a pair of ``callback_executor`` and a callback function set by the user to the end of the pipeline.

### ~AsyncInferRequest()
~AsyncInferRequest()
++++++++++++++++++++

In the asynchronous request destructor, it is necessary to wait for a pipeline to finish. It can be done using the ov::IAsyncInferRequest::stop_and_wait method of the base class.

@snippet src/async_infer_request.cpp async_infer_request:dtor
.. doxygensnippet:: src/plugins/template/src/async_infer_request.cpp
   :language: cpp
   :fragment: [async_infer_request:dtor]

cancel()
++++++++

The method allows to cancel the infer request execution:

.. doxygensnippet:: src/plugins/template/src/async_infer_request.cpp
   :language: cpp
   :fragment: [async_infer_request:cancel]

@endsphinxdirective

@@ -1,69 +1,105 @@
# Build Plugin Using CMake {#openvino_docs_ov_plugin_dg_plugin_build}

@sphinxdirective

.. meta::
   :description: Learn how to build a plugin using CMake and OpenVINO Developer Package.


OpenVINO build infrastructure provides the OpenVINO Developer Package for plugin development.

OpenVINO Developer Package
------------------------
##########################

To automatically generate the OpenVINO Developer Package, run the `cmake` tool during an OpenVINO build:
To automatically generate the OpenVINO Developer Package, run the ``cmake`` tool during an OpenVINO build:

```bash
$ mkdir openvino-release-build
$ cd openvino-release-build
$ cmake -DCMAKE_BUILD_TYPE=Release ../openvino
```
.. code-block:: sh

Once the commands above are executed, the OpenVINO Developer Package is generated in the `openvino-release-build` folder. It consists of several files:
- `OpenVINODeveloperPackageConfig.cmake` - the main CMake script which imports targets and provides compilation flags and CMake options.
- `OpenVINODeveloperPackageConfig-version.cmake` - a file with a package version.
- `targets_developer.cmake` - an automatically generated file which contains all targets exported from the OpenVINO build tree. This file is included by `OpenVINODeveloperPackageConfig.cmake` to import the following targets:
  - Libraries for plugin development:
    * `openvino::runtime` - shared OpenVINO library
    * `openvino::runtime::dev` - interface library with OpenVINO Developer API
    * `openvino::pugixml` - static Pugixml library
    * `openvino::xbyak` - interface library with Xbyak headers
    * `openvino::itt` - static library with tools for performance measurement using Intel ITT
  - Libraries for tests development:
    * `openvino::gtest`, `openvino::gtest_main`, `openvino::gmock` - Google Tests framework libraries
    * `openvino::commonTestUtils` - static library with common tests utilities
    * `openvino::funcTestUtils` - static library with functional tests utilities
    * `openvino::unitTestUtils` - static library with unit tests utilities
    * `openvino::ngraphFunctions` - static library with the set of `ov::Model` builders
    * `openvino::funcSharedTests` - static library with common functional tests
    * `openvino::ngraph_reference` - static library with operation reference implementations.
   $ mkdir openvino-release-build
   $ cd openvino-release-build
   $ cmake -DCMAKE_BUILD_TYPE=Release ../openvino

> **NOTE**: it's enough just to run `cmake --build . --target ov_dev_targets` command to build only targets from the
> OpenVINO Developer package.
Once the commands above are executed, the OpenVINO Developer Package is generated in the ``openvino-release-build`` folder. It consists of several files:

* ``OpenVINODeveloperPackageConfig.cmake`` - the main CMake script which imports targets and provides compilation flags and CMake options.
* ``OpenVINODeveloperPackageConfig-version.cmake`` - a file with a package version.
* ``targets_developer.cmake`` - an automatically generated file which contains all targets exported from the OpenVINO build tree. This file is included by ``OpenVINODeveloperPackageConfig.cmake`` to import the following targets:

  * Libraries for plugin development:

    * ``openvino::runtime`` - shared OpenVINO library
    * ``openvino::runtime::dev`` - interface library with OpenVINO Developer API
    * ``openvino::pugixml`` - static Pugixml library
    * ``openvino::xbyak`` - interface library with Xbyak headers
    * ``openvino::itt`` - static library with tools for performance measurement using Intel ITT

  * Libraries for tests development:

    * ``openvino::gtest``, ``openvino::gtest_main``, ``openvino::gmock`` - Google Tests framework libraries
    * ``openvino::commonTestUtils`` - static library with common tests utilities
    * ``openvino::funcTestUtils`` - static library with functional tests utilities
    * ``openvino::unitTestUtils`` - static library with unit tests utilities
    * ``openvino::ngraphFunctions`` - static library with the set of ``ov::Model`` builders
    * ``openvino::funcSharedTests`` - static library with common functional tests
    * ``openvino::ngraph_reference`` - static library with operation reference implementations.

.. note::

   It's enough just to run ``cmake --build . --target ov_dev_targets`` command to build only targets from the OpenVINO Developer package.

Build Plugin using OpenVINO Developer Package
------------------------
#############################################

To build a plugin source tree using the OpenVINO Developer Package, run the commands below:

```cmake
$ mkdir template-plugin-release-build
$ cd template-plugin-release-build
$ cmake -DOpenVINODeveloperPackage_DIR=../openvino-release-build ../template-plugin
```
.. code-block:: sh

   $ mkdir template-plugin-release-build
   $ cd template-plugin-release-build
   $ cmake -DOpenVINODeveloperPackage_DIR=../openvino-release-build ../template-plugin


A common plugin consists of the following components:

1. Plugin code in the `src` folder
2. Code of tests in the `tests` folder
1. Plugin code in the ``src`` folder
2. Code of tests in the ``tests`` folder

To build a plugin and its tests, run the following CMake scripts:

- Root `CMakeLists.txt`, which finds the OpenVINO Developer Package using the `find_package` CMake command and adds the `src` and `tests` subdirectories with plugin sources and their tests respectively:
@snippet template/CMakeLists.txt cmake:main
> **NOTE**: The default values of the `ENABLE_TESTS`, `ENABLE_FUNCTIONAL_TESTS` options are shared via the OpenVINO Developer Package and they are the same as for the main OpenVINO build tree. You can override them during plugin build using the command below:
```bash
$ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DOpenVINODeveloperPackage_DIR=../openvino-release-build ../template-plugin
```
- Root ``CMakeLists.txt``, which finds the OpenVINO Developer Package using the ``find_package`` CMake command and adds the ``src`` and ``tests`` subdirectories with plugin sources and their tests respectively:

- `src/CMakeLists.txt` to build a plugin shared library from sources:
@snippet template/src/CMakeLists.txt cmake:plugin
> **NOTE**: `openvino::...` targets are imported from the OpenVINO Developer Package.
.. doxygensnippet:: src/plugins/template/CMakeLists.txt
   :language: cpp
   :fragment: [cmake:main]

.. note::

   The default values of the ``ENABLE_TESTS``, ``ENABLE_FUNCTIONAL_TESTS`` options are shared via the OpenVINO Developer Package and they are the same as for the main OpenVINO build tree. You can override them during plugin build using the command below:

   .. code-block:: sh

      $ cmake -DENABLE_FUNCTIONAL_TESTS=OFF -DOpenVINODeveloperPackage_DIR=../openvino-release-build ../template-plugin


* ``src/CMakeLists.txt`` to build a plugin shared library from sources:

  .. doxygensnippet:: src/plugins/template/src/CMakeLists.txt
     :language: cpp
     :fragment: [cmake:plugin]

  .. note::

     ``openvino::...`` targets are imported from the OpenVINO Developer Package.

* ``tests/functional/CMakeLists.txt`` to build a set of functional plugin tests:

  .. doxygensnippet:: src/plugins/template/tests/functional/CMakeLists.txt
     :language: cpp
     :fragment: [cmake:functional_tests]

  .. note::

     The ``openvino::funcSharedTests`` static library with common functional OpenVINO Plugin tests is imported via the OpenVINO Developer Package.

@endsphinxdirective

- `tests/functional/CMakeLists.txt` to build a set of functional plugin tests:
@snippet template/tests/functional/CMakeLists.txt cmake:functional_tests
> **NOTE**: The `openvino::funcSharedTests` static library with common functional OpenVINO Plugin tests is imported via the OpenVINO Developer Package.

@@ -1,89 +1,135 @@
# Compiled Model {#openvino_docs_ov_plugin_dg_compiled_model}

@sphinxdirective

.. meta::
   :description: Use the ov::CompiledModel class as the base class for a compiled
                 model and to create an arbitrary number of ov::InferRequest objects.

ov::CompiledModel class functionality:
- Compile an ov::Model instance to a backend specific graph representation
- Create an arbitrary number of ov::InferRequest objects
- Hold some common resources shared between different instances of ov::InferRequest. For example:
  - ov::ICompiledModel::m_task_executor task executor to implement asynchronous execution
  - ov::ICompiledModel::m_callback_executor task executor to run an asynchronous inference request callback in a separate thread

* Compile an ov::Model instance to a backend specific graph representation
* Create an arbitrary number of ov::InferRequest objects
* Hold some common resources shared between different instances of ov::InferRequest. For example:

  * ov::ICompiledModel::m_task_executor task executor to implement asynchronous execution
  * ov::ICompiledModel::m_callback_executor task executor to run an asynchronous inference request callback in a separate thread

CompiledModel Class
------------------------
###################

OpenVINO Plugin API provides the interface ov::ICompiledModel which should be used as a base class for a compiled model. Based on that, a declaration of a compiled model class can look as follows:

@snippet src/compiled_model.hpp compiled_model:header
.. doxygensnippet:: src/plugins/template/src/compiled_model.hpp
   :language: cpp
   :fragment: [compiled_model:header]

### Class Fields

Class Fields
++++++++++++

The example class has several fields:

- `m_request_id` - Tracks a number of created inference requests, which is used to distinguish different inference requests during profiling via the Intel® Instrumentation and Tracing Technology (ITT) library.
- `m_cfg` - Defines a configuration a compiled model was compiled with.
- `m_model` - Keeps a reference to transformed `ov::Model` which is used in OpenVINO reference backend computations. Note, in case of other backends with backend specific graph representation `m_model` has different type and represents backend specific graph or just a set of computational kernels to perform an inference.
- `m_loaded_from_cache` - Allows to understand that model was loaded from cache.
* ``m_request_id`` - Tracks a number of created inference requests, which is used to distinguish different inference requests during profiling via the Intel® Instrumentation and Tracing Technology (ITT) library.
* ``m_cfg`` - Defines a configuration a compiled model was compiled with.
* ``m_model`` - Keeps a reference to the transformed ``ov::Model`` which is used in OpenVINO reference backend computations. Note that in case of other backends with a backend specific graph representation, ``m_model`` has a different type and represents the backend specific graph or just a set of computational kernels to perform an inference.
* ``m_loaded_from_cache`` - Allows to understand that the model was loaded from cache.

### CompiledModel Constructor
CompiledModel Constructor
+++++++++++++++++++++++++

This constructor accepts a generic representation of a model as an ov::Model, which is compiled into a backend specific device graph:

@snippet src/compiled_model.cpp compiled_model:ctor
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
   :language: cpp
   :fragment: [compiled_model:ctor]

The implementation `compile_model()` is fully device-specific.
The implementation ``compile_model()`` is fully device-specific.

### compile_model()
compile_model()
+++++++++++++++

The function accepts a const shared pointer to `ov::Model` object and applies OpenVINO passes using `transform_model()` function, which defines the plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precision Transformations in the [Low Precision Transformations](@ref openvino_docs_OV_UG_lpt) guide.
The function accepts a const shared pointer to ``ov::Model`` object and applies OpenVINO passes using ``transform_model()`` function, which defines the plugin-specific conversion pipeline. To support low precision inference, the pipeline can include Low Precision Transformations. These transformations are usually hardware specific. You can find how to use and configure Low Precision Transformations in the :doc:`Low Precision Transformations <openvino_docs_OV_UG_lpt>` guide.

@snippet src/compiled_model.cpp compiled_model:compile_model
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
   :language: cpp
   :fragment: [compiled_model:compile_model]

> **NOTE**: After all these steps, the backend specific graph is ready to create inference requests and perform inference.

### export_model()
.. note::

   After all these steps, the backend specific graph is ready to create inference requests and perform inference.

The implementation of the method should write all data to the `model_stream`, which is required to import a backend specific graph later in the `Plugin::import_model` method:
export_model()
++++++++++++++

@snippet src/compiled_model.cpp compiled_model:export_model
The implementation of the method should write all data to the ``model_stream``, which is required to import a backend specific graph later in the ``Plugin::import_model`` method:

### create_sync_infer_request()
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
   :language: cpp
   :fragment: [compiled_model:export_model]

create_sync_infer_request()
+++++++++++++++++++++++++++

The method creates a synchronous inference request and returns it.

@snippet src/compiled_model.cpp compiled_model:create_sync_infer_request
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
   :language: cpp
   :fragment: [compiled_model:create_sync_infer_request]

While the public OpenVINO API has a single interface for inference request, which can be executed in synchronous and asynchronous modes, a plugin library implementation has two separate classes:

- [Synchronous inference request](@ref openvino_docs_ov_plugin_dg_infer_request), which defines pipeline stages and runs them synchronously in the `infer` method.
- [Asynchronous inference request](@ref openvino_docs_ov_plugin_dg_async_infer_request), which is a wrapper for a synchronous inference request and can run a pipeline asynchronously. Depending on a device pipeline structure, it can have one or several stages:
  - For single-stage pipelines, there is no need to define this method and create a class derived from ov::IAsyncInferRequest. For single stage pipelines, a default implementation of this method creates ov::IAsyncInferRequest wrapping a synchronous inference request and runs it asynchronously in the `m_request_executor` executor.
  - For pipelines with multiple stages, such as performing some preprocessing on host, uploading input data to a device, running inference on a device, or downloading and postprocessing output data, schedule stages on several task executors to achieve better device use and performance. You can do it by creating a sufficient number of inference requests running in parallel. In this case, device stages of different inference requests are overlapped with preprocessing and postprocessing stages, giving better performance.
> **IMPORTANT**: It is up to you to decide how many task executors you need to optimally execute a device pipeline.
* :doc:`Synchronous inference request <openvino_docs_ov_plugin_dg_infer_request>`, which defines pipeline stages and runs them synchronously in the ``infer`` method.

* :doc:`Asynchronous inference request <openvino_docs_ov_plugin_dg_async_infer_request>`, which is a wrapper for a synchronous inference request and can run a pipeline asynchronously. Depending on a device pipeline structure, it can have one or several stages:

  * For single-stage pipelines, there is no need to define this method and create a class derived from ov::IAsyncInferRequest. For single stage pipelines, a default implementation of this method creates ov::IAsyncInferRequest wrapping a synchronous inference request and runs it asynchronously in the ``m_request_executor`` executor.
  * For pipelines with multiple stages, such as performing some preprocessing on host, uploading input data to a device, running inference on a device, or downloading and postprocessing output data, schedule stages on several task executors to achieve better device use and performance. You can do it by creating a sufficient number of inference requests running in parallel. In this case, device stages of different inference requests are overlapped with preprocessing and postprocessing stages, giving better performance.

.. important::

   It is up to you to decide how many task executors you need to optimally execute a device pipeline.


### create_infer_request()
create_infer_request()
++++++++++++++++++++++

The method creates an asynchronous inference request and returns it.

@snippet src/compiled_model.cpp compiled_model:create_infer_request
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
   :language: cpp
   :fragment: [compiled_model:create_infer_request]

### get_property()
get_property()
++++++++++++++

Returns a current value for a property with the name `name`. The method extracts configuration values a compiled model is compiled with.
Returns a current value for a property with the name ``name``. The method extracts configuration values a compiled model is compiled with.

@snippet src/compiled_model.cpp compiled_model:get_property
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
   :language: cpp
   :fragment: [compiled_model:get_property]

This function is the only way to get configuration values when a model is imported and compiled by other developers and tools.

### set_property()
set_property()
++++++++++++++

The method allows to set compiled model specific properties.

@snippet src/compiled_model.cpp compiled_model:set_property
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
   :language: cpp
   :fragment: [compiled_model:set_property]

### get_runtime_model()
get_runtime_model()
+++++++++++++++++++

The method returns the runtime model with backend specific information.

@snippet src/compiled_model.cpp compiled_model:get_runtime_model
.. doxygensnippet:: src/plugins/template/src/compiled_model.cpp
   :language: cpp
   :fragment: [compiled_model:get_runtime_model]

The next step in plugin library implementation is the [Synchronous Inference Request](@ref openvino_docs_ov_plugin_dg_infer_request) class.
The next step in plugin library implementation is the :doc:`Synchronous Inference Request <openvino_docs_ov_plugin_dg_infer_request>` class.

@endsphinxdirective

@@ -1,92 +1,149 @@
# Synchronous Inference Request {#openvino_docs_ov_plugin_dg_infer_request}

`InferRequest` class functionality:
- Allocate input and output tensors needed for a backend-dependent network inference.
- Define functions for inference process stages (for example, `preprocess`, `upload`, `infer`, `download`, `postprocess`). These functions can later be used to define an execution pipeline during [Asynchronous Inference Request](@ref openvino_docs_ov_plugin_dg_async_infer_request) implementation.
- Call inference stages one by one synchronously.
@sphinxdirective

.. meta::
   :description: Use the ov::ISyncInferRequest interface as the base class to implement a synchronous inference request in OpenVINO.


``InferRequest`` class functionality:

* Allocate input and output tensors needed for a backend-dependent network inference.
* Define functions for inference process stages (for example, ``preprocess``, ``upload``, ``infer``, ``download``, ``postprocess``). These functions can later be used to define an execution pipeline during :doc:`Asynchronous Inference Request <openvino_docs_ov_plugin_dg_async_infer_request>` implementation.
* Call inference stages one by one synchronously.

InferRequest Class
------------------------
##################

OpenVINO Plugin API provides the interface ov::ISyncInferRequest which should be
used as a base class for a synchronous inference request implementation. Based on that, a declaration
of a synchronous request class can look as follows:

@snippet src/sync_infer_request.hpp infer_request:header
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.hpp
   :language: cpp
   :fragment: [infer_request:header]

### Class Fields
Class Fields
++++++++++++

The example class has several fields:

- `m_profiling_task` - array of the `std::array<openvino::itt::handle_t, numOfStages>` type. Defines names for pipeline stages. Used to profile an inference pipeline execution with the Intel® instrumentation and tracing technology (ITT).
- `m_durations` - array of durations of each pipeline stage.
- backend specific fields:
  - `m_backend_input_tensors` - input backend tensors.
  - `m_backend_output_tensors` - output backend tensors.
  - `m_executable` - an executable object / backend computational graph.
  - `m_eval_context` - an evaluation context to save backend states after the inference.
  - `m_variable_states` - a vector of variable states.
* ``m_profiling_task`` - array of the ``std::array<openvino::itt::handle_t, numOfStages>`` type. Defines names for pipeline stages. Used to profile an inference pipeline execution with the Intel® instrumentation and tracing technology (ITT).

### InferRequest Constructor
* ``m_durations`` - array of durations of each pipeline stage.

* backend-specific fields:

  * ``m_backend_input_tensors`` - input backend tensors.
  * ``m_backend_output_tensors`` - output backend tensors.
  * ``m_executable`` - an executable object / backend computational graph.
  * ``m_eval_context`` - an evaluation context to save backend states after the inference.
  * ``m_variable_states`` - a vector of variable states.

InferRequest Constructor
++++++++++++++++++++++++

The constructor initializes helper fields and calls methods which allocate tensors:

@snippet src/sync_infer_request.cpp infer_request:ctor
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:ctor]

> **NOTE**: Use inputs/outputs information from the compiled model to understand shape and element type of tensors, which you can set with ov::InferRequest::set_tensor and get with ov::InferRequest::get_tensor. A plugin uses these hints to determine its internal layouts and element types for input and output tensors if needed.
.. note::

### ~InferRequest Destructor
   Use inputs/outputs information from the compiled model to understand shape and element type of tensors, which you can set with ov::InferRequest::set_tensor and get with ov::InferRequest::get_tensor. A plugin uses these hints to determine its internal layouts and element types for input and output tensors if needed.

~InferRequest Destructor
++++++++++++++++++++++++

Destructor can contain plugin specific logic to finish and destroy infer request.

@snippet src/sync_infer_request.cpp infer_request:dtor
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:dtor]

### set_tensors_impl()
set_tensors_impl()
+++++++++++++++++++

The method allows to set batched tensors in case the plugin supports it.

@snippet src/sync_infer_request.cpp infer_request:set_tensors_impl
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:set_tensors_impl]

### query_state()
query_state()
+++++++++++++

The method returns variable states from the model.

@snippet src/sync_infer_request.cpp infer_request:query_state
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:query_state]

### infer()
infer()
+++++++

The method calls actual pipeline stages synchronously. Inside the method, the plugin should check input/output tensors, move external tensors to the backend, and run the inference.

@snippet src/sync_infer_request.cpp infer_request:infer
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:infer]
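
As an overview of how the four stages below fit together, here is a conceptual sketch; the class name and the stage order mirror the descriptions that follow, but this is not the Template plugin source:

.. code-block:: cpp

   // conceptual stage ordering inside a synchronous infer() implementation
   void MyInferRequest::infer() {
       infer_preprocess();   // check user tensors, convert them to the backend format
       start_pipeline();     // run the backend executable
       wait_pipeline();      // wait only if the backend itself is asynchronous
       infer_postprocess();  // copy backend outputs back to user tensors
   }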

#### 1. infer_preprocess()
1. infer_preprocess()
----------------------

Below is the code of the `infer_preprocess()` method. The method checks user input/output tensors and demonstrates conversion from user tensor to backend specific representation:
Below is the code of the ``infer_preprocess()`` method. The method checks user input/output tensors and demonstrates conversion from user tensor to backend specific representation:

@snippet src/sync_infer_request.cpp infer_request:infer_preprocess
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:infer_preprocess]

#### 2. start_pipeline()
2. start_pipeline()
--------------------

Executes a pipeline synchronously using `m_executable` object:
Executes a pipeline synchronously using ``m_executable`` object:

@snippet src/sync_infer_request.cpp infer_request:start_pipeline
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:start_pipeline]

#### 3. wait_pipeline()
3. wait_pipeline()
--------------------

Waits for a pipeline in case of asynchronous plugin execution:

@snippet src/sync_infer_request.cpp infer_request:wait_pipeline
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:wait_pipeline]

#### 4. infer_postprocess()
4. infer_postprocess()
----------------------

Converts backend specific tensors to tensors passed by user:

@snippet src/sync_infer_request.cpp infer_request:infer_postprocess
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:infer_postprocess]

### get_profiling_info()
get_profiling_info()
+++++++++++++++++++++

The method returns the profiling info which was measured during pipeline stages execution:

@snippet src/sync_infer_request.cpp infer_request:get_profiling_info
.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:get_profiling_info]

The next step in the plugin library implementation is the [Asynchronous Inference Request](@ref openvino_docs_ov_plugin_dg_async_infer_request) class.
cancel()
+++++++++

The plugin specific method allows to interrupt the synchronous execution from the AsyncInferRequest:

.. doxygensnippet:: src/plugins/template/src/sync_infer_request.cpp
   :language: cpp
   :fragment: [infer_request:cancel]


The next step in the plugin library implementation is the :doc:`Asynchronous Inference Request <openvino_docs_ov_plugin_dg_async_infer_request>` class.

@endsphinxdirective

@@ -2,6 +2,12 @@

@sphinxdirective

.. meta::
   :description: Develop and implement independent inference solutions for
                 different devices with the components of plugin architecture
                 of OpenVINO.


.. toctree::
   :maxdepth: 1
   :caption: Converting and Preparing Models
@@ -19,55 +25,75 @@
   openvino_docs_ie_plugin_detailed_guides
   openvino_docs_ie_plugin_api_references

@endsphinxdirective

The plugin architecture of the OpenVINO allows to develop and plug independent inference
The plugin architecture of OpenVINO allows to develop and plug independent inference
solutions dedicated to different devices. Physically, a plugin is represented as a dynamic library
exporting the single `CreatePluginEngine` function that allows to create a new plugin instance.
exporting the single ``CreatePluginEngine`` function that allows to create a new plugin instance.
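
For reference, the exported entry point is normally produced by a helper macro from the Plugin API rather than written by hand. A sketch, assuming the ``OV_DEFINE_PLUGIN_CREATE_FUNCTION`` macro from the developer API and an illustrative version string:

.. code-block:: cpp

   // generates the exported CreatePluginEngine function for the Plugin class;
   // the class name and version string are illustrative
   static const ov::Version version = {CI_BUILD_NUMBER, "openvino_template_plugin"};
   OV_DEFINE_PLUGIN_CREATE_FUNCTION(ov::template_plugin::Plugin, version)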

OpenVINO Plugin Library
-----------------------
#######################

OpenVINO plugin dynamic library consists of several main components:

1. [Plugin class](@ref openvino_docs_ov_plugin_dg_plugin):
   - Provides information about devices of a specific type.
   - Can create a [compiled model](@ref openvino_docs_ov_plugin_dg_compiled_model) instance which represents a Neural Network backend specific graph structure for a particular device, as opposed to the ov::Model
     which is backend-independent.
   - Can import an already compiled graph structure from an input stream to a
     [compiled model](@ref openvino_docs_ov_plugin_dg_compiled_model) object.
2. [Compiled Model class](@ref openvino_docs_ov_plugin_dg_compiled_model):
   - Is an execution configuration compiled for a particular device and takes into account its capabilities.
   - Holds a reference to a particular device and a task executor for this device.
   - Can create several instances of [Inference Request](@ref openvino_docs_ov_plugin_dg_infer_request).
   - Can export an internal backend specific graph structure to an output stream.
3. [Inference Request class](@ref openvino_docs_ov_plugin_dg_infer_request):
   - Runs an inference pipeline serially.
   - Can extract performance counters for an inference pipeline execution profiling.
4. [Asynchronous Inference Request class](@ref openvino_docs_ov_plugin_dg_async_infer_request):
   - Wraps the [Inference Request](@ref openvino_docs_ov_plugin_dg_infer_request) class and runs pipeline stages in parallel on several task executors based on a device-specific pipeline structure.
5. [Plugin specific properties](@ref openvino_docs_ov_plugin_dg_properties):
   - Provides the plugin specific properties.
6. [Remote Context](@ref openvino_docs_ov_plugin_dg_remote_context):
   - Provides the device specific remote context. Context allows to create remote tensors.
7. [Remote Tensor](@ref openvino_docs_ov_plugin_dg_remote_tensor)
   - Provides the device specific remote tensor API and implementation.
1. :doc:`Plugin class <openvino_docs_ov_plugin_dg_plugin>`:

> **NOTE**: This documentation is written based on the `Template` plugin, which demonstrates plugin development details. Find the complete code of the `Template`, which is fully compilable and up-to-date,
at `<openvino source dir>/src/plugins/template`.
   * Provides information about devices of a specific type.
   * Can create a :doc:`compiled model <openvino_docs_ov_plugin_dg_compiled_model>` instance which represents a Neural Network backend specific graph structure for a particular device, as opposed to the ov::Model which is backend-independent.
   * Can import an already compiled graph structure from an input stream to a :doc:`compiled model <openvino_docs_ov_plugin_dg_compiled_model>` object.


Detailed guides
-----------------------
2. :doc:`Compiled Model class <openvino_docs_ov_plugin_dg_compiled_model>`:

* [Build](@ref openvino_docs_ov_plugin_dg_plugin_build) a plugin library using CMake
* Plugin and its components [testing](@ref openvino_docs_ov_plugin_dg_plugin_testing)
* [Quantized networks](@ref openvino_docs_ov_plugin_dg_quantized_models)
* [Low precision transformations](@ref openvino_docs_OV_UG_lpt) guide
* [Writing OpenVINO™ transformations](@ref openvino_docs_transformations) guide
   * Is an execution configuration compiled for a particular device and takes into account its capabilities.
   * Holds a reference to a particular device and a task executor for this device.
   * Can create several instances of :doc:`Inference Request <openvino_docs_ov_plugin_dg_infer_request>`.
   * Can export an internal backend specific graph structure to an output stream.


3. :doc:`Inference Request class <openvino_docs_ov_plugin_dg_infer_request>`:

   * Runs an inference pipeline serially.
   * Can extract performance counters for an inference pipeline execution profiling.


4. :doc:`Asynchronous Inference Request class <openvino_docs_ov_plugin_dg_async_infer_request>`:

   * Wraps the :doc:`Inference Request <openvino_docs_ov_plugin_dg_infer_request>` class and runs pipeline stages in parallel on several task executors based on a device-specific pipeline structure.


5. :doc:`Plugin specific properties <openvino_docs_ov_plugin_dg_properties>`:

   * Provides the plugin specific properties.


6. :doc:`Remote Context <openvino_docs_ov_plugin_dg_remote_context>`:

   * Provides the device specific remote context. Context allows to create remote tensors.


7. :doc:`Remote Tensor <openvino_docs_ov_plugin_dg_remote_tensor>`

   * Provides the device specific remote tensor API and implementation.


.. note::

   This documentation is written based on the ``Template`` plugin, which demonstrates plugin development details. Find the complete code of the ``Template``, which is fully compilable and up-to-date, at ``<openvino source dir>/src/plugins/template``.


Detailed Guides
###############

* :doc:`Build <openvino_docs_ov_plugin_dg_plugin_build>` a plugin library using CMake
* Plugin and its components :doc:`testing <openvino_docs_ov_plugin_dg_plugin_testing>`
* :doc:`Quantized networks <openvino_docs_ov_plugin_dg_quantized_models>`
* :doc:`Low precision transformations <openvino_docs_OV_UG_lpt>` guide
* :doc:`Writing OpenVINO™ transformations <openvino_docs_transformations>` guide

API References
-----------------------
##############

* [OpenVINO Plugin API](@ref ov_dev_api)
* [OpenVINO Transformation API](@ref ie_transformation_api)
* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__

@endsphinxdirective

@@ -1,171 +1,235 @@
# Plugin {#openvino_docs_ov_plugin_dg_plugin}

@sphinxdirective

.. meta::
   :description: Explore OpenVINO Plugin API, which includes functions and
                 helper classes that simplify the development of new plugins.


OpenVINO Plugin usually represents a wrapper around a backend. Backends can be:
- OpenCL-like backend (e.g. clDNN library) for GPU devices.
- oneDNN backend for Intel CPU devices.
- NVIDIA cuDNN for NVIDIA GPUs.

* OpenCL-like backend (e.g. clDNN library) for GPU devices.
* oneDNN backend for Intel CPU devices.
* NVIDIA cuDNN for NVIDIA GPUs.

The responsibility of OpenVINO Plugin:
- Initializes a backend and throws an exception in the `Engine` constructor if the backend cannot be initialized.
- Provides information about devices enabled by a particular backend, e.g. how many devices, their properties and so on.
- Loads or imports [compiled model](@ref openvino_docs_ov_plugin_dg_compiled_model) objects.

* Initializes a backend and throws an exception in the ``Engine`` constructor if the backend cannot be initialized.
* Provides information about devices enabled by a particular backend, e.g. how many devices, their properties and so on.
* Loads or imports :doc:`compiled model <openvino_docs_ov_plugin_dg_compiled_model>` objects.

In addition to the OpenVINO Public API, OpenVINO provides the Plugin API, which is a set of functions and helper classes that simplify new plugin development:

- header files in the `src/inference/dev_api/openvino` directory
- implementations in the `src/inference/src/dev/` directory
- symbols in the OpenVINO shared library
* header files in the ``src/inference/dev_api/openvino`` directory
* implementations in the ``src/inference/src/dev/`` directory
* symbols in the OpenVINO shared library

To build an OpenVINO plugin with the Plugin API, see the [OpenVINO Plugin Building](@ref openvino_docs_ov_plugin_dg_plugin_build) guide.
To build an OpenVINO plugin with the Plugin API, see the :doc:`OpenVINO Plugin Building <openvino_docs_ov_plugin_dg_plugin_build>` guide.

Plugin Class
------------------------
############

OpenVINO Plugin API provides the helper ov::IPlugin class recommended to use as a base class for a plugin.
Based on that, declaration of a plugin class can look as follows:

@snippet template/src/plugin.hpp plugin:header
.. doxygensnippet:: src/plugins/template/src/plugin.hpp
   :language: cpp
   :fragment: [plugin:header]

### Class Fields

Class Fields
++++++++++++

The provided plugin class also has several fields:

* `m_backend` - a backend engine that is used to perform actual computations for model inference. For `Template` plugin `ov::runtime::Backend` is used which performs computations using OpenVINO™ reference implementations.
* `m_waitExecutor` - a task executor that waits for a response from a device about device tasks completion.
* `m_cfg` of type `Configuration`:
* ``m_backend`` - a backend engine that is used to perform actual computations for model inference. For the ``Template`` plugin, ``ov::runtime::Backend`` is used, which performs computations using OpenVINO™ reference implementations.
* ``m_waitExecutor`` - a task executor that waits for a response from a device about device tasks completion.
* ``m_cfg`` of type ``Configuration``:

@snippet template/src/config.hpp configuration:header
.. doxygensnippet:: src/plugins/template/src/config.hpp
   :language: cpp
   :fragment: [configuration:header]

As an example, a plugin configuration has several value parameters:

- `device_id` - particular device ID to work with. Applicable if a plugin supports more than one `Template` device. In this case, some plugin methods, like `set_property`, `query_model`, and `compile_model`, must support the ov::device::id property.
- `perf_counts` - boolean value to identify whether to collect performance counters during [Inference Request](@ref openvino_docs_ov_plugin_dg_infer_request) execution.
- `streams_executor_config` - configuration of `ov::threading::IStreamsExecutor` to handle settings of multi-threaded context.
- `performance_mode` - configuration of `ov::hint::PerformanceMode` to set the performance mode.
- `disable_transformations` - allows to disable transformations which are applied in the process of model compilation.
* ``device_id`` - particular device ID to work with. Applicable if a plugin supports more than one ``Template`` device. In this case, some plugin methods, like ``set_property``, ``query_model``, and ``compile_model``, must support the ov::device::id property.
* ``perf_counts`` - boolean value to identify whether to collect performance counters during :doc:`Inference Request <openvino_docs_ov_plugin_dg_infer_request>` execution.
* ``streams_executor_config`` - configuration of ``ov::threading::IStreamsExecutor`` to handle settings of multi-threaded context.
* ``performance_mode`` - configuration of ``ov::hint::PerformanceMode`` to set the performance mode.
* ``disable_transformations`` - allows to disable transformations which are applied in the process of model compilation.
* ``exclusive_async_requests`` - allows to use exclusive task executor for asynchronous infer requests.
|
||||
|
||||

Plugin Constructor
++++++++++++++++++

A plugin constructor must contain code that checks the ability to work with a device of the ``Template``
type. For example, if some drivers are required, the code must check
driver availability. If a driver is not available (for example, the OpenCL runtime is not installed in
the case of a GPU device, or an improper driver version is on the host machine), an exception
must be thrown from the plugin constructor.

A plugin must define a device name enabled via the ``set_device_name()`` method of a base class:

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:ctor]
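
The actual ``Template`` constructor is shown in the snippet above. As a rough orientation only, a hedged sketch of a constructor following these rules might look as follows; the backend factory call and the error text are assumptions, not the exact plugin code:

.. code-block:: cpp

    // A minimal sketch, assuming a plugin class derived from ov::IPlugin.
    Plugin::Plugin() {
        // Define the device name visible through OpenVINO Runtime.
        set_device_name("TEMPLATE");

        // Initialize the backend; throw from the constructor if it is unavailable.
        m_backend = ov::runtime::Backend::create();
        OPENVINO_ASSERT(m_backend, "Cannot initialize the Template backend");
    }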

Plugin Destructor
+++++++++++++++++

A plugin destructor must stop all plugin activities and release all allocated resources.

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:dtor]

compile_model()
+++++++++++++++

The plugin should implement two ``compile_model()`` methods: the first one compiles a model without a remote context, and the second one uses a remote context, if the plugin supports it.

The most important task of the ``Plugin`` class is to create an instance of ``CompiledModel``,
which holds a backend-dependent compiled model in an internal representation:

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:compile_model]

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:compile_model_with_remote]
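
For orientation, here is a hedged sketch of the two overload signatures, paraphrased from the OpenVINO dev API; exact types may differ between releases:

.. code-block:: cpp

    // Compile without a remote context.
    std::shared_ptr<ov::ICompiledModel> compile_model(const std::shared_ptr<const ov::Model>& model,
                                                      const ov::AnyMap& properties) const override;

    // Compile with a remote context, if the plugin supports one.
    std::shared_ptr<ov::ICompiledModel> compile_model(const std::shared_ptr<const ov::Model>& model,
                                                      const ov::AnyMap& properties,
                                                      const ov::SoPtr<ov::IRemoteContext>& context) const override;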

Before creating a ``CompiledModel`` instance via a constructor, a plugin may check, if needed, whether the provided
ov::Model object is supported by the device.

Actual model compilation is done in the ``CompiledModel`` constructor. Refer to the :doc:`CompiledModel Implementation Guide <openvino_docs_ov_plugin_dg_compiled_model>` for details.

.. note::

    The actual configuration map used in ``CompiledModel`` is constructed as a base plugin configuration set via ``Plugin::set_property``, where some values are overwritten with the ``config`` passed to ``Plugin::compile_model``. Therefore, the config of ``Plugin::compile_model`` has a higher priority.

transform_model()
+++++++++++++++++

The function accepts a const shared pointer to an ``ov::Model`` object and applies common and device-specific transformations on a copied model to make it more friendly to hardware operations. For details on how to write custom device-specific transformations, refer to the :doc:`Writing OpenVINO™ transformations <openvino_docs_transformations>` guide. See detailed topics about model representation:

* :doc:`Intermediate Representation and Operation Sets <openvino_docs_MO_DG_IR_and_opsets>`
* :doc:`Quantized models <openvino_docs_ov_plugin_dg_quantized_models>`

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:transform_model]

.. note::

    After all these transformations, an ``ov::Model`` object contains operations which can be perfectly mapped to backend kernels. For example, if the backend has a kernel computing the ``A + B`` operation at once, the ``transform_model`` function should contain a pass which fuses operations ``A`` and ``B`` into a single custom ``A + B`` operation which fits the backend kernel set.
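
A hedged sketch of what such a pipeline can look like, using the ov::pass::Manager mechanism from the transformations guide; the pass names in the comments are illustrative assumptions:

.. code-block:: cpp

    #include "openvino/pass/manager.hpp"

    void transform_model(const std::shared_ptr<ov::Model>& model) {
        ov::pass::Manager manager;
        // Common, device-independent optimization passes would be registered here, e.g.:
        // manager.register_pass<ov::pass::CommonOptimizations>();
        // A hypothetical device-specific pass fusing A and B into a single kernel:
        // manager.register_pass<FuseAPlusB>();
        manager.run_passes(model);
    }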

query_model()
+++++++++++++

Use the method with the ``HETERO`` mode, which allows distributing model execution between different
devices based on the ``ov::Node::get_rt_info()`` map, which can contain the ``affinity`` key.
The ``query_model`` method analyzes operations of the provided ``model`` and returns a list of supported
operations via the ov::SupportedOpsMap structure. ``query_model`` first applies ``transform_model`` passes to the input ``ov::Model`` argument. After that, in the ideal case, the transformed model contains only operations that are mapped 1:1 to kernels in the computational backend. In this case, it is easy to analyze which operations are supported (``m_backend`` has a kernel for such an operation, or an extension for the operation is provided) and which are not (the kernel is missing in ``m_backend``):

1. Store the original names of all operations in the input ``ov::Model``.
2. Apply ``transform_model`` passes. Note that the names of operations in a transformed model can be different, and the mapping must be restored in the steps below.
3. Construct a ``supported`` map which contains the names of the original operations. Note that since the inference is performed using the OpenVINO™ reference backend, the decision whether an operation is supported or not depends on whether the latest OpenVINO opset contains such an operation.
4. ``ov::SupportedOpsMap`` contains only operations which are fully supported by ``m_backend``.

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:query_model]
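
A simplified, hedged sketch of the algorithm above; ``transform_model()`` and ``is_operation_supported()`` are hypothetical helpers, and the name-mapping restoration of step 2 is omitted for brevity:

.. code-block:: cpp

    ov::SupportedOpsMap Plugin::query_model(const std::shared_ptr<const ov::Model>& model,
                                            const ov::AnyMap& properties) const {
        ov::SupportedOpsMap supported;
        // Steps 1-2: work on a clone, so the caller's model stays untouched.
        auto transformed = model->clone();
        transform_model(transformed);
        // Steps 3-4: report operations the backend has kernels for.
        for (const auto& op : transformed->get_ops()) {
            if (is_operation_supported(op))
                supported.emplace(op->get_friendly_name(), get_device_name());
        }
        return supported;
    }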

set_property()
++++++++++++++

Sets new values for plugin property keys:

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:set_property]

In the snippet above, the ``Configuration`` class overrides previous configuration values with the new
ones. All these values are used during backend-specific model compilation and execution of inference requests.

.. note::

    The function must throw an exception if it receives an unsupported configuration key.

get_property()
++++++++++++++

Returns the current value for the specified property key:

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:get_property]

The function is implemented with the ``Configuration::Get`` method, which wraps the actual configuration
key value into ov::Any and returns it.

.. note::

    The function must throw an exception if it receives an unsupported configuration key.
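
A hedged sketch of the unsupported-key behavior described in both notes above; the handled keys are illustrative, reusing the ``Configuration`` fields documented earlier:

.. code-block:: cpp

    ov::Any Plugin::get_property(const std::string& name, const ov::AnyMap& arguments) const {
        if (name == ov::device::id.name())
            return m_cfg.device_id;    // assumed Configuration field
        if (name == ov::enable_profiling.name())
            return m_cfg.perf_counts;  // assumed Configuration field
        OPENVINO_THROW("Unsupported property key: ", name);
    }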

import_model()
++++++++++++++

The compiled model import mechanism allows importing a previously exported backend-specific model and wrapping it
in a :doc:`CompiledModel <openvino_docs_ov_plugin_dg_compiled_model>` object. This functionality is useful if
backend-specific model compilation takes significant time and/or cannot be done on a target host
device for other reasons.

During the export of a backend-specific model using ``CompiledModel::export_model``, a plugin may export any
type of information it needs to import a compiled model properly and check its correctness.
For example, the export information may include:

* Compilation options (state of the ``Plugin::m_cfg`` structure).
* Information about a plugin and a device type, to check this information later during the import and throw an exception if the ``model`` stream contains wrong data. For example, if devices have different capabilities and a model compiled for a particular device cannot be used for another, such information must be stored and checked during the import.
* The compiled backend-specific model itself.

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:import_model]

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:import_model_with_remote]
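
A hedged sketch of the metadata check described above; the stream layout and the ``restore_compiled_model`` helper are assumptions, not the actual plugin code:

.. code-block:: cpp

    std::shared_ptr<ov::ICompiledModel> Plugin::import_model(std::istream& model_stream,
                                                             const ov::AnyMap& properties) const {
        // Read back the device name written first by export_model and validate it.
        std::string device_name;
        model_stream >> device_name;
        if (device_name != get_device_name())
            OPENVINO_THROW("Cannot import a model exported for device: ", device_name);
        // Then read compilation options and the backend-specific blob.
        return restore_compiled_model(model_stream, properties);  // hypothetical helper
    }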

create_context()
++++++++++++++++

If the plugin supports a remote context, it should implement the ``Plugin::create_context()`` method, which returns ``ov::RemoteContext``; otherwise, the plugin can throw an exception stating that this method is not implemented.

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:create_context]

get_default_context()
+++++++++++++++++++++

``Plugin::get_default_context()`` is also needed if the plugin supports a remote context; if the plugin does not support it, this method can throw an exception stating that the functionality is not implemented.

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:get_default_context]
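
For a plugin that does not support remote contexts, a hedged sketch of the fallback behavior might look as follows; the signatures are paraphrased from the dev API and may differ between releases:

.. code-block:: cpp

    ov::SoPtr<ov::IRemoteContext> Plugin::create_context(const ov::AnyMap& remote_properties) const {
        OPENVINO_NOT_IMPLEMENTED;  // throws ov::NotImplemented
    }

    ov::SoPtr<ov::IRemoteContext> Plugin::get_default_context(const ov::AnyMap& remote_properties) const {
        OPENVINO_NOT_IMPLEMENTED;
    }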

Create Instance of Plugin Class
###############################

An OpenVINO plugin library must export only one function, which creates a plugin instance, using the OV_DEFINE_PLUGIN_CREATE_FUNCTION macro:

.. doxygensnippet:: src/plugins/template/src/plugin.cpp
    :language: cpp
    :fragment: [plugin:create_plugin_engine]
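
A hedged sketch of the expected usage, close to but not necessarily identical with the snippet above; the version string is an assumption:

.. code-block:: cpp

    static const ov::Version version = {CI_BUILD_NUMBER, "openvino_template_plugin"};
    OV_DEFINE_PLUGIN_CREATE_FUNCTION(ov::template_plugin::Plugin, version)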

The next step in the plugin library implementation is the :doc:`CompiledModel <openvino_docs_ov_plugin_dg_compiled_model>` class.

@endsphinxdirective

# Plugin Testing {#openvino_docs_ov_plugin_dg_plugin_testing}

@sphinxdirective

.. meta::
    :description: Use the openvino::funcSharedTests library, which includes
                  a predefined set of functional tests and utilities to verify a plugin.

OpenVINO tests infrastructure provides a predefined set of functional tests and utilities. They are used to verify a plugin using the OpenVINO public API.
All the tests are written in the `Google Test C++ framework <https://github.com/google/googletest>`__.

OpenVINO Plugin tests are included in the ``openvino::funcSharedTests`` CMake target, which is built within the OpenVINO repository
(see the :doc:`Build Plugin Using CMake <openvino_docs_ov_plugin_dg_plugin_build>` guide). This library contains test definitions (the test bodies) which can be parametrized and instantiated in plugins, depending on whether a plugin supports a particular feature, specific sets of parameters for tests on the supported operation set, and so on.

Test definitions are split into test class declarations (see ``src/tests/functional/plugin/shared/include``) and test class implementations (see ``src/tests/functional/plugin/shared/src``) and include the following scopes of plugin conformance tests:

1. **Behavior tests** (``behavior`` sub-folder), which are a separate test group to check that a plugin satisfies basic OpenVINO concepts: plugin creation, multiple compiled models support, multiple synchronous and asynchronous inference requests support, and so on. See the next section for details on how to instantiate the test definition class with plugin-specific parameters.

2. **Single layer tests** (``single_layer_tests`` sub-folder). This group of tests checks that a particular single layer can be inferred on a device. An example of test instantiation based on a test definition from the ``openvino::funcSharedTests`` library:

   * From the declaration of the convolution test class, you can see that it is a parametrized GoogleTest-based class with the ``convLayerTestParamsSet`` tuple of parameters:

   .. doxygensnippet:: src/tests/functional/shared_test_classes/include/shared_test_classes/single_layer/convolution.hpp
       :language: cpp
       :fragment: [test_convolution:definition]

   * Based on that, define a set of parameters for the ``Template`` plugin functional test instantiation:

   .. doxygensnippet:: src/plugins/template/tests/functional/shared_tests_instances/single_layer_tests/convolution.cpp
       :language: cpp
       :fragment: [test_convolution:declare_parameters]

   * Instantiate the test itself using the standard GoogleTest macro ``INSTANTIATE_TEST_SUITE_P`` (a generic, self-contained illustration of this macro follows below):

   .. doxygensnippet:: src/plugins/template/tests/functional/shared_tests_instances/single_layer_tests/convolution.cpp
       :language: cpp
       :fragment: [test_convolution:instantiate]
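
   The snippet above shows the real convolution instantiation. As a self-contained illustration of the ``INSTANTIATE_TEST_SUITE_P`` mechanism itself (not the real test classes, whose parameter tuples are larger), consider this hypothetical fixture:

   .. code-block:: cpp

       #include <gtest/gtest.h>

       // A hypothetical value-parametrized test fixture.
       class MyDeviceTest : public ::testing::TestWithParam<int> {};

       TEST_P(MyDeviceTest, RunsWithParam) {
           EXPECT_GE(GetParam(), 0);  // each instantiated value runs this body once
       }

       // One macro call instantiates the test body for every listed parameter value.
       INSTANTIATE_TEST_SUITE_P(smoke_Basic, MyDeviceTest, ::testing::Values(1, 2, 4));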

3. **Sub-graph tests** (``subgraph_tests`` sub-folder). This group of tests is designed to test small patterns or combinations of layers. For example, when a particular topology is being enabled in a plugin, e.g. TF ResNet-50, there is no need to add the whole topology to the tests. Instead, a particular repetitive subgraph or pattern can be extracted from ``ResNet-50`` and added to the tests. The instantiation of the sub-graph tests is done in the same way as for single layer tests.

   .. note::

      Such sub-graphs or patterns for sub-graph tests should be added to the ``openvino::ngraphFunctions`` library first (this library is a pre-defined set of small ``ov::Model`` instances) and re-used in sub-graph tests afterwards.

4. **HETERO tests** (``subgraph_tests`` sub-folder) contain tests for the ``HETERO`` scenario (manual or automatic affinity settings, tests for ``query_model``).

5. **Other tests**, which contain tests for other scenarios, such as:

   * Tests for the execution graph
   * Other

To use these tests for your own plugin development, link the ``openvino::funcSharedTests`` library to your test binary and instantiate the required test cases with the desired parameter values.

.. note::

    A plugin may contain its own tests for use cases that are specific to hardware or need to be extensively tested.

To build test binaries together with other build artifacts, use the ``make all`` command. For details, see :doc:`Build Plugin Using CMake <openvino_docs_ov_plugin_dg_plugin_build>`.

How to Extend OpenVINO Plugin Tests
+++++++++++++++++++++++++++++++++++

OpenVINO Plugin tests are open for contribution.
Add common test case definitions applicable for all plugins to the ``openvino::funcSharedTests`` target within the OpenVINO repository. Then, any other plugin supporting corresponding functionality can instantiate the new test.

.. note::

    When implementing a new subgraph test, add new single-layer tests for each operation of the subgraph if such a test does not exist.

@endsphinxdirective

# Plugin Properties {#openvino_docs_ov_plugin_dg_properties}

@sphinxdirective

.. meta::
    :description: Use the ov::Property class to define access rights and
                  specific properties of an OpenVINO plugin.

A plugin can provide its own device-specific properties.

Property Class
##############

The OpenVINO API provides the ov::Property interface, which allows defining a property and its access rights. Based on that, a declaration of plugin-specific properties can look as follows:

.. doxygensnippet:: src/plugins/template/include/template/properties.hpp
    :language: cpp
    :fragment: [properties:public_header]
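
A hedged sketch of such a declaration; the namespace, property name, and string key below are illustrative assumptions, not the real ``Template`` properties:

.. code-block:: cpp

    #include <openvino/runtime/properties.hpp>

    namespace my_plugin {
    // A read-write boolean property users can pass to set_property()/compile_model().
    static constexpr ov::Property<bool, ov::PropertyMutability::RW> fast_math{"MY_PLUGIN_FAST_MATH"};
    }  // namespace my_plugin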

@endsphinxdirective

# Quantized models compute and restrictions {#openvino_docs_ov_plugin_dg_quantized_models}

@sphinxdirective

.. meta::
    :description: Learn about the support for quantized models with different
                  precisions and the FakeQuantize operation used to express
                  quantization rules.

One of the features of OpenVINO is support for quantized models with different precisions: INT8, INT4, etc.
However, it is up to the plugin to define which exact precisions are supported by the particular HW.
All quantized models which can be expressed in IR have a unified representation by means of the *FakeQuantize* operation.
For more details about the low-precision model representation, refer to this :doc:`document <openvino_docs_ie_plugin_dg_lp_representation>`.

Interpreting FakeQuantize at runtime
####################################

During the model load, each plugin can interpret quantization rules expressed in *FakeQuantize* operations:

* Independently, based on the definition of the *FakeQuantize* operation.
* Using a special library of low-precision transformations (LPT) which applies common rules for generic operations, such as Convolution, Fully-Connected, Eltwise, etc., and translates "fake-quantized" models into models with low-precision operations.

Here we provide only a high-level overview of the interpretation rules of FakeQuantize.
At runtime, each FakeQuantize can be split into two independent operations: **Quantize** and **Dequantize**.
The former one is aimed to transform the input data into the target precision, while the latter one transforms the resulting values back to the original range and precision.
In practice, *Dequantize* operations can be propagated forward through the linear operations, such as *Convolution* or *Fully-Connected*,
and, in some cases, fused with the following *Quantize* operation for the next layer into the so-called *Requantize* operation (see Fig. 1).

.. image:: _static/images/qdq_propagation.png

Figure 1. Quantization operations propagation at runtime. Q, DQ, RQ stand for Quantize, Dequantize, and Requantize correspondingly.

From the calculation standpoint, the FakeQuantize formula is also split into two parts accordingly:

``output = round((x - input_low) / (input_high - input_low) * (levels-1)) / (levels-1) * (output_high - output_low) + output_low``

The first part of this formula represents the *Quantize* operation:

``q = round((x - input_low) / (input_high - input_low) * (levels-1))``

The second is responsible for the dequantization:

``r = q / (levels-1) * (output_high - output_low) + output_low``

From the scale/zero-point notation standpoint, the latter formula can be written as follows:

``r = (output_high - output_low) / (levels-1) * (q + output_low / (output_high - output_low) * (levels-1))``

Thus, we can define:

* **Scale** as ``(output_high - output_low) / (levels-1)``
* **Zero-point** as ``-output_low / (output_high - output_low) * (levels-1)``
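
To make the two-step interpretation concrete, here is a small, self-contained sketch (plain C++, not an OpenVINO API) that evaluates the Quantize and Dequantize parts of the formula separately:

.. code-block:: cpp

    #include <cmath>

    // Quantize: map x from [input_low, input_high] to an integer level in [0, levels-1].
    double quantize(double x, double input_low, double input_high, int levels) {
        return std::round((x - input_low) / (input_high - input_low) * (levels - 1));
    }

    // Dequantize: map the integer level q back to [output_low, output_high].
    double dequantize(double q, double output_low, double output_high, int levels) {
        return q / (levels - 1) * (output_high - output_low) + output_low;
    }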

.. note::

    During the quantization process, the values ``input_low``, ``input_high``, ``output_low``, ``output_high`` are selected so as to map a floating-point zero exactly to an integer value (the zero-point) and vice versa.

Quantization specifics and restrictions
#######################################

In general, OpenVINO can represent and execute quantized models from different sources. However, the Neural Network Compression Framework (NNCF)
is considered the default way to get optimized models. Since NNCF supports HW-aware quantization, specific rules can be implemented in it for
the particular HW. However, it is reasonable to have compatibility with general-purpose HW such as CPU and GPU and support their quantization schemes.
Below we define these rules as follows:

* Support of mixed-precision models where some layers can be kept in the floating-point precision.
* Per-channel quantization of weights of Convolutional and Fully-Connected layers.
* Per-channel quantization of activations for channel-wise and element-wise operations, e.g. Depthwise Convolution, Eltwise Add/Mul, ScaleShift.
* Symmetric and asymmetric quantization of weights and activations with the support of per-channel scales and zero-points.
* Non-unified quantization parameters for Eltwise and Concat operations.
* Non-quantized network output, i.e. there are no quantization parameters for it.

@endsphinxdirective

# Remote Context {#openvino_docs_ov_plugin_dg_remote_context}

@sphinxdirective

.. meta::
    :description: Use the ov::RemoteContext class as the base class for a plugin-specific remote context.

ov::RemoteContext class functionality:

* Represents a device-specific inference context.
* Allows creating a remote device-specific tensor.

.. note::

    If a plugin provides a public API for its own Remote Context, the API should be header-only and must not depend on the plugin library.

RemoteContext Class
###################

OpenVINO Plugin API provides the interface ov::IRemoteContext, which should be used as a base class for a plugin-specific remote context. Based on that, a declaration of a remote context class can look as follows:

.. doxygensnippet:: src/plugins/template/src/remote_context.hpp
    :language: cpp
    :fragment: [remote_context:header]

Class Fields
++++++++++++

The example class has several fields:

* ``m_name`` - the device name.
* ``m_property`` - device-specific context properties. It can be used to cast a RemoteContext to a device-specific type.

RemoteContext Constructor
+++++++++++++++++++++++++

This constructor should initialize the remote context device name and properties.

.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
    :language: cpp
    :fragment: [remote_context:ctor]

get_device_name()
+++++++++++++++++

The function returns the device name from the remote context.

.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
    :language: cpp
    :fragment: [remote_context:get_device_name]

get_property()
++++++++++++++

The implementation returns the remote context properties.

.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
    :language: cpp
    :fragment: [remote_context:get_property]

create_tensor()
+++++++++++++++

The method creates a device-specific remote tensor.

.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
    :language: cpp
    :fragment: [remote_context:create_tensor]

The next step to support device-specific tensors is the creation of a device-specific :doc:`Remote Tensor <openvino_docs_ov_plugin_dg_remote_tensor>` class.

@endsphinxdirective

# Remote Tensor {#openvino_docs_ov_plugin_dg_remote_tensor}

@sphinxdirective

.. meta::
    :description: Use the ov::IRemoteTensor interface as a base class for device-specific remote tensors.

ov::RemoteTensor class functionality:

* Provides an interface to work with device-specific memory.

.. note::

    If a plugin provides a public API for its own Remote Tensor, the API should be header-only and must not depend on the plugin library.

Device-Specific Remote Tensor Public API
########################################

The public interface to work with device-specific remote tensors should have a header-only implementation and must not depend on the plugin library.

.. doxygensnippet:: src/plugins/template/include/template/remote_tensor.hpp
    :language: cpp
    :fragment: [remote_tensor:public_header]

The implementation below has several methods:

type_check()
++++++++++++

A static method used to check whether an abstract remote tensor can be cast to this particular remote tensor type.

get_data()
++++++++++

A set of helper methods (specific to this example; other implementations can have a different API) that provide access to the remote data.

Device-Specific Internal Tensor Implementation
##############################################

The plugin should have an internal implementation of the remote tensor which can communicate with the public API.
The example contains an implementation of a remote tensor which wraps memory from an STL vector.

OpenVINO Plugin API provides the interface ov::IRemoteTensor, which should be used as a base class for device-specific remote tensors.

The example implementation has two remote tensor classes:

* An internal type-dependent implementation, which takes the vector type as a template argument and creates a type-specific tensor.
* A type-independent implementation, which works with the type-dependent tensor inside.

Based on that, an implementation of a type-independent remote tensor class can look as follows:

.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
    :language: cpp
    :fragment: [vector_impl:implementation]

The implementation provides a helper to get the wrapped STL tensor, overrides all important methods of the ov::IRemoteTensor class, and calls into the type-dependent implementation.

The type-dependent remote tensor has the following implementation:

.. doxygensnippet:: src/plugins/template/src/remote_context.cpp
    :language: cpp
    :fragment: [vector_impl_t:implementation]

Class Fields
++++++++++++

The class has several fields:

* ``m_element_type`` - Tensor element type.
* ``m_shape`` - Tensor shape.
* ``m_strides`` - Tensor strides.
* ``m_data`` - Wrapped vector.
* ``m_dev_name`` - Device name.
* ``m_properties`` - Remote tensor specific properties which can be used to detect the type of the remote tensor.

VectorTensorImpl()
++++++++++++++++++

The constructor of the remote tensor implementation. It creates a vector with data, initializes the device name and properties, and updates the shape, element type, and strides.

get_element_type()
++++++++++++++++++

The method returns the tensor element type.

get_shape()
+++++++++++

The method returns the tensor shape.

get_strides()
+++++++++++++

The method returns tensor strides.

set_shape()
+++++++++++

The method sets a new shape for the remote tensor.

get_properties()
++++++++++++++++

The method returns tensor-specific properties.

get_device_name()
+++++++++++++++++

The method returns the tensor-specific device name.

@endsphinxdirective

@sphinxdirective

.. meta::
    :description: Learn more about plugin development and specific features in
                  OpenVINO: precision transformations and support for quantized
                  models with different precisions.

.. toctree::
    :maxdepth: 1
    :hidden:

    openvino_docs_ov_plugin_dg_quantized_models
    openvino_docs_OV_UG_lpt

The guides below provide extra information about specific features of OpenVINO needed for understanding during OpenVINO plugin development:

* :doc:`Quantized networks <openvino_docs_ov_plugin_dg_quantized_models>`
* :doc:`Low precision transformations guide <openvino_docs_OV_UG_lpt>`
* :doc:`Writing OpenVINO™ transformations guide <openvino_docs_transformations>`

@endsphinxdirective

@sphinxdirective

.. meta::
    :description: Learn about extra API references required for the development of
                  plugins in OpenVINO.

.. toctree::
    :maxdepth: 1
    :hidden:

    ../groupov_dev_api
    ../groupie_transformation_api

The guides below provide extra API references needed for OpenVINO plugin development:

* `OpenVINO Plugin API <https://docs.openvino.ai/2023.0/groupov_dev_api.html>`__
* `OpenVINO Transformation API <https://docs.openvino.ai/2023.0/groupie_transformation_api.html>`__

@endsphinxdirective